\section{Introduction} In this paper, we study the stochastically forced system of isentropic Euler equations of gas dynamics with a $\gamma$-law for the pressure. \smallskip Let $(\Omega,\mathcal{F},\mathbb{P},(\mathcal{F}_t),(\beta_k(t)))$ be a stochastic basis, let $\mathbb{T}$ be the one-dimensional torus, let $T>0$ and set $Q_T:=\mathbb{T}\times(0,T)$. We study the system \begin{subequations}\label{stoEuler} \begin{align} &d\rho+\partial_x (\rho u) dt=0, &\mbox{ in }Q_T,\label{masse}\\ &d(\rho u)+\partial_x (\rho u^2+p(\rho)) dt=\Phi(\rho,u) dW(t),&\mbox{ in }Q_T,\label{impulsion}\\ &\rho =\rho_0, \quad \rho u=\rho_0 u_0,&\mbox{ in }\mathbb{T}\times\{0\},\label{IC} \end{align} \end{subequations} where $p$ follows the $\gamma$-law \begin{equation} p(\rho)=\kappa\rho^\gamma,\quad \kappa=\frac{\theta^2}{\gamma},\quad \theta=\frac{\gamma-1}{2}, \label{gammalaw}\end{equation} for $\gamma>1$, $W$ is a cylindrical Wiener process and $\Phi(0,u)=0$. Therefore the noise affects the momentum equation only and vanishes in vacuum regions. Our aim is to prove the existence of solutions to \eqref{stoEuler} for general initial data (including vacuum), \textit{cf.} Theorem~\ref{th:martingalesol} below. \medskip To our knowledge, there are no existing results on stochastically forced systems of first-order conservation laws, with the exception of the papers by Kim, \cite{Kim11}, and Audusse, Boyaval, Goutal, Jodeau, Ung, \cite{ABGJU15}. In \cite{Kim11}, the question addressed is the global existence of \textit{regular} solutions to symmetric hyperbolic systems under suitable assumptions on the structure of the stochastic forcing term. In \cite{ABGJU15}, a shallow water system with a stochastic Exner equation is derived as a model for the dynamics of sedimentary river beds.
On second-order stochastic systems, and specifically on the stochastic compressible Navier-Stokes equations\footnote{which, to be exact, are first-order in the density and second-order in the velocity}, different results have been obtained recently, see the papers by Breit, Feireisl, Hofmanov{\'a}, Maslowski, Novotny, Smith, \cite{FeireislMaslowskiNovotny13,BreitHofmanova14,BreitFeireislHofmanova15,Smith15} (see also the older work by Tornare and Fujita, \cite{TornareFujita97}).\medskip The \textit{incompressible} Euler equations with stochastic forcing terms have been studied in particular by Bessaih, Flandoli, \cite{Bessaih99,BessaihFlandoli99,Bessaih00,Bessaih08}, Capi{\'n}ski, Cutland, \cite{CapinskiCutland99}, Brze{\'z}niak, Peszat, \cite{BrzezniakPeszat01}, Cruzeiro, Flandoli, Malliavin, \cite{CruzeiroFlandoliMalliavin07}, Brze{\'z}niak, Flandoli, Maurelli, \cite{BrzezniakFlandoliMaurelli14}, Cruzeiro and Torrecilla, \cite{CruzeiroTorrecilla15}. \medskip In the deterministic case, the existence of weak entropy solutions to the isentropic Euler system has been proved by Lions, Perthame, Souganidis in \cite{LionsPerthameSouganidis96}. Let us mention also the earlier papers by DiPerna \cite{Diperna83a}, Ding, Chen, Luo \cite{DingChenLuo85}, Chen \cite{Chen86}, Lions, Perthame, Tadmor \cite{LionsPerthameTadmor94ki}. The uniqueness of weak entropy solutions is still an open question. \medskip For {\it scalar} non-linear hyperbolic equations with a stochastic forcing term, the theory has seen many developments recently.
Well-posedness has been proved in different contexts, under different hypotheses and with different techniques: by the Lax-Oleinik formula (E, Khanin, Mazel, Sinai \cite{EKMS00}), by the Kruzhkov doubling of variables for entropy solutions (Kim \cite{Kim03}, Feng, Nualart \cite{FengNualart08}, Vallet, Wittbold \cite{ValletWittbold09}, Chen, Ding, Karlsen \cite{ChenDingKarlsen12}, Bauzet, Vallet, Wittbold \cite{BauzetValletWittbold12}), by the kinetic formulation (Debussche, Vovelle \cite{DebusscheVovelle10,DebusscheVovelle10revised}). A resolution in the $L^1$ framework has been given in \cite{DebusscheVovelle14}. Let us also mention the works of Hofmanov\'a in this field (extension to second-order scalar degenerate equations, convergence of the BGK approximation \cite{Hofmanova13b,DebusscheHofmanovaVovelle15,HofmanovaBGK13}) and the recent works by Hofmanov{\'a}, Gess, Lions, Perthame, Souganidis \cite{LionsPerthameSouganidis12,LionsPerthameSouganidis13,GessSouganidis14,GessSouganidis14inv,Hofmanova15} on scalar conservation laws with quasilinear stochastic terms. \medskip We will show the existence of martingale solutions to \eqref{stoEuler}, see Theorem~\ref{th:martingalesol} below. The procedure is standard: we prove the convergence of (a subsequence of) solutions to the parabolic approximation to \eqref{stoEuler}. For this purpose we have to adapt the compensated compactness technique (\textit{cf.} \cite{Diperna83a,LionsPerthameSouganidis96}) of the deterministic case to the stochastic case. Such an extension has already been done for scalar conservation laws by Feng and Nualart \cite{FengNualart08}, and what we do is quite similar. With respect to the sample variable $\omega$, the mode of convergence for which there is compactness is convergence in law. That is why we obtain martingale solutions.
There is a usual trick, the Gy\"ongy-Krylov characterization of convergence in probability, which allows one to recover pathwise solutions once pathwise uniqueness of solutions is known (\textit{cf.} \cite{GyongyKrylov96}). However, for the stochastic problem \eqref{stoEuler} (as is already the case for the deterministic one), no such uniqueness result is known. \medskip A large part of our analysis is devoted to the proof of existence of solutions to the parabolic approximation. For the stochastic parabolic problem, the issue of the positivity of the density is more challenging than in the deterministic framework. We solve this problem by using a regularizing effect of parabolic equations with drifts and a bound given by the entropy, quite in the spirit of Mellet, Vasseur, \cite{MelletVasseur09}, \textit{cf.} Theorem~\ref{th:uniformpositive}. Then, the proof of convergence of the parabolic approximation~\eqref{stoEulereps} to Problem~\eqref{stoEuler} is adapted from the proof in the deterministic case to circumvent two additional difficulties: \begin{enumerate} \item there is a lack of compactness with respect to $\omega$; one has to pass to the limit in some stochastic integrals, \item there are no ``uniform in $\varepsilon$'' $L^\infty$ bounds on solutions (here $\varepsilon$ is the regularization parameter in the parabolic problem~\eqref{stoEulereps}). \end{enumerate} Problem 1 is solved by the use of convergence in law and martingale formulations; Problem 2 is solved thanks to higher moment estimates (see \eqref{estimmomenteps2} and \eqref{corestimgradientepsrho}-\eqref{corestimgradientepsu}). We will give more details about the main issues of the paper in Section~\ref{sec:prob}, after our framework has been introduced more precisely. Note that Problem 2 also occurs in the resolution of the isentropic Euler system for flows in non-trivial geometry, as treated by LeFloch, Westdickenberg, \cite{LeFlochWestdickenberg07}.
\section{Notations and main result} \subsection{Stochastic forcing}\label{sec:stoForce} Our hypotheses on the stochastic forcing term $\Phi(\rho,u)\,dW(t)$ are the following. We assume that $W=\sum_{k\geq 1}\beta_k e_k$ where the $\beta_k$ are independent Brownian motions and $(e_k)_{k\geq 1}$ is a complete orthonormal system in a Hilbert space $\mathfrak{U}$. For each $\rho\geq 0, u\in\mathbb{R}$, $\Phi(\rho,u)\colon \mathfrak{U}\to L^2(\mathbb{T})$ is defined by \begin{equation}\label{sigmakstar} \Phi(\rho,u)e_k=\sigma_k(\cdot,\rho,u)=\rho\sigma_k^*(\cdot,\rho,u), \end{equation} where $\sigma_k^*(\cdot,\rho,u)$ is a $1$-periodic continuous function on $\mathbb{R}$. More precisely, we assume $\sigma_k^*\in C(\mathbb{T}_x\times\mathbb{R}_+\times\mathbb{R})$ and the bound \begin{equation}\label{A0} \mathbf{G}(x,\rho,u):=\bigg(\sum_{k\geq 1}|\sigma_k(x,\rho,u)|^2\bigg)^{1/2}\leq {A_0}\rho\left[1+u^{2}+\rho^{2\theta }\right]^{1/2}, \end{equation} for all $x\in\mathbb{T}$, $\rho\geq 0$, $u\in\mathbb{R}$, where ${A_0}$ is some non-negative constant. Depending on the statement, we will sometimes also make the following localization hypothesis: for $\varkappa>0$, denote by $z=u-\rho^\theta$, $w=u+\rho^\theta$ the Riemann invariants for \eqref{stoEuler} and by $\Lambda_\varkappa$ the domain \begin{equation}\label{invariantregion} \Lambda_\varkappa=\left\{(\rho,u)\in\mathbb{R}_+\times\mathbb{R}; -\varkappa\leq z\leq w\leq \varkappa\right\}. \end{equation} We will establish some of our results (more precisely: the resolution of the approximate parabolic Problem~\eqref{stoEulereps}) under the hypothesis that there exists $\varkappa>0$ such that \begin{equation}\label{Trunc} \mathrm{supp}(\mathbf{G})\subset \mathbb{T}_x\times\Lambda_\varkappa.
\end{equation} We define the auxiliary space $\mathfrak{U}_0\supset\mathfrak{U}$ by \begin{equation}\label{defUUU0} \mathfrak{U}_0=\bigg\{v=\sum_{k\geq1}\alpha_k e_k;\;\sum_{k\geq1}\frac{\alpha_k^2}{k^2}<\infty\bigg\}, \end{equation} and the norm $$ \|v\|^2_{\mathfrak{U}_0}=\sum_{k\geq1}\frac{\alpha_k^2}{k^2},\qquad v=\sum_{k\geq1}\alpha_k e_k. $$ The embedding $\mathfrak{U}\hookrightarrow\mathfrak{U}_0$ is then a Hilbert-Schmidt operator. Moreover, trajectories of $W$ are $\mathbb{P}$-a.s. in $C([0,T];\mathfrak{U}_0)$ (see Da Prato, Zabczyk \cite{DaPratoZabczyk92}). We use the path space $C([0,T];\mathfrak{U}_0)$ to recover the cylindrical Wiener process $W$ in certain limiting arguments, \textit{cf.} Section~\ref{subsec:compact} for example. \subsection{Notations}\label{sec:notations} We denote by \begin{equation}\label{defUUU} \mathbf{U}=\begin{pmatrix}\rho\\ q\end{pmatrix}, \quad\mathbf{F}(\mathbf{U})=\begin{pmatrix} q\\ \frac{q^2}{\rho}+p(\rho)\end{pmatrix},\quad q=\rho u, \end{equation} the $2$-dimensional unknown and flux of the conservative part of the problem. We also set $$ \psi_k(\mathbf{U})=\begin{pmatrix}0 \\ \sigma_k(\mathbf{U})\end{pmatrix},\quad \mathbf{\Psi}(\mathbf{U})=\begin{pmatrix}0 \\ \Phi(\mathbf{U})\end{pmatrix}. $$ With the notations above, \eqref{stoEuler} can be more concisely rewritten as the following stochastic first-order system \begin{equation}\label{stoEuler1} d\mathbf{U}+\partial_x\mathbf{F}(\mathbf{U})dt=\mathbf{\Psi}(\mathbf{U})dW(t). \end{equation} If $E$ is a space of real-valued functions on $\mathbb{T}$, we will denote $\mathbf{U}(t)\in E$ instead of $\mathbf{U}(t)\in E\times E$ when this occurs. Similarly, we will denote $\mathbf{U}\in E$ instead of $\mathbf{U}\in E\times E$ if $E$ is a space of real-valued functions on $\mathbb{T}\times[0,T]$ (see the statement of Definition~\ref{def:entropysol} as an example).
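\medskip Let us also record here why the normalization $\kappa=\theta^2/\gamma$ in \eqref{gammalaw} is convenient (an elementary computation, included for the reader's convenience): the sound speed is then $c(\rho):=\sqrt{p'(\rho)}=\theta\rho^\theta$, so that the Riemann invariants $z$, $w$ introduced in Section~\ref{sec:stoForce} take the simple form $z=u-\rho^\theta$, $w=u+\rho^\theta$. Indeed, since $\gamma-1=2\theta$,
\begin{equation*}
p'(\rho)=\kappa\gamma\rho^{\gamma-1}=\theta^2\rho^{2\theta},
\qquad
u\pm\int_0^\rho\frac{c(s)}{s}\,ds=u\pm\int_0^\rho\theta s^{\theta-1}\,ds=u\pm\rho^\theta.
\end{equation*}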
\medskip We denote by $\mathcal{P}_T$ the predictable $\sigma$-algebra on $\Omega\times[0,T]$ generated by $(\mathcal{F}_t)$. \medskip We will also use the following notation in various estimates below: $$ A=\mathcal{O}(1)B, $$ where $A,B\in\mathbb{R}_+$, with the meaning $A\leq CB$ for a constant $C\geq 0$. In general, the dependence of $C$ on the data and parameters at stake will be given in detail, see for instance Theorem~\ref{th:existspatheps} below. We use the notation $$ A\lesssim B $$ with the same meaning $A\leq CB$, but when the constant $C\geq 0$ depends only on $\gamma$, $C$ being bounded for $\gamma$ in a compact subset of $[1,+\infty)$. In some instances, $C$ does not even depend on $\gamma$ and is simply a numerical constant (see Appendix~\ref{app:regparabolic} for instance). \medskip \subsection{Entropy solution} In relation with the kinetic formulation for~\eqref{stoEuler} in \cite{LionsPerthameTadmor94ki}, there is a family of entropy functionals \begin{equation} \eta(\mathbf{U})=\int_{\mathbb{R}} g(\xi)\chi(\rho,\xi-u)d\xi, \quad \textrm{ with } q=\rho u, \label{entropychi}\end{equation} for \eqref{stoEuler}, where \begin{equation*} \chi(\rho,v)=c_\lambda(\rho^{2\theta}-v^2)^\lambda_+,\quad\lambda=\frac{3-\gamma}{2(\gamma-1)}, \quad c_\lambda=\left(\int_{-1}^1 (1-z^2)_+^\lambda \,dz\right)^{-1}, \end{equation*} $s_+^\lambda:=s^\lambda\mathbf{1}_{s>0}$. Indeed, if $g\in C^2(\mathbb{R})$ is a convex function, then $\eta$ is of class $C^2$ on the set $$ \mathcal{U}:=\left\{\mathbf{U}=\begin{pmatrix}\rho\\ q\end{pmatrix}\in\mathbb{R}^2;\rho>0\right\} $$ and $\eta$ is a convex function of the argument $\mathbf{U}$.
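\medskip Let us make explicit the role of the constants $\lambda$ and $c_\lambda$ (a direct computation, recorded here as a check): with the change of variables $\xi=u+z\rho^\theta$,
\begin{equation*}
\int_{\mathbb{R}}\chi(\rho,\xi-u)\,d\xi
=c_\lambda\,\rho^{\theta(2\lambda+1)}\int_{-1}^1(1-z^2)_+^\lambda\,dz
=\rho^{\theta(2\lambda+1)}=\rho,
\end{equation*}
since $2\lambda+1=\frac{2}{\gamma-1}$ and $\theta=\frac{\gamma-1}{2}$, so that $\theta(2\lambda+1)=1$. The same change of variables gives \eqref{eqeta}-\eqref{eqH} below.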
Formally, by the It\^o Formula, solutions to \eqref{stoEuler} satisfy \begin{equation} d\mathbb{E}\eta(\mathbf{U})+\partial_x \mathbb{E} H(\mathbf{U}) dt=\frac12 \mathbb{E}\partial^2_{qq} \eta(\mathbf{U})\mathbf{G}^2(\mathbf{U})dt, \label{entropyeq}\end{equation} where the entropy flux $H$ is given by \begin{equation} H(\mathbf{U})=\int_\mathbb{R} g(\xi)[\theta\xi+(1-\theta)u]\chi(\rho,\xi-u)d\xi, \quad \textrm{ with } q=\rho u. \label{entropychiflux}\end{equation} Note that, by a change of variable, we also have \begin{equation} \label{eqeta} \eta(\mathbf{U})=\rho c_\lambda \int_{-1}^1 g\left(u+z\rho^{\theta}\right) (1-z^2)^\lambda_+ dz \end{equation} and \begin{equation} \label{eqH} H(\mathbf{U})=\rho c_\lambda \int_{-1}^1 g\left(u+z\rho^{\theta}\right) \left(u+z\theta\rho^{\theta}\right)(1-z^2)^\lambda_+ dz. \end{equation} In particular, for $g(\xi)=1$ we obtain the density $\eta_0(\mathbf{U})=\rho$. To $g(\xi)=\xi$ corresponds the momentum $\eta(\mathbf{U})=q$ and to $g(\xi)=\frac12 \xi^2$ corresponds the energy \begin{equation}\label{entropyenergy} \eta_E(\mathbf{U})=\frac12\rho u^2+\frac{\kappa}{\gamma-1}\rho^\gamma. \end{equation} Note the form of the energy, in particular the fact that the hypothesis \eqref{A0} on the noise gives a bound \begin{equation}\label{noisebyenergy} \mathbf{G}^2(x,\mathbf{U})=\sum_{k\geq 1}|\Phi(\rho,u)e_k(x)|^2\leq \rho A_0^\sharp(\eta_0(\mathbf{U})+\eta_E(\mathbf{U})), \end{equation} for a constant $A_0^\sharp$ depending on ${A_0}$ and $\gamma$ (recall that $\eta_0(\mathbf{U}):=\rho$). If \eqref{entropyeq} is satisfied with an inequality $\leq$, then formally \eqref{noisebyenergy} and the Gronwall Lemma give a bound on $\mathbb{E}\int_\mathbb{T}(\eta_0+\eta_E)(\mathbf{U})(t) dx$ in terms of $\mathbb{E}\int_\mathbb{T}(\eta_0+\eta_E)(\mathbf{U})(0) dx$.
Indeed, we have $\partial^2_{qq}\eta_{E}(\mathbf{U})=\frac{1}{\rho}$ and, therefore, \begin{equation*} \mathbb{E}\partial^2_{qq} \eta_E(\mathbf{U})\mathbf{G}^2(\mathbf{U})\leq A_0^\sharp\mathbb{E}(\eta_0(\mathbf{U})+\eta_E(\mathbf{U})). \end{equation*} \medskip We will prove rigorously uniform bounds for approximate parabolic solutions in Section~\ref{sec:entropybounds}. The above formal computations are however sufficient for the moment to introduce the following definition. \begin{definition}[Entropy solution] Let $\rho_0, u_0\in L^2(\mathbb{T})$ with $\rho_0\geq 0$ a.e. and let $\mathbf{U}_0=\begin{pmatrix}\rho_0\\ \rho_0 u_0\end{pmatrix}$ satisfy $$ \int_\mathbb{T} \rho_0(1+u^{2}_0+\rho_0^{2\theta}) dx<+\infty. $$ A process $(\mathbf{U}(t))$ with values in $W^{-2,2}(\mathbb{T})$ is said to be a pathwise weak entropy solution to \eqref{stoEuler} with initial datum $\mathbf{U}_0$ if \begin{enumerate} \item the bound \begin{equation}\label{boundEntropyDef} \mathbb{E}\, \underset{0\leq t\leq T}{\esssup}\,\int_{\mathbb{T}}\eta(\mathbf{U}(x,t))dx<+\infty, \end{equation} is satisfied for $\eta=\eta_E$, the energy defined in \eqref{entropyenergy}, \item almost surely, $\mathbf{U}\in C([0,T],W^{-2,2}(\mathbb{T}))$ and $(\mathbf{U}(t))$ is predictable, \item $\Phi(\mathbf{U})$ satisfies \begin{equation}\label{predictPhi} \Phi(\mathbf{U})\in L^2\big(\Omega\times[0,T],\mathcal{P}_T,d\mathbb{P}\times dt;L_2(\mathfrak{U};L^2(\mathbb{T}))\big), \end{equation} where $L_2(\mathfrak{U};L^2(\mathbb{T}))$ is the space of Hilbert-Schmidt operators from $\mathfrak{U}$ into $L^2(\mathbb{T})$, \item for any $(\eta,H)$ given by \eqref{entropychi}-\eqref{entropychiflux}, where $g\in C^2(\mathbb{R})$ is convex and subquadratic\footnote{in the sense that $g$ satisfies \eqref{gsubquad}}, almost surely, $\eta(\mathbf{U})$ and $H(\mathbf{U})\in L^1(Q_T)$ and, for all $t\in(0,T]$, for all non-negative $\varphi\in C^1(\mathbb{T})$, and non-negative $\alpha\in C^1_c([0,t))$, $\mathbf{U}$ 
satisfies the following entropy inequality: \begin{align} &\int_0^t \big\langle \eta(\mathbf{U})(s),\varphi\big\rangle\alpha'(s)+\big\langle H(\mathbf{U})(s),\partial_x \varphi\big\rangle\alpha(s)\, ds\nonumber\\ &+\frac12\int_0^t\big\langle \mathbf{G}^2(x,\mathbf{U})\partial^2_{qq}\eta(\mathbf{U}),\varphi\big\rangle\alpha(s)\,ds+\big\langle \eta(\mathbf{U}_0),\varphi\big\rangle\alpha(0)\nonumber\\ &+\sum_{k\geq 1}\int_0^t \big\langle\sigma_k(x,\mathbf{U})\partial_q\eta(\mathbf{U}),\varphi\big\rangle\alpha(s)\,d\beta_k(s)\geq 0.\label{Entropy} \end{align} \end{enumerate} \label{def:entropysol}\end{definition} \begin{remark} A pathwise weak entropy solution $\mathbf{U}$ is a priori a process $(\mathbf{U}(t))$ with values in $W^{-2,2}(\mathbb{T})$, a space of distributions. In item \textit{4.} we require that $\eta(\mathbf{U})$ and $H(\mathbf{U})$ are functions (in $L^1(Q_T)$). Taking $(\eta,H)(\mathbf{U})=(\rho,q)$ (this corresponds to $g(\xi)=1$ in \eqref{entropychi}-\eqref{entropychiflux}) we see that almost surely $\mathbf{U}$ is a function in $L^1(Q_T)$. Actually, we will prove the existence of a martingale weak entropy solution $\mathbf{U}$ to \eqref{stoEuler} (see Theorem~\ref{th:martingalesol}) satisfying $q=0$ in the vacuum region $\rho=0$ (see \eqref{0Vacuum}). Note also that, with the choices $g(\xi)=\pm 1$ and $g(\xi)=\pm\xi$ in \eqref{entropychi}-\eqref{entropychiflux}, we infer from \eqref{Entropy} the weak formulation of Equation~\eqref{stoEuler}. \end{remark} \begin{remark} By \eqref{predictPhi}, the stochastic integral $t\mapsto\int_0^t\Phi(\mathbf{U})(s) dW(s)$ is a well-defined process taking values in $L^2(\mathbb{T})$ (see \cite{DaPratoZabczyk92} for the details of the construction). There is a little redundancy here in the definition of entropy solutions since, apart from the predictability, the integrability property \eqref{predictPhi} will follow from \eqref{A0} and the bounds \eqref{boundEntropyDef}, \textit{cf.} \eqref{noisebyenergy}.
\end{remark} In Definition~\ref{def:entropysol}, the notion of solution considered is weak in space-time, strong with respect to $\omega$. The following notion of solution is weak in $(x,t,\omega)$. \begin{definition}[Martingale solution] Let $\rho_0, u_0 \in L^2(\mathbb{T})$ with $\rho_0\geq 0$ a.e. and let $\mathbf{U}_0=\begin{pmatrix}\rho_0\\ \rho_0 u_0\end{pmatrix}$ satisfy $$ \int_\mathbb{T} \rho_0(1+u^{2}_0+\rho_0^{2\theta}) dx<+\infty. $$ A martingale weak entropy solution to \eqref{stoEuler} with initial datum $\mathbf{U}_0$ is a multiplet $$ (\tilde{\Omega},\tilde{\mathcal{F}},\tilde{\mathbb{P}},(\tilde{\mathcal{F}}_t),\tilde W,\tilde{\mathbf{U}}), $$ where $(\tilde{\Omega},\tilde{\mathcal{F}},\tilde{\mathbb{P}})$ is a probability space, with filtration $(\tilde{\mathcal{F}}_t)$ satisfying the usual conditions, and $\tilde W$ an $(\tilde{\mathcal{F}}_t)$-cylindrical Wiener process, and $(\tilde{\mathbf{U}}(t))$ defines, according to Definition~\ref{def:entropysol}, a pathwise weak entropy solution to~\eqref{stoEuler} with initial datum $\mathbf{U}_0$. \label{def:martingalesol}\end{definition} In summary, if after the substitution \begin{equation}\label{substitution} \big(\Omega,\mathcal{F},(\mathcal{F}_t),\mathbb{P},W\big)\leftarrow\big(\tilde{\Omega},\tilde{\mathcal{F}},(\tilde{\mathcal{F}}_t),\tilde{\mathbb{P}},\tilde{W}\big), \end{equation} $\tilde{\mathbf{U}}$ is a pathwise weak entropy solution to \eqref{stoEuler}, then we say that $\tilde{\mathbf{U}}$ (or, to be more rigorous, $(\tilde{\Omega},\tilde{\mathcal{F}},\tilde{\mathbb{P}},(\tilde{\mathcal{F}}_t),\tilde W,\tilde{\mathbf{U}})$) is a martingale weak entropy solution to \eqref{stoEuler}. The substitution \eqref{substitution} leaves invariant the \textit{law} of the resulting process $(\mathbf{U}(t))$. In most cases, we are interested only in the law of the process. An example is the discussion on the large time behaviour and invariant measures given in Section~\ref{sec:conclusion}.
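\medskip Before stating our main result, let us verify, as a check on the constants (an elementary computation included for completeness), that the choice $g(\xi)=\frac12\xi^2$ in \eqref{eqeta} does yield the energy \eqref{entropyenergy}. Since the cross term is odd in $z$,
\begin{equation*}
\eta(\mathbf{U})=\rho c_\lambda\int_{-1}^1\frac12\big(u+z\rho^{\theta}\big)^2(1-z^2)^\lambda_+\,dz
=\frac12\rho u^2+\frac12\rho^{1+2\theta}\,c_\lambda\int_{-1}^1 z^2(1-z^2)^\lambda_+\,dz,
\end{equation*}
and the properties of the Beta function give $c_\lambda\int_{-1}^1 z^2(1-z^2)^\lambda_+\,dz=\frac{1}{2\lambda+3}=\frac{\gamma-1}{2\gamma}$. Since $1+2\theta=\gamma$ and $\frac{\kappa}{\gamma-1}=\frac{\theta^2}{\gamma(\gamma-1)}=\frac{\gamma-1}{4\gamma}$, we recover $\eta_E(\mathbf{U})=\frac12\rho u^2+\frac{\kappa}{\gamma-1}\rho^\gamma$.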
\begin{theorem}[Main result] Let $p\in\mathbb{N}$ satisfy $p\geq 4+\frac{1}{2\theta}$. Assume that the structure and growth hypothesis \eqref{A0} on the noise is satisfied. Let $\rho_0, u_0\in L^2(\mathbb{T})$ with $\rho_0\geq 0$ a.e. and let $\mathbf{U}_0=\begin{pmatrix}\rho_0\\ \rho_0 u_0\end{pmatrix}$ satisfy $$ \int_\mathbb{T} \rho_0(1+u^{4p}_0+\rho_0^{4\theta p}) dx<+\infty. $$ Then there exists a martingale solution to \eqref{stoEuler} with initial datum $\mathbf{U}_0$. \label{th:martingalesol}\end{theorem} \subsection{Organization of the paper and main issues}\label{sec:prob} The paper is organized as follows. In Section~\ref{sec:parabolicapproximation}, we prove the existence of strong solutions to the parabolic approximation of Problem~\eqref{stoEuler}, see Problem~\eqref{stoEulereps}. The parabolic approximation to Problem~\eqref{stoEuler} is a stochastic parabolic PDE with a singularity at the state-value $\rho=0$. To get existence of a solution to \eqref{stoEulereps}, we use a priori estimates: some are naturally furnished by the entropy balance equations, see Corollary~\ref{cor:boundmoments}, Corollary~\ref{cor:boundgradient2tau}. These estimates are however of no use in the vacuum region $\{\rho=0\}$ (observe that, indeed, a factor $\rho$ is present in each of the estimates stated in Corollary~\ref{cor:boundmoments}, Corollary~\ref{cor:boundgradient2tau}). For the isentropic Euler system, an estimate still of use in the vacuum region is an $L^\infty$ estimate given by the invariance of some regions $\Lambda_\varkappa$ defined with the help of the Riemann invariants (see the definition of $\Lambda_\varkappa$ in \eqref{invariantregion}). In our stochastic setting, we can use such invariant regions provided the noise is compactly supported (but here the $L^\infty$ estimates will be lost when $\varepsilon \to 0$). This is what we assume, see hypothesis~\eqref{Trunceps}.
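\medskip Let us make explicit the $L^\infty$ estimate furnished by such an invariant region (an immediate consequence of the definition \eqref{invariantregion}): on $\Lambda_\varkappa$, we have $u=\frac{w+z}{2}$ and $\rho^\theta=\frac{w-z}{2}$, whence
\begin{equation*}
|u|\leq\varkappa,\qquad 0\leq\rho\leq\varkappa^{1/\theta},\qquad |q|=\rho|u|\leq\varkappa^{1+1/\theta},
\end{equation*}
so that $\Lambda_\varkappa$ is a compact subset of $\mathbb{R}_+\times\mathbb{R}$.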
We crucially need this estimate ``still of use in the vacuum'' to prove the last a priori estimate necessary for the existence of a solution to the parabolic approximation~\eqref{stoEulereps}, namely the positivity of the density, see Section~\ref{sec:PositiveDensity}. The positivity result is obtained thanks to the regularizing effects of the heat equation. This is the subject of Appendix~\ref{app:boundfrombelow}. \medskip All these a priori estimates are proved rigorously on an approximation of the solution to the parabolic problem obtained by time splitting in Section~\ref{sec:ParabolicProblemSol}. Once the existence of solutions to the parabolic approximation of Problem~\eqref{stoEuler} has been proved, we pass to the limit in the regularization parameter to obtain a martingale solution to \eqref{stoEuler}. As in the deterministic case \cite{Diperna83a,Diperna83b,LionsPerthameSouganidis96}, we use the concept of measure-valued solution (Young measure) to achieve this. In Section~\ref{sec:YoungMeasures} we develop the required tools on Young measures (in our stochastic framework). This is adapted in part (with notable differences) from Section~4.3 in \cite{FengNualart08}. We also use the probabilistic version of Murat's Lemma from \cite[Appendix A]{FengNualart08} to identify the limiting Young measure. This is the content of Section~\ref{sec:reductionYoung}, which requires two other fundamental tools: the analysis of the consequences of the div-curl lemma in \cite[Section I.5]{LionsPerthameSouganidis96} and an identification result for densely defined martingales from \cite[Appendix A]{Hofmanova13b}. We then obtain the existence of a martingale solution to \eqref{stoEuler}. In Section~\ref{sec:conclusion} we discuss the existence of invariant measures for \eqref{stoEuler}.
As explained above, we need at some point bounds from below on solutions to ($1$-dimensional here) parabolic equations; these are developed in Appendix~\ref{app:boundfrombelow}. We also need some regularity results, with a few variations, on the ($1$-dimensional) heat semi-group, and those are given in Appendix~\ref{app:regparabolic}. \subsection*{Acknowledgements} We warmly thank Martina Hofmanov\'a for her help with Section~\ref{sec:parabolicapproximation}, and Franco Flandoli, who suggested to us the use of the splitting method in Section~\ref{sec:parabolicapproximation}. We also thank an anonymous referee, whose careful work helped us to improve our paper. \section{Parabolic Approximation}\label{sec:parabolicapproximation} For $\varepsilon>0$, we consider the following second-order approximation to \eqref{stoEuler} \begin{subequations}\label{stoEulereps} \begin{align} d{\mathbf{U}_\varepsilon}+\partial_x\mathbf{F}({\mathbf{U}_\varepsilon})dt&=\varepsilon\partial^2_{xx}{\mathbf{U}_\varepsilon} dt+\mathbf{\Psi}^\varepsilon({\mathbf{U}_\varepsilon})dW(t), \label{eq:stoEulereps}\\ \nonumber\\ {\mathbf{U}_\varepsilon}_{|t=0}&={\mathbf{U}_\varepsilon}_0.\label{IC:stoEulereps} \end{align} \end{subequations} Recall that $\mathbf{U}$ and $\mathbf{F}(\mathbf{U})$ are defined by $$ \mathbf{U}=\begin{pmatrix}\rho\\ q\end{pmatrix}, \quad\mathbf{F}(\mathbf{U})=\begin{pmatrix} q\\ \frac{q^2}{\rho}+p(\rho)\end{pmatrix}. $$ Problem \eqref{stoEulereps} is a parabolic regularization of Problem \eqref{stoEuler}; we will also assume more regularity on the coefficients of the noise than in \eqref{stoEuler}. More precisely, as in \eqref{stoEuler} we assume that there is no noise in the evolution equation for ${\rho}_\varepsilon$: the first component of $\mathbf{\Psi}^\varepsilon({\mathbf{U}_\varepsilon})$ is $0$.
For each given $\mathbf{U}$, the second component is the map $\Phi^\varepsilon(\mathbf{U})\colon \mathfrak{U}\to L^2(\mathbb{T})$ given by $$ \left[\Phi^{\varepsilon}(\rho,u)e_k\right](x)=\sigma^{\varepsilon}_k(x,\rho,u), $$ where $\sigma^\varepsilon_k$ is a continuous function of its arguments. We assume (compare to \eqref{A0}) \begin{equation}\label{A0eps} \mathbf{G}^\varepsilon(x,\rho,u):=\bigg(\sum_{k\geq 1}|\sigma_k^\varepsilon(x,\rho,u)|^2\bigg)^{1/2}\leq {A_0}\rho\left[1+u^{2}+\rho^{2\theta}\right]^{1/2}, \end{equation} for all $x\in\mathbb{T}$, $\mathbf{U}\in\mathbb{R}_+\times\mathbb{R}$. We will also assume that $\mathbf{G}^\varepsilon$ is supported in an invariant region: there exists $\varkappa_\varepsilon>0$ such that \begin{equation}\label{Trunceps} \mathrm{supp}(\mathbf{G}^\varepsilon)\subset \mathbb{T}_x\times\Lambda_{\varkappa_\varepsilon}, \end{equation} where the region $\Lambda_\varkappa$ is defined by \eqref{invariantregion}. Note that this gives \eqref{A0eps}, but with a constant ${A_0}$ depending on $\varkappa_\varepsilon$: we have indeed \begin{equation}\label{BoundTrunceps} |\mathbf{G}^\varepsilon(x,\mathbf{U})|\leq M(\varkappa_\varepsilon), \end{equation} for all $x\in\mathbb{T}$, $\mathbf{U}\in\mathbb{R}_+\times\mathbb{R}$. Note however that, in \eqref{A0eps}, ${A_0}$ is assumed independent of $\varepsilon$. Finally, we will assume that the following Lipschitz condition is satisfied: \begin{equation}\label{Lipsigmaeps} \sum_{k\geq 1}\left|\sigma_k^\varepsilon(x,\mathbf{U}_1)-\sigma_k^\varepsilon(x,\mathbf{U}_2)\right|^2\leq C(\varepsilon,R)|\mathbf{U}_1-\mathbf{U}_2|^2, \end{equation} for all $x\in\mathbb{T}$, $\mathbf{U}_1,\mathbf{U}_2\in D_R$, where $C(\varepsilon,R)$ is a constant depending on $\varepsilon$ and $R$. Here, for $R>1$, $D_R$ denotes the set of $\mathbf{U}\in\mathbb{R}_+\times\mathbb{R}$ such that \begin{equation}\label{defDR} R^{-1}\leq\rho\leq R,\quad |q|\leq R.
\end{equation} \subsection{Pathwise solution to the parabolic problem}\label{sec:pathwiseParabolicProblem} \begin{definition}[Bounded solution to the parabolic approximation] Let $\mathbf{U}_0\in L^\infty(\mathbb{T})$ satisfy $\rho_0\geq c_0$ a.e. in $\mathbb{T}$, where $c_0>0$. Let $T>0$. Assume \eqref{A0eps}. A process $(\mathbf{U}(t))_{t\in[0,T]}$ with values in $(L^2(\mathbb{T}))^2$ is said to be a \textrm{bounded solution} to \eqref{stoEulereps} if it is a predictable process such that \begin{enumerate} \item almost surely, $\mathbf{U} \in C([0,T];L^2(\mathbb{T}))$, \item there exist random variables $c_\mathrm{min}$, $C_\mathrm{max}$ with values in $(0,+\infty)$ such that, almost surely, \begin{equation}\label{asregsoleps} c_\mathrm{min}\leq\rho\leq C_\mathrm{max},\quad |q|\leq C_\mathrm{max},\;\mbox{ a.e. in }Q_T, \end{equation} \item almost surely, for all $t\in[0,T]$, for every test function $\varphi\in C^2(\mathbb{T};\mathbb{R}^2)$, the following equation is satisfied: \begin{align} \big\langle \mathbf{U}(t),\varphi\big\rangle=\big\langle \mathbf{U}_0,\varphi\big\rangle+\int_0^t&\big\langle \mathbf{F}(\mathbf{U}),\partial_x\varphi\big\rangle+\varepsilon\big\langle\mathbf{U},\partial^2_{xx}\varphi\big\rangle\,d s\nonumber\\ &+\int_0^t\big\langle\mathbf{\Psi}^\varepsilon(\mathbf{U})\,d W(s),\varphi\big\rangle.\label{EqBoundedSolution} \end{align} \end{enumerate} \label{def:pathsoleps}\end{definition} We will prove the existence of pathwise solutions to the parabolic stochastic problem~\eqref{stoEulereps} satisfying uniform (or weighted) estimates with respect to $\varepsilon$. If $\eta$ is an entropy function given by \eqref{entropychi} with a convex function $g$ of class $C^2$, we denote by $$ \Gamma_\eta(\mathbf{U})=\int_{\mathbb{T}}\eta(\mathbf{U}(x))dx, $$ the total entropy of a function $\mathbf{U}\colon\mathbb{T}\to\mathbb{R}^2$.
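\medskip For instance, $\Gamma_{\eta_0}(\mathbf{U})=\int_\mathbb{T}\rho\,dx$ is the total mass and $\Gamma_{\eta_E}(\mathbf{U})$ the total energy. Note also (an elementary observation recorded here for later use) that, since $\rho_0^\gamma=\rho_0\cdot\rho_0^{2\theta}$, the moment condition on $\mathbf{U}_0$ in Definition~\ref{def:entropysol} is equivalent, up to constants depending on $\gamma$, to the finiteness of the initial total entropies:
\begin{equation*}
\Gamma_{\eta_0}(\mathbf{U}_0)+\Gamma_{\eta_E}(\mathbf{U}_0)
=\int_\mathbb{T}\Big(\rho_0+\frac12\rho_0 u_0^2+\frac{\kappa}{\gamma-1}\rho_0^{\gamma}\Big)dx
\leq C(\gamma)\int_\mathbb{T}\rho_0\big(1+u_0^2+\rho_0^{2\theta}\big)dx,
\end{equation*}
and conversely, with a constant $C(\gamma)$ in the reverse inequality as well.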
\begin{theorem}[Existence of pathwise solution to \eqref{stoEulereps}] Let ${\mathbf{U}_\varepsilon}_0\in W^{2,2}(\mathbb{T})$ satisfy ${\rho_\varepsilon}_0\geq c_0$ a.e. in $\mathbb{T}$, for a positive constant $c_0$. For $m\in\mathbb{N}$, let $\eta_m$ denote the entropy associated to $\xi\mapsto \xi^{2m}$ by \eqref{entropychi}. Assume that hypotheses \eqref{A0eps}, \eqref{Trunceps}, \eqref{Lipsigmaeps} are satisfied and that ${\mathbf{U}_\varepsilon}_0\in\Lambda_{\varkappa_\varepsilon}$ a.e. in $\mathbb{T}$. Then the problem \eqref{stoEulereps} admits a unique bounded solution ${\mathbf{U}_\varepsilon}$, which has the following properties: \begin{enumerate} \item it satisfies some moment estimates: for all $m\in\mathbb{N}$, \begin{equation}\label{estimmomenteps2} \mathbb{E}\sup_{t\in[0,T]}\int_{\mathbb{T}}\left(|{u}_\varepsilon|^{2m}+|{\rho}_\varepsilon|^{m(\gamma-1)}\right) {\rho}_\varepsilon dx=\mathcal{O}(1), \end{equation} where $\mathcal{O}(1)$ depends on $T$, $\gamma$, on the constant ${A_0}$ in \eqref{A0eps}, on $m$ and on $\mathbb{E}\Gamma_{\eta}({\mathbf{U}_\varepsilon}_0)$ for $\eta\in\{\eta_0,\eta_{2m}\}$, \item it satisfies the following gradient estimates: for all $m\in\mathbb{N}$, \begin{equation}\label{corestimgradientepsrho} \varepsilon\mathbb{E}\iint_{Q_T} \left(|{u}_\varepsilon|^{2m}+{\rho}_\varepsilon^{2m\theta}\right){\rho}_\varepsilon^{\gamma-2}|\partial_x {\rho}_\varepsilon|^2 dx dt=\mathcal{O}(1), \end{equation} and \begin{equation}\label{corestimgradientepsu} \varepsilon\mathbb{E}\iint_{Q_T} \left(|{u}_\varepsilon|^{2m}+{\rho}_\varepsilon^{2m\theta}\right){\rho}_\varepsilon|\partial_x {u}_\varepsilon|^2 dx dt=\mathcal{O}(1), \end{equation} where $\mathcal{O}(1)$ depends on $T$, $\gamma$, on the constant ${A_0}$ in \eqref{A0eps} and on the initial quantities $\mathbb{E}\Gamma_{\eta}(\mathbf{U}_0)$ for $\eta\in\{\eta_0,\eta_{2m+2}\}$, \item the region $\Lambda_{\varkappa_\varepsilon}$ is an invariant region: a.s., for all $t\in[0,T]$,
${\mathbf{U}_\varepsilon}(t)\in\Lambda_{\varkappa_\varepsilon}$. \end{enumerate} Besides, ${\mathbf{U}_\varepsilon}$ has the regularity $L^2_\omega C^\alpha_t W^{1,2}_x$ ($\alpha<1/4$) and $L^2_{\omega}C^0_tW^{2,2}_x$, see \eqref{HoldertH1xBounded}-\eqref{LinftytH2xBounded}, and $\mathbf{U}_\varepsilon$ satisfies the following entropy balance equation: for every entropy-entropy flux pair $(\eta,H)$ where $\eta$ is of the form \eqref{entropychi} with a convex function $g$ of class $C^2$, almost surely, for all $t\in[0,T]$, for every test function $\varphi\in C^2(\mathbb{T})$, \begin{align} \big\langle \eta({\mathbf{U}_\varepsilon}(t)),\varphi\big\rangle+&\varepsilon\int_0^t\big\langle \eta''({\mathbf{U}_\varepsilon})\cdot({\partial_x\mathbf{U}_\varepsilon},{\partial_x\mathbf{U}_\varepsilon}),\varphi\big\rangle ds\nonumber\\ =&\big\langle \eta({\mathbf{U}_\varepsilon}_0),\varphi\big\rangle+\int_0^t\left[ \big\langle H({\mathbf{U}_\varepsilon}),\partial_x\varphi\big\rangle+\varepsilon\big\langle\eta({\mathbf{U}_\varepsilon}),\partial^2_x\varphi\big\rangle\right]d s\nonumber\\ &+\int_0^t \big\langle\eta'({\mathbf{U}_\varepsilon})\mathbf{\Psi}^{\varepsilon}({\mathbf{U}_\varepsilon})\,d W(s),\varphi\big\rangle\nonumber\\ &+ \frac{1}{2}\int_0^t\big\langle\mathbf{G}^{\varepsilon}({\mathbf{U}_\varepsilon})^2\partial^2_{qq} {\eta}({\mathbf{U}_\varepsilon}),\varphi\big\rangle ds.\label{Itoentropyeps} \end{align} \label{th:existspatheps}\end{theorem} To prove the existence of such pathwise solutions, we will first prove the existence of a martingale solution and then use the Gy{\"o}ngy-Krylov argument~\cite{GyongyKrylov96} to conclude (Section~\ref{subsec:prooftheps}). This means that we have to prove a result of pathwise uniqueness, which is given by the following theorem. \begin{theorem}[Uniqueness of bounded solution to \eqref{stoEulereps}] Let ${\mathbf{U}_\varepsilon}_0\in L^\infty(\mathbb{T})$ satisfy ${\rho_\varepsilon}_0\geq c_0$ a.e.
in $\mathbb{T}$, for a positive constant $c_0$. Let $T>0$. Assume that hypotheses \eqref{Trunceps}, \eqref{Lipsigmaeps} are satisfied. Then, the problem \eqref{stoEulereps} admits at most one bounded solution ${\mathbf{U}_\varepsilon}$. \label{th:uniqpatheps}\end{theorem} \begin{proof} Let $S_\varepsilon(t)=S(\varepsilon t)$, where $S(t)$ is the heat semi-group on $\mathbb{T}$. From the weak formulation \eqref{EqBoundedSolution} follows the mild formulation: almost surely, for all $t\in[0,T]$, \begin{equation}\label{MildBoundedSolution} \mathbf{U}(t)=S_\varepsilon(t)\mathbf{U}_0-\int_0^t \partial_x S_\varepsilon(t-s)\mathbf{F}(\mathbf{U}(s))ds+\int_0^t S_\varepsilon(t-s)\mathbf{\Psi}^\varepsilon(\mathbf{U}(s))\,d W(s), \end{equation} (see, \textit{e.g.}, \cite{Ball77} in the deterministic case and \cite[Proposition~3.7]{GyongyRovira00} for a stochastic version of that result). Note that each term of \eqref{MildBoundedSolution} is almost surely in $C([0,T];L^2(\mathbb{T}))$: this is the case for $\mathbf{U}$ by Definition~\ref{def:pathsoleps}; the term $S_\varepsilon(t)\mathbf{U}_0$ is deterministic and continuous in $t$ with values in $L^2(\mathbb{T})$ by continuity of the semi-group $(S_\varepsilon(t))$. To prove the continuity of the two remaining terms in \eqref{MildBoundedSolution}, let us set \begin{align*} \mathcal{T}_\mathrm{det}\mathbf{U}(t)&=\int_0^t \partial_x S_\varepsilon(t-s)\mathbf{F}(\mathbf{U}(s))ds,\\ \mathcal{T}_\mathrm{sto}\mathbf{U}(t)&=\int_0^t S_\varepsilon(t-s)\mathbf{\Psi}^\varepsilon(\mathbf{U}(s))\,d W(s). \end{align*} Let $L(R)$ denote the Lipschitz constant of $\mathbf{F}$ on $D_R$. Let $\omega\in\Omega$ be such that $\mathbf{U}(x,t) \in D_R$ for a.e. $(x,t)\in Q_T$. Since $\mathbf{U}$ is a bounded solution, such a bound is satisfied for almost all $\omega$, provided $R=R(\omega)$ is large enough. 
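On the torus the heat semi-group acts diagonally in Fourier, $\widehat{S(t)u}(n)=e^{-4\pi^2n^2t}\hat u(n)$, and $S_\varepsilon$ is simply a rescaling in time. The following minimal numerical sketch (the grid size and the datum are arbitrary choices of ours, not part of the proof) illustrates the two properties of $S(t)$ used repeatedly below: the $L^2$-contraction, and the instantaneous smoothing quantified by $\|\partial_x S(t)u\|_{L^2}\lesssim t^{-1/2}\|u\|_{L^2}$.

```python
import numpy as np

def heat(u, t):
    """S(t)u on the unit torus, via Fourier multipliers e^{-4 pi^2 n^2 t}."""
    n = np.fft.fftfreq(u.size) * u.size                # integer frequencies
    return np.real(np.fft.ifft(np.exp(-4 * np.pi**2 * n**2 * t) * np.fft.fft(u)))

def dx(u):
    # spectral derivative on the torus
    n = np.fft.fftfreq(u.size) * u.size
    return np.real(np.fft.ifft(2j * np.pi * n * np.fft.fft(u)))

def l2(u):
    return float(np.sqrt(np.mean(u**2)))

x = np.linspace(0.0, 1.0, 256, endpoint=False)
u0 = np.sign(np.sin(2 * np.pi * x))                    # rough (discontinuous) datum
contraction = l2(heat(u0, 0.01)) <= l2(u0) + 1e-12     # L^2-contraction of S(t)
```

Running the semi-group for a longer time damps every Fourier mode further, so the gradient norm of $S(t)u_0$ decreases in $t$, in agreement with the kernel estimates invoked below.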
By \eqref{partialKtp} with $j=1$, $k=1$, $p=2$, we have, with $S(t)u=K_t \ast u$, \begin{align*} &\|\partial_x S_\varepsilon(t_2-s)\mathbf{F}(\mathbf{U}(s))-\partial_x S_\varepsilon(t_1-s)\mathbf{F}(\mathbf{U}(s))\|_{L^2(\mathbb{T})}\\ \lesssim\ & \|\partial_x S_\varepsilon(t_2-s)\mathbf{F}(\mathbf{U}(s))-\partial_x S_\varepsilon(t_1-s)\mathbf{F}(\mathbf{U}(s))\|_{L^\infty(\mathbb{T})}\\ \leq\ &\|\partial_x K_{\varepsilon(t_2-s)}-\partial_x K_{\varepsilon(t_1-s)}\|_{L^2(\mathbb{T})}\|\mathbf{F}(\mathbf{U}(s))\|_{L^\infty(\mathbb{T})}\\ \lesssim\ &\varepsilon^{-7/4}\int_{t_1-s}^{t_2-s} t^{-7/4} dt\ \|\mathbf{F}\|_{L^\infty(D_R)}\\ \lesssim\ &\varepsilon^{-7/4}\left[(t_1-s)^{-3/4}-(t_2-s)^{-3/4}\right]\|\mathbf{F}\|_{L^\infty(D_R)}. \end{align*} Similarly, taking $j=1$, $k=0$, $p=2$ in \eqref{partialKtp}, we obtain $$ \left\|\int_{t_1}^{t_2}\partial_x S_\varepsilon(t-s)\mathbf{F}(\mathbf{U}(s))ds\right\|_{L^2(\mathbb{T})}\lesssim\varepsilon^{-1/2}(\sqrt{t_2}-\sqrt{t_1})\|\mathbf{F}\|_{L^\infty(D_R)}. $$ It follows that \begin{equation}\label{HolderTdet} \left\|\mathcal{T}_\mathrm{det}\mathbf{U}(t_2)-\mathcal{T}_\mathrm{det}\mathbf{U}(t_1)\right\|_{L^2(\mathbb{T})} \lesssim \varepsilon^{-7/4}\|\mathbf{F}\|_{L^\infty(D_R)}\delta_{\mathrm{det}}(t_1,t_2), \end{equation} where \begin{equation}\label{defdeltadet} \delta_{\mathrm{det}}(t_1,t_2)=\sqrt{t_2}-\sqrt{t_1}+\int_0^{t_1}\left[(t_1-s)^{-3/4}-(t_2-s)^{-3/4}\right]ds. \end{equation} We use the same kind of estimates to show the continuity of the stochastic term. Instead of fixed times $t_1,t_2$, let us consider some stopping times $T_1\leq T_2$ satisfying $T_i\leq T$ a.s. for $i=1,2$. Recall (see Corollary~5.10 p.52 in \cite{DoleansDade77} for example) that $$ \int_0^{{T}_i} S_\varepsilon({T}_i-s)\mathbf{\Psi}^\varepsilon(\mathbf{U}(s))\,d W(s)=\int_0^T \mathbf{1}_{s\in[0,{T}_i]} S_\varepsilon({T}_i-s)\mathbf{\Psi}^\varepsilon(\mathbf{U}(s))\,d W(s). 
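Both integrals entering $\delta_{\mathrm{det}}$ admit closed forms, $\int_0^{t_1}(t_1-s)^{-3/4}ds=4t_1^{1/4}$ and $\int_0^{t_1}(t_2-s)^{-3/4}ds=4\big(t_2^{1/4}-(t_2-t_1)^{1/4}\big)$, which makes the modulus of continuity $(t_2-t_1)^{1/4}$ explicit. The following numerical sanity check of ours (not part of the proof; the values of $t_1$ and of the gaps are arbitrary) verifies the resulting bound $\delta_{\mathrm{det}}(t_1,t_2)\leq\sqrt{t_2-t_1}+4(t_2-t_1)^{1/4}$.

```python
import numpy as np

def delta_det(t1, t2):
    # closed form: int_0^{t1} (t1-s)^{-3/4} ds = 4 t1^{1/4},
    #              int_0^{t1} (t2-s)^{-3/4} ds = 4 (t2^{1/4} - (t2-t1)^{1/4})
    return np.sqrt(t2) - np.sqrt(t1) + 4.0 * (t1**0.25 - t2**0.25 + (t2 - t1)**0.25)

t1 = 0.3
ok = all(
    delta_det(t1, t1 + h) <= np.sqrt(h) + 4.0 * h**0.25 + 1e-12
    for h in (1e-1, 1e-2, 1e-4, 1e-8)
)
```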
$$ By It\={o}'s Isometry and the bound \eqref{BoundTrunceps}, we therefore have \begin{align} &\mathbb{E}\left\|\mathcal{T}_\mathrm{sto}\mathbf{U}({T}_2)-\mathcal{T}_\mathrm{sto}\mathbf{U}({T}_1)\right\|_{L^2(\mathbb{T})}^2\nonumber\\ =\ &\mathbb{E}\int_{{T}_1}^{{T}_2}\|S_\varepsilon({T}_2-s)\mathbf{G}^\varepsilon(\mathbf{U}(s))\|_{L^2(\mathbb{T})}^2 ds+\mathbb{E}\int_{0}^{{T}_1}\|\left[S_\varepsilon({T}_2-s)-S_\varepsilon({T}_1-s)\right]\mathbf{G}^\varepsilon(\mathbf{U}(s))\|_{L^2(\mathbb{T})}^2 ds\nonumber\\ \lesssim\ & \mathbb{E}({T}_2-{T}_1)M(\varkappa_\varepsilon)^2+\mathbb{E}\int_{0}^{{T}_1}\left|\varepsilon^{-5/4}\left[({T}_2-s)^{-1/4}-({T}_1-s)^{-1/4}\right]\right|^2 ds M(\varkappa_\varepsilon)^2\nonumber\\ \lesssim\ & \varepsilon^{-5/2}M(\varkappa_\varepsilon)^2 \mathbb{E} \delta_{\mathrm{sto}}({T}_1,{T}_2)^2,\label{TstoStop} \end{align} where \begin{equation}\label{defdeltasto} \delta_{\mathrm{sto}}(t_1,t_2)^2=(t_2-t_1)+\int_{0}^{t_1}\left[(t_2-s)^{-1/4}-(t_1-s)^{-1/4}\right]^2ds. \end{equation} Note that the estimate on $\mathcal{T}_\mathrm{det}\mathbf{U}$ can also be adapted to the case where $t_i={T}_i(\omega)$ for some stopping times ${T}_1\leq{T}_2$ as above. In particular, we have \begin{equation}\label{TdetStop} \mathbb{E}\left\|\mathcal{T}_\mathrm{det}\mathbf{U}({T}_2\wedge{T}_R)-\mathcal{T}_\mathrm{det}\mathbf{U}({T}_1\wedge{T}_R)\right\|_{L^2(\mathbb{T})}^2 \lesssim \varepsilon^{-7/2}\|\mathbf{F}\|_{L^\infty(D_R)}^2\mathbb{E}\delta_{\mathrm{det}}({T}_1\wedge{T}_R,{T}_2\wedge{T}_R)^2, \end{equation} where \begin{equation}\label{deftauR} {T}_R=\inf\left\{t\in[0,T];\mathbf{U}(t)\notin D_R\right\}. \end{equation} Let $\sigma$ be a stopping time such that $\sigma\leq T$ almost surely. If $\sigma$ takes a finite number of values $\sigma_1,\ldots,\sigma_n$, then, almost surely on $\{\sigma=\sigma_k\}$, \eqref{MildBoundedSolution} is satisfied for all $t\in[0,\sigma_k]$. 
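After the substitution $s=t_1-(t_2-t_1)u$, the integral in $\delta_{\mathrm{sto}}^2$ is bounded by $\sqrt{t_2-t_1}\,I$ with $I=\int_0^{+\infty}\big[u^{-1/4}-(1+u)^{-1/4}\big]^2du$; the integrand behaves like $u^{-1/2}$ at the origin and like $u^{-5/2}/16$ at infinity, so $I$ is finite. A quick numerical check of ours (the quadrature grid is an arbitrary choice):

```python
import numpy as np

# I = int_0^inf [u^{-1/4} - (1+u)^{-1/4}]^2 du: integrable at both ends,
# ~ u^{-1/2} near 0 and ~ u^{-5/2}/16 at infinity.
u = np.logspace(-10.0, 8.0, 400_000)
f = (u**-0.25 - (1.0 + u)**-0.25) ** 2
I = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(u)))  # trapezoid on a log grid
I += 2.0 * np.sqrt(u[0])  # bound for the remainder int_0^{u[0]} u^{-1/2} du
# hence delta_sto(t1, t2)^2 <= (t2 - t1) + I * sqrt(t2 - t1)
```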
Equivalently, we have: almost surely, for all $t\in[0,T]$, \begin{align} \mathbf{U}(t\wedge\sigma)=\ & S_\varepsilon(t\wedge\sigma)\mathbf{U}_0-\int_0^{t\wedge\sigma} \partial_x S_\varepsilon(t\wedge\sigma-s)\mathbf{F}(\mathbf{U}(s))ds\nonumber\\ &+\int_0^{t\wedge\sigma} S_\varepsilon(t\wedge\sigma-s)\mathbf{\Psi}^\varepsilon(\mathbf{U}(s))\,d W(s).\label{MildBoundedSolutionTau} \end{align} Let $(\sigma^n)$ be a sequence of simple stopping times converging to $\sigma$ in $L^1(\Omega)$ and such that $\sigma^n\geq\sigma$ for all $n$, \textit{e.g.} $\sigma^n=2^{-n}[2^n\sigma+1]$, where $[t]$ is the integer part of $t$. If $\alpha>0$, we have, by \eqref{TdetStop} and the Markov inequality, for $R>0$, \begin{equation*} \mathbb{P}\left[\left\|\mathcal{T}_\mathrm{det}\mathbf{U}(\sigma^n)-\mathcal{T}_\mathrm{det}\mathbf{U}(\sigma)\right\|_{L^2(\mathbb{T})}>\alpha\right] \lesssim \mathbb{P}(T_R<T)+\alpha^{-1}\varepsilon^{-7/4}\|\mathbf{F}\|_{L^\infty(D_R)}\mathbb{E}\delta_{\mathrm{det}}(\sigma,\sigma^n). \end{equation*} Since $\mathbb{P}(T_R<T)\to 0$ when $R\to+\infty$, it follows that $\mathcal{T}_\mathrm{det}\mathbf{U}(\sigma^n)\to\mathcal{T}_\mathrm{det}\mathbf{U}(\sigma)$ in $L^2(\mathbb{T})$ in probability. Using \eqref{TstoStop}, we can also pass to the limit in the stochastic term to show that \eqref{MildBoundedSolutionTau} holds true when $\sigma$ is a general stopping time.\medskip Now we consider two bounded solutions $\mathbf{U}_1$, $\mathbf{U}_2$ to \eqref{stoEulereps} with the same initial datum ${\mathbf{U}_\varepsilon}_0$. Let $R>1$ be such that ${\mathbf{U}_\varepsilon}_0\in D_R$, and let \begin{equation*} {T}_R^{1,2}=\inf\left\{t\in[0,T];\mathbf{U}_1(t)\mbox{ or }\mathbf{U}_2(t)\notin D_R\right\}. 
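The properties of the specific approximating sequence $\sigma^n=2^{-n}[2^n\sigma+1]$ can be checked directly: $\sigma^n$ takes dyadic values (finitely many when $\sigma$ is bounded), it is a stopping time since $\{\sigma^n\leq k2^{-n}\}=\{\sigma<k2^{-n}\}$, and $\sigma\leq\sigma^n\leq\sigma+2^{-n}$. A small sanity check (the simulated values below merely stand in for a $[0,T]$-valued stopping time):

```python
import numpy as np

def sigma_n(sigma, n):
    # sigma^n = 2^{-n}[2^n sigma + 1], with [.] the integer part: a simple
    # stopping time with dyadic values dominating sigma
    return 2.0**-n * np.floor(2.0**n * np.asarray(sigma) + 1.0)

rng = np.random.default_rng(0)
sigma = rng.uniform(0.0, 1.0, size=1000)   # stand-in for a stopping time sample
above = all(np.all(sigma_n(sigma, n) >= sigma) for n in range(1, 12))
gaps = [float(np.max(sigma_n(sigma, n) - sigma)) for n in range(1, 12)]
```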
\end{equation*} By \eqref{regSr2}, we have: almost surely, for $0\leq s\leq t\wedge{T}_R^{1,2}$, \begin{align*} &\|\partial_x S_\varepsilon(t\wedge{T}_R^{1,2}-s)\left[\mathbf{F}(\mathbf{U}_1(s))-\mathbf{F}(\mathbf{U}_2(s))\right]\|_{L^2(\mathbb{T})}\\ \leq\ & \varepsilon^{-1/2}(t\wedge{T}_R^{1,2}-s)^{-1/2}L(R)\sup_{s\in[0,t\wedge{T}^{1,2}_R]}\|\mathbf{U}_1(s)-\mathbf{U}_2(s)\|_{L^2(\mathbb{T})}. \end{align*} This gives \begin{multline}\label{diffTdet} \mathbb{E}\left\|\mathcal{T}_\mathrm{det}\mathbf{U}_1(t\wedge{T}^{1,2}_R)-\mathcal{T}_\mathrm{det}\mathbf{U}_2(t\wedge{T}^{1,2}_R)\right\|_{L^2(\mathbb{T})}^2\\ \leq 4\varepsilon^{-1}L(R)^2 \ t\ \mathbb{E}\sup_{s\in[0,t]}\|\mathbf{U}_1(s\wedge{T}^{1,2}_R)-\mathbf{U}_2(s\wedge{T}^{1,2}_R)\|_{L^2(\mathbb{T})}^2. \end{multline} By It\={o}'s Isometry and the bound \eqref{Lipsigmaeps}, we have \begin{multline}\label{diffTsto} \mathbb{E}\left\|\mathcal{T}_\mathrm{sto}\mathbf{U}_1(t\wedge{T}^{1,2}_R)-\mathcal{T}_\mathrm{sto}\mathbf{U}_2(t\wedge{T}^{1,2}_R)\right\|_{L^2(\mathbb{T})}^2\\ \leq C(\varepsilon,R)\ t\ \mathbb{E}\sup_{s\in[0,t]}\|\mathbf{U}_1(s\wedge{T}^{1,2}_R)-\mathbf{U}_2(s\wedge{T}^{1,2}_R)\|_{L^2(\mathbb{T})}^2. \end{multline} It follows from \eqref{MildBoundedSolutionTau}, \eqref{diffTdet}, \eqref{diffTsto} that \begin{multline*} \mathbb{E}\sup_{s\in[0,t]}\|\mathbf{U}_1(s\wedge{T}^{1,2}_R)-\mathbf{U}_2(s\wedge{T}^{1,2}_R)\|_{L^2(\mathbb{T})}^2\\ \leq \tilde C(\varepsilon,R)\ t\ \mathbb{E}\sup_{s\in[0,t]}\|\mathbf{U}_1(s\wedge{T}^{1,2}_R)-\mathbf{U}_2(s\wedge{T}^{1,2}_R)\|_{L^2(\mathbb{T})}^2, \end{multline*} where $\tilde C(\varepsilon,R)=4\varepsilon^{-1}L(R)^2+C(\varepsilon,R)$. For $t<t_1:=1/\tilde C(\varepsilon,R)$, we obtain: almost surely, $\mathbf{U}_1=\mathbf{U}_2$ on the interval $[0,t_1\wedge{T}^{1,2}_R]$. 
We then repeat the argument on the intervals $[kt_1,(k+1)t_1]$, $k=1,\ldots$ This is licit since the semi-group property shows that \eqref{MildBoundedSolutionTau} holds true when starting from time $t_1$: \begin{align*} \mathbf{U}(t\wedge\sigma+t_1\wedge\sigma)=\ & S_\varepsilon(t\wedge\sigma)\mathbf{U}(t_1\wedge\sigma)-\int_{0}^{t\wedge\sigma} \partial_x S_\varepsilon(t\wedge\sigma-s)\mathbf{F}(\mathbf{U}(s+t_1\wedge\sigma))ds\\ &+\int_{0}^{t\wedge\sigma} S_\varepsilon(t\wedge\sigma-s)\mathbf{\Psi}^\varepsilon(\mathbf{U}(s+t_1\wedge\sigma))\,d W(s). \end{align*} This gives $\mathbf{U}_1=\mathbf{U}_2$ a.s. on $[0,{T}^{1,2}_R]$. Since ${T}_R^{1,2}\to T$ almost surely as $R\to+\infty$, we conclude that $\mathbf{U}_1=\mathbf{U}_2$ a.s. \end{proof} \begin{remark} Assume $\mathbf{\Psi^\varepsilon}=0$. In this deterministic case the random variables $c_\mathrm{min}$ and $C_\mathrm{max}$ in Definition~\ref{def:pathsoleps} are taken to be constants. Set $$ R=\max\left(c_\mathrm{min,1}^{-1},C_\mathrm{max,1},c_\mathrm{min,2}^{-1},C_\mathrm{max,2}\right). $$ By the bound \eqref{diffTdet} and an iteration of the preceding argument, we obtain the following estimate: \begin{equation*} \sup_{t\in[0,T]}\|\mathbf{U}_1(t)-\mathbf{U}_2(t)\|_{L^2(\mathbb{T})}\leq C(T,R,\varepsilon)\|\mathbf{U}_1(0)-\mathbf{U}_2(0)\|_{L^2(\mathbb{T})}, \end{equation*} where $\mathbf{U}_1$ and $\mathbf{U}_2$ are two bounded solutions to Problem~\eqref{stoEulereps} and $C(T,R,\varepsilon)$ is a constant depending on $T$, $R$ and $\varepsilon$. \label{rk:uniqpathepsdet}\end{remark} In the following proposition, we use the fractional Sobolev space $W^{s,2}(\mathbb{T})$, defined in Appendix~\ref{app:regparabolic}. \begin{proposition}[Regularity of bounded solutions to \eqref{stoEulereps}] Let ${\mathbf{U}_\varepsilon}_0\in W^{1,2}(\mathbb{T})$ satisfy ${\rho_\varepsilon}_0\geq c_0$ a.e. in $\mathbb{T}$, for a positive constant $c_0$. Let $T>0$. Assume that hypothesis \eqref{Trunceps} is satisfied. 
Let ${\mathbf{U}_\varepsilon}$ be a bounded solution to Problem~\eqref{stoEulereps}. Then, for all $\alpha\in[0,1/4)$, ${\mathbf{U}_\varepsilon}(\cdot\wedge{T}_R)$ has a modification whose trajectories are almost surely in $C^{\alpha}([0,T];L^2(\mathbb{T}))$ and such that \begin{equation}\label{HolderURalpha} \mathbb{E}\|{\mathbf{U}_\varepsilon}(\cdot\wedge{T}_R)\|_{C^{\alpha}([0,T];L^2(\mathbb{T}))}^2\leq C(R,\varepsilon,T,\alpha,{\mathbf{U}_\varepsilon}_0), \end{equation} where ${T}_R$ is the exit time from $D_R$ (see \eqref{deftauR}) and $C(R,\varepsilon,T,\alpha,{\mathbf{U}_\varepsilon}_0)$ is a constant depending on $R$, $T$, $\varepsilon$, $\alpha$ and $\|{\mathbf{U}_\varepsilon}_0\|_{W^{1,2}(\mathbb{T})}$. Furthermore, for every $s\in[0,1)$, ${\mathbf{U}_\varepsilon}$ satisfies the estimate \begin{equation}\label{H1UepsR} \sup_{t\in[0,T]}\mathbb{E}\|{\mathbf{U}_\varepsilon}(t\wedge{T}_R)\|_{W^{s,2}(\mathbb{T})}^2\leq C(R,\varepsilon,T,s,{\mathbf{U}_\varepsilon}_0) \end{equation} where $C(R,\varepsilon,T,s,{\mathbf{U}_\varepsilon}_0)$ is a constant depending on $R$, $T$, $\varepsilon$, $s$ and $\|{\mathbf{U}_\varepsilon}_0\|_{W^{1,2}(\mathbb{T})}$.\medskip If additionally ${\mathbf{U}_\varepsilon}_0\in W^{2,2}(\mathbb{T})$ and the Lipschitz condition \eqref{Lipsigmaeps} is satisfied, then \begin{equation}\label{HoldertH1xBounded} \mathbb{E}\|{\mathbf{U}_\varepsilon}(\cdot\wedge T_R)\|_{C^\alpha([0,T];W^{1,2}(\mathbb{T}))}^2\leq C(R,\varepsilon,T,\alpha,{\mathbf{U}_\varepsilon}_0), \end{equation} for all $\alpha\in [0,1/4)$, and \begin{equation}\label{LinftytH2xBounded} \sup_{t\in[0,T]}\mathbb{E}\|{\mathbf{U}_\varepsilon}(t\wedge T_R)\|_{W^{2,2}(\mathbb{T})}^2\leq C(R,\varepsilon,T,{\mathbf{U}_\varepsilon}_0), \end{equation} where $C(R,\varepsilon,T,{\mathbf{U}_\varepsilon}_0)$ is a constant depending on $R$, $T$, $\varepsilon$, on the constant $C(\varepsilon,R)$ in \eqref{Lipsigmaeps}, and on $\|{\mathbf{U}_\varepsilon}_0\|_{W^{2,2}(\mathbb{T})}$. 
\label{prop:regboundedeps}\end{proposition} \begin{proof} \textbf{Step 1.} Note first that ${\mathbf{U}_\varepsilon}_0\in W^{1,2}(\mathbb{T})$ gives (see \eqref{toHolderU0}) $$ (x,t)\mapsto S_\varepsilon(t){\mathbf{U}_\varepsilon}_0(x)\in C^{1/2}([0,T];L^2(\mathbb{T})), $$ with \begin{equation}\label{HolderU0} \|S_\varepsilon(t_2){\mathbf{U}_\varepsilon}_0-S_\varepsilon(t_1){\mathbf{U}_\varepsilon}_0\|_{L^2(\mathbb{T})}\lesssim \varepsilon^{-1/2}|t_2-t_1|^{1/2}\|{\mathbf{U}_\varepsilon}_0\|_{W^{1,2}(\mathbb{T})}. \end{equation} Next, to prove the H\"older regularity of ${\mathbf{U}_\varepsilon}$ in $t$, we use the estimates \eqref{TstoStop} and \eqref{TdetStop} established in the proof of Theorem~\ref{th:uniqpatheps}. By the change of variable $s\mapsto t_1-(t_2-t_1)s$ in \eqref{defdeltadet} and \eqref{defdeltasto}, we have $$ \delta_\mathrm{det}(t_1,t_2)\leq (t_2-t_1)^{1/2}+\int_0^{+\infty}\left[s^{-3/4}-(1+s)^{-3/4}\right]ds\ (t_2-t_1)^{1/4}, $$ and $$ \delta_{\mathrm{sto}}(t_1,t_2)^2\leq(t_2-t_1)+\int_{0}^{+\infty} \left[s^{-1/4}-(1+s)^{-1/4}\right]^2ds\ (t_2-t_1)^{1/2}. $$ It follows that \begin{equation}\label{HolderUR} \mathbb{E}\|\mathbf{U}(t_2\wedge{T}_R)-\mathbf{U}(t_1\wedge{T}_R)\|_{L^2(\mathbb{T})}^2\leq C(R,\varepsilon,T,{\mathbf{U}_\varepsilon}_0)\max\left(t_2-t_1, (t_2-t_1)^{1/2}\right), \end{equation} for all $0\leq t_1\leq t_2\leq T$, where $C(R,\varepsilon,T,{\mathbf{U}_\varepsilon}_0)$ is a constant depending on $R$, $T$, $\varepsilon$ and $\|{\mathbf{U}_\varepsilon}_0\|_{W^{1,2}(\mathbb{T})}$. 
We can improve the bound \eqref{HolderUR} as follows: first, we deduce from \eqref{HolderTdet} that, for all $k\geq 1$, \begin{multline}\label{TdetStopk} \mathbb{E}\left\|\mathcal{T}_\mathrm{det}\mathbf{U}(t_2\wedge{T}_R)-\mathcal{T}_\mathrm{det}\mathbf{U}(t_1\wedge{T}_R)\right\|_{L^2(\mathbb{T})}^{2k}\\ \lesssim \varepsilon^{-7k/2}\|\mathbf{F}\|_{L^\infty(D_R)}^{2k}\mathbb{E}\delta_{\mathrm{det}}(t_1\wedge{T}_R,t_2\wedge{T}_R)^{2k}\\ \leq C(R,\varepsilon,T,k)\max\left((t_2-t_1)^{k/2},(t_2-t_1)^k\right), \end{multline} where $C(R,\varepsilon,T,k)$ is a constant depending on $R$, $T$, $\varepsilon$, $k$. By the Burkholder-Davis-Gundy inequality, we also have the following analogue of \eqref{TstoStop}: \begin{align} &\mathbb{E}\left\|\mathcal{T}_\mathrm{sto}\mathbf{U}({T}_2)-\mathcal{T}_\mathrm{sto}\mathbf{U}({T}_1)\right\|_{L^2(\mathbb{T})}^{2k}\nonumber\\ \lesssim\ &\mathbb{E}\left[\int_{{T}_1}^{{T}_2}\|S_\varepsilon({T}_2-s)\mathbf{G}^\varepsilon(\mathbf{U}(s))\|_{L^2(\mathbb{T})}^2 ds\right]^k \nonumber\\ &+\mathbb{E}\left[\int_{0}^{{T}_1}\|\left[S_\varepsilon({T}_2-s)-S_\varepsilon({T}_1-s)\right]\mathbf{G}^\varepsilon(\mathbf{U}(s))\|_{L^2(\mathbb{T})}^2 ds \right]^{k} \nonumber\\ \lesssim\ & \mathbb{E}({T}_2-{T}_1)^k M(\varkappa_\varepsilon)^{2k} +\mathbb{E}\left[\int_{0}^{{T}_1}\left|\varepsilon^{-5/4}\left[({T}_2-s)^{-1/4}-({T}_1-s)^{-1/4}\right]\right|^2 ds\right]^k M(\varkappa_\varepsilon)^{2k}\nonumber\\ \leq\ & C(R,\varepsilon,T,k)\max\left((T_2-T_1)^{k/2},(T_2-T_1)^k\right),\label{TstoStopk} \end{align} where $C(R,\varepsilon,T,k)$ is a constant depending on $R$, $T$, $\varepsilon$, $k$. 
By \eqref{HolderU0}, \eqref{TdetStopk} and \eqref{TstoStopk}, we obtain \begin{equation}\label{HolderURk} \mathbb{E}\|\mathbf{U}(t_2\wedge{T}_R)-\mathbf{U}(t_1\wedge{T}_R)\|_{L^2(\mathbb{T})}^{2k}\leq C(R,\varepsilon,T,k,{\mathbf{U}_\varepsilon}_0)\max\left((t_2-t_1)^{k/2},(t_2-t_1)^k\right), \end{equation} for all $0\leq t_1\leq t_2\leq T$, where $C(R,\varepsilon,T,k,{\mathbf{U}_\varepsilon}_0)$ is a constant depending on $R$, $T$, $\varepsilon$, $k$ and $\|{\mathbf{U}_\varepsilon}_0\|_{W^{1,2}(\mathbb{T})}$. By Kolmogorov's criterion, the existence of a modification with trajectories almost surely $C^\alpha$ and \eqref{HolderURalpha} follow from \eqref{HolderURk}. \medskip \textbf{Step 2.} The proof of the regularity in $x$ of ${\mathbf{U}_\varepsilon}$ is also standard: by the contraction property, we have \begin{equation}\label{H2U0} \|S_\varepsilon(\cdot){\mathbf{U}_\varepsilon}_0\|_{C([0,T];W^{1,2}(\mathbb{T}))}\leq \|{\mathbf{U}_\varepsilon}_0\|_{W^{1,2}(\mathbb{T})}. \end{equation} Let $s\in(0,1)$. To prove \eqref{H1UepsR}, we use the identity \eqref{BesovBessel}. By \eqref{regSJ}, we have \begin{align} \|J^s\mathcal{T}_\mathrm{det}{\mathbf{U}_\varepsilon}(t\wedge{T}_R)\|_{L^2(\mathbb{T})}&\leq C(R,\varepsilon,T,s),\label{xTdet1}\\ \mathbb{E}\|J^s\mathcal{T}_\mathrm{sto}{\mathbf{U}_\varepsilon}(t\wedge{T}_R)\|_{L^2(\mathbb{T})}^2&\leq C(R,\varepsilon,T,s),\label{xTsto1} \end{align} where $C(R,\varepsilon,T,s)$ is a constant depending on $R$, $T$, $\varepsilon$, $s$. Indeed, the left-hand side in \eqref{xTdet1} is bounded by \begin{equation}\label{toxTdet1} \int_0^t(t-r)^{-\frac{1+s}{2}}dr\ C(R,\varepsilon), \end{equation} and the left-hand side in \eqref{xTsto1} is bounded by \begin{equation}\label{toxTsto1} \int_0^t(t-r)^{-s}dr\ C(R,\varepsilon), \end{equation} where $C(R,\varepsilon)$ depends on $R$ and $\varepsilon$. Combined with \eqref{H2U0}, the bounds \eqref{xTdet1} and \eqref{xTsto1} give \eqref{H1UepsR}. 
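For the reader's convenience, here is the computation behind the exponent $1/4$ (a standard application of Kolmogorov's criterion; the bookkeeping below is ours):

```latex
% Kolmogorov's continuity criterion: if
%   E |X(t_2)-X(t_1)|^{p} \le C\,|t_2-t_1|^{1+\beta},
% then X has a modification with \alpha-Hölder trajectories for every
% \alpha < \beta/p.  Here p = 2k and, for |t_2-t_1| \le 1, the dominant term in
% \eqref{HolderURk} is (t_2-t_1)^{k/2}, i.e. 1+\beta = k/2 (take k \ge 3).  Hence
\alpha \;<\; \frac{\beta}{p}
       \;=\; \frac{k/2-1}{2k}
       \;=\; \frac14-\frac{1}{2k}
       \;\xrightarrow[k\to+\infty]{}\; \frac14,
% which is why \eqref{HolderURalpha} holds for every \alpha \in [0,1/4).
```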
\medskip \textbf{Step 3.} Let us assume now that ${\mathbf{U}_\varepsilon}_0\in W^{2,2}(\mathbb{T})$ and that the Lipschitz condition \eqref{Lipsigmaeps} is satisfied. By \eqref{Foperates} and \eqref{H1UepsR}, we have $$ \sup_{t\in[0,T]}\mathbb{E}\|\mathbf{F}({\mathbf{U}_\varepsilon})(t\wedge{T}_R)\|_{W^{s,2}(\mathbb{T})}^2\leq C(R,\varepsilon,T,s,{\mathbf{U}_\varepsilon}_0), $$ and $$ \sup_{t\in[0,T]}\mathbb{E}\|\mathbf{G}^\varepsilon({\mathbf{U}_\varepsilon})(t\wedge{T}_R)\|_{W^{s,2}(\mathbb{T})}^2\leq C(R,\varepsilon,T,s,{\mathbf{U}_\varepsilon}_0), $$ where $C(R,\varepsilon,T,s,{\mathbf{U}_\varepsilon}_0)$ is a constant depending on $R$, $T$, $\varepsilon$, $s$, $\|{\mathbf{U}_\varepsilon}_0\|_{W^{1,2}(\mathbb{T})}$ and also on $\mathbf{F}$ and on the constant $C(\varepsilon,R)$ in \eqref{Lipsigmaeps}. Using the decompositions $$ J^{2s}\partial_x S_\varepsilon(t-r)\mathbf{F}({\mathbf{U}_\varepsilon})=J^s\partial_x S_\varepsilon(t-r)J^s\mathbf{F}({\mathbf{U}_\varepsilon}), $$ and $$ J^{2s} S_\varepsilon(t-r) \sigma_k(\mathbf{U}_\varepsilon)=J^s S_\varepsilon(t-r) J^s\sigma_k(\mathbf{U}_\varepsilon), $$ we deduce as in \eqref{xTdet1}-\eqref{xTsto1} that, for all $s\in[\frac12,1)$, and for some constants $C(R,\varepsilon,T,s,{\mathbf{U}_\varepsilon}_0)$ possibly varying from line to line, $$ \sup_{t\in[0,T]}\mathbb{E}\|J^{2s-1}J\mathcal{T}_\mathrm{det}{\mathbf{U}_\varepsilon}(t\wedge{T}_R)\|_{L^2(\mathbb{T})}\leq C(R,\varepsilon,T,s,{\mathbf{U}_\varepsilon}_0) $$ and $$ \sup_{t\in[0,T]}\mathbb{E}\|J^{2s-1}J\mathcal{T}_\mathrm{sto}{\mathbf{U}_\varepsilon}(t\wedge{T}_R)\|_{L^2(\mathbb{T})}\leq C(R,\varepsilon,T,s,{\mathbf{U}_\varepsilon}_0). $$ This shows that $$ \sup_{t\in[0,T]}\mathbb{E}\|J{\mathbf{U}_\varepsilon}(t\wedge{T}_R)\|_{W^{2s-1,2}(\mathbb{T})}\leq C(R,\varepsilon,T,s,{\mathbf{U}_\varepsilon}_0). 
$$ In particular, almost surely, \begin{equation}\label{SobolevUepsbounded} \partial_x{\mathbf{U}_\varepsilon}(\cdot\wedge{T}_R)\in C([0,T];W^{2s-1,2}(\mathbb{T})), \end{equation} and $\partial_x{\mathbf{U}_\varepsilon}$ is a solution to the fixed-point equation \begin{equation}\label{FixedpartialU} \partial_x{\mathbf{U}_\varepsilon}=S_\varepsilon(\cdot)\partial_x{\mathbf{U}_\varepsilon}_0-\mathcal{T}_\mathrm{det}(D\mathbf{F}({\mathbf{U}_\varepsilon})\cdot\partial_x{\mathbf{U}_\varepsilon}) +\mathcal{T}_\mathrm{sto}(D\mathbf{\Psi}^\varepsilon({\mathbf{U}_\varepsilon})\cdot\partial_x{\mathbf{U}_\varepsilon}), \end{equation} on $[0,T_R]$. By \eqref{FixedpartialU}, we can estimate $J\partial_x \mathbf{U}_\varepsilon$. Indeed, \eqref{SobolevUepsbounded} gives $$ \partial_x{\mathbf{U}_\varepsilon}(\cdot\wedge{T}_R)\in C([0,T]\times \mathbb{T}), $$ almost surely, by the Sobolev embedding $W^{2s-1,2}(\mathbb{T})\subset C(\mathbb{T})$ for $s>3/4$. Using \eqref{SobolevAlgebra}, we obtain \begin{equation}\label{CtH2xBounded} \sup_{t\in[0,T]}\mathbb{E}\|{\mathbf{U}_\varepsilon}(t\wedge T_R)\|_{W^{2,2}(\mathbb{T})}^2\leq C(R,\varepsilon,T,{\mathbf{U}_\varepsilon}_0), \end{equation} and therefore \eqref{LinftytH2xBounded}. By \eqref{FixedpartialU} and \eqref{SobolevAlgebra} we obtain \eqref{HoldertH1xBounded} by the same proof as \eqref{HolderURalpha}. \end{proof} \begin{remark} By using \eqref{H1UepsR} (\textit{resp.} \eqref{CtH2xBounded}) it is possible to improve \eqref{HolderURalpha} (\textit{resp.} \eqref{HoldertH1xBounded}) to the range $\alpha\in[0,3/8)$. We will not need this improvement. \label{rk:betterHolder}\end{remark} \subsection{Solution to the parabolic problem}\label{sec:ParabolicProblemSol} \subsubsection{Time splitting}\label{sec:TimeSplitting} To prove the existence of a solution to \eqref{stoEulereps}, we perform a splitting in time. Let $\tau>0$. Set $t_k=k\tau$, $k\in\mathbb{N}$. 
We alternately solve the deterministic, parabolic part of \eqref{stoEulereps} on time intervals $[t_{2k},t_{2k+1})$ and the stochastic part of \eqref{stoEulereps} on time intervals $[t_{2k+1},t_{2k+2})$, \textit{i.e.} \begin{itemize} \item for $t_{2k} \leq t < t_{2k+1}$, \begin{subequations}\label{splitEulerDet} \begin{align} \partial_t \mathbf{U}^\tau+2\partial_x\mathbf{F}(\mathbf{U}^\tau)&=2\varepsilon\partial^2_{xx}\mathbf{U}^\tau&\mbox{ in } Q_{t_{2k},t_{2k+1}}, \label{splitEulerDetEq}\\ \mathbf{U}^\tau(t_{2k})&=\mathbf{U}^\tau(t_{2k}-)&\mbox{ in }\mathbb{T},\label{splitEulerDetCI} \end{align} \end{subequations} \item for $t_{2k+1} \leq t < t_{2k+2}$, \begin{subequations}\label{splitEulerSto} \begin{align} d\mathbf{U}^\tau&=\sqrt{2} \mathbf{\Psi}^{\varepsilon,\tau}(\mathbf{U}^\tau)dW(t)&\mbox{ in }Q_{t_{2k+1},t_{2k+2}},\label{splitEulerStoEq}\\ \mathbf{U}^\tau(t_{2k+1})&=\mathbf{U}^\tau(t_{2k+1}-)&\mbox{ in }\mathbb{T}.\label{splitEulerStoCI} \end{align} \end{subequations} \end{itemize} Note that we took care to speed up the deterministic equation \eqref{splitEulerDetEq} by a factor $2$ and the stochastic equation \eqref{splitEulerStoEq} by a factor $\sqrt{2}$. This rescaling procedure should yield a solution $\mathbf{U}^\tau$ consistent with the solution of \eqref{stoEulereps} when $\tau\to 0$. In \eqref{splitEulerSto} we have also truncated (in the number of ``modes'') the coefficient $\mathbf{\Psi}^{\varepsilon}$ into a coefficient $\mathbf{\Psi}^{\varepsilon,\tau}$: we assume that, for a finite integer $K^\tau\geq 1$, for all $\rho\geq 0$ and $u\in\mathbb{R}$, we have \begin{equation}\label{sigmakstartau} \left[\Phi^{\varepsilon,\tau}(\rho,u)e_k\right](x)=\sigma_k^{\varepsilon,\tau}(x,\rho,u):=\zeta_{\alpha_\tau}*\sigma^\varepsilon_k(x,\rho,u)\mathbf{1}_{k\leq K^\tau}. \end{equation} Then $\mathbf{\Psi}^{\varepsilon,\tau}$ is defined as the vector with first component $0$ and second component $\Phi^{\varepsilon,\tau}(\rho,u)$. 
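The role of the factors $2$ and $\sqrt{2}$ is transparent on a caricature of the scheme: each flow is active only on half of the time axis, so the drift must be doubled, while the Brownian scaling $W(2t)\overset{d}{=}\sqrt{2}\,W(t)$ calls for $\sqrt{2}$ on the noise. The following deterministic toy computation is our own (the flows $f(u)=u$, $g(u)=u^2$ are chosen only because they have closed-form flows); it checks that the alternation of sped-up flows is consistent with the unsplit dynamics as $\tau\to0$.

```python
import numpy as np

# Exact flows over time t of u' = 2f(u), f(u) = u, and of u' = 2g(u), g(u) = u^2.
flow_f = lambda u, t: u * np.exp(2.0 * t)
flow_g = lambda u, t: u / (1.0 - 2.0 * t * u)

def split(u0, T, tau):
    """Alternate the sped-up flows on consecutive intervals of length tau."""
    u = u0
    for k in range(int(round(T / tau))):
        u = flow_f(u, tau) if k % 2 == 0 else flow_g(u, tau)
    return u

def exact(u0, t):
    # solution of u' = u + u^2:  u/(1+u) = (u0/(1+u0)) e^t
    c = u0 / (1.0 + u0)
    return c * np.exp(t) / (1.0 - c * np.exp(t))

u0, T = 0.1, 1.0
errs = [abs(split(u0, T, T / (2 * n)) - exact(u0, T)) for n in (4, 8, 16, 32)]
# errors shrink as tau -> 0 (first-order Lie splitting)
```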
Here $\alpha_\tau$ is a sequence tending to $0$ with $\tau$ and $\zeta_\alpha$ is the regularizing kernel defined by $$ \zeta_\alpha(x,\rho,u)=\frac{1}{\alpha^{3}}\bar\zeta\left(\frac{x}{\alpha}\right)\bar\zeta\left(\frac{\rho}{\alpha}\right)\bar\zeta\left(\frac{u}{\alpha}\right), $$ where $\bar\zeta$ is the non-negative smooth density of a probability measure. To define the convolution product with respect to $\rho$ in \eqref{sigmakstartau} we have set $\sigma^\varepsilon_k(x,\rho,u)=0$ for $\rho\leq 0$: this is consistent with the bound \eqref{A0eps} which gives $\sigma^\varepsilon_k(x,\rho,u)=0$ for $\rho=0$. We assume furthermore that $\bar\zeta$ is compactly supported in $\mathbb{R}_+$. Then, by \eqref{A0eps}, we have, for $\alpha_\tau$ small enough, \begin{equation}\label{A0epstau} \mathbf{G}^{\varepsilon,\tau}(x,\rho,u):=\bigg(\sum_{k\geq 1}|\sigma_k^{\varepsilon,\tau}(x,\rho,u)|^2\bigg)^{1/2}\leq 2 A_0\rho\left[1+u^{2}+\rho^{2\theta}\right]^{1/2}, \end{equation} for all $x\in\mathbb{T}$, $\mathbf{U}\in\mathbb{R}_+\times\mathbb{R}$. By \eqref{Trunceps}, we have, for $\alpha_\tau$ small enough with respect to $\varkappa_\varepsilon$, \begin{equation}\label{Truncepstau} \mathrm{supp}(\mathbf{G}^{\varepsilon,\tau})\subset\mathbb{T}_x\times\Lambda_{2\varkappa_\varepsilon}. \end{equation} It also follows from \eqref{BoundTrunceps} and \eqref{Lipsigmaeps} that \begin{equation}\label{BoundTruncepstau} |\mathbf{G}^{\varepsilon,\tau}(x,\mathbf{U})|\leq M(\varkappa_\varepsilon), \end{equation} and \begin{equation}\label{Lipsigmaepstau} \sum_{k\geq 1}\left|\sigma_k^{\varepsilon,\tau}(x,\mathbf{U}_1)-\sigma_k^{\varepsilon,\tau}(x,\mathbf{U}_2)\right|^2\leq C(\varepsilon,R)|\mathbf{U}_1-\mathbf{U}_2|^2, \end{equation} for all $x\in\mathbb{T}$, $\mathbf{U}_1,\mathbf{U}_2\in\mathbb{R}_+\times\mathbb{R}$. 
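The requirements on $\bar\zeta$ (smooth, probability density, compactly supported in $\mathbb{R}_+$) are met, for instance, by the standard bump function rescaled to $(0,1)$; a short numerical check of ours (the normalization constant below is computed by quadrature, not in closed form):

```python
import numpy as np

def bump(s):
    # smooth, compactly supported in (0,1), a subset of R_+: mollifying in rho with
    # this profile only samples values rho > 0, consistent with sigma_k = 0 for rho <= 0
    s = np.asarray(s, dtype=float)
    out = np.zeros_like(s)
    m = (s > 0.0) & (s < 1.0)
    z = 2.0 * s[m] - 1.0                    # map (0,1) onto (-1,1)
    out[m] = np.exp(-1.0 / (1.0 - z**2))
    return out

s = np.linspace(-1.0, 2.0, 60001)
ds = s[1] - s[0]
mass = float(np.sum(bump(s)) * ds)          # Riemann sum for the total mass
zeta_bar = lambda v: bump(v) / mass         # normalized: a probability density
```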
\medskip For further use, we note here that \eqref{A0epstau} gives \begin{equation}\label{noisebyenergyepstau} |\mathbf{G}^{\varepsilon,\tau}(x,\mathbf{U})|^2\leq \rho A_0^\sharp(\eta_0(\mathbf{U})+\eta_E(\mathbf{U})), \end{equation} where $A_0^\sharp$ depends on ${A_0}$ and $\gamma$ only (compare to \eqref{noisebyenergy}). \medskip Let us define the following indicator functions \begin{equation}\label{defhtau} \mathbf{1}_\mathrm{det}=\sum_{k\geq 0}\mathbf{1}_{[t_{2k},t_{2k+1})},\quad \mathbf{1}_\mathrm{sto}=1-\mathbf{1}_\mathrm{det}, \end{equation} which will be used to localize various estimates below. \begin{definition}[Pathwise solution to the splitting approximation] Let $\mathbf{U}_0\in L^\infty(\mathbb{T})$ satisfy $\rho_0\geq c_0$ a.e. in $\mathbb{T}$, for a positive constant $c_0$. Let $T>0$. A process $(\mathbf{U}(t))_{t\in[0,T]}$ with values in $L^2(\mathbb{T})$ is said to be a pathwise solution to \eqref{splitEulerDet}-\eqref{splitEulerSto} if it is a predictable process such that \begin{enumerate} \item almost surely, $\mathbf{U} \in C([0,T];L^2(\mathbb{T}))$, \item there exist random variables $c_\mathrm{min}$, $C_\mathrm{max}$ with values in $(0,+\infty)$ such that, almost surely, \begin{equation}\label{asregsolepstau} c_\mathrm{min}\leq\rho\leq C_\mathrm{max},\quad |q|\leq C_\mathrm{max}\mbox{ a.e. 
in }Q_T, \end{equation} \item almost surely, for all $t\in[0,T]$, for every test function $\varphi\in C^2(\mathbb{T};\mathbb{R}^2)$, the following equation is satisfied: \begin{align} \big\langle \mathbf{U}(t),\varphi\big\rangle=\big\langle \mathbf{U}_0,\varphi\big\rangle&+2\int_0^t\mathbf{1}_\mathrm{det}(s)\left[ \big\langle\mathbf{F}(\mathbf{U}),\partial_x\varphi\big\rangle+\varepsilon\big\langle\mathbf{U},\partial^2_{xx}\varphi\big\rangle\right]d s\nonumber\\ &+\sqrt{2}\int_0^t\mathbf{1}_\mathrm{sto}(s)\big\langle\mathbf{\Psi}^{\varepsilon,\tau}(\mathbf{U})\,d W(s),\varphi\big\rangle.\label{weaksoltau} \end{align} \end{enumerate} \label{def:pathsoltau}\end{definition} \begin{proposition}[Pathwise solution to the splitting approximation] Let $T>0$, let $\mathbf{U}_0\in W^{2,2}(\mathbb{T})$ satisfy $\rho_0\geq c_0$ a.e. in $\mathbb{T}$ for a given constant $c_0>0$. Assume that \eqref{A0} is satisfied. Then there exists a unique pathwise solution $\mathbf{U}^\tau$ to \eqref{splitEulerDet}-\eqref{splitEulerSto}. Let $g\in C^2(\mathbb{R})$ be a convex function. 
Given an entropy-entropy flux pair $(\eta,H)$ defined by \eqref{entropychi}-\eqref{entropychiflux}, $\mathbf{U}^\tau$ satisfies the following entropy balance equation: almost surely, for all $t\in[0,T]$, for every test function $\varphi\in C^2(\mathbb{T})$, \begin{align} \big\langle \eta(\mathbf{U}^\tau(t)),\varphi\big\rangle =&\big\langle \eta(\mathbf{U}_0),\varphi\big\rangle+2\int_0^t\mathbf{1}_\mathrm{det}(s)\left[ \big\langle H(\mathbf{U}^\tau),\partial_x\varphi\big\rangle+\varepsilon\big\langle\eta(\mathbf{U}^\tau),\partial^2_{xx}\varphi\big\rangle\right]d s\nonumber\\ &-2\varepsilon\int_0^t\mathbf{1}_\mathrm{det}(s)\big\langle \eta''(\mathbf{U}^\tau)\cdot(\partial_x\mathbf{U}^\tau,\partial_x\mathbf{U}^\tau),\varphi\big\rangle ds\nonumber\\ &+\sqrt{2}\int_0^t\mathbf{1}_\mathrm{sto}(s)\big\langle\eta'(\mathbf{U}^\tau)\mathbf{\Psi}^{\varepsilon,\tau}(\mathbf{U}^\tau)\,d W(s),\varphi\big\rangle\nonumber\\ &+ \int_0^t\mathbf{1}_\mathrm{sto}(s)\big\langle\mathbf{G}^{\varepsilon,\tau}(\mathbf{U}^\tau)^2\partial^2_{qq} {\eta}(\mathbf{U}^\tau),\varphi\big\rangle ds.\label{Itoentropytau} \end{align} \label{prop:pathsoltau}\end{proposition} \begin{proof} The deterministic problem \eqref{splitEulerDet} is solved in \cite{LionsPerthameSouganidis96}: for Lipschitz continuous initial data $(\rho_0,q_0)$ with an initial density $\rho_0$ uniformly positive, say $\rho_0\geq c_0>0$ on $\mathbb{T}$, Problem~\eqref{splitEulerDet} admits a unique solution $\mathbf{U}$ in the class of functions $$ \mathbf{U}\in L^\infty(0,t_1;W^{1,\infty}(\mathbb{T}))\cap C([0,t_1];L^2(\mathbb{T}));\quad \rho\geq c_1\mbox{ on }\mathbb{T}\times[0,t_1]. $$ Here $c_1>0$ is a constant depending continuously on $t_1$ and on $c_0$, $\|\rho_0\|_{L^\infty(\mathbb{T})}$, $\|q_0\|_{L^\infty(\mathbb{T})}$ (see Theorem~\ref{th:uniformpositive} and Remark~\ref{rk:positivityfordet} in this paper for more details about this positivity result). 
By \eqref{LinftytH2xBounded}, we have $\mathbf{U}(t_1)\in W^{2,2}(\mathbb{T})$.\medskip In a second step, we solve the stochastic problem \eqref{splitEulerSto} on the interval $[t_1,t_2)$. It is an ordinary stochastic differential equation, in which $x$ plays the role of a parameter. The coefficients of the noise in \eqref{sigmakstartau} are functions with bounded derivatives of all orders. Since $x\mapsto\rho^\tau(x,t_1)$ is in $W^{2,2}(\mathbb{T})$, we may rewrite the second equation of \eqref{splitEulerSto} as \begin{equation}\label{splitEulerSto2} dq=\sum_{k=1}^{K^\tau}g_k(x,q)d\beta_k(t), \end{equation} where $g_k$ satisfies \begin{equation}\label{reggk} \partial_x^m\partial_q^l g_k\in L^\infty(\mathbb{R};L^2(\mathbb{T})), \end{equation} for all $l\geq 0$, $m\in\{0,1,2\}$. The existence of a solution to \eqref{splitEulerSto2} on $(t_1,t_2)$ with initial datum $q(x,t_1)$ at $t=t_1$ is ensured by a classical fixed point theorem, in the space of adapted functions $$ q\in C([t_1,t_2];L^2(\Omega\times\mathbb{T})). $$ By differentiating once with respect to $x$ in \eqref{splitEulerSto2}, we obtain $$ d(\partial_x q)=\sum_{k=1}^{K^\tau}\big(\partial_x g_k(x,q)+\partial_q g_k(x,q) (\partial_x q)\big)d\beta_k(t). $$ By the It\={o} Formula and the Gronwall Lemma, it follows that \begin{equation}\label{firstdiffqtau0} \sup_{t\in[t_1,t_2]}\mathbb{E}\|\partial_x q\|_{L^p(\mathbb{T})}^p\leq C\mathbb{E}\|\partial_x q(t_1)\|_{L^p(\mathbb{T})}^p,\quad p\geq 2, \end{equation} where the constant $C$ depends on the functions $g_k$, on $p$ and on $\tau$. 
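Since $x$ enters \eqref{splitEulerSto2} only as a parameter, the equation can be approximated pointwise in $x$ by an explicit Euler--Maruyama scheme. The sketch below is purely illustrative (the coefficient $g_1$ is invented for the example and is not one of the paper's $\sigma_k^{\varepsilon,\tau}$); it also makes visible that the density profile is untouched on the stochastic sub-intervals.

```python
import numpy as np

def euler_maruyama(q0, g_list, t1, t2, n_steps, rng):
    """Explicit scheme for dq = sum_k g_k(q) d beta_k on [t1, t2]; q0 is the
    profile x -> q(x, t1) on a spatial grid, x entering only through g_k."""
    dt = (t2 - t1) / n_steps
    q = np.array(q0, dtype=float)
    for _ in range(n_steps):
        db = rng.normal(0.0, np.sqrt(dt), size=len(g_list))  # i.i.d. increments
        q = q + sum(g(q) * b for g, b in zip(g_list, db))
    return q

x = np.linspace(0.0, 1.0, 128, endpoint=False)
rho = 1.0 + 0.5 * np.cos(2 * np.pi * x)       # density: constant in time here
q0 = np.sin(2 * np.pi * x)
g1 = lambda q: 0.1 * rho * q                  # hypothetical smooth coefficient
rng = np.random.default_rng(1)
q_end = euler_maruyama(q0, [g1], 0.0, 0.5, 500, rng)
```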
By differentiating again in \eqref{splitEulerSto2}, we have \begin{align} d(\partial_{xx}^2 q)=\sum_{k=1}^{K^\tau}\big(&\partial^2_{xx} g_k(x,q)+2\partial^2_{xq} g_k(x,q) (\partial_x q) +\partial^2_{qq} g_k(x,q) |\partial_x q|^2+\partial_q g_k(x,q) (\partial^2_{xx} q)\big)d\beta_k(t).\label{seconddiffqtau-1} \end{align} Using \eqref{firstdiffqtau0} with $p=2$ and $p=4$, the It\={o} Formula and the Gronwall Lemma, we obtain \begin{equation}\label{seconddiffqtau} \sup_{t\in[t_1,t_2]}\mathbb{E}\|\partial_{xx}^2 q\|_{L^2(\mathbb{T})}^2\leq C\big(\mathbb{E}\|\partial_{xx}^2 q(t_1)\|_{L^2(\mathbb{T})}^2+\mathbb{E}\|\partial_x q(t_1)\|_{L^2(\mathbb{T})}^2+\mathbb{E}\|\partial_x q(t_1)\|_{L^4(\mathbb{T})}^4\big), \end{equation} where the constant $C$ depends on the functions $g_k$ and on $\tau$. By Doob's Martingale Inequality, we therefore have \begin{align*} &\mathbb{E}\sup_{t\in[t_1,t_2]}\Big\|\int_{t_1}^t \partial_q g_k(x,q(s)) \partial^2_{xx} q(s) d\beta_k(s)\Big\|_{L^2(\mathbb{T})}^2\\ \leq &4 \mathbb{E}\Big\|\int_{t_1}^{t_2} \partial_q g_k(x,q(s)) \partial^2_{xx} q(s) d\beta_k(s)\Big\|_{L^2(\mathbb{T})}^2\\ \leq &C\big(\mathbb{E}\|\partial_{xx}^2 q(t_1)\|_{L^2(\mathbb{T})}^2+\mathbb{E}\|\partial_x q(t_1)\|_{L^2(\mathbb{T})}^2+\mathbb{E}\|\partial_x q(t_1)\|_{L^4(\mathbb{T})}^4\big). \end{align*} Returning to \eqref{seconddiffqtau-1}, we deduce that \begin{equation}\label{seconddiffqtau+1} \mathbb{E}\sup_{t\in[t_1,t_2]}\|\partial_{xx}^2 q\|_{L^2(\mathbb{T})}^2\leq C\big(\mathbb{E}\|\partial_{xx}^2 q(t_1)\|_{L^2(\mathbb{T})}^2+\mathbb{E}\|\partial_x q(t_1)\|_{L^2(\mathbb{T})}^2+\mathbb{E}\|\partial_x q(t_1)\|_{L^4(\mathbb{T})}^4\big). \end{equation} By a similar argument, using Doob's Martingale Inequality, we can improve \eqref{firstdiffqtau0} into \begin{equation}\label{firstdiffqtau+1} \mathbb{E}\sup_{t\in[t_1,t_2]}\|\partial_x q\|_{L^p(\mathbb{T})}^p\leq C\mathbb{E}\|\partial_x q(t_1)\|_{L^p(\mathbb{T})}^p,\quad p\geq 2. 
\end{equation} Note that differentiation in \eqref{splitEulerSto2} has to be justified. The argument is standard: to obtain a solution to \eqref{splitEulerSto2} which satisfies \eqref{firstdiffqtau+1} and \eqref{seconddiffqtau+1}, we simply prove existence by using a fixed-point method in a smaller space, in\-cor\-po\-ra\-ting the bounds \eqref{firstdiffqtau+1} and \eqref{seconddiffqtau+1}. By \eqref{seconddiffqtau+1}, we conclude that $\mathbf{U}(t_2)\in W^{2,2}(\mathbb{T})$. Of course $\rho(t_2)=\rho(t_1)\geq c_1$ a.e. on $\mathbb{T}$. The initial datum $\mathbf{U}(t_2)$ is therefore admissible with regard to the re\-so\-lu\-tion of the deterministic problem \eqref{splitEulerDet} on $Q_{t_2,t_3}$. By iterating the procedure, we build $\mathbf{U}^\tau$ on the whole interval $[0,T]$. On intervals $[t_{2k+1},t_{2k+2}]$ (stochastic evolution), the density $\rho$ is unchanged. On intervals $[t_{2k},t_{2k+1}]$ the positivity of $\rho$ at $t=t_{2k}$ is preserved by Theorem~\ref{th:uniformpositive} and Remark~\ref{rk:positivityfordet}. Therefore there exists a random variable $c_\mathrm{min}$ (the possibility that it depends on $\tau$ is not excluded at this stage of the proof) such that, almost surely $\rho^\tau\geq c_\mathrm{min}$ a.e. on $Q_T$. \medskip Regarding the measurability of $\mathbf{U}^\tau$, we observe that the function $\mathbf{U}^\tau(t_2)$ is $\mathcal{F}_{t_2}$-measurable. Since $\mathbf{U}^\tau(t_2)\mapsto (\mathbf{U}^\tau(t))_{t\in[t_2,t_3]}$ is Lipschitz continuous from $L^2(\mathbb{T})^2$ into $C([t_2,t_3];L^2(\mathbb{T})^2)$ by Remark~\ref{rk:uniqpathepsdet}, the random variable $\mathbf{U}^\tau(t)$ is $\mathcal{F}_{t_2}$-measurable for every $t\in[t_2,t_3]$. In particular, $\mathbf{U}^\tau(t)$ is adapted on $[t_2,t_3]$. Repeating the argument, we obtain that $\mathbf{U}^\tau(t)$ is adapted. 
Since $\mathbf{U}^\tau$ is almost surely in $C([0,T];L^2(\mathbb{T}))$, it has a modification which is predictable.\medskip This completes the proof of the existence of a pathwise solution $\mathbf{U}^\tau$ to \eqref{splitEulerDet}-\eqref{splitEulerSto}. The uniqueness is a consequence of the uniqueness properties for the deterministic and the stochastic problems. Similarly, the entropy balance equation \eqref{Itoentropytau} is obtained by using the following entropy balance law on $[t_{2k},t_{2k+1}]$: \begin{align} \big\langle \eta(\mathbf{U}^\tau(t)),\varphi\big\rangle =&\big\langle \eta(\mathbf{U}^\tau(t_{2k})),\varphi\big\rangle+2\int_{t_{2k}}^t\mathbf{1}_\mathrm{det}(s)\left[ \big\langle H(\mathbf{U}^\tau),\partial_x\varphi\big\rangle+\varepsilon\big\langle\eta(\mathbf{U}^\tau),\partial^2_{xx}\varphi\big\rangle\right]d s\nonumber\\ &-2\varepsilon\int_{t_{2k}}^t\mathbf{1}_\mathrm{det}(s)\big\langle \eta''(\mathbf{U}^\tau)\cdot(\partial_x \mathbf{U}^\tau,\partial_x \mathbf{U}^\tau),\varphi\big\rangle ds, \label{Itoentropytaudet} \end{align} for all $t\in [t_{2k},t_{2k+1}]$, and by combining \eqref{Itoentropytaudet} with the identity \begin{align} \big\langle \eta(\mathbf{U}^\tau(t)),\varphi\big\rangle =&\big\langle \eta(\mathbf{U}^\tau(t_{2k+1})),\varphi\big\rangle +\sqrt{2}\int_{t_{2k+1}}^t\mathbf{1}_\mathrm{sto}(s)\big\langle\eta'(\mathbf{U}^\tau)\mathbf{\Psi}^{\varepsilon,\tau}(\mathbf{U}^\tau)\,d W(s),\varphi\big\rangle\nonumber\\ &+\int_{t_{2k+1}}^t\mathbf{1}_\mathrm{sto}(s)\big\langle\mathbf{G}^{\varepsilon,\tau}(\mathbf{U}^\tau)^2\partial^2_{qq} {\eta}(\mathbf{U}^\tau),\varphi\big\rangle ds,\label{Itoentropytausto} \end{align} for all $t\in [t_{2k+1},t_{2k+2}]$. We deduce \eqref{Itoentropytausto} from the stochastic equation \eqref{splitEulerSto} (where $x$ is a parameter) and the It\={o} Formula, which we sum against $\varphi$. This concludes the proof of the proposition.
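Note that no factor $\frac12$ appears in the It\={o} correction term of \eqref{Itoentropytausto}. Indeed, during the stochastic steps $\rho$ is constant in time, so the It\={o} Formula applied, for fixed $x$, to $q\mapsto\eta(\rho,q)$ along $dq=\sqrt{2}\,\mathbf{\Psi}^{\varepsilon,\tau}(\mathbf{U}^\tau)\,dW(s)$ gives, pointwise,
$$
d\eta(\mathbf{U}^\tau)=\sqrt{2}\,\eta'(\mathbf{U}^\tau)\mathbf{\Psi}^{\varepsilon,\tau}(\mathbf{U}^\tau)\,dW(s)+\mathbf{G}^{\varepsilon,\tau}(\mathbf{U}^\tau)^2\,\partial^2_{qq}\eta(\mathbf{U}^\tau)\,ds,
$$
the factor $\frac12$ in the It\={o} correction being compensated by the factor $(\sqrt{2})^2=2$ coming from the quadratic variation of $q$.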
\end{proof} \subsubsection{Entropy bounds}\label{sec:entropybounds} If $\eta\in C(\mathbb{R}^2)$ is an entropy and $\mathbf{U}\colon\mathbb{T}\to\mathbb{R}^2$, we denote by $$ \Gamma_\eta(\mathbf{U}):=\int_{\mathbb{T}}\eta(\mathbf{U}(x))dx $$ the averaged entropy of $\mathbf{U}$. \medskip \begin{proposition}[Entropy bounds] Let $m\in \mathbb{N}$. Let $\eta_m$ be the entropy given by \eqref{entropychi} with $g(\xi)=\xi^{2m}$. Let $\mathbf{U}_0\in W^{2,2}(\mathbb{T})$ be such that $\rho_0\geq c_0$ a.e. in $\mathbb{T}$ for a given constant $c_0>0$. Assume that the growth condition \eqref{A0eps} is satisfied and that $$ \mathbb{E}\int_\mathbb{T} \left(\eta_0(\mathbf{U}_{\varepsilon 0})+\eta_{2m}(\mathbf{U}_{\varepsilon 0}) \right)dx <+\infty. $$ Then the solution $\mathbf{U}^\tau$ to \eqref{splitEulerDet}-\eqref{splitEulerSto} satisfies the estimate \begin{equation}\label{estimmomentepstau} \mathbb{E}\sup_{t\in[0,T]}\Gamma_{\eta_m}(\mathbf{U}^\tau(t))+2\varepsilon\mathbb{E}\iint_{Q_T} \mathbf{1}_\mathrm{det}\eta_m''(\mathbf{U}^\tau)\cdot(\partial_x \mathbf{U}^\tau,\partial_x \mathbf{U}^\tau)dx dt=\mathcal{O}(1), \end{equation} where the quantity denoted by $\mathcal{O}(1)$ depends only on $T$, $\gamma$, on the constant ${A_0}$ in \eqref{A0eps}, on $m$ and on the initial quantities $\mathbb{E}\Gamma_{\eta}(\mathbf{U}_0)$ for $\eta\in\{\eta_0,\eta_{2m}\}$. \label{prop:boundetam}\end{proposition} \begin{proof} To prove Proposition~\ref{prop:boundetam} we will use the following result. \begin{lemma}\label{lemmaentropies} Let $m,n\in\mathbb{N}$.
Then \begin{equation}\label{estimetabelow} \rho(u^{2m}+\rho^{2m\theta})=\mathcal{O}(1){\eta_m}(\mathbf{U}),\quad {\eta_m}(\mathbf{U})=\mathcal{O}(1)\left[\rho(u^{2m}+\rho^{2m\theta})\right], \end{equation} where $\mathcal{O}(1)$ depends on $m$; \begin{equation}\label{productnm} \eta_m(\mathbf{U})\cdot\eta_n(\mathbf{U})=\mathcal{O}(1)\left[\rho\eta_{m+n}(\mathbf{U})\right], \end{equation} where $\mathcal{O}(1)$ depends on $m$ and $n$; \begin{equation}\label{rhoetam} \rho\eta_m(\mathbf{U})=\mathcal{O}(1)\left[\eta_m(\mathbf{U})+\eta_p(\mathbf{U})\right], \end{equation} for any $p\geq m+\frac{1}{2\theta}$, where $\mathcal{O}(1)$ depends on $m$ and $p$, and, for $0\leq n\leq m$, \begin{equation}\label{nversus0m} \eta_n(\mathbf{U})=\mathcal{O}(1)\left[\eta_{0}(\mathbf{U})+\eta_{m}(\mathbf{U})\right], \end{equation} where $\mathcal{O}(1)$ depends on $m$ and $n$. Besides, Hypothesis~\eqref{A0epstau} gives the following bounds: \begin{equation}\label{dqetaG} \mathbf{G}^{\varepsilon,\tau}(\mathbf{U})^2|\partial_q{\eta_m}(\mathbf{U})|^2=\mathcal{O}(1)\left[\eta_{0}(\mathbf{U})+\eta_{2m}(\mathbf{U})\right], \end{equation} and \begin{equation}\label{dqqetaG} \mathbf{G}^{\varepsilon,\tau}(\mathbf{U})^2\partial^2_{qq} {\eta_m}(\mathbf{U})=\mathcal{O}(1)\left[\eta_0(\mathbf{U})+\eta_m(\mathbf{U})\right]. \end{equation} \end{lemma} \begin{proof} The second estimate in \eqref{estimetabelow} and the estimates \eqref{productnm} and \eqref{rhoetam} are all obtained by repeated applications of the Young Inequality. The first estimate in \eqref{estimetabelow} is proved by developing the term $g(u+z\rho^\theta)$ in \eqref{eqeta}: \begin{equation}\label{developetam} \eta_m(\mathbf{U})=\rho c_\lambda\sum_{j=0}^{2m}\binom{2m}{j}\int_{-1}^1 u^j z^{2m-j}\rho^{\theta(2m-j)}(1-z^2)^\lambda dz. \end{equation} The terms with odd index $j$ in the sum in the right-hand side of \eqref{developetam} all vanish.
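Indeed, when $j$ is odd, so is the exponent $2m-j$, and the $z$-integral bears on an odd function:
$$
\int_{-1}^1 z^{2m-j}(1-z^2)^\lambda\,dz=0\quad\mbox{for } j \mbox{ odd}.
$$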
Therefore only non-negative terms remain: \begin{align*} \eta_m(\mathbf{U})&\geq \rho c_\lambda\sum_{j\in\{0,2m\}}\binom{2m}{j}\int_{-1}^1 u^j z^{2m-j}\rho^{\theta(2m-j)}(1-z^2)^\lambda dz\\ &=\rho\left(d_\lambda(m)\rho^{2\theta m}+u^{2m}\right), \end{align*} where the coefficient $d_\lambda(m)$ is given by $$ d_\lambda(m)=c_\lambda\int_{-1}^1 z^{2m}(1-z^2)^\lambda dz. $$ Let us now give the details of the proof of \eqref{rhoetam}: using \eqref{estimetabelow}, it is sufficient to get an estimate on $\rho^2(u^{2m}+\rho^{2m\theta})$. If $\rho\leq 1$, then $\eta_m(\mathbf{U})$ provides an upper bound by \eqref{estimetabelow} again. If $\rho\geq 1$, then $\rho^{2m\theta+1}\leq\rho^{2p\theta}$ and $$ \rho u^{2m}\leq \frac{\rho^\alpha}{\alpha}+\frac{u^{2m\beta}}{\beta},\quad\frac{1}{\alpha}+\frac{1}{\beta}=1. $$ Taking $\beta=\frac{p}{m}$ gives $\alpha=\frac{p}{p-m}\leq 2p\theta$, hence $$ \rho u^{2m}=\mathcal{O}(1)\left[u^{2p}+\rho^{2p\theta}\right] $$ since $\rho\geq 1$. This proves \eqref{rhoetam}. To obtain \eqref{dqetaG} and \eqref{dqqetaG}, we observe that \eqref{A0epstau} is equivalent to \begin{equation}\label{HypGeta} \mathbf{G}^{\varepsilon,\tau}(\mathbf{U})^2=\mathcal{O}(1)\left[\rho\left(\eta_0(\mathbf{U})+\eta_1(\mathbf{U})\right)\right]. \end{equation} Also, by \eqref{entropychi} and \eqref{estimetabelow}, we have $$ |\partial_q{\eta_m}(\mathbf{U})|^2=\mathcal{O}(1)\left[\frac{1}{\rho^2}\eta_{2m-1}(\mathbf{U})\right], \, \partial^2_{qq}\eta_m(\mathbf{U})=\mathcal{O}(1)\left[\frac{1}{\rho^2}\eta_{m-1}(\mathbf{U})\right]. $$ Using \eqref{productnm}, \eqref{nversus0m}, we deduce \eqref{dqetaG} and \eqref{dqqetaG}. \end{proof} We now continue with the proof of Proposition~\ref{prop:boundetam}: we apply the entropy balance equation \eqref{Itoentropytau} to $\mathbf{U}^\tau$ with $\varphi\equiv 1$ and take expectation on both sides.
This gives \begin{equation*} \mathbb{E} \Gamma_{\eta_m}(\mathbf{U}^\tau(t))+2\varepsilon\mathbb{E}\iint_{Q_t} \mathbf{1}_\mathrm{det} \eta_m''(\mathbf{U}^\tau)\cdot(\partial_x \mathbf{U}^\tau,\partial_x \mathbf{U}^\tau)dx ds =\mathbb{E}\Gamma_{\eta_m}(\mathbf{U}^\tau_0)+\mathbb{E} R_{\eta_m}(t), \end{equation*} where $$ R_{\eta_m}(t):=\iint_{Q_t} \mathbf{1}_\mathrm{sto}\mathbf{G}^{\varepsilon,\tau}(\mathbf{U}^\tau)^2\partial^2_{qq} {\eta_m}(\mathbf{U}^\tau) dx ds $$ is the It\={o} correction term. If $m=0$, then $\partial^2_{qq} \eta_0=0$ and we obtain (note the difference with \eqref{estimmomentepstau} in the first term) \begin{equation}\label{estimmomentepstau0} \sup_{t\in[0,T]}\mathbb{E}\Gamma_{\eta_0}(\mathbf{U}^\tau(t))+2\varepsilon\mathbb{E}\iint_{Q_T} \mathbf{1}_\mathrm{det}\eta_0''(\mathbf{U}^\tau)\cdot(\partial_x \mathbf{U}^\tau,\partial_x \mathbf{U}^\tau)dx dt=\mathcal{O}(1). \end{equation} If $m\geq 1$, then \eqref{dqqetaG} gives \begin{equation}\label{preGronwall} \mathbb{E} R_{\eta_m}(t) =\mathcal{O}(1)\left[\int_0^t\mathbb{E} (\Gamma_{\eta_m}(\mathbf{U}^\tau(s))+\Gamma_{\eta_0}(\mathbf{U}^\tau(s)))ds\right]. \end{equation} We use Gronwall's Lemma and \eqref{estimmomentepstau0} to deduce the following preliminary estimate \begin{equation}\label{estimmomentepstau0m} \sup_{t\in[0,T]}\mathbb{E}\Gamma_{\eta_m}(\mathbf{U}^\tau(t))+2\varepsilon\mathbb{E}\iint_{Q_T} \mathbf{1}_\mathrm{det}\eta_m''(\mathbf{U}^\tau)\cdot(\partial_x \mathbf{U}^\tau,\partial_x \mathbf{U}^\tau)dx dt=\mathcal{O}(1).
\end{equation} To prove \eqref{estimmomentepstau}, we have to take into account the noise term, \textit{i.e.} we apply the entropy balance equation \eqref{Itoentropytau} to $\mathbf{U}^\tau$ with $\varphi\equiv 1$ and do not take expectation this time: we have then \begin{equation}\label{withdissip} 0\leq \Gamma_{\eta_m}(\mathbf{U}^\tau(t))= \Gamma_{\eta_m}(\mathbf{U}^\tau_0)+M_{\eta_m}(t)+ R_{\eta_m}(t)-D_{\eta_m}(t) \end{equation} where $$ M_{\eta_m}(t)=\sqrt{2}\sum_{k\geq 1} \int_0^t \mathbf{1}_{\mathrm{sto}}(s)\<\sigma_k^{\varepsilon,\tau}(\mathbf{U}^\tau(s)),\partial_q{\eta_m}(\mathbf{U}^\tau(s))\>_{L^2(\mathbb{T})} d\beta_k(s) $$ and $$ D_{\eta_m}(t)=2\iint_{Q_t}\mathbf{1}_\mathrm{det}\eta_m''(\mathbf{U}^\tau)\cdot(\partial_x \mathbf{U}^\tau,\partial_x \mathbf{U}^\tau)dx ds. $$ Since $D_{\eta_m}\geq 0$, \eqref{withdissip} gives $$ 0\leq \Gamma_{\eta_m}(\mathbf{U}^\tau(t)) \leq \Gamma_{\eta_m}(\mathbf{U}^\tau_0)+M_{\eta_m}(t)+ R_{\eta_m}(t). $$ Similarly as for \eqref{preGronwall}, we have $$ \ds \mathbb{E} \sup_{t\in[0,T]} |R_{\eta_m}(t)| = \mathcal{O}(1)\left[\int_0^T\mathbb{E} (\Gamma_{\eta_m}(\mathbf{U}^\tau(s))+\Gamma_{\eta_0}(\mathbf{U}^\tau(s)))ds\right], $$ and therefore, by \eqref{estimmomentepstau0m}, the last term $R_{\eta_m}$ satisfies the bound $$ \mathbb{E}\sup_{t\in[0,T]}|R_{\eta_m}(t)|=\mathcal{O}(1). $$ By Doob's Martingale Inequality, we also have \begin{align*} \mathbb{E}\sup_{t\in[0,T]}\left|M_{\eta_m}(t)\right|&\leq C\mathbb{E}\left(\int_0^T\sum_{k\geq 1}\<\sigma_k^{\varepsilon,\tau}(\mathbf{U}^\tau(s)),\partial_q{\eta_m}(\mathbf{U}^\tau(s))\>_{L^2(\mathbb{T})}^2\, ds\right)^{1/2}\\ &\leq C\mathbb{E}\left(\iint_{Q_T}\mathbf{G}^{\varepsilon,\tau}(\mathbf{U}^\tau)^2|\partial_q{\eta_m}(\mathbf{U}^\tau)|^2\, dx ds\right)^{1/2} \end{align*} for a given constant $C$. By \eqref{dqetaG} and \eqref{estimmomentepstau0m} (with $2m$ instead of $m$) we obtain $$ \mathbb{E}\sup_{t\in[0,T]}\left|M_{\eta_m}(t)\right|=\mathcal{O}(1).
$$ This concludes the proof of the proposition. \end{proof} \begin{corollary}[Bounds on the moments] Let $m\in\mathbb{N}$. Let $\eta_m$ be the entropy given by \eqref{entropychi} with $g(\xi)=\xi^{2m}$. Let $\mathbf{U}_0\in W^{2,2}(\mathbb{T})$ be such that $\rho_0\geq c_0$ a.e. in $\mathbb{T}$ for a given constant $c_0>0$. Assume that the growth condition \eqref{A0eps} is satisfied and that $$ \mathbb{E}\int_\mathbb{T} \left(\eta_0(\mathbf{U}_{\varepsilon 0})+\eta_{2m}(\mathbf{U}_{\varepsilon 0}) \right)dx <+\infty. $$ Then, the solution $\mathbf{U}^\tau$ to \eqref{splitEulerDet}-\eqref{splitEulerSto} satisfies: \begin{equation}\label{estimmomentepstau2} \mathbb{E}\sup_{t\in[0,T]}\int_{\mathbb{T}}\left(|u^\tau|^{2m}+|\rho^\tau|^{m(\gamma-1)}\right) \rho^\tau dx=\mathcal{O}(1), \end{equation} where $\mathcal{O}(1)$ depends only on $T$, $\gamma$, on the constant ${A_0}$ in \eqref{A0eps}, on $m$ and on the initial quantities $\mathbb{E}\Gamma_{\eta}(\mathbf{U}_0)$ for $\eta\in\{\eta_0,\eta_{2m}\}$. \label{cor:boundmoments}\end{corollary} The corollary follows from Proposition~\ref{prop:boundetam} and the first estimate in \eqref{estimetabelow}, since $2m\theta=m(\gamma-1)$. To conclude this part, we complete Lemma~\ref{lemmaentropies} with the following result, which will be used later, in particular to get some estimates on the moments of entropy-entropy flux pairs. \begin{lemma}\label{lemmaentropies2} For $m\in\mathbb{N}$, let $(\eta_m,H_m)$ be the entropy-entropy flux pair associated to the function $g(\xi)=\xi^{2m}$ by \eqref{entropychi}-\eqref{entropychiflux}. Let $s>1$.
Then \begin{align*} |\eta_m(\mathbf{U})|^s&=\mathcal{O}(1)\left[\eta_0(\mathbf{U})+\eta_p(\mathbf{U})\right],\quad p\geq ms+\frac{s-1}{2\theta},\\ |H_m(\mathbf{U})|^s&=\mathcal{O}(1)\left[\eta_0(\mathbf{U})+\eta_p(\mathbf{U})\right],\quad p\geq (m+1/2)s+\frac{s-1}{2\theta},\\ |u \eta_m(\mathbf{U})|^s&=\mathcal{O}(1)\left[\eta_0(\mathbf{U})+\eta_p(\mathbf{U})\right],\quad p\geq (m+1/2)s+\frac{s-1}{2\theta},\\ |u H_m(\mathbf{U})|^s&=\mathcal{O}(1)\left[\eta_0(\mathbf{U})+\eta_p(\mathbf{U})\right],\quad p\geq (m+1)s+\frac{s-1}{2\theta}, \end{align*} where $\mathcal{O}(1)$ depends on $m$, $s$ and the exponent $p$ of each equation. \end{lemma} \begin{proof} By \eqref{estimetabelow}, $|\eta_m(\mathbf{U})|^s=\mathcal{O}(1)\left[\rho^s|u|^{2ms}+\rho^{s+2m\theta s}\right]$. Let $\tilde s\geq ms$. By the Young Inequality, we have \begin{equation}\label{stildeseta} \rho^s|u|^{2ms}\leq C_{s,\tilde s}\rho\left(|u|^{2\tilde s}+\rho^{\frac{(s-1)\tilde s}{\tilde s-ms}}\right). \end{equation} Let $\tilde s=ms+\frac{s-1}{2\theta}$. If $p\geq\tilde s$, then $$ \frac{(s-1)\tilde s}{\tilde s-ms}=2\theta\tilde s\leq 2\theta p $$ and \eqref{stildeseta} gives $$ \rho^s|u|^{2ms}=\mathcal{O}(1)\left[\eta_0(\mathbf{U})+\eta_p(\mathbf{U})\right]. $$ We also have $$ \rho^{s+2m\theta s}=\rho\rho^{2\theta\tilde s}=\mathcal{O}(1)\left[\eta_0(\mathbf{U})+\eta_p(\mathbf{U})\right] $$ and the first estimate follows. The proof of the three other estimates is similar. \end{proof} \subsubsection{$L^\infty$ estimates}\label{sec:Linfty} \begin{proposition}[$L^\infty$ estimates] Let $\mathbf{U}_0\in W^{2,2}(\mathbb{T})$ be such that $\rho_0\geq c_0$ a.e. in $\mathbb{T}$ for a given constant $c_0>0$. Assume that the growth condition \eqref{A0eps} and the localization condition \eqref{Trunceps} are satisfied and that $\mathbf{U}_0\in\Lambda_{\varkappa_\varepsilon}$. 
Then the so\-lu\-tion $\mathbf{U}^\tau$ to \eqref{splitEulerDet}-\eqref{splitEulerSto} satisfies: almost surely, for all $t\in[0,T]$, $\mathbf{U}^\tau(t)\in\Lambda_{2\varkappa_\varepsilon}$. In particular, almost surely, $\|u^\tau\|_{L^\infty(Q_T)}\leq 2\varkappa_\varepsilon$ and $\|\rho^\tau\|_{L^\infty(Q_T)}^\theta\leq 2\varkappa_\varepsilon$. \label{prop:meanLinfty}\end{proposition} \begin{proof} It is well-known (\textit{cf.} \cite[Section 4.]{Diperna83b} and \cite{ChuehConleySmoller77}) that $\Lambda_{\varkappa}$ is an invariant region for \eqref{splitEulerDet}. Let us sketch the argument in a few lines (see \cite{Diperna83b,ChuehConleySmoller77} for a complete proof). Let $\mathbf{U}$ be a smooth solution to the system $$ \partial_t \mathbf{U}+\partial_x\mathbf{F}(\mathbf{U})=\varepsilon\partial^2_{xx}\mathbf{U} $$ in $Q_T$. Let $z=u-\rho^\theta$, $w=u+\rho^\theta$ denote the Riemann invariants. Set also $$ c=\sqrt{p'(\rho)}=\theta\rho^\theta\quad\mbox{ and } \quad P=\begin{pmatrix} 1 & 1 \\ u+c & u-c \end{pmatrix}. $$ The inverse of $P$ is $$ P^{-1}=\frac{1}{2c}\begin{pmatrix} -u+c & 1 \\ u+c & -1 \end{pmatrix}, $$ and $P^{-1}D\mathbf{F}(U)P=D:=\mathrm{diag}(u+c, u-c)$. The vector $$ \mathbf{V}=\begin{pmatrix} w \\ -z \end{pmatrix}=\begin{pmatrix} u+\rho^\theta \\ -u+\rho^\theta \end{pmatrix} $$ satisfies, for $\partial$ a derivation, $\partial\mathbf{V}=\frac{2c}{\rho}P^{-1}\partial \mathbf{U}$ and, thus, \begin{equation}\label{UtoV} \partial_t\mathbf{V}+D\partial_x\mathbf{V}=\varepsilon\partial^2_{xx}\mathbf{V}-\varepsilon\partial_x\left(\frac{2c}{\rho}P^{-1}\right)\partial_x\mathbf{U}.
\end{equation} Computing the last term in the equation \eqref{UtoV} yields the system \begin{subequations}\label{parabwz} \begin{align} \partial_t w+(u+c)\partial_x w&=\varepsilon\partial^2_{xx} w+\frac{\varepsilon}{2c}\left(|\partial_x w|^2-|\partial_x z|^2\right), \label{parabwzw}\\ \partial_t (-z)+(u-c)\partial_x(-z)&=\varepsilon\partial^2_{xx} (-z)+\frac{\varepsilon}{2c}\left(|\partial_x z|^2-|\partial_x w|^2\right). \label{parabwzz} \end{align} \end{subequations} Both equations in \eqref{parabwz} satisfy a maximum principle. In \eqref{splitEulerSto}, $\rho(t)$ is constant. Dividing the equation on $q=\rho u$ by $\rho$, we deduce from \eqref{splitEulerSto} a stochastic differential equation on $u$. Using again that $\rho(t)$ is constant, this gives a stochastic differential equation on $w$ with $x$ as a parameter, and similarly for $z$. By the truncation hypothesis \eqref{Trunceps}, we have the localization property \eqref{Truncepstau} and the region $\Lambda_{2\varkappa_\varepsilon}$ is also an invariant domain for \eqref{splitEulerSto}. It follows that, a.s., for all $t\in[0,T]$, $\mathbf{U}^\tau(t)\in\Lambda_{2\varkappa_\varepsilon}$. \end{proof} \subsubsection{Gradient estimates}\label{sec:gradestimates} In Proposition~\ref{prop:boundetam} above, we have obtained an estimate on $\mathbf{U}^\tau_x$. In the case where $\eta=\eta_E$ is the energy (this corresponds to $g(\xi)=\frac12\xi^2$), we have \begin{equation}\label{gradientEnergy} \eta_E''(\mathbf{U})\cdot(\partial_x \mathbf{U},\partial_x \mathbf{U})=\theta^2|\rho|^{\gamma-2}|\partial_x \rho|^2+\rho|\partial_x u|^2. \end{equation} More generally, we prove the following weighted estimates (see in particular Corollary~\ref{cor:boundgradient2tau} below). \begin{proposition}[Gradient bounds] Let $m\in \mathbb{N}$. Let $\eta_m$ be the entropy given by \eqref{entropychi} with $g(\xi)=\xi^{2m}$. Let $\mathbf{U}_0\in W^{2,2}(\mathbb{T})$ be such that $\rho_0\geq c_0$ a.e.
in $\mathbb{T}$ for a given constant $c_0>0$. Assume that the growth condition \eqref{A0eps} is satisfied and that $$ \mathbb{E}\int_\mathbb{T} \left(\eta_0(\mathbf{U}_{\varepsilon 0})+\eta_{2m}(\mathbf{U}_{\varepsilon 0}) \right)dx <+\infty. $$ Then, the solution $\mathbf{U}^\tau$ to \eqref{splitEulerDet}-\eqref{splitEulerSto} satisfies the estimate \begin{align} &\varepsilon\mathbb{E}\iint_{Q_T}\mathbf{1}_{\mathrm{det}}(t)\,G^{[2]}(\rho^\tau,u^\tau) \big[\theta^2|\rho^\tau|^{\gamma-2} |\partial_x \rho^\tau|^2+\rho^\tau |\partial_x u^\tau|^2\big]dx dt\nonumber\\ \leq&\varepsilon\mathbb{E}\iint_{Q_T}\mathbf{1}_{\mathrm{det}}(t)\,G^{[1]}(\rho^\tau,u^\tau)\big[2\theta |\rho^\tau|^{\frac{\gamma-2}{2}} |\partial_x \rho^\tau|\cdot |\rho^\tau|^{1/2} |\partial_x u^\tau|\big] dx dt +\mathcal{O}(1),\label{estimgradientepstau} \end{align} where \begin{align*} G^{[2]}(\rho,u)&=c_\lambda \int_{-1}^1 g''(u+z\rho^\theta)(1-z^2)_+^\lambda dz,\\ G^{[1]}(\rho,u)&=c_\lambda \int_{-1}^1 |z| g''(u+z\rho^\theta)(1-z^2)_+^\lambda dz, \end{align*} and $\mathcal{O}(1)$ depends on $T$, $\gamma$, on the constant ${A_0}$ in \eqref{A0eps} and on the initial quantities $\mathbb{E}\Gamma_{\eta}(\mathbf{U}_0)$ for $\eta\in\{\eta_0,\eta_{2m}\}$. \label{prop:boundgradient2tau}\end{proposition} \begin{proof} We introduce the probability measure $$ dm_\lambda(z)=c_\lambda (1-z^2)^\lambda_+ dz $$ and the $2\times 2$ matrix $$ S=\begin{pmatrix} 1 & 0 \\ u &1 \end{pmatrix}, $$ which satisfies \begin{equation}\label{introW} \partial_x \mathbf{U}=S \mathbf{W},\quad \mathbf{W}:= \begin{pmatrix} \partial_x\rho \\ \rho\partial_x u \end{pmatrix}. 
\end{equation} By \eqref{estimmomentepstau}, we then have \begin{equation}\label{Sconvex} \varepsilon\int_0^T \mathbb{E}\int_{\mathbb{T}} \mathbf{1}_{\mathrm{det}}(t)\,\<S^* \eta''(\mathbf{U}^\tau)S\mathbf{W},\mathbf{W}\> dx dt=\mathcal{O}(1), \end{equation} where $\<\cdot,\cdot\>$ is the canonical scalar product on $\mathbb{R}^2$ and $S^*$ is the adjoint of $S$ for this scalar product. We compute $$ \eta''(\mathbf{U})=\frac1\rho\int_\mathbb{R} \left[A(z)g'\left(u+z\rho^{\theta}\right)+B(z)g''\left(u+z\rho^{\theta}\right)\right]dm_\lambda(z), $$ where $$ A(z)=\begin{pmatrix} \frac{\gamma^2-1}{4}z\rho^\theta & 0 \\ 0 &0 \end{pmatrix},\quad B(z)=\begin{pmatrix} \left[-u+\theta z\rho^\theta\right]^2 & -u+\theta z\rho^\theta \\ -u+\theta z\rho^\theta &1 \end{pmatrix}. $$ In particular $$ S^*AS(z)=\begin{pmatrix} \frac{\gamma^2-1}{4} z\rho^\theta & 0 \\ 0 &0 \end{pmatrix},\quad S^*BS(z)=\begin{pmatrix} \theta^2 z^2\rho^{2\theta} & \theta z\rho^\theta \\ \theta z\rho^\theta &1 \end{pmatrix}, $$ and \eqref{introW}-\eqref{Sconvex} give \begin{equation} \varepsilon\mathbb{E}\iint_{Q_T}\mathbf{1}_{\mathrm{det}}(t)\,\left(\mathbf{I}|\partial_x \rho^\tau|^2+\mathbf{J}\partial_x\rho^\tau\cdot|\rho^\tau|^{1/2}\partial_x u^\tau+\mathbf{K}\rho^\tau|\partial_x u^\tau|^2\right) dx dt =\mathcal{O}(1), \label{IJKeps}\end{equation} where \begin{equation*} \mathbf{I}=|\rho^\tau|^{2\theta-1}\int_\mathbb{R} \theta^2 z^2 g''\left(u^\tau+z|\rho^\tau|^\theta\right) dm_\lambda(z) +|\rho^\tau|^{\theta-1}\int_\mathbb{R} \frac{\gamma^2-1}{4} z g'\left(u^\tau+z|\rho^\tau|^{\theta}\right) dm_\lambda(z), \end{equation*} and \begin{align*} \mathbf{J}=2|\rho^\tau|^{\theta-\frac12}\int_\mathbb{R} \theta z g''\left(u^\tau+z|\rho^\tau|^{\theta}\right) dm_\lambda(z),\quad \mathbf{K}=\int_\mathbb{R} g''\left(u^\tau+z|\rho^\tau|^{\theta}\right) dm_\lambda(z). \end{align*} We observe that $2z dm_\lambda(z)=-\frac{c_\lambda}{\lambda+1}d(1-z^2)_+^{\lambda+1}$. 
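This identity is a direct computation: on $(-1,1)$,
$$
d(1-z^2)_+^{\lambda+1}=-(\lambda+1)\,2z\,(1-z^2)_+^{\lambda}\,dz,
$$
so that $2z\,dm_\lambda(z)=2z\,c_\lambda(1-z^2)_+^{\lambda}\,dz=-\frac{c_\lambda}{\lambda+1}\,d(1-z^2)_+^{\lambda+1}$.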
By integration by parts, the second term in $\mathbf{I}$ can therefore be written $$ \frac{1}{\lambda+1}|\rho^\tau|^{2\theta-1}\int_\mathbb{R} \frac{\gamma^2-1}{8}(1-z^2) g''\left(u^\tau+z|\rho^\tau|^{\frac{\gamma-1}{2}}\right) dm_\lambda(z). $$ Since $\frac{\gamma^2-1}{8}\frac{1}{\lambda+1}=\theta^2$, we have $$ \mathbf{I}=|\rho^\tau|^{2\theta-1}\int_\mathbb{R} \theta^2 g''\left(u^\tau+z|\rho^\tau|^{\frac{\gamma-1}{2}}\right) dm_\lambda(z). $$ This gives \eqref{estimgradientepstau}. \end{proof} \bigskip We apply \eqref{estimgradientepstau} with $g(\xi):=|\xi|^{2m+2}$ and $\eta=\eta_{m+1}$ given by \eqref{entropychi}. Then we compute explicitly $$ G^{[2]}(\rho,u)-G^{[1]}(\rho,u)=(2m+2)(2m+1)\sum_{k=0}^m\binom{2m}{2k}a_k \rho^{2\theta k}u^{2(m-k)}, $$ where the coefficients $$ a_k=c_\lambda\int_{-1}^1 |z|^{2k}(1-|z|)(1-z^2)_+^\lambda dz $$ are positive. By letting $m$ vary, we obtain the following result. \begin{corollary} Let $\mathbf{U}_0\in W^{2,2}(\mathbb{T})$ be such that $\rho_0\geq c_0$ a.e. in $\mathbb{T}$ for a given constant $c_0>0$. Let $\eta_m$ be the entropy given by \eqref{entropychi} with $g(\xi)=\xi^{2m}$. Assume that the growth condition \eqref{A0eps} is satisfied and that $$ \mathbb{E}\int_\mathbb{T} \left(\eta_0(\mathbf{U}_{\varepsilon 0})+\eta_{2m}(\mathbf{U}_{\varepsilon 0}) \right)dx <+\infty. 
$$ Then, the solution $\mathbf{U}^\tau$ to \eqref{splitEulerDet}-\eqref{splitEulerSto} satisfies the estimate \begin{equation}\label{corestimgradientepsrhotau} \varepsilon\mathbb{E}\iint_{Q_T} \mathbf{1}_{\mathrm{det}}(t)\,\left(|u^\tau|^{2m}+|\rho^\tau|^{2m\theta}\right)|\rho^\tau|^{\gamma-2}|\partial_x \rho^\tau|^2 dx dt=\mathcal{O}(1), \end{equation} and \begin{equation}\label{corestimgradientepsutau} \varepsilon\mathbb{E}\iint_{Q_T} \mathbf{1}_{\mathrm{det}}(t)\,\left(|u^\tau|^{2m}+|\rho^\tau|^{2m\theta}\right)\rho^\tau|\partial_x u^\tau|^2 dx dt=\mathcal{O}(1), \end{equation} for all $m\in\mathbb{N}$, where $\mathcal{O}(1)$ depends on $T$, $\gamma$, on the constant ${A_0}$ in \eqref{A0} and on the initial quantities $\mathbb{E}\Gamma_{\eta}(\mathbf{U}_0)$ for $\eta\in\{\eta_0,\eta_{2m+2}\}$. \label{cor:boundgradient2tau}\end{corollary} \subsubsection{Positivity of the density}\label{sec:PositiveDensity} \begin{proposition}[Positivity] Let $\mathbf{U}^\tau$ be the solution to \eqref{splitEulerDet}-\eqref{splitEulerSto} with initial datum $\mathbf{U}_0=(\rho_0,q_0)$ and assume that $\rho_0$ is uniformly positive: there exists $c_0>0$ such that $\rho_0\geq c_0$ a.e. on $\mathbb{T}$. Let $m>6$. Then there is a random variable $c_\mathrm{min}$ with values in $(0,+\infty)$ depending on $c_0$, $T$, \begin{equation}\label{normforpos} \iint_{Q_T}\mathbf{1}_\mathrm{det}(t)\rho^\tau|\partial_x u^\tau|^2 dx dt\mbox{ and }\iint_{Q_T}|u^\tau|^m dx dt \end{equation} only (in the sense that $c_\mathrm{min}^{-1}$ is a continuous non-decreasing function of these quantities), such that, almost surely, \begin{equation}\label{rhotaupos} \rho^\tau\geq c_\mathrm{min} \end{equation} a.e. in $\mathbb{T}\times[0,T]$. \label{prop:uniformpositivepositive}\end{proposition} \begin{proof} We apply Theorem~\ref{th:uniformpositive} proved in Appendix~\ref{app:boundfrombelow}.
\end{proof} \subsubsection{Regularity of $U^\tau$}\label{sec:HDUtau} Proposition~\ref{prop:meanLinfty} and Corollary~\ref{cor:boundgradient2tau} give a control (in expectation) on the two quantities in \eqref{normforpos} in Proposition~\ref{prop:uniformpositivepositive}. By the Markov inequality, we have therefore $$ \mathbb{P}\left(\iint_{Q_T}\mathbf{1}_\mathrm{det}(t)\rho^\tau|\partial_x u^\tau|^2 dx dt\geq R\;\mbox{ or }\; \|u^\tau\|_{L^m(Q_T)}\geq R\right)\leq\frac{C(\varepsilon)}{R} $$ where the constant $C(\varepsilon)$ depends on $\varepsilon$ and also on $T$, $\gamma$, on the constant ${A_0}$ in \eqref{A0}, and on $\|\mathbf{U}_0\|_{L^\infty(\mathbb{T})}$. This shows that \eqref{rhotaupos} is satisfied with a random variable $c_\mathrm{min}$ independent of $\tau$. Combining this bound from below with the bounds from above obtained in Proposition~\ref{prop:meanLinfty}, we deduce the following result. \begin{proposition}[$\mathbf{U}^\tau$ is a bounded solution] Let $\mathbf{U}_0\in W^{2,2}(\mathbb{T})$ be such that $\rho_0\geq c_0$ a.e. in $\mathbb{T}$ for a given constant $c_0>0$. Assume that the growth condition \eqref{A0eps} and the localization condition \eqref{Trunceps} are satisfied and that $\mathbf{U}_0\in\Lambda_{\varkappa_\varepsilon}$. Then there exist random variables $c_\mathrm{min}^\varepsilon$, $C_\mathrm{max}^\varepsilon$ with values in $(0,+\infty)$, independent of $\tau$, such that the solution $\mathbf{U}^\tau$ to \eqref{splitEulerDet}-\eqref{splitEulerSto} is bounded as follows: almost surely, \begin{equation}\label{asregsolepsOK} c_\mathrm{min}^\varepsilon\leq\rho^\tau\leq C_\mathrm{max}^\varepsilon,\quad |q^\tau|\leq C_\mathrm{max}^\varepsilon,\;\mbox{ a.e. in }Q_T. \end{equation} \label{prop:Utaubounded}\end{proposition} We use Proposition~\ref{prop:Utaubounded} in particular to obtain some estimates on H\"older or Sobolev norms of $\mathbf{U}^\tau$ independent of $\tau$.
We let $T_R$ denote the exit time \begin{equation}\label{defTRtau} T_R=\inf\left\{t\in[0,T];\mathbf{U}^\tau(t)\notin D_R\right\}, \end{equation} where $D_R$ is defined in \eqref{defDR}. By \eqref{asregsolepsOK}, the probability \begin{equation}\label{minPTRtauT} \mathbb{P}(T_R=T)\geq \mathbb{P}\left(c_\mathrm{min}^\varepsilon>R^{-1}\; \&\; R>C_\mathrm{max}^\varepsilon\right) \end{equation} is bounded from below independently of $\tau$. \begin{proposition}[Regularity of $\mathbf{U}^\tau$] Let $\mathbf{U}_0\in (W^{2,2}(\mathbb{T}))^2$ be such that $\rho_0\geq c_0$ a.e. in $\mathbb{T}$ for a given constant $c_0>0$. Assume that the growth condition \eqref{A0eps} and the localization condition \eqref{Trunceps} are satisfied and that $\mathbf{U}_0\in\Lambda_{\varkappa_\varepsilon}$. Let $\mathbf{U}^\tau$ be the solution to \eqref{splitEulerDet}-\eqref{splitEulerSto}. Then, for all $\alpha\in(0,1/4)$, $\mathbf{U}^\tau(\cdot\wedge T_R)$ has a mo\-di\-fi\-ca\-tion whose trajectories are almost surely in $C^{\alpha}([0,T];L^2(\mathbb{T}))$ and such that \begin{equation}\label{HolderUtauRalpha} \mathbb{E}\|\mathbf{U}^\tau(\cdot\wedge T_R)\|_{C^{\alpha}([0,T];L^2(\mathbb{T}))}^2\leq C(R,\varepsilon,T,\alpha,\mathbf{U}_0), \end{equation} where $C(R,\varepsilon,T,\alpha,\mathbf{U}_0)$ is a constant depending on $R$, $T$, $\varepsilon$, $\alpha$, $\|\mathbf{U}_0\|_{W^{1,2}(\mathbb{T})}$ but not on $\tau$. Furthermore, for every $s\in[0,1)$, $\mathbf{U}^\tau$ satisfies the estimate \begin{equation}\label{H1UepstauR} \sup_{t\in[0,T]}\mathbb{E}\|\mathbf{U}^\tau(t\wedge T_R)\|_{W^{s,2}(\mathbb{T})}^2\leq C(R,\varepsilon,T,s,\mathbf{U}_0), \end{equation} where $C(R,\varepsilon,T,s,\mathbf{U}_0)$ is a constant depending on $R$, $T$, $\varepsilon$, $s$ and $\|\mathbf{U}_0\|_{W^{1,2}(\mathbb{T})}$ but not on $\tau$.
If additionally ${\mathbf{U}_\varepsilon}_0\in W^{2,2}(\mathbb{T})$ and the Lipschitz condition \eqref{Lipsigmaeps} is satisfied, then \begin{equation}\label{HoldertH1xBoundedtau} \mathbb{E}\|\mathbf{U}^\tau(t\wedge T_R)\|_{C^\alpha([0,T];W^{1,2}(\mathbb{T}))}^2\leq C(R,\varepsilon,T,\alpha,{\mathbf{U}_\varepsilon}_0), \end{equation} for all $\alpha\in [0,1/4)$, and \begin{equation}\label{LinftytH2xBoundedtau} \sup_{t\in[0,T]}\mathbb{E}\|\mathbf{U}^\tau(t\wedge T_R)\|_{W^{2,2}(\mathbb{T})}^2\leq C(R,\varepsilon,T,{\mathbf{U}_\varepsilon}_0), \end{equation} for some constants $C(R,\varepsilon,T,\alpha,{\mathbf{U}_\varepsilon}_0)$ and $C(R,\varepsilon,T,{\mathbf{U}_\varepsilon}_0)$ depending on $\alpha$, $R$, $T$, $\varepsilon$, on the constant $C(\varepsilon,R)$ in \eqref{Lipsigmaeps}, on $\|{\mathbf{U}_\varepsilon}_0\|_{W^{2,2}(\mathbb{T})}$, but not on $\tau$. \label{prop:regUtau}\end{proposition} \begin{proof} We only give the sketch of the proof since this is very similar to the proof of Proposition~\ref{prop:regboundedeps}. First, we establish, for $\mathbf{U}^\tau$, an identity analogous to \eqref{MildBoundedSolution}. For Problem~\eqref{splitEulerDet} we have the mild formulation \begin{equation}\label{milddet} \mathbf{U}^\tau(t)=S_{2\varepsilon}(t-t_{2n})\mathbf{U}^\tau(t_{2n})-2\int_{t_{2n}}^{t} \partial_x S_{2\varepsilon}(t-s)\mathbf{F}(\mathbf{U}^\tau(s))ds \end{equation} for $t_{2n}\leq t\leq t_{2n+1}$, and, for Problem~\eqref{splitEulerSto}, we have the integral formulation \begin{equation}\label{mildsto} \mathbf{U}^\tau(t)=\mathbf{U}^\tau(t_{2n+1})+\sqrt{2}\int_{t_{2n+1}}^{t} \mathbf{\Psi}^{\varepsilon,\tau}(\mathbf{U}^\tau(s))\,d W(s), \end{equation} for $t_{2n+1}\leq t\leq t_{2n+2}$. 
By combining \eqref{milddet} and \eqref{mildsto}, we obtain the identity \begin{multline}\label{MildSolutiontau} \mathbf{U}^\tau(t)=S_\varepsilon(t_\sharp)\mathbf{U}_0-\int_0^{t_\sharp} \partial_x S_\varepsilon(t_\sharp-s)\mathbf{F}(\mathbf{U}^\tau(s_\flat))ds\\ +\sqrt{2}\int_0^{t} \mathbf{1}_\mathrm{sto}(s)S_\varepsilon(t_\sharp-s_\sharp)\mathbf{\Psi}^{\varepsilon,\tau}(\mathbf{U}^\tau(s))\,d W(s), \end{multline} where we have set $$ t_\sharp=\min(2t-t_{2n},t_{2n+2}),\quad t_\flat=\frac{t+t_{2n}}{2},\quad t_{2n}\leq t< t_{2n+2}. $$ Then we proceed as in the proof of Proposition~\ref{prop:regboundedeps}. Note that $t\mapsto t_\sharp$ is $2$-Lipschitz continuous and that we have the control~\eqref{BoundTruncepstau}, therefore (compare with \eqref{HolderURk}), $\mathbf{U}^\tau$ satisfies \begin{equation}\label{HolderUtauRk} \mathbb{E}\|\mathbf{U}^\tau(t\wedge T_R)-\mathbf{U}^\tau(s\wedge T_R)\|_{L^2(\mathbb{T})}^{2k}\leq C(R,\varepsilon,T,k)\max\left((t-s)^{k/2},(t-s)^k\right), \end{equation} for all $0\leq s\leq t\leq T$, where $C(R,\varepsilon,T,k)$ is a constant depending on $R$, $T$, $\varepsilon$, $k$, $\|\mathbf{U}_0\|_{W^{1,2}(\mathbb{T})}$ but not on $\tau$. This gives \eqref{HolderUtauRalpha} by Kolmogorov's criterion.\medskip To obtain the regularity in $x$ \eqref{H1UepstauR}, we also proceed as in the proof of Proposition~\ref{prop:regboundedeps}. Let $s \in [0,1)$. The estimates \eqref{xTdet1}-\eqref{xTsto1} hold true here; however, since the dependence on time is slightly different between \eqref{MildBoundedSolution} and \eqref{MildSolutiontau}, the bounds \eqref{toxTdet1} and \eqref{toxTsto1} have to be replaced by \begin{equation}\label{toxTdet1tau} \int_0^{t_\sharp}(t_\sharp-r)^{-\frac{1+s}{2}}dr\ C(R,\varepsilon), \end{equation} and \begin{equation}\label{toxTsto1tau} \int_0^t\mathbf{1}_\mathrm{sto}(r)(t_\sharp-r_\sharp)^{-s}dr\ C(R,\varepsilon), \end{equation} respectively.
In \eqref{toxTdet1tau}, we have $$ \int_0^{t_\sharp}(t_\sharp-r)^{-\frac{1+s}{2}}dr\leq \frac{2}{1-s}T^{\frac{1-s}{2}}, $$ while, for $t_{2n}\leq t\leq t_{2n+2}$ (and thus $2n\tau\leq T$), the integral term in \eqref{toxTsto1tau} is \begin{align*} \int_0^t\mathbf{1}_\mathrm{sto}(r)(t_\sharp-r_\sharp)^{-s}dr&=\sum_{k=1}^n \tau (t_{2k})^{-s}\leq C(s) T^{1-s}, \end{align*} where $C(s)$ depends on $s$ only. The proof of \eqref{HoldertH1xBoundedtau}-\eqref{LinftytH2xBoundedtau} is similar to the proof of the estimates \eqref{HoldertH1xBounded}-\eqref{LinftytH2xBounded} for the solution to \eqref{stoEulereps}, \textit{cf.} the proof of Proposition~\ref{prop:regboundedeps}. \end{proof} Our aim is now to pass to the limit $[\tau\to0]$ in the entropy balance equation \eqref{Itoentropytau}. Two questions remain to be solved in order to do this: \begin{enumerate} \item How to get compactness with respect to $\omega$? \item How to pass to the limit in the stochastic integral in \eqref{Itoentropytau}? \end{enumerate} These are indeed the two remaining questions since Proposition~\ref{prop:regUtau} gives compactness of the sequence $(\mathbf{U}^\tau)$ in $(t,x)$. The answer to 1. is that compactness of the laws of the processes is the appropriate notion here (actually there is not even a topology on $\Omega$: in that respect, ``compactness with respect to $\omega$'' does not make sense). To solve question 2., we use a martingale characterization of the stochastic integral, see Section~\ref{subsec:identif}. \subsubsection{Compactness argument}\label{subsec:compact} We introduce the independent processes $X^\tau_1,X^\tau_2,\ldots$ defined by $$ X^\tau_k(t)=\sqrt{2}\int_0^t \mathbf{1}_{\mathrm{sto}}(s)d\beta_k(s) $$ and set \begin{equation}\label{defWtau} W^\tau(t)=\sum_{k\geq 1} X^\tau_k(t) e_k. \end{equation} The random variable $X^\tau_k(t)$ is Gaussian, with mean $0$ and variance given by $$ \sigma_\tau^2(t)=\begin{cases} t_{2n}, & t\in[t_{2n},t_{2n+1}],\\ t_{2n}+2(t-t_{2n+1}), & t\in[t_{2n+1},t_{2n+2}].\end{cases}
$$ Let $0\leq s_1\leq\ldots\leq s_m\leq T$ be $m$ given points in $[0,T]$. We have $|\sigma_\tau^2(t)-t|\leq\tau$ for all $t\in[0,T]$, therefore the finite dimensional distribution of $(X^\tau_1(s_i))_{1\leq i\leq m}$ converges to the distribution of $(\beta_1(s_i))_{1\leq i\leq m}$ when $\tau\to0$. Besides, $(X^\tau_1)$ is tight in $C([0,T])$ since $\mathbb{E} \|X^\tau_1\|_{C^\alpha([0,T])}$ is bounded uniformly with respect to $\tau$ for any $\alpha<1/2$. By \cite[Theorem 8.1]{BillingsleyBook}, $(X_1^\tau)$ converges in law to $\beta_1$ on $C([0,T])$. We have the same result $X^\tau_k\to\beta_k$ in law for each $k\geq 2$, since, for fixed $\tau$, the processes $X^\tau_k$, $k\geq 1$, all have the same distribution.\medskip Let $\mathfrak{U}_0$ be defined by \eqref{defUUU0} and let \begin{equation}\label{WpathSpace} \mathcal{X}_W=C\big([0,T];\mathfrak{U}_0\big) \end{equation} denote the path space of $W^\tau$. Since the embedding $\mathfrak{U}\hookrightarrow\mathfrak{U}_0$ is Hilbert-Schmidt, the $\mathcal{X}_W$-valued process $W^\tau$ converges in law to $W$ when $\tau\to 0$ (again, we can use \cite[Theorem 8.1]{BillingsleyBook}). \medskip Define the path space $\mathcal{X}=\mathcal{X}_\mathbf{U}\times\mathcal{X}_W$, where $$ \mathcal{X}_\mathbf{U}=C\big([0,T];L^2(\mathbb{T})\big). $$ Let us denote by $\mu^\tau_\mathbf{U}$ the law of $\mathbf{U}^\tau$ on $\mathcal{X}_\mathbf{U}$. The joint law of $\mathbf{U}^\tau$ and $W^\tau$ on $\mathcal{X}$ is denoted by $\mu^\tau$. \begin{proposition}[Tightness of $(\mu^\tau)$] Let $\mathbf{U}_0\in W^{2,2}(\mathbb{T})$ be such that $\rho_0\geq c_0$ a.e. in $\mathbb{T}$ for a given constant $c_0>0$. Assume that the growth condition \eqref{A0eps} and the localization condition \eqref{Trunceps} are satisfied and that $\mathbf{U}_0\in\Lambda_{\varkappa_\varepsilon}$. Let $\mathbf{U}^\tau$ be the solution to \eqref{splitEulerDet}-\eqref{splitEulerSto}. Then the set $\{\mu^\tau;\,\tau\in(0,1)\}$ is tight and therefore relatively weakly compact in the set of probability measures on $\mathcal{X}$.
\label{prop:tight}\end{proposition} \begin{proof} First, we prove tightness of $\{\mu_\mathbf{U}^\tau;\,\tau\in(0,1)\}$ in $\mathcal{X}_\mathbf{U}$. Let $\alpha\in(0,1/4)$ and $s\in (0,1)$. Then $$ K_{M}:=\big\{\mathbf{U}\in\mathcal{X}_\mathbf{U} ;\|\mathbf{U}\|_{C^\alpha([0,T];L^2(\mathbb{T}))}+\|\mathbf{U}\|_{L^2([0,T];W^{s,2}(\mathbb{T}))}\leq M\big\} $$ is compact in $\mathcal{X}_\mathbf{U}$ \cite{Simon87}. Recall that the stopping time $T_R$ is defined by \eqref{defTRtau}. Note also that a consequence of the $L^\infty_t$-estimate \eqref{H1UepstauR} is the $L^2_t$-estimate \begin{equation*} \mathbb{E}\int_0^T \|\mathbf{U}^\tau(t\wedge T_R)\|_{W^{s,2}(\mathbb{T})}^2 dt\leq C(R,\varepsilon,T,s,\mathbf{U}_0), \end{equation*} which gives \begin{equation}\label{H1UepstauRL2} \mathbb{E}\|\mathbf{U}^\tau(\cdot\wedge T_R)\|_{L^2(0,T;W^{s,2}(\mathbb{T}))}^2\leq C(R,\varepsilon,T,s,\mathbf{U}_0), \end{equation} by the Fubini Theorem. By \eqref{HolderUtauRalpha}, \eqref{H1UepstauRL2}, \eqref{minPTRtauT} and the Markov inequality, we obtain the estimate \begin{align*} \mathbb{P}(\mathbf{U}^\tau\notin K_M)&\leq \mathbb{P}(T_R<T)+\mathbb{P}(\mathbf{U}^\tau\notin K_M\;\&\; T_R=T)\\ &\leq \mathbb{P}\left(c_\mathrm{min}^\varepsilon<R^{-1}\right)+\mathbb{P}\left(C_\mathrm{max}^\varepsilon>R\right)+\frac{C(R,\varepsilon,T,\alpha,s,\mathbf{U}_0)}{M^2}. \end{align*} Therefore, given $\eta>0$ there exist $R,M>0$ such that $$ \mu_\mathbf{U}^\tau(K_{M})\geq 1-\eta, $$ \textit{i.e.} $(\mu_\mathbf{U}^\tau)$ is tight. We have proved above that the family of the laws of $(W^\tau)$ is tight. The set of the joint laws $\{\mu^\tau;\,\tau\in(0,1)\}$ is therefore tight. By Prohorov's theorem, it is relatively weakly compact. \end{proof} \bigskip Let now $(\tau_n)$ be a sequence decreasing to $0$. Up to a subsequence, there is a probability measure $\mu_\varepsilon$ on $\mathcal{X}$ such that $(\mu^{\tau_n})$ converges weakly to $\mu_\varepsilon$.
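\begin{remark} For completeness, let us record the elementary computation behind the bound $|\sigma_\tau^2(t)-t|\leq\tau$ used above; it relies only on the fact that the stochastic steps of the scheme occupy the intervals $[t_{2k+1},t_{2k+2}]$. For $t\in[t_{2n+1},t_{2n+2}]$, \begin{equation*} \sigma_\tau^2(t)=2\int_0^t\mathbf{1}_\mathrm{sto}(s)ds=2n\tau+2(t-t_{2n+1})=t_{2n}+2(t-t_{2n+1}), \end{equation*} so that $\sigma_\tau^2(t)-t=(t-t_{2n+1})-\tau\in[-\tau,0]$, while, for $t\in[t_{2n},t_{2n+1}]$, $\sigma_\tau^2(t)=t_{2n}$ and $\sigma_\tau^2(t)-t=t_{2n}-t\in[-\tau,0]$. In both cases, $|\sigma_\tau^2(t)-t|\leq\tau$. \end{remark}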
By the Skorohod Theorem \cite[p.~70]{BillingsleyBook}, we can assume almost sure convergence of the random variables by changing the probability space. \begin{proposition}\label{prop:Skorohod} There exists a probability space $(\tilde{\Omega}^\varepsilon,\tilde{\mathcal{F}}^\varepsilon,\tilde{\mathbb{P}}^\varepsilon)$, a sequence of $\mathcal{X}$-valued random variables $(\tilde{\mathbf{U}}^{\tau_n},\tilde{W}^{\tau_n})_{n\in\mathbb{N}}$ and an $\mathcal{X}$-valued random variable $(\tilde{\mathbf{U}}_\varepsilon,\tilde{W}_\varepsilon)$ such that \begin{enumerate} \item the laws of $(\tilde{\mathbf{U}}^{\tau_n},\tilde{W}^{\tau_n})$ and $(\tilde{\mathbf{U}}_\varepsilon,\tilde{W}_\varepsilon)$ under $\,\tilde{\mathbb{P}}^\varepsilon$ coincide with $\mu^{\tau_n}$ and $\mu_\varepsilon$, respectively, \item $(\tilde{\mathbf{U}}^{\tau_n},\tilde{W}^{\tau_n})$ converges $\,\tilde{\mathbb{P}}^\varepsilon$-almost surely to $(\tilde{\mathbf{U}}_\varepsilon,\tilde{W}_\varepsilon)$ in the topology of $\mathcal{X}$. \end{enumerate} \end{proposition} We have dropped the parameter $\varepsilon$ from most of the quantities defined by the splitting scheme, in particular from $\mathbf{U}^\tau$, $W^\tau$, etc. We reintroduce the dependence on $\varepsilon$ for the limits $\mathbf{U}_\varepsilon$, $W_\varepsilon$, etc., to indicate the relation to Problem~\eqref{stoEulereps}. \subsubsection{Identification of the limit}\label{subsec:identif} Our aim in this section is to identify the limit $(\tilde{\mathbf{U}}_\varepsilon,\tilde{W}_\varepsilon)$ given by Proposition~\ref{prop:Skorohod}.
\medskip Let $(\tilde{\mathcal{F}}^\varepsilon_t)$ be the $\tilde{\mathbb{P}}^\varepsilon$-augmented canonical filtration of the process $(\tilde{\mathbf{U}}_\varepsilon,\tilde{W}_\varepsilon)$, \textit{i.e.} $$ \tilde{\mathcal{F}}^\varepsilon_t=\sigma\big(\sigma\big(\varrho_t\tilde{\mathbf{U}}_\varepsilon,\varrho_t\tilde{W}_\varepsilon\big)\cup\big\{N\in\tilde{\mathcal{F}}^\varepsilon;\;\tilde{\mathbb{P}}^\varepsilon(N)=0\big\}\big),\quad t\in[0,T], $$ where $\varrho_t$ is the operator of restriction to the interval $[0,t]$, defined as follows: if $E$ is a Banach space and $t\in[0,T]$, then \begin{equation}\label{restr} \begin{split} \varrho_t:C([0,T];E)&\longrightarrow C([0,t];E)\\ k&\longmapsto k|_{[0,t]}. \end{split} \end{equation} \begin{proposition}[Martingale solution to \eqref{stoEulereps}]\label{prop:martsoleps} The sextuplet $$\big(\tilde{\Omega}^\varepsilon,\tilde{\mathcal{F}}^\varepsilon,(\tilde{\mathcal{F}}^\varepsilon_t),\tilde{\mathbb{P}}^\varepsilon,\tilde{W}_\varepsilon,\tilde{\mathbf{U}}_\varepsilon\big)$$ is a martingale bounded solution to \eqref{stoEulereps}. \end{proposition} By a martingale bounded solution, we mean the following: $$ \big(\tilde{\Omega}^\varepsilon,\tilde{\mathcal{F}}^\varepsilon,(\tilde{\mathcal{F}}^\varepsilon_t),\tilde{\mathbb{P}}^\varepsilon,\tilde{W}_\varepsilon\big) $$ is a stochastic basis and $\tilde{\mathbf{U}}_\varepsilon$ is a bounded solution, in the sense of Definition~\ref{def:pathsoleps}, to \eqref{stoEulereps} after the substitution $$ \big(\Omega,\mathcal{F},(\mathcal{F}_t),\mathbb{P},W,\mathbf{U}_\varepsilon\big)\leftarrow\big(\tilde{\Omega}^\varepsilon,\tilde{\mathcal{F}}^\varepsilon,(\tilde{\mathcal{F}}^\varepsilon_t),\tilde{\mathbb{P}}^\varepsilon,\tilde{W}_\varepsilon,\tilde{\mathbf{U}}_\varepsilon\big).
$$ This substitution leaves invariant the law of the resulting process $(\mathbf{U}_\varepsilon(t))$.\medskip The proof of Proposition~\ref{prop:martsoleps} uses a method of construction of martingale solutions to SPDEs that avoids in part the use of a martingale representation theorem. This technique has been developed in Ondrej\'at \cite{Ondrejat10}, Brze\'zniak, Ondrej\'at \cite{BrzezniakOndrejat11} and used in particular in Hofmanov\'a, Seidler \cite{HofmanovaSeidler12} and in \cite{Hofmanova13b,DebusscheHofmanovaVovelle15}. \medskip Recall that $\mathbf{F}$, the flux function in Equation~\eqref{stoEuler}, is defined by \eqref{defUUU}. Let us define, for all $t\in[0,T]$ and every test function $\varphi=(\varphi_1,\varphi_2)\in C^\infty(\mathbb{T};\mathbb{R}^2)$, \begin{equation*} \begin{split} M^\tau(t)&=\big\langle \mathbf{U}^\tau(t),\varphi\big\rangle-\big\langle \mathbf{U}_{\varepsilon 0},\varphi\big\rangle-2\int_0^t \mathbf{1}_\mathrm{det}(s)\big\langle\mathbf{F}(\mathbf{U}^\tau),\partial_x \varphi\big\rangle+\varepsilon\big\langle\mathbf{U}^\tau,\partial^2_{xx} \varphi\big\rangle\,d s,\\ \tilde{M}^\tau(t)&=\big\langle \tilde{\mathbf{U}}^\tau(t),\varphi\big\rangle-\big\langle \tilde{\mathbf{U}}_{\varepsilon 0},\varphi\big\rangle-2\int_0^t \mathbf{1}_\mathrm{det}(s)\big\langle\mathbf{F}(\tilde{\mathbf{U}}^\tau),\partial_x \varphi\big\rangle+\varepsilon\big\langle\tilde{\mathbf{U}}^\tau,\partial^2_{xx} \varphi\big\rangle\,d s,\\ \tilde{M}_\varepsilon(t)&=\big\langle \tilde{\mathbf{U}}_\varepsilon(t),\varphi\big\rangle-\big\langle \tilde{\mathbf{U}}_{\varepsilon 0},\varphi\big\rangle-\int_0^t\big\langle\mathbf{F}(\tilde{\mathbf{U}}_\varepsilon),\partial_x \varphi\big\rangle+\varepsilon\big\langle\tilde{\mathbf{U}}_\varepsilon,\partial^2_{xx} \varphi\big\rangle\,d s. \end{split} \end{equation*} The proof of Proposition~\ref{prop:martsoleps} will be a consequence of the following two lemmas.
\begin{lemma}\label{lem:tildeW} The process $\tilde{W}_\varepsilon$ has a modification which is a $(\tilde{\mathcal{F}}^\varepsilon_t)$-adapted $\mathfrak{U}_0$-cylindrical Wiener process, and there exists a collection of mutually independent real-valued $(\tilde{\mathcal{F}}^\varepsilon_t)$-Wiener processes $\{\tilde{\beta}_k^\varepsilon\}_{k\geq1}$ such that \begin{equation}\label{Wienertildeeps} \tilde{W}_\varepsilon=\sum_{k\geq1}\tilde{\beta}_k^\varepsilon e_k \end{equation} in $C([0,T];\mathfrak{U}_0)$. \end{lemma} \begin{proof} It is clear that $\tilde{W}_\varepsilon$ is a $\mathfrak{U}_0$-cylindrical Wiener process (this notion is stable under convergence in law; actually it can be characterized uniquely in terms of the law of $\tilde{W}_\varepsilon$ if we drop the usual hypothesis of a.s. continuity of the trajectories; the latter can be recovered, after a possible modification of the process, by Kolmogorov's Theorem). Also $\tilde{W}_\varepsilon$ is $(\tilde{\mathcal{F}}^\varepsilon_t)$-adapted by definition of the filtration $(\tilde{\mathcal{F}}^\varepsilon_t)$. By \cite[Proposition~4.1]{DaPratoZabczyk92}, we obtain the decomposition~\eqref{Wienertildeeps}. \end{proof} \begin{lemma}\label{lem:tildeM} The processes $\tilde{M}_\varepsilon$, \begin{equation*} \tilde{M}^2_\varepsilon-\sum_{k\geq1}\int_0^{\overset{\cdot}{}}\big\langle \sigma_k^{\varepsilon}(\tilde{\mathbf{U}}_\varepsilon),\varphi_2\big\rangle^2\,d r\quad\mbox{and}\quad\tilde{M}_\varepsilon\tilde{\beta}^\varepsilon_k-\int_0^{\overset{\cdot}{}}\big\langle \sigma_k^{\varepsilon}(\tilde{\mathbf{U}}_\varepsilon),\varphi_2\big\rangle\,d r, \end{equation*} are $(\tilde{\mathcal{F}}^\varepsilon_t)$-martingales. \end{lemma} \begin{proof} We fix some times $s,t\in[0,T],\,s\leq t,$ and a continuous function $$ \gamma:C\big([0,s];L^2(\mathbb{T})\big)\times C\big([0,s];\mathfrak{U}_0\big)\longrightarrow [0,1]. $$ Let us set $\tilde{X}^\tau_k=\<\tilde{W}^\tau,e_k\>_{\mathfrak{U}_0}$.
For all $\tau\in(0,1)$, the process $M^\tau$ satisfies the representation $$ M^\tau=\sum_{k\geq1}\int_0^{\overset{\cdot}{}} \big\langle \sigma_k^{\tau}(\mathbf{U}^\tau),\varphi_2\big\rangle\,dX_k^\tau(r), $$ so that it is a square integrable $(\mathcal{F}_t)$-martingale and therefore $$ M^\tau_2:=(M^\tau)^2-\sum_{k\geq1}\int_0^{\overset{\cdot}{}} \big\langle \sigma_k^{\tau}(\mathbf{U}^\tau),\varphi_2\big\rangle^2\,d \<\!\<X^\tau\>\!\>(r), $$ and $$ M^\tau_3:=M^\tau X^\tau_k-\int_0^{\overset{\cdot}{}} \big\langle \sigma_k^{\tau}(\mathbf{U}^\tau),\varphi_2\big\rangle\,d \<\!\<X^\tau\>\!\>(r) $$ are $(\mathcal{F}_t)$-martingales, where we have denoted by $$ \<\!\<X^\tau\>\!\>(t)=2\int_0^t \mathbf{1}_\mathrm{sto}(r)dr $$ the quadratic variation of $X_k^\tau$ (note that $\<\!\<X^\tau\>\!\>(t)\to t$ uniformly on $[0,T]$ when $\tau\to 0$). Besides, it follows from the equality of laws that \begin{equation*} \tilde{\mathbb{E}}^\varepsilon\,\gamma\big(\varrho_s \tilde{\mathbf{U}}^\tau,\varrho_s\tilde{W}^\tau\big)\big[\tilde{M}^\tau(t)-\tilde{M}^\tau(s)\big] =\mathbb{E}\,\gamma\big(\varrho_s \mathbf{U}^\tau,\varrho_s W^\tau\big)\big[M^\tau(t)-M^\tau(s)\big]=0, \end{equation*} the last equality being a consequence of the martingale property of $M^\tau$. Hence $$ \tilde{\mathbb{E}}^\varepsilon\,\gamma\big(\varrho_s \tilde{\mathbf{U}}^{\tau_n},\varrho_s\tilde{W}^{\tau_n}\big)\big[\tilde{M}^{\tau_n}(t)-\tilde{M}^{\tau_n}(s)\big]=0, $$ for all $n$. We can pass to the limit in this equation, due to the moment estimates \eqref{estimmomentepstau2} and the Vitali convergence theorem. We obtain \begin{equation*} \tilde{\mathbb{E}}^\varepsilon\,\gamma\big(\varrho_s\tilde{\mathbf{U}}_\varepsilon,\varrho_s\tilde{W}_\varepsilon\big)\big[\tilde{M}_\varepsilon(t)-\tilde{M}_\varepsilon(s)\big]=0, \end{equation*} \textit{i.e.} $\tilde{M}_\varepsilon$ is a $(\tilde{\mathcal{F}}^\varepsilon_t)$-martingale.
We proceed similarly to show that $$ \tilde{M}_{\varepsilon 2}:=\tilde{M}_\varepsilon^2-\sum_{k\geq1}\int_0^{\overset{\cdot}{}}\big\langle \sigma_k^{\varepsilon}(\tilde{\mathbf{U}}_\varepsilon),\varphi_2\big\rangle^2\,d r $$ is a $(\tilde{\mathcal{F}}^\varepsilon_t)$-martingale, by passing to the limit in the identity $$ \tilde{\mathbb{E}}^\varepsilon\,\gamma\big(\varrho_s \tilde{\mathbf{U}}^\tau,\varrho_s\tilde{W}^\tau\big)\big[\tilde{M}_2^\tau(t)-\tilde{M}_2^\tau(s)\big]=0, $$ and again similarly for $$ \tilde{M}_{\varepsilon 3}:=\tilde{M}_\varepsilon\tilde{\beta}^\varepsilon_k-\int_0^{\overset{\cdot}{}}\big\langle \sigma_k^{\varepsilon}(\tilde{\mathbf{U}}_\varepsilon),\varphi_2\big\rangle\,d r. $$ This concludes the proof of Lemma~\ref{lem:tildeM}. \end{proof} \begin{proof}[Proof of Proposition \ref{prop:martsoleps}] Once the above lemmas are established, we can show that \begin{equation}\label{EqA3Martina} \tilde{\mathbb{E}}^\varepsilon\left[\left(\tilde{M}_\varepsilon(t)-\tilde{M}_\varepsilon(s)\right)\int_s^t\<h\,d\tilde{W}_\varepsilon(\sigma),\varphi_2\> -\sum_{k\geq 1}\int_s^t \<h(\sigma)e_k,\varphi_2\>\< \sigma_k^{\varepsilon}(\tilde{\mathbf{U}}_\varepsilon)(\sigma),\varphi_2\>d\sigma \Big| \tilde{\mathcal{F}}^\varepsilon_s\right]=0, \end{equation} for all $(\tilde{\mathcal{F}}^\varepsilon_t)$-predictable, $L_2(\mathfrak{U},L^2(\mathbb{T}))$-valued processes $h$ satisfying \begin{equation}\label{normHSh} \int_0^T\|h(\sigma)\|^2_{L_2(\mathfrak{U},L^2(\mathbb{T}))} d\sigma<+\infty. \end{equation} Here, if $H$ is a given Hilbert space, $L_2(\mathfrak{U},H)$ is the set of Hilbert-Schmidt operators $\mathfrak{U}\to H$. In particular, in \eqref{normHSh}, we have $$ \|h(\sigma)\|^2_{L_2(\mathfrak{U},L^2(\mathbb{T}))}=\sum_{k\geq 1}\|h(\sigma)e_k\|_{L^2(\mathbb{T})}^2. $$ Equation~\eqref{EqA3Martina} is proved in \cite[Proposition~A.1]{Hofmanova13b}.
Taking $s=0$ and $h= \Phi^\varepsilon(\tilde{\mathbf{U}}_\varepsilon)$ in \eqref{EqA3Martina}, we obtain $$ \tilde{\mathbb{E}}^\varepsilon\left[\tilde{M}_\varepsilon(t)\sum_{k\geq 1}\int_0^t\<\sigma_k^{\varepsilon}(\tilde{\mathbf{U}}_\varepsilon),\varphi_2\>d\tilde{\beta}_k^\varepsilon(s)-\sum_{k\geq 1}\int_0^t \< \sigma_k^{\varepsilon}(\tilde{\mathbf{U}}_\varepsilon),\varphi_2\>^2 ds\right]=0. $$ Expanding the square and using Lemma~\ref{lem:tildeM} together with the It\^o isometry, we deduce that \begin{equation}\label{EqA3Martina2} \tilde{\mathbb{E}}^\varepsilon\left[\tilde{M}_\varepsilon(t)-\sum_{k\geq 1}\int_0^t\big\langle\sigma_k^\varepsilon(\tilde{\mathbf{U}}_\varepsilon)\,d\tilde\beta_k^\varepsilon(s),\varphi_2\big\rangle\right]^2=0. \end{equation} Accordingly, we have \begin{align*} \big\langle \tilde{\mathbf{U}}_\varepsilon(t),\varphi\big\rangle=\big\langle \tilde{\mathbf{U}}_{\varepsilon 0},\varphi\big\rangle+\int_0^t&\big\langle\mathbf{F}(\tilde{\mathbf{U}}_\varepsilon),\partial_x \varphi\big\rangle+\varepsilon\big\langle \tilde{\mathbf{U}}_\varepsilon,\partial^2_{xx}\varphi\big\rangle\,d s\\ &+\sum_{k\geq 1}\int_0^t\big\langle\sigma_k^\varepsilon(\tilde{\mathbf{U}}_\varepsilon)\,d\tilde{\beta}_k^\varepsilon,\varphi_2\big\rangle,\quad t\in[0,T],\;\;\tilde{\mathbb{P}}^\varepsilon\text{-a.s.}, \end{align*} and this gives the weak formulation \eqref{EqBoundedSolution} $\tilde{\mathbb{P}}^\varepsilon$-almost surely. By Proposition~\ref{prop:Utaubounded}, we have \eqref{asregsoleps} $\tilde{\mathbb{P}}^\varepsilon$-almost surely. This concludes the proof of Proposition~\ref{prop:martsoleps}.
\end{proof} \subsubsection{Proof of Theorem~\ref{th:existspatheps}}\label{subsec:prooftheps} We apply the Gy{\"o}ngy-Krylov argument~\cite{GyongyKrylov96}, see also \cite[Section 4.5]{Hofmanova13b}, which shows that the existence of a martingale solution and the uniqueness of pathwise solutions (Theorem~\ref{th:uniqpatheps}) give existence and uniqueness of pathwise solutions and convergence in probability in $\mathcal{X}_\mathbf{U}=C([0,T];L^2(\mathbb{T}))$ of the whole sequence $(\mathbf{U}^{\tau_n})$ to $\mathbf{U}_\varepsilon$. If $\mathbf{U}\mapsto J(\mathbf{U})\in[0,+\infty]$ is a lower semi-continuous functional on the space $\mathcal{X}_\mathbf{U}$, then $\mathbf{U}\mapsto \mathbb{E} J(\mathbf{U})$ is a lower semi-continuous functional on the space $L^1(\Omega;\mathcal{X}_\mathbf{U})$ endowed with the topology of convergence in probability. To prove this fact, note that if $\mathbf{U}^n\to\mathbf{U}$ in probability, then a subsequence realizing $\liminf_{n\to+\infty}\mathbb{E} J(\mathbf{U}^n)$ converges almost surely in $\mathcal{X}_\mathbf{U}$, so that the Fatou Lemma and the lower semi-continuity of $J$ give $$ \mathbb{E} J(\mathbf{U})\leq\liminf_{n\to+\infty}\mathbb{E} J(\mathbf{U}^n). $$ In particular the moment estimate \eqref{estimmomenteps2} follows from the moment estimate \eqref{estimmomentepstau2} for $\mathbf{U}^\tau$ and the gradient estimates \eqref{corestimgradientepsrho} and \eqref{corestimgradientepsu} are deduced from the corresponding estimates \eqref{corestimgradientepsrhotau} and \eqref{corestimgradientepsutau} satisfied by $\mathbf{U}^\tau$. Also we have the regularity \eqref{HoldertH1xBounded}-\eqref{LinftytH2xBounded} as a consequence of \eqref{HoldertH1xBoundedtau}-\eqref{LinftytH2xBoundedtau}. By \eqref{HoldertH1xBoundedtau}-\eqref{LinftytH2xBoundedtau}, we also have, up to a subsequence, convergence in probability of $\mathbf{U}^{\tau_n}$ to $\mathbf{U}_\varepsilon$ in $C([0,T];W^{1,2}(\mathbb{T}))$. This convergence is strong enough to obtain the entropy balance equation \eqref{Itoentropyeps} by taking the limit in Equation~\eqref{Itoentropytau}.
This concludes the proof of Theorem~\ref{th:existspatheps}. \section{Probabilistic Young measures}\label{sec:YoungMeasures} Let $\mathbf{U}_\varepsilon$ be the solution to \eqref{stoEulereps} given in Theorem~\ref{th:existspatheps}. Our aim is to prove the convergence of $(\mathbf{U}_\varepsilon)$. The standard tool for this is the notion of measure-valued solution introduced by Di Perna, \cite{Diperna83a}. In this section we give some details about this notion in our context of random solutions. More precisely, we know that, almost surely, ${\mathbf{U}_\varepsilon}$ defines a Young measure $\nu^\varepsilon$ on $\mathbb{R}_+\times\mathbb{R}$ by the formula \begin{equation} \<\nu^\varepsilon_{x,t},\varphi\>:=\<\delta_{{\mathbf{U}_\varepsilon}(x,t)},\varphi\>=\varphi({\mathbf{U}_\varepsilon}(x,t)),\quad\forall\varphi\in C_b(\mathbb{R}_+\times\mathbb{R}). \label{defnueps}\end{equation} Our aim is to show that $\nu^\varepsilon\rightharpoonup\nu$ (in a sense to be specified), where $\nu$ has some specific properties. To that purpose, we will use the probabilistic compensated compactness method developed in the Appendix of \cite{FengNualart08} and some results on the convergence of probabilistic Young measures that we introduce here. Note that the notion of random Young measure has also been introduced and developed by Brze{\'z}niak and Serrano in \cite{BrzezniakSerrano13}, compare in particular \cite[Lemma 2.18]{BrzezniakSerrano13} and Proposition~\ref{prop:compactYproba} below. \subsection{Young measures embedded in a space of probability measures}\label{sec:YoungMeasuresProbas} Let $(Q,\mathcal{A},\lambda)$ be a finite measure space. Without loss of generality, we will assume $\lambda(Q)=1$.
A Young measure on $Q$ (with state space $E$) is a mea\-su\-ra\-ble map $Q\to\mathcal{P}_1(E)$, where $E$ is a topological space endowed with the $\sigma$-algebra of Borel sets, $\mathcal{P}_1(E)$ is the set of probability measures on $E$, itself endowed with the $\sigma$-algebra of Borel sets corresponding to the topology defined by the weak\footnote{actually, weak convergence {\it of probability measures}, also corresponding to the tight convergence of finite measures} convergence of measures, {\it i.e.} $\mu_n\to\mu$ in $\mathcal{P}_1(E)$ if \begin{equation*} \<\mu_n,\varphi\>\to\<\mu,\varphi\>,\quad\forall\varphi\in C_b(E). \end{equation*} As in \eqref{defnueps}, any measurable map $w\colon Q\to E$ can be viewed as a Young measure $\nu$ defined by \begin{equation*} \<\nu_z,\varphi\>=\<\delta_{w(z)},\varphi\>=\varphi(w(z)),\quad\forall\varphi\in C_b(E),\quad\mbox{for }\lambda-\mbox{almost all } z\in Q. \end{equation*} A Young measure $\nu$ on $Q$ can itself be seen as a probability measure on $Q\times E$ defined by \begin{equation*} \<\nu,\psi\>=\int_Q\int_E \psi(z,p) d\nu_z(p) d\lambda(z),\quad\forall\psi\in C_b(Q\times E). \end{equation*} We then have, for all $\psi\in C_b(Q)$ ($\psi$ independent of $p\in E$), $\<\nu,\psi\>=\<\lambda,\psi\>$, that is to say \begin{equation} \pi_*\nu=\lambda, \end{equation} where $\pi$ is the projection $Q\times E\to Q$ and the push-forward of $\nu$ by $\pi$ is defined by $\pi_*\nu(A)=\nu(\pi^{-1}(A))$, for every Borel subset $A$ of $Q$. Assume now that $Q$ is a compact subset of $\mathbb{R}^s$ and $E$ is a closed subset of $\mathbb{R}^m$, $m,s\in\mathbb{N}^*$, and, conversely, let $\mu$ be a probability measure on $Q\times E$ such that $\pi_*\mu=\lambda$. Then, by the Slicing Theorem (\textit{cf.} Attouch, Buttazzo, Michaille \cite[Theorem~4.2.4]{AttouchButtazzoMichaille06}), we have: for $\lambda$-a.e.
$z\in Q$, there exists $\mu_z\in\mathcal{P}_1(E)$ such that \begin{equation*} z\mapsto \<\mu_z,\varphi\> \end{equation*} is measurable from $Q$ to $\mathbb{R}$ for every $\varphi\in C_b(E)$, and \begin{equation*} \<\mu,\psi\>=\int_Q\int_E \psi(z,p) d\mu_z(p) d\lambda(z), \end{equation*} for all $\psi\in C_b(Q\times E)$. This means precisely that $\mu$ is a Young measure on $Q$. We therefore denote by \begin{equation*} \mathcal{Y}=\left\{\nu\in\mathcal{P}_1(Q\times E);\pi_*\nu=\lambda\right\} \end{equation*} the set of Young measures on $Q$. \medskip We now use Prohorov's Theorem, \textit{cf.} Billingsley \cite[Theorem~5.1]{BillingsleyBook}, to give a compactness criterion in $\mathcal{Y}$. We assume that $Q$ is a compact subset of $\mathbb{R}^s$ and $E$ is a closed subset of $\mathbb{R}^m$. We also assume that the $\sigma$-algebra $\mathcal{A}$ of $Q$ is the $\sigma$-algebra of Borel sets of $Q$. \begin{proposition}[Bound against a Lyapunov functional] Let $\eta\in C(E;\mathbb{R}_+)$ satisfy the growth condition \begin{equation*} \lim_{p\in E,|p|\to+\infty} \eta(p)=+\infty. \end{equation*} Let $C>0$ be a positive constant. Then the set \begin{equation} K_C=\left\{\nu\in\mathcal{Y};\int_{Q\times E}\eta(p)d\nu(z,p)\leq C\right\} \label{compactKC}\end{equation} is a compact subset of $\mathcal{Y}$. \label{prop:compactY}\end{proposition} \begin{proof} The condition $\pi_*\nu=\lambda$ being stable under weak convergence, $\mathcal{Y}$ is closed in $\mathcal{P}_1(Q\times E)$. By Prohorov's Theorem, \cite[Theorem~5.1]{BillingsleyBook}, $K_C$ is relatively compact in $\mathcal{Y}$ if, and only if, it is tight. Besides, $K_C$ is closed since \begin{equation*} \int_{Q\times E}\eta(p)d\nu(z,p)\leq\liminf_{n\to+\infty}\int_{Q\times E}\eta(p)d\nu_n(z,p) \end{equation*} if $(\nu_n)$ converges weakly to $\nu$. It is therefore sufficient to prove that $K_C$ is tight, which is classical: let $\varepsilon>0$. For $R\geq 0$, let \begin{equation*} V(R)=\inf_{|p|>R}\eta(p).
\end{equation*} Then $V(R)\to +\infty$ as $R\to+\infty$ by hypothesis and, setting $M_R=Q\times [\overline{B}(0,R)\cap E]$, we have \begin{equation*} V(R)\nu(M_R^c)\leq \int_{Q\times E}\eta(p)d\nu(z,p)\leq C, \end{equation*} for all $\nu\in K_C$, whence $\sup_{\nu\in K_C}\nu(M_R^c)<\varepsilon$ for $R$ large enough. \end{proof} \subsection{A compactness criterion for probabilistic Young measures}\label{sec:YoungMeasuresProbasProbas} As above, we assume that $Q$ is a compact subset of $\mathbb{R}^s$ and $E$ is a closed subset of $\mathbb{R}^m$. We endow $\mathcal{P}_1(Q\times E)$ (and thus $\mathcal{Y}$ also) with the Prohorov metric $d$. Then $(\mathcal{P}_1(Q\times E),d)$ is a complete, separable metric space, weak convergence coincides with $d$-convergence, and a subset $A$ is relatively compact if, and only if, it is tight, \cite[p.72]{BillingsleyBook}. \begin{definition} A random Young measure is a $\mathcal{Y}$-valued random variable. \end{definition} \begin{proposition} Let $\eta\in C(E;\mathbb{R}_+)$ satisfy the growth condition \begin{equation*} \lim_{p\in E,|p|\to+\infty} \eta(p)=+\infty. \end{equation*} Let $M>0$ be a positive constant. If $(\nu_n)$ is a sequence of random Young measures on $Q$ satisfying the bound \begin{equation*} \mathbb{E}\int_{Q\times E}\eta(p)d\nu_n(z,p)\leq M, \end{equation*} then, up to a subsequence, $(\nu_n)$ converges in law. \label{prop:compactYproba}\end{proposition} \begin{proof} Let $\mathcal{L}(\nu_n)\in\mathcal{P}_1(\mathcal{Y})$ denote the law of $\nu_n$. To prove that the family $(\mathcal{L}(\nu_n))$ is tight, we use Prohorov's Theorem. Let $\varepsilon>0$. For $C>0$, let $K_C$ be the compact set defined by \eqref{compactKC}.
If $\nu$ is a random Young measure, then we have \begin{equation*} \mathbb{P}(\nu\notin K_C)=\mathbb{P}\left(1<\frac{1}{C}\int_{Q\times E}\eta(p)d\nu(z,p)\right)\leq\frac{1}{C}\mathbb{E}\int_{Q\times E}\eta(p)d\nu(z,p), \end{equation*} hence \begin{equation*} \sup_{n\in\mathbb{N}}\mathcal{L}(\nu_n)(\mathcal{Y}\setminus K_C)=\sup_{n\in\mathbb{N}}\mathbb{P}(\nu_n\notin K_C)\leq\frac{M}{C}<\varepsilon, \end{equation*} for $C$ large enough, which proves the result. \end{proof} We end this section with a result about random Young measures that are almost surely Dirac masses. \begin{definition}[Random Dirac mass] Let $r\geq 1$ and let $\nu$ be a random Young measure. We say that $\nu$ is an $L^r$-random Dirac mass if there exists $u\in L^r(\Omega\times Q;E)$ such that, almost surely, $\nu=\delta_u\rtimes\lambda$, \textit{i.e.} (indicating by the superscript $\omega$ the dependence on $\omega$): for $\mathbb{P}$-almost all $\omega\in\Omega$, \begin{equation}\label{eqRandomDiracmass} \int_{Q\times E}\varphi(p,z) d\nu_{z}^\omega(p)d\lambda(z)=\int_Q \varphi(u^\omega(z),z) d\lambda(z), \end{equation} for all $\varphi\in C_b(Q\times E)$. \end{definition} \begin{proposition} Let $r\geq 1$, let $\nu$ be a random Young measure on the probability space $(\Omega,\mathbb{P})$ and let $\tilde\nu$ be a random Young measure on a probability space $(\tilde\Omega,\tilde\mathbb{P})$ such that $\nu$ and $\tilde\nu$ have the same law. Then $\nu$ is an $L^r$-random Dirac mass if, and only if, $\tilde\nu$ is an $L^r$-random Dirac mass, \textit{i.e.} the fact that $\nu$ is an $L^r$-random Dirac mass depends only on the distribution of $\nu$. \label{prop:LawDiracMass}\end{proposition} \begin{proof} We denote by $\tilde\mathbb{E}$ the expectation with respect to $\tilde\mathbb{P}$. Let $\psi\colon\mathbb{R}^m\to\mathbb{R}$ be a strictly convex function satisfying the growth condition $$ C_1 |p|^r\leq |\psi(p)|\leq C_2(1+|p|^r).
$$ If $\nu$ is an $L^r$-random Dirac mass, then \begin{equation}\label{psiDirac} \mathbb{E}\int_{Q\times E}\psi(p)d\nu_{z}(p)d\lambda(z)=\mathbb{E}\int_Q\psi\left(\int_E p d\nu_{z}(p)\right) d\lambda(z), \end{equation} and both sides of the equation (equal to $\mathbb{E}\|\psi(u)\|_{L^1(Q)}$) are finite. Equation \eqref{psiDirac} can be rewritten as \begin{equation} \mathbb{E}\varphi(\nu)=\mathbb{E}\theta(\nu), \label{varphitheta}\end{equation} where the functions $\varphi$ and $\theta$ are defined on $\mathcal{Y}$ as the maps $$ \varphi\colon\mu\mapsto\int_{Q\times E}\psi(p)d\mu_z(p)d\lambda(z),\quad\theta\colon\mu\mapsto\int_Q\psi\left(\int_E p d\mu_{z}(p)\right) d\lambda(z). $$ The function $\varphi$ is continuous on $\mathcal{Y}$ and, by the Lebesgue dominated convergence theorem, $\theta$ is continuous on the subset $$ \mathcal{Y}_r:=\left\{\mu\in\mathcal{Y};\int_{Q\times E}|p|^r d\mu_z(p)d\lambda(z)<+\infty\right\}. $$ If $\tilde\nu$ has the same law as $\nu$, then \eqref{varphitheta} shows that $\tilde\mathbb{P}$-almost surely $\tilde\nu\in \mathcal{Y}_r$, that \begin{equation}\label{psiDiracTilde} \tilde\mathbb{E}\int_{Q\times E}\psi(p)d\tilde\nu_{z}(p)d\lambda(z)=\tilde\mathbb{E}\int_Q\psi\left(\int_E p d\tilde\nu_{z}(p)\right) d\lambda(z), \end{equation} and that both sides of the equation \eqref{psiDiracTilde} are finite. Note that, $\tilde\mathbb{P}$-almost surely, for $\lambda$-almost all $z\in Q$, \begin{equation}\label{Jensentildenu} \int_E \psi(p)d\tilde\nu_{z}(p)\geq \psi\left(\int_E p d\tilde\nu_{z}(p)\right), \end{equation} by the Jensen inequality. By strict convexity of $\psi$, there is equality in \eqref{Jensentildenu} if, and only if, $\tilde\nu_z$ is the Dirac mass $\delta_{\tilde u(z)}$, where \begin{equation}\label{nutoutilde} \tilde u(z):=\int_E p d\tilde\nu_{z}(p). \end{equation} Therefore \eqref{psiDiracTilde} and \eqref{Jensentildenu} show that $\tilde\mathbb{P}$-almost surely, for $\lambda$-almost all $z\in Q$, $\tilde\nu_z=\delta_{\tilde u(z)}$.
In particular, \eqref{eqRandomDiracmass} is satisfied by $\tilde\nu$ and $\tilde u$. By \eqref{nutoutilde}, $\tilde u$ is measurable from $\tilde\Omega\times Q$ to $E$. Since, by \eqref{psiDiracTilde}, $$ \tilde\mathbb{E}\int_Q \psi(\tilde u)d\lambda=\tilde\mathbb{E}\int_{Q\times E}\psi(p)d\tilde\nu_{z}(p)d\lambda(z)<+\infty, $$ we have $\tilde u\in L^r(\tilde\Omega\times Q;E)$. \end{proof} \subsection{Convergence to a random Young measure}\label{sec:cvYoungMeasures} Let $\mathbf{U}_\varepsilon$ be a bounded solution to \eqref{stoEulereps}. We will apply the results of paragraphs \ref{sec:YoungMeasuresProbas}-\ref{sec:YoungMeasuresProbasProbas} to the case $Q=Q_T$, $\lambda$ is the $2$-dimensional Lebesgue measure on $Q_T$, $E=\mathbb{R}_+\times\mathbb{R}$ and $\nu^\varepsilon=\delta_{{\mathbf{U}_\varepsilon}}\rtimes\lambda$, that is to say \begin{equation}\label{defYoungeps} \int_{Q_T\times\mathbb{R}_+\times\mathbb{R}}\varphi(x,t,\mathbf{U}) d\nu^\varepsilon_{x,t}(\mathbf{U}) dx dt =\int_{Q_T} \varphi(x,t,\mathbf{U}_\varepsilon(x,t)) dx dt, \end{equation} for all $\varphi\in C_b(Q_T\times\mathbb{R}_+\times\mathbb{R})$. \begin{proposition} Let ${\mathbf{U}_\varepsilon}_0\in W^{2,2}(\mathbb{T})$ satisfy ${\rho_\varepsilon}_0\geq c_{\varepsilon 0}$ a.e. in $\mathbb{T}$, for a positive constant $c_{\varepsilon 0}$. Assume that hypotheses \eqref{A0eps}, \eqref{Trunceps}, \eqref{Lipsigmaeps} are satisfied, that ${\mathbf{U}_\varepsilon}_0\in\Lambda_{\varkappa_\varepsilon}$ and that \begin{equation}\label{UniformInitialEnergy} \mathbb{E}\int_\mathbb{T} \frac12\rho_{\varepsilon 0} u^2_{\varepsilon 0}+\frac{\kappa}{\gamma-1}\rho^\gamma_{\varepsilon 0}\ dx \end{equation} is bounded uniformly with respect to $\varepsilon$. Let $\mathbf{U}_\varepsilon$ be the bounded solution to \eqref{stoEulereps} and let $\nu^\varepsilon$ be the random Young measure associated to $\mathbf{U}_\varepsilon$, defined by \eqref{defYoungeps}.
Let $(\varepsilon_n)$ be a sequence of real numbers decreasing to zero and let $\mathcal{X}_W$ be the path space defined by \eqref{WpathSpace}. Then, up to a subsequence, there exists a probability space $(\tilde\Omega,\tilde{\mathcal{F}},\tilde\mathbb{P})$, some random variables $(\tilde{\nu}^{\varepsilon_n},\tilde W^{\varepsilon_n})$ and $(\tilde\nu,\tilde W)$ with values in $\mathcal{Y}\times\mathcal{X}_W$ such that \begin{enumerate} \item the law of $(\tilde{\nu}^{\varepsilon_n},\tilde W^{\varepsilon_n})$ under $\tilde\mathbb{P}$ coincides with the law of $(\nu^{\varepsilon_n},W)$, \item $(\tilde{\nu}^{\varepsilon_n},\tilde W^{\varepsilon_n})$ converges $\tilde\mathbb{P}$-almost surely to $(\tilde\nu,\tilde W)$ in the topology of $\mathcal{Y}\times\mathcal{X}_W$. \end{enumerate} \label{prop:cvrandomY}\end{proposition} \begin{proof} Let $\eta$ be the entropy (the energy in this case) defined by \eqref{entropychi} with $g(\xi)=|\xi|^{2}$. Then $\eta$ is coercive by \eqref{estimetabelow}. For such an $\eta$, \eqref{UniformInitialEnergy} and the uniform estimate \eqref{estimmomenteps2} show, together with Proposition~\ref{prop:compactYproba}, that the sequence of random Young measures $(\nu^{\varepsilon_n})$ is tight. Since the single random variable $W$ is tight on $\mathcal{X}_W$, the couple $(\nu^{\varepsilon_n},W)$ is tight on $\mathcal{Y}\times\mathcal{X}_W$. Since $\mathcal{Y}$ is separable (\textit{cf.} the introduction of Section~\ref{sec:YoungMeasuresProbasProbas}), $\mathcal{Y}\times\mathcal{X}_W$ is separable and we can then apply the Skorohod Theorem \cite[p.~70]{BillingsleyBook} to conclude. \end{proof} \begin{remark} We may take $\tilde{\Omega}=[0,1]$, with $\tilde\mathcal{F}$ the Borel $\sigma$-algebra on $[0,1]$ and $\tilde\mathbb{P}$ the Lebesgue measure on $[0,1]$, see \cite{Skorohod56}.
\label{rk:Skorohod01}\end{remark} \begin{remark} Since $\mathbf{U}_\varepsilon$ is a bounded solution to \eqref{stoEulereps}, we have $$ \mathbf{U}_\varepsilon\in L^r(\Omega\times Q_T;\mathbb{R}_+\times\mathbb{R}) $$ for every $r\geq 1$. By Proposition~\ref{prop:LawDiracMass}, there exists $$ \tilde\mathbf{U}_\varepsilon\in L^r(\tilde\Omega\times Q_T;\mathbb{R}_+\times\mathbb{R}) $$ for all $r\geq 1$ such that, almost surely, $\tilde\nu^\varepsilon=\delta_{\tilde\mathbf{U}_\varepsilon}\rtimes\lambda$, \textit{i.e.} almost surely, \begin{equation}\label{defYoungepstilde} \int_{Q_T\times\mathbb{R}_+\times\mathbb{R}}\varphi(x,t,\mathbf{U}) d\tilde\nu^\varepsilon_{x,t}(\mathbf{U}) dx dt =\int_{Q_T} \varphi(x,t,\tilde\mathbf{U}_\varepsilon(x,t)) dx dt, \end{equation} for all $\varphi\in C_b(Q_T\times\mathbb{R}_+\times\mathbb{R})$. Using in particular the identity $$ \mathbb{E}\int_{Q_T} \varphi(x,t,\tilde\mathbf{U}_\varepsilon(x,t)) dx dt=\mathbb{E}\int_{Q_T} \varphi(x,t,\mathbf{U}_\varepsilon(x,t)) dx dt, $$ we see that $\tilde\mathbf{U}_\varepsilon$ satisfies the same uniform bound \eqref{estimmomenteps2} as $\mathbf{U}_\varepsilon$. \label{rk:UUtilde}\end{remark} \section{Reduction of the Young measure}\label{sec:reductionYoung} Proposition~\ref{prop:cvrandomY} above gives the existence of a random Young measure $\tilde\nu$ such that $\tilde\nu^{\varepsilon_n}$ converges in law and almost surely in the sense of Young measures to $\tilde\nu$. We will now apply the compensated compactness method to prove that a.s., for a.e. $(x,t)\in Q_T$, either $\tilde\nu_{x,t}$ is a Dirac mass or $\tilde\nu_{x,t}$ is concentrated on the vacuum region $\{\rho=0\}$. To do this, we will use the probabilistic compensated compactness method of \cite{FengNualart08} to obtain a set of functional equations satisfied by $\tilde\nu$. Then we conclude by adapting the arguments of \cite{LionsPerthameSouganidis96}.
\subsection{Compensated compactness}\label{sec:CC} Let $\mathcal{G}$ denote the set of functions $g\in C^2(\mathbb{R})$, convex, with $g$ sub-quadratic and $g'$ sub-linear: \begin{equation}\label{gsubquad} |g(\xi)|\leq C(g)(1+|\xi|^2),\quad |g'(\xi)|\leq C(g)(1+|\xi|), \end{equation} for all $\xi\in\mathbb{R}$, for a given constant $C(g)\geq 0$. \subsubsection{Preparation for Murat's Lemma}\label{sec:CCMurat} For $p\in[1,+\infty]$, we denote by $W_0^{1,p}(Q_T)$ the set of functions $u$ in the Sobolev space $W^{1,p}(Q_T)$ such that $u=0$ on $\mathbb{T}\times\{0\}$ and $\mathbb{T}\times\{T\}$. We denote by $W^{-1,p}(Q_T)$ the dual of $W_0^{1,p'}(Q_T)$, where $p'$ is the conjugate exponent to $p$. First we prove the tightness of the sequence $(\varepsilon\partial_{xx}^2\eta({\mathbf{U}_\varepsilon}))_{\varepsilon>0}$. \begin{proposition}[Case $\gamma\leq 2$] We assume $\gamma\leq 2$. Let ${\mathbf{U}_\varepsilon}_0\in W^{2,2}(\mathbb{T})$ satisfy ${\rho_\varepsilon}_0\geq c_{\varepsilon 0}$ a.e. in $\mathbb{T}$, for a positive constant $c_{\varepsilon 0}$. Assume that hypotheses \eqref{A0eps}, \eqref{Trunceps}, \eqref{Lipsigmaeps} are satisfied, that ${\mathbf{U}_\varepsilon}_0\in\Lambda_{\varkappa_\varepsilon}$ and that \begin{equation}\label{UniformInitialEnergyless2} \mathbb{E}\int_\mathbb{T} \frac12\rho_{\varepsilon 0} u^2_{\varepsilon 0}+\frac{\kappa}{\gamma-1}\rho^\gamma_{\varepsilon 0}\ dx \end{equation} is bounded uniformly with respect to $\varepsilon$. Let $\mathbf{U}_\varepsilon$ be the bounded solution to \eqref{stoEulereps}. Let $r\in(1,2)$ and let $\eta$ be an entropy of the form \eqref{entropychi} with $g\in\mathcal{G}$ (\textit{cf.} \eqref{gsubquad}). Then the sequence of random variables $(\varepsilon \partial_{xx}^2 \eta({\mathbf{U}_\varepsilon}))_{\varepsilon>0}$ is tight in $W^{-1,r}(Q_T)$. \label{prop:Muratgammaless2}\end{proposition} \begin{proof} We suppose first that $\gamma<2$ and we set $ m=\displaystyle\frac{r}{2-r}(2-\gamma).
$ We can assume that $r\in\left(\frac{2}{3-\gamma},2\right)$. Then $m>1$. We will show that $(\varepsilon\partial_{xx}^2 \eta({\mathbf{U}_\varepsilon}))$ converges to zero in probability on $W^{-1,r}(Q_T)$ by proving that \begin{equation}\label{eq:murat0lessP} \ds\lim_{\varepsilon\to 0}\varepsilon \partial_x\eta({\mathbf{U}_\varepsilon})=0\mbox{ in probability in }L^r(Q_T). \end{equation} This is equivalent to the convergence in law of the sequence $(\varepsilon \partial_{x} \eta({\mathbf{U}_\varepsilon}))_{\varepsilon>0}$ to $0$ \cite[p.27]{BillingsleyBook}. To obtain \eqref{eq:murat0lessP}, it is sufficient to prove the convergence \begin{equation}\label{eq:murat0less} \ds\lim_{\varepsilon\to 0}\varepsilon \partial_x \eta({\mathbf{U}_\varepsilon})=0\mbox{ in }L^r(Q_T), \end{equation} conditionally on the bounds \begin{equation}\label{LmRless} \|\rho_\varepsilon\|_{L^m(Q_T)}^m\leq R, \end{equation} and \begin{equation} \varepsilon\iint_{Q_T}\Big\{\Big[{\rho}_\varepsilon^\gamma+|{u}_\varepsilon|^4\Big]\rho_{\varepsilon}^{\gamma-2} |\partial_x{\rho}_\varepsilon|^2 +\Big[{\rho}_\varepsilon(1+{\rho}_\varepsilon^{2\theta}+|{u}_\varepsilon|^2)\Big] \rho_\varepsilon|\partial_x {u}_\varepsilon|^2\Big\} dx dt\leq R,\label{gradRless} \end{equation} where $R>1$ is fixed. Indeed, by the estimates \eqref{estimmomenteps2}, \eqref{corestimgradientepsrho}, \eqref{corestimgradientepsu} and the Markov Inequality, the probabilities of the events \eqref{LmRless} and \eqref{gradRless} are arbitrarily close to $1$ for $R$ large, uniformly with respect to $\varepsilon$. The proof of \eqref{eq:murat0less} is similar to the analysis in \cite[pp.627-629]{LionsPerthameSouganidis96}, with the difference that we do not use $L^\infty$ estimates here.
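For the reader's convenience, let us record the elementary computation (not detailed above) showing that the restriction $r\in\left(\frac{2}{3-\gamma},2\right)$ is exactly the condition $m>1$; recall that here $1<\gamma<2$, so $2-\gamma>0$ and $2-r>0$:

```latex
m=\frac{r}{2-r}\,(2-\gamma)>1
\iff r(2-\gamma)>2-r
\iff r(3-\gamma)>2
\iff r>\frac{2}{3-\gamma}.
```

Note also that $1<\frac{2}{3-\gamma}<2$ precisely because $1<\gamma<2$, so $\left(\frac{2}{3-\gamma},2\right)$ is a non-empty subinterval of $(1,2)$.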
We note first that, by \eqref{gsubquad}, we have \begin{equation*} |\partial_\rho\eta(\mathbf{U})|\leq C\left(1+|u|^2+\rho^{2\theta}\right), \end{equation*} and $$ |\partial_u\eta(\mathbf{U})| \leq C\rho\left(1+|u|+\rho^\theta\right), $$ for a given non-negative constant that we still denote by $C$. By the Young Inequality, we obtain the bounds \begin{align} |\partial_x \eta({\mathbf{U}_\varepsilon})|^r \leq& C\left\{\left(1+|{u}_\varepsilon|^{2r}+|{\rho}_\varepsilon|^{2r\theta}\right)|\partial_x{\rho}_\varepsilon|^r+\left(1+|{u}_\varepsilon|^r+|{\rho}_\varepsilon|^{r\theta}\right){\rho}_\varepsilon^r|\partial_x {u}_\varepsilon|^r\right\}\nonumber\\ \leq & C\left\{1+\left(1+|{u}_\varepsilon|^{2r}\right)|\partial_x{\rho}_\varepsilon|^r+|{\rho}_\varepsilon|^{4\theta}|\partial_x{\rho}_\varepsilon|^2+{\rho}_\varepsilon\left[1+|{\rho}_\varepsilon|^{2\theta}+|{u}_\varepsilon|^2\right]\rho_\varepsilon|\partial_x {u}_\varepsilon|^2\right\},\label{muratlessr} \end{align} where $C$ denotes some constant, possibly varying from place to place, that depends only on $r$. By \eqref{LmRless} and \eqref{gradRless}, therefore, \begin{equation}\label{LmGradRless} \varepsilon^r\iint_{Q_T}|\partial_x \eta({\mathbf{U}_\varepsilon})|^r\, dx dt\leq C_R\varepsilon^{r-1}+C\varepsilon^r\iint_{Q_T}\left(1+|{u}_\varepsilon|^{2r}\right)|\partial_x{\rho}_\varepsilon|^r\, dx dt, \end{equation} where the constant $C_R$ depends on $R$. Since $\gamma\leq 2$, we have furthermore \begin{align*} \left(1+|{u}_\varepsilon|^{2r}\right)|\partial_x{\rho}_\varepsilon|^r&={\rho}_\varepsilon^{\frac{r}{2}(2-\gamma)}\left(1+|{u}_\varepsilon|^{2r}\right){\rho}_\varepsilon^{\frac{r}{2}(\gamma-2)}|\partial_x{\rho}_\varepsilon|^r\\ &\leq C{\rho}_\varepsilon^{m}+C\left(1+|{u}_\varepsilon|^{4}\right){\rho}_\varepsilon^{\gamma-2}|\partial_x{\rho}_\varepsilon|^2.
\end{align*} By \eqref{LmRless}, \eqref{gradRless} and \eqref{LmGradRless}, we conclude that \begin{equation}\label{gammaleq2} \varepsilon^r\iint_{Q_T}|\partial_x \eta({\mathbf{U}_\varepsilon})|^r\, dx dt\leq C_R\varepsilon^{r-1}, \end{equation} for all $\varepsilon\in(0,1)$. This gives the convergence~\eqref{eq:murat0less}. If $\gamma=2$, the arguments used above remain valid, taking $r=2$. \end{proof} \begin{proposition}[Case $\gamma>2$] We assume $\gamma>2$. Let ${\mathbf{U}_\varepsilon}_0\in W^{2,2}(\mathbb{T})$ satisfy ${\rho_\varepsilon}_0\geq c_{\varepsilon 0}$ a.e. in $\mathbb{T}$, for a positive constant $c_{\varepsilon 0}$. Assume that hypotheses \eqref{A0eps}, \eqref{Trunceps}, \eqref{Lipsigmaeps} are satisfied, that ${\mathbf{U}_\varepsilon}_0\in\Lambda_{\varkappa_\varepsilon}$ and that \begin{equation}\label{UniformInitialEnergyg2} \mathbb{E}\int_\mathbb{T} \frac12\rho_{\varepsilon 0} u^2_{\varepsilon 0}+\frac{\kappa}{\gamma-1}\rho^\gamma_{\varepsilon 0}\ dx \end{equation} is bounded uniformly with respect to $\varepsilon$. Let $\mathbf{U}_\varepsilon$ be the bounded solution to \eqref{stoEulereps}. Assume that there exists $m>4$ such that the sequence $(\varepsilon^{\frac{1}{\gamma-2}}\|u^\varepsilon\|_{L^m(Q_T)})$ is stochastically bounded: for all $\alpha>0$, there exists $M>0$ such that, for all $\varepsilon\in(0,1)$, \begin{equation}\label{uepsLmg2} \mathbb{P}\left(\varepsilon^{\frac{1}{\gamma-2}}\|u^\varepsilon\|_{L^m(Q_T)}> M\right)<\alpha. \end{equation} Let $r\in(1,2)$ and let $\eta$ be an entropy of the form \eqref{entropychi} with $g\in\mathcal{G}$ (\textit{cf.} \eqref{gsubquad}). Then the sequence of random variables $(\varepsilon \partial_{xx}^2 \eta({\mathbf{U}_\varepsilon}))_{\varepsilon>0}$ is tight in $W^{-1,r}(Q_T)$. \label{prop:Murat0g2}\end{proposition} \begin{proof} We begin as in the proof of Proposition~\ref{prop:Muratgammaless2}. Without loss of generality, we assume $\displaystyle\frac{4r}{2-r}\geq m$.
We will obtain \eqref{eq:murat0lessP} here by proving that, given $\tau>0$, \begin{equation}\label{eq:murat0g2} \lim_{\varepsilon\to 0}\mathbb{P}(A_{\varepsilon,\tau})=0,\quad A_{\varepsilon,\tau}:=\left\{\|\varepsilon\partial_x \eta(\mathbf{U}_\varepsilon)\|_{L^r(Q_T)}>\tau\right\}. \end{equation} For $R>1$, we consider the events \eqref{gradRless} and \begin{equation}\label{LmR} \|{u}_\varepsilon\|_{L^m(Q_T)}\leq R,\quad \|{u}_\varepsilon\rho_\varepsilon^\frac12\|_{L^2(Q_T)}\leq R. \end{equation} By \eqref{estimmomenteps2}, \eqref{corestimgradientepsrho}, \eqref{corestimgradientepsu} and \eqref{uepsLmg2}, the probability of the event \begin{equation}\label{eventBR} B_{\varepsilon,R}:=\left\{\eqref{gradRless}\;\&\;\eqref{LmR}\;\&\;\varepsilon^{\frac{1}{\gamma-2}}\|u_\varepsilon\|_{L^m(Q_T)}\leq M\right\} \end{equation} is arbitrarily close to $1$ for large $R$, uniformly with respect to $\varepsilon$. To obtain \eqref{eq:murat0g2}, it is therefore sufficient to prove \begin{equation}\label{eq:murat1g2} \lim_{\varepsilon\to 0}\|\varepsilon\partial_x \eta(\mathbf{U}_\varepsilon)\|_{L^r(Q_T)}=0\mbox{ a.e. on }B_{\varepsilon,R}, \end{equation} for every $R>1$. To get \eqref{eq:murat1g2}, we use the estimate \eqref{muratlessr}, which gives \eqref{LmGradRless}. The remaining term on the right-hand side of \eqref{LmGradRless} is estimated as follows: let $\delta>0$.
First, we have $1\leq\delta^{r(2-\gamma)/2}\rho_\varepsilon^{r(\gamma-2)/2}$ on the set $\{\rho_\varepsilon\geq \delta\}$ and, by the H\"older Inequality and \eqref{gradRless}, \begin{align*} &\varepsilon^r\iint_{Q_T}\left(1+|{u}_\varepsilon|^{2r}\right)|\partial_x{\rho}_\varepsilon|^r\, dx dt\\ \leq\ & \delta^{r(2-\gamma)/2} \left(\varepsilon^2 \iint_{Q_T}\left(1+|{u}_\varepsilon|^{4}\right)\rho_\varepsilon^{\gamma-2}|\partial_x{\rho}_\varepsilon|^2\, dx dt\right)^{r/2}+\varepsilon^r\iint_{Q_T}\left(1+|{u}_\varepsilon|^{2r}\right)\mathbf{1}_{\rho_\varepsilon<\delta}|\partial_x{\rho}_\varepsilon|^r\, dx dt\\ \leq\ & C_R\varepsilon^{r/2}\delta^{r(2-\gamma)/2} +\varepsilon^r\iint_{Q_T}\left(1+|{u}_\varepsilon|^{2r}\right)\mathbf{1}_{\rho_\varepsilon<\delta}|\partial_x{\rho}_\varepsilon|^r\, dx dt. \end{align*} To estimate the part corresponding to $\{{\rho}_\varepsilon<\delta\}$, we first use the H\"older Inequality to obtain \begin{align} \varepsilon^r\iint_{Q_T}\left(1+|{u}_\varepsilon|^{2r}\right)|\partial_x{\rho}_\varepsilon|^r\mathbf{1}_{{\rho}_\varepsilon<\delta} \leq\ &\varepsilon^{r/2}\Big(\iint_{Q_T}(1+|{u}_\varepsilon|^{2r})^{\frac{2}{2-r}}\Big)^{\frac{2-r}{2}}\Big(\varepsilon\iint_{Q_T}|\partial_x{\rho}_\varepsilon|^2\mathbf{1}_{{\rho}_\varepsilon<\delta}\Big)^{\frac{r}{2}}\nonumber\\ \lesssim\ &\varepsilon^{r/2}(1+\|u_\varepsilon\|_{L^m(Q_T)})^{2r}\Big(\varepsilon\iint_{Q_T}|\partial_x{\rho}_\varepsilon|^2\mathbf{1}_{{\rho}_\varepsilon<\delta}\Big)^{\frac{r}{2}}.\label{rhoinfdelta1} \end{align} Then, we multiply the first equation of the system~\eqref{eq:stoEulereps}, \textit{i.e.} the equation $$ \partial_t{\rho}_\varepsilon+\partial_x({\rho}_\varepsilon {u}_\varepsilon)=\varepsilon\partial_{xx}^2{\rho}_\varepsilon, $$ by $\min({\rho}_\varepsilon,\delta)$, and then integrate the result over $Q_T$.
This gives, by \eqref{LmR} and for some constants varying from line to line, \begin{align*} \varepsilon\iint_{Q_T}|\partial_x{\rho}_\varepsilon|^2\mathbf{1}_{{\rho}_\varepsilon<\delta}&\leq C\delta+C\Big(\iint_{Q_T}{\rho}_\varepsilon|{u}_\varepsilon||\partial_x{\rho}_\varepsilon|\mathbf{1}_{{\rho}_\varepsilon<\delta}\Big)\\ &\leq C\delta+C\delta^{1/2}\Big[\iint_{Q_T}|u_\varepsilon|^2\rho_\varepsilon\Big]^{\frac12}\Big[\iint_{Q_T}|\partial_x{\rho}_\varepsilon|^2\mathbf{1}_{{\rho}_\varepsilon<\delta}\Big]^{\frac12}\\ &\leq C_R\left(\delta+\frac{\delta}{\varepsilon}\right)+\frac{\varepsilon}{2}\iint_{Q_T}|\partial_x{\rho}_\varepsilon|^2\mathbf{1}_{{\rho}_\varepsilon<\delta}, \end{align*} from which we deduce $$ \varepsilon\iint_{Q_T}|\partial_x{\rho}_\varepsilon|^2\mathbf{1}_{{\rho}_\varepsilon<\delta}\leq C_R\left(\delta+\frac{\delta}{\varepsilon}\right). $$ Inserting this bound into \eqref{rhoinfdelta1} and using \eqref{LmGradRless}, we get \begin{equation}\label{muratdeltaVSeps} \varepsilon^r\iint_{Q_T}|\partial_x \eta({\mathbf{U}_\varepsilon})|^r\, dx dt \leq C_R\big(\varepsilon^{r-1}+\delta^{\frac{r}{2}(2-\gamma)}\varepsilon^{r/2}+\delta^{r/2}(1+\|u_\varepsilon\|_{L^m(Q_T)})^{2r}\big). \end{equation} We take $\delta=\delta(\varepsilon)$ such that $\delta\to 0$ and $\varepsilon=o(\delta^{\gamma-2})$ when $\varepsilon\to 0$ (for instance $\delta=\varepsilon^{\frac{1}{2(\gamma-2)}}$), so that the second term on the right-hand side of \eqref{muratdeltaVSeps} also tends to zero. On the event $B_{\varepsilon,R}$ (\textit{cf.} \eqref{eventBR}), where $\|u_\varepsilon\|_{L^m(Q_T)}\leq R$, \eqref{muratdeltaVSeps} then reads $$ \varepsilon^r\iint_{Q_T}|\partial_x \eta({\mathbf{U}_\varepsilon})|^r\, dx dt=o(1). $$ This concludes the proof of \eqref{eq:murat1g2} and of Proposition~\ref{prop:Murat0g2}. \end{proof} \begin{remark}[Growth of $\|u^\varepsilon\|_{L^{4+}(Q_T)}$] Since $\Lambda_{\varkappa_\varepsilon}$ is an invariant region for $\mathbf{U}_\varepsilon$, a sufficient condition for \eqref{uepsLmg2} is that $\varepsilon^{\frac{1}{\gamma-2}}\varkappa_\varepsilon$ is bounded: \begin{equation}\label{growthvarkappaeps} \varepsilon^{\frac{1}{\gamma-2}}\varkappa_\varepsilon\lesssim 1.
\end{equation} In that case we even have $\varepsilon^{\frac{1}{\gamma-2}}\|u^\varepsilon\|_{L^\infty(Q_T)}\lesssim 1$ almost surely. \label{rk:growthuepsL4}\end{remark} The next Proposition is similar to Lemma~4.20 in \cite{FengNualart08}. \begin{proposition} Let ${\mathbf{U}_\varepsilon}_0\in W^{2,2}(\mathbb{T})$ satisfy ${\rho_\varepsilon}_0\geq c_{\varepsilon 0}$ a.e. in $\mathbb{T}$, for a positive constant $c_{\varepsilon 0}$. Let $p\in\mathbb{N}$ satisfy $p\geq 4+\frac{1}{2\theta}$. Assume that hypotheses \eqref{A0eps}, \eqref{Trunceps}, \eqref{Lipsigmaeps} are satisfied, that ${\mathbf{U}_\varepsilon}_0\in\Lambda_{\varkappa_\varepsilon}$ and that \begin{equation}\label{UniformInitialEnergyMeps} \mathbb{E}\int_\mathbb{T} \left(\eta_0(\mathbf{U}_{\varepsilon 0})+\eta_{2p}(\mathbf{U}_{\varepsilon 0}) \right)dx \end{equation} is bounded uniformly with respect to $\varepsilon$ (recall that $\eta_m$ denotes the entropy associated by \eqref{entropychi} to the convex function $\xi\mapsto\xi^{2m}$). Let $\mathbf{U}_\varepsilon$ be the bounded solution to \eqref{stoEulereps}. Let $\eta$ be an entropy of the form \eqref{entropychi} with $g\in\mathcal{G}$ (\textit{cf.} \eqref{gsubquad}). Let \begin{equation}\label{defMeps} M^\varepsilon(t)=\int_0^t \partial_q\eta({\mathbf{U}_\varepsilon})(s)\Phi^\varepsilon({\mathbf{U}_\varepsilon})(s)dW(s). \end{equation} Then $\partial_t M^\varepsilon$ is tight in $W^{-1,2}(Q_T)$. \label{prop:Murat2}\end{proposition} \begin{proof} The proof is essentially that of Lemma~4.19 in \cite{FengNualart08}. However, we will proceed slightly differently (instead of using the Marchaud fractional derivative, we work directly with fractional Sobolev spaces and an Aubin-Simon compactness lemma).
We begin by making precise the meaning of $\partial_t M^\varepsilon$: this is the random element of $W^{-1,2}(Q_T)$ defined $\mathbb{P}$-almost surely by $$ \<\partial_t M^\varepsilon,z\>_{W^{-1,2}(Q_T),W^{1,2}_0(Q_T)}=-\<M^\varepsilon,\partial_t z\>_{L^2(Q_T),L^2(Q_T)}. $$ Let $0\leq s\leq t\leq T$. In what follows we denote by $C$ any constant, possibly varying from line to line, which depends on the data only and is independent of $\varepsilon$. By the Burkholder-Davis-Gundy Inequality, we have \begin{equation*} \mathbb{E}\|M^\varepsilon(t)-M^\varepsilon(s)\|_{L^4(\mathbb{T})}^4\leq C\int_{\mathbb{T}}\mathbb{E}\left|\int_s^t |\partial_q\eta({\mathbf{U}_\varepsilon})|^2|\mathbf{G}^\varepsilon({\mathbf{U}_\varepsilon})|^2 d\sigma\right|^{2} dx, \end{equation*} and, using the H\"older Inequality, \begin{equation*} \mathbb{E}\|M^\varepsilon(t)-M^\varepsilon(s)\|_{L^4(\mathbb{T})}^4\leq C|t-s|\int_s^t\mathbb{E}\int_{\mathbb{T}} \left[|\partial_q\eta({\mathbf{U}_\varepsilon})|^2|\mathbf{G}^\varepsilon({\mathbf{U}_\varepsilon})|^2\right]^{2}d\sigma dx. \end{equation*} By \eqref{gsubquad} and \eqref{dqetaG} with $m=1$, we have $$ |\partial_q\eta(\mathbf{U})|^2\mathbf{G}^2(\mathbf{U})\leq C(\eta_0(\mathbf{U})+\eta_2(\mathbf{U})). $$ Taking the square of both sides, we obtain \begin{equation}\label{MepsetaG} \left[|\partial_q\eta(\mathbf{U})|^2\mathbf{G}^2(\mathbf{U})\right]^2\leq C(\eta_0(\mathbf{U})+\eta_p(\mathbf{U})) \end{equation} by Lemma~\ref{lemmaentropies2}. The uniform estimate \eqref{estimmomenteps2} and \eqref{UniformInitialEnergyMeps} give \begin{equation}\label{deltaMts} \mathbb{E}\|M^\varepsilon(t)-M^\varepsilon(s)\|_{L^4(\mathbb{T})}^4 \leq C|t-s|^2, \end{equation} and, by integration with respect to $t$ and $s$, \begin{equation} \mathbb{E}\int_0^T\hskip -7pt\int_0^T\frac{\|M^\varepsilon(t)-M^\varepsilon(s)\|_{L^4(\mathbb{T})}^4}{|t-s|^{1+2\nu}}dtds\leq C, \label{Esobolevfrac}\end{equation} as soon as $\nu<1/2$.
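To spell out this last step (an elementary verification, added here for convenience): by \eqref{deltaMts}, the integrand in \eqref{Esobolevfrac} is bounded by $C|t-s|^{1-2\nu}$, and

```latex
\int_0^T\!\!\int_0^T |t-s|^{1-2\nu}\,dt\,ds<+\infty
\quad\text{for every }\nu<1/2,
```

since then $1-2\nu>0$ and the integrand is in fact bounded (any $\nu<1$ would suffice at this stage). The restriction $\nu<1/2$ is also consistent with the choice $0<\mu<\nu-\frac14$ made below, which requires $\nu>\frac14$.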
The left-hand side of the inequality \eqref{Esobolevfrac} is the (semi-)norm of $M^\varepsilon$ in the space $L^4(\Omega;W^{\nu,4}(0,T;L^4(\mathbb{T})))$. Since $L^4(\mathbb{T})\hookrightarrow H^{-1}(\mathbb{T})$, it follows that \begin{equation*} \mathbb{E}\|M^\varepsilon\|_{W^{\nu,4}(0,T;H^{-1}(\mathbb{T}))}^4\leq C. \end{equation*} We use the continuous injection $$ W^{\nu,4}(0,T;H^{-1}(\mathbb{T}))\hookrightarrow C^{0,\mu}([0,T];H^{-1}(\mathbb{T})) $$ for every $0<\mu<\nu-\frac{1}{4}$ to obtain \begin{equation} \mathbb{E}\|M^\varepsilon\|_{C^{0,\mu}([0,T];H^{-1}(\mathbb{T}))}^4\leq C. \label{Esobolevfrac2}\end{equation} Besides, taking $s=0$ in \eqref{deltaMts} and integrating with respect to $t\in(0,T)$ also gives \begin{equation} \mathbb{E}\|M^\varepsilon\|_{L^4(Q_T)}^4\leq C. \label{AubinSimonNice}\end{equation} By the Aubin-Simon compactness Lemma, \cite{Simon87}, the set $$ A_R:=\left\{M\in L^2(Q_T); \|M\|_{C^{0,\mu}([0,T];H^{-1}(\mathbb{T}))}\leq R,\;\|M\|_{L^4(Q_T)}\leq R\right\} $$ is compact in $C([0,T];H^{-1}(\mathbb{T}))$, hence compact in $L^2(0,T;H^{-1}(\mathbb{T}))$. Consequently \eqref{Esobolevfrac2} and \eqref{AubinSimonNice} show that $(M^\varepsilon)$ is tight as an $L^2(0,T;H^{-1}(\mathbb{T}))$-random variable, and we conclude that $(\partial_t M^\varepsilon)$ is tight as a $W^{-1,2}(Q_T)$-random variable. \end{proof} \subsubsection{Functional equation}\label{sec:CCcl} \begin{proposition} Let ${\mathbf{U}_\varepsilon}_0\in W^{2,2}(\mathbb{T})$ satisfy ${\rho_\varepsilon}_0\geq c_{\varepsilon 0}$ a.e. in $\mathbb{T}$, for a positive constant $c_{\varepsilon 0}$. Let $p\in\mathbb{N}$ satisfy $p\geq 4+\frac{1}{2\theta}$.
Assume that hypotheses \eqref{A0eps}, \eqref{Trunceps}, \eqref{Lipsigmaeps} are satisfied, that ${\mathbf{U}_\varepsilon}_0\in\Lambda_{\varkappa_\varepsilon}$ and that \begin{equation}\label{UniformInitialEnergyMurat} \mathbb{E}\int_\mathbb{T} \left(\eta_0(\mathbf{U}_{\varepsilon 0})+\eta_{2p}(\mathbf{U}_{\varepsilon 0}) \right)dx \end{equation} is bounded uniformly with respect to $\varepsilon$. Let $\mathbf{U}_\varepsilon$ be the bounded solution to \eqref{stoEulereps}. If $\gamma>2$, we suppose that \eqref{uepsLmg2} is satisfied. Let $(\eta,H)$ be an entropy-entropy flux pair of the form \eqref{entropychi}-\eqref{entropychiflux} with $g\in\mathcal{G}$ (\textit{cf.} \eqref{gsubquad}). Then the family $$ \left\{\mathrm{div}_{t,x}(\eta({\mathbf{U}_\varepsilon}),H({\mathbf{U}_\varepsilon}));\varepsilon\in(0,1)\right\} $$ is tight in $W^{-1,2}(Q_T)$. \label{prop:divcurl}\end{proposition} \begin{proof} \textbf{Step 1.} Let $s>2$ be a fixed exponent. We assume that $s$ is close enough to $2$ in order to ensure \begin{equation}\label{pVSs} p\geq \frac32 s+\frac{s-1}{2\theta}. \end{equation} By Lemma~\ref{lemmaentropies2}, we have, under condition \eqref{pVSs}, \begin{equation}\label{momentseta} |\eta(\mathbf{U})|^s,\; |H(\mathbf{U})|^s\leq C(\eta_0(\mathbf{U})+\eta_{p}(\mathbf{U})), \end{equation} for all $\mathbf{U}\in\mathbb{R}_+\times\mathbb{R}$, where $C$ is a constant depending only on $\gamma$, $s$ and $p$. By \eqref{UniformInitialEnergyMurat} and the estimate \eqref{estimmomenteps2} on the moments of $\mathbf{U}_\varepsilon$, we deduce that $\eta(\mathbf{U}_\varepsilon)$ and $H(\mathbf{U}_\varepsilon)$ are uniformly bounded in $L^s(\Omega;L^s(Q_T))$.
As a consequence, $\mathrm{div}_{t,x}(\eta({\mathbf{U}_\varepsilon}),H({\mathbf{U}_\varepsilon}))$ is stochastically bounded in $W^{-1,s}(Q_T)$.\medskip \textbf{Step 2.} We consider the entropy balance equation~\eqref{Itoentropyeps}, which we rewrite as the following distributional equation on $Q_T$: \begin{equation*} \mathrm{div}_{t,x}(\eta({\mathbf{U}_\varepsilon}),H({\mathbf{U}_\varepsilon})) =-\varepsilon \eta''({\mathbf{U}_\varepsilon})\cdot({\partial_x\mathbf{U}_\varepsilon},{\partial_x\mathbf{U}_\varepsilon})+\varepsilon\partial^2_{xx} \eta({\mathbf{U}_\varepsilon})+\partial_t M^\varepsilon+\frac{1}{2}\mathbf{G}^{\varepsilon}({\mathbf{U}_\varepsilon})^2\partial^2_{qq} {\eta}({\mathbf{U}_\varepsilon}), \end{equation*} where $M^\varepsilon$ is defined by \eqref{defMeps}. Let $r\in(1,2)$. By Proposition~\ref{prop:Muratgammaless2}, Proposition~\ref{prop:Murat0g2} and Proposition~\ref{prop:Murat2}, the families $\{\varepsilon\partial^2_{xx} \eta({\mathbf{U}_\varepsilon})\}_{\varepsilon\in(0,1)}$ and $\{\partial_t M^\varepsilon\}_{\varepsilon\in(0,1)}$ are tight in $W^{-1,r}(Q_T)$ and $W^{-1,2}(Q_T)$ respectively. The two remaining terms $$ \varepsilon\eta''({\mathbf{U}_\varepsilon})\cdot({\partial_x\mathbf{U}_\varepsilon},{\partial_x\mathbf{U}_\varepsilon})\quad\mbox{and}\quad \frac12|\mathbf{G}({\mathbf{U}_\varepsilon})|^2\partial^2_{qq}\eta({\mathbf{U}_\varepsilon}) $$ are stochastically bounded in measure on $Q_T$ by \eqref{corestimgradientepsrho}-\eqref{corestimgradientepsu} and \eqref{A0}-\eqref{estimmomenteps2} respectively (we use \eqref{dqqetaG} with $m=1$ to estimate the latter term). \medskip \textbf{Step 3.} We now want to apply the stochastic version of Murat's Lemma, Lemma~A.3 in \cite{FengNualart08}.
If we refer strictly to the statement of Lemma~A.3 in \cite{FengNualart08}, there is an obstacle here, due to the fact that $\varepsilon \partial_{xx}^2 \eta({\mathbf{U}_\varepsilon})$ is neither tight in $W^{-1,2}(Q_T)$ nor stochastically bounded in measure on $Q_T$. However, in the proof of Lemma~A.3 in \cite{FengNualart08}, the property which is used regarding the term that is stochastically bounded in measure on $Q_T$ is only the fact that it is tight in $W^{-1,r}(Q_T)$ for $1<r<2$, due to the compact injection $W^{1,\sigma}_0(Q_T)\hookrightarrow C(\overline{Q_T})$ for $\sigma>2$. The argument of interpolation theory which combines this compactness result with the stochastic bound in $W^{-1,s}(Q_T)$ can therefore be directly applied here: we deduce that the family of $W^{-1,2}(Q_T)$-valued random variables $$ \mathrm{div}_{t,x}(\eta({\mathbf{U}_\varepsilon}),H({\mathbf{U}_\varepsilon}))=\partial_t\eta({\mathbf{U}_\varepsilon})+\partial_x H({\mathbf{U}_\varepsilon}) $$ is tight. \end{proof} We now apply the div-curl lemma to obtain the functional equation \eqref{resultCC} below. \begin{proposition}[Functional Equation] Let ${\mathbf{U}_\varepsilon}_0\in W^{2,2}(\mathbb{T})$ satisfy ${\rho_\varepsilon}_0\geq c_{\varepsilon 0}$ a.e. in $\mathbb{T}$, for a positive constant $c_{\varepsilon 0}$. Let $p\in\mathbb{N}$ satisfy $p\geq 4+\frac{1}{2\theta}$. Assume that hypotheses \eqref{A0eps}, \eqref{Trunceps}, \eqref{Lipsigmaeps} are satisfied, that ${\mathbf{U}_\varepsilon}_0\in\Lambda_{\varkappa_\varepsilon}$ and that \begin{equation}\label{UniformInitialEnergyDivCurl} \mathbb{E}\int_\mathbb{T} \left(\eta_0(\mathbf{U}_{\varepsilon 0})+\eta_{2p}(\mathbf{U}_{\varepsilon 0}) \right)dx \end{equation} is bounded uniformly with respect to $\varepsilon$. Let $\mathbf{U}_\varepsilon$ be the bounded solution to \eqref{stoEulereps}.
If $\gamma>2$, we furthermore suppose that the possible growth of $\varkappa_\varepsilon$ with $\varepsilon$ is limited according to \eqref{growthvarkappaeps}. Let $(\eta,H)$, $(\hat\eta,\hat H)$ be some entropy-entropy flux pairs of the form \eqref{entropychi}-\eqref{entropychiflux} associated to some convex functions $g,\hat g\in\mathcal{G}$ respectively (\textit{cf.} \eqref{gsubquad}). Let $\tilde\nu$ be the random Young measure given by Proposition~\ref{prop:cvrandomY}. Then, almost surely, for a.e. $(x,t)\in Q_T$, \begin{equation}\label{resultCC} \<\hat\eta,\tilde\nu_{x,t}\>\<H,\tilde\nu_{x,t}\>-\<\eta,\tilde\nu_{x,t}\>\<\hat H,\tilde\nu_{x,t}\>=\<\hat\eta H-\eta\hat H,\tilde\nu_{x,t}\>. \end{equation} Besides, if \eqref{resultCC} is realized, then, for all $v,v'\in\mathbb{R}$, \begin{equation} 2\lambda\Big( \langle\chi(v) u \rangle \langle\chi(v')\rangle -\langle\chi(v)\rangle \langle\chi(v') u \rangle \Big) =(v-v') \Big( \langle\chi(v) \chi(v')\rangle -\langle\chi(v)\rangle\langle\chi(v') \rangle \Big), \label{chiinter} \end{equation} where $\chi(\mathbf{U},v)=(v-z)_+^\lambda(w-v)_+^\lambda$, $z:=u-\rho^\theta$, $w:=u+\rho^\theta$, and $$ \langle\chi(v)\rangle=\int \chi(\mathbf{U},v) \, d\tilde\nu_{x,t}(\mathbf{U}). $$ \label{prop:divcurlapplied}\end{proposition} \begin{proof} Let $(\varepsilon_n)$ be the sequence considered in Proposition~\ref{prop:cvrandomY} (to be exact, this is a subsequence of $(\varepsilon_n)$ that we are considering). By Proposition~\ref{prop:LawDiracMass}, $\tilde\nu^{\varepsilon_n}$ is an $L^r$-random Dirac mass for every $n$. In particular, it satisfies almost surely, for a.e. $(x,t)\in Q_T$, the identity \begin{equation}\label{resultCCn} \<\hat\eta,\tilde\nu_{x,t}^{\varepsilon_n}\>\<H,\tilde\nu_{x,t}^{\varepsilon_n}\>-\<\eta,\tilde\nu_{x,t}^{\varepsilon_n}\>\<\hat H,\tilde\nu_{x,t}^{\varepsilon_n}\>=\<\hat\eta H-\eta\hat H,\tilde\nu_{x,t}^{\varepsilon_n}\>.
\end{equation} Let $$ X_n(x,t)=(\<\eta,\tilde\nu_{x,t}^{\varepsilon_n}\>,\<H,\tilde\nu_{x,t}^{\varepsilon_n}\>),\quad \hat X_n(x,t)=(\<\hat\eta,\tilde\nu_{x,t}^{\varepsilon_n}\>,\<\hat H,\tilde\nu_{x,t}^{\varepsilon_n}\>). $$ By Remark~\ref{rk:UUtilde} and \eqref{momentseta}, $X_n$ and $\hat X_n$ are $L^2(Q_T)$-valued $L^2$-random variables. By Proposition~\ref{prop:cvrandomY}, they converge almost surely in weak-$L^2(Q_T)$ to the random variables $$ X(x,t)=(\<\eta,\tilde\nu_{x,t}\>,\<H,\tilde\nu_{x,t}\>),\quad \hat X(x,t)=(\<\hat\eta,\tilde\nu_{x,t}\>,\<\hat H,\tilde\nu_{x,t}\>), $$ respectively. Let $$ \hat X_n^\bot=(-\<\hat H,\tilde\nu_{x,t}^{\varepsilon_n}\>,\<\hat\eta,\tilde\nu_{x,t}^{\varepsilon_n}\>) $$ and let $\tau>0$. Note that $$ \mathrm{curl}_{t,x}\hat X_n^\bot=\mathrm{div}_{t,x}\hat X_n. $$ By Proposition~\ref{prop:divcurl} (we use Remark~\ref{rk:growthuepsL4} to ensure that \eqref{uepsLmg2} is satisfied if $\gamma>2$), there exists a compact subset $K_\tau$ of $W^{-1,2}(Q_T)$ such that the event \begin{equation}\label{divcurlcompact} \mathrm{div}_{t,x}X_n\in K_\tau\;\&\; \mathrm{curl}_{t,x}\hat X_n^\bot\in K_\tau \end{equation} has probability greater than $1-\tau$. If \eqref{divcurlcompact} is realized, then the div-curl lemma\footnote{The div-curl lemma of Murat and Tartar; see the discussion of compensated compactness in \cite{LionsPerthameSouganidis96}.} ensures that the product $X_n\cdot\hat X_n^\bot$ converges in weak-$L^1(Q_T)$ to the product $X\cdot\hat X^\bot$. The product $X_n\cdot\hat X_n^\bot$ is the left-hand side of \eqref{resultCCn}. Therefore, we can pass to the limit in \eqref{resultCCn} to obtain \eqref{resultCC} almost surely conditionally on \eqref{divcurlcompact}, for a.e. $(x,t)\in Q_T$, that is to say for almost all $(\omega,x,t)\in A_\tau$ with $\tilde\mathbb{P}\times\mathcal{L}^2(A_\tau)\geq (1-\tau)\mathcal{L}^2(Q_T)$ (we denote by $\mathcal{L}^2$ the Lebesgue measure on $Q_T$). We consider a sequence $(\tau_n)$ converging to $0$. We can choose the sets $K_{\tau_n}$ as an increasing sequence, in which case $(A_{\tau_n})$ is also increasing. We set $$ A=\bigcup_{n\in\mathbb{N}}A_{\tau_n}. $$ Then $A$ is of full $\tilde\mathbb{P}\times\mathcal{L}^2$-measure and \eqref{resultCC} is satisfied on $A$. The identity \eqref{chiinter} follows from the formulas \eqref{entropychi}, \eqref{entropychiflux} and \eqref{resultCC}. \end{proof} \begin{remark} The function $\chi$ used in the statement of Proposition~\ref{prop:divcurlapplied} is the same as the function $\chi$ used in the formulas~\eqref{entropychi}-\eqref{entropychiflux}. \end{remark} \subsection{Reduction of the Young measure}\label{sec:reduction} We now follow \cite{LionsPerthameSouganidis96} to conclude. We switch from the variables $(\rho,u)$ or $(\rho,q)$ to $(w,z)$, where $w$ and $z$ are the Riemann invariants $$ z=u-\rho^\theta,\quad w=u+\rho^\theta. $$ We then write $\chi(w,z,v)$ for $\chi(\mathbf{U},v)$. Let us fix $(\omega,x,t)$ such that \eqref{chiinter} is satisfied. Set $$ \mathcal{C}=\{v \in \mathbb{R} \, ; \, \langle\chi(v)\rangle >0\}=\bigcup_{(w,z) \in \textrm{supp} \tilde\nu_{x,t}} \{ v \, ; \, z < v < w\}. $$ Let $$ V=\{(\rho,u)\in\mathbb{R}_+\times\mathbb{R}|\rho=0\}=\{(w,z)\in\mathbb{R}^2| w=z\} $$ denote the vacuum region. If $\mathcal{C}$ is empty, then $\tilde\nu_{x,t}$ is concentrated on $V$. Assume now that $\mathcal{C}$ is not empty. Then, by Lemma I.2 in \cite{LionsPerthameSouganidis96}, $\mathcal{C}$ is an open interval in $\mathbb{R}$, say $\mathcal{C}=]a,b[$, where $-\infty\leq a<b\leq+\infty$ (we use here the French notation for open intervals to avoid confusion with the point $(a,b)$ of $\mathbb{R}^2$).
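Let us make explicit an elementary fact, used implicitly in the sequel: in the variables $(w,z)$, the function $\chi$ vanishes identically on the vacuum region $V=\{w=z\}$. Indeed,

```latex
\chi(w,z,v)=(v-z)_+^\lambda\,(w-v)_+^\lambda
\quad\text{and}\quad
w=z\;\Longrightarrow\;(v-z)_+(w-v)_+=0
\quad\text{for every } v\in\mathbb{R},
```

since $z<v$ and $v<w$ cannot hold simultaneously when $w=z$. Consequently, for every $v$, the bracket $\langle\chi(v)\rangle$ charges only the part of $\tilde\nu_{x,t}$ supported outside $V$.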
Furthermore all the computations of \cite{LionsPerthameSouganidis96} apply here, and thus, as in Section I.6 of \cite{LionsPerthameSouganidis96}, we obtain \begin{equation}\label{endLPS96} \langle\rho^{2\lambda\theta} \langle\chi\circ\pi_i\rangle\phi\circ\pi_i \rangle=0, \end{equation} for any continuous function $\phi$ with compact support in $\mathcal{C}$, where $\pi_i\colon\mathbb{R}^2\to\mathbb{R}$ denotes the projection on the first coordinate $w$ if $i=1$, and the projection on the second coordinate $z$ if $i=2$. \smallskip Note that, if $\mathrm{supp}(\tilde\nu_{x,t})\setminus V$ is reduced to a single point $\{Q\}$, then $\pi_i(Q)\in \overline{\mathcal{C}}\setminus\mathcal{C}$ for $i=1$ and $i=2$. Assume by contradiction that there exists $Q\in\mathbb{R}^2$ satisfying \begin{equation}\label{hypQ} Q\in\mathrm{supp}(\tilde\nu_{x,t})\setminus V,\quad \pi_i(Q)\in\mathcal{C}, \end{equation} for some $i\in\{1,2\}$. Then there exists a neighbourhood $K$ of $Q$ such that $K\cap V=\emptyset$, $\tilde\nu_{x,t}(K)>0$, $\pi_i(K)\subset\mathcal{C}$. But then $\<\chi\circ\pi_i\>>0$ on $K$, $\rho>0$ on $K$ and, choosing a continuous function $\phi$ compactly supported in $\mathcal{C}$ such that $\phi>0$ on $\pi_i(K)$, we obtain a contradiction with \eqref{endLPS96}. Consequently \eqref{hypQ} cannot be satisfied. This shows that there cannot exist {\it two distinct} points $P,Q$ in $\mathrm{supp}(\tilde\nu_{x,t})\setminus V$. Indeed, if two such points exist, then either $\pi_1(Q)<\pi_1(P)$, and then $Q$ satisfies \eqref{hypQ} with $i=1$, or $\pi_1(Q)=\pi_1(P)$ and, say, $\pi_2(P)<\pi_2(Q)$, and then $Q$ also satisfies \eqref{hypQ}. The other cases are similar by symmetry of $P$ and $Q$. Therefore, if $\mathcal{C} \neq \emptyset$, then the support of $\tilde\nu_{x,t}$ outside the vacuum region is reduced to a point. In particular, $a$ and $b$ are finite.
Then, by Lemma I.2 in \cite{LionsPerthameSouganidis96}, $P:=(a,b)\in\mathrm{supp}(\tilde\nu_{x,t})$ and $\tilde\nu_{x,t}=\tilde\mu_{x,t}+\alpha\delta_{\tilde\mathbf{U}(x,t)}$, where $\tilde\mu_{x,t}=\tilde\nu_{x,t}|_V$. Using \eqref{chiinter}, we obtain $$ 0=(v-v')\chi(b,a,v)\chi(b,a,v')(\alpha-\alpha^2), $$ for all $v,v'\in(a,b)$. Since $\chi(b,a,v)>0$ for every $v\in(a,b)$, taking $v\neq v'$ gives $\alpha-\alpha^2=0$, and thus $\alpha=0$ or $1$. We have therefore proved the following result. \begin{proposition}[Reduction of the Young measure] Let ${\mathbf{U}_\varepsilon}_0\in W^{2,2}(\mathbb{T})$ satisfy ${\rho_\varepsilon}_0\geq c_{\varepsilon 0}$ a.e. in $\mathbb{T}$, for a positive constant $c_{\varepsilon 0}$. Let $p\in\mathbb{N}$ satisfy $p\geq 4+\frac{1}{2\theta}$. Assume that hypotheses \eqref{A0eps}, \eqref{Trunceps}, \eqref{Lipsigmaeps} are satisfied, that ${\mathbf{U}_\varepsilon}_0\in\Lambda_{\varkappa_\varepsilon}$ and that $$ \mathbb{E}\int_\mathbb{T} \left(\eta_0(\mathbf{U}_{\varepsilon 0})+\eta_{2p}(\mathbf{U}_{\varepsilon 0}) \right)dx $$ is bounded uniformly with respect to $\varepsilon$. Let $\mathbf{U}_\varepsilon$ be the bounded solution to \eqref{stoEulereps}. If $\gamma>2$, we furthermore suppose that the possible growth of $\varkappa_\varepsilon$ with $\varepsilon$ is limited according to \eqref{growthvarkappaeps}. Let $\tilde\nu$ be the random Young measure given by Proposition~\ref{prop:cvrandomY}. Then, almost surely, for a.e. $(x,t)\in Q_T$, either $\tilde\nu_{x,t}$ is concentrated on the vacuum region $V$, or $\tilde\nu_{x,t}$ is reduced to a Dirac mass $\delta_{\tilde\mathbf{U}(x,t)}$. \label{prop:rednu}\end{proposition} \subsection{Martingale solution}\label{sec:martsolfinal} In this section we will prove Theorem~\ref{th:martingalesol}. \subsubsection{An additional continuity estimate}\label{sec:contestimate} In the following statement, $W^{-2,2}(\mathbb{T})$ denotes the dual to the space $W^{2,2}(\mathbb{T})$.
\begin{proposition}[Additional continuity estimate] Let ${\mathbf{U}_\varepsilon}_0\in W^{2,2}(\mathbb{T})$ sa\-tis\-fy ${\rho_\varepsilon}_0\geq c_{\varepsilon 0}$ a.e. in $\mathbb{T}$, for a positive constant $c_{\varepsilon 0}$. Let $p\in\mathbb{N}$ satisfy $p\geq 4+\frac{1}{2\theta}$. Assume that hypotheses \eqref{A0eps}, \eqref{Trunceps}, \eqref{Lipsigmaeps} are satisfied, that ${\mathbf{U}_\varepsilon}_0\in\Lambda_{\varkappa_\varepsilon}$ and that \begin{equation}\label{initEntropyAddCont} \mathbb{E}\int_\mathbb{T} \left(\eta_0(\mathbf{U}_{\varepsilon 0})+\eta_{2p}(\mathbf{U}_{\varepsilon 0}) \right)dx \end{equation} is bounded uniformly with respect to $\varepsilon$. Let $\mathbf{U}_\varepsilon$ be the bounded solution to \eqref{stoEulereps}. Let $g\in\mathcal{G}$ (\textit{cf.} \eqref{gsubquad}) and let $(\eta,H)$ be the entropy-entropy flux pair associated to $g$ by \eqref{entropychi}-\eqref{entropychiflux}. Let $B_\varepsilon(t)$ be the random distribution \begin{align} B_\varepsilon(t)=&\eta({\mathbf{U}_\varepsilon}_0)+\int_0^t\left[-\partial_x H({\mathbf{U}_\varepsilon})+\varepsilon\partial^2_{xx}\eta({\mathbf{U}_\varepsilon})\right]d s\nonumber\\ &+\int_0^t \eta'({\mathbf{U}_\varepsilon})\mathbf{\Psi}^{\varepsilon}({\mathbf{U}_\varepsilon})\,d W(s) + \frac{1}{2}\int_0^t\mathbf{G}^{\varepsilon}({\mathbf{U}_\varepsilon})^2\partial^2_{qq} {\eta}({\mathbf{U}_\varepsilon})ds.\label{Itoentropyepseps} \end{align} Then, for all $\alpha\in(0,1/4)$, the $W^{-2,2}(\mathbb{T})$-valued process $(B_\varepsilon(t))$ has a modification which has almost surely $\alpha$-H\"older trajectories and satisfies \begin{equation}\label{timeeps} \mathbb{E}\|B_\varepsilon\|_{C^\alpha([0,T];W^{-2,2}(\mathbb{T}))}^2=\mathcal{O}(1), \end{equation} where $\mathcal{O}(1)$ depends on $\gamma$, $T$, $p$, on the constant $A_0$ in \eqref{A0eps} and on the bound on \eqref{initEntropyAddCont} only. 
\label{prop:regteps}\end{proposition} \begin{proof} Let $\varphi\in W^{2,2}(\mathbb{T})$ be such that $\|\varphi\|_{W^{2,2}(\mathbb{T})} \leq 1$. For $0\leq s\leq t\leq T$, the increment $\<B_\varepsilon(t)-B_\varepsilon(s),\varphi\>_{W^{-2,2}(\mathbb{T}),W^{2,2}(\mathbb{T})}$ is the sum of various terms, which we denote by $D_\varepsilon^j(s,t)$, $j=1,\ldots,4$. The first term is $$ D_\varepsilon^1(s,t)=\int_s^t \<H(\mathbf{U}_\varepsilon(\sigma)),\partial_x\varphi\>_{L^2(\mathbb{T})} d\sigma. $$ By \eqref{momentseta} and \eqref{estimmomenteps2}, we have $$ \mathbb{E}\sup_{\sigma\in[0,T]}\|H(\mathbf{U}_\varepsilon(\sigma))\|_{L^2(\mathbb{T})}^2=\mathcal{O}(1). $$ It is easy to deduce from this estimate the bound $$ \mathbb{E}|D_\varepsilon^1(s,t)|^4=\mathcal{O}(1)(t-s)^4. $$ We obtain the same bounds for $D_\varepsilon^j(s,t)$, $j=2,4$, where \begin{equation*} D_\varepsilon^2(s,t)=\int_s^t \<\varepsilon\eta(\mathbf{U}_\varepsilon(\sigma)),\partial^2_{xx}\varphi\>_{L^2(\mathbb{T})} d\sigma,\quad D_\varepsilon^4(s,t)=\frac{1}{2}\int_s^t\<\mathbf{G}^{\varepsilon}({\mathbf{U}_\varepsilon})^2\partial^2_{qq} {\eta}({\mathbf{U}_\varepsilon}),\varphi\>_{L^2(\mathbb{T})}d\sigma. \end{equation*} To treat the term $D_\varepsilon^4(s,t)$, we use in particular the estimates \eqref{dqqetaG} (with $m=1$), \eqref{momentseta} and \eqref{estimmomenteps2}, which give $$ \mathbb{E}\sup_{\sigma\in[0,T]}\|\mathbf{G}^{\varepsilon}({\mathbf{U}_\varepsilon})^2\partial^2_{qq} {\eta}({\mathbf{U}_\varepsilon})\|_{L^2(\mathbb{T})}^2(\sigma)=\mathcal{O}(1). $$ Finally, by \eqref{dqetaG} (with $m=1$), \eqref{momentseta} and \eqref{estimmomenteps2} and the Burkholder-Davis-Gundy Inequality, we obtain $$ \mathbb{E}|D_\varepsilon^3(s,t)|^4=\mathcal{O}(1)(t-s)^2, $$ where $$ D_\varepsilon^3(s,t)=\int_s^t \<\eta'({\mathbf{U}_\varepsilon})\mathbf{\Psi}^{\varepsilon}({\mathbf{U}_\varepsilon}),\varphi\>_{L^2(\mathbb{T})}\,d W(\sigma). $$ We conclude by the Kolmogorov Theorem.
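Let us make this last step explicit (a sketch of a standard argument): summing the bounds on the $D_\varepsilon^j(s,t)$ gives, uniformly over $\|\varphi\|_{W^{2,2}(\mathbb{T})}\leq 1$,

```latex
\mathbb{E}\big|\<B_\varepsilon(t)-B_\varepsilon(s),\varphi\>_{W^{-2,2}(\mathbb{T}),W^{2,2}(\mathbb{T})}\big|^4
=\mathcal{O}(1)\,(t-s)^{2},\quad 0\leq s\leq t\leq T,
```

hence, expanding for instance on the Fourier basis of $\mathbb{T}$ to recover the $W^{-2,2}(\mathbb{T})$-norm, $\mathbb{E}\|B_\varepsilon(t)-B_\varepsilon(s)\|_{W^{-2,2}(\mathbb{T})}^4=\mathcal{O}(1)(t-s)^{1+1}$. The Kolmogorov criterion, applied with moment exponent $4$ and time exponent $1+1$, then yields a modification with $\alpha$-H\"older trajectories for every $\alpha<1/4$.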
\end{proof} Let $\mathcal{M}_b(Q_T)$ denote the set of bounded Borel measures on $Q_T$ and $\mathcal{M}_b^+(Q_T)$ denote the subset of non-negative bounded measures. Let $L^2_w(\tilde{\Omega};\mathcal{M}_b(Q_T))$ be the set of $L^2$ mappings $e$ from $(\tilde{\Omega},\tilde\mathcal{F},\tilde\mathbb{P})$ to $\mathcal{M}_b(Q_T)$; the index ``$w$" indicates that the weak-star topology is considered: a basis of neighbourhoods of $m_0\in L^2_w(\tilde{\Omega};\mathcal{M}_b(Q_T))$ consists of the sets $$ \left\{m\in L^2(\tilde{\Omega};\mathcal{M}_b(Q_T)), |\mathbb{E}\<m-m_0,\varphi_i\>_{\mathcal{M}_b(Q_T),C(\overline{Q_T})}|<\alpha,\forall i\in I\right\}, $$ where $\alpha>0$, $I$ is a finite set and $\varphi_i\in L^2(\tilde{\Omega};C(\overline{Q}_T))$ for all $i\in I$. \begin{corollary} Under the hypotheses of Proposition~\ref{prop:regteps}, the random measure $\tilde e_\varepsilon$ on $Q_T$ defined by $$ \<\tilde e_\varepsilon,\varphi\>_{\mathcal{M}_b(Q_T),C(\overline{Q_T})}=\iint_{Q_T} \varepsilon\eta''(\tilde{\mathbf{U}}_{\varepsilon})\cdot({\partial_x\tilde{\mathbf{U}}_{\varepsilon}},{\partial_x\tilde{\mathbf{U}}_{\varepsilon}})\varphi(x,t) dx dt $$ is uniformly bounded in $L^2(\tilde\Omega;\mathcal{M}_b^+(Q_T))$. If\footnote{actually we can assume so by referring to the original proof of the Skorohod Theorem, \cite{Skorohod56}, see also Remark~\ref{rk:Skorohod01}} $\tilde{\Omega}=[0,1]$, $\tilde\mathcal{F}$ is the $\sigma$-algebra of Borel sets on $[0,1]$ and $\tilde\mathbb{P}$ the Lebesgue measure on $[0,1]$, then, up to a subsequence, the sequence $(\tilde e_{\varepsilon_n})$ converges to an element $\tilde e\in L^2(\tilde\Omega;\mathcal{M}_b^+(Q_T))$ in the topology of $L^2_w(\tilde{\Omega};\mathcal{M}_b(Q_T))$. \label{cor:bounde}\end{corollary} \begin{proof} We apply the entropy balance equation \eqref{Itoentropyeps} with $\varphi\equiv 1$ and $t=T$.
With the notations of Proposition~\ref{prop:regteps}, we then obtain \begin{equation}\label{etaeeps} \|\eta(\mathbf{U}_\varepsilon)(T)\|_{L^1(\mathbb{T})}+\|e_\varepsilon\|_{\mathcal{M}_b(Q_T)}=\<B_\varepsilon(T),\varphi\>_{W^{-2,2}(\mathbb{T}),W^{2,2}(\mathbb{T})}. \end{equation} By Remark~\ref{rk:UUtilde}, \eqref{momentseta} and \eqref{estimmomenteps2}, we have $\mathbb{E}\|\eta(\mathbf{U}_\varepsilon)(T)\|_{L^1(\mathbb{T})}^2=\mathcal{O}(1)$. By \eqref{timeeps}, we deduce from \eqref{etaeeps} that $$ \mathbb{E}\|\tilde e_\varepsilon\|^2_{\mathcal{M}_b(Q_T)}=\mathbb{E}\|e_\varepsilon\|^2_{\mathcal{M}_b(Q_T)}=\mathcal{O}(1). $$ If $\tilde{\Omega}=[0,1]$, $\tilde\mathcal{F}$ is the $\sigma$-algebra of the Borel sets on $[0,1]$ and $\tilde\mathbb{P}$ the Lebesgue measure on $[0,1]$, then, by \cite[Theorem~8.20.3]{Edwards65}, $L^2(\tilde{\Omega};\mathcal{M}_b(Q_T))$ is the dual of the space $L^2(\tilde{\Omega};C(\overline{Q}_T))$ (actually Theorem~8.20.3 in \cite{Edwards65} states this result for $\tilde\Omega$ a Hausdorff locally compact space, $\tilde\mathcal{F}$ being the Borel $\sigma$-algebra and $\tilde\mathbb{P}$ being a positive Radon measure on $\tilde\Omega$). The convergence $\tilde e_{\varepsilon_n}\to\tilde e$ in $L^2_w(\tilde{\Omega};\mathcal{M}_b(Q_T))$ follows from the Banach-Alaoglu Theorem. \end{proof} \subsubsection{Convergence of non-linear functionals of $(\mathbf{U}_\varepsilon)$}\label{subsec:convergence} Let $E=\mathbb{R}_+\times\mathbb{R}$. By Proposition~\ref{prop:rednu}, we have: almost surely, for every continuous and \textit{bounded} function $S$ on $E$ and every $\varphi\in L^\infty(Q_T)$, \begin{equation}\label{eq:cvSeps} \iint_{Q_T} S(\tilde{\mathbf{U}}_{\varepsilon_n}(x,t)) \varphi(x,t) dxdt\to \iint_{Q_T}\int_E S(p)\varphi(x,t)d\tilde\nu_{x,t}(p) dxdt, \end{equation} and we know that $$ \mathrm{supp}(\tilde\nu_{x,t})\cap V=\emptyset\Longrightarrow\int_E S(p)d\tilde\nu_{x,t}(p)=S(\tilde{\mathbf{U}}(x,t)).
$$ \begin{proposition}[Limit in the vacuum] Let $g\in\mathcal{G}$ (\textit{cf.} \eqref{gsubquad}) and let $(\eta,H)$ be the entropy-entropy flux pair defined by \eqref{entropychi}-\eqref{entropychiflux}. Under the hypotheses of Proposition~\ref{prop:rednu}, the convergence \eqref{eq:cvSeps} holds true, in probability, for every $\varphi\in L^\infty(Q_T)$ and $S\in\{\eta,H\}$. Besides, the limit is trivial in the vacuum region: almost surely, for a.e. $(x,t)\in Q_T$, for $S\in\{\eta,H\}$, \begin{equation}\label{limVacuum} \mathrm{supp}(\tilde\nu_{x,t})\subset V\Longrightarrow\int_E S(p)d\tilde\nu_{x,t}(p)=0. \end{equation} \label{prop:nlinUeps}\end{proposition} \begin{proof} Let $s>1$ satisfy the constraint $p\geq \frac32 s+\frac{s-1}{2\theta}$ (we may take $s>2$ actually). By Lemma~\ref{lemmaentropies2} (with $m=1$) and \eqref{estimmomenteps2}, we have \begin{equation}\label{equi1eps} \mathbb{E}\iint_{Q_T} \left(|\eta(\mathbf{U}_\varepsilon)|^s+|H(\mathbf{U}_\varepsilon)|^s\right) dx dt\leq C, \end{equation} where $C$ is a constant independent of $\varepsilon$. Consequently, \begin{equation}\label{equi1lim} \tilde\mathbb{E}\iint_{Q_T}\int_E \left(|\eta(p)|^s+|H(p)|^s\right) d\tilde\nu_{x,t}(p) dxdt\leq C. \end{equation} These two equi-integrability results ensure that the convergence \eqref{eq:cvSeps} holds true, in $L^1(\tilde\Omega)$, for every $\varphi\in L^\infty(Q_T)$ and $S\in\{\eta,H\}$. Indeed, in the case $S=\eta$ for example, we can apply \eqref{eq:cvSeps} to $S(p)=\eta(p)\chi_R(|\eta(p)|)$ where $\chi_R$ is the truncation function $\chi_R(r)=\chi\left(\frac{r}{R}\right)$ defined by taking $\chi\in C(\mathbb{R}_+)$ a non-negative non-increasing function supported in $[0,2]$ with value $1$ on $[0,1]$.
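The equi-integrability is converted into a tail bound by the following elementary estimate (stated for $S=\eta$; the case $S=H$ is identical): since $\chi_R(r)=1$ for $r\leq R$ and $0\leq\chi_R\leq1$,

```latex
\int_E |\eta(p)|\left[1-\chi_R(|\eta(p)|)\right]d\tilde\nu_{x,t}(p)
\leq\int_{\{|\eta(p)|>R\}}|\eta(p)|\,d\tilde\nu_{x,t}(p)
\leq\frac{1}{R^{s-1}}\int_E |\eta(p)|^{s}\,d\tilde\nu_{x,t}(p).
```

Integrated over $\tilde\Omega\times Q_T$, the right-hand side is controlled by \eqref{equi1lim}; the same computation with $\tilde\nu^{\varepsilon_n}_{x,t}$ relies on \eqref{equi1eps}. Up to the precise constants, this is the origin of the remainder term in $R^{1-s}$ used below.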
Denoting $$ J_\varepsilon=\iint_{Q_T} \eta(\tilde{\mathbf{U}}_{\varepsilon_n}(x,t)) \varphi(x,t) dxdt,\quad J=\iint_{Q_T}\int_E \eta(p)\varphi(x,t)d\tilde\nu_{x,t}(p) dxdt, $$ and $J_{\varepsilon,R}$, $J_R$ the versions with truncation, we have $$ \tilde\mathbb{E}|J_{\varepsilon_n}-J|\leq \frac{2C}{(2R)^{s-1}}+\tilde\mathbb{E}|J_{\varepsilon_n,R}-J_R| $$ thanks to \eqref{equi1eps} and \eqref{equi1lim}. Since, at fixed $R$, $\lim_{n\to +\infty}\tilde\mathbb{E}|J_{\varepsilon_n,R}-J_R|= 0$ by the dominated convergence Theorem, we get the result. Note that we also established the estimate and limit, for $S=\eta$ or $S=H$, \begin{align} \tilde\mathbb{E}\iint_{Q_T}\int_E |S(p)| d\tilde\nu_{(x,t)}(p) dx dt&<+\infty,\label{estimSmoment}\\ \lim_{R\to+\infty}\tilde\mathbb{E}\iint_{Q_T}\int_E |S(p)|\left[1-\chi_R(|S(p)|)\right]d\tilde\nu_{(x,t)}(p) dx dt=0.\label{limSmoment} \end{align} To prove \eqref{limVacuum}, we use the last two estimates in Lemma~\ref{lemmaentropies2} with $m=1$ and $s>1$ chosen so that $p\geq 2s+\frac{s-1}{2\theta}$ (the hypothesis on $p$ allows $s=2$). Then we get the equi-integrability estimates \begin{equation*}\label{equi1epsu} \mathbb{E}\iint_{Q_T} \left(|\eta(\mathbf{U}_\varepsilon)|^s+|H(\mathbf{U}_\varepsilon)|^s\right)|u|^s dx dt\leq C, \end{equation*} and \begin{equation*}\label{equi1limu} \tilde\mathbb{E}\iint_{Q_T}\int_E \left(|\eta(p)|^s+|H(p)|^s\right)|u|^s d\tilde\nu_{x,t} dxdt\leq C, \end{equation*} where $C$ is a constant independent of $\varepsilon$. This means that, in analogy with \eqref{limSmoment}, we can prove, for $S=\eta$ or $S=H$, $$ \lim_{R\to+\infty}\tilde\mathbb{E}\iint_{Q_T}\int_E |S(p)|\left[1-\chi_R(|S(p)|)\chi_R(|u|)\right]d\tilde\nu_{(x,t)}(p) dx dt=0. $$ Let $(R_k)\uparrow +\infty$. There is a subsequence still denoted $(R_k)$ such that, almost surely, for almost all $(x,t)\in Q_T$, $$ \lim_{k\to+\infty}\int_E |S(p)|\left[1-\chi_{R_k}(|S(p)|)\chi_{R_k}(|u|)\right] d\tilde\nu_{(x,t)} (p)=0.
$$ In particular, if $(\omega,x,t)$ is such that $\mathrm{supp}(\tilde\nu_{x,t})\subset V$, we obtain $$ \int_E S(p)d\tilde\nu_{(x,t)}(p)=\lim_{k\to+\infty}\int_E S(p)\chi_{R_k}(|S(p)|)\chi_{R_k}(|u|)d\tilde\nu_{(x,t)}(p)=0. $$ This concludes the proof of the proposition. \end{proof} \begin{remark} In the case where a priori $L^\infty$ bounds on $(\rho_\varepsilon,u_\varepsilon)$ are known, Proposition~\ref{prop:nlinUeps} is almost automatic. In the absence of such $L^\infty$ bounds, some additional estimates are required. In our context, we have some estimates on moments of arbitrary orders (see \eqref{estimmomenteps2}). In some situations it is quite difficult to obtain enough equi-integrability to conclude; see in particular \cite{LeFlochWestdickenberg07}, where such estimates are proved for the isentropic Euler system with geometric effects. \end{remark} Let us set $$ \tilde\mathbf{U}(x,t)=\begin{pmatrix}\tilde\rho(x,t)\\ \tilde q(x,t)\end{pmatrix}=\int_E \begin{pmatrix}\eta(p)\\ H(p)\end{pmatrix} d\tilde\nu_{(x,t)}(p), $$ where $(\eta(p),H(p))=(\rho,q)$, which is the entropy-entropy flux pair obtained when taking $g(\xi)=1$ in \eqref{entropychi}-\eqref{entropychiflux}. The notation is consistent with the result $\tilde\nu_{(x,t)}=\delta_{\tilde\mathbf{U}(x,t)}$ outside the vacuum. By Proposition~\ref{prop:nlinUeps}, we have \begin{equation}\label{0Vacuum} \tilde\mathbf{U}(x,t)=0\mbox{ in the vacuum} \end{equation} and \begin{equation}\label{Diracrac} \int_E S(p) d\tilde\nu_{(x,t)}(p)=S(\tilde\mathbf{U}(x,t)), \end{equation} for almost all $(\omega,x,t)\in\tilde\Omega\times Q_T$ if $S=\eta$ or $S=H$, where $(\eta,H)$ is associated to a subquadratic function $g$. Besides, we have the following strong convergence result.
\begin{proposition}[Strong convergence] Let $g\in\mathcal{G}$ (\textit{cf.} \eqref{gsubquad}) and let $(\eta,H)$ be the entropy-entropy flux pair defined by \eqref{entropychi}-\eqref{entropychiflux}. Under the hypotheses of Proposition~\ref{prop:rednu}, we have \begin{equation}\label{strongUepsU} \eta(\tilde\mathbf{U}_{\varepsilon_n})\to\eta(\tilde\mathbf{U}),\quad H(\tilde\mathbf{U}_{\varepsilon_n})\to H(\tilde\mathbf{U}) \end{equation} in $L^2(\tilde\Omega\times Q_T)$-strong. \label{prop:nlinUepsStrong}\end{proposition} \begin{proof} We have seen in the proof of Proposition~\ref{prop:nlinUeps} that we can take $s>2$ in the estimates \eqref{equi1eps} and \eqref{equi1lim}. This means that, by using an adapted truncation again, we can prove that $$ \tilde\mathbb{E}\iint_{Q_T} S(\tilde{\mathbf{U}}_{\varepsilon_n}(x,t)) \varphi(x,t) dxdt\to \tilde\mathbb{E}\iint_{Q_T}\int_E S(p)\varphi(x,t)d\tilde\nu_{x,t}(p) dxdt, $$ where \begin{enumerate} \item $S=\eta$ or $S=H$ and $\varphi\in L^2(\tilde\Omega\times Q_T)$, \item $S=|\eta|^2$ or $S=|H|^2$ and $\varphi=1$. \end{enumerate} Then 1. is the weak convergence in $L^2(\tilde\Omega\times Q_T)$, while 2. is the convergence of the norms: combined, they give, for $S=\eta$ or $H$, the strong convergence $$ S(\tilde{\mathbf{U}}_{\varepsilon_n})\to \int_E S(p)d\tilde\nu_{x,t}(p) $$ in $L^2(\tilde\Omega\times Q_T)$. We conclude by \eqref{Diracrac}. \end{proof} \subsubsection{Martingale solution}\label{subsec:martingalesol} Let us apply Proposition~\ref{prop:nlinUepsStrong} to the entropy-entropy flux pair associated to the affine function $g\colon\xi\mapsto\alpha\xi+\beta$. Then $\eta(\mathbf{U})=\alpha q+\beta\rho$. We deduce that \begin{equation}\label{cvUepsn} \tilde{\mathbf{U}}_{\varepsilon_n}\to \tilde\mathbf{U} \end{equation} in $L^2(\tilde\Omega\times Q_T)$ strong.
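The identification $\eta(\mathbf{U})=\alpha q+\beta\rho$ used above follows from the zeroth and first moment identities of the kernel $\chi$ (which hold with the normalization \eqref{gammalaw} of $\kappa$), namely $\int_{\mathbb{R}}\chi(\mathbf{U},\xi)\,d\xi=\rho$ and $\int_{\mathbb{R}}\xi\,\chi(\mathbf{U},\xi)\,d\xi=q$:

```latex
\eta(\mathbf{U})=\int_{\mathbb{R}}(\alpha\xi+\beta)\,\chi(\mathbf{U},\xi)\,d\xi
=\alpha\int_{\mathbb{R}}\xi\,\chi(\mathbf{U},\xi)\,d\xi
+\beta\int_{\mathbb{R}}\chi(\mathbf{U},\xi)\,d\xi
=\alpha q+\beta\rho.
```

In particular, the strong convergence \eqref{cvUepsn} is the special case of \eqref{strongUepsU} corresponding to affine $g$.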
By Proposition~\ref{prop:regteps}, and by considering possibly a subsequence of $(\varepsilon_n)$, we may assume that the process $(\tilde{\mathbf{U}}_{\varepsilon_n}(t))$ converges to $(\tilde\mathbf{U}(t))$ in $L^2(\tilde\Omega;C([0,T];W^{-2,2}(\mathbb{T})))$. Indeed, if we apply Proposition~\ref{prop:regteps} with $g(\xi)=\alpha\xi+\beta$ as above, then $\eta''\equiv0$: by the entropy balance law \eqref{Itoentropyeps}, $B_\varepsilon$ coincides with $\alpha q_\varepsilon+\beta\rho_\varepsilon$. Therefore the trajectories of $(\tilde\mathbf{U}(t))$ are almost surely in $C([0,T];W^{-2,2}(\mathbb{T}))$. \medskip For the moment we have only supposed that $\mathbf{U}_{\varepsilon 0}\in W^{2,2}(\mathbb{T})$ with some uniform bounds. Assume furthermore \begin{equation}\label{cvCI} \lim_{\varepsilon\to0}\mathbf{U}_{\varepsilon 0}=\mathbf{U}_0\quad \mbox{ in }L^2(\mathbb{T}) \end{equation} and a.e. Since $\mathbf{U}_{\varepsilon 0}$ avoids the vacuum ($\rho_{\varepsilon 0}\geq c_{\varepsilon 0}>0$ a.e.), the velocity $u_{\varepsilon 0}=\frac{q_{\varepsilon 0}}{\rho_{\varepsilon 0}}$ is well defined. We assume also the convergence \begin{equation}\label{cvCIu} \lim_{\varepsilon\to0}u_{\varepsilon 0}=u_0\quad \mbox{ in }L^2(\mathbb{T}) \end{equation} and a.e. This means in particular that, for a.e. $x$ in the set $\{\rho_0=0\}$, $q_0(x)=0$. Let $g\in C^2(\mathbb{R})$ be a convex subquadratic function. If \eqref{initEntropyAddCont} is uniformly bounded, then we can apply the dominated convergence Theorem to obtain \begin{equation}\label{cvCIS} \lim_{\varepsilon\to0}\eta(\mathbf{U}_{\varepsilon 0})=\eta(\mathbf{U}_0)\quad \mbox{ in }L^2(\mathbb{T}), \end{equation} for any $\eta$ defined by \eqref{entropychi}.\medskip Recall that $(\tilde\Omega,\tilde\mathbb{P},\tilde\mathcal{F},\tilde{W})$ is given by Proposition~\ref{prop:cvrandomY}.
Let $(\tilde{\mathcal{F}}_t)$ be the $\tilde{\mathbb{P}}$-augmented canonical filtration of the process $(\tilde{\mathbf{U}},\tilde{W})$, \textit{i.e.} $$ \tilde{\mathcal{F}}_t=\sigma\big(\sigma\big(\varrho_t\tilde{\mathbf{U}},\varrho_t\tilde{W}\big)\cup\big\{N\in\tilde{\mathcal{F}};\;\tilde{\mathbb{P}}(N)=0\big\}\big),\quad t\in[0,T], $$ where the restriction operator $\varrho_t$ is defined in \eqref{restr}. We will show that the sextuplet $$ \big(\tilde{\Omega},\tilde{\mathcal{F}},(\tilde{\mathcal{F}}_t),\tilde{\mathbb{P}},\tilde{W},\tilde{\mathbf{U}}\big) $$ is a weak martingale solution to \eqref{stoEuler}.\medskip Our aim is to pass to the limit in the entropy balance equation~\eqref{Itoentropyeps}. Actually, given \eqref{strongUepsU}, it would be more natural to pass to the limit in the weak-in-time formulation of \eqref{Itoentropyeps}, which reads as follows: almost surely, for all $\varphi\in C^2(\overline{Q_T})$ such that $\varphi\equiv 0$ on $\mathbb{T}\times\{t=T\}$, \begin{align} &\iint_{Q_T} \left[\eta({\mathbf{U}_\varepsilon})\partial_t\varphi+H({\mathbf{U}_\varepsilon})\partial_x\varphi+\varepsilon\eta(\mathbf{U}_\varepsilon)\partial_{xx}^2\varphi \right] dx dt +\int_{\mathbb{T}}\eta(\mathbf{U}_{\eps0})\varphi(0)dx\nonumber\\ +&\int_0^T\int_{\mathbb{T}} \eta'({\mathbf{U}_\varepsilon})\mathbf{\Psi}^{\varepsilon}({\mathbf{U}_\varepsilon})\varphi\, dx d W(t) + \frac{1}{2}\iint_{Q_T}\mathbf{G}^{\varepsilon}({\mathbf{U}_\varepsilon})^2\partial^2_{qq} {\eta}({\mathbf{U}_\varepsilon})\varphi dx dt\nonumber\\ =&\iint_{Q_T} \varepsilon \eta''({\mathbf{U}_\varepsilon})\cdot(\partial_x\mathbf{U}_\varepsilon,\partial_x\mathbf{U}_\varepsilon)\varphi dx dt.\label{Itoentropyepsweak} \end{align} However, in order to pass to the limit in the stochastic integral, we need to work at the level of the processes, using the martingale formulation of \eqref{Itoentropyeps}. Therefore, let $\varphi_0\in C^2(\mathbb{T})$ be fixed.
Since $$ t\mapsto \big\langle \eta({\tilde\mathbf{U}_{\varepsilon_n}}(t)),\varphi_0\big\rangle $$ converges to $t\mapsto \big\langle \eta(\tilde\mathbf{U}(t)),\varphi_0\big\rangle$ in $L^1(\tilde\Omega\times(0,T))$, we can assume, up to a subsequence (and using the Fubini Theorem), that for a.e. $t\in[0,T]$, almost surely, $\big\langle \eta(\tilde{\mathbf{U}}_{\varepsilon_n}(t)),\varphi_0\big\rangle$ converges to $\big\langle \eta(\tilde\mathbf{U}(t)),\varphi_0\big\rangle$. Therefore there is a Borel subset $\mathcal{D}$ of $[0,T]$ of full measure such that, for every $t\in\mathcal{D}$, almost surely, we have the convergence \begin{multline*} \big\langle \eta(\tilde{\mathbf{U}}_{\varepsilon_n})(t),\varphi_0\big\rangle-\big\langle \eta({\mathbf{U}_{\varepsilon_n}}_0),\varphi_0\big\rangle- \int_0^t \big\langle H(\tilde{\mathbf{U}}_{\varepsilon_n}),\partial_x \varphi_0\big\rangle+{\varepsilon_n}\big\langle\eta(\tilde{\mathbf{U}}_{\varepsilon_n}),\partial^2_{xx} \varphi_0\big\rangle\,d s\\ \to \big\langle \eta(\tilde{\mathbf{U}})(t),\varphi_0\big\rangle-\big\langle \eta(\mathbf{U}_0),\varphi_0\big\rangle-\int_0^t\big\langle H(\tilde{\mathbf{U}}),\partial_x \varphi_0\big\rangle\,d s. \end{multline*} Note that, by \eqref{cvCIS}, we have $0\in\mathcal{D}$. Furthermore, by Corollary~\ref{cor:bounde}, we have for every $Y\in L^2(\tilde\Omega)$, for every $\varphi\in C_b(\overline{Q_T})$, $$ \tilde\mathbb{E}(\<\tilde e_{\varepsilon_n},\varphi\>_{\mathcal{M}_b(Q_T),C_b(\overline{Q_T})}Y)\to \tilde\mathbb{E}(\<\tilde e,\varphi\>_{\mathcal{M}_b(Q_T),C_b(\overline{Q_T})}Y). $$ Let $\mathfrak{A}$ denote the countable set of the atoms of the non-negative measure $\mathbb{E}\tilde e$. Let $\mathfrak{A}^*=\mathfrak{A}\setminus\{0\}$. Replace $\mathcal{D}$ by $\mathcal{D}\setminus\mathfrak{A}^*$.
Then $\mathcal{D}$ remains a set of full measure in $[0,T]$ containing $t=0$ and, for every $t\in\mathcal{D}$, every $\varphi\in C(\mathbb{T})$ and every $Y\in L^2(\tilde\Omega)$, we have \begin{equation}\label{cvTeepse} \tilde\mathbb{E}\left(\iint_{\overline{Q_T}}\mathbf{1}_{[0,t)}\varphi d\tilde e_{\varepsilon_n}\ Y\right) \to \tilde\mathbb{E}\left(\iint_{\overline{Q_T}}\mathbf{1}_{[0,t)}\varphi d\tilde e\ Y\right). \end{equation} Let \begin{align*} \tilde M^\varepsilon(t)=&\big\langle \eta(\tilde{\mathbf{U}}_{\varepsilon})(t),\varphi_0\big\rangle-\big\langle \eta({\mathbf{U}_{\varepsilon}}_0),\varphi_0\big\rangle-\int_0^t \big\langle H(\tilde{\mathbf{U}}_{\varepsilon}),\partial_x \varphi_0\big\rangle\,d s\\ &-\int_0^t \varepsilon\big\langle\eta(\tilde{\mathbf{U}}_{\varepsilon}),\partial^2_{xx} \varphi_0\big\rangle\,d s +\iint_{\overline{Q_T}}\mathbf{1}_{[0,t)}\varphi_0 d\tilde e_{\varepsilon}, \end{align*} and \begin{equation*} \tilde M(t)=\big\langle \eta(\tilde{\mathbf{U}})(t),\varphi_0\big\rangle-\big\langle \eta(\mathbf{U}_0),\varphi_0\big\rangle-\int_0^t\big\langle H(\tilde{\mathbf{U}}),\partial_x \varphi_0\big\rangle\,d s +\iint_{\overline{Q_T}}\mathbf{1}_{[0,t)}\varphi_0 d\tilde e. \end{equation*} For every $t\in\mathcal{D}$, for every $Y\in L^2(\tilde\Omega)$, we have \begin{equation}\label{cvMartingaleD} \tilde\mathbb{E}\left(\tilde M^{\varepsilon_n}(t) Y\right)\to\tilde\mathbb{E}\left(\tilde M(t) Y\right). \end{equation} With the convergence result \eqref{cvMartingaleD} at hand, we will now prove that $\tilde M(t)$ is a stochastic integral with respect to $\tilde W$. The argument is very similar to that of Section~\ref{subsec:identif}. First, there exist independent $(\tilde{\mathcal{F}}_t)$-adapted Wiener processes $(\tilde\beta_k(t))$ such that $$ \tilde W=\sum_{k\geq 1}\tilde\beta_k(t) e_k $$ almost surely in $\mathcal{X}_W$: the proof is analogous to the proof of Lemma~\ref{lem:tildeW}.
In analogy with Lemma~\ref{lem:tildeM} then, we can show that the processes \begin{equation}\label{tildeMD} \tilde{M},\;\tilde{M}^2-\sum_{k\geq1}\int_0^{\overset{\cdot}{}}\big\langle \sigma_k(\tilde{\mathbf{U}})\partial_q\eta(\tilde{\mathbf{U}}),\varphi_0\big\rangle^2\,d r,\; \tilde{M}\tilde{\beta}_k-\int_0^{\overset{\cdot}{}}\big\langle\sigma_k(\tilde{\mathbf{U}})\partial_q\eta(\tilde{\mathbf{U}}),\varphi_0\big\rangle\,d r \end{equation} are $(\tilde{\mathcal{F}}_t)$-martingales. There is however a notable difference between the result of Lemma~\ref{lem:tildeM} and the result \eqref{tildeMD} here, namely that the martingales in \eqref{tildeMD} are indexed by $\mathcal{D}\subset[0,T]$ since we have used the convergence \eqref{cvMartingaleD}. This means that $$ \tilde\mathbb{E}(\tilde M(t)-\tilde M(s)|\tilde{\mathcal{F}}_s)=0 $$ is satisfied only for $s,t\in\mathcal{D}$ with $s\leq t$, and similarly for the other martingales in \eqref{tildeMD}. If all the processes in \eqref{tildeMD} were continuous martingales indexed by $[0,T]$, we would infer, as in the proof of Proposition \ref{prop:martsoleps}, that \begin{align} \big\langle \eta(\tilde{\mathbf{U}})(t),\varphi_0\big\rangle&-\big\langle \eta(\mathbf{U}_0),\varphi_0\big\rangle-\int_0^t\big\langle H(\tilde{\mathbf{U}}),\partial_x \varphi_0\big\rangle\,d s\nonumber\\ &=-\iint_{\overline{Q_T}}\mathbf{1}_{[0,t)}\varphi_0 d\tilde e+\sum_{k\geq 1}\int_0^t \big\langle\sigma_k(\tilde{\mathbf{U}})\partial_q\eta(\tilde{\mathbf{U}}),\varphi_0\big\rangle\,d\tilde{\beta}_k(s),\label{preEntropy} \end{align} for all $t\in[0,T]$, $\tilde{\mathbb{P}}$-almost surely.
Nevertheless, $\mathcal{D}$ contains $0$ and is dense in $[0,T]$ since it is of full measure, and it turns out, by Proposition~A.1 in \cite{Hofmanova13b} on densely defined martingales, that this is sufficient\footnote{indeed, it is possible to prove the analogues of \eqref{EqA3Martina}-\eqref{EqA3Martina2} for all $s,t\in\mathcal{D}$} to obtain \eqref{preEntropy} for all $t\in\mathcal{D}$, $\tilde{\mathbb{P}}$-almost surely. Then we conclude as in the \textit{proof of Theorem~4.13} of \cite{Hofmanova13b}: let $N(t)$ denote the continuous semi-martingale defined by \begin{equation*} N(t)=\int_0^t\big\langle H(\tilde{\mathbf{U}}),\partial_x \varphi_0\big\rangle\,d s+\sum_{k\geq 1}\int_0^t \big\langle\sigma_k(\tilde{\mathbf{U}})\partial_q\eta(\tilde{\mathbf{U}}),\varphi_0\big\rangle\,d\tilde{\beta}_k(s). \end{equation*} Let $t\in(0,T]$ be fixed and let $\alpha\in C^1_c([0,t))$. By the It\={o} Formula we compute the stochastic differential of $N(s)\alpha(s)$ and obtain (using $N(0)=0$ and $\alpha(t)=0$) \begin{align} 0=\int_0^t N(s)\alpha'(s) ds+\int_0^t &\big\langle H(\tilde{\mathbf{U}}),\partial_x \varphi_0\big\rangle\alpha(s)\,d s\nonumber\\ &+\sum_{k\geq 1}\int_0^t \big\langle\sigma_k(\tilde{\mathbf{U}})\partial_q\eta(\tilde{\mathbf{U}}),\varphi_0\big\rangle\alpha(s)\,d\tilde{\beta}_k(s).\label{preEntropy2} \end{align} By \eqref{preEntropy}, we have $$ N(t)=\big\langle \eta(\tilde{\mathbf{U}})(t),\varphi_0\big\rangle-\big\langle \eta(\mathbf{U}_0),\varphi_0\big\rangle+\iint_{\overline{Q_T}}\mathbf{1}_{[0,t)}\varphi_0 d\tilde e, $$ for all $t\in\mathcal{D}$, $\tilde{\mathbb{P}}$-almost surely.
In particular, by the Fubini Theorem, \begin{align} \int_0^t N(s)\alpha'(s) ds=\int_0^t \big\langle &\eta(\tilde{\mathbf{U}})(s),\varphi_0\big\rangle\alpha'(s)\,ds\nonumber\\ &+\big\langle \eta(\mathbf{U}_0),\varphi_0\big\rangle\alpha(0) -\int_{[0,t]}\alpha(\sigma)d\tilde{\rho}(\sigma),\label{preEntropy3} \end{align} $\tilde{\mathbb{P}}$-almost surely, where we have defined the measure $\tilde{\rho}$ by $$ \tilde{\rho}(B)=\iint_{\overline{Q_T}}\mathbf{1}_{B}\varphi_0 d\tilde e, $$ for $B$ a Borel subset of $[0,T]$. If $\alpha,\varphi_0\geq 0$, then $\tilde\rho$ is non-negative, $$ \int_0^t\alpha(\sigma)d\tilde{\rho}(\sigma)\geq 0,\quad\tilde{\mathbb{P}}-\mbox{almost surely}, $$ and we deduce \eqref{Entropy} from \eqref{preEntropy2}, \eqref{preEntropy3}. This concludes the proof of Theorem~\ref{th:martingalesol}. \section{Conclusion}\label{sec:conclusion} We want to discuss in this concluding section some open questions related to the long-time behaviour of solutions to \eqref{stoEuler}. It is known that for \textit{scalar} stochastic conservation laws with additive noise, and for non-degenerate fluxes, there is a unique ergodic invariant measure, \textit{cf.} \cite{EKMS00,DebusscheVovelle14}. Since both fields of \eqref{stoEuler} are genuinely non-linear, a form of non-degeneracy condition is clearly satisfied in \eqref{stoEuler}. Actually, in the deterministic case $\Phi\equiv0$, the solution converges to the constant state determined by the conservation of the two invariants \begin{equation}\label{invariantDET} \int_0^1 \rho(x)dx,\quad \int_0^1 q(x) dx, \end{equation} see \cite[Theorem 5.4]{ChenFrid99}. This indicates that dissipation effects (\textit{via} the interaction of waves, \textit{cf.} also \cite{GlimmLax70}) occur in the Euler system for isentropic gas dynamics.
However, in a system there is, in a sense, more room for waves to evolve than in a scalar conservation law, and the long-time behaviour in \eqref{stoEuler} may be different from the one described in \cite{EKMS00,DebusscheVovelle14}. \medskip Specifically, consider the case $\gamma=2$. For such a value the system of Euler equations for isentropic gas dynamics is equivalent to the following shallow water system: \begin{subequations}\label{SaintVenant} \begin{align} &h_t+ \partial_x (h u)=0, &\mbox{ in }Q_T,\label{height}\\ &(h u)_t+\partial_x (h u^2+g\frac{h^2}{2})+gh \partial_x Z=0,&\mbox{ in }Q_T,\label{charge} \end{align} \end{subequations} with $Z(x,t)=\Phi^*(x)\frac{dW}{dt}$ and $Q_T=\mathbb{T}\times(0,T)$. For example, we may take \begin{equation}\label{exampleZ} dZ(x,t)=\sum_{k\in\mathbb{N}}\sigma_k\left[\cos(2\pi k x)d\beta_k^\flat(t)+\sin(2\pi k x)d\beta_k^\sharp(t)\right], \end{equation} with $\sigma\in l^2(\mathbb{N})$ and $\beta_k^\flat(t)$, $\beta_k^\sharp(t)$ independent Brownian motions on $\mathbb{R}$ (\eqref{exampleZ} is an example of space-homogeneous noise).\medskip When $Z=Z(x)$, \eqref{SaintVenant} is a model for the one-dimensional flow of a fluid of height $h$ and speed $u$ over a ground described by the curve $z=Z(x)$ ($u(x)$ is the speed of the column of water over the abscissa $x$)\footnote{the fact that $u$ is independent of the altitude $z$ is admissible as long as $h$ is small compared to the longitudinal length $L$ of the channel, $L=1$ here, \textit{cf.} \cite{GerbeauPerthame01}}. For a random $Z$ as in \eqref{exampleZ}, the system \eqref{SaintVenant} describes the evolution of the fluid in terms of $(h,u)$ when its behaviour is forced by the moving topography.
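To fix ideas, here is a minimal sketch of how increments of a noise of the form \eqref{exampleZ} could be sampled on a uniform spatial grid. The function name \texttt{sample\_dZ}, the grid size and the time step are purely illustrative, and this is not the discretization used for the numerical tests discussed below.

```python
import numpy as np

def sample_dZ(sigma, x, dt, rng):
    """Sample one increment dZ(x) over a time step dt of the noise
    dZ(x,t) = sum_k sigma_k [cos(2 pi k x) db_k^flat + sin(2 pi k x) db_k^sharp],
    truncated to the modes k = 1, ..., len(sigma)."""
    sigma = np.asarray(sigma, dtype=float)
    K = len(sigma)
    # Brownian increments over [t, t+dt]: centred Gaussians of variance dt
    db_flat = rng.normal(0.0, np.sqrt(dt), K)
    db_sharp = rng.normal(0.0, np.sqrt(dt), K)
    k = np.arange(1, K + 1)
    # mode matrices of shape (K, len(x))
    C = np.cos(2 * np.pi * np.outer(k, x))
    S = np.sin(2 * np.pi * np.outer(k, x))
    # weighted sum over the modes
    return sigma @ (C * db_flat[:, None] + S * db_sharp[:, None])

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 64, endpoint=False)  # uniform grid on the torus
sigma = np.ones(5)                              # sigma_k = 1_{1 <= k <= 5}
dZ = sample_dZ(sigma, x, dt=1e-3, rng=rng)
```

Note that each retained mode has zero spatial mean on the torus, so the sampled increment has (numerically) zero average over a full-period grid.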
Note that, for \textit{smooth} solutions to \eqref{SaintVenant}, with a noise given by \eqref{exampleZ}, the energy balance reads \begin{equation}\label{ESaintVenant} \frac{d\;}{dt}\mathbb{E}\int_\mathbb{T} \eta_E(\mathbf{U}(x,t))dx=\frac12\|\sigma\|_{l^2(\mathbb{N})}^2\mathbb{E}\int_\mathbb{T} h(x,t) dx,\quad \eta_E(\mathbf{U}):=h\frac{u^2}{2}+g\frac{h^2}{2}. \end{equation} Since the total height $\int_\mathbb{T} h(x,t) dx$ is conserved in the evolution, the input of energy by the noise is done at \textit{constant} rate: \begin{equation}\label{ESaintVenant2} \frac{d\;}{dt}\mathbb{E}\int_\mathbb{T} \eta_E(\mathbf{U}(x,t))dx=\mathrm{Cst}=\frac12\|\sigma\|_{l^2(\mathbb{N})}^2\mathbb{E}\int_\mathbb{T} h_0(x) dx. \end{equation} Of course, for weak entropy solutions the equality \eqref{ESaintVenant} is not satisfied. We have instead \begin{equation}\label{inESaintVenant} \frac{d\;}{dt}\mathbb{E}\int_\mathbb{T} \eta_E(\mathbf{U}(x,t))dx\leq\frac12\|\sigma\|_{l^2(\mathbb{N})}^2\mathbb{E}\int_\mathbb{T} h_0(x) dx, \end{equation} as a consequence of the entropy inequalities. In particular dissipation of energy occurs in shocks. Therefore, the question is to determine if an equilibrium in law (and which kind of equilibrium) for such a random process as the solution to \eqref{SaintVenant} can be reached when time goes to $+\infty$ as a result of the balance between production of energy in the stochastic source term and dissipation of energy in shocks. A hint for the existence of a unique, ergodic, invariant measure is the ``loss of memory in the system" given by the ergodic theorem: if $f$ is a bounded, continuous functional of the solution $\mathbf{U}(t)$, then \begin{equation}\label{ergodicT} \lim_{T\to+\infty}\frac1T\int_0^T f(\mathbf{U}(t)) dt= \<f,\mu\> \mbox{ a.s.} \end{equation} where $\mu$ is the invariant measure. Before testing the ergodic convergence \eqref{ergodicT}, one has first to restrict the evolution to the right manifold.
Indeed, in the scalar case \cite{EKMS00,DebusscheVovelle14}, say for the equation $$ dv+\partial_x (A(v))\,dt=\partial_x\phi(x)dW(t),\quad x\in\mathbb{T},t>0, $$ there is a unique invariant measure $\mu_\lambda$ indexed by the constant parameter $$ \lambda=\int_{\mathbb{T}} v(x)dx\in\mathbb{R}. $$ For \eqref{SaintVenant}, the entropy solution is evolving on the manifold $$ \int_{\mathbb{T}} h(x)dx=\mathrm{cst}. $$ Since $\mathbb{E}\int_0^t h(s) d\beta_k^\flat(s)=\mathbb{E}\int_0^t h(s) d\beta_k^\sharp(s)=0$ for all $k$ (this is the expectation of a stochastic integral), we have a second conservation law by \eqref{charge}: $$ \mathbb{E}\int_{\mathbb{T}} q(x,t)dx=\mathbb{E}\int_{\mathbb{T}} q_0(x)dx. $$ It seems therefore that the final equilibrium and the invariant measure, if they exist, should be determined uniquely by the initial value of the parameters \eqref{invariantDET}. This is what we illustrate by numerical approximations on Figure~\ref{fig:testnum}. \begin{figure}[h] \centering \includegraphics[scale=0.6]{4testsT10.png} \caption{Time-averaged energy for the four tests.} \label{fig:testnum} \end{figure} In Figure~\ref{fig:testnum}, time is the abscissa coordinate and the time-averaged energy $$ \frac{1}{t}\int_0^t\int_\mathbb{T} \eta_E(\mathbf{U}(x,s))\,dx\,ds $$ is the ordinate coordinate. There are four different tests corresponding to four different initial conditions. The simulation on the time interval $[0,T]$, $T=10$, has been done several times, that is, for several realizations of the noise. The numerical values corresponding to each test are the following ones: first, we have taken $g=2$, $Z$ as in \eqref{exampleZ} with $\sigma_k=\mathbf{1}_{1\leq k\leq 5}$ and $h_0(x)\equiv 1$ in all four tests. The value of the initial velocity is then $$ u_0(x)=\mathbf{1}_{0<x<1/2}\mbox{ \textcolor{red}{[Test 1]}},\quad u_0(x)=\frac12\mbox{ \textcolor{blue}{[Test 2]}},\quad u_0(x)=0\mbox{ \textcolor{green}{[Test 3]}}, $$ and $$ u_0(x)=-\frac12\mathbf{1}_{0<x<1/2}+\frac12\mathbf{1}_{1/2<x<1}\mbox{ [Test 4]}.
$$ For the four test cases considered, the quantity $\int_\mathbb{T} h\, dx$ is of course the same, and $\int_\mathbb{T} q\, dx$ has a common value in Tests 1-2 and 3-4 respectively. Observe indeed the common convergence in Tests 1-2 and 3-4. The existence of an invariant measure will be addressed in future work.
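The kind of computation behind Figure~\ref{fig:testnum} can be sketched with a basic first-order scheme. Everything below (the Lax--Friedrichs discretization, the resolution, the horizon, and a noise amplitude much smaller than the $\sigma_k = 1$ of the tests, so that this crude explicit scheme stays in a mild regime) is an illustrative assumption, not the scheme actually used for the figure; the sketch tracks the conserved total height and the time-averaged energy:

```python
import numpy as np

# Illustrative sketch of a stochastic shallow water simulation: first-order
# Lax-Friedrichs scheme on the torus, with the momentum forced through the
# random topography of (exampleZ). The scheme, grid, horizon and the reduced
# noise amplitude `amp` are assumptions made for this sketch only.
rng = np.random.default_rng(1)
g, K, amp, nx, T = 2.0, 5, 0.01, 100, 0.5
dx = 1.0 / nx
x = (np.arange(nx) + 0.5) * dx
h, q = np.ones(nx), 0.5 * np.ones(nx)       # Test 2 data: h0 = 1, u0 = 1/2
k = np.arange(1, K + 1)[:, None]
mass0, t, int_energy = h.mean(), 0.0, 0.0

while T - t > 1e-12:
    c = np.abs(q / h) + np.sqrt(g * h)      # characteristic speeds
    dt = min(0.4 * dx / c.max(), T - t)
    U = np.array([h, q])
    F = np.array([q, q ** 2 / h + 0.5 * g * h ** 2])
    # conservative Lax-Friedrichs update, periodic in space
    U = 0.5 * (np.roll(U, 1, 1) + np.roll(U, -1, 1)) \
        - 0.5 * dt / dx * (np.roll(F, -1, 1) - np.roll(F, 1, 1))
    h, q = U
    # momentum kick -g h (dZ)_x, with dZ sampled as in (exampleZ)
    db_f = rng.normal(0.0, np.sqrt(dt), (K, 1))
    db_s = rng.normal(0.0, np.sqrt(dt), (K, 1))
    dZx = amp * (2 * np.pi * k * (np.cos(2 * np.pi * k * x) * db_s
                                  - np.sin(2 * np.pi * k * x) * db_f)).sum(0)
    q = q - g * h * dZx
    int_energy += dt * np.mean(q ** 2 / (2 * h) + 0.5 * g * h ** 2)
    t += dt

assert abs(h.mean() - mass0) < 1e-9        # total height is conserved
assert h.min() > 0.0                       # no vacuum in this mild regime
avg_energy = int_energy / T                # the quantity plotted in the figure
```

In this mild regime the run illustrates the two conservation and boundedness properties discussed above; reproducing the actual curves of the figure would require the full noise amplitude and a more careful scheme.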
\section{Introduction} Consider a system of $N$ particles of unit charge and negligible mass under the effect of the Coulomb force. We may describe the stationary states using a wave-function $\psi(x_1, \dotsc, x_N)$, where $x_j \in \mathbb{R}^3$; via the Born interpretation, $\abs{\psi(x_1, \dotsc, x_N)}^2$ may be viewed as the density of the probability that the particles occupy the positions $x_1, \dotsc, x_N$, and it is symmetric, since the particles are indistinguishable. When the semi-classical limit is considered, as already proved in \cite{bindini2017, cotar2013, cotar2017, lewin2017}, the stationary states reach the minimum of potential energy, \emph{i.e.}, \begin{equation} \label{pot-min} V_0 = \min_{\psi} V(\psi) = \min \int_{\mathbb{R}^{3N}} c(x_1, \dotsc, x_N) \abs{\psi(x_1, \dotsc, x_N)}^2 \mathop{}\!\mathrm{d} x_1 \dotsm \mathop{}\!\mathrm{d} x_N, \end{equation} where $c$ is the Coulomb (potential) cost function $c \colon (\mathbb{R}^3)^N \to \mathbb{R}$ defined as \[ c(x_1,\dotsc, x_N) = \sum_{1 \leq i < j \leq N} \frac{1}{\abs{x_i - x_j}}. \] This can also be viewed as the exchange correlation functional linking the Kohn-Sham to the Hohenberg-Kohn approach, see for instance \cite{gori2009density}. Given any wave-function $\psi$, define its single-particle density as \[ \rho^{\psi}(x) = \int_{\mathbb{R}^{3(N-1)}} \abs{\psi(x, x_2, \dotsc, x_N)}^2 \mathop{}\!\mathrm{d} x_2 \dotsm \mathop{}\!\mathrm{d} x_N, \] which is quite natural from the physical point of view, since the charge density is a fundamental quantum-mechanical observable. It is a well-known result by Lieb \cite{lieb1983} (see also Levy \cite{levy1979}) that the set of all possible marginal densities is \[ \mathcal{R} = \gra{\rho \in L^1(\mathbb{R}^d) \left | \right. \rho \geq 0, \sqrt{\rho} \in H^1(\mathbb{R}^d), \int_{\mathbb{R}^d} \rho(x) \mathop{}\!\mathrm{d} x = 1}.
\] One may thus consider \[ C(\rho) = \min \gra{ \int_{\mathbb{R}^{3N}} c(x_1, \dotsc, x_N) \abs{\psi(x_1, \dotsc, x_N)}^2 \mathop{}\!\mathrm{d} x_1 \dotsm \mathop{}\!\mathrm{d} x_N \left | \right. \rho^{\psi} = \rho }, \] and factorize the original minimum problem \eqref{pot-min} as \[ V_0 = \min_{\rho \in \mathcal{R}} \min_{\rho^\psi = \rho} V(\psi) = \min_{\rho} C(\rho). \] This is a well known approach, which dates back to Thomas and Fermi, and was later revised by Hohenberg and Kohn \cite{hohenberg1964}, Levy \cite{levy1979} and Lieb \cite{lieb1983}, whose questions are still sources of ideas for this field. \vspace{0.5cm} In this paper, firstly we generalize the physical dimension $d = 3$ to any $d \geq 1$. Moreover, we adopt a measure-theoretic approach: instead of considering wave-functions, we set the problem for every probability over $(\mathbb{R}^d)^N$ and consider the corresponding relaxed cost \[ \mathcal{C}(P) = \int_{(\mathbb{R}^d)^N} c(x_1,\dotsc, x_N) \mathop{}\!\mathrm{d} P(x_1, \dotsc, x_N), \] where $P \in \mathcal{P}((\mathbb{R}^d)^N)$ is a probability measure. In this fashion, the single-particle density constraint gives rise to a multi-marginal optimal transport problem of the form \begin{align} \begin{split} \label{multi-problem} C(\rho) = \inf \bigg \{ &\int_{(\mathbb{R}^d)^N} c(x_1,\dotsc, x_N) \mathop{}\!\mathrm{d} P(x_1, \dotsc, x_N) \colon \\ & P \in \mathcal{P}((\mathbb{R}^d)^N), \ \pi_{\#}^i P = \rho, i = 1, \dotsc, N \bigg \}, \end{split} \end{align} where $\rho$ is a fixed probability measure over $\mathbb{R}^d$, and $\pi^i$ is the projection over the $i$-th factor of $(\mathbb{R}^d)^N$.
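As a toy discrete instance of \eqref{multi-problem} (our own example, not taken from the paper): for $N = 2$ and $\rho$ uniform on three points of $\mathbb{R}$, a transport plan is $1/3$ times a doubly stochastic matrix, so by Birkhoff's theorem the infimum is attained on permutation plans, and the Coulomb cost rules out the diagonal:

```python
import itertools
import numpy as np

# Brute-force the two-marginal problem with Coulomb cost and rho uniform on
# {0, 1, 2}: plans supported on graphs of permutations suffice (Birkhoff),
# and any permutation with a fixed point has infinite cost.
pts = np.array([0.0, 1.0, 2.0])
m = len(pts)

def pair_cost(i, j):
    return np.inf if i == j else 1.0 / abs(pts[i] - pts[j])

best = min(
    sum(pair_cost(i, s[i]) for i in range(m)) / m  # plan (1/m) sum_i delta_(x_i, x_s(i))
    for s in itertools.permutations(range(m))
)
assert abs(best - 5.0 / 6.0) < 1e-12  # the two derangements both give (1 + 1 + 1/2)/3
```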
It is a simple and well known observation that the infimum \eqref{multi-problem} is equal to \begin{align} \begin{split} \label{sym-multi-problem} C(\rho) = \inf \bigg \{ &\int_{(\mathbb{R}^d)^N} c(x_1,\dotsc, x_N) \mathop{}\!\mathrm{d} P(x_1, \dotsc, x_N) \colon \\ & P \in \mathcal{P}((\mathbb{R}^d)^N), P \text{ symmetric }, \pi_{\#}^i P = \rho, i = 1, \dotsc, N \bigg \}. \end{split} \end{align} In order to give an even stronger result, we take as a cost function a general repulsive potential, as in the following \begin{mydef} \label{repulsive-cost} A function $c \colon (\mathbb{R}^d)^N \to \mathbb{R}$ is a \emph{repulsive cost function} if it is of the form \[ c(x_1, \dotsc, x_N) = \sum_{1 \leq i < j \leq N} \frac{1}{\omega(\abs{x_i-x_j})} \] where $\omega \colon \mathbb{R}^{+} \to \mathbb{R}^{+}$ is continuous, strictly increasing, differentiable on $(0, +\infty)$, with $\omega(0) = 0$. \end{mydef} Although there are many works about this formulation, and the multi-marginal transport problem in general (see for instance \cite{buttazzo2016, colombo2013multimarginal, colombo2013equality, depascale2015, dimarino2017}), none of them gives a condition on $\rho$ which ensures that the infimum in \eqref{sym-multi-problem} is finite. We found that the correct quantity to consider is the one given by the following \begin{mydef} If $\rho \in \mathcal{P}(\mathbb{R}^d)$, the \emph{concentration} of $\rho$ is \[ \mu(\rho) = \sup_{x \in \mathbb{R}^d} \rho(\gra{x}). \] \end{mydef} This allows us to state the main result: \begin{thm} \label{main-thm} Let $c$ be a repulsive cost function, and $\rho \in \mathcal{P}(\mathbb{R}^d)$ with \begin{equation} \label{conc-condition} \mu(\rho) < \frac{1}{N}. \end{equation} Then the infimum in \eqref{sym-multi-problem} is finite. \end{thm} After this paper was already submitted, the author became aware of an independent work in preparation by F. Stra, S. Di Marino and M. Colombo about the same problem.
The techniques are different and the second result, although not yet available in preprint form, seems to be closer in approach to some arguments in \cite{buttazzo2016}. \paragraph{Structure of the paper} In Section \ref{notation} we give some notation, and collect some definitions, constructions and results to be used later. In particular, we state and prove a simple but useful result about partitioning $\mathbb{R}^d$ into measurable sets with prescribed mass. We then show in Section \ref{sharpness} that the condition \eqref{conc-condition} is sharp, \emph{i.e.}, given any repulsive cost function, there exists $\rho \in \mathcal{P}(\mathbb{R}^d)$ with $\mu(\rho) = 1/N$, and $C(\rho) = \infty$. The construction of this counterexample is explicit, but it is important to note that the marginal $\rho$ depends on the given cost function. Finally we devote Sections \ref{zero-atoms} to \ref{final-section} to the proof of Theorem \ref{main-thm}. The construction is universal, in the following sense: given $\rho \in \mathcal{P}(\mathbb{R}^d)$ such that \eqref{conc-condition} holds, we exhibit a symmetric transport plan $P$ which has support outside the region \[ D_{\alpha} = \gra{(x_1, \dotsc, x_N) \in (\mathbb{R}^d)^N \left | \right. \exists i \neq j \text{ with } \abs{x_i-x_j} < \alpha} \] for some $\alpha > 0$. This implies that the cost of $P$ is finite for any repulsive cost function. \paragraph{Acknowledgements} The author is grateful to prof. Luigi Ambrosio and prof. Luigi De Pascale for all their useful remarks, and wishes to thank prof. Emmanuel Trélat for his suggestions. \section{Notation and preliminary results} \label{notation} In the following, $x, x_j$ denote elements of $\mathbb{R}^d$, and $X = (x_1, \dotsc, x_N)$ is an element of $(\mathbb{R}^d)^N = \mathbb{R}^{Nd}$. We also denote by $B(x_j, r)$ the ball with center $x_j \in \mathbb{R}^d$ and radius $r > 0$.
Where it is not specified, the integrals are extended to the whole space; if $\tau$ is a measure over $\mathbb{R}^d$, we denote by $\abs{\tau}$ its total mass, \emph{i.e.}, \[ \abs{\tau} = \int_{\mathbb{R}^d} \mathop{}\!\mathrm{d} \tau. \] We use the expression \emph{$N$-transport plan for the marginal $\rho$} to denote a probability measure $P \in \mathcal{P}(\mathbb{R}^{Nd})$ with all the marginals equal to $\rho \in \mathcal{P}(\mathbb{R}^d)$. If $P \in \mathcal{M}(\mathbb{R}^{Nd})$ is any measure, we define \[ P_{sym} = \frac{1}{N!} \sum_{s \in S_N} \phi^s_{\#} P, \] where $S_N$ is the permutation group over the elements $\gra{1, \dotsc, N}$, and $\phi^s \colon \mathbb{R}^{Nd} \to \mathbb{R}^{Nd}$ is the function $\phi^s(x_1, \dotsc, x_N) = (x_{s(1)}, \dotsc, x_{s(N)})$. Note that $P_{sym}$ is a symmetric measure; moreover, if $P$ is a probability measure, then also $P_{sym}$ is a probability measure. \begin{lemma} \label{sym-marginals} Let $P \in \mathcal{M}(\mathbb{R}^{Nd})$. Then $P_{sym}$ has marginals equal to \[ \frac{1}{N} \sum_{j = 1}^N \pi^{j}_{\#} P. \] \end{lemma} \begin{proof} Since $P_{sym}$ is symmetric, we may calculate its first marginal: \begin{align*} \pi^1_{\#} P_{sym} &= \pi^1_{\#} \left( \frac{1}{N!} \sum_{s \in S_N} \phi^s_{\#} P \right) = \frac{1}{N!} \sum_{s \in S_N} \pi^1_{\#} (\phi^s_{\#} P) \\ &= \frac{1}{N!} \sum_{s \in S_N} \pi^{s(1)}_{\#} P = \frac{1}{N} \sum_{j = 1}^N \pi^{j}_{\#} P, \end{align*} where the last equality is due to the fact that for every $j = 1, \dotsc, N$ there are exactly $(N-1)!$ permutations $s \in S_N$ such that $s(1) = j$. \end{proof} For a symmetric probability $P \in \mathcal{P}(\mathbb{R}^{Nd})$ we will use the shortened notation $\pi(P)$ to denote its marginals $\pi^{j}_{\#} P$, which are all equal. If $\sigma_1, \dotsc, \sigma_N \in \mathcal{M}(\mathbb{R}^d)$, we define $\sigma_1 \otimes \dotsb \otimes \sigma_N \in \mathcal{M}(\mathbb{R}^{Nd})$ as the usual product measure.
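On a finite discrete space these objects are plain arrays and Lemma \ref{sym-marginals} can be checked directly: a measure on $\{1,\dotsc,m\}^N$ is an $N$-dimensional array, the pushforward $\phi^s_{\#}$ is an axis transposition, and a marginal is a sum over the remaining axes. A sketch with numpy; the sizes and the random $P$ are arbitrary test data:

```python
import itertools
import math
import numpy as np

# Check of the symmetrization lemma for a random measure P on a discrete
# space: every marginal of P_sym equals the average of the N marginals of P.
rng = np.random.default_rng(0)
m, N = 4, 3
P = rng.random((m,) * N)

P_sym = sum(np.transpose(P, s) for s in itertools.permutations(range(N)))
P_sym = P_sym / math.factorial(N)

def marginal(Q, j):
    """Pushforward of Q under the projection on the j-th factor."""
    return Q.sum(axis=tuple(a for a in range(N) if a != j))

avg = sum(marginal(P, j) for j in range(N)) / N
for j in range(N):
    assert np.allclose(marginal(P_sym, j), avg)
```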
In a similar fashion, if $Q \in \mathcal{M}(\mathbb{R}^{(N-1)d})$, $\sigma \in \mathcal{M}(\mathbb{R}^d)$ and $1 \leq j \leq N$, we define the measure $Q \otimes_j \sigma \in \mathcal{M}(\mathbb{R}^{Nd})$ as \begin{equation} \label{tensor-j} \int_{\mathbb{R}^{Nd}} f \mathop{}\!\mathrm{d} (Q \otimes_j \sigma) = \int_{\mathbb{R}^{Nd}} f(x_1, \dotsc, x_N) \mathop{}\!\mathrm{d} \sigma(x_j) \mathop{}\!\mathrm{d} Q(x_1, \dotsc, \hat{x}_j, \dotsc, x_N) \end{equation} for every $f \in C_b(\mathbb{R}^{Nd})$. \subsection{Partitions of non-atomic measures} Let $\sigma \in \mathcal{M}(\mathbb{R}^d)$ be a finite non-atomic measure, and $b_1, \dotsc, b_k$ real positive numbers such that $b_1 + \dotsb + b_k = \abs{\sigma}$. We may want to write \[ \mathbb{R}^d = \bigcup_{j = 1}^k E_j, \] where the $E_j$'s are disjoint measurable sets with $\sigma(E_j) = b_j$. This is trivial if $d = 1$, since the cumulative distribution function $\phi_{\sigma}(t) = \sigma((-\infty, t))$ is continuous, and one may find the $E_j$'s as intervals. However, in higher dimension, the measure $\sigma$ might concentrate over $(d-1)$-dimensional surfaces, which makes the problem slightly more difficult. Therefore we present the following \begin{prop} \label{slice} Let $\sigma \in \mathcal{M}(\mathbb{R}^d)$ be a finite non-atomic measure. Then there exists a direction $y \in \mathbb{R}^d \setminus \gra{0}$ such that $\sigma(H) = 0$ for all the affine hyperplanes $H$ such that $H \perp y$. \end{prop} In order to prove Proposition \ref{slice}, it is useful to present the following \begin{lemma} \label{measure-lemma} Let $(X,\mu)$ be a measure space, with $\mu(X) < \infty$, and $\{E_i\}_{i \in I}$ a collection of measurable sets such that \begin{enumerate} \item $\mu(E_i) > 0$ for every $i \in I$; \item $\mu(E_i \cap E_j) = 0$ for every $i \neq j$. \end{enumerate} Then $I$ is countable. \end{lemma} \begin{proof} Let $i_1, \dotsc, i_n$ be a finite set of indices.
Then using the monotonicity of $\mu$ and the fact that $\mu(E_i \cap E_j) = 0$ if $i \neq j$, \[ \mu(X) \geq \mu \left( \bigcup_{k=1}^n E_{i_k} \right) = \sum_{k=1}^n \mu(E_{i_k}). \] Hence we have that \[ \sup \gra{ \sum_{j \in J} \mu(E_j) \left | \right. J \subset I, J \text{ finite} } \leq \mu(X) < \infty. \] Since all the $\mu(E_i)$ are strictly positive numbers, this is possible only if $I$ is countable. \end{proof} Now we present the proof of Proposition \ref{slice}. \begin{proof} For $k = 0, 1, \dotsc, d-1$ we recall the definitions of the Grassmannian \[ \mathrm{Gr}(k, \mathbb{R}^d) = \gra{v \text{ linear subspace of } \mathbb{R}^d \left | \right. \dim v = k} \] and the affine Grassmannian \[ \mathrm{Graff}(k, \mathbb{R}^d) = \gra{w \text{ affine subspace of } \mathbb{R}^d \left | \right. \dim w = k}. \] Given $w \in \mathrm{Graff}(k, \mathbb{R}^d)$, we denote by $[w]$ the unique element of $\mathrm{Gr}(k, \mathbb{R}^d)$ parallel to $w$. If $S \subseteq \mathrm{Graff}(k, \mathbb{R}^d)$, we say that $S$ is \emph{full} if for every $v \in \mathrm{Gr}(k, \mathbb{R}^d)$ there exists $w \in S$ such that $[w] = v$. For every $k = 0, 1, \dotsc, d-1$ let $S^{k} \subseteq \mathrm{Graff}(k, \mathbb{R}^d)$ be the set \[ S^{k} = \gra{w \in \mathrm{Graff}(k, \mathbb{R}^d) \left | \right. \sigma(w) > 0}. \] The goal is to prove that $S^{d-1}$ is \emph{not} full, while by hypothesis we know that $S^0 = \emptyset$, since $\sigma$ is non-atomic. The following key Lemma leads to the proof in a finite number of steps: \begin{lemma} Let $1 \leq k \leq d-1$. If $S^{k-1}$ is not full, then $S^{k}$ is not full. \end{lemma} \begin{proof} Let $v \in \mathrm{Gr}(k-1, \mathbb{R}^d)$, such that for every $v' \in \mathrm{Graff}(k-1, \mathbb{R}^d)$ with $[v'] = v$ it holds $\sigma(v') = 0$. Consider the collection $W_{v} = \gra{w \in \mathrm{Graff}(k, \mathbb{R}^d) \left | \right. v \subseteq [w]}$.
If $w,w' \in W_v$ are distinct, then $w \cap w' \subseteq v'$ for some $v' \in \mathrm{Graff}(k-1, \mathbb{R}^d)$ with $[v'] = v$, thus $\sigma(w \cap w') = 0$. Since the measure $\sigma$ is finite, by Lemma \ref{measure-lemma} at most countably many elements $w \in W_v$ may have positive measure. Since uncountably many directions $u \in \mathrm{Gr}(k, \mathbb{R}^d)$ contain $v$, while only countably many of them can be parallel to some $w \in W_v$ with $\sigma(w) > 0$, it follows that $S^k$ is not full. \end{proof} \end{proof} \begin{corollary} \label{cor-partition-1} Given $b_1, \dotsc, b_k$ real positive numbers with $b_1 + \dotsb + b_k = \abs{\sigma}$, there exist measurable sets $E_1, \dotsc, E_k \subseteq \mathbb{R}^d$ such that \begin{enumerate}[(i)] \item The $E_j$'s form a partition of $\mathbb{R}^d$, \emph{i.e.}, \[ \mathbb{R}^d = \bigcup_{j = 1}^k E_j, \hspace{0.5cm} E_i \cap E_j = \emptyset \text{ if $i \neq j$;} \] \item $\sigma(E_j) = b_j$ for every $j = 1, \dotsc, k$. \end{enumerate} \end{corollary} \begin{proof} Let $y \in \mathbb{R}^d \setminus \gra{0}$ given by Proposition \ref{slice}, and observe that the cumulative distribution function \[ F(t) = \sigma \left( \gra{x \in \mathbb{R}^d \left | \right. x \cdot y < t} \right) \] is continuous. Hence we may find $E_1, \dotsc, E_k$ each of the form \[ E_j = \gra{x \in \mathbb{R}^d \left | \right. t_j < x \cdot y \leq t_{j+1}} \] for suitable $-\infty = t_1 < t_2 < \dotsb < t_k < t_{k+1} = +\infty$, such that $\sigma(E_j) = b_j$. \end{proof} \begin{corollary} \label{cor-partition-2} Given $b_1, \dotsc, b_k$ non-negative numbers with $b_1 + \dotsb + b_k < \abs{\sigma}$, there exist measurable sets $E_0, E_1, \dotsc, E_k \subseteq \mathbb{R}^d$ such that \begin{enumerate}[(i)] \item The $E_j$'s form a partition of $\mathbb{R}^d$, \emph{i.e.}, \[ \mathbb{R}^d = \bigcup_{j = 0}^k E_j, \hspace{0.5cm} E_i \cap E_j = \emptyset \text{ if $i \neq j$;} \] \item $\sigma(E_j) = b_j$ for every $j = 1, \dotsc, k$; \item the distance between $E_i$ and $E_j$ is strictly positive if $i, j \geq 1$, $i \neq j$.
\end{enumerate} \end{corollary} \begin{proof} If $k = 1$ the result follows trivially by Corollary \ref{cor-partition-1} applied to $b_1, \abs{\sigma} - b_1$. If $k \geq 2$, define \[ \epsilon = \frac{\abs{\sigma} - b_1 - \dotsb - b_k}{k-1} > 0. \] As before, letting $y \in \mathbb{R}^d \setminus \gra{0}$ be given by Proposition \ref{slice} and considering the corresponding cumulative distribution function, we may find $F_1, \dotsc, F_{2k-1}$ each of the form \[ F_j = \gra{x \in \mathbb{R}^d \left | \right. t_j < x \cdot y \leq t_{j+1}} \] for suitable $-\infty = t_1 < t_2 < \dotsb < t_{2k-1} < t_{2k} = +\infty$, such that \begin{align*} \sigma(F_{2j-1}) &= b_j \hspace{0.5cm} \forall j = 1, \dotsc, k \\ \sigma(F_{2j}) &= \epsilon \hspace{0.5cm} \forall j = 1, \dotsc, k-1. \end{align*} Finally we define \begin{align*} E_j &= F_{2j-1} \hspace{0.5cm} \forall j = 1, \dotsc, k \\ E_0 &= \bigcup_{j = 1}^{k-1} F_{2j}. \end{align*} The properties (i), (ii) are immediate to check, while the distance between $E_i$ and $E_j$, for $i, j \geq 1$, $i \neq j$, is uniformly bounded from below by \[ \min \gra{t_{2j+1} - t_{2j} \left | \right. 1 \leq j \leq k-1} > 0. \] \end{proof} \section{The condition \eqref{conc-condition} is sharp} \label{sharpness} In this section we prove that the condition \eqref{conc-condition} is the best possible, \emph{i.e.}, given any repulsive cost function there exists $\rho \in \mathcal{P}(\mathbb{R}^d)$ with $\mu(\rho) = 1/N$ such that $C(\rho) = \infty$. Fix $\omega$ as in Definition \ref{repulsive-cost}, and set \[ k = \int_{B(0,1)} \frac{\omega'(\abs{y})}{\abs{y}^{d-1}} \mathop{}\!\mathrm{d} y. \] Note that $k$ is a positive finite constant, depending only on $\omega$ and the dimension $d$. In fact, integrating in spherical coordinates, \[ k = \int_0^1 \frac{\omega'(r)}{r^{d-1}} \, d\alpha_d\, r^{d-1} \mathop{}\!\mathrm{d} r = d\alpha_d\, \omega(1), \] where $d\alpha_d$ is the surface measure of the unit sphere and $\alpha_d$ is the $d$-dimensional volume of the unit ball $B(0,1) \subseteq \mathbb{R}^d$.
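The value of the normalising constant $k$ can be cross-checked numerically. Here we take dimension $d = 2$ and the illustrative choice $\omega(r) = r$, for which the polar computation gives $\int_{B(0,1)} \abs{y}^{-1} \mathop{}\!\mathrm{d} y = \int_0^1 (1/r)\, 2\pi r \mathop{}\!\mathrm{d} r = 2\pi$. A midpoint-rule sketch with numpy:

```python
import numpy as np

# Grid integration of omega'(|y|)/|y|^(d-1) = 1/|y| over the unit disk
# (d = 2, omega(r) = r); the exact value in polar coordinates is 2*pi.
n = 1500
hcell = 2.0 / n
centres = -1.0 + (np.arange(n) + 0.5) * hcell       # cell centres in [-1, 1]
X, Y = np.meshgrid(centres, centres)
r = np.hypot(X, Y)
val = np.sum((r < 1.0) / np.maximum(r, 1e-12)) * hcell ** 2
assert abs(val - 2 * np.pi) < 0.05                  # singularity is integrable
```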
Now define a probability measure $\rho \in \mathcal{P}(\mathbb{R}^d)$ as \begin{equation} \label{rho-def} \int_{\mathbb{R}^d} f \mathop{}\!\mathrm{d} \rho := \frac{1}{N} f(0) + \frac{N-1}{N} \int_{B(0,1)} f(x) \frac{\omega'(\abs{x})}{k\abs{x}^{d-1}} \mathop{}\!\mathrm{d} x \hspace{0.5cm} \forall f \in C_b(\mathbb{R}^d). \end{equation} This measure has an atom of mass $1/N$ at the origin, and is absolutely continuous on $\mathbb{R}^d \setminus \gra{0}$. Hence the concentration of $\rho$ is equal to $1/N$, even if for every ball $B$ around the origin one has $\rho(B) > 1/N$. We want to prove that any symmetric transport plan with marginals $\rho$ has infinite cost. Let us consider, by contradiction, a symmetric plan $P$, with $\pi(P) = \rho$, such that \[ \int \sum_{1 \leq i < j \leq N} \frac{1}{\omega(\abs{x_i-x_j})} \mathop{}\!\mathrm{d} P(X) < \infty. \] Then one would have the following geometric properties. \begin{lemma} \label{hyperplanes} \begin{enumerate}[(i)] \item $P(\gra{(x_1, \dotsc, x_N) : \exists i \neq j, x_i = x_j}) = 0;$ \item $P$ is concentrated over the $N$ coordinate hyperplanes $\gra{x_j = 0}$, $j = 1, \dotsc, N$, \emph{i.e.}, \[ \supp(P) \subseteq E := \bigcup_{j = 1}^N \gra{x_j = 0}. \] \end{enumerate} \end{lemma} \begin{proof} (i) Since $\omega(0) = 0$, recalling Definition \ref{repulsive-cost}, the cost function is identically equal to $+\infty$ in the region $\gra{(x_1, \dotsc, x_N) : \exists i \neq j, x_i = x_j}$. Therefore, since by assumption the cost of $P$ is finite, it must be \[ P(\gra{(x_1, \dotsc, x_N) : \exists i \neq j, x_i = x_j}) = 0. \] (ii) Define \begin{align*} p_1 &= P(\gra{x_1 = 0}) \\ p_2 &= P(\gra{x_1 = 0} \cap \gra{x_2 = 0}) \\ &\vdots \\ p_N &= P((0,\dotsc,0)). \end{align*} Note that $p_1 = P(\gra{x_1 = 0}) = \pi(P)(\gra{0}) = \rho(\gra{0}) = 1/N$. We claim that $p_2 = \dotsb = p_N = 0$. It suffices to prove that $p_2 = 0$, since by monotonicity of the measure $P$ we have $p_j \geq p_{j+1}$.
Since $P$ has finite cost, \[ \int_{\mathbb{R}^{Nd}} \frac{\mathop{}\!\mathrm{d} P}{\omega(\abs{x_1-x_2})} \] must be finite. However, the integrand is identically $1/\omega(0) = +\infty$ on $\gra{x_1 = 0} \cap \gra{x_2 = 0}$, so that \[ \int_{\mathbb{R}^{Nd}} \frac{\mathop{}\!\mathrm{d} P}{\omega(\abs{x_1-x_2})} \geq \int_{\gra{x_1 = 0} \cap \gra{x_2 = 0}} \frac{\mathop{}\!\mathrm{d} P}{\omega(\abs{x_1-x_2})} = \frac{p_2}{\omega(0)}, \] and hence $p_2$ must be zero. By inclusion-exclusion we have \[ P(E) = \sum_{j = 1}^N (-1)^{j+1} \binom{N}{j} p_j = Np_1 = 1, \] and hence $P$ is concentrated over $E$. \end{proof} In view of Lemma \ref{hyperplanes}, letting $H_j = \{x_j = 0\}$ for $j = 1, \dotsc, N$, \[P = \sum_{j = 1}^N P|_{H_j}.\] For every $j = 1, \dotsc, N$ there exists a unique measure $Q_j$ over $\mathbb{R}^{(N-1)d}$ such that, recalling equation \eqref{tensor-j}, $P|_{H_j} = Q_j \otimes_j \delta_0$, with $Q_j(\mathbb{R}^{(N-1)d}) = \frac{1}{N}$. Since $P$ is symmetric, considering a permutation $s \in S_N$ with $s(j) = j$, it follows that $Q_j$ is symmetric; then, considering any permutation in $S_N$ we see that there exists a symmetric probability $Q$ over $\mathbb{R}^{(N-1)d}$ such that $Q_j = \frac{1}{N} Q$ for every $j = 1, \dotsc, N$, \emph{i.e.}, \[P = \frac{1}{N} \sum_{j = 1}^N Q \otimes_j \delta_0.\] Projecting $P$ to its one-particle marginal and using the definition of $\rho$ in \eqref{rho-def}, we get that $\pi(Q)$ is absolutely continuous w.r.t. the Lebesgue measure, with \[ \frac{\mathop{}\!\mathrm{d} \pi(Q)}{\mathop{}\!\mathrm{d} \mathcal{L}^d} = \frac{\chi_{B(0,1)}(x)\,\omega'(\abs{x})}{k \abs{x}^{d-1}}.
\] Here we get the contradiction, because \begin{align*} \int c(X) dP(X) &\geq \frac{1}{N} \int \frac{1}{\omega(\abs{x_1-x_2})} \delta_{0} (x_1) \mathop{}\!\mathrm{d} x_1 \mathop{}\!\mathrm{d} Q(x_2, \dotsc, x_N) \\ &= \frac{1}{N} \int \frac{1}{\omega(\abs{x_2})} \mathop{}\!\mathrm{d} Q(x_2, \dotsc, x_N) = \frac{1}{N} \int_{\mathbb{R}^d} \frac{1}{\omega(\abs{x})} \mathop{}\!\mathrm{d} \pi (Q)(x) \\ &= \frac{1}{N} \int_{B(0,1)} \frac{\omega'(\abs{x})}{\omega(\abs{x})} \frac{1}{k \abs{x}^{d-1}} \mathop{}\!\mathrm{d} x = \frac{1}{N}\frac{d\alpha_d}{k} \int_0^1 \frac{\omega'(r)}{\omega(r)} \mathop{}\!\mathrm{d} r = +\infty. \end{align*} \section{Non-atomic marginals} \label{zero-atoms} This short section deals with the case where $\rho$ is non-atomic, \emph{i.e.}, $\mu(\rho) = 0$. In this case a transport plan of finite cost is induced by a transport map, in Monge's fashion, which we proceed to construct. Using Corollary \ref{cor-partition-1}, let $E_1, \dotsc, E_{2N}$ be a partition of $\mathbb{R}^d$ such that \[ \rho(E_j) = \frac{1}{2N} \hspace{0.5cm} \forall j = 1, \dotsc, 2N. \] Next we take a measurable function $\phi \colon \mathbb{R}^d \to \mathbb{R}^d$, preserving the measure $\rho$ and defined piecewise such that \begin{align*} \phi(E_j) &= E_{j+2} \hspace{0.5cm} \forall j = 1, \dotsc, 2N-2 \\ \phi (E_{2N-1}) &= E_1 \\ \phi (E_{2N}) &= E_2. \end{align*} The behaviour of $\phi$ on the hyperplanes which separate the $E_j$'s is arbitrary, since they form a $\rho$-null set. Note that $\abs{\phi^a(x) - \phi^b(x)}$, for $0 \leq a < b \leq N-1$, is uniformly bounded from below by some constant $\gamma > 0$: the two slabs involved have even index difference, hence are separated by at least one of the other $E_j$'s, as is clear by the construction of the $E_j$'s (see the proof of Corollary \ref{cor-partition-1}).
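In the simplest case $d = 1$ with $\rho$ the Lebesgue measure on $[0,1]$, the construction is completely explicit: the $E_j$ are $2N$ consecutive intervals of mass $1/(2N)$, and $\phi$ is the rotation $x \mapsto x + 1/N \bmod 1$, which sends $E_j$ to $E_{j+2}$ cyclically. A sketch with numpy (this concrete instance is ours, for illustration):

```python
from itertools import combinations
import numpy as np

# d = 1, rho = Lebesgue on [0,1]: phi(x) = x + 1/N mod 1 maps each interval
# E_j = ((j-1)/(2N), j/(2N)] onto E_{j+2} and preserves rho. Along each orbit
# (x, phi(x), ..., phi^{N-1}(x)) the pairwise distances are nonzero multiples
# of 1/N, so gamma = 1/N works as a uniform lower bound.
N, M = 4, 10_000
x = (np.arange(M) + 0.5) / M                 # grid discretisation of rho
phi = lambda z: (z + 1.0 / N) % 1.0

# phi preserves the uniform measure: it permutes the grid points
assert np.allclose(np.sort(phi(x)), x)

orbit = np.array([(x + a / N) % 1.0 for a in range(N)])
gap = min(np.abs(orbit[a] - orbit[b]).min()
          for a, b in combinations(range(N), 2))
assert gap >= 1.0 / N - 1e-12                # uniform lower bound gamma = 1/N
```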
A transport plan $P$ of finite cost is now defined for every $f \in C_b(\mathbb{R}^{Nd})$ by \[ \int_{\mathbb{R}^{Nd}} f \mathop{}\!\mathrm{d} P = \int_{\mathbb{R}^{d}} f(x, \phi(x), \dotsc, \phi^{N-1}(x)) \mathop{}\!\mathrm{d} \rho(x), \] since \[ \int_{\mathbb{R}^{Nd}} c \mathop{}\!\mathrm{d} P = \sum_{0 \leq a < b \leq N-1} \int_{\mathbb{R}^d} \frac{1}{\omega(\abs{\phi^a(x) - \phi^b(x)})} \mathop{}\!\mathrm{d} \rho(x) \leq \binom{N}{2} \frac{1}{\omega(\gamma)}. \] \section{Marginals with a finite number of atoms} \label{finite-atoms} This section constitutes the core of the proof, as we deal with measures of general form with an arbitrary (but finite) number of atoms. Throughout this and the next Section we assume that the marginal $\rho$ fulfills the condition \eqref{conc-condition}. \subsection{The number of atoms is less than or equal to $N$} Note that, if the number of atoms is at most $N$, then $\rho$ must have a non-atomic part $\sigma$, due to the condition \eqref{conc-condition}. From here on we consider \[ \rho = \sigma + \sum_{i = 1}^k b_i \delta_{x_i}, \] where $b_1 \geq b_2 \geq \dotsb \geq b_k > 0$. We begin with the following \begin{mydef} A \emph{partition} of $\sigma$ of level $k \leq N$ subordinate to $(x_1, \dotsc, x_k;$ $b_1, \dotsc, b_k)$ is \[ \sigma = \tau + \sum_{i = 1}^k \sum_{h = i+1}^N \sigma^i_h, \] where: \begin{enumerate}[(i)] \item $\tau, \sigma^i_h$ are non-atomic measures; \item for every $i$ and every $h \neq h'$, the distance between $\supp \sigma^i_h$ and $\supp \sigma^i_{h'}$ is strictly positive; \item for every $i,h$, if $j \leq i$ then $x_j$ has a strictly positive distance from $\supp {\sigma^i_h}$; \item for every $i,h$, $\abs{\sigma^i_h} = b_i$, and $\abs{\tau} > 0$. \end{enumerate} \end{mydef} Note that such a partition may only exist if \begin{equation} \label{partition-condition} \abs{\sigma} > \sum_{i = 1}^k (N-i) b_i.
\end{equation} On the other hand, the following Lemma proves that the condition \eqref{partition-condition} is also sufficient to get a partition of $\sigma$. \begin{lemma} \label{existence-partition} Let $(b_1, \dotsc, b_k)$ with $k \leq N$, and \[ \abs{\sigma} > \sum_{i = 1}^k (N-i) b_i. \] Then there exists a partition of $\sigma$ subordinate to $(x_1, \dotsc, x_k; b_1, \dotsc, b_k)$. \end{lemma} \begin{proof} Fix $(x_1, \dotsc, x_k)$ and for every $\varepsilon > 0$ define \[ A_\varepsilon = \bigcup_{j = 1}^k B(x_j, \varepsilon) \] and $\sigma_{\varepsilon} = \sigma \chi_{A_\varepsilon}$. Then take $\varepsilon$ small enough such that \begin{equation} \label{existence-condition-eq1} \abs{\sigma - \sigma_{\varepsilon}} > \sum_{i = 1}^k (N-i)b_i, \end{equation} which is possible because $\mu(\sigma) = 0$ ($\sigma$ has concentration zero), and hence $\abs{\sigma_\varepsilon} \to 0$ as $\varepsilon \to 0$. Due to Corollary \ref{cor-partition-2}, the set $\mathbb{R}^d \setminus A_{\varepsilon}$ may be partitioned as \[ \mathbb{R}^d \setminus A_{\varepsilon} = \left( \bigcup_{i = 1}^k \bigcup_{h = i+1}^N E^i_h \right) \cup E, \] with $\sigma(E^i_h) = b_i$, and $\mathrm{dist}(E^i_h,E^i_{h'})$, for $h \neq h'$, uniformly bounded from below. Finally define $\sigma^i_h = \sigma\chi_{E^i_h}$, $\tau = \sigma_\varepsilon + \sigma\chi_{E}$. \end{proof} \begin{prop} \label{partition-construction} Suppose that $k \leq N$ and $(b_1, \dotsc, b_k)$ are such that \begin{equation} \label{existence-condition-eq2} \abs{\sigma} > N b_1 - \sum_{j = 1}^k b_j. \end{equation} Then there exists a transport plan of finite cost with marginals \[ \sigma + \sum_{j = 1}^k b_j \delta_{x_j}. \] \end{prop} \begin{proof} In order to simplify the notation, set $b_{k+1} = 0$. First of all we shall fix a partition of $\sigma$ subordinate to $(x_1, \dotsc, x_k; b_1-b_2, \dotsc, b_{k-1}-b_k, b_k)$.
To do this we apply Lemma \ref{existence-partition}, since \begin{align*} \sum_{i = 1}^{k-1} (N-i)(b_{i} - b_{i+1}) + (N-k)b_k &= (N-1)b_1 - \sum_{i = 2}^k b_i < \abs{\sigma}. \end{align*} Next we define the measures $\lambda_i = \delta_{x_1} \otimes \dotsb \otimes \delta_{x_i} \otimes \sigma^i_{i+1} \otimes \dotsb \otimes \sigma^i_N \in \mathcal{M}(\mathbb{R}^{Nd})$. Let us calculate the marginals of $\lambda_i$: since $\abs{\sigma^i_h} = b_i - b_{i+1}$ for all $h = i+1, \dotsc, N$, we get \[ \pi^{j}_{\#} \lambda_i = \begin{cases} (b_i - b_{i+1})^{N-i} \delta_{x_j} & \text{if $1 \leq j \leq i$} \\ (b_i - b_{i+1})^{N-i-1} \sigma^i_j & \text{if $i+1 \leq j \leq N$.} \end{cases} \] Let us define, for $i = 1, \dotsc, k$, the measure \[ P_i = \frac{N}{(b_i - b_{i+1})^{N-i-1}} (\lambda_i)_{sym}, \] where $P_i = 0$ if $b_i = b_{i+1}$. By Lemma \ref{sym-marginals}, the marginals of $P_i$ are equal to \[ \pi(P_i) = \frac{1}{(b_i - b_{i+1})^{N-i-1}} \sum_{j = 1}^N \pi^j_{\#} \lambda_i = \sum_{j = 1}^i (b_i - b_{i+1}) \delta_{x_j} + \sum_{h = i+1}^N \sigma^i_h, \] so that \[ \sum_{i = 1}^k \pi(P_i) = \sum_{j = 1}^k b_j \delta_{x_j} + \sum_{i = 1}^k \sum_{h = i+1}^N \sigma^i_h. \] It suffices now to take any symmetric transport plan $P_\tau$ of finite cost with marginals $\tau$, given by the result of Section \ref{zero-atoms}, and finally set \[ P = P_\tau + \sum_{i = 1}^k P_i. \] \end{proof} As a corollary we obtain \begin{thm} \label{N-or-less-atoms} If $\rho$ has $k \leq N$ atoms, then there exists a transport plan of finite cost. \end{thm} \begin{proof} Let \[ \rho = \sigma + \sum_{j=1}^k b_j \delta_{x_j}. \] Note that, since $b_1 < 1/N$, \[ \abs{\sigma} = 1 - \sum_{j = 1}^k b_j > Nb_1 - \sum_{j = 1}^k b_j, \] hence we may apply Proposition \ref{partition-construction} to conclude.
\end{proof} \subsection{The number of atoms is greater than $N$} Here we deal with the much more difficult situation in which $\rho$ has $N+1$ or more atoms, \emph{i.e.}, \[\rho = \sigma + \sum_{j = 1}^k b_j \delta_{x_j}\] with $k \geq N+1$ and, as before, $b_1 \geq b_2 \geq \dotsb \geq b_k > 0$. Note that in this case it might happen that $\sigma = 0$. The main point is to use a double induction on the dimension $N$ and the number of atoms $k$, as will be clear in Proposition \ref{mega-prop}. The following lemma is a simple numerical trick needed for the inductive step in Proposition \ref{mega-prop}. \begin{lemma} \label{t-lemma} Let $(b_1, \dotsc, b_k)$ with $k \geq N+2$ and \begin{equation} \label{main-condition} (N-1)b_1 \leq \sum_{j = 2}^k b_j. \end{equation} Then there exist $t_2, \dotsc, t_k$ such that \begin{enumerate}[(i)] \item $t_2 + \dotsb + t_k = (N-1)b_1$; \item for every $j = 2, \dotsc, k$, $0 \leq t_j \leq b_j$, and moreover \[ t_2 \geq \dotsb \geq t_k \] and \[ b_2 - t_2 \geq b_3 - t_3 \geq \dotsb \geq b_{k} - t_{k}; \] \item \[ (N-2) t_2 \leq \sum_{j = 3}^k t_j; \] \item \[ (N-1) (b_2-t_2) \leq \sum_{j = 3}^k (b_j-t_j). \] \end{enumerate} \end{lemma} \begin{proof} For $j = 2, \dotsc, k$ define \[ p_j = \sum_{h = j}^k b_h, \] and let $\bar{\jmath}$ be the least $j \geq 2$ such that $(N-j+2)b_j \leq p_j$; note that $j = N+2$ works --- hence $\bar{\jmath} \leq N+2$. Define \begin{align*} t_j &= b_j - \frac{p_2 - (N-1)b_1}{N} & \text{for $j = 2, \dotsc, \bar{\jmath}-1$,} \\ t_j &= b_j - \frac{b_j}{p_{\bar{\jmath}}} \frac{p_2 - (N-1)b_1}{N} (N-\bar{\jmath}+2) & \text{for $j = \bar{\jmath}, \dotsc, k$.} \end{align*} Next we prove that this choice fulfills the conditions (i)-(iv).
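Before the formal verification, the construction above can be sanity-checked numerically. The following sketch (ours, not part of the paper; the data $N=3$, $b = (3,3,1,1,1,1)$ are chosen by us so that $\bar{\jmath} = 3$ and both branches of the definition are exercised) checks conditions (i)-(iv) in exact rational arithmetic:

```python
from fractions import Fraction as F

def t_values(N, b):
    """Compute t_2, ..., t_k as in the proof, for b = [b_1, ..., b_k]."""
    k = len(b)
    p = {j: sum(b[j - 1:], F(0)) for j in range(2, k + 1)}  # p_j = b_j + ... + b_k
    jbar = next(j for j in range(2, k + 1) if (N - j + 2) * b[j - 1] <= p[j])
    c = (p[2] - (N - 1) * b[0]) / N
    t = {j: b[j - 1] - c for j in range(2, jbar)}
    for j in range(jbar, k + 1):
        t[j] = b[j - 1] - (b[j - 1] / p[jbar]) * c * (N - jbar + 2)
    return t

# illustrative data (ours): N = 3, k = 6 >= N + 2, with jbar = 3
N, b = 3, [F(3), F(3), F(1), F(1), F(1), F(1)]
k = len(b)
assert all(x >= y for x, y in zip(b, b[1:])) and (N - 1) * b[0] <= sum(b[1:])
t = t_values(N, b)
assert sum(t.values()) == (N - 1) * b[0]                       # (i)
assert all(0 <= t[j] <= b[j - 1] for j in t)                   # (ii), bounds
ts = [t[j] for j in range(2, k + 1)]
rs = [b[j - 1] - t[j] for j in range(2, k + 1)]
assert all(x >= y for x, y in zip(ts, ts[1:]))                 # (ii), t_j decreasing
assert all(x >= y for x, y in zip(rs, rs[1:]))                 # (ii), b_j - t_j decreasing
assert (N - 2) * t[2] <= sum(t[j] for j in range(3, k + 1))    # (iii)
assert (N - 1) * (b[1] - t[2]) <= sum(b[j - 1] - t[j] for j in range(3, k + 1))  # (iv)
```

For this choice one finds $t_2 = 8/3$ and $t_j = 5/6$ for $j \geq 3$; condition (iv) is attained with equality, consistent with the remark in the proof below.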
\paragraph{Proof of (i)} \begin{align*} \sum_{j = 2}^k t_j &= p_2 - \frac{p_2 - (N-1)b_1}{N}(\bar{\jmath}-2) - \frac{p_2 - (N-1)b_1}{N} (N-\bar{\jmath}+2) \\ &= p_2 \left( 1 - \frac{\bar{\jmath}-2}{N} - \frac{N-\bar{\jmath}+2}{N} \right) + (N-1)b_1 \left( \frac{\bar{\jmath}-2}{N} + \frac{N-\bar{\jmath}+2}{N} \right) \\ &= (N-1)b_1. \end{align*} \paragraph{Proof of (ii)} In view of the fact that $(N-1)b_1 \leq p_2$ and $\bar{\jmath} \leq N+2$, it is clear that $t_j \leq b_j$. If $j < \bar{\jmath}$ we have $(N-j+2)b_j > p_j$, and hence \[ p_2 = b_2 + \dotsb + b_{j-1} + p_j < (j-2)b_1 + (N-j+2)b_j. \] Thus, since $2 \leq j \leq N+1$, \begin{align*} t_j &= \frac{Nb_j - p_2 + (N-1)b_1}{N} > \frac{Nb_j - (N-j+2)b_j - (j-2)b_1 + (N-1)b_1}{N} \\ &= \frac{(j-2)b_j + (N-j+1)b_1}{N} \geq 0. \end{align*} To show that $t_j \geq 0$ for $j \geq \bar{\jmath}$, we must prove $[p_2-(N-1)b_1](N-\bar{\jmath}+2) \leq Np_{\bar{\jmath}}$, which is trivial if $\bar{\jmath} = N+2$. Otherwise, it is equivalent to \[ -(\bar{\jmath}-2)[p_2 - (N-1)b_1] + N[b_2 + \dotsb + b_{\bar{\jmath}-1} - (N-1)b_1] \leq 0. \] Since $2 \leq \bar{\jmath} \leq N+1$, the first term is non-positive and $b_2 + \dotsb + b_{\bar{\jmath}-1} - (N-1)b_1 \leq -(N-\bar{\jmath}+1)b_1 \leq 0$. \vspace{0.5cm} Using the fact that $b_2 \geq \dotsb \geq b_k$, it is easy to see that $t_2 \geq \dotsb \geq t_{\bar{\jmath}-1}$ and $t_{\bar{\jmath}} \geq \dotsb \geq t_k$ --- note that for $j \geq \bar{\jmath}$ we have $t_j = \alpha b_j$, for some $0\leq \alpha \leq 1$.
As for the remaining inequality, note that \[ t_{\bar{\jmath}-1} \geq t_{\bar{\jmath}} \iff b_{\bar{\jmath}-1} - b_{\bar{\jmath}} \geq \frac{p_2 - (N-1)b_1}{Np_{\bar{\jmath}}} [p_{\bar{\jmath}} -(N-\bar{\jmath}+2)b_{\bar{\jmath}}]. \] We already proved that \[ \frac{p_2 - (N-1)b_1}{Np_{\bar{\jmath}}} \leq \frac{1}{N-\bar{\jmath}+2}; \] moreover, by definition of $\bar{\jmath}$, we have $(N-\bar{\jmath}+3)b_{\bar{\jmath}-1} > p_{\bar{\jmath}-1}$, or equivalently $(N-\bar{\jmath}+2)b_{\bar{\jmath}-1} > p_{\bar{\jmath}}$. Thus \[ \frac{p_2 - (N-1)b_1}{Np_{\bar{\jmath}}} [p_{\bar{\jmath}} -(N-\bar{\jmath}+2)b_{\bar{\jmath}}] \leq \frac{p_{\bar{\jmath}}}{N-\bar{\jmath}+2} - b_{\bar{\jmath}} < b_{\bar{\jmath}-1} - b_{\bar{\jmath}}, \] as wanted. It is left to show that $b_2-t_2 \geq \dotsb \geq b_k-t_k$. It is trivial to check that $b_2-t_2 = \dotsb = b_{\bar{\jmath}-1} - t_{\bar{\jmath}-1}$, and $b_{\bar{\jmath}} - t_{\bar{\jmath}} \geq \dotsb \geq b_k - t_k$ using $b_{\bar{\jmath}} \geq \dotsb \geq b_k$ as before. Finally, \[ b_{\bar{\jmath}-1}-t_{\bar{\jmath}-1} \geq b_{\bar{\jmath}} - t_{\bar{\jmath}} \iff \frac{p_2-(N-1)b_1}{N} \geq \frac{b_{\bar{\jmath}}}{p_{\bar{\jmath}}} \frac{p_2-(N-1)b_1}{N} (N-\bar{\jmath}+2), \] which is true since $(N-\bar{\jmath}+2)b_{\bar{\jmath}} \leq p_{\bar{\jmath}}$ and $p_2-(N-1)b_1 \geq 0$. \paragraph{Proof of (iii)} The thesis is equivalent to \[ (N-1) t_2 \leq \sum_{j = 2}^k t_j, \quad \text{that is,} \quad (N-1)t_2 \leq (N-1)b_1, \] and this is implied by $t_2 \leq b_2 \leq b_1$. \paragraph{Proof of (iv)} The thesis is equivalent to \[ N(b_2-t_2) \leq p_2 - (N-1)b_1, \] which is an equality when $\bar{\jmath} > 2$ (see the definition of $t_2$), and which follows from $Nb_2 \leq p_2$ (the defining property of $\bar{\jmath} = 2$) in the remaining case. \end{proof} We are ready to present the main result of this Section, which provides a transport plan of finite cost under an additional hypothesis on the tuple $(b_1, \dotsc, b_k)$.
The result is noteworthy in that it does not involve the non-atomic part of the measure --- it is in fact a general discrete construction producing a purely atomic symmetric measure with prescribed purely atomic marginals. \begin{prop} \label{mega-prop} Let $k > N$ and $(b_1, \dotsc, b_k)$ with \begin{equation} \label{mega-prop-condition} (N-1)b_1 \leq b_2 + \dotsb + b_k. \end{equation} Then for every choice of distinct points $x_1, \dotsc, x_k \in \mathbb{R}^d$ there exists a symmetric transport plan of finite cost with marginals $\rho = b_1 \delta_{x_1} + \dotsb + b_k \delta_{x_k}$. \end{prop} \begin{proof} For every pair of positive integers $(N,k)$, with $k > N$, let $\mathfrak{P}(N,k)$ be the following proposition: \vspace{0.5cm} \begin{adjustwidth}{1cm}{1cm} Let $(b_1, \dotsc, b_k)$ with $(N-1)b_1 \leq b_2 + \dotsb + b_k$. Then for every tuple of distinct points $(x_1, \dotsc, x_k)$ there exists a symmetric $N$-transport plan of finite cost with marginals $b_1 \delta_{x_1} + \dotsb + b_k \delta_{x_k}$. \end{adjustwidth} \vspace{0.5cm} We will prove $\mathfrak{P}(N,k)$ by double induction, in the following way: first we prove $\mathfrak{P}(1,k)$ for every $k$ and $\mathfrak{P}(N,N+1)$ for every $N$. Then we prove \[ \mathfrak{P}(N-1,k) \wedge \mathfrak{P}(N,k-1) \implies \mathfrak{P}(N,k). \] \paragraph{Proof of $\mathfrak{P}(1,k)$} This is trivial: simply take $b_1 \delta_{x_1} + \dotsb + b_k \delta_{x_k}$ as a ``transport plan''.
\paragraph{Proof of $\mathfrak{P}(N,N+1)$} Let us denote by $A_N$ the $(N+1)\times (N+1)$ matrix \[ A_N = \begin{pmatrix} 0 & 1 & \cdots & 1 \\ 1 & 0 & \cdots & 1 \\ \vdots & & \ddots & \\ 1 & \cdots & 1 & 0 \end{pmatrix}, \] whose inverse is \[ A_N^{-1} = \frac{1}{N} \begin{pmatrix} -(N-1) & 1 & \cdots & 1 \\ 1 & -(N-1) & \cdots & 1 \\ \vdots & & \ddots & \\ 1 & \cdots & 1 & -(N-1) \end{pmatrix}. \] Define also the following $(N+1)\times N$ matrix, with elements in $\mathbb{R}^d$: \[ (x_{ij}) = \begin{pmatrix} x_2 & x_3 & \cdots & x_{N+1} \\ x_1 & x_3 & \cdots & x_{N+1} \\ \vdots & \vdots & \ddots & \vdots \\ x_1 & x_2 & \cdots & x_N \end{pmatrix}, \] where the $i$-th row is $(x_1, \dotsc, x_{i-1}, x_{i+1}, \dotsc, x_{N+1})$. We want to construct a transport plan of the form \[ P = N \sum_{i = 1}^{N+1} a_i (\delta_{x_{i1}} \otimes \dotsb \otimes \delta_{x_{iN}})_{sym}, \] where $a_i \geq 0$. Note that, by Lemma \ref{sym-marginals}, the marginals of $P$ are equal to \[ \pi(P) = \sum_{j = 1}^{N+1} \left( \sum_{\substack{ i = 1 \\ i \neq j}}^{N+1} a_i \right) \delta_{x_j}. \] Thus, the condition on the $a_i$'s to have $\pi(P) = \rho$ is \[ A_N \begin{pmatrix} a_1 \\ \vdots \\ a_{N+1} \end{pmatrix} = \begin{pmatrix} b_1 \\ \vdots \\ b_{N+1} \end{pmatrix}, \] \emph{i.e.}, \[ \begin{pmatrix} a_1 \\ \vdots \\ a_{N+1} \end{pmatrix} = A_N^{-1} \begin{pmatrix} b_1 \\ \vdots \\ b_{N+1} \end{pmatrix}. \] Finally, observe that the condition \eqref{mega-prop-condition} implies that $a_1 \geq 0$, while the fact that $b_1 \geq b_2 \geq \dotsb \geq b_{N+1}$ leads to $a_1 \leq a_2 \leq \dotsb \leq a_{N+1}$; hence $a_i \geq 0$ for every $i$ and we are done. \paragraph{Inductive step} Let $(b_1, \dotsc, b_k)$ satisfying \eqref{main-condition}, with $k \geq N+2$ (otherwise we are in the case $\mathfrak{P}(N,N+1)$, already proved).
Take $t_2, \dotsc, t_k$ given by Lemma \ref{t-lemma}, and apply the inductive hypotheses to find \begin{itemize} \item a symmetric transport plan $Q_1$ of finite cost in $(N-1)$ variables, with marginals \[ \pi(Q_1) = \sum_{j=2}^k t_j \delta_{x_j}; \] \item a symmetric transport plan $R$ of finite cost in $N$ variables, with marginals \[ \pi(R) = \sum_{j=2}^k (b_j-t_j) \delta_{x_j}. \] \end{itemize} Define \[ Q = \frac{1}{N-1} \sum_{j = 1}^N (Q_1 \otimes_j \delta_{x_1}). \] Since $Q_1$ is symmetric, $Q$ is symmetric. Moreover, using Lemma \ref{t-lemma} (i), \[ \pi(Q) = \frac{1}{N-1} \delta_{x_1} \sum_{j=2}^k t_j + \sum_{j=2}^k t_j \delta_{x_j} = b_1 \delta_{x_1} + \sum_{j=2}^k t_j \delta_{x_j}. \] The transport plan $P = Q+R$ is symmetric, with marginals $\pi(P) = b_1 \delta_{x_1} + \dotsb + b_{k} \delta_{x_k}$. \end{proof} In order to conclude the argument of this Section, we must now deal not only with the non-atomic part of $\rho$, but also with the additional hypothesis of Proposition \ref{mega-prop}. Indeed, the non-atomic part will absorb the atomic mass exceeding the bound in \eqref{mega-prop-condition}, as will become clear shortly. \begin{mydef} Given $N$, we say that the tuple $(b_1, \dotsc, b_\ell)$ is \emph{fast decreasing} if \[ (N-j)b_j > \sum_{i > j} b_i \hspace{0.5cm} \forall j = 1, \dotsc, \ell. \] \end{mydef} \begin{remark} \label{maximal-fd-part} Note that if $(b_1, \dotsc, b_\ell)$ is fast decreasing, then necessarily $\ell < N$. As a consequence, given any sequence $(b_1, b_2, \dotsc )$, even infinite, we may select its maximal fast decreasing initial tuple $(b_1, \dotsc, b_\ell)$ (which might be empty, \emph{i.e.}, $\ell = 0$). \end{remark} \begin{thm} \label{thm-ell-atoms} If $\rho$ is such that \[\rho = \sigma + \sum_{j = 1}^k b_j \delta_{x_j}\] with $k > N$ atoms, then there exists a transport plan of finite cost.
\end{thm} \begin{proof} Consider $(b_1, \dotsc, b_k)$ and use Remark \ref{maximal-fd-part} to select its maximal fast decreasing initial tuple $(b_1, \dotsc, b_\ell)$, $\ell < N$. Thanks to Proposition \ref{mega-prop}, we may construct a transport plan $P_{\ell+1}$ over $\mathbb{R}^{(N-\ell)d}$ with marginals $b_{\ell+1} \delta_{x_{\ell+1}} + \dotsb + b_k\delta_{x_k}$, since \[ (N-\ell-1)b_{\ell+1} \leq \sum_{j = \ell+2}^k b_j \] by maximality of $(b_1, \dotsc, b_\ell)$ --- and this is condition \eqref{mega-prop-condition} in this case. We extend $P_{\ell+1}$ step by step to an $N$-transport plan, letting \[ P_{j} = \frac{1}{N-j} \sum_{i = j}^N (P_{j+1} \otimes_i \delta_{x_j}), \] for $j = \ell, \ell-1, \dotsc, 1$. Let $p_\ell = b_{\ell+1} + \dotsb + b_{k}$, and $q_\ell = \tfrac{p_\ell}{N-\ell}$. We claim that $\abs{P_j} = (N-j+1) q_\ell$. In fact, by construction $\abs{P_{\ell+1}} = p_\ell$, and inductively \[ \abs{P_j} = \frac{1}{N-j} \sum_{i = j}^N \abs{P_{j+1}} = \frac{N-j+1}{N-j} (N-j) q_\ell = (N-j+1) q_\ell. \] Moreover, \[ \pi(P_j) = \sum_{i = j}^\ell q_\ell \delta_{x_i} + \sum_{i = \ell+1}^k b_i \delta_{x_i}. \] This is true by construction in the case $j = \ell+1$, and inductively \[ \pi(P_j) = \frac{1}{N-j} \delta_{x_j} \abs{P_{j+1}} + \frac{N-j}{N-j} \pi(P_{j+1}) = \sum_{i = j}^\ell q_\ell \delta_{x_i} + \sum_{i = \ell+1}^k b_i \delta_{x_i}. \] Note that, for every $i = 1, \dotsc, \ell$, $b_i \geq b_\ell > q_\ell$. We shall find, using Proposition \ref{partition-construction}, a transport plan of finite cost with marginals \[ \sigma + \sum_{i = 1}^\ell (b_i-q_\ell) \delta_{x_i}, \] since the condition \eqref{existence-condition-eq2} reads \[ N(b_1 - q_\ell) - \sum_{i = 1}^\ell (b_i-q_\ell) = Nb_1 - \sum_{i = 1}^\ell b_i - (N-\ell) q_\ell < 1 - \sum_{i = 1}^k b_i = \abs{\sigma}.
\] Adding this plan to $P_1$ then yields a transport plan of finite cost with marginals $\rho$. \end{proof} \section{Marginals with countably many atoms} \label{final-section} In this Section we finally deal with the case of an infinite number of atoms, \emph{i.e.}, \[ \rho = \sigma + \sum_{j = 1}^\infty b_j \delta_{x_j} \] with $b_j > 0$, $b_{j+1} \leq b_{j}$ for every $j \geq 1$. The main issue is of topological nature: if the atoms $x_j$ are too close to each other (for example, if they form a dense subset of $\mathbb{R}^d$) and the decay of $b_j$ as $j \to \infty$ is too slow, the cost might diverge. With this in mind, we begin with an elementary topological result, in order to separate the atoms into $N$ groups with controlled minimal distance from each other. \begin{lemma} \label{set-partition} There exists a partition $\mathbb{R}^d = E_2 \sqcup \dotsb \sqcup E_{N+1}$ such that: \begin{enumerate}[(i)] \item for every $j=2, \dotsc, N+1$, $x_j \in \mathring{E}_j$; \item for every $j = 2, \dotsc, N+1$, $\partial E_j$ does not contain any $x_i$. \end{enumerate} \end{lemma} \begin{proof} For $j = 3, \dotsc, N+1$ let $r_j > 0$ be small enough such that \[ x_i \notin B(x_j, r_j) \hspace{0.5cm} \text{for every $i = 1, \dotsc, N+1$, $i \neq j$.} \] For each fixed $j = 3, \dotsc, N+1$, by a cardinality argument there must be a positive real $t_j$ with $0 < t_j < r_j$ and $\partial B(x_j, t_j)$ not containing any $x_i$, $i \geq 1$. We take $E_j = B(x_j, t_j)$ for $j=3, \dotsc, N+1$. Note that this choice fulfills the conditions (i), (ii) for $j = 3, \dotsc, N+1$. Finally, we take \[ E_2 = \mathbb{R}^d \setminus \left( \bigcup_{j = 3}^{N+1} E_j \right). \] Clearly $x_2 \in \mathring{E_2}$, and moreover the condition (ii) is satisfied, since \[ \partial E_2 = \bigcup_{j=3}^{N+1} \partial E_j. \] \end{proof} Consider the partition given by Lemma \ref{set-partition}, and define the corresponding partition of $\mathbb{N}$ given by $\mathbb{N} = A_2 \cup \dotsb \cup A_{N+1}$, where \[ A_j = \gra{i \in \mathbb{N} \left | \right. x_i \in E_j}.
\] Next we consider, for every $j = 2, \dotsc, {N+1}$, a threshold $n_j \geq 2$ large enough such that, defining \[ \epsilon_j = \sum_{\substack{ i \geq n_j \\ i \in A_j}} b_i, \] we have \begin{equation} \label{eps-condition} \epsilon_2 + \dotsb + \epsilon_{N+1} < \min \gra{ b_{N+1}, \frac{1}{N} - b_1}. \end{equation} This may be done since the series $\sum b_i$ converges, and hence for every $j = 2, \dotsc, {N+1}$ the series \[ \sum_{i \in A_j} b_i \] is convergent. For every $j = 2, \dotsc, N+1$ define the following transport plan: \[ P_j = N \left[ \left(\sum_{i \in A_j, i \geq n_j} b_i \delta_{x_i} \right) \otimes \delta_{x_2} \otimes \dotsb \otimes \hat{\delta}_{x_j} \otimes \dotsb \otimes \delta_{x_{N+1}} \right]_{sym}, \] and note that, by Lemma \ref{sym-marginals}, \[ \pi(P_j) = \epsilon_j \sum_{\substack{h = 2 \\ h \neq j}}^{N+1} \delta_{x_h} + \sum_{\substack{ i \geq n_j \\ i \in A_j}} b_i \delta_{x_i}. \] Then let \[ P_{\infty} = \sum_{j = 2}^{N+1} P_j, \] and observe that \[ \pi(P_{\infty}) = \sum_{j = 2}^{N+1} \left( \sum_{\substack{i = 2 \\ i \neq j}}^{N+1} \epsilon_i \right) \delta_{x_j} + \sum_{j = 2}^{N+1} \sum_{\substack{ i \geq n_j \\ i \in A_j}} b_i \delta_{x_i}. \] Let now \[ \tilde{b}_i = \left\{ \begin{array}{ll} \displaystyle b_i - \sum_{\substack{h = 2 \\ h \neq i}}^{N+1} \epsilon_h &\text{if $2 \leq i \leq N+1$} \\ 0 & \text{if $i \geq n_j$ and $i \in A_j$ for some $j = 2, \dotsc, N+1$} \\ b_i & \text{otherwise}. \end{array} \right. \] We are left to find a transport plan of finite cost with marginals \[ \sigma + \sum_{i = 1}^\infty \tilde{b}_i \delta_{x_i}, \] which has indeed a finite number of atoms. Note that $\tilde{b}_i \geq 0$ for every $i$, thanks to condition \eqref{eps-condition}. Moreover, since $\tilde{b}_1 = b_1$ and $\tilde{b}_j \leq b_j$, we have $\tilde{b}_1 \geq \tilde{b}_j$ for every $j \in \mathbb{N}$, as will be used in what follows.
If \[ (N-1)\tilde{b}_1 \leq \sum_{i = 2}^\infty \tilde{b}_i \] we may conclude using Proposition \ref{mega-prop}. Otherwise, we proceed as in the proof of Theorem \ref{thm-ell-atoms}, with $\gra{\tilde{b}_j}$ replacing $\gra{b_j}$. At the final stage, it is left to check that \[ N(\tilde{b}_1 - \tilde{q}_{k+1}) - \sum_{i = 1}^k (\tilde{b}_i - \tilde{q}_{k+1}) < 1 - \sum_{i = 1}^\infty b_i = \abs{\sigma}. \] Indeed this is true, since using the condition \eqref{eps-condition} one gets \[ N(\tilde{b}_1 - \tilde{q}_{k+1}) - \sum_{i = 1}^k (\tilde{b}_i - \tilde{q}_{k+1}) = Nb_1 - \sum_{i = 1}^\infty b_i + N(\epsilon_2 + \dotsb + \epsilon_{N+1}) < 1 - \sum_{i = 1}^\infty b_i. \] \bibliographystyle{plain}
\section{Introduction, motivation and description of the results}\label{s1} \subsection{Category $\mathcal{O}$}\label{s1.1} Let $\mathfrak{g}$ be a semi-simple, finite dimensional Lie algebra over $\mathbb{C}$ with a fixed triangular decomposition \begin{displaymath} \mathfrak{g}=\mathfrak{n}_-\oplus \mathfrak{h}\oplus\mathfrak{n}_+. \end{displaymath} Consider the Bernstein-Gelfand-Gelfand (BGG) {\em category $\mathcal{O}$} associated to this decomposition. Category $\mathcal O$ plays an important role in modern representation theory and its applications. See e.g., \cite{BGS,Hu,So,Str} and references therein. Indecomposable blocks of $\mathcal{O}$ are described by finite dimensional algebras and possess a number of remarkable symmetries. For example, they have simple preserving duality and exhibit both Ringel self-duality and Koszul self-duality. See \cite{So,BGS,So2}. Category $\mathcal{O}$ has a number of interesting sub- and quotient- categories such as the {\em parabolic category $\mathcal{O}$} associated with the choice of a parabolic subalgebra $\mathfrak{p}$ of $\mathfrak{g}$ (see \cite{RC}) and the {\em $\mathcal{S}$-subcategories in $\mathcal{O}$} associated with $\mathfrak{p}$ (see \cite{FKM}). The latter categories are also known as the subcategories of {\em $\mathfrak{p}$-presentable modules}, see \cite{MS}, and can be alternatively defined as certain Serre quotients of category $\mathcal{O}$. \subsection{Auslander regular algebras}\label{s1.2} A finite dimensional (associative) algebra $A$ is called {\em Auslander-Gorenstein}, see \cite{Iy,CIM}, provided that the (left) regular module ${}_AA$ admits a finite injective coresolution \begin{displaymath} 0\to A\to Q_0\to Q_1\to\dots\to Q_k\to 0, \end{displaymath} such that $\mathrm{proj.dim}(Q_i)\leq i$, for all $i=0,1,\dots,k$. An Auslander-Gorenstein algebra of finite global dimension is called an {\em Auslander regular} algebra. 
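As a minimal illustration of the definition (our example, not taken from the paper), one can trace the condition through for the smallest non-semisimple hereditary algebra, the path algebra of the quiver $1\to 2$:

```latex
Let $A$ be the path algebra of the quiver $1\to 2$ over a field. Its
indecomposable projectives are $P_1$, with top $L_1$ and socle $L_2$,
and $P_2=L_2$; its indecomposable injectives are $I_1=L_1$ and
$I_2=P_1$. Since $P_1=I_2$ is injective and
\begin{displaymath}
0\to P_2\to I_2\to I_1\to 0
\end{displaymath}
is exact, the regular module ${}_AA=P_1\oplus P_2$ admits the injective
coresolution
\begin{displaymath}
0\to {}_AA\to I_2\oplus I_2\to I_1\to 0,
\end{displaymath}
with $\mathrm{proj.dim}(I_2\oplus I_2)=0\leq 0$ and, reading the first
sequence as a projective resolution, $\mathrm{proj.dim}(I_1)=1\leq 1$.
Hence $A$ is Auslander regular.
```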
Auslander regular algebras have a number of remarkable homological properties, see, for example, \cite[Theorem~1.1]{Iy} and \cite[Theorem~2.1]{AR}. We identify properties of algebras with those of their module categories, so, when appropriate, we say that $A$-mod is Auslander regular, etc. \subsection{Motivation}\label{s1.3} This paper originates from a question which the second author received from Ren{\'e} Marczinzik in July 2020. The question was whether blocks of category $\mathcal{O}$ are Auslander regular. It was motivated by the observation, based on computer calculations using the quiver and relation presentations of blocks of category $\mathcal{O}$ from \cite{St1}, that the answer is positive in small ranks. \subsection{The main result}\label{s1.4} The main result of the present paper is the following statement which, in particular, answers the question posed by Ren{\'e} Marczinzik positively and vastly generalizes it (see Theorem~\ref{thm3}, Corollary~\ref{cor6}, Theorem~\ref{thm9}, Corollary~\ref{cor10}, Theorem~\ref{thm12} and Theorem~\ref{thm14}): \begin{thmx}\label{thmA1} All blocks of (parabolic) category $\mathcal{O}$ are Auslander regular. All blocks of $\mathcal{S}$-sub\-ca\-te\-go\-ri\-es in $\mathcal{O}$ are Auslander-Gorenstein. \end{thmx} The first two papers \cite{Ma3} and \cite{Ma4} of the ``Some homological properties of category $\mathcal{O}$'' series were devoted to the study of projective dimension of structural modules in category $\mathcal{O}$, with the main emphasis on the projective dimension of indecomposable tilting and injective modules. Our proof of Theorem~\ref{thmA1} is heavily based on these results. \subsection{General setup for similar regularity phenomena}\label{s1.5} We observe that the condition used to define Auslander-Gorenstein and Auslander regular algebras makes perfect sense in the general setup of (generalized) tilting modules in the sense of Miyashita \cite{Mi}.
Let $A$ be a finite-dimensional algebra and $T$ an $A$-module. Recall that $T$ is called a {\em (generalized) tilting module} provided that it has the following properties: \begin{itemize} \item $T$ has finite projective dimension; \item $T$ is ext-self-orthogonal, that is, all extensions of positive degree from $T$ to $T$ vanish; \item the module ${}_AA$ has a finite coresolution by modules in $\mathrm{add}(T)$. \end{itemize} It is a standard fact that $\mathrm{proj.dim}(T)$ equals the length of a minimal coresolution of $A$ by modules from $\mathrm{add}(T)$. Now, given $A$ and a (generalized) tilting $A$-module $T$, we say that $A$ is $T$-regular provided that there is a coresolution \begin{displaymath} 0\to A\to Q_0\to Q_1\to\dots\to Q_k\to 0, \end{displaymath} such that $Q_i\in\mathrm{add}(T)$ and $\mathrm{proj.dim}(Q_i)\leq i$, for all $i=0,1,\dots,k$. The notion of an Auslander-Gorenstein algebra corresponds to the situation when the injective cogenerator is a (generalized) tilting module. \subsection{Regularity phenomena for various generalized tilting modules in category $\mathcal{O}$}\label{s1.6} The bounded derived category of the principal block $\mathcal{O}_0$ of category $\mathcal{O}$ admits two different actions, by derived equivalences, of the braid group associated to $(W,S)$, where $W$ is the Weyl group of $\mathfrak g$ and $S$ is the set of simple reflections. These actions are given by the so-called {\em twisting functors}, see \cite{AS,KM}, and {\em shuffling functors}, see \cite{MS}. These actions can be used to define the following four classes of (generalized) tilting modules in $\mathcal{O}_0$: \begin{itemize} \item twisted projective modules; \item twisted tilting modules; \item shuffled projective modules; \item shuffled tilting modules. \end{itemize} In Sections~\ref{s8} and \ref{s9} we explore the regularity phenomena in $\mathcal{O}_0$ with respect to these four families of (generalized) tilting modules.
Each of these families contains $|W|$ (generalized) tilting modules, with some overlap between the families. \begin{probx}\label{quesregularity} For which of the above generalized tilting modules does the category $\mathcal O_0$ have the regularity property? \end{probx} Here is a summary of our results, see Theorems~\ref{thm8.n1} and \ref{thm8.n21}, Propositions~\ref{prop9.n1} and \ref{prop9.n2} and the examples in Subsections~\ref{s8.5} and \ref{s9.4}: \begin{thmx}\label{thmA2} \hspace{2mm} \begin{enumerate}[$($a$)$] \item\label{thmA2.1} The category $\mathcal{O}_0$ has the regularity property with respect to both projective and tilting modules twisted by the longest element in a parabolic subgroup of the Weyl group. \item\label{thmA2.2} The category $\mathcal{O}_0$ has the regularity property with respect to both projective and tilting modules shuffled by a simple reflection. \item\label{thmA2.3} There exist both twisted and shuffled projective and tilting modules, with respect to which the category $\mathcal{O}_0$ does not have the regularity property. \end{enumerate} \end{thmx} \subsection{Projective dimension of twisted and shuffled projective and tilting modules}\label{s1.7} Theorem~\ref{thmA2} suggests that a complete answer to Problem~\ref{quesregularity} is non-trivial. One important step here is the following problem. \begin{probx}\label{prodim} Determine the projective dimensions of twisted and shuffled projective and tilting modules in $\mathcal O_0$. \end{probx} We explore Problem \ref{prodim} in Section~\ref{s10}. Since twisted projective modules coincide with translated Verma modules, while twisted tilting modules coincide with translated dual Verma modules, Problem \ref{prodim} provides a nice connection to the more recent papers \cite{CM,KMM} in the ``Some homological properties of category $\mathcal{O}$'' series. One of the main results of \cite{KMM} determines the projective dimension of translated simple modules in $\mathcal{O}$.
In Section~\ref{s10} we propose conjectures for the projective dimension of twisted and shuffled projective and tilting modules in the spirit of the results of \cite{KMM} and prove a number of partial results. All these conjectures and results are formulated in terms of Kazhdan-Lusztig combinatorics, namely, Lusztig's $\mathbf{a}$-function from \cite{Lu1,Lu2} and its various generalizations studied in \cite{CM} and \cite{KMM}. The case of shuffled modules seems at the moment to be significantly more difficult than the case of twisted modules. The main reason for this is the fact that, in contrast to twisting functors, shuffling functors do not commute with projective functors. \subsection*{Acknowledgments} This research was partially supported by the Swedish Research Council, G{\"o}ran Gustafsson Stiftelse and Vergstiftelsen. The third author was also partially supported by the QuantiXLie Center of Excellence grant no. KK.01.1.1.01.0004 funded by the European Regional Development Fund. We are especially indebted to Ren{\'e} Marczinzik for the question about Auslander regularity of $\mathcal{O}$, which started the research presented in this paper, and also for his comments on the preliminary version of the manuscript. \section{Auslander-Ringel regular quasi-hereditary algebras}\label{s2} \subsection{Quasi-hereditary algebras}\label{s2.1} Let $\Bbbk$ be an algebraically closed field and $A$ a finite dimensional (associative) $\Bbbk$-algebra. Let $L_1,L_2,\dots,L_n$ be a complete and irredundant list of isomorphism classes of simple $A$-modules. Note that, by fixing this list, we have fixed a linear order on the isomorphism classes of simple $A$-modules; this will be an essential part of the structure we are going to define now. For $i\in\{1,2,\dots,n\}$, we denote by $P_i$ and $I_i$ the indecomposable projective cover and injective envelope of $L_i$, respectively. Denote by $\Delta_i$ the quotient of $P_i$ by the trace in $P_i$ of all $P_j$ with $j>i$.
Denote by $\nabla_i$ the submodule of $I_i$ defined as the intersection of the kernels of all homomorphisms from $I_i$ to $I_j$ with $j>i$. The modules $\Delta_i$ are called {\em standard} and the modules $\nabla_i$ are called {\em costandard}. Recall from \cite{CPS,DR} that $A$ is said to be {\em quasi-hereditary} provided that \begin{itemize} \item the endomorphism algebra of each $\Delta_i$ is $\Bbbk$; \item the regular module ${}_AA$ has a filtration with standard subquotients. \end{itemize} According to \cite{Ri}, if $A$ is quasi-hereditary, then, for each $i$, there is a unique indecomposable module $T_i$, called a {\em tilting module}, which has both a filtration with standard subquotients and a filtration with costandard subquotients and which, additionally, satisfies $[T_i:L_i]\neq 0$ and $[T_i:L_j]=0$ for $j>i$. The module $\displaystyle T=\bigoplus_{i=1}^n T_i$ is called the {\em characteristic tilting module} and (the opposite of) its endomorphism algebra is called the {\em Ringel dual} of $A$. For each $M\in A$-mod, there is a unique minimal finite complex $\mathcal{T}_\bullet(M)$ of tilting modules which is isomorphic to $M$ in the bounded derived category of $A$. We will denote by $\mathbf{r}(M)$ the maximal non-negative $i$ such that $\mathcal{T}_i(M)\neq 0$ and by $\mathbf{l}(M)$ the maximal non-negative $i$ such that $\mathcal{T}_{-i}(M)\neq 0$. Note that $\mathbf{l}(M)=0$ if and only if $M$ has a filtration with standard subquotients and $\mathbf{r}(M)=0$ if and only if $M$ has a filtration with costandard subquotients. We refer to \cite{MO2} for further details. \subsection{Auslander-Ringel regular algebras}\label{s2.2} We say that a quasi-hereditary algebra $A$ is {\em Auslander-Ringel regular} provided that there is a coresolution \begin{displaymath} 0\to A\to Q_0\to Q_1\to\dots\to Q_k\to 0, \end{displaymath} such that each $Q_i\in\mathrm{add}(T)$ and $\mathrm{proj.dim}(Q_i)\leq i$, for all $i=0,1,\dots,k$.
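To make these definitions concrete, here is a small worked example (ours, not from the paper) for the path algebra of the quiver $1\to 2$:

```latex
Let $A$ be the path algebra of the quiver $1\to 2$, with the order
$L_1,L_2$. Then $\Delta_1=\nabla_1=L_1$, $\Delta_2=P_2=L_2$ and
$\nabla_2=I_2=P_1$, so that $T_1=L_1$, $T_2=P_1$ and the characteristic
tilting module is $T=L_1\oplus P_1$. From the coresolution
\begin{displaymath}
0\to {}_AA\to T_2\oplus T_2\to T_1\to 0
\end{displaymath}
we read off $\mathrm{proj.dim}(T_2\oplus T_2)=0\leq 0$ and
$\mathrm{proj.dim}(T_1)=1\leq 1$, so this hereditary algebra is
Auslander-Ringel regular.
```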
Note that, being quasi-hereditary, $A$ has finite global dimension (see \cite{CPS,DR}) and that the characteristic tilting module is a (generalized) tilting module. Thus, Auslander-Ringel regularity corresponds to $T$-regularity in the terminology of Subsection~\ref{s1.5}. In Section~\ref{s3}, we will see that blocks of (parabolic) BGG category $\mathcal{O}$ are Auslander-Ringel regular. \section{Regularity phenomena in category $\mathcal{O}$}\label{s3} \subsection{Category $\mathcal{O}$}\label{s3.1} We refer the reader to \cite{Hu} for details and generalities about category $\mathcal{O}$. We denote by $\mathcal{O}_0$ the principal block of $\mathcal{O}$, that is, the indecomposable direct summand of $\mathcal O$ containing the trivial $\mathfrak{g}$-module. The simple modules in $\mathcal{O}_0$ are simple highest weight modules, and their isomorphism classes are naturally indexed by elements of the Weyl group $W$. For $w\in W$, we denote by $L_w$ the simple highest weight module in $\mathcal{O}_0$ with highest weight $w\cdot 0$, where $0$ is the zero element in $\mathfrak{h}^*$ and $\cdot$ is the dot action of $W$. We denote by $P_w$ and $I_w$ the indecomposable projective cover and injective envelope of $L_w$ in $\mathcal{O}_0$, respectively. Let $A$ be a basic, finite dimensional, associative algebra such that $\mathcal{O}_0$ is equivalent to $A$-mod. It is well-known that $A$ is quasi-hereditary with respect to any linear order which extends the dominance order on weights. The latter is given by $\lambda\leq \mu$ if and only if $\mu-\lambda$ is a linear combination of positive roots with non-negative integer coefficients. By \cite{So}, the algebra $A$ admits a Koszul $\mathbb{Z}$-grading. We denote by ${}^{\mathbb{Z}}\mathcal{O}_0$ the category of $\mathbb{Z}$-graded finite-dimensional $A$-modules. We denote by $\langle 1\rangle$ the shift of grading which maps degree $0$ to degree $-1$. 
We fix standard graded lifts of structural modules so that \begin{itemize} \item $L_w$ is concentrated in degree zero; \item the top of $P_w$ is concentrated in degree zero; \item the socle of $I_w$ is concentrated in degree zero; \item the top of $\Delta_w$ is concentrated in degree zero; \item the socle of $\nabla_w$ is concentrated in degree zero; \item the canonical map $\Delta_w\hookrightarrow T_w$ is homogeneous of degree zero. \end{itemize} For $w\in W$, we denote by $\theta_w$ the indecomposable projective endofunctor of $\mathcal{O}_0$, see \cite{BG}, uniquely defined by the property $\theta_w P_e\cong P_w$. By \cite{St2}, $\theta_w$ admits a natural graded lift normalized by the same condition. We denote by $\geq_{\mathtt{L}}$, $\geq_{\mathtt{R}}$ and $\geq_{\mathtt{J}}$ the Kazhdan-Lusztig left, right and two-sided orders, respectively. \subsection{$\mathcal{O}_0$ is Auslander-Ringel regular}\label{s3.2} \begin{theorem}\label{thm1} The category $\mathcal{O}_0$ is Auslander-Ringel regular. \end{theorem} \begin{proof} Consider the category $\mathcal{LT}(\mathcal{O}_0)$ of linear complexes of tilting modules in $\mathcal{O}_0$, see \cite{Ma,MO}. The algebra $A$ is a balanced quasi-hereditary algebra in the sense of \cite{Ma2} and hence $\mathcal{LT}(\mathcal{O}_0)$ contains the tilting coresolution $\mathcal{T}_\bullet(P_e)$ of the dominant standard module $\Delta_e=P_e$. Due to the Ringel-Koszul self-duality of $\mathcal{O}_0$, the category $\mathcal{LT}(\mathcal{O}_0)$ is equivalent to ${}^{\mathbb{Z}}\mathcal{O}_0$. This implies that the multiplicity of $T_w\langle i\rangle$ as a summand of $\mathcal{T}_i(P_e)$ coincides with the composition multiplicity of $L_{w_0w^{-1} w_0}\langle -i\rangle$ in $\Delta_e$. The latter is given by Kazhdan-Lusztig combinatorics for $(W,S)$.
In particular, it is non-zero only if \begin{displaymath} \mathbf{a}(w)=\mathbf{a}(w_0w^{-1} w_0)\leq i\leq\ell(w_0w^{-1} w_0)=\ell(w), \end{displaymath} where $\ell(w)$ is the length of $w$ and $\mathbf{a}$ is Lusztig's $\mathbf{a}$-function from \cite{Lu1,Lu2}. Consequently, $T_w$ can appear (up to shift of grading) only in homological positions $i$ such that $\mathbf{a}(w)\leq i\leq\ell(w)$. Taking into account that $\mathrm{proj.dim.}(T_w)=\mathbf{a}(w)$ by \cite{Ma3,Ma4}, it follows that $\mathcal{T}_i(P_e)$ has projective dimension at most $i$. For $x\in W$, applying $\theta_x$ to $\mathcal{T}_\bullet(P_e)$ gives a tilting coresolution of $P_x$ (not necessarily minimal or linear). Since $\theta_x$ is exact and sends projectives to projectives, it cannot increase the projective dimension. This means that \begin{displaymath} \mathrm{proj.dim.}(\theta_x\mathcal{T}_i(P_e))\leq \mathrm{proj.dim.}(\mathcal{T}_i(P_e))\leq i. \end{displaymath} The claim of the theorem follows. \end{proof} \begin{corollary}\label{cor2} {\hspace{1mm}} \begin{enumerate}[$($i$)$] \item\label{cor2.1} Let $\mathcal{P}_\bullet(T)$ be a minimal projective resolution of $T$. Then $\mathbf{r}(\mathcal{P}_{-i}(T))\leq i$, for all $i\geq 0$. \item\label{cor2.2} Let $\mathcal{T}_\bullet(I)$ be a minimal tilting resolution of the basic injective cogenerator $I$. Then we have $\mathrm{inj.dim.}(\mathcal{T}_{-i}(I))\leq i$, for all $i\geq 0$. \item\label{cor2.3} Let $\mathcal{I}_\bullet(T)$ be a minimal injective coresolution of $T$. Then $\mathbf{l}(\mathcal{I}_i(T))\leq i$, for all $i\geq 0$. \end{enumerate} \end{corollary} \begin{proof} Claim~\ref{cor2.2} is obtained from Theorem~\ref{thm1} using the simple preserving duality on $\mathcal{O}$. Since $\mathcal{O}_0$ is Ringel self-dual, Claim~\ref{cor2.1} is the Ringel dual of Theorem~\ref{thm1} and, finally, Claim~\ref{cor2.3} is the Ringel dual of Claim~\ref{cor2.2}.
\end{proof} \subsection{$\mathcal{O}_0$ is Auslander regular}\label{s3.3} \begin{theorem}\label{thm3} The category $\mathcal{O}_0$ is Auslander regular. \end{theorem} We will need the following auxiliary statement. \begin{lemma}\label{lem4} For $w\in W$, let $\mathcal{I}_\bullet(T_{w_0w})$ be a minimal injective coresolution of $T_{w_0w}$. Then we have: \begin{enumerate}[$($i$)$] \item\label{lem4.1} The maximal value of $i$ such that $\mathcal{I}_i(T_{w_0w})\neq 0$ equals $\mathbf{a}(w_0w)$. \item\label{lem4.2} Each indecomposable direct summand of $\mathcal{I}_\bullet(T_{w_0w})$ is isomorphic, up to a graded shift, to $I_x$, for some $x\geq_{\mathtt{J}}w$. \item\label{lem4.3} If $\mathcal{I}_i(T_{w_0w})$ has a direct summand isomorphic, up to a graded shift, to $I_x$, for some $x\in W$, then $i\geq \mathbf{a}(w_0x)$. \end{enumerate} \end{lemma} \begin{proof} Using the simple preserving duality, Claim~\ref{lem4.1} is one of the main results of \cite{Ma3,Ma4}. Since $T_{w_0w}\cong \theta_w T_{w_0}$, to prove Claim~\ref{lem4.2}, it is enough to prove the same statement for $\theta_w\mathcal{I}_\bullet(T_{w_0})$. But we have \begin{displaymath} \theta_w I_y= \theta_w \theta_y I_e= \bigoplus_{z\in W}\theta_z^{\oplus m_{w,y}^z}I_e= \bigoplus_{z\in W} I_z^{\oplus m_{w,y}^z} \end{displaymath} and $m_{w,y}^z\neq 0$ only if $z\geq_{\mathtt{J}}w$. Let us prove Claim~\ref{lem4.3}. We start with the case $w=e$. Due to Koszulity of $\mathcal{O}_0$, the minimal injective coresolution $\mathcal{I}_\bullet(T_{w_0})$ of the antidominant tilting$=$simple module $T_{w_0}=L_{w_0}$ is linear and hence is an object in the category $\mathcal{LI}(\mathcal{O}_0)$ of linear complexes of injective modules and is isomorphic to the dominant standard$=$projective object in this category. Due to the Koszul self-duality of $\mathcal{O}_0$, the category $\mathcal{LI}(\mathcal{O}_0)$ is equivalent to ${}^{\mathbb{Z}}\mathcal{O}_0$. 
This implies that the multiplicity of $I_x\langle i\rangle$ as a summand of $\mathcal{I}_i(T_{w_0})$ coincides with the composition multiplicity of $L_{w_0x^{-1}}\langle -i\rangle$ in $\Delta_e$. The latter is given by Kazhdan-Lusztig combinatorics. In particular, it is non-zero only if $\mathbf{a}(w_0x^{-1})\leq i\leq\ell(w_0x^{-1})$. Consequently, the module $I_x$ can appear (up to shift of grading) only in homological positions $i$ such that $\mathbf{a}(w_0x)=\mathbf{a}(w_0x^{-1})\leq i\leq\ell(w_0x^{-1})$. This proves Claim~\ref{lem4.3} in the case $w=e$. The general case is obtained from this one by applying $\theta_w$. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm3}] Take the minimal tilting coresolution $\mathcal{T}_\bullet(P_e)$ of $P_e$ considered in the proof of Theorem~\ref{thm1}. We can take a minimal injective coresolution of each $T_x$, up to grading shift, appearing in $\mathcal{T}_\bullet(P_e)$ and glue these into an injective coresolution $\mathcal{I}_\bullet$ of $P_e$. Applying $\theta_w$ to $\mathcal{I}_\bullet$ gives an injective coresolution of $P_w$ without increasing the projective dimensions in each homological position. By \cite{Ma3,Ma4}, the projective dimension of $I_x$ is $2\mathbf{a}(w_0x)$. It is thus enough to show that any graded shift of $I_x$ appearing in $\mathcal{I}_\bullet$ appears only in homological positions $i$ such that $i\geq 2\mathbf{a}(w_0x)$. By Lemma~\ref{lem4}, $I_x$ can only appear in homological position at least $\mathbf{a}(w_0x)$ when coresolving $T_y$. Furthermore, again by Lemma~\ref{lem4}, $I_x$ can only appear in coresolutions of $T_{w_0y}$, where $x\geq_{\mathtt{J}}y$. By Theorem~\ref{thm1}, such $T_{w_0y}$ appears in $\mathcal T_\bullet(P_e)$ in homological positions at least $\operatorname{proj.dim} T_{w_0y}= \mathbf{a}(w_0y)$.
Adding these two estimates together, we obtain that $I_x$ appears in $\mathcal{I}_\bullet$ in homological positions at least $\mathbf{a}(w_0y)+\mathbf{a}(w_0x)\geq 2\mathbf{a}(w_0x)$. This completes the proof. \end{proof} \subsection{Singular blocks}\label{s3.4} \begin{corollary}\label{cor6} All blocks of $\mathcal{O}$ are Auslander-Ringel regular, Auslander regular, and have the properties described in Corollary~\ref{cor2}. \end{corollary} \begin{proof} Due to Soergel's combinatorial description of blocks of $\mathcal{O}$ from \cite{So}, each block of category $\mathcal{O}$ is equivalent to an integral block of $\mathcal{O}$ (possibly for a different Lie algebra). Therefore we may restrict our attention to integral blocks. Each regular integral block is equivalent to $\mathcal{O}_0$. Each singular integral block is obtained from a regular integral block using translation to the corresponding wall. These translation functors are exact, send projectives to projectives, injectives to injectives and tiltings to tiltings, and do not increase projective dimension, injective dimension, or the values of $\mathbf{l}$ and $\mathbf{r}$. Therefore the claim follows from Theorems~\ref{thm1} and \ref{thm3} and Corollary~\ref{cor2} by applying these translation functors.
The projective dimensions of the indecomposable tilting and injective modules in $\mathcal{O}_0$ are given by: \begin{displaymath} \begin{array}{c||c|c|c|c|c|c} w&e&s&t&st&ts&w_0\\ \hline \mathrm{proj.dim}(T_w)&0&1&1&1&1&3 \end{array}\qquad \begin{array}{c||c|c|c|c|c|c} w&e&s&t&st&ts&w_0\\ \hline \mathrm{proj.dim}(I_w)&6&2&2&2&2&0 \end{array} \end{displaymath} The minimal (ungraded) tilting coresolutions of the {\color{blue}indecomposable projectives} in $\mathcal{O}_0$ are: \begin{displaymath} 0\to {\color{blue}P_{e}}\to T_e\to T_s\oplus T_t \to T_{st}\oplus T_{ts}\to T_{w_0}\to 0, \end{displaymath} \begin{displaymath} 0\to {\color{blue}P_{s}}\to T_e\to T_t \to 0, \end{displaymath} \begin{displaymath} 0\to {\color{blue}P_{t}}\to T_e\to T_s \to 0, \end{displaymath} \begin{displaymath} 0\to {\color{blue}P_{st}}\to T_e\to T_{ts} \to 0, \end{displaymath} \begin{displaymath} 0\to {\color{blue}P_{ts}}\to T_e\to T_{st} \to 0, \end{displaymath} \begin{displaymath} 0\to {\color{blue}P_{w_0}}\to T_{e}\to 0. \end{displaymath} The minimal (ungraded) injective coresolutions of the {\color{blue}indecomposable projectives} in $\mathcal{O}_0$ are: $$ 0\to {\color{blue}P_{e}}\to I_{w_0} \to I_{w_0}^{\oplus 2} \to I_t \oplus I_s\oplus I_{w_0}^{\oplus 2} \to I_{ts}\oplus I_{st} \oplus I_{w_0} \to I_{st}\oplus I_{ts}\to I_s\oplus I_t\to I_e\to 0, $$ \begin{displaymath} 0\to {\color{blue}P_{s}}\to I_{w_0}\to I_{w_0}\to I_s \to 0, \end{displaymath} \begin{displaymath} 0\to {\color{blue}P_{t}}\to I_{w_0}\to I_{w_0}\to I_t \to 0, \end{displaymath} \begin{displaymath} 0\to {\color{blue}P_{st}}\to I_{w_0}\to I_{w_0}\to I_{st} \to 0, \end{displaymath} \begin{displaymath} 0\to {\color{blue}P_{ts}}\to I_{w_0}\to I_{w_0}\to I_{ts} \to 0, \end{displaymath} \begin{displaymath} 0\to {\color{blue}P_{w_0}}\to I_{w_0}\to 0. \end{displaymath} \section{Regularity phenomena in parabolic category $\mathcal{O}^{\mathfrak{p}}$}\label{s4} \subsection{Parabolic category
$\mathcal{O}^{\mathfrak{p}}$}\label{s4.1} Fix a parabolic subalgebra $\mathfrak{p}$ of $\mathfrak{g}$ containing $\mathfrak{h}\oplus\mathfrak{n}_+$. Denote by $\mathcal{O}^{\mathfrak{p}}$ the full subcategory of $\mathcal{O}$ consisting of all objects on which the action of $U(\mathfrak{p})$ is locally finite, see \cite{RC}. Then $\mathcal{O}^{\mathfrak{p}}$ is the Serre subcategory of $\mathcal{O}$ generated by all simple modules whose highest weights are (dot-)dominant (by which we mean that each such weight is the largest weight in its orbit under the dot action) and integral with respect to the Levi factor of $\mathfrak{p}$. Similarly to Subsection~\ref{s3.4}, we can start with the integral regular situation. Let $W_{\mathfrak{p}}$ denote the Weyl group of the Levi factor of $\mathfrak{p}$, which we view as a parabolic subgroup of $W$. We denote by $w_0^{\mathfrak{p}}$ the longest element in $W_{\mathfrak{p}}$. The principal block $\mathcal{O}^{\mathfrak{p}}_0$ is the Serre subcategory of $\mathcal{O}_0$ generated by $L_w$, where $w$ belongs to the set $\mathrm{short}({}_{W_{\mathfrak{p}}}\hspace{-2mm}\setminus\hspace{-1mm} W)$ of shortest coset representatives for cosets in ${}_{W_{\mathfrak{p}}}\hspace{-2mm}\setminus\hspace{-1mm} W$. \subsection{$\mathcal{O}^{\mathfrak{p}}_0$ is Auslander-Ringel regular}\label{s4.2} \begin{theorem}\label{thm7} The category $\mathcal{O}^{\mathfrak{p}}_0$ is Auslander-Ringel regular. \end{theorem} \begin{proof} The proof is similar to the proof of Theorem~\ref{thm1}, so we only emphasize the differences. By \cite{BGS}, the Koszul dual of $\mathcal{O}^{\mathfrak{p}}_0$ is the singular integral block $\mathcal{O}_{\lambda}$ of $\mathcal{O}$ where $\lambda$ is chosen such that the dot-stabilizer of $\lambda$ equals $W_{\mathfrak{p'}}$ where $\mathfrak{p'}$ is the $w_0$-conjugate of $\mathfrak{p}$. By \cite{So2}, the block $\mathcal O_\lambda$ is Ringel self-dual, and by \cite{Ma2}, the Ringel duality and the Koszul duality commute.
Therefore, the category of linear complexes of tilting modules in $\mathcal{O}^{\mathfrak{p}}_0$ is equivalent to ${}^{\mathbb{Z}}\mathcal{O}_{\lambda}$. By \cite{Ma}, the tilting coresolution of the dominant projective ($=$standard) module in $\mathcal{O}^{\mathfrak{p}}_0$ corresponds, under this equivalence, to $\Delta(\lambda)$, the dominant standard object in ${}^{\mathbb{Z}}\mathcal{O}_{\lambda}$. Denoting by $T_0^\lambda:{}^\mathbb Z\mathcal O_0\to {}^\mathbb Z\mathcal O_\lambda$ the graded translation functor to the $\lambda$-wall, we have $\Delta(\lambda)\cong T_0^\lambda \Delta_e\langle \ell(w_0^{\mathfrak{p'}}) \rangle $. This means that the degree $i$ component of $\Delta(\lambda)$ consists of $T_0^\lambda L_u$ where $L_u$ belongs to the degree $i+\ell(w_0^{\mathfrak{p}})=i+\ell(w_0^{\mathfrak{p'}})$ component of $\Delta_e$ and such that $u\in \mathrm{long}( W /_{W_{\mathfrak{p'}}})=w_0(\mathrm{long}( {}_{W_{\mathfrak{p}}} \hspace{-2mm}\setminus\hspace{-1mm} W))^{-1} w_0$. It follows that the $i$-th component in the tilting coresolution contains only $T_x^{\mathfrak p}$ where $x\in \mathrm{short}( {}_{W_{\mathfrak{p}}} \hspace{-2mm}\setminus\hspace{-1mm} W)$ is such that $\mathbf{a}(w_0(w_0^{\mathfrak{p}} x)^{-1} w_0)\leq i+\ell(w_0^{\mathfrak{p}})=i+\mathbf{a}(w_0^{\mathfrak{p}})$. It remains to check from \cite[Table~2]{CM} that \[\operatorname{proj.dim} T_x^{\mathfrak p} = \mathbf{a}(w_0^{\mathfrak{p}} x)-\mathbf{a}(w_0^{\mathfrak{p}})= \mathbf{a}(w_0(w_0^{\mathfrak{p}} x )^{-1} w_0)-\mathbf{a}(w_0^{\mathfrak{p}})\] and compare with the condition in the previous paragraph. This proves the regularity property for the tilting coresolution of the dominant projective. The regularity property for other projective modules in $\mathcal O^{\mathfrak p}_0$ is obtained by applying projective functors exactly as in Theorem~\ref{thm1}.
\end{proof} Let $P^{\mathfrak{p}}$ denote a projective generator, $I^{\mathfrak{p}}$ an injective cogenerator, and $T^{\mathfrak{p}}$ the characteristic tilting module in $\mathcal{O}^{\mathfrak{p}}_0$. Similarly to Corollary~\ref{cor2} (using that $\mathcal O^{\mathfrak p}_0$ is equivalent to its Ringel dual $\mathcal O^{\mathfrak p'}_0$), we have: \begin{corollary}\label{cor8} {\hspace{1mm}} \begin{enumerate}[$($i$)$] \item\label{cor8.1} Let $\mathcal{P}_\bullet(T^{\mathfrak{p}})$ be a minimal projective resolution of $T^{\mathfrak{p}}$ in $\mathcal{O}^{\mathfrak{p}}_0$. Then $\mathbf{r}(\mathcal{P}_{-i}(T^{\mathfrak{p}}))\leq i$, for all $i\geq 0$. \item\label{cor8.2} Let $\mathcal{T}_\bullet(I^{\mathfrak{p}})$ be a minimal tilting resolution of the basic injective cogenerator $I^{\mathfrak{p}}$ in $\mathcal{O}^{\mathfrak{p}}_0$. Then $\mathrm{inj.dim.}(\mathcal{T}_{-i}(I^{\mathfrak{p}}))\leq i$, for all $i\geq 0$. \item\label{cor8.3} Let $\mathcal{I}_\bullet(T^{\mathfrak{p}})$ be a minimal injective coresolution of $T^{\mathfrak{p}}$ in $\mathcal{O}^{\mathfrak{p}}_0$. Then $\mathbf{l}(\mathcal{I}_i(T^{\mathfrak{p}}))\leq i$, for all $i\geq 0$. \end{enumerate} \end{corollary} \subsection{$\mathcal{O}_0^{\mathfrak{p}}$ is Auslander regular}\label{s4.3} \begin{theorem}\label{thm9} The category $\mathcal{O}^{\mathfrak{p}}_0$ is Auslander regular. \end{theorem} \begin{proof} Mutatis mutandis the proof of Theorem~\ref{thm3}. Again, one should take into account the $2\mathbf{a}(w_0^{\mathfrak{p}})=2\ell(w_0^{\mathfrak{p}})$ shift for the projective dimension of injective modules in $\mathcal{O}^{\mathfrak{p}}_0$ in \cite[Table~2]{CM}. \end{proof} \subsection{Singular blocks}\label{s4.4} \begin{corollary}\label{cor10} All blocks of $\mathcal{O}^{\mathfrak{p}}$ are both Auslander-Ringel regular and Auslander regular and have the properties described in Corollary~\ref{cor8}. \end{corollary} \begin{proof} Mutatis mutandis the proof of Corollary~\ref{cor6}.
\end{proof} \subsection{$\mathfrak{sl}_3$-example}\label{s4.5} For the Lie algebra $\mathfrak{sl}_3$, we have $W=\{e,s,t,st,ts,w_0=sts=tst\}$. Assume that $W_\mathfrak{p}=\{e,s\}$; then $\mathrm{short}({}_{W_{\mathfrak{p}}}\hspace{-2mm}\setminus \hspace{-1mm} W)=\{e,t,ts\}$. The projective dimensions of the indecomposable tilting and injective modules in $\mathcal{O}_0^{\mathfrak{p}}$ are given by: \begin{displaymath} \begin{array}{c||c|c|c} w:&e&t&ts\\ \hline \mathrm{proj.dim}(T_w^{\mathfrak{p}}):&0&0&2 \end{array}\qquad \begin{array}{c||c|c|c} w:&e&t&ts\\ \hline \mathrm{proj.dim}(I_w^{\mathfrak{p}}):&4&0&0 \end{array} \end{displaymath} The minimal (ungraded) tilting coresolutions of the {\color{blue}indecomposable projectives} in $\mathcal{O}_0^{\mathfrak{p}}$ are: \begin{displaymath} 0\to {\color{blue}P_{e}^{\mathfrak{p}}}\to T_e^{\mathfrak{p}}\to T_t^{\mathfrak{p}} \to T_{ts}^{\mathfrak{p}}\to 0, \end{displaymath} \begin{displaymath} 0\to {\color{blue}P_{t}^{\mathfrak{p}}}\to T_{e}^{\mathfrak{p}}\to 0, \end{displaymath} \begin{displaymath} 0\to {\color{blue}P_{ts}^{\mathfrak{p}}}\to T_{t}^{\mathfrak{p}}\to 0. \end{displaymath} The minimal (ungraded) injective coresolutions of the {\color{blue}indecomposable projectives} in $\mathcal{O}_0^{\mathfrak{p}}$ are: \begin{displaymath} 0\to {\color{blue}P_{e}^{\mathfrak{p}}}\to I_t^{\mathfrak{p}} \to I_{ts}^{\mathfrak{p}}\to I_{ts}^{\mathfrak{p}}\to I_{t}^{\mathfrak{p}}\to I_e^{\mathfrak{p}}\to 0, \end{displaymath} \begin{displaymath} 0\to {\color{blue}P_{t}^{\mathfrak{p}}}\to I_t^{\mathfrak{p}} \to 0, \end{displaymath} \begin{displaymath} 0\to {\color{blue}P_{ts}^{\mathfrak{p}}}\to I_{ts}^{\mathfrak{p}} \to 0. \end{displaymath} \section{Auslander-Ringel-Gorenstein strongly standardly stratified algebras}\label{s5} \subsection{Strongly standardly stratified algebras}\label{s5.1} In this section we return to the general setup of Subsection~\ref{s2.1}.
For $i\in\{1,2,\dots,n\}$, we denote by $\overline{\Delta}_i$ the maximal quotient of $\Delta_i$ satisfying $[\overline{\Delta}_i:L_i]=1$. Denote by $\overline{\nabla}_i$ the maximal submodule of $\nabla_i$ satisfying $[\overline{\nabla}_i:L_i]=1$. The modules $\overline{\Delta}_i$ are called {\em proper standard} and the modules $\overline{\nabla}_i$ are called {\em proper costandard}. Recall that $A$ is said to be {\em standardly stratified} provided that the regular module ${}_AA$ has a filtration with standard subquotients and {\em strongly standardly stratified} (see \cite{Fr}) if, further, each standard module has a filtration with proper standard subquotients. If $A$ is a strongly standardly stratified algebra, then, by \cite{AHLU}, for each $i$, there is a unique indecomposable module $T_i$, called a {\em tilting module}, which has both a filtration with standard subquotients and a filtration with proper costandard subquotients, and, additionally, such that $[T_i:L_i]\neq 0$ while $[T_i:L_j]=0$, for $j>i$. The module $\displaystyle T=\bigoplus_{i=1}^n T_i$ is called the {\em characteristic tilting module} and (the opposite of) its endomorphism algebra is called the {\em Ringel dual} of $A$. For each $M\in A$-mod, there is a unique minimal complex $\mathcal{T}_\bullet(M)$ of tilting modules, bounded from the right, which is isomorphic to $M$ in the bounded derived category of $A$. We will denote by $\mathbf{r}(M)$ the maximal non-negative $i$ such that $\mathcal{T}_i(M)\neq 0$. Note that $\mathbf{r}(M)=0$ if and only if $M$ has a filtration with proper costandard subquotients. \subsection{Auslander-Ringel-Gorenstein algebras}\label{s5.2} Let $A$ be strongly standardly stratified. Then an $A$-module having a filtration with standard subquotients has a (finite) coresolution by modules in $\mathrm{add}(T)$. It is also well-known that $T$ has finite projective dimension (see \cite{Fr, AHLU}).
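In particular, since the regular module ${}_AA$ has a filtration with standard subquotients, there is always a finite coresolution
\begin{displaymath}
0\to A\to Q_0\to Q_1\to\dots\to Q_k\to 0,\quad\text{ where each } Q_i\in\mathrm{add}(T).
\end{displaymath}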
We will say that $A$ is {\em Auslander-Ringel-Gorenstein} provided that there is a coresolution \begin{displaymath} 0\to A\to Q_0\to Q_1\to\dots\to Q_k\to 0, \end{displaymath} such that each $Q_i\in\mathrm{add}(T)$ and $\mathrm{proj.dim}(Q_i)\leq i$, for all $i=0,1,\dots,k$. Since the characteristic tilting module is a (generalized) tilting module, the Auslander-Ringel-Gorenstein property agrees with $T$-regularity in the terminology of Subsection~\ref{s1.5}. \section{Regularity phenomena in $\mathcal{S}$-subcategories in $\mathcal{O}$}\label{s6} \subsection{$\mathcal{S}$-subcategories in $\mathcal{O}$}\label{s6.1} We again fix a parabolic subalgebra $\mathfrak{p}$ of $\mathfrak{g}$ containing $\mathfrak{h}\oplus\mathfrak{n}_+$ and restrict our attention to the integral part $\mathcal{O}_{\mathrm{int}}$ of $\mathcal{O}$. Let $\mathcal{X}$ denote the Serre subcategory of $\mathcal{O}_{\mathrm{int}}$ generated by all simple highest weight modules whose highest weights $\lambda$ are not anti-dominant with respect to $W_{\mathfrak{p}}$, that is, $w\cdot\lambda< \lambda$ for some $w\in W_{\mathfrak p}$. Denote by $\mathcal{S}^{\mathfrak{p}}$ the Serre quotient category $\mathcal{O}_{\mathrm{int}}/\mathcal{X}$, see \cite{FKM,MS}. From \cite{FKM}, we know that blocks of $\mathcal{S}^{\mathfrak{p}}$ correspond to strongly standardly stratified algebras. Let $\mathcal{S}^{\mathfrak{p}}_0$ be the principal block of $\mathcal{S}^{\mathfrak{p}}$. \subsection{$\mathcal{S}^{\mathfrak{p}}_0$ is Auslander-Ringel-Gorenstein}\label{s6.2} \begin{theorem}\label{thm11} The category $\mathcal{S}^{\mathfrak{p}}_0$ is Auslander-Ringel-Gorenstein. \end{theorem} \begin{proof} By \cite{FKM}, the indecomposable projectives in $\mathcal{S}^{\mathfrak{p}}_0$ are exactly the images of $P_w$, where $w$ belongs to $\mathrm{long}({}_{W_{\mathfrak{p}}}\hspace{-2mm}\setminus\hspace{-1mm} W)$ (the set of longest coset representatives in ${}_{W_{\mathfrak{p}}}\hspace{-2mm}\setminus\hspace{-1mm} W$).
Furthermore, the indecomposable tilting objects in $\mathcal{S}^{\mathfrak{p}}_0$ are exactly the images of $T_w$, where $w\in \mathrm{short}({}_{W_{\mathfrak{p}}}\hspace{-2mm}\setminus\hspace{-1mm} W)$. Note that the above objects in $\mathcal{O}$ are exactly those indecomposable projective (resp. tilting) objects which are admissible in the sense of \cite[Lemma~14]{PW}. From \cite[Lemma~14 and Theorem~15]{PW} it follows that the minimal projective resolution (in $\mathcal{O}$) of any $T_w$ as above contains only $P_x$ as above. The Ringel dual of this property is that a minimal tilting coresolution (in $\mathcal{O}$) of any $P_x$ as above contains only $T_w$ as above. Since the projection functor $\mathcal{O}_0\twoheadrightarrow \mathcal{S}^{\mathfrak{p}}_0$ is exact and preserves the projective dimension for the involved projective and tilting modules, see \cite[Theorem~15]{PW}, the claim of our theorem follows from Theorem~\ref{thm1}. \end{proof} \subsection{$\mathcal{S}^{\mathfrak{p}}_0$ is Auslander-Gorenstein}\label{s6.3} \begin{theorem}\label{thm12} The category $\mathcal{S}^{\mathfrak{p}}_0$ is Auslander-Gorenstein. \end{theorem} \begin{proof} The indecomposable injectives in $\mathcal S^{\mathfrak p}_0$ are exactly the images of $I_w$ for $w\in \mathrm{long}({}_{W_{\mathfrak{p}}}\hspace{-2mm}\setminus\hspace{-1mm} W)$ and these $I_w\in \mathcal O$ are admissible in the sense of \cite{PW}. Thus, the claim follows from Theorem~\ref{thm3} similarly to the proof of Theorem~\ref{thm11}. \end{proof} \subsection{Singular blocks}\label{s6.4} \begin{theorem}\label{thm14} All blocks of $\mathcal{S}^{\mathfrak{p}}$ are both Auslander-Ringel-Gorenstein and Auslander-Gorenstein. \end{theorem} \begin{proof} Mutatis mutandis the proof of Corollary~\ref{cor6}. \end{proof} \subsection{$\mathfrak{sl}_3$-example}\label{s6.5} For the Lie algebra $\mathfrak{sl}_3$, we have $W=\{e,s,t,st,ts,w_0=sts=tst\}$.
Assume that $W_\mathfrak{p}=\{e,s\}$; then $\mathrm{long}({}_{W_{\mathfrak{p}}}\hspace{-2mm}\setminus \hspace{-1mm} W)=\{s,st,w_0\}$. The projective dimensions of the indecomposable tilting and injective modules in $\mathcal{S}_0^{\mathfrak{p}}$ are given by: \begin{displaymath} \begin{array}{c||c|c|c} w:&e&t&ts\\ \hline \mathrm{proj.dim}(T_w):&0&1&1 \end{array}\qquad \begin{array}{c||c|c|c} w:&s&st&w_0\\ \hline \mathrm{proj.dim}(I_w):&2&2&0 \end{array} \end{displaymath} The minimal (ungraded) tilting coresolutions of the {\color{blue}indecomposable projectives} in $\mathcal{S}_0^{\mathfrak{p}}$ are: \begin{displaymath} 0\to {\color{blue}P_{s} }\to T_e\to T_t\to 0, \end{displaymath} \begin{displaymath} 0\to {\color{blue}P_{st}}\to T_{e}\to T_{ts}\to 0, \end{displaymath} \begin{displaymath} 0\to {\color{blue}P_{w_0}}\to T_{e}\to 0. \end{displaymath} The minimal (ungraded) injective coresolutions of the {\color{blue}indecomposable projectives} in $\mathcal{S}_0^{\mathfrak{p}}$ are: \begin{displaymath} 0\to {\color{blue}P_{s}}\to I_{w_0}\to I_{w_0}\to I_s \to 0, \end{displaymath} \begin{displaymath} 0\to {\color{blue}P_{st}}\to I_{w_0}\to I_{w_0}\to I_{st} \to 0, \end{displaymath} \begin{displaymath} 0\to {\color{blue}P_{w_0}}\to I_{w_0} \to 0. \end{displaymath} \section{Applications to the cohomology of twisting and Serre functors}\label{s7} \subsection{Twisting and Serre functors on $\mathcal{O}$}\label{s7.1} For a simple reflection $s$, we denote by $\top_s$ the corresponding twisting functor on $\mathcal{O}$, see \cite{AS}. For $w\in W$, with a fixed reduced expression $w=s_1s_2\cdots s_k$, we denote by $\top_w$ the composition $\top_{s_1}\top_{s_2}\cdots\top_{s_k}$ and note that it does not depend on the choice of a reduced expression by \cite{KM}. All functors $\top_w$ are right exact, functorially commute with projective functors and are acyclic on Verma modules; the corresponding derived functors are self-equivalences of the derived category of $\mathcal{O}$.
Furthermore, we have $\top_{w_0}P_x\cong T_{w_0x}$ and $\top_{w_0}T_x\cong I_{w_0x}$, for all $x\in W$. We refer to \cite{AS,KM} for all details. The functor $(\mathcal{L}\top_{w_0})^2$ is a Serre functor on $\mathcal{D}^b(\mathcal{O}_0)$, see \cite{MS2}. \subsection{Auslander regularity via Serre functors}\label{s7.2} Let $A$ be a finite dimensional associative algebra of finite global dimension over an algebraically closed field $\Bbbk$. Then the left derived functor $\mathcal{L}\mathbf{N}$ of the Nakayama functor $\mathbf{N}=A^*\otimes_A{}_-$ for $A$ is a Serre functor on $\mathcal{D}^b(A)$. Recall that $L_i$, where $i=1,2,\dots,k$, is a complete and irredundant list of simple $A$-modules, $P_i$ denotes the indecomposable projective cover of $L_i$ and $I_i$ denotes the indecomposable injective envelope of $L_i$. Let $P$ be a basic projective generator of $A$-mod and $I$ a basic injective cogenerator of $A$-mod. \begin{lemma}\label{lem21} For $M\in A$-mod, $j\in\{1,2,\dots,k\}$ and $i\in\mathbb{Z}_{\geq 0}$, we have \begin{displaymath} \dim \mathrm{Ext}^i_A(M,P_j)= (\mathcal{L}_i \mathbf{N}(M):L_j). \end{displaymath} \end{lemma} \begin{proof} Being a Serre functor, $\mathcal{L}\mathbf{N}$ is a self-equivalence of $\mathcal{D}^b(A)$. Therefore, we have \begin{displaymath} \begin{array}{rcl} \mathrm{Ext}^i_A(M,P_j)&=&\mathrm{Hom}_{\mathcal{D}^b(A)}(M,P_j[i])\\ &=&\mathrm{Hom}_{\mathcal{D}^b(A)}(\mathcal{L} \mathbf{N}(M),\mathcal{L} \mathbf{N}(P_j[i]))\\ &=&\mathrm{Hom}_{\mathcal{D}^b(A)}(\mathcal{L} \mathbf{N}(M),I_j[i]). \end{array} \end{displaymath} The claim of the lemma follows. \end{proof} The above observation has the following consequence: \begin{proposition}\label{prop22} The algebra $A$ is Auslander regular if and only if, for any simple $A$-module $L_j$, we have $\mathcal{L}_i\mathbf{N}(L_j)=0$, for all $i<\mathrm{proj.dim}(I_j)$.
\end{proposition} \begin{proof} By definition, $A$ is Auslander regular if and only if, for any simple $A$-module $L_j$, we have $\mathrm{Ext}^i(L_j,A)=0$ unless $i\geq \mathrm{proj.dim}(I_j)$. Now the necessary claim follows from Lemma~\ref{lem21}. \end{proof} \subsection{Cohomology of twisting and Serre functors for category $\mathcal{O}_0$}\label{s7.3} \begin{corollary}\label{cor23} For $w\in W$, we have $(\mathcal{L}_i\top_{w_0})^2L_w=0$, for all $0\leq i<2\mathbf{a}(w_0w)$. \end{corollary} \begin{proof} By Theorem~\ref{thm3}, $\mathcal{O}_0$ is Auslander regular. By the main results of \cite{Ma3,Ma4}, the projective dimension of $I_w$ equals $2\mathbf{a}(w_0w)$. Therefore the claim follows from Proposition~\ref{prop22}. \end{proof} Corollary~\ref{cor23} admits the following refinement. \begin{proposition}\label{prop24} For $w\in W$, we have $\mathcal{L}_i\top_{w_0}L_w=0$, for all $0\leq i<\mathbf{a}(w_0w)$. \end{proposition} \begin{proof} The injective resolution $\mathcal{I}_\bullet(L_{w_0})$ of $T_{w_0}=L_{w_0}$ is linear and is a dominant standard object in the category of linear complexes of injective modules in $\mathcal{O}_0$, by the Koszul self-duality of $\mathcal{O}_0$, see \cite{So}. Therefore, for $x\in W$, the module $I_x$ can only appear as a summand of $\mathcal{I}_i(L_{w_0})$, for $\mathbf{a}(w_0x^{-1})=\mathbf{a}(w_0x)\leq i$. This means that \begin{displaymath} \mathrm{Ext}^i_{\mathcal{O}}(L_w,T_{w_0})=0,\text{ for all } i< \mathbf{a}(w_0w). \end{displaymath} Note that, for any projective functor $\theta$, all simple subquotients $L_x$ of the module $\theta L_w$ satisfy $\mathbf{a}(w_0x)\geq \mathbf{a}(w_0w)$. Therefore, for the adjoint $\theta'$ of $\theta$, the previous paragraph implies that \begin{displaymath} \mathrm{Ext}^i_{\mathcal{O}}(L_w,\theta T_{w_0})= \mathrm{Ext}^i_{\mathcal{O}}(\theta' L_w, T_{w_0})=0, \text{ for all } i< \mathbf{a}(w_0w). 
\end{displaymath} To sum up, for any tilting module $T$, we have \begin{displaymath} \mathrm{Ext}^i_{\mathcal{O}}(L_w,T)=0, \text{ for all } i< \mathbf{a}(w_0w). \end{displaymath} Applying the equivalence $\mathcal{L}\top_{w_0}$ and noting that it sends tilting modules to injective modules, we obtain the claim of the proposition. \end{proof} Now we prove a result ``in the opposite direction''. Let $I$ be an injective cogenerator of $\mathcal{O}_0$. \begin{proposition}\label{prop27} For $w\in W$, we have $\mathcal{L}_i\top_{w_0}L_w=0$, for all $i>\ell(w_0w)$. \end{proposition} \begin{proof} We want to prove that $\mathrm{Hom}_{\mathcal{D}^b(\mathcal{O})}(\mathcal{L}\top_{w_0}L_w,I[i])=0$, for all $i>\ell(w_0w)$. Applying the adjoint of the equivalence $\mathcal{L}\top_{w_0}$, we get an equivalent statement that $\mathrm{Hom}_{\mathcal{D}^b(\mathcal{O})}(L_w,T[i])=0$, for all $i>\ell(w_0w)$, where $T$ is the characteristic tilting module in $\mathcal O_0$. Consider the linear complex $\mathcal{T}_\bullet(L_w)$ of tilting modules which represents $L_w$. By \cite{Ma}, it is a tilting object in the category of linear complexes of tilting modules. Combining the Ringel and Koszul self-dualities of $\mathcal{O}_0$, we obtain that the absolute value of the minimal non-zero component of $\mathcal{T}_\bullet(L_w)$ equals the maximal degree of a non-zero component of $T_{w_0w^{-1} w_0}$. The latter is equal to $\ell(w_0w)$. Now the necessary claim follows from \cite[Chapter III(2), Lemma 2.1]{Ha}. \end{proof} \begin{proposition}\label{prop25} For $w\in W$, we have $[\mathcal{L}_i\top_{w_0} I:L_w]\neq 0$ only if $i\leq \mathbf{a}(w_0w)$. \end{proposition} \begin{proof} Applying projective functors, the statement reduces to the special case when $I$ is substituted by $I_e=\nabla_e$. Note that $[\mathcal{L}_i\top_{w_0} \nabla_e:L_w]$ equals the dimension of $\mathrm{Hom}_{\mathcal{D}^b(\mathcal{O})}(\mathcal{L}\top_{w_0} \nabla_e,I_w[i])$.
Now, we write $\nabla_e= \mathcal{L}\top_{w_0} T_{w_0}$. Moving $(\mathcal{L}\top_{w_0})^2$ from the first argument to the second using adjunction, we arrive at the space $\mathrm{Hom}_{\mathcal{D}^b(\mathcal{O})}(T_{w_0},P_w[i])$. Now the necessary claim follows from the observation that $\mathbf{r}(P_w)=\mathbf{a}(w_0w)$, which is the Ringel dual of the main results of \cite{Ma3,Ma4}. \end{proof} \subsection{$\mathfrak{sl}_3$-example}\label{s7.4} In the case of $\mathfrak{sl}_3$, we have $W=\{e,s,t,st,ts,w_0\}$. In Figure~\ref{fig1}, we give an explicit $\mathbb{Z}$-graded description of composition factors of the tilting resolution \begin{displaymath} T_{w_0}\hookrightarrow T_{st}\oplus T_{ts}\to T_s\oplus T_t\to T_e \end{displaymath} of $\nabla_e$ and its image after applying $\mathcal{L}\top_{w_0}$. The original resolution is in {\color{magenta}magenta} and black with $\nabla_e$ being the {\color{magenta}magenta} part. The simple subquotients added during the application of $\mathcal{L}\top_{w_0}$ are {\color{blue}blue}. The resulting cohomology in negative positions is \fbox{boxed}. The module $L_w$ is denoted by $w$. The values of the $\mathbf{a}$-function are as follows: $\mathbf{a}(e)=0$, $\mathbf{a}(s)=\mathbf{a}(t)=\mathbf{a}(st)=\mathbf{a}(ts)=1$, $\mathbf{a}(w_0)=3$.
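Note that, combining Propositions~\ref{prop24} and \ref{prop27}, the cohomology $\mathcal{L}_i\top_{w_0}L_w$ can be non-zero only in the range
\begin{displaymath}
\mathbf{a}(w_0w)\leq i\leq \ell(w_0w).
\end{displaymath}
For example, for $w=s$ this range is $1\leq i\leq 2$, while for $w=w_0$ it collapses to $i=0$, in agreement with $\mathcal{L}\top_{w_0}L_{w_0}=\top_{w_0}T_{w_0}\cong I_e=\nabla_e$.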
\begin{figure}\label{fig1} \resizebox{\textwidth}{!}{$$ \xymatrix@C=0.1em@R=0.1em{ &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&{\color{magenta}w_0}&&\\ &&&&&&&&&&&&&&&&&&&&&&&&&&&&&{\color{magenta}st}&&{\color{magenta}ts}&\\ &&&&&&&&&&&&&&&&&&w_0&&&&&&w_0&&&&w_0&{\color{magenta}s}&&{\color{magenta}t}&w_0\\ &&&&&&&&&&&&&&&&&st&&ts&&&&st&&ts&&&st&ts&{\color{magenta}e}&st&ts\\ &&&\hookrightarrow&&w_0&&&&\oplus&&w_0&&&&\to& \fbox{${\color{blue}t}$}&w_0&s&w_0&&\oplus& \fbox{${\color{blue}s}$}&w_0&t&w_0&&\to&w_0&s&&t&w_0\\ &&&&&st&&{\color{blue}ts}&&&&ts&&{\color{blue}st} &&&\fbox{${\color{blue}st}$}&ts&\fbox{${\color{blue}e}$}&st&{\color{blue}ts} &&\fbox{${\color{blue}ts}$}&ts&\fbox{${\color{blue}e}$}&st&{\color{blue}st}&&&st&&ts&\\ &w_0&&&&w_0&{\color{blue}s}&&{\color{blue}t}&&& w_0&{\color{blue}t}&&{\color{blue}s}&&& {\color{blue}s}&w_0&{\color{blue}t}&&&& {\color{blue}t}&w_0&{\color{blue}s}&&&&&w_0&&\\ {\color{blue}st}&&{\color{blue}ts}&& {\color{blue}st}&&{\color{blue}ts}&\fbox{${\color{blue}e}$}&&& {\color{blue}st}&&{\color{blue}ts}&\fbox{${\color{blue}e}$}&&&&& {\color{blue}st}&&&&&&{\color{blue}ts}&&&&&&&&\\ {\color{blue}s}&&{\color{blue}t}&&&{\color{blue}s}&&&&&&{\color{blue}t}&&&&&&&&&&&&&&&&&&&&&\\ &\fbox{${\color{blue}e}$}&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&\\ } $$} \caption{$\mathcal{L}\top_{w_0}\nabla_e$ and its cohomology for $\mathfrak{sl}_3$} \end{figure} \section{Regularity phenomena with respect to twisted projective and tilting modules}\label{s8} \subsection{Twisted projective modules}\label{s8.1} Let $P$ be a projective generator of $\mathcal{O}_0$. For $w\in W$, the module $\top_w P$ is a (generalized) tilting module in $\mathcal{O}_0$ because $\top_w$ is a derived self-equivalence which is acyclic on modules with Verma flag. A question is, for which $w$ is the category $\mathcal{O}_0$ $\top_w P$-regular. Below we show that the answer is non-trivial. 
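Note the two extreme cases. For $w=e$, we have $\top_e P=P$ and $\mathcal{O}_0$ is trivially $P$-regular. For $w=w_0$, Subsection~\ref{s7.1} gives
\begin{displaymath}
\top_e P=P,\qquad \mathrm{add}(\top_{w_0}P)=\mathrm{add}\Big(\bigoplus_{x\in W}T_{w_0x}\Big)=\mathrm{add}(T),
\end{displaymath}
so $\top_{w_0}P$-regularity is exactly Auslander-Ringel regularity, which holds by Theorem~\ref{thm1}.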
\subsection{Regularity with respect to twisted projectives}\label{s8.2} \begin{theorem}\label{thm8.n1} If $w=w_0^\mathfrak{p}$, for some parabolic subalgebra $\mathfrak{p}$ in $\mathfrak{g}$, then $\mathcal{O}_0$ is $\top_w P$-regular. \end{theorem} \begin{proof} Let $w=w_0^{\mathfrak{p}}$ as above. Since twisting functors functorially commute with projective functors, we only need to show that $\Delta_e$ has a coresolution by modules in $\mathrm{add}(\top_w P)$ satisfying the regularity condition. By construction, twisting functors commute with parabolic induction. For the category $\mathcal{O}$ associated to the Levi subalgebra $\mathfrak{l}$ of $\mathfrak{p}$, the claim of our Theorem coincides with the claim of Theorem~\ref{thm1}. The parabolic induction from $\mathfrak l$ to $\mathfrak{g}$ is exact and sends the indecomposable projective $P_x^{\mathfrak l}$ (for $x\in W_{\mathfrak p}$) to the indecomposable projective $P_x$. It also sends (indecomposable) tiltings to our twisted projective modules. To see this, write the indecomposable tilting module for $\mathfrak l$ corresponding to $x\in W_{\mathfrak p}$ as $T^{\mathfrak l}_x \cong \top_wP^{\mathfrak l}_{wx}$ and use that the parabolic induction commutes with $\top_w$ to conclude that $T^{\mathfrak l}_x$ is sent to $\top_w P_{wx}$. Therefore a tilting coresolution of the dominant projective for $\mathfrak{l}$ is sent to a coresolution of the dominant projective for $\mathfrak{g}$ by our twisted projective modules. The claim follows. \end{proof} \begin{corollary}\label{cor8.n2} If $w=w_0^\mathfrak{p}$, for some parabolic subalgebra $\mathfrak{p}$ in $\mathfrak{g}$, then all blocks of $\mathcal{O}$ are $\top_w P$-regular. \end{corollary} \begin{proof} Since twisting functors functorially commute with projective functors, we can use translations to walls to extend Theorem~\ref{thm8.n1} to singular blocks. 
\end{proof} \begin{remark}\label{ex8.n3} {\em The module $\Delta_w$ admits a (linear) coresolution by tilting modules, which starts with $T_w$. Applying the inverse of $\mathcal{L}\top_w$ to this coresolution, we obtain a coresolution of $\Delta_e$ by modules in $\mathrm{add}(\top_{w^{-1}w_0}P)$. We note that, by \cite{AS}, the inverse of $\mathcal{L}\top_w$ is $\mathcal{R}(\star\circ\top_{w^{-1}}\circ\star)$, where $\star$ is the simple preserving duality, and the claim in the previous sentence follows by using the acyclicity results in \cite{AS}. Hence, a necessary condition for $\mathcal{O}_0$ to be $\top_{w^{-1}w_0}P$-regular is that the module $(\mathcal{L}\top_w)^{-1}T_w$, which starts this coresolution, is projective. In case the multiplicity of $\Delta_{w_0}$ in a standard filtration of $T_w$ is greater than $1$, the module $(\mathcal{L}\top_w)^{-1}T_w$ will have $\Delta_e$ appearing with multiplicity $1$ (as $\Delta_w$ appears in $T_w$ with multiplicity $1$), while some standard module will have higher multiplicity, by assumption. Therefore, in this case, $(\mathcal{L}\top_w)^{-1}T_w$ is not a projective module. This shows that the condition $[T_w:\Delta_{w_0}]=1$ is necessary for $\mathcal{O}_0$ to be $\top_{w^{-1}w_0}P$-regular. This implies that examples of $w\in W$ such that $\mathcal{O}_0$ is not $\top_{w}P$-regular exist already in type $A_3$. We will see in Subsection \ref{s8.5} below that $\top_{w}P$-regularity can fail already in $\mathcal{O}_0$ of type $A_2$. } \end{remark} \subsection{Twisted tilting modules}\label{s8.3} Let $T$ be a characteristic tilting module for $\mathcal{O}_0$. For $w\in W$, the module $\top_w T$ is a (generalized) tilting module in $\mathcal{O}_0$ because $\top_w$ is a derived self-equivalence which is acyclic on modules with Verma flag. This raises an interesting problem, namely, to determine for which $w$ the category $\mathcal{O}_0$ is $\top_w T$-regular. We show below that the answer is non-trivial.
\subsection{Regularity with respect to twisted tiltings}\label{s8.4} \begin{theorem}\label{thm8.n21} If $w=w_0^\mathfrak{p}$, for some parabolic subalgebra $\mathfrak{p}$ in $\mathfrak{g}$, then $\mathcal{O}_0$ is $\top_w T$-regular. \end{theorem} \begin{proof} As usual, we use that projective functors commute with twisting functors to reduce the claim to finding a desired coresolution for $P_e=\Delta_e$. Let $\mathfrak l$ be the Levi subalgebra of $\mathfrak p$ and take a coresolution of $\Delta_e^{\mathfrak l}=P^{\mathfrak l}_e$ by injectives for $\mathfrak l$ with the regularity property, guaranteed by Theorem \ref{thm3}. Then, just like in the proof of Theorem~\ref{thm8.n1}, the parabolic induction produces a coresolution of $\Delta_e$ in $\operatorname{add}(\top_w T)$ with the regularity condition. In fact, the $w_0^\mathfrak{p}$-twists of tiltings are obtained by the parabolic induction from injective modules over $\mathfrak{l}$, which are the $w_0^{\mathfrak{p}}$-twists of tiltings over $\mathfrak l$. The proof is complete. \end{proof} \begin{corollary}\label{cor8.n22} If $w=w_0^\mathfrak{p}$, for some parabolic subalgebra $\mathfrak{p}$ in $\mathfrak{g}$, then all blocks of $\mathcal{O}$ are $\top_w T$-regular. \end{corollary} \begin{proof} Since twisting functors functorially commute with projective functors, we can use translations to walls to extend the statement in Theorem \ref{thm8.n21} to singular blocks. \end{proof} We will see in Subsection \ref{s8.5} below that $\top_{w}T$-regularity can fail already in $\mathcal{O}_0$ of type $A_2$. \subsection{$\mathfrak{sl}_3$-example}\label{s8.5} For the Lie algebra $\mathfrak{sl}_3$, we have $W=\{e,s,t,st,ts,w_0=sts=tst\}$. The left of the two tables below describes the projective dimensions of the twisted projective modules $\top_x P_y$. The right table below describes the projective dimensions of the twisted tilting modules $\top_x T_y$.
\begin{displaymath} \begin{array}{c||c|c|c|c|c|c} x\backslash y&e&s&t&st&ts&w_0\\ \hline\hline e&0&0&0&0&0&0\\\hline s&1&0&1&0&1&0\\\hline t&1&1&0&1&0&0\\\hline st&2&1&1&1&1&0\\\hline ts&2&1&1&1&1&0\\\hline w_0&3&1&1&1&1&0 \end{array}\qquad\qquad \begin{array}{c||c|c|c|c|c|c} x\backslash y&e&s&t&st&ts&w_0\\ \hline\hline e&0&1&1&1&1&3\\\hline s&0&2&1&2&1&4\\\hline t&0&1&2&1&2&4\\\hline st&0&2&2&2&2&5\\\hline ts&0&2&2&2&2&5\\\hline w_0&0&2&2&2&2&6 \end{array} \end{displaymath} Here are the graded characters of the modules $\top_s P_x$ (with the characters of the tilting cores displayed in {\color{magenta}magenta}): \resizebox{\textwidth}{!}{ $$ \xymatrix@C=0.1em@R=0.1em{ \mathrm{deg}\backslash {\color{blue}x}&&& {\color{blue}e}&&&&{\color{blue}s}&&&&{\color{blue}t}&&&& {\color{blue}st}&&&&&{\color{blue}ts}&&&&&&{\color{blue}w_0}&&&\\ -1&|&&&&|&&s&&|&&&|&&&st&&&|&&&&&|&&&{\color{magenta}w_0}&&&\\ 0&|&&s&&|&st&e&ts&|&st&&|&&s&{\color{magenta}w_0}&t&&|&&{\color{magenta}w_0}&s&&|&&{\color{magenta}st}&&{\color{magenta}ts}&&\\ 1&|&st&&ts&|&s&{\color{magenta}w_0}&t&|&s&{\color{magenta}w_0}&|&st&{\color{magenta}ts}&e&{\color{magenta}st}&ts&|&{\color{magenta}st}&e&{\color{magenta}ts}&st&|&{\color{magenta}w_0}&{\color{magenta}s}&&{\color{magenta}t}&{\color{magenta}w_0}&\\ 2&|&&{\color{magenta}w_0}&&|&&{\color{magenta}st}&ts&|&st&{\color{magenta}ts}&|&&{\color{magenta}w_0}&{\color{magenta}s}&{\color{magenta}w_0}&t&|&{\color{magenta}w_0}&{\color{magenta}t}&{\color{magenta}w_0}&s&|&{\color{magenta}st}&{\color{magenta}ts}&{\color{magenta}e}&{\color{magenta}st}&{\color{magenta}ts}&\\ 3&|&&&&|&&{\color{magenta}w_0}&&|&&{\color{magenta}w_0}&|&&{\color{magenta}ts}&&{\color{magenta}st}&&|&{\color{magenta}st}&&{\color{magenta}ts}&&|&{\color{magenta}w_0}&{\color{magenta}s}&&{\color{magenta}t}&{\color{magenta}w_0}&\\ 4&|&&&&|&&&&|&&&|&&&{\color{magenta}w_0}&&&|&&{\color{magenta}w_0}&&&|&&{\color{magenta}st}&&{\color{magenta}ts}&&\\ 5&|&&&&|&&&&|&&&|&&&&&&|&&&&&|&&&{\color{magenta}w_0}&&& 
} $$ } Here are the graded characters of the modules $\top_{ts} P_x$ (with the characters of the tilting cores displayed in {\color{magenta}magenta}): \resizebox{\textwidth}{!}{ $$ \xymatrix@C=0.1em@R=0.1em{ \mathrm{deg}\backslash {\color{blue}x}&& {\color{blue}e}&&&{\color{blue}s}&&{\color{blue}t}&&& {\color{blue}st}&&&&&{\color{blue}ts}&&&&&{\color{blue}w_0}&&\\ -2&|&&|&&&|&&|&&&&&|&&&&|&&&{\color{magenta}w_0}&&&\\ -1&|&&|&ts&&|&&|&&{\color{magenta}w_0}&t&&|&&{\color{magenta}w_0}&&|&&{\color{magenta}st}&&{\color{magenta}ts}&&\\ 0&|&ts&|&t&{\color{magenta}w_0}&|&{\color{magenta}w_0}&|&{\color{magenta}ts}&e&{\color{magenta}st}&ts&|&{\color{magenta}st}&&{\color{magenta}ts}&|&{\color{magenta}w_0}&{\color{magenta}s}&&{\color{magenta}t}&{\color{magenta}w_0}&\\ 1&|&{\color{magenta}w_0}&|&ts&{\color{magenta}st}&|&{\color{magenta}ts}&|&{\color{magenta}w_0}&{\color{magenta}s}&{\color{magenta}w_0}&t&|&{\color{magenta}w_0}&{\color{magenta}t}&{\color{magenta}w_0}&|&{\color{magenta}st}&{\color{magenta}ts}&{\color{magenta}e}&{\color{magenta}st}&{\color{magenta}ts}&\\ 2&|&&|&&{\color{magenta}w_0}&|&{\color{magenta}w_0}&|&{\color{magenta}st}&&{\color{magenta}ts}&&|&{\color{magenta}st}&&{\color{magenta}ts}&|&{\color{magenta}w_0}&{\color{magenta}s}&&{\color{magenta}t}&{\color{magenta}w_0}&\\ 3&|&&|&&&|&&|&&{\color{magenta}w_0}&&&|&&{\color{magenta}w_0}&&|&&{\color{magenta}st}&&{\color{magenta}ts}&&\\ 4&|&&|&&&|&&|&&&&&|&&&&|&&&{\color{magenta}w_0}&&& } $$ } Here are the graded characters of the modules $\top_s T_x$ (with the characters of the tilting cores displayed in {\color{magenta}magenta}): \resizebox{\textwidth}{!}{ $$ \xymatrix@C=0.1em@R=0.1em{ \mathrm{deg}\backslash {\color{blue}x}&& {\color{blue}w_0}&&{\color{blue}st}&&&{\color{blue}ts}&&& {\color{blue}s}&&&&&{\color{blue}t}&&&&&{\color{blue}e}&&&\\ -4&|&&|&&&|&&|&&&&&|&&&&|&&&{\color{magenta}w_0}&&&\\ -3&|&&|&&&|&&|&&{\color{magenta}w_0}&&&|&&{\color{magenta}w_0}&&|&&{\color{magenta}st}&&{\color{magenta}ts}&&\\ 
-2&|&&|&{\color{magenta}w_0}&&|&{\color{magenta}w_0}&|&{\color{magenta}st}&&{\color{magenta}ts}&&|&{\color{magenta}st}&&{\color{magenta}ts}&|&{\color{magenta}w_0}&{\color{magenta}s}&&{\color{magenta}t}&{\color{magenta}w_0}&\\ -1&|&{\color{magenta}w_0}&|&{\color{magenta}st}&ts&|&{\color{magenta}ts}&|&{\color{magenta}w_0}&{\color{magenta}s}&{\color{magenta}w_0}&t&|&{\color{magenta}w_0}&{\color{magenta}t}&{\color{magenta}w_0}&|&{\color{magenta}st}&{\color{magenta}ts}&{\color{magenta}e}&{\color{magenta}st}&{\color{magenta}ts}&\\ 0&|&ts&|&{\color{magenta}w_0}&t&|&{\color{magenta}w_0}&|&{\color{magenta}st}&ts&{\color{magenta}ts}&e&|&{\color{magenta}st}&&{\color{magenta}ts}&|&{\color{magenta}w_0}&{\color{magenta}s}&&{\color{magenta}t}&{\color{magenta}w_0}&\\ 1&|&&|&&ts&|&&|&&{\color{magenta}w_0}&t&&|&&{\color{magenta}w_0}&&|&&{\color{magenta}st}&&{\color{magenta}ts}&&\\ 2&|&&|&&&|&&|&&&&&|&&&&|&&&{\color{magenta}w_0}&&& } $$ } Here are the graded characters of the modules $\top_{ts} T_x$ (with the characters of the tilting cores displayed in {\color{magenta}magenta}): \resizebox{\textwidth}{!}{ $$ \xymatrix@C=0.1em@R=0.1em{ \mathrm{deg}\backslash {\color{blue}x}&&& {\color{blue}w_0}&&&&{\color{blue}st}&&&&{\color{blue}ts}&&& {\color{blue}s}&&&&&&{\color{blue}t}&&&&&&{\color{blue}e}&&&\\ -5&|&&&&|&&&&|&&&|&&&&&&|&&&&&|&&&{\color{magenta}w_0}&&&\\ -4&|&&&&|&&&&|&&&|&&{\color{magenta}w_0}&&&&|&&{\color{magenta}w_0}&&&|&&{\color{magenta}st}&&{\color{magenta}ts}&&\\ -3&|&&&&|&{\color{magenta}w_0}&&&|&&{\color{magenta}w_0}&|&{\color{magenta}st}&&{\color{magenta}ts}&&&|&{\color{magenta}st}&&{\color{magenta}st}&&|&{\color{magenta}w_0}&{\color{magenta}s}&&{\color{magenta}t}&{\color{magenta}w_0}&\\ 
-2&|&&{\color{magenta}w_0}&&|&{\color{magenta}st}&ts&&|&st&{\color{magenta}ts}&|&{\color{magenta}w_0}&{\color{magenta}s}&{\color{magenta}w_0}&t&&|&{\color{magenta}w_0}&{\color{magenta}t}&{\color{magenta}w_0}&s&|&{\color{magenta}st}&{\color{magenta}ts}&{\color{magenta}e}&{\color{magenta}st}&{\color{magenta}ts}&\\ -1&|&st&&ts&|&{\color{magenta}w_0}&t&s&|&s&{\color{magenta}w_0}&|&{\color{magenta}st}&ts&{\color{magenta}ts}&e&st&|&{\color{magenta}st}&e&{\color{magenta}ts}&st&|&{\color{magenta}w_0}&{\color{magenta}s}&&{\color{magenta}t}&{\color{magenta}w_0}&\\ 0&|&&s&&|&ts&st&e&|&st&&|&&{\color{magenta}w_0}&t&s&&|&&{\color{magenta}w_0}&s&&|&&{\color{magenta}st}&&{\color{magenta}ts}&&\\ 1&|&&&&|&&s&&|&&&|&&&st&&&|&&&&&|&&&{\color{magenta}w_0}&&& } $$ } The cases $x=e$ and $w_0$ are already discussed in the previous sections. To prove regularity, we only need to consider the coresolution of {\color{blue}$P_e$}. Up to the symmetry of the Dynkin diagram, it is enough to consider the four cases $\top_sP$, $\top_{ts}P$, $\top_sT$ and $\top_{ts}T$. The first two are given as follows: \begin{displaymath} 0\to{\color{blue}P_e}\to \top_s P_s\to \top_s P_e\to 0 \end{displaymath} \begin{displaymath} 0\to{\color{blue}P_e}\to \top_{ts} P_{st}\to \top_{ts} P_s\oplus \top_{ts}P_t\to\top_{ts}P_e\to 0 \end{displaymath} Here we see that the first coresolution is regular, while in the second one, $\top_{ts} P_{st}$ is not projective and hence we do not have regularity with respect to $\top_{ts} P$. The case of $\top_sT$ is regular and given as follows: \begin{displaymath} 0\to{\color{blue}P_e}\to \top_s T_e\to \top_s T_t\oplus \top_s T_{ts}\oplus \top_s T_e\to \top_s T_{ts}\oplus \top_s T_t\oplus \top_s T_s\to \top_s T_{ts}\oplus \top_s T_{st}\to \top_s T_{w_0}\to 0 \end{displaymath} Finally, we claim that we do not have the regularity in the case of $\top_{ts}T$. 
Indeed, in order not to fail already in position zero, we must start with $0\to{\color{blue}P_e}\to \top_{ts}T_e\to\mathrm{Coker}$. Further, in order not to fail on the next step, we again must embed $\mathrm{Coker}$ into $\top_{ts}T_e\oplus \top_{ts}T_e$. The new cokernel will necessarily have both $L_s$ and $L_t$ in the socle. However, $L_t$ does not appear in the socle of $\top_{ts}T$ and hence the coresolution cannot continue. This implies that one of the first two steps requires correction by adding non-projective summands of $\top_{ts}T$, which implies the failure of the regularity. \section{Regularity phenomena with respect to shuffled projective and tilting modules}\label{s9} \subsection{Shuffled projective modules}\label{s9.1} For $w\in W$, we denote by $\mathrm{C}_w$ the corresponding shuffling functor on $\mathcal{O}_0$, see \cite[Section~5]{MS}. Let $P$ be a projective generator of $\mathcal{O}_0$. For $w\in W$, the module $\mathrm{C}_w P$ is a (generalized) tilting module in $\mathcal{O}_0$ because $\mathrm{C}_w$ is a derived self-equivalence. Thus, a problem is to determine for which $w$ the category $\mathcal{O}_0$ is $\mathrm{C}_w P$-regular. This problem looks much harder than the one involving the twisting functors, due to the fact that shuffling functors do not commute with projective functors. \subsection{Regularity with respect to shuffled projectives}\label{s9.2} \begin{proposition}\label{prop9.n1} If $s$ is a simple reflection, then $\mathcal{O}_0$ is $\mathrm{C}_s P$-regular. \end{proposition} \begin{proof} The functor $\mathrm{C}_s$ is defined as the cokernel of the adjunction morphism $\mathrm{adj}_s:\theta_e\to\theta_s$. If $x\in W$ is such that $xs<x$, then $\mathrm{C}_s P_x\cong P_x$. 
If $x\in W$ is such that $xs>x$, then $\mathrm{C}_s P_x$ has projective dimension $1$ and a minimal projective resolution of the following form: \begin{equation}\label{eq9.n1-1} 0\to P_x\to\theta_s P_x\to \mathrm{C}_s P_x \to 0, \end{equation} where any summand $P_y$ of $\theta_s P_x$ satisfies $ys<y$ and hence $\mathrm{C}_s P_y=P_y$. The latter implies that \eqref{eq9.n1-1} can be viewed as a coresolution of $P_x$ by modules in $\mathrm{add}(\mathrm{C}_s P)$ and it is manifestly regular. The claim follows. \end{proof} Proposition~\ref{prop9.n1} and Theorem \ref{thm8.n1} motivate the following: \begin{conjecture}\label{conj5.n2} If $w_0^{\mathfrak{p}}$ is the longest element in some parabolic subgroup of $W$, then $\mathcal{O}_0$ is $\mathrm{C}_{w_0^{\mathfrak{p}}} P$-regular. \end{conjecture} Similarly to Subsection~\ref{s8.5} one can show that $\mathcal{O}_0$ is not $\mathrm{C}_{st} P$-regular for $\mathfrak{g}=\mathfrak{sl}_3$. \subsection{Shuffled tilting modules}\label{s9.3} Let $T$ be a characteristic tilting module for $\mathcal{O}_0$. For $w\in W$, the module $\mathrm{C}_w T$ is a (generalized) tilting module in $\mathcal{O}_0$ because $\mathrm{C}_w$ induces a derived self-equivalence which is acyclic on tilting modules (the latter follows by combining \cite[Proposition~5.3]{MS} and \cite[Theorem~5.16]{MS}). It seems to be an interesting problem to determine, for which $w$, the category $\mathcal{O}_0$ is $\mathrm{C}_w T$-regular. Again, this problem looks much harder than the one involving the twisting functors due to the fact that shuffling functors do not commute with projective functors. \subsection{Regularity with respect to shuffled tiltings}\label{s9.4} \begin{proposition}\label{prop9.n2} If $s$ is a simple reflection, then $\mathcal{O}_0$ is $\mathrm{C}_s T$-regular. \end{proposition} \begin{proof} This is very similar to the proof of Proposition~\ref{prop9.n1}. If $x\in W$ is such that $xs>x$, then $\mathrm{C}_s T_x\cong T_x$. 
If $x\in W$ is such that $xs<x$, then $\mathrm{C}_s T_x$ has a tilting resolution of the following form: \begin{equation}\label{eq9.n1-2} 0\to T_x\to\theta_s T_x\to \mathrm{C}_s T_x \to 0, \end{equation} where any summand $T_y$ of $\theta_s T_x$ satisfies $ys>y$ and hence $\mathrm{C}_s T_y=T_y$. Also, since $\theta_s$ is exact, the projective dimension of $\theta_s T_x$ does not exceed that of $T_x$. Consequently, the projective dimension of $\mathrm{C}_s T_x$ is bounded by the projective dimension of $T_x$ plus $1$. We can now take a minimal tilting coresolution of $P$, which we know has the regularity property, and coresolve each summand $T_x$, for $xs<x$, in this resolution using \eqref{eq9.n1-2}. The outcome is a regular coresolution of $P$ by modules in $\mathrm{add}(\mathrm{C}_sT)$. This completes the proof. \end{proof} Proposition~\ref{prop9.n2} motivates the following: \begin{conjecture}\label{conj5.n2T} If $w_0^{\mathfrak{p}}$ is the longest element in some parabolic subgroup of $W$, then $\mathcal{O}_0$ is $\mathrm{C}_{w_0^{\mathfrak{p}}} T$-regular. \end{conjecture} Similarly to Subsection~\ref{s8.5}, one can show that $\mathcal{O}_0$ is not $\mathrm{C}_{st} T$-regular for $\mathfrak{g}=\mathfrak{sl}_3$. \subsection{$\mathfrak{sl}_3$-example}\label{s9.5} Let $\mathfrak g=\mathfrak{sl}_3$. Denote $W=\{e,s,t,st,ts,w_0=sts=tst\}$ as before. The left of the two tables below describes the projective dimensions of the shuffled projective modules $\mathrm{C}_x P_y$. The right table below describes the projective dimensions of the shuffled tilting modules $\mathrm{C}_x T_y$.
\begin{displaymath} \begin{array}{c||c|c|c|c|c|c} x\backslash y&e&s&t&st&ts&w_0\\ \hline\hline e&0&0&0&0&0&0\\\hline s&1&0&1&1&0&0\\\hline t&1&1&0&0&1&0\\\hline st&2&1&1&1&1&0\\\hline ts&2&1&1&1&1&0\\\hline w_0&3&1&1&1&1&0 \end{array}\qquad\qquad \begin{array}{c||c|c|c|c|c|c} x\backslash y&e&s&t&st&ts&w_0\\ \hline\hline e&0&1&1&1&1&3\\\hline s&0&2&1&1&2&4\\\hline t&0&1&2&2&1&4\\\hline st&0&2&2&2&2&5\\\hline ts&0&2&2&2&2&5\\\hline w_0&0&2&2&2&2&6 \end{array} \end{displaymath} In the examples below, we note the following difference with the case of twisting functors: we do not know whether the notion of a ``tilting core'' makes sense for shuffled projective and tilting modules. Here are the graded characters of the modules $\mathrm{C}_s P_x$: \resizebox{\textwidth}{!}{ $$ \xymatrix@C=0.1em@R=0.1em{ \mathrm{deg}\backslash {\color{blue}x}&&& {\color{blue}e}&&&&{\color{blue}s}&&&&{\color{blue}t}&&& {\color{blue}st}&&&&&&{\color{blue}ts}&&&&&{\color{blue}w_0}&&&\\ -1&|&&&&|&&s&&|&&&|&&&&&|&&&ts&&&|&&&{{}w_0}&&&\\ 0&|&&s&&|&st&e&ts&|&ts&&|&&{{}w_0}&s&&|&&t&{{}w_0}&s&&|&{{}st}&&{{}ts}&&\\ 1&|&st&&ts&|&s&{{}w_0}&t&|&s&{{}w_0}&|&{{}ts}&e&{{}st}&ts&|&ts&{{}st}&e&{{}ts}&st&|&{{}w_0}&{{}s}&&{{}t}&{{}w_0}&\\ 2&|&&{{}w_0}&&|&&{{}st}&ts&|&ts&{{}st}&|&{{}w_0}&{{}s}&{{}w_0}&t&|&&{{}w_0}&{{}t}&{{}w_0}&s&|&{{}st}&{{}ts}&{{}e}&{{}st}&{{}ts}&\\ 3&|&&&&|&&{{}w_0}&&|&{{}w_0}&&|&{{}ts}&&{{}st}&&|&&{{}st}&&{{}ts}&&|&{{}w_0}&{{}s}&&{{}t}&{{}w_0}&\\ 4&|&&&&|&&&&|&&&|&&{{}w_0}&&&|&&&{{}w_0}&&&|&&{{}st}&&{{}ts}&&\\ 5&|&&&&|&&&&|&&&|&&&&&|&&&&&&|&&&{{}w_0}&&& } $$ } Here are the graded characters of the modules $\mathrm{C}_{st} P_x$: \resizebox{\textwidth}{!}{ $$ \xymatrix@C=0.1em@R=0.1em{ \mathrm{deg}\backslash {\color{blue}x}&& {\color{blue}e}&&&{\color{blue}s}&&{\color{blue}t}&&& {\color{blue}st}&&&&{\color{blue}ts}&&&&&&{\color{blue}w_0}&&\\ -2&|&&|&&&|&&|&&&&|&&&&&|&&&{{}w_0}&&&\\ -1&|&&|&st&&|&&|&&{{}w_0}&&|&&{{}w_0}&t&&|&&{{}st}&&{{}ts}&&\\ 
0&|&st&|&t&{{}w_0}&|&{{}w_0}&|&{{}st}&&ts&|&{{}st}&e&{{}ts}&st&|&{{}w_0}&{{}s}&&{{}t}&{{}w_0}&\\ 1&|&{{}w_0}&|&st&{{}ts}&|&{{}st}&|&{{}w_0}&t&{{}w_0}&|&{{}w_0}&{{}s}&{{}w_0}&t&|&{{}st}&{{}ts}&{{}e}&{{}st}&{{}ts}&\\ 2&|&&|&&{{}w_0}&|&{{}w_0}&|&{{}st}&&{{}ts}&|&{{}st}&&{{}ts}&&|&{{}w_0}&{{}s}&&{{}t}&{{}w_0}&\\ 3&|&&|&&&|&&|&&{{}w_0}&&|&&{{}w_0}&&&|&&{{}st}&&{{}ts}&&\\ 4&|&&|&&&|&&|&&&&|&&&&&|&&&{{}w_0}&&& } $$ } Here are the graded characters of the modules $\mathrm{C}_s T_x$: \resizebox{\textwidth}{!}{ $$ \xymatrix@C=0.1em@R=0.1em{ \mathrm{deg}\backslash {\color{blue}x}&& {\color{blue}w_0}&&{\color{blue}st}&&{\color{blue}ts}&&&& {\color{blue}s}&&&&&{\color{blue}t}&&&&&{\color{blue}e}&&&\\ -4&|&&|&&|&&&|&&&&&|&&&&|&&&{{}w_0}&&&\\ -3&|&&|&&|&&&|&&{{}w_0}&&&|&&{{}w_0}&&|&&{{}st}&&{{}ts}&&\\ -2&|&&|&{{}w_0}&|&{{}w_0}&&|&{{}st}&&{{}ts}&&|&{{}st}&&{{}ts}&|&{{}w_0}&{{}s}&&{{}t}&{{}w_0}&\\ -1&|&{{}w_0}&|&{{}st}&|&{{}ts}&st&|&{{}w_0}&{{}s}&{{}w_0}&t&|&{{}w_0}&{{}t}&{{}w_0}&|&{{}st}&{{}ts}&{{}e}&{{}st}&{{}st}&\\ 0&|&ts&|&{{}w_0}&|&{{}w_0}&t&|&{{}st}&ts&{{}st}&e&|&{{}st}&&{{}ts}&|&{{}w_0}&{{}s}&&{{}t}&{{}w_0}&\\ 1&|&&|&&|&&st&|&&{{}w_0}&t&&|&&{{}w_0}&&|&&{{}st}&&{{}ts}&&\\ 2&|&&|&&|&&&|&&&&&|&&&&|&&&{{}w_0}&&& } $$ } Here are the graded characters of the modules $\mathrm{C}_{st} T_x$: \resizebox{\textwidth}{!}{ $$ \xymatrix@C=0.1em@R=0.1em{ \mathrm{deg}\backslash {\color{blue}x}&&& {\color{blue}w_0}&&&{\color{blue}st}&&&&{\color{blue}ts}&&&& {\color{blue}s}&&&&&&{\color{blue}t}&&&&&&{\color{blue}e}&&&\\ -5&|&&&&|&&&|&&&&|&&&&&&|&&&&&|&&&{{}w_0}&&&\\ -4&|&&&&|&&&|&&&&|&&{{}w_0}&&&&|&&{{}w_0}&&&|&&{{}st}&&{{}ts}&&\\ -3&|&&&&|&{{}w_0}&&|&&{{}w_0}&&|&{{}st}&&{{}ts}&&&|&{{}st}&&{{}st}&&|&{{}w_0}&{{}s}&&{{}t}&{{}w_0}&\\ -2&|&&{{}w_0}&&|&{{}ts}&st&|&st&{{}ts}&&|&{{}w_0}&{{}s}&{{}w_0}&t&&|&{{}w_0}&{{}t}&{{}w_0}&s&|&{{}st}&{{}ts}&{{}e}&{{}st}&{{}ts}&\\ -1&|&st&&ts&|&{{}w_0}&s&|&s&{{}w_0}&t&|&{{}st}&ts&{{}ts}&e&st&|&{{}ts}&e&{{}ts}&st&|&{{}w_0}&{{}s}&&{{}t}&{{}w_0}&\\ 
0&|&&s&&|&&st&|&st&e&ts&|&&{{}w_0}&t&s&&|&&{{}w_0}&s&&|&&{{}st}&&{{}ts}&&\\ 1&|&&&&|&&&|&&s&&|&&&ts&&&|&&&&&|&&&{{}w_0}&&& } $$ } The non-trivial (ungraded) coresolutions of projectives using $\mathrm{C}_s P$ are: \begin{gather*} 0\to{\color{blue}P_e}\to \mathrm{C}_s P_s\to \mathrm{C}_s P_e\to 0,\\ 0\to{\color{blue}P_t}\to \mathrm{C}_s P_{ts}\to \mathrm{C}_s P_t\to 0,\\ 0\to{\color{blue}P_{st}}\to \mathrm{C}_s P_s\oplus \mathrm{C}_s P_s\to \mathrm{C}_s P_{st}\to 0. \end{gather*} These all are, clearly, regular. Next we claim that $P_e$ does not have a regular coresolution using $\mathrm{C}_{st} P$. Indeed, to have a chance at the zero step, we must embed $P_e$ into $\mathrm{C}_{st} P_{w_0}$. Let $\mathrm{Coker}$ be the cokernel. In order to embed $\mathrm{Coker}$, in the next step we need a copy of $\mathrm{C}_{st} P_{st}$ or $\mathrm{C}_{st} P_{w_0}$ and another copy of $\mathrm{C}_{st} P_{ts}$ or $\mathrm{C}_{st} P_{w_0}$. Either way, the new cokernel will have a copy of $L_t$ in the socle, while it is easy to see that no module in $\mathrm{add}(\mathrm{C}_{st} P)$ has $L_t$ in the socle, a contradiction. The non-trivial (ungraded) coresolutions of projectives using $\mathrm{C}_s T$ are: \begin{gather*} \resizebox{\textwidth}{!}{$ 0\to{\color{blue}P_e}\to \mathrm{C}_s T_e\to \mathrm{C}_s T_t \oplus \mathrm{C}_s T_e\oplus \mathrm{C}_s T_{st}\to \mathrm{C}_s T_s\oplus \mathrm{C}_s T_{st}\oplus \mathrm{C}_s T_{t}\to \mathrm{C}_s T_{ts}\oplus \mathrm{C}_s T_{st}\to \mathrm{C}_s T_{w_0} \to 0,$}\\ 0\to{\color{blue}P_s}\to \mathrm{C}_s T_{e}\to \mathrm{C}_s T_t\to 0,\\ 0\to{\color{blue}P_t}\to \mathrm{C}_s T_{e}\to \mathrm{C}_s T_e\oplus \mathrm{C}_s T_{st}\to \mathrm{C}_s T_{s}\to 0,\\ 0\to{\color{blue}P_{st}}\to \mathrm{C}_s T_e\to \mathrm{C}_s T_{t}\to\mathrm{C}_s T_{ts}\to 0\\ 0\to{\color{blue}P_{ts}}\to \mathrm{C}_s T_e\to \mathrm{C}_s T_{st}\to 0. \end{gather*} These all are, clearly, regular. 
\section{Projective dimension of indecomposable twisted and shuffled projectives and tiltings}\label{s10} \subsection{Projective dimension of twisted projectives}\label{s10.1} The results of Subsection~\ref{s8.2} motivate the problem to determine the projective dimension of twisted projective modules in $\mathcal{O}$. Since twisting functors commute with projective functors, twisted projective modules are exactly the modules obtained by applying projective functors to Verma modules: \begin{equation} \top_xP_y\cong \top_x\theta_y\Delta_e\cong \theta_y\top_x\Delta_e\cong \theta_y\Delta_x. \end{equation} This allows us to reformulate the problem as follows: \begin{problem}\label{probs10-1} For $x,y\in W$, determine the projective dimension of the module $\theta_x\Delta_y$. \end{problem} Here are some basic observations about this problem: \begin{itemize} \item If $y=e$, the module $\theta_x\Delta_e$ is projective and hence the answer is $0$. \item If $y=w_0$, the module $\theta_x\Delta_{w_0}$ is a tilting module and hence the answer is $\mathbf{a}(w_0x)$, see \cite{Ma3,Ma4}. \item If $x=e$, the answer is $\ell(y)$, see \cite{Ma3}. \item If $x=w_0$, we have $\theta_{w_0}\Delta_y=P_{w_0}$ and the answer is $0$. \item For a fixed $y$, the answer is weakly monotone in $x$, with respect to the right Kazhdan-Lusztig order, in particular, the answer is constant on the right Kazhdan-Lusztig cell of $x$. \item For a simple reflection $s$, we have $\theta_x\Delta_y=\theta_x\Delta_{ys}$ provided that $\ell(sx)<\ell(x)$, in particular, it is enough to consider the situation where $x$ is a Duflo involution and $y$ is a shortest (or longest) element in a coset from $W/W'$, where $W'$ is the parabolic subgroup of $W$ generated by all simple reflections in the left descent set of $x$. 
\item If $x=w_0^{\mathfrak{p}}$, for some parabolic $\mathfrak{p}$, then the projective dimension of $\theta_{w_0^{\mathfrak{p}}}\Delta_y$ coincides with the projective dimension of the singular Verma module obtained by translating $\Delta_y$ to the wall corresponding to $w_0^{\mathfrak{p}}$. This can be computed in terms of a certain function $\mathtt{d}_\lambda$, see \cite[Table~2]{CM} (see also \cite[Formula~(1.2)]{CM} and \cite[Remark~6.9]{KMM}). \end{itemize} The last observation suggests that Problem~\ref{probs10-1} might not be easy. Also, note that, by Koszul duality, the problem to determine the projective dimension of a singular Verma module is equivalent to the problem to determine the graded length of a parabolic Verma module. The latter is certainly ``combinatorial'' in the sense that the answer can be formulated purely in terms of Kazhdan-Lusztig combinatorics. Let $\mathbf{H}$ denote the Hecke algebra of $W$ (over $\mathbb{A}=\mathbb{Z}[v,v^{-1}]$ and in the normalization of \cite{So3}) with standard basis $\{H_w\,:\,w\in W\}$ and Kazhdan-Lusztig basis $\{\underline{H}_w\,:\,w\in W\}$. Consider the structure constants $h_{x,y}^z\in \mathbb{A}$ with respect to the KL-basis, that is \begin{displaymath} \underline{H}_x\underline{H}_y=\sum_{z\in W} h_{x,y}^z\underline{H}_z. \end{displaymath} In \cite[Subsection~6.3]{KMM}, for $x,y\in W$, we defined the function $\mathbf{b}:W\times W\to \mathbb{Z}_{\geq 0} \sqcup \{-\infty\}$ as follows: \begin{displaymath} \mathbf{b}(x,y):=\max\{\deg(h_{z,x^{-1}}^y)\,:\,z\in W\}. \end{displaymath} (By our convention the degree of the zero polynomial is $-\infty$.) The value $\mathbf{b}(x,y)$ is, if not $-\infty$, equal to the maximal degree of a non-zero graded component of $\theta_x L_y$, and also to the maximal non-zero position in the minimal complex of tilting modules representing $\theta_{y^{-1}w_0} L_{w_0x^{-1}}$.
Here is an upper bound for the projective dimension of $\theta_x\Delta_y$ expressed in terms of the $\mathbf{b}$-function. \begin{proposition}\label{props10-n5} For all $x,y\in W$, we have: \begin{enumerate} \item\label{props10-n5.1} $\mathrm{proj.dim}\,\theta_x\Delta_y\leq \max\{\mathbf{b}(w_0a^{-1}w_0,x^{-1}w_0)\,:\,a\leq y\}$. \item\label{props10-n5.2} If the maximum in \eqref{props10-n5.1} coincides with $\mathbf{b}(w_0y^{-1}w_0,x^{-1}w_0)$, then the latter value is equal to $\mathrm{proj.dim}\,\theta_x\Delta_y$. \end{enumerate} \end{proposition} \begin{proof} For $x,y,z\in W$ and $k\in\mathbb{Z}_{\geq 0}$, by adjunction, we have \begin{displaymath} \mathrm{Ext}^k_{\mathcal{O}}(\theta_x\Delta_y,L_z)\cong \mathrm{Ext}^k_{\mathcal{O}}(\Delta_y,\theta_{x^{-1}}L_z). \end{displaymath} By \cite{Ma4}, the module $\theta_{x^{-1}}L_z$ can be represented by a linear complex of tilting modules. Moreover, the multiplicity of $T_{a}\langle k\rangle[-k]$ in this complex coincides with the composition multiplicity of $L_{w_0a^{-1}w_0}\langle k\rangle$ in $\theta_{z^{-1}w_0}L_{w_0x}$. A costandard filtration of $T_{a}\langle k\rangle[-k]$ can contain $\nabla_y$ only when $a\leq y$, and hence only such summands $T_{a}\langle k\rangle[-k]$ in the tilting complex can, potentially, give rise to a non-zero element in $\mathrm{Ext}^k_{\mathcal{O}}(\Delta_y,\theta_{x^{-1}}L_z)$. Here we use the fact that standard and costandard modules are homologically orthogonal and hence derived homomorphisms can be constructed already on the level of the homotopy category. This implies claim~\eqref{props10-n5.1}. To prove claim~\eqref{props10-n5.2}, assume \begin{displaymath} k:= \mathbf{b}(w_0y^{-1}w_0,x^{-1}w_0)= \max\{\mathbf{b}(w_0a^{-1}w_0,x^{-1}w_0)\,:\,a\leq y\}. \end{displaymath} The canonical map $\Delta_y\to T_y$ gives rise to a homomorphism from $\Delta_y\langle k\rangle$ to the $k$-th homological position of the linear complex of tilting modules representing $\theta_{x^{-1}}L_z$.
Because of the maximality assumption on $k$, there are no homomorphisms from $\Delta_y$ to the $(k+1)$-st homological position. This means that the map from the previous sentence is a homomorphism of complexes. It is not homotopic to zero since the complex representing $\theta_{x^{-1}}L_z$ is linear and $T_{y}\langle k\rangle[-k]$ is in a diagonal position in this complex. The corresponding level at position $k-1$ does not contain the socle of any costandard module since all indecomposable tilting summands there are shifted by one in the positive direction of the grading. This means that the map we constructed gives a non-zero extension. Hence claim~\eqref{props10-n5.2} now follows from claim~\eqref{props10-n5.1}. \end{proof} \begin{corollary}\label{cor10-n6} For any parabolic $\mathfrak{p}$, in case $x\leq_{\mathtt{R}}w^{\mathfrak{p}}_0w_0$, we have $\mathrm{proj.dim}\,\theta_x\Delta_{w^{\mathfrak{p}}_0}= \ell(w^{\mathfrak{p}}_0)$. \end{corollary} \begin{proof} If $x\leq_{\mathtt{R}}w^{\mathfrak{p}}_0w_0$, then \cite[Proposition~6.8]{KMM} implies $\mathbf{b}(w_0w^{\mathfrak{p}}_0 w_0,x^{-1}w_0)=\ell(w^{\mathfrak{p}}_0)$. For any $a\leq w_0^{\mathfrak{p}}$, we also have \begin{displaymath} \mathbf{b}(w_0aw_0,x^{-1}w_0)\leq \ell(a)\leq \ell(w^{\mathfrak{p}}_0)=\mathbf{b}(w_0w^{\mathfrak{p}}_0 w_0,x^{-1}w_0), \end{displaymath} also using \cite[Proposition~6.8]{KMM}. Hence the claim follows from Proposition~\ref{props10-n5}\eqref{props10-n5.2}. \end{proof} \subsection{Projective dimension of twisted tiltings}\label{s10.3} The results of Subsection~\ref{s8.4} motivate the problem to determine the projective dimension of twisted tilting modules in $\mathcal{O}$.
By \begin{equation} \top_xT_{w_0y}\cong \top_x\theta_yT_{w_0}\cong\top_x\theta_y\nabla_{w_0}\cong \theta_y\top_x\nabla_{w_0}\cong \theta_y\nabla_{xw_0}, \end{equation} we reformulate the problem as follows: \begin{problem}\label{probs10-15} For $x,y\in W$, determine the projective dimension of the module $\theta_x\nabla_y$. \end{problem} Here are some basic observations about this problem: \begin{itemize} \item If $y=w_0$, the module $\theta_x\nabla_{w_0}$ is tilting and hence the answer is $\mathbf{a}(w_0x)$, see \cite{Ma3,Ma4}. \item If $y=e$, the module $\theta_x\nabla_{e}$ is an indecomposable injective module and hence the answer is $2\mathbf{a}(w_0x)$, see \cite{Ma3,Ma4}. \item If $x=e$, the answer is $2\ell(w_0)-\ell(y)$, see \cite{Ma3}. \item If $x=w_0$, we have $\theta_{w_0}\nabla_y=P_{w_0}$ and the answer is $0$. \item For a fixed $y$, the answer is weakly monotone in $x$, with respect to the right Kazhdan-Lusztig order, in particular, the answer is constant on the right Kazhdan-Lusztig cell of $x$. \item For a simple reflection $s$, we have $\theta_x\nabla_y=\theta_x\nabla_{ys}$ provided that $\ell(sx)<\ell(x)$, in particular, it is enough to consider the situation where $x$ is a Duflo involution and $y$ is a shortest (or longest) element in a coset from $W/W'$, where $W'$ is the parabolic subgroup of $W$ generated by all simple reflections in the left descent set of $x$. \item If $x=w_0^{\mathfrak{p}}$, for some parabolic $\mathfrak{p}$, then the projective dimension of $\theta_{w_0^{\mathfrak{p}}}\nabla_y$ coincides with the projective dimension of the singular dual Verma module obtained by translating $\nabla_y$ to the wall corresponding to $w_0^{\mathfrak{p}}$. This can be computed in terms of a certain function $\mathtt{d}_\lambda$, see \cite[Table~2]{CM} (see also \cite[Formula~(1.2)]{CM} and \cite[Remark~6.9]{KMM}).
\end{itemize} Let us now observe that $\nabla_y\cong \top_{w_0}\Delta_{w_0y}$ and that $\theta_x\nabla_y\cong \top_{w_0}\theta_x\Delta_{w_0y}$ since twisting and projective functors commute. We conjecture the following connection between Problems~\ref{probs10-1} and \ref{probs10-15}. \begin{conjecture}\label{conj10-151} For $x,y\in W$, we have $\mathrm{proj.dim}\, \theta_x\nabla_y=\mathbf{a}(w_0 x)+ \mathrm{proj.dim}\,\theta_x\Delta_{w_0y}$. \end{conjecture} Below we present some evidence for Conjecture~\ref{conj10-151}. \begin{proposition}\label{prop10-152} For $x,y\in W$, we have $\mathrm{proj.dim}\, \theta_x\nabla_y\leq \mathbf{a}(w_0 x)+ \mathrm{proj.dim}\,\theta_x\Delta_{w_0y}$. \end{proposition} \begin{proof} Assume that $\mathrm{proj.dim}\,\theta_x\Delta_{w_0y}=k$ and let $\mathcal{P}_{\bullet}$ be a minimal projective resolution of $\theta_x\Delta_{w_0y}$. Applying $\top_{w_0}$ to $\mathcal{P}_{\bullet}$, we get a minimal tilting resolution of $\theta_x\nabla_y$ (of length $k$). To obtain a projective resolution of $\theta_x\nabla_y$, we need to projectively resolve each indecomposable tilting module $T_u$ appearing in $\top_{w_0}\mathcal{P}_{\bullet}$ and glue all these resolutions together. In particular, $\mathrm{proj.dim}\, \theta_x\nabla_y$ is bounded by $k$ plus the maximal value of $\mathrm{proj.dim}\, T_u$, for $T_u$ appearing in $\top_{w_0}\mathcal{P}_{\bullet}$. Note that any indecomposable projective $P_v$ appearing in $\mathcal{P}_{\bullet}$ satisfies $v\geq_\mathtt{L} x$, because it is a summand of $\theta_x P_w$, for some $w$. Therefore $u=w_0v$ satisfies $u\leq_\mathtt{L} w_0x$. In particular, we have $\mathbf{a}(u)\leq \mathbf{a}(w_0x)$. By \cite{Ma3,Ma4}, the projective dimension of $T_u$ equals $\mathbf{a}(u)$. The claim of the proposition follows. \end{proof} \begin{corollary}\label{cor10-153} For $x,y\in W$, let $\mathrm{proj.dim}\,\theta_x\Delta_{w_0y}=k$. 
Assume that there exists $v\in W$ such that $v\sim_\mathtt{L} x$ and $\mathrm{Ext}^k_{\mathcal{O}}(\theta_x\Delta_{w_0y},L_v)\neq 0$. Then $\mathrm{proj.dim}\, \theta_x\nabla_y= \mathbf{a}(w_0 x)+ \mathrm{proj.dim}\,\theta_x\Delta_{w_0y}$. \end{corollary} \begin{proof} Let us look closely at the proof of Proposition~\ref{prop10-152}. From \cite[Section~6]{KMM}, it follows that there exists $w\in W$ such that $T_{w_0v}$ appears in position $\mathbf{a}(w_0x)$ of a minimal tilting complex $\mathcal{T}_\bullet$ representing $L_w$ and, moreover, this position $\mathbf{a}(w_0x)$ is a maximal non-zero position in $\mathcal{T}_\bullet$. The module $T_{w_0v}$ appears as a summand in $\top_{w_0}\mathcal{P}_{-k}$ and in $\mathcal{T}_{\mathbf{a}(w_0x)}$. Similarly to the proof of \cite[Theorem~1]{MO2}, the identity map on $T_{w_0v}$ induces a non-zero map from $\top_{w_0}\mathcal{P}_{\bullet}$ to $\mathcal{T}_\bullet[\mathbf{a}(w_0 x)+k]$ in the homotopy category and hence gives rise to a non-zero extension from $\theta_x\nabla_y$ to $L_w$ of degree $\mathbf{a}(w_0 x)+k$, by construction. Therefore $\mathrm{proj.dim}\, \theta_x\nabla_y\geq \mathbf{a}(w_0 x)+ \mathrm{proj.dim}\,\theta_x\Delta_{w_0y}$ and the claim of the corollary follows from Proposition~\ref{prop10-152}. \end{proof} We note that the condition ``there exists $v\in W$ such that $v\sim_\mathtt{L} x$ and $\mathrm{Ext}^k_{\mathcal{O}}(\theta_x\Delta_{w_0y},L_v)\neq 0$'' in Corollary~\ref{cor10-153} is very similar to \cite[Conjecture~1.3]{KMM} proved in \cite[Theorem~A]{KMM}. We suspect that this condition is always satisfied. \subsection{Projective dimension of shuffled projectives}\label{s10.2} The results of Subsection~\ref{s9.2} motivate the problem of determining the projective dimension of shuffled projective modules in $\mathcal{O}$. \begin{problem}\label{probs10-2} For $x,y\in W$, determine the projective dimension of the module $\mathrm{C}_x P_y$.
\end{problem} This problem looks much harder than the one for the twisted projective modules, mostly because shuffling functors do not commute with projective functors, in general. Here are some basic observations about this problem: \begin{itemize} \item If $x=e$, the module $\mathrm{C}_e P_y$ is projective and hence the answer is $0$. \item If $x=w_0$, the module $\mathrm{C}_{w_0} P_y$ is the tilting module $T_{yw_0}$ (this follows from \cite[Proposition~2.2, Proposition~4.4]{MS2} by a character argument). Hence the answer is $\mathbf{a}(yw_0)$ by \cite{Ma3,Ma4}. \item If $y=e$, we have $\mathrm{C}_x P_e\cong\mathrm C_x \Delta_e\cong \Delta_x$ and the answer is $\ell(x)$, see \cite{MS,Ma3}. \item If $y=w_0$, we have $\mathrm{C}_x P_{w_0}=P_{w_0}$ and the answer is $0$. \item The projective dimension of $\mathrm{C}_x P_y$ is at most $\ell(x)$, since each $\mathrm{C}_s$, where $s$ is a simple reflection, has derived length $1$. \item For $x=s$, a simple reflection, we have $\mathrm{C}_s P_y\cong P_y$ if $ys<y$, in which case the answer is $0$. In case $ys>y$, the module $\mathrm{C}_s P_y$ is not projective and the answer is $1$, see the proof of Proposition~\ref{prop9.n1}. \end{itemize} In the spirit of Subsection~\ref{s7.3}, we can reformulate Problem~\ref{probs10-2} in terms of the cohomology of certain functors. For $w\in W$, we denote by $\mathrm{K}_w$ the right adjoint of $\mathrm{C}_w$, called the {\em coshuffling} functor, see \cite[Section~5]{MS}. Note that, for a reduced expression $w=rs\dots t$, we have $\mathrm{C}_w=\mathrm{C}_t\dots \mathrm{C}_s\mathrm{C}_r$ and $\mathrm{K}_w=\mathrm{K}_r \mathrm{K}_s\dots\mathrm{K}_t$. Also, we have $\mathrm{K}_w=\star\circ\mathrm{C}_w\circ\star$. We denote by $L$ the direct sum of all simple modules in $\mathcal{O}_0$. \begin{proposition}\label{props10-21} For $x,y\in W$, the projective dimension of $\mathrm{C}_x P_y$ coincides with the maximal $k\geq 0$ such that $[\mathcal{L}_k\mathrm{C}_x\, L:L_y]\neq 0$.
\end{proposition} \begin{proof} The projective dimension of a module coincides with the maximal degree of a non-vanishing extension to a simple module. Since $\mathcal{L}\mathrm{C}_x$ is a derived equivalence with inverse $\mathcal{R}\mathrm{K}_x$ by \cite[Theorem~5.7]{MS}, for $i\geq 0$, we have \begin{displaymath} \begin{array}{rcl} \dim \mathrm{Ext}_{\mathcal{O}}^i(\mathrm{C}_x P_y,L)&=& \dim \mathrm{Hom}_{\mathcal{D}^b(\mathcal{O})}(\mathrm{C}_x P_y,L[i])\\ &=&\dim \mathrm{Hom}_{\mathcal{D}^b(\mathcal{O})} (\mathcal{L}\mathrm{C}_x P_y,L[i])\\ &=&\dim \mathrm{Hom}_{\mathcal{D}^b(\mathcal{O})} (P_y,\mathcal{R}\mathrm{K}_x L[i])\\ &=& [\mathcal{R}^i\mathrm{K}_x\, L:L_y]\\ &=& [\mathcal{L}_i\mathrm{C}_x\, L^\star:L_y^\star]\\ &=& [\mathcal{L}_i\mathrm{C}_x\, L:L_y] \end{array} \end{displaymath} and the claim follows. \end{proof} \subsection{Projective dimension of shuffled tiltings}\label{s10.4} The results of Subsection~\ref{s9.4} motivate the problem of determining the projective dimension of shuffled tilting modules in $\mathcal{O}$. \begin{problem}\label{probs10-45} For $x,y\in W$, determine the projective dimension of the module $\mathrm{C}_x T_y$. \end{problem} This problem looks much harder than the one for the twisted tilting modules, mostly because shuffling functors do not commute with projective functors, in general. Here are some basic observations about this problem: \begin{itemize} \item If $x=e$, the module $\mathrm{C}_e T_y$ is tilting and hence the answer is $\mathbf{a}(y)$, see \cite{Ma3,Ma4}. \item If $x=w_0$, the module $\mathrm{C}_{w_0} T_y$ is the injective module $I_{yw_0}$. In fact, we have \[\mathrm{C}_{w_0} T_y \cong\mathrm{C}_{w_0} \top_{w_0} P_{w_0y}\cong \top_{w_0}\mathrm{C}_{w_0} P_{w_0y} \cong \top_{w_0}T_{w_0yw_0}\cong I_{yw_0}.\] Hence the answer is $2\mathbf{a}(w_0yw_0)$ by \cite{Ma3,Ma4}.
\item If $y=e$, we have $\mathrm{C}_x T_e\cong \mathrm{C}_{x}\top_{w_0} P_{w_0} \cong \top_{w_0}\mathrm{C}_{x} P_{w_0} \cong \top_{w_0}P_{w_0} \cong P_{w_0}$ and the answer is $0$. \item If $y=w_0$, we have $\mathrm{C}_x T_{w_0}\cong \mathrm{C}_{x} \nabla_{w_0} \cong\nabla_{w_0x}$ and the answer is $\ell(w_0)+\ell(x)$, see \cite{Ma3}. \item The projective dimension of $\mathrm{C}_x T_y$ is at most $\ell(x)+\mathbf{a}(y)$, since the projective dimension of $T_y$ is $\mathbf{a}(y)$ by \cite{Ma3,Ma4} and each $\mathrm{C}_s$, where $s$ is a simple reflection, has derived length $1$. \item For $x=s$, a simple reflection, we have $\mathrm{C}_s T_y\cong T_y$ if $ys>y$, in which case the answer is $\mathbf{a}(y)$ by \cite{Ma3,Ma4}. In case $ys<y$, the module $\mathrm{C}_s T_y$ is no longer tilting and the answer is $\mathbf{a}(y)+1$ because the minimal tilting resolution of $\mathrm{C}_s T_y$ has $T_y$ in position $-1$. \end{itemize} In the spirit of Subsection~\ref{s10.3}, we make the following conjecture: \begin{conjecture}\label{conj10-451} For $x,y\in W$, we have $\mathrm{proj.dim}\,\mathrm{C}_x T_y=\mathbf{a}(y)+ \mathrm{proj.dim}\,\mathrm{C}_x P_{w_0y}$. \end{conjecture} Below we present some evidence for Conjecture~\ref{conj10-451}. \begin{proposition}\label{prop10-452} For $x,y\in W$, we have $\mathrm{proj.dim}\,\mathrm{C}_x T_y\leq \mathbf{a}(y)+ \mathrm{proj.dim}\,\mathrm{C}_x P_{w_0y}$. \end{proposition} \begin{proof} Assume that $\mathrm{proj.dim}\,\mathrm{C}_x P_{w_0y}=k$ and let $\mathcal{P}_{\bullet}$ be a minimal projective resolution of $\mathrm{C}_x P_{w_0y}$. Applying $\top_{w_0}$ to $\mathcal{P}_{\bullet}$, and using that twisting and shuffling functors commute (e.g. because twisting functors commute with projective functors and natural transformations between them and shuffling functors are defined in terms of (co)kernels of such natural transformations), we get a minimal tilting resolution of $\mathrm{C}_x T_y$ (of length $k$).
To obtain a projective resolution of $\mathrm{C}_x T_y$, we need to projectively resolve each indecomposable tilting module $T_u$ appearing in $\top_{w_0}\mathcal{P}_{\bullet}$ and glue all these resolutions together. In particular, $\mathrm{proj.dim}\, \mathrm{C}_xT_y$ is bounded by $k$ plus the maximal value of $\mathrm{proj.dim}\, T_u$, for $T_u$ appearing in $\top_{w_0}\mathcal{P}_{\bullet}$. Note that a projective resolution of $\mathrm{C}_sP_{w}$, for any $w\in W$ and $s\in S$, has the following form: $0\to P_w\to\theta_s P_w\to0$ and a projective resolution of $\mathrm{C}_xP_{w_0y}$ is obtained by gluing such resolutions inductively along a reduced decomposition of $x$. Thus, an indecomposable projective $P_v$ appearing in $\mathcal{P}_{\bullet}$ is a summand of $\theta P_{w_0y}$ for some projective functor $\theta$ and satisfies $v\geq_\mathtt{R} w_0y$. Therefore, $u=w_0v$ satisfies $u\leq_\mathtt{R} y$. In particular, we have $\mathbf{a}(u)\leq \mathbf{a}(y)$. By \cite{Ma3,Ma4}, the projective dimension of $T_u$ equals $\mathbf{a}(u)$. The claim of the proposition follows. \end{proof} \begin{corollary}\label{cor10-453} For $x,y\in W$, let $\mathrm{proj.dim}\,\mathrm{C}_x P_{w_0y}=k$. Assume that there exists $v\in W$ such that $v\sim_\mathtt{R} w_0y$ and $\mathrm{Ext}^k_{\mathcal{O}}(\mathrm{C}_x P_{w_0y},L_v)\neq 0$. Then $\mathrm{proj.dim}\, \mathrm{C}_x T_y=\mathbf{a}(y)+ \mathrm{proj.dim}\,\mathrm{C}_x P_{w_0y}$. \end{corollary} \begin{proof} Follows from Proposition \ref{prop10-452} by a line of arguments analogous to the ones in the proof of Corollary \ref{cor10-153}. \end{proof} Again, we suspect that the above assumption ``there exists $v\in W$ such that $v\sim_\mathtt{R} w_0y$ and $\mathrm{Ext}^k_{\mathcal{O}}(\mathrm{C}_x P_{w_0y},L_v)\neq 0$'' in Corollary~\ref{cor10-453} is always satisfied.
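We conclude this subsection with an elementary consistency check: granting the standard properties $\mathbf{a}(e)=0$, $\mathbf{a}(w_0)=\ell(w_0)$ and $\mathbf{a}(w_0yw_0)=\mathbf{a}(y)$ of Lusztig's $\mathbf{a}$-function, the basic observations collected above verify Conjecture~\ref{conj10-451} in all four extreme cases:
\begin{displaymath}
\begin{array}{ll}
x=e\colon & \mathrm{proj.dim}\,\mathrm{C}_{e}T_y=\mathbf{a}(y)=\mathbf{a}(y)+\mathrm{proj.dim}\,\mathrm{C}_{e}P_{w_0y};\\
x=w_0\colon & \mathrm{proj.dim}\,\mathrm{C}_{w_0}T_y=2\mathbf{a}(w_0yw_0)=\mathbf{a}(y)+\mathbf{a}(w_0yw_0)=\mathbf{a}(y)+\mathrm{proj.dim}\,\mathrm{C}_{w_0}P_{w_0y};\\
y=e\colon & \mathrm{proj.dim}\,\mathrm{C}_{x}T_e=0=\mathbf{a}(e)+\mathrm{proj.dim}\,\mathrm{C}_{x}P_{w_0};\\
y=w_0\colon & \mathrm{proj.dim}\,\mathrm{C}_{x}T_{w_0}=\ell(w_0)+\ell(x)=\mathbf{a}(w_0)+\mathrm{proj.dim}\,\mathrm{C}_{x}P_{e}.
\end{array}
\end{displaymath}
Here, in the second line, $\mathrm{proj.dim}\,\mathrm{C}_{w_0}P_{w_0y}=\mathbf{a}(w_0yw_0)$ since $\mathrm{C}_{w_0}P_{w_0y}\cong T_{w_0yw_0}$, and, in the last line, $\mathrm{proj.dim}\,\mathrm{C}_{x}P_{e}=\ell(x)$ since $\mathrm{C}_{x}P_{e}\cong\Delta_x$.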
\section{Introduction} Spintronics is an emerging field in condensed matter physics, which focuses on the generation, manipulation, and detection of spin current\cite{MaekawaEd2012}. Two mechanisms of spin-current generation have been repeatedly confirmed: the spin-orbit-coupling-driven and the exchange-coupling-driven mechanisms. The spin-Hall effect\cite{Valenzuela2006,Saitoh2006,NLSV} belongs to the former class as it relies on spin-orbit scattering. A typical example of the latter class is spin pumping\cite{Tserkovnyak2002,Uchida2008,UchidaASP}, which originates from the dynamical torque-transfer process during magnetization dynamics due to nonequilibrium spin accumulation. Recently, an alternative scheme has been proposed, wherein spin-rotation coupling\cite{SRC} is exploited for generating spin currents\cite{MSOI-SC,SAW-SC:Matsuo,SAW-SC:Hamada}. The spin-rotation coupling refers to the fundamental coupling between spin and mechanical rotational motion and emerges in both ferromagnetic\cite{Barnett1915} and paramagnetic\cite{Ono2015,Ogata2017} metals as well as in nuclear spin systems\cite{Chudo2014,Chudo2015}. This coupling allows the interconversion of spin and mechanical angular momentum. Spin-current generation has been experimentally demonstrated using the mechanical rotation of a liquid metal\cite{Takahashi2016}. In the experiment, the mechanical rotation induced in a turbulent pipe flow of Hg and Ga alloys is utilized to generate the spin current. In this paper, we theoretically investigate the fluid-mechanical generation of spin current in both laminar and turbulent flows of a liquid metal and predict that the fluid-velocity dependence of the spin current under laminar conditions will be qualitatively different from that in a turbulent flow. First, we show that the spin-vorticity coupling emerges in a liquid metal and derive the spin-diffusion equation in a liquid-metal flow based on quantum kinetic theory.
We solve the spin-diffusion equation and reveal that the spin current is generated by the vorticity gradient. By solving the equation under both laminar- and turbulent-flow conditions, we predict that the inverse spin Hall voltage in a laminar liquid flow is linearly proportional to the flow velocity, whereas that in a turbulent flow is proportional to the square of the flow velocity. Our study will pave the way to fluid spintronics, where spin and fluid motion are harmonized. \section{Spin-vorticity coupling\label{Sec:SpinVorticity}} To consider the inertial effect on an electron due to nonuniform acceleration, we begin with the generally covariant Dirac equation\cite{Bib:SpinConnection}, which is the fundamental equation for a spin-1/2 particle in a curved space-time: \begin{eqnarray} \left[i \gamma^{\mu} \left(p_{\mu }-qA_{\mu} -i\hbar \Gamma_{\mu} \right) +mc \right] \Psi =0,\label{gDirac} \end{eqnarray} where $c, \hbar, q=-e,$ and $ m$ represent the speed of light, the reduced Planck constant, the charge of an electron, and the mass of an electron, respectively. Equation (\ref{gDirac}) includes two types of gauge potentials: the U(1) gauge potential, $A_{\mu}$, and the spin connection, $\Gamma_{\mu}$. The former originates from external electromagnetic fields and the latter describes gravitational and inertial effects upon electron charge and spin. The spin connection, $\Gamma_{\mu}$, is determined by the metric $g_{\mu \nu}(x)$. The coordinate-dependent Clifford algebra can be expressed by $\gamma^{\mu}=\gamma^{\mu}(x)$, and it satisfies $\{ \gamma^{\mu}(x), \gamma^{\nu}(x) \}=2g^{\mu \nu}(x)\, (\mu,\nu=0,1,2,3)$ with the inverse metric given by $g^{\mu\nu}(x)$. In the following, we focus on a single electron in a conductive viscous fluid.
The motion of the viscous fluid is effectively described by its flow velocity, $\v{v} (x)$, which is the source of the gauge potential on an electron, $\Gamma_{\mu}$, and reproduces inertial effects on the electron charge and spin, as explained below. We assume that the flow velocity is much less than the speed of light, $|\v{v}| \ll c$. The coordinate transformation from a local rest frame of the fluid to an inertial frame is written as $d\v{r}'=d\v{r} + \v{v}(x) dt,$ and the space-time line element in the local rest frame is \begin{eqnarray} ds^{2}&=&g_{\mu\nu}dx^{\mu}dx^{\nu} \nonumber \\ &=&[-c^{2} + \v{v}^{2} ] dt^{2} + 2 \v{v} \cdot d\r dt +d\r^{2}. \end{eqnarray} Then, the metric becomes \begin{eqnarray} g_{00}=-1+v^{2}/c^{2}, g_{0i}=g_{i0}=v_{i}/c, g_{ij}=\delta_{ij}. \label{metric} \end{eqnarray} Equations (\ref{gDirac}) and (\ref{metric}) lead to the Dirac Hamiltonian in the local rest frame: \begin{eqnarray} H &=& \beta mc^{2} + c \vb{\alpha} \cdot \vb{\pi} +qA_{0} -\frac12 q \v{A}\cdot \v{v} - \frac12 \{ \v{v}, \vb{\pi} \} -\frac12 \vb{\Sigma} \cdot \vb{\omega}. \nonumber \\ \label{Hlr} \end{eqnarray} Here $ \beta$ and $ \vb{\alpha}$ are the Dirac matrices and $\vb{\Sigma} $ is the spin operator. Moreover, $\vb{\pi}=\p - q\v{A} $ refers to the mechanical momentum, $\vb{\omega} = \nabla \times \v{v}$ is the vorticity of the fluid, and $\{ \v{v}, \vb{\pi} \}=\v{v} \vb{\pi}+ \vb{\pi}\v{v}$. Equation (\ref{Hlr}) is a generalization of the Dirac equation in a rigidly rotating frame. If the velocity is chosen to be $\v{v}(x) = \vb{\Omega} \times \r$ with a constant rotation frequency, $\vb{\Omega}$, then the term $-\{ \v{v}, \vb{\pi}\}/2$ is a representative of the coupling of the rotation and the orbital angular momentum, $-\vb{\Omega} \cdot (\r \times \vb{\pi})$, which reproduces quantum-mechanical versions of the Coriolis, centrifugal, and Euler forces, as shown below.
The last term, $-\vb{\Sigma} \cdot \vb{\omega}/2$, can be called the ``spin-vorticity coupling,'' which reproduces the spin-rotation coupling $-\vb{\Sigma} \cdot \vb{\Omega}$ because the vorticity $\vb{\omega}$ is reduced to the rotation frequency $\vb{\Omega}$ as $\vb{\omega} = 2 \vb{\Omega}$ for rigid motion. Thus, Eq. (\ref{Hlr}) reproduces the Dirac equation in the rotating frame. \section{Inertial forces on an electron due to viscous-fluid motion} Using the lowest order of the Foldy-Wouthuysen-Tani expansion\cite{FWT} for Eq. (\ref{Hlr}), we obtain the Schr\"odinger equation for an electron's two-spinor wave function, $\psi$, in the fluid: \begin{eqnarray} &&i\hbar \frac{\partial \psi}{\partial t}=H \psi, \nonumber \\ H &=& \frac{1}{2m}\vb{\pi}^{2} +qA_{0} - \mu_{B} \vb{\sigma} \cdot \v{B} \nonumber \\ &&-\frac12 q\v{A}\cdot \v{v} - \frac12 \{\v{v}, \vb{\pi} \} -\frac12 \v{S} \cdot \vb{\omega}, \label{Hsv} \end{eqnarray} with $\mu_{B}=q\hbar/2m$, $\v{B}=\nabla \times \v{A}$, and $\v{S}=(\hbar/2)\vb{\sigma}$. From Eq. (\ref{Hsv}), the Heisenberg equation for an electron in the fluid is obtained as \begin{eqnarray} &&\dot{\v{r}} = \frac{1}{i\hbar}[\v{r}, H]=\frac{\vb{\pi}}{m} + \v{v} \\ &&m \ddot{\v{r}} = \frac{1}{i\hbar}[m\dot{\v{r}},H] + m\frac{\partial \v{v}}{\partial t} = \v{F} \end{eqnarray} where the operator $\v{F}$, whose expectation value corresponds to a semi-classical force, is given by \begin{eqnarray} &&\v{F}= \v{F}_{\rm em} + \v{F}_{\rm c} + \v{F}_{\sigma},\\ &&\v{F}_{\rm em} = q \left( \v{E}+ (\dot{\r} -\v{v})\times \v{B} \right), \label{Fem}\\ &&\v{F}_{\rm c} = m\left[ (\dot{\r}\cdot \nabla)\v{v} - (\nabla v_{i})\dot{r}_{i} + (\nabla v_{i})v_{i} + \frac{\partial \v{v}}{\partial t} \right],\label{Fc} \\ &&\v{F}_{\sigma} = \nabla \left\{ \mu_{B} \vb{\sigma} \cdot \left(\v{B} +\frac{\vb{\omega}}{2\gamma} \right) \right\}.\label{Fs} \end{eqnarray} Equation (\ref{Fem}) represents the electromagnetic force in a conductive viscous fluid.
In the case of a rigid rotation, $\v{v}(x)= \vb{\Omega} \times \r$, the first and second terms in Eq. (\ref{Fc}) reproduce the Coriolis force, $- 2m\dot{\r} \times \vb{\Omega}$, the third term becomes the centrifugal force, $-m \vb{\Omega} \times (\vb{\Omega} \times \r) $, and the last term corresponds to the Euler force. Equation (\ref{Fs}) is an expression for the Stern--Gerlach force, which originates from the gradient of the combination of the Zeeman term, $\mu_{B}\vb{\sigma} \cdot \v{B}$, and the spin-vorticity coupling term, $\hbar \vb{\sigma} \cdot \vb{\omega}/4$: \begin{eqnarray} H_{\vb{\sigma}}=-\mu_{B} \vb{\sigma} \cdot \left(\v{B} + \frac{\vb{\omega}}{2\gamma} \right), \label{Hsigma} \end{eqnarray} where $\gamma = gq/2m$ is the gyromagnetic ratio with $g=2$. This indicates that the inertial effect due to fluid motion is equivalent to the effective magnetic field $B_{\omega}=\gamma^{-1} \vb{\omega}/2$. In the following paragraphs, we demonstrate that the effective field is crucial for generating the spin current. \section{Spin-diffusion equation in a liquid-metal flow} To investigate spin-current generation due to the spin-vorticity coupling, we derive the spin-diffusion equation by using quantum kinetic theory. We start from the quantum kinetic equation: \begin{eqnarray} \frac{\partial G^<}{\partial t} - \frac{1}{\hbar}&& \frac{\partial \mbox{Re}\Sigma^R}{\partial R} \frac{\partial G^<}{\partial k} + v_k \frac{\partial G^<}{\partial R} \nonumber\\ &&=\frac{1}{\hbar}(G^K \mbox{Im} \Sigma^R - \mbox{Im} G^R \Sigma^K), \end{eqnarray} where $G$ is the nonequilibrium Green's function of an electron, $\Sigma$ is the self-energy of the electron, and $v_k$ is the group velocity of the electron.
We consider the effects of the impurity potential, the spin-orbit potential, and the spin-vorticity coupling: \begin{eqnarray} H_{\rm int} = V_{\rm imp} + \eta_{\rm so} \vb{\sigma} \cdot (\nabla V_{\rm imp} \times \v{p}) - \frac{\hbar}4 \vb{\sigma} \cdot \vb{\omega}, \end{eqnarray} where $V_{\rm imp}$ is an ordinary impurity potential and $\eta_{\rm so}$ is the spin-orbit coupling parameter. Using a quasi-particle approximation, the quantum kinetic equation reduces to the spin-dependent kinetic equation: \begin{eqnarray} \frac{\partial f^\sigma_{krt}}{\partial t}-\frac{1}{\hbar}\frac{\partial \Sigma^{\sigma,R}_{k\varepsilon}}{\partial R} \frac{\partial f^{\sigma}_{krt}}{\partial k} + v_k \frac{\partial f^\sigma_{krt}}{\partial R} \nonumber\\ =-\frac{f^\sigma_{krt}-f^0_k}{\tau_{\rm imp}} -\frac{f^\sigma_{krt}-f^{-\sigma}_{krt}}{\tilde\tau_{\rm sf}}, \label{Boltzmann} \end{eqnarray} where $f^\sigma_{krt}$ is the distribution function of an electron with spin $\sigma$, $f^0$ is the equilibrium distribution function of an electron, and $\tau_{\rm imp}$ is the transport-relaxation time given by \begin{eqnarray} \tau^{-1}_{\rm imp} = 2\pi n_{\rm imp} D_{\rm F}V^2_{\rm imp}/\hbar \end{eqnarray} with the impurity density $n_{\rm imp}$. The spin-flip relaxation time $\tilde\tau_{\rm sf}$ is given by \begin{eqnarray} \tilde\tau^{-1}_{\rm sf}(k) = \tau^{-1}_{\rm sf}(k) + \tau^{-1}_{\rm sv}(k), \end{eqnarray} where the spin lifetime due to the spin-orbit coupling is \begin{eqnarray} \tau^{-1}_{\rm sf}=2\eta^{-2}_{\rm so}\tau^{-1}_{\rm imp} \end{eqnarray} and the spin lifetime due to the spin-vorticity coupling $\tau_{\rm sv}$ is given by \begin{eqnarray} \tau^{-1}_{\rm sv}(\r,\k,t,\omega)=D_{\rm F}\tilde{\omega}(\r,\k,t,\omega), \end{eqnarray} where $\tilde{\omega}(\r,\k,t,\omega)$ is the Wigner representation of the kinetic component of the two-particle correlation function defined by \begin{eqnarray} \tilde{\omega}(\r,\k,t,\omega)\equiv \int_{\delta \r \delta t}\!\!\!\!
\tilde{\omega}(\r,\delta \r,t,\delta t) e^{i(\k\cdot \delta \r - \omega \delta t)} \end{eqnarray} with \begin{eqnarray} &&\tilde{\omega}(\r,\delta \r,t,\delta t) \nonumber \\ &&= {\rm Tr}\Big[\hat\rho \omega^+\Big(\r- \frac{\delta\r}{2},t-\frac{\delta t}{2} \Big) \omega^-\Big(\r+\frac{\delta \r}{2},t+\frac{\delta t}{2} \Big) \Big]. \end{eqnarray} Here $\hat\rho$ is the density matrix of the fluid (Fig. \ref{fey-graph}). \begin{figure}[!hbtp] \begin{center} \includegraphics[scale=0.4]{fey.eps} \caption{Contribution to the self-energy $\Sigma$ originating from the spin-vorticity coupling. }\label{fey-graph} \end{center} \end{figure} Using the expansion \begin{eqnarray} f^\sigma_{krt} = f^0_k + \partial_{E_k}f^0_k (\sigma\delta \mu + \hbar \omega_{krt}/2), \end{eqnarray} the momentum average of the kinetic equation reduces to the generalized spin-diffusion equation: \begin{eqnarray} (\partial_t -D_s \partial_x^2 + \tilde\tau_{\rm sf}(k_F)^{-1}) \delta \mu_s = -\frac{\hbar}{\tilde\tau_{\rm sf}(k_F)} \zeta \omega^z,\label{g.spin-diff} \end{eqnarray} where $D_s$ is the diffusion constant and $\zeta$ is the renormalization factor of the spin-vorticity coupling defined by \begin{eqnarray} \zeta =\frac{\int_0^{k_F} dk [\partial_k f^0_k \omega^z_{rkt}\tilde\tau_{\rm sf}(k)^{-1}]}{ \omega^z(r,t)\tilde{\tau}_{\rm sf}(k_F)^{-1}\int_0^{k_F} dk [\partial_k f^0_k] } \end{eqnarray} with the Fermi wave number $k_F$. Based on the non-equilibrium Green's function method, the renormalization factor is found to depend on microscopic parameters, including the transport-relaxation time and the spin-flip lifetime resulting from impurity scattering and an extrinsic spin-orbit coupling. Under nonequilibrium steady-state conditions, this equation may be further reduced to \begin{eqnarray} \Big(\nabla^2-\frac{1}{\lambda^2} \Big) \delta \mu_s= \frac{\hbar \zeta \omega}{\lambda^2},\label{sSD} \end{eqnarray} where $\lambda$ denotes the spin-diffusion length.
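The steady-state equation (\ref{sSD}) is a linear two-point boundary-value problem, so its solutions are easy to check numerically. The following sketch is illustrative only: the function name, the normalized units $\hbar=\zeta=1$, the grid, and the zero-spin-current (Neumann) walls are our assumptions, not part of the derivation. It solves the equation by finite differences for a given vorticity profile:

```python
import numpy as np

def solve_spin_accumulation(omega, y, lam, hbar=1.0, zeta=1.0):
    """Finite-difference solution of (d^2/dy^2 - 1/lam^2) dmu = hbar*zeta*omega/lam^2
    on the uniform grid y, with no-spin-current (Neumann) walls dmu' = 0 at both ends."""
    n, h = len(y), y[1] - y[0]
    A = np.zeros((n, n))
    b = hbar * zeta * np.asarray(omega, dtype=float) / lam**2
    for i in range(1, n - 1):                 # interior rows: 3-point Laplacian minus 1/lam^2
        A[i, i - 1] = A[i, i + 1] = 1.0 / h**2
        A[i, i] = -2.0 / h**2 - 1.0 / lam**2
    A[0, 0], A[0, 1] = -1.0, 1.0              # dmu/dy = 0 at the lower wall
    A[-1, -1], A[-1, -2] = 1.0, -1.0          # dmu/dy = 0 at the upper wall
    b[0] = b[-1] = 0.0
    return np.linalg.solve(A, b)

# Plane Poiseuille flow between plates at y = +-y0: omega_z = 2*v0*y/y0**2
y0, v0, lam = 1.0, 1.0, 0.05
y = np.linspace(-y0, y0, 801)
dmu = solve_spin_accumulation(2.0 * v0 * y / y0**2, y, lam)
spin_current = np.gradient(dmu, y)            # J_s is proportional to d(dmu)/dy
```

For this profile the computed $\partial_y\delta\mu_s$ is flat in the bulk, with magnitude $2\hbar\zeta v_0/y_0^2$, and vanishes within a few $\lambda$ of the walls, reproducing the spatial profile of the closed-form laminar result in the next section up to an overall sign convention.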
\section{Spin current from fluid motion} \subsection{Spin current from laminar flow between plates} We now solve the spin-diffusion equation under a typical laminar flow condition. The equations of motion for an incompressible viscous fluid are well described by the Navier-Stokes (NS) equation: \begin{eqnarray} \frac{\partial \v{v}}{\partial t} + (\v{v} \cdot \nabla) \v{v} = - \frac{1}{\rho} \nabla p + \frac{\eta}{\rho} \nabla^{2} \v{v}, \label{NV} \end{eqnarray} where $\rho$ is the fluid density, $\eta$ is the viscosity coefficient, and $p$ is the pressure. In the following derivation, we use a solution of the NS equation. Moreover, the vorticity field calculated from the solution is inserted into the static spin-diffusion equation (\ref{sSD}) to obtain the generated spin current. \begin{figure}[!hbtp] \begin{center} \includegraphics[scale=0.4]{lam1.eps} \caption{Representation of the spin current in the two-dimensional Poiseuille flow. The parallel flow between the two parallel planes, $y=\pm y_{0}$, creates the velocity field $\v{v}=(v_{0}(1-y^{2}/y_{0}^{2}),0,0)$. The vorticity of the flow emerges in the $z$-direction: $\nabla \times \v{v} = (0,0,2v_{0}y/y_{0}^{2})$. Then, the gradient of the vorticity generates the $z$-polarized spin current in the $y$-direction. }\label{figP} \end{center} \end{figure} We consider the parallel flow enclosed between two parallel planes with a distance of $2y_{0}$ as shown in Fig. \ref{figP}. The solution to Eq. (\ref{NV}) is the well-known two-dimensional Poiseuille flow\cite{LandauFluid}: \begin{eqnarray} v_{x}=v_{0} \{1-\left(y/y_{0} \right)^{2} \}, v_{y}=v_{z}=0, \label{2dimP} \end{eqnarray} where \begin{eqnarray} \frac{v_{0}}{y_{0}^{2}}=-\frac{1}{2\eta} \frac{dp}{dx}. \end{eqnarray} In this case, the vorticity becomes \begin{eqnarray} \vb{\omega} = \nabla \times \v{v} = (0,0,2v_0 y/y_0^2 ).\label{vomega-2dimP} \end{eqnarray} Inserting Eq. (\ref{vomega-2dimP}) into Eq.
(\ref{sSD}), we obtain the $z$-polarized spin current as \begin{eqnarray} J_{s,y}^z &&= \frac{\sigma_0}{e} \frac{\partial}{\partial y} \delta \mu_s^z (y) = 2\zeta \frac{\hbar \sigma_0}{e} \frac{v_0}{y_0^2} \Big[ 1 -\frac{\cosh (y/\lambda)}{ \cosh (y_0/\lambda)} \Big] \nonumber\\ && \approx 2\zeta \frac{\hbar \sigma_0}{e} \frac{v_0}{y_0^2},\label{Js-Po} \end{eqnarray} for $y_0 \gg \lambda$. \subsection{Spin current from laminar flow in a pipe} Let us consider a steady flow in a pipe of circular cross-section with radius $r_{0}$ (Fig. \ref{figHP}). In this case, the solution to Eq. (\ref{NV}) is the Hagen--Poiseuille flow\cite{LandauFluid}: \begin{eqnarray} v_{x}=v_{0} \{1-\left(r/r_{0} \right)^{2} \}, v_{r}=v_{\theta}=0, \end{eqnarray} where \begin{eqnarray} \frac{v_{0}}{r_{0}^{2}}=-\frac{1}{4 \eta} \frac{dp}{dx}. \end{eqnarray} The $\theta$-polarized spin current, which flows in the radial direction, is given by \begin{eqnarray} J_{s,r}^{\theta} \approx 2\zeta \frac{\hbar \sigma_0}{e} \frac{v_0}{r_0^2}.\label{Js-HP} \end{eqnarray} \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.4]{lam2.eps} \end{center} \caption{Representation of the spin current for the Hagen--Poiseuille flow. A steady viscous flow in a pipe of circular cross-section with radius $r_{0}$ creates the velocity field, $(v_{x},v_{r},v_{\theta})=(v_{0}(1-r^{2}/r_{0}^{2}),0,0)$, in the cylindrical coordinate $(x,r,\theta)$. In this case, the vorticity gradient generates the $\theta$-polarized spin current in the radial direction. The inset shows the cross section of the pipe. } \label{figHP} \end{figure} \subsection{Spin current from turbulent flow in a pipe} We also consider a turbulent flow in the pipe.
The velocity distribution in a turbulent pipe flow is well described by\cite{LandauFluid} \begin{eqnarray} \frac{v_x (r)}{v_*} = \left\{ \begin{array}{ll} \frac{v_* (r_{0}-r)}{\nu} & (r_0 -\delta_0 < r < r_0 ) \\ \frac{1}{\kappa} \ln \frac{v_* (r_{0}-r)}{\nu} + A & ( 0< r < r_0 -\delta_0 ) \end{array} \right. \end{eqnarray} where $v_*$ is the friction velocity, $r_{0}$ is the internal radius of the pipe, $\nu$ is the kinematic viscosity, $\kappa$ is the Karman constant, $A=5.5$ for mercury, and $\delta_0$ is the thickness of the viscous sublayer. The friction velocity is related to the velocity distribution $v_x(r)$ as $v_* = \sqrt{\nu \left| \frac{\partial v_x}{\partial r} \right|_{r=r_0}}$. The region near the inner wall $(r_0 -\delta_0 < r < r_0 )$ is called the viscous sublayer. In the cylindrical coordinate $(x, r, \theta)$ (Fig. \ref{figHP}), the vorticity, $\omega_{\theta} (r) = -\partial_r v_x (r)$, is given by \begin{eqnarray} \omega_{\theta} (r) = \left\{ \begin{array}{ll} \frac{v_*^2 }{\nu} & (r_0 -\delta_0 < r < r_0) \\ \frac{v_*}{\kappa} \frac{1}{r_0 -r} & ( 0< r < r_0 -\delta_0 ) \label{vorticity} \end{array} \right. \end{eqnarray} The spin current is generated mostly near the viscous sublayer, especially around $r\approx r_0 - \delta_0$, where the vorticity gradient is the largest. Then, the spin current becomes \begin{eqnarray} J_{s,r}^{\theta} \approx 2\zeta \frac{\hbar \sigma_0}{e} \frac{v_*}{\kappa (r-r_{0})^{2}}.\label{Js-Turb} \end{eqnarray} \subsection{Inverse spin Hall voltage} Finally, we investigate the inverse spin Hall voltage owing to the spin-current generation under the laminar and turbulent flow conditions. Following the voltage measurement by Takahashi et al.\cite{Takahashi2016}, we consider the inverse spin Hall voltage to be parallel to the flow velocity (the $x$-direction).
The spin current is then converted into the electric voltage because of the spin-orbit coupling in the liquid metal and can be expressed as \begin{eqnarray} V_{\rm ISHE}^{\rm Lam} = \frac{L}{\sigma_0}\frac{2e}{\hbar}\theta_{\rm SHE} J_s \end{eqnarray} where $V_{\rm ISHE}$ is the inverse spin Hall voltage, $L$ is the length of the channel, $\theta_{\rm SHE}$ is the spin Hall angle of the liquid metal, and $J_s$ represents the generated spin current: $J_{s,y}^z = J_s^z$ or $J_{s,r}^\theta$. In the case of the Hagen--Poiseuille flow, the voltage is given by \begin{eqnarray} V_{\rm ISHE}^{\rm Lam} = 2\zeta \theta_{\rm SHE} \frac{\hbar}{e} \frac{L }{r_0^2}v_0. \end{eqnarray} This indicates that the generated voltage in a laminar flow is proportional to the flow velocity $v_0$. In contrast to the laminar-flow case, the voltage in a turbulent flow is proportional to the square of the flow velocity: \begin{eqnarray} V_{\rm ISHE}^{\rm Turb} &= & \frac{\theta_{\rm SHE} L}{\sigma_0 \pi r_0^2} \times \Big( \int_0^{r_0-\delta_0} + \int_{r_0-\delta_0}^{r_0}\Big) 2\pi r dr J_{s,r}^\theta \nonumber \\ &\approx& \zeta \theta_{\rm SHE} \frac{4L}{r_0}\frac{\hbar}{e} \frac{1}{\kappa \nu \mathcal{R}e^\delta} v_*^{\,\,2}, \end{eqnarray} where $\mathcal{R}e^\delta = \delta_0 v_*/\nu $ is the Reynolds number defined by the friction velocity. Making use of the material parameter values for the turbulent condition of mercury\cite{Takahashi2016}, $\kappa = 1.2 \times 10^{-7}$, $\nu=1.2 \times 10^{-7} {\rm m}^2{\rm s}^{-1}$, $L=400\times 10^{-3}$ m, $r_0 = 0.2 \times 10^{-3}$ m, $v_* = 0.1$ m/s and $V_{\rm ISHE}^{\rm Turb}=100$ nV, we obtain $\zeta \theta_{\rm SHE} = 1.1$. Taking $\theta_{\rm SHE} = 10^{-2}$ as an example, we find the renormalization factor $\zeta$ to be $10^{2}$. Furthermore, we estimate the voltage in the Hagen--Poiseuille flow.
Although the renormalization factor $\zeta$ under a laminar-flow condition is generally different from that under a turbulent condition, we assume that the factor in a laminar flow is of the same order as that in the turbulent flow, i.e., $\zeta \theta_{\rm SHE} \approx 1$. Then, choosing $L=80$ mm and $r_0=0.1$ mm, the computed inverse spin Hall voltage is $V_{\rm ISHE}\approx 4$ nV. \section{Conclusion}% In this paper, we have investigated spin-current generation due to fluid motion. The spin-vorticity coupling was obtained from the low-energy expansion of the Dirac equation in the fluid. Owing to the coupling, the fluid vorticity field acts on electron spins as an effective magnetic field. We have derived the generalized spin-diffusion equation in the presence of the effective field based on the quantum kinetic theory. Moreover, we have evaluated the spin current generated under both laminar- and turbulent-flow conditions, including the Poiseuille and Hagen--Poiseuille flow scenarios, and the turbulent flow in a fine pipe. The inverse spin Hall voltage generated in a laminar flow is linearly proportional to the flow velocity, whereas that in a turbulent flow is proportional to the square of the velocity. The theory proposed here will bridge the gap between spintronics and fluid physics, and pave the way to fluid spintronics. \begin{acknowledgements} The authors are grateful to R. Takahashi, K. Harii, H. Chudo, E. Saitoh and J. Ieda for valuable comments. This work is financially supported by ERATO, JST, Grant-in-Aid for Scientific Research on Innovative Areas ``Nano Spin Conversion Science'' (26103005), Grant-in-Aid for Scientific Research C (15K05153), Grant-in-Aid for Scientific Research B (16H04023), and Grant-in-Aid for Scientific Research A (26247063) from MEXT, Japan. \end{acknowledgements}
\section{Literature Overview of Comparison Studies} \label{related_work} A growing body of work discusses comparisons of humans and machines on a higher level. \citet{majaj2018deep} provide a broad overview of how machine learning can help vision scientists study biological vision, while \citet{barrett2019analyzing} review methods for analyzing the representations of biological and artificial networks. From the perspective of cognitive science, \citet{cichy2019deep} stress that Deep Learning models \textit{can} serve as scientific models that not only provide helpful predictions and explanations but can also be used for exploration. Furthermore, from the perspective of psychology and philosophy, \citet{buckner2019comparative} emphasizes often-neglected caveats when comparing humans and DNNs, such as human-centered interpretations, and calls for discussions regarding how to properly align machine and human performance. \citet{chollet2019measure} proposes a general Artificial Intelligence benchmark and suggests evaluating intelligence as ``skill-acquisition efficiency'' rather than focusing on skills at specific tasks. In the following, we give a brief overview of studies that compare human and machine perception. In order to test whether DNNs have cognitive abilities similar to those of humans, a number of studies test DNNs on abstract (visual) reasoning tasks \citep{barrett2018measuring, yan2017intelligent, wu2019challenge, santoro2017simple, villalobos2019deep}. Other comparison studies focus on whether human visual phenomena such as illusions \citep{gomez2019convolutional, watanabe2018illusory, kim2019neural} or crowding \citep{volokitin2017deep, doerig2019crowding} can be reproduced in computational models. In the attempt to probe intuition in machine models, DNNs are compared to intuitive physics engines, i.e., probabilistic models that simulate physical events \citep{zhang2016comparative}.
Other works investigate whether DNNs are sensible models of human perceptual processing. To this end, their predictions or internal representations are compared to those of biological systems; for example, to human and/or monkey behavioral representations \citep{peterson2016adapting, schrimpf2018brain, yamins2014performance, eberhardt2016deep, golan2019controversial}, human fMRI representations \citep{han2019representation, khaligh2014deep} or monkey cell recordings \citep{schrimpf2018brain, khaligh2014deep, yamins2014performance, cadena2019deep}. A great number of studies focus on manipulating tasks and/or models. Researchers often use generalization tests on data dissimilar to the training set \citep{zhang2018can, wu2019challenge} to test whether machines understood the underlying concepts. In other studies, the degradation of object classification accuracy is measured with respect to image degradations \citep{geirhos2018generalisation} or with respect to the type of features that play an important role for human or machine decision-making \citep{geirhos2018imagenet, brendel2019approximating, kubilius2016deep, ullman2016atoms, ritter2017cognitive}. A lot of effort is being put into investigating whether humans are vulnerable to small, adversarial perturbations in images \citep{elsayed2018adversarial, zhou2019humans, han2019representation, dujmovic2020adversarial}, as DNNs are shown to be \citep{szegedy2013intriguing}. Similarly, in the field of Natural Language Processing, a trend is to manipulate the data set itself, for example by negating statements, to test whether a trained model gains an understanding of natural language or whether it only picks up on statistical regularities \citep{niven2019probing, mccoy2019right}. Further work takes inspiration from biology or uses human knowledge explicitly in order to improve DNNs.
\citet{spoerer2017recurrent} found that recurrent connections, which are abundant in biological systems, allow for higher object recognition performance than pure feed-forward networks, especially in challenging situations such as in the presence of occlusions. Furthermore, several researchers suggest \citep{zhang2018can, kim2018not} or show \citep{wu2019challenge, barrett2018measuring, santoro2017simple} that designing a network's architecture or features with human knowledge is key for machine algorithms to successfully solve abstract (reasoning) tasks. \section{Closed Contour Detection} \subsection{Data Set} \label{cc_dataset_details} \label{resnet_cc} Each image in the training set contained a main contour, multiple flankers and a background image. The main contour and flankers were drawn into an image of size $1028 \times 1028$ px. The main contour and flankers could either be straight or curvy lines, for which the generation processes are described in \ref{polygon_desc} and \ref{curvy_desc}, respectively. The lines had a default thickness of \SI{10}{px}. We then re-sized the image to $256 \times 256$ px using anti-aliasing to transform the black and white pixels into smoother lines that had gray pixels at the borders. Thus, the lines in the re-sized image had a thickness of \SI{2.5}{px}. In the following, all specifications of sizes refer to the re-sized image (i.e., a line with a final length of \SI{10}{px} extended over \SI{40}{px} when drawn into the $1028 \times 1028$ px image). For the psychophysical experiments (see \ref{psychopyhysics_cc}), we added a white margin of \SI{16}{px} on each side of the image to avoid illusory contours at the borders of the image. \paragraph{Varying Contrast of Background} An image from the ImageNet data set was added as background to the line drawing.
We converted the image into LAB color space and linearly rescaled the pixel intensities of the image to produce a normalized contrast value between $0$ (gray image with the RGB values $[118, 118, 118]$) and $1$ (original image) (see Figure \ref{fig:cc_randomcontrast}A). When adding the image to the line drawing, we replaced the pixels of the line drawing by the values of the background image wherever the background image had a higher grayscale value than the line drawing. For the experiments in the main body, the contrast of the background image was always $0$. Other contrast levels were used only for the additional experiment described in \ref{random_contrast}. \paragraph{Generation of Image Pairs}\label{appendix_pairs} We aimed to reduce the statistical cues that could be exploited to solve the task without judging the closedness of the contour. Therefore, we generated image pairs consisting of an ``open'' and a ``closed'' version of the same image. The two versions were designed to be almost identical and had the same flankers. They differed only in the main contour, which was either open or closed. Examples of such image pairs are shown in Figure \ref{fig:cc_methods}. During training, either the closed or the open image of a pair was used. However, for the validation and testing, both versions were used. This allowed us to compare the predictions and heatmaps for images that differed only slightly, but belonged to different classes. \subsubsection{Line-drawing with Polygons as Main Contour} \label{polygon_desc} The data set used for training as well as some of the generalization sets consisted of straight lines. The main contour consisted of $n \in \{3, 4, 5, 6, 7, 8, 9\}$ line segments that formed either an open or a closed contour. The generation process of the main contour is depicted on the left side of Figure \ref{fig:cc_methods}A.
To obtain a contour with $n$ edges, we generated $n$ points, each defined by a randomly sampled angle $\alpha_n$ and a randomly sampled radius $r_n$ (between $0$ and \SI{128}{px}). By connecting the resulting points, we obtained the closed contour. We used the python PIL library (PIL 5.4.1, python3) to draw the lines that connect the endpoints. For the corresponding open contour, we sampled two radii for one of the angles such that they had a distance of \SIrange{20}{50}{px} from each other. When connecting the points, a gap was created between the points that shared the same angle. This generation procedure could allow for very short lines with edges being very close to each other. To avoid this, we excluded all shapes with corner points closer than \SI{10}{px} to non-adjacent lines. The position of the main contour was random, but we ensured that the contour did not extend over the border of the image. Besides the main contour, several flankers consisting of either one or two line segments were added to each stimulus. The exact number of flankers was uniformly sampled from the range $[10,25]$. The length of each line segment varied between $32$ and \SI{64}{px}. For the flankers consisting of two line segments, both lines had the same length and the angle between the line segments was at least \ang{45}. We added the flankers successively to the image and thereby ensured a minimal distance of \SI{10}{px} between the line centers. To ensure that the corresponding image pairs would have the same flankers, the distances to both the closed and the open version of the main contour were accounted for when re-sampling flankers. If a flanker did not fulfill this criterion, a new flanker with the same size and number of line segments was sampled and placed somewhere else. If a flanker extended over the border of the image, the flanker was cropped.
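A minimal sketch of this vertex-sampling procedure could look as follows; sorting the vertices by angle so that consecutive points connect into a simple loop is a simplifying assumption of ours, and the proximity checks and flanker re-sampling described above are omitted:

```python
import math
import random


def sample_polygon_contour(n, r_max=128, center=(128, 128)):
    """Sample n polygon vertices, each defined by a random angle and a
    random radius (between 0 and r_max px) around `center`.

    Sorting by angle (our simplification) makes the connected loop
    non-self-intersecting; the paper's minimum-distance checks between
    corner points and non-adjacent lines are not reproduced here.
    """
    angles = sorted(random.uniform(0, 2 * math.pi) for _ in range(n))
    pts = []
    for a in angles:
        r = random.uniform(0, r_max)
        pts.append((center[0] + r * math.cos(a), center[1] + r * math.sin(a)))
    # closed contour: connect pts[i] -> pts[(i + 1) % n]
    return pts
```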
\begin{figure} \centering \includegraphics[width=0.5\linewidth]{figures/cc_methods2.pdf} \caption{Closed contour data set. \textbf{A}: Left: The main contour was generated by connecting points from a random sampling process of angles and radii. Right: Resulting line-drawing with flankers. \textbf{B}: Left: Generation process of curvy contours. Right: Resulting line-drawing.} \label{fig:cc_methods} \end{figure} \subsubsection{Line-drawing with Curvy Lines as Main Contour} \label{curvy_desc} For some of the generalization sets, the contours consisted of curvy instead of straight lines. These were generated by modulating a circle of a given radius $r_c$ with a radial frequency function that was defined by two sinusoidal functions. The radius of the contour was thus given by \begin{equation} r(\phi) = A_1 \sin(f_1 (\phi + \theta_1)) + A_2 \sin(f_2 (\phi + \theta_2)) + r_c, \end{equation} with the frequencies $f_1$ and $f_2$ (integers between $1$ and $6$), amplitudes $A_1$ and $A_2$ (random values between $15$ and $45$) and phases $\theta_1$ and $\theta_2$ (between $0$ and $2\pi$). Unless stated otherwise, the diameter (diameter = $2 \times r_c$) was a random value between $50$ and \SI{100}{px}, and the contour was positioned in the center of the image. The open contours were obtained by removing a circular segment of size $\phi_o = \frac{\pi}{3}$ at a random phase (see Figure \ref{fig:cc_methods}B). For two of the generalization data sets we used dashed contours, which were obtained by masking out 20 equally distributed circular segments, each of size $\phi_d = \frac{\pi}{20}$. \subsubsection{Details on Generalization Data Sets} We constructed $15$ variants of the data set to test generalization performance. Nine variants consisted of contours with straight lines. Six of these featured varying line styles such as changes in line width ($10$, $13$, $14$) and/or line color ($11$, $12$). For one variant ($5$), we increased the number of edges in the main contour.
Another variant ($4$) had no flankers, and yet another variant ($6$) featured asymmetric flankers. For variant $9$, the lines were binarized (only black or gray pixels instead of different gray tones). In another six variants, the contours as well as the flankers were curved, meaning that we modulated a circle with a radial frequency function. The first four variants did not contain any flankers and the main contour had a fixed size of \SI{50}{px} ($3$), \SI{100}{px} ($1$) and \SI{150}{px} ($8$). For another variant ($15$), the contour was a dashed line. Finally, we tested the effect of different flankers by adding one additional closed, yet dashed contour ($2$) or one to four open contours ($7$). Below, we provide more details on some of these data sets: \textbf{Black-White-Black lines ($12$).} For all contours, black lines enclosed a white one in the middle. Each of these three lines had a thickness of \SI{1.5}{px}, which resulted in a total thickness of \SI{4.5}{px}. \textbf{Asymmetric flankers ($6$).} The two-line flankers consisted of one long and one short line instead of two equally long lines. \textbf{W/ dashed flanker ($2$).} This data set with curvy contours contained an additional dashed, yet closed contour as a flanker. It was produced like the main contour in the dashed main contour set. To avoid overlap of the contours, the main contour and the flanker could only appear at four predetermined positions in the image, namely the corners. \textbf{W/ multiple flankers ($7$).} In addition to the curvy main contour, between one and four open curvy contours were added as flankers. The flankers were generated by the same process as the main contour. The circles that were modulated had a diameter of \SI{50}{px} and could appear at either one of the four corners of the image or in the center.
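The radial-frequency modulation used for the curvy contours of \ref{curvy_desc} can be sketched directly from its defining equation (parameter defaults below are illustrative values within the stated ranges, not taken from the data set):

```python
import math


def curvy_radius(phi, f1, f2, A1, A2, th1, th2, r_c):
    """Radius of a curvy contour: a circle of radius r_c modulated by two
    sinusoidal radial-frequency components,
    r(phi) = A1*sin(f1*(phi+th1)) + A2*sin(f2*(phi+th2)) + r_c."""
    return (A1 * math.sin(f1 * (phi + th1))
            + A2 * math.sin(f2 * (phi + th2)) + r_c)


def curvy_points(f1=3, f2=5, A1=20, A2=30, th1=0.0, th2=1.0, r_c=40, n=360):
    """Sample n points along the closed curvy contour (centered at origin)."""
    pts = []
    for k in range(n):
        phi = 2 * math.pi * k / n
        r = curvy_radius(phi, f1, f2, A1, A2, th1, th2, r_c)
        pts.append((r * math.cos(phi), r * math.sin(phi)))
    return pts
```

Because $f_1$ and $f_2$ are integers, $r(\phi)$ is $2\pi$-periodic, so the sampled contour closes on itself.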
\subsection{Psychophysical Experiment} \label{psychopyhysics_cc} To estimate how well humans would be able to distinguish closed and open stimuli, we performed a psychophysical experiment in which observers reported which of two sequentially presented images contained a closed contour (two-interval forced choice (``2-IFC'') task). \begin{figure} \centering \includegraphics[width=\linewidth]{figures/cc_human_exp2.pdf} \caption{A: In a 2-IFC task, human observers had to tell which of two images contained a closed contour. B: Accuracy of the 20 na\"{i}ve observers for the different conditions.} \label{fig:cc_human_exp} \end{figure} \subsubsection{Stimuli} The images of the closed contour data set were used as stimuli for the psychophysical experiments. Specifically, we used the images from the test sets that were used to evaluate the performance of the models. For our psychophysical experiments, we used two different conditions: the images contained either black (sampled from the same distribution as the training set) or white contour lines. The latter was one of the generalization test sets. \subsubsection{Apparatus} Stimuli were displayed on a VIEWPixx 3D LCD (VPIXX Technologies; spatial resolution $1920 \times 1080$ px, temporal resolution \SI{120}{Hz}, operating with the scanning backlight turned off). Outside the stimulus image, the monitor was set to mean gray. Observers viewed the display from \SI{60}{cm} (maintained via a chinrest) in a darkened chamber. At this distance, pixels subtended approximately \ang{0.024} on average ($41$ px per degree of visual angle). The monitor was linearized (maximum luminance \SI{260}{\mathrm{cd}/ \mathrm{m}^2}) using a Konica-Minolta LS-100 photometer.
Stimulus presentation and data collection were controlled via a desktop computer (Intel Core i5-4460 CPU, AMD Radeon R9 380 GPU) running Ubuntu Linux (16.04 LTS), using the Psychtoolbox Library \citep[][version 3.0.12]{pelli1997videotoolbox,kleiner2007s,brainard1997psychophysics} and the iShow library (\url{http://dx.doi.org/10.5281/zenodo.34217}) under MATLAB (The Mathworks, Inc., R2015b). \subsubsection{Participants} In total, $19$ na\"{i}ve observers ($4$ male, $15$ female, mean age: $25.05$ years, SD = $3.52$) participated in the experiment. Observers were paid $10$ \euro{} per hour for participation. Before the experiment, all subjects had given written informed consent to participate. All subjects had normal or corrected-to-normal vision. All procedures conformed to Standard 8 of the American Psychological Association's ``Ethical Principles of Psychologists and Code of Conduct'' (2010). \subsubsection{Procedure} On each trial, one closed and one open contour stimulus were presented to the observer (see Figure \ref{fig:cc_human_exp}A). The images used for each trial were randomly picked, but we ensured that the open and closed images shown in the same trial were not the ones that were almost identical to each other (see ``Generation of Image Pairs'' in Appendix \ref{appendix_pairs}). Thus, the number of edges of the main contour could differ between the two images shown in the same trial. Each image was shown for \SI{100}{ms}, separated by a \SI{300}{ms} inter-stimulus interval (blank gray screen). We instructed the observer to look at the fixation spot in the center of the screen. The observer was asked to identify whether the image containing a closed contour appeared first or second. The observer had \SI{1200}{ms} to respond and was given feedback after each trial. The inter-trial interval was \SI{1000}{ms}. Each block consisted of $100$ trials and observers performed five blocks.
Trials with different line colors and varying background images (contrasts including $0$, $0.4$ and $1$) were blocked. Here, we only report the results for black and white lines at contrast $0$. Before the first block with a new line color, observers performed a practice session with $48$ trials of the corresponding line color. \subsection{Training of ResNet-50 model} We fine-tuned a ResNet-50 \citep{he2016deep}, pre-trained on ImageNet \citep{imagenet_cvpr09}, on the closed contour task. We replaced the last fully connected, $1000$-way classification layer by a layer with only one output neuron to perform binary classification with a decision threshold of $0$. The weights of all layers were fine-tuned using the optimizer Adam \citep{kingma2014adam} with a batch size of $64$. All images were pre-processed to have the same mean and standard deviation and were randomly mirrored horizontally and vertically for data augmentation. The model was trained on $14,000$ images for $10$ epochs with a learning rate of $0.0003$. We used a validation set of $5,600$ images. \paragraph{Generalization Tests} To determine the generalization performance, we evaluated the model on the test sets without any further training. Each of the test sets contained $5,600$ images. Poor accuracy could simply result from a sub-optimal decision criterion rather than from the network being unable to tell the stimuli apart. To account for the distribution shift between the original training images and the generalization tasks, we optimized the decision threshold (a single scalar) for each data set. To find the optimal threshold for each data set, we evaluated $100$ evenly spaced candidate thresholds within the interval containing $95\%$ of all logits and picked the threshold that led to the highest performance.
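The threshold search described above can be sketched as follows (a simplified re-implementation; function and variable names are ours, and ties are broken in favor of the first candidate):

```python
def best_threshold(logits, labels, n_points=100, coverage=0.95):
    """Grid-search a scalar decision threshold on the single output logit,
    restricted to the central interval containing `coverage` of the logits.

    `labels` are booleans (True = "closed"); a stimulus is predicted
    "closed" when its logit exceeds the threshold.
    """
    s = sorted(logits)
    n = len(s)
    lo = s[int(((1 - coverage) / 2) * (n - 1))]
    hi = s[int((1 - (1 - coverage) / 2) * (n - 1))]
    best_t, best_acc = lo, -1.0
    for k in range(n_points):
        t = lo + (hi - lo) * k / (n_points - 1)
        acc = sum((z > t) == y for z, y in zip(logits, labels)) / n
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc
```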
\subsection{Training of BagNet-33 model} To test an alternative decision-making mechanism to global contour integration, we trained and tested a BagNet-33 \citep{brendel2019approximating} on the closed contour task. Like the ResNet-50 model, it was pre-trained on ImageNet \citep{imagenet_cvpr09}, and we replaced the last fully connected, $1000$-way classification layer by a layer with only one output neuron. We fine-tuned the weights using the optimizer AdaBound \citep{luo2019adaptive} with an initial and final learning rate of $0.0001$ and $0.1$, respectively. The training images were generated on-the-fly, i.e., new images were produced for each epoch. In total, the fine-tuning lasted $100$ epochs, and we picked the weights from the epoch with the highest performance. \paragraph{Generalization Tests} The generalization tests were conducted in the same way as for ResNet-50. The results are shown in Figure \ref{fig:cc_bagnet_gen}. \begin{figure} \centering \includegraphics[width=\linewidth]{figures/cc_bagnet_gen.pdf} \caption{Generalization performances of BagNet-33.} \label{fig:cc_bagnet_gen} \end{figure} \subsection{Additional Experiment: Increasing the Task Difficulty by Adding a Background Image} \label{random_contrast} We performed an additional experiment in which we tested whether the model would become more robust and thus generalize better if it was trained on a more difficult task. This was achieved by adding an image to the background, such that the model had to learn how to separate the lines from the task-irrelevant background. In our experiment, we fine-tuned our ResNet-50-based model on images with a background image of a uniformly sampled contrast. For each data set, we evaluated the model separately on six discrete contrast levels \{0, 0.2, 0.4, 0.6, 0.8, 1\} (see Figure \ref{fig:cc_randomcontrast}A).
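The background-contrast manipulation used here (and described in Appendix \ref{appendix_pairs}'s data set section) can be sketched per pixel as follows; note that the actual pipeline rescales in LAB color space, so this raw-intensity version is only an approximation of ours:

```python
GRAY = 118  # RGB value of the contrast-0 (uniform gray) background


def rescale_contrast(value, c):
    """Linearly move a pixel intensity toward mid-gray: c = 0 yields a
    uniform gray image, c = 1 the original image.  The paper performs
    this rescaling in LAB space; raw intensities are a simplification."""
    return GRAY + c * (value - GRAY)


def composite_pixel(line_value, bg_value):
    """Keep the background wherever its grayscale value exceeds that of
    the line drawing, as in the compositing rule described in the text."""
    return bg_value if bg_value > line_value else line_value
```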
We found that the generalization performance varied for some data sets compared to the experiment in the main body (see Figure \ref{fig:cc_randomcontrast}B). \begin{figure} \centering \includegraphics[width=\linewidth]{figures/cc_contrast_random4.pdf} \caption{A: An image of varying contrast was added as background. B: Generalization performances of our models trained on random contrast levels and tested on single contrast levels.} \label{fig:cc_randomcontrast} \end{figure} \section{Recognition Gap} \subsection{Details on Methods}\label{rec_gap:methods} \paragraph{Data set} We used two data sets for this experiment. One consisted of ten natural, color images whose grayscale versions were also used in the original study by \citet{ullman2016atoms}. We discarded one image from the original data set as it does not correspond to any ImageNet class. For our ground truth class selection, please see Table \ref{fig:ImageNet_classes}. The second data set consisted of $1000$ images from the ImageNet \citep{imagenet_cvpr09} validation set. All images were pre-processed as in standard ResNet training (i.e., resizing to $256 \times 256$ pixels, cropping centrally to $224 \times 224$ pixels and normalizing). \paragraph{Model} In order to evaluate the recognition gap, the model had to be able to handle small input images. Standard networks like ResNet \citep{he2016deep} are not equipped to handle small images. In contrast, BagNet-33 \citep{brendel2019approximating} makes it straightforward to analyze images as small as $33 \times 33$ pixels and hence was our model of choice for this experiment. It is a variation of ResNet-50 \citep{he2016deep} in which most $3 \times 3$ kernels are replaced by $1 \times 1$ kernels such that the receptive field size at the top-most convolutional layer is restricted to $33 \times 33$ pixels.
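The center-crop step of the standard pre-processing mentioned above can be sketched as (a minimal pure-Python version operating on a nested-list image; the actual pipeline uses the usual deep learning tooling):

```python
def center_crop(img, size=224):
    """Crop the central size x size region of an image given as a nested
    list of pixel rows, mirroring the standard ResNet pre-processing step
    (resize to 256 x 256, then crop centrally to 224 x 224)."""
    h, w = len(img), len(img[0])
    top, left = (h - size) // 2, (w - size) // 2
    return [row[left:left + size] for row in img[top:top + size]]
```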
\paragraph{Machine-Based Search Procedure for Minimal Recognizable Images} \label{exp+model_recgap} Similar to \citet{ullman2016atoms}, we defined minimal recognizable images or configurations (MIRCs) as those patches of an image for which an observer (by which we mean an ensemble of humans or one or several machine algorithms) reaches $\geq 50\%$ accuracy, but any additional $20\%$ cropping of the corners or $20\%$ reduction in resolution would lead to an accuracy $< 50\%$. MIRCs are thus inherently observer-dependent. The original study searched for MIRCs only in humans. We implemented the following procedure to find MIRCs in our DNN: We passed each pre-processed image through BagNet-33 and selected the most predictive crop according to its probability. See Appendix \ref{rec_gap:custom_probability} on how to handle cases where the probability saturates at $100\%$ and Appendix \ref{rec_gap:class_stride_analysis} for different treatments of ground truth class selections. If the probability of the ground-truth class for the full-size image was $\geq 50\%$, we again searched for the $80\%$ subpatch with the highest probability. We repeated the search procedure until the class probability for all subpatches fell below $50\%$. If the $80\%$ subpatches were smaller than $33 \times 33$ pixels, which is BagNet-33's smallest natural patch size, the crop was increased to $33 \times 33$ pixels using bilinear sampling. We evaluated the recognition gap as the difference in accuracy between the MIRC and the \textit{best-performing} sub-MIRC. This definition was more conservative than the one from \citet{ullman2016atoms}, who considered the maximum difference between a MIRC and its sub-MIRCs, i.e., the difference between the MIRC and the \textit{worst-performing} sub-MIRC.
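The greedy descent described above can be summarized in pseudocode-like Python; `prob` and `crops` are placeholder interfaces standing in for the BagNet-33 class probability and the descendant generation (corner crops and resolution reduction), not the actual implementation:

```python
def greedy_mirc_search(patch, prob, crops, threshold=0.5):
    """Greedy search for a minimal recognizable configuration (MIRC).

    `prob(patch)` returns the ground-truth-class probability and
    `crops(patch)` yields the candidate descendants.  Returns the MIRC
    together with the best sub-MIRC probability, or None if the full
    patch is already below threshold.  (Names are placeholders.)
    """
    if prob(patch) < threshold:
        return None  # full-size configuration not recognized
    while True:
        children = [(prob(c), c) for c in crops(patch)]
        best_p, best_c = max(children, key=lambda t: t[0])
        if best_p < threshold:
            # all descendants fall below threshold: `patch` is a MIRC;
            # recognition gap = prob(patch) - best sub-MIRC probability
            return patch, best_p
        patch = best_c  # descend to the most predictive descendant
```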
Please note that one difference between our machine procedure and the psychophysics experiment by \citet{ullman2016atoms} remained: the former was greedy, whereas the latter corresponded to an exhaustive search under certain assumptions. \subsection{Analysis of Different Class Selections and Different Number of Descendants} \label{rec_gap:class_stride_analysis} Treating the ten stimuli from \citet{ullman2016atoms} in our machine algorithm setting required two design choices: We needed to both pick suitable ground truth classes from ImageNet for each stimulus and choose whether and how to combine them. The former is subjective, and using relationships from the WordNet Hierarchy \citep{miller1995wordnet} (as \citet{ullman2016atoms} did in their psychophysics experiment) only provides limited guidance. We picked classes to the best of our judgement (for our final ground truth class choices, please see Table \ref{fig:ImageNet_classes}). Regarding the aspect of handling several ground truth classes, we extended our experiments: We tested whether considering all classes as one (``joint classes'', i.e., summing the probabilities) or separately (``separate classes'', i.e., rerunning the stimuli for each ground truth class) would have an effect on the recognition gap. As another check, we investigated whether the number of descendant options would alter the recognition gap: Instead of only considering the four corner crops as in the psychophysics experiment by \citet{ullman2016atoms} (``Ullman4''), we looked at every crop shifted by one pixel as a potential new parent (``stride-1''). The results reported in the main body correspond to joint classes and corner crops. Finally, besides analyzing the recognition gap, we also analyzed the sizes of MIRCs and the fractions of images that possess MIRCs for the mentioned conditions. Figure \ref{fig:recgap_sup}A shows that all options result in similar values for the recognition gap.
The trend of smaller MIRC sizes for stride-1 compared to four corner crops shows that the search algorithm can find even smaller MIRCs when all crops are possible descendants (see Figure \ref{fig:recgap_sup}B). The final analysis of how many images possess MIRCs (see Figure \ref{fig:recgap_sup}C) shows that recognition gaps only exist for fractions of the tested images: In the case of the stimuli from \citet{ullman2016atoms}, three out of nine images have MIRCs, and in the case of ImageNet, about $60\%$ of the images do. This means that the recognition performance of the initial full-size configurations was $\geq 50\%$ for those fractions only. Please note that we did not evaluate the recognition gap over images that did not meet this criterion. In contrast, \citet{ullman2016atoms} average only across MIRCs that have a recognition rate above $65\%$ and sub-MIRCs that have a recognition rate below $20\%$ (personal communication, 2019). The reason why our model could only reliably classify three out of the nine stimuli from \citet{ullman2016atoms} can partly be traced back to the oversimplification of single-class attribution in ImageNet as well as to the overconfidence of deep learning classification algorithms \citep{guo2017calibration}: they often attribute a lot of evidence to one class, and the remaining classes share only very little evidence. \begin{figure} \centering \includegraphics[width=\linewidth]{figures/rec_gap_sup.pdf} \caption{A: Recognition gaps. The legend holds for all subplots. B: Size of MIRCs. C: Fraction of images with MIRCs.} \label{fig:recgap_sup} \end{figure} \subsection{Selecting Best Crop when Probabilities Saturate}\label{rec_gap:custom_probability} We observed that several crops had very high probabilities and therefore used the ``logit'' measure $\mathrm{logit}(p)$, where $p$ is the probability. It is defined as $\mathrm{logit}(p) = \log\left(\frac{p}{1-p}\right)$.
Note that this measure is different from what the deep learning community usually refers to as ``logits'', which are the values before the softmax layer. In the following, we denote the latter values as $\mathbf{z}$. The logit $\mathrm{logit}(p)$ is monotonic in the probability $p$: the higher the probability $p$, the higher $\mathrm{logit}(p)$. However, while $p$ saturates at $100\%$, $\mathrm{logit}(p)$ is unbounded. Therefore, it yields a more sensitive discrimination measure between image patches $j$ that all have $p(\mathbf{z}^j) = 1$, where the superscript $j$ denotes different patches. In the following, we provide a short derivation of the logit $\mathrm{logit}(p)$. Consider a single patch with the correct class $c$. We start with the probability $p_c$ of class $c$, which can be obtained by plugging the logits $z_i$ into the softmax formula, where $i$ runs over the classes $[0, \ldots, 999]$. \begin{equation} p_c(\mathbf{z}) = \frac{\exp(z_c)}{\exp(z_c) + \sum\limits_{i \neq c} \exp(z_i)} \end{equation} Since we are interested in the probability of the correct class, it holds that $p_c(\mathbf{z}) \neq 0 $. Thus, in the regime of interest, we can invert both sides of the equation. After simplifying, we get: \begin{equation} \frac {1}{p_c(\mathbf{z})} -1= \frac{ \sum\limits_{i \neq c} \exp(z_i)} {\exp(z_c)}. \end{equation} Taking the negative logarithm on both sides, we obtain: \begin{align} &\Leftrightarrow &-\log \left(\frac {1}{p_c(\mathbf{z})} -1\right) & = -\log \left(\frac{ \sum\limits_{i \neq c} \exp(z_i)} {\exp(z_c)}\right)\\ &\Leftrightarrow &-\log \left( \frac{1-p_c(\mathbf{z})}{p_c(\mathbf{z})} \right) & = -\log \left(\sum\limits_{i \neq c} \exp(z_i)\right) - \left(-\log(\exp(z_c))\right)\\ &\Leftrightarrow &\log \left( \frac{p_c(\mathbf{z})}{1-p_c(\mathbf{z})} \right) & = z_c -\log \left(\sum\limits_{i \neq c} \exp(z_i)\right) \end{align} The left-hand side of the equation is exactly the definition of the logit $\mathrm{logit}(p)$.
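In practice, the right-hand side can be evaluated directly from the network outputs with the log-sum-exp trick, which avoids the saturation of $p_c$ at $1$ in floating-point arithmetic; the following is a sketch, not the exact implementation used:

```python
import math


def logit_from_logits(z, c):
    """Compute logit(p_c) = z_c - log(sum_{i != c} exp(z_i)) directly from
    the pre-softmax outputs z, so the measure stays finite even when the
    softmax probability p_c rounds to 1.0 (log-sum-exp trick)."""
    others = [z[i] for i in range(len(z)) if i != c]
    m = max(others)  # subtract the max before exponentiating for stability
    return z[c] - (m + math.log(sum(math.exp(x - m) for x in others)))
```

For moderate outputs this agrees with $\log(p_c/(1-p_c))$ computed from the softmax probability, but it remains finite where the direct formula would divide by zero.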
Intuitively, it measures in log-space how much the network's belief in the correct class outweighs the belief in all other classes taken together. The following rearrangement illustrates this: \begin{equation} \begin{split} \mathrm{logit}(p_c) & = \log \left( \frac{p_c(\mathbf{z})}{1-p_c(\mathbf{z})} \right)\\ & = \underbrace{\log \bigl( p_c(\mathbf{z}) \bigr)}_\text{log probability of correct class} - \underbrace{\log \bigl( 1-p_c(\mathbf{z}) \bigr)}_\text{log probability of all incorrect classes} \end{split} \end{equation} The above formulation for one correct class carries over when the experimental design is adjusted to accept several classes as correct predictions. In brief, with $C = \{c_1, \ldots, c_k\}$ denoting the set of correct classes, the logit $\mathrm{logit}(p_C(\mathbf{z}))$ reads: \begin{equation} \begin{split} \mathrm{logit}(p_C(\mathbf{z})) & = -\log \left( \frac{1}{p_{c_1}(\mathbf{z}) + p_{c_2}(\mathbf{z}) + \ldots + p_{c_k}(\mathbf{z})} - 1 \right) \\ & = -\log \left( \frac{1}{\sum\limits_{c \in C} p_c(\mathbf{z})} - 1 \right) \\ & = \underbrace{\log \Bigl( \sum\limits_{c \in C} p_c(\mathbf{z}) \Bigr)}_\text{log probability of all correct classes} - \underbrace{\log \Bigl( 1-\sum\limits_{c \in C} p_c(\mathbf{z}) \Bigr)}_\text{log probability of all incorrect classes}\\ & = \log \Bigl( \sum\limits_{c \in C} \exp(z_c) \Bigr) - \log \Bigl( \sum\limits_{i \notin C} \exp(z_i) \Bigr) \end{split} \end{equation} \subsection{Selection of ImageNet Classes for Stimuli of Ullman et al. (2016)}\label{ImageNet_classes} Note that our selection of classes is different from the one used by \citet{ullman2016atoms}. We went through all classes for each image and selected the ones that we considered sensible. The tenth image, the eye, does not have a sensible ImageNet class; hence, only nine stimuli from \citet{ullman2016atoms} are listed in Table \ref{fig:ImageNet_classes}.
\begin{table}[] \begin{tabular}{|l|l|l|l|} \hline Image & \begin{tabular}[c]{@{}l@{}} WordNet \\ Hierarchy ID \end{tabular} & WordNet Hierarchy description & \begin{tabular}[c]{@{}l@{}}Neuron number in ResNet-50\\ (indexing starts at 0)\end{tabular} \\ \hline fly & n02190166 & fly & 308 \\ \hline ship & n02687172 & \begin{tabular}[c]{@{}l@{}}aircraft carrier, carrier, flattop, attack \\ aircraft carrier \end{tabular} & 403 \\ \hline & n03095699 & \begin{tabular}[c]{@{}l@{}} container ship, containership, container \\ vessel \end{tabular} & 510 \\ \hline & n03344393 & fireboat & 554 \\ \hline & n03662601 & lifeboat & 625 \\ \hline & n03673027 & liner, ocean liner & 628 \\ \hline eagle & n01608432 & kite & 21 \\ \hline & n01614925 & \begin{tabular}[c]{@{}l@{}}bald eagle, American eagle, Haliaeetus \\ leucocephalus\end{tabular} & 22 \\ \hline glasses & n04355933 & sunglass & 836 \\ \hline & n04356056 & sunglasses, dark glasses, shades & 837 \\ \hline bike & n02835271 & \begin{tabular}[c]{@{}l@{}} bicycle-built-for-two, tandem bicycle, \\ tandem\end{tabular} & 444 \\ \hline & n03599486 & jinrikisha, ricksha, rickshaw & 612 \\ \hline & n03785016 & moped & 665 \\ \hline & n03792782 & mountain bike, all-terrain bike, off-roader & 671 \\ \hline & n04482393 & tricycle, trike, velocipede & 870 \\ \hline suit & n04350905 & suit, suit of clothes & 834 \\ \hline & n04591157 & windsor tie & 906 \\ \hline plane & n02690373 & airliner & 404 \\ \hline horse & n02389026 & sorrel & 339 \\ \hline & n03538406 & horse cart, horse-cart & 603 \\ \hline car & n02701002 & ambulance & 407 \\ \hline & n02814533 & \begin{tabular}[c]{@{}l@{}}beach wagon, station wagon, wagon \\ estate car, beach waggon, station waggon, \\ waggon\end{tabular} & 436 \\ \hline & n02930766 & cab, hack, taxi, taxicab & 468 \\ \hline & n03100240 & convertible & 511 \\ \hline & n03594945 & jeep, landrover & 609 \\ \hline & n03670208 & limousine, limo & 627 \\ \hline & n03769881 & minibus & 654 \\ \hline & n03770679 & 
minivan & 656 \\ \hline & n04037443 & racer, race car, racing car & 751 \\ \hline & n04285008 & sports car, sport car & 817 \\ \hline \end{tabular} \caption{Selection of ImageNet Classes for Stimuli of \citet{ullman2016atoms}} \label{fig:ImageNet_classes} \end{table} \section*{Conclusion} Comparing human and machine visual perception can be challenging. In this work, we presented a checklist on how to perform such comparison studies in a meaningful and robust way. For one, isolating a single mechanism requires us to minimize or exclude the effect of other differences between the biological and the artificial system, and to align experimental conditions for both. We further have to differentiate between necessary and sufficient mechanisms and to circumscribe the tasks in which they are actually deployed. Finally, an overarching challenge in comparison studies between humans and machines is our strong internal human interpretation bias. Using three case studies, we illustrated the application of the checklist. The first case study, on closed contour detection, showed that human bias can impede the objective interpretation of results, and that investigating which mechanisms could or could not be at work may require several analytic tools. The second case study highlighted the difficulty of drawing robust conclusions about mechanisms from experiments. While previous studies suggested that feedback mechanisms might be important for visual reasoning tasks, our experiments showed that they are not necessarily required. The third case study clarified that aligning experimental conditions for both systems is essential. When we adapted the experimental settings, we found that, in contrast to the differences reported in a previous study, DNNs and humans indeed show similar behavior on an object recognition task. 
Our checklist complements other recent proposals on how to compare visual inference strategies between humans and machines \citep{buckner2019comparative, chollet2019measure, ma2020neural, geirhos2020beyond} and helps to create more nuanced and robust insights into both systems. \section*{Author contributions} The closed contour case study was designed by CMF, JB, TSAW and MB, and later with WB. The code for the stimuli generation was developed by CMF. The neural networks were trained by CMF and JB. The psychophysical experiments were performed and analysed by CMF, TSAW and JB. The SVRT case study was conducted by CMF under the supervision of TSAW, WB and MB. KS designed and implemented the recognition gap case study under the supervision of WB and MB; JB extended and refined it under the supervision of WB and MB. The initial idea to unite the three projects was conceived by WB, MB, TSAW and CMF, and further developed together with JB. The first draft was jointly written by JB and CMF with input from TSAW and WB. All authors contributed to the final version and provided critical revisions. \section*{Acknowledgments} We thank Alexander S. Ecker, Felix A. Wichmann, Matthias K\"ummerer, Dylan Paiton and Drew Linsley for helpful discussions. We thank Thomas Serre, Junkyung Kim, Matthew Ricci, Justus Piater, Sebastian Stabinger, Antonio Rodr\'iguez-S\'anchez, Shimon Ullman, Liav Assif and Daniel Harari for discussions and feedback on an earlier version of this manuscript. Additionally, we would like to thank Nikolas Kriegeskorte for his detailed and constructive feedback, which helped us make our manuscript stronger. Furthermore, we thank Wiebke Ringels for helping with data collection for the psychophysical experiment. We thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting CMF and JB. 
We acknowledge support from the German Federal Ministry of Education and Research (BMBF) through the competence center for machine learning (FKZ 01IS18039A) and the Bernstein Computational Neuroscience Program T\"ubingen (FKZ: 01GQ1002), the German Excellence Initiative through the Centre for Integrative Neuroscience T\"ubingen (EXC307), and the Deutsche Forschungsgemeinschaft (DFG; Projektnummer 276693517 – SFB 1233). Elements of this work were presented at the Conference on Cognitive Computational Neuroscience 2019 and the Shared Visual Representations in Human and Machine Intelligence Workshop at the Conference on Neural Information Processing Systems 2019. \section*{Commercial relationships} Matthias Bethge: Amazon scholar Jan 2019 – Jan 2021, Layer7AI, DeepArt.io, Upload AI; Wieland Brendel: Layer7AI. \section*{Introduction} Until recently, only biological systems could abstract the visual information in our world and transform it into a representation that supports understanding and action. Researchers have been studying how to implement such transformations in artificial systems since at least the 1950s. One advantage of artificial systems for understanding these computations is that many analyses can be performed that would not be possible in biological systems. For example, key components of visual processing, such as the role of feedback connections, can be investigated, and methods such as ablation studies gain new precision. \begin{sloppypar} Traditional models of visual processing sought to explicitly replicate the hypothesized computations performed in biological visual systems. One famous example is the hierarchical HMAX-model \citep{fukushima1980neocognitron, riesenhuber1999hierarchical}. It instantiates mechanisms hypothesized to occur in primate visual systems, such as template matching and max operations, whose goal is to achieve invariance to position, scale and translation. 
Crucially, though, these models never got close to human performance in real-world tasks. \end{sloppypar} With the success of learned approaches in the last decade, and particularly that of deep convolutional neural networks (DNNs), we now have much more powerful models. In fact, these models are able to perform a range of constrained image understanding tasks with human-like performance \citep{krizhevsky2012imagenet, eigen2015predicting, long2015fully}. While matching machine performance with that of the human visual system is a crucial step, the inner workings of the two systems can still be very different. We hence need to move beyond comparing accuracies to understand how the systems' mechanisms differ \citep{geirhos2020beyond, chollet2019measure, ma2020neural, Firestone201905334}. The range of frequently considered mechanisms is broad. They concern not only the architectural level (such as feedback vs. feed-forward connections, lateral connections, foveated architectures or eye movements, ...), but also different learning schemes (back-propagation vs. spike-timing-dependent plasticity/Hebbian learning, ...) as well as the nature of the representations themselves (such as reliance on texture rather than shape, global vs. local processing, ...)\footnote{For an overview of comparison studies, please see Appendix \ref{related_work}}. \section*{Checklist for Psychophysical Comparison Studies} We present a checklist on how to design, conduct and interpret experiments of comparison studies that investigate relevant mechanisms for visual perception. The diagram in Figure~\ref{fig:framework_figure} illustrates the core ideas, which we elaborate on below. \begin{figure} \centering \includegraphics[width=\linewidth]{figures/framework_figure_7.pdf} \caption{\textbf{i}: The human system and a candidate machine system differ in a range of properties. Isolating a specific mechanism (for example feedback) can be challenging. 
\textbf{ii}: When designing an experiment, equivalent settings are important. \textbf{iii}: Even if a specific mechanism was important for a task, it would not be clear if this mechanism is necessary, as there could be other mechanisms (that might or might not be part of the human or machine system) that allow a system to perform well. \textbf{iv}: Furthermore, the identified mechanisms might depend on the specific experimental setting and not generalize to e.g. another task. \textbf{v}: Overall, our human bias influences how we conduct and interpret our experiments. \textsuperscript{1}\citet{brendel2019approximating}, \textsuperscript{2}\citet{dicarlo2012does}, \textsuperscript{3}\citet{geirhos2018imagenet}, \textsuperscript{4}\citet{kubilius2016deep}, \textsuperscript{5}\citet{golan2019controversial}, \textsuperscript{6}\citet{dujmovic2020adversarial} } \label{fig:framework_figure} \end{figure} \begin{enumerate}[label=\roman*.] \item \textbf{Isolating implementational or functional properties.} Naturally, the systems being compared often differ in more than just one aspect, and hence pinpointing one single reason for an observed difference can be challenging. One approach is to design an artificial network constrained such that the mechanism of interest shows its effect as clearly as possible. An example of such an attempt is the work of \citet{brendel2019approximating}, which constrained models to process purely local information by reducing their receptive field sizes. Unfortunately, in many cases it is almost impossible to exclude potential side-effects of other experimental factors such as architecture or training procedure. Therefore, making explicit if, how and where results depend on other experimental factors is important. 
\item \textbf{Aligning experimental conditions for both systems.} In comparative studies (whether of humans and machines, or of different organisms in nature), it can be exceedingly challenging to make experimental conditions equivalent. When comparing the two systems, any differences should be made as explicit as possible and taken into account in the design and analysis of the study. For example, the human brain profits from lifelong experience, whereas a machine algorithm is usually limited to learning from specific stimuli of a particular task and setting. Another example is the stimulus timing used in psychophysical experiments, for which there is no direct equivalent in stateless algorithms. Comparisons of human and machine accuracies must therefore be interpreted in light of the temporal presentation characteristics of the experiment. These characteristics could be chosen based on, for example, a definition of the behaviour of interest as that occurring within a certain time after stimulus onset \citep[as for e.g. ``core object recognition'';][]{dicarlo2012does}. \citet{Firestone201905334} highlights that since aligning systems perfectly may not be possible due to different ``hardware'' constraints such as memory capacity, unequal performance of two systems might still arise despite similar competencies. \item \textbf{Differentiating between necessary and sufficient mechanisms.} It is possible that multiple mechanisms allow good task performance -- for example, DNNs can use either shape or texture features to reach high performance on ImageNet \citep{geirhos2018imagenet, kubilius2016deep}. Thus, observing good performance for one mechanism implies neither that this mechanism is strictly necessary nor that it is employed by the human visual system. As another example, \citet{watanabe2018illusory} investigated whether the rotating snakes illusion \citep{kitaoka2003phenomenal, conway2005neural} could be replicated in artificial neural networks. 
While they found that this was indeed the case, we argue that the mechanisms must be different from the ones used by humans, as the illusion requires small eye movements or blinks \citep{hisakata2008effects, kuriki2008functional}, while the artificial model does not emulate such biological processes. \item \textbf{Testing generalization of mechanisms.} Having identified an important mechanism, one needs to make explicit for which particular conditions (class of tasks, data sets, ...) the conclusion is intended to hold. A mechanism that is important for one setup may or may not be important for another one. In other words, whether a mechanism works under generalized settings has to be explicitly tested. An example of outstanding generalization in humans is their visual \emph{robustness} against various variations in the input. In DNNs, a mechanism to improve robustness is to ``stylize'' \citep{gatys2016image} the training data. First shown to raise performance on parametrically distorted images \citep{geirhos2018imagenet}, this mechanism was later found to also improve performance on images suffering from common corruptions \citep{michaelis2019benchmarking}, but would be unlikely to help with adversarial robustness. From a different perspective, the work of \citet{golan2019controversial} on controversial stimuli is an example where using stimuli outside of the training distribution can be insightful. Controversial stimuli are synthetic images that are designed to trigger distinct responses in two machine models. In their experimental setup, the use of this out-of-distribution data allows the authors to reveal whether the inductive bias of humans is similar to that of one of the candidate models. \item \textbf{Resisting human bias.} Human bias can affect not only the design but also the conclusions we draw from comparison experiments. 
In other words, our human reference point can influence, for example, how we interpret the behaviour of other systems, be they biological or artificial. An example is the well-known Braitenberg vehicles \citep{braitenberg1986vehicles}, which are defined by very simple rules. To a human observer, however, the vehicles' behaviour appears to arise from complex internal states such as fear, aggression or love. This phenomenon of anthropomorphizing is well known in the field of comparative psychology \citep{romanes1883animal, wolfgang1925mentality, koehler1943zahl, haun2011origins, boesch2007makes, tomasello2008assessing}. \citet{buckner2019comparative} specifically warns of human-centered interpretations and recommends applying the lessons learned in comparative psychology to comparisons between DNNs and humans. In addition, our human reference point can influence how we design an experiment. As an example, \citet{dujmovic2020adversarial} illustrate that the selection of stimuli and labels can have a big effect on whether one finds similarities or differences between human and machine responses to adversarial examples. \end{enumerate} In the remainder of this paper, we provide concrete examples of the aspects discussed above using three case studies\footnote{The code is available at \url{https://github.com/bethgelab/notorious_difficulty_of_comparing_human_and_machine_perception}}: \begin{enumerate} \item \textbf{Closed Contour Detection}: The first case study illustrates how tricky overcoming our human bias can be, and that shedding light on an alternative decision-making mechanism may require multiple additional experiments. \item \textbf{Synthetic Visual Reasoning Test}: The second case study highlights the challenge of isolating mechanisms and of differentiating between necessary and sufficient mechanisms. In doing so, we discuss how learning differs between humans and machine models and how changes in the model architecture can affect performance. 
\item \textbf{Recognition gap}: The third case study illustrates the importance of aligning experimental conditions. \end{enumerate} \section{SVRT} \label{appendix_svrt} \subsection{Methods} \paragraph{Data set} We used the original C-code provided by \citet{fleuret2011comparing} to generate the images of the SVRT data set. The images had a size of $128 \times 128$ pixels. For each problem, we used up to $28,000$ images for training, $5,600$ images for validation and $11,200$ images for testing. \paragraph{Experimental Procedures} \label{exp_procedure_SVRT} For each of the SVRT problems, we fine-tuned a ResNet-50 that was pretrained on ImageNet \citep{imagenet_cvpr09} (as described in section \ref{resnet_cc}). The same pre-processing, data augmentation, optimizer and batch size as for the closed contour task were used. For the different experiments, we varied the number of training images, using subsets of either $28,000$, $1000$ or $100$ images. The number of epochs depended on the size of the training set: the model was fine-tuned for $10$, $280$ or $2800$ epochs, respectively. For each training set size and SVRT problem, we used the best learning rate after a hyper-parameter search on the validation set, where we tested the learning rates [\num{6e-5}, \num{1e-4}, \num{3e-4}]. As a control experiment, we also initialized the model with random weights and again performed a hyper-parameter search over the learning rates [\num{3e-4}, \num{6e-4}, \num{1e-3}]. \subsection{Results} \label{results_svrt} In Figure \ref{fig:svrt_appendix} we show the results for the individual problems. When using $28,000$ training images, we reached above $90$\% accuracy on all SVRT problems, including the ones that required same-different judgments (see also Figure \ref{fig:svrt_main}B). When using fewer training images, the performance on the test set was reduced. 
In particular, we found that the performance on same-different tasks dropped more rapidly than on spatial reasoning tasks. If the ResNet-50 was trained from scratch (i.e. weights were randomly initialized instead of loaded from pre-training on ImageNet), the performance dropped only slightly on all but one spatial reasoning task. Larger drops were found on same-different tasks. \begin{figure} \centering \includegraphics[width=\linewidth]{figures/svrt_appendix.pdf} \caption{Accuracy of the models for the individual problems. Problem 8 is a mixture of a same-different and a spatial task; in Figure \ref{fig:svrt_main} this problem was assigned to the spatial tasks. Bars re-plotted from \citet{kim2018not}.} \label{fig:svrt_appendix} \end{figure} \section*{Case Study 3: Recognition Gap} \citet{ullman2016atoms} investigated the minimal visual information necessary for object recognition. To this end, they successively cropped or reduced the resolution of a natural image until more than $50\%$ of all human participants failed to identify the object. The study revealed that recognition performance drops sharply if the minimal recognizable image crops are reduced any further. They referred to this drop in performance as the ``recognition gap''. The gap is computed by subtracting the proportion of people who correctly classify the largest unrecognizable crop (e.g. $0.2$) from the proportion of people who correctly classify the smallest recognizable crop (e.g. $0.9$). In this example, the recognition gap would evaluate to $0.9 - 0.2 = 0.7$. On the same human-selected crops, \citet{ullman2016atoms} found that the recognition gap is much smaller for machine vision algorithms ($0.14\pm0.24$) than for humans ($0.71\pm0.05$). 
The researchers concluded that machine vision algorithms would not be able to ``explain [humans'] sensitivity to precise feature configurations'' and ``that the human visual system uses features and processes that are not used by current models and that are critical for recognition''. In a follow-up study, \citet{srivastava2019minimal} identified ``fragile recognition images'' (FRIs) with an exhaustive machine-based procedure whose results include a subset of patches that adhere to the definition of minimal recognizable configurations (MIRCs) by \citet{ullman2016atoms}. On these machine-selected FRIs, a DNN experienced a moderately high recognition gap, whereas humans experienced a low one. Because of the differences between the selection procedures used in \citet{ullman2016atoms} and \citet{srivastava2019minimal}, the question remained open whether machines would show a high recognition gap on machine-selected minimal images if the selection procedure was similar to the one used in \citet{ullman2016atoms}. \subsection*{Our Experiment} Our goal was to investigate if the differences in recognition gaps identified by \citet{ullman2016atoms} would at least in part be explainable by differences in the experimental procedures for humans and machines. Crucially, we wanted to assess machine performance on \textit{machine}-selected, and not \textit{human}-selected, image crops. We therefore implemented the psychophysics experiment in a machine setting to search for the smallest recognizable images (minimal recognizable crops: ``MIRCs'') and the largest unrecognizable images (``sub-MIRCs''). In the final step, we evaluated our machine model's recognition gap using the \textit{machine}-selected MIRCs and sub-MIRCs. \paragraph{Methods} Our machine-based search algorithm used the deep convolutional neural network BagNet-33 \citep{brendel2019approximating}, which makes it straightforward to analyze images as small as $33 \times 33$ pixels. 
In the first step, the classification probability was evaluated for the whole image. If it was above $0.5$, the image was successively cropped and reduced in resolution. In each step, the best-performing crop was taken as the new parent. When the classification probability of all children fell below $0.5$, the parent was identified as the MIRC and all its children were considered sub-MIRCs. In order to evaluate the recognition gap, we calculated the difference in accuracy between the MIRC and the \textit{best-performing} sub-MIRC. This definition is more conservative than the one from \citet{ullman2016atoms}, who evaluated the difference in accuracy between the MIRC and the \textit{worst-performing} sub-MIRC. For more details on the search procedure, please see Appendix \ref{rec_gap:methods} and \ref{rec_gap:class_stride_analysis}. \paragraph{Results} We evaluated the recognition gap on two data sets: the original images from \citet{ullman2016atoms} and a subset of the ImageNet validation images \citep{imagenet_cvpr09}. As shown in Figure \ref{fig:recgap_main}A, our model has an average recognition gap of $0.99\pm0.01$ on the machine-selected crops of the data set from \citet{ullman2016atoms}. On the machine-selected crops of the ImageNet validation subset, a large recognition gap occurs as well. Our values are similar to the recognition gap in humans and differ from the machines' recognition gap ($0.14\pm0.24$) between human-selected MIRCs and sub-MIRCs as identified by \citet{ullman2016atoms}. \begin{figure} \centering \includegraphics[width=\linewidth]{figures/rec_gap_main.pdf} \caption{A: BagNet-33's probability of the correct class for decreasing crops: The sharp drop when the image becomes too small or the resolution too low is called the ``recognition gap'' \citep{ullman2016atoms}. It was computed by subtracting the model's predicted probability of the correct class for the sub-MIRC from the model's predicted probability of the correct class for the MIRC. 
As an example, for the glasses stimulus it evaluated to $0.9999-0.0002 = 0.9997$. The crop size on the x-axis corresponds to the size of the original image in pixels. Steps of reduced resolution are not displayed such that the three sample stimuli can be displayed coherently. B: Recognition gaps for machine algorithms (vertical bars) and humans (gray horizontal bar). A recognition gap \textit{is} identifiable for the DNN BagNet-33 when testing machine-selected stimuli of the original images from \citet{ullman2016atoms} and a subset of the ImageNet validation images \citep{imagenet_cvpr09}. Error bars denote standard deviation.} \label{fig:recgap_main} \end{figure} \paragraph{Discussion} Our findings contrast with the claims made by \citet{ullman2016atoms}. The latter study concluded that machine algorithms are not as sensitive as humans to precise feature configurations and that they are missing features and processes that are ``critical for recognition.'' First, our study shows that a machine algorithm \textit{is} sensitive to small image crops; it is only the precise minimal features that differ between humans and machines. Second, by the word ``critical,'' \citet{ullman2016atoms} imply that object recognition would not be possible without these human features and processes. Applying the same reasoning to \citet{srivastava2019minimal}, the low human performance on machine-selected patches would suggest that humans miss ``features and processes critical for recognition'' -- an obviously overreaching conclusion. Furthermore, the success of modern artificial object recognition speaks against the conclusion that the purported processes are ``critical'' for recognition, at least within this discretely-defined recognition task. 
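For concreteness, the greedy crop-search described in the Methods paragraph can be sketched as follows. Here `classify_prob` and `get_children` are hypothetical stand-ins for, respectively, the model's predicted probability of the correct class and the crop-and-downsample step; the actual procedure is detailed in Appendix \ref{rec_gap:methods}.

```python
def find_mirc(image, classify_prob, get_children, threshold=0.5):
    """Greedy descent: shrink the image until all children fall below threshold.

    classify_prob(img) -> probability of the correct class (stand-in for the model);
    get_children(img) -> list of cropped / lower-resolution descendants.
    Returns (mirc, sub_mircs), or (None, None) if even the full image
    is not recognized.
    """
    if classify_prob(image) <= threshold:
        return None, None  # whole image unrecognizable: no MIRC exists
    parent = image
    while True:
        children = get_children(parent)
        probs = [classify_prob(c) for c in children]
        if not children or max(probs) <= threshold:
            # all children unrecognizable: the parent is the MIRC
            return parent, children
        # otherwise descend into the best-performing child
        parent = children[probs.index(max(probs))]

def recognition_gap(classify_prob, mirc, sub_mircs):
    # conservative variant used here: MIRC minus the *best-performing* sub-MIRC
    return classify_prob(mirc) - max(classify_prob(s) for s in sub_mircs)
```

In this sketch the descent always follows the single best child, which mirrors the "best performing crop was taken as the new parent" rule; the conservative gap definition subtracts the best rather than the worst sub-MIRC probability.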
Finally, what we can conclude from the experiments of \citet{ullman2016atoms} and from our own is that both the human and a machine visual system \textit{can} recognize small image crops and that there \textit{is} a sudden drop in recognizability when the amount of information is reduced further. In summary, these results highlight the importance of testing humans and machines in settings that are as similar as possible, and of avoiding a human bias in the experimental design. All conditions, instructions and procedures should be as close as possible between humans and machines in order to ensure that observed differences are due to inherently different decision strategies rather than differences in the testing procedure. \section*{Case Study 1: Closed Contour Detection} Closed contours play a special role in human visual perception. According to the Gestalt principles of Pr\"agnanz and good continuation, humans can group distinct visual elements together so that they appear as a ``form'' or ``whole''. As such, closed contours are thought to be prioritized by the human visual system and to be important in perceptual organization \citep{koffka2013principles, elder1993effect, kovacs1993closed, tversky2004contour, ringach1996spatial}. Specifically, to tell whether a line closes up to form a closed contour, humans are believed to implement a process called ``contour integration'' that relies at least partially on global information \citep{levi2007global, loffler2003local, mathes2007closure}. Even many flanking, open contours would hardly impair humans' robust closed contour detection abilities. \subsection*{Our Experiments} We hypothesize that, in contrast to humans, closed contour detection is difficult for DNNs. The reason is that this task presumably requires long-range contour integration, whereas DNNs are believed to process mainly local information \citep{geirhos2018imagenet, brendel2019approximating}. 
Here, we test how well humans and neural networks can separate closed from open contours. To this end, we create a custom data set, test humans and DNNs on it and investigate the decision-making process of the DNNs. \subsubsection*{DNNs and Humans Reach High Performance} We created a data set with two classes of images: the first class contained a closed contour, the second one did not. In order to make sure that the statistical properties of the two classes were similar, we included a main contour in both classes. While this contour line closed up for the first class, it remained open for the second class. The main contour consisted of $3 - 9$ straight line segments. In order to make the task more difficult, we added several flankers with either one or two line segments that each had a length of at least $32$ px (Figure \ref{fig:cc_main}A). The size of the images was $256 \times 256$ px. All lines were black and the background was uniformly gray. Details on the stimulus generation can be found in Appendix \ref{cc_dataset_details}. \begin{figure} \centering \includegraphics[width=\linewidth]{figures/cc_main16.pdf} \caption{ \textbf{A}: Our ResNet-50-based model generalized well to many data sets without further retraining, suggesting it would be able to distinguish closed and open contours. \textbf{B}: However, the poor performance on many other data sets showed that our model did \textit{not} learn the concept of closedness. \textbf{C}: The heatmaps of our BagNet-33-based model show which parts of the image provided evidence for closedness (blue, negative values) or openness (red, positive values). The patches on the sides show the most extreme, non-overlapping patches and their logit values. The logit distribution shows that most patches had logit values close to zero (y-axis truncated) and that many more patches in the open stimulus contributed positive logit values. 
\textbf{D}: Our BagNet- and ResNet-based models showed different performances on generalization sets, such as the asymmetric flankers. This indicates that the local decision-making process of the substitute model BagNet is not used by the original model ResNet. Figure best viewed electronically.} \label{fig:cc_main} \end{figure} Humans identified the closed contour stimulus very reliably in a two-interval forced choice task. Their performance was $88.39\%$ (SEM = $2.96\%$) on stimuli whose generation procedure was identical to the training set. For stimuli with white instead of black lines, human participants reached a performance of $90.52\%$ (SEM = $1.58\%$). The psychophysical experiment is described in Appendix \ref{psychopyhysics_cc}. We fine-tuned a ResNet-50 \citep{he2016deep} pre-trained on ImageNet \citep{imagenet_cvpr09} on the closed contour data set. Similar to humans, it performed very well and reached an accuracy of $99.95\%$ (see Figure \ref{fig:cc_main}A [i.i.d. to training]). We found that both humans and our DNN reached high accuracy on the closed contour detection task. From a human-centered perspective, it is tempting to infer that the model had learned the concept of open and closed contours, and possibly that it performs a contour integration-like process similar to that of humans. However, such an inference would have been premature. To better understand the degree of similarity, we investigated how our model performs on variations of the data sets that were not used during the training procedure. \subsubsection*{Generalization Tests Reveal Differences} Humans would be expected to have no difficulties if the number of flankers, the color or the shape of the lines differed. We here test our model's robustness on such variants of the data set. If our model used decision-making processes similar to those of humans, it should be able to generalize well without any further training on the new images. 
This procedure offers another perspective on whether our model really understood the concept of closedness or just picked up some statistical cues in the training data set. We tested our model on $15$ variants of the data set (out-of-distribution test sets) without fine-tuning on these variations. As shown in Figure \ref{fig:cc_main}A and B, our trained model generalized well to many but not all modified stimulus sets. On the following variations, our model achieved high accuracy: Curvy contours ($1$, $3$) were easily distinguishable for our model, as long as the diameter remained below \SI{100}{px}. Also, adding a dashed, closed flanker ($2$) did not lower its performance. The classification ability of the model remained similarly high for the no flankers ($4$) and the asymmetric flankers condition ($6$). When testing our model on main contours that consisted of more edges than the ones presented during training ($5$), the performance was also hardly impaired. It remained high as well when multiple curvy open contours were added as flankers ($7$). The following variations were more difficult for our model: If the size of the contour got too large, a moderate drop in accuracy was found ($8$). For binarized images, our model's performance was also reduced ($9$). And finally, (almost) chance performance was observed when varying the line width ($14$, $10$, $13$), changing the line color ($11$, $12$) or using dashed curvy lines ($15$). While humans would perform well on all variants of the closed contour data set, the failure of our model on some generalization tests suggests that it solves the task differently from humans. On the other hand, it is equally difficult to prove that the model does not understand the concept. As described by \citet{Firestone201905334}, models can ``perform differently despite similar underlying competences''. 
Either way, we argue that it is important to openly consider alternative mechanisms to the human approach of global contour integration. \subsubsection*{Our Closed Contour Detection Task is Partly Solvable with Local Features} In order to investigate an alternative mechanism to global contour integration, we here design an experiment to understand how well a decision-making process based on purely local features can work. For this purpose, we trained and tested BagNet-33 \citep{brendel2019approximating}, a model that has access to local features only. It is a variation of ResNet-50 \citep{he2016deep} in which most $3 \times 3$ kernels are replaced by $1 \times 1$ kernels, so that the receptive field size at the top-most convolutional layer is restricted to $33 \times 33$ pixels. We found that our restricted model still reached close to $90\%$ performance. In other words, contour integration was not necessary to perform well on the task. To understand which local features the model relied on most, we analyzed the contribution of each patch to the final classification decision. To this end, we used the log-likelihood values for each $33 \times 33$ pixels patch from BagNet-33 and visualized them as a \textit{heatmap}. Such a straightforward interpretation of the contributions of single image patches is not possible with standard DNNs like ResNet \citep{he2016deep} due to their large receptive field sizes in top layers. The heatmaps of BagNet-33 (see Figure \ref{fig:cc_main}C) revealed which local patches played an important role in the decision-making process: An open contour was often detected by the presence of an end-point at a short edge. Since all flankers in the training set had edges longer than $33$ pixels, the presence of this feature was an indicator of an open contour. In turn, the absence of this feature was an indicator of a closed contour. Whether the ResNet-50-based model used the same local feature as the substitute model was unclear. 
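The patch-wise evidence aggregation behind such heatmaps can be sketched as follows. This is a simplified stand-in for BagNet's final pooling stage with hypothetical logit values, using the same sign convention as the heatmaps in Figure \ref{fig:cc_main}C (positive = evidence for openness):

```python
import numpy as np

def bagnet_decision(patch_logits):
    """Image-level decision from per-patch evidence, BagNet style.

    `patch_logits` holds one logit per 33x33 patch location (positive =
    evidence for "open", negative = evidence for "closed").  A BagNet
    simply averages this local evidence, so no interaction between
    distant patches -- i.e. no global contour integration -- enters the
    decision; the heatmap *is* the decision.
    """
    heatmap = np.asarray(patch_logits, dtype=float)
    mean_logit = float(heatmap.mean())
    return heatmap, mean_logit, ("open" if mean_logit > 0 else "closed")

# One strong "line end-point" patch tips an otherwise neutral image:
logits = np.zeros((8, 8))
logits[3, 4] = 6.4  # hypothetical end-point patch
_, evidence, label = bagnet_decision(logits)
```

Because the evidence is simply averaged, a single distinctive local patch such as a line end-point can decide the class, without any contour integration.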
To answer this question, we tested BagNet on the previously mentioned generalization tests. We found that the data sets on which it showed high performance were sometimes different from those of ResNet (see Figure \ref{fig:cc_bagnet_gen} in the Appendix). A striking example was the failure of BagNet on the ``asymmetric flankers'' condition (see Figure \ref{fig:cc_main}D). For these images, the flankers often consisted of shorter line segments and thus obscured the local feature we assumed BagNet to use. In contrast, ResNet performed well on this variation. This suggests that the decision-making strategy of ResNet did not heavily depend on the local feature found with the substitute BagNet model. In summary, the generalization tests, the high performance of BagNet, as well as the existence of a distinctive local feature, provide evidence that our human-biased assumption was misleading. We saw that other mechanisms for closed contour detection besides global contour integration do exist (see Introduction \textit{"Differentiating between necessary and sufficient mechanisms"}). As humans, we can easily miss the many statistical subtleties by which a task can be solved. In this respect, BagNets proved to be a useful tool to test a purportedly ``global'' visual task for the presence of local artifacts. Overall, various experiments and analyses can be beneficial to understand mechanisms and to overcome our human reference point. \section*{Case Study 2: Synthetic Visual Reasoning Test} In order to compare human and machine performance at learning abstract relationships between shapes, \citet{fleuret2011comparing} created the Synthetic Visual Reasoning Test (SVRT) consisting of 23 problems (see Figure \ref{fig:svrt_main}A). They showed that humans need only a few examples to understand the underlying concepts. \citet{stabinger201625} as well as \citet{kim2018not} assessed the performance of deep convolutional neural networks on these problems. 
Both studies found a dichotomy between two task categories: While high accuracy was reached on spatial problems, the performance on same-different problems was poor. In order to compare the two types of tasks more systematically, \citet{kim2018not} developed a parameterized version of the SVRT data set called PSVRT. Using this data set, they found that for same-different problems, an increase in the complexity of the data set could quickly strain their models. In addition, they showed that an attentive version of the model did not exhibit the same deficits. From these results the authors concluded that feedback mechanisms as present in the human visual system, such as attention, working memory or perceptual grouping, are probably important components for abstract visual reasoning. More generally, these papers have been perceived and cited as supporting the broader claim that feed-forward DNNs are not able to learn same-different relationships between visual objects \citep{serre2019deep, schofield2018understanding} -- at least not ``efficiently'' \citep{Firestone201905334}. We argue that the results of \citet{kim2018not} cannot be taken as evidence for the importance of feedback components for abstract visual reasoning: \begin{enumerate} \item While their experiments showed that same-different tasks are harder to \emph{learn} for their models, this might also be true for the human visual system. Normally-sighted humans have experienced lifelong visual input; only looking at human performance with this extensive learning experience cannot reveal differences in learning difficulty. \item Even if there is a difference in learning complexity, this difference is not necessarily due to differences in the inference mechanism (e.g. feed-forward vs feedback)---the large variety of other differences between biological and artificial vision systems could be critical causal factors as well. 
\item Along the same lines, small modifications in the learning algorithm or architecture can significantly change learning complexity. For example, changing the network depth or width can greatly improve learning performance \citep{tan2019efficientnet}. \item Just because an attentive version of the model can learn both types of tasks does not prove that feedback mechanisms are necessary for these tasks (see Introduction: \textit{"Differentiating between necessary and sufficient mechanisms"}). \end{enumerate} Determining the necessity of feedback mechanisms is especially difficult because feedback mechanisms are not clearly distinct from purely feed-forward mechanisms. In fact, any finite-time recurrent network can be unrolled into a feed-forward network \citep{liao2016bridging, van2020going}. For these reasons, we argue that the importance of feedback mechanisms for abstract visual reasoning remains unclear. In the following paragraph we present our own experiments on the SVRT data set and show that standard feed-forward DNNs can indeed perform well on same-different tasks. This confirms that feedback mechanisms are not strictly necessary for same-different tasks, although they helped in the specific experimental setting of \citet{kim2018not}. Furthermore, this experiment highlights that changes of the network architecture and training procedure can have large effects on the performance of artificial systems. \subsection*{Our Experiments} The findings of \citet{kim2018not} were based on rather small neural networks, which consisted of up to six layers. However, typical network architectures used for object recognition consist of more layers and have larger receptive fields. For this reason we tested a representative of such networks, namely ResNet-50. The experimental setup can be found in Appendix \ref{appendix_svrt}. 
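The unrolling argument mentioned above can be made concrete in a few lines (a toy sketch with a generic tanh recurrence, not any specific architecture from the cited works): a recurrent cell iterated $T$ times computes exactly the same function as a $T$-layer feed-forward network whose layers share weights.

```python
import numpy as np

def run_recurrent(x, W, U, T):
    """T iterations of a simple recurrent cell h <- tanh(W h + U x)."""
    h = np.zeros(W.shape[0])
    for _ in range(T):
        h = np.tanh(W @ h + U @ x)
    return h

def run_unrolled(x, layers):
    """The same computation written as an explicit feed-forward network:
    one layer per time step, with weights shared across layers."""
    h = np.zeros(layers[0][0].shape[0])
    for W, U in layers:
        h = np.tanh(W @ h + U @ x)
    return h

rng = np.random.default_rng(0)
W, U = rng.normal(size=(4, 4)), rng.normal(size=(4, 3))
x = rng.normal(size=3)
h_rec = run_recurrent(x, W, U, T=5)
h_ff = run_unrolled(x, [(W, U)] * 5)  # identical by construction
```

Weight sharing across the unrolled layers reproduces the recurrence exactly, which is why finite-time recurrence and feed-forward processing are not clearly distinct mechanisms.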
We found that our feed-forward model can in fact perform well on the same-different tasks of SVRT (see Figure \ref{fig:svrt_main}B, see also concurrent work of \citet{messina2019testing}). This result was not due to an increase in the number of training samples. In fact, we used fewer images ($28,000$ images) than \citet{kim2018not} ($1$ million images) and \citet{messina2019testing} ($400,000$ images). Of course, the results were obtained on the SVRT data set and might not hold for other visual reasoning data sets (see Introduction \textit{"Testing generalization of mechanisms"}). In the very low-data regime ($1,000$ samples), we found a difference between the two types of tasks. In particular, the overall performance on same-different tasks was lower than on spatial reasoning tasks. As for the previously mentioned studies, this cannot be taken as evidence for systematic differences between feed-forward neural networks and the human visual system. In contrast to the neural networks used in this experiment, the human visual system is naturally pre-trained on large amounts of visual reasoning tasks, thus making the low-data regime an unfair testing scenario from which it is almost impossible to draw solid conclusions about differences in the internal information processing. In other words, it might very well be that the human visual system trained from scratch on the two types of tasks would exhibit a difference in sample efficiency similar to that of a ResNet-50. Furthermore, the performance of a network in the low-data regime is heavily influenced by many factors other than architecture, including regularization schemes or the optimizer, making it even more difficult to reach conclusions about systematic differences in the network structure between humans and machines. \begin{figure} \centering \includegraphics[width=\linewidth]{figures/svrt_main.pdf} \caption{A: For three of the 23 SVRT problems, two example images representing the two opposing classes are shown. 
In each problem, the task was to find the rule that separated the images and to sort them accordingly. B: \citet{kim2018not} trained a DNN on each of the problems. They found that same-different tasks (red points), in contrast to spatial tasks (blue points), could not be solved with their models. Our ResNet-50-based models reached high accuracies for all problems when using $28,000$ training examples and weights from pre-training on ImageNet.} \label{fig:svrt_main} \end{figure}
\section{Introduction and Summary} The phase diagram of the black-hole black-string transition (see the reviews \cite{review,HOrev}) was conjectured in \cite{TopChange} to include a ``merger'' point -- a static vacuum metric (see figure \ref{merger-figure}) which lies on the boundary between the black-string and black-hole branches. It can be thought of as either a black string whose waist has become so thin that it has marginally pinched, or as a black hole which has become so large that its poles have marginally intersected and merged (hence the name ``merger''). The metric cannot be completely smooth as it interpolates between two different space-time topologies, but it may have only one naked singularity. \begin{figure}[t!] \centering \noindent \includegraphics[width=7cm]{merger-figure.eps} \caption[]{The merger metric. $r$ is the radial coordinate in the extended directions, $z$ is periodically compactified, while time and angular coordinates are suppressed. The heavy lines denote the horizon of a static black object which is at threshold between being a black-hole and being a black-string. The naked singularity is at the $\times$-shaped pinching (horizon crossing) point. Upon zooming onto the encircled singularity it is convenient to replace $(r,z)$ by radial coordinates $(\rho,\chi)$. We shall be mostly interested in the ``critical merger solution'' -- the local metric near the singularity, namely the encircled portion of the metric (in the limit that the circle's size is infinitesimal).} \label{merger-figure} \end{figure} The black-hole black-string system has been the subject of intensive numerical research \cite{Wiseman1,KPS2,KudohWiseman1,KudohWiseman2,KKR}. Naturally, the merger space-time itself is unattainable numerically since it includes a singularity, but it may be approached by following either of the two branches far enough. 
Indeed, all the available data indicates that the black-string and the black-hole branches approach each other, in accord with the merger prediction. At merger the curvature is unbounded around the pinch point. One defines the ``critical merger solution'' to be the local metric around the pinch point (at merger), namely the one achieved through a zooming limit around the point. It is natural to predict \cite{TopChange,BKscaling} that the critical merger solution will lose all memory of the macroscopic scales of the problem (the size of the extra dimension and the size of the black hole) and moreover be self-similar, namely invariant under a scaling transformation. The central motivation of this paper is \emph{to determine the critical merger solution}. Self-similar metrics belong to one of two classes: Continuous Self-Similarity (CSS) or Discrete Self-Similarity (DSS): while a CSS metric is invariant under any scale transformation and can be pictured as a cone, a DSS metric is invariant only under a specific scaling transformation (and its powers) and can be pictured as a wiggly cone with its wiggles being log-periodic (see figure \ref{cone-illus}). A key question is: \emph{Is the critical merger solution CSS or DSS}? \EPSFIGURE{cone-illus.eps,width=7cm}{An illustration of a continuously self-similar geometry (CSS) as a cone, and a discretely self-similar geometry (DSS) as a wiggly cone. The singularity is at the tip. \label{cone-illus}} The significance of the critical solution is that critical exponents of the system near the merger point are determined by properties of this solution (this is known to be the case in the closely related system \cite{BKscaling} of Choptuik critical collapse \cite{Choptuik,GundlachRev}). \vspace{.5cm} \noindent {\bf Considerations and plan}. 
The direct way to settle our question would be through numerical simulation of the system: one would need to obtain solutions which are ever closer to the merger and with ever higher resolution near the high curvature region (where the singularity is about to form). This computation would be comparable in difficulty to Choptuik's original discovery of critical collapse \cite{Choptuik} in that it requires successive mesh refinements over several orders of magnitude. However, since we are asking a local question, we may expect or hope that a local analysis would suffice, namely the analysis of the metric close to the singularity. On the one hand, a local analysis is disadvantaged relative to a solution of the whole system in being indirect; its results therefore need some interpretation, which reduces certainty. On the other hand, a local analysis supplies more insight into the mechanism that determines the local metric, and it is easier to perform. These latter advantages led us to prefer the local analysis for the current study. The demanding full (numerical) analysis is yet to be performed (see however the suggestive but inconclusive results in \cite{KolWiseman}). Our first assumption is that the local metric is self-similar, as discussed above. Our second working assumption is that if several self-similar metrics exist then the one actually realized by the system is the metric which is most attractive, or most stable, in an appropriately defined manner. One local self-similar solution, the ``double-cone'', has been known for a while \cite{TopChange}. 
In terms of the $(\rho,\chi)$ coordinates defined in figure \ref{merger-figure} it is given by \begin{equation} ds^2= d\rho^2 + {1 \over D-2}\, \rho^2 \left[ d\chi^2+\cos^2(\chi)\, dt^2 + (D-4)\, d\Omega^2_{D-3} \right] \label{d-cone} ~,\end{equation} where $t$ denotes Euclidean time (since the solutions are static we may work either with a Lorentzian or with a Euclidean signature), and $d\Omega^2_{D-3}$ is the standard metric on the ${\bf S}^{D-3}$ sphere. Let us recall some of its properties. The $(\chi,t)$ portion of the metric is (conformal to) the two-sphere ${\bf S}^2$. Thus the metric is a cone over a product of spheres ${\bf S}^2 \times {\bf S}^{D-3}$, which is the origin of its name. Its isometry group is $SO(3)_{\chi,t} \times SO(D-2)_\Omega$, which is an enhancement relative to the generic $SO(2)_t \times SO(D-2)_\Omega$ isometry of the system. The double cone is smooth everywhere except for the tip $\rho=0$. It is manifestly CSS under the transformations $\rho \to e^\alpha\, \rho$ for any $\alpha$. Finally, a linear analysis of perturbations around the double cone preserving the full $SO(3) \times SO(D-2)$ isometry reveals \cite{TopChange} an oscillating nature (as a function of $\rho$) for low enough dimensions, $D<10$. The current research started from a confusion regarding the CSS/DSS nature of the critical merger solution. \cite{TopChange} assumed CSS for simplicity and found the double cone. \cite{KolWiseman} found good but not overwhelming evidence for the double cone in numerical solutions. In \cite{BKscaling} a close relation between the merger and critical collapse was discovered,\footnote{See also \cite{SorkinOren} which followed and studied the dimensional dependence of the Choptuik scaling constants.} raising the possibility that just as the critical collapse solution is DSS, the critical merger could be DSS as well. 
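As a quick check (ours, not part of the original text), the CSS property of (\ref{d-cone}) follows in one line: under $\rho \to e^\alpha \rho$ the metric rescales homogeneously,

```latex
\begin{equation*}
ds^2 \;\to\; e^{2\alpha}\, d\rho^2 + {1 \over D-2}\, e^{2\alpha}\rho^2
\left[ d\chi^2+\cos^2(\chi)\, dt^2 + (D-4)\, d\Omega^2_{D-3} \right]
= e^{2\alpha}\, ds^2 ~,
\end{equation*}
```

so the transformation is a homothety; since the vacuum Einstein equations are invariant under constant rescalings of the metric, each rescaled cone is again a solution and the geometry has no intrinsic scale.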
Moreover, the linearized oscillations mentioned above were realized to be analogous to GHP oscillations (Gundlach-Hod-Piran \cite{Gundlach96,HodPiran}) in critical collapse. In critical collapse the log-period of GHP oscillations is known to be essentially the log-period of the DSS critical solution. Therefore \cite{BKscaling} viewed the oscillations as pointing towards a DSS nature. Accordingly, our first objective was to find a DSS solution to the system. Since we saw little hope in finding an analytic solution, we turned to a numerical method. Rather than simulate the whole system and tune a parameter for criticality, we followed \cite{Gundlach95} and imposed periodic boundary conditions. We used two different algorithms. The first was of a relaxation type: the 3 fields are solved for iteratively using a selected 3 out of the 5 equations. This method involves an interesting interplay between the usual local variables (the fields) and certain global variables. The essential idea in the second algorithm is to take as a merit function the sum of squares of all 5 equations. Altogether, despite considerable work the code never converged to a (new) DSS solution, but rather to the double-cone. We therefore relegate the description of the algorithms and their implementation to appendix \ref{search-section}, where we detail only the second approach. The apparent paradox between the existence of GHP oscillations and the absence of a DSS solution is explained, in hindsight, by the fact that while DSS indeed implies GHP oscillations the converse is incorrect: oscillations may arise also around CSS solutions. The fruitless search for a DSS solution pointed us towards a different effort, which is the focus of this paper: one could \emph{analyze the linear stability of the double cone}. If it is found to be unstable (in a sense to be described below) then it is very unlikely to be the metric chosen by the black-hole black-string system. 
Indeed, the asymptotic boundary conditions, including the compact nature of the extra dimension, can be viewed as an asymptotic perturbation of the critical merger solution. By assumption, this perturbation is irrelevant near the singularity. In addition one could consider turning on various perturbations far away from the system, such as putting the black object in a non-flat but low-curvature background or turning on a cosmological constant (note that these perturbations belong to a wider class -- the first does not obey the generic isometries and the second perturbs the equations). If the double-cone is found to be unstable to some asymptotic perturbation it would be unlikely to be realized as the critical merger solution. On the other hand, if it is found to be stable that would make it a viable candidate for being the critical merger solution. \vspace{.5cm} \noindent {\bf Stability}. The preceding discussion motivates us to formulate our stability criterion. Actually, we are seeking a solution which is not absolutely stable to asymptotic perturbations, but one which has a single unstable such mode -- this is the mode which corresponds to motion on the branch of black-string (or black-hole) solutions away from merger.\footnote{ More precisely, motion onto the two branches, that of a black-hole and that of a black-string, is associated with two modes defined up to multiplication by $\mathbb{R}_+$, and these modes are not necessarily related by a multiplication by $-1$. See the last paragraph of section 3 for further details.} Therefore we define a self-similar solution to be \emph{stable} \emph{if all but one asymptotic perturbation are irrelevant at the singularity}, and we proceed to define ``irrelevant'' and ``asymptotic perturbation''. 
Each mode can be characterized by its $\rho$ (see figure \ref{merger-figure}) dependence, which must be a power law, since the background is continuously self-similar and therefore the scaling generator can be diagonalized simultaneously with the small perturbations operator (the Lichnerowicz operator) -- this will be seen explicitly in section \ref{pert-section}. For each mode we define a constant $s$ through \begin{equation} \delta g^{\mu}_{\nu} \sim \rho^s ~, \label{def-s} \end{equation} where $\delta g_{\mu \nu}$ is the perturbation to the metric, and in general $s$ could be complex. Actually, since the modes are determined by a system of second order ordinary differential equations (ODEs), the modes come in pairs, and we denote the corresponding pair of $s$ constants by $s_\pm$, ordered such that $\Re(s_-) \le \Re(s_+)$. We refer to a mode as \emph{irrelevant} if it is negligible close to the tip (the singularity), namely if \begin{equation} s > 0 ~. \label{irrelevant} \end{equation} We define the \emph{asymptotic perturbations} as those corresponding to $s_+$. The rationale behind the definition is commonplace: for example in electrostatics, solutions of the 3d Laplace equation come with two possibilities for the radial dependence for each angular number $l$, being either $r^l$ or $r^{-l-1}$. The $r^l$ mode is interpreted as an asymptotic perturbation, while the $r^{-l-1}$ mode is interpreted as a perturbation to the source which lies at the origin. Our definition is ambiguous when $\Re(s_-) = \Re(s_+)$, but this will happen only for the $l=0$ case, which is studied in detail in section \ref{non-pert-section}, and will turn out to pose no problem.\footnote{While our definition is intuitively clear it differs from the usual definitions requiring smoothness and normalizability at the tip. Since the double cone is singular the usual prescriptions do not apply. 
Conceptually one could determine the perturbation spectrum around the smoothed cone demanding the standard smoothness and normalizability at the smooth tip, and then rescale towards the double-cone (see figure \ref{ScaledCones}). It would be interesting to test whether this limit would result in our ``$s_+$ prescription''. Another possibility would be to perform a non-linear admissibility analysis of the modes, which we indeed perform in the $l=0$ sector, as explained below.} Altogether our definition of stability as the case when all asymptotic perturbations but one are irrelevant at the tip means that \begin{equation} \fbox{$~~\rule[-2mm]{0mm}{8mm} s_+ > 0 ~~$} \label{stability} \end{equation} for all but one perturbation. It can be said that such a solution is a co-dimension 1 attractor for asymptotic perturbations. We note that this definition differs from other common definitions of stability which involve the positivity of the Lichnerowicz operator or absence of imaginary frequencies in modes with time dependence. \vspace{.5cm} \noindent {\bf Method and results}. In section \ref{pert-section} we determine the spectrum of $s$ constants (\ref{def-s}) for all perturbations with the generic $SO(2) \times SO(D-2)$ isometry. We use an action approach: we write down the most general ansatz consistent with the isometry, which includes 5 fields depending on two variables, compute the action and expand it to second order in perturbations. Then we fix a gauge, derive the corresponding pair of constraints from the action and the (other) 3 equations of motion. The latter are solved by separation of variables and the solutions are then tested against the constraints. Our result for the spectrum is summarized in (\ref{spectrum}). It consists of two families of solutions $s^s_\pm(l),\, s^t_\pm(l)$, a scalar and a tensor with respect to ${\bf S}^2_{\chi,t}$. 
For all but one\footnotemark[2] $l=0$ mode we find that $s_+> 0$ and hence, as explained above, the double cone is a viable candidate to be the critical merger solution. Combining this with the fact that even after our search for DSS solutions described in appendix \ref{search-section} the double-cone remains the only known self-similar solution with these symmetries, \emph{we interpret the result as strong evidence that the double cone is indeed the critical merger solution}. \vspace{.5cm} \noindent {\bf Non-linear spherical perturbations.} The spherical ($l=0$) perturbations are special. There are two such modes, and in \cite{TopChange} evidence was given that two specific linear combinations generate ``smoothed cones'' (see figure \ref{ScaledCones}); this was done by attempting a Taylor expansion of the fields and equations around the smooth tip of an assumed smoothed cone and finding no obstruction to a solution. In section \ref{non-pert-section} we confirm this by an analysis of the qualitative features of the full non-linear dynamical system. The perturbation associated with the smoothed cones is precisely the special relevant mode mentioned above that moves the solution off criticality and along the solution branch\footnotemark[2]. Any other linear combination is seen to be highly singular at the tip, which justifies us in discarding it (and it is consistent with our ``$s_+$ prescription''). From the analytical point of view the qualitative analysis shows an interesting feature. The system is non-integrable (see the discussion below eq. \ref{K2}). The qualitative dynamics of 2d phase spaces is quite limited and never chaotic, while in higher dimensions chaos is common. The phase space of this system is 3d: there are two phase space dimensions for each of the two modes, minus a constraint. However, the dynamical system inherits the scaling symmetry of the background. 
One can define a reduced 2d phase space system by choosing a plane transversal to the symmetry flow, and then define a reduced dynamical flow to be the original flow projected onto the plane along the symmetry flow. This 2d phase space system is now amenable to analysis through the determination of equilibrium points: focal, nodal or saddle, and we are able to solve for the full qualitative features. The solution is summarized in figure \ref{phase-space5d}. \EPSFIGURE{ScaledCones.eps,width=5cm}{A single smoothed cone solution can be scaled down to provide a continuous family of metrics which approach the cone. \label{ScaledCones}} It turns out that a related dynamical system, given by the Hamiltonian $H=(p_x^2+p_y^2)/2-x^2\, y^2/2$,\footnote{In order to cast this system in a form similar to ours (\ref{K2}) we transform first to the Lagrangian $L=\left(\dot{x}^2+\dot{y}^2+x^2\,y^2\right)/2$ and then change variables into $u=\log(x),\; v=\log(y)$ to obtain $L=\left(e^{2 u}\, \dot{u}^2+ e^{2v}\, \dot{v}^2+\exp(2u+2v)\right)/2$. In the $H=0$ sector we may multiply $L$ (or $H$) by any lapse function, and we choose $\exp(-u-v)$ to arrive at $L=\left(e^{u-v}\, \dot{u}^2+ e^{v-u}\, \dot{v}^2+\exp(u+v)\right)/2$. This form still differs from (\ref{K2}) but it is similar as it has a nearly canonical kinetic term and an exponential potential.} was already analyzed in quite a different physical setting as a model for critical phenomena \cite{FLC:1998} (see also a higher dimensional generalization \cite{Frolov:2006}). There one analyzes minimal surfaces in a black hole background and one finds the (local) critical solutions to be cones. The appearance of essentially the same dynamical system (system of ODEs) in different physical settings suggests that this dynamical system is in some sense the minimal example for the physics of criticality, including self-similarity and critical exponents. 
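As a numerical illustration (our own sketch, not part of the original analysis), the model Hamiltonian $H=(p_x^2+p_y^2)/2-x^2 y^2/2$ can be integrated with a symplectic leapfrog scheme; starting from initial data in the $H=0$ sector relevant above, the flow stays on that sector to high accuracy:

```python
import numpy as np

def hamiltonian(q, p):
    """Model Hamiltonian H = (p_x^2 + p_y^2)/2 - x^2 y^2 / 2."""
    x, y = q
    return 0.5 * float(p @ p) - 0.5 * x**2 * y**2

def force(q):
    """F = -dV/dq for the potential V(x, y) = -x^2 y^2 / 2."""
    x, y = q
    return np.array([x * y**2, x**2 * y])

def leapfrog(q, p, dt, steps):
    """Symplectic kick-drift-kick integrator; conserves H up to O(dt^2)."""
    for _ in range(steps):
        p = p + 0.5 * dt * force(q)
        q = q + dt * p
        p = p + 0.5 * dt * force(q)
    return q, p

# Initial data chosen on the H = 0 sector: |p|^2 = x^2 y^2.
q0, p0 = np.array([1.0, 1.0]), np.array([1.0, 0.0])
q1, p1 = leapfrog(q0, p0, dt=1e-3, steps=500)
```

Trajectories obtained this way can then be projected to the reduced 2d phase space discussed above.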
Moreover, the same Hamiltonian, apart from a sign change of the potential which is inessential for current purposes, was already studied in \cite{Savvidy:1982} in the context of Yang-Mills theories. \section{Perturbations of the double cone} \label{pert-section} In this section we compute the spectrum of zero modes around the double cone (preserving the isometries of the black-hole black-string system). \vspace{.5cm} \noindent {\bf Action}. Let us start by considering the following ansatz, which is the most general one given the $U(1)_t \times SO(D-2)_\Omega \times \mathbb{Z}_{2,T}$ isometries of the black-hole black-string system, where $\mathbb{Z}_{2,T}$ stands for time reflection $t \to -t$ \begin{equation} ds^{2}=e^{2B_{\rho }}d\rho ^{2}+e^{2B_{\chi }}(d\chi -fd\rho )^{2}+e^{2\Phi }dt^{2}+e^{2C}d\Omega _{D-3}^{2} ~. \label{def-fields} \end{equation}% All the fields are functions of $\rho$ and $\chi$ only. The ansatz is ``most general'' in the sense that all of the Einstein equations can be recovered by varying the gravitational action with respect to these fields. From now on, in order to shorten the notation, we denote the derivative with respect to $\chi$ by a dot, whereas the derivative with respect to $\rho$ is denoted by a prime. 
After a somewhat tedious computation one obtains the Lagrangian of the system
\begin{eqnarray} S &=& \int \exp \( \Psi + B_\rho + B_\chi \)\, d\rho\, d\chi ~ {\cal L} \nonumber \\ {\cal L} &=& K_1 + K_{2\rho} + K_{2\chi}- V ~,\end{eqnarray}
where
\begin{eqnarray} -{1 \over D-3}\, K_1 &:=& {\partial} C ~ {\partial}\( \Psi + \Phi - C\) \nonumber \\ &=& C' \, (\Psi'+\Phi'-C') e^{-2 B_\rho} \nonumber \\ & & + \dot{C}\, (\dot{\Psi}+\dot{\Phi}-\dot{C}) \, \( e^{-2 B_\chi} + f^2\, e^{-2 B_\rho}\) \nonumber \\ & & + \( \dot{C}\, (\Psi'+\Phi'-C') + C'\, (\dot{\Psi}+\dot{\Phi}-\dot{C}) \) f \, e^{-2 B_\rho} \nonumber \\ e^{2 B_\rho}\; K_{2\rho} &:=& -2\, \dot{\Psi}\, (f\, \dot{f} + f' + f^2\, \dot{B}_\chi ) \nonumber \\ & & -2\, \Psi'\, (B'_\chi+ 2\,f\, \dot{B}_\chi ) \nonumber \\ & & +2 f (\dot{B}_\chi\, B'_\rho - B'_\chi\, \dot{B}_\rho) \nonumber \\ & & + 2\, \dot{f}\, (2\, B'_\chi-B'_\rho) -2\, f'\, (2\, \dot{B}_\chi - \dot{B}_\rho)\, \nonumber \\ e^{2 B_\chi}\; K_{2\chi} &:= & -2\, \dot{\Psi}\, \dot{B}_\rho\, \nonumber \\ V &:=& (D-3)(D-4)\, \exp(-2\, C) ~, \end{eqnarray}
and we define
\begin{equation} \Psi := \Phi + (D-3)\, C ~. \label{psidef}\end{equation}
The Lagrangian was divided into several parts as follows: $V$ is the potential, namely the part without derivatives, and $K_1$ is a kinetic term which is covariant in the $(\rho,\chi)$ plane. The rest of the kinetic part was somewhat arbitrarily divided such that terms with an $e^{-2 B_\rho}$ factor were collected into $K_{2\rho}$ and the term with an $e^{-2 B_\chi}$ factor was denoted by $K_{2\chi}$.
\vspace{.5cm} \noindent {\bf Gauge fixing}. This action is invariant under reparameterizations of the $(\rho,\chi)$ plane (2 gauge functions). We do not pretend to know an optimal gauge, but we start by fixing
\begin{equation} f=0 \label{fix1} \end{equation}
as it seems to considerably simplify the equations.
The corresponding constraint comes from the equation of motion for $f\left( \rho ,\chi \right) $ and is given by \begin{eqnarray} 0=\frac{1}{2}\left. {\delta S \over \delta f} \right|_{f=0} &=& \bigl(-2\Psi^{\prime}\dot{B}_{\chi } + \dot{B}_{\chi }B_{\rho }^{\prime }-B_{\chi }^{\prime }\dot{B}_{\rho } \notag \\ &&-(D-3)\bigl(\Phi ^{\prime }\dot{C}+\dot{\Psi}C^{\prime }-\dot{C}% C^{\prime }\bigr)\bigr)e^{\Psi -B_{\rho }+B_{\chi }} \notag \\ &&-\partial _{\rho }\left( \bigl(-\dot{\Psi}+\dot{B}_{\rho }-2 \dot{B}_{\chi }\bigl)e^{\Psi -B_{\rho }+B_{\chi }}\right) \notag \\ &&-\partial _{\chi }\left( (2B_{\chi }^{\prime }-B_{\rho }^{\prime })e^{\Psi -B_{\rho }+B_{\chi }}\right) \notag \\ \label{constraint} \end{eqnarray}% Now substituting the gauge $f=0$ into the Lagrangian, we get% \begin{gather} L=\Bigl[-2 B_{\chi }^{\prime } \Psi ^{\prime } e^{-2B_{\rho }} -(D-3) C^{\prime} \bigl( \Phi^{\prime }-C^{\prime}+\Psi^{\prime } \bigr)e^{-2B_{\rho }} \notag \\ +(B_{\chi }\longleftrightarrow B_{\rho } ~,~``~^\prime~"\rightarrow ``~^{\cdot}~")-(D-3)(D-4)e^{-2C}\Bigr]e^{\Psi +B_{\rho }+B_{\chi }} \label{Lagrangian} \end{gather} We choose to fix the remaining gauge freedom (beyond $f=0$) such that the kinetic term with respect to $\chi$ (${\cal O}({\partial}_\chi^{~2})~$ terms) in (\ref{Lagrangian}) is canonical (or more precisely, field independent) \begin{equation} B_\chi = \Psi + B_\rho + h \label{fix2} ~.\end{equation} where $h=h(\rho,\chi)$ will be fixed later. The associated constraint is given by \begin{eqnarray} 0=\left. 
{\delta S \over \delta B_\chi} \right|_{{\rm fix}~B_\chi} &=&\Bigl[2\dot{B}_{\rho }\dot{\Psi} +(D-3)\, \dot{C}\, (\dot{\Psi}+\dot{\Phi}-\dot{C}) \Bigr]e^{ -h} \notag \\ &&-\Bigl[ 2 (\Psi^{\prime }+B_{\rho }^{\prime }+h^{\prime })\Psi^{\prime } + (D-3) C^{\prime } \( \Phi^{\prime }-C^{\prime }+\Psi^{\prime }\) \Bigr] e^{ 2\Psi + h} \notag \\ &&-(D-3)(D-4)e^{ 2( \Psi- C +B_{\rho }) + h } + 2\partial _{\rho }\left( \Psi^{\prime } e^{ 2\Psi + h }\right) \label{bchi-constr} \end{eqnarray}%
\vspace{.5cm} \noindent {\bf The background}. The double-cone is a Ricci-flat metric given by\footnote{See also (\ref{d-cone}).}
\begin{equation} ds^{2}=d\rho ^{2}+\frac{\rho^2}{D-2}\left( d\chi ^{2}+\cos ^{2}\chi\; dt^{2}+\left( D-4\right) d\Omega _{\mathbf{S}^{D-3}}^{2}\right) \label{eq_1} \end{equation}%
As mentioned in the introduction, our objective here is to compute the spectrum of perturbations around this metric, namely to solve the linearized Einstein equations around this background. The gauge choice (\ref{fix1},\ref{fix2}) requires a redefinition of the angle $\chi \rightarrow \widetilde{\chi }$. Choosing
\begin{equation} h(\rho)=-(D-3)\ln{\( \rho\, \sqrt { \frac{D-4}{D-2} } \)} \end{equation}
implies the following $\rho$-independent equation for $\widetilde{\chi }$
\begin{equation} \ln \frac{d\chi }{d\widetilde{\chi }}=\ln \cos \chi \end{equation}%
The solution of this differential equation is given by%
\begin{equation} \chi =\arctan (\sinh \widetilde{\chi }) \end{equation}%
where for simplicity we chose the constant of integration to be zero. Note that while $\chi$ ranges over $[-\pi/2,\pi/2]$, $\widetilde{\chi}$ ranges over $(-\infty,+\infty)$.
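The claim that $\chi =\arctan (\sinh \widetilde{\chi })$ (the Gudermannian function) indeed solves $d\chi /d\widetilde{\chi }=\cos \chi $ is easily verified; here is a short sympy check of ours:

```python
# A short check of ours that chi = arctan(sinh(chi_tilde)) solves
# d(chi)/d(chi_tilde) = cos(chi).
import sympy as sp

ct = sp.symbols('chi_tilde', real=True)
chi = sp.atan(sp.sinh(ct))
residual = sp.diff(chi, ct) - sp.cos(chi)   # should vanish identically
for v in (-2, sp.Rational(-1, 2), 0, sp.Rational(13, 10)):
    assert abs(float(residual.subs(ct, v))) < 1e-12
```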
With this redefinition at hand the solution (\ref{eq_1}) becomes
\begin{equation} ds^{2}=d\rho ^{2}+\frac{\rho ^{2}}{D-2}\left( \frac{d\chi ^{2}+dt^{2}}{\cosh ^{2}\chi }+\left( D-4\right) d\Omega _{\mathbf{S}^{D-3}}^{2}\right) ~, \end{equation}%
where for simplicity of notation we omit the tilde and denote $\widetilde{\chi}$ simply by $\chi$.
\vspace{.5cm} \noindent {\bf Linearized equations}. Now we slightly perturb the double-cone solution while keeping the gauge condition (\ref{fix2}) unchanged, that is, we set%
\begin{eqnarray} B_{\rho } &=&B_{\rho }^{\left( 0\right) }+b_{\rho }= 0 + b_{\rho } \nonumber \\ B_{\chi } &=&B_{\chi }^{\left( 0\right) }+b_{\chi }=\ln \left( \frac{\rho }{\cosh \chi \sqrt{D-2}}\right) +b_{\chi } \nonumber \\ \Phi &=&\Phi^{\left( 0\right) }+\phi =\ln \left( \frac{\rho }{\cosh \chi \sqrt{D-2}}\right) +\phi \nonumber \\ C &=&C^{\left( 0\right) }+c=\ln \left( \rho \sqrt{\frac{D-4}{D-2}}\right) +c \label{eq_5} \end{eqnarray}%
together with the gauge condition derived from (\ref{fix2})
\begin{equation} b_{\chi } =\psi +b_{\rho } \label{gauge_condition_2} ~,\end{equation}
where
\begin{equation} \psi:=(D-3)c+\phi \end{equation}
(see also (\ref{psidef})). Let us substitute equations (\ref{eq_5},\ref{gauge_condition_2}) into the $f$-constraint (\ref{constraint}) and the $b_\chi$-constraint (\ref{bchi-constr}).
The zeroth order in small perturbations vanishes in both cases, whereas the first-order terms satisfy the following equations%
\begin{eqnarray} && (D-2)\dot{b}_{\rho }-\rho\, \dot{\psi}^{\prime} -\rho \tanh \chi \( \psi^{\prime }-\phi^{\prime }+b_{\rho }^{\prime } \) =0 \label{constraint_f} \\ \nonumber \\ &&-2\rho ^{2} \psi^{\prime \prime } +2\rho \Bigl[\left( D-2\right) b_{\rho}^{\prime}-(D-1)\psi^{\prime} \Bigr] +\left( D-2\right) \Bigl[\dot{b}_{\rho }+\dot{\psi}-\phi\Bigr]\sinh \left(2\chi \right) \notag \\ && + 2\left( D-2\right) \left( D-3\right) \left( b_{\rho }-c\right) =0 \label{constraint b_chi} \end{eqnarray}
We are now in a position to derive the equations of motion for the remaining fields. As a first step we substitute (\ref{eq_5},\ref{gauge_condition_2}) into the Lagrangian (\ref{Lagrangian}), expand it and keep the quadratic part in the perturbations%
\begin{eqnarray} \frac{\cosh ^{2}\chi }{\rho ^{D-3}}L^{\left( 2\right) }&=& -\frac{\rho ^{2}}{\left( D-2\right) }\Bigl[\left( 3 D-10\right) (D-3)c'^{~2}+6\phi ^{\prime} \psi' -4\phi'^{~2}+2b_{\rho}\,'\, \psi'\Bigr] \notag \\ &&-\Bigl[(D-3) \dot{c}(\dot{\psi}-\dot{c}+\dot{\phi})+2\dot{b}_{\rho }\dot{\psi}\Bigr]\cosh ^{2}\chi -4\rho \(2\psi^{\prime}+b_{\rho}^{\prime}\) \psi \notag \\ &&-2(D-1)\psi ^{2} -2(D-3)\( \psi - c +b_{\rho }\) ^{2} \label{L_2} \end{eqnarray}
As a result, the equations of motion for the fields $b_{\rho }$, $c$ and $\phi$ are given respectively by
\begin{eqnarray} && 2\left[ (D-3)b_{\rho }-2\psi + \phi \right]=\frac{\left( 3D-5\right) }{ \left( D-2\right) }\rho\, \psi^{\prime} +\frac{\rho ^{2}}{\left( D-2\right) }\psi^{\prime \prime} +\cosh ^{2}\chi~ \ddot{\psi} \nonumber \\ \nonumber \\ &&\rho \left[ \left( D-1\right)\(3\psi^{\prime}-c^{\prime}\) -(D-3)b_{\rho }^{\prime}\right] +\rho ^{2}\( 3\psi^{\prime\prime}-c^{\prime\prime}+b_{\rho}^{\prime\prime} \) \notag \\ &&+\left( D-2\right) \cosh ^{2}\chi \( \ddot{\psi}- \ddot{c}+\ddot{b_{\rho}}\) \notag \\ &=&2\left(
D-2\right) \Bigl( c-2\psi+\phi+(D-4)b_{\rho} \Bigr) \nonumber \\ \nonumber \\ 0&=&\rho \left[ (D-1)(3\psi^{\prime}-\phi^{\prime})-(D-3) b_{\rho }^{\prime}\right] +\rho ^{2}\( 3\psi^{\prime\prime}-\phi^{\prime\prime}+b_{\rho}^{\prime\prime}\) \notag \\ &&+\left( D-2\right) \cosh ^{2}\chi \( \ddot{\psi}- \ddot{\phi}+\ddot{b_{\rho}} \) -2\left( D-2\right) (D-3)\( b_{\rho }-c\) \label{eom} \end{eqnarray}
Let us denote the constraint (\ref{constraint_f}) coming from the gauge fixing condition $f=0$ by $A$, and by $B$ the constraint (\ref{constraint b_chi}) coming from the gauge condition (\ref{gauge_condition_2}). One finds that the following relations hold on solutions of the equations of motion (\ref{eom})%
\begin{eqnarray} \partial _{\chi }A\left( \rho ,\chi \right) &=&-\frac{\rho \partial _{\rho }B\left( \rho ,\chi \right) }{2\left( D-2\right) \cosh ^{2}\chi } \notag \\ \partial _{\chi }B\left( \rho ,\chi \right) &=&2\left( D-2\right) A\left( \rho ,\chi \right) +2\rho \partial _{\rho }A\left( \rho ,\chi \right) +2\tanh \chi ~ B\left( \rho ,\chi \right) \end{eqnarray}%
This means that once the constraints are satisfied for some value of the coordinate $\chi$, they vanish identically for all $\chi$.
\subsection{Solving the equations}
We first solve the equations of motion (\ref{eom}), and later select those solutions which also satisfy the constraints (\ref{constraint_f},\ref{constraint b_chi}).
In order to solve the equations of motion (\ref{eom}) we attempt to separate the variables through the following general ansatz%
\begin{equation} \overrightarrow{X}\left( \rho ,\chi \right) =%
\begin{pmatrix} b_{\rho }^{\left( 1\right) } \\ \phi ^{\left( 1\right) } \\ c^{\left( 1\right) }%
\end{pmatrix}%
=\overrightarrow{X}^{l}\rho ^{s}P_{l}\left( \tanh \chi \right) \label{Xansatz} \end{equation}%
where $l$ is any non-negative integer, $\overrightarrow{X}^{l}$ are unknown constant vectors and $P_{l}\left( \tanh \chi \right)$ are the Legendre polynomials which can be shown to satisfy the following differential equation%
\begin{equation} \cosh ^{2}\chi\, \frac{d^{2} }{d\chi ^{2}} P_{l}\left( \tanh \chi \right)%
+l\left( l+1\right) P_{l}\left( \tanh \chi \right) =0 \label{Legendre2} \end{equation}%
The description ``general ansatz'' needs some explanation. Once we add a sum over $l$ to the r.h.s of (\ref{Xansatz}) this becomes the most general decomposition since $\rho^s$ and $P_l$ form a complete basis of functions. The question is whether the variables separate in (\ref{eom}), thus allowing us to omit the sum. {\it A priori} the $SO(3)$ isometry of the background tells us that the angular coordinates can be separated into spherical harmonic functions. In general such functions are labelled by $l,m$ but the $U(1)_t$ isometry implies that only $m=0$ terms contribute. If our fields included only scalars then the equations would be guaranteed to separate into the scalar spherical harmonics $Y_{l0}$, namely the Legendre polynomials. In our case the perturbation includes also tensor modes (with respect to the ${\bf S}^2$), but still direct inspection\footnote{$\chi$ appears in (\ref{eom}) only through the combination $\cosh^2{\chi}~{\partial}^2_\chi$ and after use of (\ref{Legendre2}) all $\chi$ dependence disappears.} confirms that equations (\ref{eom}) separate under the ansatz (\ref{Xansatz}).
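The differential equation (\ref{Legendre2}) is just the standard Legendre equation in the variable $t=\tanh\chi$, since $d/d\chi=(1-t^2)\,d/dt$. A quick sympy check of ours:

```python
# Check of ours: with t = tanh(chi) one has d/dchi = (1 - t^2) d/dt, so
# cosh^2(chi) d^2/dchi^2 P_l(tanh chi) = (1 - t^2) P_l'' - 2 t P_l',
# and the quoted identity is exactly the Legendre equation.
import sympy as sp

t = sp.symbols('t', real=True)
for l in range(8):
    P = sp.legendre(l, t)
    legendre_ode = ((1 - t**2) * sp.diff(P, t, 2)
                    - 2 * t * sp.diff(P, t) + l * (l + 1) * P)
    assert sp.expand(legendre_ode) == 0
```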
As a result we get the following set of algebraic equations%
\begin{equation} \left[ s\left( s-1\right) E_{2}+sE_{1}-l\left( l+1\right) K\right] \overrightarrow{X}=V\overrightarrow{X} \end{equation}%
where we have defined the following $3 \times 3$ constant matrices%
\begin{eqnarray} E_{1} &=&
\begin{bmatrix} -\left( D-3\right) & 2\left( D-1\right) & 3(D-3)\left( D-1\right) \\ 0 & \left( 3D-5\right) & \left( 3D-5\right) (D-3) \\ -(D-3) & 3\left( D-1\right) & \left( D-1\right) \left( 3D-10\right) \end{bmatrix}; \notag \\
E_{2} &=&
\begin{bmatrix} 1 & 2 & 3(D-3) \\ 0 & 1 & (D-3) \\ 1 & 3 & \left( 3D-10\right) \end{bmatrix}; \notag \\
K &=&\left( D-2\right)
\begin{bmatrix} 1 & 0 & (D-3) \\ 0 & 1 & (D-3) \\ 1 & 1 & \left( D-4\right) \end{bmatrix}; \notag \\
V &=&2\left( D-2\right)
\begin{bmatrix} (D-3) & 0 & -(D-3) \\ (D-3) & -1 & -2(D-3) \\ (D-4) & -1 & \left( 7-2D\right) \end{bmatrix}; \end{eqnarray}%
The spectrum of $s$ is determined from the following characteristic equation%
\begin{equation} Det\left[ s\left( s-1\right) E_{2}+sE_{1}-l\left( l+1\right) K-V\right] =0 \end{equation}%
After some tedious algebraic rearrangement, one can simplify the above equation and get%
\begin{eqnarray*} s^{2}+\left( D-2\right) s-\left( l+2\right) \left( l+1\right) \left( D-2\right) &=&0 \\ s^{2}+\left( D-2\right) s-\left( l+2\right) \left( l-1\right) \left( D-2\right) &=&0 \\ s^{2}+\left( D-2\right) s-l\left( l-1\right) \left( D-2\right) &=&0 \end{eqnarray*}%
The solutions are
given by\footnote{% Indices $1$,$3$ and $5$ correspond to upper sign, whereas indices $2$,$4$ and $6$ to lower one.}% \begin{eqnarray} s_{1,2} &=&\frac{1}{2}\left( 2-D\pm \sqrt{\left( D-2\right) \left( 4l^{2}+12l+D+6\right) }\right) \notag \\ s_{3,4} &=&\frac{1}{2}\left( 2-D\pm \sqrt{\left( D-2\right) \left( 4l^{2}+4l+D-10\right) }\right) \notag \\ s_{5,6} &=&\frac{1}{2}\left( 2-D\pm \sqrt{\left( D-2\right) \left( 4l^{2}-4l+D-2\right) }\right) \label{s} \end{eqnarray} whereas the corresponding eigenvectors are% \begin{eqnarray} \overrightarrow{X}_{1,2}^{l} &=&% \begin{bmatrix} ~\rule[-2mm]{0mm}{8mm}(l+2)\left[ 4-D\pm \sqrt{\left( D-2\right) \left( 4l^{2}+12l+D+6\right) }\right] \\ \room2l-D+6\mp \sqrt{\left( D-2\right) \left( 4l^{2}+12l+D+6\right) } \\ \room2(l+2)% \end{bmatrix} \notag \\ \notag \\ \notag \\ \overrightarrow{X}_{3,4}^{l} &=&% \begin{bmatrix} ~\rule[-2mm]{0mm}{8mm} 4(D-2)(D-3)(l-1)(l+2) \\ ~\rule[-2mm]{0mm}{8mm} \left( D-3\right) \left[ D-2\pm \sqrt{\left( D-2\right)\left( 4l^{2}+4l+D-10\right) }\right]^2 \\ ~\rule[-2mm]{0mm}{8mm} -4(D-2)(l-1)(l+2) \end{bmatrix} \notag \\ \notag \\ \notag \\ \overrightarrow{X}_{5,6}^{l} &=&% \begin{bmatrix} ~\rule[-2mm]{0mm}{8mm} (l-1)\left[ 4-D\pm \sqrt{\left( D-2\right) \left( 4l^{2}-4l+D-2\right) }\right] \\ ~\rule[-2mm]{0mm}{8mm} 2l+D-4\pm \sqrt{\left( D-2\right) \left( 4l^{2}-4l+D-2\right) } \\ ~\rule[-2mm]{0mm}{8mm} 2(l-1) \end{bmatrix} \end{eqnarray} It turns out that $\overrightarrow{X}_{3,4}$ satisfy the constraint equations (\ref{constraint b_chi},\ref{constraint_f}) as well, and thus they are part of the perturbation spectrum. On the other hand, $\overrightarrow{X}_{1,2}$ and $\overrightarrow{X}_{5,6}$ do not satisfy the constraints independently. 
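The quoted reduction of the characteristic equation can be verified directly from the matrices above; the following sympy sketch (ours, not from the paper) checks that the determinant equals $\det E_2=-(D-2)$ times the product of the three quadratics, for two sample dimensions:

```python
# Sketch of ours: verify that Det[s(s-1)E2 + s E1 - l(l+1)K - V] factorizes
# into the three quoted quadratics, up to the overall factor det(E2) = -(D-2).
import sympy as sp

s, l = sp.symbols('s l')

def matrices(D):
    E1 = sp.Matrix([[-(D - 3), 2 * (D - 1), 3 * (D - 3) * (D - 1)],
                    [0, 3 * D - 5, (3 * D - 5) * (D - 3)],
                    [-(D - 3), 3 * (D - 1), (D - 1) * (3 * D - 10)]])
    E2 = sp.Matrix([[1, 2, 3 * (D - 3)],
                    [0, 1, D - 3],
                    [1, 3, 3 * D - 10]])
    K = (D - 2) * sp.Matrix([[1, 0, D - 3],
                             [0, 1, D - 3],
                             [1, 1, D - 4]])
    V = 2 * (D - 2) * sp.Matrix([[D - 3, 0, -(D - 3)],
                                 [D - 3, -1, -2 * (D - 3)],
                                 [D - 4, -1, 7 - 2 * D]])
    return E1, E2, K, V

for D in (5, 6):
    E1, E2, K, V = matrices(D)
    det = (s * (s - 1) * E2 + s * E1 - l * (l + 1) * K - V).det()
    q1 = s**2 + (D - 2) * s - (l + 2) * (l + 1) * (D - 2)
    q2 = s**2 + (D - 2) * s - (l + 2) * (l - 1) * (D - 2)
    q3 = s**2 + (D - 2) * s - l * (l - 1) * (D - 2)
    assert E2.det() == -(D - 2)
    assert sp.expand(det - (-(D - 2)) * q1 * q2 * q3) == 0
```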
Since mixing of different modes is allowed for those values of $s$ which are degenerate, one concludes that in order to find other possible solutions of the perturbation spectrum we need to take superpositions of $\overrightarrow{X}_{1,2}$ and $\overrightarrow{X}_{5,6}$ corresponding to the same values of $s$ and then check whether such combinations can satisfy the constraints. According to (\ref{s}) one can see that, in order to make the powers of $\rho$ equal, we should consider the following superpositions%
\begin{eqnarray} \overrightarrow{X}\left( \rho ,\chi \right) &=&\rho ^{s_{1}}\left[ \overrightarrow{X}_{5}^{l+2}P_{l+2}\left( \tanh \chi \right) +F\overrightarrow{X}_{1}^{l}P_{l}\left( \tanh \chi \right) \right] \notag \\ \overrightarrow{X}\left( \rho ,\chi \right) &=&\rho ^{s_{2}}\left[ \overrightarrow{X}_{6}^{l+2}P_{l+2}\left( \tanh \chi \right) +G\overrightarrow{X}_{2}^{l}P_{l}\left( \tanh \chi \right) \right] \end{eqnarray}%
where $F$ and $G$ are constants to be determined. Substituting these linear superpositions into the constraint equations, we find that the constraints can be satisfied by taking $F=G=-1$, and we have another family of solutions. In summary, the full perturbation spectrum is given by%
\begin{equation} \fbox{$%
\begin{array}{c} s_{\pm}^t=\frac{1}{2}\left( 2-D\pm \sqrt{\left( D-2\right) \left( 4l^{2}+12l+D+6\right) }\right) \\ s_{\pm}^s=\frac{1}{2}\left( 2-D\pm \sqrt{\left( D-2\right) \left( 4l^{2}+4l+D-10\right) }\right) \end{array}%
$} \label{spectrum} \end{equation}
where\footnote{``t" stands for ``tensor" and ``s" for ``scalar".} $l\geq 0$.
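Some simple properties of (\ref{spectrum}) can be read off directly: $s^s_+(l=1)$ vanishes identically (since $4l^2+4l+D-10=D-2$ there), the tensor modes $s^t_+$ are always positive, $s^s_+$ is positive for $l\ge 2$, and the $l=0$ scalar mode has negative real part. A small numerical tabulation of ours confirms this:

```python
# Check of ours of the quoted spectrum s^t_+ and s^s_+ over a range of D and l.
import cmath

def s_t_plus(D, l):
    return (2 - D + cmath.sqrt((D - 2) * (4 * l * l + 12 * l + D + 6))) / 2

def s_s_plus(D, l):
    return (2 - D + cmath.sqrt((D - 2) * (4 * l * l + 4 * l + D - 10))) / 2

for D in range(5, 15):
    # the l=1 scalar mode is exactly zero (the residual gauge shift in chi)
    assert abs(s_s_plus(D, 1)) < 1e-12
    # tensor modes, and scalar modes with l >= 2, have positive growth rate
    for l in range(0, 6):
        assert s_t_plus(D, l).real > 0
    for l in range(2, 6):
        assert s_s_plus(D, l).real > 0
    # the only non-positive s_+ mode is the l=0 scalar (complex for D < 10)
    assert s_s_plus(D, 0).real < 0
```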
For $s_{\pm}^t$ the modes are given by
\begin{eqnarray} \overrightarrow{X}_{+}^t\left( \rho ,\chi \right) &=&\rho ^{s_{+}^t}\left[ \overrightarrow{X}_{5}^{l+2}P_{l+2}\left( \tanh \chi \right) -\overrightarrow{X}_{1}^{l}P_{l}\left( \tanh \chi \right) \right] \notag \\ \overrightarrow{X}_{-}^t\left( \rho ,\chi \right) &=&\rho ^{s_{-}^t}\left[ \overrightarrow{X}_{6}^{l+2}P_{l+2}\left( \tanh \chi \right) -\overrightarrow{X}_{2}^{l}P_{l}\left( \tanh \chi \right) \right] \end{eqnarray}
while for $s_{\pm}^s$ they are given by
\begin{equation} \overrightarrow{X}_{\pm}^s\left( \rho ,\chi \right) =\overrightarrow{X}_{3,4}^{l} ~\rho ^{s_{\pm}^s}P_{l}\left( \tanh \chi \right) \end{equation}
According to the ``$s_+$ prescription'' boundary condition (below (\ref{irrelevant})), we should eliminate all the $s_-$ modes. For $s^s(l=0)$ this prescription is ambiguous, but after studying the $l=0$ sector in detail in the next section we will conclude that the b.c. still reduce the dimension of the solution space from 2 to 1. In addition, another mode should be eliminated from the above perturbation spectrum, namely $\overrightarrow{X}_{+}^s\left( \rho ,\chi \right)$ for $s_{+}^s(l=1)=0$, since it corresponds to a residual gauge mode, an infinitesimal shift in the $\chi$ coordinate. Altogether it can be seen that except for $s^s_+(l=0)$ all other $s_+$ (physical) modes are positive, thus satisfying our stability criterion (\ref{stability}).
\section{Non-linear spherical perturbations}
\label{non-pert-section}
In this section we obtain the qualitative features of the dynamics of the full non-linear perturbations in the ``spherical'' sector ($l=0$, namely preserving all isometries).
\vspace{.5cm} \noindent {\bf Action}.
The most general $D$-dimensional metric with $SO(m+1) \times SO(n+1)$ isometry, $D=m+n+1$, is
\begin{equation} ds^2=e^{2\, B_\rho}\, d\rho^2 + e^{2\, A}\, d\Omega^2_m + e^{2\, C}\, d\Omega^2_n ~,\label{ansatz-brho} \end{equation}
where all three functions $B_\rho,\, A$ and $C$ depend on $\rho$ only, $d\Omega^2_m$ and $d\Omega^2_n$ are the standard metrics on the $m$ and $n$ spheres, and there is a reparameterization gauge freedom $\rho \to \rho'(\rho)$. For applications to the black-hole black-string system we need only the case $m=2$, which represents the $\chi,t$ 2-sphere while the $n$-sphere is the angular sphere. The action is
\begin{eqnarray} S &=& \int d\rho \, e^{m\,A + n \, C}\, e^{B_\rho} \[ e^{-2\, B_\rho}\, K- \widetilde{V} \] \nonumber \\ K &=& {m\, n \over (D-1)}\, (A'-C')^2- \(1-{1 \over D-1}\)\, (m\, A' + n \, C')^2 = \nonumber \\ &=& -m(m-1)\, A'^2 - n(n-1)\, C'^2 - 2mn\, A'\, C' \nonumber \\ \widetilde{V} &=& m\, (m-1)\, e^{-2\, A} +n\, (n-1)\, e^{-2\, C} \end{eqnarray}
where a prime denotes a derivative with respect to $\rho$ (the overall sign was chosen such that $S=-\int \sqrt{g}\, R$). The system enjoys a scaling symmetry \footnote{ The variation of the action due to an infinitesimal symmetry operation is $\delta_\alpha S= S$. Usually one considers symmetries which vary the action only by a boundary term, thus defining a conserved current, which is absent in this case. Still this variation is enough to guarantee that the equations of motion are satisfied.
Indeed, assuming we have a solution for the equations of motion $\delta S/\delta \phi = 0$ (where $\phi$ is a collective notation for the fields) then their variation vanishes as well $\delta_\alpha\, \delta S/\delta \phi = \delta ({\partial}_\alpha S)/\delta \phi = \delta S/\delta \phi = 0$.}
\begin{equation} ds^2 \to e^{2 \alpha} ds^2 \end{equation}
namely
\begin{eqnarray} B_\rho &\to& B_\rho + \alpha \nonumber \\ A &\to& A + \alpha \nonumber \\ C &\to& C + \alpha ~. \label{scaling}\end{eqnarray}
It is convenient to fix the gauge such that the kinetic term is canonical (more precisely, its prefactor is field independent), namely
\begin{equation} B_\rho= m\, A + n\, C \label{gauge-cond} ~.\end{equation}
The ansatz reads
\begin{equation} ds^2=e^{2m\, A + 2n\, C}\, d\rho^2 + e^{2\, A}\, d\Omega^2_m + e^{2\, C}\, d\Omega^2_n ~,\label{ac-ansatz} \end{equation}
while the action becomes
\begin{eqnarray} S &=& \int d\rho\, \[ K- V \] \\ V &:=& \exp(2m\, A+ 2n\, C)\, \widetilde{V} \equiv m\, (m-1)\, e^{2(m-1)\, A+2 n\, C} +n\, (n-1)\, e^{2m\, A+ 2(n-1)\,C}~, \nonumber \end{eqnarray}
and it is supplemented by the constraint
\begin{equation} 0=H:=K+V \label{Hconstraint} ~.\end{equation}
\vspace{.5cm} \noindent {\bf Change of variables}. It is convenient to make the following field re-definitions. First one makes a linear re-definition that simplifies the potential term
\begin{eqnarray} u &:=& 2(m-1)\, A + 2n\, C + \log\(m(m-1)\) \nonumber \\ v &:=& 2 m\, A + 2(n-1)\, C + \log\(n(n-1)\) ~.
\label{uv} \end{eqnarray} The potential and kinetic terms become \begin{eqnarray} V &=& e^u + e^v \nonumber \\ K &=& {1 \over 4(D-1)}\, \[m\, n \, (u'-v')^2- {1 \over (D-2)}\, (m\, u' + n \, v')^2 \]= \nonumber \\ &=& {1 \over 4(D-2)} \[ m(n-1)\, u'^2 + n(m-1)\, v'^2 - 2mn\, u'\, v' \] \label{K2} \end{eqnarray} This is a system with two degrees of freedom and a potential which is a sum of two exponentials. At first it appears to be similar to a Toda system, but actually it is probably not integrable for the following reason: the spring potential in a Toda system is of the form $e^x-x$ (where $x$ measures the deviation from equilibrium length), and while the linear term often cancels due to the mass being acted on by two springs, one from each side, here there are only two springs for two masses, and thus our system which lacks a linear term is not of a Toda form. Still it is convenient to make the following ``Toda inspired'' change of variables \begin{equation} \begin{array}{cc} X_1 := e^u & \hspace{1cm} P_1 := u' \\ X_2 := e^v & \hspace{1cm} P_2 := v' \label{XP} \end{array} ~, \end{equation} where the notation should not be mistaken to imply that the $X$'s and $P$'s are conjugate variables. 
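The equality of the two quoted forms of the kinetic term in (\ref{K2}), with $D=m+n+1$, can be confirmed symbolically; a short check of ours:

```python
# Check of ours of the two quoted forms of the kinetic term (K2), with D = m+n+1.
import sympy as sp

m, n, up, vp = sp.symbols('m n up vp')   # up, vp denote u', v'
D = m + n + 1
K_first = (m * n * (up - vp)**2 - (m * up + n * vp)**2 / (D - 2)) / (4 * (D - 1))
K_second = (m * (n - 1) * up**2 + n * (m - 1) * vp**2
            - 2 * m * n * up * vp) / (4 * (D - 2))
assert sp.simplify(K_first - K_second) == 0
```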
The equations of motion read
\begin{equation} \begin{array}{cc} X_1' = P_1\, X_1 & \hspace{1cm} P_1' = 2 \( (1-{1 \over m})\, X_1 + X_2 \) \\ X_2' = P_2\, X_2 & \hspace{1cm} P_2' = 2 \( X_1 + (1-{1 \over n})\, X_2 \) \label{EOM-XP} \end{array} ~, \end{equation}
and the constraint (\ref{Hconstraint}) becomes
\begin{equation} 0 = {1 \over 4(D-1)}\, \[m\, n \, (P_1-P_2)^2- {1 \over (D-2)}\, (m\, P_1 + n \, P_2)^2 \] +X_1+X_2 \label{constraint2} ~.\end{equation}
The $(X_i,P_i), ~i=1,2$ variables also provide a convenient realization of the scaling symmetry (\ref{scaling})
\begin{eqnarray} X_i &\to& e^{2\, \hat{\alpha}}\, X_i \nonumber \\ P_i &\to& e^{\hat{\alpha}}\, P_i \nonumber \\ \rho &\to& e^{-\hat{\alpha}}\, \rho \label{scaling-PX} \end{eqnarray}
where $\hat{\alpha}$ is related to $\alpha$ in (\ref{scaling}) through $\hat{\alpha} := (D-2)\, \alpha$. The double cone metric in the $X,P$ variables is found either by transforming (\ref{d-cone}) according to the changes of variables (\ref{uv},\ref{XP}) or by solving directly the equations of motion (\ref{EOM-XP}) subject to scaling (\ref{scaling-PX}) invariance. It is given by
\begin{eqnarray} P_1 &=& P_2 = -{2 \over \rho} \nonumber \\ X_1 &=& { m \over D-2}\, {1 \over \rho^2} \nonumber \\ X_2 &=& { n \over D-2}\, {1 \over \rho^2} \label{dcone-PX} \end{eqnarray}
Next we transform to
\begin{eqnarray} X^+ &:=& X_1 + X_2 \nonumber \\ X^- &:=& {1 \over m}\, X_1 - {1 \over n}\, X_2 \nonumber \\ P^+ &:=& m\, P_1 + n\, P_2 \nonumber \\ P^- &:=& P_1 - P_2 \label{XP+-} ~.\end{eqnarray}
To arrive at these definitions we first considered the 5d case $m=n=2$ where by symmetry it is useful to transform to $X_1 \pm X_2,\, P_1 \pm P_2$ and then we generalized to arbitrary $m,n$ paying attention to the form of the kinetic energy (\ref{K2}).
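One may verify directly that the double-cone data (\ref{dcone-PX}) solves both the equations of motion (\ref{EOM-XP}) and the constraint (\ref{constraint2}); the following sympy sketch (ours) does so for symbolic $m,n,\rho$:

```python
# Sketch of ours: the double-cone data solves the equations of motion (EOM-XP)
# and the Hamiltonian constraint (constraint2), for symbolic m, n, rho.
import sympy as sp

m, n, rho = sp.symbols('m n rho', positive=True)
D = m + n + 1
P1 = P2 = -2 / rho
X1 = m / ((D - 2) * rho**2)
X2 = n / ((D - 2) * rho**2)

# X_i' = P_i X_i  and  P_i' = 2((1 - 1/m) X_1 + X_2)  etc.
assert sp.simplify(sp.diff(X1, rho) - P1 * X1) == 0
assert sp.simplify(sp.diff(X2, rho) - P2 * X2) == 0
assert sp.simplify(sp.diff(P1, rho) - 2 * ((1 - 1/m) * X1 + X2)) == 0
assert sp.simplify(sp.diff(P2, rho) - 2 * (X1 + (1 - 1/n) * X2)) == 0

# Hamiltonian constraint
H = (m * n * (P1 - P2)**2
     - (m * P1 + n * P2)**2 / (D - 2)) / (4 * (D - 1)) + X1 + X2
assert sp.simplify(H) == 0
```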
The equations of motion read
\begin{eqnarray} X^+\,' &=& {1 \over m+n} \( X^+\, P^+ + mn\, X^-\, P^- \) \nonumber \\ X^-\,' &=& {1 \over m+n} \( X^+\, P^- + X^-\, P^+ + (n-m)\, X^-\, P^- \) \nonumber \\ P^+\,' &=& 2 (D-2)\, X^+ \nonumber \\ P^-\,' &=& -2\, X^- \end{eqnarray}
\vspace{.5cm} \noindent {\bf Fixing the scaling symmetry}. Now comes the crucial step in the analysis, which will allow the qualitative solution of the dynamical system. The phase space consists of 4 variables $X_i,\,P_i$ constrained by (\ref{constraint2}) and hence it is 3d. However, dynamical systems in 3d can be quite involved, and we would not know how to analyze this system. Fortunately the scaling symmetry (\ref{scaling},\ref{scaling-PX}) can be used to reduce the problem to a 2d phase space, where the number of qualitative possibilities is quite limited and a full qualitative analysis is possible. The idea is to fix the symmetry by choosing a 2d cross section of the phase space which is transverse to the symmetry orbits. Then we supplement the infinitesimal $\rho$-evolution by an infinitesimal symmetry operation such that we always remain on the 2d cross-section, thereby reducing the problem to a 2d phase space. In practice we fix the symmetry as follows. Being transverse to a scaling symmetry means introducing an arbitrary scale. We choose the following condition
\begin{equation} X^+ =1 \label{fix} ~.\end{equation}
In order to define the reduced ``scaling compensated'' evolution we introduce a condensed notation for the phase space variables
\begin{equation} Y^i=(X_1,X_2,P_1,P_2) ~.\end{equation}
The scaling transformation is given by $Y^i \to e^{q^i {\hat \alpha}}\, Y^i$ where
\begin{equation} q^i=(2,2,1,1) ~.\end{equation}
We denote the functions on the r.h.s of (\ref{EOM-XP}) by $f^i(Y^j)$ such that
\begin{equation} Y^i\,'=f^i(Y^j) ~. \label{EOM-Y} \end{equation}
Note that since $\rho$ carries scaling charge $q_\rho=-1$, the $f^i$ carry charge $q^i+1$.
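This charge assignment is easy to confirm: a short symbolic check of ours that each $f^i$ of (\ref{EOM-XP}) is homogeneous of degree $q^i+1$ under the scaling (\ref{scaling-PX}):

```python
# Check of ours: the right-hand sides f^i of (EOM-XP) are homogeneous with
# scaling charge q^i + 1 under X_i -> e^{2a} X_i, P_i -> e^{a} P_i.
import sympy as sp

m, n, a = sp.symbols('m n a')
X1, X2, P1, P2 = sp.symbols('X1 X2 P1 P2')

f = [P1 * X1,                          # X_1'   (charge 3)
     P2 * X2,                          # X_2'   (charge 3)
     2 * ((1 - 1/m) * X1 + X2),        # P_1'   (charge 2)
     2 * (X1 + (1 - 1/n) * X2)]        # P_2'   (charge 2)
q = [2, 2, 1, 1]
scale = {X1: sp.exp(2 * a) * X1, X2: sp.exp(2 * a) * X2,
         P1: sp.exp(a) * P1, P2: sp.exp(a) * P2}
for fi, qi in zip(f, q):
    scaled = fi.subs(scale, simultaneous=True)
    assert sp.simplify(scaled - sp.exp((qi + 1) * a) * fi) == 0
```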
We need to distinguish the reduced evolution parameter $\rho_R$ from $\rho$ since scaling acts on $\rho$ as well (\ref{scaling-PX}). Now we can define the reduced phase space trajectory $Y^i_R=Y^i_R(\rho_R)$ by
\begin{equation} \cdr Y^i_R = f^i - q^i\, {f^+ \over 2\, X^+}\, Y^i \label{EOM-YR} \end{equation}
where $f^+/(2\, X^+)$ is the compensating infinitesimal scaling parameter and is defined such that $\cdr X^+_R=0$. For clarity, we write down the definition of $f^+$ explicitly
\begin{equation} f^+(Y^i_R):={1 \over m+n}\, (P^+_R + m n\, X^-_R\, P^-_R)~. \end{equation}
We parameterize the reduced 2d phase space by $(X,\theta)$ where
\begin{equation} X \equiv X^-_R ~,\end{equation}
and $\theta$ is a hyperbolic angle which parameterizes the hyperbola in $P$ space given by the constraint (\ref{constraint2}) together with the condition (\ref{fix}), namely
\begin{eqnarray} \cosh{\theta} &=& -{1 \over 2\sqrt{(D-2)\,(D-1)}} ~ P^+_R \nonumber \\ \sinh{\theta} &=& -{1 \over 2}\, \sqrt{{m n \over D-1}} ~ P^-_R \label{theta} ~.\end{eqnarray}
The signs above depend on conventions: the sign in the definition of $\cosh(\theta)$ is related to setting the direction of the flow towards the tip, and the sign in the definition of $\sinh(\theta)$ sets the sign of $\theta$. \emph{Our (final) expression for the reduced equations is}
\begin{eqnarray} \cdr X &=& -{2 \over \sqrt{m n}}\, \sinh{\theta}\, (1+ n\, X )\, (1-m\, X) \nonumber \\ \cdr \, \theta &=& \sqrt{D-2}\, \sinh{\theta} + \sqrt{m n} ~ \cosh{\theta}~ X ~, \label{2deq} \end{eqnarray}
where we chose to rescale $\rho_R \to \rho_R/\sqrt{m+n}$. Given a solution of (\ref{2deq}), or equivalently functions $Y^i_R(\rho_R)$ satisfying (\ref{EOM-YR}), we still need to \emph{uplift} it to a solution of (\ref{EOM-Y}).
First we integrate for the scale factor evolution ${\hat \alpha}(\rho_R)$
\begin{equation} \cdr {\hat \alpha} = {f^+(Y^i_R(\rho_R)) \over 2 X^+_R} = {1 \over 2}\, f^+(Y^i_R(\rho_R)) \label{hal-lift} ~.\end{equation}
Then we define the uplifted trajectory $Y^i(\rho)$ by
\begin{eqnarray} Y^i = e^{q^i {\hat \alpha}}\, Y^i_R \nonumber \\ d\rho=e^{-{\hat \alpha}}\, d\rho_R ~. \label{uplift} \end{eqnarray}
A direct computation confirms that with this definition (\ref{EOM-Y}) is satisfied
$${d \over d\rho}\, Y^i= e^{{\hat \alpha}}\, \cdr \( e^{q^i {\hat \alpha}}\, Y^i_R \) =e^{(q^i+1) {\hat \alpha}}\, \( \cdr Y^i_R + q^i\, \cdr {\hat \alpha}\; Y^i_R \) = e^{(q^i+1) {\hat \alpha}}\, f^i(Y^j_R)= f^i(Y^j)~,$$
where in the first equality we used (\ref{uplift}), in the second to last we used (\ref{EOM-YR},\ref{hal-lift}) and finally in the last equality we used the fact that $f^i$ has charge $q^i+1$ with respect to scaling.
\subsection{Analysis of the reduced 2d phase space}
The analysis of the qualitative form of the reduced 2d phase space (\ref{2deq}) proceeds by determining the domain, equilibrium points, their nature (at linear order) and a separate analysis of the behavior at infinity.
\vspace{.5cm} \noindent {\bf Domain}. From the definition of $X_1,\, X_2$ (\ref{XP}) it is evident that they are both positive. Given the gauge fixing condition (\ref{fix}) and the change of variables (\ref{XP+-}) we find that the domain is restricted to the strip
\begin{equation} -{1 \over n} \le X \le {1 \over m} \label{domain} ~.\end{equation}
The boundary is strictly speaking not part of the domain, but since the equations (\ref{2deq}) continue smoothly to the boundary while $\cdr X$ vanishes there, we can join it to the domain and allow for equalities in (\ref{domain}).
\vspace{.5cm} \noindent {\bf Equilibrium points}. There are 3 finite equilibrium points
\begin{equation} \begin{array}{cc} X_0=0 & \theta_0=0 \\ X_1={1 \over m} ~~& ~~\tanh(\theta_1) = -\sqrt{{m n \over D-2}}\, {1 \over m} \\ X_2=-{1 \over n} ~~& ~~\tanh(\theta_2) = \sqrt{{m n \over D-2}}\, {1 \over n} \end{array} ~, \label{equilib} \end{equation}
two of which are on the boundary. The point $(0,0)$ corresponds to the double cone (\ref{dcone-PX}), which is indeed a fixed point of the scaling symmetry. The role of the other points will become clear below.
\begin{figure}[t!] \centering \noindent \includegraphics[width=7cm]{phase-space5d.eps} \caption[]{The phase space (for $m=n=2$). There is one interior equilibrium point at $(0,0)$ which represents the double cone. For $D<10$ it is focal repulsive but the spirals cannot be seen in the figure since their log-period is too large, while for $D\ge 10$ it is nodal repulsive, and hence the figure represents the whole range of dimensions. There are two finite equilibria on the boundary at $(1/m,\theta_1)$ and $(-1/n,\theta_2)$ which are saddle points. Each one has a critical curve which approaches it (heavy line). These two special trajectories denote the smoothed cones, where either ${\bf S}^n$ or ${\bf S}^m$ shrinks smoothly while the other one stays finite. Finally there are two attractive equilibria at infinity at $(1/m,-\infty)$ and $(-1/n,+\infty)$; the thin lines represent generic trajectories, which end at these equilibrium points at infinity.} \label{phase-space5d} \end{figure}
\vspace{.5cm} \noindent {\bf Linearized analysis of equilibrium points}.
The linearized dynamics around an equilibrium point $(X_q,\theta_q)$ is given by the $2 \times 2$ matrix $L$ \begin{equation} \cdr \[ \begin{array}{c} \delta X \\ \delta \theta \end{array} \] = L \[ \begin{array}{c} \delta X \\ \delta \theta \end{array} \] ~,\end{equation} where $\delta X:=X-X_q,~ \delta \theta:=\theta-\theta_q$. We remind the reader of the classification of equilibrium points \begin{itemize} \item If the eigenvalues of $L$ are complex (and necessarily self-conjugate since $L$ is real) then trajectories circle $(X_q,\theta_q)$ or spiral around it, and the point is called a \emph{focal point}. \item If the eigenvalues of $L$ are real and of opposite sign then $(X_q,\theta_q)$ is called a \emph{saddle point} and it is unstable in the sense that only if the initial conditions are fine-tuned to lie on a specific trajectory will the evolution flow into this point. \item If the eigenvalues of $L$ are real and of the same sign then $(X_q,\theta_q)$ is called a \emph{nodal point}. If they are both positive then the point is called \emph{repulsive}, while for negative it is called \emph{attractive}. Naturally, the adjectives repulsive and attractive interchange under inversion of the flow (time reversal). A focal point may be called repulsive (or attractive) as well if the real parts of $L$'s eigenvalues are all positive (or negative). \end{itemize} For the double-cone $(X,\theta)=(0,0)$ we find \begin{equation} L_0=\[ \begin{array}{cc} 0 & ~~~~-2/ \sqrt{m n} \\ \sqrt{m n} & ~~~~ \sqrt{D-2} \end{array} \] ~.\label{Ldcone} \end{equation} The eigenvalues of this matrix are \begin{equation} \lambda_\pm = {1 \over 2}\, \( \sqrt{D-2} \pm \sqrt{D-10} \) ~.\end{equation} Let us make several observations: \begin{itemize} \item The eigenvalues depend only on $D$ and not on $m,n$ separately. \item $D=10$ is a critical dimension, as found in \cite{TopChange}.
\begin{itemize} \item \emph{For $D<10$} the eigenvalues are complex and we have a \emph{repulsive focal point}. \item \emph{For $D \ge 10$} the eigenvalues are real and we have a \emph{repulsive nodal point}. \item For the critical value $D=10$ the eigenvalues degenerate; $L$ is not proportional to the identity matrix, but can be brought by a similarity transformation to the form \begin{equation} \[ \begin{array}{cc} \sqrt{2} & ~~1 \\ 0 & ~~ \sqrt{2} \end{array} \] ~.\label{Ldcone10D} \end{equation} \end{itemize} \item In the $D \to \infty$ limit we have $\lambda_+ \simeq \sqrt{D},~ \lambda_- \simeq 2/\sqrt{D}$. \item The eigenvectors which correspond to the eigenvalues $\lambda_\pm$ are $[2, ~-\lambda_\pm\, \sqrt{m n} ]$. \end{itemize} At the equilibrium point $(X_1,\theta_1)$ we find \begin{equation} L_1= \cosh{\theta_1}\, \[ \begin{array}{cc} {-2(D-1) \over m \sqrt{D-2}} & ~~~~0 \\ \sqrt{m n} & ~~~~\sqrt{D-2} - {n \over m\sqrt{D-2}} \end{array} \] ~.\label{Lsmooth-cone} \end{equation} Similarly $L_2$ is obtained by substituting $\theta_1 \leftrightarrow \theta_2,~ m \leftrightarrow n$. Since $\det{L}<0$ we see that \emph{at both $(1/m,\theta_1)$ and $(-1/n,\theta_2)$ we find a saddle point}. The positive direction (eigenvector of the positive eigenvalue, defining the repulsive direction) is $[0, ~1 ]$ in both cases, namely along the boundary. \vspace{.5cm} \noindent {\bf Behavior at infinity}. There are two attractive equilibrium points at infinity \begin{equation} \begin{array}{cc} X_3={1 \over m} ~~ & ~~\theta_3 = -\infty \\ X_4=-{1 \over n} ~~& ~~\theta_4 = +\infty \end{array} ~. \label{equilib-inf} \end{equation} $(X_3,\theta_3)$ is attractive since for $\theta<\theta_1$ we have $dX/d\rho_R>0, ~d\theta/d\rho_R<0$. There are no other fixed points at $\theta=-\infty$ with $-1/n < X < 1/m$ since $\lim_{\theta \to -\infty} dX/d\theta < 0$.
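As a numerical aside on the linearized analysis above, one can diagonalize $L_0$ of (\ref{Ldcone}) directly; a short numpy sketch, where the sample values of $D$, $m$, $n$ are arbitrary test choices, not tied to a specific geometry:

```python
import numpy as np

def L0(D, m, n):
    # Linearization matrix at the double cone (X, theta) = (0, 0), eq. (Ldcone).
    s = np.sqrt(m * n)
    return np.array([[0.0, -2.0 / s],
                     [s, np.sqrt(D - 2.0)]])

def lambda_pm(D):
    # Closed-form eigenvalues ( sqrt(D-2) +/- sqrt(D-10) ) / 2, complex for D < 10.
    a, b = np.sqrt(D - 2.0), np.sqrt(complex(D - 10.0))
    return (a + b) / 2.0, (a - b) / 2.0

for D, m, n in [(5, 2, 2), (11, 2, 2), (11, 3, 7)]:
    eigs = sorted(np.linalg.eigvals(L0(D, m, n)).astype(complex),
                  key=lambda z: (z.real, z.imag))
    exact = sorted(lambda_pm(D), key=lambda z: (z.real, z.imag))
    assert np.allclose(eigs, exact)
    # trace = sqrt(D-2) and det = 2: the spectrum depends on D only, not on m, n.
    assert np.isclose(np.trace(L0(D, m, n)), np.sqrt(D - 2.0))
    assert np.isclose(np.linalg.det(L0(D, m, n)), 2.0)
```

For $D<10$ the pair is complex with positive real part (repulsive focal point); for $D\ge 10$ both eigenvalues are real and positive (repulsive nodal point).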
Moreover, close to $(X_3,\theta_3)$ the trajectories are found to asymptote exponentially fast to $X_3$, namely $|\delta X| \simeq \exp(-{\rm const}\, |\theta|)$. Analogous properties hold for the attractive equilibrium point $(X_4,\theta_4)$. \vspace{.5cm} \noindent {\bf The whole picture}. Having found all the ingredients above we may assemble them into a complete phase diagram, figure \ref{phase-space5d}, which is discussed in its caption. The trajectories which end at $(X_1,\theta_1)$ and $(X_2,\theta_2)$ represent the smoothed cones. This is seen by transforming the definition of the smoothed cone through the changes of variables, as we proceed to explain. Before gauge fixing, one of the smoothed cones can be defined to behave in the limit $\rho \to 0$ as $B_\rho=0, ~A \sim \log(\rho),~ C \sim C_0$ (while for the other we interchange $A \leftrightarrow C$). After changing gauge and transforming to the $(X,P)$ variables this becomes $X_1 = m/(m-1)\, \rho^{-2}, ~X_2 = const\, \rho^{-2m/(m-1)}, ~P_1=-2/\rho, ~P_2= -2m/(m-1)\, \rho^{-2}$ which in the $(X,\theta)$ variables tends to $(X_1,\theta_1)$ as defined in (\ref{equilib}). In summary, we have confirmed non-perturbatively the existence of the smoothed cones, going beyond the perturbative analysis of \cite{TopChange} near the smoothed tip. Altogether there is a one-parameter family of solutions which in the linear regime (far away from the tip) corresponds to the various linear combinations of the two linearized modes. Two of these solutions correspond to the smoothed cones, while all the rest end at infinity in phase space and yield a singular geometry. Even though strictly speaking our boundary condition, the ``$s_+$ prescription'', is undefined for $l=0$ since $\Re(s_1)=\Re(s_2)$, we define it to mean retaining only the smoothed-cone solutions, so that the double-cone is an attractor at co-dimension 1, as desired, rather than at co-dimension 2.
Equivalently, these two special directions are defined up to multiplication by $\mathbb{R}_+$, and hence we loosely refer to them as a single mode which is normally defined up to multiplication by $\mathbb{R}$. \vspace{.5cm} \noindent {\bf Acknowledgements} We would like to thank Ofer Aharony for a discussion. BK appreciates the hospitality of Amihay Hanany in MIT and KITP Santa Barbara where parts of this work were performed. This research is supported in part by The Israel Science Foundation grant no 607/05 and by The Binational Science Foundation BSF-2004117.
\section{Introduction} Isotropic categories are local versions of motivic categories, obtained by, roughly speaking, killing all anisotropic varieties. Although they often have a handier structure than their global versions, they exhibit some key characteristics of both motivic and classical topological phenomena. In \cite{V}, Vishik introduced the isotropic triangulated category of motives and computed the isotropic motivic cohomology of the point, which is strongly related to the Milnor subalgebra. By following this lead, we studied in \cite{T} the isotropic stable motivic homotopy category. In particular, we identified the isotropic motivic homotopy groups of the sphere spectrum with the cohomology of the topological Steenrod algebra, i.e. the $E_2$-page of the classical Adams spectral sequence. These results are quite surprising since they show that topological objects naturally arise from isotropic environments, which could lead to a fruitful exchange between topology and isotropic motivic theory. Motivic categories, constructed by Morel and Voevodsky (see \cite{MV} and \cite{V3}) in order to study algebraic varieties by topological means, are extremely rich categories. Even over an algebraically closed field, they are more complex than the respective topological counterparts. For example, while every object in the classical stable homotopy category is cellular, i.e. built up by attaching spheres, not every motivic spectrum is cellular, since many algebro-geometric phenomena come into the picture. In spite of this, it is still interesting to understand the structure of the category of cellular objects in motivic stable homotopy theory. This project was initiated by Dugger and Isaksen in \cite{DI2} and much attention has been dedicated to it since then. 
Our work, in particular, is concerned with understanding the structure of the subcategories of cellular objects in isotropic categories, which we believe could shed light on the deep interconnection with topology. We have already highlighted that motivic categories are particularly challenging to study. For example, one of the difficulties that one does not encounter in classical topology is the presence of an object $\tau$ that appears in various incarnations throughout motivic homotopy theory, sometimes as an element of the motivic cohomology of the ground field and sometimes as a map in the $2$-complete motivic stable homotopy groups of spheres. Hence, the principal task is first to find some substitutes for the original motivic categories and tools which could help in the process of analysing them. In the case of algebraically closed fields, for example, topological realisation is a very helpful tool, since it allows one to study the initial motivic category by looking at its deformation $\tau=1$, which happens to be just classical stable homotopy theory (see \cite{DI}). However, in this process part of the information is lost and one could try to recover it by studying other deformations, for example $\tau=0$. This was done by Isaksen in \cite{I}, Gheorghe in \cite{G} and Gheorghe-Wang-Xu in \cite{GWX}. More precisely, in \cite{I} the stable motivic homotopy groups of $C{\tau}$, i.e. the cofiber of $\tau$, are identified with the $E_2$-page of the classical Adams-Novikov spectral sequence, while in \cite{G} the motivic spectrum $C{\tau}$ is provided with an $E_{\infty}$-ring structure inducing an isomorphism of rings with higher products between $\pi_{**}(C\tau)$ and the classical Adams-Novikov $E_2$-page.
A parallel result for isotropic categories was obtained in \cite{T}, where the isotropic sphere spectrum $\X$ has been equipped with an $E_{\infty}$-ring structure inducing an isomorphism of rings with higher products between $\pi_{**}(\X)$ and the classical Adams $E_2$-page. Moreover, in \cite{GWX} the category of $C{\tau}$-cellular spectra is described, which is proved to be equivalent as a stable $\infty$-category equipped with a $t$-structure (see \cite{Lu}) to the derived category of left $\mathrm{BP}_*\mathrm{BP}$-comodules concentrated in even degrees, where $\mathrm{BP}$ is the Brown-Peterson spectrum and $\mathrm{BP}_*\mathrm{BP}$ its $\mathrm{BP}$-homology. In this work, we intend to follow a similar path for isotropic categories. Recall that a field $k$ is called flexible if it is a purely transcendental extension of countably infinite degree over some other field. In our situation it is really essential to work over flexible fields since, as highlighted in \cite{V}, these are the ground fields over which the isotropic categories behave particularly well. For example, over algebraically closed fields, due to the lack of anisotropic varieties, the isotropic category would be just the same as the original motivic category, so in this case the isotropic localization produces nothing new. We are encouraged by the evident parallel between the computations of $\pi_{**}(C\tau)$ over complex numbers (see \cite{I} and \cite{G}), on the one hand, and of $\pi_{**}(\X)$ over flexible fields (see \cite{T}), on the other. More precisely, we have been guided by the idea that studying the isotropic stable motivic homotopy category over a flexible field is similar in some sense to studying the stable $\infty$-category of $C{\tau}$-cellular spectra in the motivic stable homotopy category over complex numbers. Indeed, they obviously share some common features, as highlighted by the following theorem, which is the main result of this paper.
\begin{thm} Let $k$ be a flexible field of characteristic different from $2$. Then, there exists a $t$-exact equivalence of stable $\infty$-categories $$\D^b(\A_{*}-\com_{*}) \xrightarrow{\cong} \X-\mo^b_{cell,\hz}$$ where $\A_{*}$ is the classical dual Steenrod algebra and $\X-\mo^b_{cell,\hz}$ is the stable $\infty$-category of $\hz$-complete $\X$-cellular modules having $\mbp$-homology non-trivial in only finitely many Chow-Novikov degrees (the superscript ``b'' stands for ``bounded'', see Definition \ref{dc}). \end{thm} As a consequence, we obtain that the category of isotropic cellular spectra is completely algebraic, which makes it easier to study. Moreover, it is deeply related to classical topology, as foreseeable from results in \cite{V} and \cite{T}. In order to achieve our main results, we need several tools. In particular, it is necessary to develop and study isotropic versions of both the Adams spectral sequence and the Adams-Novikov spectral sequence. This requires a focus on the motivic Brown-Peterson spectrum $\mbp$ (see \cite{Ve}) from an isotropic point of view. In particular, we note that the isotropic Brown-Peterson spectrum is an $E_{\infty}$-ring spectrum, in contrast to the topological picture, where $\mathrm{BP}$ has been shown by Lawson not to admit an $E_{\infty}$-ring structure in \cite{La}. Then, we use techniques developed by Gheorghe-Wang-Xu in \cite{GWX}, based on Lurie's results (see \cite{Lu}), to, first, describe in algebraic terms the category of isotropic $\mbp$-cellular modules and, then, the category of all isotropic cellular spectra. At the end, we are also able to provide some results about the cellular subcategory of the isotropic triangulated category of motives, i.e. the category of isotropic Tate motives.\\ \textbf{Outline.} We now briefly present the contents of each section of this paper. In Section 2, we provide the main notations that are followed throughout this work.
Then, we move on to Section 3 by recalling isotropic categories and their main properties, mostly referring to results in \cite{V} and \cite{T}. Since we are mainly interested in cellular objects, we recall in Section 4 definitions and some of the main results from \cite{DI2}, which are useful in the rest of the paper. Section 5 is devoted to a deep analysis of the isotropic motivic Adams spectral sequence, which was already initiated in \cite{T}. These results are used in Section 6 to study the motivic Brown-Peterson spectrum from an isotropic perspective. In particular, we compute its isotropic stable homotopy groups. Sections 7 and 8 are modeled on Sections 3, 4 and 5 of \cite{GWX}. More precisely, in Section 7 we endow the isotropic motivic Brown-Peterson spectrum with an $E_{\infty}$-ring structure and, then, identify the category of isotropic $\mbp$-cellular spectra, as a triangulated category, with the category of bigraded $\F$-vector spaces. In Section 8, after developing an isotropic Adams-Novikov spectral sequence, we describe the category of isotropic cellular spectra in algebraic terms as the derived category of comodules over the dual of the Steenrod algebra equipped with a $t$-structure. Finally, in Section 9, we provide an algebraic description of the hom-sets in the category of isotropic motives between motives of isotropic cellular spectra, which is a step forward in the understanding of the category of isotropic Tate motives.\\ \textbf{Acknowledgements.} I would like to thank Alexander Vishik for very helpful comments and Dan Isaksen for having pointed out to me the work by Gheorghe-Wang-Xu on which this paper is modeled. I am also extremely grateful to Tom Bachmann for very useful remarks. I also wish to thank the referees for very useful comments which helped to improve the exposition and to simplify Section 7.
\section{Notation} Let us start by fixing some notations we use throughout this paper.\\ \begin{tabular}{c|c} $k$ & flexible field with $char(k) \neq 2$\\ $\SH(k)$ & stable motivic homotopy category over $k$\\ $\SH(k/k)$ & isotropic stable motivic homotopy category over $k$\\ $\DM(k)$ & triangulated category of motives with $\Z/2$-coefficients over $k$\\ $\DM(k/k)$ & isotropic triangulated category of motives with $\Z/2$-coefficients over $k$\\ $\pi_{**}(-)$ & stable motivic homotopy groups\\ $\pi^{iso}_{**}(-)$ & isotropic stable motivic homotopy groups\\ $H_{**}(-),H^{**}(-)$ & motivic homology and cohomology with $\Z/2$-coefficients\\ $H^{iso}_{**}(-),H_{iso}^{**}(-)$ & isotropic motivic homology and cohomology with $\Z/2$-coefficients\\ $H_{**}(k),H^{**}(k)$ & motivic homology and cohomology with $\Z/2$-coefficients of $Spec(k)$\\ $H^{iso}_{**}(k/k),H_{iso}^{**}(k/k)$ & isotropic motivic homology and cohomology with $\Z/2$-coefficients of $Spec(k)$\\ $\A^{**}(k),\A_{**}(k)$ & mod 2 motivic Steenrod algebra and its dual\\ $\A^{**}(k/k),\A_{**}(k/k)$ & mod 2 isotropic motivic Steenrod algebra and its dual\\ $\A^{*},\A_{*}$ & mod 2 topological Steenrod algebra and its dual\\ $\G^{**},\G_{**}$ & bigraded mod 2 topological Steenrod algebra and its dual\\ & i.e. $\G^{2q,q}=\A^q$, $\G^{p,q}=0$ for $p \neq 2q$ and similarly for the dual\\ $\M^{**}$ & Milnor subalgebra $\Lambda_{\F}(Q_i)_{i \geq 0}$ of $\A^{**}(k/k)$\\ & where $Q_i$ are the Milnor operations in bidegrees $(2^i-1)[2^{i+1}-1]$\\ $\s$ & motivic sphere spectrum\\ $\hz$ & motivic Eilenberg-MacLane spectrum with $\Z/2$-coefficients\\ $\mgl$ & motivic algebraic cobordism spectrum\\ $\mbp$ & motivic Brown-Peterson spectrum at the prime $2$\\ $\X$ & isotropic sphere spectrum \end{tabular}\\ We denote hom-sets in $\SH(k)$ by $[-,-]$ and the suspension $\s^{p,q} \wedge X$ of a motivic spectrum $X$ by $\Sigma^{p,q}X$. 
Moreover, if $E$ is a motivic $E_{\infty}$-ring spectrum, the stable $\infty$-category of $E$-modules (see \cite{Lu}) is denoted by $E-\mo$, its smash product by $- \wedge_E -$ and hom-sets in its homotopy category by $[-,-]_E$. If $R$ is an algebra and $C$ a coalgebra, then we denote by $R-\mo$ and $C-\com$ the categories of left $R$-modules and left $C$-comodules respectively. Hom-sets in these categories are both denoted by $\Hom _R(-,-)$ and $\Hom _C(-,-)$ and it will be clear from the context if they are meant to be hom of modules or comodules. For a bigraded object $M_{**}$ (respectively $M^{**}$) we also denote by $\Sigma^{p,q}M_{**}$ (respectively $\Sigma^{p,q}M^{**}$) its suspension, i.e. the bigraded object defined by $\Sigma^{p,q}M_{a,b}=M_{a-p,b-q}$ (respectively $\Sigma^{p,q}M^{a,b}=M^{a+p,b+q}$). The convention for bigraded homomorphisms between bigraded objects is the following: $$\Hom ^{p,q}(M_{**},N_{**})=\Hom ^{0,0}(\Sigma^{p,q}M_{**},N_{**})$$ and $$\Hom ^{p,q}(M^{**},N^{**})=\Hom ^{0,0}(\Sigma^{p,q}M^{**},N^{**})$$ where $\Hom ^{0,0}(-,-)$ denotes the bidegree preserving homomorphisms. Moreover, the bounded derived categories of $R-\mo$ and $C-\com$ are denoted by $\D^b(R-\mo)$ and $\D^b(C-\com)$ respectively. \section{Isotropic motivic categories} In this section we want to introduce the main categories we consider throughout this paper, namely isotropic motivic categories. These categories are built from the respective motivic ones by, roughly speaking, killing all anisotropic varieties. We refer to \cite[Section 2]{V} and \cite[Section 2]{T} for more details on the construction and properties of isotropic categories. Let us recall first the definition of flexible field from \cite{V}. \begin{dfn} A field $k$ is called flexible if it is a purely transcendental extension of countable infinite degree, i.e. $k=k_0(t_1,t_2,\dots)$ for some other field $k_0$. 
\end{dfn} Once and for all we consider a flexible base field $k$ of characteristic different from 2. We proceed by recalling the definition of a fundamental object in $\SH(k)$ for the construction of the isotropic stable motivic homotopy category $\SH(k/k)$. \begin{dfn} \normalfont Denote by $Q$ the disjoint union of all connected anisotropic (mod 2) varieties over $k$, i.e. varieties which do not have closed points of odd degree, and by $\check{C}(Q)$ its \v{C}ech simplicial scheme, i.e. $\check{C}(Q)_n=Q^{n+1}$ with face and degeneracy maps given respectively by partial projections and partial diagonals. We define the isotropic sphere spectrum $\X$ as $Cone(\Sigma^{\infty}_+\check{C}(Q) \rightarrow \s)$ in $\SH(k)$ . \end{dfn} We recall from \cite[Section 2]{T} that $\X$ is an idempotent monoid, i.e. there is an equivalence $\X \wedge \X \cong \X$ induced by the map $\s \rightarrow \X$, and so an $E_{\infty}$-ring spectrum (see \cite[Proposition 6.1]{T}). \begin{dfn} \normalfont The full triangulated subcategory $\X \wedge \SH(k)$ of $\SH(k)$ will be called the isotropic stable motivic homotopy category, and denoted by $\SH(k/k)$. \end{dfn} This triangulated category has very nice properties, in particular it is both localising and colocalising (see \cite[Section 2]{T}). The very same construction was done first for $\DM(k)$ by Vishik in \cite{V}, by tensoring the triangulated category of motives with the idempotent ${\mathrm M}(\X)$, where ${\mathrm M}:\SH(k) \rightarrow \DM(k)$ is the motivic functor. \begin{dfn} \normalfont The full triangulated subcategory ${\mathrm M}(\X) \otimes \DM(k)$ of $\DM(k)$ will be called the isotropic category of motives, and denoted by $\DM(k/k)$. \end{dfn} The following result tells us that the isotropic stable motivic homotopy category is nothing else but the stable $\infty$-category of $\X$-modules. 
\begin{prop} The isotropic stable motivic homotopy category $\SH(k/k)$ is equivalent to the stable $\infty$-category $\X-\mo$ of modules over the motivic $E_{\infty}$-ring spectrum $\X$. \end{prop} \begin{proof} It follows immediately from \cite[Proposition 4.8.2.10]{Lu}. \end{proof} \begin{rem}\label{bach} \normalfont Since by construction $\X$ kills all anisotropic varieties, it kills in particular non-trivial quadratic extensions. Consider an element $x$ in $k$ such that neither $x$ nor $-x$ is a square. Then, we have that $\X \wedge \Sigma^{\infty}_+Spec(k(\sqrt{x}))$ and $\X \wedge \Sigma^{\infty}_+Spec(k(\sqrt{-x}))$ are both zero. This implies that the Euler characteristics of $Spec(k(\sqrt{x}))$ and $Spec(k(\sqrt{-x}))$, which are respectively equal to $\langle 2 \rangle (1 + \langle x\rangle)$ and $\langle 2 \rangle (1 + \langle -x\rangle)$ in $\pi_{0,0}(\s) \cong {\mathrm GW}(k)$ (see \cite[Corollary 11.2]{Le} and \cite[Theorem 6.2.2]{Mo}), vanish in $\pi_{0,0}(\X)$. It follows that $1 + \langle x\rangle$ and $1 + \langle -x\rangle$ vanish in $\pi_{0,0}(\X)$ and so does their sum $$2 + \langle x\rangle + \langle -x\rangle= 2 + \langle 1\rangle + \langle -1\rangle=3 +\langle -1\rangle.$$ Hence, we have that $-3=\langle -1\rangle$, and so $9=1$, i.e. $8=0$ in $\pi_{0,0}(\X)$. From all this one deduces that $\X$ is $2$-power torsion.\footnote{I am grateful to Tom Bachmann for this argument.} \end{rem} We are now ready to define isotropic motivic homotopy groups and isotropic motivic homology and cohomology. \begin{dfn} \normalfont Let $X$ be a motivic spectrum in $\SH(k)$. Then, the isotropic stable motivic homotopy groups of $X$ are defined by $$\pi_{**}^{iso}(X)=[\s^{**},\X \wedge X]=\pi_{**}(\X \wedge X).$$ \end{dfn} Recall that motivic cohomology with $\Z/2$-coefficients is represented by the motivic Eilenberg-MacLane spectrum $\hz$. 
Then, we define isotropic motivic cohomology as the cohomology theory represented by the motivic $E_{\infty}$-ring spectrum $\X \wedge \hz$. \begin{dfn} \normalfont For any $X$ in $\SH(k)$, we define the isotropic motivic cohomology of $X$ as $$H^{**}_{iso}(X)=[X,\Sigma^{**}(\X \wedge \hz)]$$ and the isotropic motivic homology of $X$ as $$H_{**}^{iso}(X)=[\s^{**},\X \wedge \hz \wedge X] = H_{**}(\X \wedge X).$$ \end{dfn} The isotropic motivic cohomology of the point was computed by Vishik in \cite{V}. We report the result in the next theorem. \begin{thm}\label{vis} Let $k$ be a flexible field. Then, for any $i \geq 0$ there exists a unique cohomology class $r_i$ of bidegree $(-2^i+1)[-2^{i+1}+1]$ such that $$H^{**}(k/k) \cong \Lambda_{\F}(r_i)_{i \geq 0}$$ and $Q_j r_i=\delta_{ij}$, where $Q_j$ are the Milnor operations. \end{thm} \begin{proof} See \cite[Theorem 3.7]{V}. \end{proof} At this point, we want to introduce the isotropic motivic Steenrod algebra $\A^{**}(k/k)$ and its dual $\A_{**}(k/k)$. They are defined respectively as the isotropic motivic cohomology and homology of the motivic Eilenberg-MacLane spectrum. \begin{dfn} \normalfont The isotropic motivic Steenrod algebra is defined by $$\A^{**}(k/k)=H^{**}_{iso}(\hz)=[\hz,\Sigma^{**}(\X \wedge \hz)] \cong [\X \wedge \hz,\Sigma^{**}(\X \wedge \hz)]$$ and its dual by $$\A_{**}(k/k)=H_{**}^{iso}(\hz)=[\s^{**},\X \wedge \hz \wedge \hz].$$ \end{dfn} The structure of $\A^{**}(k/k)$ was studied in \cite[Section 3]{T}. We summarise the main results in the next proposition. \begin{prop}\label{st} Let $k$ be a flexible field. Then, there exists an isomorphism of $H^{**}(k/k) - \M^{**}$-bimodules $$\A^{**}(k/k) \cong H^{**}(k/k) \otimes_{\F} \G^{**} \otimes_{\F} \M^{**}$$ where $\M^{**}$ is the Milnor subalgebra $\Lambda_{\F}(Q_i)_{i \geq 0}$ and $\G^{**}$ is the bigraded topological Steenrod algebra, i.e. $\G^{2n,n}=\A^n$. \end{prop} \begin{proof} See \cite[Propositions 3.5, 3.6 and 3.7]{T}. 
\end{proof} By projecting the motivic Cartan formulas (see \cite[Propositions 9.7 and 13.4]{V2}) to the isotropic category one gets a coproduct on $\A^{**}(k/k)$ given by: $$\Delta(Sq^{2n})=\sum_{i+j=n}Sq^{2i}\otimes Sq^{2j};$$ $$\Delta(Q_i)=Q_i\otimes 1 + 1\otimes Q_i.$$ This coproduct structures $\A^{**}(k/k)$ as a coalgebra whose dual is described as an $H_{**}(k/k)$-algebra by $$\A_{**}(k/k) \cong \frac {H_{**}(k/k)[\tau_i,\xi_j]_{i \geq 0,j \geq 1}} {(\tau_i^2)}$$ where $\tau_i$ is the dual of the Milnor operation $Q_i$ and $\xi_j$ is the dual of the motivic cohomology operation $Sq^{2^j}\cdots Sq^2$. The coproduct in $\A_{**}(k/k)$ is given by (see \cite[Lemma 12.11]{V2}): $$\psi(\xi_k)=\sum_{i=0}^k \xi_{k-i}^{2^i} \otimes \xi_i;$$ $$\psi(\tau_k)=\sum_{i=0}^k \xi_{k-i}^{2^i} \otimes \tau_i+\tau_k\otimes 1.$$ \begin{rem} \normalfont By Proposition \ref{st}, the projection from $\A^{**}(k/k)$ to its quotient by the left ideal generated by Milnor operations provides a homomorphism $$\A^{**}(k/k) \rightarrow H^{**}(k/k) \otimes_{\F} \G^{**}.$$ This map induces a left $\A^{**}(k/k)$-action on $H^{**}(k/k) \otimes_{\F} \G^{**}$ and, dually, a left $\A_{**}(k/k)$-coaction on $H_{**}(k/k) \otimes_{\F} \G_{**}$, where $\G_{**}$ is the subalgebra $\F[\xi_1,\xi_2,\dots]$. \end{rem} \section{Cellular motivic spectra} In this work, we are mostly interested in cellular objects of isotropic motivic categories. We recall from \cite[Remark 7.4]{DI2} that the category of cellular motivic spectra, which we denote by $\SH(k)_{cell}$, is the localising subcategory of $\SH(k)$ generated by the spheres $\Sigma^{p,q}\s$. Similarly, the category of Tate motives, which we denote by $\DM(k)_{Tate}$, is the localising subcategory of $\DM(k)$ generated by the Tate motives $T(q)[p]$. If $E$ is a motivic $E_{\infty}$-ring spectrum, then we denote by $E-\mo_{cell}$ the stable $\infty$-category of $E$-cellular modules, i.e. the localising subcategory of $E-\mo$ generated by $\Sigma^{p,q}E$. 
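Before proceeding, here is an illustrative aside (not taken from the text) on the coproduct formulas recalled in the previous section: the formula for $\psi(\xi_k)$ coincides with Milnor's classical one, and its coassociativity is equivalent to associativity of composition of additive power series $f(x)=\sum_i a_i x^{2^i}$ over $\F$, for which $(f \circ g)_k = \sum_{i+j=k} a_i\, b_j^{2^i}$. A low-degree sympy verification:

```python
import sympy as sp

# Additive power series over F_2: f(x) = sum_i a_i x^{2^i}. Composition has
# coefficients (f o g)_k = sum_{i+j=k} a_i * b_j**(2**i), mirroring Milnor's
# formula psi(xi_k) = sum_i xi_{k-i}^{2^i} (x) xi_i. Coassociativity of psi
# amounts to associativity of this composition, which holds mod 2 only.
def compose(a, b):
    return [sp.expand(sum(a[i] * b[k - i] ** (2 ** i) for i in range(k + 1)))
            for k in range(len(a))]

n = 4
a = list(sp.symbols(f'a0:{n}'))
b = list(sp.symbols(f'b0:{n}'))
c = list(sp.symbols(f'c0:{n}'))

for l, r in zip(compose(compose(a, b), c), compose(a, compose(b, c))):
    diff = sp.expand(l - r)
    if diff != 0:
        # all coefficients of the discrepancy are even: equality over F_2
        assert all(co % 2 == 0 for co in sp.Poly(diff, *(a + b + c)).coeffs())
```

The discrepancy appears already in degree $2$, where the two sides differ by $2\,a_1 b_0 b_1 c_0^{2} c_1$ (up to sign), an even term; this is why the comparison must be carried out mod $2$.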
\begin{dfn} \normalfont The category of $\X$-cellular modules will be called the category of isotropic cellular motivic spectra, and denoted by $\SH(k/k)_{cell}$. In the same way, the full localising subcategory of $\DM(k/k)$ generated by the objects ${\mathrm M}(\X)(q)[p]$ will be called the category of isotropic Tate motives, and denoted by $\DM(k/k)_{Tate}$. \end{dfn} A fundamental property of the category of cellular objects is that isomorphisms can be detected by motivic homotopy groups, as reported in the following result. \begin{prop}\label{check} Let $E$ be a motivic $E_{\infty}$-ring spectrum and $X \rightarrow Y$ be a map of $E$-cellular motivic spectra that induces isomorphisms on $\pi_{p,q}$ for all $p $ and $q$ in $\Z$. Then, the map is a weak equivalence. \end{prop} \begin{proof} See \cite[Corollary 7.2 and Section 7.9]{DI2}. \end{proof} Another essential advantage of dealing with cellular objects is that they allow the construction of very useful convergent spectral sequences. \begin{prop}\label{ss} Let $E$ be a motivic $E_{\infty}$-ring spectrum and $N$ a left $E$-module. If $M$ is a right $E$-cellular spectrum, then there is a strongly convergent spectral sequence $$E^2_{s,t,u} \cong \Tor^{\pi_{**}(E)}_{s,t,u}(\pi_{**}(M),\pi_{**}(N))\Longrightarrow \pi_{s+t,u}(M \wedge_E N).$$ If $M$ is a left $E$-cellular motivic spectrum, then there is a conditionally convergent spectral sequence $$E_2^{s,t,u} \cong \Ext_{\pi_{**}(E)}^{s,t,u}(\pi_{**}(M),\pi_{**}(N)) \Longrightarrow [\Sigma^{t-s,u}M,N]_E.$$ \end{prop} \begin{proof} See \cite[Propositions 7.7 and 7.10]{DI2}. \end{proof} We will make a substantial use of the previous proposition in the following sections in order to compute specific hom-sets in the isotropic stable homotopy category. \section{The isotropic motivic Adams spectral sequence} In this section we recall the construction of the isotropic motivic Adams spectral sequence (see \cite[Section 4]{T}). 
Moreover, we study the circumstances under which the $E_2$-page is expressible in terms of $\Ext$-groups over the isotropic motivic Steenrod algebra. \begin{dfn} \normalfont Let $Y$ be an isotropic motivic spectrum, i.e. an object in $\X-\mo$. Then, the standard isotropic motivic Adams resolution of $Y$ consists of the Postnikov system $$ \xymatrix{ \dots \ar@{->}[r] & (\overline{\X \wedge \hz})^{\wedge s} \wedge Y \ar@{->}[r] \ar@{->}[d] & \dots \ar@{->}[r] & \overline{\X \wedge \hz} \wedge Y \ar@{->}[r] \ar@{->}[d] & Y \ar@{->}[d] \\ & \X \wedge \hz\wedge (\overline{\X \wedge \hz})^{\wedge s} \wedge Y \ar@{->}[ul]^{[1]} & & \X \wedge \hz \wedge \overline{\X \wedge \hz} \wedge Y \ar@{->}[ul]^{[1]} & \X \wedge \hz \wedge Y \ar@{->}[ul]^{[1]} } $$ where $\overline{\X \wedge \hz}$ is defined by the following exact triangle in $\SH(k)$: $$\overline{\X \wedge \hz} \rightarrow \s \rightarrow \X \wedge \hz \rightarrow \Sigma^{1,0}\overline{\X \wedge \hz}.$$ By applying motivic homotopy groups functors $\pi_{**}$ to the previous Postnikov system we get an unrolled exact couple, which induces in turn a spectral sequence with $E_1$-page described by $$E_1^{s,t,u}\cong \pi_{t-s,u}(\X \wedge \hz\wedge (\overline{\X \wedge \hz})^{\wedge s} \wedge Y)$$ and first differential $$d_1^{s,t,u}:\pi_{t-s,u}(\X \wedge \hz\wedge (\overline{\X \wedge \hz})^{\wedge s} \wedge Y) \rightarrow \pi_{t-s-1,u}(\X \wedge \hz\wedge (\overline{\X \wedge \hz})^{\wedge s+1} \wedge Y).$$ In general, differentials on the $E_r$-page have tri-degrees given by $$d_r^{s,t,u}:E_r^{s,t,u} \rightarrow E_r^{s+r,t+r-1,u}.$$ We call this spectral sequence isotropic motivic Adams spectral sequence. \end{dfn} The isotropic Adams spectral sequence converges to the homotopy groups of a motivic spectrum closely related to $Y$, namely its $\X \wedge \hz$-nilpotent completion that we denote by $Y^{\wedge}_{\X \wedge \hz}$. 
Before proceeding, let us recall from \cite[Section 5]{Bo} how to construct the $E$-nilpotent completion of a spectrum $Y$ for a homotopy ring spectrum $E$. \begin{dfn} \normalfont Let $E$ be a homotopy ring spectrum and $Y$ a motivic spectrum in $\SH(k)$. First, define $\overline{E}$ by the following distinguished triangle in $\SH(k)$: $$\overline{E} \rightarrow \s \rightarrow E \rightarrow \Sigma^{1,0}\overline{E}.$$ Then, define $\overline{E}_{n}$ as $Cone(\overline{E}^{\wedge n+1} \rightarrow \s)$ in $\SH(k)$. This way one gets an inverse system $$ \dots \rightarrow \overline{E}_{n} \wedge Y \rightarrow \dots \rightarrow \overline{E}_{1} \wedge Y \rightarrow \overline{E}_{0} \wedge Y$$ and the $E$-nilpotent completion of $Y$ is the motivic spectrum defined by $Y^{\wedge}_E=\hl (\overline{E}_{n} \wedge Y)$. \end{dfn} Note that, by \cite[Proposition 2.3]{T}, if $Y$ is an isotropic motivic spectrum then so is $Y^{\wedge}_E$. \begin{prop}\label{conv} Let $Y$ be an isotropic motivic spectrum. If $\varprojlim_{r}^1 E_r^{s,t,u}=0$ for any $s,t,u$, then the isotropic motivic Adams spectral sequence for $Y$ is strongly convergent to the stable motivic homotopy groups of the $\hz$-nilpotent completion of $Y$. \end{prop} \begin{proof} By \cite[Proposition 6.3]{Bo} and \cite[Remark 6.11]{DI}, under the vanishing hypothesis on $\varprojlim_{r}^1 E_r^{s,t,u}$, the isotropic motivic Adams spectral sequence strongly converges to $\pi_{**}(Y^{\wedge}_{\X \wedge \hz})$. It only remains to notice that, since $Y$ is an $\X$-module, its $\hz$-nilpotent and $\X \wedge \hz$-nilpotent completions coincide.
In fact, after smashing with $\X$ the morphism of distinguished triangles $$ \xymatrix{ \overline{\hz} \ar@{->}[r] \ar@{->}[d]& \s \ar@{->}[r] \ar@{=}[d] & \hz \ar@{->}[r] \ar@{->}[d] & \Sigma^{1,0}\overline{ \hz} \ar@{->}[d]\\ \overline{\X \wedge \hz} \ar@{->}[r] & \s \ar@{->}[r] & \X \wedge\hz \ar@{->}[r] & \Sigma^{1,0}\overline{ \X \wedge \hz} } $$ one gets $$ \xymatrix{ \X \wedge \overline{\hz} \ar@{->}[r] \ar@{->}[d]& \X \ar@{->}[r] \ar@{=}[d] & \X \wedge \hz \ar@{->}[r] \ar@{->}[d]^{\cong} & \Sigma^{1,0}\X \wedge\overline{ \hz} \ar@{->}[d]\\ \X \wedge \overline{\X \wedge \hz} \ar@{->}[r] & \X \ar@{->}[r] & \X \wedge \X \wedge\hz \ar@{->}[r] & \Sigma^{1,0}\X \wedge \overline{ \X \wedge \hz} } $$ since $\X$ is an idempotent in $\SH(k)$. It follows that $\X \wedge \overline{\hz} \cong \X \wedge \overline{\X \wedge \hz}$, and so $\X \wedge \overline{\hz}_n \cong \X \wedge (\overline{\X \wedge \hz})_n$ for any $n$. Therefore, since $Y \cong \X \wedge Y$ one obtains that \begin{align*} Y^{\wedge}_{\X \wedge \hz}&= \hl ((\overline{\X \wedge \hz})_n \wedge Y)\cong \hl (\X \wedge (\overline{\X \wedge \hz})_n \wedge Y) \\ &\cong \hl (\X \wedge \overline{\hz}_n \wedge Y) \cong \hl (\overline{\hz}_n \wedge Y)=Y^{\wedge}_{ \hz} \end{align*} which is what we wanted to show. \end{proof} \begin{rem}\label{abc} \normalfont By \cite[Section 5.2 and Theorem 1.0.3]{Ma}, the $\hz$-completion of a connective motivic spectrum coincides with its $(2,\eta)$-completion. Since all isotropic motivic spectra are $2$-power torsion (see Remark \ref{bach}), and so $2$-complete, the previous result establishes the convergence of the isotropic Adams spectral sequence for a connective isotropic spectrum to the motivic stable homotopy groups of its $\eta$-completion. \end{rem} \begin{dfn} \normalfont A spectral sequence $\{E_r^{s,t,u}\}$ is called Mittag-Leffler if for each $s,t,u$ there exists $r_0$ such that $E_r^{s,t,u} \cong E_{\infty}^{s,t,u}$ whenever $r > r_0$. 
\end{dfn} Note that every Mittag-Leffler spectral sequence satisfies the condition $\varprojlim_{r}^1 E_r^{s,t,u}=0$ for any $s,t,u$ (see \cite[after Proposition 6.3]{Bo}). We will see that in many important cases the isotropic Adams spectral sequence is Mittag-Leffler, which guarantees strong convergence. Now, we would like to understand what conditions we need to impose on $Y$ in order to be able to express the $E_2$-page of the isotropic Adams spectral sequence in terms of $\Ext$-groups over the isotropic motivic Steenrod algebra. First, we need the following lemmas. \begin{lem}\label{kun} Let $k$ be a flexible field and $Y$ an object in $\X-\mo$. Then, there exists an isomorphism of left $H_{**}(k/k)$-modules $$H^{iso}_{**}(\X \wedge \hz \wedge Y) \cong \A_{**}(k/k) \otimes_{H_{**}(k/k)} H^{iso}_{**}(Y).$$ \end{lem} \begin{proof} Since by \cite[Theorem 5.10]{H} $\hz \wedge \hz$ is a split $\hz$-module, i.e. it is equivalent to a wedge sum of the form $\bigvee_{\alpha \in A}\Sigma^{p_{\alpha},q_{\alpha}}\hz$, we have that \begin{align*} \A_{**}(k/k)& \cong \pi_{**}(\X \wedge \hz \wedge \hz)\\ & \cong \pi_{**}(\bigvee_{\alpha \in A}\Sigma^{p_{\alpha},q_{\alpha}}(\X \wedge \hz))\\ & \cong \bigoplus_{\alpha \in A}\Sigma^{p_{\alpha},q_{\alpha}}\pi_{**}(\X \wedge \hz)\\ & \cong \bigoplus_{\alpha \in A} \Sigma^{p_{\alpha},q_{\alpha}} H_{**}(k/k). \end{align*} Now, let $Y$ be any object in $\X-\mo$. Then, \begin{align*} H^{iso}_{**}(\X \wedge \hz \wedge Y) &\cong \pi_{**}(\X \wedge \hz \wedge \hz \wedge Y)\\ & \cong \pi_{**}(\bigvee_{\alpha \in A}\Sigma^{p_{\alpha},q_{\alpha}}(\X \wedge \hz \wedge Y))\\ & \cong \bigoplus_{\alpha \in A}\Sigma^{p_{\alpha},q_{\alpha}}\pi_{**}(\X \wedge \hz \wedge Y) \\ & \cong \bigoplus_{\alpha \in A} \Sigma^{p_{\alpha},q_{\alpha}} H^{iso}_{**}(Y) \\ &\cong \A_{**}(k/k) \otimes_{H_{**}(k/k)} H^{iso}_{**}(Y) \end{align*} which completes the proof. 
\end{proof} \begin{rem} \normalfont Note that, by the previous lemma, the map $Y \rightarrow \X \wedge \hz \wedge Y$ induces in isotropic motivic homology a coaction $H^{iso}_{**}(Y) \rightarrow \A_{**}(k/k) \otimes_{H_{**}(k/k)} H^{iso}_{**}(Y)$ which structures $H^{iso}_{**}(Y)$ as a left $\A_{**}(k/k)$-comodule. \end{rem} In the next result, we show that, if the homology of an isotropic cellular spectrum $Y$ is free over $H_{**}(k/k)$, then the motivic spectrum $\X \wedge \hz \wedge Y$ is a split $\X \wedge \hz$-module. \begin{lem}\label{gem} Let $ k$ be a flexible field and $Y$ an object in $\X-\mo_{cell}$ such that $H^{iso}_{**}(Y)$ is a free left $H_{**}(k/k)$-module generated by a set of elements $\{x_{\alpha}\}_{\alpha \in A}$, where $x_{\alpha}$ has bidegree $(q_{\alpha})[p_{\alpha}]$. Then, there exists an isomorphism of spectra $$\bigvee_{\alpha \in A} \Sigma^{p_{\alpha},q_{\alpha}} (\X \wedge \hz) \xrightarrow{\cong} \X \wedge \hz \wedge Y.$$ \end{lem} \begin{proof} Since $H^{iso}_{**}(Y) \cong \pi_{**}(\X \wedge \hz \wedge Y)$, we can represent each generator $x_{\alpha}$ as a map $\Sigma^{p_{\alpha},q_{\alpha}} \s \rightarrow \X \wedge \hz \wedge Y$, where $(q_{\alpha})[p_{\alpha}]$ is the bidegree of $x_{\alpha}$. For all $\alpha \in A$, this map corresponds bijectively to a map $\Sigma^{p_{\alpha},q_{\alpha}} (\X \wedge \hz) \rightarrow \X \wedge \hz \wedge Y$ of $\X \wedge \hz$-cellular modules. Hence, we get a map $$\bigvee_{\alpha \in A} \Sigma^{p_{\alpha},q_{\alpha}} (\X \wedge \hz) \rightarrow \X \wedge \hz \wedge Y$$ of $\X \wedge \hz$-cellular modules. In order to check that it is an isomorphism, by Proposition \ref{check} it is enough to look at the induced morphisms on homotopy groups. 
Indeed, we have that on the one hand $$\pi_{**}(\bigvee_{\alpha \in A} \Sigma^{p_{\alpha},q_{\alpha}} (\X \wedge \hz)) \cong \bigoplus_{\alpha \in A}\Sigma^{p_{\alpha},q_{\alpha}} \pi_{**}(\X \wedge \hz) \cong \bigoplus_{\alpha \in A} \Sigma^{p_{\alpha},q_{\alpha}}H_{**}(k/k)$$ and on the other $$\pi_{**}(\X \wedge \hz \wedge Y) \cong \bigoplus_{\alpha \in A} H_{**}(k/k) \cdot x_{\alpha}$$ by hypothesis. By construction, the map we are considering induces in homotopy groups the homomorphism of $H_{**}(k/k)$-modules $$\pi_{**}(\bigvee_{\alpha \in A} \Sigma^{p_{\alpha},q_{\alpha}} (\X \wedge \hz)) \rightarrow \pi_{**}(\X \wedge \hz \wedge Y)$$ which sends $1 \in \Sigma^{p_{\alpha},q_{\alpha}} H_{**}(k/k)$ to $x_{\alpha}$ for any $\alpha \in A$, so it is an isomorphism, as we wanted to show. \end{proof} The next lemma provides us with a condition under which the isotropic cohomology of a spectrum is dual to its isotropic homology. \begin{lem}\label{dual} Let $ k$ be a flexible field and $Y$ an object in $\X-\mo$ such that there is an isomorphism $\X \wedge \hz \wedge Y \cong \bigvee_{\alpha \in A} \Sigma^{p_{\alpha},q_{\alpha}} (\X \wedge \hz)$ for some set $A$. 
Then, for any bidegree $(q)[p]$ there is an isomorphism $$H_{iso}^{p,q}(Y) \cong \Hom ^{-p,-q}_{H_{**}(k/k)}(H^{iso}_{**}(Y),H_{**}(k/k)).$$ \end{lem} \begin{proof} Since $\X \wedge \hz \wedge Y \cong \bigvee_{\alpha \in A} \Sigma^{p_{\alpha},q_{\alpha}} (\X \wedge \hz)$ by hypothesis, we have that $$H^{iso}_{**}(Y)=[\s^{**},\X \wedge \hz \wedge Y] \cong [\s^{**},\bigvee_{\alpha \in A} \Sigma^{p_{\alpha},q_{\alpha}} (\X \wedge \hz)] \cong \bigoplus_{\alpha \in A} \Sigma^{p_{\alpha},q_{\alpha}} H_{**}(k/k)$$ from which it follows that $$\Hom ^{-p,-q}_{H_{**}(k/k)}(H^{iso}_{**}(Y),H_{**}(k/k)) \cong \prod_{\alpha \in A} H_{p_{\alpha}-p,q_{\alpha}-q}(k/k).$$ On the other hand, we have the following chain of isomorphisms: \begin{align*} H_{iso}^{p,q}(Y) &= [Y,\Sigma^{p,q}(\X \wedge \hz)]\\ & \cong [\X \wedge \hz \wedge Y,\Sigma^{p,q}(\X \wedge \hz)]_{\X \wedge \hz}\\ & \cong [\bigvee_{\alpha \in A} \Sigma^{p_{\alpha},q_{\alpha}} (\X \wedge \hz),\Sigma^{p,q}(\X \wedge \hz)]_{\X \wedge \hz}\\ & \cong [\bigvee_{\alpha \in A}\s^{p_{\alpha},q_{\alpha}},\Sigma^{p,q}(\X \wedge \hz)] \\ & \cong \prod_{\alpha \in A} H_{p_{\alpha}-p,q_{\alpha}-q}(k/k) \end{align*} that concludes the proof. \end{proof} Now, we need to define a certain concept of finiteness which suits the isotropic environment. \begin{dfn} \normalfont We say that a set of bidegrees $\{(q_{\alpha})[p_{\alpha}]\}_{\alpha \in A}$ is isotropically finite type if for any bidegree $(q)[p]$ there are only finitely many $\alpha \in A$ such that $p-p_{\alpha} \geq 2(q-q_{\alpha}) \geq 0$. Moreover, we say that a set of bigraded elements $\{x_{\alpha}\}_{\alpha \in A}$ is isotropically finite type if the corresponding set of bidegrees is so. \end{dfn} \begin{lem}\label{ift} Let $k$ be a flexible field and $\{(q_{\alpha})[p_{\alpha}]\}_{\alpha \in A}$ an isotropically finite type set of bidegrees. 
Then, for any bidegree $(q)[p]$, the obvious map $$\pi_{p,q}(\X \wedge \bigvee_{\alpha \in A} \Sigma^{p_{\alpha},q_{\alpha}} \hz) \rightarrow \Hom ^{p,q}_{\A^{**}(k/k)}(H_{iso}^{**}(\bigvee_{\alpha \in A} \Sigma^{p_{\alpha},q_{\alpha}} \hz),H^{**}(k/k))$$ is an isomorphism. \end{lem} \begin{proof} First, note that, for any bidegree $(q)[p]$, one has the following commutative diagram $$ \xymatrix{ \pi_{p,q}(\X \wedge \bigvee_{\alpha \in A} \Sigma^{p_{\alpha},q_{\alpha}} \hz) \ar@{->}[r] \ar@{->}[d]& \Hom ^{p,q}_{\A^{**}(k/k)}(H_{iso}^{**}(\bigvee_{\alpha \in A} \Sigma^{p_{\alpha},q_{\alpha}} \hz),H^{**}(k/k)) \ar@{->}[d] \\ \Hom ^{p,q}_{\F}(\bigoplus_{\alpha \in A} \Sigma^{-p_{\alpha},-q_{\alpha}} \F,H^{**}(k/k)) \ar@{->}[r] & \Hom ^{p,q}_{\A^{**}(k/k)}(\bigoplus_{\alpha \in A} \Sigma^{-p_{\alpha},-q_{\alpha}} \A^{**}(k/k),H^{**}(k/k)). } $$ The left vertical arrow is the isomorphism given by the following chain of isomorphisms \begin{align*} \pi_{p,q}(\X \wedge \bigvee_{\alpha \in A} \Sigma^{p_{\alpha},q_{\alpha}} \hz) &\cong \bigoplus_{\alpha \in A} \pi_{p,q}(\X \wedge \Sigma^{p_{\alpha},q_{\alpha}} \hz) \\ &\cong \bigoplus_{\alpha \in A} H^{p_{\alpha}-p,q_{\alpha}-q}(k/k)\\ & \cong \prod_{\alpha \in A}H^{p_{\alpha}-p,q_{\alpha}-q} (k/k)\\ & \cong \prod_{\alpha \in A} \Hom ^{p,q}_{\F}(\Sigma^{-p_{\alpha},-q_{\alpha}} \F,H^{**}(k/k)) \\ &\cong \Hom ^{p,q}_{\F}(\bigoplus_{\alpha \in A} \Sigma^{-p_{\alpha},-q_{\alpha}} \F,H^{**}(k/k)) \end{align*} where the identification $$\bigoplus_{\alpha \in A} H^{p_{\alpha}-p,q_{\alpha}-q}(k/k) \cong \prod_{\alpha \in A}H^{p_{\alpha}-p,q_{\alpha}-q} (k/k)$$ is due to the fact that the set $\{(q_{\alpha})[p_{\alpha}]\}_{\alpha \in A}$ is isotropically finite type, so, for any bidegree $(q)[p]$, the group $H^{p_{\alpha}-p,q_{\alpha}-q}(k/k)$ is non-zero only for finitely many $\alpha \in A$ by Theorem \ref{vis}. The bottom horizontal map is obviously an isomorphism since $\A^{**}(k/k)$ is an $\F$-vector space.
The right vertical map is an isomorphism since \begin{align*} \Hom ^{p,q}_{\A^{**}(k/k)}(H^{**}_{iso}(\bigvee_{\alpha} \Sigma^{p_{\alpha},q_{\alpha}} \hz),H^{**}(k/k)) &\cong \Hom ^{p,q}_{\A^{**}(k/k)}(\prod_{\alpha}H^{**}_{iso}(\Sigma^{p_{\alpha},q_{\alpha}} \hz),H^{**}(k/k)) \\ &= \Hom ^{p,q}_{\A^{**}(k/k)}(\prod_{\alpha} \Sigma^{-p_{\alpha},-q_{\alpha}} \A^{**}(k/k),H^{**}(k/k)) \\ &\cong \Hom ^{p,q}_{\A^{**}(k/k)}(\bigoplus_{\alpha} \Sigma^{-p_{\alpha},-q_{\alpha}} \A^{**}(k/k),H^{**}(k/k)) \end{align*} where the last isomorphism comes from the fact that the set of bidegrees $\{(q_{\alpha})[p_{\alpha}]\}_{\alpha \in A}$ is isotropically finite type, so, for any bidegree $(q)[p]$, the group $$\Hom ^{p,q}_{\A^{**}(k/k)}( \Sigma^{-p_{\alpha},-q_{\alpha}}\A^{**}(k/k),H^{**}(k/k))\cong H^{p_{\alpha}-p,q_{\alpha}-q}(k/k)$$ is non-trivial only for finitely many $\alpha \in A$ by Theorem \ref{vis}. This completes the proof. \end{proof} At this point, we are ready to present the structure of the $E_2$-page of the isotropic Adams spectral sequence, which behaves as in the classical case. \begin{thm}\label{iass} Let $k$ be a flexible field and $Y$ an object in $\X-\mo_{cell}$ such that $H^{iso}_{**}(Y)$ is a free left $H_{**}(k/k)$-module generated by an isotropically finite type set of elements $\{x_{\alpha}\}_{\alpha \in A}$. Then, the $E_2$-page of the isotropic motivic Adams spectral sequence is described by $$E_2^{s,t,u} \cong \Ext^{s,t,u}_{\A^{**}(k/k)}(H_{iso}^{**}(Y),H^{**}(k/k)).$$ \end{thm} \begin{proof} First, we want to prove by induction that $H_{**}^{iso}((\overline{\X \wedge \hz})^{\wedge s} \wedge Y)$ is a free left $H_{**}(k/k)$-module generated by an isotropically finite type set of elements $\{x_{\alpha}\}_{\alpha \in A_s}$ for any $s \geq 0$. The induction basis is guaranteed by hypothesis after setting $A_0=A$. Suppose the statement is true at stage $s-1$, i.e.
$H_{**}^{iso}((\overline{\X \wedge \hz})^{\wedge s-1} \wedge Y) \cong \bigoplus_{\alpha \in A_{s-1}} \Sigma^{1-s,0}H_{**}(k/k) \cdot x_{\alpha}$. Then, by Lemma \ref{kun}, the map $(\overline{\X \wedge \hz})^{\wedge s-1} \wedge Y \rightarrow \X \wedge \hz \wedge(\overline{\X \wedge \hz})^{\wedge s-1} \wedge Y$ induces in isotropic motivic homology the monomorphism $$\bigoplus_{\alpha \in A_{s-1}} \Sigma^{1-s,0}H_{**}(k/k) \cdot x_{\alpha} \rightarrow \bigoplus_{\alpha \in A_{s-1}} \Sigma^{1-s,0}\A_{**}(k/k) \cdot x_{\alpha}.$$ Hence, the standard Adams resolution induces for any $p$ and $q$ a short exact sequence $$0 \rightarrow H_{p,q}^{iso}((\overline{\X \wedge \hz})^{\wedge s-1} \wedge Y) \rightarrow H_{p,q}^{iso}(\X \wedge \hz \wedge(\overline{\X \wedge \hz})^{\wedge s-1} \wedge Y) \rightarrow H_{p-1,q}^{iso}((\overline{\X \wedge \hz})^{\wedge s} \wedge Y) \rightarrow 0.$$ Now, note that, by the very structure of the dual of the isotropic motivic Steenrod algebra, $\A_{**}(k/k)$ is freely generated over $H_{**}(k/k)$ by a set of generators $\{1,y_{\beta}\}_{\beta \in B}$ which is finite in each bidegree and such that $p_{\beta} \geq 2q_{\beta} \geq 0$ for any $\beta \in B$, where $(q_{\beta})[p_{\beta}]$ is the bidegree of $y_{\beta}$. Hence, the set $\{y_{\beta}x_{\alpha}\}_{\beta \in B,\alpha \in A_{s-1}}$ is isotropically finite type and freely generates $H_{**}^{iso}((\overline{\X \wedge \hz})^{\wedge s} \wedge Y)$ over $H_{**}(k/k)$, i.e. $$H_{**}^{iso}((\overline{\X \wedge \hz})^{\wedge s} \wedge Y) \cong \bigoplus_{\beta \in B,\alpha \in A_{s-1}} \Sigma^{-s,0}H_{**}(k/k) \cdot y_{\beta}x_{\alpha}.$$ Therefore, Lemma \ref{gem} implies that all $\X \wedge \hz \wedge(\overline{\X \wedge \hz})^{\wedge s} \wedge Y$ are wedges of appropriately shifted $\X \wedge \hz$. 
More precisely, for any $s \geq 0$, there exists an isomorphism $$\X \wedge \bigvee_{\alpha \in A_s} \Sigma^{p_{\alpha}-s,q_{\alpha}} \hz \xrightarrow{\cong} \X \wedge \hz \wedge(\overline{\X \wedge \hz})^{\wedge s} \wedge Y$$ where $A_s=B \times A_{s-1}$, from which we deduce, using Lemma \ref{ift}, that the $E_1$-page of the isotropic Adams spectral sequence can be described by $$E_1^{s,t,u} \cong \pi_{t-s,u}(\X \wedge \hz\wedge (\overline{\X \wedge \hz})^{\wedge s} \wedge Y) \cong \Hom ^{t,u}_{\A^{**}(k/k)}(\bigoplus_{\alpha \in A_s} \Sigma^{-p_{\alpha},-q_{\alpha}} \A^{**}(k/k),H^{**}(k/k)).$$ Moreover, note that $$0 \leftarrow H_{iso}^{**}(Y) \leftarrow \bigoplus_{\alpha \in A_0} \Sigma^{-p_{\alpha},-q_{\alpha}} \A^{**}(k/k) \leftarrow \bigoplus_{\alpha \in A_1} \Sigma^{-p_{\alpha},-q_{\alpha}} \A^{**}(k/k) \leftarrow \dots$$ is a free $\A^{**}(k/k)$-resolution of $H_{iso}^{**}(Y)$. Thus, for any $s,t,u$ we have an isomorphism $$E_2^{s,t,u} \cong \Ext^{s,t,u}_{\A^{**}(k/k)}(H_{iso}^{**}(Y),H^{**}(k/k))$$ as we aimed to prove. \end{proof} By using the isotropic motivic Adams spectral sequence, in \cite{T} we computed the isotropic motivic homotopy groups of the sphere spectrum, which can be identified with the $E_2$-page of the classical Adams spectral sequence. \begin{thm} Let $k$ be a flexible field. Then, the stable motivic homotopy groups of the $\hz$-completed isotropic sphere spectrum are completely described by $$\pi_{*,*'}(\X^{\wedge}_{\hz}) \cong \Ext_{\G^{**}}^{2*'-*,2*',*'}(\F,\F) \cong \Ext_{\A^*}^{2*'-*,*'}(\F,\F).$$ \end{thm} \begin{proof} See \cite[Theorem 5.7]{T}. \end{proof} \section{The motivic Brown-Peterson spectrum} In this section, we recall from \cite{Ve} the construction of the motivic Brown-Peterson spectrum. Moreover, we compute its isotropic homology and homotopy, which will be useful later on for the construction of the isotropic motivic Adams-Novikov spectral sequence and so for the proofs of our main results.
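For orientation, we briefly recall the topological picture that the definition below mimics; the facts stated in the following remark are classical and are included only as a guide.

\begin{rem} \normalfont Classically, the Brown-Peterson spectrum $BP$ at the prime $2$ is the direct summand of $MU_{(2)}$ cut out by the Quillen idempotent $e_{(2)}$, or, equivalently, the colimit of the system $$\dots \rightarrow MU_{(2)} \xrightarrow{e_{(2)}} MU_{(2)} \xrightarrow{e_{(2)}} MU_{(2)} \rightarrow \dots,$$ and its homotopy is the polynomial ring $$\pi_*(BP) \cong \Z_{(2)}[v_1,v_2,\dots], \qquad |v_i|=2(2^i-1).$$ The definition below transplants this construction to the motivic setting, with $\mgl$ playing the role of $MU$. \end{rem}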
\begin{dfn} \normalfont Let $\mgl_{(2)}$ be the motivic algebraic cobordism spectrum (see \cite[Section 6.3]{V0}) localised at 2. Then, following \cite[Section 5]{Ve} one defines the motivic Brown-Peterson spectrum $\mbp$ at the prime 2 as the colimit of the diagram in $\SH(k)$ $$\dots \rightarrow \mgl_{(2)} \xrightarrow{e_{(2)}} \mgl_{(2)} \xrightarrow{e_{(2)}} \mgl_{(2)} \rightarrow \dots$$ where $e_{(2)}$ is the motivic Quillen idempotent. \end{dfn} Note, in particular, that $\mbp$ is a homotopy commutative ring spectrum and a direct summand of $\mgl_{(2)}$. \begin{prop} Let $k$ be a flexible field. Then, there is an isomorphism of $H^{**}(k/k)$-modules $$H_{iso}^{**}(\mgl) \cong H_{iso}^{**}(\bgl) \cong H^{**}(k/k)[c_1,c_2,\dots]$$ and an isomorphism of $H_{**}(k/k)$-algebras $$H^{iso}_{**}(\mgl) \cong H^{iso}_{**}(\bgl) \cong H_{**}(k/k)[b_1,b_2,\dots]$$ where $c_i$ is the $i$th Chern class in $H_{iso}^{2i,i}(\bgl)$ and $b_i \in H^{iso}_{2i,i}(\bgl)$ is the dual of $c_1^i$ with respect to the monomial basis, for any $i$. \end{prop} \begin{proof} First, note that the maps $P^1 \rightarrow P^{\infty}$ and $\hz \rightarrow \X \wedge \hz$ induce a commutative square $$ \xymatrix{ H^{**}(P^{\infty}) \ar@{->}[r] \ar@{->}[d]& H^{**}_{iso}(P^{\infty}) \ar@{->}[d]\\ H^{**}(P^1) \ar@{->}[r] & H^{**}_{iso}(P^1) } $$ where the left vertical morphism is the projection $H^{**}(k)[c] \rightarrow H^{**}(k)[c]/(c^2)$ and $c$ is the only non-zero class in $H^{2,1}(P^{\infty}) \cong H^{2,1}(P^1) \cong \Z/2$. If we also denote by $c$ the images of $c$ under the horizontal maps in isotropic motivic cohomology, then the right vertical homomorphism is given by the projection $$H^{**}(k/k)[c] \rightarrow H^{**}(k/k)[c]/(c^2).$$ Hence, $\X \wedge \hz$ is an oriented motivic spectrum (see \cite[Definition 3.1]{Ve}) and the statement follows immediately from \cite[Proposition 6.2]{NSO}.
\end{proof} Following \cite[Section 6]{H}, let $h:L \rightarrow \F[b_1,b_2,\dots]$ be the homomorphism from the Lazard ring $L$ classifying the formal group law on $\F[b_1,b_2,\dots]$ which is isomorphic to the additive one via the exponential $\sum_{n \geq 0}b_nx^{n+1}$. Lazard's theorem implies that $h(L)$ is a polynomial subring $\F[b'_n| n \neq 2^r-1]$, where $b'_n \equiv b_n$ modulo decomposables. Denote by $\pi:\F[b_1,b_2,\dots] \rightarrow h(L)$ a retraction of the inclusion. In the next proposition, we give a description of isotropic homology and cohomology of the algebraic cobordism spectrum $\mgl$. \begin{prop} Let $k$ be a flexible field. Then, the coaction $\Delta: H^{iso}_{**}(\mgl) \rightarrow \A_{**}(k/k) \otimes_{H_{**}(k/k)}H^{iso}_{**}(\mgl) $ factors through $H_{**}(k/k) \otimes_{\F} \G_{**} \otimes_{\F} \F[b_1,b_2,\dots]$ and the composition $$H^{iso}_{**}(\mgl) \xrightarrow{\Delta} H_{**}(k/k) \otimes_{\F} \G_{**} \otimes_{\F} \F[b_1,b_2,\dots] \xrightarrow{id \otimes \pi} H_{**}(k/k) \otimes_{\F} \G_{**} \otimes_{\F} h(L)$$ is an isomorphism of left $\A_{**}(k/k)$-comodule algebras. Dually, the map $$H^{**}(k/k) \otimes_{\F} \G^{**} \otimes_{\F} h(L)^{\lor} \rightarrow H^{**}_{iso}(\mgl)$$ is an isomorphism of left $\A^{**}(k/k)$-module coalgebras. \end{prop} \begin{proof} Since $\hz \wedge \mgl$ is a split $\hz$-module (see the remark after \cite[Definition 5.4]{H}), from \cite[Lemma 5.2]{H} we deduce that $$H^{iso}_{**}(\mgl)\cong \pi_{**}(\X \wedge \hz) \otimes_{\pi_{**}(\hz)} \pi_{**}(\hz \wedge \mgl) \cong H_{**}(k/k) \otimes_{H_{**}(k)} H_{**}(\mgl)$$ as an $H_{**}(k/k)$-algebra. 
From \cite[Theorem 6.5]{H} we know that the coaction $\Delta: H_{**}(\mgl) \rightarrow \A_{**}(k) \otimes_{H_{**}(k)}H_{**}(\mgl) $ factors through $\Pa_{**}\otimes_{\F} \F[b_1,b_2,\dots]$ and the composition $$H_{**}(\mgl) \xrightarrow{\Delta} \Pa_{**} \otimes_{\F} \F[b_1,b_2,\dots] \xrightarrow{id \otimes \pi} \Pa_{**}\otimes_{\F} h(L)$$ is an isomorphism of left $\A_{**}(k)$-comodule algebras, where $\Pa_{**}$ is the subalgebra of $\A_{**}(k)$ defined by $H_{**}(k)[\xi_1,\xi_2,\dots]$. By tensoring the previous composition with $H_{**}(k/k)$ over $H_{**}(k)$ we get the desired isomorphism, which completes the first part. The second part follows easily, since $\G_{**} \otimes_{\F} h(L)$ is isotropically finite type, from Lemmas \ref{gem} and \ref{dual} by dualizing the homology isomorphism. \end{proof} The next result provides us with the structure of isotropic homology and cohomology of the motivic Brown-Peterson spectrum $\mbp$. \begin{prop}\label{imbp} Let $k$ be a flexible field. Then, the isotropic motivic homology of $\mbp$ is described as a left $\A_{**}(k/k)$-comodule by $$H^{iso}_{**}(\mbp) \cong H_{**}(k/k) \otimes_{\F} \G_{**}.$$ Dually, the isotropic motivic cohomology of $\mbp$ is described as a left $\A^{**}(k/k)$-module by $$H^{**}_{iso}(\mbp) \cong H^{**}(k/k) \otimes_{\F} \G^{**}.$$ \end{prop} \begin{proof} From \cite[Remark 6.20]{H}, one knows that $\mbp$ is equivalent to $\mgl_{(2)}/x$ where $x$ is any maximal $h$-regular sequence, i.e. a sequence of homogeneous elements in $L$ such that $h(x)$ is a regular sequence in $h(L)$ which generates the maximal ideal. 
Therefore, \cite[Theorem 6.11]{H} implies that there exists an isomorphism of $\A_{**}(k)$-comodules $$H_{**}(\mbp) \cong \Pa_{**}.$$ Since $\hz \wedge \mbp$ is a split $\hz$-module, we deduce from \cite[Lemma 5.2]{H} that $$H_{**}^{iso}(\mbp) \cong H_{**}(k/k) \otimes_{H_{**}(k)} H_{**}(\mbp) \cong H_{**}(k/k) \otimes_{H_{**}(k)} \Pa_{**} \cong H_{**}(k/k) \otimes_{\F} \G_{**}$$ which proves the first part. The second part follows again from dualization, since $\G_{**}$ is isotropically finite type, by Lemmas \ref{gem} and \ref{dual}. \end{proof} Later on, we will also need the isotropic homology and cohomology of $\mbp \wedge \mbp$, which is reported in the following proposition. \begin{prop}\label{mbp2} Let $k$ be a flexible field. Then, the isotropic motivic homology of $\mbp \wedge \mbp$ is described as a left $\A_{**}(k/k)$-comodule by $$H^{iso}_{**}(\mbp \wedge \mbp) \cong H_{**}(k/k) \otimes_{\F} \G_{**} \otimes_{\F} \G_{**}.$$ Dually, the isotropic motivic cohomology of $\mbp \wedge \mbp$ is described as a left $\A^{**}(k/k)$-module by $$H^{**}_{iso}(\mbp \wedge \mbp) \cong H^{**}(k/k) \otimes_{\F} \G^{**}\otimes_{\F} \G^{**}.$$ \end{prop} \begin{proof} Since $\hz \wedge \mbp$ is a split $\hz$-module, by \cite[Lemma 5.2]{H} and Proposition \ref{imbp} we obtain that $$H^{iso}_{**}(\mbp \wedge \mbp) \cong (H_{**}(k/k) \otimes_{\F} \G_{**}) \otimes_{H_{**}(k/k)} (H_{**}(k/k) \otimes_{\F} \G_{**}) \cong H_{**}(k/k) \otimes_{\F} \G_{**} \otimes_{\F} \G_{**}.$$ The description of the isotropic cohomology follows again by dualizing the homology isomorphism. \end{proof} Now, we compute the isotropic stable homotopy groups of $\mbp$ by using the isotropic Adams spectral sequence developed in the previous section. \begin{thm}\label{hgmbp} Let $k$ be a flexible field. 
Then, the isotropic motivic homotopy groups of $\mbp$ are described by $$\pi^{iso}_{**}(\mbp) \cong \F.$$ \end{thm} \begin{proof} First, note that, by Proposition \ref{imbp}, $H^{iso}_{**}(\mbp)$ is freely generated over $H_{**}(k/k)$ by $\G_{**}$, which is isotropically finite type. Hence, Theorem \ref{iass} implies that the $E_2$-page of the isotropic motivic Adams spectral sequence for $\X \wedge \mbp$ is given by $$E_2^{s,t,u} \cong \Ext_{\A^{**}(k/k)}^{s,t,u}(H_{iso}^{**}(\mbp),H^{**}(k/k)).$$ Now, we deduce from Proposition \ref{imbp} and \cite[Theorem 5.4]{T} that \begin{align*} \Ext_{\A^{**}(k/k)}^{s,t,u}(H_{iso}^{**}(\mbp),H^{**}(k/k)) & \cong \Ext_{\A^{**}(k/k)}^{s,t,u}(H^{**}(k/k) \otimes_{\F} \G^{**},H^{**}(k/k)) \\ &\cong \Ext_{\G^{**}}^{s,t,u}(\G^{**},\F) \\ &\cong \Ext_{\F}^{s,t,u}(\F,\F) \cong \begin{cases} \F & \text{if } s=t=u=0\\ 0 & \text{otherwise} \end{cases}. \end{align*} Therefore, the $E_2$-page of the isotropic Adams spectral sequence for $\X \wedge \mbp$ is concentrated in tridegree $(0,0,0)$, from which it follows that all differentials from the second page on are trivial. Thus, the Mittag-Leffler condition is clearly satisfied, and so strong convergence holds by Proposition \ref{conv}. Then, it immediately follows from Remark \ref{abc} and the fact that $\mbp$ is $\eta$-complete that $$\pi^{iso}_{**}(\mbp) \cong \pi_{**}(\X \wedge \mbp) \cong \F$$ which completes the proof. \end{proof} In the following sections, it will also be useful to know the isotropic homotopy groups of $\mbp \wedge \mbp$, which we compute in the next result. \begin{thm}\label{hgmbp2} Let $k$ be a flexible field. Then, the isotropic motivic homotopy groups of $\mbp \wedge \mbp$ are described by $$\pi^{iso}_{**}(\mbp \wedge \mbp) \cong \G_{**}.$$ \end{thm} \begin{proof} The proof of this theorem goes along the lines of the previous one.
Since $H_{**}^{iso}(\mbp \wedge \mbp) \cong H_{**}(k/k) \otimes_{\F} \G_{**}\otimes_{\F} \G_{**}$ by Proposition \ref{mbp2} and $\G_{**}\otimes_{\F} \G_{**}$ is isotropically finite type, by Theorem \ref{iass} we have that the $E_2$-page of the isotropic Adams spectral sequence for $\X \wedge \mbp \wedge \mbp$ is provided by $$E_2^{s,t,u} \cong \Ext_{\A^{**}(k/k)}^{s,t,u}(H_{iso}^{**}(\mbp \wedge \mbp),H^{**}(k/k)).$$ Again, we note that by \cite[Theorem 5.4]{T} \begin{align*} \Ext_{\A^{**}(k/k)}^{s,t,u}(H_{iso}^{**}(\mbp \wedge \mbp),H^{**}(k/k)) & \cong \Ext_{\A^{**}(k/k)}^{s,t,u}(H^{**}(k/k) \otimes_{\F} \G^{**} \otimes_{\F} \G^{**},H^{**}(k/k)) \\ &\cong \Ext_{\G^{**}}^{s,t,u}(\G^{**} \otimes_{\F} \G^{**},\F) \\ &\cong \Ext_{\F}^{s,t,u}(\G^{**},\F) \cong \begin{cases} \G_{t,u} & \text{if } s=0\\ 0 & \text{if } s \neq 0 \end{cases}. \end{align*} In particular, since $\G_{**}$ is concentrated on the slope $2$ line, we have that all differentials from the second page on are trivial by degree reasons. Hence, the Mittag-Leffler condition is met, which implies that the spectral sequence is strongly convergent. From all this, it follows as above that $$\pi^{iso}_{**}(\mbp \wedge \mbp) \cong \pi_{**}(\X \wedge \mbp \wedge \mbp) \cong \G_{**}$$ which is what we aimed to show. \end{proof} \section{The category of isotropic cellular MBP-modules} In this section we start by providing $\X \wedge \mbp$ with an $E_{\infty}$-ring structure. This allows us to talk about the stable $\infty$-category of $\X \wedge \mbp$-modules, i.e. $\X \wedge \mbp - \mo$, and its cellular part, i.e. $\X \wedge \mbp- \mo_{cell}$. Our aim is to focus on isotropic cellular $\mbp$-modules, which are the same as cellular $\X \wedge \mbp$-modules. In particular, we completely describe the category $\X \wedge \mbp-\mo_{cell}$ in algebraic terms. This section is structured along the lines of \cite[Section 3]{GWX}. Therefore, before each result we indicate the one from \cite{GWX} it corresponds to.
We hope this clearly sheds light on the deep parallelism between \cite{GWX} and this work. \begin{prop} The homotopy commutative ring structure on $\X \wedge \mbp$ extends to an $E_{\infty}$-ring structure. \end{prop} \begin{proof} It follows from \cite[Proposition 1.4.4.11]{Lu} that there exists a $t$-structure on $\X-\mo$ with non-negative part generated by $\X^{2n,n}$ for any $n \in \Z$. By \cite[Theorem A.1]{BKWX}, $\X \wedge \mgl$ belongs to the non-negative part of this $t$-structure, and so does $\X \wedge \mbp$. On the other hand, one deduces from Theorem \ref{hgmbp} and \cite[Lemma 2.4]{BKWX} that $\X \wedge \mbp$ belongs to the non-positive part too. Hence, $\X \wedge \mbp$ is a homotopy commutative ring spectrum in the heart of the above-mentioned $t$-structure, which means that it is an $E_{\infty}$-ring spectrum.\footnote{I am grateful to Tom Bachmann for this argument.} \end{proof} Once we know that $\X \wedge \mbp$ is a motivic $E_{\infty}$-ring spectrum, we can consider the stable $\infty$-category of $\X \wedge \mbp$-modules and its homotopy category, which is tensor triangulated. In particular, we focus on its cellular part. \begin{prop}\label{gem2} Let $k$ be a flexible field and $Y$ an object in $\X \wedge \mbp-\mo_{cell}$ such that $\pi_{**}(Y)$ is isomorphic to the $\F$-vector space $\bigoplus_{\alpha \in A} \Sigma^{p_{\alpha},q_{\alpha}} \F$. Then, there exists an isomorphism of spectra $$\bigvee_{\alpha \in A} \Sigma^{p_{\alpha},q_{\alpha}} (\X \wedge \mbp) \xrightarrow{\cong} Y.$$ \end{prop} \begin{proof} We follow the lines of the proof of Lemma \ref{gem}. Each generator of $\pi_{**}(Y)$ represents a map $\Sigma^{p_{\alpha},q_{\alpha}} \s \rightarrow Y$. For all $\alpha \in A$, this map corresponds bijectively to a map $\Sigma^{p_{\alpha},q_{\alpha}} (\X \wedge \mbp) \rightarrow Y$ of $\X \wedge \mbp$-cellular modules.
Hence, we get a map $$\bigvee_{\alpha \in A} \Sigma^{p_{\alpha},q_{\alpha}} (\X \wedge \mbp) \rightarrow Y$$ of $\X \wedge \mbp$-cellular modules that induces an isomorphism on homotopy groups since $\pi_{**}(\X \wedge \mbp) \cong \F$ by Theorem \ref{hgmbp}. Therefore, it follows from Proposition \ref{check} that the above map is an isomorphism of spectra, which completes the proof. \end{proof} The previous result implies the following corollary that corresponds to \cite[Corollary 3.3]{GWX}. \begin{cor}\label{gem3} Let $k$ be a flexible field and $X$ and $Y$ be objects in $\X \wedge \mbp-\mo_{cell}$. Then, $$[X,Y]_{\X \wedge \mbp} \cong \Hom ^{0,0}_{\F}(\pi_{**}(X),\pi_{**}(Y)).$$ \end{cor} \begin{proof} It follows from Proposition \ref{gem2} that $X \cong \bigvee_{\alpha \in A} \Sigma^{p_{\alpha},q_{\alpha}} (\X \wedge \mbp)$ and $Y \cong \bigvee_{\beta \in B} \Sigma^{p_{\beta},q_{\beta}} (\X \wedge \mbp)$ for some sets $A$ and $B$. Then, we have that \begin{align*} [X,Y]_{\X \wedge \mbp} & \cong [\bigvee_{\alpha \in A} \Sigma^{p_{\alpha},q_{\alpha}} \s,\bigvee_{\beta \in B} \Sigma^{p_{\beta},q_{\beta}} (\X \wedge \mbp)] \\ & \cong \prod_{\alpha \in A} \bigoplus_{\beta \in B} \pi_{p_{\alpha}-p_{\beta},q_{\alpha}-q_{\beta}}(\X \wedge \mbp)\\ & \cong \prod_{\alpha \in A} \bigoplus_{\beta \in B} \Sigma^{p_{\alpha}-p_{\beta},q_{\alpha}-q_{\beta}}\F\\ & \cong \Hom^{0,0}_{\F}(\bigoplus_{\alpha \in A} \Sigma^{p_{\alpha},q_{\alpha}}\F,\bigoplus_{\beta \in B} \Sigma^{p_{\beta},q_{\beta}}\F)\\ & \cong \Hom ^{0,0}_{\F}(\pi_{**}(X),\pi_{**}(Y)) \end{align*} which concludes the proof. \end{proof} The next theorem, which corresponds to \cite[Theorem 3.8]{GWX}, identifies $\X \wedge \mbp-\mo_{cell}$ with the category of bigraded $\F$-vector spaces that we denote by $\F-\mo_{**}$. \begin{thm}\label{mbpcell} Let $k$ be a flexible field. Then, the functor $$\pi_{**}:\X \wedge \mbp-\mo_{cell} \xrightarrow{\cong} \F-\mo_{**}$$ is an equivalence of categories. 
\end{thm} \begin{proof} It follows immediately from Proposition \ref{gem2} and Corollary \ref{gem3}. \end{proof} \begin{rem} \normalfont We want to highlight that the equivalence provided by the previous theorem is actually an equivalence of triangulated categories, where $\F-\mo_{**}$ is structured as a triangulated category in the obvious way. More precisely, the translation functor is the suspension $\Sigma^{1,0}$ and distinguished triangles are of the form $$V \xrightarrow{f} W \rightarrow \mathrm{coker}(f) \oplus \Sigma^{1,0}\ker(f) \rightarrow \Sigma^{1,0}V$$ where $f$ is a morphism of bigraded $\F$-vector spaces. \end{rem} \section{The category of isotropic cellular spectra} This section is devoted to understanding the structure of the category $\X-\mo_{cell}$, which is, as we have already noticed, the category of cellular isotropic spectra $\SH(k/k)_{cell}$. We give a nice algebraic description of this category based on the dual of the topological Steenrod algebra. The results here are the isotropic versions of the ones in \cite[Sections 4 and 5]{GWX}. Therefore, the proofs we provide are isotropic adaptations of the respective ones in \cite{GWX}. In the next lemma, which corresponds to \cite[Lemma 5.1]{GWX}, we compute the $\mbp$-homology of isotropic $\mbp$-cellular spectra. \begin{lem}\label{inj} Let $k$ be a flexible field. Then, for any $I \in \X \wedge \mbp-\mo_{cell}$ there is an isomorphism of left $\G_{**}$-comodules $$\mbp_{**}(I) \cong \G_{**} \otimes_{\F} \pi_{**}(I).$$ \end{lem} \begin{proof} Since the motivic spectrum $I$ is by hypothesis in $\X \wedge \mbp-\mo_{cell}$, we deduce from Theorem \ref{mbpcell} that $I \cong \bigvee_{\alpha \in A} \Sigma^{p_{\alpha},q_{\alpha}}(\X \wedge \mbp)$ for some set $A$.
Therefore, by Theorem \ref{hgmbp2} one has that \begin{align*} \mbp_{**}(I)=\pi_{**}(\mbp \wedge I) &\cong \pi_{**}(\bigvee_{\alpha \in A} \Sigma^{p_{\alpha},q_{\alpha}}(\X \wedge \mbp \wedge \mbp)) \\ &\cong \bigoplus_{\alpha \in A} \Sigma^{p_{\alpha},q_{\alpha}}\pi_{**}(\X \wedge \mbp \wedge \mbp) \\ &\cong \bigoplus_{\alpha \in A} \Sigma^{p_{\alpha},q_{\alpha}}\G_{**} \cong \G_{**}\otimes_{\F} V \end{align*} where $V\cong \bigoplus_{\alpha \in A} \Sigma^{p_{\alpha},q_{\alpha}} \F$. Now, note that by Theorem \ref{hgmbp} $$\pi_{**}(I) \cong \bigoplus_{\alpha \in A}\Sigma^{p_{\alpha},q_{\alpha}}\pi_{**}(\X \wedge \mbp) \cong V.$$ It follows that $$\mbp_{**}(I) \cong \G_{**} \otimes_{\F} \pi_{**}(I)$$ that is what we aimed to show. \end{proof} The following lemma, which corresponds to \cite[Lemma 5.3]{GWX}, describes algebraically the hom-sets from isotropic cellular spectra to isotropic $\mbp$-cellular spectra. \begin{lem}\label{injiso} Let $k$ be a flexible field. Then, for any $X \in \X-\mo_{cell}$ and $I \in \X \wedge \mbp-\mo_{cell}$ there is an isomorphism $$[X,I] \cong \Hom ^{0,0}_{\G_{**}}(\mbp_{**}(X),\mbp_{**}(I)).$$ \end{lem} \begin{proof} By Theorem \ref{mbpcell} and Lemma \ref{inj}, we have the following sequence of isomorphisms \begin{align*} [X,I] &\cong [\X \wedge \mbp \wedge X,I]_{\X \wedge \mbp} \\ &\cong \Hom ^{0,0}_{\F}(\pi_{**}(\X \wedge \mbp \wedge X),\pi_{**}(I))\\ &\cong \Hom ^{0,0}_{\G_{**}}(\pi_{**}(\X \wedge \mbp \wedge X),\G_{**} \otimes_{\F} \pi_{**}(I)) \\ & \cong \Hom ^{0,0}_{\G_{**}}(\mbp_{**}(X),\mbp_{**}(I)) \end{align*} which concludes the proof. \end{proof} Before constructing the isotropic version of the Adams-Novikov spectral sequence we need the following lemma. \begin{lem}\label{cas} Let $k$ be a flexible field and $Y$ an object in $\X-\mo$. 
Then, for any $s \geq 0$, there exist isomorphisms $$\mbp_{**}((\overline{\X \wedge \mbp})^{\wedge s} \wedge Y) \cong \Sigma^{-s,0} \overline{\G_{**}}^{\otimes s} \otimes_{\F} \mbp_{**}(Y)$$ and $$\mbp_{**}(\X \wedge \mbp\wedge (\overline{\X \wedge \mbp})^{\wedge s} \wedge Y) \cong \Sigma^{-s,0} \G_{**} \otimes_{\F} \overline{\G_{**}}^{\otimes s} \otimes_{\F} \mbp_{**}(Y).$$ \end{lem} \begin{proof} First, note that by arguments similar to the ones in Lemma \ref{kun} we have an isomorphism $$\mbp_{**}(\X \wedge \mbp\wedge (\overline{\X \wedge \mbp})^{\wedge s} \wedge Y) \cong \G_{**} \otimes_{\F} \mbp_{**}((\overline{\X \wedge \mbp})^{\wedge s} \wedge Y) $$ for any isotropic spectrum $Y$ and any $s \geq 0$. So, we only need to prove the first part of the statement. We achieve this by an induction argument, after noting that obviously the statement holds for $s=0$. Now, suppose the statement holds for $s-1$, i.e. $$\mbp_{**}( (\overline{\X \wedge \mbp})^{\wedge s-1} \wedge Y) \cong \Sigma^{1-s,0} \overline{\G_{**}}^{\otimes s-1} \otimes_{\F} \mbp_{**}(Y)$$ and $$\mbp_{**}(\X \wedge \mbp\wedge (\overline{\X \wedge \mbp})^{\wedge s-1} \wedge Y) \cong \Sigma^{1-s,0} \G_{**} \otimes_{\F} \overline{\G_{**}}^{\otimes s-1} \otimes_{\F} \mbp_{**}(Y).$$ Then, the distinguished triangle in $\SH(k)$ $$(\overline{\X \wedge \mbp})^{\wedge s} \wedge Y \rightarrow (\overline{\X \wedge \mbp})^{\wedge s-1}\wedge Y \rightarrow \X \wedge \mbp\wedge (\overline{\X \wedge \mbp})^{\wedge s-1} \wedge Y\rightarrow \Sigma^{1,0}(\overline{\X \wedge \mbp})^{\wedge s}\wedge Y$$ induces in $\mbp$-homology the short exact sequence $$0 \rightarrow \Sigma^{1-s,0} \overline{\G_{**}}^{\otimes s-1} \otimes_{\F} \mbp_{**}(Y) \rightarrow \Sigma^{1-s,0} \G_{**} \otimes_{\F} \overline{\G_{**}}^{\otimes s-1} \otimes_{\F} \mbp_{**}(Y)$$ $$ \rightarrow \Sigma^{1,0} \mbp_{**}((\overline{\X \wedge \mbp})^{\wedge s} \wedge Y) \rightarrow 0.$$ It follows that $$\mbp_{**}((\overline{\X \wedge \mbp})^{\wedge s} 
\wedge Y) \cong \Sigma^{-s,0} \overline{\G_{**}}^{\otimes s} \otimes_{\F} \mbp_{**}(Y)$$ and $$\mbp_{**}(\X \wedge \mbp\wedge (\overline{\X \wedge \mbp})^{\wedge s} \wedge Y) \cong \Sigma^{-s,0} \G_{**} \otimes_{\F} \overline{\G_{**}}^{\otimes s} \otimes_{\F} \mbp_{**}(Y)$$ which completes the proof. \end{proof} We are now ready to construct the isotropic Adams-Novikov spectral sequence, which corresponds to \cite[Theorem 5.6]{GWX}. Before proceeding, we would like to fix some notation. \begin{dfn}\label{dc} \normalfont Let $X$ be an isotropic spectrum. The Chow-Novikov degree of $\mbp_{p,q}(X)$ is the integer $p-2q$. We denote by $\X-\mo^b_{cell}$ the category of bounded isotropic cellular spectra, i.e. isotropic cellular spectra whose $\mbp$-homology is non-trivial only for a finite number of Chow-Novikov degrees. \end{dfn} \begin{thm} Let $k$ be a flexible field and $X$ and $Y$ objects in $\X-\mo^b_{cell}$. Then, there is a strongly convergent spectral sequence $$E_2^{s,t,u} \cong \Ext^{s,t,u}_{\G_{**}}(\mbp_{**}(X),\mbp_{**}(Y)) \Longrightarrow [\Sigma^{t-s,u}X,Y^{\wedge}_{\hz}].$$ \end{thm} \begin{proof} Consider the Postnikov system in $\X-\mo_{cell}$ $$ \xymatrix{ \dots \ar@{->}[r] & (\overline{\X \wedge \mbp})^{\wedge s} \wedge Y \ar@{->}[r] \ar@{->}[d] & \dots \ar@{->}[r] & \overline{\X \wedge \mbp} \wedge Y \ar@{->}[r] \ar@{->}[d] & Y \ar@{->}[d] \\ & \X \wedge \mbp\wedge (\overline{\X \wedge \mbp})^{\wedge s} \wedge Y \ar@{->}[ul]^{[1]} & & \X \wedge \mbp \wedge \overline{\X \wedge \mbp} \wedge Y \ar@{->}[ul]^{[1]} & \X \wedge \mbp \wedge Y \ar@{->}[ul]^{[1]} } $$ where $\overline{\X \wedge \mbp}$ is defined by the following distinguished triangle in $\SH(k)$ $$\overline{\X \wedge \mbp} \rightarrow \s \rightarrow \X \wedge \mbp \rightarrow \Sigma^{1,0} \overline{\X \wedge \mbp}.$$ If we apply the functor $[\Sigma^{**}X,-]$ we get an unrolled exact couple $$ \xymatrix{ \dots \ar@{->}[r] & [\Sigma^{**}X,\overline{\X \wedge \mbp} \wedge Y] \ar@{->}[r]
\ar@{->}[d] & [\Sigma^{**}X,Y] \ar@{->}[d] \\ & [\Sigma^{**}X,\X \wedge \mbp \wedge \overline{\X \wedge \mbp} \wedge Y] \ar@{->}[ul]^{[1]} & [\Sigma^{**}X,\X \wedge \mbp \wedge Y] \ar@{->}[ul]^{[1]} } $$ that induces a spectral sequence with $E_1$-page given by $$E_1^{s,t,u} \cong [\Sigma^{t-s,u}X,\X \wedge \mbp\wedge (\overline{\X \wedge \mbp})^{\wedge s} \wedge Y]$$ and first differential $$d_1^{s,t,u}:E_1^{s,t,u} \rightarrow E_1^{s+1,t,u}.$$ This is what we call the isotropic Adams-Novikov spectral sequence. Note that by Lemmas \ref{injiso} and \ref{cas} the $E_1$-page has a nice description provided by $$E_1^{s,t,u} \cong \Hom ^{t,u}_{\G_{**}}(\mbp_{**}(X),\G_{**} \otimes_{\F} \overline{\G_{**}}^{\otimes s} \otimes_{\F} \mbp_{**}(Y)).$$ Hence, the $E_2$-page has the usual description given in terms of $\Ext$-groups of left $\G_{**}$-comodules, i.e. $$E_2^{s,t,u} \cong \Ext^{s,t,u}_{\G_{**}}(\mbp_{**}(X),\mbp_{**}(Y)).$$ For standard formal reasons, this spectral sequence actually converges to the groups $[\Sigma^{t-s,u}X,Y^{\wedge}_{\X \wedge \mbp}]$. We only have to notice that $$Y^{\wedge}_{\X \wedge \mbp} \cong Y^{\wedge}_{\X \wedge \hz} \cong Y^{\wedge}_{\hz}.$$ The second isomorphism follows from the same argument as in the proof of Proposition \ref{conv}. Regarding the first isomorphism, we may consider, following \cite[Section 7.3]{DI}, the bicompletion $Y^{\wedge}_{\{\X \wedge \mbp,\X \wedge \hz\}}$.
This spectrum may be obtained by computing the homotopy limit of the following cosimplicial spectrum $$\xymatrix{(\X \wedge \hz \wedge Y)^{\wedge}_{\X \wedge \mbp} \ar@<-.75ex>[r] \ar@<.75ex>[r] & ((\X \wedge \hz)^{\wedge 2} \wedge Y)^{\wedge}_{\X \wedge \mbp} \ar@<-.75ex>[r] \ar@<0ex>[r] \ar@<.75ex>[r] & ((\X \wedge \hz)^{\wedge 3} \wedge Y)^{\wedge}_{\X \wedge \mbp} \ar@<-.75ex>[r] \ar@<-.25ex>[r] \ar@<.25ex>[r] \ar@<.75ex>[r] & \dots} $$ or, equivalently, by computing the homotopy limit of the following cosimplicial spectrum $$\xymatrix{(\X \wedge \mbp \wedge Y)^{\wedge}_{\X \wedge \hz} \ar@<-.75ex>[r] \ar@<.75ex>[r] & ((\X \wedge \mbp)^{\wedge 2} \wedge Y)^{\wedge}_{\X \wedge \hz} \ar@<-.75ex>[r] \ar@<0ex>[r] \ar@<.75ex>[r] & ((\X \wedge \mbp)^{\wedge 3} \wedge Y)^{\wedge}_{\X \wedge \hz} \ar@<-.75ex>[r] \ar@<-.25ex>[r] \ar@<.25ex>[r] \ar@<.75ex>[r] & \dots.} $$ Since $\hz$ is a motivic $\mbp$-module we have that for any $n$ $$((\X \wedge \hz)^{\wedge n} \wedge Y)^{\wedge}_{\X \wedge \mbp} \cong (\X \wedge \hz)^{\wedge n} \wedge Y$$ from which it follows that the first homotopy limit is just $Y^{\wedge}_{\X \wedge \hz}$. On the other hand, we know that $\X \wedge \mbp$ is $\hz$-complete, thus we get that for any $n$ $$((\X \wedge \mbp)^{\wedge n} \wedge Y)^{\wedge}_{\X \wedge \hz} \cong (\X \wedge \mbp)^{\wedge n} \wedge Y$$ and the second homotopy limit gives back $Y^{\wedge}_{\X \wedge \mbp}$. This implies that $Y^{\wedge}_{\X \wedge \mbp} \cong Y^{\wedge}_{\X \wedge \hz}$. It only remains to prove the strong convergence. The arguments are the same as in \cite[Theorem 3.2]{GWX} and we report them here only for completeness. First, suppose that $\mbp_{**}(X)$ is concentrated in Chow-Novikov degrees $[a,b]$ and $\mbp_{**}(Y)$ is concentrated in Chow-Novikov degrees $[c,d]$. Therefore, the $E_1$-page and so all the following pages are trivial outside the range $c-b+2u \leq t \leq d-a+2u$. 
Now, note that the differential on the $E_r$-page has, as usual, the tridegree $(r,r-1,0)$, which means in particular that it is trivial when $r-1 > d-a-c+b$. This amounts to saying that the spectral sequence collapses at the $E_{d-a-c+b+2}$-page, and so it is strongly convergent, which completes the proof. \end{proof} \begin{dfn} \normalfont Let $\X-\mo_{cell,\hz}$ be the full triangulated subcategory of $\X-\mo_{cell}$ consisting of $\hz$-complete cellular isotropic spectra. Denote by $\X-\mo^{b,\geq 0}_{cell,\hz}$ the full subcategory of $\X-\mo^{b}_{cell,\hz}$ whose objects have $\mbp$-homology concentrated in non-negative Chow-Novikov degrees and by $\X-\mo^{b,\leq 0}_{cell,\hz}$ the full subcategory of $\X-\mo^{b}_{cell,\hz}$ whose objects have $\mbp$-homology concentrated in non-positive Chow-Novikov degrees. Finally, let $\X-\mo^{\heartsuit}_{cell,\hz}$ be the full subcategory whose objects are both in $\X-\mo^{b,\geq 0}_{cell,\hz}$ and in $\X-\mo^{b,\leq 0}_{cell,\hz}$, i.e. have $\mbp$-homology concentrated in Chow-Novikov degree $0$. \end{dfn} We want to point out that, since $\X \wedge \hz$ is a $\X \wedge \mbp$-module and $\X \wedge \mbp$ is $\X \wedge \hz$-complete, the subcategories of $\hz$-complete and $\mbp$-complete isotropic spectra coincide. The next corollary, which corresponds to \cite[Corollary 4.7]{GWX}, computes hom-sets from $\X-\mo^{b,\geq 0}_{cell,\hz}$ to $\X-\mo^{b,\leq 0}_{cell,\hz}$ in algebraic terms. \begin{cor}\label{hom} Let $k$ be a flexible field, $X$ an object in $\X-\mo^{b,\geq 0}_{cell,\hz}$ and $Y$ in $\X-\mo^{b,\leq 0}_{cell,\hz}$.
Then, the functor $\mbp_{**}$ provides an isomorphism $$[X,Y] \cong \Hom ^{0,0}_{\G_{**}}(\mbp_{**}(X),\mbp_{**}(Y)).$$ \end{cor} \begin{proof} As we have already pointed out, the $E_1$-page of the isotropic Adams-Novikov spectral sequence is given by $$E_1^{s,t,u} \cong \Hom ^{t,u}_{\G_{**}}(\mbp_{**}(X),\G_{**} \otimes_{\F} \overline{\G_{**}}^{\otimes s} \otimes_{\F} \mbp_{**}(Y)).$$ Since we are interested in the group $[X,Y]$, the part of the $E_1$-page that is involved consists of the groups in tridegrees $(t,t,0)$. By hypothesis, $X$ is in $\X-\mo^{b,\geq 0}_{cell,\hz}$ while $Y$ is in $\X-\mo^{b,\leq 0}_{cell,\hz}$, which implies that, among these groups, only $E_1^{0,0,0}$ is non-trivial. Since in this tridegree all differentials from the second on are trivial by degree reasons, we have that $$[X,Y] \cong E_2^{0,0,0} \cong \Ext^{0,0,0}_{\G_{**}}(\mbp_{**}(X),\mbp_{**}(Y)) \cong \Hom ^{0,0}_{\G_{**}}(\mbp_{**}(X),\mbp_{**}(Y))$$ which completes the proof. \end{proof} By using the isotropic Adams-Novikov spectral sequence we also get the following corollary that corresponds to \cite[Corollary 4.8]{GWX} and is a generalisation of \cite[Theorem 5.7]{T}. \begin{cor}\label{morcor} Let $k$ be a flexible field and $X$ and $Y$ objects in $\X-\mo^{\heartsuit}_{cell,\hz}$. Then, there is an isomorphism $$[\Sigma^{t,u}X,Y] \cong \Ext^{2u-t,2u,u}_{\G_{**}}(\mbp_{**}(X),\mbp_{**}(Y)).$$ \end{cor} \begin{proof} This follows immediately by noticing that the differentials $d_r^{s,t,u}:E_r^{s,t,u} \rightarrow E_r^{s+r,t+r-1,u}$ of the isotropic Adams-Novikov spectral sequence are trivial for $r \geq 2$ since $E_2^{s,t,u}$ is trivial for $t \neq 2u$. Hence, the spectral sequence is strongly convergent and collapses at the second page, from which we get that $$[\Sigma^{t,u}X,Y] \cong E_2^{2u-t,2u,u} \cong \Ext^{2u-t,2u,u}_{\G_{**}}(\mbp_{**}(X),\mbp_{**}(Y))$$ that is what we wanted to show. 
\end{proof} Before proceeding, we also need the following lemma which essentially corresponds to \cite[Lemma 4.10]{GWX}. \begin{lem}\label{ext} Let $k$ be a flexible field and $M$ a $\G_{**}$-comodule concentrated in Chow-Novikov degree 0 which is finitely generated as an $\F$-vector space. Then, there exists an object $X$ in $\X-\mo^{\heartsuit}_{cell,\hz}$ such that $M \cong \mbp_{**}(X)$. \end{lem} \begin{proof} Since by hypothesis $M$ is a finite-dimensional $\F$-vector space, by \cite[Theorem 3.3]{L} one has a finite filtration of subcomodules $$0 \cong M_0 \subset M_1 \subset \dots \subset M_n \cong M$$ such that, for any $i$, the quotient $M_i/M_{i-1}$ is stably isomorphic to $\F$, i.e. $M_i/M_{i-1} \cong \Sigma^{2q_i,q_i} \F$ for some integer $q_i$. We want to prove the statement by induction on $i$. First, note that by Theorem \ref{hgmbp} the comodule $\Sigma^{2q_i,q_i} \F$ is the $\mbp$-homology of the isotropic spectrum $\Sigma^{2q_i,q_i} \X^{\wedge}_{\hz}$ for any $i$. Now, suppose that there exists an object $X_{i-1}$ in $\X-\mo^{\heartsuit}_{cell,\hz}$ such that $M_{i-1} \cong \mbp_{**}(X_{i-1})$. Then, the short exact sequence $$0 \rightarrow M_{i-1} \rightarrow M_i \rightarrow \Sigma^{2q_i,q_i} \F \rightarrow 0$$ represents an element of $\Ext_{\G_{**}}^{1,0,0}(\Sigma^{2q_i,q_i} \F,M_{i-1})$, namely a morphism $f_i$ in $[\Sigma^{2q_i-1,q_i}\X^{\wedge}_{\hz},X_{i-1}]$ by Corollary \ref{morcor}. Let us define $X_i$ as $Cone(f_i)$. 
Then, we have a long exact sequence in $\mbp$-homology $$\dots \rightarrow \Sigma^{2q_i-1,q_i} \F \xrightarrow{0} M_{i-1} \rightarrow \mbp_{**}(X_i) \rightarrow \Sigma^{2q_i,q_i} \F \xrightarrow{0} \Sigma^{1,0}M_{i-1} \rightarrow \dots .$$ Note that the connecting homomorphism $$g_{i*}:\Ext^{0,0,0}( \Sigma^{2q_i,q_i} \F, \Sigma^{2q_i,q_i} \F) \rightarrow \Ext^{1,0,0}( \Sigma^{2q_i,q_i} \F,M_{i-1})$$ described as the Yoneda product with the element $g_i$ of $\Ext_{\G_{**}}^{1,0,0}(\Sigma^{2q_i,q_i} \F,M_{i-1})$ corresponding to the short exact sequence $$0 \rightarrow M_{i-1} \rightarrow \mbp_{**}(X_i) \rightarrow \Sigma^{2q_i,q_i} \F \rightarrow 0$$ converges to the map $$f_{i*}: [\Sigma^{2q_i-1,q_i}\X^{\wedge}_{\hz} ,\Sigma^{2q_i-1,q_i}\X^{\wedge}_{\hz} ] \rightarrow [\Sigma^{2q_i-1,q_i}\X^{\wedge}_{\hz},X_{i-1}]$$ induced by $f_i$ in isotropic homotopy groups (see \cite[Theorem 2.3.4]{Ra}). By Corollary \ref{morcor} the isotropic Adams-Novikov spectral sequence collapses at the second page, so $g_{i*}=f_{i*}$. It follows that the extensions $g_i$ and $f_i$ coincide, which implies that $\mbp_{**}(X_i) \cong M_i$, as we wanted to prove. \end{proof} The next result is the isotropic equivalent of \cite[Lemma 4.2]{GWX}. \begin{lem} Let $k$ be a flexible field and $X_{\alpha}$ be a filtered system in $\X-\mo^{\heartsuit}_{cell,\hz}$. Then the colimit $\mathrm{colim} \: X_{\alpha}$ in $\X-\mo_{cell}$ also belongs to $\X-\mo^{\heartsuit}_{cell,\hz}$. \end{lem} \begin{proof} First, note that, since $\mbp_{**}(\mathrm{colim} \: X_{\alpha}) \cong \mathrm{colim} \: \mbp_{**}(X_{\alpha})$, $\mathrm{colim} \: X_{\alpha}$ has $\mbp$-homology concentrated in Chow-Novikov degree 0. Moreover, recall from \cite[Corollary A1.2.12]{Ra} that $\Ext_{\G_{**}}(\F,-)$ may be computed as the homology of the cobar complex in the second variable. Since the cobar complex preserves filtered colimits, so does $\Ext_{\G_{**}}(\F,-)$.
Then, Corollary \ref{morcor} implies that \begin{align*} \pi_{t,u}(\mathrm{colim} \: X_{\alpha}) &\cong \mathrm{colim} \: \pi_{t,u}(X_{\alpha})\\ & \cong \mathrm{colim}\: \Ext^{2u-t,2u,u}_{\G_{**}}(\F,\mbp_{**}(X_{\alpha}))\\ & \cong \Ext^{2u-t,2u,u}_{\G_{**}}(\F,\mathrm{colim} \: \mbp_{**}(X_{\alpha}))\\ & \cong \Ext^{2u-t,2u,u}_{\G_{**}}(\F, \mbp_{**}(\mathrm{colim} \: X_{\alpha}))\\ & \cong \pi_{t,u}((\mathrm{colim} \: X_{\alpha})^{\wedge}_{\hz}) \end{align*} from which it follows that $\mathrm{colim} \: X_{\alpha}$ is $\hz$-complete, which concludes the proof. \end{proof} We are now ready to identify $\X-\mo^{\heartsuit}_{cell,\hz}$ with the abelian category of left $\G_{**}$-comodules concentrated in Chow-Novikov degree 0, which we denote by $\G_{**}-\com^0_{**}$. The following proposition is an isotropic version of \cite[Proposition 4.11]{GWX}. \begin{prop}\label{heart} Let $k$ be a flexible field. Then, the functor $$\mbp_{**}:\X-\mo^{\heartsuit}_{cell,\hz} \xrightarrow{\cong} \G_{**}-\com^0_{**}$$ is an equivalence of categories. \end{prop} \begin{proof} First, note that Corollary \ref{hom} guarantees that the functor $\mbp_{**}$ is fully faithful. We just need to show that it is essentially surjective. Recall from \cite[Propositions 1.4.10, 1.4.4 and 1.4.1]{Ho} that any left $\G_{**}$-comodule $M$ is a filtered colimit of comodules $M_{\alpha}$ which are finitely generated as $\F$-vector spaces. By Lemma \ref{ext} all $M_{\alpha}$ are expressible as $\mbp_{**}(X_{\alpha})$ for some $X_{\alpha}$ in $\X-\mo^{\heartsuit}_{cell,\hz}$. Therefore, we have that $M \cong \mbp_{**}(X)$ where $X=\mathrm{colim} \: X_{\alpha}$, which is what we aimed to show. \end{proof} \begin{rem} \normalfont Note that $\G_{**}-\com^0_{**}$ is equivalent to the category of left $\A_{*}$-comodules, where $\A_{*}$ is the dual of the topological Steenrod algebra.
Hence, the previous result can be rephrased by saying that $\X-\mo^{\heartsuit}_{cell,\hz}$ is equivalent to the abelian category of left $\A_*$-comodules. \end{rem} The next proposition, which corresponds to \cite[Proposition 4.12]{GWX}, provides $\X-\mo^b_{cell,\hz}$ with a $t$-structure. \begin{prop}\label{tri} Let $k$ be a flexible field. Then, the pair $(\X-\mo^{b,\geq 0}_{cell,\hz},\X-\mo^{b,\leq 0}_{cell,\hz})$ defines a bounded $t$-structure on $\X-\mo^b_{cell,\hz}$. \end{prop} \begin{proof} Just by the definition of $\X-\mo^{b,\geq 0}_{cell,\hz}$ and $\X-\mo^{b,\leq 0}_{cell,\hz}$ we know that the first is closed under suspensions, the second under desuspensions and both under extensions. Moreover, clearly $$\X-\mo^{b}_{cell,\hz} = \bigcup_{n \in \Z} \X-\mo^{b,\geq n}_{cell,\hz}$$ where $\X-\mo^{b,\geq n}_{cell,\hz}$ is the $n$-th suspension of $\X-\mo^{b,\geq 0}_{cell,\hz}$. Now, consider an object $X$ in $\X-\mo^{b,\geq 0}_{cell,\hz}$ and an object $Y$ in $\X-\mo^{b,\leq -1}_{cell,\hz}$, i.e. the first desuspension of $\X-\mo^{b,\leq 0}_{cell,\hz}$. Then, by Corollary \ref{hom} $$[X,Y] \cong \Hom ^{0,0}_{\G_{**}}(\mbp_{**}(X),\mbp_{**}(Y)) \cong 0$$ since $\mbp_{**}(X)$ is concentrated in non-negative Chow-Novikov degrees while $\mbp_{**}(Y)$ is concentrated in negative Chow-Novikov degrees. Finally, let $X$ be an object in $\X-\mo^{b,\geq 0}_{cell,\hz}$; then $\mbp_{**}(X)$ is concentrated in non-negative Chow-Novikov degrees. Consider the projection $\mbp_{**}(X) \rightarrow \mbp_{**}(X)_0$ that kills all the elements in positive Chow-Novikov degrees, and note that there exists an object $X_0$ in $\X-\mo^{\heartsuit}_{cell,\hz}$ such that $\mbp_{**}(X_0) \cong \mbp_{**}(X)_0$. Now, by Corollary \ref{hom} this morphism comes from a map $f:X \rightarrow X_0$ such that $\Sigma^{-1,0}Cone(f)$ belongs to $\X-\mo^{b,\geq 1}_{cell,\hz}$.
Therefore, by \cite[Proposition 3.6]{GWX}, the pair $(\X-\mo^{b,\geq 0}_{cell,\hz},\X-\mo^{b,\leq 0}_{cell,\hz})$ defines a bounded $t$-structure on $\X-\mo^b_{cell,\hz}$, which is what we aimed to prove. \end{proof} We are now ready to prove the main result of this section, which corresponds to \cite[Theorem 4.13]{GWX}. In this theorem we identify $\X-\mo^b_{cell,\hz}$ with the derived category of left $\G_{**}$-comodules concentrated in Chow-Novikov degree 0. \begin{thm}\label{main} Let $k$ be a flexible field. Then, there exists a $t$-exact equivalence of stable $\infty$-categories $$\D^b(\G_{**}-\com^0_{**}) \xrightarrow{\cong} \X-\mo^b_{cell,\hz}.$$ \end{thm} \begin{proof} First, note that, by Propositions \ref{heart} and \ref{tri}, $(\X-\mo^{b,\geq 0}_{cell,\hz},\X-\mo^{b,\leq 0}_{cell,\hz})$ defines a bounded $t$-structure on $\X-\mo^b_{cell,\hz}$ whose heart is equivalent to the category of left $\G_{**}$-comodules concentrated in Chow-Novikov degree 0, and so has enough injectives. Now, let $X$ and $Y$ be objects in $\X-\mo^{\heartsuit}_{cell,\hz}$ such that $\mbp_{**}(Y)$ is an injective $\G_{**}$-comodule. Then, in this case the isotropic Adams-Novikov spectral sequence $$E_2^{s,t,u} \cong \Ext^{s,t,u}_{\G_{**}}(\mbp_{**}(X),\mbp_{**}(Y)) \Longrightarrow [\Sigma^{t-s,u}X,Y]$$ collapses at the second page since the $E_2$-page is trivial for $s \neq 0$. Hence, we have that $$[\Sigma^{-i,0}X,Y] \cong \Ext^{0,-i,0}_{\G_{**}}(\mbp_{**}(X),\mbp_{**}(Y)) \cong \Hom ^{-i,0}_{\G_{**}}(\mbp_{**}(X),\mbp_{**}(Y)) \cong 0$$ for any $i>0$ since both $\mbp_{**}(X)$ and $\mbp_{**}(Y)$ are concentrated in Chow-Novikov degree $0$. It follows by \cite[Proposition 2.12]{GWX}, which is based on Lurie's recognition criterion \cite[Proposition 1.3.3.7]{Lu}, that there exists a $t$-exact equivalence of stable $\infty$-categories $$\D^b(\G_{**}-\com^0_{**}) \xrightarrow{\cong} \X-\mo^b_{cell,\hz}$$ extending the equivalence on the hearts, which completes the proof.
\end{proof} \begin{rem} \normalfont We point out that, given the identification $\G_{**} \cong \A_*$, the last theorem identifies as triangulated categories the category of bounded isotropic $\hz$-complete cellular spectra with the derived category of left $\A_*$-comodules, namely $\D^b(\A_{*}-\com_*).$ \end{rem} By using the same argument as in \cite[Corollary 1.2]{GWX} one is able to obtain an unbounded version of the previous theorem identifying the whole $\X^{\wedge}_{\hz}-\mo_{cell}$ with Hovey's unbounded derived category $\st(\G_{**}-\com^0_{**})$, which is the same as $\st(\A_{*}-\com_{*})$ (see \cite[Section 6]{Ho}). \begin{cor} Let $k$ be a flexible field. Then, there exists an equivalence of stable $\infty$-categories $$ \X^{\wedge}_{\hz}-\mo_{cell} \cong \st(\G_{**}-\com^0_{**}).$$ \end{cor} \section{The category of isotropic Tate motives} We conclude by applying the previous results to obtain information on the category of isotropic Tate motives $\DM(k/k)_{Tate}$. In particular, we get an easy algebraic description of the hom-sets in $\DM(k/k)_{Tate}$ between motives of isotropic cellular spectra. First, we prove the following lemma, which tells us that the isotropic motivic homology of an isotropic spectrum is always a free $H_{**}(k/k)$-module. \begin{lem}\label{hiso} Let $k$ be a flexible field and $X$ an object in $\X-\mo$. Then, there exists an isomorphism of left $H_{**}(k/k)$-modules $$H^{iso}_{**}(X) \cong H_{**}(k/k) \otimes_{\F} \mbp_{**}(X).$$ \end{lem} \begin{proof} The Hopkins-Morel equivalence (see \cite[Theorem 7.12]{H}) implies in particular that $\hz$ is a quotient spectrum of $\mbp$. It follows that $\hz$ can be obtained from $\mbp$ by applying cones and homotopy colimits, and so it is an $\mbp$-cellular module, from which we get by Theorem \ref{mbpcell} that $$\X \wedge \hz \cong \bigvee_{\alpha \in A} \Sigma^{p_{\alpha},q_{\alpha}}(\X \wedge \mbp)$$ for some set $A$.
Now, note that by Theorem \ref{hgmbp} \begin{align*} H_{**}(k/k)& \cong \pi_{**}(\X \wedge \hz)\\ & \cong \pi_{**}(\bigvee_{\alpha \in A}\Sigma^{p_{\alpha},q_{\alpha}}(\X \wedge \mbp))\\ & \cong \bigoplus_{\alpha \in A}\Sigma^{p_{\alpha},q_{\alpha}}\pi_{**}(\X \wedge \mbp)\\ & \cong \bigoplus_{\alpha \in A} \Sigma^{p_{\alpha},q_{\alpha}} \F. \end{align*} At this point, let $X$ be an object in $\X-\mo$. Then, \begin{align*} H^{iso}_{**}(X) &\cong \pi_{**}(\X \wedge \hz \wedge X)\\ & \cong \pi_{**}(\bigvee_{\alpha \in A}\Sigma^{p_{\alpha},q_{\alpha}}(\X \wedge \mbp \wedge X))\\ & \cong \bigoplus_{\alpha \in A}\Sigma^{p_{\alpha},q_{\alpha}}\pi_{**}(\X \wedge \mbp \wedge X) \\ & \cong \bigoplus_{\alpha \in A} \Sigma^{p_{\alpha},q_{\alpha}} \mbp_{**}(X) \\ &\cong H_{**}(k/k) \otimes_{\F} \mbp_{**}(X) \end{align*} that finishes the proof. \end{proof} In the next proposition we compute hom-sets in the isotropic triangulated category of motives between motives of isotropic cellular spectra. They happen to be isomorphic to hom-sets of left $H_{**}(k/k)$-modules between the respective isotropic homology. \begin{prop} Let $k$ be a flexible field and $X$ and $Y$ objects in $\X-\mo_{cell}$. Then, there exists an isomorphism $$\Hom _{\DM(k/k)_{Tate}}({\mathrm M}(X),{\mathrm M}(Y)) \cong \Hom _{H_{**}(k/k)}(H^{iso}_{**}(X), H^{iso}_{**}(Y)).$$ \end{prop} \begin{proof} Consider the functor $$H^{iso}_{**}: \DM(k/k)_{Tate} \rightarrow H_{**}(k/k)-\mo_{**}$$ which sends each isotropic Tate motive to the respective isotropic motivic homology and let $X$ and $Y$ be motivic spectra in $\X-\mo_{cell}$. 
Then, by Theorem \ref{mbpcell}, Lemma \ref{hiso} and \cite[Proposition 2.4]{T} we have that \begin{align*} \Hom _{\DM(k/k)_{Tate}}({\mathrm M}(X),{\mathrm M}(Y)) &\cong [X,\X \wedge \hz \wedge Y] \\ &\cong [\X \wedge \mbp \wedge X,\X \wedge \hz \wedge Y]_{\X \wedge \mbp} \\ &\cong \Hom _{\F}(\pi_{**}(\X \wedge \mbp \wedge X), \pi_{**}(\X \wedge \hz \wedge Y))\\ &\cong \Hom _{\F}(\mbp_{**}(X), H_{**}^{iso}(Y))\\ &\cong \Hom _{H_{**}(k/k)}(H_{**}(k/k) \otimes_{\F} \mbp_{**}(X), H_{**}^{iso}(Y))\\ &\cong \Hom _{H_{**}(k/k)}(H^{iso}_{**}(X), H^{iso}_{**}(Y)) \end{align*} which completes the proof. \end{proof} \begin{rem} \normalfont The last result suggests that isotropic Tate motives coming from $\SH(k/k)_{cell}$ are very special, in the sense that hom-sets in $\DM(k/k)_{Tate}$ between them are described simply in terms of hom-sets of free $H_{**}(k/k)$-modules. This property does not hold in general, so the next task should be to understand hom-sets in $\DM(k/k)_{Tate}$ between general isotropic Tate motives and try to describe them in algebraic terms. Unfortunately, since $H_{**}(k/k)$ is not concentrated in Chow-Novikov degree 0, the strategy used in \cite{GWX} and adapted in Sections 7 and 8 of this paper does not immediately apply. Hence, some new ideas are needed, and the hope is to develop them in future work. \end{rem} \footnotesize{
\section{Introduction and Statement of Main Results} A Calder\'on--Zygmund operator associated to a kernel $K(x,y)$ is an integral operator: $$ \mathbf{T}f(x):=\int_{\mathbb{R}^n} K(x,y)f(y)\,dy,\quad x\notin\textnormal{supp} f, $$ where the kernel satisfies the standard size and smoothness estimates \begin{gather*} \left\vert K(x,y)\right\vert \leq \frac{C}{\left\vert x-y\right\vert^n}, \\ \left\vert K(x+h,y)-K(x,y)\right\vert +\left\vert K(x,y+h)-K(x,y)\right\vert \leq C\frac{\left\vert h\right\vert^{\delta}}{\left\vert x-y\right\vert^{n+\delta}}, \end{gather*} for all $\left\vert x-y\right\vert>2\left\vert h\right\vert>0$ and a fixed $\delta\in (0,1]$. The prototypes for this important class of operators are the Hilbert transform, in the one-dimensional case, and the Riesz transforms, in the multidimensional case. Recall that the commutator $[S, T]$ of two operators $S$ and $T$ is defined as $[S, T] \defeq ST - TS.$ We are interested in commutators of multiplication by a symbol $b$ with Calder{\'o}n-Zygmund operators $\mathbf{T}$, denoted $[b, \mathbf{T}]$ and defined as: $$[b, \mathbf{T}]f \defeq b\mathbf{T}f - \mathbf{T}(bf).$$ In the foundational paper \cite{CRW} Coifman, Rochberg, and Weiss provided a connection between the norm of the commutator $[b,\mathbf{T}]:L^p(\mathbb{R}^n)\to L^p(\mathbb{R}^n)$ and the norm of the function $b$ in $BMO$. This result was later extended to the case when the commutator acts between two different weighted Lebesgue spaces $L^p(\lambda):=L^p(\mathbb{R}^n;\lambda)$ and $L^p(\mu):=L^p(\mathbb{R}^n;\mu)$. In 1985, Bloom \cite{Bloom} showed that, if $\mu$ and $\lambda$ are $A_p$ weights, then $\| [b, H]: L^p(\mu) \to L^p(\lambda) \|$ is equivalent to $\|b\|_{BMO(\nu)}$, where $H$ is the Hilbert transform and $BMO(\nu)$ is the weighted BMO space associated with the weight $\nu = \mu^{1/p}\lambda^{-1/p}$. 
Here, $$\|b\|_{BMO(\nu)} \defeq \sup_Q \frac{1}{\nu(Q)} \int_Q |b - \left\langle b\right\rangle _Q|\,dx < \infty,$$ where $\nu(Q) = \int_Q \,d\nu$, and the supremum is over all cubes $Q$. When there is no weight involved, we will simply denote this space by $BMO$, which is the classical space of functions with bounded mean oscillation. A new dyadic proof of Bloom's result was given in \cite{HLW1}. This was then generalized to all Calder{\'o}n-Zygmund operators in \cite{HLW2}, where one of the main results is: \begin{thm} \label{T:CZOComm1} Let $\mathbf{T}$ be a Calder{\'o}n-Zygmund operator on $\mathbb{R}^n$ and $\mu, \lambda \in A_p$ with $1<p<\infty$. Suppose $b \in BMO(\nu)$, where $\nu = \mu^{\frac{1}{p}} \lambda^{-\frac{1}{p}}$. Then \begin{equation*} \| [b, \mathbf{T}]: L^p(\mu) \to L^p(\lambda) \| \leq c \|b\|_{BMO(\nu)}, \end{equation*} where $c$ is a constant depending on the dimension $n$, the operator $\mathbf{T}$, and $\mu$, $\lambda$, and $p$. \end{thm} A natural extension of this is to consider higher iterates of this commutator. To see how these arise naturally, we follow an argument of Coifman, Rochberg, and Weiss, \cite{CRW}. For $b\in BMO$ and $r$ sufficiently small, consider the operator: $$ S_r(f)=e^{rb}\mathbf{T}(e^{-rb}f). $$ Then it is easy to see that $\left.\frac{d}{dr}S_r(f)\right\vert_{r=0}=[b,\mathbf{T}](f)$ and similarly that we have $\left.\frac{d^n}{dr^n}S_r(f)\right\vert_{r=0}=[b, \ldots, \big[b, [b, \mathbf{T}]\big] \ldots](f)$ with the function $b$ appearing $n$ times. For some Calder\'on--Zygmund operator $\mathbf{T}$, let $C_b^1(\mathbf{T}) \defeq [b, \mathbf{T}]$, and $$C_b^k(\mathbf{T}) \defeq [b, C_b^{k-1}(\mathbf{T})] \text{, for all integers } k \geq 2.$$ Using weighted theory and the connection between the space $BMO$ and $A_2$ weights, it is then easy to see that the norm of the operator $C_b^k(\mathbf{T})$ on $L^2(\mathbb{R}^n)$ depends on the number of iterates and the norm of the function $b\in BMO$.
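To make the first of these identities explicit, the product rule gives $$\frac{d}{dr}S_r(f)=b\,e^{rb}\mathbf{T}(e^{-rb}f)-e^{rb}\mathbf{T}\big(b\,e^{-rb}f\big),$$ which at $r=0$ is exactly $b\mathbf{T}f-\mathbf{T}(bf)=[b,\mathbf{T}](f)$; iterating the same computation produces the higher commutators as the higher derivatives of $S_r$.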
At this point, a few natural questions arise: (1) What is the norm of the $k$th iterate as a function of the norm of $b\in BMO$? (2) What happens if we attempt to compute the norm of this operator when it acts on $L^p(\mathbb{R}^n;w)$ for a weight $w\in A_p$? (3) Is there an extension of Theorem \ref{T:CZOComm1} for the iterates? In the paper \cite{ChungPereyraPerez} Chung, Pereyra, and Perez provide answers to questions (1) and (2) and show that: $$ \left\| C_b^k(\mathbf{T}):L^2(w)\to L^2(w)\right\| \leq c \|b\|_{BMO}^k [w]_{A_2}^{k+1}, $$ where $c$ is a constant depending on $n$, $k$ and $\mathbf{T}$. In fact, they show that, more generally, if $\mathbf{T}$ is any operator bounded on $L^2(\mathbb{R}^n;w)$ with norm $\varphi([w]_{A_2})$ then $$ \left\Vert C_b^k(\mathbf{T}):L^2(w)\to L^2(w)\right\Vert\leq \varphi([w]_{A_2})[w]_{A_2}^{k}\left\Vert b\right\Vert_{BMO}^{k}. $$ However, the two-weight extension of Theorem \ref{T:CZOComm1} lies outside the scope of the results in \cite{ChungPereyraPerez}. Additionally, in \cite[pg. 1166]{ChungPereyraPerez} they ask if it is possible to provide a proof of the norm of the iterates of commutators with Calder\'on--Zygmund operators via the methods of dyadic analysis. The main goal of this paper is to extend Theorem \ref{T:CZOComm1} to the case of iterates, addressing question (3), and in the process show how to answer the question raised in \cite{ChungPereyraPerez}. This leads to the main result of the paper: \begin{thm} \label{T:Main} Let $\mathbf{T}$ be a Calder{\'o}n-Zygmund operator on $\mathbb{R}^n$ and $\mu, \lambda \in A_p$ with $1<p<\infty$. Suppose $b \in BMO \cap BMO(\nu)$, where $\nu = \mu^{\frac{1}{p}} \lambda^{-\frac{1}{p}}$. Then for all integers $k \geq 1$: $$\left\| C_b^k(\mathbf{T}) : L^p(\mu) \to L^p(\lambda) \right\| \leq c \: \|b\|^{k-1}_{BMO} \|b\|_{BMO(\nu)},$$ where $c$ is a constant depending on $n$, $k$, $\mathbf{T}$, $\mu$, $\lambda$, and $p$.
In particular, if $\mu = \lambda = w \in A_2$: $$\left\| C_b^k(\mathbf{T}) : L^2(w) \to L^2(w) \right\| \leq c \|b\|_{BMO}^k [w]_{A_2}^{k+1},$$ where $c$ is a constant depending on $n$, $k$ and $\mathbf{T}$. \end{thm} The paper is structured as follows. In Section \ref{S:BandN} we discuss the necessary background and notation, such as the Haar system, dyadic shifts, and weighted BMO spaces. Note that most of these concepts were also needed in \cite{HLW2}, and are treated in more detail there. In Section \ref{S:MainProof} we show how, through the Hyt{\"o}nen Representation Theorem, it suffices to prove our main result for dyadic shifts $\mathbb{S}^{ij}$. The rest of the paper is dedicated to this. In Section \ref{S:FirstComm} we revisit the two-weight proof for the first commutator $[b, \mathbb{S}^{ij}]$ in \cite{HLW2}, making some definitions which will be useful later, and obtaining the one-weight result. In Section \ref{S:SecondComm} we look at the second iteration $\big[b, [b, \mathbb{S}^{ij}]\big]$ -- this will provide the intuition behind the general case of $k$ iterations, and also establish the final tools needed for this. In Section \ref{S:General} we prove the general result. \section{Background and Notation} \label{S:BandN} \subsection{The Haar System.} Let $\mathcal{D}^0 = \{ 2^{-k}([0,1)^n + m)\: :\: k\in\mathbb{Z}, m\in \mathbb{Z}^n \}$ be the standard dyadic grid on $\mathbb{R}^n$. For any $\omega = (\omega_j)_{j\in\mathbb{Z}} \in (\{0, 1\}^n)^{\mathbb{Z}}$, we let $\mathcal{D}^{\omega} \defeq \{Q \stackrel{\cdot}{+} \omega \: : \: Q\in\mathcal{D}^0\}$ be the translate of $\mathcal{D}^0$ by $\omega$, where $$Q \stackrel{\cdot}{+} \omega \defeq Q + \sum_{j: 2^{-j}< l(Q)} 2^{-j}\omega_j,$$ where $l(Q)$ denotes the side length of any cube $Q$ in $\mathbb{R}^n$. 
Every dyadic grid $\mathcal{D}^{\omega}$ is characterized by two fundamental properties, namely (1) For every $P, Q \in \mathcal{D}^{\omega}$, $P \cap Q \in \{ P, Q, \emptyset \}$, and (2) For every fixed $k \in\mathbb{Z}$, the cubes $Q\in\mathcal{D}^{\omega}$ with $l(Q) = 2^{-k}$ partition $\mathbb{R}^n$. Let $\mathcal{D}$ be a fixed dyadic grid, $Q\in\mathcal{D}$, and $k$ be a non-negative integer. We let $Q^{(k)}$ denote the $k^{\text{th}}$ ancestor of $Q$ in $\mathcal{D}$, i.e. the unique element of $\mathcal{D}$ with side length $2^k l(Q)$ that contains $Q$, and $Q_{(k)}$ denote the collection of $k^{\text{th}}$ descendants of $Q$ in $\mathcal{D}$, i.e. the $2^{kn}$ disjoint subcubes of $Q$ in $\mathcal{D}$ with side length $2^{-k} l(Q)$. The Haar system on $\mathcal{D}$ is defined by associating $2^n$ Haar functions to every $Q = I_1 \times \cdots \times I_n \in \mathcal{D}$, where each $I_i$ is a dyadic interval in $\mathbb{R}$ with length $l(Q)$: $$ h_Q^{\epsilon} (x) \defeq \prod_{i=1}^n h_{I_i}^{\epsilon_i}(x_i),$$ for all $x = (x_1, \ldots, x_n) \in \mathbb{R}^n$, where $\epsilon \in \{0, 1\}^n$ is called the signature of $h_Q^{\epsilon}$, and $h_{I_i}^{\epsilon_i}$ is one of the one-dimensional Haar functions: $$ h_I^0 \defeq \frac{1}{\sqrt{|I|}} (\mathbbm{1}_{I_{-}} - \mathbbm{1}_{I_{+}}); \:\:\: h_I^1 \defeq \frac{1}{\sqrt{|I|}} \mathbbm{1}_I,$$ where $I_-$ and $I_+$ denote the left and right halves of $I$, respectively. We write $\epsilon = 1$ when $\epsilon_i = 1$ for all $i$. In this case, $h_Q^1 = |Q|^{-\frac{1}{2}} \mathbbm{1}_Q$ is said to be non-cancellative, while all the other $2^n-1$ Haar functions associated with $Q$ are cancellative. Moreover, the cancellative Haar functions on a fixed dyadic grid form an orthonormal basis for $L^2(\mathbb{R}^n)$.
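In one dimension these properties are easy to verify by direct computation. The sketch below (a finite model, not from the paper) builds the cancellative Haar functions over the dyadic subintervals of $[0,1)$, represented as step functions on $2^3$ cells, and checks orthonormality together with the exact reconstruction of a mean-zero step function; the test data are illustrative:

```python
N = 8  # [0,1) split into 8 cells; functions are vectors of cell values

def haar(k, m):
    """h_I for I = [m 2^{-k}, (m+1) 2^{-k}): +1/sqrt|I| on the left half, -1/sqrt|I| on the right."""
    w = N >> k
    h = [0.0] * N
    for j in range(w // 2):
        h[m * w + j] = 2.0 ** (k / 2.0)
        h[m * w + w // 2 + j] = -(2.0 ** (k / 2.0))
    return h

def inner(f, g):
    return sum(x * y for x, y in zip(f, g)) / N  # each cell has Lebesgue measure 1/N

intervals = [(k, m) for k in range(3) for m in range(2 ** k)]  # |I| >= 2 cells

# Orthonormality: <h_I, h_J> = delta_{IJ}
orth_err = max(abs(inner(haar(*I), haar(*J)) - (1.0 if I == J else 0.0))
               for I in intervals for J in intervals)

# A mean-zero step function is reconstructed exactly from its Haar coefficients
f = [2.0, 1.0, -1.0, 3.0, -2.0, -1.0, 0.0, -2.0]  # cell values summing to zero
rec = [0.0] * N
for I in intervals:
    h = haar(*I)
    c = inner(f, h)
    rec = [rec[i] + c * h[i] for i in range(N)]
recon_err = max(abs(f[i] - rec[i]) for i in range(N))
```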
We then write for any $f \in L^2(\mathbb{R}^n)$: $$ f = \sum_{Q\in\mathcal{D}, \epsilon\neq 1} \widehat{f}(Q,\epsilon) h_Q^{\epsilon}, $$ where $\widehat{f}(Q, \epsilon) \defeq \left\langle f, h_Q^{\epsilon}\right\rangle $ and $\left\langle \cdot,\cdot\right\rangle $ denotes the usual inner product in $L^2(\mathbb{R}^n)$. Then $$ \left\langle f\right\rangle _Q = \sum_{R\in\mathcal{D}, R\supsetneq Q; \epsilon\neq 1} \widehat{f}(R,\epsilon) h_R^{\epsilon}(Q), $$ where $\left\langle f\right\rangle _Q \defeq |Q|^{-1} \int_Q f\,dx$ denotes the average of $f$ over $Q$. \subsection{$A_p$ weights.} By a weight on $\mathbb{R}^n$ we mean an almost everywhere positive, locally integrable function $w$. For some $1<p<\infty$ with H{\"o}lder conjugate $q$, we say that a weight $w$ belongs to the Muckenhoupt $A_p$ class if $$[w]_{A_p} \defeq \sup_{Q} \left\langle w\right\rangle _Q \left\langle w^{1-q}\right\rangle _Q^{p-1} < \infty,$$ where the supremum is over all cubes in $\mathbb{R}^n$. We let $w' \defeq w^{1-q}$, the `conjugate' weight to $w$. Then $w\in A_p$ if and only if $w' \in A_q$, with $[w']_{A_q} = [w]_{A_p}^{q-1}$. For a weight $w$ and $1<p<\infty$, let $L^p(w)$ denote the usual $L^p$ space with respect to the measure $dw = w\,dx$, i.e. the space of all functions $f$ such that $\|f\|_{L^p(w)}^p = \int_{\mathbb{R}^n} |f|^p\,dw < \infty$. If $w\in A_p$, we then have the duality $(L^p(w))^* \equiv L^q(w')$, in the sense that $$ \|f\|_{L^p(w)} = \sup\{ |\left\langle f, g\right\rangle | \: :\: g \in L^q(w'), \|g\|_{L^q(w')} \leq 1\}.$$ We review some of the crucial properties of $A_p$ weights, starting with the maximal function: $$Mf \defeq \sup_{Q} \left(\left\langle |f|\right\rangle _Q \mathbbm{1}_Q \right),$$ where again the supremum is over all cubes $Q$ in $\mathbb{R}^n$.
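The conjugate-weight identity in fact holds cube by cube: since $(p-1)(q-1)=1$, one has $\langle w'\rangle_Q \langle (w')^{1-p}\rangle_Q^{q-1} = \big(\langle w\rangle_Q \langle w^{1-q}\rangle_Q^{p-1}\big)^{q-1}$ for $w' = w^{1-q}$ (so $w' = w^{-1}$ when $p=2$). A quick numerical check over dyadic intervals, with an arbitrary sampled weight (the weight and exponents are illustrative):

```python
p = 3.0
q = p / (p - 1.0)  # Hoelder conjugate, q = 3/2

# Sample a weight on [0,1): w(x) = (x + 0.1)^{0.7} at midpoints of 256 cells
M = 256
w = [((i + 0.5) / M + 0.1) ** 0.7 for i in range(M)]

def avg(vals, lo, hi):
    return sum(vals[lo:hi]) / (hi - lo)

def ap_quantity(v, r, lo, hi):
    """<v>_Q <v^{1-r'}>_Q^{r-1} on the cells [lo, hi), with r' the conjugate of r."""
    rc = r / (r - 1.0)
    dual = [x ** (1.0 - rc) for x in v]
    return avg(v, lo, hi) * avg(dual, lo, hi) ** (r - 1.0)

wp = [x ** (1.0 - q) for x in w]  # conjugate weight w' = w^{1-q}

# Per-interval check of  <w'><(w')^{1-p}>^{q-1} = ( <w><w^{1-q}>^{p-1} )^{q-1}
dyadic = [(lo, lo + M // 2 ** k) for k in range(6) for lo in range(0, M, M // 2 ** k)]
err = max(abs(ap_quantity(wp, q, lo, hi) - ap_quantity(w, p, lo, hi) ** (q - 1.0))
          for lo, hi in dyadic)
```

Taking the supremum over the intervals on both sides recovers $[w']_{A_q} = [w]_{A_p}^{q-1}$.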
If $w\in A_p$, then the following bound is sharp \cites{Muck, Buckley} in the exponent of $[w]_{A_p}$: \begin{equation} \label{E:MaxOneWeight} \|M\|_{L^p(w)} \lesssim [w]_{A_p}^{\frac{1}{p-1}}, \end{equation} where for some quantities $A$ and $B$, ``$A \lesssim B$'' denotes $A \leq CB$ for some absolute constant $C$. Another important tool is the dyadic square function: $$(S_{\mathcal{D}}f)^2 \defeq \sum_{Q\in\mathcal{D},\epsilon\neq 1} |\widehat{f}(Q,\epsilon)|^2 \frac{\mathbbm{1}_Q}{|Q|},$$ for which we have the sharp \cite{CruzClassicOps} one-weight inequality: \begin{equation} \label{E:SFOneWeight} \|S_{\mathcal{D}}\|_{L^p(w)} \lesssim [w]_{A_p}^{\max\left(\frac{1}{2}, \frac{1}{p-1}\right)}. \end{equation} For a dyadic grid $\mathcal{D}$ on $\mathbb{R}^n$ and a pair $(i, j)$ of non-negative integers, define a shifted dyadic square function: \begin{equation} \label{E:ShiftedDSFDef} \big(\widetilde{S_{\mathcal{D}}}^{i,j} f\big)^2 \defeq \sum_{Q\in\mathcal{D}, \epsilon\neq 1} \bigg(\sum_{P \in (Q^{(j)})_{(i)}} |\widehat{f}(P,\epsilon)| \bigg)^2 \frac{\mathbbm{1}_Q}{|Q|}. \end{equation} The following was proved in \cite[Lemma 2.2]{HLW2}: \begin{equation} \label{E:ShiftedSF} \|\widetilde{S_{\mathcal{D}}}^{i,j}\|_{L^2(w)} \lesssim 2^{\frac{n}{2}(i+j)} [w]_{A_2}. \end{equation} Lastly, we recall the extrapolation property of $A_p$ weights \cite{Extrapolation}. Suppose an operator $T$ satisfies: $$\|Tf\|_{L^2(w)} \lesssim A [w]_{A_2}^{\alpha} \|f\|_{L^2(w)}$$ for all $w\in A_2$, for some fixed $A> 0$ and $\alpha>0$. Then: $$\|Tf\|_{L^p(w)} \lesssim A [w]_{A_p}^{\alpha\max\left(1, \frac{1}{p-1}\right)}\|f\|_{L^p(w)},$$ for all $1<p<\infty$ and all $w\in A_p$. \subsection{Weighted BMO} Let $w$ be a weight on $\mathbb{R}^n$.
The weighted BMO space $BMO(w)$ is the space of all locally integrable functions $b$ such that $$\|b\|_{BMO(w)} \defeq \sup_Q \frac{1}{w(Q)} \int_Q |b - \left\langle b\right\rangle _Q|\,dx < \infty,$$ where $w(Q) = \int_Q \,dw$, and the supremum is over all cubes $Q$. Note that if we take $w=1$ we obtain the usual space of functions with bounded mean oscillation, which we simply denote by $BMO$. If $w\in A_p$, it was shown in \cite{MuckWheeden} that $\|\cdot\|_{BMO(w)}$ is equivalent to the norm $\|\cdot\|_{BMO^q(w)}$, defined as $$\|b\|^q_{BMO^q(w)} \defeq \sup_Q \frac{1}{w(Q)} \int_Q |b - \left\langle b\right\rangle _Q|^q\,dw'.$$ Given a dyadic grid $\mathcal{D}$, we define the dyadic versions of these spaces, $BMO_{\mathcal{D}}(w)$ and $BMO^q_{\mathcal{D}}(w)$, by taking the supremum over $Q\in\mathcal{D}$ instead. Now suppose $\mu,\lambda\in A_p$ and define $\nu \defeq \mu^{\frac{1}{p}} \lambda^{-\frac{1}{p}}$. As shown in \cite{HLW2}, $\nu$ is then an $A_2$ weight. The following inequality will be very useful: \begin{equation}\label{E:NuH1BMODuality} |\left\langle b,\Phi\right\rangle | \lesssim [\nu]_{A_2} \|b\|_{BMO^2_{\mathcal{D}}(\nu)} \|S_{\mathcal{D}}\Phi\|_{L^1(\nu)}. \end{equation} This in fact holds for all $A_2$ weights $w$, and comes from a duality relationship between $BMO^2_{\mathcal{D}}(w)$ and the dyadic weighted Hardy space $\mathcal{H}^1_{\mathcal{D}}(w)$. See \cite[Section 2.6]{HLW2} for details. \subsection{Paraproducts.} Recall the paraproducts with symbol $b$ on $\mathbb{R}$: $\Pi_bf = \sum_{I\in\mathcal{D}} \widehat{b}(I)\left\langle f\right\rangle _I h_I$ and $\Pi^*_bf = \sum_{I\in\mathcal{D}} \widehat{b}(I)\widehat{f}(I) |I|^{-1}\mathbbm{1}_I$. These are most useful in dyadic proofs due to the identity: $bf = \Pi_b f + \Pi^*_b f + \Pi_f b$. To generalize this identity to $\mathbb{R}^n$, we define the multidimensional paraproducts below.
\begin{defn} \label{D:Paraprods} For a fixed dyadic grid $\mathcal{D}$ on $\mathbb{R}^n$, define the following paraproduct operators with symbol $b$: $$\Pi_b f \defeq \sum_{Q \in \mathcal{D}, \epsilon \neq 1} \widehat{b}(Q, \epsilon) \left<f\right>_Q h_Q^{\epsilon}, \:\: \:\:\: \Pi_b^* f \defeq \sum_{Q \in \mathcal{D}, \epsilon \neq 1} \widehat{b}(Q, \epsilon) \widehat{f}(Q, \epsilon) \frac{\mathbbm{1}_Q}{|Q|},$$ and $$\Gamma_b f \defeq \sum_{Q \in \mathcal{D}} \sum_{\stackrel{\epsilon, \eta \neq 1}{\epsilon \neq \eta}} \widehat{b}(Q, \epsilon) \widehat{f}(Q, \eta) \frac{1}{\sqrt{|Q|}} h_Q^{\epsilon + \eta},$$ where for every $\epsilon,\eta \in \{0,1\}^n$, $\epsilon+\eta$ is defined by letting $(\epsilon+\eta)_i$ be $0$ if $\epsilon_i\neq\eta_i$ and $1$ otherwise. \end{defn} Then $bf = \Pi_b f + \Pi_b^*f + \Gamma_b f + \Pi_f b$. Note that, while the first two paraproducts above reduce to the standard one-dimensional ones when $n=1$, the third paraproduct $\Gamma_b$ vanishes in this case. This third paraproduct comes from the fact that $h_Q^{\epsilon}h_Q^{\eta} = |Q|^{-\rfrac{1}{2}} h_Q^{\epsilon+\eta}$. For ease of notation later, we denote: $$\|T\|_{L^p(w;\:v)} \defeq \left\|T:L^p(w) \to L^p(v)\right\|,$$ the operator norm between two weighted $L^p$-spaces. And, when $w=v$ we will frequently write $$\|T\|_{L^p(w)} \defeq \left\|T:L^p(w) \to L^p(w)\right\|.$$ The following two-weight result was proved in \cite[Theorem 3.1]{HLW2}. We recall first that the adjoints of $\Pi_b$, $\Pi^*_b$, and $\Gamma_b$ as $L^p(\mu) \to L^p(\lambda)$ operators are $\Pi^*_b$, $\Pi_b$, and $\Gamma_b$ as $L^q(\lambda') \to L^q(\mu')$ operators, respectively. \begin{thm} \label{T:Paraprod2WtBds} Let $\mu, \lambda \in A_p$ for some $1<p<\infty$, $\nu = \mu^{\frac{1}{p}}\lambda^{-\frac{1}{p}}$, and suppose $b \in BMO^2_{\mathcal{D}}(\nu)$ for a fixed dyadic grid $\mathcal{D}$ on $\mathbb{R}^n$.
Then: \begin{align} \label{E:PibUBd} \left\|\Pi_b\right\|_{L^p(\mu;\:\lambda)} = \left\|\Pi^*_b\right\|_{L^q(\lambda';\:\mu')} &\leq c \|b\|_{BMO^2_{\mathcal{D}}(\nu)},\\ \label{E:PibStarUBd} \left\|\Pi^*_b\right\|_{L^p(\mu;\:\lambda)} = \left\|\Pi_b\right\|_{L^q(\lambda';\:\mu')} & \leq c \|b\|_{BMO^2_{\mathcal{D}}(\nu)},\\ \label{E:GammabUBd} \left\|\Gamma_b\right\|_{L^p(\mu;\:\lambda)} = \left\|\Gamma_b\right\|_{L^q(\lambda';\:\mu')} & \leq c \|b\|_{BMO^2_{\mathcal{D}}(\nu)}, \end{align} where, in each case, $c$ denotes a constant depending on $\mu$, $\lambda$, and $p$. \end{thm} \subsection{Dyadic Shifts.} Let $i,j$ be non-negative integers and $\mathcal{D}$ a dyadic grid on $\mathbb{R}^n$. A dyadic shift operator with parameters $(i,j)$ is an operator of the form: \begin{equation} \label{E:SijDef} \mathbb{S}_{\mathcal{D}}^{ij}f \defeq \sum_{R \in \mathcal{D}} \sum_{P \in R_{(i)} , Q \in R_{(j)}} \sum_{\epsilon,\eta \in \{0,1\}^n} a^{\epsilon\eta}_{PQR} \widehat{f}(P,\epsilon) h_Q^{\eta}, \end{equation} where $a^{\epsilon\eta}_{PQR}$ are coefficients with $|a^{\epsilon\eta}_{PQR}| \leq |R|^{-1}\sqrt{|P||Q|}$. The shift is said to be cancellative if all Haar functions in its definition are cancellative, that is $a^{\epsilon\eta}_{PQR} = 0$ whenever $\epsilon = 1$ or $\eta = 1$. Otherwise, it is called non-cancellative. The following weighted inequality for dyadic shifts, which can be found in \cites{HytLacey,Lacey,TreilSharpA2}, will be extremely useful: \begin{thm} \label{T:SijWeighted} Let $\mathbb{S}_{\mathcal{D}}^{ij}$ be a dyadic shift operator. Then for any weight $w\in A_2$: \begin{equation} \label{E:SijWeighted} \left\|\mathbb{S}_{\mathcal{D}}^{ij}\right\|_{L^2(w)} \lesssim \kappa_{ij} [w]_{A_2}, \end{equation} where $\kappa_{ij} \defeq \max\{i, j, 1\}$ is the complexity of the shift.
\end{thm} As a first application of this result, we observe that all paraproducts in Definition \ref{D:Paraprods} can be expressed in terms of dyadic shifts with parameters $(0, 0)$, that is, shifts of the form $$\mathbb{S}^{00}f = \sum_{Q\in\mathcal{D}; \epsilon,\eta\in\{0,1\}^n} a_Q^{\epsilon\eta} \widehat{f}(Q,\epsilon) h_Q^{\eta};\:\: |a_Q^{\epsilon\eta}| \leq 1.$$ For instance, we may write $\Pi_b = \|b\|_{BMO^2_{\mathcal{D}}} \mathbb{S}^{00}$, where $a^{\epsilon\eta}_Q = 0$ if $\epsilon\neq 1$ or $\eta=1$, and $a^{\epsilon\eta}_Q = \widehat{b}(Q,\eta) |Q|^{-1/2} \|b\|_{BMO^2_{\mathcal{D}}}^{-1}$ if $\epsilon=1$ and $\eta\neq 1$. Similar expressions can be obtained for the other two paraproducts. Then, if $P_b$ is any one of $\Pi_b$, $\Pi^*_b$, or $\Gamma_b$, it follows from \eqref{E:SijWeighted} that \begin{equation} \label{E:Paraprod1WtBds} \|P_b\|_{L^2(w)} \lesssim \|b\|_{BMO^2_{\mathcal{D}}} [w]_{A_2}, \end{equation} for any $w \in A_2$. These one-weight inequalities for paraproducts were obtained in \cite{Beznosova} for the one-dimensional case $n=1$, and, using the Wilson Haar basis, in \cite{Chung} for $n \geq 1$. We can also use dyadic shifts to recover the one-weight bound for the martingale transform: $$T_{\sigma}f \defeq \sum_{Q\in\mathcal{D},\epsilon\neq 1} \sigma_{Q,\epsilon} \widehat{f}(Q,\epsilon) h_Q^{\epsilon},$$ where $|\sigma_{Q,\epsilon}| \leq 1$ for all $Q\in\mathcal{D}$ and $\epsilon\neq 1$. For $w\in A_2$, all martingale transforms $T_{\sigma}$ are uniformly bounded on $L^2(w)$. In particular, there is a universal constant $C$ such that \begin{equation} \label{E:MartTBd} \|T_{\sigma}\|_{L^2(w)} \leq C [w]_{A_2}, \end{equation} for all $\sigma$. This result, obtained in \cite{Wittwer} for the one-dimensional case, trivially follows from the observation that $T_{\sigma} = \mathbb{S}^{00}$, where $a^{\epsilon\eta}_{Q}$ is defined to be $\sigma_{Q,\epsilon}$ if $\epsilon=\eta\neq 1$ and $0$ otherwise.
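All of these dyadic objects are straightforward to implement in a finite one-dimensional Haar model, which also makes the product decomposition $bf = \Pi_b f + \Pi^*_b f + \Pi_f b$ (recall that $\Gamma_b$ vanishes when $n=1$) easy to verify pointwise for mean-zero step functions. A sketch with illustrative data:

```python
N = 8  # step functions on [0,1); b and f below are mean zero

def haar(k, m):
    w = N >> k
    h = [0.0] * N
    for j in range(w // 2):
        h[m * w + j] = 2.0 ** (k / 2.0)
        h[m * w + w // 2 + j] = -(2.0 ** (k / 2.0))
    return h

def inner(f, g):
    return sum(x * y for x, y in zip(f, g)) / N

def cell_avg(f, k, m):
    w = N >> k
    return sum(f[m * w:(m + 1) * w]) / w

intervals = [(k, m) for k in range(3) for m in range(2 ** k)]

def Pi(b, f):  # Pi_b f = sum_I b^(I) <f>_I h_I
    out = [0.0] * N
    for (k, m) in intervals:
        h = haar(k, m)
        c = inner(b, h) * cell_avg(f, k, m)
        out = [out[i] + c * h[i] for i in range(N)]
    return out

def PiStar(b, f):  # Pi*_b f = sum_I b^(I) f^(I) 1_I / |I|
    out = [0.0] * N
    for (k, m) in intervals:
        c = inner(b, haar(k, m)) * inner(f, haar(k, m)) * 2.0 ** k  # 1/|I| = 2^k
        w = N >> k
        for i in range(m * w, (m + 1) * w):
            out[i] += c
    return out

b = [1.0, -2.0, 3.0, -1.0, 2.0, 0.0, -4.0, 1.0]
f = [2.0, 1.0, -1.0, 3.0, -2.0, -1.0, 0.0, -2.0]
t1, t2, t3 = Pi(b, f), PiStar(b, f), Pi(f, b)
decomp_err = max(abs(b[i] * f[i] - t1[i] - t2[i] - t3[i]) for i in range(N))
```

The identity is exact here because, for dyadic $I, J$, the product $h_I h_J$ is either $\mathbbm{1}_I/|I|$ (when $I=J$), a constant multiple of the smaller Haar function (when nested), or zero (when disjoint).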
The following simple consequence of this fact will come in handy later: \begin{prop} \label{P:BMOSigma} Let $w \in A_2$, $b \in BMO^2_{\mathcal{D}}(w)$, and $T_{\sigma}$ be a martingale transform. Then $T_{\sigma}b \in BMO^2_{\mathcal{D}}(w)$, with \begin{equation} \label{E:BMOSigma} \|T_{\sigma}b\|_{BMO^2_{\mathcal{D}}(w)} \lesssim [w]_{A_2} \|b\|_{BMO^2_{\mathcal{D}}(w)}. \end{equation} \end{prop} \begin{proof} It is easy to observe that $\mathbbm{1}_Q(T_{\sigma}b - \left\langle T_{\sigma}b\right\rangle _{Q}) = T_{\sigma}\big( \mathbbm{1}_Q (b - \left\langle b\right\rangle _Q) \big)$, and so $$ \|T_{\sigma}b\|_{BMO^2_{\mathcal{D}}(w)} = \sup_{Q\in\mathcal{D}} \frac{1}{w(Q)^{\rfrac{1}{2}}} \left\| T_{\sigma} \big( \mathbbm{1}_Q (b - \left\langle b\right\rangle _Q) \big) \right\|_{L^2(w^{-1})} \lesssim [w]_{A_2} \|b\|_{BMO^2_{\mathcal{D}}(w)}.$$ \end{proof} \section{Proof of The Main Result} \label{S:MainProof} As in the proof of Theorem \ref{T:CZOComm1} in \cite{HLW2}, the backbone of our proof of Theorem \ref{T:Main} is the celebrated Hyt{\"o}nen Representation Theorem \cites{HytRepOrig, HytRep, HytPerezTV}, which we state below. \begin{thm} \label{T:HRP} Let $\mathbf{T}$ be a Calder{\'o}n-Zygmund operator associated with a $\delta$-standard kernel. Then there exist dyadic shift operators $\mathbb{S}_{\omega}^{ij}$ with parameters $(i, j)$ for all non-negative integers $i, j$ such that $$\left\langle \mathbf{T}f, g\right\rangle = c\: \mathbb{E}_{\omega} \sum_{i,j=0}^{\infty} 2^{-\kappa_{ij}\frac{\delta}{2}} \left\langle \mathbb{S}_{\omega}^{ij}f, g\right\rangle ,$$ for all bounded, compactly supported functions $f$ and $g$, where $c$ is a constant depending on the dimension $n$ and on $\mathbf{T}$. Here all $\mathbb{S}^{ij}_{\omega}$ with $(i,j) \neq (0,0)$ are cancellative, but the shifts $\mathbb{S}_{\omega}^{00}$ may not be cancellative.
\end{thm} It is easy to see that \begin{equation} \label{E:Main_t1} \left\langle C_b^k(\mathbf{T})f, g\right\rangle = c \: \mathbb{E}_{\omega} \sum_{i,j=0}^{\infty} 2^{-\kappa_{ij}\frac{\delta}{2}} \left\langle C_b^k(\mathbb{S}^{ij}_{\omega})f, g\right\rangle , \end{equation} for all integers $k\geq 1$. Thus it suffices to show that the $C_b^k(\mathbb{S}^{ij}_{\omega})$ are uniformly bounded, regardless of $\omega$, with bounds that depend at most polynomially on $\kappa_{ij}$. Since our arguments will be independent of the choice of $\omega$, we fix a dyadic grid $\mathcal{D}$ and suppress the $\omega$ subscript in what follows. We claim that: \begin{thm} \label{T:BigThm_Sij} Let $\mu, \lambda \in A_p$ for some $1<p<\infty$ and $b \in BMO^2_{\mathcal{D}}(\nu) \cap BMO^2_{\mathcal{D}}$, where $\nu = \mu^{\frac{1}{p}} \lambda^{-\frac{1}{p}}$. For any pair $(i, j)$ of non-negative integers, let $\mathbb{S}^{ij} \defeq \mathbb{S}^{ij}_{\mathcal{D}}$ be a dyadic shift as in the Hyt{\"o}nen Representation Theorem. Then for any integer $k \geq 1$: \begin{equation} \left\| C_b^k(\mathbb{S}^{ij}) \right\|_{L^p(\mu;\:\lambda)} \leq c\: \kappa^k_{ij} \|b\|^{k-1}_{BMO^2_{\mathcal{D}}} \|b\|_{BMO^2_{\mathcal{D}}(\nu)}, \end{equation} where $c$ is a constant depending on $n$, $p$, $\mu$, $\lambda$, and $k$. In particular, if $\mu = \lambda = w \in A_2$: \begin{equation} \left\| C_b^k(\mathbb{S}^{ij}) \right\|_{L^2(w)} \leq c\: \kappa^k_{ij} \|b\|^k_{BMO^2_{\mathcal{D}}} [w]_{A_2}^{k+1}, \end{equation} where $c$ is a constant depending on $n$ and $k$. \end{thm} Then for all $\omega$: $$\left\| C_b^k(\mathbb{S}^{ij}_{\omega}) \right\|_{L^p(\mu;\:\lambda)} \leq c\: \kappa_{ij}^k \|b\|^{k-1}_{BMO} \|b\|_{BMO(\nu)} \text{, and } \left\|C_b^k(\mathbb{S}^{ij}_{\omega}) \right\|_{L^2(w)} \leq c\: \kappa^k_{ij} \|b\|_{BMO}^k [w]^{k+1}_{A_2},$$ where we used the equivalence of $BMO(\nu)$ and $BMO^2(\nu)$ norms.
Then \eqref{E:Main_t1} gives that $$\left\|C_b^k(\mathbf{T})\right\|_{L^p(\mu;\:\lambda)} \leq c\: \|b\|^{k-1}_{BMO} \|b\|_{BMO(\nu)} \sum_{i,j=0}^{\infty} 2^{-\kappa_{ij}\frac{\delta}{2}} \kappa_{ij}^k \leq c' \:\|b\|^{k-1}_{BMO} \|b\|_{BMO(\nu)}.$$ The one-weight result in Theorem \ref{T:Main} follows similarly. The rest of the paper is dedicated to proving Theorem \ref{T:BigThm_Sij}. \section{The Commutator $[b, \mathbb{S}^{ij}]$ Revisited} \label{S:FirstComm} Recall that the product of two functions can be formally decomposed in terms of the paraproducts in Definition \ref{D:Paraprods} as $bf = \mathfrak{P}_bf + \Pi_f b,$ where $$\mathfrak{P}_b \defeq \Pi_b + \Pi^*_b + \Gamma_b.$$ Consequently, the commutator $[b, T]$ with an operator $T$ can be expressed as: $$C_b^1(T)f = [b, T]f = [\mathfrak{P}_b, T]f + \left( \Pi_{Tf}b - T\Pi_f b \right).$$ Proving inequalities for $[b, T]$ via dyadic methods usually involves proving some appropriate bounds for the paraproducts -- from which the boundedness of the first term $[\mathfrak{P}_b, T]$ usually follows -- and then treating the `remainder term' $\mathcal{R}_1f = \Pi_{Tf}b - T\Pi_f b$ separately. Now, remark that if we consider the second iteration: $$C_b^2(T)f = [b, C_b^1(T)]f = [\mathfrak{P}_b, C_b^1(T)]f + \left( \Pi_{C_b^1(T) f}b - C_b^1(T)\Pi_f b \right),$$ we will encounter the term $\Pi_{\mathcal{R}_1f}b - \mathcal{R}_1\Pi_f b$. We can already see that more compact notation for repeatedly performing the operation $T \mapsto \Pi_{T\cdot}b - T\Pi_{\cdot}b$ would be useful. \begin{defn} \label{D:Theta_b} Given an operator $T$ on some function space and a function $b$, define the operator $\Theta_b(T)$ by: $$\Theta_b(T)f \defeq \Pi_{Tf} b - T \Pi_f b.$$ More generally, let $\Theta_b^0(T) \defeq T$, and $$\Theta_b^k(T) \defeq \Theta_b\big( \Theta_b^{k-1}(T) \big),$$ for all integers $k \geq 1$.
\end{defn} Using this notation, for an operator $T$, \begin{equation} \label{E:CommTh_bDec} [b, T] = [\mathfrak{P}_b, T] + \Theta_b(T). \end{equation} In particular, \begin{equation} \label{E:Comm1_Dec} C_b^1(\mathbb{S}^{ij}) = [\mathfrak{P}_b, \mathbb{S}^{ij}] + \Theta_b(\mathbb{S}^{ij}), \end{equation} so \begin{align} \label{E:Cb1} \| C_b^1(\mathbb{S}^{ij}) \|_{L^p(\mu;\:\lambda)} & \leq \|\mathfrak{P}_b\|_{L^p(\mu;\:\lambda)} \big( \|\mathbb{S}^{ij}\|_{L^p(\mu)} + \|\mathbb{S}^{ij}\|_{L^p(\lambda)} \big) + \|\Theta_b(\mathbb{S}^{ij})\|_{L^p(\mu;\:\lambda)}\\ & \lesssim \kappa_{ij} C(\mu, \lambda, p) \|b\|_{BMO^2_{\mathcal{D}}(\nu)} + \|\Theta_b(\mathbb{S}^{ij})\|_{L^p(\mu;\:\lambda)}, \end{align} where the first term is bounded using Theorem \ref{T:Paraprod2WtBds} and \eqref{E:SijWeighted}. Letting $\mu = \lambda = w \in A_2$ in \eqref{E:Cb1} and using the one-weight bounds \eqref{E:Paraprod1WtBds} for the paraproducts: \begin{align} \|C_b^1(\mathbb{S}^{ij})\|_{L^2(w)} & \leq 2 \|\mathfrak{P}_b\|_{L^2(w)} \|\mathbb{S}^{ij}\|_{L^2(w)} + \|\Theta_b(\mathbb{S}^{ij})\|_{L^2(w)}\\ & \lesssim \kappa_{ij}\|b\|_{BMO^2_{\mathcal{D}}} [w]_{A_2}^2 + \|\Theta_b(\mathbb{S}^{ij})\|_{L^2(w)}. \end{align} So it remains to bound the remainder term.
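The decomposition \eqref{E:CommTh_bDec} can be checked numerically in the finite one-dimensional Haar model, taking $T$ to be a martingale transform (so that $\Gamma_b = 0$ and $\mathfrak{P}_b = \Pi_b + \Pi^*_b$); all data below are illustrative:

```python
N = 8  # step functions on [0,1); b and f below are mean zero

def haar(k, m):
    w = N >> k
    h = [0.0] * N
    for j in range(w // 2):
        h[m * w + j] = 2.0 ** (k / 2.0)
        h[m * w + w // 2 + j] = -(2.0 ** (k / 2.0))
    return h

def inner(f, g):
    return sum(x * y for x, y in zip(f, g)) / N

def cell_avg(f, k, m):
    w = N >> k
    return sum(f[m * w:(m + 1) * w]) / w

intervals = [(k, m) for k in range(3) for m in range(2 ** k)]

def Pi(b, f):  # Pi_b f = sum_I b^(I) <f>_I h_I
    out = [0.0] * N
    for (k, m) in intervals:
        h = haar(k, m)
        c = inner(b, h) * cell_avg(f, k, m)
        out = [out[i] + c * h[i] for i in range(N)]
    return out

def PiStar(b, f):  # Pi*_b f = sum_I b^(I) f^(I) 1_I / |I|
    out = [0.0] * N
    for (k, m) in intervals:
        c = inner(b, haar(k, m)) * inner(f, haar(k, m)) * 2.0 ** k
        w = N >> k
        for i in range(m * w, (m + 1) * w):
            out[i] += c
    return out

sigma = dict(zip(intervals, [1, -1, 1, -1, 1, 1, -1]))
def T(g):  # a martingale transform T_sigma
    out = [0.0] * N
    for I in intervals:
        h = haar(*I)
        c = sigma[I] * inner(g, h)
        out = [out[i] + c * h[i] for i in range(N)]
    return out

b = [1.0, -2.0, 3.0, -1.0, 2.0, 0.0, -4.0, 1.0]
f = [2.0, 1.0, -1.0, 3.0, -2.0, -1.0, 0.0, -2.0]

def frakP(g):  # P_b g = Pi_b g + Pi*_b g (Gamma_b = 0 when n = 1)
    u, v = Pi(b, g), PiStar(b, g)
    return [u[i] + v[i] for i in range(N)]

Tf = T(f)
Tbf = T([b[i] * f[i] for i in range(N)])
comm = [b[i] * Tf[i] - Tbf[i] for i in range(N)]   # [b, T] f
u, v = frakP(Tf), T(frakP(f))
t1 = [u[i] - v[i] for i in range(N)]               # [P_b, T] f
u, v = Pi(Tf, b), T(Pi(f, b))
t2 = [u[i] - v[i] for i in range(N)]               # Theta_b(T) f
err = max(abs(comm[i] - t1[i] - t2[i]) for i in range(N))
```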
And, based on analysis that will come later, we in fact need to control certain iterates of the remainder term, leading to the following claim: \begin{prop} \label{P:Comm1} Under the same assumptions as Theorem \ref{T:BigThm_Sij}, for all integers $k \geq 1$: \begin{equation} \label{E:Comm1_2Wt} \left\| \Theta_b^k (\mathbb{S}^{ij}) \right\|_{L^p(\mu;\:\lambda)} \leq c \kappa^k_{ij} \|b\|^{k-1}_{BMO^2_{\mathcal{D}}} \|b\|_{BMO^2_{\mathcal{D}}(\nu)}, \end{equation} where $c$ is a constant depending on $n$, $p$, $\mu$, $\lambda$, and $k$, and \begin{equation} \label{E:Comm1_1Wt} \left\| \Theta_b^k (\mathbb{S}^{ij}) \right\|_{L^2(w)} \leq c \kappa^k_{ij} \| b \|^k_{BMO^2_{\mathcal{D}}} [w]^{k+1}_{A_2}, \end{equation} where $c$ is a constant depending on $n$ and $k$. \end{prop} Combined with \eqref{E:Cb1}, the case $k=1$ of this proposition yields the results in Theorem \ref{T:BigThm_Sij} with $k = 1$. The first result \eqref{E:Comm1_2Wt} was proved for $k=1$ in \cite{HLW2}. In this section we revisit this proof in order to obtain the one-weight result. The latter will follow directly from the two-weight proof in the case of cancellative shifts $\mathbb{S}^{ij}$ with $(i, j) \neq (0, 0)$, but the case $(i, j) = (0, 0)$ will require some care. However, as we shall see in the next section, the tools we introduce here lay most of the groundwork for the iterated commutators. \subsection{The Cancellative Shifts.} We showed in \cite{HLW2} that \begin{equation} \label{E:ThbSij} \Theta_b (\mathbb{S}^{ij})f = \sum_{\substack{R\in\mathcal{D} \\ \epsilon,\eta\neq 1}} \sum_{\substack{P \in R_{(i)} \\ Q\in R_{(j)}}} a^{\epsilon\eta}_{PQR} \widehat{f}(P,\epsilon) \big( \left\langle b\right\rangle _Q - \left\langle b\right\rangle _P \big) h_Q^{\eta}, \end{equation} whenever $(i, j) \neq (0, 0)$ and the dyadic shift is cancellative.
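This identity is easy to confirm numerically. In the one-dimensional finite Haar model below (all data illustrative), we build a cancellative shift with parameters $(i,j) = (0,1)$, so $P = R$ and $Q$ ranges over the children of $R$, with $|a_{PQR}| = \sqrt{|P||Q|}/|R| = 2^{-1/2}$, and compare $\Pi_{\mathbb{S}f}b - \mathbb{S}\Pi_f b$ against the right-hand side of \eqref{E:ThbSij}:

```python
N = 8  # one-dimensional model: step functions on [0,1)

def haar(k, m):
    w = N >> k
    h = [0.0] * N
    for j in range(w // 2):
        h[m * w + j] = 2.0 ** (k / 2.0)
        h[m * w + w // 2 + j] = -(2.0 ** (k / 2.0))
    return h

def inner(f, g):
    return sum(x * y for x, y in zip(f, g)) / N

def cell_avg(f, k, m):
    w = N >> k
    return sum(f[m * w:(m + 1) * w]) / w

intervals = [(k, m) for k in range(3) for m in range(2 ** k)]

def Pi(b, f):  # Pi_b f = sum_I b^(I) <f>_I h_I
    out = [0.0] * N
    for (k, m) in intervals:
        h = haar(k, m)
        c = inner(b, h) * cell_avg(f, k, m)
        out = [out[i] + c * h[i] for i in range(N)]
    return out

# A cancellative shift with parameters (i, j) = (0, 1): P = R, Q a child of R
coef = {}
s = 1.0
for (k, m) in [(0, 0), (1, 0), (1, 1)]:  # R whose children still carry Haar functions
    for Q in [(k + 1, 2 * m), (k + 1, 2 * m + 1)]:
        coef[((k, m), Q)] = s * 2.0 ** -0.5  # |a_PQR| = sqrt(|P||Q|)/|R| = 2^{-1/2}
        s = -s

def S(g):
    out = [0.0] * N
    for (P, Q), a in coef.items():
        c = a * inner(g, haar(*P))
        hq = haar(*Q)
        out = [out[i] + c * hq[i] for i in range(N)]
    return out

b = [1.0, -2.0, 3.0, -1.0, 2.0, 0.0, -4.0, 1.0]
f = [2.0, 1.0, -1.0, 3.0, -2.0, -1.0, 0.0, -2.0]

u, v = Pi(S(f), b), S(Pi(f, b))
lhs = [u[i] - v[i] for i in range(N)]  # Theta_b(S) f

rhs = [0.0] * N
for (P, Q), a in coef.items():
    c = a * inner(f, haar(*P)) * (cell_avg(b, *Q) - cell_avg(b, *P))
    hq = haar(*Q)
    rhs = [rhs[i] + c * hq[i] for i in range(N)]
err = max(abs(lhs[i] - rhs[i]) for i in range(N))
```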
Then, assuming $i \leq j$, we expressed $\Theta_b(\mathbb{S}^{ij})$ as: $$\Theta_b(\mathbb{S}^{ij}) = \sum_{l=1}^j A_l - \sum_{m=1}^i B_m,$$ for certain operators $A_l$, $B_m$. Using the weighted $\mathcal{H}^1 - BMO$ duality statement in \eqref{E:NuH1BMODuality}, we showed that these operators satisfy: \begin{align} \label{E:Al} \|A_l\|_{L^p(\mu;\: \lambda)} & \leq [\nu]_{A_2} \|b\|_{BMO^2_{\mathcal{D}}(\nu)} 2^{-\frac{n}{2}(i+j-l)} \|M\|_{L^q(\lambda')} \| \widetilde{S_{\mathcal{D}}}^{i, j-l} \|_{L^p(\mu)}, \\ \label{E:Bm} \|B_m\|_{L^p(\mu;\: \lambda)} & \leq [\nu]_{A_2} \|b\|_{BMO^2_{\mathcal{D}}(\nu)} 2^{-\frac{n}{2}(i+j-m)} \|M\|_{L^p(\mu)} \| \widetilde{S_{\mathcal{D}}}^{j, i-m} \|_{L^q(\lambda')}, \end{align} for all integers $1 \leq l \leq j$ and $1 \leq m \leq i$, where $\widetilde{S_{\mathcal{D}}}^{i,j}$ is the shifted dyadic square function in \eqref{E:ShiftedDSFDef}. From here, \eqref{E:Comm1_2Wt} follows easily from \eqref{E:MaxOneWeight} and \eqref{E:ShiftedSF}. Now, if we let $\mu = \lambda = w \in A_2$, \eqref{E:Al} becomes: \begin{align*} \|A_l\|_{L^2(w)} &\leq \|b\|_{BMO^2_{\mathcal{D}}} 2^{-\frac{n}{2}(i+j-l)} \|M\|_{L^2(w^{-1})} \|\widetilde{S_{\mathcal{D}}}^{i, j-l}\|_{L^2(w)} \\ & \lesssim \|b\|_{BMO^2_{\mathcal{D}}} 2^{-\frac{n}{2}(i+j-l)} [w^{-1}]_{A_2} 2^{\frac{n}{2}(i+j-l)} [w]_{A_2}\\ & = \|b\|_{BMO^2_{\mathcal{D}}} [w]^2_{A_2}. \end{align*} Similarly, we obtain $\|B_m\|_{L^2(w)} \lesssim \|b\|_{BMO^2_{\mathcal{D}}} [w]^2_{A_2}$ from \eqref{E:Bm}, and \eqref{E:Comm1_1Wt} follows. The proof for $i \geq j$ is symmetrical.
\subsection{The case $i = j = 0$.} As shown in \cite{HytRepOrig}, the non-cancellative shift $\mathbb{S}^{00}$ is of the form \begin{equation} \label{E:00HRT} \mathbb{S}^{00} = \mathbb{S}_c^{00} + \Pi_a + \Pi^*_d, \end{equation} where $\mathbb{S}_c^{00}$ is a \textit{cancellative} shift with parameters $(0, 0)$, and $\Pi_a$, $\Pi^*_d$ are paraproducts with symbols $a, d \in BMO_{\mathcal{D}}$ and $\|a\|_{BMO_{\mathcal{D}}} \leq 1$, $\|d\|_{BMO_{\mathcal{D}}} \leq 1$. In \cite[Section 5.2]{HLW2} we show that $\Theta_b(\mathbb{S}_c^{00}) = 0$ and \begin{align} \Theta_b(\Pi_a) &= \Pi_a \Pi_b + \Pi_a \Gamma_b + \Lambda_{a, b} - \widetilde{\Lambda}_{a, b}, \label{E:Th_bPi_a}\\ \Theta_b(\Pi^*_a) &= \Lambda_{b, a} - \Pi_b^* \Pi_a^* - \Gamma_b \Pi^*_a - \Pi_b \Pi^*_a, \label{E:Th_bPi*_a} \end{align} where: \begin{align} \Lambda_{a,b}f &\defeq \sum_{Q\in\mathcal{D}; \epsilon,\eta\neq 1} \widehat{a}(Q,\epsilon) \sum_{R\in\mathcal{D}, R \supset Q} \widehat{b}(R,\eta) \widehat{f}(R,\eta) \frac{1}{|R|} h_Q^{\epsilon}, \label{E:Lb_abDef}\\ \textup{and} \qquad \widetilde{\Lambda}_{a,b}f &\defeq \sum_{Q\in\mathcal{D}; \epsilon,\eta\neq 1} \widehat{a}(Q,\epsilon) \widehat{b}(Q,\eta) \widehat{f}(Q,\eta) \frac{1}{|Q|} h_Q^{\epsilon}. \end{align} We remark that we are using a slightly different definition of $\Lambda_{a,b}$ than in \cite{HLW2}: the $\Lambda_{a,b}$ defined in \eqref{E:Lb_abDef} corresponds to $\Lambda^*_{b,a}$ in \cite{HLW2}. However, we shall see later that it is more advantageous for our purposes to work with the definitions above. We claim that: \begin{lm} \label{L:Lambda_ab} Let $\mu, \lambda \in A_p$ for some $1<p<\infty$. Suppose $a \in BMO^2_{\mathcal{D}}$ and $b \in BMO^2_{\mathcal{D}}(\nu)$, where $\mathcal{D}$ is a fixed dyadic grid on $\mathbb{R}^n$ and $\nu = \mu^{\frac{1}{p}}\lambda^{-\frac{1}{p}}$.
If $T$ is any one of the operators: $$\Lambda_{a, b},\: \widetilde{\Lambda}_{a,b},\: \Lambda_{b, a},\: \widetilde{\Lambda}_{b, a},\: \Theta_b(\Pi_a) \text{, or } \Theta_b(\Pi^*_a),$$ then: \begin{equation} \label{E:Lb_2WtBd} \|T\|_{L^p(\mu;\: \lambda)} \lesssim C(\mu, \lambda, p) \|a\|_{BMO_{\mathcal{D}}^2} \|b\|_{BMO^2_{\mathcal{D}}(\nu)}. \end{equation} In particular, if $\mu = \lambda = w \in A_2$: \begin{equation} \label{E:Lb_1WtBd} \|T\|_{L^2(w)} \lesssim \|a\|_{BMO^2_{\mathcal{D}}} \|b\|_{BMO^2_{\mathcal{D}}} [w]_{A_2}^2. \end{equation} \end{lm} Proposition \ref{P:Comm1} with $i = j = 0$ and $k=1$ follows from this immediately, by the assumptions on the $BMO$ norms of $a$ and $d$. Before we proceed with the proof, we remark that, while \eqref{E:Lb_2WtBd} was proved in some form for the $\Lambda$ operators in \cite{HLW2}, we need a slight modification of that proof in order for it to yield the one-weight result. Roughly speaking, the original proof boils down to $\|S_{\mathcal{D}}\|_{L^p(\mu)} \|M \Pi_a^* \|_{L^q(\lambda')}$, which in the one-weight case gives a factor of $[w]^3_{A_2}$. \begin{proof}[Proof of Lemma \ref{L:Lambda_ab}] It suffices to prove the result for the $\Lambda$ operators. For then, from the decomposition in \eqref{E:Th_bPi_a}: \begin{align*} \| \Theta_b(\Pi_a)\|_{L^p(\mu;\:\lambda)} & \leq \|\Pi_a\|_{L^p(\lambda)} \| \Pi_b + \Gamma_b \|_{L^p(\mu;\:\lambda)} + \| \Lambda_{a, b} - \widetilde{\Lambda}_{a, b}\|_{L^p(\mu;\:\lambda)}\\ & \lesssim C(\mu, \lambda, p) \|a\|_{BMO^2_{\mathcal{D}}} \|b\|_{BMO^2_{\mathcal{D}}(\nu)}, \end{align*} where we used \eqref{E:Paraprod1WtBds} for the $\Pi_a$ term, and Theorem \ref{T:Paraprod2WtBds} for the paraproducts with symbol $b$. Similarly, we can see from \eqref{E:Th_bPi*_a} that $\Theta_b(\Pi^*_a)$ obeys the same bound. 
If $\mu=\lambda=w\in A_2$: $$ \|\Theta_b(\Pi_a)\|_{L^2(w)} \leq \|\Pi_a\|_{L^2(w)} \|\Pi_b + \Gamma_b\|_{L^2(w)} + \|\Lambda_{a, b} - \widetilde{\Lambda}_{a, b}\|_{L^2(w)} \lesssim \|a\|_{BMO^2_{\mathcal{D}}} \|b\|_{BMO^2_{\mathcal{D}}} [w]^2_{A_2},$$ and the same holds for $\Theta_b(\Pi^*_a)$. Now let us look at $\Lambda_{a,b}$. Let $f \in L^p(\mu)$ and $g \in L^q(\lambda')$. Then $$\left\langle \Lambda_{a,b}f, g\right\rangle = \sum_{Q\in\mathcal{D}, \epsilon\neq 1} \widehat{a}(Q,\epsilon) \widehat{g}(Q,\epsilon) \sum_{R\in\mathcal{D}, R\supset Q; \eta\neq 1} \widehat{b}(R,\eta) \widehat{f}(R,\eta) \frac{1}{|R|}.$$ Let $a_{\tau} = T_{\tau}a$ and $b_{\sigma} = T_{\sigma}b$, where $T_{\tau}$ and $T_{\sigma}$ are martingale transforms with $\tau_{Q,\epsilon} = \pm 1$ and $\sigma_{Q, \epsilon} = \pm 1$ chosen for every pair $(Q, \epsilon)$, with $Q\in\mathcal{D}$ and $\epsilon\neq 1$, such that $$\tau_{Q,\epsilon}\widehat{a}(Q,\epsilon) \widehat{g}(Q,\epsilon) \geq 0 \text{, and } \sigma_{Q,\epsilon} \widehat{b}(Q,\epsilon)\widehat{f}(Q,\epsilon) \geq 0.$$ Then \begin{align} \nonumber \left| \left\langle \Lambda_{a,b}f, g\right\rangle \right| & \leq \sum_{Q\in\mathcal{D};\epsilon\neq 1} \widehat{a_{\tau}}(Q,\epsilon) \widehat{g}(Q,\epsilon) \sum_{R\in\mathcal{D}, R\supset Q ; \eta\neq 1} \widehat{b_{\sigma}}(R,\eta) \widehat{f}(R,\eta) \frac{1}{|R|} \\ \label{E:LbBd1} &\leq \sum_{Q\in\mathcal{D};\epsilon\neq 1} \widehat{a_{\tau}}(Q,\epsilon) \widehat{g}(Q,\epsilon) \left\langle \Pi^*_{b_{\sigma}} f\right\rangle _{Q}, \end{align} where the last inequality follows from $$\left\langle \Pi^*_{b_{\sigma}} f\right\rangle _Q = \sum_{P\in\mathcal{D}, P \subsetneq Q; \epsilon\neq 1} \widehat{b_{\sigma}}(P,\epsilon) \widehat{f}(P,\epsilon) \frac{1}{|Q|} + \sum_{R\in\mathcal{D}, R\supset Q; \eta\neq 1} \widehat{b_{\sigma}}(R,\eta) \widehat{f}(R,\eta) \frac{1}{|R|}.$$ By the assumptions on $\sigma$ and $\tau$ and the Monotone Convergence Theorem, \eqref{E:LbBd1} becomes: \begin{align*}
\sum_{Q\in\mathcal{D},\epsilon\neq 1} \int \widehat{a_{\tau}}(Q,\epsilon) \widehat{g}(Q,\epsilon) (\Pi^*_{b_{\sigma}}f)(x) \frac{\mathbbm{1}_Q(x)}{|Q|}\,dx &= \int \sum_{Q\in\mathcal{D},\epsilon\neq 1} \widehat{a_{\tau}}(Q,\epsilon) \widehat{g}(Q,\epsilon) (\Pi^*_{b_{\sigma}}f)(x) \frac{\mathbbm{1}_Q(x)}{|Q|}\,dx\\ &= \left\langle \Pi^*_{a_{\tau}}g, \Pi^*_{b_{\sigma}}f \right\rangle . \end{align*} Therefore, by \eqref{E:Paraprod1WtBds} and Theorem \ref{T:Paraprod2WtBds}, \begin{align} \label{E:LbL1} \| \Lambda_{a,b} \|_{L^p(\mu;\:\lambda)} & \leq \|\Pi^*_{a_{\tau}}\|_{L^q(\lambda')} \|\Pi^*_{b_{\sigma}}\|_{L^p(\mu;\: \lambda)} \\ \nonumber &\lesssim C(\mu, \lambda, p) \|a_{\tau}\|_{BMO^2_{\mathcal{D}}} \|b_{\sigma}\|_{BMO^2_{\mathcal{D}}(\nu)}\\ \nonumber & \lesssim C(\mu, \lambda, p) \|a\|_{BMO^2_{\mathcal{D}}} \|b\|_{BMO^2_{\mathcal{D}}(\nu)}, \end{align} where the last inequality follows from Proposition \ref{P:BMOSigma}. Letting $\mu = \lambda = w \in A_2$ in \eqref{E:LbL1}: $$ \| \Lambda_{a,b} \|_{L^2(w)} \lesssim \|a_{\tau}\|_{BMO^2_{\mathcal{D}}}[w]_{A_2} \|b_{\sigma}\|_{BMO^2_{\mathcal{D}}} [w]_{A_2} \lesssim \|a\|_{BMO^2_{\mathcal{D}}} \|b\|_{BMO^2_{\mathcal{D}}} [w]^2_{A_2},$$ which proves the result for $\Lambda_{a, b}$. An identical argument proves the result for $\widetilde{\Lambda}_{a,b}$. As for $\Lambda_{b,a}$ or $\widetilde{\Lambda}_{b,a}$, the argument follows similarly, with a few modifications: $$\| \Lambda_{b, a}\|_{L^p(\mu;\:\lambda)} \leq \|\Pi^*_{a_{\tau}}\|_{L^p(\mu)} \|\Pi^*_{b_{\sigma}}\|_{L^q(\lambda';\: \mu')}.$$ \end{proof} \section{The Second Iteration $\big[b, [b, \mathbb{S}^{ij}]\big]$} \label{S:SecondComm} In this section we take a closer look at what happens when $k = 2$, and develop the rest of the tools we need for the general case.
From \eqref{E:CommTh_bDec} and \eqref{E:Comm1_Dec}: $$C_b^2(\mathbb{S}^{ij}) = [b, C_b^1(\mathbb{S}^{ij})] = [\mathfrak{P}_b, C_b^1(\mathbb{S}^{ij})] + \Theta_b\big( C_b^1(\mathbb{S}^{ij}) \big).$$ We show that each term obeys the bounds in Theorem \ref{T:BigThm_Sij}. The first term can be bounded using \eqref{E:Paraprod1WtBds} and Theorem \ref{T:BigThm_Sij} with $k = 1$. In order to analyze the second term, we look at some simple properties of $\Theta_b$ that will be useful. Suppose $S$ and $T$ are some operators. Obviously $\Theta_b$ is linear, that is $\Theta_b(S + cT) = \Theta_b(S) + c\Theta_b(T)$. Moreover: \begin{equation} \label{E:Th_bComp} \Theta_b(ST) = \Theta_b(S)T + S\Theta_b(T). \end{equation} To see this, note that we can write: $\Theta_b(ST)f = \Pi_{S(Tf)}b - S\Pi_{Tf}b + S\Pi_{Tf}b - ST\Pi_f b.$ In turn, this yields \begin{equation} \label{E:Th_bComm} \Theta_b \big( [S, T] \big) = [\Theta_b(S), T] + [S, \Theta_b(T)]. \end{equation} Then \begin{equation} \label{E:Comm2Temp1} \Theta_b\big( C_b^1(\mathbb{S}^{ij})\big) = [\Theta_b(\mathfrak{P}_b), \mathbb{S}^{ij}] + [\mathfrak{P}_b, \Theta_b(\mathbb{S}^{ij})] + \Theta_b^2(\mathbb{S}^{ij}). \end{equation} The second term in the expression is easily controlled using Proposition \ref{P:Comm1} with $k = 1$. For the first term, remark that \begin{equation} \label{E:Th_bGmb} \Theta_b(\Gamma_b) = 0, \end{equation} which is easily seen by verifying that $$\Pi_{\Gamma_bf}b = \Gamma_b \Pi_f b = \sum_{R\in\mathcal{D}} \sum_{\epsilon,\eta\neq 1; \epsilon\neq \eta} \widehat{b}(R,\eta) \widehat{f}(R,\eta) \left\langle b\right\rangle _{R} \frac{1}{\sqrt{|R|}} h_R^{\epsilon+\eta}.$$ So $\Theta_b(\mathfrak{P}_b) = \Theta_b(\Pi_b) + \Theta_b(\Pi^*_b)$, and both these terms can be bounded using Lemma \ref{L:Lambda_ab} with $a = b$. To analyze $\Theta_b^2(\mathbb{S}^{ij})$ and prove Proposition \ref{P:Comm1} for $k = 2$, we again need to look at the cancellative and non-cancellative cases separately. 
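For later use, we record the short computation behind \eqref{E:Th_bComm}: by linearity and \eqref{E:Th_bComp}, $$\Theta_b\big( [S, T] \big) = \Theta_b(ST) - \Theta_b(TS) = \Theta_b(S)T + S\Theta_b(T) - \Theta_b(T)S - T\Theta_b(S) = [\Theta_b(S), T] + [S, \Theta_b(T)].$$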
\subsection{The Cancellative Case.} \label{Ss:Cb2_ij} Using the expression for $\Theta_b(\mathbb{S}^{ij})$ in \eqref{E:ThbSij}, we find that $$\Pi_{\Theta_b(\mathbb{S}^{ij})f} b = \sum_{R\in\mathcal{D}; \epsilon,\eta\neq 1} \sum_{P\in R_{(i)}, Q\in R_{(j)}} a^{\epsilon\eta}_{PQR} \widehat{f}(P, \epsilon) \big( \left\langle b\right\rangle _Q - \left\langle b\right\rangle _P \big) \left\langle b\right\rangle _Q h_Q^{\eta},$$ and $$\Theta_b(\mathbb{S}^{ij}) \Pi_f b = \sum_{R\in\mathcal{D}; \epsilon,\eta\neq 1} \sum_{P \in R_{(i)}, Q \in R_{(j)}} a^{\epsilon\eta}_{PQR} \widehat{f}(P, \epsilon) \big( \left\langle b \right\rangle _Q - \left\langle b \right\rangle _P \big) \left\langle b \right\rangle _P h_Q^{\eta}.$$ Therefore $$\Theta_b^2 (\mathbb{S}^{ij})f = \sum_{R\in\mathcal{D}; \epsilon,\eta\neq 1} \sum_{P\in R_{(i)}, Q\in R_{(j)}} a^{\epsilon\eta}_{PQR} \widehat{f}(P,\epsilon) \big( \left\langle b\right\rangle _Q - \left\langle b\right\rangle _P \big)^2 h_Q^{\eta}.$$ Now consider the operator $$U_{(j)}f \defeq \sum_{Q\in\mathcal{D}; \eta\neq 1} \big( \left\langle b\right\rangle _Q - \left\langle b\right\rangle _{Q^{(j)}} \big) \widehat{f}(Q,\eta) h_Q^{\eta},$$ for a non-negative integer $j$. 
Then $$U_{(j)} \Theta_b(\mathbb{S}^{ij}) f = \sum_{R\in\mathcal{D}; \epsilon,\eta\neq 1} \sum_{P \in R_{(i)}, Q\in R_{(j)}} a^{\epsilon\eta}_{PQR} \widehat{f}(P, \epsilon) \big( \left\langle b\right\rangle _Q - \left\langle b\right\rangle _P \big) \big( \left\langle b\right\rangle _Q - \left\langle b\right\rangle _R \big) h_Q^{\eta},$$ and $$\Theta_b(\mathbb{S}^{ij}) U_{(i)} f = \sum_{R\in\mathcal{D}; \epsilon,\eta\neq 1} \sum_{P \in R_{(i)}, Q\in R_{(j)}} a^{\epsilon\eta}_{PQR} \widehat{f}(P, \epsilon) \big( \left\langle b\right\rangle _Q - \left\langle b\right\rangle _P \big) \big( \left\langle b\right\rangle _P - \left\langle b\right\rangle _R \big) h_Q^{\eta}.$$ So $$\Theta_b^2(\mathbb{S}^{ij}) = U_{(j)} \Theta_b(\mathbb{S}^{ij}) - \Theta_b(\mathbb{S}^{ij}) U_{(i)}.$$ We claim that for any $A_2$ weight $w$: \begin{equation} \label{E:UjBound} \left\| U_{(j)} \right\|_{L^2(w)} \lesssim j [w]_{A_2} \|b\|_{BMO^2_{\mathcal{D}}}. \end{equation} To see this, remark that $$ \left| \left\langle b\right\rangle _Q - \left\langle b\right\rangle _{Q^{(j)}} \right| \leq j 2^n \|b\|_{BMO^2_{\mathcal{D}}},$$ so $U_{(j)}$ can be expressed as $$U_{(j)} = j 2^n \|b\|_{BMO^2_{\mathcal{D}}} T_{\sigma},$$ where $T_{\sigma}$ is a martingale transform. Then \eqref{E:UjBound} follows from \eqref{E:MartTBd}. 
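For completeness, the bound on the averages follows from a standard telescoping argument: writing $\left\langle b\right\rangle _Q - \left\langle b\right\rangle _{Q^{(j)}} = \sum_{l=0}^{j-1} \big( \left\langle b\right\rangle _{Q^{(l)}} - \left\langle b\right\rangle _{Q^{(l+1)}} \big)$ and noting that $|P^{(1)}| = 2^n |P|$ for every dyadic cube $P$, each increment is controlled by $$\left| \left\langle b\right\rangle _{P} - \left\langle b\right\rangle _{P^{(1)}} \right| = \left| \frac{1}{|P|} \int_P \big( b - \left\langle b\right\rangle _{P^{(1)}} \big) \right| \leq \frac{2^n}{|P^{(1)}|} \int_{P^{(1)}} \left| b - \left\langle b\right\rangle _{P^{(1)}} \right| \leq 2^n \|b\|_{BMO^2_{\mathcal{D}}},$$ where the last step uses the Cauchy-Schwarz inequality on $P^{(1)}$. Summing the $j$ increments gives the claim.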
Finally, this and Proposition \ref{P:Comm1} with $k=1$ give that \begin{align} \left\| \Theta_b^2(\mathbb{S}^{ij}) \right\|_{L^p(\mu;\:\lambda)} & \leq \|\Theta_b(\mathbb{S}^{ij})\|_{L^p(\mu;\:\lambda)} \big( \|U_{(i)}\|_{L^p(\mu)} + \|U_{(j)}\|_{L^p(\lambda)} \big) \\ &\lesssim \kappa^2_{ij} C(\mu, \lambda, p) \|b\|_{BMO^2_{\mathcal{D}}(\nu)} \|b\|_{BMO^2_{\mathcal{D}}}, \end{align} and, in the one-weight case, $$\left\| \Theta_b^2(\mathbb{S}^{ij}) \right\|_{L^2(w)} \lesssim \kappa_{ij}^2 \|b\|^2_{BMO^2_{\mathcal{D}}} [w]^3_{A_2}.$$ \subsection{The case $i = j = 0$.} \label{Ss:Cb2_00} From \eqref{E:00HRT}: $$\Theta_b^2(\mathbb{S}^{00}) = \Theta_b^2(\Pi_a) + \Theta_b^2(\Pi^*_{d}).$$ Using the expression in \eqref{E:Th_bPi_a} and the properties of $\Theta_b$ in \eqref{E:Th_bComp} and \eqref{E:Th_bGmb}: \begin{equation} \label{E:Cb00Temp1} \Theta_b^2(\Pi_a) = \Theta_b(\Pi_a) \Pi_b + \Pi_a \Theta_b(\Pi_b) + \Theta_b(\Pi_a) \Gamma_b + \Theta_b(\Lambda_{a,b}) - \Theta_b(\widetilde{\Lambda}_{a, b}). \end{equation} Lemma \ref{L:Lambda_ab} and the paraproduct norms immediately control the first three terms, showing that their norms as operators $L^p(\mu) \to L^p(\lambda)$ are bounded (up to a constant) by $\|a\|_{BMO^2_{\mathcal{D}}} \|b\|_{BMO^2_{\mathcal{D}}} \|b\|_{BMO^2_{\mathcal{D}}(\nu)}$, and their norms as operators $L^2(w) \to L^2(w)$ are bounded (up to a constant) by $\|a\|_{BMO^2_{\mathcal{D}}} \|b\|^2_{BMO^2_{\mathcal{D}}} [w]^3_{A_2}$. For the last two terms, we look at an interesting property of the $\Lambda_{a,b}$ operators: \begin{prop} \label{P:Switch} For any locally integrable functions $a$, $b$, $c$, the operator $\Lambda_{a,b}$ satisfies: \begin{align} \label{E:Switch} \Theta_c(\Lambda_{a,b}) &= \Pi_a \Lambda_{c,b} \\ \text{and } \Theta_c(\widetilde{\Lambda}_{a,b}) &= 0.
\end{align} \end{prop} \begin{proof} To prove the first statement, note that \begin{align} \nonumber \left\langle \Lambda_{a,b}f\right\rangle _Q &= \sum_{\substack{Q,R\in\mathcal{D}; R \supsetneq Q \\ \eta\neq 1}} \widehat{b}(R,\eta) \widehat{f}(R,\eta) \frac{1}{|R|} \sum_{\substack{P\in\mathcal{D}; Q \subsetneq P\subset R \\ \epsilon\neq 1}} \widehat{a}(P,\epsilon) h_P^{\epsilon}(Q)\\ \label{E:SwitchTemp1} &= \sum_{\substack{Q,R\in\mathcal{D}; R \supsetneq Q \\ \eta\neq 1}} \widehat{b}(R,\eta) \widehat{f}(R,\eta) \frac{1}{|R|} \left( \left\langle a\right\rangle _Q - \left\langle a\right\rangle _R \right). \end{align} A quick calculation shows that $$\Theta_c(\Lambda_{a,b}) f = \sum_{\substack{Q\in\mathcal{D} \\ \epsilon\neq 1}} \widehat{a}(Q,\epsilon) \left[ \sum_{\substack{Q,R\in\mathcal{D}; R \supset Q\\ \eta\neq 1}} \widehat{b}(R,\eta) \widehat{f}(R,\eta) \frac{1}{|R|} \big( \left\langle c\right\rangle _Q - \left\langle c\right\rangle _R \big) \right] h_Q^{\epsilon}.$$ From \eqref{E:SwitchTemp1}, we recognize the term in parentheses as $\left\langle \Lambda_{c,b} f \right\rangle _Q$, and so $$\Theta_c(\Lambda_{a,b}) f = \sum_{Q\in\mathcal{D}, \epsilon\neq 1} \widehat{a}(Q,\epsilon) \left\langle \Lambda_{c,b} f \right\rangle _Q h_Q^{\epsilon} = \Pi_a \Lambda_{c,b}f.$$ The second statement follows by $$\Pi_{\widetilde{\Lambda}_{a,b}f}c = \widetilde{\Lambda}_{a,b}\Pi_f c = \sum_{Q\in\mathcal{D}; \epsilon,\eta\neq 1} \widehat{a}(Q,\epsilon) \widehat{b}(Q,\eta) \widehat{f}(Q,\eta) \frac{1}{|Q|} \left\langle c\right\rangle _Q h_Q^{\epsilon}.$$ \end{proof} Returning to \eqref{E:Cb00Temp1}, we can now see that the last two terms in the expression become simply $\Pi_a \Lambda_{b,b}$, which is controlled exactly as the other terms. The result for $\Theta_b^2(\Pi^*_a)$ follows similarly, after noting that $\Theta_b(\Lambda_{b,a}) = \Pi_b \Lambda_{b, a}$. 
Finally, recall the assumptions on the $BMO$ norms of $a$ and $d$ in \eqref{E:00HRT} and see that the results in this section prove Proposition \ref{P:Comm1} for $k = 2$ and $i = j = 0$. \section{The General Case of Higher Iterations} \label{S:General} In this section we prove Theorem \ref{T:BigThm_Sij}. A closer look at recursively expanding the formula $$C_b^{m+1}(T) = [\mathfrak{P}_b, C_b^m(T)] + \Theta_b\big( C_b^m(T) \big),$$ for an operator $T$, shows that, in order to control $C_b^{m+1}(T)$, we must bound not only the previous iterations $C_b^{m+1-k}(T)$, but in fact all of $$C_b^{m-k}(T),\: \Theta_b\big( C_b^{m-k}(T) \big),\: \Theta^2_b\big( C_b^{m-k}(T) \big), \ldots, \Theta^k_b\big( C_b^{m-k}(T) \big),$$ for every $0 \leq k \leq m-1$. So, it makes sense to instead prove the following more general statement: \begin{thm} \label{T:BIGTHM_Shifts} Under the same assumptions as Theorem \ref{T:BigThm_Sij}, for any integer $k \geq 1$: \begin{equation} \label{ThbM_2Wt} \left\| \Theta_b^M\big( C_b^k(\mathbb{S}^{ij}) \big) \right\|_{L^p(\mu;\:\lambda)} \leq c \kappa_{ij}^{M+k} \|b\|_{BMO^2_{\mathcal{D}}}^{M+k-1} \|b\|_{BMO^2_{\mathcal{D}}(\nu)} \text{, for all } M \geq 0, \end{equation} where $c$ is a constant depending on $n$, $\mu$, $\lambda$, $p$, $M$, and $k$. In particular, if $\mu = \lambda = w \in A_2$, \begin{equation} \label{ThbM_1Wt} \left\| \Theta_b^M\big( C_b^k(\mathbb{S}^{ij}) \big) \right\|_{L^2(w)} \leq c \kappa_{ij}^{M+k} \|b\|^{M+k}_{BMO^2_{\mathcal{D}}} [w]_{A_2}^{M+k+1} \text{, for all } M \geq 0, \end{equation} where $c$ is a constant depending on $n$, $M$, and $k$. \end{thm} Theorem \ref{T:BigThm_Sij} will then follow as a special case of the above result, with $M = 0$. We begin by completing the proof of Proposition \ref{P:Comm1}. \begin{proof}[Proof of Proposition \ref{P:Comm1}] So far, this result has been proved for $k = 1$ and $k = 2$. In the case $(i,j) \neq (0,0)$, we generalize the argument in Section \ref{Ss:Cb2_ij}.
We claim that for all $k \geq 2$ and $(i,j) \neq (0,0)$: \begin{align} \label{E:Temp1} \Theta_b^k(\mathbb{S}^{ij}) f &= \sum_{R\in\mathcal{D}; \epsilon,\eta\neq 1} \sum_{P\in R_{(i)}, Q\in R_{(j)}} a^{\epsilon\eta}_{PQR} \widehat{f}(P,\epsilon) \big(\left\langle b\right\rangle _Q - \left\langle b\right\rangle _P\big)^k h_Q^{\eta}\\ \label{E:Temp2} &= U_{(j)} \Theta_b^{k-1}(\mathbb{S}^{ij}) - \Theta_b^{k-1} (\mathbb{S}^{ij})U_{(i)}. \end{align} Then, assuming Proposition \ref{P:Comm1} holds for some $k\geq 1$, the result for $k+1$ follows from \eqref{E:UjBound}. To see this, assume \eqref{E:Temp1} holds for some $k \geq 2$. Then $$\Pi_{\Theta^k_b(\mathbb{S}^{ij})f} b = \sum_{R\in\mathcal{D}; \epsilon,\eta\neq 1} \sum_{P\in R_{(i)}, Q\in R_{(j)}} a^{\epsilon\eta}_{PQR} \widehat{f}(P, \epsilon) \big( \left\langle b\right\rangle _Q - \left\langle b\right\rangle _P \big)^k \left\langle b\right\rangle _Q h_Q^{\eta},$$ and $$\Theta^k_b(\mathbb{S}^{ij}) \Pi_f b = \sum_{R\in\mathcal{D}; \epsilon,\eta\neq 1} \sum_{P \in R_{(i)}, Q \in R_{(j)}} a^{\epsilon\eta}_{PQR} \widehat{f}(P, \epsilon) \big( \left\langle b \right\rangle _Q - \left\langle b \right\rangle _P \big)^k \left\langle b \right\rangle _P h_Q^{\eta}.$$ Since $\Theta_b^{k+1}(\mathbb{S}^{ij})f = \Pi_{\Theta^k_b(\mathbb{S}^{ij})f} b - \Theta^k_b(\mathbb{S}^{ij}) \Pi_f b$, we see that \eqref{E:Temp1} holds for $k+1$. 
Similarly, $$U_{(j)} \Theta^k_b(\mathbb{S}^{ij}) f = \sum_{R\in\mathcal{D}; \epsilon,\eta\neq 1} \sum_{P \in R_{(i)}, Q\in R_{(j)}} a^{\epsilon\eta}_{PQR} \widehat{f}(P, \epsilon) \big( \left\langle b\right\rangle _Q - \left\langle b\right\rangle _P \big)^k \big( \left\langle b\right\rangle _Q - \left\langle b\right\rangle _R \big) h_Q^{\eta},$$ and $$\Theta^k_b(\mathbb{S}^{ij}) U_{(i)} f = \sum_{R\in\mathcal{D}; \epsilon,\eta\neq 1} \sum_{P \in R_{(i)}, Q\in R_{(j)}} a^{\epsilon\eta}_{PQR} \widehat{f}(P, \epsilon) \big( \left\langle b\right\rangle _Q - \left\langle b\right\rangle _P \big)^k \big( \left\langle b\right\rangle _P - \left\langle b\right\rangle _R \big) h_Q^{\eta},$$ from which \eqref{E:Temp2} with $k+1$ follows. For the case $i = j = 0$, $$\Theta_b^k(\mathbb{S}^{00}) = \Theta_b^k(\Pi_a) + \Theta_b^k(\Pi^*_d),$$ with $\|a\|_{BMO^2_{\mathcal{D}}} \lesssim 1$ and $\|d\|_{BMO^2_{\mathcal{D}}} \lesssim 1$. Proposition \ref{P:Comm1} for $i = j =0$ therefore follows trivially from the next result. \end{proof} \begin{prop} \label{P:Th_b^k(P,Lb)} Under the same assumptions as Theorem \ref{T:BigThm_Sij}, let $P_a$ denote either one of the operators $\Pi_a$ and $\Pi^*_a$, and $\Lambda$ denote either one of the operators $\Lambda_{a,b}$ or $\Lambda_{b, a}$. Then for any integer $k \geq 1$: \begin{equation} \label{E:Thbk(P)2Wt} \left\| \Theta_b^k(P_a) \right\|_{L^p(\mu;\:\lambda)} \leq c \|a\|_{BMO^2_{\mathcal{D}}} \|b\|^{k-1}_{BMO^2_{\mathcal{D}}} \|b\|_{BMO^2_{\mathcal{D}}(\nu)}, \end{equation} \begin{equation} \label{E:Thbk(L)2Wt} \left\| \Theta_b^k (\Lambda) \right\|_{L^p(\mu;\:\lambda)} \leq c \|a\|_{BMO^2_{\mathcal{D}}} \|b\|^k_{BMO^2_{\mathcal{D}}} \|b\|_{BMO^2_{\mathcal{D}}(\nu)}, \end{equation} where $c$ is a constant depending on $n$, $p$, $\mu$, $\lambda$, and $k$. 
In particular, if $\mu = \lambda = w \in A_2$: \begin{equation} \label{E:Thbk(P)1Wt} \left\| \Theta_b^k(P_a) \right\|_{L^2(w)} \leq c \|a\|_{BMO^2_{\mathcal{D}}} \|b\|^k_{BMO^2_{\mathcal{D}}} [w]^{k+1}_{A_2}, \end{equation} \begin{equation} \label{E:Thbk(L)1Wt} \left\| \Theta_b^k (\Lambda) \right\|_{L^2(w)} \leq c \|a\|_{BMO^2_{\mathcal{D}}} \|b\|^{k+1}_{BMO^2_{\mathcal{D}}} [w]^{k+2}_{A_2}, \end{equation} where $c$ is a constant depending on $n$ and $k$. \end{prop} \begin{proof} This result with $k = 1$ was proved in Lemma \ref{L:Lambda_ab} for $\Theta_b(P_a)$, and in Section \ref{Ss:Cb2_00} for $\Theta_b(\Lambda)$. We proceed by (strong) induction. Fix $m \geq 1$ and suppose Proposition \ref{P:Th_b^k(P,Lb)} holds for all $1 \leq k \leq m$. We show that it then holds for $k = m + 1$. Let us look at the case $P_a = \Pi_a$: $$\Theta_b^{m+1}(\Pi_a) = \Theta_b^m \big( \Theta_b(\Pi_a) \big) = \Theta_b^m \big(\Pi_a (\Pi_b + \Gamma_b) \big) + \Theta_b^m(\Lambda_{a,b}).$$ Remark that the last term is already controlled by the induction assumption. To analyze the first term, we use the binomial formula: \begin{equation} \label{E:BinomComp} \Theta_b^m(ST) = \sum_{k = 0}^m \binom{m}{k} \Theta_b^{m-k}(S) \Theta_b^k(T), \end{equation} which follows from \eqref{E:Th_bComp} by a simple induction argument. Then \begin{equation} \label{E:Ind1t1} \left\| \Theta_b^m\big( \Pi_a(\Pi_b + \Gamma_b) \big) \right\|_{L^p(\mu;\:\lambda)} \leq \sum_{k = 0}^m \binom{m}{k} \left\| \Theta_b^{m-k}(\Pi_a) \right\|_{L^p(\lambda)} \left\| \Theta_b^k(\Pi_b + \Gamma_b) \right\|_{L^p(\mu;\:\lambda)}. \end{equation} The statement: \begin{equation} \label{E:Ind1t3} \left\| \Theta_b^{m-k} (\Pi_a) \right\|_{L^2(w)} \lesssim C(m-k) \|a\|_{BMO^2_{\mathcal{D}}} \|b\|^{m-k}_{BMO^2_{\mathcal{D}}} [w]_{A_2}^{m-k+1} \text{, } 0 \leq k \leq m, \end{equation} for all $w \in A_2$, follows from the induction assumption on $P_a$ for $0 \leq k \leq m-1$, and from \eqref{E:Paraprod1WtBds} for $k=m$. 
Then, by Extrapolation, \begin{equation} \label{E:Ind1t4} \left\| \Theta_b^{m-k}(\Pi_a) \right\|_{L^p(\lambda)} \lesssim C(\lambda, p, m-k) \|a\|_{BMO^2_{\mathcal{D}}} \|b\|_{BMO^2_{\mathcal{D}}}^{m-k} \text{, } 0\leq k \leq m. \end{equation} On the other hand, noting that $\Theta_b^0(\Pi_b + \Gamma_b) = \Pi_b + \Gamma_b$ and $\Theta_b^k(\Pi_b + \Gamma_b) = \Theta_b^k(\Pi_b)$ for $1 \leq k \leq m$, we have: $$ \left\|\Theta_b^k(\Pi_b + \Gamma_b) \right\|_{L^p(\mu;\:\lambda)} \lesssim C(\mu, \lambda, p, k) \|b\|^k_{BMO^2_{\mathcal{D}}} \|b\|_{BMO^2_{\mathcal{D}}(\nu)} \text{, } 0 \leq k \leq m,$$ which follows from Theorem \ref{T:Paraprod2WtBds} for $k = 0$, and from the induction assumption on $P_a$ with $a = b$ for $1 \leq k \leq m$. Similarly: $$ \left\| \Theta_b^k(\Pi_b + \Gamma_b) \right\|_{L^2(w)} \lesssim C(k) \|b\|_{BMO^2_{\mathcal{D}}}^{k+1} [w]_{A_2}^{k+1} \text{, } 0\leq k \leq m.$$ From these estimates and \eqref{E:Ind1t1}, we obtain $$ \left\| \Theta_b^m\big( \Pi_a(\Pi_b + \Gamma_b) \big) \right\|_{L^p(\mu;\:\lambda)} \lesssim C(\mu, \lambda, p, m) \|a\|_{BMO^2_{\mathcal{D}}} \|b\|^m_{BMO^2_{\mathcal{D}}} \|b\|_{BMO^2_{\mathcal{D}}(\nu)},$$ and $$ \left\| \Theta_b^m\big( \Pi_a(\Pi_b + \Gamma_b) \big) \right\|_{L^2(w)} \lesssim C(m) \|a\|_{BMO^2_{\mathcal{D}}} \|b\|^{m+1}_{BMO^2_{\mathcal{D}}} [w]_{A_2}^{m+2},$$ which proves the result for $k = m+1$ and $P_a = \Pi_a$. The case $P_a = \Pi^*_a$ follows similarly. Now suppose $\Lambda = \Lambda_{a,b}$. Then, by \eqref{E:Switch}, $$ \Theta_b^{m+1}(\Lambda_{a,b}) = \Theta_b^m\big( \Theta_b(\Lambda_{a, b}) \big) = \Theta_b^m(\Pi_a \Lambda_{b,b}).$$ Using the binomial formula again, \begin{equation} \label{E:Ind1t2} \left\| \Theta_b^{m+1}(\Lambda_{a,b}) \right\|_{L^p(\mu;\:\lambda)} \leq \sum_{k=0}^m \binom{m}{k} \left\| \Theta_b^{m-k}(\Pi_a) \right\|_{L^p(\lambda)} \left\| \Theta_b^k(\Lambda_{b,b}) \right\|_{L^p(\mu;\:\lambda)}. 
\end{equation} The statements: $$ \left\| \Theta_b^k(\Lambda_{b,b}) \right\|_{L^p(\mu;\:\lambda)} \lesssim C(\mu,\lambda,p,k) \|b\|^{k+1}_{BMO^2_{\mathcal{D}}} \|b\|_{BMO^2_{\mathcal{D}}(\nu)} \text{, } 0 \leq k \leq m, $$ $$ \left\| \Theta_b^k(\Lambda_{b,b}) \right\|_{L^2(w)} \lesssim C(k) \|b\|^{k+2}_{BMO^2_{\mathcal{D}}} [w]^{k+2}_{A_2} \text{, } 0 \leq k \leq m, $$ follow from the induction assumption on $\Lambda$ with $a = b$ for $1 \leq k \leq m$, and from Lemma \ref{L:Lambda_ab} for $k = 0$. Combining these with \eqref{E:Ind1t3} and \eqref{E:Ind1t4}, we have from \eqref{E:Ind1t2}: $$ \left\| \Theta_b^{m+1}(\Lambda_{a,b}) \right\|_{L^p(\mu;\:\lambda)} \lesssim C(\mu,\lambda,p,m+1) \|a\|_{BMO^2_{\mathcal{D}}} \|b\|^{m+1}_{BMO^2_{\mathcal{D}}} \|b\|_{BMO^2_{\mathcal{D}}(\nu)}, $$ $$ \left\| \Theta_b^{m+1}(\Lambda_{a,b}) \right\|_{L^2(w)} \lesssim C(m+1) \|a\|_{BMO^2_{\mathcal{D}}} \|b\|^{m+2}_{BMO^2_{\mathcal{D}}} [w]^{m+3}_{A_2}, $$ which proves the result for $k = m+1$ and $\Lambda = \Lambda_{a,b}$. Since the case $\Lambda = \Lambda_{b,a}$ follows similarly, Proposition \ref{P:Th_b^k(P,Lb)} is proved. \end{proof} We now have all the tools needed to prove Theorem \ref{T:BIGTHM_Shifts}. \begin{proof}[Proof of Theorem \ref{T:BIGTHM_Shifts}] We prove the result for $k = 1$, so we look at $$\Theta_b^M\big( C_b^1(\mathbb{S}^{ij}) \big) = \Theta_b^M\big( [\mathfrak{P}_b, \mathbb{S}^{ij}] + \Theta_b(\mathbb{S}^{ij}) \big),$$ for some integer $M \geq 0$. At this point, we use another easily deduced binomial formula: \begin{equation} \label{E:BinomComm} \Theta_b^M\big( [S, T] \big) = \sum_{m=0}^M \binom{M}{m} \big[ \Theta_b^{M-m}(S), \Theta_b^m(T) \big].
\end{equation} Then \begin{align*} \left\| \Theta_b^M\big( C_b^1(\mathbb{S}^{ij}) \big) \right\|_{L^p(\mu;\:\lambda)} \leq & \sum_{m=0}^M \binom{M}{m} \left\| \Theta_b^{M-m}(\mathfrak{P}_b) \right\|_{L^p(\mu;\:\lambda)} \big( \|\Theta_b^m(\mathbb{S}^{ij})\|_{L^p(\mu)} + \|\Theta_b^m(\mathbb{S}^{ij})\|_{L^p(\lambda)} \big) \\ & + \left\| \Theta_b^{M+1}(\mathbb{S}^{ij}) \right\|_{L^p(\mu;\:\lambda)}. \end{align*} Applying Proposition \ref{P:Th_b^k(P,Lb)} with $a = b$ and Proposition \ref{P:Comm1}, we obtain the result for $k=1$. Finally, suppose Theorem \ref{T:BIGTHM_Shifts} holds for some $k\geq 1$ and let $M \geq 0$ be an integer. Then $$\Theta_b^M\big( C_b^{k+1}(\mathbb{S}^{ij}) \big) = \Theta_b^M \Big( [\mathfrak{P}_b, C_b^k(\mathbb{S}^{ij})] + \Theta_b\big( C_b^k(\mathbb{S}^{ij}) \big)\Big),$$ and the result for $k+1$ again follows from Propositions \ref{P:Th_b^k(P,Lb)} and \ref{P:Comm1}. \end{proof} \end{document}
\section{Introduction} This paper establishes robustness of an algorithm for recovering a vector $x_0 \in \mathbb{R}^n$ from phaseless linear measurements that contain noise and a constant fraction of gross, arbitrary errors. That is, for fixed measurement vectors $a_i \in \mathbb{R}^n$ for $i = 1 \ldots m$, our task is to find $x_0$ satisfying \begin{align} b_i = | \langle x_0, a_i \rangle|^2 + \varepsilon_i + \eta_i \label{measurements-components} \end{align} for known $b_i \in \mathbb{R}$, known $a_i$, and unknown $\eta_i$ and $\varepsilon_i$. Here $\eta_i$ will represent the noise in the measurements, and $\varepsilon_i$ will represent gross, arbitrary errors. This recovery problem is known as phase retrieval. Measurements of form \eqref{measurements-components} arise in several applications, such as X-ray crystallography, optics, and microscopy \cite{harrison1993phase, millane1990phase,CESV2011}. In such applications, extremely large errors in some measurements may be due to sensor failure, occlusions, or other effects. Ideally, recovery algorithms could provably tolerate a small number of such errors. Recently, researchers have introduced algorithms for the phase retrieval problem that have provable recovery guarantees \cite{CESV2011,CSV2013}. The insight of these methods is that the phase retrieval problem can be convexified by lifting it to the space of matrices. That is, instead of searching for the vector $x_0$, one can search for the lifted matrix $x_0 x_0^t$. The quadratic measurements \eqref{measurements-components} then become linear measurements on this lifted matrix. As the desired matrix is semidefinite and rank-one, one can write a rank minimization problem under the semidefinite and data constraints, which has a convex relaxation known as PhaseLift. 
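The lifting step is a pointwise identity: $|\langle x_0, a_i \rangle|^2 = a_i^t (x_0 x_0^t) a_i$ for every $a_i$. A minimal numerical sketch of this fact follows (the dimensions, seed, and variable names are illustrative only, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 40                                   # illustrative signal length and measurement count
x0 = rng.standard_normal(n)                    # unknown signal
A = rng.standard_normal((m, n))                # rows are the measurement vectors a_i ~ N(0, I_n)

# Phaseless measurements b_i = |<x0, a_i>|^2 (no noise, no gross errors) ...
b = (A @ x0) ** 2

# ... coincide with linear measurements of the lifted matrix X0 = x0 x0^t:
X0 = np.outer(x0, x0)
b_lifted = np.einsum('ij,jk,ik->i', A, X0, A)  # computes a_i^t X0 a_i for each i

assert np.allclose(b, b_lifted)
```

The `einsum` call evaluates the quadratic form $a_i^t X_0 a_i$ for all $i$ at once, which is exactly the linear map on matrices used throughout the paper.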
In the noiseless case, PhaseLift is the program \begin{align} \min_X \text{tr}(X) \text{ subject to } X \succeq 0, \{a_i^t X a_i = b_i\}_{i = 1 \ldots m}. \label{phaselift-trace} \end{align} Here, the trace of $X$ is a convex proxy for the rank of a positive semidefinite $X$. An estimate for the underlying signal $x_0$ can be computed from the leading eigenvector of the optimizer of \eqref{phaselift-trace}. As in \cite{CSV2013, DH2014, CL2012}, we will seek recovery guarantees for independent identically distributed Gaussians $$ a_i \sim \mathcal{N}(0, I_n). $$ Under this data model, \cite{DH2014} and \cite{CL2012} have shown that \eqref{phaselift-trace} can be simplified to the semidefinite feasibility problem \begin{align} \text{find } X \succeq 0 \text{ such that } \{a_i^t X a_i = b_i\}_{i = 1 \ldots m}. \notag \end{align} This feasibility problem succeeds at finding $x_0 x_0^t$ exactly with high probability when $m \geq c n$ for a sufficiently large $c$ \cite{CL2012}. This scaling is quite surprising because there are only $O(n)$ measurements for an $O(n^2)$-dimensional object. As discussed in \cite{DH2014}, the semidefinite cone is sufficiently `pointy' that the high-dimensional affine space of data-consistent matrices intersects the semidefinite cone at exactly one point. In the noisy case without gross errors, that is for $\varepsilon=0$, \cite{CL2012} showed that the PhaseLift variant \begin{align} \min \sum_{i} | a_i^t X a_i - b_i | \text{ subject to } X \succeq 0 \label{phaselift-l1} \end{align} successfully recovers a matrix near $x_0 x_0^t$ with high probability. Specifically, they prove that the solution $\hat{X}$ to \eqref{phaselift-l1} satisfies $\|\hat{X} - x_0 x_0^t\|_\text{F} \leq C_0 \|\eta \|_1/m$ with high probability. From $\hat{X}$, an estimate of $x_0$ can be obtained by $$\hat{x}_0 = \sqrt{\hat{\lambda}_1} \hat{u}_1,$$ where $(\hat{\lambda}_1, \hat{u}_1)$ is the leading eigenvalue and eigenvector pair for $\hat{X}$.
In \cite{CL2012}, the authors prove that $\|\hat{x}_0 - \pm x_0 \| \leq C_0 \min(\|x_0\|, \|\eta\|_1 / (m \|x_0\|) )$ for some $C_0$. The contribution of the present paper is to show that the program \eqref{phaselift-l1} is additionally robust against a constant fraction of arbitrary errors. For a fixed set of coefficients that contain gross errors, we show that approximate recovery succeeds with high probability for arbitrary signals and arbitrary values of the gross errors. \begin{theorem} \label{theorem-exact-recovery} There exist positive numbers $f_\text{min}, \gamma, c, C, C'$ such that the following holds. Let $m \geq cn$. Fix a set $S \subset \{1 \ldots m\}$ such that $|S|/m \leq f_\text{min}$. On an event of probability at least $1 - e^{-\gamma m}$, for any $x_0 \in \mathbb{R}^n$ and for any $\varepsilon$ with $\mbox{supp}(\varepsilon) \subseteq S$, the minimizer $\hat{X}$ to \eqref{phaselift-l1} satisfies $$\|\hat{X} - x_0 x_0^t\|_\text{F} \leq C \frac{\|\eta\|_1}{m}.$$ The resulting estimate for $x_0$ satisfies $$\| \hat{x}_0 - \pm x_0 \| \leq C' \min \left( \|x_0\|, \frac{\| \eta\|_1}{m \|x_0\|} \right).$$ \end{theorem} Note that this high-probability result is universal over $x_0$ and $\varepsilon$; it does not apply merely for a fixed signal or a fixed error vector $\varepsilon$. In the case of no gross errors, in which $\varepsilon = 0$, this theorem reduces to the result in \cite{CL2012} mentioned above. In the noiseless case, in which $\eta = 0$, the theorem guarantees exact recovery of $x_0$ with high probability under a linear number of measurements, of which a constant fraction are corrupted. We now explore the optimality of this theorem. The scaling of $m$ versus $n$ is information theoretically optimal and has no unnecessary logarithmic factors. The noise scaling is the same as in \cite{CL2012}, and its optimality was established there.
For arbitrary errors, the allowed fraction of gross errors cannot be extended to $f_\text{min} \geq 1/2$, because one could build a problem where half of the measurements are due to an $x_0$ and the other half are due to some $x_1$. In such a case, recovery would be impossible. \subsection{Relation to Robust PCA} Much recent work in matrix completion has studied the recovery of a low-rank matrix from arbitrary corruptions to its entries, known as robust Principal Component Analysis (PCA). Results in this framework typically involve measuring some of the entries of a low-rank $n \times n$ matrix $X$ and assuming that some fraction of those measurements are arbitrarily corrupted, giving the data matrix $A$. The matrix $X$ can then be recovered under certain conditions by a sparse-plus-low-rank convex program: \begin{align} \min \lambda \|X\|_* + \|E\|_1 \text{ such that } \mathcal{P}(X + E) = \mathcal{P}(A) \label{sparse-plus-low-rank} \end{align} where $\|X\|_*$ is the nuclear norm of $X$, $\lambda$ is a constant, $\|E\|_1$ is the $\ell_1$ norm of the vectorization of $E$, and $\mathcal{P}$ is the projection of a matrix onto the observed entries \cite{CLMW2011, CSPW2011, HKZ2011, WGRPM2009, GWLCM2010, CJSC2013, Li2013}. Results from this formulation have been quite surprising. For example, under an appropriate choice of $\lambda$ and under an incoherence assumption, the sparse-plus-low-rank decomposition succeeds for sufficiently low rank $X$ when $O(n^2)$ entries are measured and a small fraction of them have arbitrary errors \cite{CLMW2011}. Subsequent results require only $m \gtrsim rn \polylog(n)$ measurements, where $r$ is the rank of $X$ \cite{CJSC2013, Li2013}. This is information theoretically optimal except for the polylogarithmic factor. The present paper can be viewed as a rank-one semidefinite robust PCA problem under generic rank-one measurements.
From this perspective, we would naturally formulate the PhaseLift problem under gross errors by \begin{align} \min \lambda \tr(X) + \sum_{i} | a_i^t X a_i - b_i | \text{ subject to } X\succeq 0. \label{phaselift-l1-trace} \end{align} The present paper shows that explicit rank penalization by the trace term is not fundamental for exact recovery in the presence of arbitrary errors. That is, \eqref{phaselift-l1-trace} can be simplified by taking $\lambda = 0$. The resulting program has no free parameters that need to be explicitly tuned. As in \cite{DH2014, CL2012}, the positive semidefinite cone provides enough of a constraint to enforce low-rankness. The present paper also shows that rank-one matrix completion can succeed under an information theoretically optimal data scaling. Specifically, the extra logarithmic factors from low-rank matrix completion and robust PCA do not appear in Theorem \ref{theorem-exact-recovery}. The present work also differs from the standard robust PCA literature in that the measurements are generic and are not direct samples of the entries of the unknown matrix. \subsection{Numerical simulation} We now explore the empirical performance of \eqref{phaselift-l1} by numerical simulation. Let the signal length $n$ vary from 5 to 50, and let the number of measurements $m$ vary from 10 to 250. Let $x_0 = e_1$. For each $(n,m)$, we consider measurements such that \begin{align} \begin{cases} b_i \sim \text{Uniform}([0, 10^4]) & \text{ if } 1 \leq i \leq \lceil 0.05 m \rceil, \\ b_i = |\langle x_0, a_i \rangle |^2 & \text{ otherwise.} \end{cases} \end{align} We attempt to recover $x_0 x_0^t$ by solving \eqref{phaselift-l1} using the SDPT3 solver \cite{TTT1999, TTT2003} and YALMIP \cite{L2004}. For a given optimizer $\hat{X}$, define the capped relative error as $$\min(\|\hat{X} - x_0 x_0^t\|_\text{F} / \|x_0 x_0^t\|_\text{F}, 1).$$ Figure \ref{figure-phase-transition} plots the average capped relative error over 10 independent trials. 
It provides empirical evidence that the matrix recovery problem \eqref{phaselift-l1} succeeds under a linear number of measurements, even when a constant fraction of them contain very large errors. \begin{figure}[h] {\large \input{phasetransition.tex} } \caption{Recovery error for the PhaseLift matrix recovery problem \eqref{phaselift-l1} as a function of $n$ and $m$, when 5\% of measurements contain large errors. Black represents an average recovery error of 100\%. White represents zero average recovery error. Each block corresponds to the average from 10 independent trials. The solid curve depicts when the number of measurements equals the number of degrees of freedom in a symmetric $n \times n$ matrix. The number of measurements required for successful recovery appears to be linear in $n$, even with a small fraction of large errors. } \label{figure-phase-transition} \end{figure} \section{Proofs} Let $\mathcal{A}: \mathcal{S}^n \to \mathbb{R}^m$ be defined by the mapping $X \mapsto (a_i^t X a_i)_{i = 1 \ldots m}$, where $\mathcal{S}^n$ is the space of symmetric real-valued $n \times n$ matrices. Note that $\mathcal{A}^* \lambda = \sum_i \lambda_i a_i a_i^t$. Let $\A_S$ be the restriction of $\mathcal{A}$ onto the coefficients given by the set $S$. Let $e_i$ be the $i$th standard basis vector. Let $X_0 = x_0 x_0^t$. We can write the measurements \eqref{measurements-components} as $$b = \mathcal{A} X_0 + \varepsilon + \eta.$$ Similarly, the optimization program \eqref{phaselift-l1} can be written as $$ \min \|\mathcal{A} X - b \|_1 \text{ such that } X \succeq 0.$$ We introduce the following notation. Let $\|X\|_1$ be the nuclear norm of the matrix $X$. When $X \succeq 0$, $\|X\|_1 = \tr(X)$. Denote the Frobenius and spectral norms of $X$ as $\|X\|_\text{F}$ and $\|X\|$, respectively. Given $x_0$, let $T_{x_0} = \{ y x_0^t + x_0 y^t \mid y \in \mathbb{R}^n \}$. Note that $T_{e_1}$ is the space of symmetric matrices supported on their first row and column. 
The orthogonal complement $T_{e_1}^\perp$ is then the space of matrices supported in the lower-right $(n-1) \times (n-1)$ block. When $x_0$ is clear, we will simply write $T$ instead of $T_{x_0}$. Let $I$ be the identity matrix, and let $\mathbbm{1}(E)$ be the indicator function of the event $E$. \subsection{Recovery by dual certificates} The proof of Theorem \ref{theorem-exact-recovery} will be based on dual certificates, as in \cite{CSV2013, DH2014, CL2012}. A dual certificate is an optimal variable for the dual problem to \eqref{phaselift-l1}. Its existence certifies the correctness of a minimizer to \eqref{phaselift-l1}. The first-order optimality conditions at $X_0$ for \eqref{phaselift-l1} are given by \begin{align} \tilde{Y} &= \mathcal{A}^* \tilde{\lambda} \\ \tilde{\lambda} &\in \partial \|\cdot \|_1(-\varepsilon)\\ \tilde{Y} &\succeq 0 \label{psd-constraint}\\ \langle \tilde{Y}, X_0 \rangle &= 0 \label{slackness-constraint} \end{align} where $\partial \|\cdot \|_1(-\varepsilon)$ is the subdifferential of the $\ell_1$ norm evaluated at $-\varepsilon$. Note that \eqref{psd-constraint} and \eqref{slackness-constraint} imply $\tilde{Y}_T = 0$. Such a $\tilde{Y}$ would be a dual certificate for \eqref{phaselift-l1}. Unfortunately, constructing such a $\tilde{Y}$ that exactly satisfies these conditions is difficult. As in \cite{CSV2013, DH2014, CL2012}, we seek an inexact dual certificate, which approximately satisfies these conditions. Specifically, we will build a dual certificate $Y = \mathcal{A}^* \lambda$ that satisfies \begin{align} &{Y}_{T^\perp} \succeq {I}_{T^\perp} \label{ytp-condition}\\ &\| {Y}_{T}\|_\text{F} \leq 1/2 \label{yt-condition} \\ &\begin{cases}\lambda_i = - \frac{7}{m} \sgn(\varepsilon_i) & \text{ if } \varepsilon_i \neq 0 \\ |\lambda_i| \leq \frac{7}{m} & \text{ if } \varepsilon_i = 0.
\end{cases} \label{lambda-condition} \end{align} To prove that the existence of such a $Y$ will guarantee successful recovery of $x_0 x_0^t$ with high probability, we will rely on two technical lemmas. The first technical lemma provides $\ell_1$-isometry bounds on $\mathcal{A}$ and was proven in \cite{CSV2013}. \begin{lemma}[\cite{CSV2013}]\label{lemma-l1-isometry} There exist constants $c_0, \gamma_0$ such that if $m \geq c_0 n$, then with probability at least $1 - e^{-\gamma_0 m}$, \begin{align} \frac{1}{m} \|\mathcal{A}(X) \|_1 &\leq \left(1 + \frac{1}{16} \right) \| X \|_1 \text{ for all } X, \\ \frac{1}{m} \|\mathcal{A}(X) \|_1 &\geq 0.94 \left(1 - \frac{1}{16}\right) \|X\| \text{ for all } \text{symmetric, rank-2 } X \end{align} \end{lemma} We will need simultaneous control of the $\ell_1$-isometry properties over several subsets of measurements. \begin{lemma}\label{lemma-l1-isometry-subsets} There exists a constant $\tilde{\gamma}_0$ such that the following holds. Let $m \geq 100 c_0 n$, and fix a support set $S$ with $|S| = \lceil 0.01 m \rceil$. There is an event $E_S$ with probability at least $1 - e^{-\tilde{\gamma}_0 m}$ on which \begin{align} \frac{1}{|S^c|} \| \A_{S^c} X \|_1 &\geq 0.94 \left( 1 - \frac{1}{16} \right) \|X\| \text{ for all symmetric rank-2 } X,\\ \frac{1}{|S|} \| \A_S X\|_1 &\leq \left( 1 + \frac{1}{16} \right) \| X\|_1 \text{ for all } X,\\ \frac{1}{m} \| \mathcal{A} X\|_1 &\leq \left( 1 + \frac{1}{16} \right) \| X\|_1 \text{ for all } X. \end{align} \end{lemma} The proof of Lemma \ref{lemma-l1-isometry-subsets} is immediate from Lemma \ref{lemma-l1-isometry}. In order to prove that a dual certificate guarantees recovery, we establish a technical result that an optimal solution $X_0+H$ to \eqref{phaselift-l1} lies near the cone $\|{H}_{T^\perp}\|_1 \leq \frac{1}{2} \| {H}_{T}\|_\text{F}$ with high probability. This property is a strong version of injectivity on $T$.
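Before turning to that lemma, note that the adjoint relation $\mathcal{A}^* \lambda = \sum_i \lambda_i a_i a_i^t$ and the bounds of Lemma \ref{lemma-l1-isometry} are easy to probe numerically. The following sketch (our illustration; the dimensions are arbitrary) checks the adjoint identity $\langle \mathcal{A}X, \lambda \rangle = \langle X, \mathcal{A}^*\lambda \rangle$ and both isometry bounds on a random symmetric rank-two matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 10, 2000
a = rng.standard_normal((m, n))

def A_op(X):
    # A(X) = (a_i^t X a_i)_{i=1..m}
    return np.einsum('ij,jk,ik->i', a, X, a)

def A_adj(lam):
    # A*(lam) = sum_i lam_i a_i a_i^t
    return (a * lam[:, None]).T @ a

# adjoint identity <A(X), lam> = <X, A*(lam)>
M = rng.standard_normal((n, n)); M = M + M.T
lam = rng.standard_normal(m)
assert np.isclose(A_op(M) @ lam, np.sum(M * A_adj(lam)))

# l1-isometry bounds on a symmetric rank-2 matrix with eigenvalues +1 and -1
q = np.linalg.qr(rng.standard_normal((n, 2)))[0]
X2 = np.outer(q[:, 0], q[:, 0]) - np.outer(q[:, 1], q[:, 1])
val = np.abs(A_op(X2)).mean()           # (1/m) ||A(X2)||_1
assert val >= 0.94 * (1 - 1/16) * 1.0   # spectral norm of X2 is 1
assert val <= (1 + 1/16) * 2.0          # nuclear norm of X2 is 2
```

Here the empirical value of $\frac{1}{m}\|\mathcal{A}(X)\|_1$ concentrates near $\mathbb{E}|z_1^2 - z_2^2| = 4/\pi$ for this choice of $X$, comfortably inside both bounds.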
\begin{lemma} \label{lemma-feasible-cone} Fix a support set $S$ with $|S| = \lceil 0.01 m \rceil$ and let $m \geq 100 c_0 n$. On the event $E_S$ from Lemma \ref{lemma-l1-isometry-subsets}, for all $x_0$ and for all $\varepsilon$ with supp$(\varepsilon) \subseteq S$, any optimal $X_0 + H$ satisfies $\|{H}_{T^\perp} \|_1 \geq 0.56 \| {H}_{T} \|_\text{F} - \frac{2}{m}\|\eta\|_1$. \end{lemma} \begin{proof} By assumption, $\| \mathcal{A} H - \varepsilon - \eta \|_1 \leq \|\varepsilon + \eta\|_1$. By the additivity of the $\ell_1$ norm over vectors with disjoint supports, \begin{align} \|\A_{S^c} H - \eta_{\Sc} \|_1 + \|\A_S H - \varepsilon - \eta_S\|_1 \leq \|\varepsilon + \eta_S\|_1 + \|\eta_{\Sc}\|_1. \end{align} By the triangle inequality, we have \begin{align} &\| \A_{S^c} H - \eta_{\Sc} \|_1 + \|\varepsilon + \eta_S\|_1 - \| \A_S H \|_1 \leq \| \varepsilon + \eta_S \|_1 + \| \eta_{\Sc}\|_1, \end{align} which implies \begin{align} &\| \A_{S^c} H\|_1 \leq 2 \| \eta_{\Sc}\|_1 + \| \A_S H \|_1 \label{asch-vs-ash} \end{align} Breaking \eqref{asch-vs-ash} into its components on $T$ and $T^\perp$ and applying the triangle inequality, we have \begin{align} &\|\A_{S^c} {H}_{T} \|_1 - \| \A_{S^c} {H}_{T^\perp} \|_1 \leq 2 \| \eta_{\Sc}\|_1 + \| \A_S {H}_{T} \|_1 + \| \A_S {H}_{T^\perp}\|_1\\ \Rightarrow &\|\A_{S^c} {H}_{T}\|_1 \leq 2 \| \eta_{\Sc}\|_1 + \| \A_S {H}_{T}\|_1 + \| \mathcal{A} {H}_{T^\perp} \|_1 \label{ascht-vs-asht-ahtp} \end{align} We now apply the $\ell_1$ isometry bounds from Lemma \ref{lemma-l1-isometry} on each term of \eqref{ascht-vs-asht-ahtp}. On the event $E_S$, \begin{align} \| \A_{S^c} {H}_{T} \|_1 &\geq 0.94 \left(1 - \frac{1}{16} \right) |S^c| \|{H}_{T}\| \\ &\geq 0.94 \left(1 - \frac{1}{16}\right) \frac{|S^c|}{\sqrt{2}} \| {H}_{T}\|_\text{F}, \label{ascht-bound} \end{align} where the second inequality follows because ${H}_{T}$ has rank at most 2.
On the event $E_S$, \begin{align} \|\A_S {H}_{T} \|_1 &\leq |S| \left(1 + \frac{1}{16}\right) \|{H}_{T}\|_1 \\ &\leq |S| \left(1 + \frac{1}{16} \right) \sqrt{2} \|{H}_{T}\|_\text{F} \end{align} On the event $E_S$, \begin{align} \|\mathcal{A} {H}_{T^\perp}\|_1 \leq m \left(1 + \frac{1}{16}\right) \| {H}_{T^\perp} \|_1 \label{ahtp-bound} \end{align} Combining \eqref{ascht-vs-asht-ahtp}--\eqref{ahtp-bound}, we have \begin{align} \left(\frac{0.94}{\sqrt{2}} \frac{\left(1 - \frac{1}{16} \right)}{\left( 1 + \frac{1}{16} \right)} \frac{|S^c|}{m} - \sqrt{2} \frac{|S|}{m} \right) \|{H}_{T}\|_\text{F} \leq \frac{2}{m} \|\eta_{\Sc}\|_1 + \| {H}_{T^\perp} \|_1 \end{align} Thus, $0.56 \| {H}_{T} \|_F \leq \frac{2}{m} \|\eta \|_1 + \| {H}_{T^\perp}\|_1$ on the event $E_S$. \end{proof} We may now prove that existence of an inexact dual certificate \eqref{ytp-condition}--\eqref{lambda-condition} will guarantee successful recovery of a matrix near $X_0 = x_0 x_0^t$ with high probability, provided that there are few enough arbitrary errors. \begin{lemma} \label{lemma-recovery-with-dual-certificate} There exists a $C$ such that the following holds. Fix $S$ such that $|S| = \lceil 0.01 m\rceil$. Let $m \geq 100 c_0 n$. Fix $x_0 \in \mathbb{R}^n$. Fix $\varepsilon \in \mathbb{R}^m$ such that $\mbox{supp}(\varepsilon) \subseteq S$. Suppose that there exists $Y = \mathcal{A}^* \lambda$ satisfying \eqref{ytp-condition}--\eqref{lambda-condition}. Then, on the event $E_S$ from Lemma \ref{lemma-l1-isometry-subsets}, a minimizer $\hat{X}$ of \eqref{phaselift-l1} satisfies $\|\hat{X} - X_0\|_\text{F} \leq C\frac{\|\eta\|_1}{m}$. \end{lemma} \begin{proof} Let $\hat{X} = X_0 + H$ be a minimizer for \eqref{phaselift-l1}, which implies that $\|\mathcal{A}(X_0 + H) - b \|_1 \leq \| \mathcal{A}(X_0) - b\|_1$. 
That is $$ \|\mathcal{A} H - \varepsilon - \eta\|_1 \leq \|\varepsilon + \eta\|_1.$$ Letting $\alpha = 7/m$, condition \eqref{lambda-condition} gives \begin{align} \lambda/\alpha \in \partial \|\cdot\|_1(-\varepsilon) \end{align} Hence, \begin{alignat}{2} && \|-\varepsilon\|_1 + \langle \lambda/\alpha, \mathcal{A} H - \eta \rangle &\leq \|\varepsilon\|_1 + \|\eta\|_1 \\ &\Rightarrow &\langle \lambda, \mathcal{A} H - \eta \rangle &\leq \alpha \|\eta\|_1 \\ &\Rightarrow &\langle Y, H \rangle &\leq \langle \lambda, \eta \rangle + \alpha \|\eta\|_1 \\ &\Rightarrow &\langle Y, H \rangle &\leq 2 \alpha \|\eta\|_1 \label{yh-negative} \end{alignat} Decomposing \eqref{yh-negative} into $T$ and $T^\perp$, we have \begin{align} \langle {Y}_{T}, {H}_{T} \rangle \leq - \langle {Y}_{T^\perp}, {H}_{T^\perp} \rangle + 2 \alpha \|\eta\|_1 \end{align} As ${Y}_{T^\perp} \succeq 0$ and ${H}_{T^\perp} \succeq 0$, we have \begin{align} \langle {Y}_{T^\perp}, {H}_{T^\perp} \rangle \leq | \langle {Y}_{T}, {H}_{T} \rangle | + 2 \alpha \|\eta\|_1 \end{align} By conditions \eqref{ytp-condition}--\eqref{yt-condition} \begin{align} \|{H}_{T^\perp} \|_1 \leq | \langle {Y}_{T^\perp}, {H}_{T^\perp}\rangle | \leq | \langle {Y}_{T}, {H}_{T} \rangle | + 2 \alpha \|\eta\|_1 \leq \frac{1}{2} \|{H}_{T}\|_F + 2 \alpha \|\eta\|_1 \label{cone-condition-first} \end{align} By Lemma \ref{lemma-feasible-cone}, on the event $E_S$, \begin{align} \|{H}_{T^\perp} \|_1 \geq 0.56 \| {H}_{T} \|_\text{F} - \frac{2}{m} \|\eta\|_1. 
\label{cone-condition} \end{align} Combining \eqref{cone-condition-first} and \eqref{cone-condition}, and using $\alpha = 7/m$, we get \begin{align} 0.56 \|{H}_{T}\|_\text{F} \leq \frac{2}{m} \| \eta\|_1 + 0.5 \| {H}_{T}\|_\text{F} + \frac{14}{m} \| \eta\|_1 \end{align} So, \begin{align} \|{H}_{T}\|_\text{F} \leq \frac{C'}{m} \| \eta\|_1 \text{ and thus } \|{H}_{T^\perp}\|_\text{F} \leq \frac{C''}{m} \| \eta\|_1 \end{align} We conclude $\|\hat{X} - X_0\|_\text{F} = \|H\|_\text{F} \leq \frac{C}{m} \|\eta\|_1$ for some $C$. \end{proof} \subsection{Construction of the dual certificate} We now construct the dual certificate for arbitrary $x_0$. Our construction will be a modification of the dual certificate in \cite{CL2012}. Similarly to \cite{CL2012}, we will build dual certificates with high probability on a net of $x_0$. We will then use a continuity argument to get a dual certificate for an arbitrary $x_0$. Let $S^+$ and $S^-$ be disjoint supersets of the indices over which $\varepsilon$ is positive or negative, respectively. Let $S = S^+ \cup S^-$. For pedagogical purposes, $S^+$ and $S^-$ should be thought of as exactly the indices over which $\varepsilon$ is positive or negative. For technical reasons, we let them be supersets of cardinality linear in $n$, in order to use standard probability bounds. For a fixed choice of $S^+$ and $S^-$, let the inexact dual certificate $Y$ be defined by \begin{align} Y = \frac{1}{m} \Bigl[ \sum_{i \in S^+} -7 a_i a_i^t + \sum_{i \in S^-} 7 a_i a_i^t + \sum_{i \in S^c} [\beta_0 - |\langle a_i, \frac{x_0}{\|x_0\|}\rangle|^2 \mathbbm{1}(|\langle a_i, \frac{x_0}{\|x_0\|} \rangle | \leq 3 )] a_i a_i^t \Bigr] \label{dual-certificate} \end{align} where $\beta_0 = \mathbb{E} z^4 \mathbbm{1}(|z| \leq 3) \approx 2.6728$ and $z$ is a standard normal random variable. We will refer to each of the terms on the right hand side of \eqref{dual-certificate} as $\Y_+, \Y_-$, and $\Y_0$, respectively.
The form of $\Y_0$ is due to \cite{CL2012}, and the intuition behind it is as follows. Note that $\mathbb{E} ( a_i a_i^t) = I_n$ and $\mathbb{E} ( |\langle a_i, e_1 \rangle|^2 a_i a_i^t) = \begin{pmatrix}\tilde{\beta}_0 & 0 \\ 0 & I_{n-1} \end{pmatrix}$, where $\tilde{\beta}_0 = \mathbb{E} z^4$ for a standard normal $z$. The construction $\frac{1}{m} \sum_{i=1}^m [\tilde{\beta}_0 - |\langle a_i, e_1\rangle|^2 ] a_i a_i^t $ thus has expected value $(\tilde{\beta}_0 - 1) {I}_{T^\perp}$, which would provide the exact dual certificate conditions \eqref{psd-constraint}--\eqref{slackness-constraint}. As shown in \cite{CL2012}, a satisfactory inexact dual certificate can be built with $m = O(n)$ and coefficients that are truncated to be no larger than $7/m$. In the present formulation, the terms $\Y_+$ and $\Y_-$ are then set to have coefficients $\mp 7/m$ in order to satisfy \eqref{lambda-condition}. We now show that for a fixed signal and a fixed pattern of signs of $\varepsilon$, a dual certificate exists with high probability. \begin{lemma} \label{lemma-dual-certificate-construction} There exist constants $c, \gamma^*$ such that the following holds. Let $m \geq c n$. Fix $x_0 \in \mathbb{R}^n$ and $\varepsilon \in \mathbb{R}^m$. Let $S^+$ and $S^-$ be fixed disjoint sets of cardinality $\lceil 0.001 m \rceil$. Then the dual certificate $Y$ from \eqref{dual-certificate} satisfies \eqref{lambda-condition}, $\|{Y}_{T}\|_\text{F} \leq 1/4$, and $\|{Y}_{T^\perp} - \frac{17}{10} {I}_{T^\perp} \| \leq \frac{3}{10}$ with probability at least $1 - e^{-\gamma^* m}$. \end{lemma} \begin{proof} Without loss of generality, we may assume $x_0 = e_1$. It suffices to show that with high probability \begin{align} \left \| \Y_{0, \Tp} - \frac{17}{10} {I}_{T^\perp} \right \| &\leq 0.15, \label{YoTp-bound} \\ \| \Y_{0, T}\|_\text{F} &\leq \frac{3}{20}, \label{YoT-bound}\\ \| \Y_{\pm, \Tp}\| &\leq 0.015, \label{YplusTp-bound}\\ \| \Y_{\pm, T}\|_\text{F} &\leq 0.035.
\label{YplusT-bound} \end{align} First, we establish \eqref{YoTp-bound}--\eqref{YoT-bound}. By Lemma 2.3 in \cite{CL2012}, there exist $\tilde{c}, \tilde{\gamma}$ such that if $|S^c| \geq \tilde{c} n$, then with probability at least $1-e^{-\tilde{\gamma} |S^c|},$ \begin{align} \left \| \frac{m}{|S^c|} \Y_{0, \Tp} - \frac{17}{10} {I}_{T^\perp} \right \| &\leq 1/10, \text{ and }\\ \left \| \frac{m}{|S^c|} \Y_{0, T} \right \| &\leq 3/20. \label{yot-bound-two} \end{align} Thus, \begin{align} \left \|\Y_{0, \Tp} - \frac{17}{10} {I}_{T^\perp} \right \| &\leq \left \| \Y_{0, \Tp} - \frac{|S^c|}{m} \frac{17}{10} {I}_{T^\perp} \right \| \notag + \left( 1 - \frac{|S^c|}{m} \right) \frac{17}{10} \leq 0.15 \end{align} which establishes \eqref{YoTp-bound}. By \eqref{yot-bound-two}, we get \eqref{YoT-bound} immediately. Next, we establish \eqref{YplusTp-bound}. Let $a_i'$ be the vector formed by the last $n-1$ components of $a_i$. Observe that $\sum_{i \in S^+} a_i' a_i'^t$ is a Wishart random matrix. Standard estimates for singular values of random matrices with Gaussian i.i.d. entries, such as Corollary 5.35 in \cite{V2012}, apply. If $|S^+| = \lceil 0.001 m \rceil \geq \tilde{c}_0 n$, with probability at least $1 - e^{-\tilde{\gamma}_1 m}$ for some $\tilde{\gamma}_1$, $$ \left \| \frac{1}{|S^+|} \sum_{i \in S^+} a_i' a_i'^t - {I}_{T^\perp} \right \| \leq 1/2 $$ Hence, $$ \left \| \frac{1}{|S^+|} \sum_{i \in S^+} a_i' a_i'^t \right \| \leq 3/2 $$ Thus, $ \| \Y_{+, \Tp} \| \leq 7 \cdot \frac{3}{2} \frac{|S^+|}{m} \leq 0.0105, $ and we arrive at \eqref{YplusTp-bound}. The bound for the $\Y_{-, \Tp}$ term is identical. Next, we establish \eqref{YplusT-bound}. Note that $\Y_+ = -7 \frac{|S^+|}{m} \cdot \frac{1}{|S^+|} \sum_{i \in S^+} a_i a_i^t$. As per Lemma \ref{lemma-aiai-bounds}, if $|S^+| = \lceil 0.001 m \rceil \geq \tilde{c}_1 n$, then $\| \Y_{+, T}\|_\text{F} \leq 7 \frac{|S^+|}{m} \cdot 5 \leq 7 \cdot 0.001 \cdot 5$, with probability at least $1 - e^{-\tilde{\gamma}_1 m}$.
Thus \eqref{YoTp-bound}--\eqref{YplusT-bound} hold simultaneously with probability at least $1 - e^{-\gamma^* m}$ for some $\gamma^*$ provided that $c \geq {\max(2 \tilde{c}, 1000 \tilde{c}_0, 1000 \tilde{c}_1)}.$ \end{proof} The behavior of $\Y_{\pm, T}$ relies on the following probability estimate for the behavior of a Gaussian Wishart matrix on $T$. \begin{lemma} \label{lemma-aiai-bounds} Let $x_0 = e_1$. Let $A = \frac{1}{m} \sum_{i=1}^m a_i a_i^t$. There exist $\tilde{c}_1, \tilde{\gamma}_1$ such that if $m \geq \tilde{c}_1 n$ then $\| A_T \|_\text{F} \leq 5$ with probability at least $1 - e^{-\tilde{\gamma}_1 m}$. \end{lemma} \begin{proof} Let $y$ be the $(1,1)$ entry of $A$, and let $y'$ be the rest of the first column of $A$. Then $\|A_T\|_\text{F}^2 = y^2 + 2\|y'\|_2^2.$ Here $y = \frac{1}{m} \sum_{i=1}^m a_i(1)^2$. Hence, $my \sim \chi^2_m$. Standard results on the concentration of chi-squared variables, such as Lemma 1 in \cite{LM2000}, give $$ \mathbb{P}(my \geq 4 m) \leq e^{-\tilde{\gamma}_{1,2} m} $$ for some $\tilde{\gamma}_{1,2}$. Hence $\mathbb{P}(y^2 \geq 16) \leq e^{-\tilde{\gamma}_{1,2} m}$. Now, it remains to bound $\|y'\|_2^2$. We can write $y' = \frac{1}{m} Z' c$, where $Z' = [a'_1, \ldots, a'_m]$ and $c_i = a_i(1)$, where $Z'$ and $c$ are independent. Note that $\|c\|_2^2$ is a chi-squared random variable with $m$ degrees of freedom. Hence, with probability at least $1-e^{- \tilde{\gamma}_{1, 2} m}$, $$ \|c\|_2^2 \leq 4m. $$ For fixed $\|x\|_2=1$, $\|Z'x\|_2^2 \sim \chi^2_{n-1}$, and hence with probability at least $1 - e^{-\tilde{\gamma}_{1,3} m}$ $$ \|Z' x \|_2^2 \leq m $$ when $m \geq \tilde{c}_1 n$. Hence, $ m^2 \| y' \|_2^2 \leq 4 m \cdot m$ with probability at least $1 -2 e^{-\tilde{\gamma}_{1,4} m}$. So, $\|y'\|_2 < 2$ with probability at least $1 -2 e^{-\tilde{\gamma}_{1,4} m}$. So $\|A_T\|_\text{F}^2 \leq 16 + 2 \cdot 4 = 24 \leq 25$, and hence $\|A_T\|_\text{F} \leq 5$ with probability at least $1 - e^{-\tilde{\gamma}_1 m}$ for some $\tilde{\gamma}_1$.
\end{proof} We now show that for a fixed signal and support set of gross errors, there is a high probability that a dual certificate exists simultaneously for all gross errors. \begin{lemma} \label{dual-certificate-over-all-errors} Fix $x_0$ and a support set $S$. If $m \geq cn$ and $|S|/m \leq \min(0.001, \gamma^*/ (2 \log 2))$, then there is an event $\tilde{E}_{S,x_0}$ on which for all $\varepsilon$ with $\mbox{supp}(\varepsilon) \subseteq S$, there exists a $Y$ satisfying \eqref{lambda-condition}, $\|{Y}_{T}\|_\text{F} \leq 1/4$, and $\| {Y}_{T^\perp} - \frac{17}{10} {I}_{T^\perp} \| \leq 3/10$. The probability of $\tilde{E}_{S,x_0}$ is at least $1 - e^{-\gamma^* m /2}$. \end{lemma} \begin{proof} Consider all of the $2^{|S|}$ possible assignments of sign to the entries of $\varepsilon$ on $S$. For each, choose an $S^+$ and $S^-$ that are disjoint, have cardinality $\lceil 0.001 m \rceil$, and are supersets of the indices assigned a positive or negative sign, respectively. Let $\tilde{E}_{S, x_0}$ be the event on which all sign assignments yield a $Y$ satisfying \eqref{lambda-condition}, $\|{Y}_{T}\|_\text{F} \leq 1/4$, and $\| {Y}_{T^\perp} - \frac{17}{10} {I}_{T^\perp} \| \leq 3/10$. By Lemma \ref{lemma-dual-certificate-construction}, this event has probability at least $$1 - 2^{|S|} e^{-\gamma^* m} \geq 1 - e^{-\gamma^* m /2}.$$ \end{proof} We now show that for a fixed support set of gross errors, there is a high probability that a dual certificate exists simultaneously for all signals and for all gross errors. \begin{lemma} \label{dual-certificate-universality-over-x} Fix a support set $S$.
If $m \geq \max(c, 4 \log(201)/\gamma^*)\, n$ and $|S|/m \leq \min(0.001, \gamma^* / (2 \log 2))$, then on an event of probability at least $1 - e^{-\gamma^* m/4}$, for all $x_0$ and for all $\varepsilon$ with $\mbox{supp}(\varepsilon) \subseteq S$, there exists $Y = \mathcal{A}^* \lambda$ satisfying \eqref{lambda-condition} with $\alpha = 7/m$ and $\|{Y}_{T}\|_\text{F} \leq 0.44$ and $\| {Y}_{T^\perp} - \frac{17}{10} {I}_{T^\perp} \| \leq 0.44$. \end{lemma} \begin{proof} By Lemma \ref{dual-certificate-over-all-errors}, for any fixed $x_0$ such that $\|x_0\|=1$ and for all $\varepsilon$ with $\mbox{supp}(\varepsilon) \subseteq S$, there exists a $Y = \mathcal{A}^* \lambda$ such that \begin{align} \| \lambda \|_\infty &\leq \frac{7}{m} \\ \lambda_S &= -\frac{7}{m} \sgn(\varepsilon_S)\\ \| {Y}_{T^\perp} - 1.7 {I}_{T^\perp}\| &\leq 0.3 \\ \| {Y}_{T}\|_\text{F} &\leq 0.25 \end{align} on the event $\tilde{E}_{S, x_0}$, which has probability at least $1 - e^{-\gamma^* m/2}$. By Lemma 5.2 in \cite{V2012}, there exists an $\varepsilon$-net $\mathcal{N}_\eps$ of the unit sphere such that $|\mathcal{N}_\eps| \leq (1 + 2/\varepsilon)^n$. Hence, such a $Y$ exists simultaneously for all $x_0 \in \mathcal{N}_\eps$ on an event of probability at least $1 - (1+2/\varepsilon)^n e^{-\gamma^* m/2}$. If $m \geq 4 n\log ( 1 + 2/\varepsilon) / \gamma^*$, then such a $Y$ exists simultaneously for all $x_0 \in \mathcal{N}_\eps$ with probability at least $1 - e^{-\gamma^* m/4}$. We now appeal to a continuity argument to show that a dual certificate exists for points not on the net $\mathcal{N}_\eps$. For an arbitrary $x$ such that $\|x\|_2 = 1$, we consider the $Y$ corresponding to the nearest $x_0 \in \mathcal{N}_\eps$. Note that $\|x-x_0\| \leq \varepsilon$ by definition of the net $\mathcal{N}_\eps$. We now closely follow the proof and notation of Corollary 2.4 in \cite{CL2012} to show that $Y$ is a satisfactory approximate dual certificate for $x$. Note that $\|Y\| \leq 2.5$.
Let $\Delta = x x^t - x_0 x_0^t$ and note that $\|\Delta\|_\text{F} \leq 2 \varepsilon.$ Let $T = T_x$. Now we have \begin{align} {Y}_{T^\perp} - 1.7 {I}_{T^\perp} &= {Y}_{T_{x_0}^\perp} - 1.7 {I}_{T_{x_0}^\perp} - R_1 \\ {Y}_{T} &= {Y}_{T_{x_0}} + R_2 \end{align} where \begin{align} R_1&= \Delta Y(I - x_0 x_0^t) + (I - x_0 x_0^t)Y\Delta - \Delta Y \Delta - 1.7 \Delta\\ R_2&= \Delta Y (I - x_0 x_0^t) + (I - x_0 x_0^t) Y \Delta - \Delta Y \Delta \end{align} We observe that \begin{align} \|R_1\| &\leq 2 \|Y\| \|\Delta\| \| I - x_0 x_0^t \| + \|Y\| \|\Delta\|^2 + 1.7 \|\Delta\| \leq 13.4 \varepsilon + 10 \varepsilon^2\\ \|R_2\|_\text{F} &\leq \sqrt{2} \|R_2\| \leq \sqrt{2} \left( 2 \|Y\| \|\Delta \| \|I - x_0 x_0^t\| + \|Y\| \|\Delta \|^2 \right) \leq 10 \sqrt{2} ( \varepsilon + \varepsilon^2) \end{align} If we choose $\varepsilon = 0.01$, then $\|R_1\| \leq 0.135$ and $\|R_2\|_\text{F} \leq 0.143$, and \begin{align} \|{Y}_{T^\perp} - 1.7 {I}_{T^\perp}\| &\leq 0.3 + 0.135 = 0.435 \\ \| {Y}_{T}\|_\text{F} &\leq 0.25 + 0.143 = 0.393 \end{align} \end{proof} We can now prove Theorem \ref{theorem-exact-recovery}. \begin{proof}[Proof of Theorem \ref{theorem-exact-recovery}] Assume that $|S|/m \leq \min(0.001, \gamma^*/(2 \log 2))$ and $m \geq \max(c, 4 \log(201)/\gamma^*)\, n$. By Lemma \ref{dual-certificate-universality-over-x}, there is an event of probability at least $1 - e^{-\gamma^* m/4}$ such that for all $x_0$ and for all $\varepsilon$ with $\mbox{supp}(\varepsilon) \subseteq S$, there exists a $Y = \mathcal{A}^* \lambda$ satisfying \eqref{ytp-condition}--\eqref{lambda-condition}. Choose a superset $\overline{S}$ such that $|\overline{S}| = \lceil 0.01m \rceil$. On the intersection of this event with $E_{\overline{S}}$, Lemma \ref{lemma-recovery-with-dual-certificate} guarantees that $\| \hat{X} - X_0 \|_\text{F} \leq C \|\eta\|_1 / m$. The intersection of these events has probability at least $ 1 - e^{-\gamma m}$ for some $\gamma$.
The proof of the error estimate $\|\hat{x}_0 - \pm x_0 \| \leq C' \min \left( \|x_0\|, \frac{\|\eta\|_1}{m \|x_0\|} \right)$ can be found in \cite{CSV2013}. \end{proof} \bibliographystyle{plain}
\section{Notations} Given a finite simple graph $G=(V,E)$ with vertex set $V$ and edge set $E$, the {\bf Barycentric refinement} $G_1=(V_1,E_1)$ is the graph for which $V_1$ consists of all nonempty complete subgraphs of $G$ and where $E_1$ consists of all unordered distinct pairs in $V_1$, for which one is a subgraph of the other. Denote by $G_m$ the successive Barycentric refinements of $G$ assuming $G_0=G$. If $\lambda_0 \leq \dots \leq \lambda_{n-1}$ are the eigenvalues of the {\bf Kirchhoff Laplacian} $L=B-A$ of $G$ with $n=|V|$ vertices, where $B$ is the diagonal {\bf degree matrix} and $A$ the {\bf adjacency matrix} of $G$, define the {\bf spectral function} $F_G(x) = \lambda_{[n x]}$, where $[t]$ is the largest integer smaller than or equal to $t$ and where we set $F_G(1)=\lambda_{n-1}$. So, $F_G(0)=0$ and $F_G(1)$ is the largest eigenvalue of $G$. The $L^1$ norm of $F$ satisfies $||F_G||_1=||\lambda||_1/|V|={\rm tr}(L)/|V| =2 |E|/|V|$ by the {\bf Euler handshaking lemma} telling that $2|E|$ is the sum of the vertex degrees. In other words, $||F||_1=d (|V|-1)$, where $d$ is the {\bf graph density} $d=|E|/B(|V|,2)$ with Binomial coefficient $B(\cdot,\cdot)$. The density $d$ gives the fraction of occupied edges in the complete graph with vertex set $V$. If $G,H$ are two subgraphs of some graph with $n$ vertices, define the {\bf graph distance} $d(G,H)$ as the minimal number of edges which need to be modified to get from $G$ to $H$. If $L,K$ are the Laplacians of $G,H$, then $\sum_{i,j} |L_{ij}-K_{ij}| \leq 4 d(G,H)$ because each edge $(i,j)$ affects only the four matrix entries $L_{ij}, L_{ji},L_{ii},L_{jj}$ of the Laplacian: adding or removing that edge changes each of the 4 entries by $1$. The {\bf Lidskii-Last inequality} assures $||\alpha-\beta||_1 \leq \sum_{i,j=1}^{n} |A-B|_{ij}$ \cite{SimonTrace,Last1995} for any two symmetric $n \times n$ matrices $A,B$ with eigenvalues $\alpha_1 \leq \alpha_2 \leq \dots \leq \alpha_n$ and $\beta_1 \leq \beta_2 \leq \dots \leq \beta_n$.
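As an illustration (our addition, with an arbitrary small test graph), the spectral function and the identity $||F_G||_1 = 2|E|/|V|$ can be checked numerically:

```python
import numpy as np

# small test graph: the cycle C_5 with one extra chord (0,2)
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]
n = 5
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1
L = np.diag(A.sum(axis=1)) - A      # Kirchhoff Laplacian L = B - A
lam = np.linalg.eigvalsh(L)         # lambda_0 <= ... <= lambda_{n-1}

def F(x):
    # spectral function F_G(x) = lambda_{[n x]}, with F_G(1) the top eigenvalue
    return lam[min(int(n * x), n - 1)]

# the integral of F over [0,1] equals tr(L)/|V| = 2|E|/|V|
assert abs(lam.mean() - 2 * len(edges) / n) < 1e-12
assert abs(F(0)) < 1e-12            # the Laplacian always has eigenvalue 0
```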
For two subgraphs $G,H$ of a common graph with $n$ vertices, and Laplacians $L,K$, the inequality gives $||\lambda-\mu||_1 \leq 4 d(G,H)$ so that $||F_{G'}-F_{H'}||_1 \leq 4 d(G,H)/n$ if $G',H'$ are the graphs with the edge sets of $G,H$ and the vertex set of the host graph having $n(G,H)$ vertices. Therefore $||F_G-F_H||_1 \leq 4 d(G,H)/n(G,H)$ holds, where $n(G,H)$ is the minimum of $|V(G)|$ and $|V(H)|$ assuming both are in a common host graph. The $L^1$ distance of spectral functions can so be estimated by the graph distance. Obviously, if $k$ disjoint copies of the same graph $H$ form a larger graph $G$, then $F_G=F_H$. Given a cover $U_j$ of $G$, let $H$ be the graph generated by the set of vertices which are only in one of the sets $U_j$. Let $K$ be the graph generated by the complement of $H$. Now, $d(G,H) \leq 4 |K|$ and $\sum_j d(U_j,H_j) \leq 4 |K|$, where $H_j$ is the graph generated by the vertices of $H$ in $U_j$. If $v_k$ is the number of complete subgraphs $K_{k+1}$ of $G$, the {\bf clique number} of $G$ is defined to be $k$ if $v_{k}=0$ and $v_{k-1}>0$. A tree for example has clique number $2$ as it does not contain triangles. Denote by $\mathcal{G}_d$ the class of graphs with clique number $d+1$. The class contains $K_{d+1}$ as well as any of its Barycentric refinements. The {\bf clique data} of $G$ is the vector $\vec{v}=(v_0,v_1,\dots)$, where $v_k$ counts the number of $K_{k+1}$ subgraphs of $G$. We have $v_0=|V|, v_1=|E|$ and $v_2$ counts the number of triangles in $G$. The clique data define the {\bf Euler polynomial} $e_G(x) = \sum_{k=0}^{\infty} v_k x^k$ and the {\bf Euler characteristic} $\chi(G) = e_G(-1)$. The polynomial degree of $e(x)$ is $d$ if $d+1$ is the clique number. There is a linear {\bf Barycentric operator} $A$ which maps the clique data of $G$ to the clique data of $G_1$. It is an upper triangular linear operator on $l^2$ with diagonal entries $A_{kk}=(k+1)!$, when rows and columns are indexed from $k=0$ in accordance with the clique data. It also maps the Euler polynomial of $G$ linearly to the Euler polynomial of $G_1$.
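The clique data and the Euler characteristic can be computed for small graphs by brute force; the following sketch (our illustration) verifies $\chi = 2$ for the octahedron graph, a $2$-sphere:

```python
from itertools import combinations

def clique_data(vertices, edges):
    # clique data (v_0, v_1, ...): v_k = number of K_{k+1} subgraphs of G
    v, k = [], 1
    while True:
        c = sum(all(frozenset(p) in edges for p in combinations(s, 2))
                for s in combinations(vertices, k))
        if c == 0:
            return v
        v.append(c)
        k += 1

# octahedron: complete graph on 6 vertices minus a perfect matching
vertices = range(6)
edges = {frozenset(p) for p in combinations(vertices, 2)} \
        - {frozenset((i, i + 3)) for i in range(3)}
v = clique_data(vertices, edges)                      # [6, 12, 8]
chi = sum((-1) ** k * vk for k, vk in enumerate(v))   # Euler characteristic
assert chi == 2
```

The same routine evaluated at $x=-1$ is exactly $e_G(-1)=\chi(G)$; here $v_0-v_1+v_2 = 6-12+8 = 2$, as expected for a $2$-sphere.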
The {\bf unit sphere} $S(x)$ of a vertex $x \in V$ in $G$ is the graph generated by all vertices connected to $x$. The {\bf dimension} of $G$ is defined as ${\rm dim}(\emptyset)=-1$ and ${\rm dim}(G)=1+\sum_{x \in V} {\rm dim}(S(x))/v_0$, where $S(x)$ is the unit sphere. A graph has {\bf uniform dimension} $d$ if every unit sphere has uniform dimension $d-1$. The empty graph has uniform dimension $-1$. Given a graph $G$ with uniform dimension $d$, the {\bf interior} is the graph generated by the set of vertices for which the unit sphere $S(x)$ is a $(d-1)$-{\bf sphere}. A $d$-{\bf sphere} is inductively defined to be a graph of uniform dimension $d$ for which every $S(x)$ is a $(d-1)$-sphere and for which removing one vertex renders the graph contractible. This {\bf Evako sphere} definition starts with the assumption that the $(-1)$-sphere is the empty graph. {\bf Contractibility} for graphs is inductively defined as the property that there exists $x \in V$ for which both $S(x)$ and the graph generated by $V \setminus \{x\}$ are contractible, starting with the assumption that $K_1$ is contractible. The {\bf boundary} $\delta G$ of a graph $G$ is the graph generated by the subset of vertices in $G$ for which the unit sphere $S(x)$ is not a sphere. Also the next definitions are inductive: a {\bf $d$-graph} is a graph for which every unit sphere is a $(d-1)$-sphere; a {\bf $d$-graph with boundary} is a graph for which every unit sphere is a $(d-1)$-sphere or a $(d-1)$-ball; a {\bf $d$-ball} is a $d$-graph with boundary for which the boundary is a $(d-1)$-sphere. Let $w_k(G_m)$ denote the number of $K_{k+1}$ subgraphs in the boundary $\delta G_m$. For $G=K_{d+1}$, the boundary of $G_m$ is a $(d-1)$-sphere for $m \geq 1$ and $\delta G_m$ contains the vertices for which the unit sphere in $G_m$ has Euler characteristic $1$. Barycentric refinements preserve both the class of $d$-graphs and the class of $d$-graphs with boundary.
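The inductive dimension can be computed directly from its recursive definition. A minimal sketch (our illustration, using exact rational arithmetic):

```python
from fractions import Fraction

def gdim(vertices, edges):
    # dim(empty graph) = -1; dim(G) = 1 + average of dim(S(x)) over vertices x,
    # where S(x) is the subgraph induced by the neighbors of x
    vertices = set(vertices)
    if not vertices:
        return Fraction(-1)
    total = Fraction(0)
    for x in vertices:
        nbrs = {y for y in vertices if frozenset((x, y)) in edges}
        total += gdim(nbrs, {e for e in edges if e <= nbrs})
    return 1 + total / len(vertices)

triangle = {frozenset(p) for p in [(0, 1), (1, 2), (0, 2)]}
square = {frozenset(p) for p in [(0, 1), (1, 2), (2, 3), (3, 0)]}
assert gdim({0, 1, 2}, triangle) == 2      # K_3 has dimension 2
assert gdim({0, 1, 2, 3}, square) == 1     # the cycle C_4 has dimension 1
```

Note that the dimension is rational in general; for example a triangle with one pendant vertex attached has dimension $5/3$.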
Starting with $G=K_{d+1}$, which itself is neither a $d$-graph nor a $d$-graph with boundary, the Barycentric refinements $G_m$ are all $d$-balls for every $m \geq 1$. While the spectral functions $F$ are well suited to describe limits in $L^1([0,1])$, one can also look at the {\bf integrated density of states} $F^{-1}$, a monotone $[0,1]$-valued function on $[0,\infty)$ which is also called the {\bf spectral distribution function} or {\bf von Neumann trace}. It defines the {\bf density of states} $(F^{-1})'$, a probability measure on $[0,\infty)$ also called the {\bf Plancherel measure}. This is analogous to the cumulative distribution function defining the law of a random variable, which in the absolutely continuous case has a probability density function. Point-wise convergence of $F$ implies point-wise convergence of $F^{-1}$ and so weak-* convergence of the density of states. \section{The theorem} \begin{thm}[Central limit theorem for Barycentric subdivision] The functions $F_{G_m}(x)$ converge in $L^1([0,1])$ to a function $F(x)$ which only depends on the clique number of $G$. The density of states converges to a measure $\mu$ which only depends on the clique number. \end{thm} We first prove a lemma which is interesting by itself. It allows us to compute explicitly the clique vector of the subdivision $G_1$ from the clique vector of $G$. \begin{lemma} There is an upper triangular matrix $A$ such that $\vec{v}(G_1) = A \vec{v}(G)$ for all finite simple graphs $G$. \end{lemma} \begin{proof} When subdividing a subgraph $K_{k+1}$, it splits into $A_{kk} = (k+1)!$ smaller $K_{k+1}$ graphs. This is the diagonal element of $A$. Additionally, for $m>k$, every $K_{m+1}$ subgraph produces $A_{km}$ subgraphs isomorphic to $K_{k+1}$, where $A_{km}$ is the number of interior $K_{k+1}$ subgraphs of the Barycentric subdivision of $K_{m+1}$. These interior $K_{k+1}$ subgraphs are cones over the barycenter and so correspond to $K_{k}$ subgraphs on the boundary.
This means that $A_{km}$ is the number of $K_{k}$ subgraphs of the boundary of the Barycentric refinement of $K_{m+1}$. Since this boundary is of smaller dimension, we can use the already computed part of $A$ to determine $A_{km}$. To construct $A$, we build up the columns recursively, starting to the left with $e_1$. Let $B(n,k)=n!/(k! (n-k)!)$. If $n$ columns of $A$ have been constructed, we apply the upper $n \times n$ part of $A$ to the vector $\vec{v}=[B(n+1,1),\dots,B(n+1,n)]^T$ in order to get the clique data of the $(n-1)$-sphere $\delta (K_{n+1})_1$. These numbers encode the number of interior complete subgraphs of the $n$-ball $(K_{n+1})_1$. If the resulting vector is $[w_1,\dots,w_n]^T$, take $[1,w_1,\dots,w_{n-1},(n+1)!,0,0, \dots ]^T$ as the new column. The Mathematica code below implements this procedure. \end{proof} The {\bf Barycentric operator} $A$ has an inverse $A^{-1}$ which is a bounded {\bf compact operator} on $l^2({\bf N})$. Below we give the bootstrap procedure to compute $A$, adding more and more columns, using the already computed part of $A$ to determine the next column. The eigenvalues $\lambda$ of $A$ are included in the spectrum $\sigma(A) = \{ 1!,2!,3!, \dots \}$. Any eigenvector $f$ of $A^T$ leads to an {\bf invariant} $X(G) = \langle f,\vec{v}(G) \rangle$ which corresponds to a {\bf valuation} in the continuum, as $X(G_1) = \langle f,A\vec{v} \rangle = \langle A^T f, \vec{v} \rangle = \lambda \langle f,\vec{v} \rangle = \lambda X(G)$. Of particular interest is the {\bf Euler characteristic eigenvector} $f=[1,-1,1,-1,\dots]^T$ to the eigenvalue $\lambda = 1$, verifying that $\chi(G)$ is an invariant under Barycentric subdivision. Another invariant comes from the eigenvector $f=[0,\dots,0,-2,d+1]^T$ to the eigenvalue $d!$, which measures the ``boundary volume'' of a $d$-graph with boundary. For $d$-graphs, it remains zero under refinement.
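The bootstrap in the proof is easy to replicate. The following Python sketch is ours (the paper's own implementation is the Mathematica routine at the end); it uses 0-based indexing, so the diagonal entry for $v_k$ is $(k+1)!$. It builds the upper-left block of $A$ column by column and can then be checked against the Euler characteristic eigenvector:

```python
from math import comb, factorial

def barycentric_operator(n):
    # upper-left n x n block of A, with v(G_1) = A v(G);
    # columns are built recursively as in the proof of the lemma
    A = [[0] * n for _ in range(n)]
    A[0][0] = 1                       # (K_1)_1 = K_1
    for m in range(1, n):             # column m encodes K_{m+1}
        # clique data of the boundary sphere delta K_{m+1}
        b = [comb(m + 1, k + 1) for k in range(m)]
        # clique data of its refinement, via the already built block
        w = [sum(A[i][j] * b[j] for j in range(m)) for i in range(m)]
        A[0][m] = 1                   # the barycenter itself
        for i in range(1, m):
            A[i][m] = w[i - 1]        # interior K_{i+1} = cones over K_i
        A[m][m] = factorial(m + 1)
    return A

A = barycentric_operator(4)
# A applied to the clique data (3,3,1) of K_3 gives (7,12,6),
# the clique data of its Barycentric refinement
v1 = [sum(A[i][j] * v for j, v in enumerate([3, 3, 1, 0])) for i in range(4)]
```

The fourth column comes out as $[1,14,36,24]^T$: the refinement of $K_4$ has $1$ interior vertex, $14$ interior edges, $36$ interior triangles and $24$ tetrahedra.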
\\ Here is the proof of the theorem: \begin{proof} {\bf (i)} {\bf For $G \in \mathcal{G}_d$, there are constants $C_d>0$ such that $v_k(G_0) ((d+1)!)^m/C_d \leq v_k(G_m) \leq v_k(G_0) C_d ((d+1)!)^m$ and $w_k(G_0) (d!)^m/C_d \leq w_k(G_m) \leq w_k(G_0) C_d (d!)^m$ for every $0 \leq k \leq d$.} Proof. This follows from the lemma and {\bf Perron-Frobenius}, as the matrix $A$ is explicit. When restricting to $\mathcal{G}_d$, it is a $(d+1) \times (d+1)$-matrix with maximal eigenvalue $(d+1)!$; when restricted to $\mathcal{G}_{d-1}$, its maximal eigenvalue is $d!$. Diagonalizing $A_d=S_d^{-1} B_d S_d$, we could find explicit bounds $C_d$. \\ {\bf (ii)} {\bf For the complete graph $G=K_{d+1}$, the sequence $F_{G_m}$ converges. } Proof: There are $(d+1)!$ subgraphs of $G_{m+1}$ isomorphic to $G_m$, forming a cover of $G_{m+1}$. They intersect in a lower dimensional graph. Since the number of vertices of this intersection grows with an upper bound $C_d (d!)^m$ in each $G_m$, we have $d(G_{m+1},\bigcup_j G_m^{(j)} ) \leq (d+1)! C_d (d!)^m$. This shows $||F_{G_m}-F_{G_{m+1}}||_1 \leq C_d (d+1)! (d!)^m/((d+1)!)^m = C_d'/(d+1)^{m}$ with $C_d'=C_d (d+1)!$. We have a Cauchy sequence in $L^1([0,1])$ and so a limit in that Banach space. \\ {\bf (iii)} {\bf Barycentric refinement is a contraction on $\mathcal{G}_d$ in an adapted metric}. Proof. Every $G \in \mathcal{G}_d$ can be written as a union of several subgraphs $H \sim K_{d+1}$ and a rest graph $L$ in $\mathcal{G}_{k}$ with $k<d$. Then $F_{G_m}$ and $F_{H_m}$ have the same limit because $F_{H_m}=F_{G_{m-1}}$. The Barycentric evolution of the boundary of refinements of $K_{d+1}$ as well as of $L$ becomes exponentially smaller in proportion. Given two graphs $A,B \in \mathcal{G}_d$, then $d(A_m,B_m) \leq C_d (d!)^m$, so that $||F_{A_m}-F_{B_m}||_1 \leq 4 C_d/(d+1)^m$. Let $m_0$ be so large that $c=C'/(d+1)^{m_0}<1$.
Define a new distance $d'(A,B) = \sum_{k=0}^{m_0-1} d(A_k,B_k)$, so that $d'(A_1,B_1) \leq c\, d'(A,B)$. Apply the {\bf Banach fixed point theorem}. \\ {\bf (iv)} {\bf There is uniform convergence of $F_{G_m}$ on compact intervals of $(0,1)$}. Since each $F_{G_m}$ is a monotone function in $L^1([0,1])$, the exponential convergence on compact subsets of $(0,1)$ follows from the exponential $L^1$ convergence. (This is a general real analysis fact \cite{LewisShisha}). In dimension $d$, one has $||F_{G_m}||_1 \to (d+1)!$ exponentially fast. Indeed, as the number of boundary simplices grows like $(d!)^m$ and the number of interior simplices grows like $((d+1)!)^m$, the convergence is of the order $1/(d+1)^m$. By the {\bf Courant-Fischer mini-max principle}, $F_{G}(1) = \lambda_{n} = \max_v (v,Lv)/(v,v) \geq \max_x L_{xx} = \max_x {\rm deg}(x)$ grows indefinitely, so that the convergence can not be extended to $L^{\infty}([0,1])$. \end{proof} It follows that the density of states of $G$ converges ``in law'' to a universal density of states which only depends on the clique number class of $G$, hence the name ``central limit theorem''. The analogy is to think of $G$ or its Laplacian $L$ as the random variable, of the spectrum $\sigma(L)$ of the Laplacian $L$ as the analogue of the probability density, and of the Barycentric refinement operation as the analogue of adding and normalizing two independent identically distributed random variables. For $d=1$, where the graph is triangle-free and contains at least one edge, we know everything: \begin{propo} For $d=1$, the limiting function is $F_1(x) = 4 \sin^2(\pi x/2)$. \end{propo} \begin{proof} As the limiting distribution is universal, we can compute it for $G=C_4$, for which $G_m = C_{4 \cdot 2^m}$. As the spectrum of $C_n$ is the set $\{ 4 \sin^2(\pi k/n) \; | \; k=1, \dots, n\}$, the limit is $F_1$.
\end{proof} For $d=1$, the limiting spectral function is related to the {\bf Julia set} of the {\bf quadratic map} $z \to z(4-z)$, which is conjugated to $z \to z^2-2$ ($c=-2$ at the bottom tail of the Mandelbrot set), to $z \to 4z(1-z)$, the {\bf Ulam interval map} conjugated to the {\bf tent map}, and to $z \to 2z^2-1$, a {\bf Chebyshev polynomial}. \section{Remarks} {\bf 1)} For $d=2$ already, we expect spectral gaps. A first large one is observed at $x=1/2$. For $G_4$ with $G=K_3$, we see a jump at $0.5$ of size $2.002$. Starting with $G=K_3$, the graph $G_2$ has $25$ vertices. Its Laplacian has eigenvalues for which $\lambda_{13}-\lambda_{12} =2.0647\dots \sim 2$, already close to the gap. The eigenvalues can be simplified to be roots of polynomials of degree $4$, for which explicit radical expressions exist, allowing estimates. We also know $||F_{G_3}- F_2||_1 \leq 8 \sum_{k=3}^{\infty} 1/3^k = 8/18<1/2$. While we know that $F_{G_m}- F_2$ converges pointwise and uniformly on each compact interval, we don't have uniform constant bounds on each interval.\\ {\bf 2)} Instead of the Laplacian $L=B-A$, we could take the {\bf adjacency matrix} $A$. We could also take the {\bf multiplicative Laplacian} $L'=A B^{-1}$ or the selfadjoint {\bf random walk Laplacian} $L''=B^{-1/2} A B^{-1/2}$, which is isospectral to $L'$. The corresponding spectral functions always converge exponentially fast, but to different limiting functions. For the adjacency matrix for example, the spectral gaps appear much smaller. \\ {\bf 3)} The convergence also works for the {\bf Dirac operator} $D=d+d^*$, where $d$ is the {\bf exterior derivative}, and for the {\bf Hodge Laplacian} $L=D^2$. For the Dirac operator $D$, the density of states is supported on $(-\infty,\infty)$; for the Hodge Laplacian, on $[0,\infty)$. The Hodge Laplacian factors into different form sectors $L_k$, the blocks which make up $D^2$, and the spectral functions of each {\bf form Laplacian} $L_k$ converge.
The {\bf scalar Laplacian} $L_0 = d^* d$ agrees with the {\bf combinatorial Laplacian} = Kirchhoff Laplacian $L=B-A$ discussed above. \\ {\bf 4)} Any Barycentric refinement preserves the {\bf Euler characteristic} $\chi(G)=\sum_{k=0}^{\infty} (-1)^k v_k$. We know that $G_n$ is homotopic and even homeomorphic to $G$. $G$ and $H$ are {\bf Ivashchenko homotopic} \cite{I94,CYY} if one can get $H$ from $G$ by homotopy transformation steps: adding a vertex and connecting it to a contractible subgraph, or removing a vertex with a contractible unit sphere. A topology $\mathcal{O}$ on the vertex set $V$ of $G$ is a {\bf graph topology} if there is a sub-base $\B$ of $\mathcal{O}$ consisting of contractible subgraphs such that the intersection of any $A,B \in \B$ satisfying $\dim(A \cap B) \geq {\rm min}(\dim(A),\dim(B))$ is contractible, and every edge is contained in some $B \in \B$. We ask the nerve graph $\mathcal{G}$ of $\B$ to be homotopic to $G$, where the {\bf nerve graph} $\mathcal{G} = (\B,\E)$ has the edge set $\E$ consisting of all pairs $(A,B) \in \B \times \B$ for which the dimension assumption is satisfied. A map between two graphs equipped with graph topologies is {\bf continuous} if it is a graph homomorphism of the nerve graphs such that ${\rm dim}(\phi(A)) \leq {\rm dim}(A)$ for every $A \in \B$. If $\phi$ has a continuous inverse, it is a {\bf graph homeomorphism}. \\ {\bf 5)} We know ${\rm dim}(G_1) \geq {\rm dim}(G)$, with equality for $d$-graphs, graphs for which unit spheres are spheres. For a visualization of the discrepancy on random Erd\"os-Renyi graphs, see \cite{KnillProduct}. We also know that ${\rm dim}(G_m)$ converges monotonically to the dimension of the largest complete subgraph, because the highest dimensional simplices dominate exponentially.
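The inequality ${\rm dim}(G_1) \geq {\rm dim}(G)$ of remark 5 can be checked directly on small examples. Here is a Python sketch (helper names are ours) which builds the refinement as the graph of all complete subgraphs, two being joined when one strictly contains the other, and evaluates the inductive dimension exactly:

```python
from fractions import Fraction
from itertools import combinations

def simplices(vertices, edges):
    # all complete subgraphs (the Whitney complex), as frozensets
    adj = {v: set() for v in vertices}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    out = []
    for k in range(1, len(vertices) + 1):
        level = [frozenset(s) for s in combinations(vertices, k)
                 if all(y in adj[x] for x, y in combinations(s, 2))]
        if not level:
            break
        out += level
    return out

def refine(vertices, edges):
    # Barycentric refinement: simplices become the new vertices,
    # joined when one strictly contains the other
    simp = simplices(vertices, edges)
    new_edges = [(x, y) for x, y in combinations(simp, 2)
                 if x < y or y < x]
    return simp, new_edges

def dim(vertices, adj):
    # inductive dimension, as defined earlier in the text
    if not vertices:
        return Fraction(-1)
    return 1 + sum(dim([y for y in vertices if y in adj[x]], adj)
                   for x in vertices) / Fraction(len(vertices))

def graph_dim(vertices, edges):
    adj = {v: set() for v in vertices}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    return dim(list(vertices), adj)

# a triangle with a pendant edge has dimension 5/3;
# its refinement has dimension 7/4 >= 5/3
V, E = ['a', 'b', 'c', 'd'], [('a', 'b'), ('a', 'c'), ('b', 'c'), ('a', 'd')]
V1, E1 = refine(V, E)
```

The refinement of $K_3$ produced this way has $7$ vertices and $12$ edges, matching the clique data $(7,12,6)$ computed by the Barycentric operator.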
\\ {\bf 6)} If $G$ is a $d$-graph, then for $m>1$, all $G_m$ as well as any finite intersections of unit spheres in $G_m$ are {\bf Eulerian}, so that we can define a {\bf geodesic flow} on $G_m$ or a {\bf billiard} on Barycentric refinements of the ball $(K_k)_m$. For non-Eulerian graphs like the icosahedron, a light propagation on the vertex set is not defined without breaking some symmetry. Also, for any graph and $m>0$, the {\bf chromatic number} of $G_m$ is the clique number of $G$. Indeed, for $m \geq 1$, the dimension of the complete subgraph $x$ of $G_m$ can be taken as the {\bf color} of the vertex $x$ in $G_{m+1}$. Since the dimension takes values in $\{ 0, \dots, d\}$, the chromatic number of $G_{m+1}$ agrees with the clique number $d+1$ of $G$. \\ {\bf 7)} If $d_0 \leq \cdots \leq d_n$ is the {\bf degree sequence} of $G$, define the {\bf degree function} $H_G(x) = d_{[x n]}$ in the same way as the spectral function. Since the degrees are the diagonal elements of $L$, the {\bf integrated degree function} $\tilde{H}(x) = \int_0^x H_{G}(t) \; dt$ and the {\bf integrated spectral function} $\tilde{F}(x) = \int_0^x F_G(t) \; dt$ agree at $x=0$ and $x=1$. By the {\bf Schur inequality}, we have $\tilde{H}(x) \geq \tilde{F}(x)$. The degree function $H_{G_m}$ also converges. \\ {\bf 8)} Barycentric refinements are usually defined for realizations of a {\bf simplicial complex} in Euclidean space. The Barycentric subdivision of a graph is not the same as what is called a {\bf simplex graph} \cite{BandeltVandeVel}, as the latter contains the empty graph as a node. It also is not the same as the {\bf clique graph} \cite{Hamelink68}, as the latter has as vertices the maximal complete subgraphs, connecting them if they have a non-empty intersection. The Barycentric refinement is the {\bf graph product} of $G$ with $K_1$ \cite{KnillProduct}.
The product $G \times H$ has as vertex set the set of all pairs $(x,y)$, where $x$ is a complete subgraph of $G$ and $y$ a complete subgraph of $H$. Two such vertices $(x,y)$ and $(u,v)$ are connected by an edge if either $x \subset u$ and $y \subset v$, or $u \subset x$ and $v \subset y$. \\ {\bf 9)} The limit of Barycentric refinements is ``{\bf flat}'' in the following sense: the graph {\bf curvature} $K(x)=1-V_0(x)/2+V_1(x)/3- \cdots$, with $\vec{V}(x)=(V_0(x),V_1(x),\dots)$ being the clique data of the unit sphere $S(x)$, is defined for all finite simple graphs \cite{cherngaussbonnet} and satisfies the {\bf Gauss-Bonnet-Chern theorem}: the curvature adds up to the Euler characteristic of the graph. Because the total curvature stays constant while the graph gets larger, the curvature averaged over some subgraph $G_{m-k}$ of $G_m$ goes to zero, while the individual curvatures can grow indefinitely. For $d=2$ for example, where the curvature is $K(x)=1-d(x)/6$, there are vertices with very negative curvature, but averaging this over a smaller patch gives zero. In general, if we look at subgraphs of $G$ which are wheel graphs as $2$-dimensional sections, also the {\bf sectional curvatures} become unbounded at some points, but averages of sectional curvatures over two dimensional surfaces obtained by Barycentric refinements of a wheel graph go to zero. The limiting holographic object looks like Euclidean space. There is even {\bf ``rotational symmetry''}, not everywhere, but with centers at a dense subset of the continuum. \\ {\bf 10)} The graph theoretical subdivision definition uses the point of view that complete subgraphs of a graph are treated as {\bf points}. When this identification is taken seriously and iterated, we get a {\bf holographic Barycentric refinement sequence} which is already realized in the graph itself.
This is familiar when building up number systems: if the set of integers $G=\mathbb{Z}$ is considered to be a graph with adjacent integers connected, the Barycentric refinements $G_n$ contain all {\bf dyadic rationals} $k/2^n$. Modulo $1$, this is a {\bf Pr\"ufer group} $\PP_2$ which has as {\bf Pontryagin dual} the compact topological group $\D_2$ of dyadic integers, a subring of the {\bf field} $\Q_2$ of {\bf 2-adic numbers}. In the case $d=1$, there is a limiting random operator on the group of {\bf dyadic integers}. \\ {\bf 11)} There is an analogy between $(\mathbb{R},\mathbb{T},\mathbb{Z}, x \to x+\alpha \; {\rm mod} \; 1,x \to 2x \; {\rm mod} \; 1)$ and $(\Q_2,\PP_2,\D_2, x \to x+1,\sigma)$, where $x \to x+1$ is the addition on the compact topological group. This translation on $\D_2$ is called the {\bf adding machine} \cite{Friedman} and $\sigma$ is the shift, which is a return map on half of $\D_2$. There is a natural way to get the group $\D_2$ through {\bf ergodic theory}, as it is the unique fixed point of 2:1 {\bf integral extensions} in the class of all dynamical systems. It is also called the {\bf von Neumann-Kakutani system} and is usually written as an interval map on $[0,1]$. The ergodic theory of systems with discrete spectrum is completely understood \cite{CFS,DGS} and the von Neumann-Kakutani system so belongs to a class of systems where we can solve the {\bf dynamical log-problem}: given any $x,y$ and $\epsilon>0$, find $n$ such that $d(T^n x,y) < \epsilon$. This problem is essential everywhere in dynamics, from the prediction of events up to finding solutions of Diophantine equations. Such systems are typically {\bf uniquely ergodic} and so, by spectral theory, naturally conjugated to a group translation on a compact topological group, the dual group of the eigenvalues of the {\bf Koopman operator} on the unit circle in the complex plane.
There is an important difference between the real and the 2-adic story: while in both cases the chaotic scaling systems are isomorphic {\bf Bernoulli systems} on compact Abelian topological groups, the group translation on the 2-adic group is naturally unique and {\bf quantized}, while on the circle there are many group translations $x \to x+ \alpha$. In the {\bf real picture}, there is a continuum of natural translations; in the {\bf dyadic picture}, there is a {\bf smallest translation}. The picture is already naturally quantum. Egyptian dyadic fractions, music notation or the transition from Fourier to wavelet theory can be seen as moves towards {\bf dyadic models}. See \cite{TaoDyadic} for a plethora of other places. \\ {\bf 12)} For a $d$-graph $G$ with $d>1$, the renormalization limit of $G$ is still unidentified. The limiting operator is likely a random operator on a compact topological group. As the group of dyadic integers, it is likely also a {\bf solenoid}, an inverse limit of an inverse system of topological groups. There is a group acting on it producing the operator in a crossed product $C^*$-algebra. This is at least the picture in one dimension. \\ {\bf 13)} As $F_{G_m}$ converges in $L^1$, we have convergence of the {\bf integrated density of states} $F_{G_m}^{-1}$ in $L^1([0,\infty))$ and so weak-* convergence of the derivative, the {\bf density of states}, in the Banach space of measures. In the case $d=1$, the density of states is $f(x) = (x(4-x))^{-1/2}/\pi$, supported on $[0,4]$. The integrated density of states is $F^{-1}(x) = (2/\pi) \arcsin(\sqrt{x}/2)$. The measure $\mu = f(x) 1_{[0,4]} dx$ is the {\bf equilibrium measure} on the Julia set of $T(z) = 4z-z^2$, whose dynamics is conjugated to $z \to z^2-2$ in the Mandelbrot picture or to the {\bf Ulam map} $z \to 4z(1-z)$ for maps on the interval $[0,1]$. The spectral function $F$ satisfies $T(F(x))=F(2x)$.
The measure $\mu$ maximizes the metric entropy, which then agrees with the topological entropy $\log(2)$ of $T$. \\ {\bf 14)} In the case $d=2$, the numbers $v_k$ were already known \cite{Snyder,hexacarpet}: one has $v_2(G_m)=6^m$, $v_0(G_m)$ is the sum of $v_0(G_{m-1}),v_1(G_{m-1}),v_2(G_{m-1})$, and $v_0-v_1+v_2=1$, leading to formulas like $v_0(G_m) = 1+3(2^{m-1}+2^{m-1} 3^{m-1})$, $v_1(G_m) = 3(-2^{m-1}+2^m+2^{m-1} 3^m)$. As the lemma shows, in general, the clique data of $G_1$ are a linear image of the clique data of $G$, with a linear map $A$ independent of $G$. Since $A$ has a compact inverse, we can look at an eigenbasis of $A$ and could write down explicit formulas for the number of vertices of $G_n$, if the initial clique data of $G$ are known. \\ {\bf 15)} If $H$ is a subgraph of $G$, then each refinement $H_m$ is a subgraph of $G_m$. Also, $G_k$ is a subgraph of $G_m$ if $k \leq m$, and the automorphism group of $G_m$ contains the automorphism group of $G$. The case $G=K_n$ shows that the {\bf automorphism group} can become larger. In general, ${\rm Aut}(G_1)={\rm Aut}(G)$. It is only in rare cases like $G=C_n$ that ${\rm Aut}(G_m)$ can grow indefinitely. \\ {\bf 16)} The matrix $A$ mapping the clique data of $G$ to the clique data of $G_1$ is a special case of the following: for any graph $H$, there is a linear map $A_H$ which maps the clique data of any graph $G$ to the clique data of $G \times H$. This operator $A_H$ depends only on $H$, and $A=A_{K_1}$ is invertible on each class of finite dimensional graphs. While the clique data $\vec{v}(G)$ do not determine the graph $G$, as trees already show, the Barycentric refinement operator $T$ is invertible on the image $T(\mathcal{G})$ of the class $\mathcal{G}$ of all finite simple graphs. \\ {\bf 17)} Barycentric refinements make graphs nicer: the chromatic number is the clique number, the graphs are Eulerian. If we identify graphs with their Barycentric refinements, we have a {\bf holographic picture}.
There is a natural geodesic flow on $G_1$ if $G$ is a $d$-graph, as every unit sphere has a natural involution, defined inductively by running the geodesic flow on the unit sphere. On a two dimensional sphere for example, the flow is defined because every vertex has even degree. This allows us to draw {\bf straight lines} on it and to define an antipodal map. For a 3-sphere, because every unit sphere has such an antipodal map, we can define straight lines there, in turn leading to an antipodal map on such spheres. This can be continued by climbing up the dimensions to get a geodesic flow on the graph $G_1$ itself. \\ {\bf 18)} The density of states obtained as a limiting spectral density of finite dimensional situations is central in the theory of {\bf random Schr\"odinger operators} $L$ \cite{Cycon,Carmona,Pastur}, where the density of states $\mu$ can be defined as the functional $f \to {\rm tr}(f(L))$, as $L$ is an element in a von Neumann algebra with trace, the crossed product of $L^\infty(\Omega)$ with an ergodic group action on the probability space. The {\bf Birkhoff ergodic theorem} has been applied by Pastur in that theory to assure that the density of states of finite dimensional operators defined by orbits starting at some point $x$ converges {\bf almost surely} to the measure $\mu$. The discrete case $d=1$, where a single ergodic transformation $T$ on a probability space is given, is one of the most studied models, in particular the map $x \to x+ \alpha \; {\rm mod} \; 1$ leading to almost periodic matrices like the {\bf almost Mathieu operator}. It is the real analogue of the limiting operator which is almost periodic over the dyadic integers. There is no doubt that also in higher dimensions, the limiting model of Barycentric refinement is part of the theory of aperiodic and almost periodic media \cite{Pastur,Cycon,Senechal,Simon82,KellendonkLenzSavinien}.
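The explicit $d=1$ picture of remark 13 and of the proposition can be verified numerically. Here is a pure-Python sketch (the function names are ours), checking the renormalization identity $T(F_1(x))=F_1(2x)$, the fact that the integrated density of states inverts $F_1$, and that $T$ folds the spectrum of $C_{2n}$ two-to-one onto that of $C_n$:

```python
from math import sin, asin, sqrt, pi

def F1(x):
    # limiting spectral function for d=1
    return 4 * sin(pi * x / 2) ** 2

def T(z):
    # renormalization map z -> z(4-z), conjugated to z -> z^2-2
    return z * (4 - z)

def ids(x):
    # integrated density of states, the inverse of F1 on [0,4]
    return (2 / pi) * asin(sqrt(x) / 2)

def spec_cycle(n):
    # Kirchhoff Laplacian spectrum of the cycle C_n
    return sorted(4 * sin(pi * k / n) ** 2 for k in range(1, n + 1))
```

Since $T(4\sin^2\theta)=4\sin^2(2\theta)$, the map $T$ sends the spectrum of $C_{2n}$ two-to-one onto that of $C_n$, which is the mechanism behind $T(F_1(x))=F_1(2x)$.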
\section{Open ends} {\bf 1)} Already for $d=2$, we do not know what the nature of $F_d$ is. One must suspect that there is a renormalization picture, as in one dimension. The limiting spectrum can not be the Julia set of a polynomial, as it is unbounded. We suspect that for $d>1$, the time of the group action is higher dimensional and that the limiting Laplacian remains almost periodic. While we can not compute $F_{G_m}$ for $d=2$ and large $m$ due to the fast growth of $G_m$, we see experimentally that the function $F_2$ in $L^1([0,1])$ has a {\bf self-similar nature} in the sense that $F_2(6x) \sim \phi_2(F_2(x))$ for some $\phi_2$ and $x \in [0,1/6]$. This self-similar nature is present for $d=1$, as $F_1(2x) = \phi_1(F_1(x))$, where $\phi_1$ is the {\bf quadratic map} $\phi_1(x) = x(4-x)$. Since $F_2(1)$ is infinite, the map $\phi_2$ (if it exists at all) must be an unbounded function and in particular can not be a polynomial. \\ {\bf 2)} In order to establish the existence of a {\bf spectral gap} for $F_2$, one could first establish an exact finite approximation bound, then estimate the rest. For $d=2$, the first large gap opens at $x=1/2$. This could be related to the fact that every step doubles the vertex degrees in two dimensions. Here are the largest spectral jump values for $G_5$ if $G=K_3$: $\Delta(0.5)=2.002\dots$, $\Delta(0.835366)=1.67669\dots$, $\Delta(0.917683)=2.86249$, $\Delta(0.972561)=3.89394$, $\Delta(0.98628)=6.93379$, $\Delta(0.995427)=7.96794$ and $\Delta(0.998476)=14.9767$. To investigate gaps, we would need a Lidskii type estimate for the smaller half of the eigenvalues. \\ {\bf 3)} We see experimentally that for $d=2$, the sequence $F_{G_m}$ appears monotonically increasing, $F_{G_{m}} \leq F_{G_{m+1}}$, at least for the small $m$ for which we can compute it. In the case $d=1$, the monotonicity is explicit. It fails for smaller $m$ and $d=3$, but could hold for any $d$ if $m$ is large enough.
If we had more details about the convergence in the middle of the spectrum, we could attack the problem of verifying that spectral gaps exist. \\ {\bf 4)} Unlike the {\bf Feigenbaum renormalization} in one dimensional dynamics \cite{Feigenbaum1978,Lanford84,DeMelo}, where a hyperbolic attractor with stable and unstable manifolds exists, the Barycentric renormalization is a contraction, and the convergence proof is more elementary. It is also more elementary than the {\bf central limit theorem} in probability theory, where the renormalization map is $X \to \overline{X_1+X_2}$, with $X_i$ independent random variables on a probability space $\Omega$ with the same distribution as $X$ and $\overline{X}=(X-{\rm E}[X])/\sigma(X)$; that random variable renormalization is not a uniform contraction in $L^2(\Omega,{\rm P})$. For $d=1$, there is a limiting almost periodic operator, where the compact topological {\bf group of dyadic integers} becomes visible in the Schr\"odinger case. We expect also in higher dimensions a random limiting operator, and that the renormalization map is robust in the sense that one can for example shift the energy $E$ and have a deformed attractor. In the one-dimensional Schr\"odinger case, where one deals with Jacobi matrices, one so obtains spectra on Julia sets $J_E$ of the quadratic map $T(z) = z^2+E$ \cite{BGH,BBM,Kni93a,Kni93b,Kni95}. Also in higher dimensions, we expect that a Schr\"odinger renormalization picture reveals the underlying topological group if an energy parameter is used to modify the renormalization. \\ {\bf 5)} In the case $d=1$, the roots of the {\bf Dirac zeta function} $\zeta(s) = \sum_{\lambda>0} \lambda^{-s}$, defined by the positive eigenvalues $\lambda$ of the Dirac operator $D=d+d^*$ \cite{KnillILAS,McKeanSinger} of $G_{m}$, converge to a subset of the line ${\rm Re}(s)=1$. What happens in the case $d=2$?
Already for $d=1$, the convergence of the roots is slow, of the order $\log(\log(v_0(G_m)))$, as it is initially proportional to $m$ and slows down exponentially when approaching the line. Any experimental investigations for $d=2$ would be difficult, as no explicit formulas for the eigenvalues exist. \\ {\bf 6)} It would be nice to know more about the linear operators $A_H$ which belong to the maps $G \to G \times H$ on $\mathcal{G}$. Their inverses are compact. In the case $H=K_1$, where we get the Barycentric refinement operator $A$, the eigenvector of $A^T$ to the eigenvalue $1$ gives an invariant for Barycentric refinement: it is the Euler characteristic vector $(1,-1,1,-1,1,-1, \dots)$. When restricted to $d$-graphs, the vector $(0,0,\dots,1)$ with $1$ at the $(d+1)$'th entry is an eigenvector to the eigenvalue $(d+1)!$. It leads to a counting invariant in the limit which is called {\bf volume}. The other eigenvectors lead to similar limits which must be linear combinations of {\bf valuations} in Riemannian geometry \cite{Santalo,KlainRota}. We see that the even {\bf Barycentric invariants} $X_{2k}(G) = \vec{f}_{2k} \cdot \vec{v}(G)$ are constant zero on $d$-graphs, where $\vec{f}_k$ are the eigenvectors of $A^T$. It will imply for example that any triangulation of a compact smooth $4$-manifold with $v$ vertices, $e$ edges, $f$ triangles, $g$ tetrahedra, and $h$ pentatopes satisfies $22e+40g=33f+45h$. In the continuum, the more general operator $A_H$ leads to a linear map on valuations. These maps will help to investigate the connection between the discrete and the continuum. \\ {\bf 7)} In the same way as for Riemannian manifolds, where one studies the eigenfunctions $f_k$ of the Laplacian, the eigenfunctions of the Kirchhoff Laplacian will play an important role for understanding the limiting density of states. The {\bf Chladni figures} are the level sets $f_k=0$. They can be defined for $d$-graphs. We can look at $f=0$.
By a discrete Sard result \cite{KnillSard}, it is a $(d-1)$-graph as long as $0$ is not a value taken by $f$. If $0$ is a value taken by $f$, we can just plot $\{f=\epsilon\}$ for small $\epsilon>0$. The point is that we can so define nice $(d-1)$-graphs associated to the $f_k$. The topology of these {\bf Chladni graphs} depends very much on the energy and, as in the continuum, is not yet well understood. \\ {\bf 8)} The even eigenvectors $\vec{v}_{2k}$ of $A^T$ appear to have the property that $X_{2k}(G) = \vec{v}_{2k} \cdot \vec{v}(G)=0$ for any $2d$-graph. If true, this leads to integral geometric invariants in the limit. For any graph, define $X_{2k}(G) = \lim_{m \to \infty} \lambda_{2k}^{-m}\, \vec{v}_{2k} \cdot \vec{v}(G_m)$. The limit exists trivially, as the expression is constant in $m$. For $d$-graphs, we see $X_{2k}(G)=0$. We have tried random versions of $S^4,S^2 \times S^2, T^4,S^2 \times T^2, S^3 \times T^1,S^6$. The limiting integral geometric invariants for $d$-manifolds would be zero for differentiable manifolds. Assume we found a graph $G$ with uniform dimension $d$ which is not a $d$-graph but which is homeomorphic to a $d$-graph $H$. Then, since Barycentric refinements preserve the homeomorphism relation, the Barycentric limit $M$ of $G$ is a topological manifold homeomorphic to the Barycentric limit $N$ of $H$. While $M,N$ are topological manifolds which are homeomorphic, they can not be diffeomorphic, as integral geometric integer valued invariants are diffeomorphism invariants. A basis for valuations can be obtained if $M$ is embedded in a projective $n$-sphere. We can compute the expectation of the $k$-volume of the intersection of random $m$-planes with $M$, using a natural probability measure obtained from the Haar measure on $SO(n)$ acting on $k$-planes. This leads to $d+1$ invariants, of which some linear combination is the Euler characteristic.
The graph theoretical invariants obtained from $A$ could produce invariants allowing one to distinguish homeomorphic but not diffeomorphic manifolds. \section{About the literature} Barycentric refinements are of central importance in topology. Dieudonn\'e \cite{Dieudonne1989} writes of {\it "the three essential innovations that launched combinatorial topology: simplicial subdivisions by the barycentric method, the use of dual triangulation and, finally, the use of incidence matrices and of their reduction."} In algebraic topology, it is used for proving the {\bf excision theorem} or the {\bf simplicial approximation theorem} \cite{Hatcher,Rotman}. In topology, Barycentric subdivision is primarily used in a Euclidean setting for subdividing polytopes or CW complexes. For abstract simplicial complexes, it is related to flag complexes, even though there are various inequivalent definitions of what simplicial subdivisions, simplex graphs or Barycentric subdivisions are. In graph theory, subdivisions are considered for simplicial complexes, which are special {\bf hypergraphs} in topological graph theory \cite{TuckerGross}. Indeed, most graph theory treats graphs as one-dimensional simplicial complexes, ignoring the {\bf Whitney complex} of all complete subgraphs. Subdivisions classically considered for graphs only agree with the definition used here if the graph has no triangles. Two graphs are {\bf classically homeomorphic} if they have isomorphic subdivisions ignoring triangles (see e.g. \cite{Bollobas1,HHM,BR}). In the context of {\bf maps}, which are finite cell complexes whose topological space is a surface $S$, Barycentric subdivisions are considered for the cell complex \cite{handbookgraph} but not for the graph. In the context of convex polytopes, Barycentric subdivisions appear for the {\bf order complex} of a polytope, which is an abstract simplicial complex \cite{McMullenSchulte}.
\\ Resources on the spectral theory of graphs are \cite{Chung97,Mieghem,Brouwer,VerdiereGraphSpectra,Post,BLS}. It parallels to a great extent the corresponding theory for Riemannian manifolds \cite{Chavel,Rosenberg,BergerPanorama}. \\ The definitions of $d$-spheres and of homotopy are both due to Evako. Having developed the sphere notion independently in \cite{KnillEulerian,KnillProduct}, we realized in \cite{KnillJordan} the earlier definition of Ivashchenko (=Evako) \cite{I94a,I94,Evako1994,Evako2013}. The definition of these Evako spheres is based on Ivashchenko homotopy, which is a homotopy notion inspired by Whitehead \cite{Whitehead} but defined for graphs. \\ The Barycentric refinements for $d=2$ are studied in \cite{hexacarpet}, where the limit of Barycentric refinement has a dual called the {\bf hexacarpet}. Figure~7 in that paper shows the eigenvalue counting function, in which gaps in the eigenvalues appear similarly as in the spectral function $F_G(x)$. The work \cite{hexacarpet} must therefore be credited for the experimental discovery of the gap. \\ Papers like \cite{DiaconisMiclo,Hough} deal with the geometry of tessellations of a triangle, which defines a random walk leading to a dense subgroup of $SL(2,R)$, which in turn defines a Lyapunov exponent. While the focus of those papers is different, there might be relations. \\ For illustrations, more motivation and background, see also our first write-up \cite{KnillBarycentric}. \section{Figures and Code} \begin{figure} \scalebox{0.25}{\includegraphics{figures2/comparison1d.pdf}} \scalebox{0.25}{\includegraphics{figures2/comparison2d.pdf}} \caption{ The functions $F_{G_m}$ for $G=C_4$ converge to the limiting function $4 \sin^2(\pi x/2)$. The functions $F_{G_m}$ for $G=K_3$ converge to a limiting function which appears to have jumps corresponding to gaps in the spectrum. We have here only established that the universal limit $F$ exists; already for $d=2$ we do not know the nature of the limit.
} \end{figure} Here are the Mathematica routines which produced the figures: \vspace{12mm} \begin{tiny} \lstset{language=Mathematica} \lstset{frameround=fttt} \begin{lstlisting}[frame=single]
TopDim=2;
Cl[s_,k_]:=Module[{n,t,m,u,q,V=VertexList[s],W=EdgeList[s],l},
 n=Length[V]; m=Length[W]; u=Subsets[V,{k,k}]; q=Length[u]; l={};
 W=Table[{W[[j,1]],W[[j,2]]},{j,m}];
 If[k==1,l=Table[{V[[j]]},{j,n}],
  If[k==2,l=W,Do[t=Subgraph[s,u[[j]]];
   If[Length[EdgeList[t]]== Binomial[k,2],l=Append[l,VertexList[t]]], {j,q}]]];l];
Ring[s_,a_]:=Module[{v,n,m,u,X},v=VertexList[s]; n=Length[v];
 u=Table[Cl[s,k],{k,TopDim+1}] /. Table[k->a[[k]],{k,n}];m=Length[u];
 X=Sum[Sum[Product[u[[k,l,m]], {m,Length[u[[k,l]]]}],{l,Length[u[[k]]]}],{k,m}]];
GR[f_]:=Module[{s={}},Do[Do[If[Denominator[f[[k]]/f[[l]]]==1 && k!=l,
 s=Append[s,k->l]],{k,Length[f]}],{l,Length[f]}]; UndirectedGraph[Graph[s]]];
GraphProduct[s1_,s2_]:=Module[{f,g,i,fc,tc},
 fc=FromCharacterCode; tc=ToCharacterCode;
 i[l_,n_]:=Table[fc[Join[tc[l],IntegerDigits[k]+48]],{k,n}];
 f=Ring[s1,i["a",Length[VertexList[s1]]]];
 g=Ring[s2,i["b",Length[VertexList[s2]]]]; GR[Expand[f*g]]];
NewGraph[s_]:=GraphProduct[s,CompleteGraph[1]];
Bary[s_,n_]:=Last[NestList[NewGraph,s,n]];
Laplace[s_,n_]:=Normal[KirchhoffMatrix[Bary[s,n]]];
F[s_,n_]:=Module[{u,m},u=Sort[Eigenvalues[1.0*Laplace[s,n]]];
 m=Length[u]; Plot[u[[Floor[x*m]]],{x,0,1}]];
TopDim=1; Show[Table[F[CycleGraph[4],k],{k,3}],PlotRange->{0,4}]
TopDim=2; Show[Table[F[CompleteGraph[3],k],{k,3}],PlotRange->{0,10}]
\end{lstlisting} \end{tiny} And here is the recursive computation of the matrix $A$, which gives the clique data $A \vec{v}$ of the Barycentric refinement $G_1$ if $\vec{v}$ is the clique vector of $G$. The computation produces the clique data of the boundary of $(K_n)_1$, which allows one to compute the number of interior $k$-simplices, producing the off-diagonal matrix elements not in the top row.
\begin{tiny} \lstset{language=Mathematica} \lstset{frameround=fttt} \begin{lstlisting}[frame=single]
BarycentricOperator[m_]:=Module[{},
 b[A_]:=Module[{n=Length[A],c},c=A.Table[Binomial[n+1,k],{k,n}];
  Delete[Prepend[c,1],n+1]];
 T[A_]:=Append[Transpose[Append[Transpose[A],b[A]]],
  Append[Table[0,{Length[A]}],(Length[A]+1)!]];
 Last[NestList[T,{{1}},m]]];
BarycentricOperator[7]
\end{lstlisting} \end{tiny} $$ A = \left[ \begin{array}{cccccccc} 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 0 & 2 & 6 & 14 & 30 & 62 & 126 & 254 \\ 0 & 0 & 6 & 36 & 150 & 540 & 1806 & 5796 \\ 0 & 0 & 0 & 24 & 240 & 1560 & 8400 & 40824 \\ 0 & 0 & 0 & 0 & 120 & 1800 & 16800 & 126000 \\ 0 & 0 & 0 & 0 & 0 & 720 & 15120 & 191520 \\ 0 & 0 & 0 & 0 & 0 & 0 & 5040 & 141120 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 40320 \\ \end{array} \right] \; . $$ The procedure produces finite dimensional versions of this matrix, which matter when looking for interesting quantities on $\mathcal{G}_d$. For $d=2$, for example, we have $\left[ \begin{array}{ccc} 1 & 1 & 1 \\ 0 & 2 & 6 \\ 0 & 0 & 6 \\ \end{array} \right]$, whose transpose has the eigenvector $[1,-1,1]^T$ (Euler characteristic), the eigenvector $[0,-2,3]^T$, a well known invariant for $2$-graphs which is proportional to the boundary for $2$-graphs with boundary, as well as $[0,0,1]^T$, which is area. For $d=3$, where $A=\left[ \begin{array}{cccc} 1 & 1 & 1 & 1 \\ 0 & 2 & 6 & 14 \\ 0 & 0 & 6 & 36 \\ 0 & 0 & 0 & 24 \\ \end{array} \right]$, besides the Euler characteristic vector $[1,-1,1,-1]^T$ to the eigenvalue $1$ and the volume vector $[0,0,0,1]^T$ to the eigenvalue $24$, there are the eigenvectors $[0,22,-33,40]^T$ with eigenvalue $2$ and $[0,0,-1,2]^T$ with eigenvalue $6$. The latter gives an invariant which is zero for $3$-graphs, as in general $[0,\dots,0,-2,d+1]$ is an invariant for $d$-graphs.
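The same recursion can be sketched in Python (a translation of ours of the Mathematica routine above, using only the standard library):

```python
from math import comb, factorial

def barycentric_operator(m):
    """Build the (m+1) x (m+1) upper-triangular matrix A mapping the clique
    vector of G to the clique data of its Barycentric refinement G_1."""
    A = [[1]]
    for _ in range(m):
        n = len(A)
        # c = A . (Binomial(n+1, k))_{k=1..n}; the new column is (1, c_1, ..., c_{n-1})
        c = [sum(A[i][k] * comb(n + 1, k + 1) for k in range(n)) for i in range(n)]
        b = [1] + c[:-1]
        # append the column b and the new bottom row (0, ..., 0, (n+1)!)
        A = [row + [b[i]] for i, row in enumerate(A)]
        A.append([0] * n + [factorial(n + 1)])
    return A
```

Calling `barycentric_operator(7)` reproduces the $8\times 8$ matrix printed above.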
We see experimentally that {\bf for any $d$-graph, the eigenfunction of $A^T$ to each eigenvalue $(2k)!$ is perpendicular to the clique vector $\vec{v}$}. For example, there is a discrete $P^2 \times S^2$ with clique vector $\vec{v}=[1908, 26520, 87020, 104010, 41604]$ which is perpendicular to $\vec{v}_4=[0, -22, 33, -40, 45]$. The invariant does not change under {\bf edge refinement modifications}, which are homotopies preserving $d$-graphs. The invariants $X_{2k}(G) = \vec{v}(G) \cdot \vec{v}_{2k}$ remain zero under such homotopies. They are currently under investigation. It looks promising that integral geometric methods like \cite{indexformula,colorcurvature} for generalized curvatures allow one to prove that the invariants are zero. It would also be useful to know whether any two homeomorphic $d$-graphs have a common refinement when using Barycentric or edge refinements. This looks more accessible than related questions for triangulations, as $d$-graphs can be dealt with recursively and refinements of unit spheres carry over to refinements of the entire graph. \bibliographystyle{plain}
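The quoted example can be checked directly (again a Python sketch of ours, using the top-left $5\times 5$ block of the matrix $A$ printed above):

```python
import numpy as np

# transpose of the top-left 5x5 block of the Barycentric operator A
AT = np.array([[1,  0,   0,   0,   0],
               [1,  2,   0,   0,   0],
               [1,  6,   6,   0,   0],
               [1, 14,  36,  24,   0],
               [1, 30, 150, 240, 120]], dtype=float)
v4 = np.array([0., -22., 33., -40., 45.])                 # even eigenvector quoted in the text
v = np.array([1908., 26520., 87020., 104010., 41604.])    # clique vector of a discrete P^2 x S^2

assert np.allclose(AT @ v4, 2 * v4)   # it satisfies A^T v4 = 2 v4, so it is an even eigenvector
X4 = v4 @ v                           # vanishes, as claimed
```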
\section{Introduction} Back-action of a measurement on the measured object is an emblematic property of quantum systems. The quantum Zeno effect (QZE), the suppression of the object's coherent internal dynamics by measurement, is one of the marked scenarios for this back-action. In this work we discuss the back-action effect in the framework of unsharp measurements, for which there is a well-developed theoretical model called continuous quantum measurement theory (cf. Ref.~\cite{cont.model} and references therein). An unsharp measurement extracts only partial information about an observable, so we introduce another detector with sharp detection. We are interested in the output of the sharp detection, which means that the unsharp measurement will be treated in a \textit{nonselective} picture. The \textit{nonselective} description represents a measurement whose record of data is not read out. The sharp detector also has its back-action, projecting the system randomly into one of its eigenstates, but now the readout is kept and the further evolution of the system is discarded. The application we have in mind is the spin-dependent single quantum dot, available in high quality due to massive progress in experimental technology. \\ Quantum dots, and also spin manipulations, are important for the realization of qubits. The effects of spin decoherence relevant to quantum computing were studied in Ref. \cite{Loss}. There is a growing culture of indirect measurements on nanostructures by means of Coulomb-coupled quantum point contacts, single-electron transistors, or double quantum dots \cite{Gur97}-\cite{Mao04}. The QZE is one of the effects studied from the beginning \cite{Gur97} (cf. also \cite{QZE}), and the concept of time-continuous measurement penetrated the field \cite{Gur97,Kor99,Goa01} long ago. Previous research has monitored the sharp or unsharp detection of the electron number.
While the work presented here examines the sharp detection of the electron number, it also contains an unsharp detection of spin observables. Spin manipulation and magnetization detection in quantum dots have been studied experimentally in Ref. \cite{Experiment}. The external field used for spin manipulation can be viewed as an environment of the subsystem, the quantum dot. The dynamics of the whole quantum system is reversible, and tracing out the degrees of freedom of the environment we arrive at a non-unitary time evolution \cite{BreuerPetruccione}. In general, all these non-unitary processes are connected through the Kraus form (Appendix \ref{GM}) related to the completely positive mappings of density matrices \cite{Kraus}. The subsystem's non-unitary dynamics imposed by the external field can be interpreted as an unsharp measurement \cite{nuclearmagnetic}. These manipulations are time-continuous, so we will study the spin currents and magnetization detection in the framework of continuous quantum measurement theory. \\ This article is organized as follows. In Sec. \ref{The model} we introduce our model. We derive the equation of motion by the usual Born-Markov approximation and calculate the Markovian current operators. We introduce the charge detector by extending the original Hilbert space, and we derive again the dynamics and the operators mentioned above. Continuous measurement theory starts from this point. The results on the quantum Zeno effect are shown and discussed in Secs. \ref{Results} and \ref{Conclusions}. General and continuous quantum measurement theory has a wide literature, and to provide the background of this work we present a short summary of the topic in Appendices \ref{GM} and \ref{CQM}, following Refs. \cite{cont.model, diosik, NielsenChuang}. \section{The model} \label{The model} Back-action of the measurement of a spin-related observable is described by the nonselective continuous quantum measurement model (Appendix \ref{CQM}).
Measuring an observable $\hat{A}$ by a tool which is treated quantum mechanically, we have the following equation of motion \cite{BBDG},\cite{expl}: \begin{equation} \label{masteqDN} \frac{d\hat{\rho}}{dt}=\mathcal{L} \hat{\rho}(t)-\frac{\gamma_A}{8}[\hat{A},[\hat{A},\hat{\rho}]], \end{equation} where the main parameter of the theory, the detection performance (or detection strength), is defined as \begin{equation} \label{gamma} \gamma_A=(\Delta t)^{-1}(\Delta A)^{-2}~, \end{equation} where $\Delta t$ is the time-resolution (or, equivalently, the inverse bandwidth) of the detector (detecting the observable $\hat{A}$) and $\Delta A$ is the statistical error characterizing the unsharp detection of the average value of the observable $\hat{A}$ in the period $\Delta t$.\\ The nonselective model throws away the time dependent detection result of the observable $\hat{A}$, but our goal is to monitor the time dependence of the charge observable in a sharp detection scenario, by adding a charge detector to our system. Our model thus contains two detectors: one measures continuously in the nonselective way, and the other measures the output of the whole system.\\ First let us derive the system equations without any detector. Let us consider our system, the spin-dependent quantum dot, the subject of experimental work \cite{KoppensScience309,PettaScience309}, which is coupled to two separate electron reservoirs. The density of states in the reservoirs is very high (a continuum), whereas the dot contains only isolated levels. Taking into account the spin degree of freedom ($s$), we introduce the Coulomb interaction ($U$), and we also add a spin-flip Hamiltonian with frequency $\Omega$.
In this case the full Hamiltonian becomes \begin{equation} \hat{H}=\hat{H}_D+\hat{H}_R+\hat{H}_I, \end{equation} where the quantum dot is described by \begin{eqnarray} \hat{H}_D&=&\sum_{s}E_s \hat{a}^{\dagger}_{D,s}\hat{a}_{D,s}+U\hat{a}^{\dagger}_{D,s}\hat{a}_{D,s}\hat{a}^{\dagger}_{D,-s}\hat{a}_{D,-s} \nonumber \\ &+& \hbar \Omega (\hat{a}^{\dagger}_{D,s}\hat{a}_{D,-s}+\hat{a}^{\dagger}_{D,-s}\hat{a}_{D,s}), \end{eqnarray} the reservoirs (leads) by \begin{equation} \hat{H}_R=\sum_{l,s} E_{l,s}\hat{a}^{\dagger}_{l,s}\hat{a}_{l,s}+\sum_{r,s} E_{r,s}\hat{a}^{\dagger}_{r,s}\hat{a}_{r,s}, \end{equation} and the tunnel coupling between the reservoirs and the dot by \begin{eqnarray} \hat{H}_I&=& \sum_{l,s} \hbar \omega_{l,s}(\hat{a}^{\dagger}_{l,s}\hat{a}_{D,s} + \hat{a}^{\dagger}_{D,s}\hat{a}_{l,s}) \nonumber \\ &+& \sum_{r,s} \hbar \omega_{r,s}(\hat{a}^{\dagger}_{r,s}a_{D,s} + \hat{a}^{\dagger}_{D,s}a_{r,s}), \end{eqnarray} where $s=\pm 1/2$ and the subscripts $l$ and $r$ enumerate the (very dense) levels in the left and right leads, respectively.\\ The state of the combined system, dot and leads, is given by the full density matrix $\hat{\chi}(t)$. The states of interest are the electronic states on the dot, described by the reduced density matrix of the dot, $\hat{\rho}(t)=Tr_R(\hat{\chi}(t))$. Here, $Tr_R$ is the trace taken over the leads, averaging over the degrees of freedom of the reservoirs.
\\ The usual Born-Markov master equation can be derived \cite{BreuerPetruccione}: \begin{eqnarray} &&\frac{d\hat{\rho}}{dt}=\mathcal{L} \hat{\rho}(t)=-\frac{i}{\hbar}[\hat{H}_{D},\hat{\rho}(t)] \\ &&-\frac{1}{\hbar^2}\int_0^{\infty} Tr_R\left[\hat{H}_{I}^{R}(\tau),\big[\hat{ H}_{I}^{D}(-\tau),\hat{\rho}(t)\otimes\hat{R}_0 \big] \right] d\tau, \nonumber \end{eqnarray} where \begin{eqnarray} &\hat{H}_{I}^{R}(t)=e^{\frac{i}{\hbar}\hat{H}_R t}\hat{ H}_{I}e^{-\frac{i}{\hbar}\hat{H}_R t}, \nonumber \\ &\hat{H}_{I}^{D}(-t)=e^{-\frac{i}{\hbar}\hat{H}_D t}\hat{ H}_{I}e^{\frac{i}{\hbar}\hat{H}_D t}. \nonumber \end{eqnarray} Here $\hat{R}_0$ is the density matrix of the leads in thermal equilibrium. This equation of motion describes sequential tunneling, which is sufficient for our model; the cotunneling effect appears at higher orders of perturbation in the master equation approach \cite{Krech}.\\ Using the fact that spin flips do not occur during tunneling, which means that the energy levels of the dot are much larger than the spin-flip energy, we omit this Hamiltonian contribution from the interaction picture.
Assuming that the current flows from left to right ($\mu_l > E_s,U >\mu_r$) and the reservoirs are in thermal equilibrium the following equations can be derived: \begin{widetext} \begin{eqnarray} \label{masterSD} \dot\rho_{aa} & = &-(\Gamma_{l,\frac{1}{2}}+\Gamma_{l,-\frac{1}{2}})\rho_{aa} +\Gamma_{r,\frac{1}{2}}\rho_{bb}+\Gamma_{r,-\frac{1}{2}}\rho_{cc} \label{eqa}\\ \dot\rho_{bb} & = &+i\Omega(\rho_{bc}-\rho_{cb})-(\Gamma'_{l,-\frac{1}{2}}+\Gamma_{r,\frac{1}{2}}) \rho_{bb}+\Gamma_{l,\frac{1}{2}}\rho_{aa} +\Gamma'_{r,-\frac{1}{2}}\rho_{dd} \label{eqb}\\ \dot\rho_{bc}&=&-i\delta \rho_{bc}+i\Omega(\rho_{bb}-\rho_{cc})-\frac{\Gamma'_{l,\frac{1}{2}}+\Gamma'_{l,-\frac{1}{2}}+\Gamma_{r,\frac{1}{2}}+\Gamma_{r,-\frac{1}{2}}}{2}\rho_{bc} \label{eqbc} \\ \dot\rho_{cb}&=&i\delta \rho_{cb}-i\Omega(\rho_{bb}-\rho_{cc})-\frac{\Gamma'_{l,\frac{1}{2}}+\Gamma'_{l,-\frac{1}{2}}+\Gamma_{r,\frac{1}{2}}+\Gamma_{r,-\frac{1}{2}}}{2}\rho_{cb} \label{eqcb} \\ \dot\rho_{cc} & = &-i\Omega(\rho_{bc}-\rho_{cb})-(\Gamma'_{l,\frac{1}{2}}+\Gamma_{r,-\frac{1}{2}}) \rho_{cc}+\Gamma_{l,-\frac{1}{2}}\rho_{aa} +\Gamma'_{r,\frac{1}{2}}\rho_{dd} \label{eqc}\\ \dot\rho_{dd} & = &-(\Gamma'_{r,\frac{1}{2}}+\Gamma'_{r,-\frac{1}{2}})\rho_{dd} +\Gamma'_{l,-\frac{1}{2}}\rho_{bb}+\Gamma'_{l,\frac{1}{2}}\rho_{cc}, \label{eqd} \end{eqnarray} \end{widetext} where we used the basis $|a \,\rangle=|0\rangle$ for empty, $|b \,\rangle=|\uparrow\rangle$ for spin up($s=\frac{1}{2}$), $|c\, \rangle=|\downarrow \rangle$ for spin down($s=-\frac{1}{2}$) and $|d \,\rangle=| \uparrow \downarrow \rangle$ for fully occupied dot state. 
The left tunneling rates are \begin{eqnarray} \label{Gammal} \Gamma_{l,s}&=&2 \pi \sum_{l} | \omega_{l,s} |^2 \delta(E_{l,s}-E_s) f_{F-D}(E_{l,s}), \\ \Gamma'_{l,s}&=&2 \pi \sum_{l} | \omega_{l,s} |^2 \delta(E_{l,s}-E_s-U) f_{F-D}(E_{l,s}), \nonumber \end{eqnarray} and the right ones are \begin{eqnarray} \label{Gammar} \Gamma_{r,s}&=&2 \pi \sum_{r} | \omega_{r,s} |^2 \delta(E_{r,s}-E_s) \Big(1-f_{F-D}(E_{r,s})\Big), \\ \Gamma'_{r,s}&=&2 \pi \sum_{r} | \omega_{r,s} |^2 \delta(E_{r,s}-E_s-U) \Big(1-f_{F-D}(E_{r,s})\Big). \nonumber \end{eqnarray} We also introduce $\delta=(E_{\frac{1}{2}}-E_{-\frac{1}{2}})/\hbar$, the difference of the energy levels, which are renormalized by the Lamb shifts.\\ The spin up and spin down current operators can be calculated from the continuity equation \cite{GebCar04,BodDio06}: \begin{equation}\label{continuity} \hat{I}_s+\mathcal{L}^\dagger\hat{N}_s=0. \end{equation} This holds in the Hilbert space of the dot, but an equivalent formulation can also be used: \begin{eqnarray} &&Tr_D (\hat{I}_s \hat{\rho})= \\ &&-\int_0^{\infty} Tr_{D+R}\left( \left[[\hat{Q}_s,\hat{H}_{I}^R(\tau)],\hat{H}_{I}^D(-\tau)\right]\hat{\rho}\otimes\hat{R}_0\right) d \tau. \nonumber \end{eqnarray} Here $\hat{Q}_s=\sum_{r} \hat{a}^{\dagger}_{r,s} \hat{a}_{r,s}-\sum_{l} \hat{a}^{\dagger}_{l,s} \hat{a}_{l,s} $ is the number of electrons with spin $s$ that have flowed through the quantum dot.\\ We obtain the spin current operators: \begin{eqnarray} 2\hat{I}_{\frac{1}{2}}&=&\Gamma_{l,\frac{1}{2}}| a\, \rangle \langle a\, | + \Gamma_{r,\frac{1}{2}}| b\, \rangle \langle b\, |+\Gamma'_{l,\frac{1}{2}}| c\, \rangle \langle c\, | \nonumber \\ &+&\Gamma'_{r,\frac{1}{2}}| d\, \rangle \langle d\, |, \\ 2\hat{I}_{-\frac{1}{2}}&=&\Gamma_{l,-\frac{1}{2}}| a\, \rangle \langle a\, | + \Gamma'_{l,-\frac{1}{2}}| b\, \rangle \langle b\, |+\Gamma_{r,-\frac{1}{2}}| c\, \rangle \langle c\, | \nonumber \\ &+&\Gamma'_{r,-\frac{1}{2}}| d\, \rangle \langle d\, |.
\end{eqnarray} At this point we know the dynamics of the system and the observables that we want to detect unsharply. This is enough to build up our model containing the continuous quantum measurement theory. The method is the same as that of Refs. \cite{BBDG,BMZ}, but here it is applied to a different system. In the following we extend the Hilbert space by adding the charge detector to the system. We will apply continuous quantum measurement theory after calculating the equation of motion and the spin current operators in this new Hilbert space.\\ We now introduce the charge detector, which produces our detection record and describes the counting statistics within the master equation approach. The problem is that the number of tunneled particles is actually a bath observable and not a system observable. We perform the counting statistics by extending our Hilbert space by $| n \,\rangle$, the state of the charge detector \cite{Brandes}. This charge detector can be defined in two ways: the first is the Ramo-Shockley type \cite{RS}, where all types of tunnelings are counted; the second counts only the electrons arriving in the right lead \cite{Brandes,Gur96}.
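The counting construction is easy to picture in a truncated detector space (an illustrative Python sketch of ours; the truncation size is arbitrary):

```python
import numpy as np

N = 6                               # keep detector states |0>, ..., |N-1>
bdag = np.eye(N, k=-1)              # b-dagger = sum_n |n+1><n|: ones on the subdiagonal
ket = np.zeros(N)
ket[2] = 1.0                        # detector reads n = 2
ket_after = bdag @ ket              # one more counted tunneling: now n = 3
```

In the Ramo-Shockley variant every forward tunneling (left or right) acts with $\hat b^\dagger$ and every backward one with $\hat b$, so the excitation count has to be halved to obtain the charge; in the right-lead variant only right-lead processes change $n$, which then directly counts the transmitted electrons.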
Formally this is done by modifying the interaction Hamiltonians, in the Ramo-Shockley case as \begin{eqnarray} \hat{a}_{D,s} \hat{a}^\dagger_{r,s} \rightarrow \hat{a}_{D,s} \hat{a}^\dagger_{r,s}\otimes \hat{b}^\dagger, \,\,\,\, \hat{a}^\dagger_{D,s} \hat{a}_{l,s} \rightarrow \hat{a}^\dagger_{D,s} \hat{a}_{l,s}\otimes \hat{b}^\dagger,&& \\ \hat{a}^\dagger_{D,s} \hat{a}_{r,s} \rightarrow \hat{a}^\dagger_{D,s} \hat{a}_{r,s}\otimes \hat{b}, \,\,\,\, \hat{a}_{D,s} \hat{a}^\dagger_{l,s} \rightarrow \hat{a}_{D,s} \hat{a}^\dagger_{l,s}\otimes \hat{b},&& \end{eqnarray} and for the right-lead detector as \begin{eqnarray} \hat{a}_{D,s} \hat{a}^\dagger_{r,s} \rightarrow \hat{a}_{D,s} \hat{a}^\dagger_{r,s}\otimes \hat{b}^\dagger, \,\,\,\, \hat{a}^\dagger_{D,s} \hat{a}_{l,s} \rightarrow \hat{a}^\dagger_{D,s} \hat{a}_{l,s}\otimes \hat{\mathcal{I}},&&\\ \hat{a}^\dagger_{D,s} \hat{a}_{r,s} \rightarrow \hat{a}^\dagger_{D,s} \hat{a}_{r,s}\otimes \hat{b}, \,\,\,\, \hat{a}_{D,s} \hat{a}^\dagger_{l,s} \rightarrow \hat{a}_{D,s} \hat{a}^\dagger_{l,s}\otimes \hat{\mathcal{I}},&& \end{eqnarray} where $\hat{\mathcal{I}}$ is the identity operator, and the other terms of the system Hamiltonian are extended by the tensor product with the identity. The charge-detector excitation operator is \begin{equation} \hat{b}^\dagger=\sum^{\infty}_{-\infty} |n+1\rangle \langle n|. \end{equation} This operator increments the detector state when a tunneling happens (Ramo-Shockley case) or when an electron is created in the right lead. Counting all possible tunnelings is only relevant when the charge detection is sensitive to both leads; then the excitation number has to be divided by 2 to obtain the charge number.\\ After the usual Born-Markov master equation approach the new Hilbert space is spanned by $| a \,\rangle \otimes | n\, \rangle$, $| b\, \rangle \otimes | n\, \rangle$, $| c \, \rangle \otimes | n\, \rangle$, $| d \, \rangle \otimes | n\, \rangle$. The eq. \eqref{masterSD} is closed for states that are diagonal in $n$. We introduce the following representation: $\langle n\, | \hat{\rho}| n \, \rangle=\rho^{n}$, where $\rho^{n}$ is the unnormalized conditional density matrix of the dot, depending on the number $n$ and acting on the Hilbert space of the dot.\\ The $n$-resolved density matrices are related via \begin{equation} \dot\rho^{n} \equiv L_0 \rho^n + L_+ \rho^{n-1}. \end{equation} Fixing the direction of the current means that the term with $\rho^{n+1}$ does not appear in our equations. Calculating the current operators in the new basis, we find \begin{equation} \hat{J}_{s}=\hat{I}_{s}\otimes |n\rangle \langle n |. \end{equation} We now have all the information about the system, and we apply the theory of continuous quantum measurement. The resulting model describes a system in which the spin observables are continuously detected without the output being read, while at the same time a sharp detection is performed by a charge detector. The sharp detector gives information about the system, and its result depends on the interaction strength of the continuous detection. \section{Results} \label{Results} We investigate three situations, in which the spin up current, the spin down current, or the magnetization ($\hat{M}=\hat{a}^{\dagger}_s\hat{a}_s-\hat{a}^{\dagger}_{-s}\hat{a}_{-s}$) is unsharply detected. These operators are block diagonal in the extended Hilbert space, and they are diagonal in the restricted space of the quantum dot. Applying the extension which counts the electrons tunneled through the quantum dot, the eq.
\eqref{masteqDN} with the continuous detection takes the form: \begin{widetext} \begin{eqnarray} \dot\rho_{aa}^n & = &-(\Gamma_{l,\frac{1}{2}}+\Gamma_{l,-\frac{1}{2}})\rho_{aa}^n +\Gamma_{r,\frac{1}{2}}\rho_{bb}^{n-1}+\Gamma_{r,-\frac{1}{2}}\rho_{cc}^{n-1} \label{eqan}\\ \dot\rho_{bb}^n & = &+i\Omega(\rho_{bc}^n-\rho_{cb}^n)-(\Gamma'_{l,-\frac{1}{2}}+\Gamma_{r,\frac{1}{2}}) \rho_{bb}^n+\Gamma_{l,\frac{1}{2}}\rho_{aa}^{n} +\Gamma'_{r,-\frac{1}{2}}\rho_{dd}^{n-1} \label{eqbn}\\ \dot\rho_{bc}^n&=&-i\delta \rho_{bc}^n+i\Omega(\rho_{bb}^n-\rho_{cc}^n)-\frac{\Gamma'_{l,\frac{1}{2}}+\Gamma'_{l,-\frac{1}{2}}+\Gamma_{r,\frac{1}{2}}+\Gamma_{r,-\frac{1}{2}}}{2}\rho_{bc}^n-\Gamma \rho_{bc}^n \label{eqbcn} \\ \dot\rho_{cb}^n&=&i\delta \rho_{cb}^n-i\Omega(\rho_{bb}^n-\rho_{cc}^n)-\frac{\Gamma'_{l,\frac{1}{2}}+\Gamma'_{l,-\frac{1}{2}}+\Gamma_{r,\frac{1}{2}}+\Gamma_{r,-\frac{1}{2}}}{2}\rho_{cb}^n -\Gamma \rho_{cb}^n \label{eqcbn} \\ \dot\rho_{cc}^n & = &-i\Omega(\rho_{bc}^n-\rho_{cb}^n)-(\Gamma'_{l,\frac{1}{2}}+\Gamma_{r,-\frac{1}{2}}) \rho_{cc}^n+\Gamma_{l,-\frac{1}{2}}\rho_{aa}^{n} +\Gamma'_{r,\frac{1}{2}}\rho_{dd}^{n-1} \label{eqcn}\\ \dot\rho_{dd}^n & = &-(\Gamma'_{r,\frac{1}{2}}+\Gamma'_{r,-\frac{1}{2}})\rho_{dd}^n +\Gamma'_{l,-\frac{1}{2}}\rho_{bb}^{n}+\Gamma'_{l,\frac{1}{2}}\rho_{cc}^{n}, \label{eqdn} \end{eqnarray} \end{widetext} where $\Gamma$ is \begin{eqnarray} \hat{A}=\hat{J}_{\frac{1}{2}}&:&\, \Gamma=\frac{\gamma_{\frac{1}{2}}}{8}\frac{(\Gamma'_{l,\frac{1}{2}}-\Gamma_{r,\frac{1}{2}})^2}{4} \label{1/2} \\ \hat{A}=\hat{J}_{-\frac{1}{2}}&:&\, \Gamma=\frac{\gamma_{-\frac{1}{2}}}{8}\frac{(\Gamma'_{l,-\frac{1}{2}}-\Gamma_{r,-\frac{1}{2}})^2}{4} \label{-1/2} \\ \hat{A}=\hat{M}\otimes \hat{\mathcal{I}}&:&\, \Gamma=\frac{\gamma_M}{8} \label{M}. 
\end{eqnarray} The diagonal density matrix elements $\rho_{aa}^{n}(t)$, $\rho_{bb}^{n}(t)$, $\rho_{cc}^{n}(t)$ and $\rho_{dd}^{n}(t)$ are the probabilities to find a) no electron, b) one electron with spin up, c) one electron with spin down, and d) two electrons inside the well, at time $t$, when $n$ electrons have tunneled through the system. Summing up the partial probabilities (states of the detector) we obtain for the total probabilities, $\rho(t)=\sum_n\rho^{n}(t)$, the same eqs. \eqref{eqa},\eqref{eqb},\eqref{eqbc},\eqref{eqcb},\eqref{eqc},\eqref{eqd}, now supplemented by the $\Gamma$ terms in the coherence equations. Tracing out the dot degrees of freedom, we can calculate \begin{equation} P_n(t)=Tr\{\rho^n(t)\}, \end{equation} which can be interpreted as the probability that $n$ electrons have tunneled through the whole system. \\ The charge current, in the right-lead detection interpretation, is \begin{equation} I(t)=e \frac{d}{dt}\left(\sum_n n P_n(t) \right). \end{equation} If we set the spin up state energy above the spin down one ($E_{\frac{1}{2}}>E_{-\frac{1}{2}}$), and use the fact that the left incoherent tunneling rates involve the electron Fermi-Dirac distribution function in eqs. \eqref{Gammal} and the right ones the hole distribution ($\mu_l > E_s,U >\mu_r$) in eqs. \eqref{Gammar}, we have the following relations: \begin{eqnarray} \Gamma_{l,-\frac{1}{2}}&>&\Gamma_{l,\frac{1}{2}}>\Gamma'_{l,-\frac{1}{2}}>\Gamma'_{l,\frac{1}{2}}, \nonumber \\ \Gamma'_{r,\frac{1}{2}}&>&\Gamma'_{r,-\frac{1}{2}}>\Gamma_{r,\frac{1}{2}}>\Gamma_{r,-\frac{1}{2}}, \nonumber \end{eqnarray} assuming that the different spin states in the reservoirs are equally occupied.
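The origin of the damping rates $\Gamma$ is transparent: for a diagonal measured operator $\hat A=\mathrm{diag}(a_i)$, the double commutator in eq.~\eqref{masteqDN} reduces to pure dephasing of the off-diagonal elements at rates $\frac{\gamma_A}{8}(a_i-a_j)^2$; with $\hat A=\hat J_{\frac{1}{2}}$, the coherence $\rho_{bc}$ connects eigenvalues $\Gamma_{r,\frac{1}{2}}/2$ and $\Gamma'_{l,\frac{1}{2}}/2$ (note the factor $2$ in the definition of $\hat I_s$), reproducing eq.~\eqref{1/2}. A small numerical check (the numbers are illustrative and ours):

```python
import numpy as np

gamma = 0.8
a = np.array([0.3, 1.1, 2.0])            # eigenvalues of a diagonal observable A
A = np.diag(a)
rho = np.array([[0.5, 0.2 + 0.1j, 0.1],
                [0.2 - 0.1j, 0.3, 0.1j],
                [0.1, -0.1j, 0.2]])

# damping term -gamma/8 [A, [A, rho]] of the nonselective measurement equation
D = -gamma / 8 * (A @ A @ rho - 2 * A @ rho @ A + rho @ A @ A)

# for diagonal A: d rho_ij / dt = -gamma/8 * (a_i - a_j)^2 * rho_ij
D_dephasing = -gamma / 8 * (a[:, None] - a[None, :]) ** 2 * rho
```

The two expressions agree element by element, so populations are untouched and only coherences between states with different eigenvalues of $\hat A$ are damped.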
We studied a finite-temperature and a zero-temperature case, with assumptions made for further simplicity of the results:\\ In the {\em finite temperature\/} case, due to the properties of the Fermi-Dirac distribution function, with the assumption $\mu_l-\mu_r=\sum_s E_s + U$ and with identical densities of states and tunneling rates for both leads, we have \begin{eqnarray} \Gamma_1&=&\Gamma_{l,-\frac{1}{2}}= \Gamma'_{r,\frac{1}{2}},\,\,\,\,\Gamma_2=\Gamma_{l,\frac{1}{2}}=\Gamma'_{r,-\frac{1}{2}}, \\ \Gamma_3&=&\Gamma'_{l,-\frac{1}{2}}=\Gamma_{r,\frac{1}{2}},\,\,\,\, \Gamma_4=\Gamma'_{l,\frac{1}{2}}=\Gamma_{r,-\frac{1}{2}}. \end{eqnarray} In the {\em zero temperature\/} case, $T \rightarrow 0$, other assumptions have to be made. In this case all the left tunneling parameters are equal to each other, and the same is true for the right ones; assuming in addition identical leads would leave no measurement-induced effect on the spin currents. \begin{eqnarray} \Gamma_l&=&\Gamma_{l,-\frac{1}{2}}=\Gamma_{l,\frac{1}{2}}=\Gamma'_{l,-\frac{1}{2}}=\Gamma'_{l,\frac{1}{2}},\\ \Gamma_r&=&\Gamma_{r,-\frac{1}{2}}=\Gamma_{r,\frac{1}{2}}=\Gamma'_{r,-\frac{1}{2}}=\Gamma'_{r,\frac{1}{2}}. \end{eqnarray} In the finite temperature case the stationary charge current reads \begin{widetext} \begin{equation} \label{Istac} I_{\infty}=e\frac{2 \left(\Gamma _1+\Gamma _2\right) \left(2 \Gamma _4 \Gamma _5 \Omega ^2+\Gamma _3 \left(2 \Gamma _5 \Omega ^2+\Gamma _4 \left(4 \delta ^2+\Gamma _5^2\right)\right)\right)}{8 \Gamma _3 \Gamma _4 \delta ^2+2 \Gamma _3 \Gamma _4 \Gamma _5^2+4 \Omega ^2 \Gamma _3 \Gamma _5+4 \Omega ^2 \Gamma _4 \Gamma _5+\Gamma _1 \left(4 \Gamma _5 \Omega ^2+\Gamma _3 \left(4 \delta ^2+\Gamma _5^2\right)\right)+\Gamma _2 \left(4 \Gamma _5 \Omega ^2+\Gamma _4 \left(4 \delta ^2+\Gamma _5^2\right)\right)}, \end{equation} \end{widetext} where we introduced the abbreviation $\Gamma_5=\Gamma_3+\Gamma_4+\Gamma$.
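The stationary behaviour can also be obtained by direct time integration of the $n$-resolved equations. The following sketch (our own illustrative implementation, with hypothetical parameter values and the finite-temperature rate symmetry above) propagates the truncated hierarchy with an Euler step and returns the total probability and the steady current:

```python
import numpy as np

# finite-temperature rate symmetry from the text; the numbers are illustrative
G1, G2, G3, G4 = 1.0, 0.8, 0.6, 0.4
Omega, delta = 1.0, 0.0           # spin-flip frequency and level splitting

def shift(x):
    """Return the array x^{n-1} (with x^{-1} = 0) for the n-resolved hierarchy."""
    y = np.zeros_like(x)
    y[1:] = x[:-1]
    return y

def evolve(Gamma, N=160, T=40.0, dt=2e-3):
    """Euler-integrate the n-resolved master equations (right-lead counting);
    return (sum_n P_n(T), stationary charge current in units of e)."""
    aa = np.zeros(N); bb = np.zeros(N); cc = np.zeros(N); dd = np.zeros(N)
    bc = np.zeros(N, dtype=complex); cb = np.zeros(N, dtype=complex)
    aa[0] = 1.0                                  # empty dot, detector at n = 0
    lam = G3 + G4 + Gamma                        # coherence decay rate
    for _ in range(int(T / dt)):
        daa = -(G1 + G2) * aa + G3 * shift(bb) + G4 * shift(cc)
        dbb = (1j * Omega * (bc - cb)).real - 2 * G3 * bb + G2 * aa + G2 * shift(dd)
        dcc = (-1j * Omega * (bc - cb)).real - 2 * G4 * cc + G1 * aa + G1 * shift(dd)
        ddd = -(G1 + G2) * dd + G3 * bb + G4 * cc
        dbc = -1j * delta * bc + 1j * Omega * (bb - cc) - lam * bc
        dcb = +1j * delta * cb - 1j * Omega * (bb - cc) - lam * cb
        aa += dt * daa; bb += dt * dbb; cc += dt * dcc; dd += dt * ddd
        bc += dt * dbc; cb += dt * dcb
    P = aa + bb + cc + dd                        # P_n(T) = Tr rho^n(T)
    current = G3 * bb.sum() + G4 * cc.sum() + (G1 + G2) * dd.sum()
    return P.sum(), current
```

For $\delta=0$ the steady current obtained this way decreases with the measurement strength $\Gamma$ while remaining finite, the ``partial'' quantum Zeno behaviour of eq.~\eqref{Istac}.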
The above formula shows how the quantum Zeno effect appears with growing measurement performance $\gamma_s,\gamma_M$ (see Figs. \ref{fig1}, \ref{fig2}). Here this effect does not lead to a total vanishing of the current \cite{BBDG,Gur97}. When $\Gamma \to \infty$ the stationary current is \begin{equation} I_{\infty}=e\frac{2 \left(\Gamma _1+\Gamma _2\right) \Gamma _3 \Gamma _4}{\Gamma _1 \Gamma _3+\left(\Gamma _2+2 \Gamma _3\right) \Gamma _4}. \end{equation} By inspection of the density matrix we learn that this is due to the damping of the coherent spin oscillation. This is also the reason for the ``partial'' quantum Zeno effect: the transport of the electrons is not suppressed completely by the damping.\\ In the zero temperature case the stationary charge current reads \begin{equation} I_{\infty}=e\frac{2 \Gamma _l \Gamma _r}{\Gamma _l+\Gamma _r}. \end{equation} This means that the continuous measurement does not contribute to the stationary current. The result is twice the stationary current of the spinless single dot, because the density of states is doubled by the spin degree of freedom. Studying the dynamics of the density matrix, it turns out that the equality of the spin-related incoherent tunneling rates (left or right at the same time), \begin{eqnarray} \label{criteria} \Gamma_{l,-\frac{1}{2}}=\Gamma_{l,\frac{1}{2}},\,\,\,\,\Gamma'_{l,-\frac{1}{2}}=\Gamma'_{l,\frac{1}{2}},\\ \Gamma_{r,-\frac{1}{2}}=\Gamma_{r,\frac{1}{2}},\,\,\,\,\Gamma'_{r,-\frac{1}{2}}=\Gamma'_{r,\frac{1}{2}}, \end{eqnarray} leads to the cancellation of the measurement-induced damping mechanism. If even one of the above equalities fails, the damping survives.\\ \begin{figure}[t] \begin{center} \includegraphics[width=3.1in]{current3D_Gamma_Omega} \caption{\label{fig1} Measured stationary charge current as a function of the rescaled damping and spin-flip frequency parameters.
$\delta/\Gamma_1=2$, $\Gamma_4/\Gamma_1=0.4$, $\Gamma_3/\Gamma_1=0.6$ and $\Gamma_2/\Gamma_1=0.8$. As the spin-flip frequency ($\Omega$) is increased, the stationary current also increases. The increase of the measurement performance ($\Gamma$) shows the behaviour of the quantum Zeno effect. This effect becomes stronger when $\Omega$ is increased. For large values of $\Gamma$ a partial quantum Zeno effect can be seen.} \end{center} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[width=3.1in]{current3D_Gamma_delta} \caption{\label{fig2} Measured stationary charge current as a function of the rescaled damping and $\delta$ parameters. $\Gamma_4/\Omega=0.4$, $\Gamma_3/\Omega=0.6$, $\Gamma_2/\Omega=0.8$ and $\Gamma_1/\Omega=1$. As the difference of the energy levels ($\delta$) is increased, a quantum anti-Zeno effect can be seen. } \end{center} \end{figure} Analyzing eq. \eqref{Istac}, when the energy levels in the dot are equal ($E_{\frac{1}{2}}=E_{-\frac{1}{2}}$), then $\delta=0$ and the reduction of the current is monotonic. For any other value of $\delta$ there is a range where the current increases with growing $\Gamma$ (an ``anti-Zeno effect''). Another important property is the dependence on the spin-flip frequency $\Omega$: if this parameter is $0$, the measurement-induced decoherence no longer affects the system. This also shows how the decoherence mechanism damps the internal coherent motion.\\ In a future experimental setup applying this double detection scheme, the validity of continuous quantum measurement theory could be demonstrated by changing the conditions of the sensitive quantum detection and analyzing the sharp detection output. Provided that substrate-induced decoherence is accounted for, a possible decrease of the steady current would be solely the result of the unsharp quantum measurement.
We also note that the unsharply detected operators chosen here are diagonal in the Hilbert space of the dot, which is why the damping mechanism acts only on the internal coherent motion. Any other, more general operators would introduce a more complex damping mechanism.\\ The noise spectrum can be calculated from the MacDonald formula \cite{macd}: \begin{equation} S(\omega)=2 \omega \int_{0}^{\infty} \frac{d}{d \tau}\Big(\langle n^2(\tau)\rangle- \langle n(\tau) \rangle^2 \Big)\sin(\omega \tau)\, d \tau. \end{equation} \begin{figure}[t] \begin{center} \includegraphics[width=3.1in]{noise3D} \caption{\label{fig3} Noise spectrum of the spin-dependent single quantum dot as a function of $\Gamma$. $\Gamma_4/\Omega=0.4$, $\Gamma_3/\Omega=0.6$, $\Gamma_2/\Omega=0.8$, $\Gamma_1/\Omega=1$ and $\delta/\Omega=1$. The Rabi peaks can be found at $\omega\approx 2\Omega$ and, on increasing the performance of the measurement ($\Gamma$), their amplitudes start to decrease. For high values of $\Gamma$ the Rabi peaks get overdamped and merge into a single peak around $\omega = 0$.} \end{center} \end{figure} The Rabi oscillations, $\omega_R \approx 2 \Omega$, can be recognized in Fig.~\ref{fig3}; they disappear as the measurement-induced damping increases. The exact locations of the peaks also depend strongly on the incoherent tunneling parameters. Another important property is \begin{equation} \lim_{\omega \to \infty} S(\omega)=eI_{\infty}. \end{equation} This limit of the noise spectrum is half of the Poissonian shot noise, a consequence of the fermionic structure of the system.\\ The Fano factor is defined as \begin{equation} F=\frac{S(0)}{2eI}. \end{equation} \begin{figure}[t] \begin{center} \includegraphics[width=3.1in]{Fano3D} \caption{\label{fig4} The Fano factor as a function of $\Gamma$ and the energy level difference $\delta$. $\Gamma_4/\Omega=0.4$, $\Gamma_3/\Omega=0.6$, $\Gamma_2/\Omega=0.8$ and $\Gamma_1/\Omega=1$.
As the difference of the energy levels ($\delta$) is increased, the Fano factor also increases. The same effect can be seen when the measurement performance ($\Gamma$) is increased. The most interesting feature appears for large values of $\delta$, where the Fano factor first decreases and then increases as a function of $\Gamma$.} \end{center} \end{figure} Studying Fig.~\ref{fig4} we see that the Fano factor never becomes Poissonian or super-Poissonian. An increase can be seen on increasing the strength of the continuous detection, but in the limit $\Gamma \to \infty$ it tends to a well-defined value: \begin{equation} \frac{S(0)}{2eI}=1-\frac{2 \left(\Gamma _1 \left(\Gamma _2 \left(\Gamma _3-\Gamma _4\right){}^2-2 \Gamma _3^2 \Gamma _4\right)-2 \Gamma _2 \Gamma _3 \Gamma _4^2\right)}{\left(\Gamma _1 \Gamma _3+\left(\Gamma _2+2 \Gamma _3\right) \Gamma _4\right){}^2}. \end{equation} The role of $\delta$ is similar to that in the discussion of the steady current. Here, for $\delta\neq0$, the Fano factor values first {\em decrease\/} and then increase. The decrease can be related to the ``anti-Zeno'' effect and the increase to the quantum Zeno effect. The system studied here shows that control of the energy levels ($\delta$) and of the spin flip frequency ($\Omega$) could lead to the detection of all \textit{these effects} in a future experiment. \section{Conclusions} \label{Conclusions} We derived an explicit expression for the current of a doubly detected, spin-dependent single-dot system. Unsharp detection of spin-related observables was studied in the presence of sharp charge detection. The back-action of the unsharp detection has the character of a partial quantum Zeno effect, reducing the mean transmitted charge current by at most 5\%. This effect is significant because the charge detection is insensitive to the spin degree of freedom: by measuring the spin variable in a nonselective scenario, the spin manipulations could nevertheless be detected by a charge detector.
In fact, an experimental study has the potential to test the validity of continuous quantum measurement theory in mesoscopic solid state physics. In the system studied, the tunneling parameters play an important role: if all the pairs of spin-dependent tunneling parameters are equal, the back-action has no further effect. The damping of the Rabi oscillations in the noise spectrum was observed. Another measurable quantity, the Fano factor, was studied; its decrease was related to the ``anti-Zeno'' effect and its increase to the quantum Zeno effect. Tight control of the coherent motion may strongly enhance the possibilities of observing these effects.\\ The author has profited from helpful suggestions made by T. Geszti, M. J\"a\"askel\"ainen, and U. Z\"ulicke. This work was supported by a postdoctoral fellowship grant from the Massey University Research Fund.
\section*{Acknowledgments} The computing resources used for our numerical study and the related technical support have been provided by the CRESCO/ENEA\-GRID High Performance Computing infrastructure and its staff \cite{Ponti}. CRESCO ({\color{red}C}omputational {\color{red} RES}earch centre on {\color{red} CO}mplex systems) is funded by ENEA and by Italian and European research programmes. \section{Derivation of MFEs in the SBM} In order to work out the functions $f^{(i,\pm)}_D$ for $Q>2$, we need to introduce some additional notation. {\mydef Given $k,n\ge 1$, and $D = \{A_{i_1},\ldots,A_{i_n}\}$ we let \begin{equation} \theta_k\circ D = \left\{\begin{array}{ll} \{A_{i_1},\ldots,A_{i_{k-1}},A_{i_{k+1}},\ldots,A_{i_n}\} & {\rm if }\ k\le n\,, \\[3.0ex] D & {\rm otherwise}\,. \end{array} \right. \end{equation} In other words, the operator $\theta_k$ removes the $k$th name from a notebook. We also let \begin{equation} \rho_k \circ D = A_{i_k}\,, \qquad {\rm for} \ k\le n\,, \end{equation} {\it i.e.\ } the operator $\rho_k$ extracts the $k$th name out of a notebook. We finally let \begin{equation} \Sigma_A({\cal D}) = \{D\in S({\cal D}):\ \ A\in D\}\,, \end{equation} that is to say $\Sigma_A$ is the set of all notebooks containing the name $A$.\hfill $\Box$ } \vskip 0.1cm We find it convenient to work out $f^{(i)}_D$ separately for notebooks with $|D|=1$ (single-name notebooks) and $|D|>1$ (multi-name notebooks). Indeed, if the initial conditions are chosen as in eq.~(\ref{eq:initcond}), densities $n^{(i)}_D$ with $|D|=1$ can increase throughout the game only owing to agent-agent interactions where a certain multi-name notebook collapses to $D$, whereas densities $n^{(i)}_D$ with $|D|>1$ can increase only when an agent adds a name to his/her notebook thus attaining $D$. We start from the latter. \vskip 0.1cm \noindent\underline{Case I : $|D|>1$}. 
In order to let $n^{(i)}_D$ increase, the only possibility is that a listener belonging to the $i$th community has initially a notebook differing from $D$ by the lack of one name and a speaker belonging to any community has a notebook containing that name. Therefore, we let $N_{+,D}=1$ for $|D|>1$. The contribution to MFEs is given by \begin{equation} f^{(i,+,1)}_D(\bar n) = \sum_{\ell=1}^{|D|} \ \ \sum_{\tilde D \in \Sigma_{\rho_\ell\circ D}({\cal D})} \ \ \frac{1}{|\tilde D|}\ \ \left [ \pi^{(ii)}n^{(i)}_{\tilde D} + \sum_{k\ne i}^{1\ldots Q} \pi^{(ik)}n^{(k)}_{\tilde D}\right] \ \ n^{(i)}_{\theta_\ell\circ D}\,. \end{equation} \vskip -0.2cm \noindent The factor of $1/|\tilde D|$ represents the probability that the speaker chooses the name $\rho_\ell\circ D$ among those in his/her notebook. In order to let $n^{(i)}_D$ decrease, a listener or a speaker in the $i$th community must have initially notebook $D$ and must modify it when interacting with another agent. The latter must have a notebook sharing at least one name with~$D$. Qualitatively, these are two different types of transitions, therefore we let $N_{-,D}=2$ for $|D|>1$. \begin{itemize} \item{Type-I transitions lowering $n^{(i)}_{D}$ for $|D|>1$} \end{itemize} The listener belongs to the $i$th community and the speaker belongs to any community. The listener has initially notebook $D$, while the speaker has a notebook which has a non-vanishing overlap with $D$. The contribution to MFEs is given by \begin{equation} f^{(i,-,1)}_D(\bar n) = \sum_{\ell=1}^{|D|} \ \ \sum_{\tilde D \in \Sigma_{\rho_\ell\circ D}({\cal D})} \frac{1}{|\tilde D|} \left[\pi^{(ii)}n^{(i)}_{\tilde D} + \sum_{k\ne i}^{1\ldots Q}\pi^{(ik)}n^{(k)}_{\tilde D}\right] n^{(i)}_D\,. \end{equation} \vskip -0.2cm \noindent The factor of $1/|\tilde D|$ represents again the probability that the speaker chooses the name $\rho_\ell\circ D$ among those in his/her notebook. 
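As a concrete illustration of the operators $\theta_k$, $\rho_k$, $\Sigma_A$ and of the gain term $f^{(i,+,1)}_D$ above, the following Python sketch evaluates the sum for a toy configuration with $Q=2$ communities and two names; the density values and the mixing matrix $\pi^{(ik)}$ are invented for illustration only:

```python
# Sketch of the notebook operators and of the gain term f^{(i,+,1)}_D.
# Notebooks are ordered tuples of names; densities n[(i, D)] are toy values.
from itertools import combinations

def theta(k, D):
    """theta_k: remove the k-th name (1-based) from notebook D; identity if k > |D|."""
    return D[:k-1] + D[k:] if k <= len(D) else D

def rho(k, D):
    """rho_k: extract the k-th name (1-based) out of notebook D."""
    return D[k-1]

def all_notebooks(names):
    """S(D): all non-empty notebooks over the given names."""
    return [c for r in range(1, len(names) + 1) for c in combinations(names, r)]

def sigma(A, notebooks):
    """Sigma_A: all notebooks containing the name A."""
    return [D for D in notebooks if A in D]

def f_plus_1(i, D, n, pi, notebooks, Q):
    """Gain term for multi-name notebooks (|D| > 1)."""
    out = 0.0
    for ell in range(1, len(D) + 1):
        for Dt in sigma(rho(ell, D), notebooks):
            speaker = sum(pi[i][k] * n.get((k, Dt), 0.0) for k in range(Q))
            out += speaker * n.get((i, theta(ell, D)), 0.0) / len(Dt)
    return out

# Toy example: Q = 2 communities, names A1, A2 (illustrative densities only).
S = all_notebooks(("A1", "A2"))
n = {(0, ("A1",)): 0.4, (0, ("A2",)): 0.3, (1, ("A1",)): 0.5,
     (1, ("A1", "A2")): 0.2}
pi = [[0.9, 0.1], [0.1, 0.9]]
print(f_plus_1(0, ("A1", "A2"), n, pi, S, Q=2))
```

The loss terms $f^{(i,-,1)}_D$ and $f^{(i,-,2)}_D$ differ only in the density factor ($n^{(i)}_D$ instead of $n^{(i)}_{\theta_\ell\circ D}$) and in the normalization ($1/|D|$ for Type-II), so they can be coded along the same lines.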
\begin{itemize} \item{Type-II transitions lowering $n^{(i)}_{D}$ for $|D|>1$} \end{itemize} The speaker belongs to the $i$th community and the listener belongs to any community. The speaker has initially notebook $D$, while the listener has a notebook which has a non-vanishing overlap with $D$. The contribution to MFEs is given by \begin{equation} f^{(i,-,2)}_D(\bar n) = \sum_{\ell=1}^{|D|} \ \ \sum_{\tilde D \in \Sigma_{\rho_\ell\circ D}({\cal D})} \frac{1}{|D|} \left[\pi^{(ii)}n^{(i)}_{\tilde D} + \sum_{k\ne i}^{1\ldots Q}\pi^{(ik)}n^{(k)}_{\tilde D}\right] n^{(i)}_D\,, \end{equation} where the factor of $1/|D|$ represents once more the probability that the speaker chooses the name $\rho_\ell\circ D$ among those in his/her notebook. \vskip 0.3cm \noindent\underline{Case II : $|D|=1$}. In this case $D = \{A_\ell\}$ for some $\ell=1,\ldots,Q$. As mentioned above, $n^{(i)}_{A_\ell}$ can increase only because a listener or a speaker in the $i$th community has initially a multi-name notebook containing $A_\ell$, which collapses to $D=\{A_\ell\}$ when he/she interacts with another agent belonging to any community. The latter too must have initially a notebook containing $A_\ell$. There are three different types of transitions, thus we let $N_{+,A_\ell} = 3$. \begin{itemize} \item{Type-I transitions increasing $n^{(i)}_{A_\ell}$} \end{itemize} The listener belongs to the $i$th community and the speaker belongs to any community. The speaker has notebook $D=\{A_\ell\}$, while the listener has any multi-name notebook $D_{\rm\scriptscriptstyle L}\in\Sigma_{A_\ell}({\cal D})$ with $|D_{\rm\scriptscriptstyle L}|>1$ (otherwise the interaction leaves the system unchanged!). 
The contribution to MFEs is given by \begin{equation} f^{(i,+,1)}_{A_\ell}(\bar n) = \sum_{\substack{ D_{\rm\scriptscriptstyle L}\in \Sigma_{A_\ell}({\cal D})\\[1.0ex] |D_{\rm\scriptscriptstyle L}|>1}} \left[\pi^{(ii)}n^{(i)}_{A_\ell} + \sum_{k\ne i}^{1\ldots Q}\pi^{(ik)}n^{(k)}_{A_\ell}\right]\,n^{(i)}_{D_{\rm\scriptscriptstyle L}}\,. \end{equation} \begin{itemize} \item{Type-II transitions increasing $n^{(i)}_{A_\ell}$} \end{itemize} The speaker belongs to the $i$th community and the listener belongs to any community. The listener has notebook $D=\{A_\ell\}$, while the speaker has any multi-name notebook $D_{\rm\scriptscriptstyle S}\in\Sigma_{A_\ell}({\cal D})$ with $|D_{\rm\scriptscriptstyle S}|>1$ (otherwise the interaction again leaves the system unchanged!). The contribution to MFEs is given by \begin{equation} f^{(i,+,2)}_{A_\ell}(\bar n) = \sum_{\substack{D_{\rm\scriptscriptstyle S}\in \Sigma_{A_\ell}({\cal D})\\[1.0ex] |D_{\rm\scriptscriptstyle S}|>1}} \frac{1}{|D_{\rm\scriptscriptstyle S}|} \left[\pi^{(ii)}n^{(i)}_{A_\ell} +\sum_{k\ne i}^{1\ldots Q}\pi^{(ik)}n^{(k)}_{A_\ell}\right]\,n^{(i)}_{D_{\rm\scriptscriptstyle S}}\,, \end{equation} where the factor of $1/|D_{\rm\scriptscriptstyle S}|$ represents the probability that the speaker chooses the name $A_\ell$ among those in his/her notebook. \vskip 0.2cm \begin{itemize} \item{Type-III transitions increasing $n^{(i)}_{A_\ell}$} \end{itemize} The speaker belongs to the $i$th community and the listener belongs to any community or the other way round. The speaker has a notebook $D_{\rm\scriptscriptstyle S}\in\Sigma_{A_\ell}({\cal D})$ with $|D_{\rm \scriptscriptstyle S}|>1$ and the listener has a notebook $D_{\rm \scriptscriptstyle L}\in\Sigma_{A_\ell}({\cal D})$ with $|D_{\rm\scriptscriptstyle L}|>1$.
The contribution of this type of interaction to MFEs is given by \begin{equation} f^{(i,+,3)}_{A_\ell}(\bar n) = \sum_{\substack{D_{\rm\scriptscriptstyle S},D_{\rm\scriptscriptstyle L}\in \Sigma_{A_\ell}({\cal D}) \\[1.0ex] |D_{\rm S}|,|D_{\rm L}|>1}} \frac{1}{|D_{\rm\scriptscriptstyle S}|} \,\left[2\pi^{(ii)}n^{(i)}_{D_{\rm\scriptscriptstyle S}}n^{(i)}_{D_{\rm\scriptscriptstyle L}} + \sum_{k\ne i}^{1\ldots Q} \pi^{(ik)}\left( n^{(i)}_{D_{\rm\scriptscriptstyle S}}\,n^{(k)}_{D_{\rm\scriptscriptstyle L}} + n^{(k)}_{D_{\rm\scriptscriptstyle S}}\,n^{(i)}_{D_{\rm\scriptscriptstyle L}}\right)\right]\,, \end{equation} where the factor of $1/|D_{\rm\scriptscriptstyle S}|$ represents the probability that the speaker chooses the name $A_\ell$ among those in his/her notebook and the factor of $2$ takes into account that $n^{(i)}_{A_\ell}$ increases by 2 fractional units following the transition if both speaker and listener belong to the $i$th community. \vskip 0.3cm We finally discuss transitions lowering $n^{(i)}_{A_\ell}$. For this to happen it is necessary that an agent belonging to the $i$th community, who has initially notebook $D=\{A_\ell\}$, switches to a multi-name notebook. This is possible only provided the agent adds a second name to his/her notebook and this can occur only if the agent is a listener. The speaker's notebook might either contain $A_\ell$ or not and we must take care of properly counting the probability of not choosing $A_\ell$ in the former case, otherwise we fall back into Case I/Type-II. Therefore, we find it better to work out separately the two types of transitions and correspondingly we let $N_{-,A_\ell}=2$. \vskip 0.2cm \begin{itemize} \item{Type-I transitions lowering $n^{(i)}_{A_\ell}$} \end{itemize} The listener belongs to the $i$th community and the speaker belongs to any community. 
The listener has notebook $D=\{A_\ell\}$ and the speaker has a notebook $D_{\rm\scriptscriptstyle S}\in \Sigma_{A_\ell}({\cal D})$ with $|D_{\rm\scriptscriptstyle S}|>1$. The contribution to MFEs is given by \begin{equation} f^{(i,-,1)}_{A_\ell}(\bar n) = \sum_{\substack{D_{\rm\scriptscriptstyle S}\in\Sigma_{A_\ell}({\cal D}) \\ |D_{\rm\scriptscriptstyle S}|>1}} \frac{|D_{\rm\scriptscriptstyle S}|-1}{|D_{\rm\scriptscriptstyle S}|} \left[\pi^{(ii)}n^{(i)}_{D_{\rm\scriptscriptstyle S}} + \sum_{k\ne i}^{1\ldots Q}\pi^{(ik)} n^{(k)}_{D_{\rm\scriptscriptstyle S}}\right]n^{(i)}_{A_\ell}\,, \end{equation} where the factor of $\frac{|D_{\rm\scriptscriptstyle S}|-1}{|D_{\rm\scriptscriptstyle S}|}$ represents the probability that the speaker chooses a name different from $A_\ell$ among those in his/her notebook. \vskip 0.2cm \begin{itemize} \item{Type-II transitions lowering $n^{(i)}_{A_\ell}$} \end{itemize} The listener belongs to the $i$th community and the speaker belongs to any community. The listener has notebook $D=\{A_\ell\}$ and the speaker has a notebook $D_{\rm\scriptscriptstyle S}\in [\Sigma_{A_\ell}({\cal D})]^{\rm c}$, where $[D]^{\rm c} = S({\cal D})\setminus D$ denotes generically the complement of $D$ in $S({\cal D})$. The contribution to MFEs is given by \begin{equation} f^{(i,-,2)}_{A_\ell}(\bar n) = \sum_{D_{\rm\scriptscriptstyle S}\in[\Sigma_{A_\ell}({\cal D})]^{\rm c}} n^{(i)}_{A_\ell} \left[\pi^{(ii)}n^{(i)}_{D_{\rm\scriptscriptstyle S}} + \sum_{k\ne i}^{1\ldots Q}\pi^{(ik)}n^{(k)}_{D_{\rm\scriptscriptstyle S}}\right]\,, \end{equation} where, in contrast to all previous cases, no probability coefficient is needed in front of the product of densities, since here the speaker can equivalently choose any name among those in his/her notebook. When the above sums are expanded, lengthy and tedious algebraic expressions are generated, even for the case $Q=3$.
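The combinatorial origin of this growth is easy to quantify: with $Q$ names there are $2^Q-1$ non-empty notebooks, hence $Q(2^Q-1)$ densities $n^{(i)}_D$ and as many MFEs before normalization is used (these are the array sizes appearing in the Maple code of App.~B). A minimal Python count:

```python
# Sketch: size of the MFE system as a function of Q.
# With Q names there are 2^Q - 1 non-empty notebooks, and with Q communities
# there is one density n^{(i)}_D per (community, notebook) pair.

def n_notebooks(Q):
    """Number of non-empty notebooks over Q names, |S(D)| = 2^Q - 1."""
    return 2**Q - 1

def n_equations(Q):
    """Number of densities (and MFEs) before using normalization."""
    return Q * n_notebooks(Q)

for Q in range(2, 7):
    print(Q, n_notebooks(Q), n_equations(Q))
```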
To enable interested readers to write down complete MFEs, we provide in App.~B an essential and correctly working Maple$^\text{TM}$ code for the PPM (it generates MFEs for $Q=6$, as set at line 3). The code can be easily generalized to the SBM, while its output can be easily adapted for numerical analysis. We apply in sect.~6 the formalism here presented. \section{Maple$^\textsuperscript{TM}$ code for generating MFEs in the PPM} {\scriptsize \begin{minipage}{0.5\textwidth} \begin{Verbatim}[frame=single,rulecolor=\color{blue}] with(CodeGeneration): # Parameters Q := 6: ########################################################### sublist := proc(list, ini) [seq(list[i],i=ini..nops(list))]: end proc: addElt := proc(elt, list) local templist, i: templist:=[]: for i from 1 to nops(list) do templist:=[op(templist),[elt, op(list[i])]]: end do: templist: end proc: listSubsets := proc(L, i) local n, j, fl, temp: n := nops(L): if i=1 then fl:=[]: for j from 1 to n do fl:=[op(fl),[L[j]]]: end do: else fl:=[]: for j from 1 to n-i+1 do temp:=listSubsets(sublist(L,j+1),i-1): temp:=addElt(L[j],temp): fl:=[op(fl),op(temp)]: end do: end if: fl: end proc: listAllSubsets := proc(Q::posint) description "generates $S(\\cal D)$ with Q names": local L,i, temp, fl: L := [A||(1..Q)]: fl := []: for i from 1 to Q do fl:=[op(fl),op(listSubsets(L,i))]: end do: fl: end proc: SigmaAk := proc(L::list,item) description "applies the operator $\\Sigma_{item}$ of " "eq. 
(3.12) to L": local dd,outL: outL := NULL: for dd in L do if(member(item,dd)) then outL := outL,dd: end if: end do: outL: end proc: -- 1 -- \end{Verbatim} \end{minipage} \begin{minipage}{0.5\textwidth} \begin{Verbatim}[frame=single,rulecolor=\color{blue}] TauAk := proc(L::list,item) local dd,outL: outL := NULL: for dd in L do if(not member(item,dd)) then outL := outL,dd: end if: end do: outL: end proc: removeListItem := proc(L::list,item) remove(has,L,item): end proc: thetak := proc(L::list,k::posint) description "applies the operator $\\theta_{k}$ of " "eq. (3.10) to L": local seq1,seq2,i: if (k>nops(L)) then return L: end if: seq1 := seq(op(i,L),i=1..k-1): seq2 := seq(op(i,L),i=k+1..nops(L)): return [seq1,seq2]: end proc: rhok := proc(L::list,k::posint) description "applies the operator $\\rho_{k}$ of " "eq. (3.11) to L": local seq1,seq2,i: if (k>nops(L)) then return NULL: end if: return op(k,L): end proc: # =========================================================== # Case I # =========================================================== CaseIrise := proc(Dset::list,i::posint) global Q,NBList,pin,pout: local out,j,k,Wk,Uk,NBSubList,Dtilde: if(nops(Dset)=1) then print("[CaseIrise]: WARNING -> |Dset|=1"): return: end if: if(i>Q) then print("[CaseIrise]: WARNING -> index i out of range"): return: end if: out := 0: for k from 1 to nops(Dset) do Wk := rhok(Dset,k): Uk := thetak(Dset,k): NBSubList := SigmaAk(NBList,Wk): for Dtilde in NBSubList do out := out + (1/nops(Dtilde))*n[i,Uk]*(pin*n[i,Dtilde] + pout*sum(n[j,Dtilde],j=1..i-1) + pout*sum(n[j,Dtilde],j=i+1..Q)): -- 2 -- \end{Verbatim} \end{minipage} \begin{minipage}{0.5\textwidth} \begin{Verbatim}[frame=single,rulecolor=\color{blue}] end do: end do: return out: end proc: CaseIlower1 := proc(Dset::list,i::posint) global Q,NBList,pin,pout: local out,j,k,Wk,NBSubList,Dtilde: if(nops(Dset)=1) then print("[CaseIlower1]: WARNING -> |Dset|=1"): return: end if: if(i>Q) then print("[CaseIlower1]: WARNING -> index i out of 
range"): return: end if: out := 0: for k from 1 to nops(Dset) do Wk := rhok(Dset,k): NBSubList := SigmaAk(NBList,Wk): for Dtilde in NBSubList do out := out + (1/nops(Dtilde))*n[i,Dset]*(pin*n[i,Dtilde] + pout*sum(n[j,Dtilde],j=1..i-1) + pout*sum(n[j,Dtilde],j=i+1..Q)): end do: end do: return out: end proc: CaseIlower2 := proc(Dset::list,i::posint) global Q,NBList,pin,pout: local out,j,k,Wk,NBSubList,Dtilde: if(nops(Dset)=1) then print("[CaseIlower2]: WARNING -> |Dset|=1"): return: end if: if(i>Q) then print("[CaseIlower2]: WARNING -> index i out of range"): return: end if: out := 0: for k from 1 to nops(Dset) do Wk := rhok(Dset,k): NBSubList := SigmaAk(NBList,Wk): for Dtilde in NBSubList do out := out + (1/nops(Dset))*n[i,Dset]*(pin*n[i,Dtilde] + pout*sum(n[j,Dtilde],j=1..i-1) + pout*sum(n[j,Dtilde],j=i+1..Q)): end do: end do: return out: end proc: # =========================================================== # Case II # =========================================================== CaseIIrise1 := proc(Dset::list,i::posint) global Q,NBList,pin,pout: local out,j,k,Ak,NBSubList,DL: if(nops(Dset)>1) then print("[CaseIIrise1]: WARNING -> |Dset|>1"): return: end if: -- 3 -- \end{Verbatim} \end{minipage} \begin{minipage}{0.5\textwidth} \begin{Verbatim}[frame=single,rulecolor=\color{blue}] if(i>Q) then print("[CaseIIrise1]: WARNING -> index i out of range"): return: end if: out := 0: Ak := op(1,Dset): NBSubList := SigmaAk(NBList,Ak): for DL in NBSubList do if (nops(DL)>1) then out := out + n[i,DL]*(pin*n[i,Dset] + pout*sum(n[j,Dset],j=1..i-1) + pout*sum(n[j,Dset],j=i+1..Q)): end if: end do: return out: end proc: CaseIIrise2 := proc(Dset::list,i::posint) global Q,NBList,pin,pout: local out,j,k,Ak,NBSubList,DS: if(nops(Dset)>1) then print("[CaseIIrise2]: WARNING -> |Dset|>1"): return: end if: if(i>Q) then print("[CaseIIrise2]: WARNING -> index i out of range"): return: end if: out := 0: Ak := op(1,Dset): NBSubList := SigmaAk(NBList,Ak): for DS in NBSubList do if (nops(DS)>1) 
then out := out + (1/nops(DS))*n[i,DS]*(pin*n[i,Dset] + pout*sum(n[j,Dset],j=1..i-1) + pout*sum(n[j,Dset],j=i+1..Q)): end if: end do: return out: end proc: CaseIIrise3 := proc(Dset::list,i::posint) global Q,NBList,pin,pout: local out,j,k,Ak,NBSubList,DS,DL: if(nops(Dset)>1) then print("[CaseIIrise3]: WARNING -> |Dset|>1"): return: end if: if(i>Q) then print("[CaseIIrise3]: WARNING -> index i out of range"): return: end if: out := 0: Ak := op(1,Dset): NBSubList := SigmaAk(NBList,Ak): for DS in NBSubList do for DL in NBSubList do if (nops(DS)>1 and nops(DL)>1) then out := out + (1/nops(DS))*(2*pin*n[i,DS]*n[i,DL] + pout*sum(n[i,DS]*n[j,DL] + n[j,DS]*n[i,DL],j=1..i-1) + pout*sum(n[i,DS]*n[j,DL] + n[j,DS]*n[i,DL],j=i+1..Q)): end if: end do: end do: -- 4 -- \end{Verbatim} \end{minipage} \begin{minipage}{0.5\textwidth} \begin{Verbatim}[frame=single,rulecolor=\color{blue}] return out: end proc: CaseIIlower1 := proc(Dset::list,i::posint) global Q,NBList,pin,pout: local out,j,k,Ak,NBSubList,DS: if(nops(Dset)>1) then print("[CaseIIlower1]: WARNING -> |Dset|>1"): return: end if: if(i>Q) then print("[CaseIIlower1]: WARNING -> index i out of range"): return: end if: out := 0: Ak := op(1,Dset): NBSubList := SigmaAk(NBList,Ak): for DS in NBSubList do if (nops(DS)>1) then out := out + ((nops(DS)-1)/nops(DS))*n[i,Dset]*(pin*n[i,DS] + pout*sum(n[j,DS],j=1..i-1) + pout*sum(n[j,DS],j=i+1..Q)): end if: end do: return out: end proc: CaseIIlower2 := proc(Dset::list,i::posint) global Q,NBList,pin,pout: local out,j,k,Ak,NBSubList,DS: if(nops(Dset)>1) then print("[CaseIIlower2]: WARNING -> |Dset|>1"): return: end if: if(i>Q) then print("[CaseIIlower2]: WARNING -> index i out of range"): return: end if: out := 0: Ak := op(1,Dset): NBSubList := [TauAk(NBList,Ak)]: for DS in NBSubList do out := out + n[i,Dset]*(pin*n[i,DS] + pout*sum(n[j,DS],j=1..i-1) + pout*sum(n[j,DS],j=i+1..Q)): end do: return out: end proc: #------------------------------------------------------- NBList := 
listAllSubsets(Q): f_intra := Array([seq(0,i=1..Q*(2^Q-1))]): f_inter := Array([seq(0,i=1..Q*(2^Q-1))]): SDctr := 1: for qq from 1 to Q do Dctr := 1: for Dset in op(1..-2,NBList) do if(nops(Dset)>1) then f1 := expand(CaseIrise(Dset,qq)): f1_intra := coeff(f1,pin): -- 5 -- \end{Verbatim} \end{minipage} \begin{minipage}{0.5\textwidth} \begin{Verbatim}[frame=single,rulecolor=\color{blue}] f1_inter := coeff(f1,pout): f2 := expand(CaseIlower1(Dset,qq)): f2_intra := coeff(f2,pin): f2_inter := coeff(f2,pout): f3 := expand(CaseIlower2(Dset,qq)): f3_intra := coeff(f3,pin): f3_inter := coeff(f3,pout): T_intra := f1_intra - f2_intra - f3_intra: T_inter := f1_inter - f2_inter - f3_inter: else g1 := expand(CaseIIrise1(Dset,qq)): g1_intra := coeff(g1,pin): g1_inter := coeff(g1,pout): g2 := expand(CaseIIrise2(Dset,qq)): g2_intra := coeff(g2,pin): g2_inter := coeff(g2,pout): g3 := expand(CaseIIrise3(Dset,qq)): g3_intra := coeff(g3,pin): g3_inter := coeff(g3,pout): g4 := expand(CaseIIlower1(Dset,qq)): g4_intra := coeff(g4,pin): g4_inter := coeff(g4,pout): g5 := expand(CaseIIlower2(Dset,qq)): g5_intra := coeff(g5,pin): g5_inter := coeff(g5,pout): T_intra := g1_intra + g2_intra + g3_intra - g4_intra - g5_intra: T_inter := g1_inter + g2_inter + g3_inter - g4_inter - g5_inter: end if: # ======================================================= # remove Rosetta notebooks # ======================================================= for ii from 1 to Q do for jj from 1 to (2**Q-2) do T_intra := eval(T_intra, n[ii,op(jj,NBList)]=xx[jj+(ii-1)*(2**Q-1)]): T_inter := eval(T_inter, n[ii,op(jj,NBList)]=xx[jj+(ii-1)*(2**Q-1)]): end do: T_intra := expand(eval(T_intra,n[ii,op(-1,NBList)]= 1-add(xx[kk+(ii-1)*(2**Q-1)],kk=1..(2**Q-2)))): T_inter := expand(eval(T_inter,n[ii,op(-1,NBList)]= 1-add(xx[kk+(ii-1)*(2**Q-1)],kk=1..(2**Q-2)))): end do: printf("# clique = qq,convert(Dset,string),SDctr): printf("# nops(intra) = nops(T_intra), nops(T_inter)): f_intra[SDctr] := T_intra: f_inter[SDctr] := T_inter: 
SDctr := SDctr + 1: Dctr := Dctr+1: end do: -- 6 -- \end{Verbatim} \end{minipage} \begin{minipage}{0.5\textwidth} \begin{Verbatim}[frame=single,rulecolor=\color{blue}] f_intra[SDctr] := 0: f_inter[SDctr] := 0: SDctr := SDctr + 1: end do: save f_intra,`f_intra.m`: save f_inter,`f_inter.m`: -- 7 -- \end{Verbatim} \end{minipage} } \end{appendices} \section{Introduction} The emergence of spoken languages and their continuous evolution in human societies are complex phenomena in which interaction and self-organization play an essential r\^ole. Lying at the heart of opinion dynamics~\cite{fortcastlor}, these features have attracted great interest from researchers in statistical physics over the past twenty years. After some attempts to ascribe the origin of language conventions to evolutionary mechanisms~\cite{Niyogi,Nowak:1,Nowak:2,Nowak:3,Smith,Komarova}, in ref.~\cite{Baronchelli:1} a multi-agent model was proposed where the rise of a globally shared language occurs with no underlying guiding principle and no external influence. The model, known as the Naming Game ({\bf NG}), was inspired by the seminal work of refs.~\cite{Luc:1,Luc:2}. The NG is a language-game in the sense of ref.~\cite{Wittgenstein}, with agents iteratively communicating to each other conventional names for a target object. Each agent is endowed with a notebook, in which he/she writes names. In the original version of the model all notebooks are initially empty. Elementary interactions involve two agents, playing respectively the r\^ole of \emph{speaker} and \emph{listener}. In each iteration the speaker is chosen randomly among the agents, while the listener is chosen randomly among the speaker's neighbours. The speaker-listener interaction is schematically described by the flowchart reported in Fig.~\ref{fig:gamerule}. 
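For reference, the elementary speaker-listener move of Fig.~\ref{fig:gamerule} follows the standard NG rules (success: both players collapse their notebooks to the uttered name; failure: the listener records it). The Python sketch below is a minimal rendering of this move; the function and variable names are ours:

```python
# Sketch of one elementary Naming Game interaction (standard rules).
import random

def ng_step(notebooks, speaker, listener, invent):
    """Play one speaker-listener move; notebooks maps agent -> set of names."""
    if not notebooks[speaker]:
        notebooks[speaker].add(invent())        # empty notebook: invent a name
    name = random.choice(sorted(notebooks[speaker]))
    if name in notebooks[listener]:
        # success: both agents delete all competing names
        notebooks[speaker] = {name}
        notebooks[listener] = {name}
        return True
    notebooks[listener].add(name)               # failure: the listener learns it
    return False
```

On a community-based network the speaker is drawn uniformly among the agents and the listener uniformly among the speaker's neighbours, as described above.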
Following ref.~\cite{Baronchelli:1}, the dynamics of the NG was investigated on networks with several topologies, including the fully connected graph~\cite{Baronchelli:1,Baronchelli:5}, low-dimensional regular lattices~\cite{Baronchelli:3},~{Erd\H{o}s}-R\'enyi ({\bf ER}) graphs~\cite{Baronchelli:2}, small-world networks~\cite{Baronchelli:6}, Barab\'asi-Albert ({\bf BA}) networks~\cite{Baronchelli:2,Baronchelli:7}, etc. In all cases, the system was found to evolve dynamically with the number of different competing names initially inflating and then deflating due to self-organization, until the whole population agrees spontaneously on an ultimate name for the target object (consensus). Theoretical predictions derived from the NG have recently been shown to correctly reproduce experimental results in Web-based live games with controlled design~\cite{Centola}. \begin{figure}[t!] \centering \includegraphics[width=0.95\textwidth]{./flowchart.pdf} \caption{\footnotesize Speaker-listener interaction.\label{fig:gamerule}} \vskip -0.2cm \end{figure} In ref.~\cite{Baronchelli:2} it was first pointed out that convergence to consensus follows a special pattern on community-based networks. Here, after a \emph{``creative"} transient during which the number of different competing names inflates, the system relaxes to an equilibrium where different communities reach local consensus on different names. In a finite time dynamical fluctuations break the equilibrium and make the system fall into global consensus. The presence of metastable equilibria was soon realized to be of practical worth. In ref.~\cite{Lu:1} it was shown that local consensus might be used to identify communities in empirical networks. In a sense, this ratified the entrance of the NG into a large family of community detection algorithms~\cite{Schaeffer,Porter,Fortunato:1,Coscia,Newman:book,Xie,Fortunato:2}. 
More recently, the quality of the community partition produced by the NG was investigated in terms of quality indicators~\cite{Gubanov} (such as the partition modularity~\cite{Newman}) on the benchmark networks of ref.~\cite{Lancichinetti}. It is important to recall that community detection is a major problem in network science, since modular networks arise in a variety of applicative contexts (see for instance refs.~\cite{Girvan:1,Porter,Flake}). It is also worth noting that the presence of metastable equilibria is not an exclusive feature of the NG. A similar phenomenon is observed in other models of opinion dynamics, such as the majority rule model~\cite{Lambiotte} and an extension of the Axelrod model~\cite{Candia}. While local consensus exists on finite networks only in the form of metastable equilibrium, it becomes fully stable in the thermodynamic limit provided communities are sufficiently isolated. As a consequence, the phase diagram of the model develops a very rich structure. Communities agree or disagree on the ultimate name of their choice depending on how strongly they are connected to each other. Networks on which equal combinations of names survive at equilibrium in the thermodynamic limit correspond to the same multi-language phase. Despite a growing body of literature, a systematic study of the phase structure of the NG on community-based networks is still lacking. The aim of the present paper is to contribute to filling this gap. Studying the phase structure is important in order to identify theoretical limits within which the model can be used effectively as a community detection algorithm. However, phases depend upon all topological properties of communities, including their number, overlaps, relative size, internal topology and the topology of their interconnections.
Since an overall parameterization of all such features is not given, we are forced to adopt a case-by-case strategy, where we investigate the transition to multi-language phases on groups of networks with distinct topological features. We can summarize the results of our study by stating that \emph{i}) steady multi-language states arise in the thermodynamic limit when links connecting agents in different communities are about 10--20\% or less of those connecting agents within their respective communities and \emph{ii}) multi-language phases look rather robust against changes in the network topology. Before we start, we mention that multi-language phases are also observed in the NG under variations of its microscopic dynamics. Examples are the introduction of noise in the loss of memory when two agents agree on a given name~\cite{Baronchelli:4} or the introduction of single/opposing committed groups of agents who never change their notebook in time~\cite{Xie:1,Xie:2}. The difference is that communities produce stable multi-language states as a purely topological effect. This is a distinguishing feature of the NG: in other models of opinion dynamics communities are unable to hinder the convergence to global consensus, independently of their degree of isolation, even in the thermodynamic limit. An example is represented by the voter model \cite{Palombi}, where global consensus can be avoided only by introducing zealot agents~\cite{Mobilia:1,Mobilia:2} with competing opinions. The plan of the paper is as follows. In sect.~2 we set up the notation and introduce the relative inter-community connectedness, a parameter that we use to compare results on different two-community symmetric networks. In sect.~3 we investigate the phase diagram of the NG in the stochastic block model~\cite{Holland}. In sect.~4, we work out the exact solution to its mean field equations ({\bf MFEs}) in the special case of the planted partition model~\cite{Condon,McSherry} with two communities.
In sect.~5 we derive MFEs for the NG on a network made of two overlapping cliques, then we work out an almost fully analytic solution to them. In sect.~6 we study how the phase transition depends on the number of communities in the planted partition model and in sect.~7 we study how it depends on their relative size. In sect.~8 we extend our study to heterogeneous networks by means of Monte Carlo simulations. Finally, we draw our conclusions in sect.~9. \section{Relative connectedness in two-community symmetric networks} We consider a graph ${\cal G} = ({\cal V},{\cal E})$ and a partition $\smash{{\cal V}_{\cal C}=\{{\cal C}^{(k)}\}_{k=1}^Q}$ of ${\cal V}$, {\it i.e.\ } we assume $\smash{{\cal V} = \cup_{k=1}^Q {\cal C}^{(k)}}$ and $\smash{{\cal C}^{(i)}\cap{\cal C}^{(k)}=\emptyset}$ for $i\ne k$. We let $\smash{N^{(k)} = |{\cal C}^{(k)}|>0}$ and $N=|{\cal V}|$, hence we have $\smash{N=\sum_{k=1}^Q N^{(k)}}$. Then we observe that ${\cal V}_{\cal C}$ induces a partition ${\cal E}_{\cal C} = \smash{\{{\cal E}^{(ik)}\}_{i,k=1}^Q}$ of ${\cal E}$, {\it i.e.\ } ${\cal E} = \cup_{i,k=1}^Q {\cal E}^{(ik)}$ with \begin{equation} {\cal E}^{(ik)} = \{\,(x,y):\ x\in {\cal C}^{(i)} \text{ and } y\in{\cal C}^{(k)}\}\,. \end{equation} We take $(x,y)$ as an ordered pair. This by no means implies that the graph is directed or undirected; it only means that if $(x,y)\in{\cal E}^{(ik)}$, then $(x,y)\notin{\cal E}^{(ki)}$, for $i\ne k$. In particular, an undirected graph is obtained by requiring that $(x,y)\in{\cal E}$ iff $(y,x)\in{\cal E}$ and by considering $(x,y) = (y,x)$. In the sequel we always assume undirected graphs with $(x,x)\notin{\cal E}$ for all $x$. We say that ${\cal V}_{\cal C}$ displays an explicit community structure ({\bf ECS}) provided \begin{equation} |{\cal E}^{(ik)}|\ll\min(|{\cal E}^{(ii)}|,|{\cal E}^{(kk)}|)\,,\qquad \text{ for all } i\ne k\,. 
\label{eq:ecs} \end{equation} If eqs.~(\ref{eq:ecs}) are fulfilled, we interpret the sets ${\cal C}^{(k)}$ as communities of agents. Although restrictive, the above ECS conditions leave several topological features of ${\cal G}$ totally unspecified. For instance, in static network models (which are, however, unfit to describe realistic networks~\cite{BarabasiNS}) the topology is defined by assigning the $Q+1$ deterministic parameters $Q$, $\{N^{(k)}\}_{k=1}^Q$ and the edge probability laws \begin{equation} \mathfrak{p}^{(ik)}(x,y) = \text{prob}\left\{(x,y)\in {\cal E}^{(ik)}\,\biggl|\, x\in{\cal C}^{(i)} \text{ and } y\in{\cal C}^{(k)}\right\}\,,\qquad i,k=1,\ldots,Q\,. \end{equation} In principle the functions $\smash{\mathfrak{p}^{(ik)}(x,y)}$ may be arbitrarily complex. They may depend explicitly on the community indexes $(ik)$, {\it i.e.\ } for each choice of these they may depend upon different discrete or continuous parameters. Indeed, different static network models correspond to different settings of the above degrees of freedom. As such, they make it possible to explore (limited) subsets of the wider \emph{ensemble} defined by ECS conditions. A relevant question is how to compare results for an agent-based model running on different network models, when these are defined in terms of different parameters. Unfortunately, there exists no universal answer to such a question. Yet, for two-community networks which are symmetric under exchange of community indexes, we can use a simple indicator that allows for comparisons. The indicator measures the relative extent to which communities are connected to each other rather than to themselves. To define it, we start from the notion of node degree, which counts the number of neighbours of a given node, and extend it to entire communities. 
We first introduce the inner average degree of the $i$th community \begin{equation} \langle\kappa^{(i)}_\text{in}\rangle = \frac{1}{N^{(i)}}\left\langle\sum_{x,y\in{\cal C}^{(i)}}\mathbf{1}_{{\cal E}^{(ii)}}(x,y)\right\rangle = \frac{2\langle|{\cal E}^{(ii)}|\rangle}{N^{(i)}}\,, \qquad i=1,2\,, \label{eq:gammain} \end{equation} where $\mathbf{1}_A(x)$ denotes the indicator function of $A$ ({\it i.e.\ } $\mathbf{1}_A(x) = 1$ if $x\in A$ and $\mathbf{1}_A(x) = 0$ otherwise) and the symbol $\langle\, \cdot\, \rangle$ represents an average over the corresponding network model. By definition, we have $\langle \kappa^{(1)}_\text{in}\rangle = \langle \kappa^{(2)}_\text{in}\rangle$ on two-community symmetric networks. Then, we introduce the outer average degree of the $i$th to $k$th community \begin{equation} \langle \kappa^{(ik)}_\text{out}\rangle = \frac{1}{N^{(i)}}\left\langle\sum_{x\in{\cal C}^{(i)}}\sum_{y\in{\cal C}^{(k)}}\mathbf{1}_{{\cal E}^{(ik)}}(x,y)\right\rangle = \frac{\langle|{\cal E}^{(ik)}|\rangle}{N^{(i)}}\,,\qquad i\ne k \,, \label{eq:gammaout} \end{equation} and again we observe that $\langle \kappa^{(12)}_\text{out}\rangle = \langle \kappa^{(21)}_\text{out}\rangle$ on two-community symmetric networks. Finally, we define the relative inter-community connectedness as the ratio \begin{equation} \gamma_\text{out/in} = \frac{\langle\kappa^{(12)}_{\text{out}}\rangle}{\langle\kappa^{(1)}_{\text{in}}\rangle} = \frac{1}{2}\frac{\langle|{\cal E}^{(12)}|\rangle}{\langle|{\cal E}^{(11)}|\rangle}\,. \label{eq:intercomconnect} \end{equation} ECS conditions are fulfilled provided $\gamma_{\text{out/in}}\ll 1$. We notice that $\gamma_\text{out/in}$ has rather general validity, in that neither of eqs.~(\ref{eq:gammain})--(\ref{eq:gammaout}) depends on the specific topology of ${\cal E}^{(11)}$ and ${\cal E}^{(12)}$. 
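To make the definition concrete, the following sketch (our own illustration, not part of the paper; the graph generator and all parameter values are assumptions) estimates $\gamma_\text{out/in}$ on a single sampled two-community network and compares it with the ratio of the edge probabilities used to generate it.

```python
import random

def sample_edge_counts(n_half, p_in, p_out, seed=0):
    """Sample an undirected two-community graph edge by edge and
    count intra- and inter-community links."""
    rng = random.Random(seed)
    e11 = e22 = e12 = 0
    n = 2 * n_half
    for x in range(n):
        for y in range(x + 1, n):
            same = (x < n_half) == (y < n_half)
            if rng.random() < (p_in if same else p_out):
                if not same:
                    e12 += 1
                elif x < n_half:
                    e11 += 1
                else:
                    e22 += 1
    return e11, e22, e12

def gamma_out_in(e11, e12):
    # relative inter-community connectedness: |E12| / (2 |E11|)
    return e12 / (2.0 * e11)

e11, e22, e12 = sample_edge_counts(n_half=400, p_in=0.2, p_out=0.02)
print(gamma_out_in(e11, e12))  # close to p_out / p_in = 0.1
```

On a single sample of this size the estimate deviates from $p_{\rm out}/p_{\rm in}$ by a few percent; averaging over the ensemble reproduces eq.~(\ref{eq:intercomconnect}).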
Unfortunately, there is no unambiguous way to generalize $\gamma_{\text{out/in}}$ to networks with two asymmetric and/or more than two communities. Such a generalization goes beyond our aims here. \section{$Q$-ary Naming Game in the Stochastic Block Model} As a first step we investigate the dynamics of the NG in the Stochastic Block Model~({\bf SBM})~\cite{Holland}. In the SBM we consider $Q$ communities with $N^{(i)}=N/Q$ for $i=1,\ldots, Q$. We introduce a set of $Q(Q+1)/2$ parameters $\{p^{(ik)}\}_{i\le k}^{1\ldots Q}$ and we assume $\mathfrak{p}^{(ik)}(x,y) = p^{(ik)}$ for all $i,k$. For $Q=2$ the SBM yields asymmetric networks whenever $p^{(11)}\ne p^{(22)}$. Hence, $\gamma_{\text{out/in}}$ is in general not well defined. As mentioned in sect.~1, in the original version of the NG \cite{Baronchelli:1} agents have empty notebooks at the beginning of the game, hence they invent names. After a while the number of different competing names observed across the network peaks at a value of order $N/2$. Then, it decreases. If we identify the state of an agent with his/her notebook, we see that the number of allowed agent states (notebooks containing all possible combinations of the competing names) inflates exponentially just in the initial stage of the dynamics. This makes studying the system rather impractical beyond numerical simulations. In order to avoid such a complication, we resort to a trick which was first introduced in ref.~\cite{Baronchelli:9}: instead of starting the game with empty notebooks, we assign precisely one name to each agent. As a result the \emph{``creative''} transient disappears, while the left side of Fig.~\ref{fig:gamerule} reduces to a single square, with the speaker choosing randomly a name from his/her notebook and uttering it. Depending on how many different names we distribute across the network, the trick allows us to set the overall dimension of the state space of the system. 
In particular, in ref.~\cite{Baronchelli:4} each agent was randomly assigned one of two names, respectively represented by letters $A$ and $B$. Since we are interested in community-based networks, we find it preferable to prepare the initial state of the system with agents in a given community being assigned a common name and with different communities being assigned different names. We let $A_k$ represent the name initially assigned to ${\cal C}^{(k)}$. Then we introduce a \emph{Rosetta} notebook\footnote{Evidently, this name evokes the famous stone rediscovered near the town of Rashid (Rosetta, Egypt) by Napoleon's army in 1799. The stone contained versions of the same text in Greek, Demotic and Hieroglyphic. As such, it served as a language translation tool.} ${\cal D} = \{A_1,\ldots,A_Q\}$ and we let $S({\cal D}) = \{D:\, \emptyset\ne D\subseteq{\cal D}\}$. At time $t\ge 0$ an agent $x$ has a certain notebook $D$, hence $D$ represents the state of $x$ at time $t$. We count the number of agent states $|S({\cal D})|$ in full generality by counting all notebooks $D$ with $1\le |D|\le Q$ names. There are precisely \vskip 0.1cm \begin{itemize}[itemsep=0.4em] \item{$Q$ notebooks with one name,} \item{$\frac{1}{2!}Q(Q-1)$ notebooks with two names,} \item{$\frac{1}{3!}Q(Q-1)(Q-2)$ notebooks with three names,} \item[$\vdots$]{} \item{$\frac{1}{Q!}Q(Q-1)(Q-2)\ldots 1 = 1$ notebooks with $Q$ names,} \end{itemize} \vskip 0.1cm with the factorials in the denominators ensuring that notebooks differing only by a permutation of names are counted just once. By adding all the above numbers, we get \begin{equation} |S({\cal D})| = \sum_{k=1}^Q \frac{1}{k!}Q(Q-1)\cdots (Q-k+1) = \sum_{k=1}^Q \frac{Q!}{k!(Q-k)!} = \sum_{k=1}^Q {Q \choose k} = 2^Q - 1\,. 
\label{eq:numstates} \end{equation} We conclude that the number of agent states still increases exponentially with the number of communities; nevertheless, eq.~(\ref{eq:numstates}) represents the minimum one must cope with to study multi-language phases with no substantial restriction. \subsection{Mean field equations} MFEs correctly describe the dynamics of the system in the thermodynamic limit (where stochastic fluctuations become increasingly negligible). In the SBM we define this limit by letting $N\to\infty$ with $Q=\text{const.}$, $N^{(i)}/N = \text{const.}$ and $p^{(ik)}=\text{const.}$ for all $i,k$. To derive MFEs we need to take into account and correctly weigh all possible agent-agent interactions yielding an increase/decrease of the fraction of agents in a given state. For each notebook $D$ we introduce local densities \begin{equation} n^{(i)}_D = \frac{\text{no. of agents with notebook $D$ belonging to ${\cal C}^{(i)}$}}{N^{(i)}}\,. \label{eq:comdens} \end{equation} At each time the vectors $\{n^{(i)}_D\}_{D\in S({\cal D})}$ fulfill simplex conditions separately for each community, that is to say state densities are constrained by equations \begin{equation} \sum_{D\in S({\cal D})} n^{(i)}_D = 1\,,\qquad i = 1,\ldots,Q\,. \end{equation} Hence, there is one redundant state per community, whose density we represent in terms of the remaining ones via the corresponding simplex equation. We are free to choose the \emph{Rosetta} notebook ${\cal D}$ as redundant state for all communities. If we let $\bar S({\cal D}) = S({\cal D})\setminus \{{\cal D}\}$, then we have \begin{equation} n^{(i)}_{\cal D} = 1 - \sum_{D \in \bar S({\cal D})} n^{(i)}_D\,,\qquad i = 1,\ldots,Q\,. \end{equation} Following this choice, we introduce the essential state vector \begin{equation} \bar n = \{n^{(i)}_D:\ D\in \bar S({\cal D}) \ \ \text{and} \ \ i=1,\ldots, Q\}\,. \end{equation} The domain of $\bar n$ is the Cartesian product of $Q$ simplices. 
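The count of eq.~(\ref{eq:numstates}) is easy to cross-check by brute-force enumeration. The short sketch below (an illustration of ours; the function name is hypothetical) lists all admissible notebooks, i.e. the nonempty subsets of the \emph{Rosetta} notebook.

```python
from itertools import combinations

def agent_states(Q):
    """All admissible notebooks: nonempty subsets of the Rosetta
    notebook {A_1, ..., A_Q}, with name order irrelevant."""
    names = [f"A{k}" for k in range(1, Q + 1)]
    return [frozenset(c) for k in range(1, Q + 1)
            for c in combinations(names, k)]

for Q in range(1, 9):
    assert len(agent_states(Q)) == 2**Q - 1  # eq. (numstates)

print(sorted(len(D) for D in agent_states(3)))  # → [1, 1, 1, 2, 2, 2, 3]
```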
Taken as a whole, $\bar n(t)$ provides a full kinematic description of the state of the system at time $t$. Its trajectory in state space is mathematically described by a set of stochastic differential equations, governing the dynamics of the system under the joint action of deterministic drift and random diffusion terms. MFEs follow as the result of switching off all diffusion terms. They read \begin{equation} \frac{\text{d} n^{(i)}_D}{\text{d} t} = f^{(i)}_D(\bar n)\,,\qquad D\in \bar S({\cal D})\,,\quad i=1,\ldots,Q\,. \label{eq:mfebsm} \end{equation} The function $f^{(i)}_D$ yields the overall transition rate for the agent state $D$ in the $i$th community. It includes positive and negative contributions, each corresponding to an interaction involving two neighbouring agents belonging to ${\cal C}^{(i)}$ or else an agent belonging to ${\cal C}^{(i)}$ and a neighbour lying in a different community. We group terms contributing to $f^{(i)}_D$ in two different ways, namely \vskip -0.7cm \begin{equation} f^{(i)}_D \, = \, f^{(i,+)}_D - f^{(i,-)}_D \, = \, f^{(ii)}_D + \sum_{k\ne i}^{1\ldots Q}f^{(ik)}_D\,, \end{equation} \vskip -0.5cm \noindent where $f^{(i,+)}_D$ collects all positive contributions, $f^{(i,-)}_D$ all negative ones and $f^{(ik)}_D$ all contributions involving agents who belong respectively to the $i$th and $k$th communities. The first representation shows that $D$ is a steady state in the $i$th community provided the balance $f^{(i,+)}_D = f^{(i,-)}_D$ is exactly fulfilled. The second one makes it easy to count the number of dimensions of the phase space of the system. Indeed, $f^{(ik)}_D$ is proportional to the probability of picking an agent $x$ in the $i$th community and a neighbour $x'$ of $x$ in the $k$th one. 
This probability amounts to \begin{equation} \pi^{(ik)} \,=\, \text{prob}\left\{x\in{\cal C}^{(i)},\,x'\in{\cal C}^{(k)}\right\} \ = \ \frac{1}{Q}\,\frac{p^{(ik)}}{\sum_{\ell=1}^Q p^{(i\ell)}} \ = \ \frac{1}{Q}\,\frac{\nu^{(ik)}}{1 + \sum_{\ell\ne i}^{1\ldots Q}\nu^{(i\ell)}}\,, \end{equation} with $\nu^{(ik)} = p^{(ik)}/p^{(ii)}$. When we look for a steady solution to eqs.~(\ref{eq:mfebsm}) we set all time derivatives on the left-hand side to zero. Since the denominator of $\pi^{(ik)}$ is the same for all $k$ with fixed $i$, we can factor out all such denominators and drop them. We thus see that the only parameters a steady solution depends on are precisely the constants $\{\nu^{(ik)}\}_{i\ne k}$. Since these are all independent, we conclude that the phase space of the model has $Q(Q-1)$ dimensions. \begin{table}[!t] \begin{center} \small \begin{tabular}{|c|c|c|c|c|c|} \hline before interaction & after interaction & \multicolumn{4}{c|}{conditional transition rates} \\ \hline \\[-2.8ex] $S^{(i)} \to L^{(k)}$ & $S^{(i)} - L^{(k)}$ & $\Delta n^{(i)}_{A_1}$ & $\Delta n^{(i)}_{A_2}$ & $\Delta n^{(k)}_{A_1}$ & $\Delta n^{(k)}_{A_2}$\\ \hline\\[-2.8ex] $\phantom{A_2}A_1\stackbin{A_1}{\to}A_1\phantom{A_1}$ & $\phantom{A_2}A_1 - A_1\phantom{A_2}$ & 0 & 0 & 0 & 0 \\ $\phantom{A_2}A_1\stackbin{A_1}{\to}A_2\phantom{A_1}$ & $\phantom{A_2}A_1 - A_1A_2$ & 0 & 0 & 0 & $-n^{(i)}_{A_1}n^{(k)}_{A_2}$ \\ $\phantom{A_2}A_1\stackbin{A_1}{\to}A_1A_2$ & $\phantom{A_2}A_1 - A_1\phantom{A_2}$ & 0 & 0 & $n^{(i)}_{A_1}n^{(k)}_{A_1A_2}$ & 0 \\ $\phantom{A_1}A_2\stackbin{A_2}{\to}A_1\phantom{A_2}$ & $\phantom{A_1}A_2 - A_1A_2$ & 0 & 0 & $-n^{(i)}_{A_2}n^{(k)}_{A_1}$ & 0 \\ $\phantom{A_1}A_2\stackbin{A_2}{\to}A_2\phantom{A_1}$ & $\phantom{A_1}A_2 - A_2\phantom{A_1}$ & 0 & 0 & 0 & 0 \\ $\phantom{A_1}A_2\stackbin{A_2}{\to}A_1A_2$ & $\phantom{A_1}A_2 - A_2\phantom{A_1}$ & 0 & 0 & 0 & $n^{(i)}_{A_2}n^{(k)}_{A_1A_2}$ \\ $A_1A_2\stackbin{A_1}{\to}A_1\phantom{A_2}$ & $\phantom{A_2}A_1 - 
A_1\phantom{A_2}$ & $\frac{1}{2}n^{(i)}_{A_1A_2}n^{(k)}_{A_1}$ & 0 & 0 & 0 \\ $A_1A_2\stackbin{A_1}{\to}A_2\phantom{A_1}$ & $A_1A_2 - A_1A_2$ & 0 & 0 & 0 & $-\frac{1}{2}n^{(i)}_{A_1A_2}n^{(k)}_{A_2}$ \\ $A_1A_2\stackbin{A_1}{\to}A_1A_2$ & $\phantom{A_2}A_1 - A_1\phantom{A_2}$ & $\frac{1}{2}n^{(i)}_{A_1A_2}n^{(k)}_{A_1A_2}$ & 0 & $\frac{1}{2}n^{(i)}_{A_1A_2}n^{(k)}_{A_1A_2}$ & 0 \\ $A_1A_2\stackbin{A_2}{\to}A_1\phantom{A_2}$ & $A_1A_2 - A_1A_2$ & 0 & 0 & $-\frac{1}{2}n^{(i)}_{A_1A_2}n^{(k)}_{A_1}$ & 0 \\ $A_1A_2\stackbin{A_2}{\to}A_2\phantom{A_1}$ & $\phantom{A_1}A_2 - A_2\phantom{A_1}$ & 0 & $\frac{1}{2}n^{(i)}_{A_1A_2}n^{(k)}_{A_2}$ & 0 & 0 \\ $A_1A_2\stackbin{A_2}{\to}A_1A_2$ & $\phantom{A_1}A_2 - A_2\phantom{A_1}$ & 0 & $\frac{1}{2}n^{(i)}_{A_1A_2}n^{(k)}_{A_1A_2}$ & 0 & $\frac{1}{2}n^{(i)}_{A_1A_2}n^{(k)}_{A_1A_2}$ \\[1.0ex] \hline \end{tabular} \caption{\footnotesize Conditional transition rates for speaker-listener interactions. Labels $S^{(i)}$ and $L^{(k)}$ denote respectively a speaker in ${\cal C}^{(i)}$ and a listener in ${\cal C}^{(k)}$, $i,k=1,2$.\label{tab:binaryNG}} \end{center} \vskip -0.4cm \end{table} Using the representation $f^{(i)}_D = f^{(i,+)}_D - f^{(i,-)}_D$ is more convenient for calculational purposes. All speaker-listener interactions of a given type generate algebraic expressions differing only in the community indexes carried by $\{\pi^{(ik)}\}$. Such expressions can be easily grouped together. To work out $f^{(i,\pm)}_D$, it is advisable to first enumerate its contributions. In full generality we let \begin{equation} f^{(i,\pm)}_D = \sum_{\alpha=1}^{N_{\pm,D}}f^{(i,\pm,\alpha)}_D\,, \end{equation} where $f_D^{(i,\pm,\alpha)}$ includes all interactions of the $\alpha$th type increasing/decreasing $n^{(i)}_D$ and $N_{\pm,D}$ denotes the overall number of interaction types yielding an increase/decrease of $n^{(i)}_D$. As we just noticed, each contribution to $f_D^{(i,\pm,\alpha)}$ is proportional to $\pi^{(ik)}$ for some $k$. 
The proportionality factor yields the conditional transition rate $\Delta n^{(i)}_D$ of an interaction between agents $x$ and $x'$ given $x\in{\cal C}^{(i)}$ and $x'\in{\cal C}^{(k)}$. Only in the specific case of the binary NG, where $S({\cal D}) = \left\{\{A_1\},\{A_2\},\{A_1,A_2\}\right\}$, can such conditional rates be simply enumerated and calculated with paper and pencil. Indeed, these have concise and well-known expressions. For the reader's convenience we report them all in Table~\ref{tab:binaryNG} (to keep the notation simple, here as well as in the sequel we allow expressions such as $n^{(i)}_{A}$ in place of $\smash{n^{(i)}_{\{A\}}}$). For $Q>2$ the number of contributions increases. For the sake of readability, we refer the reader to App.~A for a complete derivation of MFEs. \subsection{Phase diagram for $Q=2$} As explained above, the phase space of the NG in the SBM corresponds to the first orthant of the $Q(Q-1)$-dimensional Euclidean space generated by parameters $\{\nu^{(ik)}\}_{i\ne k}^{1\ldots Q}$. Recall that we start the game with initial configuration \begin{equation} n^{(k)}_D(0) = \left\{\begin{array}{ll} 1 & \text{if } D = \{A_k\}\,, \\[1.0ex] 0 & \text{otherwise}\,,\end{array}\right.\qquad k=1,\ldots,Q\,. \label{eq:initcond} \end{equation} After a while the system reaches an equilibrium state where a certain number of names are left out in favour of others. Surviving names are not necessarily confined to their original communities; they may also spread over other ones in the course of the dynamics. If $n^{(k)}_{A_\ell}(t\to\infty)\simeq 1$ for $\ell\ne k$, we say that $A_\ell$ has \emph{colonized} ${\cal C}^{(k)}$. Phases correspond to all possible ways names colonize communities. In principle, their total number is given by \begin{equation} \text{no. of phases } \, = \, \sum_{k=1}^Q{Q\choose k}\sum_{\substack{n_1\ldots n_k=0 \\[1.0ex] n_1 + \ldots + n_k = Q-k}}^{Q-k}\frac{(Q-k)!}{n_1!\ldots n_k!} \, = \, \sum_{k=1}^Q {Q\choose k}k^{Q-k}\,. 
\label{eq:nophases} \end{equation} Indeed, assume that $k$ names survive in the final state. The number of ways to choose them out of a set of $Q$ names is ${Q\choose k}$, which explains the presence of the binomial coefficient in eq.~(\ref{eq:nophases}). We have to sum over $k=1,\ldots,Q$ to take into account all possibilities. The $k$ surviving names certainly dominate their respective communities, so we are left with the task of distributing them across the remaining $Q-k$ ones. The number of ways to distribute $n_1$ copies of the first name, $n_2$ copies of the second one and so forth is $(Q-k)!/(n_1!\ldots n_k!)$, with the factorials in the denominator removing unwanted repetitions. The total number of phases is finally obtained by summing over all possible choices of $n_1,\ldots,n_k$. The rightmost expression in eq.~(\ref{eq:nophases}) simply follows from the multinomial theorem. In Table~\ref{tab:phases}, we report the number of phases for the first few values of $Q$. Each phase occupies a sharply bounded region in the phase space. As the reader may notice, the phase structure of the model becomes increasingly complex as $Q$ increases. \begin{table}[!t] \begin{center} \begin{tabular}{c||c|c|c|c|c|c|c} \hline\hline Q$\phantom{\bigr|^f}$ & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline \text{no. of phases$\phantom{\bigr|^f}$} & 3 & 10 & 41 & 196 & 1057 & 6322 & 41393 \\ \hline\hline \end{tabular} \vskip 0.2cm \caption{\footnotesize Number of phases in the SBM with $Q$ communities.\label{tab:phases}} \end{center} \vskip -0.4cm \end{table} The only case where the phase diagram can be easily studied is for $Q=2$. For notational simplicity, we introduce symbols $\nu_1 = p^{(12)}/p^{(11)}$ and $\nu_2 = p^{(12)}/p^{(22)}$ in place of $\nu^{(12)}$ and $\nu^{(21)}$ respectively. Notice that $\nu_1$ and $\nu_2$ increase when the inter-community links become denser and also when the intra-community ones rarefy. 
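Returning briefly to the counting above, eq.~(\ref{eq:nophases}) is straightforward to evaluate; the snippet below (our own check) reproduces the entries of Table~\ref{tab:phases}.

```python
from math import comb

def n_phases(Q):
    # eq. (nophases): sum over the number k of surviving names
    return sum(comb(Q, k) * k ** (Q - k) for k in range(1, Q + 1))

print([n_phases(Q) for Q in range(2, 9)])
# → [3, 10, 41, 196, 1057, 6322, 41393]
```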
MFEs can be easily worked out, either thanks to Table~\ref{tab:binaryNG} or via the code provided in App.~B. They read \begin{align} \frac{\text{d} n^{(1)}_{A_1}}{\text{d} t} & = \pi^{(11)}\left\{ n^{(1)}_{A_1}n^{(1)}_{A_1A_2} + (n^{(1)}_{A_1A_2})^2 - n^{(1)}_{A_1}n^{(1)}_{A_2}\right\}\nonumber \\[0.0ex] & + \pi^{(12)}\left\{ \frac{3}{2}n^{(1)}_{A_1A_2}n^{(2)}_{A_1} - \frac{1}{2}n^{(1)}_{A_1}n^{(2)}_{A_1A_2} + n^{(1)}_{A_1A_2}n^{(2)}_{A_1A_2} - n^{(1)}_{A_1}n^{(2)}_{A_2} \right\}\,, \label{eq:sbmfrst}\\[1.5ex] \frac{\text{d} n^{(1)}_{A_2}}{\text{d} t} & = \pi^{(11)}\left\{ n^{(1)}_{A_2}n^{(1)}_{A_1A_2} + (n^{(1)}_{A_1A_2})^2 - n^{(1)}_{A_1}n^{(1)}_{A_2} \right\}\nonumber \\[0.0ex] & + \pi^{(12)}\left\{\frac{3}{2}n^{(1)}_{A_1A_2}n^{(2)}_{A_2}-\frac{1}{2}n^{(1)}_{A_2}n^{(2)}_{A_1A_2}+n^{(1)}_{A_1A_2}n^{(2)}_{A_1A_2}-n^{(1)}_{A_2}n^{(2)}_{A_1} \right\}\,, \label{eq:sbmscnd}\\[1.5ex] \frac{\text{d} n^{(2)}_{A_1}}{\text{d} t} & = \pi^{(22)}\left\{ n^{(2)}_{A_1}n^{(2)}_{A_1A_2} + (n^{(2)}_{A_1A_2})^2 - n^{(2)}_{A_1}n^{(2)}_{A_2}\right\}\nonumber \\[0.0ex] & + \pi^{(21)}\left\{ \frac{3}{2}n^{(2)}_{A_1A_2}n^{(1)}_{A_1} - \frac{1}{2}n^{(2)}_{A_1}n^{(1)}_{A_1A_2} + n^{(2)}_{A_1A_2}n^{(1)}_{A_1A_2} - n^{(2)}_{A_1}n^{(1)}_{A_2} \right\}\,, \label{eq:sbmthrd}\\[1.5ex] \frac{\text{d} n^{(2)}_{A_2}}{\text{d} t} & = \pi^{(22)}\left\{ n^{(2)}_{A_2}n^{(2)}_{A_1A_2} + (n^{(2)}_{A_1A_2})^2 - n^{(2)}_{A_1}n^{(2)}_{A_2} \right\}\nonumber \\[0.0ex] & + \pi^{(21)}\left\{\frac{3}{2}n^{(2)}_{A_1A_2}n^{(1)}_{A_2}-\frac{1}{2}n^{(2)}_{A_2}n^{(1)}_{A_1A_2}+n^{(2)}_{A_1A_2}n^{(1)}_{A_1A_2}-n^{(2)}_{A_2}n^{(1)}_{A_1} \right\}\,. \label{eq:sbmfrth} \end{align} The phase diagram of the model, obtained by integrating eqs.~(\ref{eq:sbmfrst})--(\ref{eq:sbmfrth}) numerically, is reported in Fig.~\ref{fig:phases}. 
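Such a numerical integration can be sketched in a few lines. The code below is a plain forward-Euler scheme of ours (not the integrator used for the figure, which is unspecified here); step size, horizon and the sample values of $\nu_1,\nu_2$ are illustrative choices.

```python
def rhs(n, nu1, nu2):
    """Right-hand sides of eqs. (sbmfrst)-(sbmfrth);
    n = (n1_A1, n1_A2, n2_A1, n2_A2)."""
    n1a, n1b, n2a, n2b = n
    m1 = 1.0 - n1a - n1b          # n^(1)_{A1A2} via the simplex condition
    m2 = 1.0 - n2a - n2b          # n^(2)_{A1A2}
    p11 = 0.5 / (1.0 + nu1); p12 = 0.5 * nu1 / (1.0 + nu1)
    p22 = 0.5 / (1.0 + nu2); p21 = 0.5 * nu2 / (1.0 + nu2)
    d1a = (p11 * (n1a * m1 + m1**2 - n1a * n1b)
           + p12 * (1.5 * m1 * n2a - 0.5 * n1a * m2 + m1 * m2 - n1a * n2b))
    d1b = (p11 * (n1b * m1 + m1**2 - n1a * n1b)
           + p12 * (1.5 * m1 * n2b - 0.5 * n1b * m2 + m1 * m2 - n1b * n2a))
    d2a = (p22 * (n2a * m2 + m2**2 - n2a * n2b)
           + p21 * (1.5 * m2 * n1a - 0.5 * n2a * m1 + m2 * m1 - n2a * n1b))
    d2b = (p22 * (n2b * m2 + m2**2 - n2a * n2b)
           + p21 * (1.5 * m2 * n1b - 0.5 * n2b * m1 + m2 * m1 - n2b * n1a))
    return d1a, d1b, d2a, d2b

def integrate(nu1, nu2, n0=(1.0, 0.0, 0.0, 1.0), dt=0.02, steps=20000):
    """Forward-Euler integration from the initial condition (initcond)."""
    n = list(n0)
    for _ in range(steps):
        d = rhs(n, nu1, nu2)
        n = [x + dt * dx for x, dx in zip(n, d)]
    return n

print(integrate(0.05, 0.05))  # settles on a symmetric two-language state
```

For $\nu_1=\nu_2=0.05$ the densities settle on a symmetric two-language state, while strongly asymmetric choices such as $\nu_1=0.05$, $\nu_2=0.5$ drive one name to colonize both communities.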
We observe three different phases: in region I the system converges to a global consensus state where $A_1$ colonizes ${\cal C}^{(2)}$, in region III it converges to the opposite global consensus state, with $A_2$ colonizing ${\cal C}^{(1)}$, while region II corresponds to a multi-language phase. Here large fractions of both communities keep speaking their original language without ever converging to global consensus. It is interesting to observe that the phase structure of the model is qualitatively similar to that obtained for the binary NG on a fully connected graph when competing committed groups of agents are introduced, see Fig.~1 of ref.~\cite{Xie:2}. Nevertheless, the phase structure here is fully induced by the network topology. Moreover, the cusp of region II, which we shall derive exactly in the next section, is located at $\nu_1=\nu_2= 0.1321\ldots$, while in ref.~\cite{Xie:2} it is located at $p_\text{A}=p_\text{B}=0.1623\ldots$. \begin{figure}[t!] \centering \hskip -0.8cm\includegraphics[width=0.84\textwidth]{./phases.pdf} \vskip -0.2cm \caption{\footnotesize Phase diagram in the SBM with $Q=2$.\label{fig:phases}} \vskip -0.2cm \end{figure} \section{Binary dynamics in the Planted Partition Model with $Q=2$} The Planted Partition Model~({\bf PPM})~\cite{Condon,McSherry} includes all networks of the SBM generated by letting $p^{(ii)}={p_{\rm in}}$ for $i=1,\ldots,Q$ and $p^{(ik)} = {p_{\rm out}}$ for $i\ne k$. Networks in the PPM are fully symmetric under exchange of community indexes. 
For $Q=2$ we have \begin{align} & \langle\kappa^{(1)}_{\text{in}}\rangle = \langle\kappa^{(2)}_{\text{in}}\rangle = \frac{2}{N}\frac{N}{2}{p_{\rm in}}\left(\frac{N}{2}-1\right) = {p_{\rm in}}\left(\frac{N}{2}-1\right)\,,\\[4.0ex] & \langle\kappa^{(12)}_{\text{out}}\rangle = \langle\kappa^{(21)}_{\text{out}}\rangle = \frac{2}{N}\frac{N}{2}{p_{\rm out}}\frac{N}{2} = {p_{\rm out}}\frac{N}{2}\,, \end{align} hence \begin{equation} \gamma_\text{out/in} = \frac{{p_{\rm out}}}{{p_{\rm in}}} \frac{1}{\left(1-2/N\right)} \simeq {p_{\rm out}}/{p_{\rm in}} \equiv \nu\,. \label{eq:PPMgamma} \end{equation} ECS conditions are fulfilled by graphs with $\nu\ll 1$. In this limit the PPM is by far the simplest \emph{ensemble} of community-based networks. For $Q=2$ the phase space of the system corresponds to the bisecting line of Fig.~\ref{fig:phases}, where $\nu_1=\nu_2\equiv\nu$. When the system relaxes to equilibrium, all derivatives on the l.h.s. of eqs.~(\ref{eq:sbmfrst})--(\ref{eq:sbmfrth}) vanish. Therefore, steady densities are determined by a system of algebraic equations. We want to show that the latter admit a symmetric solution $\tilde n = \{\tilde n^{(1)}_{A_1},\tilde n^{(1)}_{A_2},\tilde n^{(2)}_{A_1},\tilde n^{(2)}_{A_2}\}$ with $\tilde n^{(1)}_{A_1}=\tilde n^{(2)}_{A_2}=x$ and $\tilde n^{(1)}_{A_2}=\tilde n^{(2)}_{A_1}=y$. This turns out to be stable only for $\nu$ lying within region II of Fig.~\ref{fig:phases}. When $\nu$ lies outside it, the symmetric solution becomes unstable under small density perturbations. In this region the symmetry is broken by dynamical fluctuations, leading to global consensus on $A_1$ or $A_2$ with equal probability. Depending on how large $\nu$ is, instabilities may be triggered by density fluctuations occurring along one specific direction or spanning an entire plane in state space, as we shall see shortly. 
As a result of our \emph{ansatz} two of the MFEs become redundant, so we are left with \begin{align} \label{eq:ppmfrst} & x(1-x-y) + (1-x-y)^2 -xy \nonumber\\[0.0ex] & \hskip 2.0cm + \nu\left\{\frac{3}{2}y(1-x-y) - \frac{1}{2}x(1-x-y) + (1-x-y)^2 - x^2\right\} = 0\,,\\[4.0ex] \label{eq:ppmscnd} & y(1-x-y) + (1-x-y)^2 -xy \nonumber\\[0.0ex] & \hskip 2.0cm + \nu\left\{\frac{3}{2}x(1-x-y) - \frac{1}{2}y(1-x-y) + (1-x-y)^2 - y^2\right\} = 0\,. \end{align} We let $u=x-y$ and $v = 1-x-y$. Adding and subtracting the above two equations yields the equivalent system \begin{align} \label{eq:ppmthrd} & \left\{v(1-v) + 2v^2 -\frac{1}{2}[(1-v)^2 - u^2]\right\} + \nu \left\{v(1-v) + 2v^2 - \frac{1}{2}[(1-v)^2+u^2]\right\}=0\,,\\[3.0ex] \label{eq:ppmfrth} & uv + \nu\left\{-\frac{3}{2}uv - \frac{1}{2}uv - u(1-v)\right\} = 0 \qquad \Leftrightarrow \qquad u\left[v - \nu(1+v)\right] = 0\,. \end{align} In particular, eq.~(\ref{eq:ppmfrth}) has two solutions: \emph{i}) $u=0$ and \emph{ii}) $u\ne 0$, $v = \nu/(1-\nu)$. These hold separately for $\nu$ belonging to disjoint subintervals of $[0,1]$. If we assume first that $u\ne 0$, from eq.~(\ref{eq:ppmthrd}) it follows that \begin{equation} u^2 = -2\frac{1+\nu}{1-\nu}\left\{\frac{v^2}{2} + 2v - \frac{1}{2}\right\}\,. \label{eq:ueq} \end{equation} Inserting $v=\nu/(1-\nu)$ into this yields \begin{equation} u(\nu) = \pm \sqrt{\frac{1+\nu}{(1-\nu)^3}\left(4\nu^2 - 6\nu + 1\right)}\,. \end{equation} To ensure that $u(\nu)$ is real, we must have $0<\nu\le \hat\nu = (3-\sqrt{5})/4 = 0.190983\ldots$ Moreover, from eq.~(\ref{eq:ueq}) we see that $u=0$ entails $v = \sqrt{5}-2 = 0.236068\ldots$ This represents a solution for $\nu>\hat\nu$. 
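Before proceeding, a quick numerical sanity check (our own, with an illustrative value of $\nu$) confirms that the branch just derived indeed annihilates the left-hand sides of eqs.~(\ref{eq:ppmfrst})--(\ref{eq:ppmscnd}):

```python
from math import sqrt

def residuals(x, y, nu):
    """Left-hand sides of eqs. (ppmfrst)-(ppmscnd);
    both vanish at a steady state."""
    m = 1.0 - x - y
    r1 = x * m + m**2 - x * y + nu * (1.5 * y * m - 0.5 * x * m + m**2 - x**2)
    r2 = y * m + m**2 - x * y + nu * (1.5 * x * m - 0.5 * y * m + m**2 - y**2)
    return r1, r2

nu = 0.1                                    # any value below (3 - sqrt(5))/4
v = nu / (1.0 - nu)
u = sqrt((1.0 + nu) * (4.0 * nu**2 - 6.0 * nu + 1.0) / (1.0 - nu)**3)
x, y = (1.0 - v + u) / 2.0, (1.0 - v - u) / 2.0
print(residuals(x, y, nu))  # both residuals vanish up to rounding
```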
Therefore, with initial conditions $n^{(1)}_{A_1} = n^{(2)}_{A_2}=1$ and $n^{(1)}_{A_2} = n^{(2)}_{A_1} = 0$, the symmetric steady solution is given by \begin{equation} \left\{\begin{array}{ll} \tilde n^{(i)}_{A_i}(\nu) & \hskip -0.2cm = \dfrac{1}{2}\left\{\dfrac{1-2\nu}{1-\nu} + \sqrt{\dfrac{(1+\nu)}{(1-\nu)^3}\left(4\nu^2 - 6\nu + 1\right)} \right\}\,,\\[4.0ex] \tilde n^{(i)}_{A_{3-i}}(\nu) & \hskip -0.2cm = \dfrac{1}{2}\left\{\dfrac{1-2\nu}{1-\nu} - \sqrt{\dfrac{(1+\nu)}{(1-\nu)^3}\left(4\nu^2 - 6\nu + 1\right)} \right\}\,, \end{array}\right.\quad \text{ for } \ \nu\le \hat\nu \ \text{ and } \ i=1,2\,, \label{eq:symsol} \end{equation} and \begin{equation} \tilde n^{(1)}_{A_1}(\nu) = \tilde n^{(1)}_{A_2}(\nu) = \tilde n^{(2)}_{A_1}(\nu) = \tilde n^{(2)}_{A_2}(\nu) = \frac{3-\sqrt{5}}{2}\,,\quad \ \text{ for } \ \nu\ge \hat\nu\,. \end{equation} In Fig.~\ref{fig:ppmstability} (\emph{left}) we plot the symmetric steady densities in ${\cal C}^{(1)}$ vs $\nu$. We notice that both $\tilde n^{(1)}_{A_1}(\nu)$ and $\tilde n^{(1)}_{A_2}(\nu)$ have discontinuous first-order derivatives for $\nu=\hat\nu$. \begin{figure}[t!] \centering \hskip -0.8cm\includegraphics[width=0.48\textwidth]{./PPMsymsol.pdf} \hskip 0.2cm\includegraphics[width=0.48\textwidth]{./PPMeigen.pdf} \vskip -0.2cm \caption{\footnotesize (\emph{left}) Symmetric steady solution to MFEs in the PPM with $Q=2$. (\emph{right}) Eigenvalues of linearized MFEs. The critical point $\nu_c$ is represented by a star.} \label{fig:ppmstability} \end{figure} \subsection{Stability of the symmetric steady solution} In order to investigate the stability of the symmetric solution, we consider densities deviating by a small amount from $\tilde n$, {\it i.e.\ } we let $n^{(i)}_{\scriptscriptstyle X} = \tilde n^{(i)}_{\scriptscriptstyle X} +\, \epsilon^{(i)}_{\scriptscriptstyle X}$ for $i=1,2$ and ${X} = {\rm A_1,A_2}$. 
Then we examine the conditions under which all deviations $\{\epsilon^{(i)}_{\scriptscriptstyle X}(t)\}_{X=A_1,A_2}^{i=1,2}$ vanish simultaneously as $t\to\infty$. By inserting such perturbations into MFEs and by expanding in Taylor series at leading order, we obtain linearized MFEs \vskip -0.5cm \begin{equation} \frac{\text{d}\epsilon^{(i)}_{\scriptscriptstyle X}}{\text{d} t} = \sum_{Y=A_1,A_2}\sum_{k=1,2} \epsilon^{(k)}_{\scriptscriptstyle Y} \frac{\partial f_{\scriptscriptstyle X}^{(i)}}{\partial n^{(k)}_{\scriptscriptstyle Y}}(\tilde n)\,,\qquad i=1,2\ \text { and }\ X = A_1,A_2\,. \end{equation} The stability matrix $\Lambda^{(i,{\scriptscriptstyle X})}_{(k,{\scriptscriptstyle Y})} = \partial f^{(i)}_{\scriptscriptstyle X}/\partial n^{(k)}_{\scriptscriptstyle Y}(\tilde n)$ has constant elements, depending on the components of $\tilde n$ and the relative connectedness $\nu$ (but not on $\{\epsilon^{(i)}_{\scriptscriptstyle X}\}$). In particular, we find \begin{alignat}{2} \sigma\Lambda^{(1,{\scriptscriptstyle A_1})}_{(1,{\scriptscriptstyle A_1})} & = - 1 - \frac{3}{2}\nu + \frac{\nu}{2}\tilde n^{(2)}_{A_2}\,, & \sigma\Lambda^{(1,{\scriptscriptstyle A_1})}_{(1,{\scriptscriptstyle A_2})} & = -2 - \nu - \frac{\nu}{2} \tilde n^{(2)}_{A_1} + 2\tilde n^{(1)}_{A_2} + \nu \tilde n^{(2)}_{A_2}\,,\\[1.0ex] \sigma\Lambda^{(1,{\scriptscriptstyle A_1})}_{(2,{\scriptscriptstyle A_1})} & = \frac{1}{2}\nu -\frac{\nu}{2}\tilde n^{(1)}_{A_2}\,, & \sigma\Lambda^{(1,{\scriptscriptstyle A_1})}_{(2,{\scriptscriptstyle A_2})} & = -\nu + \frac{\nu}{2}\tilde n^{(1)}_{A_1}+\nu\tilde n^{(1)}_{A_2}\, \end{alignat} \begin{alignat}{2} \sigma\Lambda^{(1,{\scriptscriptstyle A_2})}_{(1,{\scriptscriptstyle A_1})} & = -2 - \nu - \frac{\nu}{2} \tilde n^{(2)}_{A_2} + 2 \tilde n^{(1)}_{A_1} + \nu \tilde n^{(2)}_{A_1}\,,\qquad& \sigma\Lambda^{(1,{\scriptscriptstyle A_2})}_{(1,{\scriptscriptstyle A_2})} & = -1 -\frac{3}{2}\nu + \frac{\nu}{2} \tilde n^{(2)}_{A_1}\, \\[1.0ex] 
\sigma\Lambda^{(1,{\scriptscriptstyle A_2})}_{(2,{\scriptscriptstyle A_1})} & = -\nu + \frac{\nu}{2} \tilde n^{(1)}_{A_2} + \nu\tilde n^{(1)}_{A_1}\,, & \sigma\Lambda^{(1,{\scriptscriptstyle A_2})}_{(2,{\scriptscriptstyle A_2})} & = \frac{\nu}{2} - \frac{\nu}{2}\tilde n^{(1)}_{A_1}\,, \\[1.0ex] \sigma\Lambda^{(2,{\scriptscriptstyle A_1})}_{(1,{\scriptscriptstyle A_1})} & = \frac{\nu}{2} - \frac{\nu}{2}\tilde n^{(2)}_{A_2}\,,& \sigma\Lambda^{(2,{\scriptscriptstyle A_1})}_{(1,{\scriptscriptstyle A_2})} & = -\nu + \frac{\nu}{2}\tilde n^{(2)}_{A_1} + \nu\tilde n^{(2)}_{A_2} \,, \\[1.0ex] \sigma\Lambda^{(2,{\scriptscriptstyle A_1})}_{(2,{\scriptscriptstyle A_1})} & = -1-\frac{3}{2}\nu + \frac{\nu}{2}\tilde n^{(1)}_{A_2} \,,& \sigma\Lambda^{(2,{\scriptscriptstyle A_1})}_{(2,{\scriptscriptstyle A_2})} & = -2 -\nu - \frac{\nu}{2}\tilde n^{(1)}_{A_1} + 2 \tilde n^{(2)}_{A_2} + \nu\tilde n^{(1)}_{A_2} \,, \\[1.0ex] \sigma\Lambda^{(2,{\scriptscriptstyle A_2})}_{(1,{\scriptscriptstyle A_1})} & = -\nu +\frac{\nu}{2}\tilde n^{(2)}_{A_2}+ \nu \tilde n^{(2)}_{A_1} \,, & \sigma\Lambda^{(2,{\scriptscriptstyle A_2})}_{(1,{\scriptscriptstyle A_2})} & = \frac{\nu}{2} - \frac{\nu}{2}\tilde n^{(2)}_{A_1} \,, \\[1.0ex] \sigma\Lambda^{(2,{\scriptscriptstyle A_2})}_{(2,{\scriptscriptstyle A_1})} & = -2 -\nu - \frac{\nu}{2}\tilde n^{(1)}_{A_2} + 2\tilde n^{(2)}_{A_1} + \nu\tilde n^{(1)}_{A_1} \,, & \sigma\Lambda^{(2,{\scriptscriptstyle A_2})}_{(2,{\scriptscriptstyle A_2})} & = -1 -\frac{3}{2}\nu + \frac{\nu}{2}\tilde n^{(1)}_{A_1} \,, \end{alignat} \vskip 0.2cm \noindent with $\sigma=2(1+\nu)$. It is possible to work out the four eigenvalues of $\Lambda$ exactly, either by paper-and-pencil calculations or via a simple Maple$^\text{TM}$ script. Rather exceptionally, their algebraic expressions are sufficiently concise to allow us to report them in full. 
Indeed, we have \begin{align} \lambda_1 & = \frac{1}{4}\frac{3\nu^2 - 2 + \sqrt{\nu^4 -20\nu^3 + 8\nu^2 + 28\nu}}{1-\nu^2} \,,\\[3.7ex] \lambda_2 & = \frac{1}{4}\frac{\nu^2 -\nu - 2 + \sqrt{17\nu^4 - 26\nu^3 - 15\nu^2 + 28\nu}}{1-\nu^2}\,,\\[3.7ex] \lambda_3 & = \frac{1}{4}\frac{ 3\nu^2 - 2 - \sqrt{\nu^4 -20\nu^3 + 8\nu^2 + 28\nu}}{1-\nu^2} \,,\\[3.7ex] \lambda_4 & = \frac{1}{4}\frac{\nu^2 -\nu - 2 - \sqrt{17\nu^4 - 26\nu^3 - 15\nu^2 + 28\nu}}{1-\nu^2}\,. \end{align} The behaviour of these eigenvalues as functions of $\nu$ is reported in Fig.~\ref{fig:ppmstability} (\emph{right}). For sufficiently small $\nu$ all of them are negative, thus guaranteeing that the symmetric steady solution is stable. In fact, the transition to the multi-language phase occurs when the eigenvalue $\lambda_1$ shifts from negative to positive values~\cite{Arnold}. The critical point $\gamma_\text{out/in,\,c} = \nu_\text{c}$, at which $\lambda_1=0$, can be calculated exactly. The equation $\lambda_1(\nu)=0$ is indeed equivalent to a quartic equation for $\nu$ with four real simple roots. Among these, two are negative and one is larger than one. The fourth root, which we identify with $\nu_\text{c}$, is given by \begin{align} \nu_\text{c} & = \frac{\sqrt{19}}{2}\sin\left[-\frac{1}{3}\arctan\left(2\frac{\sqrt{2694}}{99}\right)+\frac{\pi}{3} \right] -\frac{\sqrt{57}}{6}\sin\left[\frac{1}{3}\arctan\left(2\frac{\sqrt{2694}}{99}\right)+ \frac{\pi}{6} \right] - \frac{1}{2}\nonumber\\[3.0ex] & = 0.132122756\ldots \label{eq:ppmnuc} \end{align} Remarkably, among all the network models considered in the present paper, the PPM is the only one for which the calculation of the critical connectedness can be carried out analytically to the very end. The eigenvector $v_1$ of $\Lambda$ corresponding to $\lambda_1$ becomes a direction of instability for the symmetric steady solution for $\nu>\nu_\text{c}$.
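These closed-form expressions are easy to cross-check numerically. The following minimal sketch (plain Python; the only inputs beyond the formulas above are the quoted decimal expansion of $\nu_\text{c}$ and the numerical tolerances) verifies that the trigonometric expression for $\nu_\text{c}$ reproduces the quoted value, and that $\lambda_1$ vanishes and changes sign there:

```python
import math

# lambda_1 as a function of nu, from the closed-form expression above
def lam1(nu):
    disc = nu**4 - 20*nu**3 + 8*nu**2 + 28*nu
    return 0.25 * (3*nu**2 - 2 + math.sqrt(disc)) / (1 - nu**2)

# closed-form critical connectedness nu_c, from the trigonometric formula above
phi = math.atan(2 * math.sqrt(2694) / 99)
nu_c = (math.sqrt(19)/2 * math.sin(-phi/3 + math.pi/3)
        - math.sqrt(57)/6 * math.sin(phi/3 + math.pi/6) - 0.5)

assert abs(nu_c - 0.132122756) < 1e-8          # matches the quoted decimals
assert abs(lam1(nu_c)) < 1e-10                 # lambda_1 vanishes at nu_c...
assert lam1(nu_c - 0.01) < 0 < lam1(nu_c + 0.01)  # ...and changes sign there
```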
The projection of the perturbation vector along $v_1$ thus diverges asymptotically. From Fig.~\ref{fig:ppmstability} (\emph{right}) we observe that $\lambda_2$ also shifts to positive values at some point. In particular, it can be shown that $\lambda_2=0$ for $\nu=\hat \nu$. Therefore, as anticipated, the eigenvector $v_2$ of $\Lambda$ corresponding to $\lambda_2$ represents a second direction of instability for $\nu>\hat \nu$. \subsection{Numerical integration of mean field equations} At this point it is worth discussing the numerical integration of the MFEs. We can take advantage of the analytic solution presented above to fix the details of our numerical recipe, so as to be confident that numerical integration yields correct results when applied to network models for which no analytic solution is available (for instance the SBM with $Q=2$ and $\nu_1 \ne \nu_2$, discussed in sect.~3). First of all, we notice that for $\nu_1=\nu_2=\nu$ eqs.~(\ref{eq:sbmfrst})--(\ref{eq:sbmfrth}) are symmetric under the exchange $n^{(1)}_{A_1}\leftrightarrow n^{(2)}_{A_2}$, $n^{(1)}_{A_2}\leftrightarrow n^{(2)}_{A_1}$. Since the initial state densities, eq.~(\ref{eq:initcond}), are symmetric too and no dynamical fluctuations are encoded in the MFEs, nothing can break the exchange symmetry, hence numerical solutions always converge to symmetric steady densities. To let the system fall into global consensus, we have two possibilities. One is to break the exchange symmetry explicitly in the initial conditions. For instance, we can introduce a contamination of $A_2$ within~${\cal C}^{(1)}$ by letting \begin{alignat}{3} \begin{array}{ll}n^{(1)}_{A_1}(0) = 1-\epsilon\,,& \qquad n^{(2)}_{A_1}(0) = 0\,,\\[3.3ex] n^{(1)}_{A_2}(0) = \epsilon\,,& \qquad n^{(2)}_{A_2}(0) = 1\,, \end{array} \label{eq:PPMasyminitcond} \end{alignat} with $0<\epsilon\ll 1$. For $\nu>\nu_\text{c}$ such a perturbation makes the system converge with certainty to global consensus on $A_2$.
However, the contamination affects the results of the numerical integration. More specifically, it modifies the duration of metastable states, thus changing the value of the critical connectedness by terms~$\text{O}(\epsilon)$. To get rid of this effect, we must integrate the MFEs numerically for a sequence of decreasing values of $\epsilon$ and then extrapolate to $\epsilon\to 0^+$. Albeit legitimate, the above approach has the drawback that the symmetry breaking is implicit in the initial conditions, while the MFEs are kept fully symmetric. An opposite possibility is to leave the initial conditions unchanged and assume that ${\cal C}^{(1)}$ and ${\cal C}^{(2)}$ have different sizes. For instance, we can let $N^{(2)} = (1+\epsilon)N^{(1)}$ for $0<\epsilon\ll 1$, so that ${\cal C}^{(2)}$ is slightly larger than ${\cal C}^{(1)}$ (for $\nu>\nu_\text{c}$ the system is then expected to converge to global consensus on $A_2$). This assumption modifies the coefficients $\{\pi^{(ik)}\}$. Indeed, the probability of picking an agent belonging to ${\cal C}^{(1)}$ is now $N^{(1)}/N = (1/2)(1-\epsilon/2) + \text{O}(\epsilon^2)$, while the probability of picking one belonging to ${\cal C}^{(2)}$ is $N^{(2)}/N = (1/2)(1+\epsilon/2) + \text{O}(\epsilon^2)$. Therefore, the exchange symmetry is explicitly broken in the MFEs. As before, numerical estimates of the critical connectedness are biased by terms $\text{O}(\epsilon)$, hence we must extrapolate results to $\epsilon\to 0^+$. All in all, the above two approaches to breaking the exchange symmetry are equivalent. \begin{figure}[t!] \centering \hskip -0.8cm\includegraphics[width=0.45\textwidth]{./timedep.pdf} \hskip 0.2cm\includegraphics[width=0.45\textwidth]{./Tc_PPM_Q=02.pdf}
\caption{\footnotesize (\emph{left}) Numerical integration of MFEs in the PPM; (\emph{right}) Time to consensus vs.~$\nu$ for several values of the symmetry breaking parameter $\epsilon$.} \vskip 0.2cm \label{fig:numint} \end{figure} Apart from this issue, we discretize the MFEs according to the Euler method~\cite{Atkinson} with step size $\text{d} t = 0.1$. In Fig.~\ref{fig:numint} (\emph{left}) we show examples of numerical integrations for a handful of values of $\nu$. The plot has been obtained with asymmetric initial conditions corresponding to $\epsilon=1.0\times 10^{-4}$. As anticipated, we observe the presence of metastable states followed by collapse to global consensus. These states have a finite duration depending on $\nu$. In particular, we see from the plot that the closer $\nu$ is to $\nu_\text{c}$, the longer the metastable states persist, until for $\nu<\nu_\text{c}$ they become truly stable. In Fig.~\ref{fig:numint} (\emph{right}) we plot the time to consensus $T_\text{cons}$ as a function of~$\nu$. The collapse of metastable states to global consensus takes a finite time $\Delta t$. Therefore, we need to define $T_\text{cons}$ operationally by setting a threshold. Throughout the paper we define $T_\text{cons}$ as the lowest value of $t$ for which $n^{(1)}_{A_1}(t)<1.0\times 10^{-4}$. This introduces a systematic error, which is however negligible for all purposes, since $\Delta t / T_\text{cons}\to 0$ as $\nu\to\nu_\text{c}$. The dependence of $T_\text{cons}$ upon $\nu$ is well described by the function \begin{equation} T_\text{cons}(\nu,\epsilon) = \left\{\begin{array}{ll} \dfrac{A(\epsilon)}{[\nu-\nu_\text{c}(\epsilon)]^{\gamma(\epsilon)}} & \text{if } \ \ \nu>\nu_\text{c}\,, \\[3.0ex] +\infty & \text{otherwise}\,,\end{array}\right. \label{eq:model} \end{equation} with $\nu_\text{c}(\epsilon)$ and $\gamma(\epsilon)$ converging as $\epsilon \to 0^+$.
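The fitting procedure can be illustrated with a minimal sketch. For simplicity we generate noise-free synthetic data from eq.~(\ref{eq:model}) with illustrative parameter values (not our measured ones) and hold $\nu_\text{c}$ fixed, so that $\gamma$ and $A$ follow from a linear regression in log-log variables; the actual fits determine all three parameters jointly:

```python
import numpy as np

# illustrative values only, not the fitted ones reported in the text
A_true, nu_c, gamma_true = 7.0, 0.1321228, 0.91
nu = np.linspace(nu_c + 0.02, 0.5, 40)
T_cons = A_true / (nu - nu_c)**gamma_true      # synthetic, noise-free data

# log T = log A - gamma * log(nu - nu_c): a straight line in log-log variables
slope, intercept = np.polyfit(np.log(nu - nu_c), np.log(T_cons), 1)
gamma_fit, A_fit = -slope, np.exp(intercept)

assert abs(gamma_fit - gamma_true) < 1e-10     # exponent recovered
assert abs(A_fit - A_true) < 1e-8              # amplitude recovered
```

With noisy data one would instead fit $A$, $\nu_\text{c}$ and $\gamma$ simultaneously, e.g.\ by nonlinear least squares.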
In Table~\ref{tab:fitpars} we report estimates of the parameters $A,\nu_\text{c},\gamma$, obtained by fitting the data produced by numerical integration to eq.~(\ref{eq:model}). In particular, the critical exponent $\gamma(\epsilon)$ converges to $\gamma(0^+) \simeq 0.96$ (for a definition of critical exponents see ref.~\cite{Stanley}), while $\nu_\text{c}(\epsilon)$ converges to the exact value, eq.~(\ref{eq:ppmnuc}). \begin{table}[!t] \begin{center} \small \begin{tabular}{c|r|r|r} \hline \hline $\epsilon$ & $A(\epsilon)\ \ \ $ & $\nu_\text{c}(\epsilon)\ \ \ $ & $\gamma(\epsilon)\ \ \ $ \\ \hline\\[-2.8ex] $1.0\times 10^{-2}$ & 8.214(1) & 0.1321161(2) & 0.74205(3) \\ $1.0\times 10^{-3}$ & 6.537(1) & 0.1321222(2) & 0.86468(3) \\ $1.0\times 10^{-4}$ & 6.920(1) & 0.1321227(2) & 0.90872(3) \\ $1.0\times 10^{-5}$ & 7.729(1) & 0.1321228(2) & 0.93087(3) \\ $1.0\times 10^{-6}$ & 8.523(1) & 0.1321228(2) & 0.94602(3) \\ $1.0\times 10^{-7}$ & 9.730(1) & 0.1321229(2) & 0.95210(3) \\ $1.0\times 10^{-8}$ & 10.790(1) & 0.1321229(2) & 0.95840(3) \\[0.3ex] \hline \hline \end{tabular} \vskip 0.1cm \caption{\footnotesize Estimates of fit parameters for $T_\text{cons}(\nu,\epsilon)$.\label{tab:fitpars}} \end{center} \vskip -0.4cm \end{table} In Fig.~\ref{fig:PPMsimul} (\emph{left}) we show the equilibrium densities $n^{(1)}_{A_k}(\infty)$ as obtained from numerical integration of the MFEs with asymmetric initial conditions corresponding to $\epsilon=1.0\times 10^{-4}$. They are in perfect agreement with the symmetric steady solution derived in sect.~4.1 for $\nu<\nu_\text{c}$. We conclude that our numerical recipe introduces no relevant systematic error in the calculation. \subsection{Finite size effects} So far we have studied the model in the mean field approximation. This is known to work well only in the thermodynamic limit. In Monte Carlo simulations, networks are necessarily made of a finite number of agents.
Moreover, due to computational limitations this number is never exceedingly large. The main effect induced by the finiteness of the network is a blurring of the phase transition. On small networks the critical connectedness $\nu_\text{c}$ disappears, together with the multi-language phase. The coexistence of different local languages within communities becomes a purely metastable phenomenon, independently of $\nu$. Dynamical fluctuations of the state densities are always able to trigger a collapse to global consensus within a finite time from the start of the game. Of course, the lower $\nu$, the longer the system persists in the metastable phase. To quantify this, we average $\text{O}(100)$ independent Monte Carlo measures of the time to consensus for several choices of $N$ and $\nu$. In particular, we let $p^{(11)} = p^{(22)} = 1$ in all our numerical tests, hence the communities that we simulate are actually cliques. Since measuring the time to consensus becomes increasingly costly as $\nu$ decreases, we need to set a threshold beyond which the stochastic dynamics is forcibly arrested. We introduce the bounded time to consensus \begin{equation} \tilde T_\text{cons}(N,\nu) = \min\left\{T_\text{cons}(N,\nu),100\,N\right\}\,. \label{eq:TconsPPM} \end{equation} In Fig.~\ref{fig:PPMsimul} (\emph{right}) we show the behaviour of $\tilde T_\text{cons}$ in a range of $\nu$ around the critical point $\nu_\text{c}$ for $N=1000,\,2000,\,4000$. We observe that $\tilde T_\text{cons}$ is essentially the same for all values of $N$ if $\nu\gg\nu_\text{c}$. As $\nu$ approaches $\nu_\text{c}$ from above, $\tilde T_\text{cons}$ begins to rise, and the increase is steeper for larger values of $N$. Finally, we see that $\tilde T_\text{cons}$ remains finite for $\nu<\nu_\text{c}$, although it soon takes large values as $\nu$ decreases.
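The measurement protocol of eq.~(\ref{eq:TconsPPM}) can be sketched on the simplest case of a single clique, assuming the standard binary NG speaker--listener rules (on a success both agents collapse their notebooks to the uttered name; on a failure the listener adds it). All other details here are deliberate simplifications of our actual simulations:

```python
import random

def binary_ng_clique(N, tmax, seed=0):
    """Binary NG on a clique; returns the bounded time to consensus
    min(T_cons, tmax), with time measured in pairwise interactions."""
    rng = random.Random(seed)
    # half of the agents start with notebook {A1}, the other half with {A2}
    books = [{"A1"} if i < N // 2 else {"A2"} for i in range(N)]
    for t in range(1, tmax + 1):
        s, l = rng.sample(range(N), 2)        # speaker and listener
        word = rng.choice(sorted(books[s]))   # speaker utters a known name
        if word in books[l]:                  # success: both collapse
            books[s] = {word}
            books[l] = {word}
            if all(b == {word} for b in books):
                return t                      # global consensus reached
        else:                                 # failure: listener learns it
            books[l].add(word)
    return tmax                               # dynamics forcibly arrested

N = 30
T_bounded = binary_ng_clique(N, tmax=100 * N)
```

On a single clique consensus is typically reached well before the $100\,N$ cut-off; near and below $\nu_\text{c}$ in the two-community case the bound is instead saturated.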
In principle it is possible to reproduce the observed curves by means of a numerical technique that allows one to build quasi-stationary solutions of the Master Equation for stochastic processes with absorbing states~\cite{Dickman:1,Dickman:2}. Although this technique has been applied to the NG in other contexts~\cite{Baronchelli:8,Xie:1} with very good results, its use here goes beyond our aims. We conclude by noting that in the crossover region, {\it i.e.\ } in the range of $\nu$ across which $\tilde T_\text{cons}(N,\nu)$ rises from $\text{O}(100)$ to the upper bound $100\,N$, the behaviour of $\tilde T_\text{cons}(N,\nu)$ is well described by the function \begin{equation} \tilde T_\text{cons}(N,\nu) \propto \exp\left\{N(\nu_\text{c}-\nu)^\beta\right\}\,, \label{eq:thmodelT} \end{equation} with $\beta\simeq 1.5$, in analogy with the findings of ref.~\cite{Xie:1}. \begin{figure}[t!] \centering \hskip -0.8cm\includegraphics[width=0.45\textwidth]{./PPMbrokensol.pdf} \hskip 0.2cm\includegraphics[width=0.45\textwidth]{./TCPPM_Q=02.pdf} \vskip -0.2cm \caption{\footnotesize (\emph{left}) Equilibrium densities in ${\cal C}^{(1)}$ with initial conditions as in eq.~(\ref{eq:PPMasyminitcond}); (\emph{right}) Bounded time to consensus from simulations in the PPM with $Q=2$.} \label{fig:PPMsimul} \vskip -0.2cm \end{figure} \section{Binary Naming Game on two overlapping cliques} In order to investigate how the coexistence of multi-language states in the NG is affected by the presence of agents belonging simultaneously to different communities, we consider a graph ${\cal G} = {\cal C}^{(1)}\cup{\cal C}^{(2)}$ made of two partially overlapping cliques, having size $N^{(1)}=N^{(2)}=N/2$. We recall that ${\cal C}^{(k)}$ is a clique provided $\mathfrak{p}^{(kk)}(x,y)=1$ for all $x,y\in{\cal C}^{(k)}$ and $x\ne y$.
We split ${\cal C}^{(1)}$ and ${\cal C}^{(2)}$ into two disjoint groups of nodes each, {\it i.e.\ } we let \begin{equation} {\cal C}^{(1)} = {\cal C}^{(1)}_\text{in}\cup{\cal C}^{(1)}_\text{ov}\,,\qquad {\cal C}^{(2)} = {\cal C}^{(2)}_\text{in}\cup{\cal C}^{(2)}_\text{ov}\,, \label{eq:ovgroups} \end{equation} with ${\cal C}^{(i)}_\text{in}$ and ${\cal C}^{(i)}_\text{ov}$ fulfilling \begin{align} i\text{)} & \qquad (x,y)\notin{\cal E}\text{ for all } x\in{\cal C}^{(1)}_\text{in}\text{ and for all } y\in{\cal C}^{(2)}\,,\nonumber\\[2.0ex] ii\text{)} & \qquad (x,y)\notin{\cal E}\text{ for all } x\in{\cal C}^{(2)}_\text{in}\text{ and for all } y\in{\cal C}^{(1)}\,,\nonumber\\[2.0ex] iii\text{)} & \qquad (x,y)\in{\cal E}\text{ for all } x\in{\cal C}^{(1)}_\text{ov}\text{ and for all } y\in{\cal C}^{(2)}\,,\nonumber\\[2.0ex] iv\text{)} & \qquad (x,y)\in{\cal E}\text{ for all } x\in{\cal C}^{(2)}_\text{ov}\text{ and for all } y\in{\cal C}^{(1)}\,.\nonumber \end{align} We also assume $|{\cal C}^{(1)}_\text{in}|=|{\cal C}^{(2)}_\text{in}|=N_\text{in}$ and $|{\cal C}^{(1)}_\text{ov}| = |{\cal C}^{(2)}_\text{ov}| = N_\text{ov}/2$. Therefore, we have $N = 2N_\text{in} + N_\text{ov}$. It will be noticed that these networks have no stochastic elements\footnote{With little effort we could consider a generalization where edges exist with probabilities $\mathfrak{p}^{(kk)}(x,y)<1$ for $x,y\in{\cal C}^{(k)}$. We prefer to restrict our study to cliques, as we wish to investigate how the overlap affects the multi-language phase of the NG with no additional degree of freedom.}. An example corresponding to $N=600$ and $N_\text{ov} = 60$ is reported in Fig.~\ref{fig:overlap}.
The connectedness parameters are given by \begin{align} & \langle\kappa^{(1)}_{\text{in}}\rangle = \langle\kappa^{(2)}_{\text{in}}\rangle = \frac{2}{N}\frac{N}{2}\left(\frac{N}{2}-1\right) = \frac{N}{2}-1\,,\\[2.0ex] & \langle\kappa^{(12)}_{\text{out}}\rangle = \langle\kappa^{(21)}_{\text{out}}\rangle = \frac{2}{N}\frac{N_\text{ov}}{2}\frac{N}{2} = \frac{N_\text{ov}}{2}\,, \end{align} hence \begin{equation} \gamma_\text{out/in}= \frac{N_\text{ov}}{N}\frac{1}{1-2/N} \simeq \frac{N_\text{ov}}{N} = \frac{N_\text{ov}}{2N_\text{in}+N_\text{ov}} = \frac{\omega}{2+\omega}\,, \label{eq:OVgamma} \end{equation} with $\omega = N_\text{ov}/N_\text{in}$. ECS conditions are fulfilled provided $\omega\ll 1$. \begin{figure}[t!] \centering \includegraphics[width=0.31\textwidth]{./CQ_OV.jpg} \vskip 0.4cm \caption{\footnotesize A network with two overlapping cliques, $N=600$ and $N_\text{ov} = 60$.\label{fig:overlap}} \vskip 0.2cm \end{figure} It is possible to study the binary NG on such networks with the same approach used for the PPM. However, we first need to clarify how the overlap contributes to shaping MFEs. So far we considered communities as groups of dynamically homogeneous agents. Accordingly, we identified the state densities~$n^{(i)}_D$, introduced in eq.~(\ref{eq:comdens}), as fundamental degrees of freedom of the system. When agents belonging to more than one community are present and they are likewise connected to all of these, assigning such agents to one community or another becomes ambiguous, hence it is advisable to treat them separately. Indeed, in our case we can distinguish precisely three groups of dynamically homogeneous agents, namely ${\cal C}^{(1)}_\text{in}$, ${\cal C}^{(2)}_\text{in}$ and ${\cal C}_{\text{ov}} = {\cal C}^{(1)}_\text{ov}\cup{\cal C}^{(2)}_\text{ov}$. Correspondingly, it makes sense to define state densities \begin{align} n^{(i)}_D & = \frac{\text{no. 
of agents with notebook $D$ belonging to ${\cal C}^{(i)}_{\text{in}}$}}{N_\text{in}}\,,\qquad \text{ for } \ i=1,2\,, \ \text{ and } \ D\in\bar S({\cal D})\,, \\[3.0ex] n^{(\text{o})}_D & = \frac{\text{no. of agents with notebook $D$ belonging to ${\cal C}_\text{ov}$}}{N_{\text{ov}}}\,,\qquad \text{ for } \ D\in\bar S({\cal D})\,. \end{align} Possible agent-agent interactions and corresponding conditional rates are still those listed in Table~\ref{tab:binaryNG}, but the probabilities of picking an agent $x$ and a neighbour $x'$ of $x$ belonging to combinations of the above groups must be specified anew. In particular, here we let \begin{align} \pi^{(ii)} & = \text{prob}\left\{x\in{\cal C}^{(i)}_\text{in},\ x'\in {\cal C}^{(i)}_\text{in}\right\} = \frac{1}{(1+\omega)(2+\omega)}\,, \\[0.5ex] \pi^{(i\text{o})} & = \text{prob}\left\{x\in{\cal C}^{(i)}_\text{in},\ x'\in{\cal C}_\text{ov}\right\} = \frac{\omega}{(1+\omega)(2+\omega)}\,, \\[0.5ex] \pi^{(\text{o}i)} & = \text{prob}\left\{x\in{\cal C}_\text{ov},\ x'\in{\cal C}^{(i)}_\text{in}\right\} = \frac{\omega}{(2+\omega)^2}\,, \\[0.5ex] \pi^{(\text{oo})} & = \text{prob}\left\{x\in{\cal C}_\text{ov},\ x'\in{\cal C}_\text{ov}\right\} = \frac{\omega^2}{(2+\omega)^2}\,, \end{align} and we notice that $\omega = {\pi^{(1\text{o})}}/{\pi^{(11)}} = {\pi^{(2\text{o})}}/{\pi^{(22)}} = {\pi^{(\text{oo})}}/{\pi^{(\text{o}1)}} ={\pi^{(\text{oo})}}/{\pi^{(\text{o}2)}}$. The above probabilities include all possible pairings; indeed, they fulfill \begin{equation} \pi^{(11)} + \pi^{(1\text{o})} + \pi^{(22)} + \pi^{(2\text{o})} + \pi^{(\text{oo})} + \pi^{(\text{o}1)} + \pi^{(\text{o}2)} = 1\,.
\end{equation} From the above definitions we easily recognize that the system is governed by MFEs \begin{alignat}{2} \frac{\text{d} n^{(1)}_{A_1}}{\text{d} t} & = \pi^{(11)}\left\{ n^{(1)}_{A_1}n^{(1)}_{A_1A_2} + (n^{(1)}_{A_1A_2})^2 - n^{(1)}_{A_1}n^{(1)}_{A_2}\right\} \nonumber \\[0.0ex] & + \pi^{(1\text{o})}\left\{ \frac{3}{2}n^{(1)}_{A_1A_2}n^{(\text{o})}_{A_1} - \frac{1}{2}n^{(1)}_{A_1}n^{(\text{o})}_{A_1A_2} + n^{(1)}_{A_1A_2}n^{(\text{o})}_{A_1A_2} - n^{(1)}_{A_1}n^{(\text{o})}_{A_2} \right\}\,, \label{eq:ovfrsteq}\\[1.0ex] \frac{\text{d} n^{(1)}_{A_2}}{\text{d} t} & = \pi^{(11)}\left\{ n^{(1)}_{A_2}n^{(1)}_{A_1A_2} + (n^{(1)}_{A_1A_2})^2 - n^{(1)}_{A_1}n^{(1)}_{A_2} \right\} \nonumber \\[0.0ex] & + \pi^{(1\text{o})}\left\{\frac{3}{2}n^{(1)}_{A_1A_2}n^{(\text{o})}_{A_2}-\frac{1}{2}n^{(1)}_{A_2}n^{(\text{o})}_{A_1A_2}+n^{(1)}_{A_1A_2}n^{(\text{o})}_{A_1A_2}-n^{(1)}_{A_2}n^{(\text{o})}_{A_1} \right\}\,, \label{eq:ovscndeq}\\[1.0ex] \frac{\text{d} n^{(2)}_{A_1}}{\text{d} t} & = \pi^{(22)}\left\{ n^{(2)}_{A_1}n^{(2)}_{A_1A_2} + (n^{(2)}_{A_1A_2})^2 - n^{(2)}_{A_1}n^{(2)}_{A_2}\right\} \nonumber\\[0.0ex] & + \pi^{(2\text{o})}\left\{ \frac{3}{2}n^{(2)}_{A_1A_2}n^{(\text{o})}_{A_1} - \frac{1}{2}n^{(2)}_{A_1}n^{(\text{o})}_{A_1A_2} + n^{(2)}_{A_1A_2}n^{(\text{o})}_{A_1A_2} - n^{(2)}_{A_1}n^{(\text{o})}_{A_2} \right\}\,, \label{eq:ovthrdeq} \\[1.0ex] \frac{\text{d} n^{(2)}_{A_2}}{\text{d} t} & = \pi^{(22)}\left\{ n^{(2)}_{A_2}n^{(2)}_{A_1A_2} + (n^{(2)}_{A_1A_2})^2 - n^{(2)}_{A_1}n^{(2)}_{A_2} \right\} \nonumber\\[0.0ex] & + \pi^{(2\text{o})}\left\{\frac{3}{2}n^{(2)}_{A_1A_2}n^{(\text{o})}_{A_2}-\frac{1}{2}n^{(2)}_{A_2}n^{(\text{o})}_{A_1A_2}+n^{(2)}_{A_1A_2}n^{(\text{o})}_{A_1A_2}-n^{(2)}_{A_2}n^{(\text{o})}_{A_1} \right\}\,, \label{eq:ovfrtheq}\\[1.0ex] \frac{\text{d} n^{(\text{o})}_{A_1}}{\text{d} t} & = \pi^{(\text{oo})}\left\{ n^{(\text{o})}_{A_1}n^{(\text{o})}_{A_1A_2} + (n^{(\text{o})}_{A_1A_2})^2 - n^{(\text{o})}_{A_1}n^{(\text{o})}_{A_2}\right\} 
\nonumber\\[0.0ex] & + \pi^{(\text{o}1)}\left\{ \frac{3}{2}n^{(\text{o})}_{A_1A_2}n^{(1)}_{A_1} - \frac{1}{2}n^{(\text{o})}_{A_1}n^{(1)}_{A_1A_2} + n^{(\text{o})}_{A_1A_2}n^{(1)}_{A_1A_2} - n^{(\text{o})}_{A_1}n^{(1)}_{A_2} \right\}\phantom{\,,} \nonumber\\[0.0ex] & + \pi^{(\text{o}2)}\left\{ \frac{3}{2}n^{(\text{o})}_{A_1A_2}n^{(2)}_{A_1} - \frac{1}{2}n^{(\text{o})}_{A_1}n^{(2)}_{A_1A_2} + n^{(\text{o})}_{A_1A_2}n^{(2)}_{A_1A_2} - n^{(\text{o})}_{A_1}n^{(2)}_{A_2} \right\}\,, \label{eq:ovfitheq}\\[1.0ex] \frac{\text{d} n^{(\text{o})}_{A_2}}{\text{d} t} & = \pi^{(\text{oo})}\left\{ n^{(\text{o})}_{A_2}n^{(\text{o})}_{A_1A_2} + (n^{(\text{o})}_{A_1A_2})^2 - n^{(\text{o})}_{A_1}n^{(\text{o})}_{A_2} \right\} \nonumber\\[0.0ex] & + \pi^{(\text{o}1)}\left\{\frac{3}{2}n^{(\text{o})}_{A_1A_2}n^{(1)}_{A_2}-\frac{1}{2}n^{(\text{o})}_{A_2}n^{(1)}_{A_1A_2}+n^{(\text{o})}_{A_1A_2}n^{(1)}_{A_1A_2}-n^{(\text{o})}_{A_2}n^{(1)}_{A_1} \right\}\phantom{\,,}\nonumber\\[0.0ex] & + \pi^{(\text{o}2)}\left\{\frac{3}{2}n^{(\text{o})}_{A_1A_2}n^{(2)}_{A_2}-\frac{1}{2}n^{(\text{o})}_{A_2}n^{(2)}_{A_1A_2}+n^{(\text{o})}_{A_1A_2}n^{(2)}_{A_1A_2}-n^{(\text{o})}_{A_2}n^{(2)}_{A_1} \right\}\,. \label{eq:ovsitheq} \end{alignat} In analogy with sect.~4, we can show that these admit a symmetric steady solution $\tilde n = \{\tilde n^{(1)}_{A_1},\tilde n^{(1)}_{A_2},\tilde n^{(2)}_{A_1},$ $\tilde n^{(2)}_{A_2},\tilde n^{(\text{o})}_{A_1},\tilde n^{(\text{o})}_{A_2}\}$ with $\tilde n^{(1)}_{A_1} = \tilde n^{(2)}_{A_2} = x$, $\tilde n^{(1)}_{A_2}=\tilde n^{(2)}_{A_1} = y$ and $\tilde n^{(\text{o})}_{A_1} = \tilde n^{(\text{o})}_{A_2}=z$. Moreover, here too there exists a finite critical threshold $\omega_\text{c}$, such that the symmetric steady solution is stable for $\omega<\omega_\text{c}$ and unstable for $\omega>\omega_\text{c}$.
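As a quick consistency check on the pairing probabilities entering these MFEs, one can verify with exact rational arithmetic that they exhaust all possible pairings for any $\omega$; a minimal sketch:

```python
from fractions import Fraction as F

def pairing_probs(w):
    """Pairing probabilities pi^(ik) as functions of omega = N_ov/N_in."""
    pi_11 = pi_22 = F(1) / ((1 + w) * (2 + w))
    pi_1o = pi_2o = w / ((1 + w) * (2 + w))
    pi_o1 = pi_o2 = w / (2 + w)**2
    pi_oo = w**2 / (2 + w)**2
    return pi_11, pi_1o, pi_22, pi_2o, pi_oo, pi_o1, pi_o2

for w in (F(1, 10), F(1, 3), F(2)):
    probs = pairing_probs(w)
    assert sum(probs) == 1            # all pairings accounted for, exactly
    pi_11, pi_1o, *_ = probs
    assert pi_1o / pi_11 == w         # omega = pi^(1o)/pi^(11), as noted above
```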
In particular, if the game starts with initial conditions \begin{alignat}{3} \begin{array}{lll} n^{(1)}_{A_1}(0) = 1-\epsilon\,,& \qquad n^{(2)}_{A_1}(0) = 0\,, & \qquad n^{(\text{o})}_{A_1}(0) = 1/2\,,\\[2.0ex] n^{(1)}_{A_2}(0) = \epsilon\,,& \qquad n^{(2)}_{A_2}(0) = 1\,, & \qquad n^{(\text{o})}_{A_2}(0) = 1/2\,, \end{array} \label{eq:ovinitcond} \end{alignat} we find that for $\omega<\omega_\text{c}$ the system relaxes to a stable symmetric equilibrium with $A_1$ and $A_2$ prevailing respectively in ${\cal C}^{(1)}$ and ${\cal C}^{(2)}$, while for $\omega>\omega_\text{c}$ the system converges to global consensus on $A_2$, due to the symmetry breaking seeded by the initial contamination $\epsilon$. Now, as a consequence of the exchange symmetry of our \emph{ansatz}, the unknown density values $x,y,z$ are fully determined by the algebraic equations \begin{align} \label{eq:ov1eq} & x(1-x-y) + (1-x-y)^2 - xy \nonumber\\[0.0ex] & \hskip 2.0cm + \omega\left\{\frac{3}{2}(1-x-y)z - \frac{1}{2}x(1-2z) + (1-x-y)(1-2z) -xz \right\} = 0\,, \\[1.0ex] \label{eq:ov2eq} & y(1-x-y) + (1-x-y)^2 - xy \nonumber\\[1.0ex] & \hskip 2.0cm + \omega\left\{\frac{3}{2}(1-x-y)z - \frac{1}{2}y(1-2z) + (1-x-y)(1-2z) -yz \right\} = 0\,, \\[1.0ex] & \omega(z^2 - 3z + 1) + \left\{\frac{3}{2}(x+y)(1-2z) - z(1-x-y) + 2(1-x-y)(1-2z) - (x+y)z \right\} = 0\,. \label{eq:ov3eq} \end{align} Again we let $u = x-y$ and $v = 1-x-y$. Then we observe that adding and subtracting eqs.~(\ref{eq:ov1eq})--(\ref{eq:ov2eq}) yields the equivalent system \begin{align} \label{eq:ov4eq} & v(1-v) + 2v^2 - \frac{1}{2}[(1-v)^2-u^2] + \omega\left\{ 3vz - \frac{1}{2}(1-v)(1-2z) + 2v(1-2z)-(1-v)z\right\} = 0\,, \\[2.0ex] \label{eq:ov5eq} & uv + \omega\left\{ -\frac{1}{2}u(1-2z) - uz \right\} = uv -\frac{1}{2}\omega u = 0\,,\\[2.0ex] \label{eq:ov6eq} & \omega(z^2-3z+1)+\left\{\frac{3}{2}(1-v)(1-2z)-zv+2v(1-2z)-(1-v)z\right\} = 0\,. \end{align} Eq.~(\ref{eq:ov5eq}) has solutions: \emph{i}) $u=0$ and \emph{ii}) $u\ne 0$, $v = \omega/2$.
Similarly to what we found in sect.~4, these hold within disjoint intervals of $\omega$. We focus first on the second solution. Specifically, since eq.~(\ref{eq:ov6eq}) depends on $v$ but not on $u$, inserting $v=\omega/2$ into it immediately yields an equation for $z$ alone, namely \begin{equation} z^2 - \left(\frac{7}{2} + \frac{4}{\omega}\right)z + \frac{5}{4} + \frac{3}{2\omega} = 0\,. \end{equation} Its physically acceptable root is \begin{equation} z(\omega) = \frac{7}{4} + \frac{2}{\omega} - \frac{1}{4}\frac{\sqrt{29\omega^2 + 88\omega + 64}}{\omega}\,. \label{eq:ovzsol} \end{equation} Despite the presence of inverse powers of $\omega$, $z(\omega)$ always remains finite, as can be seen by expanding the square root on the right hand side in a Taylor series. Indeed, we have $\lim_{\omega\to 0^+}z(\omega) = 3/8$. By inserting the values just determined for $v(\omega)$ and $z(\omega)$ into eq.~(\ref{eq:ov4eq}), we get \begin{equation} u(\omega) = \pm\sqrt{1 + \omega - \omega^2 - \frac{1}{4}\omega\Omega}\,, \end{equation} with $\Omega = \sqrt{29\omega^2 + 88\omega + 64}$. In order for $u(\omega)$ to be real, we need $0\le\omega\le \hat\omega = 2\sqrt{5}-4=0.472136\ldots$. More precisely, it can be shown that the equation $u(\omega)=0$ is equivalent to a quartic equation in $\omega$ with four real simple roots. Among these, only $\hat \omega$ is positive. For $\omega>\hat\omega$, {\it i.e.\ } for $u=0$, eq.~(\ref{eq:ovzsol}) no longer holds. In this region the unknowns $v$ and $z$ are jointly determined by eqs.~(\ref{eq:ov4eq}) and (\ref{eq:ov6eq}). These admit constant solutions, {\it i.e.\ } solutions not depending on $\omega$. Indeed, $z$ and $v$ are separately determined by $z^2-3z+1=0$ and $v(1-v) + 2v^2 - (1-v)^2/2=0$, yielding respectively $z=(3-\sqrt{5})/2$ and $v=1/(2+\sqrt{5})$.
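The two branches just derived can be cross-checked numerically. The sketch below verifies that the radicand of $u(\omega)$ vanishes at $\hat\omega = 2\sqrt{5}-4$, that $z(\omega)$ and $v(\omega)=\omega/2$ connect continuously to the constant values of the $u=0$ branch there, and that the resulting densities solve eqs.~(\ref{eq:ov1eq})--(\ref{eq:ov3eq}) at a sample value of $\omega$:

```python
import math

def Omega(w): return math.sqrt(29*w*w + 88*w + 64)
def z_of(w):  return 7/4 + 2/w - Omega(w) / (4*w)   # overlap density z(omega)
def u2_of(w): return 1 + w - w*w - w*Omega(w)/4     # radicand of u(omega)

w_hat = 2*math.sqrt(5) - 4
# u vanishes at w_hat, where the two branches join continuously
assert abs(u2_of(w_hat)) < 1e-12
assert abs(z_of(w_hat) - (3 - math.sqrt(5))/2) < 1e-12   # z -> (3-sqrt5)/2
assert abs(w_hat/2 - 1/(2 + math.sqrt(5))) < 1e-12       # v -> 1/(2+sqrt5)

# residuals of the steady-state equations at a sample omega < w_hat,
# using v = 1-x-y throughout
w = 0.3
v, z, u = w/2, z_of(w), math.sqrt(u2_of(w))
x, y = (1 - v + u)/2, (1 - v - u)/2
r1 = x*v + v*v - x*y + w*(1.5*v*z - 0.5*x*(1-2*z) + v*(1-2*z) - x*z)
r2 = y*v + v*v - x*y + w*(1.5*v*z - 0.5*y*(1-2*z) + v*(1-2*z) - y*z)
r3 = w*(z*z - 3*z + 1) + (1.5*(x+y)*(1-2*z) - z*v + 2*v*(1-2*z) - (x+y)*z)
assert max(abs(r1), abs(r2), abs(r3)) < 1e-12
```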
With some algebra we find that the symmetric steady solution, corresponding to initial conditions as specified in eq.~(\ref{eq:ovinitcond}) with $\epsilon=0$, is given by \begin{equation} \left\{\begin{array}{ll} \tilde n^{(i)}_{A_i}(\omega) & \hskip -0.25cm = \dfrac{1}{2}\left\{ 1 + \sqrt{1+\omega - \omega^2 - \dfrac{\omega}{4}\Omega}-\dfrac{\omega}{2}\right\}\,,\\[2.0ex] \tilde n^{(i)}_{A_{3-i}}(\omega) & \hskip -0.25cm = \dfrac{1}{2}\left\{ 1 - \sqrt{1+\omega - \omega^2 - \dfrac{\omega}{4}\Omega}-\dfrac{\omega}{2}\right\}\,,\\[2.0ex] \tilde n^{(\text{o})}_{A_i}(\omega) & \hskip -0.25cm = \dfrac{7}{4} + \dfrac{2}{\omega}-\dfrac{1}{4}\dfrac{\Omega}{\omega}\,, \end{array}\right.\qquad \text{ for }\ \omega\le\hat\omega \ \text{ and } \ i=1,2\,, \label{eq:ovstabone} \end{equation} and \begin{equation} \tilde n^{(i)}_{A_k}(\omega) = \frac{3-\sqrt{5}}{2}\,,\qquad \text{ for } \ \omega>\hat\omega\,, \ \ i=1,2,\text{o} \ \text{ and } \ k=1,2\,. \label{eq:ovstabtwo} \end{equation} In Fig.~\ref{fig:ovstability} (\emph{left}) we plot the symmetric steady densities in ${\cal C}^{(1)}_\text{in}$ and ${\cal C}_\text{ov}$ vs. $\omega$. Results for $n^{(i)}_{A_k}$, $i,k=1,2$, are qualitatively similar to those reported in Fig.~\ref{fig:ppmstability}. Also in this case we see that $\tilde n^{(1)}_{A_1}(\omega)$, $\tilde n^{(1)}_{A_2}(\omega)$ and $\tilde n^{(\text{o})}_{A_k}(\omega)$ all have discontinuous first derivatives at $\omega=\hat\omega$. \begin{figure}[t!] \centering \hskip -0.8cm\includegraphics[width=0.48\textwidth]{./OVsymsol.pdf} \hskip 0.2cm\includegraphics[width=0.48\textwidth]{./OVeigen.pdf} \vskip -0.2cm \caption{\footnotesize (\emph{left}) Symmetric steady solution to MFEs on a network with two overlapping cliques. (\emph{right}) Eigenvalues of the linearized MFEs.
The critical point $\omega_\text{c}$ is denoted by a star.} \label{fig:ovstability} \vskip -0.1cm \end{figure} \subsection{Stability of the symmetric steady solution} The existence of a critical threshold $\omega_\text{c}$ above which the symmetric steady solution $\tilde n$ becomes unstable is again revealed by a stability analysis similar to that performed in sect.~4.1. The algebra is just a bit harder here. In particular, although the game rules are unchanged and the network is still made of two interacting communities, the state space of the system is now larger: we have six coupled equations in six unknown variables. As a result, the stability matrix is now a $6\times 6$ matrix, with entries $\Lambda^{(i,{\scriptscriptstyle X})}_{(k,{\scriptscriptstyle Y})} = \partial f^{(i)}_{\scriptscriptstyle X}/\partial n^{(k)}_{\scriptscriptstyle Y}(\tilde n)$ corresponding to $i,k = 1,2,\text{o}$ and $X,Y = A_1,A_2$. The lack of a direct interaction between agents belonging to ${\cal C}^{(1)}_\text{in}$ and ${\cal C}^{(2)}_\text{in}$ makes some of these matrix elements vanish.
Concretely, we find \begingroup\makeatletter\def\f@size{9}\check@mathfonts \begin{alignat}{3} \rho_1\Lambda^{(1,{\scriptscriptstyle A_1})}_{(1,{\rm\scriptscriptstyle A_1})} & = - 1 - \frac{3}{2}\omega + \frac{\omega}{2}\tilde n^{(\text{o})}_{A_2}\,, & \rho_1\Lambda^{(1,{\rm\scriptscriptstyle A_1})}_{(1,{\rm\scriptscriptstyle A_2})} & = -2 - \omega - \frac{\omega}{2} \tilde n^{(\text{o})}_{A_1} + 2\tilde n^{(1)}_{A_2} + \omega \tilde n^{(\text{o})}_{A_2}\,,\\[1.1ex] \rho_1\Lambda^{(1,{\rm\scriptscriptstyle A_1})}_{(2,{\rm\scriptscriptstyle A_1})} & = 0\,, & \rho_1\Lambda^{(1,{\rm\scriptscriptstyle A_1})}_{(2,{\rm\scriptscriptstyle A_2})} & = 0\,, \\[1.1ex] \rho_1\Lambda^{(1,{\rm\scriptscriptstyle A_1})}_{(\text{o},{\rm\scriptscriptstyle A_1})} & = \frac{1}{2}\omega -\frac{\omega}{2}\tilde n^{(1)}_{A_2}\,, & \rho_1\Lambda^{(1,{\rm\scriptscriptstyle A_1})}_{(\text{o},{\rm\scriptscriptstyle A_2})} & = -\omega + \frac{\omega}{2}\tilde n^{(1)}_{A_1}+\omega\tilde n^{(1)}_{A_2}\,, \\[1.1ex] \rho_1\Lambda^{(1,{\rm\scriptscriptstyle A_2})}_{(1,{\rm\scriptscriptstyle A_1})} & = -2 - \omega - \frac{\omega}{2} \tilde n^{(\text{o})}_{A_2} + 2 \tilde n^{(1)}_{A_1} + \omega \tilde n^{(\text{o})}_{A_1}\,,& \hskip 0.8cm \rho_1\Lambda^{(1,{\rm\scriptscriptstyle A_2})}_{(1,{\rm\scriptscriptstyle A_2})} & = -1 -\frac{3}{2}\omega + \frac{\omega}{2} \tilde n^{(\text{o})}_{A_1}\, \\[1.1ex] \rho_1\Lambda^{(1,{\rm\scriptscriptstyle A_2})}_{(2,{\rm\scriptscriptstyle A_1})} & = 0\,, & \rho_1\Lambda^{(1,{\rm\scriptscriptstyle A_2})}_{(2,{\rm\scriptscriptstyle A_2})} & = 0\,, \\[1.1ex] \rho_1\Lambda^{(1,{\rm\scriptscriptstyle A_2})}_{(\text{o},{\rm\scriptscriptstyle A_1})} & = -\omega + \frac{\omega}{2} \tilde n^{(1)}_{A_2} + \omega\tilde n^{(1)}_{A_1}\,, & \rho_1\Lambda^{(1,{\rm\scriptscriptstyle A_2})}_{(\text{o},{\rm\scriptscriptstyle A_2})} & = \frac{\omega}{2} - \frac{\omega}{2}\tilde n^{(1)}_{A_1}\,,\\[1.1ex] \rho_1\Lambda^{(2,{\rm\scriptscriptstyle A_1})}_{(1,{\rm\scriptscriptstyle 
A_1})} & = 0\,, & \hskip 0.8cm \rho_1\Lambda^{(2,{\rm\scriptscriptstyle A_1})}_{(1,{\rm\scriptscriptstyle A_2})} & = 0\,,\\[1.1ex] \rho_1\Lambda^{(2,{\rm\scriptscriptstyle A_1})}_{(2,{\rm\scriptscriptstyle A_1})} & = - 1 - \frac{3}{2}\omega + \frac{\omega}{2}\tilde n^{(\text{o})}_{A_2}\,, & \rho_1\Lambda^{(2,{\rm\scriptscriptstyle A_1})}_{(2,{\rm\scriptscriptstyle A_2})} & = - 2 - \omega - \frac{\omega}{2} \tilde n^{(\text{o})}_{A_1} + 2\tilde n^{(2)}_{A_2} + \omega \tilde n^{(\text{o})}_{A_2}\,,\\[1.1ex] \rho_1\Lambda^{(2,{\rm\scriptscriptstyle A_1})}_{(\text{o},{\rm\scriptscriptstyle A_1})} & = \frac{1}{2}\omega -\frac{\omega}{2}\tilde n^{(2)}_{A_2}\,, & \hskip 1.0cm \rho_1\Lambda^{(2,{\rm\scriptscriptstyle A_1})}_{(\text{o},{\rm\scriptscriptstyle A_2})} & = -\omega + \frac{\omega}{2}\tilde n^{(2)}_{A_1}+\omega\tilde n^{(2)}_{A_2}\,, \\[1.1ex] \rho_1\Lambda^{(2,{\rm\scriptscriptstyle A_2})}_{(1,{\rm\scriptscriptstyle A_1})} & = 0\,, & \rho_1\Lambda^{(2,{\rm\scriptscriptstyle A_2})}_{(1,{\rm\scriptscriptstyle A_2})} & = 0\,, \\[1.1ex] \rho_1\Lambda^{(2,{\rm\scriptscriptstyle A_2})}_{(2,{\rm\scriptscriptstyle A_1})} & = -2 - \omega - \frac{\omega}{2} \tilde n^{(\text{o})}_{A_2} + 2 \tilde n^{(2)}_{A_1} + \omega \tilde n^{(\text{o})}_{A_1}\,,& \rho_1\Lambda^{(2,{\rm\scriptscriptstyle A_2})}_{(2,{\rm\scriptscriptstyle A_2})} & = -1 -\frac{3}{2}\omega + \frac{\omega}{2} \tilde n^{(\text{o})}_{A_1}\, \\[1.1ex] \rho_1\Lambda^{(2,{\rm\scriptscriptstyle A_2})}_{(\text{o},{\rm\scriptscriptstyle A_1})} & = -\omega + \frac{\omega}{2} \tilde n^{(2)}_{A_2} + \omega\tilde n^{(2)}_{A_1}\,, & \rho_1\Lambda^{(2,{\rm\scriptscriptstyle A_2})}_{(\text{o},{\rm\scriptscriptstyle A_2})} & = \frac{\omega}{2} - \frac{\omega}{2}\tilde n^{(2)}_{A_1}\,,\\[1.1ex] \rho_2\Lambda^{(\text{o},{\rm\scriptscriptstyle A_1})}_{(1,{\rm\scriptscriptstyle A_1})} & = \frac{1}{2} - \frac{1}{2}\tilde n^{(\text{o})}_{A_2}\,, & \rho_2\Lambda^{(\text{o},{\rm\scriptscriptstyle A_1})}_{(1,{\rm\scriptscriptstyle 
A_2})} & = -1 +\frac{1}{2}\tilde n^{(\text{o})}_{A_1} + \tilde n^{(\text{o})}_{A_2}\,, \\[1.0ex] \rho_2\Lambda^{(\text{o},{\rm\scriptscriptstyle A_1})}_{(2,{\rm\scriptscriptstyle A_1})} & = \frac{1}{2} - \frac{1}{2}\tilde n^{(\text{o})}_{A_2}\,, & \hskip 2.1cm \rho_2\Lambda^{(\text{o},{\rm\scriptscriptstyle A_1})}_{(2,{\rm\scriptscriptstyle A_2})} & = -1 +\frac{1}{2}\tilde n^{(\text{o})}_{A_1} + \tilde n^{(\text{o})}_{A_2}\,,\\[1.1ex] \rho_2\Lambda^{(\text{o},{\rm\scriptscriptstyle A_1})}_{(\text{o},{\rm\scriptscriptstyle A_1})} & = -3-\omega + \frac{1}{2}[\tilde n^{(1)}_{A_2} +\tilde n^{(2)}_{A_2}]\,, & \rho_2\Lambda^{(\text{o},{\rm\scriptscriptstyle A_1})}_{(\text{o},{\rm\scriptscriptstyle A_2})} & = -2 -2\omega -\frac{1}{2}[\tilde n^{(1)}_{A_1}+\tilde n^{(2)}_{A_1}] \nonumber\\[1.1ex] & & & +[\tilde n^{(1)}_{A_2}+\tilde n^{(2)}_{A_2}] + 2\omega\tilde n^{(\text{o})}_{A_2}\,, \\[1.3ex] \rho_2\Lambda^{(\text{o},{\rm\scriptscriptstyle A_2})}_{(1,{\rm\scriptscriptstyle A_1})} & = -1 + \tilde n^{(\text{o})}_{A_1} + \frac{1}{2}\tilde n^{(\text{o})}_{A_2}\,, & \rho_2\Lambda^{(\text{o},{\rm\scriptscriptstyle A_2})}_{(1,{\rm\scriptscriptstyle A_2})} & = \frac{1}{2} - \frac{1}{2}\tilde n^{(\text{o})}_{A_1}\,, \\[1.1ex] \rho_2\Lambda^{(\text{o},{\rm\scriptscriptstyle A_2})}_{(2,{\rm\scriptscriptstyle A_1})} & = -1 + \tilde n^{(\text{o})}_{A_1} + \frac{1}{2}\tilde n^{(\text{o})}_{A_2}\,, & \rho_2\Lambda^{(\text{o},{\rm\scriptscriptstyle A_2})}_{(2,{\rm\scriptscriptstyle A_2})} & = \frac{1}{2}- \frac{1}{2}\tilde n^{(\text{o})}_{A_1}\,,\\[1.1ex] \rho_2\Lambda^{(\text{o},{\rm\scriptscriptstyle A_2})}_{(\text{o},{\rm\scriptscriptstyle A_1})} & = -2-2\omega-\frac{1}{2}[\tilde n^{(1)}_{A_2}+\tilde n^{(2)}_{A_2}] & \hskip 2.1cm \rho_2\Lambda^{(\text{o},{\rm\scriptscriptstyle A_2})}_{(\text{o},{\rm\scriptscriptstyle A_2})} & = -3-\omega +\frac{1}{2}[\tilde n^{(1)}_{A_1}+\tilde n^{(2)}_{A_1}]\,, \nonumber\\[1.0ex] & + [\tilde n^{(1)}_{A_1}+\tilde n^{(2)}_{A_1}] + 2\omega\tilde 
n^{(\text{o})}_{A_1}\,, & \end{alignat} \endgroup with $\rho_1=(1+\omega)(2+\omega)=1/\pi^{(11)}=1/\pi^{(22)}$ and $\rho_2=\omega^{-1}(2+\omega)^2=1/\pi^{(\text{o}1)}=1/\pi^{(\text{o}2)}$. In order to calculate the eigenvalues $\{\lambda_k\}_{k=1}^6$ of $\Lambda$ we need to solve the secular equation $\det(\Lambda - \lambda\mathds{1})=0$. With the components of $\tilde n$ depending on $\omega$ both explicitly (in the form of direct and inverse powers of the latter) and implicitly via $\Omega$, the secular determinant turns out to be a polynomial of sixth degree in $\lambda$ with rational coefficient functions in $\omega$ and $\Omega$. Luckily, the determinant factorizes into cubic polynomials, {\it i.e.\ } we have \begin{equation} \det(\Lambda-\lambda\mathds{1}) = \frac{p_1(\lambda)p_2(\lambda)}{64(1+\omega)^4(2+\omega)^8}\,, \end{equation} with \begingroup\makeatletter\def\f@size{9}\check@mathfonts \begin{align} p_1(\lambda) & = [ 128 +512\,\omega +832\,{\omega}^{2} +704\,{\omega}^{3} +328\,{\omega}^{4} +80\,{\omega}^{5} +8\,{\omega}^{6} ]\lambda^3 \nonumber\\ & \hskip -0.5cm + [80\,\omega +200\,{\omega}^{2} +180\,{\omega}^{3} +70\,{\omega}^{4} +10\,{\omega}^{5} +(16 +56\,\omega +84\,{\omega}^{2} +66\,{\omega}^{3} +26\,{\omega}^{4} +4\,{\omega}^{5})\Omega ]\lambda^2 \nonumber\\ & \hskip -0.5cm + [32 +64\,\omega +24\,{\omega}^{2} +10\,{\omega}^{3} +15\,{\omega}^{4} +5\,{\omega}^{5} +(-8\,\omega +10\,{\omega}^{3}+4\,{\omega}^{4})\Omega +(2\,\omega+3\,{\omega}^{2} +{\omega}^{3})\,{\Omega}^{2} ]\lambda\nonumber\\ & \hskip -0.5cm + 4\,\omega\,\Omega+4\,{\omega}^{2}\,\Omega-4\,{\omega}^{3}\Omega-{\omega}^{2}{\Omega}^{2}\,, \end{align} and \begin{align} p_2(\lambda) & = [128 +512\,\omega +832\,{\omega}^{2} +704\,{\omega}^{3} +328\,{\omega}^{4} +80\,{\omega}^{5} +8\,{\omega}^{6} ]\lambda^3 \nonumber\\ & \hskip -0.5cm +[240\,\omega +760\,\omega^2 +940\,\omega^3 +570\,\omega^4 +170\,\omega^5 +20\,\omega^6 -(16\, +24\,\omega\, -12\,\omega^2\, -38\,\omega^3 -22\,\omega^4\,
-4\,\omega^5\,)\Omega ]\lambda^2 \nonumber\\ & \hskip -0.5cm +[32 +128\,\omega +300\,\omega^2 +384\,\omega^3 +213\,\omega^4 +41\,\omega^5 +(4\,\omega\, +2\,\omega^2\, -8\,\omega^3\, -4\,\omega^4\,)\Omega \nonumber\\ & \hskip -0.5cm -(2\,\omega\, +3\,\omega^2\, +\omega^3)\Omega^2 ]\lambda -(8\,\omega +52\,\omega^2 +56\,\omega^3 +16\,\omega^4) -(8\,\omega\, +5\,\omega^2\, -2\,\omega^3\, )\,\Omega +(\omega\, +\omega^2)\,\Omega^2\,. \end{align} \endgroup Therefore, the eigenvalues of $\Lambda$ are simply given by the roots of $p_1$ and $p_2$ separately. The behaviour of the eigenvalues as functions of $\omega$ is shown in Fig.~\ref{fig:ovstability} (\emph{right}). All eigenvalues are negative for sufficiently small $\omega$, hence $\tilde n$ here represents a stable multi-language steady state. As $\omega$ increases (precisely at $\omega=\omega_\text{c}$), $\lambda_1$ shifts from negative to positive values, thus triggering a sudden change of phase, with the system converging to global consensus in a finite time. For an even larger value of $\omega$ (more precisely at $\omega=\hat\omega$), $\lambda_2$ also shifts to positive values. It is possible to find analytic expressions for all the eigenvalues of $\Lambda$. Below we report only $\lambda_1$, since this is related to the critical threshold $\omega_\text{c}$.
We have \begin{equation} \lambda_1 = \frac{1}{12(1+\omega)(2+\omega)^2}\left[a + \frac{ (1+\omega)^2(2+\omega)^2\left( 3(2+\omega)\sqrt{b}+c\right)^{2/3}+d}{(1+\omega)^2(2+\omega)^2\left(3(2+\omega)\sqrt{b}+c\right)^{1/3}}\right]\,, \end{equation} with the coefficient functions $a,b,c,d$ being given respectively by \begin{align} a & = -30\,\omega-35\,{\omega}^{2}-10\,{\omega}^{3}-2\,\Omega+\omega\, \Omega+2\,{\omega}^{2}\,\Omega\,, \\[2.0ex] b & = 196608 +2162688\,\omega +10736640\,{\omega}^{2} +28446720\,{\omega}^{3} +42713856\,{\omega}^{4} +32143872\,{\omega}^{5}\nonumber\\ & -5234928\,{\omega}^{6} -36468864\,{\omega}^{7} -28560744\,{\omega}^{8} +127224\,{\omega}^{9} +10823661\,{\omega}^{10} +4106838\,{\omega}^{11}\nonumber\\ & -808959\,{\omega}^{12} -755556\,{\omega}^{13} -120300\,{\omega}^{14} -129024\,\omega\,\Omega -2365440\,{\omega}^{2}\,\Omega -9703680\,{\omega}^{3}\Omega\nonumber\\ & -16942464\,{\omega}^{4}\Omega -13454016\,{\omega}^{5}\Omega -531888\,{\omega}^{6}\Omega +9250152\,{\omega}^{7}\Omega +8316420\,{\omega}^{8}\Omega +2124942\,{\omega}^{9}\Omega\nonumber\\ & -796212\,{\omega}^{10}\Omega -426270\,{\omega}^{11}\Omega +16944\,{\omega}^{12}\Omega +21720\,{\omega}^{13}\Omega -3072\,{\Omega}^{2} -165888\,\omega{\Omega}^{2}\nonumber\\ & -440064\,{\omega}^{2}{\Omega}^{2} +521856\,{\omega}^{3}{\Omega}^{2} +2961120\,{\omega}^{4}{\Omega}^{2} +3698208\,{\omega}^{5}{\Omega}^{2} +1102452\,{\omega}^{6}{\Omega}^{2} -1830852\,{\omega}^{7}{\Omega}^{2}\nonumber\\ & -2180301\,{\omega}^{8}{\Omega}^{2} -849546\,{\omega}^{9}{\Omega}^{2} -15249\,{\omega}^{10}{\Omega}^{2} +66012\,{\omega}^{11}{\Omega}^{2} +11148\,{\omega}^{12}{\Omega}^{2} +14592\,\omega\,{\Omega}^{3}\nonumber\\ & +64896\,{\omega}^{2}{\Omega}^{3} -12672\,{\omega}^{3}{\Omega}^{3} -326976\,{\omega}^{4}{\Omega}^{3} -447696\,{\omega}^{5}{\Omega}^{3} -83904\,{\omega}^{6}{\Omega}^{3} +307404\,{\omega}^{7}{\Omega}^{3}\nonumber\\ & +309864\,{\omega}^{8}{\Omega}^{3} +116052\,{\omega}^{9}{\Omega}^{3} 
+13392\,{\omega}^{10}{\Omega}^{3} -816\,{\omega}^{11}{\Omega}^{3} +1920\,\omega\,{\Omega}^{4} +8976\,{\omega}^{2}{\Omega}^{4}\nonumber\\ & +25824\,{\omega}^{3}{\Omega}^{4} +42912\,{\omega}^{4}{\Omega}^{4} +28200\,{\omega}^{5}{\Omega}^{4} -16689\,{\omega}^{6}{\Omega}^{4} -42798\,{\omega}^{7}{\Omega}^{4} -30489\,{\omega}^{8}{\Omega}^{4}\nonumber \end{align} \begin{align} \phantom{b} & -9468\,{\omega}^{9}{\Omega}^{4} -1044\,{\omega}^{10}{\Omega}^{4} -192\,\omega\,{\Omega}^{5} -912\,{\omega}^{2}{\Omega}^{5} -1848\,{\omega}^{3}{\Omega}^{5} -1380\,{\omega}^{4}{\Omega}^{5} +1278\,{\omega}^{5}{\Omega}^{5}\nonumber\\ & +3612\,{\omega}^{6}{\Omega}^{5} +3210\,{\omega}^{7}{\Omega}^{5} +1344\,{\omega}^{8}{\Omega}^{5} +216\,{\omega}^{9}{\Omega}^{5} -12\,{\omega}^{2}{\Omega}^{6} -60\,{\omega}^{3}{\Omega}^{6} -135\,{\omega}^{4}{\Omega}^{6}\nonumber\\ & -174\,{\omega}^{5}{\Omega}^{6} -135\,{\omega}^{6}{\Omega}^{6} -60\,{\omega}^{7}{\Omega}^{6} -12\,{\omega}^{8}{\Omega}^{6}\,, \end{align} \begin{align} c & = 5184\,\omega +15264\,{\omega}^{2} +23760\,{\omega}^{3} +30960\,{\omega}^{4} +18540\,{\omega}^{5} -7838\,{\omega}^{6} -15393\,{\omega}^{7} -6810\,{\omega}^{8}\nonumber\\ & -1000\,{\omega}^{9} +576\,\Omega -1440\,\omega\,\Omega -9720\,{\omega}^{2}\,\Omega -13968\,{\omega}^{3}\Omega -7278\,{\omega}^{4}\Omega +2904\,{\omega}^{5}\Omega +6483\,{\omega}^{6}\Omega\nonumber\\ & +3402\,{\omega}^{7}\Omega +600\,{\omega}^{8}\Omega +144\,\omega\,{\Omega}^{2} +696\,{\omega}^{2}{\Omega}^{2} +660\,{\omega}^{3}{\Omega}^{2} -288\,{\omega}^{4}{\Omega}^{2} -867\,{\omega}^{5}{\Omega}^{2} -558\,{\omega}^{6}{\Omega}^{2}\nonumber\\ & -120\,{\omega}^{7}{\Omega}^{2} -24\,\omega\,{\Omega}^{3} -18\,{\omega}^{2}{\Omega}^{3} +22\,{\omega}^{3}{\Omega}^{3} +45\,{\omega}^{4}{\Omega}^{3} +30\,{\omega}^{5}{\Omega}^{3} +8\,{\omega}^{6}{\Omega}^{3} -8\,{\Omega}^{3}\,, \end{align} \vskip -0.0cm \noindent and \vskip -0.7cm \begin{align} d & = -768 -5376\,\omega -15312\,{\omega}^{2} -22752\,{\omega}^{3}
-16760\,{\omega}^{4} -440\,{\omega}^{5} +10835\,{\omega}^{6} +10180\,{\omega}^{7} +4571\,{\omega}^{8}\nonumber\\ & +1054\,{\omega}^{9} +100\,{\omega}^{10} +384\,\omega\,\Omega +1424\,{\omega}^{2}\,\Omega +1656\,{\omega}^{3}\Omega -308\,{\omega}^{4}\Omega -2614\,{\omega}^{5}\Omega -2792\,{\omega}^{6}\Omega\nonumber\\ & -1438\,{\omega}^{7}\Omega -376\,{\omega}^{8}\Omega -40\,{\omega}^{9}\Omega +16\,{\Omega}^{2} +80\,\omega\,{\Omega}^{2} +192\,{\omega}^{2}{\Omega}^{2} +300\,{\omega}^{3}{\Omega}^{2} +331\,{\omega}^{4}{\Omega}^{2} +252\,{\omega}^{5}{\Omega}^{2}\nonumber\\ & +123\,{\omega}^{6}{\Omega}^{2} +34\,{\omega}^{7}{\Omega}^{2} +4\,{\omega}^{8}{\Omega}^{2}\,. \end{align} Due to the complex algebraic structure of $\lambda_1$, it is not possible to solve the equation $\lambda_1(\omega)=0$ exactly. We find numerically $\omega_\text{c} = 0.26065807\ldots$ and accordingly $\gamma_\text{out/in,\,c} = \omega_\text{c}/(2+\omega_\text{c}) = 0.1153019\ldots$. A comparison with the results of sect.~4 shows that the critical connectedness on the network with two overlapping cliques is rather close to that observed in the PPM. This leads us to conclude that, as far as the stochastic dynamics of the NG is concerned, a few agents with many links to more than one community create a tie comparable to that of many agents who belong tightly to a single community and have sparse links to the other. \subsection{Finite size effects} In Fig.~\ref{fig:OVsimul} we show the behaviour of the bounded time to consensus $\tilde T_\text{cons}(N,\omega)$ as a function of $\omega$, as obtained from Monte Carlo simulations on finite networks with $N=1000,\,2000,\,4000$ (the exit threshold is set to~$100\cdot N$, as in eq.~(\ref{eq:TconsPPM})). Similarly to Fig.~\ref{fig:PPMsimul} (\emph{right}), $\tilde T_\text{cons}$ increases as $\omega$ decreases and, in close analogy, the rise is steeper for larger values of $N$.
The main (and only) difference is that the appearance of long-lasting metastable states occurs here for values of $\omega$ which are about twice the values of $\nu$ observed therein, in accordance with $\omega_\text{c}\simeq 2\nu_\text{c}$. This confirms the validity of the analysis we made in the previous pages. We conclude this section by commenting that since long-lasting multi-language metastable states are observed in a finite volume only for $\gamma_\text{out/in}$ below its critical threshold, having $\gamma_\text{out/in}(\nu_\text{c})\simeq \gamma_\text{out/in}(\omega_\text{c})$ yields a qualitative indication that the NG, when used as a community detection algorithm on empirical networks, is equivalently robust (in the language of ref.~\cite{Lambiotte} we would say that it has an equivalent resolution) in finding overlapping or non-overlapping communities, $\gamma_\text{out/in}$ being the same. This result represents the main lesson we learn from the algebraic exercises of sect.~4 and the present one. \begin{figure}[t!] \centering \hskip 0.2cm\includegraphics[width=0.48\textwidth]{./TCOV_Q=02.pdf} \vskip -0.3cm \caption{\footnotesize Bounded time to consensus from simulations on a network with two overlapping cliques.} \label{fig:OVsimul} \vskip 0.1cm \end{figure} \section{Dependence of $\nu_\text{c}$ upon $Q$ in the Planted Partition Model} As seen in sect.~3, the $Q$-ary NG in the SBM develops an exponentially large number of phases as $Q$\break increases. As a result, studying the phase diagram soon becomes unfeasible. Permutational symmetry certainly helps reduce the complexity of the problem; however, the boundary surfaces of individual phases are still expected to depend on $Q$ to some degree. Moreover, phase diagrams corresponding to different values of $Q$ live in Euclidean spaces with different dimensions, hence direct comparisons are, strictly speaking, ill-defined.
In spite of this, something about the phase structure of the model can be said. For all $Q>2$ the phase diagram is bounded by coordinate planes delimiting the first orthant of the $[Q(Q-1)]$-dimensional Euclidean space. We recall that for $i\ne k$ the $(\nu^{(ik)},\nu^{(ki)})$-plane is just the set of points with coordinates $\{\nu^{(\ell,m)}=0\}_{(\ell,m)\ne (i,k),(k,i)}$. Such points correspond physically to networks where all communities but the $i$th and $k$th ones are disconnected. Therefore, on the coordinate planes we fall back into the case $Q=2$. We conclude that the geometric structure of the phase diagram, at the boundaries of its domain, is precisely that of Fig.~\ref{fig:phases}. It is more difficult to establish the structure of phases in the bulk of the phase diagram. For instance, we know from Fig.~\ref{fig:phases} that for $Q=2$ the cusp of region II represents the point with maximum Euclidean distance from the origin, for which the system does not converge to global consensus. The formalism developed in sect.~3 and App.~A allows us to show that for $Q>2$ the NG in the PPM is likewise characterized by a critical threshold $\nu_\text{c}(Q)$. The reader may wonder whether it is true as well that in the SBM for $Q>2$ the point $\nu^{(12)} = \nu^{(21)} = \nu^{(13)} = \ldots = \nu_\text{c}(Q)$ is the one with maximum Euclidean distance from the origin, for which language coexistence is observed. We shall see shortly that the answer is negative. In fact, $\nu_\text{c}(Q)$ turns out to be a monotonically decreasing function of $Q$. \begin{figure}[t!] \centering \includegraphics[width=0.87\textwidth]{./Tc_QQ.pdf} \vskip -0.3cm \caption{\footnotesize Time to consensus in the PPM with $Q>2$.} \label{fig:TCPPMQQ} \vskip -0.2cm \end{figure} For $Q>2$ the dynamics of the NG in the PPM is still described by eqs.~(\ref{eq:mfebsm}) with all $\nu^{(ik)}=\nu=p_\text{out}/p_\text{in}$.
Owing to the large number of degrees of freedom, we are unable to work out the symmetric steady state of the system exactly, as we did in sect.~4. Since $|\bar S({\cal D})|=2^Q-2$, the overall number of unknowns (or equivalently coupled equations) amounts to $Q(2^Q-2)$, {\it e.g.\ } for $Q=6$ we have 372 coupled equations. The algebraic complexity of the transition rates $\{f_D^{(i)}\}$ increases with $Q$, too. For these reasons, we can only study the system numerically. Since in the PPM all communities have the same connectedness, either each of them keeps speaking its original language forever or the system goes to global consensus, with one single name colonizing the whole network. In order to let the system reach global consensus, we introduce an asymmetric perturbation $\epsilon$ in eq.~(\ref{eq:initcond}) that favours $A_1$, namely we integrate MFEs with initial conditions \begin{equation} \left\{\begin{array}{ll} n^{(1)}_{A_1}\hskip -0.01em = 1\,,& \\[3.0ex] n^{(1)}_D = 0 & \text{ for } \ D\ne A_1\,,\end{array}\right.\quad \text{ and } \quad \left\{\begin{array}{ll} n^{(k)}_{A_k}\hskip -0.01em = 1-\epsilon\,,& \\[3.0ex] n^{(k)}_{A_1} = \epsilon\,, & \\[3.0ex] n^{(k)}_{D} = 0 & \text{ for } \ D\ne A_1,A_k\,, \end{array}\right.\,\quad \text{ for }\ k\ne 1\,. \end{equation} In Fig.~\ref{fig:TCPPMQQ}, we show the behaviour of $T_\text{cons}$ as a function of $\nu$ for $Q=3,\ldots,6$ and for several values of~$\epsilon$. Numerical integration becomes demanding for $Q\gtrsim 5$, which is why the rise of $T_\text{cons}$ in the plot at bottom right (corresponding to $Q=6$) is cut off. A glance at the four plots reveals that $\nu_\text{c}(Q)$ decreases as $Q$ increases. The dependence of $T_\text{cons}$ upon $\nu$ is still well described by eq.~(\ref{eq:model}). In Table~\ref{tab:fitparsQQ} we report estimates of the parameters $A,\nu_\text{c},\gamma$, obtained from fits to the theoretical model.
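Such fits can be reproduced with a standard nonlinear least-squares routine. The sketch below is illustrative only: it assumes, as a hypothesis, that eq.~(\ref{eq:model}) has the divergent power-law form $T_\text{cons} = A(\nu-\nu_\text{c})^{-\gamma}$, and it runs on synthetic data with known parameters rather than on the actual MFE output.

```python
# Illustrative sketch of the power-law fits behind Table "fitparsQQ".
# Hypothesis: eq. (model) is T_cons = A * (nu - nu_c)**(-gamma); data are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def t_cons(nu, A, nu_c, gamma):
    return A * (nu - nu_c) ** (-gamma)

# Synthetic "measurements" with known ground-truth parameters.
A_true, nu_c_true, g_true = 10.0, 0.10, 0.9
nu = np.linspace(0.12, 0.30, 40)
T = t_cons(nu, A_true, nu_c_true, g_true)

# Bound nu_c below min(nu) so the power-law base stays positive during the fit.
popt, _ = curve_fit(t_cons, nu, T, p0=(5.0, 0.08, 1.0),
                    bounds=([0.0, 0.0, 0.0], [100.0, 0.115, 5.0]))
A_fit, nu_c_fit, g_fit = popt
```

On noiseless data the fit recovers $(A,\nu_\text{c},\gamma)$ essentially exactly; on real $T_\text{cons}$ curves the quoted uncertainties come from the covariance matrix returned by the fitter.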
Interestingly, the critical exponent $\gamma(0^+)$ appears to be independent of $Q$ (actually, for $Q=5,6$ we observe numerical instabilities in the fits due to a variety of factors, including round-off errors arising in numerical integration of MFEs and a gradual enhancement of the systematic error associated to the finiteness of $\Delta t/T_\text{cons}$ as $Q$ increases, see sect.~4). \begin{table}[!t] \begingroup\makeatletter\def\f@size{8}\check@mathfonts \begin{center} \begin{tabular}{c|r|r|r||r|r|r} \cline{2-7}\\[-2.7ex]\cline{2-7} & \multicolumn{3} {c||} {$Q=3$} & \multicolumn{3} {c} {$Q=4$} \\ \hline $\epsilon$ & $A(\epsilon)\ \ \ $ & $\nu_\text{c}(\epsilon)\ \ \ $ & $\gamma(\epsilon)\ \ \ $ & $A(\epsilon)\ \ \ $ & $\nu_\text{c}(\epsilon)\ \ \ $ & $\gamma(\epsilon)\ \ \ $ \\ \hline\\[-2.2ex] $1.0\times 10^{-2}$ & $8.524(2)$ & $0.100257(4)$ & $0.8162(2)$ & $22.72(4)$ & $0.088284(6)$ & $0.6902(1)$ \\ $1.0\times 10^{-3}$ & $8.348(3)$ & $0.100252(4)$ & $0.8871(2)$ & $18.35(5)$ & $0.088345(6)$ & $0.8128(2)$ \\ $1.0\times 10^{-4}$ & $9.894(4)$ & $0.100249(4)$ & $0.9100(3)$ & $17.72(5)$ & $0.088357(6)$ & $0.8733(2)$ \\ $1.0\times 10^{-5}$ & $11.56(4)$ & $0.100247(4)$ & $0.9233(3)$ & $19.06(5)$ & $0.088360(6)$ & $0.9032(3)$ \\ $1.0\times 10^{-6}$ & $12.34(5)$ & $0.100245(4)$ & $0.9409(4)$ & $20.59(6)$ & $0.088361(6)$ & $0.9235(4)$ \\ $1.0\times 10^{-7}$ & $14.08(5)$ & $0.100244(4)$ & $0.9462(4)$ & $22.55(6)$ & $0.088361(6)$ & $0.9366(4)$ \\ \hline & \multicolumn{3} {c||} {$Q=5$} & \multicolumn{3} {c} {$Q=6$} \\ \hline $\epsilon$ & $A(\epsilon)\ \ \ $ & $\nu_\text{c}(\epsilon)\ \ \ $ & $\gamma(\epsilon)\ \ \ $ & $A(\epsilon)\ \ $ & $\nu_\text{c}(\epsilon)\, \ \ $ & $\gamma(\epsilon)\ \ \ $ \\ \hline\\[-2.2ex] $1.0\times 10^{-2}$ & $52.1(3)$ & $0.08027(2)$ & $0.6028(3)$ & $93.6(4)$& $0.0681(4)$ & $0.5519(1)$ \\ $1.0\times 10^{-3}$ & $52.3(4)$ & $0.08066(2)$ & $0.7045(8)$ & $75.4(4)$& $0.0689(4)$ & $0.7258(1)$ \\ $1.0\times 10^{-4}$ & $44.5(4)$ & $0.08070(2)$ & 
$0.804(8)$ & $68.0(4)$& $0.0690(4)$ & $0.8254(1)$ \\ $1.0\times 10^{-5}$ & $34.3(5)$ & $0.08067(2)$ & $0.92(2)$ & $72.9(4)$& $0.0690(4)$ & $0.8710(1)$ \\ $1.0\times 10^{-6}$ & $22.5(7)$ & $0.08064(2)$ & $1.03(6)$ & $77.3(4)$& $0.0690(4)$ & $0.9045(1)$ \\ \hline \hline \end{tabular} \vskip 0.2cm \caption{\footnotesize Estimates of fit parameters for $T_\text{cons}(\nu,\epsilon)$ for $Q>2$.\label{tab:fitparsQQ}} \end{center} \endgroup \vskip -0.4cm \end{table} In Fig.~\ref{fig:ppmcritpoints} (\emph{left}) we plot $\nu_\text{c}(Q)$ vs. $1/Q$. We observe an approximately linear behaviour, distorted, however, by a mild modulation at the largest values of $Q$. This makes it difficult to extrapolate $\nu_\text{c}(Q)$ for $Q\to\infty$ (we leave this as an open problem). The scaling law $\nu_\text{c}(Q)\cdot Q \simeq \text{const}.$ appears natural considering that communities are equally connected to each other in the PPM: since the overall number of inter-community links connecting one community to the rest of the network increases proportionally to $Q-1$ for fixed $\nu$, the critical connectedness is expected to decrease correspondingly. The absence of anomalous scaling, such as $\nu_\text{c}(Q)\cdot Q^\alpha \simeq \text{const}.$ with $\alpha>1$, is a signal of robustness of multi-language phases against variations of $Q$. The observed scaling and our previous considerations about the boundaries of the phase diagram also suggest that the joint union of all multi-language phases in the SBM is reverse convex on a large scale, {\it i.e.\ } it is progressively squeezed towards the origin of the phase space as we approach the ``bisecting'' line $\{\nu^{(ik)} = \nu\}_{i\ne k}$. However, this picture needs further investigation to be confirmed or disproved. To conclude, in Fig.~\ref{fig:ppmcritpoints} (\emph{right}) we show the behaviour of $\tilde T_\text{cons}$ as a function of $\nu$ for $Q=3$, as obtained from Monte Carlo simulations.
We chose $N$ such that communities have the same size as in Fig.~\ref{fig:PPMsimul} (which corresponds to $Q=2$). The plot shows that the crossover region lies in a range of $\nu$ that is shifted to the left with respect to Fig.~\ref{fig:PPMsimul}, in agreement with the predictions of mean field theory. \section{Effects induced by a change of the relative size of communities} Another important aspect of the problem is how, and to what extent, a change in the relative size of communities affects the geometric structure of the phase diagram, locally and/or globally. For instance, consider the SBM and let $\sigma$ be (a piece of) some critical surface separating two phases. For fixed $N$, $\sigma$ moves across the phase space as $N^{(1)}$, \ldots, $N^{(Q)}$ are modified continuously under the constraint $N^{(1)}+\ldots+N^{(Q)}=N$. The question is whether and how $\sigma$ shifts, rotates, contracts and/or expands as a function of $\{N^{(k)}\}$. The problem depends on $Q-1$ continuous variables, so answering it in full generality is not easy. To keep the theoretical framework as simple as possible, we assume $Q=2$. In this case, we have only one additional parameter. More precisely, we let $N^{(2)} = (1+\epsilon)N^{(1)}$. We assume $\epsilon>0$, hence ${\cal C}^{(1)}$ is smaller than ${\cal C}^{(2)}$. We also let $\nu_1 = p^{(12)}/p^{(11)}$ and $\nu_2 = p^{(12)}/p^{(22)}$, as in sect.~3. Since the network is no longer symmetric under exchange of community indices, $\gamma_{\text{out/in}}$ is not well defined.
The dynamics of the binary NG is still ruled by eqs.~(\ref{eq:sbmfrst})--(\ref{eq:sbmfrth}), but the probabilities of picking up an agent $x$ and a neighbour $x'$ of $x$ in one community or the other are now given by \begin{align} \pi^{(11)} & = \text{prob}\left\{x\in{\cal C}^{(1)},\ x'\in{\cal C}^{(1)}\right\} = \frac{1}{2+\epsilon}\,\frac{1}{1+\nu_1(1+\epsilon)}\,, \\[0.0ex] \pi^{(12)} & = \text{prob}\left\{x\in{\cal C}^{(1)},\ x'\in{\cal C}^{(2)}\right\} = \frac{1}{2+\epsilon}\,\frac{\nu_1(1+\epsilon)}{1+\nu_1(1+\epsilon)}\,, \\[0.0ex] \pi^{(21)} & = \text{prob}\left\{x\in{\cal C}^{(2)},\ x'\in{\cal C}^{(1)}\right\} = \frac{1+\epsilon}{2+\epsilon}\,\frac{\nu_2}{1+\epsilon+\nu_2}\,, \\[0.0ex] \pi^{(22)} & = \text{prob}\left\{x\in{\cal C}^{(2)},\ x'\in{\cal C}^{(2)}\right\} = \frac{1+\epsilon}{2+\epsilon}\,\frac{1+\epsilon}{1+\epsilon+\nu_2}\,. \end{align} The above probabilities correctly fulfill $\pi^{(11)}+\pi^{(12)}+\pi^{(21)}+\pi^{(22)}=1$. Since the exchange symmetry is explicitly broken, steady solutions to MFEs are inevitably asymmetric. As such, they are also harder to work out than for $\epsilon=0$. Accordingly, we solve MFEs by numerical integration. In Fig.~\ref{fig:phdiffsize} we show phase diagrams corresponding to $\epsilon = 0.1,0.5,1.0$. We see that region II is progressively squeezed downwards, while it simultaneously expands rightwards, as $\epsilon$ increases. If we approximate region II by a rectangle with sides at $\nu_1=\nu_{1,\text{c}}$ and $\nu_2=\nu_{2,\text{c}}$, then Fig.~\ref{fig:phdiffsize} suggests that $\nu_{1,\text{c}}\cdot N^{(2)} \simeq \text{const.}$ and $\nu_{2,\text{c}}\cdot N^{(1)} \simeq \text{const.}$ For instance, for $\epsilon=1$ we have $N^{(2)} = 2N^{(1)}$ and we find $\nu_{1,\text{c}} \simeq 0.055$, which is about half the value found for $\epsilon = 0$. 
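The normalization stated above is easy to confirm numerically; a minimal check, with parameter values that are illustrative only:

```python
# Check that pi11 + pi12 + pi21 + pi22 = 1 in the asymmetric two-community model.
# The (eps, nu1, nu2) triples below are illustrative values, not fitted ones.
def pis(eps, nu1, nu2):
    pi11 = 1.0 / (2 + eps) * 1.0 / (1 + nu1 * (1 + eps))
    pi12 = 1.0 / (2 + eps) * nu1 * (1 + eps) / (1 + nu1 * (1 + eps))
    pi21 = (1 + eps) / (2 + eps) * nu2 / (1 + eps + nu2)
    pi22 = (1 + eps) / (2 + eps) * (1 + eps) / (1 + eps + nu2)
    return pi11, pi12, pi21, pi22

for eps, nu1, nu2 in [(0.1, 0.05, 0.05), (0.5, 0.2, 0.1), (1.0, 0.055, 0.11)]:
    assert abs(sum(pis(eps, nu1, nu2)) - 1.0) < 1e-12
```

The identity holds exactly: the first two probabilities sum to $1/(2+\epsilon)$ and the last two to $(1+\epsilon)/(2+\epsilon)$, independently of $\nu_1,\nu_2$.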
The above scaling laws appear natural considering that the number of neighbours that agents in ${\cal C}^{(1)}$ have in ${\cal C}^{(2)}$ increases proportionally to $N^{(2)}$ for fixed $\nu_1$, and vice versa. Hence, $\nu_{1,\text{c}}$ ($\nu_{2,\text{c}}$) is expected to decrease as $N^{(2)}$ ($N^{(1)}$) increases. The absence of anomalous scaling, such as $\nu_{1,\text{c}}\cdot (N^{(2)})^\alpha\simeq \text{const.}$ or $\nu_{2,\text{c}}\cdot (N^{(1)})^\alpha \simeq \text{const.}$ with $\alpha>1$, is a signal of robustness of multi-language phases against variations of the relative size of communities. \begin{figure}[t!] \centering \includegraphics[width=0.45\textwidth]{./PPMnucrit.pdf} \includegraphics[width=0.45\textwidth]{./TCPPM_Q=03.pdf} \caption{\footnotesize (\emph{left}) Dependence of $\nu_\text{c}$ upon the number $Q$ of communities; (\emph{right}) Bounded time to consensus from simulations in the PPM with $Q=3$.} \label{fig:ppmcritpoints} \vskip -0.2cm \end{figure} \begin{figure}[t!] \centering \frame{\includegraphics[width=0.63\textwidth]{./phases_eps=01}}\\[2.0ex] \frame{\includegraphics[width=0.63\textwidth]{./phases_eps=05}}\\[2.0ex] \frame{\includegraphics[width=0.63\textwidth]{./phases_eps=10}}\\[5.0ex] \caption{\footnotesize Phase diagram of the NG in the SBM with $Q=2$ and $N^{(2)}=(1+\epsilon)N^{(1)}$ for $\epsilon = 0.1,0.5,1.0$.} \label{fig:phdiffsize} \end{figure} \section{Dependence of $\gamma_\text{out/in,\,c}$ upon the topology of $\{{\cal E}^{(ik)}\}$} \begin{figure}[t!] \centering \hskip 0.2cm\includegraphics[width=0.25\textwidth]{./BA_BA.jpg} \hskip 2.0cm \hskip 0.5cm \includegraphics[width=0.35\textwidth]{./BA_BA2.jpg} \vskip 0.1cm \caption{\footnotesize (\emph{left}) BA communities interconnected by scale-free links (no inter-community assortativity); (\emph{right}) BA communities interconnected by scale-free links (with inter-community assortativity).
In both cases, $N=600$, $\rho=0.1$, the size of each node is proportional to its overall degree and the spatial position of nodes represents the equilibrium configuration of the Force Atlas visualization algorithm~\cite{forceatlas}.\label{fig:banetworks}} \vskip -0.2cm \end{figure} It is well known that realistic networks are heterogeneous (node degrees display high variability). Networks typically result from growth processes where new nodes join progressively those already in place. As a result, their topology cannot be described by static functions such as $\mathfrak{p}^{(ik)}(x,y)$. In order to examine how the critical point of the multi-language phase depends on the internal topology of communities and their interconnections, we study a network model with two interacting BA communities. Specifically, we fix $N$ and consider first two disjoint BA subgraphs ${\cal G}_\text{BA}(N^{(k)},m_0,m)$~\cite{BarabasiSF}, each made of $N^{(k)}=N/2$ nodes, with $m_0$ and $m$ denoting respectively the number of initial nodes of each subgraph and the number of links each new node establishes, based on preferential attachment, when it joins the subgraph. In particular, we let $m_0=m=5$ in our numerical simulations. Then, we consider three possible definitions of ${\cal E}^{(12)}$, all relying on one parameter~$\rho$: \vskip 0.2cm \begin{itemize}[leftmargin=0.4in,rightmargin=0.0in] \item[${\cal E}^{(12)}_\text{ER\phantom{1}}$:]{we statically connect nodes belonging to different communities with probability ${p_{\rm out}}(N) = 4m\rho/N$;} \item[${\cal E}^{(12)}_\text{SF1}$:]{starting with no inter-community links, we alternately choose at random a node belonging to one community and connect it to a target node belonging to the other one. The target node is chosen using a variant of preferential attachment where only inter-community links are taken into account when defining the target-node degree distribution. 
We stop the growth process as soon as $|{\cal E}^{(12)}_\text{SF1}|=m\rho N$. We end up with ${\cal E}^{(12)}_\text{SF1}$ having a scale-free topology. Moreover, there is no inter-community assortativity, {\it i.e.\ } nodes with high inner degree in one community do not tend to attach preferentially to nodes with high inner degree in the other one;} \item[${\cal E}^{(12)}_\text{SF2}$:]{we generate inter-community links similarly to ${\cal E}^{(12)}_\text{SF1}$, the only difference being that, concerning preferential attachment, both intra- and inter-community links are now taken into account when defining the target-node degree distribution. Again, ${\cal E}^{(12)}_\text{SF2}$ develops a scale-free topology. Yet, there is inter-community assortativity in this case.} \end{itemize} \vskip -0.0cm \noindent The connectedness parameters are given by \begin{align} & \langle\kappa^{(1)}_\text{in}\rangle = \langle\kappa^{(2)}_\text{in}\rangle = \frac{2}{N}\frac{N}{2}2m = 2m\,,\\[0.0ex] & \langle\kappa^{(12)}_\text{out}\rangle_{\rm \scriptscriptstyle ER} = \langle\kappa^{(21)}_\text{out}\rangle_{\rm\scriptscriptstyle ER} = \frac{2}{N}\frac{N}{2}{p_{\rm out}}(N)\frac{N}{2} = 2m\rho\,,\\[0.0ex] & \langle\kappa^{(12)}_\text{out}\rangle_{\rm \scriptscriptstyle SF1} = \langle\kappa^{(21)}_\text{out}\rangle_{\rm\scriptscriptstyle SF1} = \langle\kappa^{(12)}_\text{out}\rangle_{\rm \scriptscriptstyle SF2} = \langle\kappa^{(21)}_\text{out}\rangle_{\rm\scriptscriptstyle SF2} = \frac{2}{N}\frac{N}{2}\frac{2}{N}|{\cal E}^{(12)}| = 2m\rho\,, \end{align} hence \begin{equation} \gamma_\text{out/in,\,ER} = \gamma_\text{out/in,\,SF1} = \gamma_\text{out/in,\,SF2} = \rho\,. \end{equation} Examples of networks with $N=600$, inter-community links generated according to ${\cal E}^{(12)}_\text{SF1}$ or ${\cal E}^{(12)}_\text{SF2}$ and $\rho=0.1$ are reported in Fig.~\ref{fig:banetworks}. The main assumption underlying mean field theory is that agents are all equivalent.
When links are heterogeneously distributed, this assumption is violated. In such a case, agents with many neighbours may (or may not) turn out to be more influential than agents with few, depending on the microscopic dynamics of the system. When this happens, MFEs lose predictive accuracy. Historically, the problem first arose in the context of epidemic spreading and was solved by Pastor-Satorras and Vespignani with the introduction of heterogeneous mean field theory~\cite{Vespignani,VespignaniTwo}. Here, agents with different degrees are treated separately. By analogy, the dynamics of the NG on heterogeneous community-based networks is expected to be accurately described by the equations \begin{equation} \frac{\text{d} n^{(i)}_{D,\kappa}}{\text{d} t} = f^{(i)}_{D,\kappa}(\bar n)\,,\quad\text{with }\ n^{(i)}_{D,\kappa} = \frac{\text{no. of agents with degree $\kappa$ and notebook $D$ belonging to ${\cal C}^{(i)}$}}{N^{(i)}}\,. \label{eq:hmfes} \end{equation} State densities $\{\smash{n^{(i)}_{D,\kappa}}\}$ represent a refinement of $\{n^{(i)}_D\}$ in that they fulfill $n^{(i)}_D = \sum_\kappa n^{(i)}_{D,\kappa}$. As such, they also fulfill finer simplex conditions $\smash{\sum_{\kappa}\sum_D n^{(i)}_{D,\kappa}=1}$ for $i=1,\ldots,Q$. The mathematical structure of $\smash{f^{(i)}_{D,\kappa}}$ is similar to that of $f^{(i)}_D$ in standard MFEs, the only difference being that each term contributing to $f^{(i)}_{D,\kappa}$ is proportional to the probability $\pi^{(ik)}_{\kappa\kappa_2}$ of picking up an agent $x$ with degree $\kappa$ in ${\cal C}^{(i)}$ and a neighbour $x'$ of $x$ with degree $\kappa_2$ in ${\cal C}^{(k)}$ for some $k,\kappa_2$. Since agents can have arbitrarily large degrees in the thermodynamic limit, the number of heterogeneous MFEs is virtually infinite. In view of this, it seems reasonable to impose an upper cut-off $\kappa_\text{max}$ on the agent degree. Numerical solutions are then expected to converge as $\kappa_\text{max}\to\infty$.
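Before turning to the simulation results, we note that the first of the three inter-community constructions above (${\cal E}^{(12)}_\text{ER}$: two BA communities joined by random links with $p_\text{out}=4m\rho/N$) can be sketched in a few lines. The snippet below is an illustrative sketch only, with a simplified pure-Python preferential-attachment step (duplicate targets are collapsed), not the code used for our simulations.

```python
# Sketch of the E^(12)_ER construction: two BA communities of N/2 nodes each,
# joined by random inter-community links so that <kappa_out> = 2*m*rho.
# Illustrative only; duplicate attachment targets are simply collapsed.
import random

def ba_edges(n, m, rng, offset=0):
    # Barabasi-Albert growth on nodes offset..offset+n-1: each new node draws
    # m targets from a degree-weighted pool (a node appears once per link).
    edges = set()
    repeated = []                                # degree-weighted target pool
    targets = list(range(offset, offset + m))    # seed nodes
    for new in range(offset + m, offset + n):
        for t in set(targets):
            edges.add((min(new, t), max(new, t)))
        repeated.extend(targets)
        repeated.extend([new] * m)
        targets = [rng.choice(repeated) for _ in range(m)]
    return edges

def two_ba_communities_er(N, m, rho, seed=0):
    rng = random.Random(seed)
    half = N // 2
    edges = ba_edges(half, m, rng) | ba_edges(half, m, rng, offset=half)
    p_out = 4 * m * rho / N                      # static ER-like inter-links
    for u in range(half):
        for v in range(half, N):
            if rng.random() < p_out:
                edges.add((u, v))
    return edges

E = two_ba_communities_er(N=600, m=5, rho=0.1, seed=1)
inter = sum(1 for u, v in E if (u < 300) != (v < 300))
# For N=600, m=5, rho=0.1 one expects about m*rho*N = 300 inter-community links.
```

With these parameters the realized inter-community degree fluctuates around the target $\langle\kappa_\text{out}\rangle = 2m\rho = 1$.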
\begin{table}[!t] \begin{center} \small \begin{tabular}{c|r|r|r} \hline \hline\\[-2.0ex] & ${\cal E}^{(12)}_\text{ER}$ & ${\cal E}^{(12)}_\text{SF1}$ & ${\cal E}^{(12)}_\text{SF2}$ \\[1.0ex] \hline\\[-2.0ex] $\rho_\text{c}$ & 0.087(1) & 0.187(5) & 0.127(4) \\[1.0ex] $\beta$ & 1.45(5) & 1.50(9) & 1.62(8) \\[0.3ex] \hline \hline \end{tabular} \caption{\footnotesize Estimates of fit parameters for $\tilde T_\text{cons}$.\label{tab:fitboundpars}} \end{center} \vskip -0.5cm \end{table} We leave for future research the precise computation of the critical connectedness $\gamma_\text{out/in,\,c}$ on heterogeneous networks via eqs.~(\ref{eq:hmfes}). Instead, we present here results obtained from numerical simulations. In particular, in Fig.~\ref{fig:nuctopol} we show the behaviour of the bounded time to consensus $\tilde T_\text{cons}$ as a function of $\rho$ for the network models introduced above. Although plots are qualitatively similar, the crossover region depends rather significantly on the topology of ${\cal E}^{(12)}$. In this range of $\rho$ we can fit data to the curve described by eq.~(\ref{eq:thmodelT}), with $\nu_\text{c}$ replaced by $\rho_\text{c}$. We report our estimates of $\rho_\text{c}$ and $\beta$ in Table~\ref{tab:fitboundpars}. Pairwise comparisons suggest the following considerations. \begin{itemize}[leftmargin=0.3in,rightmargin=0.0in] \item{The network model with ${\cal E}^{(12)}_\text{ER}$ differs from the PPM only in the internal structure of communities. A comparison of $\gamma_\text{out/in,c}$ in these models suggests that BA communities yield a more efficient opinion spread than ER ones. We know from ref.~\cite{Baronchelli:2} that $T_\text{cons}\propto N^{1.4}$ for both BA and ER networks (with no community structure). 
This is not in contradiction with our finding, which indeed concerns the effectiveness by which fluctuations break consensus within communities.} \item{A comparison of $\rho_\text{c}$ in the network models with ${\cal E}^{(12)}_\text{ER}$ and ${\cal E}^{(12)}_\text{SF1}$ suggests that BA communities yield a more efficient opinion spread when interacting via random than via scale-free links, provided the latter have no correlation with the internal degree distribution. In other words, the effectiveness by which fluctuations break local consensus is largely reduced when intra- and inter-community links are heterogeneously distributed with no correlation to each other.} \item{A comparison of $\rho_\text{c}$ in the network models with ${\cal E}^{(12)}_\text{SF1}$ and ${\cal E}^{(12)}_\text{SF2}$ shows that inter-community assortativity makes it possible to restore the effectiveness by which fluctuations break local consensus. Indeed, $\gamma_\text{out/in,SF2,c}$ is very close to the critical connectedness observed in the PPM.} \end{itemize} In the end, the variability of $\gamma_\text{out/in,\,c}$ is not dramatic. A glance at all network models we considered so far shows that, whenever $\gamma_\text{out/in}$ is well defined, its critical value always lies in the range $0.1$--$0.2$. Although it is not possible to parameterize the dependence of $\rho_\text{c}$ upon the topological structure of ${\cal E}^{(ik)}$ in a simple way, such limited variation of $\rho_\text{c}$ is a clear signal of robustness of multi-language phases under variations of the network topology. \begin{figure}[t!] \centering \includegraphics[width=0.48\textwidth]{./TCBAER.pdf} \includegraphics[width=0.48\textwidth]{./TCBABA.pdf} \includegraphics[width=0.48\textwidth]{./TCBABA2.pdf}
\caption{\footnotesize Bounded time to consensus from simulations in M$_5$.} \label{fig:nuctopol} \vskip -0.2cm \end{figure} \section{Conclusions} We studied the phase structure of the Naming Game (NG) on community-based networks. Prior to this paper it was known in the literature that communities of agents playing the NG tend to develop their own long-lasting languages when sufficiently isolated. We showed that on infinitely extended networks the latter become everlasting. In other words, communities induce genuine multi-language phases in the thermodynamic limit. Our interest in studying these was both theoretical and practical. On the theoretical side, the NG is a non-trivial agent-based model ---designed to investigate the emergence of spoken languages in human societies--- whose phase structure is fully determined by the topology of the underlying network. On the practical side, the NG is an algorithm that, within limits, makes it possible to detect communities in a given network. Either way, a first-principles analysis of which aspects of communities are more relevant or critical in order to guarantee the stability of local languages was lacking. It should be clear that studying the phase structure of the NG on community-based networks is an ill-posed problem because the concept of community is not strictly defined by itself~\cite{Fortunato:2}. ECS conditions, introduced in sect.~2, define a huge \emph{ensemble} of networks for which no universal parameterization exists. Hence, the phase diagram of the system cannot be simply explored by varying a fistful of parameters. To bypass this difficulty we considered several distinct network models. Each of them was meant to highlight the dependence of multi-language phases upon specific features of communities. We studied the phase diagram in the stochastic block model to have a clear picture of how an increase in the fraction of links connecting communities triggers a sharp transition from local to global consensus.
In connection with this, we derived exactly the cusp of the multi-language phase (region II of Fig.~\ref{fig:phases}) to show that in at least one simple case it is possible to get full insight into the phase transition. Then, we studied the NG on a network with two overlapping cliques to understand whether connectedness is more or less efficient than overlap at spreading languages across communities. We finally examined the dependence of critical thresholds upon topological properties of the network, such as the number and relative size of communities or the structure of intra-/inter-community links, to investigate whether the model displays anomalous scaling somewhere in the phase space. The overall picture emerging from our study is that multi-language phases in the NG display a high degree of robustness against changes in the network topology. The characteristic scale of the connectedness parameter $\gamma_\text{out/in}$ (sect.~2), at which metastable equilibria become stable, lies at $\gamma_\text{out/in,c}\sim 0.1$--$0.2$, depending on specific features of the underlying network model. Such large values provide a full theoretical justification for using the NG as a community detection algorithm, in accordance with the original proposal of ref.~\cite{Lu:1} and the analysis performed in ref.~\cite{Gubanov}. Connectedness and overlap seem to contribute to a similar degree to break local equilibria and make the system converge to global consensus. The phase diagram appears to scale trivially under variations of the relative size of communities. Although the geometric structure of multi-language phases becomes increasingly complex as the number of communities $Q$ increases, at least the critical point corresponding to networks with fully symmetric communities displays a natural scaling with $Q$.
Finally, the critical threshold $\gamma_\text{out/in,c}$ on two-community symmetric networks shows a mild dependence upon the topology of the intra-/inter-community links. The phase structure induced by communities in the stochastic block model is remarkably similar to that induced by competing committed groups of agents on the fully connected graph~\cite{Xie:2}. The analytic structure of the mean field equations differs between the two cases, yet their solutions bear a strong resemblance. The rationale behind such similarity is not known.
\section{Introduction} Latent Dirichlet allocation (LDA)~\citep{Blei:03} is a three-layer hierarchical Bayesian model widely used in probabilistic topic modeling, computer vision and computational biology~\citep{Blei:12}. A collection of documents can be represented as a document-word co-occurrence matrix, where each element is the count of a word in a specific document. Modeling each document as a mixture of topics and each topic as a mixture of vocabulary words, LDA assigns thematic labels to explain non-zero elements in the document-word matrix, segmenting observed words into several thematic groups called topics. From the joint probability of latent labels and observed words, existing training algorithms of LDA approximately infer the posterior probability of topic labels given observed words, and estimate multinomial parameters for document-specific topic proportions and topic distributions of vocabulary words. The time and space complexity of these training algorithms depends on the number of non-zero ($NNZ$) elements in the matrix. Probabilistic topic modeling for massive corpora has attracted intense interest recently. This research line is motivated by increasingly common massive data sets, such as online distributed texts, images and videos. Extracting and analyzing the large number of topics from these massive data sets brings new challenges to current topic modeling algorithms, particularly in computation time and memory requirements. In this paper, we focus on reducing the memory usage of topic modeling for massive corpora, because the memory limitation prohibits running existing topic modeling algorithms. For example, when the document-word matrix has $NNZ=5\times10^8$, existing training algorithms of LDA often require allocating more than $12$GB of memory including space for data and parameters. Such a topic modeling task cannot be done on a common desktop computer with $2$GB memory even if we can tolerate the slow speed of topic modeling.
Because computing the exact posterior of LDA is intractable, we must adopt approximate inference methods for training LDA. Modern approximate posterior inference algorithms for LDA fall broadly into three categories: variational Bayes (VB)~\citep{Blei:03}, collapsed Gibbs sampling (GS)~\citep{Griffiths:04}, and loopy belief propagation (BP)~\citep{Zeng:11}. We may interpret these methods within a unified message passing framework~\citep{Bishop:book}, which infers the approximate marginal posterior distribution of the topic label for each word, called a {\em message}. According to the expectation-maximization (EM) algorithm~\citep{Dempster:77}, the locally inferred messages are used to estimate the best multinomial parameters in LDA based on the maximum-likelihood (ML) criterion. VB is a variational message passing algorithm~\citep{Winn:05}, which infers the message from a factorizable variational distribution to be close in Kullback-Leibler (KL) divergence to the joint distribution. The gap between variational and true joint distributions causes VB to use computationally expensive digamma functions, introducing biases and slowness in the message updating and passing process~\citep{Asuncion:09,Zeng:11}. GS is based on a Markov chain Monte Carlo (MCMC) sampling process, whose stationary distribution is the desired joint distribution. GS usually updates its message using the sampled topic labels from previous messages, which does not keep all uncertainties of previous messages. In contrast, BP directly updates and passes the entire messages without sampling, and thus achieves a much higher topic modeling accuracy. To date, BP is very competitive in both speed and accuracy for topic modeling~\citep{Zeng:11}. Similar BP ideas have also been discussed as the zero-order approximation of the collapsed VB (CVB0) algorithm within the mean-field framework~\citep{Asuncion:09,Asuncion:10}.
However, the message passing techniques often require storing previous messages for updating and passing, which leads to a high memory usage that increases linearly with the number of documents or the number of topics. To reduce the memory usage, we propose a novel algorithm for training LDA: tiny belief propagation (TBP). The basic idea of TBP is inspired by the multiplicative update rules of non-negative matrix factorization (NMF)~\citep{Lee:01}, which absorbs the message updating into the passing process without storing previous messages. Extensive experiments demonstrate that TBP enjoys a significantly lower memory usage for topic modeling of massive data sets, but achieves a comparable or even better topic modeling accuracy than VB, GS and BP. Moreover, the speed of TBP is very close to BP, which is currently the fastest batch learning algorithm for topic modeling~\citep{Zeng:11}. We also extend the proposed TBP using the block optimization framework~\citep{Yu:10} to handle the case when data cannot fit in computer memory. For example, we extend TBP to extract $10$ topics from the $7$GB PUBMED biomedical corpus using a desktop computer with $2$GB memory. There have been two straightforward machine learning strategies to process large-scale data sets: online and parallel learning schemes. On the one hand, online topic modeling algorithms such as online VB (OVB)~\citep{Hoffman:10} read massive corpora as a data stream composed of multiple smaller mini-batches. Loading each smaller mini-batch into memory, OVB optimizes LDA within the online stochastic optimization framework, theoretically converging to the batch VB's objective function. But OVB still needs to store messages for each mini-batch. When the size of mini-batch is large, the space complexity of OVB is still higher than that of the batch training algorithm TBP. In addition, the best online topic modeling performance depends highly on several heuristic parameters including the mini-batch size.
On the other hand, parallel topic modeling algorithms such as parallel GS (PGS)~\citep{Newman:09} use expensive parallel architectures with more physical memory. Indeed, PGS does not reduce the space complexity for training LDA, but it distributes massive corpora into $P$ distributed computing units, and thus requires only $1/P$ of the memory usage of GS. By contrast, the proposed TBP can reduce the space complexity for batch training LDA on a common desktop computer. Notice that we may also develop much more efficient online and parallel topic modeling algorithms based on TBP to obtain a significant speedup. The rest of the paper is organized as follows. Section~\ref{s2} compares VB, GS and BP for message passing, and analyzes their space complexity for training LDA. Section~\ref{s3} proposes the TBP algorithm to reduce the space complexity of BP, and discusses TBP's relation to the multiplicative update rules of NMF. Section~\ref{s4} shows extensive experiments on four real-world corpora. Finally, Section~\ref{s5} draws conclusions and envisions future work. \section{The Message Passing Algorithms for Training LDA} \label{s2} \begin{figure*} \centering \includegraphics[width=0.6\linewidth]{message.eps} \caption{Message passing for training LDA: (A) collapsed Gibbs sampling (GS), (B) loopy belief propagation (BP), and (C) variational Bayes (VB).} \label{message} \end{figure*} LDA allocates a set of semantic topic labels, $\mathbf{z} = \{z^k_{w,d}\}$, to explain non-zero elements in the document-word co-occurrence matrix $\mathbf{x}_{W \times D} = \{x_{w,d}\}$, where $1 \le w \le W$ denotes the word index in the vocabulary, $1 \le d \le D$ denotes the document index in the corpus, and $1 \le k \le K$ denotes the topic index. Usually, the number of topics $K$ is provided by users. The topic label satisfies $z^k_{w,d} \in \{0,1\}, \sum_{k=1}^K z^k_{w,d} = 1$.
After inferring the topic labeling configuration over the document-word matrix, LDA estimates two matrices of multinomial parameters: document-specific topic proportions $\boldsymbol{\theta}_{K \times D} = \{\theta_{\cdot,d}\}$ and topic distributions over the fixed vocabulary $\boldsymbol{\phi}_{W \times K} = \{\phi_{\cdot,k}\}$, where $\theta_{\cdot, d}$ is a $K$-tuple vector and $\phi_{\cdot,k}$ is a $W$-tuple vector, satisfying $\sum_k \theta_{k,d} = 1$ and $\sum_w \phi_{w,k} = 1$. From the document-specific proportion $\theta_{\cdot,d}$, LDA independently generates a topic label $z^k_{\cdot,d}=1$, which is then combined with $\phi_{\cdot,k}$ to generate a word index $w$; repeated draws form the observed word counts $x_{w,d}$. Both multinomial vectors $\theta_{\cdot,d}$ and $\phi_{\cdot,k}$ are generated by two Dirichlet distributions with hyperparameters $\alpha$ and $\beta$. For simplicity, we consider the smoothed LDA with fixed symmetric hyperparameters provided by users~\citep{Griffiths:04}. To illustrate the generative process, we refer the readers to the original three-layer graphical representation for LDA~\citep{Blei:03} and the two-layer factor graph for the collapsed LDA~\citep{Zeng:11}. Recently, there have been three types of message passing algorithms for training LDA: GS, BP and VB. These message passing algorithms have space complexity as follows, \begin{align} \text{Total memory usage} = \text{data memory} + \text{message memory} + \text{parameter memory}, \end{align} where the data memory is used to store the input document-word matrix $\mathbf{x}_{W \times D}$, the message memory is allocated to store previous messages during passing, and the parameter memory is used to store two output parameter matrices $\boldsymbol{\phi}_{W \times K}$ and $\boldsymbol{\theta}_{K \times D}$. Because the input and output matrices of these algorithms are the same, we focus on comparing the message memory consumption among these message passing algorithms.
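As a quick sanity check on this accounting, the parameter term alone can be estimated in one line. The sketch below is ours; it assumes $8$-byte double-precision entries for both parameter matrices (the paper states this explicitly only for messages) and uses the PUBMED dimensions quoted in Section~3.

```python
def parameter_memory_gb(W, D, K, bytes_per_entry=8):
    # phi is W x K and theta is K x D, so K * (W + D) entries in total
    return bytes_per_entry * K * (W + D) / 2**30

# PUBMED: D = 8,200,000 documents, W = 141,043 vocabulary words, K = 10 topics
gb = parameter_memory_gb(W=141_043, D=8_200_000, K=10)
```

This reproduces the roughly $0.6$GB for $\boldsymbol{\theta}$ plus $0.01$GB for $\boldsymbol{\phi}$ mentioned later for the PUBMED experiments.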
\subsection{Collapsed Gibbs Sampling (GS)} After integrating out the multinomial parameters $\{\phi,\theta\}$, LDA becomes the collapsed LDA in the collapsed hidden variable space $\{\mathbf{z, \alpha, \beta}\}$. GS~\citep{Griffiths:04} is a Markov chain Monte Carlo (MCMC) sampling technique to infer the marginal distribution or {\em message}, $\mu_{w,d,n}(k) = p(z^k_{w,d,n}=1)$, where $1 \le n \le x_{w,d}$ is the word token index. The message update equation is \begin{align} \label{GS} \mu_{w,d,n}(k) \propto \frac{\mathbf{z}^k_{\cdot,d,-n} + \alpha}{\sum_k [\mathbf{z}^k_{\cdot,d,-n} + \alpha]} \times \frac{\mathbf{z}^k_{w,\cdot,-n} + \beta}{\sum_w [\mathbf{z}^k_{w,\cdot,-n} + \beta]}, \end{align} where $\mathbf{z}^k_{\cdot,d,-n} = \sum_{w} z^k_{w,d,-n}$, $\mathbf{z}^k_{w,\cdot,-n} = \sum_{d} z^k_{w,d,-n}$, and the notation $-n$ denotes excluding the current topic label $z^k_{w,d,n}$. After normalizing the message $\sum_{k} \mu_{w,d,n}(k) = 1$, GS draws a random number $u \sim \text{Uniform}[0,1]$ and checks which topic segment will be hit as shown in Fig.~\ref{message}A, where $K = 4$ for example. If the topic index $k=3$ is hit, then we assign $z^3_{w,d,n} = 1$. The sampled topic label will be used immediately to estimate the message for the next word token. If we view the sampled topic labels as {\em particles}, GS can be interpreted as a special case of non-parametric belief propagation~\citep{Sudderth:03}, in which only particles rather than complete messages are updated and passed at each iteration. Eq.~\eqref{GS} sweeps all word tokens for $1 \le t \le T$ training iterations until the convergence criterion is satisfied. To exclude the current topic label $z^k_{w,d,n}$ in Eq.~\eqref{GS}, we need to store all topic labels, $z^k_{w,d,n}=1, \forall w,d,n$, in memory for message passing.
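A minimal sketch of one such resampling step (our own code and variable names): \texttt{nd}, \texttt{nw} and \texttt{nk} hold the topic counts per document, per word and in total, and the current label is removed before the conditional of Eq.~\eqref{GS} is formed.

```python
import random

def gibbs_resample(z, nd, nw, nk, d, w, n, K, W, alpha, beta, rng):
    # remove the current label from all counts (the "-n" exclusion) ...
    k_old = z[(d, w, n)]
    nd[d][k_old] -= 1; nw[w][k_old] -= 1; nk[k_old] -= 1
    # ... then form the conditional of Eq. (GS); the document-side denominator
    # sum_k [nd + alpha] does not depend on k and cancels on normalisation
    weights = [(nd[d][k] + alpha) * (nw[w][k] + beta) / (nk[k] + W * beta)
               for k in range(K)]
    k_new = rng.choices(range(K), weights=weights)[0]
    z[(d, w, n)] = k_new
    nd[d][k_new] += 1; nw[w][k_new] += 1; nk[k_new] += 1

# toy corpus: 3 documents, 4 vocabulary words, 2 tokens per (d, w) pair
rng = random.Random(1)
K, W, D, alpha, beta = 2, 4, 3, 0.5, 0.01
tokens = [(d, w, n) for d in range(D) for w in range(W) for n in range(2)]
z = {t: rng.randrange(K) for t in tokens}
nd = [[0] * K for _ in range(D)]
nw = [[0] * K for _ in range(W)]
nk = [0] * K
for (d, w, n), k in z.items():
    nd[d][k] += 1; nw[w][k] += 1; nk[k] += 1
for _ in range(5):                            # five Gibbs sweeps
    for (d, w, n) in tokens:
        gibbs_resample(z, nd, nw, nk, d, w, n, K, W, alpha, beta, rng)
```

The per-token state that must be kept in memory is exactly the assignment dictionary \texttt{z}, which is what Eq.~\eqref{GSM} below accounts for.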
On a common $32$-bit desktop computer, GS generally uses the integer type ($4$ bytes) for each topic label, so the approximate message memory in bytes can be estimated by \begin{align} \label{GSM} \text{GS} = 4 \times \sum_{w,d} x_{w,d}, \end{align} where $\sum_{w,d} x_{w,d}$ is the total number of word tokens in the document-word matrix. For example, the $7$GB PUBMED corpus has $737,869,083$ word tokens, occupying around $2.75$GB message memory according to Eq.~\eqref{GSM}. Based on the inferred topic configuration $z^k_{w,d,n}$ over word tokens, the multinomial parameters can be estimated as follows, \begin{gather} \phi_{w,k} = \frac{\mathbf{z}^k_{w,\cdot,\cdot} + \beta}{\sum_{w} [\mathbf{z}^k_{w,\cdot,\cdot} + \beta]}, \\ \theta_{k,d} = \frac{\mathbf{z}^k_{\cdot,d,\cdot} + \alpha}{\sum_k [\mathbf{z}^k_{\cdot,d,\cdot} + \alpha]}. \end{gather} These equations look similar to Eq.~\eqref{GS} except including the current topic label $z^k_{w,d,n}$ in both numerator and denominator. \subsection{Loopy Belief Propagation (BP)} \label{s2.2} Similar to GS, BP~\citep{Zeng:11} performs inference in the collapsed hidden variable space of LDA, called the collapsed LDA. The basic idea is to integrate out the multinomial parameters $\{\theta,\phi\}$, and infer the marginal posterior probability in the collapsed space $\{\mathbf{z},\alpha,\beta\}$. The collapsed LDA can be represented by a factor graph, which facilitates the BP algorithm for approximate inference and parameter estimation. Unlike GS, BP infers messages, $\mu_{w,d}(k) = p(z^k_{w,d}=1)$, without sampling in order to keep all uncertainties of messages.
The message update equation is \begin{align} \label{BP} \mu_{w,d}(k)\propto\frac{\boldsymbol{\mu}_{-w,d}(k) + \alpha}{\sum_k[\boldsymbol{\mu}_{-w,d}(k) + \alpha]} \times \frac{\boldsymbol{\mu}_{w,-d}(k) + \beta}{\sum_w[\boldsymbol{\mu}_{w,-d}(k) + \beta]}, \end{align} where $\boldsymbol{\mu}_{-w,d}(k) = \sum_{-w} x_{-w,d}\mu_{-w,d}(k)$ and $\boldsymbol{\mu}_{w,-d}(k) = \sum_{-d} x_{w,-d}\mu_{w,-d}(k)$. The notations $-w$ and $-d$ denote all word indices except $w$ and all document indices except $d$. After normalizing $\sum_k \mu_{w,d}(k) = 1$, BP updates other messages iteratively. Fig.~\ref{message}B illustrates the message passing in BP when $K=4$, slightly different from GS in Fig.~\ref{message}A. Eq.~\eqref{BP} differs from Eq.~\eqref{GS} in two aspects. First, BP infers messages based on word indices rather than word tokens. Second, BP updates and passes complete messages without sampling. In this sense, BP can be viewed as a {\em soft} version of GS. Obviously, such differences give Eq.~\eqref{BP} two advantages over Eq.~\eqref{GS}. First, it keeps all uncertainties of messages for high topic modeling accuracy. Second, it scans a total of $NNZ$ word indices for message passing, which is significantly less than the total number of word tokens $\sum_{w,d} x_{w,d}$ in $\mathbf{x}$. So, BP is often faster than GS by scanning a significantly smaller number of elements ($NNZ \ll \sum_{w,d}x_{w,d}$) at each training iteration~\citep{Zeng:11}. Eq.~\eqref{BP} scans $NNZ$ in the document-word matrix for $1 \le t \le T$ training iterations until the convergence criterion is satisfied. However, BP has a higher space complexity than GS. Because BP excludes the current message $\mu_{w,d}(k)$ in the message update~\eqref{BP}, it requires storing all $K$-tuple messages.
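The update can be sketched without any LDA library (the code below is ours); the aggregate arrays are bookkeeping of our own choice, arranged so that the "$-w$" and "$-d$" exclusions of Eq.~\eqref{BP} reduce to subtractions of the stored message.

```python
import random

def bp_update(mu, x, doc_sum, word_sum, topic_sum, w, d, K, W, alpha, beta):
    # doc_sum[d][k]  = sum_w x_{w,d} mu_{w,d}(k)
    # word_sum[w][k] = sum_d x_{w,d} mu_{w,d}(k)
    # topic_sum[k]   = sum_{w,d} x_{w,d} mu_{w,d}(k)
    old, xv = mu[(w, d)], x[(w, d)]
    new = []
    for k in range(K):
        left = doc_sum[d][k] - xv * old[k] + alpha      # mu_{-w,d}(k) + alpha
        right = ((word_sum[w][k] - xv * old[k] + beta) /
                 (topic_sum[k] - doc_sum[d][k] + W * beta))
        new.append(left * right)
    # the document-side denominator sum_k [mu_{-w,d}(k) + alpha] is the same
    # for every k, so it disappears in this normalisation
    s = sum(new)
    new = [v / s for v in new]
    for k in range(K):                                  # keep aggregates in sync
        delta = xv * (new[k] - old[k])
        doc_sum[d][k] += delta; word_sum[w][k] += delta; topic_sum[k] += delta
    mu[(w, d)] = new

# tiny example: random messages over the entries of a dense 4 x 3 count matrix
rng = random.Random(2)
K, W, D, alpha, beta = 2, 4, 3, 0.1, 0.01
x = {(w, d): 1 + rng.randrange(3) for w in range(W) for d in range(D)}
mu = {}
for key in x:
    raw = [rng.random() + 0.1 for _ in range(K)]
    mu[key] = [v / sum(raw) for v in raw]
doc_sum = [[sum(x[(w, d)] * mu[(w, d)][k] for w in range(W)) for k in range(K)]
           for d in range(D)]
word_sum = [[sum(x[(w, d)] * mu[(w, d)][k] for d in range(D)) for k in range(K)]
            for w in range(W)]
topic_sum = [sum(word_sum[w][k] for w in range(W)) for k in range(K)]
for _ in range(10):                                     # asynchronous sweeps
    for (w, d) in x:
        bp_update(mu, x, doc_sum, word_sum, topic_sum, w, d, K, W, alpha, beta)
```

Note that the dictionary \texttt{mu} of $K$-tuples is precisely the storage that makes BP memory-hungry.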
On a widely-used $32$-bit desktop computer, we generally use the double type ($8$ bytes) to store all messages with the memory occupancy in bytes, \begin{align} \label{BPM} \text{BP} = 8 \times K \times NNZ, \end{align} which increases linearly with the number of topics $K$. For example, the $7$GB PUBMED corpus has $NNZ=483,450,157$. When $K=10$, BP needs around $36$GB for message passing. Notice that when $K$ is large, Eq.~\eqref{BPM} is significantly higher than Eq.~\eqref{GSM}. Based on the normalized messages, the multinomial parameters can be estimated by \begin{gather} \label{phiw} \phi_{w,k} = \frac{\boldsymbol{\mu}_{w,\cdot}(k) + \beta}{\sum_w [\boldsymbol{\mu}_{w,\cdot}(k) + \beta]}, \\ \theta_{k,d} = \frac{\boldsymbol{\mu}_{\cdot,d}(k) + \alpha}{\sum_k [\boldsymbol{\mu}_{\cdot,d}(k) + \alpha]}. \label{thetad} \end{gather} These equations look similar to Eq.~\eqref{BP} except including the current message $\mu_{w,d}(k)$ in both numerator and denominator. \subsection{Variational Bayes (VB)} Unlike BP in the collapsed space, VB~\citep{Blei:03,Winn:05} passes variational messages, $\tilde{\mu}_{w,d}(k) = \tilde{p}(z^k_{w,d}=1)$, derived from a variational distribution $\tilde{p}$ that approximates the true joint distribution $p$ by minimizing the KL divergence $KL(\tilde{p}||p)$. The variational message update equation is \begin{align} \label{VB} \tilde{\mu}_{w,d}(k) \propto \frac{\exp[\Psi(\tilde{\boldsymbol{\mu}}_{\cdot,d}(k) + \alpha)]} {\exp[\Psi(\sum_k [\tilde{\boldsymbol{\mu}}_{\cdot,d}(k) + \alpha])]} \times \frac{\tilde{\boldsymbol{\mu}}_{w,\cdot}(k) + \beta}{\sum_w [\tilde{\boldsymbol{\mu}}_{w,\cdot}(k) + \beta]}, \end{align} where $\tilde{\boldsymbol{\mu}}_{\cdot,d}(k) = \sum_{w} x_{w,d}\tilde{\mu}_{w,d}(k)$, $\tilde{\boldsymbol{\mu}}_{w,\cdot}(k) = \sum_{d} x_{w,d}\tilde{\mu}_{w,d}(k)$, and the notation $\exp$ and $\Psi$ are exponential and digamma functions, respectively.
After normalizing the variational message $\sum_k \tilde{\mu}_{w,d}(k) = 1$, VB passes this message to update other messages. There are two major differences between Eq.~\eqref{VB} and Eq.~\eqref{BP}. First, Eq.~\eqref{VB} involves computationally expensive digamma functions. Second, it includes the current variational message $\tilde{\mu}_{w,d}$ in the update equation. The digamma function significantly slows down VB, and also introduces bias in message passing~\citep{Asuncion:09,Zeng:11}. Fig.~\ref{message}C shows the variational message passing in VB, where the dashed line illustrates that the variational message is derived from the variational distribution. Because VB also stores the variational messages for updating and passing, its space complexity is the same as that of BP in Eq.~\eqref{BPM}. Based on the normalized variational messages, VB estimates the multinomial parameters as \begin{gather} \phi_{w,k} = \frac{\tilde{\boldsymbol{\mu}}_{w,\cdot}(k) + \beta}{\sum_w [\tilde{\boldsymbol{\mu}}_{w,\cdot}(k) + \beta]}, \\ \theta_{k,d} = \frac{\tilde{\boldsymbol{\mu}}_{\cdot,d}(k) + \alpha}{\sum_k [\tilde{\boldsymbol{\mu}}_{\cdot,d}(k) + \alpha]}. \end{gather} These equations are almost the same as Eqs.~\eqref{thetad} and \eqref{phiw} but using variational messages. \subsection{Synchronous and Asynchronous Message Passing} \label{s2.4} Message passing algorithms for LDA first randomly initialize messages, and then pass messages according to two schedules: the synchronous and the asynchronous update schedules~\citep{Tappen:03}. The synchronous message passing schedule uses all messages at $t-1$ training iteration to update current messages at $t$ training iteration, while the asynchronous schedule immediately uses the updated messages to update other remaining messages within the same $t$ training iteration. Empirical results demonstrate that the asynchronous schedule is slightly more efficient than the synchronous schedule~\citep{Zeng:11} for topic modeling.
However, the synchronous schedule is much easier to extend for parallel computation. GS is naturally an asynchronous message passing algorithm. The sampled topic label will immediately influence the topic sampling process at the next word token. Both synchronous and asynchronous schedules of BP work equally well in terms of topic modeling accuracy, but the asynchronous schedule converges slightly faster than the synchronous one~\citep{Elidan:06}. VB is a synchronous variational message passing algorithm, updating messages at iteration $t$ using messages at iteration $t-1$. \section{Tiny Belief Propagation} \label{s3} In this section, we propose TBP to save the message memory and data memory usage of BP in Section~\ref{s2.2}. Generally, the parameter memory of BP occupies relatively little space when the number of topics $K$ is small. For example, as far as the $7$GB PUBMED data set is concerned ($D = 8,200,000$ and $W = 141,043$), when $K=10$, the parameter $\boldsymbol{\theta}_{K \times D}$ occupies around $0.6$GB memory, while the parameter $\boldsymbol{\phi}_{W \times K}$ occupies around $0.01$GB memory. For simplicity, we assume that the parameter memory is enough for topic modeling. \subsection{Message Memory} \label{3.1} The algorithmic contribution of TBP is to reduce the message memory of BP to almost zero during the message passing process. Combining Eqs.~\eqref{BP},~\eqref{thetad} and~\eqref{phiw} yields the approximate message update equation, \begin{align} \label{TBP} \mu_{w,d}(k) \propto \phi_{w,k} \times \theta_{k,d}, \end{align} where the current message $\mu_{w,d}(k)$ is added in both numerator and denominator in Eq.~\eqref{BP}. Notice that such an approximation does not distort the message update very much because the message $\mu_{w,d}(k)$ is significantly smaller than the aggregate of other messages in both numerator and denominator. Eq.~\eqref{TBP} has the following intuitive explanation.
If the $w$th word has a higher likelihood in the topic $k$ and the topic $k$ has a larger proportion in the $d$th document, then the topic $k$ has a higher probability of being assigned to the element $x_{w,d}$, i.e., $z^k_{w,d} = 1$. The normalized message can be written as the matrix operation, \begin{align} \label{normalize} \mu_{w,d}(k) = \frac{\phi_{w,k}\theta_{k,d}}{(\boldsymbol{\phi}\boldsymbol{\theta})_{w,d}}, \end{align} where $(\boldsymbol{\phi}\boldsymbol{\theta})_{w,d}$ is the element at $\{w,d\}$ after matrix multiplication $\boldsymbol{\phi}\boldsymbol{\theta}$. Within the probabilistic framework, LDA generates the word token at index $\{w,d\}$ using the likelihood $(\boldsymbol{\phi}\boldsymbol{\theta})_{w,d}$, which satisfies $\sum_{w} (\boldsymbol{\phi}\boldsymbol{\theta})_{w,d} = 1$, so that $\sum_{w,d} (\boldsymbol{\phi}\boldsymbol{\theta})_{w,d} = D$ is a constant. Replacing the normalized messages by Eq.~\eqref{normalize}, we can re-write Eqs.~\eqref{thetad} and~\eqref{phiw} as \begin{gather} \label{phi} \phi_{w,k} \leftarrow \frac{\sum_{d}x_{w,d}[\phi_{w,k}\theta_{k,d}/(\boldsymbol{\phi}\boldsymbol{\theta})_{w,d}] + \beta} {\sum_{w,d}x_{w,d}[\phi_{w,k}\theta_{k,d}/(\boldsymbol{\phi}\boldsymbol{\theta})_{w,d}] + W\beta}, \\ \theta_{k,d} \leftarrow \frac{\sum_{w}x_{w,d}[\phi_{w,k}\theta_{k,d}/(\boldsymbol{\phi}\boldsymbol{\theta})_{w,d}] + \alpha}{\sum_{w}x_{w,d} + K\alpha}, \label{theta} \end{gather} where the denominators play normalization roles to constrain $\sum_k \theta_{k,d} = 1, \theta_{k,d} \ge 0$ and $\sum_w \phi_{w,k} = 1, \phi_{w,k} \ge 0$. We absorb the message update equation into the parameter estimation in Eqs.~\eqref{phi} and~\eqref{theta}, so that we do not need to store the previous messages during the message passing process. We refer to this matrix update algorithm as TBP.
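A minimal synchronous sweep implementing Eqs.~\eqref{phi} and~\eqref{theta} can look as follows (the code and all container names are ours): each message is recomputed on the fly via Eq.~\eqref{normalize} from the current $\boldsymbol{\phi}$ and $\boldsymbol{\theta}$, accumulated, and immediately discarded.

```python
import random

def tbp_sweep(x, phi, theta, alpha, beta, K):
    # accumulators start at the smoothing terms of Eqs. (phi) and (theta)
    phi_num = {w: [beta] * K for w in phi}
    theta_num = {d: [alpha] * K for d in theta}
    for (w, d), xv in x.items():
        dot = sum(phi[w][k] * theta[d][k] for k in range(K))  # (phi theta)_{w,d}
        for k in range(K):
            r = xv * phi[w][k] * theta[d][k] / dot   # x_{w,d} * message, never stored
            phi_num[w][k] += r
            theta_num[d][k] += r
    # normalise so that sum_w phi_{w,k} = 1 and sum_k theta_{k,d} = 1
    col = [sum(phi_num[w][k] for w in phi_num) for k in range(K)]
    for w in phi:
        phi[w] = [phi_num[w][k] / col[k] for k in range(K)]
    for d in theta:
        s = sum(theta_num[d])
        theta[d] = [v / s for v in theta_num[d]]

# toy run on a sparse 5 x 4 count matrix
rng = random.Random(3)
K, W, D = 3, 5, 4
x = {(w, d): 1 + rng.randrange(3) for w in range(W) for d in range(D)
     if rng.random() < 0.7}
phi = {w: [rng.random() + 0.1 for _ in range(K)] for w in range(W)}
col0 = [sum(phi[w][k] for w in phi) for k in range(K)]
for w in phi:
    phi[w] = [phi[w][k] / col0[k] for k in range(K)]
theta = {d: [1.0 / K] * K for d in range(D)}
for _ in range(20):
    tbp_sweep(x, phi, theta, alpha=0.1, beta=0.01, K=K)
```

In contrast to the BP sketch, the only per-iteration state here is the pair of parameter matrices, which is exactly the point of TBP.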
If we discard the hyperparameters $\alpha$ and $\beta$ in Eqs.~\eqref{theta} and~\eqref{phi}, we find that these matrix update equations look similar to the following multiplicative update rules in non-negative matrix factorization (NMF)~\citep{Lee:01}, \begin{gather} \label{nmf1} \phi_{w,k} \leftarrow \frac{\sum_{d}x_{w,d}[\phi_{w,k}\theta_{k,d}/(\boldsymbol{\phi}\boldsymbol{\theta})_{w,d}]}{\sum_{d}\theta_{k,d}}, \\ \theta_{k,d} \leftarrow \frac{\sum_{w}x_{w,d}[\phi_{w,k}\theta_{k,d}/(\boldsymbol{\phi}\boldsymbol{\theta})_{w,d}]}{\sum_{w}\phi_{w,k}}, \label{nmf2} \end{gather} where the objective of NMF is to minimize the following divergence, \begin{align} \label{divergence} D(\mathbf{x}||\boldsymbol{\phi}\boldsymbol{\theta}) = \sum_{w,d}\bigg(x_{w,d}\log\frac{x_{w,d}}{(\boldsymbol{\phi}\boldsymbol{\theta})_{w,d}} - x_{w,d} + (\boldsymbol{\phi}\boldsymbol{\theta})_{w,d}\bigg), \end{align} under the constraints $\phi_{w,k} \ge 0$ and $\theta_{k,d} \ge 0$. First, Eqs.~\eqref{theta} and~\eqref{phi} are different from Eqs.~\eqref{nmf1} and~\eqref{nmf2} in their denominators, just because LDA additionally constrains the sum of multinomial parameters to be one. Second, as far as LDA is concerned, because $\sum_{w,d}x_{w,d}$ and $\sum_{w,d} (\boldsymbol{\phi}\boldsymbol{\theta})_{w,d}$ are constants, Eq.~\eqref{divergence} is proportional to the standard Kullback-Leibler (KL) divergence, \begin{align} \label{KL} D(\mathbf{x}||\boldsymbol{\phi}\boldsymbol{\theta}) \propto KL(\mathbf{x}||\boldsymbol{\phi}\boldsymbol{\theta}) &\propto \sum_{w,d}x_{w,d}\log\frac{x_{w,d}}{(\boldsymbol{\phi}\boldsymbol{\theta})_{w,d}} \notag \\ &\propto \sum_{w,d} -x_{w,d}\log(\boldsymbol{\phi}\boldsymbol{\theta})_{w,d}.
\end{align} In conclusion, if we discard the hyperparameters in Eqs.~\eqref{phi} and~\eqref{theta}, the proposed TBP algorithm becomes a special NMF algorithm: \begin{gather} \mathbf{x} \approx \boldsymbol{\phi}\boldsymbol{\theta}, \\ \label{KL1} \min \bigg(\sum_{w,d} -x_{w,d}\log(\boldsymbol{\phi}\boldsymbol{\theta})_{w,d}\bigg), \forall x_{w,d} \ne 0, \\ \phi_{w,k} \ge 0, \sum_w \phi_{w,k} = 1, \theta_{k,d} \ge 0, \sum_k \theta_{k,d} = 1, \end{gather} where TBP focuses only on approximating the non-zero elements $x_{w,d} \ne 0$ by $\boldsymbol{\phi}\boldsymbol{\theta}$ in terms of the KL divergence. Notice that the hyperparameters play a smoothing role, avoiding zeros in the factorized matrices in Eqs.~\eqref{phi} and~\eqref{theta}; such zeros are a major reason for poor performance in predicting unseen words in the test set~\citep{Blei:03}. Conventionally, different training algorithms for LDA can be fairly compared by the perplexity metric~\citep{Blei:03,Asuncion:09,Hoffman:10}, \begin{align} \label{perplexity} \text{Perplexity} &= \exp\Bigg\{-\frac{\sum_{w,d} x_{w,d}\log(\boldsymbol{\phi}\boldsymbol{\theta})_{w,d}} {\sum_{w,d} x_{w,d}}\Bigg\}, \notag \\ &\propto \sum_{w,d} -x_{w,d}\log(\boldsymbol{\phi}\boldsymbol{\theta})_{w,d}, \end{align} which has been previously interpreted as the geometric mean of the likelihood in the probabilistic framework. Comparing~\eqref{perplexity} with~\eqref{KL}, we find that the perplexity metric can also be interpreted as a KL divergence between the document-word matrix $\mathbf{x}$ and the multiplication of the two factorized matrices $\boldsymbol{\phi}\boldsymbol{\theta}$. Because the TBP algorithm directly minimizes the KL divergence~\eqref{KL1}, it often has a much lower predictive perplexity on unseen test data than both the GS and VB algorithms of Section~\ref{s2}, which corresponds to better topic modeling accuracy. This theoretical analysis is also supported by the extensive experiments in Section~\ref{s4}.
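To make the NMF connection concrete, here is a small NumPy sketch of the multiplicative updates of Eqs.~\eqref{nmf1} and~\eqref{nmf2} together with the divergence of Eq.~\eqref{divergence} and the perplexity of Eq.~\eqref{perplexity}; the naming and the epsilon clamps are our own additions for numerical safety.

```python
import numpy as np

def kl_nmf_update(x, phi, theta):
    """One round of the multiplicative updates in Eqs. (nmf1) and (nmf2)."""
    R = x / np.maximum(phi @ theta, 1e-12)
    phi = phi * (R @ theta.T) / np.maximum(theta.sum(axis=1), 1e-12)
    R = x / np.maximum(phi @ theta, 1e-12)
    theta = theta * (phi.T @ R) / np.maximum(phi.sum(axis=0), 1e-12)[:, None]
    return phi, theta

def kl_divergence(x, phi, theta):
    """Generalized KL divergence D(x || phi theta) of Eq. (divergence)."""
    y = phi @ theta
    mask = x > 0
    return (x[mask] * np.log(x[mask] / y[mask])).sum() - x.sum() + y.sum()

def perplexity(x, phi, theta):
    """Perplexity of Eq. (perplexity): exponential of the per-token
    negative log-likelihood under the reconstruction phi @ theta."""
    y = phi @ theta
    mask = x > 0
    return np.exp(-(x[mask] * np.log(y[mask])).sum() / x.sum())
```

Each update round is non-increasing in the divergence, which is the property the comparison with TBP relies on.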
Indeed, LDA is a full Bayesian counterpart of the probabilistic latent semantic analysis (PLSA)~\citep{Hofmann:01}, which is equivalent to the NMF algorithm with the KL divergence~\citep{Gaussier:05}. Moreover, the inference objective functions of LDA and PLSA are very similar, and PLSA can be viewed as a maximum-a-posteriori (MAP) estimated LDA model~\citep{Girolami:03}. For example, two recent studies~\citep{Asuncion:09,Zeng:11} find that the CVB0 and the simplified BP algorithms for training LDA resemble the EM algorithm for training PLSA. Based on these previous works, it is a natural step to connect the NMF algorithms with those message passing algorithms for training LDA. More generally, we speculate that such intrinsic relations also exist between finite mixture models such as LDA and latent factor models such as NMF~\citep{Gershman:12}. As an example, the NMF algorithm has recently been shown, in theory, to learn topic models such as LDA in polynomial time~\citep{Arora:12}. Notice that TBP and other NMF algorithms do not need to store previous messages within the message passing framework of Section~\ref{s2}, and thus save a large amount of memory. Based on~\eqref{phi} and~\eqref{theta}, we implement two types of TBP algorithms: synchronous TBP (sTBP) and asynchronous TBP (aTBP), similar to the synchronous and asynchronous message passing algorithms in Section~\ref{s2.4}. Because the denominator of~\eqref{theta} is a constant, it does not influence the normalized message~\eqref{normalize}. Therefore, we consider only the unnormalized $\theta$ during the matrix factorization. However, the denominator of~\eqref{phi} depends on $k$, so we use a $K$-tuple vector $\boldsymbol{\lambda}_K$ to store the denominator, and use the unnormalized $\boldsymbol{\phi}$ during the matrix factorization. The normalization can be easily performed by a simple division $(\phi_{w,k} + \beta)/(\lambda_k + W\beta)$.
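A possible realization of an asynchronous reduce-and-compensate step over the unnormalized $\boldsymbol{\phi}$, $\boldsymbol{\theta}$ and the $K$-tuple $\boldsymbol{\lambda}$ is sketched below. This is our reading of the scheme, not the released code: in particular, removing each element's share using the current message before the compensating update is an assumption and may differ from the exact reduction proportion used in aTBP.

```python
import numpy as np

def atbp_sweep(rows, cols, vals, phi, theta, lam, alpha, beta):
    """One asynchronous sweep over non-zero elements (rows[i], cols[i])
    holding counts vals[i]. phi (W x K), theta (K x D) and lam (K,)
    store unnormalized sufficient statistics."""
    W = phi.shape[0]
    for w, d, c in zip(rows, cols, vals):
        # Current smoothed message for this element, cf. Eq. (normalize).
        mu = (np.maximum(phi[w] + beta, 1e-12)
              / np.maximum(lam + W * beta, 1e-12)
              * np.maximum(theta[:, d] + alpha, 1e-12))
        mu /= mu.sum()
        # Reduce: remove this element's estimated contribution.
        phi[w] -= c * mu; theta[:, d] -= c * mu; lam -= c * mu
        # Compensate: recompute the message from the reduced statistics
        # and add it back, so later elements see the change immediately.
        mu = (np.maximum(phi[w] + beta, 1e-12)
              / np.maximum(lam + W * beta, 1e-12)
              * np.maximum(theta[:, d] + alpha, 1e-12))
        mu /= mu.sum()
        phi[w] += c * mu; theta[:, d] += c * mu; lam += c * mu
```

Because both the removed and the re-added messages are normalized, the total mass stored in $\boldsymbol{\phi}$, $\boldsymbol{\theta}$ and $\boldsymbol{\lambda}$ is conserved across the sweep.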
\begin{figure*} \centering \includegraphics[width=1\linewidth]{stbp.eps} \caption{The sTBP algorithm for LDA.} \label{stbp} \end{figure*} \begin{figure*} \centering \includegraphics[width=1\linewidth]{atbp.eps} \caption{The aTBP algorithm for LDA.} \label{atbp} \end{figure*} Fig.~\ref{stbp} shows the synchronous TBP (sTBP) algorithm. We use three temporary matrices $\hat{\boldsymbol{\phi}}, \hat{\boldsymbol{\theta}}, \hat{\boldsymbol{\lambda}}$ in Line $2$ to store the numerators of~\eqref{phi} and~\eqref{theta} and the denominator of~\eqref{phi} for synchronization. From Lines $4$ to $9$, we randomly initialize $\hat{\boldsymbol{\phi}}, \hat{\boldsymbol{\theta}}, \hat{\boldsymbol{\lambda}}$ by $\verb+rand(K)+$, which generates a random integer $k, 1 \le k \le K$. At each training iteration $t, 1 \le t \le T$, we copy the temporary matrices to $\boldsymbol{\phi}, \boldsymbol{\theta}, \boldsymbol{\lambda}$ and clear the temporary matrices to zeros in Lines $12$ to $13$. Then, for each non-zero element in the document-word matrix, we accumulate the numerators of~\eqref{phi} and~\eqref{theta} and the denominator of~\eqref{phi} by the $K$-tuple message $\eta_k$ in the temporary matrices $\hat{\boldsymbol{\phi}}, \hat{\boldsymbol{\theta}}, \hat{\boldsymbol{\lambda}}$ in Lines $15$ to $19$. In the synchronous schedule, the update of elements in the factorized matrices does not influence other elements within each iteration $t$. Fig.~\ref{atbp} shows the asynchronous TBP (aTBP) algorithm. Unlike the sTBP algorithm, aTBP does not require the temporary matrices $\hat{\boldsymbol{\phi}}, \hat{\boldsymbol{\theta}}, \hat{\boldsymbol{\lambda}}$. After the random initialization in Lines $4$ to $9$, aTBP reduces the matrices $\boldsymbol{\phi}, \boldsymbol{\theta}, \boldsymbol{\lambda}$ in a certain proportion in Lines $13$ to $15$, which is compensated by the updated $K$-tuple message $\eta_k$ in Lines $17$ to $20$.
In the asynchronous schedule, the change of elements in the factorized matrices $\boldsymbol{\phi}, \boldsymbol{\theta}$ immediately influences the update of other elements. We anticipate that the asynchronous schedule propagates the influence of updated matrix elements more efficiently than the synchronous schedule. The sTBP and aTBP algorithms iterate until the convergence condition is satisfied or the maximum number of iterations $T$ is reached. The time complexity of TBP is $\mathcal{O}(NNZ \times KT)$, where $NNZ$ is the number of non-zero elements in the document-word matrix, $K$ is the number of topics and $T$ is the number of training iterations. sTBP has space complexity $\mathcal{O}(3 \times NNZ + 2 \times KW + 2 \times KD)$, while aTBP has space complexity $\mathcal{O}(3 \times NNZ + KW + KD)$. Generally, we use $3 \times NNZ$ to store the data in memory, including the indices of the non-zero elements in the document-word matrix, and use $KW + KD$ memory to store the matrices $\boldsymbol{\phi}$ and $\boldsymbol{\theta}$. Because sTBP uses the additional matrices $\hat{\boldsymbol{\phi}}, \hat{\boldsymbol{\theta}}$ for synchronization, it uses $2 \times KW + 2 \times KD$ for all matrices. \subsection{Data Memory} When the corpus is larger than the computer memory, traditional algorithms cannot train LDA due to the memory limitation. We assume that the hard disk is large enough to store the corpus file. Recently, reading data from the hard disk into memory in blocks has emerged as a promising method~\citep{Yu:10} to handle such problems. We can extend the TBP algorithms in Figs.~\ref{stbp} and~\ref{atbp} to read the corpus file in blocks, and optimize each block sequentially. For example, we can read one document of the corpus file at a time into memory and perform the TBP algorithms to refine the matrices $\{\boldsymbol{\phi}, \boldsymbol{\theta}\}$.
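A document-at-a-time reader for such block-wise training might look as follows; the three-column 'doc_id word_id count' file layout is an assumption for illustration (it matches the common bag-of-words text format) and should be adapted to the actual corpus file.

```python
def stream_documents(path):
    """Yield (doc_id, word_ids, counts) per document from a text file with
    one 'doc_id word_id count' triple per line, grouped by document."""
    current, ids, cts = None, [], []
    with open(path) as f:
        for line in f:
            if not line.strip():
                continue  # skip blank lines
            d, w, c = map(int, line.split())
            if current is not None and d != current:
                yield current, ids, cts
                ids, cts = [], []
            current = d
            ids.append(w)
            cts.append(c)
    if current is not None:
        yield current, ids, cts
```

Only one document is held in memory at a time, so the reader's footprint is independent of the corpus size.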
After scanning all documents in the corpus file, TBP finishes one iteration of training in Figs.~\ref{stbp} and~\ref{atbp}. Similarly, we can also store the matrices $\{\boldsymbol{\phi}, \boldsymbol{\theta}\}$ in files on the hard disk when they are larger than the computer memory. In such cases, TBP consumes almost no memory for topic modeling. Because loading data into memory requires additional time, TBP running on files is roughly twice as slow as TBP running in memory. For example, for the $7$GB PUBMED corpus and $K=10$, aTBP requires $259.64$ seconds to scan the whole data file on the hard disk, while it requires only $128.50$ seconds to scan the entire data in memory at each training iteration. Another choice is to extend TBP to online learning~\citep{Hoffman:10}, which partitions the whole corpus file into mini-batches and optimizes each mini-batch sequentially after a single look. Although some online topic modeling algorithms like OVB can converge to the objective of the corresponding batch topic modeling algorithm, we find that the best topic modeling accuracy depends on several heuristic parameters including the mini-batch size~\citep{Hoffman:10}. In contrast, TBP is a batch learning algorithm that can handle data larger than memory with better topic modeling accuracy. Reading block data from the hard disk into memory can also be applied to both the GS and VB algorithms for LDA. \subsection{Relationship to Previous Algorithms} The proposed TBP connects the training algorithm of LDA to the NMF algorithm with the KL divergence. The intrinsic relation between probabilistic topic models~\citep{Hofmann:01,Blei:03} and NMF~\citep{Lee:01} has been extensively discussed in several previous works~\citep{Buntine:02,Gaussier:05,Girolami:03,Wahabzada:11,Wahabzada:11a,Zeng:11}. A more recent work shows that learning topic models by NMF takes polynomial time~\citep{Arora:12}.
Generally speaking, learning topic models can be formulated within the message passing framework of Section~\ref{s2} based on the generalized expectation-maximization (EM)~\citep{Dempster:77} algorithm. The objective is to maximize the joint distribution of PLSA or LDA in two iterative steps. At the E-step, we approximately infer the marginal distribution of a topic label assigned to a word, called the {\em message}. At the M-step, based on the normalized messages, we estimate the two multinomial parameters according to the maximum-likelihood criterion. The EM algorithm iterates until it converges to a local optimum. On the other hand, the NMF algorithm with the KL divergence has a probabilistic interpretation~\citep{Lee:01}, which views the multiplication of the two factorized matrices as a normalized probability distribution. Notice that the widely-used performance measure perplexity~\citep{Blei:03,Asuncion:09,Hoffman:10} for topic models follows exactly the KL divergence used in NMF, which implies that the NMF algorithm may achieve a lower perplexity in learning topic models. Therefore, connecting NMF with LDA may inspire more efficient algorithms to learn topic models. For example, in this paper, we show that the proposed TBP can avoid storing messages to reduce the memory usage. More generally, we speculate that finite mixture models and latent factor models~\citep{Gershman:12} may share similar learning techniques, which may inspire more efficient training algorithms for each other in the near future.
\section{Experimental Results} \label{s4} \begin{table} \centering \caption{Statistics of four document data sets.} \begin{tabular}{|l|l|l|l|l|} \hline Data sets &$D$ &$W$ &$N_d$ &$W_d$ \\ \hline \hline ENRON &$39861$ &$28102$ &$160.9$ &$93.1$ \\ \hline NYTIMES &$15000$ &$84258$ &$328.7$ &$230.2$ \\ \hline PUBMED &$80000$ &$76878$ &$68.4$ &$46.7$ \\ \hline WIKI &$10000$ &$77896$ &$1013.3$ &$447.2$ \\ \hline\end{tabular} \label{dataset} \end{table} Our experiments aim to confirm the lower memory usage of TBP compared with state-of-the-art batch learning algorithms such as VB~\citep{Blei:03}, GS~\citep{Griffiths:04} and BP~\citep{Zeng:11}, as well as the online learning algorithm online VB (OVB)~\citep{Hoffman:10}. We use four publicly available document data sets~\citep{Porteous:08,Hoffman:10}: ENRON, NYTIMES, PUBMED and WIKI. Previous studies~\citep{Porteous:08} revealed that the topic modeling result is relatively insensitive to the total number of documents in the corpus. Because of the memory limitation of the GS, BP and VB algorithms, we randomly select $15000$ documents from the original NYTIMES data set, $80000$ documents from the original PUBMED data set, and $10000$ documents from the original WIKI data set for the experiments. Table~\ref{dataset} summarizes the statistics of the four data sets, where $D$ is the total number of documents in the corpus, $W$ is the number of words in the vocabulary, $N_d$ is the average number of word tokens per document, and $W_d$ is the average number of word indices per document. We randomly partition each data set into halves, one as the training set and the other as the test set. The training perplexity~\eqref{perplexity} is calculated on the training set within $500$ iterations. Usually, the training perplexity decreases as the number of training iterations increases. The algorithm is considered converged when the change of the training perplexity at successive iterations is less than a predefined threshold.
In our experiments, we set the threshold to one because the decrease of the training perplexity is very small after this threshold is satisfied. The predictive perplexity for the unseen test set is computed as follows~\citep{Asuncion:09}. On the training set, we estimate $\boldsymbol{\phi}$ from the same random initialization after $500$ iterations. For the test set, we randomly partition each document into $80\%$ and $20\%$ subsets. Fixing $\boldsymbol{\phi}$, we estimate $\boldsymbol{\theta}$ on the $80\%$ subset by the training algorithms from the same random initialization after $500$ iterations, and then calculate the predictive perplexity on the remaining $20\%$ subset, \begin{align} \text{predictive perplexity}=\exp\Bigg\{-\frac{\sum_{w,d} x_{w,d}^{20\%}\log(\boldsymbol{\phi}\boldsymbol{\theta})_{w,d}} {\sum_{w,d} x_{w,d}^{20\%}}\Bigg\}, \end{align} where $x_{w,d}^{20\%}$ denotes the word counts in the $20\%$ subset. A lower predictive perplexity indicates a better generalization ability. \subsection{Comparison with Batch Learning Algorithms} We compare TBP with other batch learning algorithms such as GS, BP and VB. For all data sets, we fix the same hyperparameters $\alpha = 2/K, \beta = 0.01$~\citep{Porteous:08}. The CPU time per iteration is measured after sweeping the entire data set. We report the average CPU time per iteration after $T = 500$ iterations, which practically ensures that GS, BP and VB converge in terms of the training perplexity. For a fair comparison, we use the same random initialization to examine all algorithms with $500$ iterations. To allow our experiments to be repeated, we have made all source codes and data sets publicly available~\citep{Zeng:12}. These experiments are run on a Sun Fire X4270 M2 server with two 6-core $3.46$ GHz CPUs and $128$ GB RAM.
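The $80\%/20\%$ per-document split used for the predictive perplexity above can be done token by token; a minimal sketch follows, where the binomial draw per word type is our implementation choice, not necessarily the exact procedure used in the experiments.

```python
from collections import Counter
import numpy as np

def split_document(word_ids, counts, frac=0.8, seed=0):
    """Split one document's tokens into an estimation subset (~frac of the
    tokens) and a held-out subset for the fold-in evaluation protocol."""
    rng = np.random.default_rng(seed)
    est, held = Counter(), Counter()
    for w, c in zip(word_ids, counts):
        k = int(rng.binomial(c, frac))  # tokens of word w kept for estimation
        if k:
            est[w] += k
        if c - k:
            held[w] += c - k
    return est, held
```

Splitting per token (rather than per word type) keeps the expected fraction of held-out tokens at $1-$frac even for documents dominated by a few frequent words.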
\begin{table}[t] \centering \caption{Message memory (MBytes) for the training set when $K=100$.} \begin{tabular}{|c|c|c|c|c|} \hline Inference methods & ENRON & NYTIMES & PUBMED & WIKI \\ \hline \hline VB and BP & $1433.6$ & $1323.1$ & $1425.1$ & $1705.0$ \\ \hline GS & $12.6$ & $9.5$ & $10.4$ & $19.3$ \\ \hline aTBP and sTBP & $0$ & $0$ & $0$ & $0$ \\ \hline \end{tabular} \label{memory} \end{table} Table~\ref{memory} compares the message memory usage during training. VB and BP consume more than $1$GB of memory for message passing when $K=100$. VB and BP even require more than $9$GB for message passing when $K=900$, because their message memory increases linearly with the number of topics $K$ in Eq.~\eqref{BPM}. In contrast, GS needs only $10\sim20$MB of memory for message passing. The advantage of GS is that its memory occupancy does not depend on the number of topics $K$ in Eq.~\eqref{GSM}. Therefore, PGS~\citep{Newman:09} can handle relatively large-scale data sets containing thousands of topics without memory problems using a parallel architecture. However, PGS still requires message memory for message passing at each distributed computing unit. Clearly, both aTBP and sTBP do not need memory space to store previous messages, and thus save a large amount of memory. This is a significant improvement, especially compared with the VB and BP algorithms. In conclusion, TBP is our first choice for batch topic modeling when memory is limited, e.g., for topic modeling of massive corpora containing a large number of topics. \begin{figure*} \centering \includegraphics[width=1\linewidth]{prediction.eps} \caption{Predictive perplexity as $K=\{100,300,500,700,900\}$ on the ENRON, NYTIMES, PUBMED and WIKI data sets. The notations $0.8x$ and $0.4x$ denote that the predictive perplexity is multiplied by $0.8$ and $0.4$, respectively.} \label{prediction} \end{figure*} Fig.~\ref{prediction} shows the topic modeling accuracy measured by the predictive perplexity on the unseen test set.
A lower predictive perplexity implies better topic modeling performance. Obviously, VB performs the worst among all batch learning algorithms, with the highest predictive perplexity. For a better illustration, we multiply VB's perplexity by $0.8$ on the ENRON and NYTIMES data sets, and by $0.4$ on the PUBMED data set, respectively. Also, we find that VB shows an overfitting phenomenon, where the predictive perplexity increases with the number of topics $K$ on all data sets. The basic reason is that VB optimizes an approximate variational distribution with a gap to the true distribution. When the number of topics is large, this gap cannot be ignored, leading to serious biases. We see that GS performs much better than VB on all data sets, because it theoretically approximates the true distribution by sampling techniques. BP always achieves a much lower predictive perplexity than GS, because it retains all the uncertainty of the messages without sampling. Both sTBP and aTBP perform equally well on the ENRON and PUBMED data sets, where they also achieve the lowest predictive perplexity among all batch training algorithms. However, BP outperforms both sTBP and aTBP on the NYTIMES and WIKI data sets. Also, aTBP outperforms both sTBP and GS, while sTBP performs slightly worse than GS. Because aTBP has consistently better topic modeling accuracy than GS on all data sets, we advocate aTBP for topic modeling with limited memory. As we discussed in Section~\ref{3.1}, BP/TBP has the lowest predictive perplexity mainly because it directly minimizes the KL divergence between $\mathbf{x}$ and $\boldsymbol{\phi}\boldsymbol{\theta}$ from the NMF perspective. \begin{figure*} \centering \includegraphics[width=1\linewidth]{time.eps} \caption{CPU time per iteration (seconds) as $K=\{100,300,500,700,900\}$ on the ENRON, NYTIMES, PUBMED and WIKI data sets.
The notation $0.3x$ denotes that the training time is multiplied by $0.3$.} \label{time} \end{figure*} Fig.~\ref{time} shows the CPU time per iteration of all algorithms. All these algorithms have a time complexity linear in $K$. VB is the most time-consuming because it involves complicated digamma function computations~\citep{Asuncion:09,Zeng:11}. For a better illustration, we multiply VB's training time by $0.3$. Although BP runs faster than GS when $K$ is small ($K \le 100$)~\citep{Zeng:11}, it is sometimes slower than GS when $K$ is large ($K \ge 100$), especially on the ENRON and PUBMED data sets. The reason is that GS often randomly samples a topic label without visiting all $K$ topics, while BP requires searching all $K$ topics for the message update. When $K$ is very large, this slight difference is amplified. sTBP runs as fast as BP in most cases, but aTBP runs slightly slower than both sTBP and BP. Comparing the two algorithms in Figs.~\ref{stbp} and~\ref{atbp}, we find that aTBP uses more division operations than sTBP at each training iteration, which accounts for aTBP's slowness. In summary, TBP has a topic modeling speed comparable to GS and BP but with reduced memory usage. \begin{figure*} \centering \includegraphics[width=1\linewidth]{convergence.eps} \caption{Training perplexity as a function of the number of iterations when $K=500$ on the ENRON, NYTIMES, PUBMED and WIKI data sets.} \label{convergence} \end{figure*} Fig.~\ref{convergence} shows the training perplexity as a function of the training iterations. All algorithms converge to a fixed point given enough training iterations. On all data sets, VB usually uses $110\sim170$ iterations, GS uses around $400\sim470$ iterations, and BP/TBP uses $180\sim230$ iterations for convergence. Although the digamma function calculation is slow, it reduces the number of training iterations VB needs to reach convergence.
GS is a stochastic sampling method, and thus requires more iterations to approximate the true distribution. Because BP/TBP is a deterministic message passing method, it needs fewer iterations to achieve convergence than GS. Overall, BP/TBP consumes the least training time until convergence according to Figs.~\ref{time} and~\ref{convergence}. \begin{figure*} \centering \includegraphics[width=1\linewidth]{topic.eps} \caption{Top ten words in ten topics extracted from the subset of PUBMED: VB (red), GS (blue), BP (black), aTBP (green) and sTBP (magenta). Most topics contain similar words but with a different order. The subjective measures~\citep{Chang:09b} such as word intrusions in topics and topic intrusions in documents are comparable among the different algorithms.} \label{topic} \end{figure*} Fig.~\ref{topic} shows the top ten words of $K=10$ topics extracted by VB (red), GS (blue), BP (black), aTBP (green) and sTBP (magenta). We see that most topics contain similar top ten words but with a different order. More formally, we can adopt subjective measures such as the word intrusion in topics and the topic intrusion in documents~\citep{Chang:09b} to evaluate the extracted topics. PUBMED is a biomedical corpus. According to our prior knowledge in the biomedical domain, we find that these topics are all meaningful. Under this condition, we advocate TBP for topic modeling with reduced memory requirements. \subsection{Comparison with Online Algorithms} \begin{figure*} \centering \includegraphics[width=0.8\linewidth]{online.eps} \caption{Predictive perplexity obtained on the complete PUBMED corpus as a function of CPU time (seconds in log scale) when $K=10$.} \label{online} \end{figure*} We compare the topic modeling performance of TBP and the state-of-the-art online topic modeling algorithm OVB~\citep{Hoffman:10}\footnote{\url{http://www.cs.princeton.edu/~blei/topicmodeling.html}} on a desktop computer with $2$GB memory.
The complete $7$GB PUBMED data set~\citep{Porteous:08} contains a total of $D = 8,200,000$ documents with a vocabulary size $W = 141,043$. Currently, only TBP and online topic modeling methods can handle this $7$GB data set using $2$GB memory. OVB~\citep{Hoffman:10} uses the following default parameters: $\kappa = 0.5$, $\tau_0 = 1024$, and the mini-batch size $S = 1024$. We randomly reserve $40,000$ documents as the test set, and use the remaining $8,160,000$ documents as the training set. The number of topics is $K=10$. The hyperparameters are $\alpha = 2/K = 0.05$ and $\beta = 0.01$. Fig.~\ref{online} shows the predictive perplexity as a function of the training time (seconds in log scale). OVB converges more slowly than TBP because it reads the input data as a data stream, discarding each mini-batch after a single look. Notice that, for each mini-batch, OVB still requires allocating message memory for computation. In contrast, TBP achieves a much lower perplexity with less memory usage and training time. There are two major reasons. First, TBP directly optimizes the perplexity in terms of the KL divergence in Eq.~\eqref{perplexity}, so that it can achieve a much lower perplexity than OVB. Second, OVB involves computationally expensive digamma functions, which significantly slow it down. We see that sTBP is a bit faster than aTBP because it does not perform the division operation at each iteration (see Figs.~\ref{stbp} and~\ref{atbp}). Because aTBP influences the matrix factorization immediately after each matrix update, it converges to a slightly lower perplexity than sTBP. \section{Conclusions} \label{s5} This paper has presented a novel tiny belief propagation (TBP) algorithm for training LDA with significantly reduced memory requirements. The TBP algorithm removes the message memory required by conventional message passing algorithms including GS, BP and VB. We also discuss the intrinsic relation between the proposed TBP and NMF~\citep{Lee:01} with the KL divergence.
We find that TBP can be approximately viewed as a special NMF algorithm that minimizes the perplexity metric, a widely-used evaluation method for different training algorithms of LDA. In addition, we confirm the superior topic modeling accuracy of TBP in terms of predictive perplexity in extensive experiments. For example, when compared with the state-of-the-art online topic modeling algorithm OVB, the proposed TBP is faster and more accurate at extracting $10$ topics from the $7$GB PUBMED corpus using a desktop computer with $2$GB memory. Recently, NMF algorithms have been shown in theory to learn topic models such as LDA in polynomial time~\citep{Arora:12}. The proposed TBP algorithm also suggests that NMF algorithms can be applied to training topic models like LDA with high accuracy in terms of the perplexity metric. We hope that our results may inspire more NMF algorithms~\citep{Lee:01} to be extended to learn other complicated LDA-based topic models~\citep{Blei:12} in the near future. \acks{This work is supported by NSFC (Grant No. 61003154), the Shanghai Key Laboratory of Intelligent Information Processing, China (Grant No. IIPL-2010-009), and a grant from Baidu to JZ, and a GRF grant from RGC UGC Hong Kong (GRF Project No.9041574) and a grant from City University of Hong Kong (Project No. 7008026) to ZQL.} \vskip 0.2in \bibliographystyle{natbib}
\section{Introduction} Two-dimensional electron systems (2DES) are convenient platforms for numerous physical experiments and applications. Adding lateral modulation turns the system into a two-dimensional metamaterial and opens up additional functionality, such as gate-tunable superconductivity in a lattice of superconducting tin islands on graphene \cite{ControlSC}, the discovery of correlated states and superconductivity in magic-angle twisted bilayer graphene \cite{matblg1,matblg2}, the experimental observation of Hofstadter's butterfly \cite{Hofstadter}, commensurability effects in semiconducting quantum wells with lateral modulation \cite{Commensurability}, selectivity to circularly polarized light in chiral laterally modulated structures \cite{chiral}, etc. In all these examples the modulation period is smaller than the relevant length scale, e.g. the mean free path or the coherence length; otherwise the effects of periodic modulation would be damped. On the other hand, even if the latter condition is violated, the modulated system nevertheless remains a regular effective medium. We address the question of to what extent one should expect the emergence of new phenomena there. The conductance of such an effective medium is very sensitive to the electronic properties and could therefore serve as a convenient indicator. In this paper we examine the conductive properties in the disordered, yet not insulating, limit of a macroscopically modulated gate-tunable array of islands (dots/antidots) within a 2D electronic system, realized in the archetypal Si-MOSFET platform. This system is somewhat similar to granular materials, studied broadly in the past \cite{Abeles} both theoretically and experimentally. The studied array of islands differs from granular systems by: (i) complete two-dimensionality and tunability of both the parent electron gas and the islands; (ii) periodicity, i.e.
absence of randomness in the positions of the islands; (iii) smooth transition regions (larger than the mean free path) between the parent gas and the islands. So far, transport studies of lithographically modulated semiconducting two-dimensional systems have focused either on clean systems (where the mean free path is larger than the period of modulation and all studied phenomena are essentially ballistic \cite{Weiss, weiss2,kozlov1,kozlov2,tsukagoshi}) or on Aharonov-Bohm/Altshuler-Aharonov-Spivak oscillations \cite{AharonovBohm,AlshulerAharonovSpivak}, i.e. coherent low-temperature mesoscopic effects \cite{nihey, iye, yagi}. All these phenomena are essentially nano-scale. We should also mention a group of papers \cite{dorn,minkov,staley,goswami, tkachenko}, where percolation and transition-to-localization phenomena in arrays of dots/antidots were explored. Arrays of {\it macroscopic} (i.e. micrometer-size) islands should address essentially classical physics. Macroscopic means that the mean free path ($<50$ nm in our case) and the coherence length ($\sim$ 300 nm at 2 K) are smaller than the period of the structure and the size of the islands (in our case 5 and 2.5 $\mu$m, respectively). To the best of our knowledge, magnetotransport properties of such a system (an array of depleted antidots) have so far been reported only by us in Ref.\cite{kuntsevich}, where the Hall resistance was shown to be a nonlinear function of magnetic field due to current redistribution in magnetic field. The present study qualitatively extends those first measurements by adding a new parameter, the electron density in the islands. From the transport studies we explore the island-density/2DES-density phase diagram of this effective medium. We reveal and qualitatively explain 2DES-dominated, island-dominated and shell-dominated phases, and highlight the role of inhomogeneities in the 2D metal-to-insulator transition.
Our data indicate a weak-localization-related origin of the low-field Hall nonlinearity and a novel effect in the Shubnikov-de Haas oscillation regime: Zeeman splitting of the resistivity minima. In this paper we also find an analytical expression for the effective conductivity of the model system using a classical mean-field approach. The considered system differs from the experimental one primarily in that it neglects the transition regions between the islands and the parent gas as well as quantum corrections to the conductivity. The theoretical model qualitatively describes the experimental system, reproducing the nonlinear dependence of the Hall conductivity on the voltage applied to the gate above the parent gas. \section{Samples used} We used Si-MOSFET structures with a lithographically defined antidot array (AA) (for simplicity, we call the islands antidots though they can also be dots), with the TEM cross-section and gate connection shown schematically in fig.\ref{Samples}. The transport current flows in the inversion layer at the interface between the Si substrate and the oxide. Voltages applied to two electrically decoupled gate electrodes independently control the density of the electrons (i) inside the antidots ($V_{a}$) and (ii) in the surrounding 2D gas (S2DG) ($V_{g}$). Panels \textbf{a}, \textbf{b} show optical images of the sample. The diameter of the antidots is 2.5 $\mu$m and the lateral period $d$ of the structure is 5 $\mu$m, so that transport between them is diffusive ($l \ll d$, where $l \sim 50$ nm is the mean free path in the highest mobility samples) and possible coherent effects are negligible ($l_\phi < d$, where $l_\phi < 500$ nm is the coherence length in the studied temperature range). The AA has a Hall-bar shape with lateral dimensions 0.4 mm $\times$ 0.4 mm. The cross-section thin lamella for TEM studies was cut out from the surface region of the sample (shown by the dashed line in panel \textbf{b} of Fig.\ref{Samples}) using a FEI Helios NanoLab 650 focused ion beam.
The STEM images (see examples in Figs.\ref{Samples}\textbf{c} and \ref{Samples}\textbf{d}) were obtained using a FEI Titan 80-300 microscope at an electron energy of 200 keV. The structure of our sample is as follows: the bottom gray layer is the single-crystalline (001) Si substrate; the dark color corresponds to $SiO_2$; the trapezoidal-shaped polycrystalline heavily doped Si is the gate of the antidots; the remaining polycrystalline heavily doped Si (gray color above $SiO_2$) is the S2DG gate. Panel \textbf{d} shows a zoom-in of the edge of the antidot. It is seen that the oxide layer becomes thicker closer to the edge of the S2DG. This leads to a lower density of electrons in the domains underneath. Moreover, the gate electrodes are separated by oxide, so that between the antidots and the S2DG there is an area where the density is expected to be low. We call these transition regions \textit{shells}. Panel \textbf{e} shows the same spot as panel \textbf{b} with all the areas mentioned above in color. \begin{figure} \centerline{\includegraphics[width=\linewidth]{fig1.eps}} \begin{minipage}{3.2in} \caption{(Color online) (a) Optical image of the corner of the AA (100x magnification), (b) zoom-in of image (a) with the direction of the slice for TEM, (c) TEM image with the scheme of the gating, (d) TEM image of the border of the island (on the right), S2DG (on the left) and shell (between), (e) image (b) with labeled areas (1-island, 2-shell, 3-S2DG).} \label{Samples} \end{minipage} \end{figure} Multiple chips of the same design were fabricated on the same wafer. Probably due to inevitable temperature gradients during the fabrication, AAs on different chips demonstrated different low-temperature transport properties. In particular, the peak mobility varied by an order of magnitude (see Results section). \section{Results} Magnetoresistance measurements were performed in the temperature range 0.3-8 K using Cryogenics 21T/0.3 K and CFMS 16T/1.8K systems.
The AC transport current was fixed at 100 nA to avoid overheating. All measurements were carried out in the frequency range 13-18 Hz using a standard 4-terminal technique with a lock-in amplifier. In order to compensate for contact asymmetry, the magnetic field was swept from positive to negative values, and the resistance per square (Hall resistivity) data were then symmetrized (antisymmetrized). The properties of Silicon-based 2D systems are known to depend strongly on the carrier mobility. In high-mobility uniform systems ($\mu\gtrsim 10000$ cm$^2/$Vs) metallic behavior of the resistivity and a metal-insulator transition can be realized \cite{MIT}. In contrast, low-mobility Si-MOSFETs do not demonstrate a stark metallic temperature dependence of the resistivity. Also, for high-mobility samples the Shubnikov-de Haas oscillation (SdHO) patterns allowed us to resolve the carrier density $n_{SdH}$. The experiments were carried out on several samples with effective peak electron mobility in the AA spanning a wide range from 400 to 5000 cm$^2/$Vs. Despite this spread of mobilities, most of the observed phenomena showed up in all samples; the mobility affected only the magnitude of the corresponding effects. We demonstrate the data from the representative high-mobility sample AA1 and low-mobility sample AA2. All measurements are made in the regime of a highly conductive medium. Indeed, the measured resistance per square (which is always elevated with respect to the local S2DG resistivity due to the bottleneck effect) is lower than the resistance quantum $h/e^2\sim 25.8$ kOhm. Therefore a quasiclassical treatment of the transport is applicable. \subsection{Effective density} We straightforwardly characterize this effective medium by the effective Hall density ($n_{eff}\equiv{[eR_{xy}(B=1{\rm T})]^{-1}}$) and the effective carrier mobility ($\mu_{eff}\equiv{(n_{eff}e\varrho)^{-1}}$). Here and below $\varrho$ is the measured resistance per square.
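These definitions are straightforward to transcribe; a minimal sketch (function and variable names are ours, SI units assumed, and the antisymmetrization mimics the field-sweep protocol described above):

```python
E = 1.602176634e-19  # elementary charge, C

def effective_density_mobility(rho_sq, r_xy_plus, r_xy_minus, b=1.0):
    """Effective Hall density (m^-2) and effective mobility (m^2/Vs).

    rho_sq     : measured resistance per square (Ohm), symmetrized in B
    r_xy_plus  : Hall resistance measured at +b (Ohm)
    r_xy_minus : Hall resistance measured at -b (Ohm)
    b          : magnetic field magnitude (T)
    """
    # antisymmetrization removes the contact-asymmetry admixture
    r_xy = 0.5 * (r_xy_plus - r_xy_minus)
    n_eff = b / (E * r_xy)               # n_eff = [e R_xy(B)/B]^{-1}
    mu_eff = 1.0 / (n_eff * E * rho_sq)  # mu_eff = (n_eff e rho)^{-1}
    return n_eff, mu_eff
```

For an ideal uniform Drude conductor with density $n$ and mobility $\mu$, this procedure recovers $n$ and $\mu$ exactly, even when a constant offset is added to both Hall traces.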
The effective density and mobility were calculated from the resistance per square and the Hall resistivity at 1 and -1 T. We analyzed the dependence of $n_{eff}$ on $V_g$ and $V_a$. In uniform Si inversion layers the electron density is roughly proportional to $(V_g-V_{th})$ \cite{Ando}, where $V_{th}$ is a threshold voltage, which is usually small and originates from the charge stored in the oxide and the difference of the work functions of the gate and the 2D system. The experimentally observed $n_{eff}(V_g)$ dependencies (for three representative $V_a$ values, shown in fig.\ref{n(Vg)}) contrast with this expectation. The reason for the deviations is the artificial non-uniformity of the system. Such behavior reflects different regimes of the transport current flow distribution. We distinguish the ranges of gate voltages that correspond to various current density distributions (schematically shown by letters (a)-(d) in the main panel and also in the corresponding panels under the graph in fig.\ref{n(Vg)}). A higher transport current density is shown by a lighter color. \begin{figure}[ht] \centerline{\includegraphics[width=\linewidth]{fig2.eps}} \caption{(Color online) The Hall density at T=1.8K for sample AA2 vs S2DG gate voltage for three representative island gate voltages. The inset shows similar data for the high-mobility sample AA1. In panels (a)-(d), the higher the electron density, the lighter the area. Panels (a)-(d) correspond to the domains of the voltages designated by the same letter on the graph.} \label{n(Vg)} \end{figure} For $V_g$ high enough (figures \ref{n(Vg)}(a) and \ref{n(Vg)}(b)), the S2DG is very conductive because of its high electron density. Due to edge effects and the larger gate-to-2DEG distance, the shells have a lower electron density and hence a smaller conductance. Therefore, the transport current flows predominantly through the S2DG, and the Hall effect, i.e. $n_{eff}(V_g)$, is determined by its density. This means that the islands have little impact on the $n_{eff}(V_g)$ dependence.
For small values of $V_g$ (figures \ref{n(Vg)}(c) and \ref{n(Vg)}(d)) the S2DG density and conductivity decrease and the contribution of the islands to the transport rises. Increasing the $V_a$ value makes the islands much more conductive than the S2DG. Therefore the transport current prefers to flow through the islands and minimizes the path through the S2DG. Thus the effective density $n_{eff}$ increases (relative to the density defined by $V_g$), since the Hall voltage is determined by the islands. As $V_g$ increases, the contribution of the depleted S2DG rises, leading to a drop of $n_{eff}$ (figure \ref{n(Vg)}(d)). For $V_g$ and $V_a$ low enough, both the S2DG (unlike case \textbf{b}) and the islands (unlike case \textbf{d}) are poorly conductive. The low conductance of both regions forces the transport current to flow through the whole perimeter of the shell. This leads to an elevated role of the low-density shells and a visible increase of the Hall voltage, i.e. a drop of the $n_{eff}$ value (case \textbf{c} in fig.\ref{n(Vg)}). The effective density data shown in fig.\ref{n(Vg)}, demonstrating an enhanced drop in the low-$V_a/$low-$V_g$ region, were obtained for the low-mobility sample AA2. For the high-mobility sample AA1 (inset to fig.\ref{n(Vg)}), despite the absence of the drop, a similar tendency is clearly seen: the $n_{eff}$ value decreases with $V_g$ growth at high $V_a$, and this effect vanishes as $V_a$ is lowered. These data show that the effective density in macroscopically inhomogeneous systems follows the same physics irrespective of the mobility. \subsection{Magnetoresistance and Hall measurements} Thus, we have established different regimes of current transport in an artificially inhomogeneous tunable medium. In order to explore the differences between the regimes \textbf{b}, \textbf{c}, and \textbf{d} (here and below the designations are taken from fig.\ref{n(Vg)}) we performed more detailed magnetotransport measurements.
We chose the Hall coefficient ($R_{xy}/B$) to visualize the difference between the AA and a uniform system, where $R_{xy}/B$ is roughly field-independent. \begin{figure*} \begin{minipage}[h]{0.49\linewidth} \centerline{\includegraphics[width=\linewidth]{fig3a.eps}} \end{minipage} \hfill \begin{minipage}[h]{0.49\linewidth} \center{\includegraphics[width=\linewidth]{fig3b.eps} \\ \includegraphics[width=\linewidth]{fig3c.eps}} \end{minipage} \caption{(Color online) Magnetoresistance (black curves) and Hall coefficient (red curves) of sample AA1 at T=0.3K in the regime of current through the antidots (a), current through the S2DG (b) and elevated role of the shells (c). (d) Hall coefficient of sample AA2 vs magnetic field for four different temperatures. For convenience, the curves are shifted such that their edges (at B=5T) coincide (the curve for 0.3K remains unchanged). (e) and (f) are enlarged areas from panels (a) and (b), respectively, shown by dashed rectangles, which demonstrate the splitting of the minima of the magnetoresistance. (g) Schematics of Zeeman-split Landau levels in a density of states vs energy diagram. Fermi levels for magnetic fields $B_1$ and $B_2$ (indicated in panel (e)) are shown by dashed lines. } \label{R(B)} \end{figure*} Fig.\ref{R(B)}(a) shows the magnetoresistance ($\varrho$) and Hall coefficient in regime \textbf{d}, where the transport is dominated by the islands. Though the effective $\varrho$ is about 15 kOhm (i.e. of order $h/e^2$), pronounced Shubnikov-de Haas oscillations (SdHO) are observed due to the high-mobility electron gas in the islands. The electron density obtained from the SdHO ($n_{SdH}\approx 1.4\times10^{12}$cm$^{-2}$) is higher than the Hall density ($n_{eff}\approx 1\times10^{12}$cm$^{-2}$) because the latter is also affected by the S2DG bottlenecks. The Hall coefficient is a non-monotonic function of magnetic field with a \textit{maximum} at $B=0$. For comparison, in fig.\ref{R(B)}(b) we show the magnetoresistance and Hall coefficient of the system in regime \textbf{b} with $V_a=0$.
The $V_g$ value was adjusted to make $n_{eff}$ approximately equal to the value from fig.\ref{R(B)}(a). The effective $\varrho\approx3$ kOhm is about 5 times smaller because the S2DG in this case is well-conductive and the transport current bypasses the depleted regions. SdHO are also observed, with $n_{SdH}\approx 0.9\times10^{12}$cm$^{-2}$ comparable to $n_{eff} \approx 1.2\times10^{12}$cm$^{-2}$. At $B=0$ the Hall coefficient in this case has a {\it minimum}. Finally, fig.\ref{R(B)}(c) shows the magnetoresistance and Hall coefficient in a low-density regime somewhere between \textbf{b} and \textbf{c}. The gate voltages were adjusted to make $\varrho$ approximately equal to the value from fig.\ref{R(B)}(a). The transport behavior is completely different from fig.\ref{R(B)}(a) and qualitatively similar to fig.\ref{R(B)}(b), but without SdHO. The Hall coefficient has a {\it minimum} at zero field. These data straightforwardly demonstrate that, contrary to a non-modulated 2DES, the magnetotransport reflects the complexity of the carrier density redistribution and is not determined by the value of the effective resistivity. The common tendency for all Hall coefficient data is growth with magnetic field. In a homogeneous system the Hall coefficient is constant and directly corresponds to the electron density, $R_{xy}/B=1/ne$. In the studied system there are regions with different densities. For Si-MOSFETs it is known that the electron mobility is density-dependent ($\mu$ generally grows with $n$, then reaches a steep maximum and decreases slowly for very large carrier densities) \cite{Ando}. In a magnetic field the longitudinal conductivity $\sigma_{xx}$ decreases $\propto (1+(\mu B)^2)^{-1}$, i.e. the higher the mobility, the faster the decrease. Thus, with increasing field the conductivity of low-density regions decreases more slowly than that of high-density regions.
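This Drude scaling can be illustrated with a toy comparison of two regions (all parameter values are hypothetical, in SI units):

```python
import numpy as np

E = 1.602e-19  # elementary charge, C

def sigma_xx(n, mu, b):
    """Drude longitudinal conductivity of a uniform 2D region."""
    return n * E * mu / (1.0 + (mu * b) ** 2)

b = np.linspace(0.0, 5.0, 51)         # field sweep, T
s_high = sigma_xx(1.2e16, 0.40, b)    # dense, high-mobility region
s_low = sigma_xx(0.30e16, 0.05, b)    # depleted, low-mobility region

# sigma_xx of the high-mobility region collapses faster with field,
# so the relative weight of the low-density region grows with B
ratio = s_low / s_high
```

The ratio grows monotonically with field, which is the redistribution mechanism discussed in the text.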
Since the current prefers to flow through highly conductive regions, with increasing field the current redistributes so that the role of the low-density low-mobility regions increases. Therefore, the Hall coefficient should rise, in agreement with the experimental data. Exactly this mechanism was suggested in our first paper\cite{kuntsevich}. In small magnetic fields the Hall coefficient exhibits an abrupt feature. The bare 2D gas in Si-MOSFETs also has a small low-field Hall nonlinearity, discussed in detail in Ref.\cite{kuntsevich2013} and reported for similar samples in Ref.\cite{kuntsevich}. However, the huge amplitude of the low-field Hall coefficient variation in Fig.\ref{R(B)} clearly links it to the sample nonuniformity. This huge non-linearity is one of the main observations of our paper. Interestingly, low-field quenching of the transverse magnetoresistance (and even a change of its sign) has already been explored in various artificially inhomogeneous and mesoscopic systems. The first experiments in 1D wires by Roukes \cite{Roukes} were later theoretically explained \cite{Beenakker} by the scrambling of electron trajectories at the crossroads formed by the contacts. The authors speculated that quenching is an unambiguous manifestation of 1D transport. We note that all available theories in 1D or 2D systems are essentially ballistic. In further experiments with ballistic antidot arrays\cite{Weiss} the quenching of the Hall effect was also observed, although the qualitative pinball picture did not account for the attenuation of the Hall coefficient. In more recent experiments on 2D systems with AAs \cite{Cross-shapedAA} the observed quenching of the Hall effect was confirmed by numerical simulations, but no physical mechanism was suggested. Our system is essentially different, because the transport is diffusive and the inhomogeneities are tunable from dots (areas of low potential, $V_a>V_g$) to antidots (areas of high potential, $V_a<V_g$).
The low-field Hall coefficient in our experiments can either grow or fall with $B$ depending on $V_g$ and $V_a$. The origin of this different behavior is unclear and requires further theoretical investigation. The suppression of the zero-field Hall coefficient quenching with temperature (fig.\ref{R(B)}(d) for sample AA2) indicates that this feature is related to the weak-localization phenomenon. We believe that the low-field feature in the Hall coefficient comes from the redistribution of the transport current in the regime of weak localization. This assumption is far from trivial: firstly, it is textbook knowledge that in a homogeneous medium weak localization does not influence the Hall resistivity\cite{AltshulerAronov} and, secondly, the relative value of the observed nonlinearity is rather high (a few tens of percent), larger than the weak-localization correction to the resistivity in the bare 2D gas. Our results thus call for theoretical modeling of weak localization in the presence of macroscopic modulation. Moreover, sample inhomogeneity might be a clue to understanding the often observed and not always explained low-field feature in other 2D systems \cite{kuntsevich2013, ovadiahu, MinkovHall}. Another unusual, this time high-field, magnetotransport effect is the splitting of the minima of the longitudinal magnetoresistance $\varrho$ (the enlarged domains from figs.\ref{R(B)}(a-b) are shown in Fig.\ref{R(B)}(e-f)). As a rule, as the magnetic field increases and the Zeeman term exceeds the temperature and the Landau level broadening (see fig.\ref{R(B)}g for the schematics of the density of states), the resistivity maxima are split. Indeed, in uniform 2D systems in the SdHO regime the Hall resistivity $\rho_{xy}$ is higher than $\rho_{xx}$, and the maxima of the conductivity $\sigma_{xx} = \rho_{xx}/(\rho_{xx}^2+\rho_{xy}^2)\approx\rho_{xx}/\rho_{xy}^2$ at the half-integer filling factors correspond to the maxima of the resistivity and the maxima of the density of states.
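The mapping between conductivity and resistivity extrema can be checked with a toy tensor inversion in the two opposite limits (the oscillation profile and the numbers are illustrative only):

```python
import numpy as np

def sigma_xx_from_rho(rho_xx, rho_xy):
    """Longitudinal conductivity from the 2x2 resistivity tensor."""
    return rho_xx / (rho_xx**2 + rho_xy**2)

# an SdHO-like modulation of the longitudinal resistivity (arb. units)
rho_xx = 1.0 + 0.3 * np.cos(np.linspace(0.0, 4.0 * np.pi, 200))

# uniform 2DES limit: rho_xy >> rho_xx, so sigma_xx ~ rho_xx / rho_xy^2
s_uniform = sigma_xx_from_rho(rho_xx, 10.0)
# opposite limit: rho_xx >> rho_xy, so sigma_xx ~ 1 / rho_xx
s_opposite = sigma_xx_from_rho(rho_xx, 0.1)
```

In the first limit the conductivity maxima coincide with the resistivity maxima; in the second they coincide with the resistivity minima, i.e. the extrema are inverted.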
In our samples, the effective resistance per square $\varrho$ is higher than $R_{xy}$ in the SdHO regime. If the antidot areas were just infinite barriers for electrons, they would only change the geometrical factor $w/l$ and would not turn minima into maxima. In other words, the resistance per square should increase but the $\varrho(B)/\varrho(B=0)$ ratio should remain unchanged. Meanwhile, in our system the $\varrho$ {\it minima} appear to be split. It is worth noting that the splitting is observed both in regime {\bf b} (current through the S2DG) and in regime {\bf d} (current mainly through the islands). \begin{figure*}[t] \begin{minipage}[h]{0.49\linewidth} \centerline{\includegraphics[width=\linewidth]{fig4a.eps}} \end{minipage} \hfill \begin{minipage}[h]{0.49\linewidth} \centerline{\includegraphics[width=\linewidth]{fig4b.eps}} \end{minipage} \caption{(Color online) (a) Relative change of the resistivity of AA1 with temperature (from 1.8K to 7.4K), $\kappa$, vs S2DG voltage for different voltages on the antidot gate. The same data (but for temperatures 2.1K and 8K) for the low-mobility sample AA2 are shown in the inset. (b) Temperature dependence of the resistivity of sample AA2 at fixed $V_a$=0V for different $V_g$. The same data at fixed $V_g$ for different $V_a$ are shown in the inset.} \label{Metallicity} \end{figure*} We suggest that this splitting might be explained if the relation $\sigma^{eff}_{xx} = \varrho/(\varrho^2+R_{xy}^2)$ holds for the resistance per square. Then $\sigma^{eff}_{xx}\approx 1/\varrho$ and the conductivity maxima (coinciding with the maxima of the Zeeman-split density of states at the Fermi level, shown in Fig.\ref{R(B)}g) correspond to minima of the resistance per square. This relation is not a priori expected to hold, because conductivity and resistivity are local properties, whereas the resistance per square is a macroscopic characteristic of the sample.
In other words, the effective conductivity approach is surprisingly applicable not locally but rather to the overall system. \subsection{Metallic behavior of resistivity} High-mobility Si-based 2D systems are also remarkable for their ``metallic'' resistivity behavior ($d\rho/dT > 0$) and metal-insulator transition \cite{MIT}. These phenomena have been intensively investigated during the last two decades. They are shown to occur due to an interplay of strong electron-electron interactions and localization; however, the exact mechanism is still debated \cite{spivakVK, dobrosavlevic, finkelsteinRG, french, meir, pudalovNonUniform, gold}. Since in some of these models the system is believed to be essentially non-uniform at the microscale \cite{spivakVK, pudalovNonUniform, meir}, we decided to examine how the artificially tunable inhomogeneity in our system affects the 2D ``metallicity''. We should note that even in a non-modulated Si-based 2DES a noticeable metallicity (a 2-5 times growth of the resistivity from $\sim 1$ K to $\sim 10$ K) emerges only if the peak mobility is rather large ($\mu>1$ m$^2$/Vs). In this case the strength of the metallicity grows as the density decreases and eventually quenches at the metal-insulator transition point. If the peak mobility is low, then low densities are not achieved and the magnitude of the resistivity variation with temperature becomes small or even slightly negative. In order to quantify the ``metallicity'' experimentally we took the relative variation $\kappa\equiv (\varrho_{7.4}-\varrho_{1.8})/ \varrho_{1.8}$, where the $\varrho_{1.8}$ and $\varrho_{7.4}$ values were measured at the experimentally convenient temperatures 1.8K and 7.4K, respectively. Thus, $\kappa$ never drops below $-1$; relatively large positive values of $\kappa$ correspond to strong ``metallic'' behavior and negative values to insulating behavior. Figure \ref{Metallicity}\textbf{(a)} shows the $\kappa$ versus $V_g$ dependence for different values of $V_a$ for the high-mobility sample AA1.
The inset shows a similar series of $\kappa(V_g)$ dependencies for the low-mobility sample AA2. The only insignificant distinction is that the temperature reference points used for sample AA2 were 2.1K and 8K, respectively; this difference is connected only with experimental convenience. At high values of $V_g$, when the system is deep in the conductive domain, $\kappa$ tends to zero for both the low- and high-mobility samples. This behavior is caused by (i) the weakening of electron-electron interactions at elevated densities and (ii) the domination of the S2DG in the conductance of the system, i.e. the transport properties of the antidot array for large $V_g$ are equivalent to those of the bare 2D gas, as expected. For small values of $V_g$, $\kappa$ depends dramatically on the value of $V_a$ and on the mobility of the sample. For the high-mobility sample and small values of $V_a$ there is strong ``metallic'' conductivity: $\kappa$ is positive, quite large (about 1.5), and drops monotonically with $V_g$, as it should for a bare 2D gas in a Si-MOSFET\cite{MIT}, because the island areas are out of the game. However, for high values of $V_a$ the ``metallic'' conductivity becomes suppressed for all values of $V_g$, and $\kappa$ for sample AA1 becomes non-monotonic and goes to zero at small $V_g$. For the low-mobility sample AA2, as $V_a$ increases, the weakly positive $\kappa$ at low $V_g$ turns negative. Fig.\ref{Metallicity}(b) represents the temperature dependence of the resistivity of the low-mobility sample. The main graph shows the dependencies for different S2DG voltages $V_g$ at the same antidot voltage $V_a$, demonstrating the weakening of the metallic behavior with increasing $V_g$. The same tendency with increasing $V_a$ at fixed $V_g$ is reflected in the inset. In other words, filling the islands with electrons turns the system toward ``insulating'' behavior, no matter how large the mobility is. We suggest the following explanation of this phenomenon. For small values of $V_a$ the island areas are ``closed'' to electrons.
However, for high values of $V_a$ the current flows into the islands and, as a result, inevitably flows through the shells. The latter have strongly insulating behavior, which causes the suppression of $\kappa$. Thus, we demonstrate and explain qualitatively that our effective medium allows tuning of the 2D ``metallicity''. This observation might also help to answer the question of why the strength of the metallicity in Si-MOSFETs is the highest among 2D systems despite the relatively low mobility. Indeed, in the highest-mobility Si-MOSFETs ($\mu_{peak}\sim 3-4\cdot 10^4$cm$^2/$Vs), the resistance increases almost by an order of magnitude with temperature\cite{MIT}, and strong metallicity is observed at relatively high carrier densities (a few $10^{11}$ cm$^{-2}$). In other material systems with mobilities exceeding $10^5$ cm$^2/$Vs and much lower carrier densities ($\sim 10^{10}$ cm$^{-2}$, e.g. Si/SiGe quantum wells\cite{melnikov}, n-GaAs \cite{lilly}, p-GaAs\cite{proskuryakov}, etc.) the growth of the resistivity with temperature is typically smaller than a factor of 2. All these high-mobility systems have a smooth impurity potential, similar to the tunable part of the potential due to the artificial modulation in the antidot array. This potential might be one possible mechanism for the metallicity suppression. Indeed, in low carrier density materials the relative fluctuations of the charge distribution are much larger, and their role should be re-examined. \section{Theory} Interestingly, there is a possibility to obtain \textit{analytical} results for the conductivity of a regular array of equivalent elliptic islands embedded into a conductive matrix\cite{MFT}. For the theoretical description we here consider an infinite 2D array of round islands with radius $R$ and period $a$ and the main matrix (S2DG). The input parameters are the magnetic field directed perpendicular to the plane and the conductivity tensors of the islands $\hat{\sigma}_1$ and of the S2DG $\hat{\sigma}_2$.
The problem is solved with the boundary condition that a DC current $j_0$ is imposed at infinity. The goal of the theory is to obtain the conductivity tensor of the inhomogeneous system, $\hat{\sigma}_{eff}$. From general physical principles the conductivity tensor must have the following form: \begin{equation} \hat{\sigma}_{eff}=\left( \begin{matrix} \sigma^e_{xx} & -\sigma^e_{xy} \\ \sigma^e_{xy} & \sigma^e_{xx} \end{matrix} \right) \end{equation} Thus, there are only two independent variables, $\sigma^e_{xx}$ and $\sigma^e_{xy}$. These values have to be expressed through $\sigma^n_{xx}$, $\sigma^n_{xy}$ of the islands (n=1) and the S2DG (n=2) and the geometrical factor $p=\frac{\pi R^2}{a^2}$ that denotes the fraction of the system occupied by the islands. Such problems are very common in the classical electrodynamics of continuous media, and they can be solved within the widespread mean-field approach\cite{MFT}. In our case it states that instead of the periodic inhomogeneous system it is enough to consider the following one (see Fig.\ref{Image}): a round island of radius $R$ with conductivity $\hat{\sigma}_1$ inside a ring with external radius $R_1=R/\sqrt{p}$ and conductivity $\hat{\sigma}_2$, surrounded by the effective medium with conductivity $\hat{\sigma}_{eff}$. \begin{figure}[t] \centerline{\includegraphics[width=0.85\linewidth]{fig5.eps}} \caption{Schematic image of the reduction of the theoretical model according to mean-field theory} \label{Image} \end{figure} The dependence of the electric potential $\phi$ on the coordinates is found using the continuity equation in the stationary case, $\mathrm{div}\,\textbf{j}=0$, Ohm's law $\textbf{j}=\hat{\sigma}\textbf{E}$, and the definition of the potential $\textbf{E}=-\nabla\phi$. These conditions lead to Laplace's equation $\Delta\phi=0$, where the Laplace operator is taken in 2D.
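For reference, in each of the three regions the solution of the 2D Laplace equation compatible with a uniform current at infinity has the standard dipole form (a sketch; the coefficients $A_n$, $B_n$, $C_n$, $D_n$ of region $n$ are fixed by the matching and boundary conditions):

```latex
\phi_n(r,\theta) = \Big( A_n r + \frac{B_n}{r} \Big)\cos\theta
                 + \Big( C_n r + \frac{D_n}{r} \Big)\sin\theta ,
```

where the $1/r$ terms vanish in the innermost region (regularity at $r=0$), the linear terms in the outer effective medium are set by the field at infinity, and the $\sin\theta$ harmonic appears because of the Hall components of the conductivity tensors.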
The solutions in the different media are matched using the conditions of continuity of the electric potential $\phi$ and of the radial component of the current $j_r$ on the borders. The solutions are chosen to satisfy the boundary conditions $j_x=j_0$ and $j_y=0$ at $x,y\rightarrow\infty$. After these steps the electric potential $\phi$ is expressed through the components of the conductivity tensor $\hat{\sigma}_{eff}$. The self-consistency condition provides one more equation connecting these two quantities: \begin{equation} \hat{\sigma}_{eff}\cdot\overline{\textbf{E}}=\left( \begin{matrix} j_0 \\ 0 \end{matrix} \right) \end{equation} where $\overline{\textbf{E}}$ is the mean field: \begin{equation} \pi R_1^2 \overline{\textbf{E}} = \int^R_0 rdr\int_0^{2\pi}d\theta\textbf{E}(\textbf{r}) + \int^{R_1}_R rdr\int_0^{2\pi}d\theta\textbf{E}(\textbf{r}) \end{equation} Finally, the following expressions for the components of $\hat{\sigma}_{eff}$ are obtained: \begin{equation} \label{Expres} \begin{aligned} \frac{\sigma^e_{xx}}{\sigma^2_{xx}}= & \left\{ \frac{(1-p)(1+\alpha^2)+\beta p(1-\alpha\gamma)}{(1-p)^2(1+\alpha^2)+\beta p[2(1-p)+\beta p]}+ \right. \\ & \left.
+\frac{\gamma\sigma^2_{xy}}{2\sigma^2_{xx}}-\frac{1}{2} \right\} \frac{2}{1+\gamma^2} , \\ \sigma^e_{xy}= & \;\gamma\sigma^e_{xx}, \\ \gamma = & \left\{ \frac{\sigma^2_{xy}}{2\sigma^2_{xx}}-\frac{\beta p\alpha}{(1-p)^2(1+\alpha^2)+\beta p[2(1-p)+\beta p]} \right\}\cdot \\ & \cdot \left\{ \frac{(1-p)(1+\alpha^2)+\beta p}{(1-p)^2(1+\alpha^2)+\beta p[2(1-p)+\beta p]}-\frac{1}{2} \right\}^{-1} \end{aligned} \end{equation} where \begin{equation} \label{Express} \begin{aligned} \sigma^i_{xx}=\frac{\sigma_i}{1+(\mu_i B)^2}, \;& \sigma^i_{xy}=\frac{\sigma_i \mu_i B}{1+(\mu_i B)^2}, \\ \alpha=\frac{\sigma^2_{xy}-\sigma^1_{xy}}{\sigma^2_{xx}+\sigma^1_{xx}}, \;& \beta=\frac{2\sigma^2_{xx}}{\sigma^2_{xx}+\sigma^1_{xx}} \end{aligned} \end{equation} Here $\sigma_i = n_i e\mu_i$ is the Drude conductivity (i=1 corresponds to the islands and i=2 to the S2DG), and $p=\frac{\pi R^2}{a^2}\approx0.2$ (as the island diameter $2R$=2.5 $\mu m$ and $a$=5 $\mu m$). From these equations the experimentally measurable effective concentration $n_{eff}=\frac{B}{e\rho^e_{xy}}=B\frac{(\sigma^e_{xx})^2+(\sigma^e_{xy})^2}{e\sigma^e_{xy}}$ can be expressed. As a result, $n_{eff} = n_{eff}(B,n_1,\mu_1,n_2,\mu_2)$, i.e. $n_{eff}$ depends on 5 parameters: the magnetic field and the concentrations and mobilities of the electrons in the islands and in the S2DG. Fortunately, in our Si-MOSFET system both the concentration $n$ and the mobility $\mu$ of the electron gas are set by the voltage on the gate: $V_a$ for the islands and $V_g$ for the S2DG. This fact significantly simplifies the analysis, as the effective concentration depends only on three variables: $n_{eff}(V_g,V_a,B) = n_{eff}(n_1(V_a),\mu_1(V_a),n_2(V_g),\mu_2(V_g),B)$. The dependencies $n(V)$ and $\mu(V)$ were taken from an interpolation of the experimental data obtained on conventional Hall bars from the same chip as the investigated AA, fabricated in the same technological process: $n(V)$ is well approximated by a linear function, $\mu(V)$ by a polynomial one.
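For concreteness, a direct numerical transcription of Eqs.~(\ref{Expres}) and (\ref{Express}) (variable names are ours; SI units; no shells or quantum corrections, exactly as in the model):

```python
E = 1.602e-19  # elementary charge, C

def n_eff_mean_field(b, n1, mu1, n2, mu2, p=0.2):
    """Effective Hall concentration (m^-2) of the island array.

    b        : magnetic field, T (nonzero)
    n1, mu1  : density (m^-2) and mobility (m^2/Vs) of the islands
    n2, mu2  : density and mobility of the S2DG
    p        : areal fraction of the islands, pi R^2 / a^2
    """
    def drude(n, mu):
        s0 = n * E * mu
        denom = 1.0 + (mu * b) ** 2
        return s0 / denom, s0 * mu * b / denom

    s1xx, s1xy = drude(n1, mu1)
    s2xx, s2xy = drude(n2, mu2)

    alpha = (s2xy - s1xy) / (s2xx + s1xx)
    beta = 2.0 * s2xx / (s2xx + s1xx)
    den = (1 - p) ** 2 * (1 + alpha ** 2) + beta * p * (2 * (1 - p) + beta * p)

    gamma = (s2xy / (2 * s2xx) - beta * p * alpha / den) \
        / (((1 - p) * (1 + alpha ** 2) + beta * p) / den - 0.5)

    se_xx = s2xx * (((1 - p) * (1 + alpha ** 2)
                     + beta * p * (1 - alpha * gamma)) / den
                    + gamma * s2xy / (2 * s2xx) - 0.5) * 2.0 / (1 + gamma ** 2)
    se_xy = gamma * se_xx

    return b * (se_xx ** 2 + se_xy ** 2) / (E * se_xy)
```

Two sanity checks follow directly from the formulas: for identical islands and matrix the result reduces to the uniform Drude density for any $p$, and for $p=0$ it returns the S2DG density regardless of the island parameters.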
The experimental behavior of $n(V)$ and $\mu(V)$ differs strongly from one sample to another and, therefore, the coefficients in these functions should not be considered as strict values defined by the samples. The dependence of $n_{eff}$ on $V_g$ for three different $V_a$ obtained from the theoretical equations is shown in Fig.\ref{TheoryNeffVg}. The inset of Fig.\ref{TheoryNeffVg} demonstrates the dependence of the mobility of the pristine electron gas on the gate voltage, $\mu(V)$. The concentration dependence was taken as $n(V)=0.645+0.4235\cdot V$, where $n$ and $V$ are measured in $10^{12}$ cm$^{-2}$ and volts, respectively. The magnetic field was taken to be 1 T. The resulting dependencies are very similar to the experimental $n_{eff}(V_g)$ dependence shown in Fig.\ref{n(Vg)}: linear behavior for high values of $V_g$ and a bend of the graph for low $V_g$. The graph even demonstrates the low-$V_g$ upturn for high $V_a$. However, there are also some distinctions between the theoretical model and the experimental graph. Firstly, in Fig.\ref{TheoryNeffVg} the graphs for low $V_a$ may intersect at some value of $V_g$, which was never observed in experiment. Secondly, the theoretical dependence shows no downward bend for low $V_a$. We attribute both distinctions to the existence of the shells. The theory does not take them into account at all, whereas they must crucially influence the system; indeed, we attributed the appearance of the drop at small $V_a$ and $V_g$ exactly to the enhanced role of the shells. To sum up, the simple theoretical model given above satisfactorily describes the investigated system and reproduces the main features of the Hall effect behavior. For a more rigorous description of the system and of all the effects discovered experimentally, the theoretical model should take into account the shells around the islands and quantum effects such as weak localization and SdHO.
However, taking these effects into account would lead to a significant complication of the analytical result and an increase of the number of parameters, which could make the analysis of such solutions impossible. \begin{figure}[t] \centerline{\includegraphics[width=\linewidth]{fig6.eps}} \caption{(Color online) Dependence of the effective concentration $n_{eff}$ obtained from the theory (eq.\ref{Expres}) on the voltage $V_g$ that parametrizes the matrix (S2DG), for three different voltages $V_a$ (1V, 3V and 6V) that parametrize the array of islands. The inset demonstrates the dependence of the mobility of the electron gas on the gate voltage, which was used in the theory.} \label{TheoryNeffVg} \end{figure} \section{Discussion} \subsection{Metal-insulator transition point} Since Ioffe and Regel\cite{iofferegel} it has been common knowledge that the boundary between metal and insulator corresponds to $k_Fl\sim 1$. In uniform 2D systems this criterion means that the conductivity is about $e^2/h\sim (26$ kOhm$)^{-1}$. Below this value the wave functions at the Fermi energy are localized and the system is supposed to have an insulating temperature dependence of the resistivity. Above this value the temperature dependence of the resistivity should be either weakly insulating, within the non-interacting picture, or metallic, in the case of strong electron-electron interactions. The ultimate boundary between metal and insulator can, of course, be introduced only at $T=0$, when the coherence length is infinite. For a macroscopic antidot array similar to ours, the low-temperature limit can hardly be achieved, since it requires mK and sub-mK temperatures. The S2DG is responsible for the metal-insulator transition, while the geometrical factor (effective length-to-width ratio) in such a system is enhanced. Therefore the threshold resistance per square in an antidot array is elevated, and 26 kOhm is no longer a dogma for a macroscopically modulated system. E.g.
in our samples we observed a vanishing temperature dependence of the resistivity at an effective sample resistance of about 50-80 kOhm. \subsection{Phase diagram} Our results are summarized in the phase diagram of the system in the $(V_g;V_a)$ plane in fig.\ref{PhaseDiagram}. For very low values of $V_g$ the system does not conduct, i.e. it is in the insulating state. For low values of $V_g$ the value of $V_a$ is decisive. If $V_a$ is high enough, the system is in the island-dominated regime: the current flows in the low-resistance islands and minimizes the path through the narrow bottlenecks between them. In this regime the Hall density is elevated and the Shubnikov-de Haas density is given by the islands. The metallic temperature dependence of the conductivity is suppressed because the total resistivity of the system is determined by the bottlenecks between the islands and the S2DG. \begin{figure}[b] \centerline{\includegraphics[width=\linewidth]{fig7.eps}} \caption{Schematic phase diagram of the system in the space of the S2DG (vertical) and island (horizontal) electron densities.} \label{PhaseDiagram} \end{figure} For low values of $V_a$ the system is in the ``shell-dominated'' regime, when the current flows without preference, spreading out over the whole system. For high values of $V_g$, again, there is no big difference between low and high values of $V_a$ because the islands are almost out of the game; the system is in the S2DG-dominated regime: the current bypasses the islands, flowing through the S2DG. {\bf Role of periodicity}. Interestingly, the periodic structure (i.e. the equivalence of all islands and inter-island necks) is important. In our case the period of the structure is 5 $\mu$m and there are only 80 periods across the 400 $\mu$m wide sample. If the system were more random, as e.g. in Ref.\cite{minkov}, transport through it would be governed by a percolation cluster, and the lateral cluster size could easily exceed 80 periods.
In this case the properties of the system would be irreproducible, and very large samples would be needed for averaging, thus hindering systematic studies. \section{Conclusions} To sum up, we experimentally examined the transport properties of a macroscopically non-uniform and tunable Si-based 2D electron system and derived an analytical solution of a simplified model system. The explored samples have two gates for controlling the densities in the islands and in the residual 2D gas separately. The conductive properties of this system turn out to depend on both gate voltages $V_g$ and $V_a$. The mean-field theory gives a qualitative description of the experimental system, including both gate voltages as parameters. In order to explain the different behavior of the samples under different gate voltages, we apply simple classical considerations about the current flow within the 2DES. Finally, we suggest a phase diagram of the system in the coordinates of the electron density in the islands vs the electron density in the 2D gas. In this phase diagram we identify various transport regimes from the analysis of the Hall effect and magnetoresistivity. \section{Acknowledgements} The authors are thankful to S.G. Tikhodeev, A.S. Ioselevich, L.E. Golub and V.Yu. Kachorovskii for discussions, and to V.M. Pudalov for reading the manuscript. The measurements were carried out using the equipment of the LPI Shared Facility Center. A.Yu. K. was supported by the Basic Research Program of the HSE.
\section{Introduction} \label{sec:intro} \sloppy In this paper, we focus on translating a person image from one pose to another and a facial image from one expression to another, as depicted in Figure~\ref{fig:motivation}(a). Existing person pose and facial image generation methods, such as \cite{ma2017pose,ma2018disentangled,siarohin2018deformable,tang2019cycle,albahar2019guided,esser2018variational,zhu2019progressive,chan2019everybody,balakrishnan2018synthesizing,zanfir2018human,liang2019pcgan,liu2019liquid,zhang2020cross}, typically rely on convolutional layers. However, due to the physical design of convolutional filters, convolutional operations can only model local relations. To capture global relations, existing methods such as \cite{zhu2019progressive,tang2019cycle} inefficiently stack multiple convolutional layers to enlarge the receptive fields so that they cover all the body joints of both the source pose and the target pose. However, none of the above-mentioned methods explicitly considers modeling the cross relations between the source and target poses. \begin{figure*}[!t] \centering \includegraphics[width=0.9\linewidth]{motivation.jpg} \caption{Illustration of our motivation. We propose a novel BiGraphGAN (Figure~(c)) to capture the long-range cross relations between the source pose $P_a$ and the target pose $P_b$ in a bipartite graph. The node features from both the source and target poses in the coordinate space are projected into the nodes in a bipartite graph, thereby forming a fully connected bipartite graph.
After cross-reasoning the graph, the node features are projected back to the original coordinate space for further processing.} \label{fig:motivation} \end{figure*} \hao{Rather than relying solely on convolutions/Transformers in the coordinate space to implicitly capture the cross relations between the source pose and the target pose, we propose to construct a latent interaction space where global or long-range (can also be understood as long-distance, meaning that the distance between the same joint on the source pose and on the target pose can be very long) reasoning can be performed directly. Within this interaction space, a pair of source and target joints that share similar semantics (e.g., the source left-hand and the target left-hand joints) are represented by a single mapping, instead of a set of scattered coordinate-specific mappings. Reasoning the relations of multiple different human joints is thus simplified to modeling those between the corresponding mappings in the interaction space. We thus build a bipartite graph connecting these mappings within the interaction space and perform relation reasoning over the bipartite graph. After the reasoning, the updated information is then projected back to the original coordinate space for the generation task. Accordingly, we design a novel bipartite graph reasoning (BGR) block to efficiently implement the coordinate-interaction space mapping process, as well as the cross-relation reasoning via graph convolutional networks (GCNs).} In this paper, we propose a novel bipartite graph reasoning Generative Adversarial Network (BiGraphGAN), which consists of two novel blocks, i.e., a bipartite graph reasoning (BGR) block and an interaction-and-aggregation (IA) block. The BGR block aims to efficiently capture the long-range cross relations between the source pose and the target pose in a bipartite graph (see Figure~\ref{fig:motivation}(c)).
Specifically, the BGR block first projects both the source pose and target pose features from the original coordinate space onto a bipartite graph. Next, the two features are represented by a set of nodes to form a fully connected bipartite graph, on which long-range cross-relation reasoning is performed by GCNs. To the best of our knowledge, we are the first to use GCNs to model the long-range cross relations for solving both the challenging person pose and facial image generation tasks. After reasoning, we project the node features back to the original coordinate space for further processing. Moreover, to capture the change in pose of each part more precisely, we extend the BGR block to the part-aware bipartite graph reasoning (PBGR) block, which can capture the local transformations among body parts. Meanwhile, the IA block is proposed to effectively and interactively enhance a person's shape and appearance features. We also introduce an attention-based image fusion (AIF) module to selectively generate the final result using an attention network. Qualitative and quantitative experiments on two challenging person pose generation datasets, i.e., Market-1501 \cite{zheng2015scalable} and DeepFashion \cite{liu2016deepfashion}, demonstrate that the proposed BiGraphGAN and BiGraphGAN++ generate better person images than several state-of-the-art methods, i.e., PG2~\cite{ma2017pose}, DPIG~\cite{ma2018disentangled}, Deform~\cite{siarohin2018deformable}, C2GAN~\cite{tang2019cycle}, BTF~\cite{albahar2019guided}, VUNet~\cite{esser2018variational}, PATN~\cite{zhu2019progressive}, PoseStylizer~\cite{huang2020generating}, and XingGAN \cite{tang2020xinggan}. Lastly, to evaluate the versatility of the proposed BiGraphGAN, we also investigate the facial expression generation task on the Radboud Faces dataset~\cite{langner2010presentation}.
Extensive experiments show that the proposed method achieves better results than existing leading baselines, such as Pix2pix~\cite{isola2017image}, GPGAN~\cite{di2018gp}, PG2~\cite{ma2017pose}, CocosNet \cite{zhang2020cross}, and C2GAN \cite{tang2019cycle}. The contributions of this paper are summarized as follows: \begin{itemize}[leftmargin=*] \item We propose a novel bipartite graph reasoning GAN (BiGraphGAN) for person pose and facial image synthesis. The proposed BiGraphGAN aims to progressively reason the pose-to-pose and pose-to-image relations via two novel blocks. \item We propose a novel bipartite graph reasoning (BGR) block to effectively reason the long-range cross relations between the source and target pose in a bipartite graph, using GCNs. \item We introduce a new interaction-and-aggregation (IA) block to interactively enhance both a person's appearance and shape feature representations. \item We decompose the process of reasoning the global structure transformation with a bipartite graph into learning different local transformations for different semantic body/face parts, which captures the change in pose of each part more precisely. To this end, we propose a novel part-aware bipartite graph reasoning (PBGR) block to capture the local transformations among body parts. \item Extensive experiments on both the challenging person pose generation and facial expression generation tasks with three public datasets demonstrate the effectiveness of the proposed method and its significantly better performance compared with state-of-the-art methods. \end{itemize} Some of the material presented here appeared in \cite{tang2020bipartite}. The current paper extends \cite{tang2020bipartite} in several ways: \begin{itemize}[leftmargin=*] \item More detailed analyses are presented in the ``Introduction'' and ``Related Work'' sections, which now include very recently published papers dealing with person pose and facial image synthesis. 
\item We propose a novel PBGR block to capture the local transformations among body parts. Equipped with this new module, our BiGraphGAN proposed in \cite{tang2020bipartite} is upgraded to BiGraphGAN++. \item \hao{We present an in-depth description of the proposed method, providing the architectural and implementation details, with special emphasis on guaranteeing the reproducibility of our experiments. The source code is also available online.} \item We extend the experimental evaluation provided in \cite{tang2020bipartite} in several directions. First, we conduct extensive experiments on two challenging tasks with three popular datasets, demonstrating the wide application scope of the proposed BiGraphGAN and BiGraphGAN++. Second, we also include more state-of-the-art baselines (e.g., PoseStylizer~\cite{huang2020generating} and XingGAN \cite{tang2020xinggan}) for the person pose generation task, and observe that the proposed BiGraphGAN and BiGraphGAN++ achieve better results than both methods. Lastly, we conduct extensive experiments on the facial expression generation task, demonstrating both quantitatively and qualitatively that the proposed method achieves much better results than existing leading methods such as Pix2pix~\cite{isola2017image}, GPGAN~\cite{di2018gp}, PG2~\cite{ma2017pose}, CocosNet \cite{zhang2020cross}, and C2GAN \cite{tang2019cycle}. \end{itemize} \section{Related Work} \noindent \textbf{Generative Adversarial Networks (GANs)} \cite{goodfellow2014generative} have shown great potential in generating realistic images \cite{shaham2019singan,karras2019style,brock2019large,zhang2022unsupervised,zhang20213d,tang2021total,tang2020unified}. For instance, Shaham et al. proposed an unconditional SinGAN~\cite{shaham2019singan} which can be learned from a single image. Moreover, to generate user-defined images, several conditional GANs (CGANs) \cite{mirza2014conditional} have recently been proposed. 
A CGAN always consists of a vanilla GAN and external guidance information such as class labels \cite{wu2019relgan,choi2018stargan,zhang2018sparsely,tang2019attribute}, text descriptions \cite{xu2022predict,tao2022df}, segmentation maps \cite{tang2019multi,park2019semantic,tang2020local,liu2020exocentric,wu2022cross,wu2022cross_tmm,tang2022local,ren2021cascaded,tang2021layout,tang2020dual}, attention maps \cite{kim2019u,tang2019attention,mejjati2018unsupervised,tang2021attentiongan}, or human skeletons \cite{albahar2019guided,balakrishnan2018synthesizing,zhu2019progressive,tang2018gesturegan,tang2020xinggan}. In this work, we focus on the person pose and facial expression generation tasks, which aim to transfer a person image from one pose to another and a facial image from one expression to another, respectively. \noindent \textbf{Person Pose Generation} is a challenging task due to the pose deformation between the source image and the target image. Modeling the long-range relations between the source and target pose is the key to solving this. However, existing methods, such as \cite{balakrishnan2018synthesizing,albahar2019guided,esser2018variational,chan2019everybody,zanfir2018human,liang2019pcgan,liu2019liquid}, are built by stacking several convolutional layers, which can only leverage the relations between the source pose and the target pose locally. For instance, Zhu et al.~\cite{zhu2019progressive} proposed a pose-attentional transfer block (PATB), in which the source and target poses are simply concatenated and then fed into an encoder to capture their dependencies. \noindent \textbf{Facial Expression Generation} aims to translate one facial expression to another \cite{tang2019expression,tang2019cycle,pumarola2020ganimation,choi2018stargan}. For instance, Choi et al. \cite{choi2018stargan} proposed a scalable method that can perform facial expression-to-expression translation for multiple domains using a single model. Pumarola et al. 
\cite{pumarola2020ganimation} introduced a GAN conditioning scheme based on action unit (AU) annotations, which describes in a continuous manifold the anatomical facial movements defining a human expression. Finally, Tang et al. \cite{tang2019cycle} proposed a novel Cycle in Cycle GAN (C2GAN) for generating human faces and bodies. Unlike existing person pose and facial expression generation methods, which model the relations between the source and target poses in a localized manner, we show that the proposed BGR block can bring considerable performance improvements in the global view. \noindent \textbf{Graph-Based Reasoning.} Graph-based approaches have been shown to be efficient at reasoning about relations in many computer vision tasks such as semi-supervised classification \cite{kipf2017semi}, video recognition \cite{wang2018videos}, crowd counting~\cite{chen2020relevant}, action recognition \cite{yan2018spatial,peng2020mix}, face clustering \cite{wang2019linkage,yang2019learning}, and semantic segmentation \cite{chen2019graph,zhang2019dual}. In contrast to these graph-based reasoning methods, which model the long-range relations within the same feature map to incorporate global information, we focus on developing two novel frameworks, BiGraphGAN and BiGraphGAN++, which reason about and model the long-range cross relations between the different features of the source and target poses in a bipartite graph. Then, the cross relations are further used to guide the image generation process (see Figure~\ref{fig:motivation}). This idea has not been investigated in existing GAN-based person image generation or even image-to-image translation methods. \begin{figure*}[!t] \centering \includegraphics[width=1\linewidth]{blocks.jpg} \caption{Illustration of the proposed bipartite graph reasoning (BGR) block $t$, which consists of two branches, i.e., B2A and A2B.
Each branch aims to model cross-contextual information between shape features $F_{t-1}^{pa}$ and $F_{t-1}^{pb}$ in a bipartite graph via GCNs.} \label{fig:blocks} \end{figure*} \section{Bipartite Graph Reasoning GANs} We start by introducing the details of the proposed bipartite graph reasoning GAN (BiGraphGAN), which consists of a graph generator $G$ and two discriminators (i.e., the appearance discriminator $D_a$ and shape discriminator $D_s$). An illustration of the proposed graph generator $G$ is shown in Figure~\ref{fig:method}. It contains three main parts, i.e., a sequence of bipartite graph reasoning (BGR) blocks modeling the long-range cross relations between the source pose $P_a$ and the target pose $P_b$, a sequence of interaction-and-aggregation (IA) blocks interactively enhancing both the person's shape and appearance feature representations, and an attention-based image fusion (AIF) module attentively generating the final result~$I_b^{'}$. In the following, we first present the proposed blocks and then introduce the optimization objective and implementation details of the proposed BiGraphGAN. Figure~\ref{fig:method} shows the proposed graph generator $G$, whose inputs are the source image $I_a$, the source pose $P_a$, and the target pose $P_b$. The generator $G$ aims to transfer the pose of the person in the source image $I_a$ from the source pose $P_a$ to the target pose $P_b$, generating the desired image $I_b^{'}$. Firstly, \hao{$I_a$, $P_a$, and $P_b$ are separately fed into three encoders to obtain the initial appearance code $F_0^i$, the initial source shape code $F_0^{pa}$, and the initial target shape code $F_0^{pb}$.} Note that we use the same shape encoder to learn both $P_a$ and $P_b$, i.e., the two shape encoders used for learning the two different poses share weights. 
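The weight sharing of the two shape encoders can be sketched as follows. This is a minimal NumPy illustration, not the actual architecture: the real encoders consist of several convolutional layers, whereas here a single hypothetical channel-mixing matrix stands in for the shared weights, and all tensor sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 128, 64                      # illustrative spatial size

# 18-channel joint heat maps for the source and target poses.
P_a = rng.random((18, H, W))
P_b = rng.random((18, H, W))

# One set of encoder weights, modeled here as a single 1x1 "convolution"
# (a channel-mixing matrix). Reusing it for both poses realizes weight sharing.
W_enc = rng.standard_normal((64, 18)) / np.sqrt(18)

def shape_encoder(pose):
    # (64, 18) @ (18, H*W) -> (64, H, W)
    return (W_enc @ pose.reshape(18, -1)).reshape(64, H, W)

F0_pa = shape_encoder(P_a)          # initial source shape code
F0_pb = shape_encoder(P_b)          # initial target shape code
```

Because `shape_encoder` closes over the single `W_enc`, both poses are embedded by exactly the same mapping, which is the point of sharing the encoder weights.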
\subsection{Pose-to-Pose Bipartite Graph Reasoning} The proposed BGR block aims to reason the long-range cross relations between the source pose and the target pose in a bipartite graph. All BGR blocks have an identical structure, as illustrated in Figure~\ref{fig:method}. \hao{Consider the $t$-th block given in Figure~\ref{fig:blocks}, whose inputs are the source shape code $F_{t-1}^{pa}$ and the target shape code $F_{t-1}^{pb}$.} The BGR block aims to reason these two codes in a bipartite graph via GCNs and outputs new shape codes. It contains two symmetrical branches (i.e., the B2A branch and A2B branch) because a bipartite graph is bidirectional. As shown in Figure~\ref{fig:motivation}(c), each source node is connected to all the target nodes; at the same time, each target node is connected to all the source nodes. In the following, we describe the detailed modeling process of the B2A branch. Note that the A2B branch is similar. \noindent \textbf{From Coordinate Space to Bipartite-Graph Space.} Firstly, we reduce the dimension of the source shape code $F_{t-1}^{pa}$ with the function $\varphi_a(F_{t-1}^{pa}) {\in} \mathbb{R}^{C \times D_a}$, where $C$ is the number of feature map channels, and $D_a$ is the number of nodes of $F_{t-1}^{pa}$. Then we reduce the dimension of the target shape code $F_{t-1}^{pb}$ with the function $\theta_b(F_{t-1}^{pb}) {=} H_b^\intercal {\in} \mathbb{R}^{D_b \times C}$, where $D_b$ is the number of nodes of $F_{t-1}^{pb}$. Next, we project $F_{t-1}^{pa}$ to a new feature $V_a$ in a bipartite graph using the projection function $H_b^\intercal$. Therefore we have: \begin{equation} \begin{aligned} V_a = H_b^\intercal \varphi_a(F_{t-1}^{pa}) = \theta_b(F_{t-1}^{pb}) \varphi_a(F_{t-1}^{pa}), \end{aligned} \end{equation} where both functions $\theta_b(\cdot)$ and $\varphi_a(\cdot)$ are implemented using a $1{\times}1$ convolutional layer.
This results in a new feature $V_a {\in} \mathbb{R}^{D_b \times D_a}$ in the bipartite graph, which represents the cross relations between the nodes of the target pose $F_{t-1}^{pb}$ and the source pose $F_{t-1}^{pa}$ (see Figure~\ref{fig:motivation}(c)). \noindent \textbf{Cross Reasoning with a Graph Convolution.} After projection, we build a fully connected bipartite graph with adjacency matrix $A_a {\in} \mathbb{R}^{D_b \times D_b}$. We then use a graph convolution to reason the long-range cross relations between the nodes from both the source and target poses, which can be formulated as: \begin{equation} \begin{aligned} M_a = ({\rm I} - A_a) V_a W_a, \end{aligned} \end{equation} where $W_a {\in} \mathbb{R}^{D_a \times D_a}$ denotes the trainable edge weights. We follow \cite{chen2019graph,zhang2019dual} and use Laplacian smoothing \cite{chen2019graph,li2018deeper} to propagate the node features over the bipartite graph. The identity matrix~${\rm I}$ can be viewed as a residual sum connection to alleviate optimization difficulties. Note that we randomly initialize both the adjacency matrix $A_a$ and the weights $W_a$, and then train them by gradient descent in an end-to-end manner. \noindent \textbf{From Bipartite-Graph Space to Coordinate Space.} After the cross-reasoning, the new updated feature $M_a$ is mapped back to the original coordinate space for further processing. Next, we add the result to the original source shape code $F_{t-1}^{pa}$ to form a residual connection \cite{he2016deep}. This process can be expressed as: \begin{equation} \begin{aligned} \tilde{F}_{t-1}^{pa} = \phi_a(H_b M_a) + F_{t-1}^{pa}, \end{aligned} \end{equation} where we reuse the projection matrix $H_b$ and apply a linear projection $\phi_a(\cdot)$ to project $M_a$ back to the original coordinate space. Therefore, we obtain the new source feature $\tilde{F}_{t-1}^{pa}$, which has the same dimension as the original one $F_{t-1}^{pa}$. 
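The three steps of the B2A branch (projection into the bipartite-graph space, cross reasoning with one graph convolution, and back-projection with a residual connection) can be sketched in NumPy as follows. The learned $1{\times}1$ convolutions are replaced here by plain random matrices and all sizes are illustrative assumptions, so this is a shape-level sketch rather than the actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
C, N = 128, 64          # channels, flattened spatial positions (illustrative)
Da, Db = 16, 16         # number of graph nodes for the source / target pose

Fpa = rng.standard_normal((C, N))   # source shape code F_{t-1}^{pa}
Fpb = rng.standard_normal((C, N))   # target shape code F_{t-1}^{pb}

# Learned 1x1-conv projections, modeled as plain matrices (hypothetical init).
W_phi = rng.standard_normal((N, Da)) / np.sqrt(N)   # phi_a: coordinates -> Da nodes
W_hb  = rng.standard_normal((N, Db)) / np.sqrt(N)   # theta_b: coordinates -> Db nodes
A_a   = rng.standard_normal((Db, Db)) * 0.01        # trainable adjacency (random init)
W_a   = rng.standard_normal((Da, Da)) / np.sqrt(Da) # trainable edge weights
W_out = rng.standard_normal((Da, N)) / np.sqrt(Da)  # output projection back to N

# 1) Coordinate space -> bipartite-graph space.
phi_a = Fpa @ W_phi            # C  x Da
H_bT  = (Fpb @ W_hb).T         # Db x C   (= theta_b(F_{t-1}^{pb}))
V_a   = H_bT @ phi_a           # Db x Da  cross-relation nodes

# 2) Cross reasoning with one graph convolution (Laplacian-smoothing form).
M_a = (np.eye(Db) - A_a) @ V_a @ W_a       # Db x Da

# 3) Back to coordinate space, with a residual connection.
H_b = Fpb @ W_hb               # C x Db, reusing the projection matrix
Fpa_new = (H_b @ M_a) @ W_out + Fpa        # C x N, same shape as the input
```

Tracing the shapes step by step confirms the dimension bookkeeping in the text: $V_a \in \mathbb{R}^{D_b \times D_a}$, and the updated feature keeps the dimensions of $F_{t-1}^{pa}$.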
Similarly, the A2B branch outputs the new target shape feature $\tilde{F}_{t-1}^{pb}$. Note that the idea behind the proposed BGR block was inspired by the GloRe unit proposed in \cite{chen2019graph}. The main difference is that the GloRe unit reasons the relations within the same feature map via a standard graph, while the proposed BGR block reasons the cross relations between feature maps of different poses using a bipartite graph. \subsection{Pose-to-Image Interaction and Aggregation} As shown in Figure~\ref{fig:method}, the proposed IA block receives the appearance code $F_{t-1}^i$, the new source shape code $\tilde{F}_{t-1}^{pa}$, and the new target shape code $\tilde{F}_{t-1}^{pb}$ as inputs. The IA block aims to simultaneously and interactively enhance $F_{t}^i$, $F_{t}^{pa}$ and $F_{t}^{pb}$. Specifically, the shape codes are first concatenated and then fed into two convolutional layers to produce the attention map $A_{t-1}$. Mathematically, \begin{equation} \begin{aligned} A_{t-1} = \sigma({\rm Conv}({\rm Concat}(\tilde{F}_{t-1}^{pa}, \tilde{F}_{t-1}^{pb}))), \end{aligned} \end{equation} where $\sigma(\cdot)$ denotes the element-wise Sigmoid function. \hao{Appearance and shape features are crucial for generating the final person image, since the appearance feature mainly focuses on the texture and color information of the clothes, while the shape feature mainly focuses on the body orientation and size information. Thus, we propose ``Appearance Code Enhancement'' to learn and enhance useful person appearance features, and ``Shape Code Enhancement'' to learn and enhance useful person shape features.
Having both ``Appearance Code Enhancement'' and ``Shape Code Enhancement'' together enables generating realistic person images.} \noindent \textbf{Appearance Code Enhancement.} After obtaining $A_{t-1}$, the appearance $F_{t-1}^i$ is enhanced by: \begin{equation} \begin{aligned} F_t^i = A_{t-1} \otimes F_{t-1}^i + F_{t-1}^i, \end{aligned} \label{eq:apperance} \end{equation} where $\otimes$ denotes the element-wise product. \hao{By multiplying with the attention map $A_{t-1}$, the new appearance code $F_t^i$ at certain locations can be either preserved or suppressed.} \noindent \textbf{Shape Code Enhancement.} \hao{ As the appearance code gets updated through Eq. \eqref{eq:apperance}, the shape code should also be updated to synchronize the change, i.e., update where to sample and put patches given the new appearance code. Therefore, the shape code update should incorporate the new appearance code.} Specifically, we concatenate $F_t^i$, $\tilde{F}_{t-1}^{pa}$ and $\tilde{F}_{t-1}^{pb}$, and pass them through two convolutional layers to obtain the updated shape codes $F_t^{pa}$ and $F_t^{pb}$ by splitting the result along the channel axis. This process can be formulated as: \begin{equation} \begin{aligned} F_{t}^{pa}, F_{t}^{pb} = {\rm Conv} ({\rm Concat}(F_t^i, \tilde{F}_{t-1}^{pa}, \tilde{F}_{t-1}^{pb})). \end{aligned} \end{equation} \hao{In this way, both new shape codes $F_{t}^{pa}$ and $F_{t}^{pb}$ can synchronize the changes caused by the new appearance code $F_t^i$.} \subsection{Attention-Based Image Fusion} In the $T$-th IA block, we obtain the final appearance code $F_T^{i}$. We then feed $F_T^{i}$ to an image decoder to generate the intermediate result $\tilde{I}_b$. At the same time, we feed $F_T^{i}$ to an attention decoder to produce the attention mask $A_i$. The attention decoder consists of several deconvolutional layers and a Sigmoid activation layer.
Thus, the attention decoder aims to generate a one-channel attention mask $A_i$, in which each pixel value is between 0 and 1. The attention mask $A_i$ aims to selectively pick useful content from both the input image $I_a$ and the intermediate result $\tilde{I}_b$ for generating the final result~$I_b^{'}$. This process can be expressed as: \begin{equation} \begin{aligned} I_b^{'} = I_a \otimes A_i + \tilde{I}_b \otimes (1 - A_i), \end{aligned} \label{eq:att} \end{equation} where $\otimes$ denotes an element-wise product. In this way, both the image decoder and the attention decoder can interact with each other and ultimately produce better results. \begin{figure*}[!t] \centering \includegraphics[width=1\linewidth]{framework_p.jpg} \caption{Illustration of the proposed PBGR block $t$, which consists of 18 branches. Each branch aims to model local transformations between each source sub-pose $F_{t-1}^{pai}$ and each target sub-pose $F_{t-1}^{pbi}$ in a bipartite graph via a BGR block presented in Figure~\ref{fig:blocks}. \hao{Note that the shape encoders can share network parameters, so that no extra parameters are introduced and training and testing are not significantly slowed down.} } \label{fig:method_p} \end{figure*} \section{Part-Aware BiGraphGAN} The proposed part-aware bipartite graph reasoning GAN (i.e., BiGraphGAN++) employs the same framework as BiGraphGAN, presented in Figure \ref{fig:method}, with the only difference being that we replace the BGR block from Figure \ref{fig:method} with the new PBGR block from Figure~\ref{fig:method_p}. \subsection{Part-Aware Bipartite Graph Reasoning} The framework of the proposed PBGR block is shown in Figure \ref{fig:method_p}. Specifically, we first follow OpenPose \cite{cao2017realtime} and decompose the overall source pose $P_a$ and target pose $P_b$ into 18 different sub-poses (i.e., $\{P_a^i\}_{i=1}^{18}$, and $\{P_b^i\}_{i=1}^{18}$) based on the inherent connection relationships among them.
Then the corresponding source and target sub-poses are concatenated and fed into the corresponding shape encoder to generate high-level feature representations. Consider the $t$-th block given in Figure \ref{fig:method_p}. Each source and target sub-pose feature representation can be represented as $F_{t-1}^{pai}$ and $F_{t-1}^{pbi}$, respectively. Then, the feature pair $[F_{t-1}^{pai}, F_{t-1}^{pbi}]$ is fed to the $i$-th BGR block to learn the local transformation for the $i$-th sub-pose, which can ease the learning and capture the change in pose of each part more precisely. Next, the updated feature representations $\tilde{F}_{t-1}^{pai}$ and $\tilde{F}_{t-1}^{pbi}$ are concatenated to represent the local transformation of the $i$-th sub-pose, i.e., $\tilde{F}_{t-1}^{pi}{=}[\tilde{F}_{t-1}^{pai}, \tilde{F}_{t-1}^{pbi}]$. Finally, we combine all the local transformations from all the different sub-poses to obtain the global transformation between the source pose $P_a$ and target pose $P_b$, which can be expressed as follows: \begin{equation} \begin{aligned} \tilde{F}_{t-1}^{p} = \tilde{F}_{t-1}^{p1} + \tilde{F}_{t-1}^{p2} + \dots + \tilde{F}_{t-1}^{pi} + \dots + \tilde{F}_{t-1}^{p18} . \end{aligned} \label{eq:all} \end{equation} \subsection{Part-Aware Interaction and Aggregation} The proposed part-aware IA block aims to simultaneously enhance $\tilde{F}_{t-1}^{p}$ and $F_{t-1}^{i} $. Specifically, the pose feature $\tilde{F}_{t-1}^{p}$ is fed into a Sigmoid activation layer to produce the attention map $A_{t-1}$. Mathematically, \begin{equation} \begin{aligned} A_{t-1} = \sigma(\tilde{F}_{t-1}^{p}), \end{aligned} \end{equation} where $\sigma(\cdot)$ denotes the element-wise Sigmoid function. By doing so, $A_{t-1}$ provides important guidance for understanding the spatial deformation of each part region between the source and target poses, specifying which positions in the source pose should be sampled to generate the corresponding target pose. 
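The part-aware aggregation of Eq.~\eqref{eq:all} and the subsequent sigmoid attention map can be sketched as follows. This is an illustrative NumPy sketch: the per-part BGR block is replaced by an identity stand-in, and the channel and spatial sizes are assumptions rather than the actual hyper-parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
C, N, P = 64, 32, 18    # channels, spatial positions, number of sub-poses

def bgr_block(f_pai, f_pbi):
    """Stand-in for the i-th BGR block (identity pass-through here;
    the real block performs the bipartite-graph cross reasoning)."""
    return f_pai, f_pbi

# Per-part sub-pose features (illustrative random tensors).
parts = [(rng.standard_normal((C, N)), rng.standard_normal((C, N)))
         for _ in range(P)]

# Eq. (all): sum the 18 local transformations into the global one.
F_p = np.zeros((2 * C, N))
for f_pai, f_pbi in parts:
    g_pai, g_pbi = bgr_block(f_pai, f_pbi)
    F_p += np.concatenate([g_pai, g_pbi], axis=0)   # local transformation

# Part-aware attention map: element-wise sigmoid of the pose feature.
A_t = 1.0 / (1.0 + np.exp(-F_p))
```

Each summand corresponds to one concatenated pair $[\tilde{F}_{t-1}^{pai}, \tilde{F}_{t-1}^{pbi}]$, and the resulting attention values are squashed into $[0, 1]$ by the sigmoid.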
\noindent \textbf{Appearance Code Enhancement.} After obtaining $A_{t-1}$, the appearance $F_{t-1}^i$ is enhanced by: \begin{equation} \begin{aligned} F_t^i = A_{t-1} \otimes F_{t-1}^i + F_{t-1}^i, \end{aligned} \end{equation} where $\otimes$ denotes an element-wise product. \noindent \textbf{Shape Code Enhancement.} Next, we concatenate $F_t^i $ and $F_{t-1}^{pi}$, and pass them through two convolutional layers to obtain the updated shape codes $F_t^{pai}$ and $F_t^{pbi}$ by splitting the result along the channel axis. This process can be formulated as: \begin{equation} \begin{aligned} F_t^{pi} = & [F_{t}^{pai}, F_{t}^{pbi}] \\ = & {\rm Conv} ({\rm Concat}(F_t^i, F_{t-1}^{pi})), i = 1, \cdots, 18. \end{aligned} \end{equation} In this way, both new shape codes $F_{t}^{pai}$ and $F_{t}^{pbi}$ can synchronize the changes caused by the new appearance code $F_t^i$. \section{Model Training} \noindent \textbf{Appearance and Shape Discriminators.} We adopt two discriminators for adversarial training. Specifically, we feed the image-image pairs ($I_a$, $I_b$) and ($I_a$, $I_b^{'}$) into the appearance discriminator $D_{app}$ to ensure appearance consistency. Meanwhile, we feed the pose-image pairs ($P_b$, $I_b$) and ($P_b$, $I_b^{'}$) into the shape discriminator $D_{sha}$ for shape consistency. Both discriminators (i.e., $D_{app}$ and $D_{sha}$) and the proposed graph generator $G$ are trained in an end-to-end way, enabling them to enjoy mutual benefits from each other in a joint framework. 
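One common way to realize the image-image and pose-image pairs fed to the two discriminators is channel-wise concatenation, as in conditional discriminators generally. The sketch below assumes this pairing mechanism and illustrative tensor sizes; the paper itself does not spell out the concatenation detail.

```python
import numpy as np

rng = np.random.default_rng(2)
H, W = 128, 64                     # illustrative image size
I_a  = rng.random((3, H, W))       # source image
I_b  = rng.random((3, H, W))       # real target image
I_bg = rng.random((3, H, W))       # generated target image I_b'
P_b  = rng.random((18, H, W))      # target pose as 18-channel heat maps

# Appearance discriminator: image-image pairs, concatenated channel-wise.
real_app = np.concatenate([I_a, I_b],  axis=0)    # 6  x H x W
fake_app = np.concatenate([I_a, I_bg], axis=0)    # 6  x H x W

# Shape discriminator: pose-image pairs.
real_sha = np.concatenate([P_b, I_b],  axis=0)    # 21 x H x W
fake_sha = np.concatenate([P_b, I_bg], axis=0)    # 21 x H x W
```

The appearance discriminator thus sees 6-channel inputs and the shape discriminator 21-channel inputs, each contrasting a real pair against a generated pair.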
\noindent \textbf{Optimization Objectives.} We follow \cite{zhu2019progressive,tang2020xinggan} and use the adversarial loss $\mathcal{L}_{gan}$, the pixel-wise $L1$ loss $\mathcal{L}_{l1}$, and the perceptual loss $\mathcal{L}_{per}$ as our optimization objectives: \begin{equation} \begin{aligned} \mathcal{L}_{full} = \lambda_{gan} \mathcal{L}_{gan} + \lambda_{l1} \mathcal{L}_{l1} + \lambda_{per} \mathcal{L}_{per}, \end{aligned} \label{eq:loss} \end{equation} where $\lambda_{gan}$, $\lambda_{l1}$, and $\lambda_{per}$ control the relative importance of the three objectives. For the perceptual loss, we follow \cite{zhu2019progressive,tang2020xinggan} and use the $Conv1\_2$ layer. \noindent \textbf{Implementation Details.} In our experiments, we follow previous work~\cite{zhu2019progressive,tang2020xinggan} and represent the source pose $P_a$ and the target pose $P_b$ as two 18-channel heat maps that encode the locations of 18 joints of a human body. The Adam optimizer \cite{kingma2014adam} is employed to learn the proposed BiGraphGAN and BiGraphGAN++ for around 90K iterations with $\beta_1{=}0.5$ and $\beta_2{=}0.999$. In our preliminary experiments, we found that the performance improves as $T$ increases, reaches its best at $T{=}9$, and then begins to decline. Thus, we set $T{=}9$ in the proposed graph generator. Moreover, $\lambda_{gan}$, $\lambda_{l1}$, $\lambda_{per}$ in Equation~\eqref{eq:loss}, and the number of feature map channels $C$, are set to 5, 10, 10, and 128, respectively. The proposed BiGraphGAN is implemented in PyTorch~\cite{paszke2019pytorch}. \section{Experiments} \begin{table*}[!t] \centering \caption{Quantitative comparison of different methods on Market-1501 and DeepFashion for person pose generation. For all metrics, higher is better.
($\ast$) denotes the results tested on our testing set.} \begin{tabular}{rccccccc} \toprule \multirow{2}{*}{Method} & \multicolumn{4}{c}{Market-1501} & \multicolumn{3}{c}{DeepFashion} \\ \cmidrule(lr){2-5} \cmidrule(lr){6-8} & SSIM $\uparrow$ & IS $\uparrow$ & Mask-SSIM $\uparrow$ & Mask-IS $\uparrow$ & SSIM $\uparrow$ & IS $\uparrow$ & PCKh $\uparrow$ \\ \hline PG2~\cite{ma2017pose} & 0.253 & 3.460 & 0.792 & 3.435 & 0.762 & 3.090 & - \\ DPIG~\cite{ma2018disentangled} & 0.099 & 3.483 & 0.614 & 3.491 & 0.614 & 3.228 & - \\ Deform~\cite{siarohin2018deformable} & 0.290 & 3.185 & 0.805 & 3.502 & 0.756 & 3.439 & -\\ C2GAN~\cite{tang2019cycle} & 0.282 & 3.349 & 0.811 & 3.510 & - & - & -\\ BTF~\cite{albahar2019guided} & - & - & - & - & 0.767 & 3.220 & -\\ \hline PG2$^\ast$~\cite{ma2017pose} & 0.261 & 3.495 & 0.782 & 3.367 & 0.773 & 3.163 & 0.89 \\ Deform$^\ast$~\cite{siarohin2018deformable} & 0.291 & 3.230 & 0.807 & 3.502 & 0.760 & 3.362 & 0.94 \\ VUNet$^\ast$~\cite{esser2018variational} & 0.266 & 2.965 & 0.793 & 3.549 & 0.763 & 3.440 & 0.93 \\ PATN$^\ast$~\cite{zhu2019progressive} & 0.311 & 3.323 & 0.811 & 3.773 & 0.773 & 3.209 & 0.96 \\ PoseStylizer$^\ast$~\cite{huang2020generating} & 0.312 & 3.132 & 0.808 & 3.729 & 0.775 & 3.292 & 0.96\\ XingGAN$^\ast$~\cite{tang2020xinggan} & 0.313 & 3.506 & 0.816 & \textbf{3.872} & 0.778 & 3.476 & 0.95\\ \hline BiGraphGAN (Ours) & 0.325 & 3.329 & 0.818 & 3.695 & 0.778 & 3.430 & \textbf{0.97} \\ BiGraphGAN++ (Ours) & \textbf{0.334} &\textbf{3.592} & \textbf{0.822} & 3.701 & \textbf{0.802} & \textbf{3.508} & \textbf{0.97} \\ \hline Real Data & 1.000 & 3.890 & 1.000 & 3.706 & 1.000 & 4.053 & 1.00 \\ \bottomrule \end{tabular} \label{tab:pose_reuslts} \end{table*} \begin{table*}[!ht] \centering \caption{Quantitative comparison of user study (\%) on Market-1501 and DeepFashion. `R2G' and `G2R' represent the percentage of real images rated as fake w.r.t.~all real images, and the percentage of generated images rated as real w.r.t. 
all generated images, respectively.} \begin{tabular}{rcccc} \toprule \multirow{2}{*}{Method} & \multicolumn{2}{c}{Market-1501} & \multicolumn{2}{c}{DeepFashion} \\ \cmidrule(lr){2-3} \cmidrule(lr){4-5} & R2G $\uparrow$ & G2R $\uparrow$ & R2G $\uparrow$ & G2R $\uparrow$ \\ \hline PG2~\cite{ma2017pose} & 11.20 & 5.50 & 9.20 & 14.90 \\ Deform~\cite{siarohin2018deformable} & 22.67 & 50.24 & 12.42 & 24.61 \\ C2GAN~\cite{tang2019cycle} & 23.20 & 46.70 & - & - \\ PATN~\cite{zhu2019progressive} & 32.23 & 63.47 & 19.14 & 31.78 \\ BiGraphGAN (Ours) & 35.76 & 65.91 & 22.39 & 34.16 \\ BiGraphGAN++ (Ours) & \textbf{37.32} & \textbf{66.83} & \textbf{23.76} & \textbf{35.57} \\ \bottomrule \end{tabular} \label{tab:pose_ruser} \end{table*} \begin{figure*}[!ht]\small \centering \subfigure[]{\label{fig:market1}\includegraphics[width=0.81\linewidth]{market1.jpg}} \subfigure[]{\label{fig:market2}\includegraphics[width=0.81\linewidth]{market2.jpg}} \caption{Qualitative comparisons of person pose generation on Market-1501. (a) From left to right: Source Image ($I_a$), Source Pose ($P_a$), Target Pose ($P_b$), Target Image ($I_b$), PG2~\cite{ma2017pose}, VUNet~\cite{esser2018variational}, Deform~\cite{siarohin2018deformable}, BiGraphGAN (Ours), and BiGraphGAN++ (Ours). (b) From left to right: Source Image ($I_a$), Source Pose ($P_a$), Target Pose ($P_b$), Target Image ($I_b$), PATN~\cite{zhu2019progressive}, PoseStylizer~\cite{huang2020generating}, XingGAN~\cite{tang2020xinggan}, BiGraphGAN (Ours), and BiGraphGAN++ (Ours).} \label{fig:mark_results} \end{figure*} \begin{figure*}[!htbp]\small \centering \subfigure[]{\label{fig:fashion1}\includegraphics[width=0.81\linewidth]{fashion1.jpg}} \subfigure[]{\label{fig:fashion2}\includegraphics[width=0.72\linewidth]{fashion2.jpg}} \caption{Qualitative comparisons of person pose generation on DeepFashion.
(a) From left to right: Source Image ($I_a$), Source Pose ($P_a$), Target Pose ($P_b$), Target Image ($I_b$), PG2~\cite{ma2017pose}, VUNet~\cite{esser2018variational}, Deform~\cite{siarohin2018deformable}, BiGraphGAN (Ours), and BiGraphGAN++ (Ours). (b) From left to right: Source Image ($I_a$), Source Pose ($P_a$), Target Pose ($P_b$), Target Image ($I_b$), PATN~\cite{zhu2019progressive}, XingGAN~\cite{tang2020xinggan}, BiGraphGAN (Ours), and BiGraphGAN++ (Ours).} \label{fig:fashion_results} \end{figure*} \subsection{Person Pose Synthesis} \noindent \textbf{Datasets.} We follow previous works \cite{ma2017pose,siarohin2018deformable,zhu2019progressive} and conduct extensive experiments on two public datasets, i.e., Market-1501 \cite{zheng2015scalable} and DeepFashion \cite{liu2016deepfashion}. Specifically, we adopt the training/test split used in \cite{zhu2019progressive,tang2020xinggan} for fair comparison. In addition, images are resized to $128 {\times} 64$ and $256 {\times} 256$ on Market-1501 and DeepFashion, respectively. \noindent \textbf{Evaluation Metrics.} We follow \cite{ma2017pose,siarohin2018deformable,zhu2019progressive} and employ Inception score (IS) \cite{salimans2016improved}, structural similarity index measure (SSIM) \cite{wang2004image}, and their masked versions (i.e., Mask-IS and Mask-SSIM) as our evaluation metrics to quantitatively measure the quality of the images generated by different approaches. Moreover, we employ the percentage of correct keypoints (PCKh) score proposed in \cite{zhu2019progressive} to explicitly evaluate the shape consistency of the person images generated for the DeepFashion dataset.
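SSIM and Mask-SSIM compare local statistics of a generated image and its reference. As a rough, hedged illustration only: the reference metric of \cite{wang2004image} uses an $11{\times}11$ sliding Gaussian window, whereas the sketch below uses a single global window, and all function names are our own choices, not part of the evaluation code of this paper.

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Simplified single-window SSIM from global image statistics
    (the reference metric uses a sliding Gaussian window instead)."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

def masked_ssim(x, y, mask, data_range=1.0):
    """Mask-SSIM analogue: evaluate the statistics only on the
    (binary) foreground mask, e.g. the pedestrian region."""
    fg = mask.astype(bool)
    return ssim_global(x[fg], y[fg], data_range)
```

The masked variant mirrors the idea behind Mask-SSIM: image quality is measured only inside the person foreground, ignoring the background.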
\noindent \textbf{Quantitative Comparisons.} We compare the proposed BiGraphGAN and BiGraphGAN++ with several leading person image synthesis methods, i.e., PG2~\cite{ma2017pose}, DPIG~\cite{ma2018disentangled}, Deform~\cite{siarohin2018deformable}, C2GAN~\cite{tang2019cycle}, BTF~\cite{albahar2019guided}, VUNet~\cite{esser2018variational}, PATN~\cite{zhu2019progressive}, PoseStylizer~\cite{huang2020generating}, and XingGAN~\cite{tang2020xinggan}. Note that all of them use the same training data and data augmentation to train the models. Quantitative comparison results are shown in Table~\ref{tab:pose_reuslts}. We observe that the proposed methods achieve the best results in most metrics, including SSIM and Mask-SSIM on Market-1501, and SSIM and PCKh on DeepFashion. For other metrics, such as IS, the proposed methods still achieve better scores than the most related model, PATN, on both datasets. These results validate the effectiveness of our proposed methods. \noindent \textbf{Qualitative Comparisons.} We also provide visual comparison results on both datasets in Figures~\ref{fig:mark_results} and~\ref{fig:fashion_results}. As shown on the left of both figures, the proposed BiGraphGAN and BiGraphGAN++ generate remarkably better results than PG2~\cite{ma2017pose}, VUNet~\cite{esser2018variational}, and Deform~\cite{siarohin2018deformable} on both datasets. To further evaluate the effectiveness of the proposed methods, we compare BiGraphGAN and BiGraphGAN++ with the strongest state-of-the-art models, i.e., PATN~\cite{zhu2019progressive}, PoseStylizer~\cite{huang2020generating}, and XingGAN~\cite{tang2020xinggan}, on the right of both figures. We again observe that our proposed BiGraphGAN and BiGraphGAN++ generate clearer and more visually plausible person images than PATN, PoseStylizer, and XingGAN on both datasets.
\noindent \textbf{User Study.} We also follow \cite{ma2017pose,siarohin2018deformable,zhu2019progressive} and conduct a user study to evaluate the quality of the generated images. Specifically, we follow the evaluation protocol used in \cite{zhu2019progressive,tang2020xinggan} for fair comparison. Comparison results of different methods are shown in Table~\ref{tab:pose_ruser}. We see that the proposed methods achieve the best results in all metrics, which further confirms that the images generated by the proposed BiGraphGAN and BiGraphGAN++ are more photorealistic. \begin{figure*}[!t] \centering \includegraphics[width=1\linewidth]{face_results.jpg} \caption{Qualitative comparisons of facial expression translation on Radboud Faces. From left to right: Source Image ($I_a$), Source Landmark ($P_a$), Target Landmark ($P_b$), Target Image ($I_b$), Pix2pix~\cite{isola2017image}, GPGAN~\cite{di2018gp}, PG2~\cite{ma2017pose}, CocosNet \cite{zhang2020cross}, C2GAN \cite{tang2019cycle}, BiGraphGAN (Ours), and BiGraphGAN++ (Ours).} \label{fig:face_results} \end{figure*} \begin{table*}[!ht] \centering \caption{Quantitative comparison of facial expression image synthesis on the Radboud Faces dataset.
For all the metrics except LPIPS, higher is better.} \begin{tabular}{rcccc} \toprule Method & AMT $\uparrow$ & SSIM $\uparrow$ & PSNR $\uparrow$ & LPIPS $\downarrow$ \\ \midrule Pix2pix~\cite{isola2017image} & 13.4 & 0.8217 & 19.9971 & 0.1334 \\ GPGAN~\cite{di2018gp} & 0.3 & 0.8185 & 18.7211 & 0.2531 \\ PG2~\cite{ma2017pose} & 28.4 & 0.8462 & 20.1462 & 0.1130 \\ CocosNet \cite{zhang2020cross} & 31.3 & 0.8524 & 20.7915 & 0.0985 \\ C2GAN \cite{tang2019cycle} & 34.2 & 0.8618 & 21.9192 & 0.0934 \\ BiGraphGAN (Ours) & 37.9 & 0.8644 & 27.5923 & 0.0806 \\ BiGraphGAN++ (Ours) & \textbf{39.1} & \textbf{0.8665} & \textbf{29.3917} & \textbf{0.0798} \\ \bottomrule \end{tabular} \label{tab:result_face} \end{table*} \subsection{Facial Expression Synthesis} \noindent\textbf{Datasets.} The Radboud Faces dataset~\cite{langner2010presentation} is used to conduct experiments on the facial expression generation task. This dataset consists of over 8,000 face images with eight different facial expressions, i.e., neutral, angry, contemptuous, disgusted, fearful, happy, sad, and surprised. We follow C2GAN \cite{tang2019cycle} and select 67\% of the images for training, while the remaining 33\% are used for testing. We use the public software OpenFace~\cite{amos2016openface} to extract facial landmarks. For the facial expression-to-expression translation task, we combine two different facial expression images of the same person to form an image pair for training (e.g., neutral and angry). Thus, we obtain 5,628 and 1,407 image pairs for the training and testing sets, respectively. \noindent\textbf{Evaluation Metrics.} We follow C2GAN \cite{tang2019cycle} and first adopt SSIM~\cite{wang2004image}, peak signal-to-noise ratio (PSNR), and learned perceptual image patch similarity (LPIPS) \cite{zhang2018unreasonable} for quantitative evaluation. Note that both SSIM and PSNR measure the image quality at a pixel level, while LPIPS evaluates the generated images at a deep feature level. 
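PSNR, in particular, is a direct function of the pixel-wise mean squared error. A minimal sketch (assuming images normalized to $[0,1]$; the helper name is ours, not from the paper's evaluation code):

```python
import numpy as np

def psnr(x, y, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a reference image x
    and a generated image y; higher means closer at the pixel level."""
    mse = np.mean((x - y) ** 2)
    if mse == 0.0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)
```

For instance, a constant per-pixel error of 0.1 yields an MSE of 0.01 and hence a PSNR of 20 dB.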
Next, we again follow C2GAN and adopt an Amazon Mechanical Turk (AMT) user study to evaluate the generated facial images. \noindent \textbf{Quantitative Comparisons.} To evaluate the effectiveness of the proposed BiGraphGAN, we compare it with several leading facial image generation methods, i.e., Pix2pix~\cite{isola2017image}, GPGAN~\cite{di2018gp}, PG2~\cite{ma2017pose}, CocosNet \cite{zhang2020cross}, and C2GAN \cite{tang2019cycle}. The results in terms of SSIM, PSNR, and LPIPS are shown in Table \ref{tab:result_face}. We observe that the proposed BiGraphGAN and BiGraphGAN++ achieve the best scores in all three evaluation metrics, confirming the effectiveness of our methods. Notably, the proposed BiGraphGAN is 5.6731 points higher than the current best method (i.e., C2GAN) in the PSNR metric. \noindent \textbf{Qualitative Comparisons.} We also provide qualitative results compared with the current leading models in Figure~\ref{fig:face_results}. We observe that GPGAN performs the worst among all comparison models. Pix2pix can generate correct expressions, but the faces are distorted. Moreover, the results of PG2 tend to be blurry. Compared with these methods, the results generated by the proposed BiGraphGAN are smoother, sharper, and contain more convincing details. \begin{figure*}[t] \centering \subfigure[]{\label{fig:ablation1}\includegraphics[width=0.551\linewidth]{ablation1.jpg}} \subfigure[]{\label{fig:ablation2}\includegraphics[width=0.44\linewidth]{ablation2.jpg}} \caption{Qualitative comparison of ablation study on Market-1501. (a) Qualitative comparisons of different baselines of the proposed BiGraphGAN. (b) Visualization of the learned attention masks and intermediate results.} \label{fig:ablation} \end{figure*} \begin{table*}[!t] \centering \caption{Ablation study of the proposed BiGraphGAN on Market-1501 for person pose generation.
For both metrics, higher is better.} \begin{tabular}{lcc} \toprule Baselines of BiGraphGAN & SSIM $\uparrow$ & Mask-SSIM $\uparrow$ \\ \midrule B1: Our Baseline & 0.305 & 0.804 \\ B2: B1 + B2A & 0.310 & 0.809 \\ B3: B1 + A2B & 0.310 & 0.808 \\ B4: B1 + A2B + B2A (Sharing) & 0.322 & 0.813 \\ B5: B1 + A2B + B2A (Non-Sharing) & 0.324 & 0.813 \\ B6: B5 + AIF & \textbf{0.325} & \textbf{0.818} \\ \bottomrule \end{tabular} \label{tab:ablation} \end{table*} \noindent \textbf{User Study.} Following C2GAN~\cite{tang2019cycle}, we conduct a user study to evaluate the quality of the images generated by different models, i.e., Pix2pix~\cite{isola2017image}, GPGAN~\cite{di2018gp}, PG2~\cite{ma2017pose}, CocosNet \cite{zhang2020cross}, and C2GAN \cite{tang2019cycle}. Comparison results are shown in Table~\ref{tab:result_face}. We observe that the proposed BiGraphGAN achieves the best results, which further validates that the images generated by the proposed model are more photorealistic. \subsection{Ablation Study} We perform extensive ablation studies to validate the effectiveness of each component of the proposed BiGraphGAN on the Market-1501 dataset. \noindent \textbf{Baselines of BiGraphGAN.} The proposed BiGraphGAN has six baselines (i.e., B1, B2, B3, B4, B5, B6), as shown in Table \ref{tab:ablation} and Figure~\ref{fig:ablation} (left). B1 is our baseline. B2 uses the proposed B2A branch to model the cross relations from the target pose to the source pose. B3 adopts the proposed A2B branch to model the cross relations from the source pose to the target pose. B4 combines both the A2B and B2A branches to model the cross relations between the source pose and the target pose. Note that both GCNs in B4 share parameters. B5 employs a non-sharing strategy between the two GCNs to model the cross relations. B6 is our full model and employs the proposed AIF module to enable the graph generator to attentively determine which part is most useful for generating the final person image.
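The roles of the A2B/B2A branches and the AIF module can be related to a minimal, hypothetical sketch of one bidirectional reasoning step and the attention-based fusion. The dot-product affinities, residual updates, single-step propagation, and all names below are illustrative assumptions, not the exact BGR/AIF implementation of this paper.

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def bipartite_reasoning(f_a, f_b, w_a2b, w_b2a):
    """One bidirectional reasoning step between source-pose node features
    f_a (N_a x C) and target-pose node features f_b (N_b x C).
    The A2B and B2A branches use separate (non-sharing) projection
    matrices, mirroring baseline B5."""
    adj_a2b = softmax(f_b @ f_a.T)           # N_b x N_a cross-graph edges
    adj_b2a = softmax(f_a @ f_b.T)           # N_a x N_b cross-graph edges
    f_b_new = f_b + adj_a2b @ (f_a @ w_a2b)  # A2B: source -> target
    f_a_new = f_a + adj_b2a @ (f_b @ w_b2a)  # B2A: target -> source
    return f_a_new, f_b_new

def attention_fuse(img_in, img_mid, attn_logits):
    """AIF-style fusion: a sigmoid mask selects, per pixel, between the
    input image and the intermediate generated image."""
    mask = 1.0 / (1.0 + np.exp(-attn_logits))
    return mask * img_in + (1.0 - mask) * img_mid
```

In this sketch, sharing `w_a2b` and `w_b2a` would correspond to baseline B4, and dropping one of the two updates to B2 or B3.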
\noindent \textbf{Ablation Analysis.} The results of the ablation study are shown in Table \ref{tab:ablation} and Figure~\ref{fig:ablation} (left). We observe that both B2 and B3 achieve significantly better results than B1, proving our initial hypothesis that modeling the cross relations between the source and target poses in a bipartite graph will boost the generation performance. In addition, we see that B4 outperforms B2 and B3, demonstrating the effectiveness of modeling the symmetric relations between the source and target poses. B5 achieves better results than B4, which indicates that using two separate GCNs to model the symmetric relations will improve the generation performance in the joint network. B6 is better than B5, which clearly proves the effectiveness of the proposed attention-based image fusion strategy. Moreover, we show several examples of the learned attention masks and intermediate results in Figure~\ref{fig:ablation} (right). We can see that the proposed module attentively selects useful content from both the input image and intermediate result to generate the final result, thus validating our design motivation. \noindent \textbf{BiGraphGAN vs. BiGraphGAN++.} We also provide comparison results of BiGraphGAN and BiGraphGAN++ on both Market-1501 and DeepFashion. The results for person pose image generation are shown in Tables \ref{tab:pose_reuslts} and \ref{tab:pose_ruser}. We see that BiGraphGAN++ achieves much better results in most metrics, indicating that the proposed PBGR module does indeed learn the local transformations among body parts, thus improving the generation performance. From the visualization results in Figures~\ref{fig:mark_results} and \ref{fig:fashion_results}, we can see that BiGraphGAN++ generates more photorealistic images with fewer visual artifacts than BiGraphGAN, on both datasets.
The same conclusion can be drawn from the facial expression synthesis task, as shown in Table \ref{tab:result_face} and Figure \ref{fig:face_results}. Overall, the proposed BiGraphGAN++ achieves better results than BiGraphGAN on both challenging tasks, validating the effectiveness of our network design. \section{Conclusion} In this paper, we propose a novel bipartite graph reasoning GAN (BiGraphGAN) framework for both the challenging person pose and facial image generation tasks. We introduce two novel blocks, i.e., the bipartite graph reasoning (BGR) block and interaction-and-aggregation (IA) block. The former is employed to model the long-range cross relations between the source pose and the target pose in a bipartite graph. The latter is used to interactively enhance both a person's shape and appearance features. To further capture the detailed local structure transformations among body parts, we propose a novel part-aware bipartite graph reasoning (PBGR) block. Extensive experiments in terms of both human judgments and automatic evaluation demonstrate that the proposed BiGraphGAN achieves remarkably better performance than the state-of-the-art approaches on three challenging datasets. Lastly, we believe that the proposed method will inspire researchers to explore cross-contextual information in other vision tasks. \end{document}
\section{Introduction} Given an infinite discrete group of isometries $\Gamma$ of a proper metric space $X$, the {\it orbital counting problem} studies, for fixed $x_0,y_0\in X$, the asymptotic as $t\rightarrow +\infty$ of $$ {\operatorname{Card}}\{\gamma\in\Gamma\;:\; d(x_0,\gamma y_0)\leq t\}\;. $$ This problem was initiated by Gauss in the Euclidean plane and by Huber in the real hyperbolic plane, and there is now a huge corpus of works on it, including the seminal results of Margulis's thesis; see for instance \cite{Babillot02a,Oh10,Oh13} and their references for historical remarks, as well as \cite{AthBufEskMir12, PauPolSha, Quint05, Sambarino13} for variations. Given an infinite subset of the orbit $\Gamma x_0$, defined in either an algebraic, a geometric or a probabilistic way, it is interesting to study the asymptotic growth of this subset, see for example \cite{PetRis09,BouKonSar10} and Chapter 4 of \cite{PauPolSha} for recent examples. In this paper, we consider the orbit points under the elements of a fixed nontrivial conjugacy class ${\mathfrak K}$ in $\Gamma$. More precisely, we will study the asymptotic growth as $t\rightarrow+\infty$ of the counting function $$ N_{{\mathfrak K},\,x_0}(t)={\operatorname{Card}}\{\gamma\in{\mathfrak K}\;:\; d(x_0,\gamma x_0)\leq t\}\; $$ introduced by Huber \cite{Huber56} in a special case. Although we will work in the framework of negative curvature in this paper, the counting problem in (infinite) conjugacy classes is interesting even for discrete isometry groups in (nonabelian) nilpotent or solvable Lie groups endowed with left-invariant distances. We refer to Section \ref{sect:countingfingengroup} for examples of computations of the growth of $N_{{\mathfrak K},\,x_0}(t)$ when $\Gamma$ is a finitely generated group and $X$ is the set $\Gamma$ endowed with a word metric. This paper opens a new field of research, studying which growth types (or relative growth types) fixed conjugacy classes may have in finitely generated groups.
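In such a discrete setting, the counting function can also be explored numerically. As a purely illustrative sketch (the free group $F(a,b)$, the conjugacy class of $ab$, and all function names are our own choices, not part of the arguments of this paper), one can enumerate the ball for the word metric and count the elements of a conjugacy class:

```python
INV = {"a": "A", "A": "a", "b": "B", "B": "b"}  # formal inverses

def reduced_words(max_len):
    """All reduced words in the free group F(a, b) of length <= max_len."""
    words, frontier = [""], [""]
    for _ in range(max_len):
        frontier = [w + s for w in frontier for s in INV
                    if not w or INV[w[-1]] != s]
        words.extend(frontier)
    return words

def cyclic_reduce(w):
    """Strip matching first/last letters until w is cyclically reduced."""
    while len(w) >= 2 and INV[w[0]] == w[-1]:
        w = w[1:-1]
    return w

def count_in_class(n, gamma0="ab"):
    """Number of elements of the conjugacy class of gamma0 (assumed
    cyclically reduced) in the ball of radius n for the word metric:
    two cyclically reduced words are conjugate in a free group if and
    only if one is a cyclic permutation of the other."""
    hits = 0
    for w in reduced_words(n):
        c = cyclic_reduce(w)
        if len(c) == len(gamma0) and c in gamma0 + gamma0:
            hits += 1
    return hits
```

For the class of $ab$ in $F(a,b)$ this gives $2,6,18,54$ elements in the balls of radius $2,4,6,8$, consistent with an exponential growth rate of $\frac12\log 3$.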
For word-hyperbolic groups and negatively curved manifolds, the conjugacy classes usually have constant exponential growth rate, as illustrated by the following result (see also Proposition \ref{prop:growthconjclaswordhyp} and Corollary \ref{coro:encadreloxo} for generalisations). \begin{theo}\label{theo:loggrowthintro} If $M$ is a compact negatively curved Riemannian manifold, if $h$ is the topological entropy of its geodesic flow, if $\Gamma$ is the covering group of a universal Riemannian cover $X\rightarrow M$, if ${\mathfrak K}$ is a nontrivial conjugacy class in $\Gamma$, then $$ \lim_{t\rightarrow+\infty} \frac{1}{t}\log N_{{\mathfrak K},\,x_0}(t)=\frac{h}{2}\;. $$ \end{theo} In this introduction from now on, we concentrate on the case when $X$ is the real hyperbolic plane $\maths{H}^2_\maths{R}$, and we assume that $x_0$ is not fixed by any nontrivial element of $\Gamma$, see the main body of the text for more general statements. Given a nontrivial element $\gamma$ of a discrete group of isometries $\Gamma$ of $\maths{H}^2_\maths{R}$, we will denote by $C_{\gamma},\tau_{\gamma},\iota_\gamma$ the following objects: \smallskip \begin{itemize} \item[$\bullet$] if $\gamma$ is loxodromic, then $C_{\gamma}$ is the translation axis of $\gamma$; with $\ell_\gamma$ the translation length of $\gamma$, we define $\tau_{\gamma}=(\frac{\cosh\ell_{\gamma}\,-1}{2})^{1/2}$ if $\gamma$ preserves the orientation and $\tau_{\gamma}= (\frac{\cosh \ell_{\gamma} \,+1}{2})^{1/2}$ otherwise; $\iota_\gamma$ is $2$ if there exists an element in $\Gamma$ exchanging the two fixed points of $\gamma$, and $1$ otherwise; \item[$\bullet$] if $\gamma$ is parabolic, then $C_{\gamma}$ is a horoball centred at the parabolic fixed point of $\gamma$; we set $\tau_{\gamma}= 2\sinh \frac{d(x,\gamma x)}{2}$ for any $x\in \partial C_{\gamma}$; we define $\iota_\gamma$ as $2$ if there exists a nontrivial elliptic element of $\Gamma$ fixing the fixed point of $\gamma$, and $1$ otherwise; 
\item[$\bullet$] if $\gamma$ is elliptic, then $C_{\gamma}$ is the fixed point set of $\gamma$ in $\maths{H}^2_\maths{R}$; if $\gamma$ is orientation-reversing, we assume in this introduction that the stabiliser of $C_{\gamma}$ is infinite; we set $\tau_{\gamma}=\sin\frac{\theta}{2}$ if $\gamma$ preserves the orientation with rotation angle $\theta$, and $\tau_{\gamma}=1$ otherwise; we define $\iota_\gamma=1$, unless $\gamma$ preserves the orientation with rotation angle different from $\pi$ and the stabiliser in $\Gamma$ of $C_\gamma$ is dihedral, in which case $\iota_\gamma=2$. \end{itemize} \smallskip We refer for instance to \cite{Roblin03} for the definition of the critical exponent $\delta_\Gamma$ of $\Gamma$, the Patterson-Sullivan measures $(\mu_{x})_{x\in \maths{H}^2_\maths{R}}$ of $\Gamma$, the Bowen-Margulis measure $m_{\rm BM}$ of $\Gamma$, and to \cite{OhSha13,ParPau13ETDS} for the definition of the skinning measure $\sigma_C$ of $\Gamma$ associated to a nonempty proper closed convex subset $C$ of $\maths{H}^2_\maths{R}$ (see also Section \ref{sec:rappels}). We denote by $\|\mu\|$ the total mass of a measure $\mu$ and by $\Delta_x$ the unit Dirac mass at a point $x$. The following result says in particular that the exponential growth rate of the orbit under a conjugacy class is $\frac{\delta_\Gamma}{2}$ and that the unit tangent vectors at $x_0$ to these orbit points equidistribute to the pulled-back Patterson-Sullivan measure. \begin{theo} \label{theo:intro} Let $\Gamma$ be a nonelementary finitely generated discrete group of isometries of $\maths{H}^2_\maths{R}$, and let ${\mathfrak K}$ be the conjugacy class of a fixed nontrivial element $\gamma_0\in\Gamma$. \smallskip (1) As $t\rightarrow+\infty$, we have $$ N_{{\mathfrak K},\,x_0}(t)\sim \frac{\iota_{\gamma_0}\,\|\mu_{x_0}\|\;\|\sigma_{C_{\gamma_0}}\|} {\delta_\Gamma\,\|m_{\rm BM}\|\,{\tau_{\gamma_0}}^{\delta_\Gamma}}\; e^{\frac{\delta_\Gamma}{2}\;t}\;. 
$$ If $\Gamma$ is arithmetic or if $M$ is compact, then the error term is $\operatorname{O}(e^{(\frac{\delta_\Gamma}{2}-\kappa) t})$ for some $\kappa >0$. \smallskip (2) Let $v_\gamma$ be the unit tangent vector at $x_0$ to the geodesic segment $[x_0,\gamma x_0]$ for every nontrivial $\gamma\in\Gamma$, and let $\pi_+:T^1_{x_0}\maths{H}^2_\maths{R}\rightarrow \partial_\infty \maths{H}^2_\maths{R}$ be the homeomorphism sending $v$ to the point at infinity of the geodesic ray with initial vector $v$. For the weak-star convergence of measures on $T^1_{x_0}\maths{H}^2_\maths{R}$, we have $$ \lim_{t\rightarrow+\infty}\; \frac{\delta_\Gamma\;\|m_{\rm BM}\| \,{\tau_{\gamma_0}}^{\delta_\Gamma}}{\|\mu_{x_0}\|\;\|\sigma_{C_{\gamma_0}}\|} \;e^{-\frac{\delta_\Gamma}{2}\; t} \sum_{\gamma\in{\mathfrak K},\; d(x_0,\gamma x_0)\leq t}\; \Delta_{v_\gamma} = (\pi_+^{-1})_*\mu_{x_0}\;. $$ \end{theo} When $\Gamma$ is a cocompact lattice in dimension $2$ and $\gamma_0$ is loxodromic, the first claim is due to Huber \cite[Theorem B]{Huber56} with an improved error bound in \cite{Huber98}. The following corollary, proved in Sections \ref{sect:loxodromic} and \ref{sect:parabolic}, is a generalisation of Huber's result for noncompact quotients and for parabolic conjugacy classes. A version for elliptic conjugacy classes follows from Corollary \ref{coro:mainellip}, we leave the formulation for the reader. \begin{coro}\label{coro:intro} Let $\Gamma$ be a torsion-free and orientation-preserving discrete group of isometries of $\maths{H}^2_\maths{R}$ such that the surface $\Gamma\backslash{\HH}^2_\RR$ has finite area, with genus $g$ and $p$ punctures. If ${\mathfrak K}$ is a conjugacy class of primitive loxodromic elements with translation length $\ell$, then as $t\rightarrow+\infty$, $$ N_{{\mathfrak K},\,x_0}(t)\sim \frac{\ell}{2\pi(2g+p-2)\sinh\frac\ell 2}\;e^{\frac t2}\,. 
$$ If ${\mathfrak K}$ is a conjugacy class of primitive parabolic elements, then as $t\rightarrow+\infty$, $$ N_{{\mathfrak K},\,x_0}(t) \sim\frac{1}{2\pi(2g+p-2)}\;e^{\frac t2}\,. $$ \end{coro} \medskip When $\Gamma$ is a uniform lattice, ${\mathfrak K}$ is a conjugacy class of loxodromic elements, and $\maths{H}^2_\maths{R}$ is replaced by a regular tree, the analog of Corollary \ref{coro:intro} is due to \cite{Douma11}. See \cite{BroParPau13} for the case of any locally finite tree and more general discrete groups of isometries. \medskip The main tool of this paper (see Section \ref{sec:rappels}) is a counting and equidistribution result for the common perpendiculars between locally convex subsets of simply connected negatively curved manifolds, proved in \cite{ParPau14}. In Section \ref{sect:counting}, we will use this tool in order to prove our abstract main result, Theorem \ref{theo:genHuberabstrait}, on the counting of the orbit points by the elements in a given conjugacy class. In Sections \ref{sect:loxodromic}, \ref{sect:parabolic} and \ref{sect:elliptic}, we give the elementary computations concerning the geometry of loxodromic, parabolic and elliptic isometries of a simply connected negatively curved manifold required to apply our abstract main result, proving as a special case the above Theorem \ref{theo:intro}. Finally, in Section \ref{sect:countingsubgroups}, we give some results on the counting problem of subgroups of $\Gamma$ in a given conjugacy class of subgroups. \section{Counting in conjugacy classes in finitely generated groups} \label{sect:countingfingengroup} In this section, we study the growth of a given conjugacy class in a finitely generated group endowed with a word metric, by giving three examples. We thank E.~Breuillard, Y.~Cornulier, S.~Grigorchuk, D.~Osin and R.~Tessera for discussions on this topic. Let $\Gamma$ be a finitely generated group, endowed with a finite generating set $S$. 
For every $\gamma\in\Gamma$, we denote by $\|\gamma\|$ the smallest length of a word in $S\cup S^{-1}$ representing $\gamma$. We endow $\Gamma$ with the left-invariant word metric $d_S$ associated to $S$, that is, $d_S(\gamma,\gamma')=\|\gamma^{-1}\gamma'\|$ for all $\gamma,\gamma' \in\Gamma$. Given a conjugacy class ${\mathfrak K}$ in $\Gamma$, we want to study the growth as $n\rightarrow+\infty$ of $$ N_{\mathfrak K}(n)=N_{{\mathfrak K},\,S}(n)={\operatorname{Card}}\;{\mathfrak K}\cap B_{d_S}(e,n)\;, $$ the cardinality of the intersection of the conjugacy class ${\mathfrak K}$ with the ball of radius $n$ centered at the identity element $e$ for the word metric $d_S$. Given two maps $f,g:\maths{N}\rightarrow\mathopen{]}0,+\infty\mathclose{[}\,$, we write $f\asymp g$ if there exists $c\in\maths{N}-\{0\}$ such that $g(n)\leq f(c\,n)$ and $f(n)\leq g(c\,n)$ for every $n\in\maths{N}$. Note that if $S'$ is another finite generating set of $\Gamma$, then $N_{{\mathfrak K},\,S'} \asymp N_{{\mathfrak K},\,S}$. The growth of a given conjugacy class in $\Gamma$ is at most the growth of $\Gamma$, and we refer for instance to \cite{Grigorchuk14, Mann12} and their references for information on the growth of groups. The growth of the trivial conjugacy class is trivial ($N_{\{e\}}(n)=1$ for every $n\in\maths{N}$). It would be interesting to know which growths given conjugacy classes may have between these two extremal bounds, and for which groups all nontrivial conjugacy classes have the same growth. We only study two examples below. The counting problem introduced in this paper is dual to the study of the asymptotic as $n\rightarrow+\infty$ either of the number of translation axes of (primitive) loxodromic elements meeting the ball of center $x_0$ and radius $n$, in the negatively curved manifold case, or of the number of (primitive) conjugacy classes meeting the ball of radius $n$, in the finitely generated group case.
These asymptotics have been studied a lot, for instance by Bowen and Margulis in the manifold case, and by Hull-Osin \cite{HulOsi13} (see also the references of \cite{HulOsi13}) in the finitely generated group case. In particular, Ol'shanskii \cite[Theo.~41.2]{Olshanskii91} has constructed groups with exponential growth rate and only finitely many conjugacy classes: at least one of them has the same growth rate as the whole group, contrary to the examples below. \medskip First, let $\Gamma=F(S)$ be the free group on a finite set $S$ of cardinality $|S|\geq 2$. Let ${\mathfrak K}$ be the conjugacy class in $\Gamma$ of a nonempty reduced and cyclically reduced word in $S\cup S^{-1}$, denoted by $\gamma_0$, of length $\ell_{\mathfrak K}= \inf_{\gamma\in{\mathfrak K}} \|\gamma\|$. Let $m_{\mathfrak K}$ be the number of cyclic conjugates of $\gamma_0$ (for instance $m_{\mathfrak K}=1$ if $\gamma_0=s^\ell$ for some $s\in S\cup S^{-1}$). We denote by $\lfloor x\rfloor$ the integral part of a real number $x$. \begin{prop}\label{prop:freegroup} For every $n\in\maths{N}$ with $n\geq \ell_{\mathfrak K}$, we have $$ N_{{\mathfrak K},\,S}(n)=m_{\mathfrak K}\; (2\,|S|-1)^{\big\lfloor\frac{n-\ell_{\mathfrak K}}{2}\big\rfloor}\;. $$ \end{prop} In particular, $\lim_{n\rightarrow+\infty} \frac{1}{n}\log N_{{\mathfrak K},\,S}(n)= \frac{\log (2|S|-1)}{2}$ does not depend on the nontrivial conjugacy class ${\mathfrak K}$, and is half the exponential growth rate of $\Gamma$ with respect to the generating set $S$ (see Proposition \ref{prop:growthconjclaswordhyp} for a generalisation). \medskip \noindent{\bf Proof. } Let $k=|S|$ and $\ell=\ell_{\mathfrak K}$.
Every element $\gamma$ in ${\mathfrak K}$ can be written uniquely as $\alpha\gamma_0'\alpha^{-1}$, where $\alpha$ is a reduced word in $S\cup S^{-1}$ and $\gamma'_0$ is a cyclic conjugate of $\gamma_0$, and where the writing is reduced, that is, the last letter of $\alpha$ is different from the inverse of the first letter $s_1$ of $\gamma_0'$ and from the last letter $s_\ell$ of $\gamma'_0$. In particular, $$ \|\gamma\|=2\,\|\alpha\|+\ell\;. $$ Note that $s_1^{-1}\neq s_\ell$, since $\gamma_0$ is cyclically reduced. For every $m\in\maths{N}$, there are $(2k-1)^{m}$ reduced words of length at most $m$ (including the empty word) whose last letter is different from $s_1^{-1}$ and $s_\ell$. Taking $m=\big\lfloor\frac{n-\ell}{2}\big\rfloor$, the result follows. \hfill$\Box$ \medskip \noindent{\bf Remark. } The group $\Gamma=F(S)$ acts faithfully on its Cayley graph associated to $S$ by left multiplication, and $N_{{\mathfrak K},\,S}(n)={\operatorname{Card}}({\mathfrak K}\cdot e\cap B(e,n))$. Proposition \ref{prop:freegroup} gives an exact expression for this orbit count, improving \cite[Thm.~1]{Douma11} in this special case for $(q+1)$-regular trees with $q$ odd. \medskip The following result says in particular that in a torsion-free word-hyperbolic group, the nontrivial conjugacy classes have constant exponential growth rate, equal to half the one of the ambient group. Recall (see for instance \cite[\S 5.1]{Champetier00}) that the {\it virtual center} $Z^{\rm virt}(\Gamma)$ of a nonelementary word-hyperbolic group $\Gamma$ is the finite subgroup of $\Gamma$ consisting of the elements $\gamma\in\Gamma$ acting by the identity on the boundary at infinity $\partial_\infty\Gamma$ of $\Gamma$, or, equivalently, having finitely many conjugates in $\Gamma$, or, equivalently, whose centraliser in $\Gamma$ has finite index in $\Gamma$. Note that $N_{{\mathfrak K}}(n)$ is bounded if (and only if) ${\mathfrak K}$ is the conjugacy class of an element in the virtual center.
\begin{prop} \label{prop:growthconjclaswordhyp} Let $\Gamma$ be a nonelementary word-hyperbolic group, $S$ a finite generating set of $\Gamma$, and ${\mathfrak K}$ the conjugacy class of an element in $\Gamma-Z^{\rm virt}(\Gamma)$. Then $$ \limsup_{n\rightarrow+\infty} \frac{1}{n}\log N_{{\mathfrak K},\,S}(n)= \frac{1}{2}\;\limsup_{n\rightarrow+\infty} \frac{1}{n}\log{\operatorname{Card}}\; B_{d_S}(e,n)\;. $$ \end{prop} \noindent{\bf Proof. } Let $\gamma_0\in \Gamma-Z^{\rm virt}(\Gamma)$ and $\delta= \limsup_{n\rightarrow+\infty} \frac{1}{n}\log{\operatorname{Card}}\; B_{d_S}(e,n)$. Let $C_{\gamma_0}$ be a quasi-translation axis of $\gamma_0$ if $\gamma_0$ has infinite order, and let $C_{\gamma_0}$ be the set of quasi-fixed points of $\gamma_0$ otherwise. Note that $C_{\gamma_0}$ is quasi-convex, that $Z_\Gamma(\gamma_0)$ preserves $C_{\gamma_0}$, and that $C_{\gamma_0}$ is at bounded distance from $Z_\Gamma(\gamma_0)$ in $\Gamma$. In particular, $\Gamma_0=Z_\Gamma(\gamma_0)$ is a quasi-convex-cocompact subgroup with infinite index in the nonelementary word hyperbolic group $\Gamma$. It is well-known that the exponential growth rate of $\Gamma/\Gamma_0$ is then equal to the exponential growth rate $\delta$ of $\Gamma$. Indeed, the limit set $\Lambda\Gamma_0$ of $\Gamma_0$ is then a proper subset of $\partial_\infty\Gamma$ and $\Gamma_0$ acts properly discontinuously on $\Gamma\cup (\partial_\infty\Gamma-\Lambda\Gamma_0)$. Let $\xi \in \partial_\infty \Gamma- \Lambda\Gamma_0$. If $U$ is a small enough neighbourhood of $\xi$ in $\Gamma\cup\partial_\infty\Gamma$, then there exists $N\in\maths{N}$ such that $U$ meets at most $N$ of its images by the elements of $\Gamma_0$, and for every $x\in U\cap \Gamma$, if $p:\Gamma\rightarrow\Gamma/\Gamma_0$ is the canonical projection, then $|d(x,e)-d(p(x),p(e))|$ is uniformly bounded. 
It is well-known (see for instance the proof of \cite[Coro.~1]{Roblin02}) that the (sectorial) exponential growth rate $\limsup_{n\rightarrow+\infty} \frac{1}{n}\log{\operatorname{Card}}\; \big(U\cap B_{d_S}(e,n)\big)$ of $\Gamma$ in $U$ is equal to $\delta$. This proves the above claim. Up to a bounded additive constant, the distance between $e$ and $\gamma^{-1}\gamma_0\gamma$ is equal to twice the distance from $\gamma$ to $C_{\gamma_0}$, by hyperbolicity. Hence the exponential growth rate of ${\mathfrak K}$ is half the exponential growth rate of $\Gamma/Z_\Gamma(\gamma_0)$, that is $\delta/2$. \hfill$\Box$ \bigskip Now, let $A$ be a free abelian group of rank $2k$, let $\langle\cdot,\cdot\rangle$ be an integral symplectic form on $A$, and let $\Gamma$ be the associated Heisenberg group, that is, the group with underlying set $A\times\maths{Z}$ and group law $$ (a,z)(a',z')=(a+a',z+z'+\langle a,a'\rangle)\;. $$ Note that $\Gamma$ is finitely generated, and we have an exact sequence of groups $$ 0\longrightarrow \maths{Z}\stackrel{i}{\longrightarrow}\Gamma \stackrel{\pi}{\longrightarrow}A\longrightarrow 0\;, $$ where $i:z\mapsto (0,z)$ has image the center of $\Gamma$ and $\pi:(a,z)\mapsto a$. Let ${\mathfrak K}$ be a nontrivial conjugacy class (that is, the conjugacy class of a noncentral element) in $\Gamma$. \begin{prop} We have $$ N_{{\mathfrak K}}(n)\asymp n^2\;. $$ \end{prop} In particular, the growth of any nontrivial conjugacy class in the Heisenberg group $\Gamma$ is quadratic. Note that ${\operatorname{Card}}\;B_\Gamma(e,n) \asymp n^{2k+2}$ and that the number of (primitive or not) conjugacy classes meeting the ball of radius $n$ is $\asymp n^2\ln n$ if $k=1$, see \cite[Ex.~2.4]{GubSap10}. \medskip \noindent{\bf Proof. } Let $\gamma_0=(a_0,z_0)$ be a noncentral element in $\Gamma$, so that $a_0\neq 0$, and let $\|\gamma_0\|$ be its distance to the identity element $e$ for a given word metric on $\Gamma$. 
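\medskip The elementary algebraic identities used in this proof (the inverse formula, the conjugation formula below, and the central value of commutators) can be verified mechanically. The following sketch (ours, purely illustrative) does so for $k=1$ with the standard symplectic form on $\maths{Z}^2$; note that with the group law above, the central coordinate of the commutator $[(a,0)^p,(b,0)^q]$ comes out as $2pq\,\langle a,b\rangle$:

```python
import random

def sp(a, b):
    """Standard integral symplectic form on Z^2: <a,b> = a_1 b_2 - a_2 b_1."""
    return a[0] * b[1] - a[1] * b[0]

def mul(g, h):
    """Heisenberg group law (a,z)(a',z') = (a+a', z+z'+<a,a'>)."""
    (a, z), (ap, zp) = g, h
    return ((a[0] + ap[0], a[1] + ap[1]), z + zp + sp(a, ap))

def inv(g):
    """(a,z)^{-1} = (-a,-z), since <a,-a> = 0."""
    (a, z) = g
    return ((-a[0], -a[1]), -z)

def conj(g, h):
    """g h g^{-1}."""
    return mul(mul(g, h), inv(g))

def power(g, p):
    """p-th power (p >= 0) by repeated multiplication."""
    r = ((0, 0), 0)
    for _ in range(p):
        r = mul(r, g)
    return r

E = ((0, 0), 0)
rng = random.Random(0)
for _ in range(100):
    a, b, a0 = [tuple(rng.randint(-5, 5) for _ in range(2)) for _ in range(3)]
    z, z0 = rng.randint(-5, 5), rng.randint(-5, 5)
    g, g0 = (a, z), (a0, z0)
    assert mul(mul(g, g0), (b, 1)) == mul(g, mul(g0, (b, 1)))  # associativity
    assert mul(g, inv(g)) == E                                 # inverse formula
    # conjugation only shifts the central coordinate:
    assert conj(g, g0) == (a0, z0 + 2 * sp(a, a0))
    # commutators are central; with this group law the central coordinate
    # of [(a,0)^p, (b,0)^q] equals 2pq<a,b>:
    p, q = rng.randint(1, 4), rng.randint(1, 4)
    gp, hq = power((a, 0), p), power((b, 0), q)
    assert mul(mul(gp, hq), mul(inv(gp), inv(hq))) == ((0, 0), 2 * p * q * sp(a, b))
print("Heisenberg identities verified")
```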
Since $\pi:\Gamma\rightarrow A$ is the abelianisation map, whose kernel is the center $Z$ of $\Gamma$, we have $\pi({\mathfrak K})=\{\pi(\gamma_0)\}$ and $$ {\mathfrak K}\subset \pi^{-1}(\{\pi(\gamma_0)\})=Z\,\gamma_0\;. $$ Since $\langle\cdot,\cdot\rangle$ is nondegenerate and $a_0\neq 0$, there exists $b_0\in A$ such that $n_0=2\langle a_0,b_0\rangle\neq 0$. For every $(a,z)\in\Gamma$, since $(a,z)^{-1}=(-a,-z)$, it is easy to compute that $$ (a,z)(a_0,z_0)(a,z)^{-1}=(a_0,z_0+2\langle a,a_0\rangle)\;. $$ Hence, with $Z^{n_0}=\{(0,n_0 n)\;:\;n\in\maths{Z}\}$, which is a finite index subgroup of $Z$, we have $$ Z^{n_0}\,\gamma_0\subset {\mathfrak K}\;. $$ We have $$ {\operatorname{Card}}\;{\mathfrak K}\cap B(e,n) \leq {\operatorname{Card}}\,\big(Z\cap B(e,n)\,\gamma_0^{-1}\big) \leq {\operatorname{Card}}\;Z\cap B(e,n+\|\gamma_0\|)\;, $$ and similarly, ${\operatorname{Card}}\;{\mathfrak K}\cap B(e,n) \geq{\operatorname{Card}}\;Z^{n_0}\cap B(e,n-\|\gamma_0\|)$. We hence only have to prove that for every finite index subgroup $Z'$ of $Z$, we have ${\operatorname{Card}}\;Z'\cap B(e,n)\asymp n^2$. This is well known (see for instance \cite[VII.21]{Harpe00} when $A$ has rank $2$): for instance, we have $[(a_0,0)^p,(b_0,0)^q] =(0,2pq\, \langle a_0,b_0\rangle)$ for all $p,q\in\maths{Z}$, and the distance to $e$ of the commutator on the left hand side of this equality is at most $c(p+q)$, for some constant $c>0$. \hfill$\Box$ \section{Counting common perpendicular arcs} \label{sec:rappels} In this section, we briefly review a simplified version of the geometric counting and equidistribution result proved in \cite{ParPau14}, which is the main tool in this paper (see also \cite{ParPauRev} for related references, \cite{ParPauTou} for arithmetic applications in real hyperbolic spaces and \cite{ParPauHeis} for the case of locally symmetric spaces). 
We refer to \cite{BriHae99} for the background definitions and properties concerning the isometries of $\operatorname{CAT}(-1)$ spaces. Let $\wt M$ be a complete simply connected Riemannian manifold with (dimension at least $2$ and) pinched negative sectional curvature $-b^2\le K\le -1$, let $x_0\in\wt M$, and let $T^1\wt M$ be the unit tangent bundle of $\wt M$. Let $\Gamma$ be a nonelementary discrete group of isometries of $\wt M$ and let $M=\Gamma\backslash\wt M$ and $T^1M= \Gamma\backslash T^1\wt M$ be the quotient orbifolds. We denote by $\partial_\infty \wt M$ the boundary at infinity of $\wt M$, by $\Lambda \Gamma$ the limit set of $\Gamma$ and by $(\xi,x,y)\mapsto \beta_\xi(x,y)$ the Busemann cocycle on $\partial_\infty \wt M\times \wt M\times \wt M$ defined by $$ (\xi, x,y)\mapsto \beta_{\xi}(x,y)= \lim_{t\to+\infty}d(\rho_t,x)-d(\rho_t,y)\;, $$ where $\rho:t\mapsto \rho_t$ is any geodesic ray with point at infinity $\xi$ and $d$ is the Riemannian distance. For every $v\in T^1\wt M$, let $\pi(v)\in \wt M$ be its origin, and let $v_-, v_+$ be the points at infinity of the geodesic line in $\wt M$ whose tangent vector at time $t=0$ is $v$. We denote by $\pi_+:T^1_{x_0}\wt M\rightarrow \partial_\infty \wt M$ the homeomorphism $v\mapsto v_+$. \medskip Let $D^-$ and $D^+$ be nonempty proper closed convex subsets in $\wt M$, with stabilisers $\Gamma_{D^-}$ and $\Gamma_{D^+}$ in $\Gamma$, such that the families $(\gamma D^-)_{\gamma\in\Gamma/\Gamma_{D^-}}$ and $(\gamma D^+)_{\gamma\in\Gamma/\Gamma_{D^+}}$ are locally finite in $\wt M$. We denote by $\normalpm D^\mp$ the {\it outer/inner unit normal bundle} of $\partial D^\mp$, that is, the set of $v\in T^1\wt M$ such that $\pi(v)\in \partial D^\mp$, $v_\pm \in \partial_\infty \wt M-\partial_\infty D^\mp$ and the closest point projection on $D^\mp$ of $v_\pm $ is $\pi(v)$. 
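\medskip For a concrete illustration of the Busemann cocycle (ours, not needed in the sequel): in the upper half-plane model of ${\HH}^2_\RR$, taking $\xi=\infty$ and the geodesic ray $\rho_t=ie^t$, one has the classical closed form $\beta_\infty(x,y)=\ln\frac{\Im\,y}{\Im\,x}$. The defining limit can be checked numerically:

```python
from math import acosh, exp, log

def dist_h2(z, w):
    """Hyperbolic distance in the upper half-plane model:
    cosh d(z,w) = 1 + |z - w|^2 / (2 Im z Im w)."""
    return acosh(1 + abs(z - w) ** 2 / (2 * z.imag * w.imag))

def busemann_infinity(x, y, t=40.0):
    """Truncation at time t of beta_infty(x,y) = lim d(rho_t,x) - d(rho_t,y),
    where rho: t -> i e^t is a unit-speed geodesic ray with point at
    infinity xi = infinity."""
    rho_t = 1j * exp(t)
    return dist_h2(rho_t, x) - dist_h2(rho_t, y)

x, y = 0.3 + 0.5j, -1.2 + 2.0j
approx = busemann_infinity(x, y)
exact = log(y.imag / x.imag)  # classical closed form for xi = infinity
assert abs(approx - exact) < 1e-9
print(approx, exact)
```

The truncation error is $\operatorname{O}(e^{-2t})$, far below the tolerance used here.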
For every $\gamma,\gamma'$ in $\Gamma$ such that $\gamma D^-$ and $\gamma' D^+$ have a common perpendicular (that is, if the closures $\overline{\gamma D^-}$ and $\overline{\gamma' D^+}$ in $\wt M\cup\partial_\infty \wt M$ are disjoint), we denote by $\alpha_{\gamma,\,\gamma'}$ this common perpendicular (starting from $\gamma D^-$ at time $t=0$), by $\ell(\alpha_{\gamma,\,\gamma'})$ its length, and by $v^-_{\gamma,\,\gamma'} \in \gamma \normalout D^-$ its initial tangent vector. The {\em multiplicity} of $\alpha_{\gamma,\,\gamma'}$ is $$ m_{\gamma,\,\gamma'}= \frac 1{{\operatorname{Card}}(\gamma\Gamma_{D^-}\gamma^{-1}\cap\gamma'\Gamma_{D^+}{\gamma'}^{-1})}\,, $$ which equals $1$ when $\Gamma$ acts freely on $T^1\wt M$ (for instance when $\Gamma$ is torsion-free). Let $$ {\cal N}_{D^-,\,D^+}(s)=\sum_{\substack{ (\gamma,\,\gamma')\in \,\Gamma\backslash((\Gamma/\Gamma_{D^-})\times (\Gamma/\Gamma_{D^+}))\\ \phantom{\big|}\overline{\gamma D^-}\,\cap \,\overline{\gamma' D^+}\, =\emptyset,\; \ell(\alpha_{\gamma,\, \gamma'})\leq s}} m_{\gamma,\,\gamma'}= \sum_{\substack{[\gamma]\in\, \Gamma_{D^-}\backslash\Gamma/\Gamma_{D^+}\\ \phantom{\big|}\overline{D^-}\,\cap \,\overline{\gamma D^+}\,= \emptyset,\; \ell(\alpha_{e,\, \gamma})\leq s}} m_{e,\,\gamma} \;, $$ where $\Gamma$ acts diagonally on $(\Gamma/\Gamma_{D^-})\times (\Gamma/\Gamma_{D^+})$. When $\Gamma$ is torsion-free, ${\cal N}_{D^-,\,D^+}(s)$ is the number of the common perpendiculars of length at most $s$ between the images of $D^-$ and $D^+$ in $M$, with multiplicities coming from the fact that $\Gamma_{D^\pm}\backslash D^\pm$ is not assumed to be embedded in $M$. We refer to \cite[\S 4]{ParPau14} for the use of H\"older-continuous potentials on $T^1\wt M$ to modify this counting function by adding weights. Recall the following notions (see for instance \cite{Roblin03}). 
The {\em critical exponent} of $\Gamma$ is $$ \delta_{\Gamma}=\limsup_{N\to+\infty}\frac 1N \ln{\operatorname{Card}}\{\gamma\in\Gamma: d(x_{0},\gamma x_{0})\leq N\}\,, $$ which is positive, finite, independent of $x_{0}$ (and equal to the topological entropy $h$ if $\Gamma$ is cocompact and torsion-free). Let $(\mu_{x})_{x\in \wt M}$ be a {\em Patterson-Sullivan density} for $\Gamma$, that is, a family $(\mu_{x})_{x\in \wt M}$ of nonzero finite measures on $\partial_{\infty}\wt M$ whose support is $\Lambda\Gamma$, such that $\gamma_*\mu_x=\mu_{\gamma x}$ and $$ \frac{d\mu_{x}}{d\mu_{y}}(\xi)=e^{-\delta_{\Gamma}\beta_{\xi}(x,\,y)} $$ for all $\gamma\in\Gamma$, $x,y\in\wt M$ and $\xi\in\partial_{\infty}\wt M$. The {\em Bowen-Margulis measure} $\wt m_{\rm BM}$ for $\Gamma$ on $T^1\wt M$ is defined, using Hopf's parametrisation $v\mapsto (v_-,v_+,\beta_{v_+}(x_0,\pi(v))\,)$ of $T^1\wt M$, by $$ d\wt m_{\rm BM}(v)=e^{-\delta_{\Gamma}(\beta_{v_{-}}(\pi(v),\,x_{0})+ \beta_{v_{+}}(\pi(v),\,x_{0}))}\; d\mu_{x_{0}}(v_{-})\,d\mu_{x_{0}}(v_{+})\,dt\,. $$ The measure $\wt m_{\rm BM}$ is nonzero and independent of $x_{0}$. It is invariant under the geodesic flow, the antipodal map $\iota:v\mapsto -v$ and the action of $\Gamma$, and thus defines a nonzero measure $m_{\rm BM}$ on $T^1M=\Gamma\backslash T^1\wt M$, called the {\em Bowen-Margulis measure} on $M$, which is invariant under the geodesic flow of $M$ and the antipodal map. If $\wt M$ is symmetric and if $\Gamma$ is geometrically finite (for instance if $\wt M={\HH}^2_\RR$ and $\Gamma$ is finitely generated), then $m_{\rm BM}$ is finite. See for instance \cite{DalOtaPei00} for many more examples. If $m_{\rm BM}$ is finite, then $m_{\rm BM}$ is mixing under the geodesic flow if $\wt M$ is symmetric or if $\Lambda\Gamma$ is not totally disconnected (hence if $M$ is compact), see for instance \cite{Babillot02b,DalBo99}. 
\medskip Using the endpoint homeomorphisms $v\mapsto v_\pm$ from $\normalpm {D^\mp}$ to $\partial_{\infty}\wt M-\partial_{\infty}D^\mp$, the {\em skinning measure} $\wt\sigma_{D^\mp}$ of $\Gamma$ on $\normalpm{D^\mp}$ is defined by $$ d\wt\sigma_{D^\mp}(v) = e^{-\delta\,\beta_{v_{\pm}}(\pi(v),\,x_{0})}\,d\mu_{x_{0}}(v_{\pm})\,, $$ see \cite[\S 1.2]{OhSha13} when $D^\mp$ is a horoball or a totally geodesic subspace in $\wt M$ and \cite{ParPau13ETDS}, \cite{ParPau14} for the general case of convex subsets in variable curvature and with a potential. The measure $\wt\sigma_{D^\mp}$ is independent of $x_{0}\in\wt M$, it is nonzero if $\Lambda\Gamma$ is not contained in $\partial_{\infty} D^\mp$, and satisfies $\wt\sigma_{\gamma D^\mp} =\gamma_{*}\wt \sigma_{D^\mp}$ for every $\gamma\in\Gamma$. Since the family $(\gamma D^\mp)_{\gamma\in\Gamma/\Gamma_{D^\mp}}$ is locally finite in $\wt M$, the measure $\sum_{\gamma \in \Gamma/\Gamma_{D^\mp}} \;\gamma_*\wt\sigma_{D^\mp}$ is a well defined $\Gamma$-invariant locally finite (nonnegative Borel) measure on $T^1\wt M$, hence it induces a locally finite measure $\sigma_{D^\mp}$ on $T^1M$, called the {\em skinning measure} of $D^\mp$ in $T^1M$. If $\Gamma_{D^\mp}\backslash\partial D^\mp$ is compact, then $\sigma_{D^\mp}$ is finite. We refer to \cite[\S 5]{OhShaCircles} and \cite[Theo.~9]{ParPau13ETDS} for finiteness criteria of the skinning measure $\sigma_{D^\mp}$. \medskip The following result on the asymptotic behaviour of the counting function ${\cal N}_{D^-,\,D^+}$ is a special case of more general results \cite[Coro.~20, 21, Theo.~28]{ParPau14}. We refer to \cite{ParPauRev} for a survey of the particular cases known before \cite{ParPau14}, due to Huber, Margulis, Herrmann, Cosentino, Roblin, Eskin-McMullen, Oh-Shah, Martin-McKee-Wambach, Pollicott, and the authors for instance. \begin{theo}\label{theo:mainequicountdown} Let $\Gamma,D^-,D^+$ be as above. 
Assume that the measures $m_{\rm BM},\sigma_{D^-},\sigma_{D^+}$ are nonzero and finite, and that $m_{\rm BM}$ is mixing for the geodesic flow of $T^1M$. Then $$ {\cal N}_{D^-,\,D^+}(s)\;\sim\; \frac{\|\sigma_{D^-}\|\;\|\sigma_{D^+}\|}{\delta_\Gamma\;\|m_{\rm BM}\|}\; e^{\delta_\Gamma \,s}\;, $$ as $s\rightarrow+\infty$. If $\Gamma$ is arithmetic or if $M$ is compact, then the error term is $\operatorname{O}(e^{(\delta_\Gamma-\kappa) s})$ for some $\kappa >0$. Furthermore, the initial vectors of the common perpendiculars equidistribute in the outer unit normal bundle of $D^-$: \begin{equation}\label{eq:equidistribdown} \lim_{s\rightarrow+\infty}\; \frac{\delta_\Gamma\;\|m_{\rm BM}\|}{\|\sigma_{D^-}\|\;\|\sigma_{D^+}\|} \;e^{-\delta_\Gamma\, s}\; \sum_{\substack{[\gamma]\in\, \Gamma_{D^-}\backslash\Gamma/\Gamma_{D^+}\\ \phantom{\big|}\overline{D^-}\,\cap \,\overline{\gamma D^+}\,= \emptyset,\; \ell(\alpha_{e,\, \gamma})\leq s}} m_{e,\,\gamma} \; \Delta_{v^-_{e,\,\gamma}}\;=\; \frac{\wt\sigma_{D^-}}{\|\sigma_{D^-}\|}\; \end{equation} for the weak-star convergence of measures on the locally compact space $T^1\wt M$. \hfill$\Box$ \end{theo} \section{Counting in conjugacy classes} \label{sect:counting} Let $\wt M,x_0,\Gamma$ be as in the beginning of Section \ref{sec:rappels}. For any nontrivial element $\gamma$ in $\Gamma$, let $C_\gamma$ be $\bullet$~ the translation axis of $\gamma$ if $\gamma$ is loxodromic, $\bullet$~ the fixed point set of $\gamma$ if $\gamma$ is elliptic, $\bullet$~ a horoball centered at the fixed point of $\gamma$ if $\gamma$ is parabolic, \noindent which is a nonempty proper closed convex subset of $\wt M$. We assume (this condition is automatically satisfied unless $\gamma'$ is parabolic) that $\gamma C_{\gamma'}=C_{\gamma\ga'\gamma^{-1}}$ for all $\gamma\in\Gamma$ and $\gamma'\in \Gamma- \{e\}$. 
By the equivariance properties of the skinning measures, the total mass of $\sigma_{C_\gamma}$ depends only on the conjugacy class ${\mathfrak K}$ of $\gamma$, and will be denoted by $\|\sigma_{\mathfrak K}\|$. This quantity, called the {\it skinning measure} of ${\mathfrak K}$, is positive and finite for instance when $\gamma$ is loxodromic, since $\Lambda\Gamma$ contains at least $3$ points and the image of $C_\gamma$ in $M$ is compact. In Sections \ref{sect:parabolic} and \ref{sect:elliptic}, we will give other classes of examples of conjugacy classes ${\mathfrak K}$ with positive and finite skinning measure $\|\sigma_{\mathfrak K}\|$, and prove in particular that this is always true if $\wt M=\maths{H}^2_\maths{R}$ except possibly when $\gamma$ is elliptic and orientation-reversing. We define $$ m_\gamma= \frac{1}{{\operatorname{Card}}(\Gamma_{x_0}\cap \Gamma_{C_\gamma})}\,, $$ which is a natural complexity of $\gamma$, independent of the choice of $C_\gamma$ when $\gamma$ is parabolic, and equal to $1$ if the stabiliser of $x_0$ in $\Gamma$ is trivial (for instance if $\Gamma$ is torsion-free). Note that for every $\alpha\in\Gamma$, the real number $m_{\alpha\gamma\alpha^{-1}}$ depends only on the double coset of $\alpha$ in $\Gamma_{x_0}\backslash\Gamma/\Gamma_{C_{\gamma}}$. The centraliser $Z_\Gamma(\gamma)$ of $\gamma$ in $\Gamma$ is contained in the stabiliser of $C_\gamma$ in $\Gamma$. The index $[\Gamma_{C_\gamma}:Z_\Gamma(\gamma)]$ depends only on the conjugacy class ${\mathfrak K}$ of $\gamma$; it will be denoted by $i_{\mathfrak K}$ and called the {\it index} of ${\mathfrak K}$. We assume in what follows that $i_{\mathfrak K}$ is finite, which is in particular the case if $\gamma$ is loxodromic (the stabiliser of its translation axis $C_\gamma$ is virtually cyclic). 
In Sections \ref{sect:parabolic} and \ref{sect:elliptic}, we will give other classes of examples of conjugacy classes ${\mathfrak K}$ with finite index $i_{\mathfrak K}$, and prove in particular that this is always true if $\wt M=\maths{H}^2_\maths{R}$. \medskip We define the counting function $$ N_{{\mathfrak K},\,x_0}(t)=\sum_{\alpha\in{\mathfrak K},\; d(x_0,\,\alpha x_0)\leq t} m_\alpha\,. $$ When the stabiliser of $x_0$ in $\Gamma$ is trivial, we recover the definition of the Introduction. \medskip Let $\psi:[0,+\infty\mathclose{[}\rightarrow[0,+\infty\mathclose{[}$ be an eventually nondecreasing map such that $\lim_{t\rightarrow+\infty} \psi(t) = +\infty$. We will say that a nontrivial element $\gamma_0\in\Gamma$ is {\it $\psi$-equitranslating} if for every $x\in \wt M$ at distance big enough from $C_{\gamma_0}$, we have $$ d(x,C_{\gamma_0})=\psi(d(x,\gamma_0 x))\,. $$ Note that this condition depends only on the conjugacy class of $\gamma_0$. When $\gamma_0$ is parabolic, up to replacing $\psi$ by $\psi+c$ for some constant $c\in\maths{R}$, this condition does not depend on the choice of the horoball $C_{\gamma_0}$. In Sections \ref{sect:loxodromic}, \ref{sect:parabolic} and \ref{sect:elliptic}, we will give several classes of examples of equitranslating isometries, and prove in particular that every nontrivial isometry of $\maths{H}^2_\maths{R}$ is equitranslating. The following theorem is the main abstract result of this paper. \begin{theo}\label{theo:genHuberabstrait} Assume that the Bowen-Margulis measure of $\Gamma$ is finite and mixing for the geodesic flow on $T^1M$. Let ${\mathfrak K}$ be a conjugacy class of $\psi$-equitranslating elements of $\Gamma$ with finite index $i_{\mathfrak K}$ and positive and finite skinning measure $\|\sigma_{\mathfrak K}\|$. Then, as $t\rightarrow+\infty$, $$ N_{{\mathfrak K},\,x_0}(t)\sim \frac{i_{{\mathfrak K}}\,\|\mu_{x_0}\|\,\|\sigma_{{\mathfrak K}}\|} {\delta_\Gamma\,\|m_{\rm BM}\|} \,e^{\delta_\Gamma\, \psi(t)}\,. 
$$ If $\Gamma$ is arithmetic or if $M$ is compact, then the error term is $\operatorname{O}(e^{(\delta_\Gamma-\kappa) \psi(t)})$ for some $\kappa >0$. Furthermore, if $v_\alpha$ is the unit tangent vector at $x_0$ to the geodesic segment $[x_0,\alpha x_0]$ for every $\alpha\in \Gamma- \Gamma_{x_0}$, for the weak-star convergence of measures on $T^1\wt M$, we have $$ \lim_{t\rightarrow+\infty}\; \frac{\delta_\Gamma\;\|m_{\rm BM}\|}{i_{{\mathfrak K}}\,\|\sigma_{{\mathfrak K}}\|} \;e^{-\delta_\Gamma\, \psi(t)}\; \sum_{\alpha\in{\mathfrak K},\; 0<d(x_0,\,\alpha x_0)\leq t} m_\alpha\,\Delta_{v_\alpha} \;=\; (\pi_+^{-1})_*\mu_{x_0}\;. $$ \end{theo} \noindent{\bf Proof. } Let $\gamma_0$ be a $\psi$-equitranslating element of $\Gamma-\{e\}$ and let ${\mathfrak K}=\{\gamma\ga_0\gamma^{-1}\;:\;\gamma\in\Gamma\}$ be its conjugacy class. Since $\wt\sigma_{\{x_0\}} =(\pi_+^{-1})_*\mu_{x_0}$ (see \cite[\S 3]{ParPau13ETDS}), we have $$ \|\sigma_{\{x_0\}}\|= \frac{\|\mu_{x_0}\|}{|\Gamma_{x_0}|}\,. $$ In particular, both $\|\sigma_{\{x_0\}}\|$ and $\|\sigma_{C_{\gamma_0}}\| = \|\sigma_{\mathfrak K}\|$ are positive and finite. 
Hence, since $\psi$ is eventually nondecreasing, by the definition of the counting function ${\cal N}_{D^-,\,D^+}$ for $D^-=\{x_0\}$ and $D^+=C_{\gamma_0}$, and by Theorem \ref{theo:mainequicountdown}, we have, as $t\rightarrow+\infty$, \begin{align*} \sum_{\alpha\in{\mathfrak K},\; 0<d(x_0,\,\alpha x_0)\leq t} m_\alpha&\sim \sum_{\alpha\in{\mathfrak K},\; 0<d(x_0,\,C_{\alpha})\leq \psi(t)} m_\alpha = \sum_{\gamma\in\Gamma/Z_\Gamma(\gamma_0),\; 0<d(x_0,\,\gamma C_{\gamma_0})\leq \psi(t)} m_{\gamma\ga_0\gamma^{-1}}\\ & = |\Gamma_{x_0}|\,i_{\mathfrak K}\; \sum_{\gamma\in\Gamma_{x_0}\backslash\Gamma/\Gamma_{C_{\gamma_0}},\; 0<d(x_0,\,\gamma C_{\gamma_0})\leq \psi(t)} m_{\gamma\ga_0\gamma^{-1}}\\ & =|\Gamma_{x_0}|\,i_{\mathfrak K}\;{\cal N}_{\{x_0\},\,C_{\gamma_0}}(\psi(t)) \\ &\sim |\Gamma_{x_0}|\,i_{\mathfrak K}\;\frac{\|\sigma_{\{x_0\}}\|\;\|\sigma_{C_{\gamma_0}}\|} {\delta_\Gamma\;\|m_{\rm BM}\|}\; e^{\delta_\Gamma \,\psi(t)}\;. \end{align*} The first claim of Theorem \ref{theo:genHuberabstrait} follows. \medskip\noindent \begin{minipage}{8cm} ~~~ For every $\alpha\in{\mathfrak K}$, let $p_\alpha$ be the closest point to $x_0$ on $C_\alpha$. Then $\alpha p_\alpha$ is the closest point to $\alpha x_0$ on $C_\alpha$. \end{minipage} \begin{minipage}{6.9cm} \begin{center} \input{fig_initvect.pstex_t} \end{center} \end{minipage} \medskip Since $\lim_{t\rightarrow+\infty} \psi(t) = +\infty$, when $d(x_0, \alpha x_0)$ is large enough, the distance $d(x_0,C_{\alpha})$ becomes large. Hence the initial tangent vector $v_{\alpha}$ to the geodesic segment $[x_0, \alpha x_0]$ becomes arbitrarily close to the initial tangent vector to the geodesic segment $[x_0,p_\alpha]$, uniformly in $\alpha\in{\mathfrak K}$ such that $d(x_0,C_{\alpha})$ is large enough, and independently of $d(p_\alpha, \alpha p_\alpha)$, which could be $0$. 
Hence, using again and similarly Theorem \ref{theo:mainequicountdown} with $D^-=\{x_0\}$ and $D^+=C_{\gamma_0}$, we have, as $t\rightarrow+\infty$, \begin{align*} &\frac{\delta_\Gamma\;\|m_{\rm BM}\|}{i_{{\mathfrak K}}\,\|\sigma_{{\mathfrak K}}\|} \;e^{-\delta_\Gamma\, \psi(t)} \sum_{\alpha\in{\mathfrak K},\; 0<d(x_0,\,\alpha x_0)\leq t} m_\alpha\;\Delta_{v_\alpha} \\ \sim\;\; & \frac{\delta_\Gamma\;\|m_{\rm BM}\|}{i_{{\mathfrak K}}\,\|\sigma_{{\mathfrak K}}\|} \;e^{-\delta_\Gamma\, \psi(t)} \sum_{\gamma\in\Gamma/Z_\Gamma(\gamma_0),\; 0<d(x_0,\,\gamma C_{\gamma_0})\leq \psi(t)} m_{\gamma\ga_0\gamma^{-1}}\;\Delta_{v_{\gamma\ga_0\gamma^{-1}}} \\ \sim\;\; & \frac{\delta_\Gamma\;\|m_{\rm BM}\|}{\|\sigma_{D^+}\|} \;e^{-\delta_\Gamma\, \psi(t)} \sum_{\gamma\in\Gamma/\Gamma_{D^+},\; 0<d(x_0,\,\gamma D^+)\leq \psi(t)} m_{e,\,\gamma}\;\Delta_{v^-_{e,\,\gamma}} \\ \stackrel{*}{\rightharpoonup}\;\;& \wt\sigma_{\{x_0\}}=(\pi_+^{-1})_*\mu_{x_0}\;. \end{align*} This proves the second claim of Theorem \ref{theo:genHuberabstrait}. \hfill$\Box$ \section{The geometry of loxodromic isometries} \label{sect:loxodromic} In this section, we fix a loxodromic isometry $\gamma$ of a complete $\operatorname{CAT}(-1)$ geodesic metric space $X$. Let $\ell= \ell_{\gamma}= \inf_{x\in X} d(x,\gamma x)>0$ be its translation length and let $$ C_\gamma=\{x\in X\;:\;d(x,\gamma x)=\ell\} $$ be its translation axis. If $X={\HH}^2_\RR$, if $\gamma$ is orientation-preserving, and if $x\in{\HH}^2_\RR$ is at a distance $s$ from the translation axis of $\gamma$, then \begin{equation}\label{eq:planetranslation} d(x,\gamma x)=2\operatorname{argsinh}(\cosh s\,\sinh\frac{\ell} 2)\,. 
\end{equation} Indeed, after a conjugation by a suitable isometry, we may assume that the translation axis of $\gamma$ is the geodesic line with endpoints $0$ and $\infty$ in the upper halfplane model of ${\HH}^2_\RR$, that $\gamma z=e^{\ell} z$ for all $z\in\maths{C}$ with $\Im\; z>0$, and that $x$ is on the geodesic ray starting from $i$ and ending at $1$. Using the angle of parallelism formula \cite[Thm.~7.9.1]{Beardon83}, we have $x=(\tanh s,\frac 1{\cosh s})$, which gives $\gamma x=e^{\ell}(\tanh s,\frac 1{\cosh s})$. From this, Equation \eqref{eq:planetranslation} follows using the hyperbolic distance formula \cite[Thm.~7.2.1 (iii)]{Beardon83}. In the other extreme, if $X$ is a tree and if $x\in X$ is at a distance $s$ from the translation axis of $\gamma$, then $d(x,\gamma x)=\ell+2s$. The general situation lies between these two cases. \begin{lemm}\label{lem:generaltranslation} If $x\in X$ is at distance $s$ from the translation axis of $\gamma$, then $$ 2\operatorname{argsinh}(\cosh s\sinh\frac{\ell} 2)\le d(x,\gamma x) \le 2s+\ell\,. $$ \end{lemm} Note that as $s\to+\infty$, the lower bound is equal to $2s+2\log(\sinh\frac{\ell }2) +\operatorname{O}(e^{-2s})$, hence the difference of the upper and lower bounds is bounded by a constant that only depends on $\ell$. \medskip \noindent{\bf Proof. } The upper bound follows from the triangle inequality. Let us prove the lower bound. Let $p$ and $q=\gamma p$ be the closest points on $C_\gamma$ to respectively $x$ and $\gamma x$. Let $Q$ be the quadrilateral in $X$ with vertices $x$, $p$, $q$ and $\gamma x$. \begin{center} \input{fig_quadrangl.pstex_t} \end{center} \noindent Let $\overline{Q}$ be the quadrilateral in ${\HH}^2_\RR$ with vertices $\overline{x}$, $\overline{p}$, $\overline{q}$ and $\overline{\gamma x}$, obtained by gluing together, along the geodesic segment $[\overline{x}, \overline{q}]$, the comparison triangles of the two triangles in $X$ with sets of vertices $\{x,p,q\}$ and $\{x,q,\gamma x\}$. 
By comparison, the angles of $\overline{Q}$ at the vertices $\overline{p}$ and $\overline{q}$ are at least $\frac\pi 2$. If we adjust these angles to $\frac\pi 2$, keeping the lengths of the three sides $[\overline{x},\overline{p}]$, $[\overline{p},\overline{q}]$ and $[\overline{q}, \overline{\gamma x}]$ fixed, we obtain a quadrilateral $\overline{Q}'$ where the side that is not adjacent to the right angles has length less than $d(x,\gamma x)$. This gives the lower bound since the length of the side in question is given by Equation \eqref{eq:planetranslation}. \hfill$\Box$ \medskip The proof of the following result is then similar to that of Theorem \ref{theo:genHuberabstrait}. \begin{coro} \label{coro:encadreloxo} Let $\wt M$ be a complete simply connected Riemannian manifold with pinched negative sectional curvature, let $x_0\in\wt M$ and let $\Gamma$ be a nonelementary discrete group of isometries of $\wt M$. Assume that the Bowen-Margulis measure of $\Gamma$ is finite and mixing for the geodesic flow on $T^1M$. Let ${\mathfrak K}$ be a conjugacy class of loxodromic elements of $\Gamma$ with translation length $\ell$. Then, for every $\epsilon>0$, if $t$ is big enough, $$ \frac{i_{{\mathfrak K}}\,\|\mu_{x_0}\|\,\|\sigma_{{\mathfrak K}}\|}{\delta_\Gamma\,\|m_{\rm BM} \|\,e^{\frac{\delta_\Gamma\,\ell}{2}}} \,e^{\frac{\delta_\Gamma}{2}\, t}\,(1-\epsilon) \leq N_{{\mathfrak K},\,x_0}(t)\leq\frac{i_{{\mathfrak K}}\,\|\mu_{x_0}\|\,\|\sigma_{{\mathfrak K}}\|} {\delta_\Gamma\,\|m_{\rm BM}\|\,(\sinh\frac{\ell}{2})^{\delta_\Gamma}} \,e^{\frac{\delta_\Gamma}{2}\, t}\,(1+\epsilon)\,. \;\;\;\Box $$ \end{coro} In particular, under the assumptions of this result, we have $$ \lim_{t\rightarrow+\infty} \frac{1}{t}\log N_{{\mathfrak K},\,x_0}(t)=\frac{\delta_\Gamma}{2}\;. 
$$ Theorem \ref{theo:loggrowthintro} in the introduction follows from this, since if $M=\Gamma\backslash\wt M$ is a compact manifold, then any nontrivial element in $\Gamma$ is loxodromic, and, as recalled in Section \ref{sect:counting}, the critical exponent $\delta_\Gamma$ is equal to the topological entropy $h$ of the geodesic flow on $M$, and $m_{\rm BM}$ is finite and mixing. \medskip \noindent{\bf Remark. } With the notation and definitions of \cite[\S 3.1]{PauPolSha}, if $\wt F:T^1\wt M\rightarrow \maths{R}$ is a potential (that is, a $\Gamma$-invariant H\"older-continuous map), since the geodesic segment from $x_0$ to $\alpha x_0$ passes within a uniformly bounded distance (a constant $c_\ell$ depending only on $\ell$) of the translation axis $C_\alpha$ of $\alpha$, with $p_\alpha$ the closest point to $x_0$ on $C_\alpha$, the absolute value of the difference $\int_{x_0}^{\alpha x_0}\wt F-\int_{x_0}^{p_\alpha}\wt F-\int_{x_0}^{p_\alpha}\wt F\circ\iota$ is uniformly bounded (by a constant depending only on $c_\ell$ and on the maximum of $\wt F$ on the neighbourhood of $C_\alpha$ of radius $c_\ell$). Hence using the version with potential of Theorem \ref{theo:mainequicountdown} in \cite[Coro.~20]{ParPau14} for $\wt F$ and $\wt F\circ\iota$, we have upper and lower bounds for the asymptotic of the counting function with weights defined by the potential: if the critical exponent $\delta_{\Gamma,\,F}$ of $\Gamma$ for the potential $\wt F$ is finite and the Gibbs measure of $\Gamma$ for the potential $\wt F$ is finite and mixing for the geodesic flow on $T^1M$, then there exists $c>0$ such that for all $t\geq 0$, $$ \frac{1}{c}\,e^{\frac{\delta_{\Gamma,\,F}}{2}\, t}\leq \sum_{\alpha\in{\mathfrak K},\; d(x_0,\,\alpha x_0)\leq t} m_\alpha\; e^{\int_{x_0}^{\alpha x_0}\wt F} \leq c\,e^{\frac{\delta_{\Gamma,\,F}}{2}\, t}\;. $$ \medskip Let us now consider the higher dimensional real hyperbolic spaces. 
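\medskip Before moving on, Equation \eqref{eq:planetranslation} and the two bounds of Lemma \ref{lem:generaltranslation} can be checked numerically in the upper half-plane model, with the parametrisation $x=(\tanh s,\frac 1{\cosh s})$ and $\gamma z=e^{\ell} z$ used in the proof above (an illustrative sketch of ours):

```python
from math import acosh, asinh, cosh, exp, sinh, tanh

def dist_h2(z, w):
    """Hyperbolic distance in the upper half-plane model:
    cosh d(z,w) = 1 + |z - w|^2 / (2 Im z Im w)."""
    return acosh(1 + abs(z - w) ** 2 / (2 * z.imag * w.imag))

for ell in (0.3, 1.0, 2.5):         # translation length of gamma: z -> e^ell z
    for s in (0.0, 0.7, 1.5, 3.0):  # distance from x to the (imaginary) axis
        x = complex(tanh(s), 1 / cosh(s))
        d = dist_h2(x, exp(ell) * x)
        formula = 2 * asinh(cosh(s) * sinh(ell / 2))  # the displayed equation
        assert abs(d - formula) < 1e-9
        assert d <= 2 * s + ell + 1e-9                # upper bound of the Lemma
print("plane formula and bounds verified")
```

In ${\HH}^2_\RR$ the lower bound of the Lemma is an equality, so only the upper bound is tested separately.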
If $X={\HH}^3_\RR$, if $\gamma$ is orientation-preserving, and if $x\in{\HH}^3_\RR$ is at a distance $s$ from the translation axis of $\gamma$, then \begin{equation}\label{eq:spacetranslation} \sinh^2\frac{d(x,\gamma x)}2 = \frac{\sinh^2s\;|e^\lambda-1|^2}{4\,e^\ell}+\sinh^2(\frac\ell 2)\,, \end{equation} where $\lambda=\lambda_{\gamma}$ is the {\em complex translation length} of $\gamma$, defined as follows. The loxodromic isometry $\gamma$ is conjugate in the upper halfspace model $\maths{C}\times \mathopen{]}0, +\infty[$ of ${\HH}^3_\RR$ to the transformation $(z,r)\mapsto e^{\ell} (e^{i\theta} z,r)$, where $\theta=\theta_{\gamma}\in\maths{R}$ is uniquely defined modulo $2\pi$, and we define $\lambda= \ell+i\theta\in \mathopen{]}0, +\infty\mathclose[+i\,\maths{R}/2\pi\maths{Z}$. Equation \eqref{eq:spacetranslation} follows from the distance formula in \cite[p.~37]{Fenchel89}. Let $n\in\maths{N}-\{0,1\}$. A loxodromic isometry $\gamma$ of ${\HH}^n_\RR$ is {\em uniformly rotating} if $\gamma$ rotates all normal vectors to the translation axis of $\gamma$ by the same angle, called the {\em rotation angle} of $\gamma$ (which is $0$ if and only if $\gamma$ induces the parallel transport along its translation axis). This property is invariant under conjugation. Clearly, all loxodromic isometries of ${\HH}^2_\RR$, all orientation-preserving loxodromic isometries of ${\HH}^3_\RR$, and the loxodromic isometries of any ${\HH}^n_\RR$ with a trivial rotational part, are uniformly rotating. The orientation-reversing loxodromic isometries of ${\HH}^3_\RR$ are not uniformly rotating. More generally, by the normal form up to conjugation of the elements of $\operatorname{O} (n-1)$, uniformly rotating orientation-preserving loxodromic isometries with a nontrivial rotation angle exist in ${\HH}^n_\RR$ if and only if $n$ is odd, and uniformly rotating orientation-reversing loxodromic isometries exist in ${\HH}^n_\RR$ if and only if $n$ is even. 
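\medskip Equation \eqref{eq:spacetranslation} can similarly be checked in the upper halfspace model, together with the expansion $d(x,\gamma x)=2s+\log\frac{\cosh\ell-\cos\theta}2+\operatorname{O}(e^{-2s})$ used below (again an illustrative sketch of ours, with the normalisation $\sinh s=|z|/r$ for the distance from $(z,r)$ to the axis $\{0\}\times\mathopen{]}0,+\infty\mathclose[$):

```python
from cmath import exp as cexp
from math import acosh, cos, cosh, exp, log, sinh, tanh

def dist_h3(p, q):
    """Hyperbolic distance in the upper halfspace model C x (0,+inf) of H^3:
    cosh d = 1 + (|z - z'|^2 + (r - r')^2) / (2 r r')."""
    (z, r), (zp, rp) = p, q
    return acosh(1 + (abs(z - zp) ** 2 + (r - rp) ** 2) / (2 * r * rp))

ell, theta = 1.3, 2.1  # complex translation length lambda = ell + i*theta
for s in (0.0, 0.8, 2.0, 6.0, 12.0):
    x = (tanh(s), 1 / cosh(s))  # point at distance s from the vertical axis
    gx = (exp(ell) * cexp(1j * theta) * x[0], exp(ell) * x[1])  # gamma x
    d = dist_h3(x, gx)
    # right-hand side of the displayed equation:
    rhs = (sinh(s) ** 2 * abs(cexp(complex(ell, theta)) - 1) ** 2 / (4 * exp(ell))
           + sinh(ell / 2) ** 2)
    assert abs(sinh(d / 2) ** 2 - rhs) <= 1e-6 * (1 + rhs)
    if s >= 6.0:  # the asymptotic expansion for large s
        assert abs(d - 2 * s - log((cosh(ell) - cos(theta)) / 2)) < 1e-4
print("space formula and expansion verified")
```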
For a fixed translation length and rotation angle $\theta\in(\maths{R}-2\pi\maths{Z})/(2\pi\maths{Z})\,$, with $\theta=\pi$ in the orientation-reversing case, these elements form a unique conjugacy class. Let $\gamma$ be a uniformly rotating loxodromic isometry of ${\HH}^n_\RR$. Any configuration that consists of the translation axis of $\gamma$, a geodesic line $L$ orthogonal to the axis and its image $\gamma L$ is contained in an isometrically embedded $\gamma$-invariant copy of ${\HH}^3_\RR$ in ${\HH}^n_\RR$ (unique if the rotation angle of $\gamma$ is nonzero modulo $\pi\maths{Z}$). We then define the {\em complex translation length} of $\gamma$ as the complex translation length of the restriction of $\gamma$ to this subspace. \begin{lemm}\label{lem:offaxis} A uniformly rotating loxodromic isometry $\gamma$ of ${\HH}^n_\RR$ with complex translation length $\lambda= \ell+i\theta$ is $\psi$-equitranslating with $$\psi(t)=\frac{1}{2} (t-\log(\frac{\cosh\ell-\cos\theta}{2})) +\operatorname{O}(e^{-t})$$ as $t\rightarrow+\infty$. \end{lemm} \noindent{\bf Proof. } Let $x$ be a point in ${\HH}^n_\RR$ at a distance $s$ from the translation axis of $\gamma$. We only have to prove that, as $s\to+\infty$, $$ d(x,\gamma x)= 2s+\log\big(\frac{\cosh\ell-\cos\theta}{2}\big) +\operatorname{O}(e^{-2s})\;. $$ As noted above, it suffices to consider the case $n=3$. By Equation \eqref{eq:spacetranslation}, we have $$ \frac{e^{d(x,\,\gamma x)}}4 = \frac{e^{2\,s}\,(e^{2\ell}-2e^\ell\cos\theta+1)}{16\,e^\ell}+ \operatorname{O}(1)\,, $$ as $s\to+\infty$, which proves the claim after simplification and taking the logarithm. \hfill$\Box$ \begin{coro} \label{coro:mainloxo} Let $\Gamma$ be a nonelementary discrete group of isometries of ${\HH}^n_\RR$, whose Bowen-Margulis measure is finite, and let $x_0\in{\HH}^n_\RR$. Let ${\mathfrak K}$ be a conjugacy class of uniformly rotating loxodromic elements of $\Gamma$ with complex translation length $\lambda= \ell+i\theta$. 
Then, as $t\rightarrow+\infty$, $$ N_{{\mathfrak K},\,x_0}(t)\sim \frac{2^{\frac{\delta_\Gamma}2}\,i_{\mathfrak K}\,\|\mu_{x_0}\|\,\|\sigma_{{\mathfrak K}}\|} {\delta_\Gamma\,\|m_{\rm BM}\|\, (\cosh\ell-\cos\theta)^{\frac{\delta_\Gamma}{2}}} \;e^{\frac{\delta_\Gamma}{2}\, t}\,. $$ If $\Gamma$ is arithmetic or if $M$ is compact, then the error term is $\operatorname{O}(e^{(\frac{\delta_\Gamma}{2} -\kappa) t})$ for some $\kappa >0$. Furthermore, if $v_\alpha$ is the unit tangent vector at $x_0$ to the geodesic segment $[x_0,\alpha x_0]$ for every $\alpha\in \Gamma- \Gamma_{x_0}$, for the weak-star convergence of measures on $T^1\wt M$, we have $$ \lim_{t\rightarrow+\infty}\; \frac{\delta_\Gamma\;\|m_{\rm BM}\|\, (\cosh\ell-\cos\theta)^{\frac{\delta_\Gamma}{2}}} {2^{\frac{\delta_\Gamma}2}\,i_{{\mathfrak K}}\,\|\sigma_{{\mathfrak K}}\|} \;e^{-\frac{\delta_\Gamma}{2}\, t}\; \sum_{\alpha\in{\mathfrak K},\; 0<d(x_0,\,\alpha x_0)\leq t} m_\alpha \,\Delta_{v_\alpha}\;=\; (\pi_+^{-1})_*\mu_{x_0}\;. $$ \end{coro} \noindent{\bf Proof. } As mentioned in Section \ref{sec:rappels}, since ${\HH}^n_\RR$ has constant sectional curvature, the Bowen-Margulis measure of $\Gamma$, being finite, is mixing for the geodesic flow on $T^1M$. We have already seen that $i_{\mathfrak K}$ is finite and that $\|\sigma_{\mathfrak K}\|$ is positive and finite. The result follows from Theorem \ref{theo:genHuberabstrait} and Lemma \ref{lem:offaxis}. \hfill$\Box$ \medskip \noindent{\bf Remark. } Let $\Gamma$ be a group of isometries of $X$ and assume that $\gamma$ is a loxodromic element of $\Gamma$. The element $\gamma$ is $\Gamma${\em-reciprocal} if there exists an element in $\Gamma$ that switches the two fixed points of $\gamma$. If $\gamma$ is $\Gamma$-reciprocal, we set $\iota_\Gamma(\gamma)=2$; otherwise, we set $\iota_\Gamma(\gamma)=1$. 
The stabiliser in $\Gamma$ of the translation axis $C_{\gamma}$ of $\gamma$ is generated by the maximal cyclic subgroup of $\Gamma$ containing $\gamma$, by an elliptic element that switches the two points at infinity of $C_{\gamma}$ if $\gamma$ is $\Gamma$-reciprocal, and a (possibly trivial) group of finite order, which is the pointwise stabiliser $\operatorname{Fix}_\Gamma (C_\gamma)$ of $C_{\gamma}$. Thus, if ${\mathfrak K}$ is the conjugacy class of $\gamma$ in $\Gamma$, $$ \iota_{\mathfrak K}=\iota_\Gamma(\gamma) [\operatorname{Fix}_\Gamma (C_\gamma): \operatorname{Fix}_\Gamma (C_\gamma)\cap Z_{\Gamma}(\gamma)]\,. $$ In particular, if $n=2$, or if $n=3$ and $\gamma$ preserves the orientation, then $\iota_{\mathfrak K}=\iota_\Gamma(\gamma)$. Hence Theorem \ref{theo:intro} in the Introduction when $\gamma_0$ is loxodromic follows from Corollary \ref{coro:mainloxo}. \medskip When $\Gamma$ has finite covolume, the constant in Corollary \ref{coro:mainloxo} can be made more explicit. \begin{coro}\label{corogenHuber} Let $\Gamma$ be a discrete group of isometries of ${\HH}^n_\RR$ with finite covolume and let $x_0\in{\HH}^n_\RR$. Let ${\mathfrak K}$ be the conjugacy class of a uniformly rotating loxodromic element $\gamma_0$ of $\Gamma$ with complex translation length $\lambda=\ell+i\theta$, let $m_{\gamma_0}$ be the order of $\gamma_0$ in the maximal cyclic group containing $\gamma_0$, and let $n_{\gamma_0}$ be the order of the intersection of the pointwise stabiliser of the translation axis of $\gamma_0$ with the centraliser of $\gamma_0$. Then, as $t\rightarrow+\infty$, $$ N_{{\mathfrak K},\,x_0}(t)\sim \frac{\operatorname{Vol}(\maths{S}^{n-2})\;\ell} {2^{\frac{n-1}2}\,(n-1)\,m_{\gamma_0}\,n_{\gamma_0}\,\operatorname{Vol}(\Gamma\backslash{\HH}^n_\RR)\, (\cosh \ell-\cos \theta)^{\frac{n-1}2}}\;e^{\frac{n-1}2\, t }\,. $$ If $\Gamma$ is arithmetic or if $M$ is compact, then the error term is $\operatorname{O}(e^{(\frac{n-1}{2} -\kappa) t})$ for some $\kappa >0$. 
Furthermore, if $\Gamma_{x_0}$ is trivial, if $v_\alpha$ is the unit tangent vector at $x_0$ to the geodesic segment $[x_0,\alpha x_0]$ for every $\alpha\in \Gamma- \{e\}$, with $\operatorname{Vol}_{T^1_{x_0}{\HH}^n_\RR}$ the spherical measure on $T^1_{x_0}{\HH}^n_\RR$, we have, for the weak-star convergence of measures on $T^1_{x_0}{\HH}^n_\RR$, \begin{multline*} \frac{(n-1)\,m_{\gamma_0}\,n_{\gamma_0}\,\operatorname{Vol}(\maths{S}^{n-1})\,\operatorname{Vol}(\Gamma\backslash{\HH}^n_\RR)\, (\cosh\ell-\cos\theta)^{\frac{n-1}{2}}} {2^{\frac{1-n}2}\,\operatorname{Vol}(\maths{S}^{n-2})\,\ell\;e^{\frac{n-1}{2}\, t}} \sum_{\alpha\in{\mathfrak K},\; 0<d(x_0,\,\alpha x_0)\leq t} \Delta_{v_\alpha} \\\stackrel{*}{\rightharpoonup}\;\; \operatorname{Vol}_{T^1_{x_0}{\HH}^n_\RR}\;. \end{multline*} \end{coro} \noindent{\bf Proof. } Since $\Gamma$ has finite covolume, we have $\delta_\Gamma=n-1$ and we can normalise the Patterson-Sullivan measure $\mu_{x_0}$ at $x_0$ to have total mass $\operatorname{Vol}(\maths{S}^{n-1})$, so that $(\pi_+^{-1})_*\mu_{x_0}= \operatorname{Vol}_{T^1_{x_0}{\HH}^n_\RR}$. By \cite[Prop.~10, 11]{ParPauRev}, we have $$ \|m_{\rm BM}\|=2^{n-1}\operatorname{Vol}(\maths{S}^{n-1})\operatorname{Vol}(\Gamma\backslash{\HH}^n_\RR) $$ and $$ \|\sigma_{C_{\gamma_0}}\|= \operatorname{Vol}(\maths{S}^{n-2}) \frac \ell{|\operatorname{Fix}_{\Gamma_0} (C_{\gamma_0})|\,\iota_\Gamma(\gamma_0)\,m_{\gamma_0}}\,, $$ since $\operatorname{Vol}(\Gamma_{C_{\gamma_0}}\backslash C_{\gamma_0}) =\frac{\ell} {\iota_\Gamma(\gamma_0) \,m_{\gamma_0}}$. The claims hence follow from the previous remark and from Corollary \ref{coro:mainloxo}. \hfill$\Box$ \medskip The proof of the loxodromic case of Corollary \ref{coro:intro} of the Introduction follows from Corollary \ref{corogenHuber} by taking $n=2$, $\Gamma$ torsion-free (so that $n_{\gamma_0}=1$), and $\gamma_0$ primitive (so that $m_{\gamma_0}=1$) and orientation-preserving (so that $\cos\theta=1$). 
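The asymptotic expansion of Lemma \ref{lem:offaxis}, namely $d(x,\gamma x)= 2s+\log\big(\frac{\cosh\ell-\cos\theta}{2}\big) +\operatorname{O}(e^{-2s})$ for $x$ at distance $s$ from the translation axis, can be checked numerically in the upper half-space model of ${\HH}^3_\RR$. The sketch below (the helper function names are ours, not part of the text) takes $\gamma$ to be the loxodromic isometry with vertical axis, translation length $\ell$ and rotation angle $\theta$:

```python
import math

def dist_h3(p, q):
    """Hyperbolic distance in the upper half-space model {(x, y, t) : t > 0}."""
    sq = sum((a - b) ** 2 for a, b in zip(p, q))
    return math.acosh(1.0 + sq / (2.0 * p[2] * q[2]))

def loxodromic(p, ell, theta):
    """Translation by ell along the vertical axis composed with rotation by theta."""
    c, s = math.cos(theta), math.sin(theta)
    f = math.exp(ell)
    return (f * (c * p[0] - s * p[1]), f * (s * p[0] + c * p[1]), f * p[2])

ell, theta = 0.7, 2.0
errors = {}
for s in (5.0, 8.0, 12.0):
    # point at distance s from the axis, on the unit hemisphere orthogonal to it
    p = (math.tanh(s), 0.0, 1.0 / math.cosh(s))
    d = dist_h3(p, loxodromic(p, ell, theta))
    asym = 2.0 * s + math.log((math.cosh(ell) - math.cos(theta)) / 2.0)
    errors[s] = abs(d - asym)   # should decay like e^{-2s}
```

The observed error indeed decays like $e^{-2s}$, consistent with the error term of the lemma.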
The area of a complete, connected, finite area hyperbolic surface with genus $g$ and $p$ punctures is $2\pi(2g+p-2)$. \section{The geometry of parabolic isometries} \label{sect:parabolic} In this section, we fix a parabolic isometry $\gamma$ of a complete $\operatorname{CAT}(-1)$ geodesic metric space $X$. We fix a horoball $C_\gamma$ centred at the fixed point of $\gamma$, and we call {\em horospherical translation length} of $\gamma$ the quantity $$ \ell=\ell_{\gamma}=\inf_{y\in \partial C_\gamma} d(y, \gamma y)\;. $$ We will say that $\gamma$ is {\em uniformly translating} if $d(y,\gamma y)$ is independent of $y\in \partial C_\gamma$. Note that being uniformly translating does not depend on the choice of $C_\gamma$, but the value of $\ell$ does (and can be fixed arbitrarily in $]0,+\infty[$ when $X$ is a Riemannian manifold). Every parabolic isometry of $X={\HH}^2_\RR,{\HH}^3_\RR$ is uniformly translating, but using Euclidean screw motions, there exist parabolic isometries in $X=\maths{H}^4_\maths{R}$ which are not uniformly translating (and the map $y\mapsto d(y, \gamma y)$ is not even bounded). If $X={\HH}^n_\RR$ and if $\gamma$ induces a Euclidean translation on $\partial C_\gamma$, then $\gamma$ is uniformly translating. Recall that by Bieberbach's theorem, any discrete group of isometries of ${\HH}^n_\RR$, preserving a given horosphere and acting cocompactly on it, contains a finite index subgroup consisting of uniformly translating parabolic isometries and the identity. If $X={\HH}^2_\RR$, if $x\in X$ is at a distance $s$ from the horoball $C_\gamma$, then \begin{equation}\label{eq:planeparabolic} d(x,\gamma x)=2\operatorname{argsinh}(e^s\,\sinh\frac{\ell} 2)\,. \end{equation} This is immediate by considering the upper halfplane model and assuming that $\gamma$ has fixed point $\infty$, by applying twice \cite[Thm.~7.2.1 (iii)]{Beardon83}. 
A similar triangle inequality and comparison argument as in the proof of Lemma \ref{lem:generaltranslation} shows the following result. \begin{lemm}\label{lem:generalparabolic} If $x\in X$ is at distance $s>0$ from the horoball $C_\gamma$, if $p_\gamma$ is the closest point to $x$ on $C_\gamma$, then $$ 2\operatorname{argsinh}(e^s\sinh\frac{\ell} 2)\le d(x,\gamma x) \le 2s+ d(p_\gamma,\gamma \,p_\gamma)\,. \;\;\Box $$ \end{lemm} \begin{coro}\label{lem:offhorosphere} A uniformly translating parabolic isometry $\gamma$ of ${\HH}^n_\RR$ with horospherical translation length $\ell$ is $\psi$-equitranslating with $$\psi(t)= \frac{t}{2}- \log(\sinh\frac{\ell}{2}) -\log 2 +\operatorname{O}(e^{-t})$$ as $t\rightarrow+\infty$. \end{coro} \noindent{\bf Proof. } Let $x$ be a point in ${\HH}^n_\RR$ at a distance $s$ from the horoball $C_\gamma$. We only have to prove that, as $s\to+\infty$, $$ d(x,\gamma x)= 2s+2\log\big(\sinh\frac{\ell}{2}\big)+2\log 2 +\operatorname{O}(e^{-2s})\;. $$ It suffices to consider the case $n=2$ (the points $x,\gamma x$ and the fixed point of $\gamma$ are contained in a copy of ${\HH}^2_\RR$), in which case the result follows from Equation \eqref{eq:planeparabolic}. \hfill$\Box$ \medskip \noindent{\bf Remark. } If $X=\wt M$ and $\Gamma$ are as in Section \ref{sect:counting}, if $\gamma$ is a parabolic isometry of $\Gamma$ and if ${\mathfrak K}$ is the conjugacy class of $\gamma$ in $\Gamma$, the quantities $\|\sigma_{{\mathfrak K}}\|$ and $i_{\mathfrak K}$ defined in that Section are not always finite. Note that $\|\sigma_{{\mathfrak K}}\|$ is positive, since $\Gamma$ is nonelementary. $\bullet$~ The mass $\|\sigma_{{\mathfrak K}}\|$ is finite for instance if the fixed point $\xi_\gamma$ of $\gamma$ is a bounded parabolic fixed point (that is, if its stabiliser $\Gamma_{\xi_\gamma}$ in $\Gamma$ acts cocompactly on $\Lambda\Gamma-\{\xi_\gamma\}$), which is in particular the case if $\Gamma$ is a lattice or is geometrically finite. 
$\bullet$~ The index $i_{\mathfrak K}$ is equal to $1$ if $\gamma$ is central in the stabiliser $\Gamma_{C_\gamma}$ of the horoball $C_\gamma$. This is in particular the case, up to passing to a finite index subgroup of $\Gamma$, if $\Gamma$ is a lattice or is geometrically finite, as well as if $X$ is a symmetric space and $\gamma$ is in the center of the nilpotent Lie group of isometries of $X$ acting simply transitively on the horosphere $\partial C_\gamma$ (see Proposition \ref{prop:heisenberg} below: in the complex hyperbolic space ${\HH}^n_\CC$, this center consists of the vertical Heisenberg translations). If $X={\HH}^2_\RR$, we have $i_{\mathfrak K}=1$ if no nontrivial elliptic element of $\Gamma$ fixes $\xi_\gamma$ (in particular if $\Gamma$ is torsion-free), and $i_{\mathfrak K}=2$ otherwise. In the complex hyperbolic space ${\HH}^n_\CC$, the stabilisers of horoballs are not abelian and $i_{\mathfrak K}$ is finite only if ${\mathfrak K}$ consists of vertical Heisenberg translations. \medskip A proof similar to that of Corollary \ref{coro:mainloxo} gives the following result, which implies in particular Theorem \ref{theo:intro} in the Introduction when $\gamma_0$ is parabolic. \begin{coro} \label{coro:mainpara} Let $\Gamma$ be a nonelementary discrete group of isometries of ${\HH}^n_\RR$, whose Bowen-Margulis measure is finite, and let $x_0\in{\HH}^n_\RR$. Let ${\mathfrak K}$ be a conjugacy class of uniformly translating parabolic elements of $\Gamma$ with horospherical translation length $\ell$, with $\|\sigma_{{\mathfrak K}}\|$ and $i_{\mathfrak K}$ finite. Then, as $t\rightarrow+\infty$, $$ N_{{\mathfrak K},\,x_0}(t)\sim \frac{i_{\mathfrak K}\,\|\mu_{x_0}\|\,\|\sigma_{{\mathfrak K}}\|} {\delta_\Gamma\,\|m_{\rm BM}\|\, (2\sinh\frac{\ell}{2})^{\delta_\Gamma}} \;e^{\frac{\delta_\Gamma}{2}\, t}\,. $$ If $\Gamma$ is arithmetic, then the error term is $\operatorname{O}(e^{(\frac{\delta_\Gamma}{2} -\kappa) t})$ for some $\kappa >0$. 
Furthermore, if $v_\alpha$ is the unit tangent vector at $x_0$ to the geodesic segment $[x_0,\alpha x_0]$ for every $\alpha\in \Gamma- \Gamma_{x_0}$, for the weak-star convergence of measures on $T^1\wt M$, we have $$ \lim_{t\rightarrow+\infty}\; \frac{\delta_\Gamma\;\|m_{\rm BM}\|\, (2\,\sinh\frac{\ell}{2})^{\delta_\Gamma}} {i_{{\mathfrak K}}\,\|\sigma_{{\mathfrak K}}\|} \;e^{-\frac{\delta_\Gamma}{2}\, t}\; \sum_{\alpha\in{\mathfrak K},\; 0<d(x_0,\,\alpha x_0)\leq t} m_\alpha\,\Delta_{v_\alpha} \;=\; (\pi_+^{-1})_*\mu_{x_0}\;. \;\;\Box $$ \end{coro} \begin{coro}\label{coro:finitevolpara} Let $\Gamma$ be a discrete group of isometries of ${\HH}^n_\RR$ with finite covolume and let $x_0\in{\HH}^n_\RR$. Let ${\mathfrak K}$ be the conjugacy class of a uniformly translating parabolic element $\gamma_0$ of $\Gamma$ with $i_{\mathfrak K}$ finite. Then, as $t\rightarrow+\infty$, $$ N_{{\mathfrak K},\,x_0}(t) \sim \frac{i_{\mathfrak K}\,\operatorname{Vol}(\Gamma_{C_{\gamma_0}}\backslash C_{\gamma_0})} {\operatorname{Vol}(\Gamma\backslash{\HH}^n_\RR)\,(2\sinh\frac{\ell}{2})^{n-1}}\;e^{\frac{n-1}2\, t }\,. $$ If $\Gamma$ is arithmetic, then the error term is $\operatorname{O}( e^{(\frac{n-1}{2} -\kappa) t})$ for some $\kappa >0$. Furthermore, if $\Gamma_{x_0}$ is trivial, if $v_\alpha$ is the unit tangent vector at $x_0$ to the geodesic segment $[x_0,\alpha x_0]$ for every $\alpha\in \Gamma- \{e\}$, with $\operatorname{Vol}_{T^1_{x_0}{\HH}^n_\RR}$ the spherical measure on $T^1_{x_0}{\HH}^n_\RR$, we have, for the weak-star convergence of measures on $T^1_{x_0}{\HH}^n_\RR$, $$ \frac{\operatorname{Vol}(\maths{S}^{n-1})\,\operatorname{Vol}(\Gamma\backslash{\HH}^n_\RR)\, (2\sinh\frac{\ell}{2})^{n-1}} {i_{\mathfrak K}\,\operatorname{Vol}(\Gamma_{C_{\gamma_0}}\backslash C_{\gamma_0})} \;e^{-\frac{n-1}{2}\, t} \sum_{\alpha\in{\mathfrak K},\; 0<d(x_0,\,\alpha x_0)\leq t} \Delta_{v_\alpha} \stackrel{*}{\rightharpoonup}\;\; \operatorname{Vol}_{T^1_{x_0}{\HH}^n_\RR}\;. 
$$ \end{coro} \noindent{\bf Proof. } The claims are proved in the same way as Corollary \ref{corogenHuber}, using the equality $$\|\sigma_{{\mathfrak K}}\|= 2^{n-1}\,(n-1)\,\operatorname{Vol}(\Gamma_{C_{\gamma_0}}\backslash C_{\gamma_0})\,,$$ see \cite[Prop.~29 (2)]{ParPau14}. \hfill$\Box$ \medskip The parabolic case of Corollary \ref{coro:intro} of the Introduction follows from Corollary \ref{coro:finitevolpara}. Consider the upper halfplane model of ${\HH}^2_\RR$ and normalise the group such that $\gamma_0$ is the translation $z\mapsto z+1$. We choose $C_{\gamma_0}$ to be the horoball that consists of points with imaginary part at least $1$. Since $\gamma_0$ is primitive and $\Gamma$ is torsion-free, we have $\Gamma_{C_{\gamma_0}}={\gamma_0}^\maths{Z}$ and $i_{\mathfrak K}=1$. Hence $\operatorname{Vol}(\Gamma_{C_{\gamma_0}}\backslash C_{\gamma_0})=1$ by a standard computation of hyperbolic area: a fundamental domain is $[0,1]\times[1,+\infty[$ and $\int_0^1\int_1^{+\infty}\frac{dy\,dx}{y^2}=1$. Now, $\sinh \frac{\ell}{2}= \frac{1}{2}$, and the claim follows as in the proof of the loxodromic case after Corollary \ref{corogenHuber}. \medskip We end this section by giving a necessary and sufficient criterion for a parabolic isometry of the complex hyperbolic space ${\HH}^n_\CC$ to be uniformly translating. We refer to \cite{Goldman99}, besides the reminder below, for the basic properties of ${\HH}^n_\CC$. On $\maths{C}^{n+1}=\maths{C}\times\maths{C}^{n-1}\times\maths{C}$, consider the Hermitian product with signature $(1,n)$ defined by $$ \langle z,w\rangle=-z_0\ov w_n+z\cdot\ov w-z_n\ov w_0\,, $$ where $(z,w)\mapsto z\cdot\ov w$ is the standard Hermitian scalar product on $\maths{C}^{n-1}$. Let $q(z)=\langle z, z\rangle$ be the corresponding Hermitian form. 
The projective model of the complex hyperbolic space ${\HH}^n_\CC$ corresponding to this choice of $q$ is the set $$ \{[w_0:w:1]\in\maths{P}_n(\maths{C})\;:\; q(w_0,w,1)<0\}\,, $$ endowed with the Riemannian metric, normalised to have sectional curvature between $-4$ and $-1$, whose Riemannian distance is given by $$ d(X,Y)=\operatorname{argcosh}\sqrt{\frac{\langle x,y\rangle\langle y,x\rangle} {q(x)\,q(y)}} $$ for any representatives $x,y$ of $X,Y$ in $\maths{C}^{n+1}$, see \cite[p.~77]{Goldman99}, where the sectional curvature is normalised to be between $-1$ and $-\frac 14$. The boundary at infinity of ${\HH}^n_\CC$ is $$ \partial_\infty{\HH}^n_\CC= \{[w_0:w:1]\in\maths{P}_n(\maths{C})\;:\; q(w_0,w,1)=0\}\cup\{\infty\}\,, $$ where $\infty=[1:0:0]$. For every $s>0$, the set $$ \H_s=\{[w_0:w:1]\in\maths{P}_n(\maths{C})\;:\; q(w_0,w,1)=-s\} $$ is a horosphere centred at $\infty$. The parabolic isometries $\gamma$ of ${\HH}^n_\CC$ fixing $\infty$ are the mappings induced by the projective action of the matrices \begin{equation}\label{eq:para} \wt \gamma=\begin{pmatrix} 1& \;a^*&z_0\\0&A&b\\0&0&1\end{pmatrix}\,, \end{equation} where $A\in U(n-1)$, $a^*=\;^t\overline{a}$ and $A a= b$, see \cite[\S 4.1]{CheGre74} and \cite[p.~371]{ParPau10GT}. For every $Z=[z_0:z:1]\in \partial_\infty {\HH}^n_\CC - \{\infty\}$, the isometry induced by the matrix $$ T_Z=\begin{pmatrix} 1&\;z^*&z_0\\0&1&z\\0&0&1 \end{pmatrix} $$ is called a {\it Heisenberg translation}, which is {\it vertical} if $z=0$. The group of Heisenberg translations (which identifies with the Heisenberg group of dimension $2n-1$, see \cite{Goldman99}) acts simply transitively on $\partial_\infty{\HH}^n_\CC -\{\infty\}$ and on each horosphere $\H_s$ for $s>0$. \begin{prop}\label{prop:heisenberg} A parabolic isometry $\gamma$ of the complex hyperbolic space ${\HH}^n_\CC$ is uniformly translating if and only if it is a vertical Heisenberg translation. 
Furthermore, if $\gamma$ is not a vertical Heisenberg translation, then the map $y\mapsto d(y,\gamma y)$ is unbounded on any horosphere of ${\HH}^n_\CC$ centred at the fixed point of $\gamma$. \end{prop} \noindent{\bf Proof. } For all $W=[w_0:w:1]\in\H_2$ and any parabolic isometry $\gamma$ as given by Equation \eqref{eq:para}, we have $$ d(W,\gamma W)= \operatorname{argcosh}\frac{|w^*(A^*-I)w+\operatorname{O}(|w|)|}2\,. $$ If $A$ is not the identity, then $w^*(A^*-I)w$ is equivalent to $|w|^2$ (up to a positive constant) on some line in $\maths{C}^{n-1}$, which makes the map $W\mapsto d(W,\gamma W)$ unbounded on $\H_2$. Thus we are reduced to considering Heisenberg translations. For all $Z=[z_0:z:1] \in \partial_\infty {\HH}^n_\CC - \{\infty\}$, we have $$ d(W,T_Z W)=\operatorname{argcosh}\frac{|z\cdot\ov w-\ov zw-z_0-2|}2\,. $$ It is easy to see that this distance is independent of $W$ if and only if $z=0$, and is unbounded otherwise. \hfill$\Box$ \section{The geometry of elliptic isometries} \label{sect:elliptic} In this section, we fix $n\geq 2$ and a nontrivial elliptic isometry $\gamma$ of ${\HH}^n_\RR$. We denote by $C_\gamma$ the fixed point set of $\gamma$, which is a nonempty proper totally geodesic subspace of ${\HH}^n_\RR$ of dimension $k=k_\gamma$. We will say that $\gamma$ is {\em uniformly rotating} if there exists $\theta=\theta_\gamma\in\mathopen{]}0,\pi]$ (called the {\it rotation angle} of $\gamma$) such that for every $v\in \normalout C_\gamma$, the (nonoriented) angle between $v$ and $\gamma v$ is $\theta$. This property is invariant under conjugation, and once $k$ and $\theta$ are fixed, there exists only one conjugacy class of uniformly rotating nontrivial elliptic isometries. Note that when $n=2$ or $n=3$, every elliptic isometry $\gamma$ is uniformly rotating, and $\theta=\pi$ if $\gamma$ does not preserve the orientation. But there exist elliptic isometries in $\maths{H}^4_\maths{R}$ which are not uniformly rotating. 
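The hyperbolic trigonometry underlying the elliptic case, namely the classical right-angled triangle identity $\sinh\frac{d(x,\gamma x)}{2}=\sinh s\,\sin\frac{\theta}{2}$ for a rotation of angle $\theta$ about a point at distance $s$ in ${\HH}^2_\RR$ (used in Lemma \ref{lem:offfixedpoint} below), can be verified numerically; the Möbius parametrisation in the sketch below is our own choice:

```python
import math

def dist_h2(z, w):
    """Hyperbolic distance in the upper halfplane model."""
    return 2.0 * math.asinh(abs(z - w) / (2.0 * math.sqrt(z.imag * w.imag)))

def rotate_about_i(z, theta):
    """Elliptic Moebius transformation of the upper halfplane rotating by theta about i."""
    c, s = math.cos(theta / 2.0), math.sin(theta / 2.0)
    return (c * z + s) / (-s * z + c)

rel_err = []
for theta in (0.8, 2.4, math.pi):
    for s in (0.3, 2.0, 6.0):
        z = complex(0.0, math.exp(s))           # a point at distance s from i
        d = dist_h2(z, rotate_about_i(z, theta))
        lhs = math.sinh(d / 2.0)
        rhs = math.sinh(s) * math.sin(theta / 2.0)
        rel_err.append(abs(lhs - rhs) / rhs)
```

The identity holds exactly (up to floating-point error) for every rotation angle and every distance.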
Assume that $\gamma$ belongs to a nonelementary discrete group of isometries $\Gamma$ of ${\HH}^n_\RR$, and let ${\mathfrak K}$ be the conjugacy class of $\gamma$ in $\Gamma$. $\bullet$~ The mass $\|\sigma_{\mathfrak K}\|$ of the skinning measure is positive if and only if $\Lambda\Gamma$ is not contained in $\partial_\infty C_\gamma$. This is in particular the case if $n=2$. Furthermore, $\|\sigma_{\mathfrak K}\|$ is finite for instance if $\Gamma_{C_\gamma}\backslash C_\gamma$ is compact or if $\partial_\infty C_\gamma\cap \Lambda\Gamma$ is empty. This is in particular the case if $n=2$ and if $\gamma$ preserves the orientation. But when $n=2$ and $\gamma$ does not preserve the orientation, the mass $\|\sigma_{\mathfrak K}\|$ is not necessarily finite. For instance, let $\Gamma= T(\infty,\infty,\infty)$ be the discrete group of isometries of $\maths{H}^2_\maths{R}$ generated by the reflections $s_1, s_2, s_3$ on the sides of an ideal hyperbolic triangle. Then $C_{s_1}$ is one of these sides. Let us prove that $\pi_*\wt{\sigma}_{C_{s_1}}$ is a constant multiple of the Lebesgue measure along $C_{s_1}$. Indeed, the Patterson-Sullivan measure at infinity of the disc model of $\maths{H}^2_\maths{R}$ based at its origin is a multiple of the Lebesgue measure $d\theta$ on the circle, since $\Gamma$ has finite covolume. Since $d\theta$ is conformally invariant under every isometry of $\maths{H}^2_\maths{R}$, the measure $\pi_*\wt{\sigma}_{C_{s_1}}$ on $C_{s_1}$ is invariant under every loxodromic isometry preserving $C_{s_1}$, hence the result. Since $C_{s_1}$ injects in $\Gamma\backslash\maths{H}^2_\maths{R}$ and since its stabiliser in $\Gamma$ has order $2$, the measure $\pi_*\sigma_{C_{s_1}}$ is the multiple by half the above constant of the Lebesgue measure on the image of $C_{s_1}$ in $\Gamma\backslash\maths{H}^2_\maths{R}$, which is infinite. 
$\bullet$~ If $n=2$ and $k_\gamma=1$ (so that $\gamma$ reverses the orientation), then every isometry preserving $C_{\gamma}$ commutes with $\gamma$, hence $i_{\mathfrak K}=1$. If $n=2$ and $k_\gamma=0$ (so that $\gamma$ preserves the orientation), then the finite group $\Gamma_{C_{\gamma}}$ is either cyclic, in which case $\Gamma_{C_{\gamma}}=Z_\Gamma(\gamma)$ and $i_{\mathfrak K}=1$, or it is dihedral. Assume the second case holds. If the rotation angle of $\gamma$ is $\pi$, then again $\Gamma_{C_{\gamma}}=Z_\Gamma(\gamma)$ and $i_{\mathfrak K}=1$. Otherwise, $i_{\mathfrak K}=2$. \begin{lemm}\label{lem:offfixedpoint} A uniformly rotating elliptic isometry $\gamma$ of ${\HH}^n_\RR$ with rotation angle $\theta$ is $\psi$-equitranslating with $$ \psi(t)=\frac{t}{2} -\log\sin\frac{\theta}{2} +\operatorname{O}(e^{-\frac{t}{2}}) $$ as $t\rightarrow+\infty$. \end{lemm} \noindent{\bf Proof. } By the formulas in right-angled hyperbolic triangles (see \cite[Theo.~7.11.2 (ii)]{Beardon83}), if $x\in {\HH}^n_\RR$ is at distance $s$ from the fixed point set $C_\gamma$ of $\gamma$, we have $$ \sinh\frac{d(x,\gamma x)}{2}=\sinh s\,\sin\frac{\theta}{2}\;. $$ The result follows as in Lemma \ref{lem:offaxis}. \hfill$\Box$ \medskip The next result follows from this lemma in the same way as Corollary \ref{coro:mainloxo} follows from Lemma \ref{lem:offaxis}. It implies Theorem \ref{theo:intro} in the Introduction when $\gamma_0$ is elliptic. \begin{coro} \label{coro:mainellip} Let $\Gamma$ be a nonelementary discrete group of isometries of ${\HH}^n_\RR$, whose Bowen-Margulis measure is finite, and let $x_0\in{\HH}^n_\RR$. Let ${\mathfrak K}$ be a conjugacy class of uniformly rotating nontrivial elliptic elements of $\Gamma$ with rotation angle $\theta$, such that $\|\sigma_{\mathfrak K}\|$ and $i_{\mathfrak K}$ are positive and finite. 
Then, as $t\rightarrow+\infty$, $$ N_{{\mathfrak K},\,x_0}(t)\sim \frac{i_{\mathfrak K}\,\|\mu_{x_0}\|\,\|\sigma_{{\mathfrak K}}\|} {\delta_\Gamma\,\|m_{\rm BM}\|\, (\sin\frac{\theta}{2})^{\delta_\Gamma}} \;e^{\frac{\delta_\Gamma}{2}\, t}\,. $$ If $\Gamma$ is arithmetic or if $M$ is compact, then the error term is $\operatorname{O}(e^{(\frac{\delta_\Gamma}{2} -\kappa) t})$ for some $\kappa >0$. Furthermore, if $v_\alpha$ is the unit tangent vector at $x_0$ to the geodesic segment $[x_0,\alpha x_0]$ for every $\alpha\in \Gamma- \Gamma_{x_0}$, for the weak-star convergence of measures on $T^1\wt M$, we have $$ \lim_{t\rightarrow+\infty}\; \frac{\delta_\Gamma\;\|m_{\rm BM}\|\, (\sin\frac{\theta}{2})^{\delta_\Gamma}} {i_{{\mathfrak K}}\,\|\sigma_{{\mathfrak K}}\|} \;e^{-\frac{\delta_\Gamma}{2}\, t}\; \sum_{\alpha\in{\mathfrak K},\; 0<d(x_0,\,\alpha x_0)\leq t} m_\alpha\,\Delta_{v_\alpha} \;=\; (\pi_+^{-1})_*\mu_{x_0}\;.\;\;\;\Box $$ \end{coro} \section{Counting conjugacy classes of subgroups} \label{sect:countingsubgroups} Let $\wt M, x_0,\Gamma$ be as in the beginning of Section \ref{sec:rappels}. Let $\Gamma_0$ be a subgroup of $\Gamma$, and let $${\mathfrak K}=\{\gamma\Gamma_0\gamma^{-1}\;:\;\gamma\in\Gamma\}$$ be its conjugacy class in $\Gamma$. In this Section, we will study the asymptotic growth, as $t\rightarrow+\infty$, of the cardinality of $$ \{A\in{\mathfrak K}\;:\; \inf_{\alpha\in A -\{e\}} d(x_0,\,\alpha x_0)\leq t\}\;, $$ the set (assumed to be finite) of the conjugates of $\Gamma_0$ in $\Gamma$ whose minimal displacement of $x_0$ is at most $t$. 
We will assume the following conditions on $\Gamma_0$: ($*$)~ There exists a nonempty proper closed convex subset $C_0$ in $\wt M$ such that the normaliser $N_\Gamma(\Gamma_0)$ of $\Gamma_0$ in $\Gamma$ is a subgroup of the stabiliser $\Gamma_{C_0}$ of $C_0$ in $\Gamma$, with finite index, denoted by $i_0$, and such that the family $(\gamma C_0)_{\gamma\in\Gamma/\Gamma_{C_0}}$ is locally finite in $\wt M$; \smallskip ($**$)~ There are $c_-,c_+\in\mathopen{]}0,+\infty[$ such that $c_-\leq \inf_{\gamma\in \Gamma_0 -\{e\}} d(y,\,\gamma y)\leq c_+$ for every $y\in \partial C_0$. \medskip For instance, $\Gamma_0$ could be an infinite index malnormal torsion-free cocompact stabiliser of a proper totally geodesic subspace $C_0$ of dimension at least $1$ in $\wt M$, or a torsion-free cocompact stabiliser of a horosphere centered at a parabolic fixed point of $\Gamma$ (with $C_0$ the horoball bounded by this horosphere), in which cases $i_0=1$ and $ \|\sigma_{C_0}\|$ is positive and finite. For every $A=\gamma\Gamma_0\gamma^{-1}\in{\mathfrak K}$, let $$ m_A= ({\operatorname{Card}}(\Gamma_{x_0}\cap\Gamma_{\gamma C_0}))^{-1}\,, $$ which is well-defined since the normaliser of $\Gamma_0$ in $\Gamma$ stabilises $C_0$. We define the counting function $$ N_{{\mathfrak K},\,x_0}(t)= \sum_{A\in{\mathfrak K},\; \inf_{\alpha\in A -\{e\}} d(x_0,\,\alpha x_0)\leq t} m_A =\sum_{\gamma\in\Gamma/N_\Gamma(\Gamma_0),\; \inf_{\alpha\in \Gamma_0 -\{e\}} d(x_0,\,\gamma\alpha\gamma^{-1} x_0)\leq t} m_A\,. $$ \begin{prop} Let $\wt M$ be a complete simply connected Riemannian manifold with pinched negative sectional curvature, let $x_0\in\wt M$, and let $\Gamma$ be a nonelementary discrete group of isometries of $\wt M$. Assume that the Bowen-Margulis measure of $\Gamma$ is finite and mixing for the geodesic flow on $T^1M$. 
Let $\Gamma_0$ be a subgroup of $\Gamma$ and let $C_0$ be a subset of $\wt M$ satisfying the conditions ($*$) and ($**$), such that the skinning measure $\|\sigma_{C_0}\|$ is positive and finite. Let ${\mathfrak K}$ be the conjugacy class of $\Gamma_0$ in $\Gamma$. Then, for every $\epsilon>0$, if $t$ is big enough, $$ \frac{i_{0}\,\|\mu_{x_0}\|\,\|\sigma_{C_0}\|}{\delta_\Gamma\,\|m_{\rm BM} \|\,e^{\frac{\delta_\Gamma\,c_+}{2}}} \,e^{\frac{\delta_\Gamma}{2}\, t}\,(1-\epsilon) \leq N_{{\mathfrak K},\,x_0}(t)\leq\frac{i_{0}\,\|\mu_{x_0}\|\,\|\sigma_{C_0}\|} {\delta_\Gamma\,\|m_{\rm BM}\|\,(\sinh\frac{c_-}{2})^{\delta_\Gamma}} \,e^{\frac{\delta_\Gamma}{2}\, t}\,(1+\epsilon)\,. $$ \end{prop} \noindent{\bf Proof. } Let $\gamma\in\Gamma$. By the local finiteness assumption, except for finitely many cosets of $\gamma$ in $\Gamma/\Gamma_{C_0}$, the point $x_0$ does not belong to $\gamma C_0$. As in Lemma \ref{lem:generaltranslation}, if $x_0\in X$ is at distance $s$ from $\gamma C_0$, we have $$ 2\operatorname{argsinh}(\cosh s\sinh\frac{c_-}2)\leq \inf_{\alpha\in \Gamma_0 -\{e\}} d(x_0,\gamma\alpha\gamma^{-1} x_0) \leq 2s+c_+\,. $$ The proof is then similar to the proof of Corollary \ref{coro:encadreloxo}. \hfill$\Box$ \medskip We have the following more precise result under stronger assumptions on $\Gamma_0$, with a proof similar to those of Corollaries \ref{coro:mainloxo} and \ref{coro:mainpara}. \begin{theo} Let $\Gamma$ be a nonelementary discrete group of isometries of ${\HH}^n_\RR$ with finite Bowen-Margulis measure, and let $x_0\in{\HH}^n_\RR$. Let $\Gamma_0$ be the stabiliser in $\Gamma$ of a bounded parabolic fixed point of $\Gamma$, acting purely by translations on the boundary of any horoball $C_0$ centred at this fixed point. Let ${\mathfrak K}$ be the conjugacy class of $\Gamma_0$ in $\Gamma$ and let $\ell=\min_{\gamma\in\Gamma_0 -\{e\}} d(y,\gamma y)$ for any $y\in \partial C_0$. 
Then, as $t\rightarrow+\infty$, $$ N_{{\mathfrak K},\,x_0}(t)\sim \frac{\|\mu_{x_0}\|\,\|\sigma_{C_0}\|} {\delta_\Gamma\,\|m_{\rm BM}\|\, (2\sinh\frac{\ell}{2})^{\delta_\Gamma}} \,e^{\frac{\delta_\Gamma}{2}\, t}\,. $$ If $\Gamma$ is arithmetic, then the error term is $\operatorname{O}(e^{(\frac{\delta_\Gamma}{2} -\kappa) t})$ for some $\kappa >0$. \hfill$\Box$ \end{theo} {\small
\section{Introduction} UPt$_3$~\cite{stewart:1984} belongs to the first generation of the family of heavy-fermion superconductors together with CeCu$_2$Si$_2$ and UBe$_{13}$, and has unique superconducting properties compared with them. Immediately after the pioneering discovery~\cite{fisher:1989} of the double superconducting transition, it was found~\cite{hasselbach:1989,bruls:1990,schenstrom:1989} that the phase diagram in the $H$ vs $T$ plane consists of the A, B, and C phases. The A (C) phase is at a high (low) temperature and a low (high) field, while the B phase is at a low $T$ and a low $H$. It is rather clear that the order parameter (OP) must have multiple components. The main argument is centered on how to understand this phase diagram, or on what OP can describe it in a consistent manner~\cite{sauls:1994,joynt:2002}. The splitting of the superconducting transition temperatures $T_{c1}\cong 550$ mK and $T_{c2}\cong 500$ mK is now generally understood as arising from a symmetry breaking field acting on an otherwise doubly degenerate pairing state. The identification of this symmetry breaking field is not yet settled, but it is considered to come either from the antiferromagnetic (AF) ordering at $T_N=5$ K~\cite{joynt:1988,machida:1989,machida:1989b,machida:1991,ozaki:1992,hess:1989,tokuyasu:1990} or from the crystal lattice symmetry lowering that occurs at higher temperatures~\cite{machida:1996}. The remaining problem is identifying the OP symmetry. The central discussions are on the cause of the OP degeneracy, that is, either the orbital part~\cite{joynt:1988,sauls:1994,hess:1989,tokuyasu:1990} or the spin part of the OP~\cite{machida:1989,machida:1989b,machida:1991,ozaki:1992}. The former scenario has a fundamental difficulty: the so-called gradient coupling term in the Ginzburg-Landau (GL) functional inevitably prevents the observed crossing of the two transition lines starting from $T_{c1}$ and $T_{c2}$, removing the C phase. 
Therefore, the orbital scenario needs fine tuning of the underlying Fermi surface topology and of the detailed structure of the orbital function~\cite{sauls:1994,sauls:1994b}. Among the various proposals, the $E_{2u}$ symmetry is regarded as the most plausible candidate, where $\bi{d}(\bi{k})\propto\bi{z}(k_x^2-k_y^2+2ik_xk_y)k_z$. This state breaks time-reversal symmetry and is fourfold symmetric in the A and C phases. In the B phase, there exist one line node at the equator and two point nodes at the poles. On the other hand, the spin scenario~\cite{machida:1989,machida:1989b,machida:1991,ozaki:1992,ohmi:1993,machida:1993,machida:1995,ohmi:1996,machida:1999} overcomes this difficulty, but the difficulty of estimating the spin-orbit (SO) coupling remains, because the spin scenario assumes a weak SO coupling in contrast to the strong SO coupling assumption adopted in the $E_{2u}$ scenario~\cite{sauls:1994,hess:1989,tokuyasu:1990,sauls:1994b,choi:1991,choi:1993}. This controversy was resolved experimentally when the Knight shift~\cite{tou:1996,tou:1998} was found to start decreasing below $T_{c2}$ when $H\sim 2$ kG for $H\parallel c$. This field $H_{\rm rot}$, corresponding to the rotation of the $d$-vector~\cite{tou:1998}, gives an estimate of the SO coupling strength in this system, justifying the classification scheme based on the weak SO coupling; such a $d$-vector rotation is never attained in the strong-SO case, where the $d$-vector is strongly tied to the underlying crystalline axes via the orbital part of the OP. 
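As a quick sanity check of the nodal structure quoted above, the B-phase $E_{2u}$ gap magnitude is $|\bi{d}(\bi{k})|\propto|k_x^2-k_y^2+2ik_xk_y|\,|k_z|=(k_x^2+k_y^2)|k_z|$ on the unit Fermi sphere, which vanishes only on the equatorial line node and at the two polar point nodes. A minimal numerical scan (pure Python, function names ours) confirms this:

```python
import math

def gap_e2u(kx, ky, kz):
    """|d(k)| for the B-phase E2u order parameter z (kx^2 - ky^2 + 2 i kx ky) kz."""
    return abs((kx ** 2 - ky ** 2 + 2j * kx * ky) * kz)

equator, off_nodes = [], []
for i in range(1, 60):                      # polar angle grid, poles excluded
    th = math.pi * i / 60.0
    for j in range(60):                     # azimuthal angle grid
        ph = 2.0 * math.pi * j / 60.0
        g = gap_e2u(math.sin(th) * math.cos(ph), math.sin(th) * math.sin(ph), math.cos(th))
        (equator if i == 30 else off_nodes).append(g)
pole_gap = gap_e2u(0.0, 0.0, 1.0)           # point node at the north pole
```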
The basic requirements for the pairing state realized in UPt$_3$ can be summarized as follows: (1) The gap structure contains both horizontal line node(s) and point nodes, as evidenced by power law behaviors in various directionally dependent transport measurements, such as thermal conductivity~\cite{lussier:1996b,suderow:1997} and ultrasound attenuation experiments~\cite{bishop:1984,muller:1986,shivaram:1986b,ellman:1996}, and also in bulk measurements, such as specific heat~\cite{hasselbach:1989,brison:1994,ramirez:1995}, penetration depth~\cite{yaouanc:1998}, nuclear relaxation time~\cite{kohori:1988}, and magnetization~\cite{tenya:1996} experiments. (2) As mentioned above, the detailed Knight shift experiment~\cite{tou:1998} shows a decrease in the magnetic susceptibility below $T_{c2}$, depending on the field direction and its strength. Thus, it is concluded that the $d$-vector contains both the $\bi{b}$-component and the $\bi{c}$-component in the B phase for the hexagonal crystal. Upon increasing $H$ $(\parallel c)$, this $\bi{c}$-component becomes the $\bi{a}$-component. (3) Finally, the recent angle-resolved thermal conductivity measurement~\cite{machida:2012} finds a twofold symmetric gap structure in the basal plane for the C phase, full rotational symmetry in the B phase, and horizontal line nodes located at the tropical position, not at the equator of the Fermi sphere. In view of previous\cite{joynt:2002} and recent experiments\cite{machida:2012}, we come to a new stage where the proposed pairing states can be critically examined: those belonging to the orbital scenario, namely the singlet category $E_{1g}$\cite{park:1995} and the triplet category $E_{2u}$\cite{hess:1989,tokuyasu:1990}, in addition to the so-called accidental degeneracy scenario\cite{joynt:1990,chen:1993}, and those belonging to the spin scenario~\cite{machida:1989,machida:1989b,machida:1991,ozaki:1992,ohmi:1993,machida:1993,machida:1995,ohmi:1996,machida:1999}. 
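Requirement (3) can be checked directly against the planar $E_{1u}$ state $(\bi{c}k_b+\bi{b}k_a)(5k_z^2-1)$ adopted later in this paper: since $\bi{b}$ and $\bi{c}$ are orthonormal, $|\bi{d}(\bi{k})|^2=(k_a^2+k_b^2)(5k_z^2-1)^2$, with horizontal line nodes at $k_z=\pm1/\sqrt5$ (tropical, not equatorial) and point nodes at the poles. A minimal numerical check (names ours):

```python
import math

def gap_sq_planar(ka, kb, kc):
    """|d(k)|^2 for the planar E1u state (c kb + b ka)(5 kc^2 - 1), b and c orthonormal."""
    return (ka ** 2 + kb ** 2) * (5.0 * kc ** 2 - 1.0) ** 2

kz_node = 1.0 / math.sqrt(5.0)              # latitude of the horizontal line nodes
kr = math.sqrt(1.0 - kz_node ** 2)          # in-plane radius on the unit Fermi sphere
line_node = max(gap_sq_planar(kr * math.cos(p), kr * math.sin(p), kz_node)
                for p in (0.0, 1.0, 2.0, 3.0))
pole_node = gap_sq_planar(0.0, 0.0, 1.0)    # point node at the pole
equator_gap = gap_sq_planar(1.0, 0.0, 0.0)  # the gap stays fully open at the equator
```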
Previously, in our spin scenario, we tentatively chose the orbital part from the $E_{1u}$ representation~\cite{ohmi:1993,machida:1993,machida:1995,ohmi:1996}, where the OP is nonunitary, and also from the $E_{2u}$ representation~\cite{machida:1999}, among the pairing functions classified in the absence~\cite{ozaki:1986,ozaki:1985} and presence~\cite{ozaki:1989} of antiferromagnetism. However, the precise orbital form is still to be determined. In view of this finding, we are now in a position to identify the precise pair functions for all the A, B, and C phases on the basis of the spin degeneracy scenario. The paper is organized as follows. We first classify the pairing functions group-theoretically and examine the possible symmetry realized in connection with various existing experimental data in \S II. The formulation is given in \S III on the basis of the quasi-classical Eilenberger theory. With the identified pair function, we investigate the vortex structures in the B phase in \S IV, including the exotic vortex core structure, namely, the so-called double-core structure associated with the multicomponent OP. The present study is a natural extension of our GL framework for multiple OP~\cite{machida:1993b,fujita:1994,hirano:1995} to a microscopic calculation based on the Eilenberger theory, which provides quasiparticle (QP) structures in periodic vortex lattices. The vortex structure is also studied for the A and C phases in connection with the twofold gap function in \S V. The final section is devoted to conclusions and future perspectives, where detailed comparisons with other proposed pairing functions are made. We use the two notations $(x,y,z)$ and $(a,b,c)$ to denote the spatial coordinates interchangeably. \section{Classification of Possible Pairing States} In this section, we first enumerate possible pairing states group-theoretically~\cite{volovik:1985,ozaki:1986,ozaki:1985}.
The classified states are the so-called inert phases, which are stable against small changes in the system parameters; this differs from the previous classifications, where only $p$-wave states were treated~\cite{ozaki:1986,ozaki:1985}. By the same procedure as in refs.~\cite{ozaki:1986} and \cite{ozaki:1985}, we can easily extend the classification to $f$-wave pairing states with an orbital angular momentum $l=3$. Table I lists all the possible $f$-wave inert states and their little groups under the hexagonal symmetry $D_6$; the basis functions of the irreducible representations are also given there. As mentioned before, the required properties are all satisfied by the planar state $\hat{\tau }_xl_1^{E_{1u}}+\hat{\tau }_yl_2^{E_{1u}}$ in $E_{1u}$, namely, in the present context of UPt$_3$, $(\bi{c}k_b+\bi{b}k_a)(5k_z^2-1)$, where the unit vectors $\bi{a}$, $\bi{b}$, and $\bi{c}$ in $D_6$ denote the $d$-vector components. Note that this state is a natural extension of the planar state $\hat{\tau }_xk_x+\hat{\tau }_yk_y$ of $p$-wave pairing that is realized in thin films of the superfluid $^3$He B-phase. \begin{table*}[t] \begin{center} \caption{$f$-Wave inert phases and their little groups in the $D_6$ system.} {\tabcolsep=0.5mm \begin{tabular}{lllll} \hline\hline rep. & state & order parameter & basis of irr.~rep. & little group\\\hline $A_{2u}$ & $A_{2u}$-polar & $\hat{\tau }_zl^{A_{2u}}$ & $l^{A_{2u}}\!=\!2k_z^3\!-\!3(k_x^2\!+\!k_y^2)k_z$ & $(1\!+\!C_{21}'u_{2x})(1\!+\!C_{21}'\tilde{\pi })\{\bi{C}_6\!\times\! \bi{A}(\bi{e}_z)\!\times\! T\}$ \\ & $A_{2u}$-$\beta$ & $(\hat{\tau }_x\!+\!i\hat{\tau }_y)l^{A_{2u}}$ & & $(1\!+\!tu_{2x})(1\!+\!C_{21}'u_{2z})\{\bi{C}_6\!\times\! \tilde{\bi{A}}(\bi{e}_z)\}$ \\\hline $B_{1u}$ & $B_{1u}$-polar & $\hat{\tau }_zl^{B_{1u}}$ & $l^{B_{1u}}\!=\!k_x^3\!-\!3k_y^2k_x$ & $(1\!+\!C_2u_{2x})(1\!+\!C_{21}''\tilde{\pi })\{\bi{D}_3'\!\times\! \bi{A}(\bi{e}_z)\!\times\!
T\}$ \\ & $B_{1u}$-$\beta$ & $(\hat{\tau }_x\!+\!i\hat{\tau }_y)l^{B_{1u}}$ & & $(1\!+\!tu_{2x})(1\!+\!C_2u_{2z})\{\bi{D}_3'\!\times\! \tilde{\bi{A}}(\bi{e}_z)\}$ \\\hline $B_{2u}$ & $B_{2u}$-polar & $\hat{\tau }_zl^{B_{2u}}$ & $l^{B_{2u}}\!=\!3k_yk_x^2\!-\!k_y^3$ & $(1\!+\!C_2u_{2x})(1\!+\!C_{21}'\tilde{\pi })\{\bi{D}_3''\!\times\! \bi{A}(\bi{e}_z)\!\times\! T\}$ \\ & $B_{2u}$-$\beta$ & $(\hat{\tau }_x\!+\!i\hat{\tau }_y)l^{B_{2u}}$ & & $(1\!+\!tu_{2x})(1\!+\!C_{21}'u_{2z})\{\bi{D}_3''\!\times\! \tilde{\bi{A}}(\bi{e}_z)\}$ \\\hline $E_{1u}$ & $E_{1u}$-planar & $\hat{\tau }_xl_1^{E_{1u}}\!+\!\hat{\tau }_yl_2^{E_{1u}}$ & $l_1^{E_{1u}}\!=\!(5k_z^2\!-\!1)k_x$ & $(1\!+\!C_{21}'u_{2y}\tilde{\pi })\{_{II}\bi{D}_6\!\times\! T\}$ \\ & $E_{1u}$-polar$_1$ & $\hat{\tau }_zl_1^{E_{1u}}$ & $l_2^{E_{1u}}\!=\!(5k_z^2\!-\!1)k_y$ & $(1\!+\!C_{2z}\tilde{\pi })(1\!+\!C_{2z}u_{2x})\{\bi{C}_{21}'\!\times\! \bi{A}(\bi{e}_z)\!\times\! T\}$ \\ & $E_{1u}$-polar$_2$ & $\hat{\tau }_zl_2^{E_{1u}}$ & & $(1\!+\!C_{2z}\tilde{\pi })(1\!+\!C_{2z}u_{2x})\{\bi{C}_{21}''\!\times\! \bi{A}(\bi{e}_z)\!\times\! T\}$ \\ & $E_{1u}$-bipolar & $\hat{\tau }_xl_1^{E_{1u}}\!+\!i\hat{\tau }_yl_2^{E_{1u}}$ & & $(1\!+\!tu_{2x})(1\!+\!C_{2z}\tilde{\pi })_{II}\bi{D}_2$ \\ & $E_{1u}$-axial & $\hat{\tau }_z(l_1^{E_{1u}}\!+\!il_2^{E_{1u}})$ & & $(1\!+\!tC_{21}')(1\!+\!u_{2x}\tilde{\pi })\{\tilde{\bi{C}}_6\!\times\! \bi{A}(\bi{e}_z)\}$ \\ & $E_{1u}$-$\beta_1$ & $(\hat{\tau }_x\!+\!i\hat{\tau }_y)l_1^{E_{1u}}$ & & $(1\!+\!tu_{2x})(1\!+\!C_{2z}u_{2z})\{\bi{C}_{21}'\!\times\! \tilde{\bi{A}}(\bi{e}_z)\}$ \\ & $E_{1u}$-$\beta_2$ & $(\hat{\tau }_x\!+\!i\hat{\tau }_y)l_2^{E_{1u}}$ & & $(1\!+\!tu_{2x})(1\!+\!C_{2z}u_{2z})\{\bi{C}_{21}''\!\times\! \tilde{\bi{A}}(\bi{e}_z)\}$ \\ & $E_{1u}$-$\gamma$ & $(\hat{\tau }_x\!+\!i\hat{\tau }_y)(l_1^{E_{1u}}\!+\!il_2^{E_{1u}})$ & & $(1\!+\!tC_{21}'u_{2x})\{\tilde{\bi{C}}_6\!\times\! 
\tilde{\bi{A}}(\bi{e}_z)\}$ \\\hline $E_{2u}$ & $E_{2u}$-planar & $\hat{\tau }_xl_1^{E_2u}\!+\!\hat{\tau }_yl_2^{E_{2u}}$ & $l_1^{E_{2u}}\!=\!2k_zk_xk_y$ & $(1\!+\!C_{21}'u_{2x})(1\!+\!u_{2z}\tilde{\pi })\{_{II}\bi{C}_6^2\!\times\! T\}$ \\ & $E_{2u}$-polar$_1$ & $\hat{\tau }_zl_1^{E_{2u}}$ & $l_2^{E_{2u}}\!=\!-k_z(k_x^2\!-\!k_y^2)$ & $(1\!+\!u_{2x}\tilde{\pi })\{\bi{D}_2\!\times\! \bi{A}(\bi{e}_z)\!\times\! T\}$ \\ & $E_{2u}$-polar$_2$ & $\hat{\tau }_zl_2^{E_{2u}}$ & & $(1\!+\!u_{2x}\tilde{\pi })(1\!+\!C_{21}'u_{2x})\{\bi{C}_2\!\times\! \bi{A}(\bi{e}_z)\!\times\! T\}$ \\ & $E_{2u}$-bipolar & $\hat{\tau }_xl_1^{E_{2u}}\!+\!i\hat{\tau }_yl_2^{E_{2u}}$ & & $(1\!+\!tu_{2x})(1\!+\!C_{21}'u_{2x})\bi{C}_2$ \\ & $E_{2u}$-axial & $\hat{\tau }_z(l_1^{E_{2u}}\!+\!il_2^{E_{2u}})$ & & $(1\!+\!tC_{21}')\{\tilde{\bi{C}}_6^2\!\times\! \bi{A}(\bi{e}_z)\}$ \\ & $E_{2u}$-$\beta_1$ & $(\hat{\tau }_x\!+\!i\hat{\tau }_y)l_1^{E_{2u}}$ & & $(1\!+\!tu_{2x})\{\bi{D}_2\!\times\! \tilde{\bi{A}}(\bi{e}_z)\}$ \\ & $E_{2u}$-$\beta_2$ & $(\hat{\tau }_x\!+\!i\hat{\tau }_y)l_2^{E_{2u}}$ & & $(1\!+\!tu_{2x})(1\!+\!C_{21}'u_{2z})\{\bi{C}_2\!\times\! \tilde{\bi{A}}(\bi{e}_z)\}$ \\ & $E_{2u}$-$\gamma$ & $(\hat{\tau }_x\!+\!i\hat{\tau }_y)(l_1^{E_{2u}}\!+\!il_2^{E_{2u}})$ & & $(1\!+\!tC_{21}'u_{2x})\{\tilde{\bi{C}}_6^2\!\times\! 
\tilde{\bi{A}}(\bi{e}_z)\}$ \\\hline \multicolumn{5}{l}{ $\tilde{\bi{A}}(\bi{e}_z)\!=\!\{u(\bi{e}_z,\theta)\tilde{\theta }|0\le\theta\le 2\pi\}$, $\bi{C}_2\!=\!\{E\!+\!C_{2z}\}$, $\bi{C}_{21}'\!=\!\{E\!+\!C_{21}'\}$, $\bi{D}_3'\!=\!\{\bi{C}_3\!+\!C_{21}'\bi{C}_3\}$, }\\ \multicolumn{5}{l}{ $\bi{D}_3''\!=\!\{\bi{C}_3\!+\!C_{21}''\bi{C}_3\}$, $_{II}\bi{D}_2\!=\!\{1,C_{2z}u_{2z},C_{21}'u_{2x},C_{21}''u_{2y}\}$, $\tilde{\bi{C}}_6\!=\!\{C_6^j(\frac{2\pi j}{6}),j\!=\!0,1,\cdots,5\}$, }\\ \multicolumn{5}{l}{ $\tilde{\bi{C}}_6^2\!=\!\{C_6^j(\frac{2\cdot 2\pi j}{6}),j\!=\!0,1,\cdots,5\}$, $_{II}\bi{C}_6^2\!=\!\{C_6^ju_6^{2j},j\!=\!0,1,\cdots,5\}$, $\hat{\tau }_{\mu }\!=\!i\hat{\sigma }_{\mu }\hat{\sigma }_y$ $(\mu\!=\!x,y,z)$. }\\ \hline\hline \end{tabular}} \end{center} \end{table*} \begin{figure} \begin{center} \includegraphics[width=8.5cm]{65268Fig1.eps} \end{center} \caption{(Color online) Schematic phase diagrams under $H\parallel c$ (a) and $H\perp c$ (b). The orbital states are $\lambda_a(\bi{k})=\sqrt{21/8}k_a(5k_c^2-1)$ and $\lambda_b(\bi{k})=\sqrt{21/8}k_b(5k_c^2-1)$. (c) Gap functions of the A, B, and C phases. } \label{phase} \end{figure} \subsection{Phase assignment and gap structures} This $E_{1u}$ state is assigned to each phase as follows: the B phase $(\bi{c}k_b+\bi{b}k_a)(5k_c^2-1)$, the A phase $\bi{b}k_a(5k_c^2-1)$, and the C phase $\bi{c}k_b(5k_c^2-1)$, as shown in Fig.~\ref{phase}. Thus, the B phase is characterized by two horizontal line nodes at $\cos^2\theta=1/5$, i.e., $\theta=63.4^{\circ}$ and $116.6^{\circ}$, and two point nodes at the poles [see Fig.~\ref{phase}(c)]. The C (A) phase has a vertical line node at $k_b=0$ ($k_a=0$) in addition to the two horizontal line nodes [see Fig.~\ref{phase}(c)]. This naturally explains the twofold thermal conductivity oscillation when $H$ rotates within the basal plane.
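The quoted node angles follow directly from the common orbital factor $(5k_c^2-1)$, which vanishes at $k_c=\cos\theta=\pm 1/\sqrt{5}$. A minimal numerical check of this arithmetic (illustrative only, not part of the original analysis):

```python
import math

# Horizontal line nodes of the E_1u gap: the orbital factor (5*kc**2 - 1)
# vanishes at cos(theta) = +/- 1/sqrt(5).
theta_north = math.degrees(math.acos(1.0 / math.sqrt(5.0)))
theta_south = 180.0 - theta_north

print(theta_north, theta_south)  # -> 63.43... and 116.56... degrees
```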
Note also that this gap structure, with two line nodes and two point nodes in the B phase, is consistent with the various transport and bulk measurements mentioned above. In Fig.~\ref{bulkDOS}, we show the densities of states (DOS's) $N(E)$ for the three phases. \begin{figure} \begin{center} \includegraphics[width=7cm]{65268Fig2.eps} \end{center} \caption{(Color online) DOS's in the bulk for the A and C phases, $|\Delta(\bi{k})|=\Delta_0|k_x(5k_z^2-1)|$ (solid line), and for the B phase, $|\Delta(\bi{k})|=\Delta_0\sqrt{k_x^2+k_y^2}|5k_z^2-1|$ (dashed line). } \label{bulkDOS} \end{figure} \subsection{Unitary versus nonunitary} In Table I, we also list the bipolar state $\hat{\tau }_xl_1^{E_{1u}}+i\hat{\tau }_yl_2^{E_{1u}}$, which is nonunitary, breaks time-reversal symmetry, and can also explain the phase diagram in $H$ vs $T$. However, it is not consistent with the recent $\mu$SR experiment~\cite{reotier:1995}, which negates the earlier claim that time-reversal symmetry is broken~\cite{luke:1993}. The gap structure of this bipolar state is characterized by $|k_x\pm k_y||5k_z^2-1|$ in the B phase with fourfold symmetry. This contradicts the results of the thermal conductivity experiment~\cite{machida:2012} that indicated the rotational symmetry in the B phase. \subsection{Other classified states and strong SO state} The remaining states among the classified inert phases, i.e., those in $A_{2u}$, $B_{1u}$, $B_{2u}$, $E_{1u}$, and $E_{2u}$, are not accepted for the following reasons: they do not provide the double transition ($A_{2u}$, $B_{1u}$, and $B_{2u}$) or do not explain the twofold symmetry in the C phase ($E_{2u}$). The $E_{1u}$ states in Table I other than the planar state, namely, the polar$_1$, polar$_2$, axial, $\beta_1$, $\beta_2$, and $\gamma$ states, are not appropriate either, because they do not give the double transition (polar$_1$ and polar$_2$) or because they fail to give the required gap structure (axial, $\beta_1$, $\beta_2$, and $\gamma$).
The $E_{2u}$ $\bi{c}(k_a^2-k_b^2+2ik_ak_b)k_c$ classified in the strong-SO case fails to explain the Knight shift experiment~\cite{tou:1996,tou:1998} and the twofold symmetry in the C phase~\cite{machida:2012}. Then we are left with only the $E_{1u}$ planar state with the $f$-wave character mentioned above. Note also that, since $E_{1u}$ with the $p$-wave character $\hat{\tau }_xk_x+i\hat{\tau }_yk_y$ has no line node, it has been excluded as a candidate from the outset. \begin{widetext} \section{Quasi-Classical Eilenberger Theory} We start with the quasi-classical spinful Eilenberger equation~\cite{eilenberger:1968,schopohl:1980,serene:1983,fogelstrom:1995,sauls:2009}. The quasi-classical Green's function $\widehat{g}(\bi{k},\bi{r},\omega_n)$ is calculated using the Eilenberger equation \begin{align} -i\hbar\bi{v}(\bi{k})\cdot\bi{\nabla }\widehat{g}(\bi{k},\bi{r},\omega_n) = \left[ \begin{pmatrix} \left[i\omega_n+(e/c)\bi{v}(\bi{k})\cdot\bi{A}(\bi{r})\right]\hat{1} & -\hat{\Delta }(\bi{k},\bi{r}) \\ \hat{\Delta }(\bi{k},\bi{r})^{\dagger } & -\left[i\omega_n+(e/c)\bi{v}(\bi{k})\cdot\bi{A}(\bi{r})\right]\hat{1} \end{pmatrix} ,\widehat{g}(\bi{k},\bi{r},\omega_n) \right], \label{Eilenberger eq} \end{align} \end{widetext} where the ordinary hat indicates the 2 $\times$ 2 matrix in spin space and the wide hat indicates the 4 $\times$ 4 matrix in particle-hole and spin spaces. The quasi-classical Green's function is described in particle-hole space by \begin{align} \widehat{g}(\bi{k},\bi{r},\omega_n) = -i\pi \begin{pmatrix} \hat{g}(\bi{k},\bi{r},\omega_n) & i\hat{f}(\bi{k},\bi{r},\omega_n) \\ -i\underline{\hat{f}}(\bi{k},\bi{r},\omega_n) & -\underline{\hat{g}}(\bi{k},\bi{r},\omega_n) \end{pmatrix}, \end{align} with the direction of the relative momentum of a Cooper pair $\bi{k}$, the center-of-mass coordinate of the Cooper pair $\bi{r}$, and the Matsubara frequency $\omega_n=(2n+1)\pi k_B T$. 
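To connect eq.~\eqref{Eilenberger eq} with the bulk DOS used later, we quote a standard quasi-classical result (not derived here): for a spatially uniform unitary state with $\bi{A}=0$, the commutator vanishes for
\begin{align}
\hat{g}(\bi{k},\omega_n)=\frac{\omega_n}{\sqrt{\omega_n^2+|\Delta(\bi{k})|^2}}\,\hat{1},\qquad
\hat{f}(\bi{k},\omega_n)=\frac{\hat{\Delta }(\bi{k})}{\sqrt{\omega_n^2+|\Delta(\bi{k})|^2}},
\end{align}
where $|\Delta(\bi{k})|^2\hat{1}=\hat{\Delta }(\bi{k})\hat{\Delta }^{\dagger }(\bi{k})$. This solution is consistent with the normalization condition $\widehat{g}^2=-\pi^2\widehat{1}$, and the analytic continuation $i\omega_n\rightarrow E+i\eta$ reproduces the familiar bulk DOS ${\rm Re}[E/\sqrt{E^2-|\Delta(\bi{k})|^2}]$ plotted in Fig.~\ref{bulkDOS}.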
The quasi-classical Green's function satisfies the normalization condition $\widehat{g}^2=-\pi^2\widehat{1}$. The Fermi velocity is assumed to be $\bi{v}(\bi{k})=v_F\bi{k}$ on a three-dimensional Fermi sphere. In the symmetric gauge, the vector potential $\bi{A}(\bi{r})=(\bar{\bi{B}}\times\bi{r})/2+\bi{a}(\bi{r})$, where $\bar{\bi{B}}=(0,0,\bar{B})$ is a uniform flux density and $\bi{a}(\bi{r})$ is related to the internal field $\bi{B}(\bi{r})=\bar{\bi{B}}+\nabla\times\bi{a}(\bi{r})$. The unit cell of the vortex lattice is given by $\bi{r}=s_1(\bi{u}_1-\bi{u}_2)+s_2\bi{u}_2$ with $-0.5\le s_i\le 0.5$ $(i=1,2)$, $\bi{u}_1=(a_x,0,0)$, and $\bi{u}_2=(a_x/2,a_y,0)$. In these coordinates, a hexagonal lattice is described by $a_y/a_x=\sqrt{3}/2$ or $1/(2\sqrt{3})$. The spin triplet order parameter is defined by \begin{align} \hat{\Delta }(\bi{k},\bi{r})=i\bi{d}(\bi{k},\bi{r})\cdot\hat{\bi{\sigma }}\hat{\sigma }_y, \end{align} with \begin{align} \bi{d}(\bi{k},\bi{r})=\bi{a}\Delta_a(\bi{r})\phi_a(\bi{k})+\bi{b}\Delta_b(\bi{r})\phi_b(\bi{k})+\bi{c}\Delta_c(\bi{r})\phi_c(\bi{k}), \end{align} where $\hat{\bi{\sigma }}$ is the vector of Pauli matrices. The self-consistent condition for $\Delta_i(\bi{r})$ is given as \begin{multline} \hat{\Delta }(\bi{k},\bi{r}) = N_0\pi k_BT \\ \times\sum_{0<\omega_n \le \omega_c}\left\langle V(\bi{k}, \bi{k}') \left[\hat{f}(\bi{k}',\bi{r},\omega_n)+\underline{\hat{f}}^{\dagger }(\bi{k}',\bi{r},\omega_n)\right]\right\rangle_{\bi{k}'}, \label{order parameter} \end{multline} where $N_0$ is the DOS in the normal state, $\omega_c$ is the cutoff energy, set to $\omega_c=20\pi k_B T_c$, with the transition temperature $T_c$, and $\langle\cdots\rangle_{\bi{k}}$ indicates the Fermi surface average. We neglect the splitting of $T_c$; this approximation is appropriate at low temperatures even in the B phase. The pairing interaction is $V(\bi{k}, \bi{k}')=g\phi(\bi{k})\phi^*(\bi{k}')$, where $g$ is a coupling constant.
The pairing functions $\phi(\bi{k})$ and $\phi_i(\bi{k})$ are chosen for each phase in UPt$_3$ appropriately. In our calculation, we use the relation \begin{align} \frac{1}{gN_0}=\ln \frac{T}{T_c}+2\pi k_BT\sum_{0<\omega_n \le \omega_c}\frac{1}{\omega_n}. \end{align} The vector potential for the internal magnetic field $\bi{A}(\bi{r})$ is also self-consistently determined by \begin{align} &\nabla\times[\nabla\times\bi{A}(\bi{r})]\nn\\ =&8\pi\frac{e}{c}N_02\pi k_BT\sum_{0<\omega_n\le\omega_c} \langle \bi{v}(\bi{k}) \ {\rm Im} \left[ g_0(\bi{k},\bi{r}, \omega_n) \right] \rangle_{\bi{k}}, \label{vector potential} \end{align} where $g_0$ is a component of the quasi-classical Green's function $\hat{g}$ in spin space, namely, \begin{align} \hat{g} = \begin{pmatrix} g_0+g_z & g_x-ig_y \\ g_x+ig_y & g_0-g_z \end{pmatrix}. \nn \end{align} We solve eq.~\eqref{Eilenberger eq} and eqs.~\eqref{order parameter} and \eqref{vector potential} alternately, and obtain self-consistent solutions, under a given unit cell of the vortex lattice. The unit cell is divided into $41\times 41$ mesh points, where we obtain the quasi-classical Green's functions, $\Delta_i(\bi{r})$, and $\bi{A}(\bi{r})$. When we solve eq.~\eqref{Eilenberger eq} by the Riccati method~\cite{nagato:1993,schopohl:1995}, we estimate $\Delta_i(\bi{r})$, and $\bi{A}(\bi{r})$ at arbitrary positions by the interpolation from their values at the mesh points, and by the periodic boundary condition of the unit cell including the phase factor due to the magnetic field~\cite{ichioka:1997,ichioka:1999a,ichioka:1999b}. In the numerical calculation, we use the units $R_0=\hbar v_F/(2\pi k_BT_c)$, $B_0=\hbar c/(2|e|R_0^2)$, and $E_0=\pi k_BT_c$ for the length, magnetic field, and energy, respectively. 
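As a sanity check on the cutoff-regularized coupling relation above (an illustrative sketch, not part of the original formulation), the right-hand side should be essentially independent of $T$ once the Matsubara sum is restricted to $0<\omega_n\le\omega_c$ with $\omega_c=20\pi k_BT_c$:

```python
import math

def inv_gN0(t):
    """Right-hand side of the coupling relation,
    ln(T/Tc) + 2*pi*kB*T * sum_{0<omega_n<=omega_c} 1/omega_n,
    with omega_n = (2n+1)*pi*kB*T and omega_c = 20*pi*kB*Tc; t = T/Tc."""
    s = 0.0
    n = 0
    while (2 * n + 1) * t <= 20.0:   # omega_n <= omega_c
        s += 1.0 / (2 * n + 1)       # 2*pi*kB*T / omega_n = 2/(2n+1)
        n += 1
    return math.log(t) + 2.0 * s

# For a fixed coupling constant g, 1/(g N_0) must not depend on T:
print(inv_gN0(1.0), inv_gN0(0.5), inv_gN0(0.25))
```

The three values agree to within about $10^{-3}$, the residual difference being the discretization error of the sharp Matsubara cutoff.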
By the dimensionless expression, eq.~\eqref{vector potential} is rewritten as \begin{align} &\frac{R_0}{B_0}\nabla\times[\nabla\times\bi{A}(\bi{r})]\nn\\ =&-\frac{1}{\kappa^2}\frac{2T}{T_c}\sum_{0<\omega_n\le\omega_c} \langle \bi{k} \ {\rm Im} \left[ g_0(\bi{k},\bi{r}, \omega_n) \right] \rangle_{\bi{k}}, \end{align} where $\kappa=B_0/(E_0\sqrt{8\pi N_0})=\sqrt{7\zeta(3)/18}\kappa_{\rm GL}$. We use a large GL parameter $\kappa_{\rm GL}=60$ owing to UPt$_3$. By using the self-consistent solutions, free energy density is calculated using Luttinger-Ward thermodynamic potential~\cite{luttinger:1960} as \begin{widetext} \begin{align} \delta\Omega=&N_0\frac{1}{gN_0}\left\langle\left\langle\frac{1}{2}{\rm Tr}\hat{\Delta }(\bi{k},\bi{r})\hat{\Delta }^{\dagger }(\bi{k},\bi{r})\right\rangle_{\bi{k}}\right\rangle_{\bi{r}} +N_0E_0^2\kappa^2\left\langle\left(\frac{\nabla\times\bi{A}(\bi{r})-\bar{\bi{B}}}{B_0}\right)^2\right\rangle_{\bi{r}}\nn\\ -&N_0\int_0^1d\lambda\left\langle\pi k_BT\sum_{0<\omega_n\le\omega_c}\left\langle{\rm Re}\left[{\rm Tr}\left\{\hat{\Delta }^{\dagger }(\bi{k},\bi{r})\left(\hat{f}_{\lambda }(\bi{k},\bi{r},\omega_n)\!+\!\underline{\hat{f}}_{\lambda }^{\dagger }(\bi{k},\bi{r},\omega_n)\right)\right\}\right]\right\rangle_{\bi{k}}\right\rangle_{\bi{r}}, \label{Luttinger-Ward} \end{align} \end{widetext} where $\langle\cdots\rangle_{\bi{r}}$ indicates the spatial average. The auxiliary functions $\hat{f}_{\lambda }$ and $\underline{\hat{f}}_{\lambda }$ are obtained by the substitution of $\lambda\hat{\Delta }$ for $\hat{\Delta }$ in eq.~\eqref{Eilenberger eq}. This thermodynamic potential is relevant under large GL parameters and low magnetic fields because the replacement of the vector potential is not carried out. 
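The numerical prefactor relating $\kappa$ to $\kappa_{\rm GL}$ can be checked directly (a minimal sketch; $\zeta(3)$ is evaluated by brute-force summation):

```python
# Verify kappa = sqrt(7*zeta(3)/18) * kappa_GL for kappa_GL = 60.
zeta3 = sum(1.0 / n**3 for n in range(1, 200000))  # zeta(3) ~= 1.2020569
factor = (7.0 * zeta3 / 18.0) ** 0.5               # ~= 0.684
kappa = factor * 60.0                              # ~= 41
print(factor, kappa)
```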
DOS for the energy $E$ is given by \begin{align} \bar{N}(E)=&\langle N(\bi{r},E)\rangle_{\bi{r}}\nn\\ =&\left\langle N_0 \left\langle {\rm Re} \left[g_0(\bi{k},\bi{r}, \omega_n)|_{i\omega_n \rightarrow E+i\eta}\right] \right\rangle_{\bi{k}}\right\rangle_{\bi{r}}, \end{align} where $\eta$ is a positive infinitesimal constant and $N(\bi{r},E)$ is the local density of states (LDOS). Typically, we use $\eta=0.01\pi k_BT_c$. To obtain $g_0(\bi{k},\bi{r}, \omega_n)|_{i\omega_n \rightarrow E+i\eta}$, we solve eq.~\eqref{Eilenberger eq} with $\eta -iE$ instead of $\omega_n$ under the pair potential and vector potential obtained self-consistently. \section{B Phase} In the B phase, we take the pairing functions as $\phi=\sqrt{21/8}(k_a+k_b)(5k_c^2-1)$, $\phi_b=\sqrt{21/8}k_a(5k_c^2-1)$, and $\phi_a=\phi_c=\sqrt{21/8}k_b(5k_c^2-1)$, where one component of the $d$-vector is directed toward the $b$-axis and the other can rotate in the $ac$-plane. \subsection{Double-core vortex lattice} The pairing function in the B phase is similar to that in the superfluid $^3$He B-phase, namely, $\bi{d}(\bi{k})\propto\bi{x}k_x+\bi{y}k_y+\bi{z}k_z$~\cite{vollhardt:book}. Owing to the analogy with the $^3$He B-phase, there is the possibility that the unconventional double-core vortex~\cite{thuneberg:1986b,thuneberg:1987} and $v$ vortex~\cite{salomaa:1983,salomaa:1985b} with a chiral core are stabilized against the conventional singular vortex. In fact, double-core vortex and $v$ vortex are stabilized in the low- and high-pressure regions, respectively, in the $^3$He B-phase~\cite{thuneberg:1986b,thuneberg:1987}. Under our pairing function in the UPt$_3$ B phase, the $v$ (chiral core) vortex lattice is not stabilized self-consistently; however, there are two types of self-consistent vortex lattice, namely, the hexagonal singular vortex lattice and the double-core vortex lattice. 
At $T=0.2T_c$ and $\bar{B}=0.05B_0$ along the $c$-axis, the spatial variation of the pair potential amplitude is shown in Fig.~\ref{OP-B}(a) for the singular vortex lattice and in Figs.~\ref{OP-B}(b)-\ref{OP-B}(d) for the double-core vortex lattice, where the total pair potential is defined by $|\Delta(\bi{r})|\equiv\sqrt{\langle{\rm Tr}[\hat{\Delta }(\bi{k},\bi{r})^{\dagger }\hat{\Delta }(\bi{k},\bi{r})]/2\rangle_{\bi{k}}}$. Since the pair potential is axisymmetric about the $c$-axis in the B phase, conventional singular vortices form a perfect hexagonal lattice [Fig.~\ref{OP-B}(a)]. By contrast, a double-core vortex spontaneously breaks the axisymmetry. A schematic structure of the double-core vortex in terms of the $d$-vector is shown in Fig.~\ref{OP-B}(e). The OP in the bulk is depicted by the blue (black) and red (gray) arrows, which indicate the components of the $d$-vector with the orbital states $\lambda_a=\sqrt{21/8}k_a(5k_c^2-1)$ and $\lambda_b=\sqrt{21/8}k_b(5k_c^2-1)$, respectively. Along the $b$-axis across the vortex center, the red (gray) arrow rotates in the $ac$-plane from the $c$-direction far from the vortex to the $-c$-direction on the opposite side of the vortex via the $a$-direction at the vortex center. On the other hand, the blue (black) arrow directed toward the $b$-direction shortens as it approaches the vortex center and finally vanishes at the vortex center; then, across the vortex center, the arrow lengthens toward the $-b$-direction up to the initial length. Thus, since the $d$-vector can be modulated continuously across the vortex center, there is no singularity at which the total amplitude vanishes. Instead of a singularity, a double core with a small amplitude $\approx 0.6E_0$ exists, as shown by the contour lines in Fig.~\ref{OP-B}(b). This double core has a phase winding of $\pi$, the same as that in the half-quantum vortex~\cite{volovik:1976}. Owing to the spontaneously broken axisymmetry, the double-core vortex lattice is distorted.
The stable ratio between the height and base of the triangular lattice is $a_y/a_x=\sqrt{3}/2.4$, namely, a base angle $\alpha\equiv\tan^{-1}(2a_y/a_x)\approx 55^{\circ }$. Each component of the double-core vortex lattice is also shown in Figs.~\ref{OP-B}(c) and \ref{OP-B}(d) for the amplitude of the bulk components $|\Delta_b(\bi{r})|=|\Delta_c(\bi{r})|$ and that of the compensating component at the vortex cores $|\Delta_a(\bi{r})|$, respectively. The vortex core in the bulk component is slightly elliptic with a line of apsides along the $a$-axis and the compensating component is enlarged along the $b$-axis. Since the vortex lattice tends to prevent the overlap of the vortex cores and that of the compensating component, the stable structure is fixed by the competition between them. At low temperatures and low magnetic fields, the double-core vortex is more stable than the singular vortex. At high temperatures, however, the double-core vortex is unstable against the singular vortex because the compensating component tends to connect with the neighbor vortices along the $b$-axis by the extension of the coherence length. Similarly, at high magnetic fields, since the distance between the vortex centers becomes shorter, the double-core vortex is unstable. The Pauli-paramagnetic effect, which rotates the $d$-vector under $H>H_{\rm rot}$, is neglected in this calculation. Since the compensating component of the $d$-vector for $\bi{b}\lambda_a+\bi{a}\lambda_b$ has to be directed toward the $c$-axis, the double-core vortex is unstable at high magnetic fields $H>H_{\rm rot}$, also by the Pauli-paramagnetic effect. By the measurement of small-angle neutron scattering (SANS), a perfect hexagonal lattice with $\alpha=60^{\circ }$ is observed in the B phase~\cite{huxley:2000}. Thus, the observed vortices are conventional singular vortices. 
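The base angles quoted here and in \S V all follow from the definition $\alpha=\tan^{-1}(2a_y/a_x)$; a minimal arithmetic cross-check (illustrative only, not part of the calculation):

```python
import math

def base_angle(ratio):
    """Base angle alpha = atan(2*a_y/a_x) of the isosceles triangular
    vortex lattice, in degrees, for a given aspect ratio a_y/a_x."""
    return math.degrees(math.atan(2.0 * ratio))

r3 = math.sqrt(3.0)
print(base_angle(r3 / 2))               # 60 deg: regular hexagonal lattice
print(base_angle(r3 / 2.4))             # ~55 deg: double-core lattice (B phase)
print(base_angle(0.6 * r3))             # ~64 deg: C phase, low field and temperature
print(base_angle(r3))                   # ~74 deg: C phase, high field
print(base_angle(math.sqrt(17.0) / 2))  # ~76 deg: maximal distortion
```

The hexagonal value $60^{\circ}$ corresponds to the undistorted ratio $a_y/a_x=\sqrt{3}/2$ seen by SANS.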
Since this experiment is carried out at a magnetic field $H\approx H_{\rm rot}$, the double-core vortex may be unstable, by the Pauli-paramagnetic effect. \begin{figure} \begin{center} \includegraphics[width=8.5cm]{65268Fig3.eps} \end{center} \caption{(Color online) Spatial variations in the pair potential at $T=0.2T_c$ and $\bar{B}=0.05B_0$ for the hexagonal singular vortex lattice (a) and the double-core vortex lattice with $\alpha\approx 55^{\circ }$ (b)-(d). (a) Amplitude of the total pair potential $|\Delta(\bi{r})|$, (b) amplitude of the total pair potential $|\Delta(\bi{r})|$ with the contour lines on $0.75E_0$ (solid lines) and $0.85E_0$ (dashed lines), (c) amplitude of the bulk components $|\Delta_b(\bi{r})|=|\Delta_c(\bi{r})|$, (d) amplitude of the compensating component at the vortex cores $|\Delta_a(\bi{r})|$, and (e) schematic spin structure of the double-core vortex around the core. } \label{OP-B} \end{figure} \subsection{Local density of states} There are clear differences in the LDOS between the double-core vortex lattice and the singular vortex lattice, which can be directly measured by scanning tunneling microscopy/spectroscopy (STM/STS). The LDOS's for the double-core vortex lattice and the singular vortex lattice are shown in Figs.~\ref{LDOS-d} and \ref{LDOS-s}, respectively. The zero-energy peak is expanded to the region between the double core [Fig.~\ref{LDOS-d}(a)] because the local OP $\bi{a}\lambda_b$ in this region has a line node in the $ac$-plane. Besides, there is an elliptic peak extending toward the $a$-axis in the LDOS at $E=0.1E_0$ [Fig.~\ref{LDOS-d}(b)]. By contrast, the singular vortex has an isotropic peak in the LDOS at $E=0$ and $E=0.1E_0$ [Figs.~\ref{LDOS-s}(a) and \ref{LDOS-s}(b)]. The spectral evolutions of the LDOS near the vortex are also different between the double-core vortex lattice and the singular vortex lattice. 
Near the double-core vortex center, there is a sharp low-energy peak along the $a$-axis [Figs.~\ref{LDOS-d}(c) and \ref{LDOS-d}(e)], whereas along the $b$-axis the peak is somewhat rounded [Figs.~\ref{LDOS-d}(d) and \ref{LDOS-d}(f)]. In the double-core case, the two OP minima are situated just outside the center along the $a$-axis [see Fig.~\ref{OP-B}(b)], giving rise to sharp peaks at a finite energy, as shown in Fig.~\ref{LDOS-d}(e). On the other hand, the zero-energy peak becomes a bump away from the vortex core for the singular vortex [Figs.~\ref{LDOS-s}(c)-\ref{LDOS-s}(f)]. Note that, in the bulk region away from the vortex core, the DOS $\propto|E|^2$ at a low energy, as shown in Figs.~\ref{LDOS-d}(d) and \ref{LDOS-s}(d), because the gap structure is characterized by two point nodes and two line nodes. This is also shown in Fig.~\ref{bulkDOS}. \begin{figure} \begin{center} \includegraphics[width=8cm]{65268Fig4.eps} \end{center} \caption{(Color online) LDOS at $T=0.2T_c$ and $\bar{B}=0.05B_0$ for the double-core vortex lattice. Spatial variations in the LDOS at $E=0$ (a) and $E=0.1E_0$ (b). Spectral evolutions of the LDOS from the vortex center (top) along the $a$-axis (c) and $b$-axis (d). Details of these evolutions near the vortex center are shown in (e) and (f), respectively. } \label{LDOS-d} \end{figure} \begin{figure} \begin{center} \includegraphics[width=7cm]{65268Fig5.eps} \end{center} \caption{(Color online) LDOS at $T=0.2T_c$ and $\bar{B}=0.05B_0$ for the singular vortex lattice. Spatial variations in the LDOS at $E=0$ (a) and $E=0.1E_0$ (b). Spectral evolutions of the LDOS from the vortex core (top) along the $a$-axis (c) and $b$-axis (d). Details of these evolutions near the vortex core are shown in (e) and (f), respectively. } \label{LDOS-s} \end{figure} \subsection{NMR spectrum} The double-core vortex lattice can also be detected by NMR measurements.
In the NMR experiment, the resonance frequency spectrum of the nuclear spin resonance is determined by the internal magnetic field. The distribution function is given by \begin{align} P(B)=\int\delta[B-B_c(\bi{r})]d\bi{r}, \end{align} i.e., by volume counting for $B$ in a unit cell. This resonance line shape is called the ``Redfield pattern'' of the vortex lattice. In Fig.~\ref{NMR-B}, we show the distribution functions $P(B)$ for the singular vortex lattice (dashed line) and the double-core vortex lattice (solid line). The distribution function for the singular vortex lattice has a single peak at $B=0.049969B_0$. This peak comes from outside the vortex core, as shown by the contour lines in the left inset of Fig.~\ref{NMR-B}. By contrast, the distribution function for the double-core vortex lattice has double peaks at $B=0.049962B_0$ and $B=0.049975B_0$. The peaks at the low and high fields come from outside the vortex (solid line) and from around the vortex (dashed line), respectively, as shown by the contour lines in the right inset of Fig.~\ref{NMR-B}. The distortion of the double-core vortex lattice thus gives a clear difference in the NMR spectrum. \begin{figure} \begin{center} \includegraphics[width=8.5cm]{65268Fig6.eps} \end{center} \caption{(Color online) Distribution functions of the internal magnetic field $P(B)$ at $T=0.2T_c$ and $\bar{B}=0.05B_0$ for the singular vortex lattice (dashed line) and double-core vortex lattice (solid line). The height of $P(B)$ is scaled so that $\int P(B)dB=1$. Inset: Spatial variations in the internal magnetic field $B_c(\bi{r})$ for the singular vortex lattice (left) and double-core vortex lattice (right) with the contour lines on the magnetic field situated at the peaks of $P(B)$. } \label{NMR-B} \end{figure} \section{C and A Phases} In the C phase, we take the pairing functions as $\phi=\phi_a=\sqrt{21/8}k_b(5k_c^2-1)$ and $\phi_b=\phi_c=0$, where the pair potential has one spin component.
Note that the A phase is the same as the C phase except for the exchange between the $a$- and $b$-axes. \subsection{Morphology of vortex lattice} In this section, we discuss the deformation of the vortex lattice in the C phase under $H\parallel c$ to determine the effects of the twofold gap function $\bi{d}(\bi{k})\propto\bi{a}k_b(5k_c^2-1)$. Since the vortex cores are extended along the antinodal $b$-direction, the height of the triangular lattice is enlarged along the $b$-direction to avoid the overlap of the vortex cores. This variation of the vortex lattice is also the same for the A phase by rotating it in the $ab$-plane. To find a stable vortex lattice, we compare the free energies among the triangular lattices with various ratios of the height to the base, namely, $a_y/a_x$. We show stable vortex lattices at a low magnetic field $\bar{B}=0.02B_0$ in Figs.~\ref{OP-C}(a)-\ref{OP-C}(c) and at a high magnetic field $\bar{B}=0.3B_0$ in Figs.~\ref{OP-C}(d)-\ref{OP-C}(f). These figures are also shown at different temperatures, that is, at a low temperature $T=0.2T_c$ in Figs.~\ref{OP-C}(a) and \ref{OP-C}(d), at an intermediate temperature $T=0.5T_c$ in Figs.~\ref{OP-C}(b) and \ref{OP-C}(e), and at a high temperature $T=0.7T_c$ in Figs.~\ref{OP-C}(c) and \ref{OP-C}(f). The triangular lattice is slightly distorted at a low magnetic field and a low temperature, as shown in Fig.~\ref{OP-C}(a). In this case, the ratio is $a_y/a_x=0.6\sqrt{3}$, that is, the base angle of the isosceles triangular lattice is $\alpha\equiv \tan^{-1}(2a_y/a_x)\approx 64^{\circ }$. As temperature increases, the stable structures of the vortex lattice are $a_y/a_x=0.65\sqrt{3}$, namely, $\alpha\approx 66^{\circ }$ [Fig.~\ref{OP-C}(b)], and $a_y/a_x=0.8\sqrt{3}$, namely, $\alpha\approx 70^{\circ }$ [Fig.~\ref{OP-C}(c)]. The vortex lattice is distorted markedly at a high magnetic field. 
At a low temperature, the stable structure is $a_y/a_x=0.95\sqrt{3}$, namely, $\alpha\approx 73^{\circ }$, as shown in Fig.~\ref{OP-C}(d). Physically, the maximally distorted triangular lattice is $a_y/a_x=\sqrt{17}/2$, namely, $\alpha\approx 76^{\circ }$, beyond which some of the nearest neighbors are no longer nearest. Since the distortion of the triangular lattice is near the limit even at a low temperature, the vortex lattice is hardly distorted by the increase in temperature, as shown in Figs.~\ref{OP-C}(e) and \ref{OP-C}(f), where $a_y/a_x=\sqrt{3}$, namely, $\alpha\approx 74^{\circ }$. Thus, the vortex lattice tends to be more distorted to prevent the overlap of the vortex cores at high magnetic fields and at high temperatures because the ratio of the radius of the vortex core proportional to the coherence length $\xi\propto(T_c-T)^{-1/2}$ to the distance between the vortices proportional to $\bar{B}^{-1/2}$ increases. The deformation of the vortex lattice is summarized in Fig.~\ref{OP-C}(g). A regular hexagonal vortex lattice was observed in the A phase by the measurement of the SANS~\cite{huxley:2000}. This result is discussed for the $E_{1g}$ and $E_{2u}$ models theoretically~\cite{champel:2001,agterberg:2002}. In this experiment, however, since the vortex lattice in the A phase is observed at a low temperature in which the B phase appears with rapid cooling, the observed vortex lattice may change to the hexagonal singular vortex lattice in the B phase mentioned before. \begin{figure} \begin{center} \includegraphics[width=7cm]{65268Fig7.eps} \end{center} \caption{(Color online) (a)-(f) Spatial variations in the stable pair potential amplitude $|\Delta_a(\bi{r})|$. 
(a) $\alpha\approx 64^{\circ }$ at $\bar{B}=0.02B_0$ and $T=0.2T_c$, (b) $\alpha\approx 66^{\circ }$ at $\bar{B}=0.02B_0$ and $T=0.5T_c$, (c) $\alpha\approx 70^{\circ }$ at $\bar{B}=0.02B_0$ and $T=0.7T_c$, (d) $\alpha\approx 73^{\circ }$ at $\bar{B}=0.3B_0$ and $T=0.2T_c$, (e) $\alpha\approx 74^{\circ }$ at $\bar{B}=0.3B_0$ and $T=0.5T_c$, and (f) $\alpha\approx 74^{\circ }$ at $\bar{B}=0.3B_0$ and $T=0.7T_c$. (g) Temperature dependences of the base angle $\alpha$ at $\bar{B}=0.02B_0$ (solid circles) and $\bar{B}=0.3B_0$ (open circles). } \label{OP-C} \end{figure} \subsection{Local density of states} The twofold symmetry of the gap function is also revealed in the LDOS, shown in Fig.~\ref{LDOS-C}. The zero-energy LDOS [Fig.~\ref{LDOS-C}(a)] is well connected between nearest-neighbor vortices along the $a$-axis, which is the nodal direction. The LDOS at $E=0.1E_0$ [Fig.~\ref{LDOS-C}(b)] is also well connected between nearest-neighbor vortices; moreover, it has two peaks within a unit cell aligned along the $b$-axis. The spectral evolution of the LDOS along the $a$-axis [Figs.~\ref{LDOS-C}(c) and \ref{LDOS-C}(e)] forms a peak structure at a low energy near the vortex core. A few $R_0\sim\xi$ away from the vortex core, the spectra still rise from zero energy so as to form a low-energy peak. Along the $b$-axis, on the other hand, the LDOS has a rounded bump near the vortex core [Figs.~\ref{LDOS-C}(d) and \ref{LDOS-C}(f)]. A few $R_0$ away from the vortex core, the low-energy spectra are almost flat. Far from the vortex core, the spectra converge to the same structure at the midpoint between the vortex cores. Note that in the bulk region away from the vortex core, the DOS $\propto|E|$ at a low energy, as shown in Fig.~\ref{LDOS-C}(d), because the gap structure is characterized by one vertical and two horizontal line nodes. This is also shown in Fig.~\ref{bulkDOS}.
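The linear low-energy behavior is the standard result for line nodes; schematically (a textbook-level estimate of ours, not taken from the numerics),
\begin{equation*}
\frac{N(E)}{N_0}
=\left\langle \mathrm{Re}\,\frac{|E|}{\sqrt{E^2-|\Delta(\bi{k})|^2}}\right\rangle_{\mathrm{FS}}
\propto \frac{|E|}{\Delta_0}\qquad (|E|\ll\Delta_0),
\end{equation*}
since the gap vanishes linearly with the distance from each line node on the Fermi surface, so the gapless phase space with $|\Delta(\bi{k})|<|E|$ grows linearly in $|E|$.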
\begin{figure} \begin{center} \includegraphics[width=7cm]{65268Fig8.eps} \end{center} \caption{(Color online) LDOS at $T=0.2T_c$ and $\bar{B}=0.02B_0$ in the C phase. Spatial variations in the LDOS at $E=0$ (a) and $E=0.1E_0$ (b). Spectral evolutions of the LDOS from the vortex core (top) along the $a$-axis (c) and $b$-axis (d). Details of these evolutions near the vortex core are shown in (e) and (f), respectively. } \label{LDOS-C} \end{figure} \subsection{Field-angle-resolved zero-energy density of states} We analyze the field-angle-resolved thermal conductivity experiment~\cite{machida:2012} in light of the identified pairing function. It is known that thermal conductivity depends on carrier density and scattering rate, both of which are angle-dependent. In this experiment, the temperature dependence of thermal conductivity obeys the Wiedemann-Franz law at low temperatures, implying that QPs play the dominant role in thermal transport. The most significant effect on the thermal transport in the vortex state comes from the Doppler shift of the QP energy spectrum, $E(\bi{p})\rightarrow E(\bi{p})-\bi{v}_s\cdot\bi{p}$, in the circulating supercurrent flow $\bi{v}_s$. This effect becomes important at positions where the gap becomes smaller than the Doppler shift term ($|\Delta|<\bi{v}_s\cdot\bi{p}$). Thus, we analyze the experimental data~\cite{machida:2012} in terms of the field-angle-resolved zero-energy DOS. Since the magnitude of the Doppler shift strongly depends on the angle between the nodal direction and the magnetic field, an oscillation of the zero-energy DOS occurs. Consequently, thermal conductivity attains its maximum (minimum) when the magnetic field is directed to the antinodal (nodal) directions~\cite{vekhter:1999,miranovic:2005,sakakibara:2007}. In this experiment, however, since the heat current is injected along the $c$-axis, thermal conductivity cannot be compared with the zero-energy DOS directly.
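The Doppler-shift argument can be phrased as a Volovik-type estimate (our schematic form):
\begin{equation*}
E(\bi{p})=\sqrt{\varepsilon(\bi{p})^2+|\Delta(\bi{p})|^2}-\bi{v}_s\cdot\bi{p}\le 0
\quad\Longleftrightarrow\quad
|\Delta(\bi{p})|\le\bi{v}_s\cdot\bi{p}\ \ \text{at}\ \ \varepsilon(\bi{p})=0,
\end{equation*}
so zero-energy QP states appear on the portions of the Fermi surface, near the nodes, where the local superflow exceeds the gap; averaging this gapless phase space over the Fermi surface and the unit cell gives the field-angle dependence of the zero-energy DOS.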
We therefore compare the differences between field rotations along the vertical line node and along the antinode in the C phase. In this section, we assume the regular hexagonal vortex lattice $a_y/a_x=\sqrt{3}/2$ or $1/(2\sqrt{3})$. The stable orientation of the vortex lattice is determined by comparing the free energy calculated using eq.~\eqref{Luttinger-Ward}. At $T=0.2T_c$ and $\bar{B}=0.05B_0$, the spatial variations of the pair potential amplitude and the zero-energy LDOS are shown in the left and middle panels of Figs.~\ref{OP-LDOS-Brot}(a)-\ref{OP-LDOS-Brot}(c), respectively. When the magnetic field is directed to the $c$-axis [Fig.~\ref{OP-LDOS-Brot}(a)] or $b$-axis [Fig.~\ref{OP-LDOS-Brot}(b)], elliptic vortex cores shrink along the vertical ($H\parallel c$) or tropical ($H\parallel b$) nodal directions. Under these magnetic fields, the zero-energy LDOS is mainly connected between nearest-neighbor vortices along the $a$-axis. On the other hand, under $H\parallel a$ [Fig.~\ref{OP-LDOS-Brot}(c)], since the vertical node and tropical nodes have similar contributions to the QPs, vortex cores are hexagonal and the zero-energy LDOS is well connected among all nearest-neighbor vortices, giving rise to a rather round core profile in Fig.~\ref{OP-LDOS-Brot}(c). \begin{figure} \begin{center} \includegraphics[width=8.5cm]{65268Fig9.eps} \end{center} \caption{(Color online) Spatial variations in the pair potential amplitude $|\Delta_a(\bi{r})|$ and zero-energy LDOS $N(\bi{r},E=0)$ at $T=0.2T_c$ and $\bar{B}=0.05B_0$. The magnetic fields are directed to the $c$-axis (a), $b$-axis (b), and $a$-axis (c). Left (middle) panels show the OP amplitude (zero-energy LDOS). Right panels show a schematic view of the line nodes seen along the field direction. } \label{OP-LDOS-Brot} \end{figure} By taking the spatial average of the zero-energy LDOS under various field directions, the field-angle-resolved zero-energy DOS is obtained, as shown in Fig.~\ref{DOS-Brot}(a).
When the field direction is rotated along the vertical line node from the $c$-axis (open circles), the zero-energy DOS is reduced because the number of low-energy excitations from the tropical line nodes decreases. Within $45^{\circ }<\theta<60^{\circ }$ and $120^{\circ }<\theta<135^{\circ }$, the orientation of the vortex lattice changes, that is, nearest-neighbor vortices are aligned along the $b$-axis in $60^{\circ }\le\theta\le 120^{\circ }$ and next-nearest-neighbor vortices are aligned along the $b$-axis in $\theta\le 45^{\circ },135^{\circ }\le\theta$. By contrast, when the magnetic field is rotated along the antinodal direction (solid circles), the zero-energy DOS is almost constant because the QPs mainly come from the tropical line nodes under $H\parallel c$ and from the vertical line node under $H\parallel b$. The difference in the zero-energy DOS between the fields along the vertical line node and antinode is maximum at the equator $\theta=90^{\circ }$ because horizontal line nodes are situated in the tropics. This $\theta$ dependence of the difference in the zero-energy DOS is consistent with the measurement of thermal conductivity~\cite{machida:2012} shown in Fig.~\ref{DOS-Brot}(b). The earlier spin singlet $d$-wave $E_{1g}$ model ($\phi=\sqrt{15}k_bk_c$) and spin triplet $f$-wave $E_{2u}$ model ($\phi=\phi_c=\sqrt{105}k_ak_bk_c,\ \phi_a=\phi_b=0$)~\cite{joynt:2002}, however, have two peaks for the difference in the zero-energy DOS at $\theta\ne 90^{\circ }$ caused by the equatorial line node. \begin{figure} \begin{center} \includegraphics[width=7.5cm]{65268Fig10.eps} \end{center} \caption{(Color online) (a) Field-angle-resolved zero-energy DOS at $T=0.2T_c$ and $\bar{B}=0.05B_0$. The field directions are rotated along the vertical line node (open circles) and antinode (solid circles), as schematically shown in the inset. 
(b) $\theta$ dependence of the thermal conductivity normalized by the normal state value $\kappa_n$ (left axis) and the DOS differences normalized at $\theta=90^{\circ }$ (right axis) along the vertical nodal and antinodal scannings for three possible gap functions in the C phase: 1. The present $E_{1u}$ $(k_b(5k_c^2-1))$, 2. $E_{1g}$ $(k_bk_c)$, 3. $E_{2u}$ $(k_ak_bk_c)$. The gap structures are sketched in the inset. The experimental data are cited from ref.~\cite{machida:2012}. } \label{DOS-Brot} \end{figure} \section{Conclusions and Perspectives} \begin{table*}[t] \begin{center} \caption{Candidate pair functions.} \begin{tabular}{cccccc} \hline\hline irr.~rep. & basis & Knight shift & point + line & twofold in C & gradient coupling \\\hline $E_{1g}$ & $k_z(k_x,k_y)$ & $\times$ & $\bigcirc$ & $\bigcirc$ & $\times$ \\ $E_{2g}$ & $(k_x^2-k_y^2,k_xk_y)$ & $\times$ & $\times$ & $\times$ & $\triangle$ \\ $E_{1u}^p$ & $\bi{z}(k_x,k_y)$ & $\triangle$ & $\times$ & $\bigcirc$ & $\times$ \\ $E_{2u}$ & $\bi{z}(k_x^2-k_y^2,k_xk_y)k_z$ & $\triangle$ & $\bigcirc$ & $\times$ & $\triangle$ \\ $E_{1u}^f$ & $(\bi{x}k_y,\bi{y}k_x)(5k_z^2-1)$ & $\bigcirc$ & $\bigcirc$ & $\bigcirc$ & $\bigcirc$ \\ \hline\hline \end{tabular} \end{center} \end{table*} In this study, we have classified the possible pairing functions, which belong to the $f$-wave state with a triplet channel, under the given crystalline symmetry $D_{6h}$ for the heavy-fermion superconductor UPt$_3$. Then we identified a planar spin triplet state among them that best fits the existing experiments, in particular the following: (A) Various bulk thermodynamic measurements that indicate the line node(s) and point node(s) in the B phase.
(B) A Knight shift experiment that shows the two field directions where the Knight shift decreases below $T_c$, implying that the $d$-vector contains the $\bi{b}$-component and $\bi{c}$-component at lower fields and that the $d$-vector rotates from $\bi{c}$ to $\bi{a}$ at $H_{\rm rot}$ $(\parallel c)\sim 2$ kG. (C) An angle-resolved thermal conductivity measurement that shows a twofold gap structure in the C phase and a rotational symmetry in the B phase. These important experimental results above are all explained by the planar state with the $f$-wave channel, namely, $(\bi{c}k_b+\bi{b}k_a)(5k_c^2-1)$. In order to check our proposed state, we made several predictions that are calculated by solving the microscopic Eilenberger equation with our planar state. The predictions include the following: (1) The vortex structures in the C and A phases exhibit a strongly distorted triangular lattice that varies as functions of $T$ and $H$ when $H\parallel c$. This distortion is caused by the twofold gap structures in the A and C phases. The vortex morphology should be observed by SANS experiment. (2) Although the vortices in the A and C phases are all singular, that is, the OP vanishes at the core because of the single-component OP, in the B phase, the vortex is nonsingular, characterized by a double-core structure. To check this complex vortex structure, we provide several signatures, such as the magnetic field distribution probed as the resonance spectrum of NMR and the LDOS around a vortex core probed by STM/STS. \subsection{Comparison with other proposed states} There are several proposals made to identify the pairing symmetries in UPt$_3$. $E_{1g}$ and $E_{2g}$ are singlet and unable to explain the Knight shift experiment mentioned above. $E^p_{1u}$ with the $p$-wave character is not accepted because there is no line node whose existence is firmly established through various thermodynamic experiments. 
$E_{2u}$, which was regarded as the most promising candidate, contradicts the observed twofold gap structure in the C phase. Table II shows a summary of the present status of the various candidates. \subsection{Remaining issues} There remain several issues to be resolved. (1) $d$-Vector rotation \noindent In order to understand the $d$-vector rotation phenomenon at $H_{\rm rot}\sim 2$ kG for $H\parallel c$, we need to take into account the magnetic field energy due to the anisotropic susceptibility in the superconducting state. In the absence of this effect, the induced component $\bi{a}$ near the vortex core center in the double-core phase decreases as the vortex distance decreases. (2) Origin of symmetry breaking \noindent To split $T_c$ into $T_{c1}$ and $T_{c2}$, we need some symmetry-breaking field. A good candidate is the AF order at $T_N=5$ K observed by neutron scattering. This is a high-energy probe for catching the instantaneous correlation in a snapshot. Other low-energy probes, such as NMR and $\mu$SR, fail to observe the static AF order. A precise correlation between the AF order and the $T_c$ splitting under pressure is observed, with both disappearing simultaneously at $P\sim 3$ kbar~\cite{trappmann:1991}; this remains puzzling, although we previously presented a scenario attributing the splitting to AF fluctuations~\cite{machida:1996}. An alternative idea is to invoke the crystal symmetry lowering, which has also been reported previously~\cite{midgley:1993,elboussiri:1994,ellman:1995,ellman:cond}. (3) Pairing mechanism \noindent The dipole energy \begin{align} H_D\propto\left\langle 3|\bi{k}\cdot\bi{d}(\bi{k})|^2-|\bi{d}(\bi{k})|^2\right\rangle_{\bi{k}} \end{align} depends on the combination of the spin and orbital states~\cite{leggett:1975}. The most favorable combinations by the dipole energy in the $E_{1u}$ state are $\bi{a}k_b(5k_c^2-1)$ and $\bi{b}k_a(5k_c^2-1)$.
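This selection can be made explicit by a short angular average (our own check, using the in-plane averages $\langle\sin^2\varphi\cos^2\varphi\rangle_\varphi=1/8$ and $\langle\sin^4\varphi\rangle_\varphi=3/8$): for the orbital part $k_b(5k_c^2-1)$,
\begin{equation*}
\left\langle|\bi{k}\cdot\bi{d}|^2\right\rangle_{\bi{k}}\big|_{\bi{d}\parallel\bi{a}}
=\left\langle k_a^2k_b^2(5k_c^2-1)^2\right\rangle_{\bi{k}}
=\frac{1}{3}\left\langle k_b^4(5k_c^2-1)^2\right\rangle_{\bi{k}}
=\frac{1}{3}\left\langle|\bi{k}\cdot\bi{d}|^2\right\rangle_{\bi{k}}\big|_{\bi{d}\parallel\bi{b}}.
\end{equation*}
Since $\langle|\bi{d}|^2\rangle_{\bi{k}}$ is independent of the in-plane spin direction, the $3|\bi{k}\cdot\bi{d}|^2$ term, and hence $H_D$, is minimized when the spin direction is perpendicular to the orbital momentum, which selects the two combinations above.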
In the C phase, the spin state $\bi{b}$ is selected by the AF ordering and accompanies the orbital state $k_a(5k_c^2-1)$ through the dipole interaction. In the B phase, the remaining orbital state $k_b(5k_c^2-1)$ would have to appear with the spin state $\bi{a}$ to minimize the dipole energy. However, the pairing state actually realized in the B phase without a magnetic field is $(\bi{b}k_a+\bi{c}k_b)(5k_c^2-1)$. Thus, the combination of the spin and orbital states cannot be explained by the dipole energy alone. This special combination between the spin direction and the orbital form hints at the pairing mechanism. This is one main issue to be resolved in future works, now that the pairing symmetry has been determined in this paper. (4) Josephson junction experiment \noindent A Josephson interferometry experiment proposed a $\pi$ phase shift of the gap function in the B phase under a $90^{\circ }$ rotation about the $c$-axis~\cite{strand:2009}. Also, by measuring the critical current on Josephson tunnel junctions between the $a$- and $b$-axes, the nodal direction of the gap function in the C phase was proposed to lie at $45^{\circ }$ with respect to the $a$-axis~\cite{strand:2010}. Their conclusions are consistent not with our $E_{1u}$ model but with the $E_{2u}$ model. (5) Topological aspect \noindent The identified pairing state $(\bi{c}k_b+\bi{b}k_a)(5k_c^2-1)$ is analogous to the superfluid $^3$He B phase, described by $\bi{x}k_x+\bi{y}k_y+\bi{z}k_z$ in the bulk, or to the planar state $\bi{x}k_x+\bi{y}k_y$ realized in thin films. Our state has Majorana particles at the boundary as an Andreev bound state, even though line and point nodes exist in the bulk. This is interesting because Sato has recently argued the possibility of topological protection under a nodal gap~\cite{sato:private}. The topological nature is discussed in a similar situation in connection with the superfluid $^3$He A phase, where Majorana particles exist in a point-node gap~\cite{tsutsumi:2010b,tsutsumi:2011b}.
This topological aspect certainly deserves further investigation. Note that the double-core vortex does not contain the Majorana zero mode. \section*{Acknowledgments} We thank K.~Izawa, Y.~Machida, T.~Sakakibara, and S.~Kittaka for informative discussions on their experiments and M.~Ichioka for formulations and computational coding. This work was supported by JSPS, KAKENHI (No.~21340103). Y.T. acknowledges the financial support from the JSPS Research Fellowships for Young Scientists.
\section{Introduction} \IEEEPARstart{P}{erson} re-identification (Re-ID) is to match images of the same individual captured by different cameras with non-overlapping camera views~\cite{DBLP:journals/tmm/ChenLLCH11}. Broadly, person Re-ID can be treated as a special case of the image retrieval problem with the goal of querying from a large-scale gallery set to quickly and accurately find the images that match a probe image~\cite{DBLP:journals/tip/ZhangLZZZ15}\cite{DBLP:journals/tmm/Wang0TL13}. In recent years, person Re-ID has drawn increasing interest in both academia and industry due to its great potential in the video analysis and understanding community. Currently, most existing work for person Re-ID mainly focuses on the supervised~\cite{DBLP:conf/cvpr/XiaoLOW16,DBLP:conf/aaai/ChenCZH17,sun2018beyond,zheng2018re,zheng2019pose} and unsupervised~\cite{fan2018unsupervised,zhong2018generalizing,Bak_2018_ECCV,wang2018transferable,lv2018unsupervised} cases. Although supervised person Re-ID methods can achieve good performance on many public datasets, they need a large-scale labeled dataset to train models, especially the deep-learning-based methods. However, labeling data incurs a significant cost, especially when identities need to be matched across different camera views. Therefore, supervised methods usually do not scale well in real-world applications. On the other hand, unsupervised person Re-ID methods, due to the complete lack of label information to train sufficiently good models, still have a large performance gap with their supervised counterparts. In this paper, instead of purely focusing on either the supervised or the unsupervised setting, we propose an unsupervised cross-camera person Re-ID task (it is also called \textbf{\textit{semi-supervised person Re-ID}} in this paper), which only has the intra-camera (within-camera) labels but no inter-camera (cross-camera) label information.
Note that in practice, it is much easier to obtain the within-camera label information by employing tracking algorithms~\cite{dehghan2015gmmcp,maksai2017non} and conducting a small amount of manual labeling. Therefore, this work aims to well exploit this semi-supervised person Re-ID setting. In other words, we aim to utilize the intra-camera labeled data to effectively improve the performance of person Re-ID, thus substantially reducing the gap between the unsupervised Re-ID and the supervised ones. For this proposed semi-supervised person Re-ID setting, the main issue is the lack of cross-camera label information. This could result in poor generalization ability across camera views if merely using the within-camera labeled data to train models. In the literature, to deal with this problem, several unsupervised person Re-ID methods~\cite{fan2018unsupervised,zhong2018generalizing,Bak_2018_ECCV,wang2018transferable,lv2018unsupervised} generate pseudo-labels for unlabeled data. Moreover, in~\cite{DBLP:conf/bmvc/LinLLK18,DBLP:journals/corr/abs-1904-01308,qi2019novel}, other labeled datasets (e.g., source domains) are utilized to enhance the generalization ability of models in an unlabeled target domain. Differently, in our semi-supervised Re-ID case, we do not need any other labeled datasets to help to train the model. Instead, we focus on exploring the underlying information within the data via advanced techniques such as adversarial learning. In real-world scenarios, a person's appearance often varies greatly across camera views due to changes in body pose, view angle, occlusion, image resolution, illumination conditions, and background noises. These variations lead to the data distribution discrepancy across cameras. 
To better illustrate this situation, we use the ResNet50~\cite{DBLP:conf/cvpr/HeZRS16} pre-trained on ImageNet~\cite{DBLP:conf/cvpr/DengDSLL009} to extract features from raw images and employ the t-SNE~\cite{maaten2008visualizing} to visualize the data distribution on the benchmark datasets of Market1501 and DukeMTMC-reID in Fig.~\ref{fig1}. As seen, samples of different cameras usually reside in different regions, i.e., the involved cameras do not align with each other in this feature space. This is the key challenge to deal with in the proposed semi-supervised person Re-ID task. In particular, if all samples from different camera views could conform to the same data distribution in a shared feature space, we will be able to obtain sufficiently good performance by simply using the within-camera label information to train models. To deal with the above distribution discrepancy across camera views, we propose a novel Adversarial Camera Alignment Network (ACAN) for the proposed semi-supervised person re-identification task. Specifically, we develop a Multi-Camera Adversarial Learning (MCAL) method to project all data from different cameras into a common feature space. Then we utilize the intra-camera label information to conduct the discrimination task that aims to make the same identities closer and different identities farther. To carry out the multi-camera adversarial learning, we develop two different schemes to maximize the loss function of the discriminator (i.e., a classifier to discriminate data from different cameras) in the ACAN framework, which include the widely used gradient reversal layer (GRL) and a new one called ``other camera equiprobability'' (OCE) proposed by this work. In particular, we also give the theoretical analysis to show that cameras can be completely aligned when the discriminator predicts an equal probability for each camera class. 
Extensive experiments on five large-scale datasets including three image datasets and two video datasets well validate the superiority of the proposed work when compared with state-of-the-art unsupervised Re-ID methods that utilize labeled source domains and images generated by GAN-based models. Furthermore, the experimental study shows that aligning the distributions of different cameras can indeed bring a significant improvement in the proposed framework. In summary, the main contributions of this paper are fourfold. Firstly, we propose a new person Re-ID setting named unsupervised cross-camera person Re-ID which only needs the within-camera labels but not the cross-camera label information that is more expensive to collect. Secondly, we develop an adversarial camera alignment network for the proposed task, which deals with the cross-camera unlabeled case from the perspective of reducing the cross-camera data distribution discrepancy. Thirdly, to achieve the multi-camera adversarial learning, we utilize the existing gradient reversal layer (GRL) and also propose a new scheme called ``other camera equiprobability'' (OCE), with theoretical analysis provided for the latter. Lastly, the experimental results on multiple benchmark datasets show that the proposed method outperforms the state-of-the-art unsupervised methods in the semi-supervised setting. Also, the efficacy of the proposed multi-camera adversarial learning method is well confirmed. \begin{figure} \centering \subfigure[Market1501]{ \includegraphics[width=4cm]{fig/fig1_1.pdf} } \subfigure[DukeMTMC-reID]{ \includegraphics[width=4cm]{fig/fig1_2.pdf} } \caption{Visualization of the distributions of two datasets via t-SNE~\cite{maaten2008visualizing}. The features of all images are extracted by the ResNet50~\cite{DBLP:conf/cvpr/HeZRS16} pre-trained on ImageNet~\cite{DBLP:conf/cvpr/DengDSLL009}. Different colors denote the images from different cameras.
In detail, (a) shows six different cameras on Market1501; (b) shows eight different cameras on DukeMTMC-reID. Best viewed in color.} \label{fig1} \end{figure} The rest of this paper is organized as follows. The related work is reviewed in Section \ref{s-related}. The ACAN framework is proposed and discussed in Section \ref{s-framework}. Experimental results and analysis are presented in Section \ref{s-experiment}, and the conclusion is drawn in Section \ref{s-conclusion}. \section{Related work}\label{s-related} \subsection{Unsupervised Person Re-ID} In the early years, most methods for person Re-ID mainly focus on designing the discriminative hand-crafted features containing the color and structure information, such as LOMO~\cite{liao2015person} and BoW~\cite{DBLP:conf/iccv/ZhengSTWWT15}, which can be directly applied to person Re-ID without using any label information. Besides, some domain adaptation methods~\cite{DBLP:conf/cvpr/PengXWPGHT16, DBLP:journals/tcsv/WangZLZ16, DBLP:journals/tip/MaLYL15,qi2018unsupervised,DBLP:conf/iccv/YuWZ17} based on hand-crafted features are developed to transfer the discriminative information from a labeled source domain to an unlabeled target domain by learning a shared subspace or dictionary between source and target domains. However, these methods are not based on deep learning and thus do not fully explore the high-level semantics in images. With the introduction of deep learning into the computer vision community, many methods based on deep learning have been developed for Re-ID. Since deep networks need the label information to train, many methods generate pseudo labels for unlabeled samples~\cite{fan2018unsupervised,zhong2018generalizing,Bak_2018_ECCV,wang2018transferable,lv2018unsupervised}. In~\cite{fan2018unsupervised}, clustering methods are utilized to assign pseudo-labels to unlabeled samples. Lv \textit{et al.}~\cite{lv2018unsupervised} leverage spatio-temporal patterns of pedestrians to obtain robust pseudo-labels. 
In~\cite{wang2018transferable}, an approach is proposed to simultaneously learn an attribute-semantic and identity-discriminative feature representation by producing pseudo-attribute-labels in the target domain. However, generating pseudo-labels for unlabeled samples is complex, and the generated pseudo-labels may not be necessarily consistent with the true labels. In addition, deep domain adaptation methods have been developed to reduce the discrepancy between source and target domains from the perspective of feature representation~\cite{DBLP:conf/bmvc/LinLLK18,DBLP:journals/corr/abs-1904-01308,qi2019novel}. Lin \textit{et al.}~\cite{DBLP:conf/bmvc/LinLLK18} develop a novel unsupervised Multi-task Mid-level Feature Alignment (MMFA) network, which uses the Maximum Mean Discrepancy (MMD) to reduce the domain discrepancy. Delorme \emph{et al.}~\cite{DBLP:journals/corr/abs-1904-01308} introduce an adversarial framework in which the discrepancy across cameras is relieved by fooling a camera discriminator. Considering the presence of camera-level sub-domains in person Re-ID, Qi \emph{et al.}~\cite{qi2019novel} develop a camera-aware domain adaptation method to reduce the discrepancy not only between the source and target domains but also across these sub-domains (i.e., cameras). Recently, generating extra training images for the target domain has become popular~\cite{wei2018person,deng2018image,zhong2018generalizing,Bak_2018_ECCV}. Wei \textit{et al.}~\cite{wei2018person} impose constraints to maintain the identity in image generation. The approach in~\cite{deng2018image} enforces the self-similarity of an image before and after translation and the domain-dissimilarity of a translated source image and a target image. Zhong \textit{et al.}~\cite{zhong2018generalizing} propose to seek camera-invariance by using unlabeled target images and their camera-style transferred counterparts as positive match pairs.
Besides, it views the source and target images as negative pairs for the domain connectedness. In~\cite{zhong2019invariance}, the camera-invariance is introduced into the model, which requires that each real image and its style-transferred counterparts share the same identity. Besides the aforementioned image-based unsupervised methods, several video-based unsupervised methods for Re-ID have also been seen in recent years~\cite{kodirov2016person,khan2016unsupervised,ye2018robust,liu2017stepwise,ye2017dynamic}. To generate robust cross-camera labels, Ye~\emph{et al.}~\cite{ye2017dynamic} construct a graph for the samples in each camera, and then introduce a graph matching scheme for the cross-camera label association. Chen \emph{et al.}~\cite{DBLP:conf/bmvc/ChenZG18} learn a deep Re-ID matching model by jointly optimizing two margin-based association losses in an end-to-end manner, which effectively constrains the association of each frame to the best-matched intra-camera representation and cross-camera representation. Li \emph{et al.}~\cite{Li_2018_ECCV} jointly learn within-camera tracklet association and cross-camera tracklet correlation by maximizing the discovery of most likely tracklet relationships across camera views. Moreover, its extension is capable of incrementally discovering and exploiting the underlying Re-ID discriminative information from automatically generated person tracklet data end-to-end~\cite{li2019unsupervised}. Different from all of the above methods, the proposed method in this work does not use any additional data, such as generated images from GANs or labeled source domains. Also, we do not need any complex pseudo-label schemes to generate the association across cameras. \subsection{Unsupervised Domain Adaptation} Unsupervised domain adaptation is also related to our work. It is a more general technique to reduce the distribution discrepancy between source and target domains. 
In the literature, most unsupervised domain adaptation methods learn a common mapping between the source and target distributions. Several methods based on the Maximum Mean Discrepancy (MMD) have been proposed~\cite{DBLP:conf/icml/LongC0J15,NIPS2016_6110,DBLP:journals/corr/ZhangYCW15,DBLP:journals/corr/TzengHZSD14}. Long \textit{et al.}~\cite{DBLP:conf/icml/LongC0J15} introduce a new deep adaptation network, where the hidden representations of all task-specific layers are embedded in a Reproducing Kernel Hilbert space. To transfer a classifier from the source domain to the target domain, the work in~\cite{NIPS2016_6110} jointly learns adaptive classifiers between the two domains by a residual function. In~\cite{DBLP:conf/eccv/GhifaryKZBL16,DBLP:conf/nips/BousmalisTSKE16}, autoencoder-based methods are investigated to explore the discriminative information in the target domain. Recently, adversarial learning~\cite{DBLP:conf/icml/GaninL15,DBLP:journals/corr/abs-1803-09210,DBLP:conf/cvpr/TzengHSD17} has been applied to domain adaptation. Ganin \textit{et al.}~\cite{DBLP:conf/icml/GaninL15} propose the gradient reversal layer (GRL) to pursue the same distribution between the source and target domains. Inspired by generative adversarial networks (GANs), Tzeng \textit{et al.}~\cite{DBLP:conf/cvpr/TzengHSD17} leverage a GAN loss to match the data distributions of the source and target domains. Nevertheless, different from the typical unsupervised domain adaptation case which only has two domains (i.e., one source domain and one target domain), the semi-supervised person Re-ID task usually faces the distribution discrepancy across multiple domains (i.e., cameras). In recent years, some multi-domain adversarial methods have been developed to solve multi-source domain adaptation~\cite{zhao2018adversarial,xu2018deep,schoenauer-sebag2018multidomain}.
In~\cite{zhao2018adversarial,xu2018deep}, adversarial learning is employed in each pair of the source domain and the target domain to map all domains into a common feature space. Sebag \textit{et al.}~\cite{schoenauer-sebag2018multidomain} utilize the gradient reversal layer (GRL) to map multiple domains into the same space. However, the above methods deal with the adversarial task in the setting of ``one to many'' (i.e., one target domain and multiple source domains). Differently, the semi-supervised Re-ID in this work needs to handle the adversarial task in the setting of ``many to many'' (i.e., multiple cameras to multiple cameras), which is more complicated and needs new methods. \section{Adversarial Camera Alignment Network}\label{s-framework} This section puts forward a novel Adversarial Camera Alignment Network (ACAN) for the unsupervised cross-camera person Re-ID task focused in this paper. It only needs the within-camera label information but no cross-camera labels\footnote{The proposed unsupervised cross-camera person Re-ID task is also called \textit{semi-supervised person Re-ID} in this paper.}. We illustrate the proposed method in Fig.~\ref{fig3}, which mainly consists of the camera alignment task and the discrimination learning task. In this paper, we develop a multi-camera adversarial learning method to align all cameras, with two different adversarial schemes to achieve this goal. For learning the discrimination information from the intra-camera labeled data, we simply employ the commonly used triplet loss with the hard-sample-mining scheme~\cite{hermans2017defense}. In the following part, we describe the camera alignment module and the discrimination learning module in Section~\ref{SEC:AC} and Section~\ref{SEC:LDI}, respectively. After that, the optimization of the objective function in the proposed ACAN is presented in Section~\ref{SEC:OPT}. 
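One of the two adversarial schemes builds on the gradient reversal layer, which acts as the identity in the forward pass and multiplies the incoming gradient by $-\lambda$ in the backward pass, so that the feature extractor is trained to maximize the discriminator loss. A minimal hand-written sketch (ours; practical implementations hook this into an autograd framework):

```python
def grl_forward(x):
    # forward pass: identity, features pass through unchanged
    return x

def grl_backward(grad_output, lam=1.0):
    # backward pass: reverse (and scale) the gradient flowing to the feature extractor
    return -lam * grad_output

# toy chain: feature f = w * x, discriminator loss L = 0.5 * (f - t)^2
w, x, t, lam = 2.0, 3.0, 1.0, 0.5
f = grl_forward(w * x)
dL_df = f - t                          # gradient at the GRL output (trains D as usual)
dL_dw = grl_backward(dL_df, lam) * x   # gradient reaching F: sign-flipped
print(dL_dw)  # -7.5: F is pushed to *increase* the discriminator loss
```

The discriminator parameters receive the ordinary gradient, while everything upstream of the layer receives the reversed one, which is exactly the min-max behavior required for camera alignment.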
\begin{figure*} \centering \includegraphics[width=16cm]{fig/fig3} \caption{An illustration of the proposed adversarial camera alignment network. It is an end-to-end framework. In detail, a) shows the input images from different cameras, which only have intra-camera labels and no inter-camera label information; b) is the feature extractor to obtain the feature representation of each input image from the deep network; c) shows the camera-alignment task by adversarial learning to map all images of different cameras into a shared feature space; d) denotes the supervised intra-camera discrimination task using only the intra-camera label information in this shared feature space, which aims to pull the images of the same identity closer and push the images of different identities farther. Best viewed in color.} \label{fig3} \end{figure*} \subsection{Camera Alignment by Multi-camera Adversarial Learning} \label{SEC:AC} In unsupervised cross-camera person Re-ID, we only have the within-camera label information and thus cannot directly explore the relationship between cross-camera images. Due to the variation of body pose, occlusion, image resolution, illumination conditions, and background noise, there exists significant data discrepancy across cameras, as shown in Fig.~\ref{fig1}. If we merely use the within-camera labels to train models, the data distribution discrepancy across cameras cannot be removed. Different from most existing unsupervised person Re-ID methods~\cite{fan2018unsupervised,zhong2018generalizing,Bak_2018_ECCV,wang2018transferable,lv2018unsupervised}, which generate pseudo-labels for unlabeled data, we address this problem from the perspective of reducing the data distribution discrepancy. In other words, we reduce the distribution discrepancy by aligning all cameras with adversarial learning. To achieve this goal, we develop a Multi-Camera Adversarial Learning (MCAL) method to map images of different cameras into a common feature space. 
Let $X=[X_{1}, ..., X_{C}]$ be the set of training images, where $X_{c}$ denotes the set of images from the $c$-th camera and $C$ is the total number of cameras. $Y=[Y_{1}, ..., Y_{C}]$ is the corresponding set of within-camera person labels. The set of camera IDs (i.e., the label of each camera class) of the images in $X$ is denoted by $Z$, i.e., each element in $Z$ represents the camera ID of an image. Adversarial learning involves the optimization of a discriminator and a generator~\cite{goodfellow2014generative}. As usual, the discriminator (i.e., a classifier to distinguish the images from different cameras) in this work is optimized by a cross-entropy loss defined on the $C$ camera classes as \begin{equation}\label{eq11} \begin{aligned} &\min_{D}\mathcal{L}_\mathrm{MCAL-D}(X, Z, F)= \\ &\min_{D}\left[\mathbb{E}_{(x,z)\sim (X,Z)}\left(-\sum_{k=1}^{C}\delta(z-k)\log D(F(x), k)\right)\right], \end{aligned} \end{equation} where $x$ denotes an image, $z$ is the true camera class label of $x$, and $\delta(\cdot)$ represents the Kronecker delta function (i.e., $\delta(z-k)=1$ if $z=k$ and $0$ otherwise). $F$ denotes the backbone network (i.e., the feature extractor module in Fig.~\ref{fig3}), and $F(x)$ is the feature representation of $x$. $D$ indicates the discriminator and $D(F(x), k)$ is the prediction score (i.e., probability) for $x$ with respect to the $k$-th camera class. To train the feature extractor $F$, we conduct the adversarial task so that the discriminator cannot effectively predict the camera ID of each image. 
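As a minimal numerical sketch (ours, not the paper's implementation), the per-image discriminator loss in Eq.~(\ref{eq11}) reduces to the standard cross-entropy $-\log D(F(x),z)$, since the $\delta(z-k)$ term selects only the true camera class; the probability vector below is a made-up stand-in for the discriminator outputs:

```python
import math

def discriminator_loss(probs, z):
    """Per-image camera-discrimination loss of Eq. (eq11):
    -sum_k delta(z - k) * log D(F(x), k) = -log probs[z],
    where probs[k] plays the role of D(F(x), k)."""
    return -math.log(probs[z])

# Example with C = 4 cameras: an image whose true camera class is 2
probs = [0.1, 0.2, 0.6, 0.1]         # hypothetical discriminator outputs
loss = discriminator_loss(probs, 2)  # = -log 0.6
```

Minimizing this loss over the discriminator parameters sharpens the camera classifier; the adversarial schemes below then train $F$ against it.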
Intuitively, to carry out the multi-camera adversarial task, we can directly optimize the feature extractor $F$ by maximizing the discriminator loss as \begin{equation}\label{eq12} \begin{aligned} &\min_{F}\mathcal{L}_\mathrm{MCAL-F}(X, Z, D)\triangleq\max_{F}\mathcal{L}_\mathrm{MCAL-D}(X, Z, D) \\ &=\min_{F}\left[\mathbb{E}_{(x, z)\sim (X, Z)}\sum_{k=1}^{C}\delta(z-k)\log D(F(x), k)\right], \end{aligned} \end{equation}where for consistency it is written as minimizing the negative discriminator loss. Following the literature, we first investigate the gradient reversal layer (GRL) technique~\cite{DBLP:conf/icml/GaninL15} to solve Eq.~(\ref{eq12}). After that, we point out its limitations for the multi-camera adversarial task and develop a simple but effective new criterion, ``other camera equiprobability'' (OCE), for our task. \subsubsection{GRL-based adversarial scheme}\label{SEC:AC-1} The gradient reversal layer (GRL)~\cite{DBLP:conf/icml/GaninL15} is commonly used to reduce the distribution discrepancy between two domains. It is equivalent to maximizing the domain discrimination loss. From the perspective of our work, GRL can be viewed as maximizing the camera discrimination loss (i.e., Eq.~(\ref{eq12})). Therefore, we can utilize it to solve the multi-camera adversarial task. Note that because our case deals with multiple (camera) classes, a loss will be counted as long as an image is not classified into its true camera class. To train the feature extractor $F$ with Eq.~(\ref{eq12}), we insert GRL between $F$ and $D$ as in the literature~\cite{DBLP:conf/icml/GaninL15}. During forward propagation, GRL is simply an identity transform. During backpropagation, GRL reverses (i.e., multiplies by a negative constant) the gradients of the camera discriminator loss with respect to the network parameters in the feature extraction layers and passes them backward. 
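A schematic, framework-agnostic sketch of this forward/backward behavior is given below (illustrative only; in practice GRL is implemented as a custom autograd function of the deep-learning framework, and the scalar values here are hypothetical):

```python
class GradReverse:
    """Gradient reversal layer sketch: identity in the forward pass;
    in the backward pass, the incoming gradient is multiplied by a
    negative constant -lam before being passed further back."""
    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x  # identity transform

    def backward(self, grad_output):
        return -self.lam * grad_output  # reversed (sign-flipped) gradient

grl = GradReverse(lam=1.0)
y = grl.forward(2.5)   # forward: value passes through unchanged
g = grl.backward(0.4)  # backward: gradient is sign-flipped
```

Inserted between $F$ and $D$, this makes a single backward pass descend the discriminator loss in $\theta_D$ while ascending it in $\theta_F$.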
This GRL-based adversarial scheme can work to some extent to reduce the distribution discrepancy across different cameras (i.e., domains), as will be experimentally demonstrated shortly. However, we observe that this scheme has a drawback. Maximizing the discriminator loss by GRL only enforces an image ``not to be classified into its true camera class''. It will appear to be ``equivalently good'' for this optimization as long as an image is classified into any wrong camera class. As a result, this scheme may be trapped in a ``local'' camera assignment (i.e., the images from a certain camera are only misclassified into a few cameras but not any other cameras). In the extreme case, this could lead to a ``many to one'' assignment (i.e., all images are misclassified into a single camera class, as shown in Fig.~\ref{fig8}), especially when the dataset is collected from many cameras. This adversely affects the reduction of the data distribution discrepancy. The disadvantage of GRL will be further verified in Section~\ref{sec:EXP-FA}. \subsubsection{OCE-based adversarial scheme}\label{SEC:AC-2} To overcome the above issue, we explicitly assign camera class labels to each training image. To find a good scheme of camera label assignment to train the feature extractor $F$, we give the following theoretical analysis. \textbf{Proposition 1.} \textit{Given any image $x$, we explicitly assign it, with equal probability, all the $C$ camera class labels. Training the feature extractor $F$ under this condition makes the data distributions of all cameras aligned as \begin{equation*} \begin{aligned} KL(p(x|\mathcal{C}_1)\parallel p(x))=\cdots =KL(p(x|\mathcal{C}_C)\parallel p(x))=0, \end{aligned} \end{equation*} where $p(x|\mathcal{C}_i)$ is the class-conditional probability density function of the $i$-th camera class, and $p(x)$ denotes the probability density function of the images. 
$KL(p(x|\mathcal{C}_i)\parallel p(x))$ is the Kullback--Leibler divergence between $p(x|\mathcal{C}_i)$ and $p(x)$.} \textit{Proof.} Given an image $x$, its posterior probability with respect to the $i$-th camera class (denoted by $\mathcal{C}_i$, $i=1,\cdots,C$) can be expressed via Bayes' rule as \begin{equation}\label{eqn:Bayes} P(\mathcal{C}_i|x) = \frac{p(x|\mathcal{C}_i)P(\mathcal{C}_i)}{p(x)}, \quad i=1,\cdots,C, \end{equation}where $p(x|\mathcal{C}_i)$ indicates the class-conditional probability density function of the $i$-th camera class, $p(x)$ is the probability density function of the images, and $P(\mathcal{C}_i)$ is the prior probability of the $i$-th camera class. Note that $P(\mathcal{C}_i|x)$ is just $D(F(x),i)$. For clarity, $D(F(x),i)$ is compactly denoted by $D_i$. For an image $x$, if we assign the equal label (i.e., $1/C$) to all camera classes to train the feature extractor $F$, we obtain the loss function \begin{equation}\label{eq22} \underset{\{D_1,\cdots,D_{C}\}}\min\left( -\frac{1}{C}\sum_{i=1}^{C}\log D_i \right) \end{equation} with the constraints $D_i\geq{0}$ and $\sum_{i=1}^{C}D_i=1$. Due to the symmetry of the objective function with respect to the probabilities $D_1,\cdots,D_{C}$, it is not difficult to see that the optimal value of $D_i$ is $1/C$ for $i=1,\cdots,C$. A rigorous proof can be readily obtained by applying the Karush-Kuhn-Tucker conditions~\cite{boyd2004convex} to this optimization. This indicates that $P(\mathcal{C}_i|x)$ will equal $1/C$ when Eq.~(\ref{eq22}) is minimized for this given image $x$. We turn to Eq.~(\ref{eqn:Bayes}) and rewrite it as \begin{equation}\label{eqn:Bayes-1} p(x|\mathcal{C}_i) = \frac{P(\mathcal{C}_i|x)}{P(\mathcal{C}_i)}p(x),\quad i=1,\cdots,C. \end{equation}Without loss of generality, an equal prior probability can be set for the $C$ camera classes, that is, $P(\mathcal{C}_i)$ is the constant $1/C$. 
Further, note that by optimizing $D_i$ in Eq.~(\ref{eq22}) above, it can be known that \begin{equation}\label{eqn:Bayes-1-1} P(\mathcal{C}_i|x)=\frac{1}{C},\quad i=1,\cdots,C. \end{equation} Combining the above results, Eq.~(\ref{eqn:Bayes-1}) becomes \begin{equation}\label{eqn:Bayes-2} p(x|\mathcal{C}_i) = \frac{1/C}{1/C}p(x)=p(x),\quad i=1,\cdots,C. \end{equation} This indicates that $p(x|\mathcal{C}_i)=p(x)$ when Eq.~(\ref{eq22}) is minimized for this given image $x\in X$. Thus, it can be easily obtained that \begin{equation}\label{eq23} \begin{aligned} &KL(p(x|\mathcal{C}_i)\parallel p(x))=\\ &\sum_{x\in X}p(x|\mathcal{C}_i)\log(\frac{p(x|\mathcal{C}_i)}{p(x)})=0, \quad \quad i=1,\cdots,C. \end{aligned} \end{equation}\hfill$\blacksquare$ Based on the above result, we can readily infer the following corollary. \textbf{Corollary 1.} \textit{If the discriminator predicts the equal probability for any given image $x$ with respect to all camera classes, the data distributions of all cameras can be aligned as \begin{equation*} \begin{aligned} KL(p(x|\mathcal{C}_1)\parallel p(x))=\cdots =KL(p(x|\mathcal{C}_C)\parallel p(x))=0. \end{aligned} \end{equation*}} According to \textit{Proposition $1$}, a straightforward way may be to simply require an image from any camera class to be equiprobably classified into all of the $C$ camera classes (i.e., ``all camera equiprobability'', ACE in short), which is similar to the method in the literature~\cite{DBLP:journals/corr/abs-1904-01308}. However, such an ACE scheme does not produce satisfactory performance, as will be validated by our experiments. This can be explained as follows. First, the objective function in Eq.~(\ref{eq22}) is a complicated composite function of the network parameters in the feature extractor $F$. 
As a result, the optimization can hardly achieve the global minimum. That is to say, the result of \textit{Proposition $1$} will not be exactly achieved in practice, and in turn this will not guarantee the expected reduction of the data distribution discrepancy. Second, a more subtle issue is that the result of \textit{Proposition $1$} (i.e., predicting the equal probability for all camera classes) is the goal set for the discriminator \textit{after the equilibrium is achieved through adversarial learning}. It shall not be simply used as the requirement to design the discriminator loss \textit{during adversarial learning}. As evidence, in order to make the final discriminator predict equal probability (i.e., $1/2$) for both true and fake classes, the original GAN assigns the class label ``1'' explicitly (rather than with a probability of $1/2$) to any sample from the fake class when designing the discriminator loss used to train the generator~\cite{goodfellow2014generative}. Applying the same rule to our case means that we shall not follow the ACE scheme to design the discriminator loss in Eq.~(\ref{eq22}). To better reflect this situation, we develop a more precise scheme to achieve the multi-camera adversarial learning. In this new scheme, we require that with the learned feature representation, the discriminator shall classify an image into all camera classes with equal probability \textit{except the camera class which the image originally belongs to} (i.e., zero probability for this class). This is where the name ``other camera equiprobability'' (OCE) comes from. This scheme can effectively avoid the ``local camera assignment'' effect that occurs in the GRL-based scheme and makes a better effort to align all cameras together, as demonstrated in the experiments. Furthermore, note that the proposed OCE-based scheme does not conflict with the result of \textit{Proposition $1$}, as analyzed above. 
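Two small numerical sketches of this discussion (our own illustration with hypothetical probability vectors, not the network training code): the first checks that the simplex-constrained minimization in Eq.~(\ref{eq22}) is indeed attained at the uniform distribution, as stated in \textit{Proposition 1}; the second evaluates a cross-entropy against the OCE target just described, i.e., equal probability over all cameras except the image's own:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def minimize_ace_objective(C=5, steps=1000, lr=1.0):
    """Gradient descent on Eq. (eq22), f(D) = -(1/C) * sum_i log D_i,
    with the simplex constraint handled by D = softmax(logits);
    the gradient w.r.t. logit j is D_j - 1/C."""
    logits = [float(j) for j in range(C)]  # arbitrary starting point
    for _ in range(steps):
        D = softmax(logits)
        logits = [v - lr * (d - 1.0 / C) for v, d in zip(logits, D)]
    return softmax(logits)  # approaches the uniform vector (1/C, ..., 1/C)

def oce_target_loss(probs, k):
    """Cross-entropy against the OCE target: probability 1/(C-1) on every
    camera class except the true class k (which receives zero)."""
    C = len(probs)
    return -sum(math.log(p) for i, p in enumerate(probs) if i != k) / (C - 1)

D_star = minimize_ace_objective()  # close to [0.2, 0.2, 0.2, 0.2, 0.2]
```

The OCE loss above is minimized when the probability mass $1-D_k$ is spread evenly over the other $C-1$ classes; for example, with $C=4$ and mass $1/3$ on each other class it evaluates to $\log 3$.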
As to be confirmed by Fig.~\ref{fig8} in Section~\ref{sec:EXP-FA}, the discriminator obtained by our scheme indeed has a clearer trend than the GRL-based scheme to predict the equal probability for all camera classes (i.e., \textit{Corollary $1$}). Formally, the proposed OCE scheme on an image $x^k$ in the $k$-th camera can be expressed as \begin{equation}\label{eq13} \begin{aligned} \mathcal{L}_\mathrm{OCE}(x^k)= -\frac{1}{C-1}\sum_{\substack{i=1\\i\neq k}}^{C}\log (D(F(x^k),i)), \end{aligned} \end{equation} where $D(F(x^k),i)$ denotes the predicted probability that $x^k$ belongs to the $i$-th camera class. In this way, the optimization for training the feature extractor $F$ (backbone network) is defined as \begin{equation}\label{eq14} \begin{aligned} &\min_{F}\mathcal{L}_\mathrm{MCAL-F}(X, D)=\min_{F}\mathbb{E}_{x\sim X} \mathcal{L}_\mathrm{OCE}(x). \end{aligned} \end{equation} Note that the traditional two-domain adversarial learning methods~\cite{DBLP:conf/icml/GaninL15,DBLP:conf/cvpr/TzengHSD17} are just a special case of this OCE-based scheme when there are exactly two camera classes on a dataset. \textbf{Discussion.} a) \textit{Comparison among different adversarial learning methods}. To show the difference between the proposed multi-camera adversarial learning and other adversarial learning methods in the literature, we illustrate them in Fig.~\ref{fig9}. The conventional adversarial learning (AL)~\cite{DBLP:conf/icml/GaninL15,DBLP:journals/corr/abs-1803-09210,DBLP:conf/cvpr/TzengHSD17} conducts the adversarial task between two domains (the source and target domains). Multi-domain adversarial learning (MDAL)~\cite{zhao2018adversarial,xu2018deep,schoenauer-sebag2018multidomain,ghosh2018multi} deals with multi-source domain adaptation, which aligns one target domain and multiple source domains (i.e., ``one to many''). 
In contrast, the proposed multi-camera adversarial learning (MCAL) is specially designed for the camera alignment task, which conducts the adversarial task among all cameras (i.e., ``many to many''). b) \textit{A caution on reducing distribution discrepancy.} As aforementioned, the OCE scheme can better reduce the distribution discrepancy among cameras than the GRL scheme, which is experimentally validated in Section~\ref{sec:EXP-FA}. Reducing data distribution discrepancy has been widely regarded as an effective means to mitigate the domain gap in computer vision. This is also a key motivation of this work. Nevertheless, we would like to point out that this data discrepancy reduction approach may need to be used cautiously, for example, in the person Re-ID task. We observe that although the proposed OCE scheme is clearly better than the GRL scheme at distribution discrepancy reduction, its Re-ID performance could be less competitive than that of the latter. In other words, the relationship between discrepancy reduction and performance improvement may not be in a linear form. In some cases, a single-minded pursuit of camera alignment could damage the intrinsic structure of the data distribution of each camera class, and this in turn adversely affects the final Re-ID performance. In Section~\ref{sec:EXP-EDC} and Section~\ref{sec:EXP-FA}, we will further discuss this issue by analyzing the results of the two different adversarial schemes. \begin{figure} \centering \includegraphics[width=8.5cm]{fig/fig9} \caption{Comparison among different adversarial learning methods. (a) is adversarial learning (AL) for the conventional domain adaptation. (b) denotes the multi-domain adversarial learning (MDAL) for multi-source domain adaptation. 
(c) represents multi-camera adversarial learning (MCAL) for our camera alignment task.} \label{fig9} \end{figure} \subsection{Discrimination Learning with Intra-camera Labels}\label{SEC:LDI} We utilize multi-camera adversarial learning to project all images of different cameras into a common feature space. Based on this shared space, we learn discriminative information by using the available within-camera label information. In this paper, we employ the triplet loss with the hard-sample-mining scheme~\cite{hermans2017defense}. It selects the hardest positive sample (i.e., the farthest positive sample) and the hardest negative sample (i.e., the closest negative sample) for an anchor to generate the triplet. This scheme can well optimize the embedding space such that data points with the same identity become closer to each other than those with different identities~\cite{hermans2017defense}. With the available within-camera label information, we only choose training data within each camera class to generate triplets. In each batch, we randomly select $P$ persons and each person has $K$ images, i.e., the batch size $N=P\times K$. For the $c$-th camera, the triplet loss with hard sample mining can be described as \begin{equation}\label{eq02} \begin{aligned} \min_{F}\mathcal{L}_{\mathrm{Triplet}}(X_{c},Y_{c})=\mathbb{E}_{(x,y)\sim (X_c,Y_c)}[m+l(x)]_{+}. \end{aligned} \end{equation} For an anchor sample $x_{a}^{i}$ from the $i$-th person, \begin{equation}\label{eq03} \begin{aligned} l(x_{a}^{i})=\overbrace{\max_{p=1...K}d(F(x_{a}^{i}), F(x_{p}^{i}))}^{hardest~positive}-\overbrace{\min_{\substack{j=1...P\\ n=1...K \\j\neq i}}d(F(x_{a}^{i}), F(x_{n}^{j}))}^{hardest~negative}, \end{aligned} \end{equation} where $m$ denotes the margin, $F(x_{a}^{i})$ is the feature of the sample $x_{a}^{i}$, and $d(\cdot,\cdot)$ is the Euclidean distance between two feature vectors. 
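A compact sketch of this batch-hard mining follows (our illustrative code: the toy 2-D points and identity labels below are made up, whereas the actual model mines over deep features):

```python
def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5

def batch_hard_triplet_loss(feats, ids, margin=0.3):
    """For each anchor, take the farthest positive and the closest
    negative in the batch, apply the hinge [m + l(x)]_+, and average
    the per-anchor losses over the batch."""
    losses = []
    n = len(feats)
    for a in range(n):
        pos = [euclidean(feats[a], feats[j])
               for j in range(n) if j != a and ids[j] == ids[a]]
        neg = [euclidean(feats[a], feats[j])
               for j in range(n) if ids[j] != ids[a]]
        losses.append(max(0.0, margin + max(pos) - min(neg)))
    return sum(losses) / n

# Toy batch: two identities with two samples each
feats = [(0.0, 0.0), (2.0, 0.0), (1.0, 0.0), (1.0, 1.0)]
ids = [0, 0, 1, 1]
loss = batch_hard_triplet_loss(feats, ids, margin=0.3)
```

When identities are well separated by more than the margin, every hinge term is zero and the loss vanishes, which is the behavior the mining scheme drives the embedding toward.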
\subsection{Optimization}\label{SEC:OPT} The loss function of the proposed ACAN framework is finally expressed as \begin{equation}\label{eq10} \begin{aligned} &\min_{D}\mathcal{L}_\mathrm{MCAL-D}(X, Z, F),\\ &\min_{F}\left[ \mathcal{L}_\mathrm{Triplet}(X_c, Y_c)+\lambda\mathcal{L}_\mathrm{MCAL-F}(X,D)\right], c \in \left\{1,\dots, C\right\}, \end{aligned} \end{equation} where $\lambda$ is the parameter to trade off the camera alignment task and the discrimination learning task. Let $\theta _{F}$ and $\theta _{D}$ be the learnable parameters of the feature extractor $F$ and the discriminator $D$, respectively. For the GRL-based adversarial scheme, we use the following stochastic updates: \begin{equation}\label{eq06} \begin{aligned} &\theta _{D}=\theta _{D}-\mu \frac{\partial \mathcal{L}_\mathrm{MCAL-D}}{\partial \theta _{D}},\\ &\theta _{F}=\theta _{F}-\mu \left (\frac{\partial \mathcal{L}_{\mathrm{Triplet}}}{\partial \theta _{F}} - \lambda \frac{\partial \mathcal{L}_\mathrm{MCAL-D}}{\partial \theta _{F}}\right ). \end{aligned} \end{equation} Since the GRL-based scheme employs the gradient reversal method, we directly use the reversed gradient of the discriminator loss to update the feature extractor $F$, as shown in Eq.~(\ref{eq06}). For the OCE-based adversarial scheme, we update the feature extractor $F$ and the discriminator $D$ as \begin{equation}\label{eq08} \begin{aligned} &\theta _{D}=\theta _{D}-\mu \frac{\partial \mathcal{L}_\mathrm{MCAL-D}}{\partial \theta_{D}},\\ &\theta _{F}=\theta _{F}-\mu \left (\frac{\partial \mathcal{L}_{\mathrm{Triplet}}}{\partial \theta _{F}} + \lambda\frac{\partial \mathcal{L}_\mathrm{MCAL-F}}{\partial \theta _{F}}\right ), \end{aligned} \end{equation} where $\mathcal{L}_\mathrm{MCAL-F}$ is defined in Eq.~(\ref{eq14}). 
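The update structure of Eqs.~(\ref{eq06}) and (\ref{eq08}) can be sketched in the abstract as follows (a toy scalar version with hypothetical gradient callables standing in for the backpropagated gradients, not the actual network training code):

```python
def alternating_updates(theta_D, theta_F, grad_D, grad_F, mu=0.1, steps=200):
    """Stochastic update scheme sketch: in each step, first update the
    discriminator parameters on their own loss, then update the feature
    extractor on the combined (triplet + lambda * adversarial) gradient."""
    for _ in range(steps):
        theta_D -= mu * grad_D(theta_D, theta_F)  # discriminator step
        theta_F -= mu * grad_F(theta_D, theta_F)  # feature-extractor step
    return theta_D, theta_F

# Toy quadratic losses with minima at theta_D = 1 and theta_F = -2
gD = lambda d, f: d - 1.0
gF = lambda d, f: f + 2.0
d_star, f_star = alternating_updates(5.0, 5.0, gD, gF)
```

With a small step size $\mu$, both parameter blocks drift toward their respective stationary points, mirroring the role of the learning rate in the updates above.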
In the optimization process, since the discriminator needs to calculate two different losses (i.e., the cross-entropy loss for updating the parameters of the discriminator and the OCE loss for updating the parameters of the feature extractor), we alternately update the discriminator $D$ and the feature extractor $F$, which is similar to typical GAN methods~\cite{goodfellow2014generative,DBLP:conf/iccv/ZhuPIE17}. \section{Experiments}\label{s-experiment} In this part, we first introduce the experimental datasets and settings in Section~\ref{sec:EXP-DS}. Then, we compare the proposed method with the state-of-the-art unsupervised Re-ID methods and some methods with the semi-supervised setting in Sections~\ref{sec:EXP-CUA} and~\ref{sec:EXP-SS}, respectively. To validate the effectiveness of various components in the proposed framework, we conduct an ablation study in Section~\ref{sec:EXP-EDC}. Lastly, we further analyze the properties of the proposed network in Section~\ref{sec:EXP-FA}. \subsection{Datasets and Experiment Settings}\label{sec:EXP-DS} We evaluate our approach on three large-scale image datasets: Market1501~\cite{DBLP:conf/iccv/ZhengSTWWT15}, DukeMTMC-reID~\cite{DBLP:conf/iccv/ZhengZY17}, and MSMT17~\cite{wei2018person}. \textbf{Market1501} contains 1,501 persons with 32,668 images from six cameras. Among them, $12,936$ images of $751$ identities are used as the training set. For evaluation, there are $3,368$ and $19,732$ images in the query set and the gallery set, respectively. \textbf{DukeMTMC-reID} has $1,404$ persons from eight cameras, with $16,522$ training images, $2,228$ queries, and $17,661$ gallery images. \textbf{MSMT17}\footnote{On MSMT17, the second camera includes $15$ identities with $193$ images. Since $32$ identities are chosen from each camera in each training batch, we do not select data from this camera in any experiment. Therefore, we use the data of the remaining $14$ cameras to train our model on this dataset.} 
is collected from a 15-camera network deployed on campus. The training set contains $32,621$ images of $1,041$ identities. For evaluation, $11,659$ and $82,161$ images are used as query and gallery images, respectively. For all datasets, we employ the CMC (i.e., Cumulative Match Characteristic) accuracy and mAP (i.e., mean Average Precision) for Re-ID evaluation~\cite{DBLP:conf/iccv/ZhengSTWWT15}. On Market1501, there are single- and multi-query evaluation protocols. We use the more challenging single-query protocol in our experiments. In addition, we also evaluate the proposed method on two large-scale video datasets, MARS~\cite{DBLP:conf/eccv/ZhengBSWSWT16} and DukeMTMC-SI-Tracklet~\cite{li2019unsupervised}. \textbf{MARS} has a total of $20,478$ tracklets of $1,261$ persons captured from a 6-camera network on a university campus. All the tracklets were automatically generated by the DPM detector~\cite{felzenszwalb2009object} and the GMMCP tracker~\cite{dehghan2015gmmcp}. This dataset splits $626$ and $635$ identities into the training and testing sets, respectively. \textbf{DukeMTMC-SI-Tracklet} is derived from DukeMTMC. It consists of $19,135$ tracklets of $1,788$ persons from $8$ cameras. In this dataset, $702$ and $1,086$ identities are used to train and evaluate models, respectively. On the video datasets, we also employ the CMC accuracy and mAP for Re-ID evaluation~\cite{DBLP:conf/iccv/ZhengSTWWT15}. For training the multi-camera adversarial learning task, we randomly select the same number (i.e., $\left \lfloor \frac{64}{C} \right \rfloor$, where $C$ is the number of cameras and $\left \lfloor \cdot \right \rfloor$ denotes the floor operation) of images per camera in a batch. To generate triplets, we set $P$ (i.e., the number of persons) and $K$ (i.e., the number of images per person) as $32$ and $4$ in each training batch, following the literature~\cite{hermans2017defense}. 
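The two batch constructions described above can be sketched as follows (our own illustrative code; `images_by_camera` and `images_by_person` are hypothetical containers standing in for the dataset):

```python
import random

def camera_balanced_batch(images_by_camera, batch_size=64):
    """Adversarial-task batch: floor(batch_size / C) images sampled
    uniformly from each of the C cameras."""
    C = len(images_by_camera)
    per_cam = batch_size // C
    batch = []
    for cam_images in images_by_camera:
        batch.extend(random.sample(cam_images, per_cam))
    return batch

def pk_batch(images_by_person, P=32, K=4):
    """Triplet-task batch: P random persons with K images each (N = P*K)."""
    persons = random.sample(sorted(images_by_person), P)
    return [(pid, img) for pid in persons
            for img in random.sample(images_by_person[pid], K)]
```

For a six-camera dataset such as Market1501, `camera_balanced_batch` draws $\lfloor 64/6 \rfloor = 10$ images per camera, while `pk_batch` yields the $32 \times 4 = 128$ samples used for within-camera triplet mining.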
The margin of triplet loss, $m$, is $0.3$ according to the literature~\cite{hermans2017defense}. $\lambda$ in Eq. (\ref{eq10}) is set as $1$. We use a fully connected layer to implement the discriminator $D$, whose dimension is set as $128$. The initial learning rates of the fine-tuned parameters (those in the pre-trained ResNet-50 on ImageNet~\cite{DBLP:conf/cvpr/DengDSLL009}) and the new parameters (those in the newly added layers) are $0.1$ and $0.01$, respectively. The proposed model is trained with the SGD optimizer in a total of $300$ epochs. When the number of epochs reaches $100$ and $200$, we decrease the learning rates by a factor of $0.1$. The size of the input image is $256 \times 128$. Particularly, all experiments on all datasets utilize the same experimental settings. \subsection{Comparison with Unsupervised Methods}\label{sec:EXP-CUA} \renewcommand{\cmidrulesep}{0mm} \setlength{\aboverulesep}{0mm} \setlength{\belowrulesep}{0mm} \setlength{\abovetopsep}{0cm} \setlength{\belowbottomsep}{0cm} \begin{table*}[htbp] \centering \caption{Comparison with the state-of-the-art unsupervised methods on three image datasets including Market1501, DukeMTMC-reID and MSMT17. ``-'' denotes that the result is not provided. We report mAP and the Rank-1, 5, 10 accuracies of CMC. 
The best performance is shown in \textbf{bold}.} \begin{tabular}{|c|cccc|cccc|cccc|} \toprule \midrule \multirow{2}[1]{*}{Method} & \multicolumn{4}{c|}{Market1501} & \multicolumn{4}{c|}{DukeMTMC-reID} & \multicolumn{4}{c|}{MSMT17} \\ \cmidrule{2-13} & mAP & Rank-1 & Rank-5 & Rank-10 & mAP & Rank-1 & Rank-5 & Rank-10 & mAP & Rank-1 & Rank-5 & Rank-10 \\ \midrule LOMO~\cite{liao2015person} & 8.0 & 27.2 & 41.6 & 49.1 & 4.8 & 12.3 & 21.3 & 26.6 & - & - & - & - \\ BoW~\cite{DBLP:conf/iccv/ZhengSTWWT15} & 14.8 & 35.8 & 52.4 & 60.3 & 8.3 & 17.1 & 28.8 & 34.9 & - & - & - & - \\ UJSDL~\cite{qi2018unsupervised} & - & 50.9 & - & - & - & 32.2 & - & - & - & - & - & - \\ UMDL~\cite{DBLP:conf/cvpr/PengXWPGHT16} & 12.4 & 34.5 & - & - & 7.3 & 18.5 & - & - & - & - & - & - \\ CAMEL~\cite{DBLP:conf/iccv/YuWZ17} & 26.3 & 54.5 & - & - & - & - & - & - & - & - & - & - \\ \midrule PUL~\cite{fan2018unsupervised} & 20.5 & 45.5 & 60.7 & 66.7 & 16.4 & 30.0 & 43.4 & 48.5 & - & - & - & - \\ Tfusion~\cite{lv2018unsupervised} & - & 60.8 & - & - & - & - & - & - & - & - & - & - \\ TJ-AIDL~\cite{wang2018transferable} & 26.5 & 58.2 & 74.8 & 81.1 & 23.0 & 44.3 & 59.6 & 65.0 & - & - & - & - \\ \midrule MMFA~\cite{DBLP:conf/bmvc/LinLLK18} & 27.4 & 56.7 & 75.0 & 81.8 & 24.7 & 45.3 & 59.8 & 66.3 & - & - & - & - \\ CAT~\cite{DBLP:journals/corr/abs-1904-01308} & 27.8 & 57.8 & - & - & 28.7 & 50.9 & - & - & - & - & - & - \\ CAL-CCE~\cite{qi2019novel} & 34.5 & 64.3 & - & - & 36.7 & 55.4 & - & - & - & - & - & - \\ \midrule PTGAN~\cite{wei2018person} & - & 38.6 & - & - & - & 27.2 & - & - & 3.3 & 11.8 & - & 27.4 \\ SPGAN~\cite{deng2018image} & 22.8 & 51.5 & 70.1 & 76.8 & 22.3 & 41.1 & 56.6 & 63.0 & - & - & - & - \\ SPGAN+LMP~\cite{deng2018image} & 26.7 & 57.7 & 75.8 & 82.4 & 26.2 & 46.4 & 62.3 & 68.0 & - & - & - & - \\ HHL~\cite{zhong2018generalizing} & 31.4 & 62.2 & 78.8 & 84.0 & 27.2 & 46.9 & 61.0 & 66.7 & - & - & - & - \\ UTA~\cite{tian2019imitating} & 40.1 & 72.4 & 87.4 & 91.4 & 31.8 & 55.6 & 68.3 & 72.4 & - & 
- & - & - \\ ECN~\cite{zhong2019invariance} & 43.0 & \textbf{75.1} & \textbf{87.6} & 91.6 & 40.4 & 63.3 & 75.8 & 80.4 & 10.2 & 30.2 & 41.5 & 46.8 \\ \midrule ACAN-GRL (ours) & \textbf{50.6} & 73.3 & \textbf{87.6} & \textbf{91.8} & \textbf{46.6} & 65.1 & 80.6 & 85.1 & 11.2 & 27.1 & 40.9 & 47.3 \\ ACAN-OCE (ours) & 47.7 & 72.2 & 86.3 & 90.4 & 45.1 & \textbf{67.6} & \textbf{81.2} & \textbf{85.2} & \textbf{12.6} & \textbf{33.0} & \textbf{48.0} & \textbf{54.7} \\ \bottomrule \end{tabular}% \label{tab01}% \end{table}% We compare our method with the state-of-the-art unsupervised image-based person Re-ID methods. Among them, there are five non-deep-learning-based methods (LOMO~\cite{liao2015person}, BoW~\cite{DBLP:conf/iccv/ZhengSTWWT15}, UJSDL~\cite{qi2018unsupervised}, UMDL~\cite{DBLP:conf/cvpr/PengXWPGHT16} and CAMEL~\cite{DBLP:conf/iccv/YuWZ17}), and multiple deep-learning-based methods. The latter includes three recent pseudo-label-generation-based methods (PUL~\cite{fan2018unsupervised}, TFusion~\cite{lv2018unsupervised} and TJ-AIDL~\cite{wang2018transferable}), three distribution-alignment-based methods (MMFA~\cite{DBLP:conf/bmvc/LinLLK18}, CAT~\cite{DBLP:journals/corr/abs-1904-01308} and CAL-CCE~\cite{qi2019novel}) and five recent image-generation-based approaches (PTGAN~\cite{wei2018person}, SPGAN~\cite{deng2018image}, HHL~\cite{zhong2018generalizing}, UTA~\cite{tian2019imitating} and ECN~\cite{zhong2019invariance}). Particularly, although there is no label information for the target domain in these unsupervised methods, most of them utilize labeled source domains or images generated by GAN-based models. In contrast, our method does not utilize such information. The experimental results on Market1501, DukeMTMC-reID and MSMT17 are reported in Table~\ref{tab01}. As seen, the results consistently show the superiority of our proposed method over the aforementioned methods. 
For example, the proposed method significantly outperforms the recent pseudo-label-generation-based methods, such as PUL and CAMEL. This is attributed to the available intra-camera label information and the multi-camera adversarial learning method, which can effectively align the distributions from different cameras. Particularly, compared with the recent ECN~\cite{zhong2019invariance}, the state-of-the-art method utilizing both a labeled source domain and images generated by a GAN-based model, our ACAN-OCE still gains $4.7\%$ ($47.7$ vs. $43.0$), $4.7\%$ ($45.1$ vs. $40.4$) and $2.4\%$ ($12.6$ vs. $10.2$) in mAP on Market1501, DukeMTMC-reID and MSMT17, respectively. In particular, the proposed method does not use any extra data, such as labeled source domains or images generated by GAN-based methods~\cite{wei2018person,deng2018image}. Moreover, we also compare our method with the state-of-the-art unsupervised video-based Re-ID approaches on MARS and DukeMTMC-SI-Tracklet, which include GRDL~\cite{kodirov2016person}, UnKISS~\cite{khan2016unsupervised}, RACE~\cite{ye2018robust}, Stepwise~\cite{liu2017stepwise}, DGM+MLAPG~\cite{ye2017dynamic}, DGM+IDE~\cite{ye2017dynamic}, DAL~\cite{DBLP:conf/bmvc/ChenZG18}, TAUDL~\cite{Li_2018_ECCV} and UTAL~\cite{li2019unsupervised}. Among them, except for GRDL and UnKISS, all other methods are based on deep features. The results are reported in Table~\ref{tab02}. Compared with the recent UTAL, which has shown its superiority in the unsupervised video-based Re-ID task, our semi-supervised method (ACAN-GRL or ACAN-OCE) achieves a significant improvement in mAP and CMC accuracy. In particular, ACAN-GRL outperforms UTAL by $13.9\%$ ($49.1$ vs. $35.2$) and $9.3\%$ ($59.2$ vs. $49.9$) in mAP and Rank-1 accuracy on MARS, respectively. This further validates the effectiveness of the proposed methods. 
\begin{table}[htbp] \centering \caption{Comparison with the state-of-the-art unsupervised methods on two video datasets including MARS and DukeMTMC-SI-Tracklet (Duke-T).} \begin{tabular}{|c|c|cccc|} \toprule \midrule & Method & mAP & Rank-1 & Rank-5 & Rank-20 \\ \midrule \multirow{11}[1]{*}{\begin{sideways}MARS\end{sideways}} & GRDL~\cite{kodirov2016person} & 9.6 & 19.3 & 33.2 & 46.5 \\ & UnKISS~\cite{khan2016unsupervised} & 10.6 & 22.3 & 37.4 & 53.6 \\ & RACE~\cite{ye2018robust} & 24.5 & 43.2 & 57.1 & 67.6 \\ & Stepwise~\cite{liu2017stepwise} & 10.5 & 23.6 & 35.8 & 44.9 \\ & DGM+MLAPG~\cite{ye2017dynamic} & 11.8 & 24.6 & 42.6 & 57.2 \\ & DGM+IDE~\cite{ye2017dynamic} & 21.3 & 36.8 & 54.0 & 68.5 \\ & DAL~\cite{DBLP:conf/bmvc/ChenZG18} & 21.4 & 46.8 & 63.9 & 77.5 \\ & TAUDL~\cite{Li_2018_ECCV} & 29.1 & 43.8 & 59.9 & 72.8 \\ & UTAL~\cite{li2019unsupervised} & 35.2 & 49.9 & 66.4 & 77.8 \\ \cmidrule{2-6} & ACAN-GRL (ours) & \textbf{49.1} & \textbf{59.2} & \textbf{77.1} & \textbf{86.7} \\ & ACAN-OCE (ours) & 47.5 & 57.7 & 75.1 & 84.0 \\ \midrule \midrule \multirow{4}[1]{*}{\begin{sideways}Duke-T\end{sideways}} & TAUDL~\cite{Li_2018_ECCV} & 20.8 & 26.1 & 42.0 & 57.2 \\ & UTAL~\cite{li2019unsupervised} & 36.6 & 43.8 & 62.8 & 76.5 \\ \cmidrule{2-6} & ACAN-GRL (ours) & \textbf{43.0} & \textbf{52.0} & \textbf{71.0} & \textbf{82.0} \\ & ACAN-OCE (ours) & 40.3 & 50.4 & 68.0 & 81.2 \\ \bottomrule \end{tabular}% \label{tab02}% \end{table}% \subsection{Comparison with Methods in Semi-supervised Setting} \label{sec:EXP-SS} In this section, we compare our method with the unsupervised video-based person Re-ID methods working in a semi-supervised setting (i.e., given the intra-camera label information). Both TAUDL~\cite{Li_2018_ECCV} and UTAL~\cite{li2019unsupervised} consist of intra-camera tracklet discrimination learning and cross-camera tracklet association learning. 
In the semi-supervised setting, they construct the connection of different cameras by self-discovering cross-camera positive matching pairs. Different from both TAUDL~\cite{Li_2018_ECCV} and UTAL~\cite{li2019unsupervised}, we address the cross-camera problem from the data-distribution alignment perspective. We show the experimental results in Tables~\ref{tab03} and~\ref{tab04}. As seen, on all image datasets (Table~\ref{tab03}), the proposed method performs better than TAUDL and UTAL. For example, the Rank-1 accuracy of ACAN-OCE gains $3.0\%$ ($72.2$ vs. $69.2$), $5.3\%$ ($67.6$ vs. $62.3$) and $1.6\%$ ($33.0$ vs. $31.4$) over UTAL on Market1501, DukeMTMC-reID and MSMT17, respectively. Besides, on video datasets, the proposed method also achieves competitive performance. Although the results are slightly worse than those of UTAL on MARS, the proposed method still obtains a large improvement on DukeMTMC-SI-Tracklet, where ACAN-GRL improves on UTAL by $4.0\%$ ($43.0$ vs. $39.0$) and $5.6\%$ ($52.0$ vs. $46.4$) in mAP and Rank-1 accuracy, respectively. We further demonstrate the effectiveness of multi-camera adversarial learning on all datasets in Section~\ref{sec:EXP-EDC}. \newcommand{\PreserveBackslash}[1]{\let\temp=\\#1\let\\=\temp} \newcolumntype{C}[1]{>{\PreserveBackslash\centering}p{#1}} \newcolumntype{R}[1]{>{\PreserveBackslash\raggedleft}p{#1}} \newcolumntype{L}[1]{>{\PreserveBackslash\raggedright}p{#1}} \begin{table}[htbp] \centering \caption{Comparison with TAUDL and UTAL in the semi-supervised setting on Market1501, DukeMTMC-reID (Duke) and MSMT17.
The best performance is shown in \textbf{bold}.} \begin{tabular}{|C{2.2cm}|C{0.3cm}C{0.9cm}|C{0.3cm}C{0.9cm}|C{0.3cm}C{0.9cm}|} \toprule \midrule \multirow{2}[1]{*}{Method} & \multicolumn{2}{c|}{Market1501} & \multicolumn{2}{c|}{Duke} & \multicolumn{2}{c|}{MSMT17} \\ \cmidrule{2-7} & mAP & Rank-1 & mAP & Rank-1 & mAP & Rank-1 \\ \midrule TAUDL~\cite{Li_2018_ECCV} & 41.2 & 63.7 & 43.5 & 61.7 & 12.5 & 28.4 \\ UTAL~\cite{li2019unsupervised} & 46.2 & 69.2 & 44.6 & 62.3 & \textbf{13.1} & 31.4 \\ \midrule ACAN-GRL (ours) & \textbf{50.6} & \textbf{73.3} & \textbf{46.6} & 65.1 & 11.2 & 27.1 \\ ACAN-OCE (ours) & 47.7 & 72.2 & 45.1 & \textbf{67.6} & 12.6 & \textbf{33.0} \\ \bottomrule \end{tabular}% \label{tab03}% \end{table}% \begin{table}[htbp] \centering \caption{Comparison with UTAL in the semi-supervised setting on MARS and DukeMTMC-SI-Tracklet (Duke-T).} \begin{tabular}{|c|cc|cc|} \toprule \midrule \multirow{2}[1]{*}{Method} & \multicolumn{2}{c|}{MARS} & \multicolumn{2}{c|}{Duke-T} \\ \cmidrule{2-5} & mAP & Rank-1 & mAP & Rank-1 \\ \midrule UTAL~\cite{li2019unsupervised} & \textbf{51.7} & \textbf{59.5} & 39.0 & 46.4 \\ \midrule ACAN-GRL (ours) & 49.1 & 59.2 & \textbf{43.0} & \textbf{52.0} \\ ACAN-OCE (ours) & 47.5 & 57.7 & 40.3 & 50.4 \\ \bottomrule \end{tabular}% \label{tab04}% \end{table}% \subsection{Effectiveness of Different Components in ACAN}\label{sec:EXP-EDC} To sufficiently validate the efficacy of different components in the proposed adversarial camera alignment network, we conduct experiments on five large-scale datasets. The experimental results are reported in Table \ref{tab05}. Firstly, on all datasets, using multi-camera adversarial learning to align all cameras significantly improves the performance of the model that conducts only the intra-camera discrimination task (i.e., ``only $\mathcal{L}_{\mathrm{Triplet}}$'' in Table \ref{tab05}). For example, ACAN-GRL can improve $14.8\%$ ($49.1$ vs. $34.3$) and $12.3\%$ ($43.0$ vs.
$30.7$) in mAP on MARS and DukeMTMC-SI-Tracklet. Also, the Rank-1 accuracy can be improved by $13.7\%$ ($59.2$ vs. $45.5$) and $13.6\%$ ($52.0$ vs. $38.4$). This demonstrates the effectiveness of the proposed MCAL. Secondly, compared with the results of PUL~\cite{fan2018unsupervised}, TFusion~\cite{lv2018unsupervised} and TJ-AIDL~\cite{wang2018transferable} in Table~\ref{tab01}, which are based on pseudo-label generation, ``only $\mathcal{L}_{\mathrm{Triplet}}$'' still achieves competitive performance. For example, ``only $\mathcal{L}_{\mathrm{Triplet}}$'' improves on TJ-AIDL by $7.9\%$ ($34.4$ vs. $26.5$) and $18.7\%$ ($41.7$ vs. $23.0$) in mAP on Market1501 and DukeMTMC-reID, respectively. This shows that using intra-camera labels indeed contributes to our task. Lastly, the OCE-based scheme is better than the GRL-based scheme on MSMT17, collected from 15 cameras, due to the limitation of GRL (i.e., maximizing Eq.~(\ref{eq12}) by freely assigning camera labels), as analyzed in Section~\ref{SEC:AC}. However, for datasets with few cameras, the OCE scheme may slightly damage the original intrinsic structure of the data distribution due to the strict label-assignment strategy. We further discuss the OCE and GRL schemes in Section~\ref{sec:EXP-FA}. \begin{table}[htbp] \centering \caption{Evaluation of different components of the proposed ACAN on three image datasets (i.e., Market1501, DukeMTMC-reID and MSMT17) and two video datasets (i.e., DukeMTMC-SI-Tracklet (Duke-T) and MARS).
The best performance is shown in \textbf{bold}.} \begin{tabular}{|c|c|ccc|} \toprule \midrule Dataset & Method & mAP & Rank-1 & \multicolumn{1}{l|}{Rank-5} \\ \midrule \multirow{3}[1]{*}{Market1501} & Only $\mathcal{L}_{\mathrm{Triplet}}$ & 34.4 & 58.1 & 72.5 \\ \cmidrule{2-5} & ACAN-GRL (ours) & \textbf{50.6} & \textbf{73.3} & \textbf{87.6} \\ & ACAN-OCE (ours) & 47.7 & 72.2 & 86.3 \\ \midrule \midrule \multirow{3}[1]{*}{Duke} & Only $\mathcal{L}_{\mathrm{Triplet}}$ & 41.7 & 60.1 & 75.1 \\ \cmidrule{2-5} & ACAN-GRL (ours) & \textbf{46.6} & 65.1 & 80.6 \\ & ACAN-OCE (ours) & 45.1 & \textbf{67.6} & \textbf{81.2} \\ \midrule \midrule \multirow{3}[1]{*}{MSMT17} & Only $\mathcal{L}_{\mathrm{Triplet}}$ & 10.0 & 24.8 & 38.1 \\ \cmidrule{2-5} & ACAN-GRL (ours) & 11.2 & 27.1 & 40.9 \\ & ACAN-OCE (ours) & \textbf{12.6} & \textbf{33.0} & \textbf{48.0} \\ \midrule \midrule \multirow{3}[1]{*}{MARS} & Only $\mathcal{L}_{\mathrm{Triplet}}$ & 34.3 & 45.5 & 61.0 \\ \cmidrule{2-5} & ACAN-GRL (ours) & \textbf{49.1} & \textbf{59.2} & \textbf{77.1} \\ & ACAN-OCE (ours) & 47.5 & 57.7 & 75.1 \\ \midrule \midrule \multirow{3}[1]{*}{Duke-T} & Only $\mathcal{L}_{\mathrm{Triplet}}$ & 30.7 & 38.4 & 55.5 \\ \cmidrule{2-5} & ACAN-GRL (ours) & \textbf{43.0} & \textbf{52.0} & \textbf{71.0} \\ & ACAN-OCE (ours) & 40.3 & 50.4 & 68.0 \\ \bottomrule \end{tabular}% \label{tab05}% \end{table}% \subsection{Further Analysis}\label{sec:EXP-FA} \textbf{Algorithm Convergence.} To investigate the convergence of our algorithm, we record the mAP and Rank-1 accuracy of ACAN-GRL and ACAN-OCE during training on a validation set of DukeMTMC-reID in Fig.~\ref{fig6}. As seen, our methods almost converge after 200 epochs.
\textbf{Parameter Sensitivity.} To study the sensitivity of $\lambda$ in Eq.~(\ref{eq10}), which is the parameter trading off the camera-alignment task and the intra-camera discrimination task, we sample values in $\left\{0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0\right\}$ and perform the experiments with ACAN-OCE on DukeMTMC-reID. All the results are shown in Fig.~\ref{fig4}; we find that the Rank-1 accuracy first increases and then decreases as $\lambda$ grows. Finally, we set $\lambda = 1$ in all experiments for all datasets. Also, to analyze the sensitivity of the dimension of the discriminator in our network, we sample values in $\left\{64, 128, 256, 512\right\}$ and perform the experiments with ACAN-OCE on DukeMTMC-reID. Fig.~\ref{fig5} shows the experimental results. As seen, we obtain good performance when setting the dimension to 128. Consequently, we utilize a 128-dimensional discriminator (i.e., FC layer) in all experiments for all datasets. \textbf{OCE vs. ACE.} As analyzed in Section~\ref{SEC:AC-2}, we prefer the OCE-based scheme to the ACE-based scheme. In this part, we conduct a comparison between ACE and OCE in Table~\ref{tab06}. As shown, the OCE-based scheme outperforms the ACE-based scheme on all datasets, which is consistent with our previous analysis. For example, compared with the ACE scheme, the OCE scheme increases $7.9\%$ ($47.7$ vs. $39.8$) and $8.2\%$ ($72.2$ vs. $64.0$) in mAP and Rank-1 on Market1501, respectively. This indicates that the ACE-based scheme is less effective at reducing the camera-level discrepancy because i) it is difficult to achieve the ideal minimization in Eq.~(\ref{eq22}); and ii) since the ACE-based scheme does not follow the rule of typical GANs~\cite{goodfellow2014generative} in designing the discriminator loss, it cannot effectively achieve the multi-camera adversarial task. The concrete analysis is given in Section~\ref{SEC:AC-2}.
\textbf{Distribution Visualization.} We examine the inter-camera (cross-camera) discrepancy to validate the effectiveness of GRL and OCE. In this experiment, to measure the inter-camera discrepancy, we use $d_{\mathrm{inter-camera}}=\frac{1}{C}\sum_{c=1}^{C}\left \| \overline{X}_{c}-\overline{X} \right \|_{2}$, where $\overline{X}_{c}$ is the sample mean of the $c$-th camera class, $\overline{X}$ denotes the mean of all samples, and $C$ is the total number of cameras. These distances are reported in Table~\ref{tab07}, where Baseline denotes ``Only $\mathcal{L}_{\mathrm{Triplet}}$'' in Eq.~(\ref{eq10}). Firstly, both the GRL-based and OCE-based schemes achieve smaller inter-camera distances than Baseline, because Baseline does not attempt to reduce the cross-camera discrepancy. This demonstrates that the proposed MCAL makes the cameras as aligned as possible. Besides, this experiment also shows that the OCE-based scheme achieves the smallest distance, demonstrating the best capability of reducing the discrepancy across cameras on Market1501, DukeMTMC-reID and MSMT17. This is attributed to the proposed camera-label assigning scheme. Additionally, we visualize the data distributions obtained by the feature representations from Baseline, ACAN-GRL and ACAN-OCE in Fig.~\ref{fig7}. The result further illustrates the efficacy of the camera-alignment module in the proposed framework. As seen, on MSMT17, the blue samples are more dispersed by the OCE scheme when compared with Baseline and the GRL scheme.
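For concreteness, the measure $d_{\mathrm{inter-camera}}$ can be computed directly from the extracted features. The following is a minimal sketch; the function name and the (samples $\times$ features, camera-id) array layout are our own assumptions:

```python
import numpy as np

def inter_camera_discrepancy(features, cam_ids):
    """d = (1/C) * sum_c || mean(X_c) - mean(X) ||_2 over the C camera classes."""
    features = np.asarray(features, dtype=float)
    cam_ids = np.asarray(cam_ids)
    global_mean = features.mean(axis=0)
    dists = [np.linalg.norm(features[cam_ids == c].mean(axis=0) - global_mean)
             for c in np.unique(cam_ids)]
    return float(np.mean(dists))
```

A smaller value indicates more tightly aligned per-camera feature distributions, which is how Table~\ref{tab07} should be read.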
Based on the above results and analysis, we observe that \textit{the OCE scheme achieves better camera alignment than the GRL scheme, but this does not mean that using OCE obtains better performance than the GRL scheme in our unsupervised cross-camera person Re-ID.} For example, on Market1501, although the OCE scheme reduces the data distribution discrepancy better than the GRL scheme in Table~\ref{tab07}, the latter obtains better Re-ID performance than the former in Table~\ref{tab05}. The main reason has been discussed in Section~\ref{SEC:AC-2}. \textbf{Camera Confusion Matrix.} To further compare the difference between the GRL-based and OCE-based schemes, we visualize the camera confusion matrices of GRL and OCE on Market1501 (Market), DukeMTMC-reID (Duke) and MSMT17 (MSMT), as shown in Fig.~\ref{fig8}. As seen, the OCE-based scheme approximately confuses all cameras, while the GRL-based scheme may only mix local cameras, especially for those datasets with more cameras, such as DukeMTMC-reID collected from 8 cameras and MSMT17 captured from 15 cameras. The main reason is that maximizing the discriminator loss via GRL is approximately equivalent to freely assigning any camera label to an image except the camera to which it belongs. In addition, the confusion matrices of the OCE-based scheme are consistent with the analysis in \textit{Corollary $1$} of Section~\ref{SEC:AC-2}, i.e., if the discriminator outputs equal probability for all cameras, all images can be mapped into a shared space. Compared with GRL, the confusion matrices of OCE in Fig.~\ref{fig8} tend to achieve this goal. As seen in Fig.~\ref{fig8}, on Duke, the OCE scheme outputs a more balanced predicted probability for each camera class, while the GRL scheme assigns almost all images to one camera. This further confirms the efficacy of the OCE scheme.
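A camera confusion matrix of the kind shown in Fig.~\ref{fig8} can be formed by averaging the discriminator's predicted camera distribution over the images that truly belong to each camera. This is a sketch under the assumption that the softmax outputs are available as a row-wise probability array; the function name is ours:

```python
import numpy as np

def camera_confusion_matrix(probs, cam_ids, num_cams):
    """Row c holds the mean predicted camera distribution for images from camera c."""
    probs = np.asarray(probs, dtype=float)
    cam_ids = np.asarray(cam_ids)
    conf = np.zeros((num_cams, num_cams))
    for c in range(num_cams):
        conf[c] = probs[cam_ids == c].mean(axis=0)
    return conf
```

Under perfect camera confusion (\textit{Corollary 1}), every row tends to the uniform distribution $1/C$.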
\begin{table}[htbp] \centering \caption{Evaluation of the ACE scheme and the OCE scheme on three image datasets (i.e., Market1501, DukeMTMC-reID and MSMT17) and two video datasets (i.e., DukeMTMC-SI-Tracklet (Duke-T) and MARS). The best performance is shown in \textbf{bold}.} \begin{tabular}{|c|c|cccc|} \toprule \midrule Dataset & Method & mAP & Rank-1 & Rank-5 & Rank-10 \\ \midrule \multirow{2}[1]{*}{Market1501} & ACE & 39.8 & 64.0 & 80.6 & 86.2 \\ \cmidrule{2-6} & OCE (ours) & \textbf{47.7} & \textbf{72.2} & \textbf{86.3} & \textbf{90.4} \\ \midrule \midrule \multirow{2}[1]{*}{Duke} & ACE & 16.0 & 37.3 & 52.0 & 59.4 \\ \cmidrule{2-6} & OCE (ours) & \textbf{45.1} & \textbf{67.6} & \textbf{81.2} & \textbf{85.2} \\ \midrule \midrule \multirow{2}[1]{*}{MSMT17} & ACE & 3.9 & 13.0 & 23.3 & 29.1 \\ \cmidrule{2-6} & OCE (ours) & \textbf{12.6} & \textbf{33.0} & \textbf{48.0} & \textbf{54.7}\\ \midrule \midrule \multirow{2}[1]{*}{MARS} & ACE & 40.9 & 53.9 & 71.4 & 77.1 \\ \cmidrule{2-6} & OCE (ours) & \textbf{47.5} & \textbf{57.7} & \textbf{75.1} & \textbf{79.9} \\ \midrule \midrule \multirow{2}[1]{*}{Duke-T} & ACE & 26.5 & 38.7 & 53.8 & 60.0 \\ \cmidrule{2-6} & OCE (ours) & \textbf{40.3} & \textbf{50.4} & \textbf{68.0} & \textbf{74.1} \\ \bottomrule \end{tabular}% \label{tab06}% \end{table}% \begin{figure} \centering \includegraphics[width=5cm]{fig/fig4} \caption{Evaluation for different $\lambda$ in Eq.~(\ref{eq10}) on DukeMTMC-reID. Note that this result is reported on the test set.} \label{fig4} \end{figure} \begin{figure} \centering \includegraphics[width=5cm]{fig/fig5} \caption{Experimental results of different dimensional discriminators (i.e., different dimensional FC layers) on DukeMTMC-reID.
Note that this result is reported on the test set.} \label{fig5} \end{figure} \begin{figure} \centering \subfigure[ACAN-GRL]{ \includegraphics[width=4cm]{fig/fig6_1.pdf} } \subfigure[ACAN-OCE]{ \includegraphics[width=4cm]{fig/fig6_2.pdf} } \caption{Convergence curves of ACAN-GRL and ACAN-OCE on DukeMTMC-reID.} \label{fig6} \end{figure} \begin{table}[htbp] \centering \caption{The discrepancy of data distribution across different cameras on Market1501, DukeMTMC-reID and MSMT17. Note that a smaller value indicates better performance in this table. The best performance is shown in \textbf{bold}.} \begin{tabular}{|c|c|c|c|} \toprule \midrule Method & Market1501 & ~~Duke~~ & MSMT17 \\ \midrule Baseline ($\times$100) & 9.69 & 8.33 & 10.85 \\ GRL ($\times$100) & 3.54 & 6.21 & 8.59 \\ OCE (ours) ($\times$100) & \textbf{2.36} & \textbf{2.45} & \textbf{2.16} \\ \bottomrule \end{tabular}% \label{tab07}% \end{table}% \begin{figure} \centering \subfigure[Duke (Baseline)]{ \includegraphics[width=4cm]{fig/fig7_1.pdf} } \subfigure[MSMT (Baseline)]{ \includegraphics[width=4cm]{fig/fig7_2.pdf} } \subfigure[Duke (ACAN-GRL)]{ \includegraphics[width=4cm]{fig/fig7_3.pdf} } \subfigure[MSMT (ACAN-GRL)]{ \includegraphics[width=4cm]{fig/fig7_4.pdf} } \subfigure[Duke (ACAN-OCE)]{ \includegraphics[width=4cm]{fig/fig7_5.pdf} } \subfigure[MSMT (ACAN-OCE)]{ \includegraphics[width=4cm]{fig/fig7_6.pdf} } \caption{Visualization of the distributions of two datasets via t-SNE~\cite{maaten2008visualizing}. The features of images are extracted by Baseline ((a) and (b)), ACAN-GRL ((c) and (d)) and ACAN-OCE ((e) and (f)). Different colors denote the images from different cameras. In detail, (a), (c) and (e) are eight different cameras on DukeMTMC-reID; (b), (d) and (f) are fourteen different cameras on MSMT17. Note that Baseline denotes ``Only $\mathcal{L}_{\mathrm{Triplet}}$'' in Eq.~(\ref{eq10}). Particularly, the visualization corresponds to Table~\ref{tab07}.
Best viewed by vertical contrast.} \label{fig7} \end{figure} \begin{figure} \centering \subfigure[Market (ACAN-GRL)]{ \includegraphics[width=4cm]{fig/fig8_1.pdf} } \subfigure[Market (ACAN-OCE)]{ \includegraphics[width=4cm]{fig/fig8_2.pdf} } \subfigure[Duke (ACAN-GRL)]{ \includegraphics[width=4cm]{fig/fig8_3.pdf} } \subfigure[Duke (ACAN-OCE)]{ \includegraphics[width=4cm]{fig/fig8_4.pdf} } \subfigure[MSMT (ACAN-GRL)]{ \includegraphics[width=4cm]{fig/fig8_5.pdf} } \subfigure[MSMT (ACAN-OCE)]{ \includegraphics[width=4cm]{fig/fig8_6.pdf} } \caption{Visualization of the camera confusion matrices on Market1501 (Market), DukeMTMC-reID (Duke) and MSMT17 (MSMT). In detail, (a) and (b) are six different cameras on Market1501; (c) and (d) are eight different cameras on DukeMTMC-reID; (e) and (f) are fourteen different cameras on MSMT17. Best viewed by horizontal contrast.} \label{fig8} \end{figure} \section{Conclusion}\label{s-conclusion} In this paper, we focus on a new person re-identification task named unsupervised cross-camera person Re-ID, which assumes that label information is available within the images from the same camera, while no inter-camera label information is provided. From the perspective of reducing the cross-camera data distribution discrepancy, we propose a novel adversarial camera alignment network (ACAN) for this task, which can map all images from different cameras into a shared subspace. To realize the camera alignment task, we put forward a multi-camera adversarial learning method, achieved by two strategies, the GRL-based and OCE-based schemes, and give the corresponding theoretical analysis. Extensive experiments show the superiority of the proposed ACAN and confirm the efficacy of all components in ACAN. Particularly, to deal with the proposed task, a more intuitive strategy is generating pseudo-labels to explore the relationship between cross-camera samples.
In future work, we will integrate pseudo-label-based methods to further enhance the generalization ability of person Re-ID with within-camera label information in real-world applications. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
\section{Introduction} The detection of binary black hole (BBH) mergers via gravitational wave (GW) emission became routine by the O3 observational run of the LIGO and Virgo collaborations. To date, tens of BBH mergers have been detected, with an overall merger rate density of $\mathcal{R}_{\rm BBH}=23.9^{+14.9} _{-8.6}$\rm {Gpc$^{-3}$\,yr$^{-1}$} \citep{Abbott2020b}. The identification of relevant channels which lead to mergers via GW emission is an ongoing endeavour which spans a number of subfields, including orbital dynamics, stellar evolution, and dynamics on the scale of galaxies. Channels for BBH mergers may be grouped into four broad categories. The first, isolated binary stellar evolution of massive stars \citep[e.g.,][]{Tutukov1973,Tutukov1993,Lipunov1997,Bethe1998,PortegiesZwart1998,Kalogera2000,Mandel2010,Voss2003,Kalogera2007,Belczynski2008,Dominik2012,Dominik2013,Dominik2015,deMink2015,Belczynski2016,Eldridge2017,Giacobbo2018,Olejak2020}, proposes that some massive stellar binaries evolve to short-period binaries prior to either star forming a BH. One type of such evolution occurs during one or two common envelope episodes, in which one star swells during the giant phase, imparting drag on the other and shrinking their mutual orbit. The total orbital energy loss is directly related to the amount of energy transferred to the envelope of the giant star. If the energy transfer is too efficient then the binary merges before the objects turn into BHs; if the transfer is too inefficient then the binary does not lose enough orbital energy to merge via GW emission. The result is a short-period stellar binary which then evolves to a BBH and merges via GW emission within a Hubble time. Studies of this channel predict a delay-time distribution $\propto t^{-1}$ that starts $10-100 \rm{Myr}$ after star formation. 
They also predict no measurable eccentricity in the LIGO detection band (due to circularisation during the binary interaction phase and the subsequent circularisation from gravitational radiation) and merger rate densities of $\sim$10$^{-2}$--10$^{3}$\,Gpc$^{-3}$\,yr$^{-1}$. Another isolated binary formation scenario is chemically homogeneous stellar evolution \citep{Marchant2016,deMink2016,Mandel2016}. In this scenario, a massive binary that is close to contact experiences intense internal mixing that keeps the stars chemically homogeneous while their cores burn hydrogen. The hydrogen in each star is thus nearly exhausted, and a common envelope phase is avoided. The predicted BBH merger rate is up to $500~{\rm Gpc}^{-3}~{\rm yr}^{-1}$ \citep{deMink2016}. A second merger channel is dynamical in nature and proposes that existing BBHs are induced to merge in dense environments such as galactic centers, AGN accretion disks, or globular clusters. In these settings, BBHs experience strong gravitational interactions with individual stars or high-multiplicity systems, and these interactions tend to harden the target binaries and may increase their eccentricities \cite[][]{Sigurdsson1993,Kulkarni1993,PortegiesZwart2000,Madau2001,Miller2002,Gueltekin2004,Gueltekin2006,Miller2009,McKernan2012,Samsing2014,Rodriguez2016,Stone2017,Rodriguez2018,Fragione2018,Banerjee2018,Hamers2018,Leigh2018,Rodriguez2021}. Models of these interactions predict merger rate densities of $\sim$2--25\,Gpc$^{-3}$\,yr$^{-1}$. The third channel concerns mergers of initially wide, isolated systems, either binaries or triples, in the field of the host galaxy \citep {Michaely2019,Michaely2020,Michaely2020b}. For wide systems, the field of the host galaxy is considered a collisional environment due to frequent flyby interactions with field stars.
These interactions are capable of exciting the eccentricity (in the case of binaries) or outer eccentricity (in the case of triples), with the result that mergers occur via increased GW emission (binaries) or three-body instabilities (triples). Predicted BBH merger rate densities for this channel are $\sim$1--100\,Gpc$^{-3}$\,yr$^{-1}$. The fourth merger channel, and the focus of this paper, is secular evolution in hierarchical triple systems. These systems reside either in the field of the host galaxy \citep[e.g.,][]{Antonini2016,Antonini2017,Silsbee2017} or in dense environments \citep[][]{Miller2002a,Antonini2012,Antonini2014,Kimpson2016,Petrovich2017,Samsing2018,Hoang2018,Fragione2019,Hamilton2019,Martinez2020,Wang2020}. In this channel, a BBH experiences secular effects due to its tertiary companion in the form of the Lidov-Kozai resonance \citep{Lidov1962,Kozai1962,Harrington1968,Lidov1976,Innanen1997,Ford2000,Blaes2002}; for a recent review, see \citet{Naoz2016}. In this resonance, the eccentricity of the BBH experiences cyclic changes which boost the GW emission rate of the inner binary and lead to a merger. Predicted merger rate densities due to this channel are $\sim$0.5--15\,Gpc$^{-3}$\,yr$^{-1}$. Distinguishing the various channels for producing BBH mergers is important, given that each may yield mergers with particular observational signatures and with different spatial or temporal distributions. BBH mergers in open clusters have been studied previously via $N$-body simulations \citep{Banerjee2018,Kumamoto2019,DiCarlo2019,DiCarlo2020,Gonzalez2020,Weatherford2021}, which predict merger rate densities of $\sim$0.3\,Gpc$^{-3}$\,yr$^{-1}$ in these environments. \citet{Michaely2018} found that a small fraction, up to a fraction of a percent, of mergers are expected to occur extremely close in time to the formation of the second BH, specifically within years to decades following the supernova.
Open clusters are loosely bound groups of young stars with stellar number densities $n_* \sim 0.1$--$10$\,pc$^{-3}$ and typical velocity dispersion $\sigma \sim 1$--$5$\,km\,s$^{-1}$ \citep{moraux2016}. We assume that effectively all star formation occurs in these clusters \citep{Lada2003}, which remain bound for lifetimes ranging from $\sim$100$\, \rm Myr$ for the sparsest examples to a few Gyr for the densest clusters \citep{moraux2016}. In this work, we apply semi-analytic modeling and Monte Carlo simulations to the hierarchical triple channel, studying a set of models describing different initial triple system populations. For each model, we calculate the total BBH merger rate density as well as the cumulative distribution of mergers as a function of time since star formation; this is known as the delay-time distribution (DTD). In particular, we calculate the fraction of mergers which occur while a triple still resides in its birth cluster. The main focus of this work is to estimate the fraction of mergers in the open cluster phase out of the total mergers induced by the secular evolution. We begin by describing our semi-analytic treatment of BBH mergers induced by the secular Lidov-Kozai resonance in Section \ref{sec:Kozai}. In Section \ref{sec:Numerics}, we then establish our numerical approach and the different population models considered. Section \ref{sec:Results} presents the simulation results, including the DTD and merger rate for each model. Section \ref{sec:Discussion} discusses our model assumptions and limitations, and in Section \ref{sec:Conclusions}, we summarise and offer broader context for our results. \section{BBH mergers from hierarchical triples} \label{sec:Kozai} In the following section, we briefly describe the secular evolution of triple systems under the Lidov-Kozai resonance. For a more detailed description of this mechanism, see \citet{Naoz2016}. 
\subsection{Newtonian treatment} A hierarchical triple system is composed of an inner binary with masses denoted $m_1, m_2$ and a distant tertiary of mass $m_3$. The inner binary is characterised by its orbital semimajor axis (SMA) $a_1$ and eccentricity $e_1$. The center of mass of the inner binary then hierarchically constitutes an additional two-body system with the tertiary; this system is referred to as the outer binary, with SMA $a_2$ and eccentricity $e_2$. Each binary defines a unique orbital plane, and the angle between these two planes is the inclination $I$ associated with the triple system. Within these planes, the orientations of the inner and outer orbits are given by their arguments of pericenter $\omega_1$ and $\omega_2$, respectively. See Fig. \ref{fig:illustration} for a diagram of a general hierarchical triple system. \begin{figure} \includegraphics[width=1\columnwidth]{hierachical_triple.PNG} \caption{Illustration of a hierarchical triple system. The inner binary consists of two objects with masses $m_1$ and $m_2$ whose orbit is defined by a SMA $a_1$ and eccentricity $e_1$. In this study we set $e_1=0$. The outer binary consists of the tertiary of mass $m_3$ and the center of mass of the inner binary. The orbit of the outer binary is defined by a SMA $a_2$ and eccentricity $e_2$. The angle between the planes of the inner and outer binaries is the system inclination $I$.} \label{fig:illustration} \end{figure} A three-body system is chaotic when the system masses and separations are similar. Such a system thus tends to break apart on dynamical timescales. Hence, on grounds of system stability, most astrophysical triple systems are hierarchical in scale; i.e., $a_1 \ll a_2$. 
This hierarchy of spatial scales sets a corresponding hierarchy of timescales for these systems: the inner binary orbital period $P_1$ is much shorter than the outer binary orbital period $P_2$, and any dynamical evolution of the system occurs on timescales much longer than both. When the secular approximation is applied to hierarchical triple systems, one can show that the orbital energies of each binary are conserved quantities, and therefore the SMAs $a_1$ and $a_2$ are constant in time. Long-term changes to the system do occur, however, due to mutual torque and angular momentum transfer between the inner and outer binaries. The result of this secular evolution is simultaneous oscillations of the inner eccentricity $e_1$ and system inclination $I$, such that the total angular momentum of the triple system is conserved; at higher order, the outer orbit can evolve as well. Peak eccentricity in the inner binary occurs at the time of minimum inclination, and vice versa, and these oscillations are referred to as Lidov-Kozai cycles \citep{Lidov1962,koz62}. Following \citet{Miller2002a} and \citet{VanLandingham2016}, we define a conserved quantity derived from the quadrupole-order Hamiltonian for a hierarchical triple system: \begin{equation} \label{W} W_{\rm N} = -2\epsilon + \epsilon \cos^2I + 5(1 - \epsilon) \sin^2\omega_1 (\cos^2I - 1)\, , \end{equation} where $\epsilon\equiv 1-e_1^2$. The minimum value of $\epsilon$, which corresponds to the maximum value of $e_1$, occurs when $\omega_1=\pi/2$. Hence, knowing the initial values $\omega_{1,0}$ and $e_{1,0}$, one can exploit the conservation of $W_{\rm N}$ to calculate the maximum value of the inner binary eccentricity, denoted $e_{\rm max}$. We note here that the octupole-order result is different \citep{Harrington1968,Ford2000,Blaes2002,Thompson2011,Naoz2013,Michaely2014,Naoz2016} but is beyond the scope of this work.
\citet{Innanen1997} provide a concise and useful relation between the initial inclination and the maximal eccentricity due to the Lidov-Kozai resonance in the quadrupole approximation when the tertiary dominates the system angular momentum: \begin{equation} \label{e_max_Newton} e_{\rm max}=\left( 1-\frac{5}{3}\cos^2 I_0\right)^{1/2}\, , \end{equation} which implies that for the restricted three-body problem, the inner binary eccentricity tends to unity if $I_0=\pi/2$. The growth of the inner eccentricity to its maximum value over long timescales is a consequence of coherent perturbations by the potential of the tertiary, specifically inner binary precession. If the inner eccentricity is sufficiently high, one might expect the inner binary's components to interact and thus to disrupt this precession. In the following subsection, we consider such an effect in general relativity (GR), namely GR pericenter precession in the inner binary. \subsection{Post-Newtonian treatment} In a triple system whose inner binary evolves to sufficiently high eccentricity, GR precession of the inner binary pericenter becomes nonnegligible. This precession interferes with the coherent perturbations due to the tertiary and suppresses the Lidov-Kozai resonance. Following \citet{Miller2002a}, we account for this quenching effect of GR precession by adding to equation \eqref{W} the following post-Newtonian term: \begin{equation} \label{W_PN} W_{\rm PN}=\frac{8}{\sqrt{\epsilon}}\frac{M_{1}}{m_{3}}\left(\frac{b_{2}}{a_{1}}\right)^{3}\frac{GM_{1}}{a_{1}c^{2}}\equiv\theta_{{\rm PN}}\epsilon^{-1/2}\, . \end{equation} Here $M_1 \equiv m_1+m_2$ is the total mass of the inner binary, $b_2=a_2\left(1-e_2 ^2\right)^{1/2}$ is the semi-minor axis of the outer binary, $G$ is the Newtonian gravitational constant, and $c$ is the speed of light.
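Equation \eqref{e_max_Newton} is straightforward to evaluate numerically. The sketch below (function name ours) clips the non-resonant regime $\cos^2 I_0 > 3/5$, where the quadrupole-order resonance does not excite the eccentricity, to $e_{\rm max}=0$:

```python
import math

def e_max_newtonian(incl0):
    """Quadrupole-order maximal inner eccentricity for initial inclination incl0 (radians)."""
    val = 1.0 - (5.0 / 3.0) * math.cos(incl0) ** 2
    return math.sqrt(val) if val > 0.0 else 0.0
```

For $I_0=\pi/2$ this returns unity, recovering the limit quoted above.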
Note that we include a term for GR pericenter precession but continue to treat GW emission as negligible for the purposes of the Lidov-Kozai resonance; as a result, the sum of equations \eqref{W} and \eqref{W_PN}, \begin{equation} \label{W_full} W=W_{\rm N}+W_{\rm PN}\, , \end{equation} remains a conserved quantity. As before, the maximal eccentricity (minimal $\epsilon$) is obtained when $\omega_1=\pi/2$, and the result in this post-Newtonian treatment becomes \begin{equation} \label{e_max_GW} \epsilon_{{\rm min}}^{1/2}\approx\frac{1}{6}\left(\theta_{{\rm PN}}+\sqrt{\theta_{{\rm PN}}^{2}+60\cos^{2}I_{0}}\right)\, . \end{equation} This maximal eccentricity can be used to estimate the merger time of the inner binary due to GW emission. The merger timescale for a binary of eccentricity $e\approx 1$ is given by \citet{Pet64} as \begin{equation} \label{T_GW} T_{\rm GW}\approx\frac{768}{425}\,T_{c}\left(a_{1}\right)\left(1-e^{2}\right)^{7/2}\, , \end{equation} where $T_{c} \equiv a_{1}^{4}/\beta$ is the merger timescale for a circular binary and $\beta\equiv 64G^{3}m_{1}m_{2}\left(m_{1}+m_{2}\right)/(5c^{5})$. However, in the case of a triple system whose inner binary oscillates between its initial eccentricity $e_{1,0}$ and maximal eccentricity $e_{\rm max}$, the merger timescale due to GW emission is necessarily longer. \citet{Randall2018} analytically estimate the merger time in this case to be \begin{equation} \label{merge_time} T_{\rm merger} = \frac{T_{\rm GW}}{\epsilon_{\rm min}^{1/2}}\, . \end{equation} In this work, we are interested in merger times $T_{\rm merger} < T_{\rm Hubble}$ for the purpose of calculating the merger rate density and DTD of BBH mergers originating from hierarchical triples. In the following section, we describe a method for numerically selecting different triple populations in order to calculate these statistics.
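Equations \eqref{W_PN} and \eqref{e_max_GW}--\eqref{merge_time} chain into a single merger-time estimate. The sketch below is an illustrative implementation in SI units; the function name and the rounded physical constants are our own choices:

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m s^-1

def merger_time(m1, m2, m3, a1, a2, e2, incl0):
    """Lidov-Kozai-assisted GW merger time; masses in kg, lengths in m, angles in radians."""
    M1 = m1 + m2
    b2 = a2 * math.sqrt(1.0 - e2 ** 2)                    # outer semi-minor axis
    theta_pn = 8.0 * (M1 / m3) * (b2 / a1) ** 3 * G * M1 / (a1 * c ** 2)
    # epsilon_min^{1/2}, the post-Newtonian maximal-eccentricity result
    eps_min_sqrt = (theta_pn + math.sqrt(theta_pn ** 2 + 60.0 * math.cos(incl0) ** 2)) / 6.0
    beta = 64.0 * G ** 3 * m1 * m2 * M1 / (5.0 * c ** 5)
    t_circ = a1 ** 4 / beta                               # T_c, circular-orbit merger time
    t_gw = (768.0 / 425.0) * t_circ * eps_min_sqrt ** 7   # (1 - e_max^2)^{7/2} factor
    return t_gw / eps_min_sqrt                            # T_merger = T_GW / eps_min^{1/2}
```

As expected from the equations, near-perpendicular systems ($I_0\approx\pi/2$) yield far shorter merger times than mildly inclined ones.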
\section{Numerical method} \label{sec:Numerics} Our approach to calculating merger rate densities and DTDs is as follows. In each of several population models, described in Sections \ref{subsec:STD} and \ref{subsec:additional}, we employ a Monte Carlo simulation to generate $10^{6}$ representative triple systems. For each triple system in a given model, the model analytically determines whether an inner BBH merger occurs within $T_{\rm Hubble}$ using equation \eqref{merge_time} and records the value of $T_{\rm merger}$ in order to calculate the theoretical DTD. As stated earlier, we assume that star formation occurs entirely within open clusters \citep{Lada2003} and that as a result, black hole progenitor stars all form simultaneously; i.e., we do not calculate any detailed dynamical effects during the main sequence (MS) phase. \subsection{Creating a population model} \label{subsec:popmodel} Each population model uses a Monte Carlo approach to generate a set of stable, hierarchical triple systems whose inner binaries evolve to BBHs. Although binary stellar evolution processes are beyond the scope of this study, we do consider basic restrictions imposed on triple systems due to their passage through the MS phase. Specifically, we exclude triples whose inner binary components would have interacted as MS stars. We exclude any triple that is considered dynamically unstable by the criterion of \citet{Mardling2001}. Additionally, we work under the simplifying assumption that the probability distributions of all system parameters are independent, meaning that an individual triple system can be generated by drawing each of its parameters independently. Triple systems are produced in this model by drawing initial stellar masses and orbital parameters, then mapping those stellar masses to final BH masses. 
To generate the inner binary for a system, we draw the primary mass $m_1$ from the Kroupa initial mass function (IMF) \citep{kroupa2001}, denoted $f_{\rm IMF} \left(m\right)$, with a range $[m_{\rm min},m_{\rm max}]$. Because we are interested in masses of BH progenitors, we concern ourselves only with the upper end of the range of initial masses. The Kroupa and Salpeter IMFs \citep{Salpeter1955} are similar in the high-mass regime, and therefore we do not expect that a different choice of IMF would affect the results presented here. However, the specific choice of IMF is important for the normalisation of the results; see Section \ref{subsec:Normalization}. With $m_1$ determined, the next parameter drawn is the inner binary SMA, $a_1$. Motivated by observations \citep{Duchene2013,Moe2016}, we draw the inner SMA from a log-uniform distribution (\"{O}pik's law) over a range $[a_{1,\rm min}, a_{1,\rm max}]$. The mass of the second inner binary object is determined by \begin{equation} \label{innerratio} m_{2}=m_{1}q_{1}\, , \end{equation} where $q_{1}$ is the inner binary mass ratio, drawn from a power law distribution $f(q) \propto q^{\gamma}$. For high-mass stars ($M_* \gtrsim 16\, M_{\odot}$), this distribution covers the range $[0.1,1]$. The power law index $\gamma$ is determined by the SMA of the inner binary, with $\gamma = 0$ for $a_1 <100\, \rm AU$ and $\gamma = -1/2$ for $a_1 > 100\, \rm AU$ \citep{Duchene2013}. The remaining parameters of the inner binary orbit are its eccentricity $e_1$ and argument of pericenter $\omega_1$. In order to be conservative with the merger time, we set the inner binary eccentricity to zero, $e_1 \rightarrow 0$. Any other choice of inner eccentricity distribution would shorten the merger timescale because of the increase of the maximal eccentricity reached in the Lidov-Kozai resonance \citep{Lidov1976,Naoz2016}. Finally, $\omega_1$ is drawn from a uniform distribution on $[0,2\pi]$. 
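The inner-binary draws just described can be sketched with inverse-CDF sampling. The helper below is hypothetical (names and default ranges follow the standard model introduced later), not the paper's actual code.

```python
import math
import random

def sample_power_law(alpha, lo, hi, u):
    """Inverse-CDF sample from f(x) proportional to x**alpha on [lo, hi]
    (alpha != -1); u is uniform on [0, 1)."""
    ap1 = alpha + 1.0
    return (lo ** ap1 + u * (hi ** ap1 - lo ** ap1)) ** (1.0 / ap1)

def draw_inner_binary(rng, m_min=30.0, m_max=100.0, a_min=0.1, a_max=100.0):
    """Draw (m1, m2, a1, e1, omega1): Kroupa high-mass slope m^-2.3,
    log-uniform (Opik) SMA, mass ratio q^gamma with gamma set by a1,
    and e1 = 0 as in the text.  Masses in Msun, SMA in AU."""
    m1 = sample_power_law(-2.3, m_min, m_max, rng.random())
    a1 = 10.0 ** rng.uniform(math.log10(a_min), math.log10(a_max))
    gamma = 0.0 if a1 < 100.0 else -0.5
    q1 = sample_power_law(gamma, 0.1, 1.0, rng.random())
    omega1 = rng.uniform(0.0, 2.0 * math.pi)
    return m1, m1 * q1, a1, 0.0, omega1

rng = random.Random(42)
print(draw_inner_binary(rng))
```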
These five parameters define our inner binary progenitor star system. As mentioned previously, we discard any system whose inner binary would have interacted during the MS phase. To check for such interactions, our method calculates the radii of the progenitor stars and compares these to the stars' respective Roche limits. The stellar radius-mass relation is given by $r_{i} \propto m_{i}^{0.57}$ \citep{demircan1991} and the Roche limit by \begin{equation} R_{1(2)} = a_1\times \frac{0.49q_{1(2)}^{2/3} }{0.6q_{1(2)}^{2/3}+\ln{\left(1+q_{1(2)}^{1/3}\right)}}\, , \end{equation} where $q_{1(2)}=m_{1(2)}/m_{2(1)}$ \citep{eggleton1983}. A system is discarded if $r_i > R_i$ for either progenitor star, reflecting the likelihood that such a system would have interacted significantly during the MS phase and might have failed to produce a BBH. We now address the parameters characterising the outer binary. The tertiary mass $m_3$ is set by drawing the outer mass ratio $q_{2}\equiv m_3/M_1$ from a power law distribution $f(q_{2})\propto q_{2}^{\gamma}$ with $\gamma = -2$ \citep{Moe2016} over a range $[0.1,1]$. The outer eccentricity $e_2$ is drawn from a thermal distribution $f(e) = 2e$ and the outer SMA $a_2$ from a log-uniform distribution over a range $[a_{2, \rm min},a_{2, \rm max}]$. The final parameter needed to specify the triple system is the mutual orbital inclination $I$; this value is drawn from a distribution function $f\left(I\right)$ which varies by model and is discussed further in the following sections. With the system parameters fully determined, our method next checks that the triple is indeed dynamically stable. The outer pericenter distance is given by $R^{\rm out}_{P} = a_{2}(1-e_{2})$, and \citet{Mardling2001} define the stability threshold \begin{equation} \label{stability} \kappa = 2.8\left[(1+q_{2})\frac{(1+e_{2})}{(1-e_{2})}\right]^{2/5} a_{1}\, , \end{equation} which specifies the smallest outer pericenter value for which the system remains stable. 
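The two rejection tests above can be written as the pair of predicates below. The solar-unit radius normalisation $r/R_{\odot}=(m/M_{\odot})^{0.57}$ is our assumption for illustration; the paper only quotes the exponent.

```python
import math

RSUN_AU = 0.00465  # solar radius in AU

def roche_limit(a1, q):
    """Eggleton (1983) Roche-lobe radius for mass ratio q = m_i/m_other,
    in the same units as a1."""
    q23 = q ** (2.0 / 3.0)
    return a1 * 0.49 * q23 / (0.6 * q23 + math.log(1.0 + q ** (1.0 / 3.0)))

def ms_non_interacting(m1, m2, a1):
    """True if neither MS progenitor overflows its Roche lobe, using the
    assumed normalisation r/Rsun = (m/Msun)**0.57.  Masses in Msun, a1 in AU."""
    r1 = RSUN_AU * m1 ** 0.57
    r2 = RSUN_AU * m2 ** 0.57
    return r1 < roche_limit(a1, m1 / m2) and r2 < roche_limit(a1, m2 / m1)

def dynamically_stable(a1, a2, e2, q_out):
    """Eq. (stability): the outer pericenter a2*(1 - e2) must exceed kappa."""
    kappa = 2.8 * ((1.0 + q_out) * (1.0 + e2) / (1.0 - e2)) ** 0.4 * a1
    return a2 * (1.0 - e2) > kappa
```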
Accordingly, a system is discarded by our model if $R_P^{\rm out} < \kappa$. The steps described to this point are sufficient to generate a stable, hierarchical, stellar triple. The initial masses of the three system components must now be mapped to the final masses of the BHs or other objects to which they evolve. When the simulation generates a star of sufficient mass, it converts it into a BH by establishing two mass regimes. For a star whose initial mass $m_i$ falls in the range $20\, M_\odot \leq m_i \leq 60\, M_\odot$, the resulting BH is assigned a final mass $m_i/2$, in keeping with the approximate relation between progenitor mass and final BH mass for stars in this range. A star with initial mass $m_i > 60\, M_\odot$ is converted to a BH with a final mass of $30\, M_\odot$, reflecting the significant mass loss experienced by very massive MS stars. The tertiary is treated differently from the initial binary, as it does not necessarily evolve to a BH. For $m_3 \leq 8\, M_{\odot}$, the simulation checks the MS lifetime for that mass; if it is less than $T_{\rm merger}$, then $m_3$ is converted to a $1$-$M_{\odot}$ white dwarf. In this case, we ignore any expansion of the outer SMA $a_2$, given that the expected mass loss of the tertiary stellar companion in this case is negligible relative to the total mass of the triple. To obtain the total merger time, the original MS lifetime is then added to the merger time for the white dwarf system. For a tertiary in the range $8\, M_{\odot} < m_3 < 20\, M_{\odot}$, i.e., the mass range for forming a neutron star (NS), the calculation is stopped and the system discarded. In this case, it is expected that the triple system will be disrupted by the natal kick of the NS \citep{Hobbs2005}, precluding any secular evolution. The following section introduces the baseline (``standard'') population model, which adopts the most plausible assumptions for the various triple system parameter distributions. 
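The remnant-mass rules described above can be collected into a single helper (a direct transcription of the prescription in the text; `None` marks the discarded NS regime):

```python
def final_mass(m_init):
    """Map an initial stellar mass (Msun) to the remnant mass used in the
    population models: >60 Msun -> 30 Msun BH; 20-60 Msun -> m/2 BH;
    8-20 Msun -> None (NS natal kick, system discarded);
    <=8 Msun -> 1 Msun white dwarf."""
    if m_init > 60.0:
        return 30.0
    if m_init >= 20.0:
        return 0.5 * m_init
    if m_init > 8.0:
        return None
    return 1.0
```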
A set of additional models then extends the standard model by modifying a single assumption at a time. \subsection{Standard model} \label{subsec:STD} The baseline model assumes that BHs are formed with no natal kicks, either because of a failed supernova or massive fallback. The limits on the primary mass are set to $m_{1, \rm min} = 30\, M_{\odot}$ and $m_{1, \rm max} = 100\, M_{\odot}$. While $20$-$M_{\odot}$ O-type stars might produce BHs, there is considerable speculation regarding which mass ranges will yield a natal kick when forming compact objects. By raising the minimum mass for BH formation in our simulation, we impose a conservative buffer which makes it more likely that natal kicks can be neglected. This model sets the bounds on the inner binary SMA to $a_{1, \rm min} = 0.1\, \rm AU$ and $a_{1, \rm max} = 100\, \rm AU$. In keeping with the focus on hierarchical triples, the outer binary SMA is assigned a lower bound of $a_{2, \rm min} = 5a_1$ and an upper bound of $a_{2, \rm max} = 1000\, \rm AU$. This upper bound is determined by the environment: we do not expect open clusters to contain ultra-wide systems, as these would be ionized due to the relatively high stellar density in the cluster. For a more detailed treatment of the open cluster environment, see Section \ref{sec:Discussion}. As mentioned in Section \ref{subsec:popmodel}, both the inner SMA $a_1$ and the outer SMA $a_2$ are drawn from distributions uniform in $\log a_1$ and $\log a_2$, respectively. The inclination $I$ of each system is of particular interest when studying the Lidov-Kozai resonance. Given the dearth of observational constraints on the inclinations of high-multiplicity systems within open clusters, we make the reasonable assumption that open clusters and their constituents exhibit a bias toward aligned angular momenta. For triple systems, such a bias favors coplanar orbits. 
To account for this preference, the standard model draws inclinations from a distribution which increases linearly with $\cos{I}$ in the range $\cos{I} \in \left[-1,1 \right]$; see Fig. \ref{dtd}. \subsection{Additional models} \label{subsec:additional} To probe the sensitivity of merger rates and the DTD to the assumptions used in the standard model, we present several additional models. Each isolates and modifies a single assumption in order to test the robustness of our results. \subsubsection*{\textbf{No Natal Kicks}} The standard model excludes primary object masses below $m_{1,\rm min} = 30\, M_{\odot}$ due to uncertainty regarding BH natal kicks below this mass. In this No Natal Kicks model, it is assumed that \emph{all} BHs are born with no natal kick, and thus $m_{1,\rm min}$ is lowered to the traditionally accepted lower limit of $20\, M_{\odot}$ for BH progenitors. \subsubsection*{\textbf{Isotropic Distribution}} In order to test the sensitivity of our results to the initial distribution of mutual inclinations, this model implements an isotropic (rather than prograde-biased) distribution for $I$. Inclinations are drawn from a uniform distribution of $\cos I\in \left[-1,1\right]$, i.e., from prograde to retrograde mutual inclinations. See Fig. \ref{dtd} for the distribution of inclinations generated by this model. \subsubsection*{\textbf{Prograde-Only}} This model restricts the mutual inclination $I$ to prograde values by drawing from a linear distribution of $\cos I\in \left[0,1\right]$. See again Fig. \ref{dtd} for the initial distribution of inclinations. \subsubsection*{\textbf{BH Tertiary}} This and the following model concern modifications to the tertiary object in the triple system. In the standard model, the tertiary star is either massive enough to become a BH, forming a hierarchical triple BH, or has a mass low enough to evolve to a white dwarf. 
Recall that if the tertiary mass falls in the intermediate regime $8\,M_{\odot} < m_3 < 20\, M_{\odot}$, it is assumed to form a NS and disrupt the triple via a high natal kick velocity. In this BH Tertiary population model, only tertiary companions which form black holes are included, and so only systems with tertiary masses $m_3 > 20\, M_{\odot}$ are considered. \subsubsection*{\textbf{Stellar Tertiary}} Complementary to the previous model, here only lower-mass tertiary objects are allowed. The evolution of these stars is modeled in two phases, as previously described in Section \ref{subsec:popmodel}. In the first, a star retains its zero-age MS mass $m_3$. In the second phase, the mass of the tertiary star is set to $1\, M_{\odot}$ to account for mass loss during the giant phases and final evolution to a white dwarf. As before, these low-mass tertiaries are restricted to $m_3 \leq 8\, M_{\odot}$. \subsubsection*{\textbf{SMA Boundaries Model a}} In all previous models, the inner binary SMA $a_1$ is drawn from the range $\left[0.1\, \rm AU, 100\, AU\right]$. This model considers only larger inner binaries by increasing the lower bound of the inner binary SMA by an order of magnitude, drawing $a_1 \in \left[1\,\rm AU, 100\, \rm AU \right]$. \subsubsection*{\textbf{SMA Boundaries Model b}} This model complements the previous model by doubling the upper bound on the inner binary SMA, drawing $a_1 \in \left[0.1\, \rm AU, 200\, \rm AU\right]$. \begin{figure*} \includegraphics[width=\textwidth]{standard_model_1000000_cluster_merger_distributions.png} \caption{Top row: Initial parameter distributions produced by the Monte Carlo simulation for the various population models. Left panel: inner binary SMA; middle panel: outer binary SMA; right panel: system inclination plotted as $\cos I_0$. 
Note that in the distribution of inclinations, only the standard, Isotropic, and Prograde models are shown; all others exhibit no significant differences from the standard model in their distributions of inclinations. Bottom row: the same three parameter distributions shown in the top row, but restricted to the subset of triple systems which merge within a Hubble time in our model. All plots are normalized to unity.} \label{dtd} \end{figure*} \begin{table*} \centering \caption{Parameter distributions and merger results for the standard and additional models. For comparison with the lifetimes of open clusters, the percentages of systems which have merged at $10^8$ and $10^9$ yr are reported.} \label{table_results} \begin{tabular}{| p{2.7cm} | p{1.3cm} | p{1.25cm} | p{3.0cm} | p{2.0cm}| p{2.0cm} | p{2.5cm} |} \hline \textbf{Model} & $\boldsymbol{m_1\, (\rm M_{\odot})}$ & $\boldsymbol{m_3\, (\rm M_{\odot})}$ & \textbf{Inclination} ($\boldsymbol{f(I)}$) & $\boldsymbol{a_1}$\,\textbf{(AU)} & \textbf{Local Rate} & \textbf{Merger Time} \\ &&&&& \textbf{(Gpc}$\boldsymbol{^{-3}}$\,\textbf{yr}$\boldsymbol{^{-1}}$\textbf{)} & $\boldsymbol{\leq 10^9}$ \textbf{yr\, (}$\boldsymbol{10^8}$ \textbf{yr}) \\ \hline Standard & 30--100 & $m\leq8$ & linear in $\cos I$ & 0.1--100 & 6.2 & 49.9 (18.9) \%\\ && $m\geq30$ \\ \hline No Natal Kicks & 20--100 & $m\leq8$ & linear in $\cos I$ & 0.1--100 & 4.5 & 48.4 (18.1)\%\\ && $m\geq20$ \\ \hline Isotropic Distribution & 30--100 & $m\leq8$ & uniform in $\cos I$& 0.1--100 & 6.6 & 51.0 (19.5) \%\\ && $m\geq30$ \\ \hline Prograde-Only & 30--100 & $m\leq8$ & linear in $\cos I$, & 0.1--100 & 2.1 & 33.3 (7.9)\%\\ && $m\geq30$ & $0 \leq \cos I \leq 1$ \\ \hline BH Tertiary & 30--100 & $m\geq30$ & linear in $\cos I$& 0.1--100 & 6.7 & 50.5 (19.3)\%\\ \hline Stellar Tertiary & 30--100 & $m\leq8$ & linear in $\cos I$ & 0.1--100 & 0.9 & 15.8 (0.6)\%\\ \hline SMA Boundaries a & 30--100 & $m\leq8$ & linear in $\cos I$ & 1--100 & 3.7 & 51.5 (20.0)\%\\ && 
$m\geq30$ \\ \hline SMA Boundaries b & 30--100 & $m\leq8$ & linear in $\cos I$ & 0.1--200 & 6.2 & 33.3 (7.9)\%\\ && $m\geq30$ \\ \hline \end{tabular} \end{table*} \section{Results} \label{sec:Results} \subsection{Normalisation and rates} \label{subsec:Normalization} We calculate the merger rate density for Lidov-Kozai-assisted BBHs under the assumption that the Milky Way is the prototypical spiral galaxy with a population of $N \approx 10^{10}$ stars. The fraction of primary objects in our triple systems which will form BHs is given by \begin{equation} f_{\rm p} = \frac{ \int_{30\,M_{\odot}}^{100\,M_{\odot}}m^{-2.3}\,dm}{ \int_{0.08\,M_{\odot}}^{100\,M_{\odot}}f_{\rm IMF}(m)\,dm}\, . \end{equation} We continue to treat BHs as forming in high-multiplicity systems \citep{Duchene2013} and without natal kicks. Therefore, taking a uniform distribution of mass ratios $q_1 \in \left[0.1,1\right]$ for the inner binary, the fraction of secondary stars forming BHs is $f_{\rm s} \approx 0.4$. Drawing from a mass ratio distribution $f(q_{2}) \propto q_{2}^{-2}$ for a tertiary at large distances from the inner binary, the fraction of tertiary objects which remain in triple systems is $f_{\rm t} \approx 0.25$. Recall that all tertiary masses in the range $\left[8\, M_{\odot}, 30\, M_{\odot} \right]$ are rejected, as these are expected to disrupt the triple system due to large natal kicks during NS formation \citep{Hobbs2005}. The fraction of the total stellar population which resides in triple systems is taken to be $f_{\rm triple} \approx 0.1$ \citep{Tokovinin2004}. Recall that because this work concerns triples within open clusters, we consider only those triples with a maximum outer binary SMA of $1000\, \rm AU$ and maximum inner binary SMA of $100\, \rm AU$; see Section \ref{sec:Discussion} for a discussion of this choice of values. 
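The fraction $f_{\rm p}$ can be evaluated directly once the IMF is pinned down. The sketch below assumes the standard Kroupa (2001) slopes of $-1.3$ on $0.08$--$0.5\,M_{\odot}$ and $-2.3$ above $0.5\,M_{\odot}$, matched for continuity at the break; this reproduces the fiducial $f_{\rm p}\sim10^{-3}$.

```python
def power_int(alpha, lo, hi):
    """Integral of x**alpha on [lo, hi] (alpha != -1)."""
    ap1 = alpha + 1.0
    return (hi ** ap1 - lo ** ap1) / ap1

def f_primary(m_lo=30.0, m_hi=100.0):
    """Number fraction of stars with initial mass in [m_lo, m_hi] (Msun),
    for a Kroupa IMF normalised over [0.08, 100] Msun with slopes -1.3
    (0.08-0.5 Msun) and -2.3 (>0.5 Msun), continuous at 0.5 Msun."""
    k = 0.5 ** (-1.3) / 0.5 ** (-2.3)  # continuity factor at the break
    denom = power_int(-1.3, 0.08, 0.5) + k * power_int(-2.3, 0.5, 100.0)
    return k * power_int(-2.3, m_lo, m_hi) / denom

print(f_primary())  # of order 1e-3
```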
The fraction of stars which form triple systems with inner binary BBHs that merge via the Lidov-Kozai resonance is then given by \begin{equation} F_{\rm model} = 10^{-5} \left(\frac{f_{\rm p}}{10^{-3}}\right) \left(\frac{f_{\rm s}}{0.4}\right) \left(\frac{f_{\rm t}}{0.25}\right) \left(\frac{f_{\rm triple}}{0.1}\right) f_{\rm merger}\, , \label{eq:Fmodel} \end{equation} where $f_{\rm merger}$ is the merger fraction for hierarchical triples calculated by our numerical model. Recall that a triple system is considered to have merged if $T_{\rm merger} < T_{\rm Hubble}$. The average merger rate for a single Milky Way-like galaxy over a Hubble time is therefore \begin{equation} \Gamma_{\rm MW} = N \times \frac{F_{\rm model}}{T_{\rm Hubble}} \approx 0.53\, \rm {Myr^{-1}}\, . \end{equation} Following \citet{Belczynski2016}, the merger rate density in the local universe is given by \begin{equation} \mathcal{R} =\rho_{\rm gal} \times N \times \frac{F_{\rm model}}{T_{\rm Hubble}} \approx 6 \left(\frac{F_{\rm model}}{10^{-5}}\right) \rm {Gpc^{-3}\, yr^{-1}}\, , \end{equation} where $\rho_{\rm gal} \approx 0.0116\, \rm {Mpc^{-3}}$ is the Milky Way-like galaxy density in the local universe \citep{Belczynski2016}. Depending on the values of the factors that determine $F_{\rm model}$ (see Equation~(\ref{eq:Fmodel}) above), this rate is plausibly comparable to the observed LIGO rate of $\sim$20$\, \rm {Gpc^{-3}\, yr^{-1}}$. \subsection{Delay-time Distribution} Having recorded $T_{\rm merger}$ for each triple system, we can calculate the fraction of systems which merge within a given time after star formation. Fig. \ref{sm} shows the standard model DTD, i.e., the cumulative merger fraction as a function of time. We find that approximately half of mergers in the standard model occur within the lifetime of open clusters, suggesting that a significant fraction of Lidov-Kozai-induced mergers may occur in these clusters before their dissolution. Fig. 
\ref{ST} compares the DTD for the standard model to those for the additional models. Accounting for white dwarf formation in low-mass tertiaries and allowing all viable systems to evolve in time, we find that $\sim$20\%--50\% of Lidov-Kozai-assisted BBH mergers occur within the lifetime of open clusters. We find that the DTD is not particularly sensitive to model assumptions, with the exception of the Stellar Tertiary model, which is skewed toward later merger times and yields a smaller merger fraction within the lifetime of open clusters. This difference can be understood as the result of lower-mass tertiary objects, which have weaker effects on the secular evolution of triple systems. \begin{figure} \includegraphics[width=1\columnwidth]{dtd_standard.png} \caption{Delay-time distribution for simulated mergers in the standard model. The blue curve shows the distribution function for systems which merged within a Hubble time. The gray box indicates the fraction of mergers occurring during the open cluster phase, using $10^9\, \rm yr$ as the upper limit of an open cluster lifetime. For comparison, the orange curve shows the DTD which would result if all BBHs merged in isolation via GW emission alone.} \label{sm} \end{figure} \begin{figure} \includegraphics[width=1\columnwidth]{dtd_all.png} \caption{Comparison of delay-time distributions for simulated mergers across all population models. Curves show the distribution function for systems which merged within a Hubble time in all models. The gray box indicates the fraction of mergers which occurred during the open cluster phase in the standard model. 
The distribution shows little sensitivity to initial assumptions, with the exception of low tertiary masses in the Stellar Tertiary model.} \label{ST} \end{figure} \section{Discussion} \label{sec:Discussion} \subsection{Assumptions} Each of the population models developed in this work rests on a set of underlying assumptions regarding the parameter distributions of its triple systems. In what follows, we discuss the justification for and implications of several key model assumptions. \textbf{BH natal kicks}. The first and most important of these assumptions is that BHs are born with little or no natal kick. While it remains unclear whether such kicks are significant \citep{Nelemans1999,Willems2005,Wong2012,Repetto2012,Wong2014,Mandel2016a,Repetto2017}, observational evidence supports BH formation via failed SN or direct collapse \citep{Fryer1999,Ertl2016,Adams2017}. Both mechanisms imply small natal kicks or none at all, supporting the use of our simplifying assumption. In future work, however, we aim to test the importance and sensitivity of this assumption by implementing a more sophisticated population synthesis method. \textbf{Triple formation}. Throughout this study, we assume that all star formation occurs in open clusters or associations \citep{Lada2003}. The issue of triple formation is not explicitly addressed; our standard model effectively treats each hierarchical triple as primordial. This assumption of primordial system formation is reflected in the non-isotropic distribution of inclinations used in the standard model. We explore a deviation from this assumption by including the Isotropic Distribution model, which draws from a uniform distribution of $\cos I$ and thus simulates triples formed by dynamical processes. In our results, neither the merger rate density nor the DTD depends sensitively on the initial distribution of inclinations. 
\textbf{SMA bounds.} In the standard model, the lower bound on the inner binary SMA is set to $a_{1,\rm min} = 0.1$\,AU. For an isolated BBH with $m_1 = m_2 = 20\, M_{\odot}$ and $a_1 = 0.1$\,AU in a circular orbit, equation (\ref{T_GW}) gives an inspiral time (via GW emission only) of $T_{\rm GW}\approx 10^{10}$\,yr, which is on the order of a Hubble time. Therefore, for smaller values of $a_{1,\rm min}$, we would not expect our Lidov-Kozai channel to increase the overall rate of BBH mergers. The upper bound $a_{2,\rm max}$ on the outer binary SMA is set by environmental constraints, specifically the lifetime of a wide orbit in a collisional environment. Following \citet{Bahcall1985}, one can calculate the half-life of a wide system of SMA $a_2$ in a collisional environment according to \begin{equation} t_{1/2} = 0.00233\, \frac{v_{\text{enc}}}{G m_{\rm p} n_* a_2}\ , \label{eq:halflife} \end{equation} where $v_{\rm enc}$ is the typical encounter velocity at infinity, $m_{\rm p}$ the mass of the perturbing body, and $n_*$ the local stellar number density. For an open cluster, we take $v_{\text{enc}}$ to be a typical velocity dispersion $\sigma\approx 5$\,km\,sec$^{-1}$ and assume a stellar number density $n_*\approx 0.5$\,pc$^{-3}$ and a perturber mass $m_{\rm p}=1\, M_{\odot}$. Taking $10^9$\,yr to be a typical open cluster lifetime, the outer binary SMA of a system whose half-life is equal to the lifetime of the cluster is $a_2 \approx 1000\, \rm AU$; this serves as the upper limit for the size of the outer binary. \subsection{Mergers in the open cluster phase} As summarized in Fig. \ref{ST} and in Table \ref{table_results} for all models considered, the fraction of mergers occurring during the lifetime of open clusters is significant. In the standard model, assuming an open cluster lifetime of $10^9$\,yr ($10^8$\,yr), we find that $49.9\%$ ($18.9\%$) of BBH mergers induced by the Lidov-Kozai resonance occur in open clusters. 
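The quoted $a_{2}\approx1000$\,AU limit follows from inverting equation \eqref{eq:halflife}; the snippet below (rounded SI constants, illustrative only) recovers it for the stated cluster parameters.

```python
# Rounded SI constants and conversions
G = 6.674e-11          # m^3 kg^-1 s^-2
MSUN = 1.989e30        # kg
AU = 1.496e11          # m
PC = 3.086e16          # m
YR = 3.156e7           # s

def halflife_sma(t_half, v_enc=5.0e3, m_p=1.0 * MSUN, n_star=0.5 / PC ** 3):
    """Invert eq. (halflife), t_1/2 = 0.00233 v_enc / (G m_p n_* a_2),
    for the outer SMA (AU) whose encounter half-life equals t_half (s)."""
    a2 = 0.00233 * v_enc / (G * m_p * n_star * t_half)
    return a2 / AU

print(halflife_sma(1.0e9 * YR))  # close to 1000 AU
```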
These fractions imply that at least this share of mergers from the secular triple channel occurs in young environments within star-forming galaxies. \section{Conclusions} \label{sec:Conclusions} In this work, we calculate the merger rates and DTD of BBH mergers occurring in hierarchical triple systems within open clusters via the Lidov-Kozai resonance. This resonance increases the inner binary eccentricity in cycles, allowing the binary to dissipate orbital energy and inspiral via GW emission. Given the sensitive dependence of merger time on orbital eccentricity, BBH mergers in triple systems experiencing the Lidov-Kozai resonance are expected to occur on much shorter timescales than those in isolated binaries. Calculating the DTD for hierarchical triples in open clusters, we find that a significant fraction of mergers ($18\%$--$50\%$ in our baseline model) occur before the open cluster has dissolved. This result suggests that many mergers in hierarchical triples occur in star-forming regions and hence in spiral galaxies. \section*{Acknowledgements} E.M. thanks the University of Maryland CTC prize fellowship for supporting this research. The authors thank Selma de Mink, Chris Belczynski and Ilya Mandel for their useful comments on this manuscript. \section*{Data Availability} The data underlying this article will be shared on reasonable request to the corresponding author. \bibliographystyle{mnras}
\section{Introduction\label{secfe}} \subsection{What We Want to Do} We address the question of whether it is possible to represent geometrically a function $\psi$ of the variables $x=(x_{1},x_{2},...,x_{n})$. The problem is in fact easy to answer if $\psi$ is a Gaussian function because then its Wigner transform is proportional to a Gaussian $e^{-\frac{1}{\hbar}S^{T}Sz\cdot z}$ where $S$ is a symplectic matrix uniquely determined by $\psi$. It follows that there is a one-to-one correspondence between Gaussians and the sets $S^{T}Sz\cdot z\leq\hbar$. We have called these sets \textquotedblleft quantum blobs\textquotedblright\ in \cite{de02-2,de03-2,de04,de05,Birk,go09,degostat}; the interest of these quantum blobs comes from the fact that they represent minimum uncertainty sets in phase space. The Gaussian function \begin{equation} \Psi_{0}(x)=e^{-x^{2}/2\hbar} \label{fid} \end{equation} is the (unnormalized) ground state of the one-dimensional harmonic oscillator with mass and frequency equal to one: $\widehat{H}\Psi_{0}=E_{0}\Psi_{0}$ where $E_{0}=\tfrac{1}{2}\hbar$ and \begin{equation} \widehat{H}=\frac{1}{2}\left( -\hbar^{2}\frac{d^{2}}{dx^{2}}+x^{2}\right). \label{gf5} \end{equation} This operator is the quantization of the classical oscillator Hamiltonian \begin{equation} H(x,p)=\frac{1}{2}(p^{2}+x^{2}). \label{gf7} \end{equation} The set $\Omega_{0}$ defined by the inequality $H\leq E_{0}$ is the interior of the energy hypersurface $H=E_{0}$; it is the disk $p^{2}+x^{2}\leq \hbar$ with radius $R_{0}=\sqrt{\hbar}$. Let us now consider the $N$-th excited state of the operator $\widehat{H}$; it is the (unnormalized) Hermite function \begin{equation} \Psi_{N}(x)=e^{-x^{2}/2\hbar}H_{N}(x/\sqrt{\hbar}) \label{hermite1} \end{equation} where \begin{equation} H_{N}(x)=(-1)^{N}e^{x^{2}}\tfrac{d^{N}}{dx^{N}}e^{-x^{2}} \label{hermite} \end{equation} is the $N$-th Hermite polynomial. 
It is a solution of $\widehat{H}\Psi_{N}=\left( N+\tfrac{1}{2}\right) \hbar\Psi_{N}$ and the set $\Omega_{N}$ defined by the inequality $H\leq E_{N}=\left( N+\tfrac{1}{2}\right) \hbar$ is again a disk, but this time with radius $R_{N}=\sqrt{(2N+1)\hbar}$. In this paper we introduce a non-trivial extension of the notion of \textquotedblleft quantum blob\textquotedblright\ we defined and studied in previous work. Quantum blobs are deformations of the phase space ball $|x|^{2}+|p|^{2}\leq\hbar$ by translations and linear canonical transformations. Their interest comes from the fact that they provide us with a coarse-graining of phase space different from the usual coarse-graining by cubes with volume $\sim h^{n}$ commonly used in statistical mechanics. They appear as units of minimum uncertainty in phase space, in one-to-one correspondence with the generalized coherent states familiar from quantum optics, and have allowed us to recover the exact ground states of generalized harmonic oscillators, as well as the semiclassical energy levels of quantum systems with completely integrable Hamiltonian function, and to explain them in terms of the topological notion of symplectic capacity \cite{HZ,Polterovich} originating in Gromov's \cite{Gromov} non-squeezing theorem (alias \textquotedblleft the principle of the symplectic camel\textquotedblright). Quantum blobs do not, however, allow a characterization of excited states; for instance there is no obvious relation between them and the Hermite functions. 
Why this does not work is easy to understand: quantum blobs correspond to the states saturating the Schr\"{o}dinger--Robertson inequalities \begin{equation} (\Delta X_{j})^{2}(\Delta P_{j})^{2}\geq\Delta(X_{j},P_{j})^{2}+\tfrac{1}{4}\hbar^{2}\text{ , }1\leq j\leq n; \label{RS} \end{equation} as is well-known \cite{degosat} the quantum states for which all these inequalities become equalities are Gaussians, in this case precisely those which are themselves the ground states of generalized harmonic oscillators. As soon as one considers the excited states the corresponding eigenfunctions are Hermite functions, and for these the inequalities (\ref{RS}) are strict. The way out of this difficulty is to define new phase space objects, the \textquotedblleft Fermi blobs\textquotedblright\ of the title of this paper. Such an approach should certainly be welcome at a time when phase space is beginning to be taken seriously (see the recent review paper \cite{new}). \subsection{How We Will Do It} We will show that a complete geometric picture of excited states can be given using an idea of the physicist Enrico Fermi in a largely forgotten paper \cite{Fermi} from 1930. Fermi associates to every quantum state $\Psi$ a certain hypersurface $g_{\mathrm{F}}(x,p)=0$ in phase space. The underlying idea is actually surprisingly simple. It consists in observing that any complex twice continuously differentiable function $\Psi(x)=R(x)e^{i\Phi(x)/\hbar}$ ($R(x)\geq0$ and $\Phi(x)$ real) defined on $\mathbb{R}^{n}$ satisfies the partial differential equation \begin{equation} \left[ \left( -i\hbar\nabla_{x}-\nabla_{x}\Phi\right)^{2}+\hbar^{2}\frac{\nabla_{x}^{2}R}{R}\right] \Psi=0, \label{gf1} \end{equation} where $\nabla_{x}^{2}$ is the Laplace operator in the variables $x_{1},\ldots,x_{n}$ (it is assumed that $R(x)\neq0$ for $x$ in some subset of $\mathbb{R}^{n}$). 
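Equation (\ref{gf1}) is verified by a direct computation (this check is ours, for completeness): writing $\Psi=Re^{i\Phi/\hbar}$, one application of the operator gives
\begin{align*}
\left( -i\hbar\nabla_{x}-\nabla_{x}\Phi\right) \Psi
 &=\left( -i\hbar\nabla_{x}R+R\nabla_{x}\Phi-R\nabla_{x}\Phi\right)
   e^{i\Phi/\hbar}=-i\hbar\left( \nabla_{x}R\right) e^{i\Phi/\hbar},\\
\left( -i\hbar\nabla_{x}-\nabla_{x}\Phi\right)^{2}\Psi
 &=-\hbar^{2}\left( \nabla_{x}^{2}R\right) e^{i\Phi/\hbar}
   =-\hbar^{2}\frac{\nabla_{x}^{2}R}{R}\,\Psi,
\end{align*}
the cross terms $\pm i\hbar\,\nabla_{x}\Phi\cdot\nabla_{x}R$ cancelling in the second line; adding $\hbar^{2}(\nabla_{x}^{2}R/R)\Psi$ then yields (\ref{gf1}).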
Performing the gauge transformation $-i\hbar\nabla_{x}\longrightarrow-i\hbar\nabla_{x}-\nabla_{x}\Phi$, this equation is in fact equivalent to the trivial equation \begin{equation} \left( -\hbar^{2}\nabla_{x}^{2}+\hbar^{2}\frac{\nabla_{x}^{2}R}{R}\right) R=0. \label{trivial} \end{equation} The operator \begin{equation} \widehat{g_{\mathrm{F}}}=\left( -i\hbar\nabla_{x}-\nabla_{x}\Phi\right)^{2}+\hbar^{2}\frac{\nabla_{x}^{2}R}{R} \label{fermop} \end{equation} appearing in the left-hand side of Eqn. (\ref{gf1}) is the quantisation (in every reasonable physical quantisation scheme) of the real observable \begin{equation} g_{\mathrm{F}}(x,p)=\left( p-\nabla_{x}\Phi\right)^{2}+\hbar^{2}\frac{\nabla_{x}^{2}R}{R}, \label{gf2} \end{equation} and the equation $g_{\mathrm{F}}(x,p)=0$ in general determines a hypersurface $\mathcal{H}_{\mathrm{F}}$ in phase space $\mathbb{R}_{x,p}^{2n}$ which Fermi ultimately \emph{identifies} with the state $\Psi$ itself. The remarkable thing about this construction is that it associates with an arbitrary function $\Psi$ a Hamiltonian function of the classical type \begin{equation} H=\left( p-\nabla_{x}\Phi\right)^{2}+V, \label{classical} \end{equation} even if $\Psi$ is the solution of another partial (or pseudo-differential) equation. We notice that when $\Psi$ is an eigenstate of an operator $\widehat{H}$, $\widehat{H}\Psi=E\Psi$, then $g_{\mathrm{F}}=H-E$ and $\mathcal{H}_{\mathrm{F}}$ is just the energy hypersurface $H(x,p)=E$. Of course, Fermi's analysis was very heuristic and its mathematical rigour borders on the unacceptable (at least by modern standards). Fermi's paper has recently been rediscovered by Benenti \cite{benenti} and Benenti and Strini \cite{best}, who study its relationship with the level sets of the Wigner transform of $\Psi$. \begin{notation} The points in configuration and momentum space are written $x=(x_{1},...,x_{n})$ and $p=(p_{1},...,p_{n})$ respectively; in formulas $x$ and $p$ are viewed as column vectors. 
We will also use the collective notation $z=(x,p)$ for the phase space variable. The matrix $J=\begin{pmatrix} 0 & I\\ -I & 0 \end{pmatrix}$ ($0$ and $I$ the $n\times n$ zero and identity matrices) defines the standard symplectic form on the phase space $\mathbb{R}_{z}^{2n}$ via the formula $\sigma(z,z^{\prime})=Jz\cdot z^{\prime}=p\cdot x^{\prime}-p^{\prime}\cdot x$. We write $\hbar=h/2\pi$, $h$ being Planck's constant. The symplectic group is denoted by $\operatorname*{Sp}(2n,\mathbb{R})$: it is the multiplicative group of all real $2n\times2n$ matrices $S$ such that $\sigma(Sz,Sz^{\prime})=\sigma(z,z^{\prime})$ for all $z,z^{\prime}$. \end{notation} \section{Symplectic Capacities and Quantum Blobs} To generalize the discussion above to the multi-dimensional case we have to introduce some concepts from symplectic topology. For a review of these notions see de Gosson and Luef \cite{golu10}. \subsection{Symplectic Capacities} \subsubsection*{Intrinsic symplectic capacities} An \textit{intrinsic} symplectic capacity assigns a non-negative number (or $+\infty$) $c(\Omega)$ to every subset $\Omega$ of phase space $\mathbb{R}^{2n}$; this assignment is subject to the following properties: \begin{itemize} \item \textbf{Monotonicity:} If $\Omega\subset\Omega^{\prime}$ then $c(\Omega)\leq c(\Omega^{\prime})$; \item \textbf{Symplectic invariance:} If $f$ is a canonical transformation (linear, or not) then $c(f(\Omega))=c(\Omega)$; \item \textbf{Conformality:} If $\lambda$ is a real number then $c(\lambda\Omega)=\lambda^{2}c(\Omega)$; here $\lambda\Omega$ is the set of all points $\lambda z$ with $z\in\Omega$; \item \textbf{Normalization:} We have \begin{equation} c(B^{2n}(R))=\pi R^{2}=c(Z_{j}^{2n}(R)); \label{norm1} \end{equation} here $B^{2n}(R)$ is the phase-space ball $|x|^{2}+|p|^{2}\leq R^{2}$ and $Z_{j}^{2n}(R)$ the phase-space cylinder $x_{j}^{2}+p_{j}^{2}\leq R^{2}$. \end{itemize} Let $c$ be a symplectic capacity on the phase plane $\mathbb{R}^{2}$.
We have $c(\Omega)=\operatorname*{Area}(\Omega)$ when $\Omega$ is a connected and simply connected surface. In the general case there exist infinitely many intrinsic symplectic capacities, but they all agree on phase space ellipsoids as we will see below. The smallest symplectic capacity is denoted by $c_{\min}$ (\textquotedblleft Gromov width\textquotedblright): by definition $c_{\min}(\Omega)$ is the supremum of all numbers $\pi R^{2}$ such that there exists a canonical transformation $f$ with $f(B^{2n}(R))\subset\Omega$. The fact that $c_{\min}$ really is a symplectic capacity follows from a deep and difficult topological result, Gromov's \cite{Gromov} symplectic non-squeezing theorem, alias the principle of the symplectic camel. (For a discussion of Gromov's theorem from the point of view of Physics see de Gosson \cite{go09}, de Gosson and Luef \cite{golu10}.) Another useful example is provided by the Hofer--Zehnder \cite{HZ} capacity $c^{\mathrm{HZ}}$. It has the property that it is given by the integral of the action form $pdx=p_{1}dx_{1}+\cdot\cdot\cdot+p_{n}dx_{n}$ along a certain curve:
\begin{equation}
c^{\text{HZ}}(\Omega)=\oint\nolimits_{\gamma_{\min}}pdx \label{chz}
\end{equation}
when $\Omega$ is a compact convex set in phase space; here $\gamma_{\min}$ is the shortest (positively oriented) Hamiltonian periodic orbit carried by the boundary $\partial\Omega$ of $\Omega$. This formula agrees with the usual notion of area in the case $n=1$. It turns out that all intrinsic symplectic capacities agree on phase space ellipsoids, and are calculated as follows (see e.g. \cite{Birk,golu10,HZ}). Let $M$ be a $2n\times2n$ positive-definite matrix and consider the ellipsoid \begin{equation} \Omega_{M,z_{0}}:M(z-z_{0})^{2}\leq1.
\label{ellipsoid}
\end{equation}
Then, for every intrinsic symplectic capacity $c$ we have
\begin{equation}
c(\Omega_{M,z_{0}})=\pi/\lambda_{\max}^{\sigma} \label{capellipse}
\end{equation}
where $\lambda_{\max}^{\sigma}$ is the largest symplectic eigenvalue of $M$. The symplectic eigenvalues of a positive definite matrix are defined as follows: the matrix $JM$ ($J$ the standard symplectic matrix) is equivalent to the antisymmetric matrix $M^{1/2}JM^{1/2}$, hence its $2n$ eigenvalues are of the type $\pm i\lambda_{1}^{\sigma},...,\pm i\lambda_{n}^{\sigma}$ where $\lambda_{j}^{\sigma}>0$. The positive numbers $\lambda_{1}^{\sigma},...,\lambda_{n}^{\sigma}$ are called the \emph{symplectic eigenvalues} of the matrix $M$. In particular, if $A$ and $B$ are positive definite real symmetric $n\times n$ matrices, then the symplectic capacity of the ellipsoid
\begin{equation}
\Omega_{(A,B)}:Ax^{2}+Bp^{2}\leq1 \label{capab}
\end{equation}
is given by
\begin{equation}
c(\Omega_{(A,B)})=\pi/\sqrt{\lambda_{\max}} \label{cab}
\end{equation}
where $\lambda_{\max}$ is the largest eigenvalue of $AB$. \subsubsection*{Extrinsic symplectic capacities} The definition of an extrinsic symplectic capacity is similar to that of an intrinsic capacity, but one weakens the normalization condition (\ref{norm1}) by only requiring that: \begin{itemize} \item \textbf{Nontriviality:} $c(B^{2n}(R))<+\infty$ and $c(Z_{j}^{2n}(R))<+\infty$. \end{itemize} In \cite{EH} Ekeland and Hofer defined a sequence $c_{1}^{\mathrm{EH}}$, $c_{2}^{\mathrm{EH}},...,c_{k}^{\mathrm{EH}},...$ of extrinsic symplectic capacities satisfying the nontriviality properties
\begin{equation}
c_{k}^{\mathrm{EH}}(B^{2n}(R))=\left[ \frac{k+n-1}{n}\right] \pi R^{2}\ \ \text{,}\ \ c_{k}^{\mathrm{EH}}(Z_{j}^{2n}(R))=k\pi R^{2}.\label{eh}
\end{equation}
Of course $c_{1}^{\mathrm{EH}}$ is an intrinsic capacity; in fact it coincides with the Hofer--Zehnder capacity on bounded convex sets $\Omega$.
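The capacity of an ellipsoid is easy to compute numerically; the following sketch (added here as an illustration, not from the original text) extracts the symplectic eigenvalues from the spectrum of $JM$ and checks the block-diagonal special case $c=\pi/\sqrt{\lambda_{\max}(AB)}$:

```python
# Numerical illustration: the symplectic eigenvalues of a positive-definite
# 2n x 2n matrix M are the moduli of the (purely imaginary) eigenvalues of
# J M, and the capacity of M z.z <= 1 is pi / lambda_max^sigma.
import numpy as np

def symplectic_eigenvalues(M):
    n = M.shape[0] // 2
    J = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-np.eye(n), np.zeros((n, n))]])
    ev = np.linalg.eigvals(J @ M)          # come in pairs +/- i*lambda_j
    return np.sort(np.abs(ev.imag))[::2]   # each lambda_j once, increasing

rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)   # pos.-def.
B = rng.standard_normal((n, n)); B = B @ B.T + n * np.eye(n)   # pos.-def.
M = np.block([[A, np.zeros((n, n))], [np.zeros((n, n)), B]])   # A x^2 + B p^2 <= 1

lams = symplectic_eigenvalues(M)
c = np.pi / lams.max()

# agreement with c = pi / sqrt(lambda_max(AB)) in this block-diagonal case
assert np.isclose(c, np.pi / np.sqrt(np.linalg.eigvals(A @ B).real.max()))
```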
We have
\begin{equation}
c_{1}^{\text{EH}}(\Omega)\leq c_{2}^{\text{EH}}(\Omega)\leq\cdot\cdot\cdot\leq c_{k}^{\text{EH}}(\Omega)\leq\cdot\cdot\cdot
\end{equation}
The Ekeland--Hofer capacities have the property that for each $k$ there exists an integer $N\geq0$ and a closed characteristic $\gamma$ of $\partial\Omega$ such that
\begin{equation}
c_{k}^{\text{EH}}(\Omega)=N\left\vert \oint\nolimits_{\gamma}pdx\right\vert \label{acspec}
\end{equation}
(in other words, $c_{k}^{\text{EH}}(\Omega)$ is a value of the \textit{action spectrum} \cite{quasyge} of the boundary $\partial\Omega$ of $\Omega$); this formula shows that $c_{k}^{\text{EH}}(\Omega)$ is solely determined by $\partial\Omega$; therefore the notation $c_{k}^{\text{EH}}(\partial\Omega)$ is often used in the literature. The Ekeland--Hofer capacities $c_{k}^{\text{EH}}$ allow us to classify phase-space ellipsoids. In fact, the non-decreasing sequence of numbers $c_{k}^{\text{EH}}(\Omega_{M})$ is determined as follows for an ellipsoid $\Omega_{M}:Mz\cdot z\leq1$ ($M$ symmetric and positive-definite): let $(\lambda_{1}^{\sigma},...,\lambda_{n}^{\sigma})$ be the symplectic eigenvalues of $M$; then
\begin{equation}
\{c_{k}^{\text{EH}}(\Omega_{M}):k=1,2,...\}=\{N\pi/\lambda_{j}^{\sigma}:j=1,...,n;\ N=1,2,...\}.\label{cehc}
\end{equation}
Equivalently, the non-decreasing sequence $c_{1}^{\text{EH}}(\Omega_{M})\leq c_{2}^{\text{EH}}(\Omega_{M})\leq\cdot\cdot\cdot$ is obtained by writing the numbers $N\pi/\lambda_{j}^{\sigma}$ in increasing order, with repetitions if a number occurs more than once. \subsection{Quantum Blobs} By definition a quantum blob $\mathcal{QB}^{2n}(z_{0},S)$ is the image of the phase space ball $B^{2n}(S^{-1}z_{0},\sqrt{\hbar}):|z-S^{-1}z_{0}|\leq\sqrt{\hbar}$ by a linear canonical transformation (identified with a symplectic matrix $S$).
A quantum blob is thus a phase space ellipsoid with symplectic capacity $\pi\hbar=\frac{1}{2}h$, but it is not true that, conversely, an arbitrary phase space ellipsoid with symplectic capacity $\frac{1}{2}h$ is a quantum blob. One can however show (de Gosson \cite{de04,de05,Birk}, de Gosson and Luef \cite{golu10}) that such an ellipsoid contains a unique quantum blob. One proves (ibid.) that a quantum blob $\mathcal{QB}^{2n}(z_{0},S)$ is characterized by the two following \emph{equivalent} properties: \begin{itemize} \item \textit{The intersection of the ellipsoid} $\mathcal{QB}^{2n}(z_{0},S)$ \textit{with a plane passing through} $z_{0}$ \textit{and parallel to any of the planes of canonically conjugate coordinates} $x_{j},p_{j}$ \textit{in} $\mathbb{R}_{z}^{2n}$ \textit{is an ellipse with area} $\frac{1}{2}h$; \item \textit{The supremum of the set of all numbers} $\pi R^{2}$ \textit{such that the ball} $B^{2n}(R):|z|\leq R$ \textit{can be embedded into} $\mathcal{QB}^{2n}(z_{0},S)$ \textit{using canonical transformations (linear, or not) is} $\frac{1}{2}h$. \textit{Hence no phase space ball with radius} $R>\sqrt{\hbar}$ \textit{can be \textquotedblleft squeezed\textquotedblright\ inside} $\mathcal{QB}^{2n}(z_{0},S)$ \textit{using only canonical transformations.} \end{itemize} It turns out (de Gosson \cite{Birk}) that in the first of these conditions one can replace the plane of conjugate coordinates with any symplectic plane (a symplectic plane is a two-dimensional subspace of $\mathbb{R}_{z}^{2n}$ on which the restriction of the symplectic form $\sigma$ is again a symplectic form). There is a natural action
\[
\operatorname*{Sp}(2n,\mathbb{R})\times\mathcal{QB}(2n,\mathbb{R})\longrightarrow\mathcal{QB}(2n,\mathbb{R})
\]
of the symplectic group on quantum blobs.
\section{Generalized Coherent States} \subsection{The Fermi Function of a Gaussian} We next consider arbitrary (normalized) generalized coherent states
\begin{equation}
\Psi_{X,Y}(x)=\left( \frac{1}{\pi\hbar}\right)^{n/4}(\det X)^{1/4}\exp\left[ -\frac{1}{2\hbar}(X+iY)x\cdot x\right] \label{coh1}
\end{equation}
where $X$ and $Y$ are real symmetric $n\times n$ matrices, and $X$ is positive definite. Setting $\Phi(x)=-\frac{1}{2}Yx\cdot x$ and $R(x)=\exp\left( -\frac{1}{2\hbar}Xx\cdot x\right)$ we have
\begin{equation}
\nabla_{x}\Phi(x)=-Yx\text{ \ , \ }\frac{\nabla_{x}^{2}R(x)}{R(x)}=-\frac{1}{\hbar}\operatorname*{Tr}X+\frac{1}{\hbar^{2}}X^{2}x\cdot x\label{tr}
\end{equation}
hence the Fermi function of $\Psi_{X,Y}$ is the quadratic form
\begin{equation}
g_{\mathrm{F}}(x,p)=(p+Yx)^{2}+X^{2}x\cdot x-\hbar\operatorname*{Tr}X.\label{gf3}
\end{equation}
We can rewrite this formula as
\begin{equation}
g_{\mathrm{F}}(x,p)=M_{\mathrm{F}}z\cdot z-\hbar\operatorname*{Tr}X\label{gf}
\end{equation}
($z=(x,p)$) where $M_{\mathrm{F}}$ is the symmetric matrix
\begin{equation}
M_{\mathrm{F}}=\begin{pmatrix} X^{2}+Y^{2} & Y\\ Y & I \end{pmatrix}.\label{mf}
\end{equation}
A straightforward calculation shows that we have the factorization
\begin{equation}
M_{\mathrm{F}}=S^{T}\begin{pmatrix} X & 0\\ 0 & X \end{pmatrix} S\label{mfs}
\end{equation}
where $S$ is the \emph{symplectic} matrix
\begin{equation}
S=\begin{pmatrix} X^{1/2} & 0\\ X^{-1/2}Y & X^{-1/2} \end{pmatrix}.\label{ess}
\end{equation}
It turns out --and this is really a striking fact!-- that $M_{\mathrm{F}}$ is closely related to the Wigner transform
\begin{equation}
W\Psi_{X,Y}(z)=\left( \frac{1}{2\pi\hbar}\right)^{n}\int_{\mathbb{R}^{n}}e^{-\frac{i}{\hbar}p\cdot y}\Psi_{X,Y}(x+\tfrac{1}{2}y)\Psi_{X,Y}^{\ast}(x-\tfrac{1}{2}y)dy\label{oupsi}
\end{equation}
of the state $\Psi_{X,Y}$ because we have
\begin{equation}
W\Psi_{X,Y}(z)=\left( \frac{1}{\pi\hbar}\right)^{n}\exp\left( -\frac{1}{\hbar}Gz\cdot z\right) \label{goupsi}
\end{equation}
where $G$ is the symplectic matrix
\begin{equation}
G=S^{T}S=\begin{pmatrix} X+YX^{-1}Y & YX^{-1}\\ X^{-1}Y & X^{-1} \end{pmatrix} \label{G}
\end{equation}
(see e.g. \cite{Birk,Littlejohn}). When $n=1$ and $\Psi_{X,Y}(x)=\Psi_{0}(x)$ is the fiducial coherent state (\ref{fid}) we have $S=I$ and $\operatorname*{Tr}X=1$, hence the formula
\[
W\Psi_{0}(z)=\frac{1}{\pi\hbar}\exp\left[ -\frac{1}{\hbar}M_{\mathrm{F}}z\cdot z\right]
\]
already observed by Benenti and Strini in \cite{best}. \subsection{\label{subsecgi}Geometric Interpretation} Recall (formula (\ref{capellipse})) that the symplectic capacity $c(\Omega)$ of an ellipsoid $Mz\cdot z\leq1$ ($M$ a symmetric positive definite $2n\times2n$ matrix) is given by
\begin{equation}
c(\Omega)=\pi/\lambda_{\max}^{\sigma}\label{cw}
\end{equation}
where $\lambda_{\max}^{\sigma}=\max\{\lambda_{1}^{\sigma},...,\lambda_{n}^{\sigma}\}$, the $\lambda_{j}^{\sigma}$ being the symplectic eigenvalues of $M$. We denote by $\Omega_{\mathrm{F}}$ the phase space ellipsoid defined by $g_{\mathrm{F}}(x,p)\leq0$, that is:
\[
\Omega_{\mathrm{F}}:M_{\mathrm{F}}z\cdot z\leq\hbar\operatorname*{Tr}X;
\]
it is the ellipsoid bounded by the Fermi hypersurface $\mathcal{H}_{\mathrm{F}}$ corresponding to the generalized coherent state $\Psi_{X,Y}$. Let us perform the symplectic change of variables $z^{\prime}=Sz$; in the new coordinates the ellipsoid $\Omega_{\mathrm{F}}$ is represented by the inequality
\begin{equation}
Xx^{\prime}\cdot x^{\prime}+Xp^{\prime}\cdot p^{\prime}\leq\hbar\operatorname*{Tr}X\label{ferx}
\end{equation}
hence $c(\Omega_{\mathrm{F}})$ equals the symplectic capacity of the ellipsoid (\ref{ferx}).
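The factorizations above can be checked numerically; the following sketch (an added illustration with randomly generated symmetric $X$, $Y$) verifies $M_{\mathrm{F}}=S^{T}\operatorname{diag}(X,X)S$, $G=S^{T}S$, and the symplecticity of $S$:

```python
# Numerical check of the factorizations of M_F and G: with
# S = [[X^{1/2}, 0], [X^{-1/2} Y, X^{-1/2}]] one has
# M_F = S^T diag(X, X) S and S^T S = [[X + Y X^{-1} Y, Y X^{-1}], [X^{-1} Y, X^{-1}]],
# and S is symplectic: S^T J S = J.
import numpy as np

rng = np.random.default_rng(1)
n = 3
X = rng.standard_normal((n, n)); X = X @ X.T + n * np.eye(n)   # sym. pos.-def.
Y = rng.standard_normal((n, n)); Y = (Y + Y.T) / 2             # symmetric

w, V = np.linalg.eigh(X)
Xh = V @ np.diag(np.sqrt(w)) @ V.T                             # X^{1/2}
Xhinv = np.linalg.inv(Xh)
Z = np.zeros((n, n))

S = np.block([[Xh, Z], [Xhinv @ Y, Xhinv]])
MF = np.block([[X @ X + Y @ Y, Y], [Y, np.eye(n)]])
assert np.allclose(MF, S.T @ np.block([[X, Z], [Z, X]]) @ S)

Xinv = np.linalg.inv(X)
G = np.block([[X + Y @ Xinv @ Y, Y @ Xinv], [Xinv @ Y, Xinv]])
assert np.allclose(G, S.T @ S)

J = np.block([[Z, np.eye(n)], [-np.eye(n), Z]])
assert np.allclose(S.T @ J @ S, J)                             # S is symplectic
```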
Applying the rule above we thus have to find the symplectic eigenvalues of the block-diagonal matrix $\begin{pmatrix} X & 0\\ 0 & X \end{pmatrix}$; a straightforward calculation shows that these are just the eigenvalues $\omega_{1},...,\omega_{n}$ of $X$ and hence
\begin{equation}
c(\Omega_{\mathrm{F}})=\pi\hbar\operatorname*{Tr}X/\omega_{\max}\label{cwf}
\end{equation}
where $\omega_{\max}=\max\{\omega_{1},...,\omega_{n}\}$. In view of the trivial inequality
\begin{equation}
\omega_{\max}\leq\operatorname*{Tr}X=\sum_{j=1}^{n}\omega_{j}\leq n\omega_{\max}\label{maxnmax}
\end{equation}
we have
\begin{equation}
\frac{1}{2}h\leq c(\Omega_{\mathrm{F}})\leq\frac{nh}{2}.\label{nh}
\end{equation}
An immediate consequence of the inequality $\frac{1}{2}h\leq c(\Omega_{\mathrm{F}})$ is that the Fermi ellipsoid $\Omega_{\mathrm{F}}$ of a generalized coherent state always contains a quantum blob; this is of course consistent with the uncertainty principle. Notice that when all the eigenvalues $\omega_{j}$ are equal to a number $\omega$ then $c(\Omega_{\mathrm{F}})=nh/2$; in particular when $n=1$ we have $c(\Omega_{\mathrm{F}})=h/2$, which is exactly the action calculated along the trajectory corresponding to the ground state. This observation leads us to the following question: what is the precise geometric meaning of formula (\ref{cwf})? Let us come back to the interpretation of the ellipsoid defined by the inequality (\ref{ferx}). We have seen that the symplectic eigenvalues of the matrix $\begin{pmatrix} X & 0\\ 0 & X \end{pmatrix}$ are precisely the eigenvalues $\omega_{j}$, $1\leq j\leq n$, of the positive-definite matrix $X$.
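Since the symplectic eigenvalues are invariant under congruence by a symplectic matrix, the symplectic eigenvalues of $M_{\mathrm{F}}$ itself are already the $\omega_{j}$; a quick numerical cross-check of this and of the bounds (\ref{nh}) (an added illustration with random $X$, $Y$):

```python
# Cross-check: the symplectic eigenvalues of M_F are the eigenvalues
# omega_j of X, so c(Omega_F) = pi*hbar*Tr X / omega_max and
# h/2 <= c(Omega_F) <= n*h/2.
import numpy as np

rng = np.random.default_rng(2)
n = 4
X = rng.standard_normal((n, n)); X = X @ X.T + n * np.eye(n)   # sym. pos.-def.
Y = rng.standard_normal((n, n)); Y = (Y + Y.T) / 2             # symmetric

MF = np.block([[X @ X + Y @ Y, Y], [Y, np.eye(n)]])
J = np.block([[np.zeros((n, n)), np.eye(n)], [-np.eye(n), np.zeros((n, n))]])
sympl = np.sort(np.abs(np.linalg.eigvals(J @ MF).imag))[::2]   # each value once

omegas = np.linalg.eigvalsh(X)          # ascending, like sympl
assert np.allclose(sympl, omegas)

hbar = 1.0
h = 2 * np.pi * hbar
c_F = np.pi * hbar * omegas.sum() / omegas.max()
assert h / 2 <= c_F <= n * h / 2
```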
It follows that there exist linear symplectic coordinates $(x^{\prime\prime},p^{\prime\prime})$ in which the equation of the ellipsoid $\Omega_{\mathrm{F}}$ takes the normal form
\begin{equation}
\sum_{j=1}^{n}\omega_{j}(x_{j}^{\prime\prime2}+p_{j}^{\prime\prime2})\leq\sum_{j=1}^{n}\hbar\omega_{j}\label{omf1}
\end{equation}
whose quantum-mechanical interpretation is clear: dividing both sides by two we get the energy shell of the anisotropic harmonic oscillator in its ground state. Consider now the planes $\mathcal{P}_{1},\mathcal{P}_{2},...,\mathcal{P}_{n}$ of conjugate coordinates $(x_{1},p_{1})$, $(x_{2},p_{2})$,..., $(x_{n},p_{n})$. The intersections of the ellipsoid $\Omega_{\mathrm{F}}$ with these planes are the circles
\begin{gather*}
C_{1}:\omega_{1}(x_{1}^{\prime\prime2}+p_{1}^{\prime\prime2})\leq\sum_{j=1}^{n}\hbar\omega_{j}\\
C_{2}:\omega_{2}(x_{2}^{\prime\prime2}+p_{2}^{\prime\prime2})\leq\sum_{j=1}^{n}\hbar\omega_{j}\\
\vdots\\
C_{n}:\omega_{n}(x_{n}^{\prime\prime2}+p_{n}^{\prime\prime2})\leq\sum_{j=1}^{n}\hbar\omega_{j}.
\end{gather*}
Formula (\ref{cwf}) says that $c(\Omega_{\mathrm{F}})$ is the area of the circle $C_{j}$ with smallest radius, and this corresponds to the index $j$ such that $\omega_{j}=\omega_{\max}$. This is of course perfectly in accordance with the definition of the Hofer--Zehnder capacity $c^{\mathrm{HZ}}(\Omega_{\mathrm{F}})$ since all symplectic capacities agree on ellipsoids. We are now led to another question: is there any way to describe Fermi's ellipsoid topologically in such a way that the area of each circle $C_{j}$ becomes apparent? The problem with the standard capacity of an ellipsoid is that it only \textquotedblleft sees\textquotedblright\ the smallest cut of that ellipsoid by a plane of conjugate coordinates. The way out of this difficulty lies in the use of the Ekeland--Hofer capacities $c_{j}^{\mathrm{EH}}$ discussed above.
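The Ekeland--Hofer spectrum of such an ellipsoid is easy to generate from its symplectic eigenvalues; here is a small numerical sketch (added illustration, not from the original text) using the normal form above:

```python
# Sketch: the Ekeland-Hofer capacities of an ellipsoid with symplectic
# eigenvalues lam_1,...,lam_n are the numbers N*pi/lam_j (N = 1, 2, ...)
# listed in increasing order with repetitions.
import math

def eh_capacities(lams, k):
    """First k Ekeland-Hofer capacities (increasing, with repetitions)."""
    vals = sorted(N * math.pi / lam for lam in lams for N in range(1, k + 1))
    return vals[:k]

# Normal form omega_1(x1''^2+p1''^2) + omega_2(x2''^2+p2''^2)
#   <= hbar*(omega_1+omega_2), i.e. lam_j = omega_j/(hbar*(omega_1+omega_2));
# the sample frequencies below satisfy 2*omega_1 < omega_2.
hbar, w1, w2 = 1.0, 1.0, 3.0
lams = [w1 / (hbar * (w1 + w2)), w2 / (hbar * (w1 + w2))]
caps = eh_capacities(lams, 4)

assert math.isclose(caps[0], math.pi * hbar * (w1 + w2) / w2)  # c_1^EH
assert math.isclose(caps[1], 2 * caps[0])                      # c_2^EH = 2*c_1^EH
```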
To illustrate the idea, let us first consider the case $n=2$; it is no restriction to assume $\omega_{1}\leq\omega_{2}$. If $\omega_{1}=\omega_{2}$ then the ellipsoid
\begin{equation}
\omega_{1}(x_{1}^{\prime\prime2}+p_{1}^{\prime\prime2})+\omega_{2}(x_{2}^{\prime\prime2}+p_{2}^{\prime\prime2})\leq\hbar\omega_{1}+\hbar\omega_{2}\label{omf2}
\end{equation}
is just the ball $B^{4}(\sqrt{2\hbar})$ whose symplectic capacity is $2\pi\hbar=h$. Suppose now that $\omega_{1}<\omega_{2}$. Then the Ekeland--Hofer capacities are the numbers
\begin{equation}
\frac{\pi\hbar}{\omega_{2}}(\omega_{1}+\omega_{2}),\frac{\pi\hbar}{\omega_{1}}(\omega_{1}+\omega_{2}),\frac{2\pi\hbar}{\omega_{2}}(\omega_{1}+\omega_{2}),\frac{2\pi\hbar}{\omega_{1}}(\omega_{1}+\omega_{2}),...\label{seq}
\end{equation}
and hence
\[
c_{1}^{\mathrm{EH}}(\Omega_{\mathrm{F}})=c(\Omega_{\mathrm{F}})=\frac{\pi\hbar}{\omega_{2}}(\omega_{1}+\omega_{2}).
\]
What about $c_{2}^{\mathrm{EH}}(\Omega_{\mathrm{F}})$? A first glance at the sequence (\ref{seq}) suggests that we have
\[
c_{2}^{\mathrm{EH}}(\Omega_{\mathrm{F}})=\frac{\pi\hbar}{\omega_{1}}(\omega_{1}+\omega_{2})
\]
but this is only true if $\omega_{1}<\omega_{2}\leq2\omega_{1}$, because if $2\omega_{1}<\omega_{2}$ then $\frac{2}{\omega_{2}}(\omega_{1}+\omega_{2})<\frac{1}{\omega_{1}}(\omega_{1}+\omega_{2})$ so that in this case
\[
c_{2}^{\mathrm{EH}}(\Omega_{\mathrm{F}})=\frac{2\pi\hbar}{\omega_{2}}(\omega_{1}+\omega_{2})=2c_{1}^{\mathrm{EH}}(\Omega_{\mathrm{F}}).
\]
The Ekeland--Hofer capacities thus allow a geometrical classification of the eigenstates. \section{Fermi Function and Excited States} The generalized coherent states can be viewed as the ground states of a generalized harmonic oscillator, with Hamiltonian function a homogeneous quadratic polynomial in the position and momentum coordinates
\[
H(x,p)=\sum_{i,j}a_{ij}p_{i}p_{j}+b_{ij}p_{i}x_{j}+c_{ij}x_{i}x_{j}.
\]
Such a function can always be put in the form
\begin{equation}
H(z)=\frac{1}{2}Mz\cdot z\label{quaham}
\end{equation}
where $M$ is a symmetric matrix (the Hessian matrix, i.e. the matrix of second derivatives, of $H$). We will assume for simplicity that $M$ is positive-definite; we can then always bring it into the normal form
\[
K(z)=\sum_{j=1}^{n}\frac{\omega_{j}}{2}(x_{j}^{2}+p_{j}^{2})
\]
using a linear symplectic transformation of the coordinates (symplectic diagonalization): there exists a symplectic matrix $S$ (depending on $M$) such that
\begin{equation}
S^{T}MS=D=\begin{pmatrix} \Lambda & 0\\ 0 & \Lambda \end{pmatrix}\label{sms}
\end{equation}
where $\Lambda$ is the diagonal matrix whose diagonal entries are the symplectic spectrum $\omega_{1},...,\omega_{n}$ of $M$. Thus, we have $K(z)=H(Sz)$, or, equivalently,
\begin{equation}
H(z)=K(S^{-1}z).\label{hk}
\end{equation}
The ground state of each one-dimensional quantum oscillator
\[
\widehat{K}_{j}=\frac{\omega_{j}}{2}\left( x_{j}^{2}-\hbar^{2}\frac{\partial^{2}}{\partial x_{j}^{2}}\right)
\]
is the solution of $\widehat{K}_{j}\Psi=\frac{1}{2}\hbar\omega_{j}\Psi$; it is thus the one-dimensional fiducial coherent state $(\pi\hbar)^{-1/4}e^{-x^{2}/2\hbar}$. It follows that the ground state $\Psi_{0}$ of $\widehat{K}=\sum_{j}\widehat{K}_{j}$ is the tensor product of $n$ such states, that is $\Psi_{0}(x)=(\pi\hbar)^{-n/4}e^{-|x|^{2}/2\hbar}$, the fiducial coherent state (\ref{fid}). Returning to the initial Hamiltonian $H$ we note that the corresponding Weyl quantisation $\widehat{H}$ satisfies, in view of Eqn. (\ref{hk}), the symplectic covariance formula $\widehat{H}=\widehat{S}\widehat{K}\widehat{S}^{-1}$ where $\widehat{S}$ is any of the \textit{two} metaplectic operators corresponding to the symplectic matrix $S$ (see the Appendix). It follows that the ground state of $\widehat{H}$ is given by the formula $\Psi=\widehat{S}\Psi_{0}$. The case of the excited states is treated similarly.
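The one-dimensional eigenvalue problem for $\widehat{K}_{j}$ can be checked symbolically; the following sketch (standard quantum mechanics, added here as an illustration) verifies the first few eigenfunctions and eigenvalues:

```python
# Symbolic check: the Hermite functions Psi_N(x) = exp(-x^2/2 hbar) H_N(x/sqrt(hbar))
# satisfy (omega/2)(x^2 - hbar^2 d^2/dx^2) Psi_N = (N + 1/2) hbar omega Psi_N.
import sympy as sp

x = sp.symbols('x', real=True)
hbar, omega = sp.symbols('hbar omega', positive=True)

for N in range(4):
    Psi = sp.exp(-x**2 / (2 * hbar)) * sp.hermite(N, x / sp.sqrt(hbar))
    KPsi = (omega / 2) * (x**2 * Psi - hbar**2 * sp.diff(Psi, x, 2))
    E = (N + sp.Rational(1, 2)) * hbar * omega
    assert sp.simplify(KPsi - E * Psi) == 0
```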
The solutions of the one-dimensional eigenfunction problem $\widehat{K}_{j}\Psi=E\Psi$ are given by the Hermite functions
\begin{equation}
\Psi_{N}(x)=e^{-x^{2}/2\hbar}H_{N}(x/\sqrt{\hbar})\label{hermitre2}
\end{equation}
with corresponding eigenvalues $E_{N}=(N+\frac{1}{2})\hbar\omega_{j}$. It follows that the solutions of the $n$-dimensional problem $\widehat{K}\Psi=E\Psi$ are the tensor products
\begin{equation}
\Psi_{(N)}=\Psi_{N_{1}}\otimes\Psi_{N_{2}}\otimes\cdot\cdot\cdot\otimes\Psi_{N_{n}}\label{tensor}
\end{equation}
where $(N)=(N_{1},N_{2},...,N_{n})$ is a sequence of non-negative integers, and the corresponding energy level is
\begin{equation}
E_{(N)}=\sum_{j=1}^{n}(N_{j}+\tfrac{1}{2})\hbar\omega_{j}.\label{en}
\end{equation}
This allows us to give a geometric description of all eigenfunctions of the generalized harmonic oscillator corresponding to a quadratic Hamiltonian (\ref{quaham}). We claim that: \begin{quotation} \emph{Let} $\Psi$ \emph{be an eigenfunction of the operator}
\begin{equation}
\widehat{H}=\tfrac{1}{2}(x,-i\hbar\nabla_{x})M(x,-i\hbar\nabla_{x})^{T}. \label{hhat}
\end{equation}
\emph{The symplectic capacity of the corresponding Fermi blob} $\Omega_{F}$ \emph{is}
\begin{equation}
c(\Omega_{F})=\sum_{j=1}^{n}(N_{j}+\tfrac{1}{2})h \label{cof}
\end{equation}
\emph{where the numbers} $N_{1},N_{2},...,N_{n}$ \emph{are the non-negative integers corresponding to the state} (\ref{tensor}) \emph{of the diagonalized operator} $\widehat{K}=\sum_{j=1}^{n}\widehat{K}_{j}$. \end{quotation} \section*{APPENDIX: The Metaplectic Group} The symplectic group $\operatorname*{Sp}(2n,\mathbb{R})$ has a covering group of order two, the metaplectic group $\operatorname*{Mp}(2n,\mathbb{R})$. That group consists of unitary operators (the metaplectic operators) acting on $L^{2}(\mathbb{R}^{n})$. There are several equivalent ways to describe the metaplectic operators.
For our purposes the most tractable is the following: assume that $S\in\operatorname*{Sp}(2n,\mathbb{R})$ has the block-matrix form
\begin{equation}
S=\begin{pmatrix} A & B\\ C & D \end{pmatrix}\text{ \ with \ }\det B\neq0. \tag{A1}\label{free}
\end{equation}
The condition $\det B\neq0$ is not very restrictive, because one shows (de Gosson \cite{principi,Birk,Birkbis}, Littlejohn \cite{Littlejohn}) that every $S\in\operatorname*{Sp}(2n,\mathbb{R})$ can be written (non-uniquely) as the product of two symplectic matrices of the type above; moreover the symplectic matrices arising as Jacobian matrices of Hamiltonian flows determined by physical Hamiltonians of the type \textquotedblleft kinetic energy plus potential\textquotedblright\ are of this type for almost every time $t$. To the matrix (\ref{free}) we associate the following quantities (de Gosson \cite{principi,Birk}): \begin{itemize} \item A quadratic form
\begin{equation}
W(x,x^{\prime})=\frac{1}{2}DB^{-1}x\cdot x-B^{-1}x\cdot x^{\prime}+\frac{1}{2}B^{-1}Ax^{\prime}\cdot x^{\prime}; \tag{A2}\label{w}
\end{equation}
the matrices $DB^{-1}$ and $B^{-1}A$ are symmetric because $S$ is symplectic; \item The complex number $\Delta(W)=i^{m}\sqrt{|\det B^{-1}|}$ where $m$ (\textquotedblleft Maslov index\textquotedblright) is chosen in the following way: $m=0$ or $2$ if $\det B^{-1}>0$ and $m=1$ or $3$ if $\det B^{-1}<0$. \end{itemize} The two metaplectic operators associated to $S$ are then given by
\begin{equation}
\widehat{S}\Psi(x)=\left( \tfrac{1}{2\pi i\hbar}\right)^{n/2}\Delta(W)\int e^{\frac{i}{\hbar}W(x,x^{\prime})}\Psi(x^{\prime})d^{n}x^{\prime}.\tag{A3}\label{meta}
\end{equation}
The fact that we have two possible choices for the Maslov index is directly related to the fact that $\operatorname*{Mp}(2n,\mathbb{R})$ is a two-fold covering group of the symplectic group $\operatorname*{Sp}(2n,\mathbb{R})$ \cite{Wiley,principi,Birk,fo89}.
The main interest of the metaplectic group in quantization questions comes from the two following (related) \textquotedblleft symplectic covariance\textquotedblright\ properties: \begin{itemize} \item Let $\Psi$ be a square integrable function (or, more generally, a tempered distribution), and $S$ a symplectic matrix. We have
\begin{equation}
W\Psi(S^{-1}z)=W(\widehat{S}\Psi)(z) \tag{A4}\label{wigcov}
\end{equation}
where $\widehat{S}$ is any of the two metaplectic operators corresponding to $S$; \item Let $\widehat{H}$ be the Weyl quantisation of the symbol (= observable) $H$, and let $S$ be a symplectic matrix. Then the quantisation of $K(z)=H(Sz)$ is $\widehat{K}=\widehat{S}^{-1}\widehat{H}\widehat{S}$ where $\widehat{S}$ is again defined as above. \end{itemize}
\section{Introduction} Continuous first-order logic is a generalization of first-order logic suitable for studying metric structures, which are mathematical structures with an underlying complete metric and with uniformly continuous functions and $\mathbb{R}$-valued predicates. A rich class of metric structures arise from Banach spaces and expansions thereof. An active area of research in continuous logic is the characterization of inseparably categorical continuous theories. For a general introduction to continuous logic, see \cite{MTFMS}. In the present work we will consider expansions of Banach spaces. We introduce the notion of an indiscernible subspace. An indiscernible subspace is a subspace in which types of tuples of elements only depend on their quantifier-free type in the reduct consisting of only the metric and the constant $\mathbf{0}$. Similarly to indiscernible sequences, indiscernible subspaces are always consistent with a Banach theory (with no stability assumption, see Theorem \ref{thm:exist}), but are not always present in every model. We will show that an indiscernible subspace always takes the form of an isometrically embedded real Hilbert space wherein the type of any tuple only depends on its quantifier-free type in the Hilbert space. The notion of an indiscernible subspace is of independent interest in the model theory of Banach and Hilbert structures, and in particular here we use it to improve the results of Shelah and Usvyatsov in the context of types in the full language (as opposed to $\Delta$-types). Specifically, in this context we give a shorter proof of Shelah and Usvyatsov's main result \cite[Prop.\ 4.13]{SHELAH2019106738}, we improve their result on the strong uniqueness of Morley sequences in minimal wide types \cite[Prop.\ 4.12]{SHELAH2019106738}, and we expand on their commentary on the ``induced structure'' of the span of a Morley sequence in a minimal wide type \cite[Rem.\ 5.6]{SHELAH2019106738}. 
This more restricted case is what is relevant to inseparably categorical Banach theories, so our work is applicable to the problem of their characterization. Finally, we present some relevant counterexamples and in particular we resolve (in the negative) the question of Shelah and Usvyatsov presented at the end of Section 5 of \cite{SHELAH2019106738}, in which they ask whether or not the span of a Morley sequence in a minimal wide type is always a type-definable set. \subsection{Background} For $K \in \{\mathbb{R}, \mathbb{C}\}$, we think of a $K$-Banach space $X$ as being a metric structure $\mathfrak{X}$ whose underlying set is the closed unit ball $B(X)$ of $X$ with metric $d(x,y) = \left\lVert x - y \right\rVert$.\footnote{For another equivalent approach, see \cite{MTFMS}, which encodes Banach structures as many-sorted metric structures with balls of various radii as different sorts.} This structure is taken to have for each finite tuple $\bar{a}$ of elements of $K$ an $|\bar{a}|$-ary predicate $s_{\bar{a}}(\bar{x}) = \left\lVert \sum_{i<|\bar{a}|} a_i x_i \right\rVert$, although we will always write this in the more standard form. Note that we evaluate this in $X$ even if $\sum_{i<|\bar{a}|} a_i x_i$ is not actually an element of the structure $\mathfrak{X}$. For convenience, we will also have a constant for the zero vector, $\mathbf{0}$, and an $|\bar{a}|$-ary function $\sigma_{\bar{a}}(\bar{x})$ such that $\sigma_{\bar{a}}(\bar{x}) = \sum_{i<|\bar{a}|} a_i x_i$ if it is in $B(X)$ and $\sigma_{\bar{a}}(\bar{x}) = \frac{\sum_{i<|\bar{a}|} a_i x_i}{\left\lVert \sum_{i<|\bar{a}|} a_i x_i \right\rVert}$ otherwise. If $|a|\leq 1$, we will write $ax$ for $\sigma_{a}(x)$. Note that while this is an uncountable language, it is interdefinable with a countable reduct of it (restricting attention to rational elements of $K$). These structures capture the typical meaning of the ultraproduct of Banach spaces.
As is common, we will conflate $X$ and the metric structure $\mathfrak{X}$ in which we have encoded $X$. \begin{defn} A \emph{Banach (or Hilbert) structure} is a metric structure which is the expansion of a Banach (or Hilbert) space. A \emph{Banach (or Hilbert) theory} is the theory of such a structure. The adjectives \emph{real} and \emph{complex} refer to the scalar field $K$. \end{defn} $C^{\ast}$- and other Banach algebras are commonly studied examples of Banach structures that are not just Banach spaces. A central problem in continuous logic is the characterization of inseparably categorical countable theories, that is to say countable theories with a unique model in each uncountable density character. The analog of Morley's theorem was shown in continuous logic via related formalisms \cite{ben-yaacov_2005, Shelah2011}, but no satisfactory analog of the Baldwin-Lachlan theorem or its precise structural characterization of uncountably categorical discrete theories in terms of strongly minimal sets is known. Some progress in the specific case of Banach theories has been made in \cite{SHELAH2019106738}, in which Shelah and Usvyatsov introduce the notion of a wide type and the notion of a minimal wide type, which they argue is the correct analog of strongly minimal types in the context of inseparably categorical Banach theories. \begin{defn} A type $p$ in a Banach theory is \emph{wide} if its set of realizations consistently contains the unit sphere of an infinite dimensional real subspace. A type is \emph{minimal wide} if it is wide and has a unique wide extension to every set of parameters. \end{defn} In \cite{SHELAH2019106738}, Shelah and Usvyatsov were able to show that every Banach theory has wide complete types using the following classical concentration of measure results of Dvoretzky and Milman, which Shelah and Usvyatsov refer to as the Dvoretzky-Milman theorem.
\begin{fact}[Dvoretzky-Milman theorem] \label{fact:DM-thm} Let $(X,\left\lVert \cdot \right\rVert)$ be an infinite dimensional real Banach space with unit sphere $S$ and let $f:S \rightarrow \mathbb{R}$ be a uniformly continuous function. For any $k<\omega$ and $\varepsilon > 0$, there exists a $k$-dimensional subspace $Y \subset X$ and a Euclidean norm\footnote{A norm $\vertiii{\cdot}$ is \emph{Euclidean} if it satisfies the parallelogram law, $2\vertiii{a}^2 + 2 \vertiii{b}^2 = \vertiii{a+b}^2 + \vertiii{a-b}^2,$ or equivalently if it is induced by an inner product.} $\vertiii{\cdot}$ on $Y$ such that for any $a,b \in S\cap Y$, we have $\vertiii{a} \leq \left\lVert a\right\rVert \leq (1 + \varepsilon)\vertiii{a}$ and $|f(a) - f(b)| < \varepsilon$.\footnote{Fact \ref{fact:DM-thm} without $f$ is (a form of) Dvoretzky's theorem.} \end{fact} Shelah and Usvyatsov showed that in a stable Banach theory every wide type has a minimal wide extension (possibly over a larger set of parameters) and that every Morley sequence in a minimal wide type is an orthonormal basis of a subspace isometric to a real Hilbert space. Furthermore, they showed that in an inseparably categorical Banach theory, every inseparable model is prime over a countable set of parameters and a Morley sequence in some minimal wide type, analogously to how a model of a discrete uncountably categorical theory is always prime over some finite set of parameters and a Morley sequence in some strongly minimal type. The key ingredient to our present work is the following result, due to Milman.
It extends the Dvoretzky-Milman theorem in a manner analogous to the extension of the pigeonhole principle by Ramsey's theorem.\footnote{The original Dvoretzky-Milman result is often compared to Ramsey's theorem, such as when Gromov coined the term \emph{the Ramsey-Dvoretzky-Milman phenomenon} \cite{gromov1983}, but in the context of Fact \ref{fact:main} it is hard not to think of the $n=1$ case as being analogous to the pigeonhole principle and the $n>1$ cases as being analogous to Ramsey's theorem.} \begin{defn}\label{defn:main-defn} Let $(X,\left\lVert \cdot \right\rVert)$ be a Banach space. If $a_0,a_1,\dots,a_{n-1}$ and $b_0,b_1,\dots,\allowbreak b_{n-1}$ are ordered $n$-tuples of elements of $X$, we say that $\bar{a}$ and $\bar{b}$ are \emph{congruent} if $\left\lVert a_i - a_j\right\rVert=\left\lVert b_i - b_j \right\rVert$ for all $i,j \leq n$, where we take $a_{n}=b_{n}=\mathbf{0}$. We will write this as $\bar{a} \cong \bar{b}$. \end{defn} \begin{fact}[\cite{zbMATH03376472}, Thm.\ 3] \label{fact:main} Let $S^\infty$ be the unit sphere of a separable infinite dimensional real Hilbert space ${H}$ and let $f:(S^\infty)^n \rightarrow \mathbb{R}$ be a uniformly continuous function. For any $\varepsilon>0$ and any $k<\omega$ there exists a $k$-dimensional subspace $V$ of $H$ such that for any $a_0,a_1,\dots,a_{n-1},b_0,b_1,\dots,b_{n-1}\in S^\infty \cap V$ with $\bar{a} \cong \bar{b}$, we have $|f(\bar{a})-f(\bar{b})| < \varepsilon$. \end{fact} Note that the analogous result for inseparable Hilbert spaces follows immediately, by restricting attention to a separable infinite dimensional subspace. Also note that by using Dvoretzky's theorem and an easy compactness argument, Fact \ref{fact:main} can be generalized to arbitrary infinite dimensional Banach spaces.
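It is worth recording an elementary reformulation of congruence that we will use implicitly. Since congruence in Definition \ref{defn:main-defn} includes the distances to $a_n = b_n = \mathbf{0}$, it records the norms of the vectors as well as their pairwise distances, and in a Hilbert space this makes congruence a condition on Gram matrices: by polarization,
\[ \left\langle a_i, a_j \right\rangle = \frac{1}{2}\left( \left\lVert a_i \right\rVert^2 + \left\lVert a_j \right\rVert^2 - \left\lVert a_i - a_j \right\rVert^2 \right), \]
so $\bar{a} \cong \bar{b}$ holds if and only if $\left\langle a_i, a_j \right\rangle = \left\langle b_i, b_j \right\rangle$ for all $i,j < n$. In particular, in Fact \ref{fact:main} the tuples $\bar{a}$ and $\bar{b}$ are congruent exactly when they have the same Gram matrix.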
\subsection{Connection to Extreme Amenability} A modern proof of Fact \ref{fact:main} would go through the extreme amenability of the unitary group of an infinite dimensional Hilbert space endowed with the strong operator topology, or in other words the fact that any continuous action of this group on a compact Hausdorff space has a fixed point, which was originally shown in \cite{10.2307/2374298}. This connection is unsurprising. It is well known that the extreme amenability of $\mathrm{Aut}(\mathbb{Q})$ (endowed with the topology of pointwise convergence) can be understood as a restatement of Ramsey's theorem. It is possible to use this to give a highbrow proof of the existence of indiscernible sequences in any first-order theory $T$: \begin{proof} Fix a first-order theory $T$. Let $Q$ be a family of variables indexed by the rational numbers. The natural action of $\mathrm{Aut}(\mathbb{Q})$ on $S_Q(T)$, the Stone space of types over $T$ in the variables $Q$, is continuous and so by extreme amenability has a fixed point. A fixed point of this action is precisely the type of a $\mathbb{Q}$-indexed indiscernible sequence over $T$, and so we get that there are models of $T$ with indiscernible sequences. \end{proof} A similar proof of the existence of indiscernible subspaces in Banach theories (Theorem \ref{thm:exist}) is possible, but requires an argument that the analog of $S_Q(T)$ is non-empty (which follows from Dvoretzky's theorem) and also requires more delicate bookkeeping to define the analog of $S_Q(T)$ and to show that the action of the unitary group of a separable Hilbert space is continuous. In the end, this is more technical than a proof using Fact \ref{fact:main} directly. \section{Indiscernible Subspaces} \label{sec:ind-subsp} \begin{defn} Let $T$ be a Banach theory. Let $\mathfrak{M}\models T$ and let $A\subseteq \mathfrak{M}$ be some set of parameters.
An \emph{indiscernible subspace over $A$} is a real subspace $V$ of $\mathfrak{M}$ such that for any $n<\omega$ and any $n$-tuples $\bar{b},\bar{c} \in V$, $\bar{b} \equiv_A \bar{c}$ if and only if $\bar{b} \cong \bar{c}$. {\sloppy If $p$ is a type over $A$, then $V$ is an \emph{indiscernible subspace in $p$ (over $A$)} if it is an indiscernible subspace over $A$ and $b\models p$ for all $b\in V$ with $\left\lVert b \right\rVert = 1$.} \end{defn} Note that an indiscernible subspace is a real subspace even if $T$ is a complex Banach theory. Also note that an indiscernible subspace in $p$ is not literally contained in the realizations of $p$, but rather has its unit sphere contained in the realizations of $p$. It might be more accurate to talk about ``indiscernible spheres,'' but we find the subspace terminology more familiar. Indiscernible subspaces are very metrically regular. \begin{prop} Suppose $V$ is an indiscernible subspace in some Banach structure. Then $V$ is isometric to a real Hilbert space. In particular, a real subspace $V$ of a Banach structure is indiscernible over $A$ if and only if it is isometric to a real Hilbert space and for every $n<\omega$ and every pair of $n$-tuples $\bar{b},\bar{c}\in V$, $\bar{b}\equiv_A\bar{c}$ if and only if $\left<b_i,b_j\right> = \left<c_i,c_j\right>$ for all $i,j<n$. \end{prop} \begin{proof} For any real Banach space $W$, if $\dim W \leq 1$, then $W$ is necessarily isometric to a real Hilbert space. If $\dim V \geq 2$, let $V_0$ be a $2$-dimensional subspace of $V$. A subspace of an indiscernible subspace is automatically an indiscernible subspace, so $V_0$ is indiscernible. For any two distinct unit vectors $a$ and $b$, indiscernibility implies that for any $r,s\in \mathbb{R}$, $\left\lVert r a + s b\right\rVert = \left\lVert s a + r b\right\rVert$, hence the unique linear map that switches $a$ and $b$ preserves $\left\lVert \cdot \right\rVert$.
This implies that the automorphism group of $(V_0, \left\lVert \cdot \right \rVert)$ is transitive on the $\left\lVert \cdot \right\rVert$-unit circle. By John's theorem on maximal ellipsoids \cite{MR0030135}, the unit ball of $\left\lVert \cdot \right \rVert$ must be an ellipse, so $\left\lVert \cdot \right \rVert$ is a Euclidean norm. Thus every $2$-dimensional real subspace of $V$ is Euclidean and so $(V,\left\lVert \cdot \right \rVert)$ satisfies the parallelogram law and is therefore a real Hilbert space. The `in particular' statement follows from the fact that in a real Hilbert subspace of a Banach space, the polarization identity \cite[Prop.\ 14.1.2]{Blanchard2002} defines the inner product in terms of a particular quantifier-free formula: \begin{equation*} \left<x, y\right> = \frac{1}{4}\left( \left\lVert x + y \right\rVert ^2 - \left\lVert x - y \right\rVert^2 \right).\footnotemark \qedhere \end{equation*} \end{proof} \footnotetext{There is also a polarization identity for the complex inner product: $${\left<x, y\right>_{\mathbb{C}} = \frac{1}{4}\left( \left\lVert x + y \right\rVert ^2 - \left\lVert x - y \right\rVert^2 + i\left\lVert x - iy \right\rVert^2 - i \left\lVert x + iy \right\rVert^2 \right).}$$} \subsection{Existence of Indiscernible Subspaces} As mentioned in \cite[Cor.\ 3.9]{SHELAH2019106738}, it follows from Dvoretzky's theorem that if $p$ is a wide type and $\mathfrak{M}$ is a sufficiently saturated model, then $p(\mathfrak{M})$ contains the unit sphere of an infinite dimensional subspace isometric to a Hilbert space. We refine this by showing that, in fact, an indiscernible subspace can be found. \begin{thm} \label{thm:exist} Let $A$ be a set of parameters in a Banach {theory} $T$ and let $p$ be a wide type over $A$. For any $\kappa$, there is $\mathfrak{M} \models T$ and a subspace $V\subseteq \mathfrak{M}$ of dimension $\kappa$ such that $V$ is an indiscernible subspace in $p$ over $A$. 
In particular, any $(\aleph_0 + \kappa + |A|)$-saturated $\mathfrak{M}$ will have such a subspace. \end{thm} \begin{proof} For any set $\Delta$ of $A$-formulas, call a subspace $V$ of a model $\mathfrak{N}$ of $T_A$ \emph{$\Delta$-indiscernible in $p$} if every unit vector in $V$ models $p$ and for any $n<\omega$ and any formula $\varphi \in \Delta$ of arity $n$ and any $n$-tuples $\bar{b},\bar{c} \in V$ with $\bar{b} \cong \bar{c}$, we have $\mathfrak{N}\models \varphi(\bar{b}) = \varphi(\bar{c})$. Since $p$ is wide, there is a model $\mathfrak{N}\models T$ containing an infinite dimensional subspace $W$ isometric to a real Hilbert space such that for all $b\in W$ with $\left\lVert b \right\rVert = 1$, $b\models p$. This is an infinite dimensional $\varnothing$-indiscernible subspace in $p$. Now for any finite set of $A$-formulas $\Delta$ and any formula $\varphi$, assume that we've shown that there is a model $\mathfrak{N}\models T$ containing an infinite dimensional $\Delta$-indiscernible subspace $V$ in $p$ over $A$. We want to show that there is a $\Delta \cup \{\varphi\}$-indiscernible subspace in $V$. By Fact \ref{fact:main}, for every $k<\omega$ there is a $k$-dimensional subspace $W_{k}\subseteq V$ such that for any unit vectors $b_0,\dots,b_{\ell -1},c_0,\dots,c_{\ell-1}$ in $W_{k}$ (where $\ell$ is the arity of $\varphi$) with $\bar{b}\cong\bar{c}$, we have that $|\varphi^{\mathfrak{N}}(\bar{b})-\varphi^{\mathfrak{N}}(\bar{c})| < 2^{-k}$. If we let $\mathfrak{N}_k = (\mathfrak{N},W_k)$, where we've expanded the language by a fresh predicate symbol $D$ such that $D^{\mathfrak{N}_k}(x)=d(x,W_k)$, then an ultraproduct of the sequence $\mathfrak{N}_k$ will be a structure $(\mathfrak{N}_\omega,W_\omega)$ in which $W_\omega$ is an infinite dimensional Hilbert space. \emph{Claim:} $W_\omega$ is $\Delta\cup\{\varphi\}$-indiscernible in $p$. \emph{Proof of claim.} Fix an $m$-ary formula $\psi \in \Delta \cup \{\varphi\}$ and let $f(k)=0$ if $\psi \in \Delta$ and $f(k)=2^{-k}$ if $\psi = \varphi$.
For any $k \geq 2m$, fix $b_0,\dots,b_{m-1},c_0,\dots,c_{m-1}$ in the unit ball of $W_k$; there is a $2m$-dimensional subspace $W^\prime \subseteq W_k$ containing $\bar{b},\bar{c}$. By compactness of $B(W^\prime)^m$ (where $B(X)$ is the unit ball of $X$), we have that for any $\varepsilon > 0$ there is a $\delta(\varepsilon) > 0$ such that if $|\left<b_i,b_j \right> - \left<c_i,c_j \right>| < \delta(\varepsilon)$ for all $i,j < m$ then $|\psi^{\mathfrak{N}}(\bar{b})-\psi^{\mathfrak{N}}(\bar{c})| \leq f(k) + \varepsilon$. Note that we can take the function $\delta$ to only depend on $\psi$, specifically its arity and modulus of continuity, and not on $k$, since $B(W^\prime)^m$ is always isometric to $B(\mathbb{R}^{2m})^m$. Therefore, in the ultraproduct we will have $(\forall i,j<m)|\left<b_i,b_j \right> - \left<c_i,c_j \right>| < \delta(\varepsilon) \Rightarrow |\psi^{\mathfrak{N}_\omega}(\bar{b})-\psi^{\mathfrak{N}_\omega}(\bar{c})| \leq \varepsilon$ and thus $\bar{b}\cong \bar{c} \Rightarrow \psi^{\mathfrak{N}_\omega}(\bar{b}) = \psi^{\mathfrak{N}_\omega}(\bar{c})$, as required. \hfill $\qed_{\textit{Claim}}$ Now for each finite set of $A$-formulas $\Delta$ we've shown that there's a structure $(\mathfrak{M}_\Delta,V_\Delta)$ (where, again, $V_\Delta$ is the set defined by the new predicate symbol $D$) such that $\mathfrak{M}_\Delta \models T_A$ and $V_\Delta$ is an infinite dimensional $\Delta$-indiscernible subspace in $p$. By taking an ultraproduct with an appropriate ultrafilter we get a structure $(\mathfrak{M},V)$ where $\mathfrak{M}\models T_A$ and $V$ is an infinite dimensional subspace. $V$ is an indiscernible subspace in $p$ over $A$ by the same argument as in the claim. Finally note that by compactness we can take $V$ to have arbitrarily large dimension and that any subspace of an indiscernible subspace in $p$ over $A$ is an indiscernible subspace in $p$ over $A$, so we get the required result.
\end{proof} Together with the fact that wide types always exist in Banach theories with infinite dimensional models \cite[Thm.\ 3.7]{SHELAH2019106738}, we get a corollary. \begin{cor} \label{cor:ind-subsp} Every Banach theory with infinite dimensional models has an infinite dimensional indiscernible subspace in some model. In particular, every such theory has an infinite indiscernible set, namely any orthonormal basis of an infinite dimensional indiscernible subspace. \end{cor} \section{Minimal Wide Types} Compare the following Theorem \ref{thm:main} with this fact in discrete logic: If $p$ is a minimal type (i.e.\ $p$ has a unique global non-algebraic extension), then an infinite sequence of realizations of $p$ is a Morley sequence in $p$ if and only if it is an indiscernible sequence. Here we are using the definition of Morley sequence for (possibly unstable) $A$-invariant types: Let $p$ be a global $A$-invariant type, and let $B\supseteq A$ be some set of parameters. A sequence $\{c_i\}_{i< \kappa}$ is a \emph{Morley sequence in $p$ over $B$} if for all $i< \kappa$, $\mathrm{tp}(c_i/Bc_{<i}) = p \upharpoonright Bc_{<i}$. Note that this definition of Morley sequence agrees with the standard definition for types that are stable in the sense of Lascar and Poizat (as described in \cite[Def.\ 4.1]{SHELAH2019106738}). \begin{thm} \label{thm:main} Let $p$ be a minimal wide type over the set $A$. For $\kappa\geq \aleph_0$, a set of realizations $\{b_i\}_{i<\kappa}$ of $p$ is a Morley sequence in (the unique global minimal wide extension of) $p$ if and only if it is an orthonormal basis of an indiscernible subspace in $p$ over $A$. \end{thm} \begin{proof} All we need to show is that an orthonormal basis of an indiscernible subspace in $p$ over $A$ is a Morley sequence in $p$.
The converse will follow from the fact that all Morley sequences of the same length in a fixed invariant type have the same type, along with the fact that minimal wide types have a unique global wide extension, which is therefore invariant. Let $V$ be an indiscernible subspace in $p$ over $A$. Let $\{e_i\}_{i<\kappa}$ be an orthonormal basis of $V$. By construction, $\mathrm{tp}(e_0/A) = p$. Let $q$ be the global minimal wide extension of $p$. Assume that for some $j<\kappa$ we've shown for all $i<j$ that $\mathrm{tp}(e_i/Ae_{<i}) = q \upharpoonright Ae_{<i}$. Let $W = \overline{\mathrm{span}}(e_{\geq j})$. Since $V$ is an indiscernible subspace over $A$, for all unit norm $b,c\in W$, $b\equiv_{Ae_{<j}} c$, so in particular $\mathrm{tp}(b/Ae_{<j})$ is wide. Since $p$ is minimal wide, we must have $\mathrm{tp}(b/Ae_{<j}) = q\upharpoonright Ae_{<j}$. Therefore $\{e_i\}_{i<\kappa}$ is a Morley sequence. \end{proof} What is unclear at the moment is the answer to this question: \begin{quest} If $p$ is a minimal wide type over the set $A$, is it stable in the sense of \cite[Def.\ 4.1]{SHELAH2019106738}? In other words, is every type $q$ extending $p$ over a model $\mathfrak{M}\supseteq A$ a definable type? \end{quest} \section{Counterexamples} \label{sec:count} Here we collect some counterexamples that may be relevant to any model theoretic development of the ideas presented in this paper. \subsection{No Infinitary Ramsey-Dvoretzky-Milman Phenomena in General} Unfortunately, some elements of the analogy between the Ramsey-Dvoretzky-Milman phenomenon and discrete Ramsey theory do not work. In particular, there is no extension of Dvoretzky's theorem, and therefore Fact \ref{fact:DM-thm}, to $k \geq \omega$, even for a fixed $\varepsilon>0$. Recall that a linear map $T:X\rightarrow Y$ between Banach spaces is an \emph{isomorphism} if it is a continuous bijection. This is enough to imply that $T$ is invertible and that both $T$ and $T^{-1}$ are Lipschitz.
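Quantitatively, if $T:X\rightarrow Y$ is such an isomorphism, then the open mapping theorem provides a bounded inverse, and for all $x,y\in X$,
\[ \frac{1}{\left\lVert T^{-1} \right\rVert}\left\lVert x - y \right\rVert \;\leq\; \left\lVert Tx - Ty \right\rVert \;\leq\; \left\lVert T \right\rVert \left\lVert x - y \right\rVert, \]
so an isomorphism distorts distances by at most the factor $\left\lVert T \right\rVert \left\lVert T^{-1} \right\rVert$.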
An analog of Dvoretzky's theorem for $k \geq \omega$ would imply that every sufficiently large Banach space has an infinite dimensional subspace isomorphic to Hilbert space, which is known to be false. Here we will see a specific example of this. The following is a well known result in Banach space theory (for a proof see the comment after Proposition 2.a.2 in \cite{Lindenstrauss1996}). \begin{fact} \label{fact:no-no} For any distinct $X,Y \in \{\ell_p: 1\leq p < \infty\} \cup \{c_0\}$, no subspace of $X$ is isomorphic to $Y$. \end{fact} Note that, whereas Corollary \ref{cor:ind-subsp} says that every Banach theory is consistent with the partial type of an indiscernible subspace, the following corollary says that this type can sometimes be omitted in arbitrarily large models (contrast this with the fact that the existence of an Erd\H{o}s cardinal implies that you can find indiscernible sequences in any sufficiently large structure in a countable language \cite[Thm.\ 9.3]{Kanamori2003}). \begin{cor} \label{cor:no-no-cor} For $p \in [1,\infty) \setminus \{2\}$, there are arbitrarily large models of $\mathrm{Th}(\ell_p)$ that do not contain any infinite dimensional subspaces isomorphic to a Hilbert space. \end{cor} \begin{proof} Fix $p \in [1,\infty) \setminus \{2\}$ and $\kappa \geq \aleph_0$. Let $\ell_p(\kappa)$ be the Banach space of functions $f:\kappa \rightarrow \mathbb{R}$ such that $\sum_{i<\kappa} |f(i)|^p < \infty$. Note that $\ell_p(\kappa) \equiv \ell_p$.\footnote{To see this, we can find an elementary sub-structure of $\ell_p(\kappa)$ that is isomorphic to $\ell_p$: Let $\mathfrak{L}_0$ be a separable elementary sub-structure of $\ell_p(\kappa)$. For each $i<\omega$, given $\mathfrak{L}_i$, let $B_i$ be the set of all $f \in \ell_p(\kappa)$ that are the indicator function of a singleton $\{j\}$ for some $j<\kappa$ in the support of some element of $\mathfrak{L}_i$. $B_i$ is countable.
Let $\mathfrak{L}_{i+1}$ be a separable elementary sub-structure of $\ell_p(\kappa)$ containing $\mathfrak{L}_i\cup B_i$. $\overline{\bigcup_{i<\omega}\mathfrak{L}_{i+1}}$ is equal to the closed span of $\bigcup_{i<\omega} B_i$ and so is a separable elementary sub-structure of $\ell_p(\kappa)$ isomorphic to $\ell_p$.} Pick an infinite dimensional subspace $V \subseteq \ell_p(\kappa)$. If $V$ is isomorphic to a Hilbert space, then any separable infinite dimensional $V_0 \subseteq V$ will also be isomorphic to a Hilbert space. There exists a countable set $A \subseteq \kappa$ such that $V_0 \subseteq \ell_p(A) \subseteq \ell_p(\kappa)$. By Fact \ref{fact:no-no}, $V_0$ is not isomorphic to a Hilbert space, which is a contradiction. Thus no such $V$ can exist. \end{proof} Even assuming we start with a Hilbert space, we do not get an analog of the infinitary pigeonhole principle (i.e.\ a generalization of Fact \ref{fact:DM-thm}). The discussion by H\'ajek and Mat\v ej in \cite[after Thm.\ 1]{Hajek2018} of a result of Maurey \cite{Maurey1995} implies that there is a Hilbert theory $T$ with a unary predicate $P$ such that for some $\varepsilon>0$ there are arbitrarily large models $\mathfrak{M}$ of $T$ such that for any infinite dimensional subspace $V \subseteq \mathfrak{M}$ there are unit vectors $a,b\in V$ with $|P^{\mathfrak{M}}(a)-P^{\mathfrak{M}}(b)| \geq \varepsilon$. Stability of a theory often has the effect of making Ramsey phenomena more prevalent in its models, so there is a natural question as to whether anything similar will happen here. Recall that a function $f:S(X)\rightarrow \mathbb{R}$ on the unit sphere $S(X)$ of a Banach space $X$ is \emph{oscillation stable} if for every infinite dimensional subspace $Y \subseteq X$ and every $\varepsilon>0$ there is an infinite dimensional subspace $Z \subseteq Y$ such that for any $a,b\in S(Z)$, $|f(a)-f(b)|\leq \varepsilon$. \begin{quest} Does (model theoretic) stability imply oscillation stability?
That is to say, if $T$ is a stable Banach theory, is every unary formula oscillation stable on models of $T$? \end{quest} \subsection{The (Type-)Definability of Indiscernible Subspaces and Complex Banach Structures} \label{subsec:comp} A central question in the study of inseparably categorical Banach space theories is the degree of definability of the `minimal Hilbert space' that controls a given inseparable model of the theory. Results of Henson and Raynaud in \cite{HensonRaynaud} imply that in general the Hilbert space may not be definable. In \cite{SHELAH2019106738}, Shelah and Usvyatsov ask whether or not the Hilbert space can be taken to be type-definable or a zeroset. In Example \ref{ex:no-def} we present a simple, but hopefully clarifying, example showing that this is slightly too much to ask. It is somewhat uncomfortable that even in complex Hilbert structures we are only thinking about \emph{real} indiscernible subspaces rather than \emph{complex} indiscernible subspaces. One problem is that Ramsey-Dvoretzky-Milman phenomena only deal with real subspaces in general. The other problem is that Definition \ref{defn:main-defn} is incompatible with complex structure: \begin{prop} \label{prop:no-comp} Let $T$ be a complex Banach theory. Let $V$ be an indiscernible subspace in some model of $T$. For any non-zero $a\in V$ and $\lambda \in \mathbb{C} \setminus \{0\}$, if $\lambda a \in V$, then $\lambda \in \mathbb{R}$. \end{prop} \begin{proof} Assume that for some non-zero vector $a$, both $a$ and $ia$ are in $V$. We have that $(a,ia)\cong(ia,a)$, so by indiscernibility $(a,ia)\equiv(ia,a)$, but $(a,ia)\models d(ix,y)=0$ and $(ia,a)\not\models d(ix,y)=0$, a contradiction. Therefore we cannot have that both $a$ and $ia$ are in $V$. The same statement for $a$ and $\lambda a$ with $\lambda \in \mathbb{C}\setminus \mathbb{R}$ follows immediately, since $a,\lambda a \in V \Rightarrow ia \in V$.
\end{proof} In the case of complex Hilbert space and other Hilbert spaces with a unitary Lie group action, this is the reason that indiscernible subspaces can fail to be type-definable. We will explicitly give the simplest example of this. \begin{ex} \label{ex:no-def} Let $T$ be the theory of an infinite dimensional complex Hilbert space and let $\mathfrak{C}$ be the monster model of $T$. $T$ is inseparably categorical, but for any partial type $\Sigma$ over any small set of parameters $A$, $\Sigma(\mathfrak{C})$ is not an infinite dimensional indiscernible subspace (over $\varnothing$). \end{ex} \begin{proof} $T$ is clearly inseparably categorical by the same reasoning that the theory of real infinite dimensional Hilbert spaces is inseparably categorical (being an infinite dimensional complex Hilbert space is first-order and there is a unique infinite dimensional complex Hilbert space of each infinite density character). If $\Sigma(\mathfrak{C})$ is not an infinite dimensional subspace of $\mathfrak{C}$, then we are done, so assume that $\Sigma(\mathfrak{C})$ is an infinite dimensional subspace of $\mathfrak{C}$. Let $\mathfrak{N}$ be a small model containing $A$. Since $\mathfrak{N}$ is a subspace of $\mathfrak{C}$, $\Sigma(\mathfrak{N}) = \Sigma(\mathfrak{C})\cap \mathfrak{N}$ is a subspace of $\mathfrak{N}$. Let $v \in \Sigma(\mathfrak{C})\setminus \Sigma(\mathfrak{N})$. This implies that $v\in \mathfrak{C} \setminus \mathfrak{N}$, so we can write $v$ as $v_\parallel+ v_\perp$, where $v_\parallel$ is the orthogonal projection of $v$ onto $\mathfrak{N}$ and $v_\perp$ is complex orthogonal to $\mathfrak{N}$. Necessarily we have that $v_\perp \neq 0$. Let $\mathfrak{N}^\perp$ be the orthocomplement of $\mathfrak{N}$ in $\mathfrak{C}$.
If we write elements of $\mathfrak{C}$ as $(x,y)$ with $x\in \mathfrak{N}$ and $y\in \mathfrak{N}^\perp$, then the maps $(x,y)\mapsto (x,-y)$, $(x,y)\mapsto (x,iy)$, and $(x,y)\mapsto(x,-iy)$ are automorphisms of $\mathfrak{C}$ fixing $\mathfrak{N}$. Therefore $(v_\parallel + v_\perp) \equiv_{\mathfrak{N}} (v_\parallel - v_\perp) \equiv_{\mathfrak{N}} (v_\parallel + iv_\perp) \equiv_{\mathfrak{N}} (v_\parallel -iv_\perp)$, so we must have that $(v_\parallel - v_\perp),(v_\parallel + iv_\perp),( v_\parallel - iv_\perp) \in \Sigma(\mathfrak{C})$ as well. Since $\Sigma(\mathfrak{C})$ is a subspace, we have that $v_\perp \in \Sigma(\mathfrak{C})$ and $iv_\perp \in \Sigma(\mathfrak{C})$. Thus, by Proposition \ref{prop:no-comp}, $\Sigma(\mathfrak{C})$ is not an indiscernible subspace over $\varnothing$. \end{proof} This example is a special case of this more general construction: If $G$ is a compact Lie group with an irreducible unitary representation on $\mathbb{R}^n$ for some $n$ (more precisely, one whose action is transitive on the unit sphere), then we can extend this action to $\ell_2$ by taking the Hilbert space direct sum of countably many copies of the irreducible unitary representation of $G$, and we can think of this as a structure by adding function symbols for the elements of $G$. The theory of this structure will be totally categorical and satisfy the conclusion of Example \ref{ex:no-def}. Example \ref{ex:no-def} is analogous to the fact that in many strongly minimal theories the set of generic elements in a model is not itself a basis/Morley sequence.
The immediate response would be to ask the question of whether or not the unit sphere of the complex linear span (or more generally the `$G$-linear span,' i.e.\ the linear span of $G\cdot V$) of the indiscernible subspace in a minimal wide type agrees with the set of realizations of that minimal wide type, but this can overshoot: \begin{ex} \label{ex:bad-comp} Consider the structure whose universe is (the unit ball of) $\ell_2 \oplus \ell_2$ (where we are taking $\ell_2$ as a real Hilbert space), with a complex action $(x,y)\mapsto (-y,x)$ and orthogonal projections $P_0$ and $P_1$ for the sets $\ell_2 \oplus \{\mathbf{0}\}$ and $\{\mathbf{0}\} \oplus \ell_2$, respectively. Let $T$ be the theory of this structure. This is a totally categorical complex Hilbert structure, but for any complete type $p$ and $\mathfrak{M}\models T$, $p(\mathfrak{M})$ does not contain the unit sphere of a non-trivial complex subspace. \end{ex} \begin{proof} $T$ is bi-interpretable with a real Hilbert space, so it is totally categorical. For any complete type $p$, there are unique values of $\left\lVert P_0(x) \right\rVert$ and $\left\lVert P_1(x) \right\rVert$ that are consistent with $p$, so the set of realizations of $p$ in any model cannot contain $\{\lambda a\}_{\lambda \in \mathrm{U}(1)}$ for any unit vector $a$, where $\mathrm{U}(1) \subset \mathbb{C}$ denotes the set of unit complex numbers. \end{proof} The issue, of course, is that, while we declared by fiat that this is a complex Hilbert structure, the expanded structure does not respect the complex structure. So, on the one hand, Example \ref{ex:bad-comp} shows that in general the unit sphere of the complex span won't be contained in the minimal wide type. On the other hand, a priori the set of realizations of the minimal wide type could contain more than just the unit sphere of the complex span, such as if we have an $\mathrm{SU}(n)$ action.
The complex (or $G$-linear) span of a set is of course part of the algebraic closure of the set in question, so this suggests a small refinement of the original question of Shelah and Usvyatsov: \begin{quest} If $T$ is an inseparably categorical Banach theory, $p$ is a minimal wide type, and $\mathfrak{M}$ is a model of $T$ which is prime over an indiscernible subspace $V$ in $p$, does it follow that $p(\mathfrak{M})$ is the unit sphere of a subspace contained in the algebraic closure of $V$? \end{quest} This would be analogous to the statement that if $p$ is a strongly minimal type in an uncountably categorical discrete theory and $\mathfrak{M}$ is a model prime over a Morley sequence $I$ in $p$, then $p(\mathfrak{M})\subseteq \mathrm{acl}(I)$. \subsection{Non-Minimal Wide Types} The following example shows, unsurprisingly, that Theorem \ref{thm:main} does not hold for non-minimal wide types. \begin{ex} Let $T$ be the theory of (the unit ball of) the infinite Hilbert space sum $\ell_2 \oplus \ell_2 \oplus \dots$, where we add a predicate $D$ that is the distance to $S^\infty \sqcup S^\infty \sqcup \dots$, where $S^\infty$ is the unit sphere of the corresponding copy of $\ell_2$. This theory is $\omega$-stable. The partial type $\{D = 0\}$ has a unique global non-forking extension $p$ that is wide, but the unit sphere of the linear span of any Morley sequence in $p$ is not contained in $p(\mathfrak{C})$. \end{ex} \begin{proof} This follows from the fact that on $D$ the equivalence relation `$x$ and $y$ are contained in a common unit sphere' is definable by a formula, namely \[E(x,y) = \inf_{z,w \in D}(d(x,z)\dotdiv 1) + (d(z,w)\dotdiv 1) + (d(w,y)\dotdiv 1),\] where $a \dotdiv b = \max\{a-b,0\}$. If $x,y$ are in the same sphere, then let $S$ be a great circle passing through $x$ and $y$ and choose $z$ and $w$ evenly spaced along the shorter path of $S$. Since each of the three resulting arcs spans an angle of at most $\pi/3$, and unit vectors at angle $\theta$ are at distance $2\sin(\theta/2)$, it will always hold that $d(x,z),d(z,w),d(w,y) \leq 2\sin(\pi/6) = 1$, so we will have $E(x,y)=0$.
On the other hand, if $x$ and $y$ are in different spheres, then $E(x,y)= \sqrt{2} -1$: any two elements of $D$ lying in different spheres are at distance exactly $\sqrt{2}$, so at least one of the three terms must be at least $\sqrt{2}-1$, and taking $z=x$ and $w=y$ attains this value. Therefore a Morley sequence in $p$ is just any sequence of elements of $D$ which are pairwise non-$E$-equivalent, and the unit sphere of the span of any such set is clearly not contained in $D$. \end{proof} \bibliographystyle{abbrv}
2,869,038,154,450
arxiv
\section{Introduction} One of the most intriguing astrophysical objects are the ultra-luminous X-ray sources (ULXs) that have been discovered around nearby galaxies by the X-ray satellites {\itshape Einstein}, {\itshape ROSAT}, {\itshape Chandra} and {\itshape XMM}. Assuming that they are at the same distance as their parent galaxies, their luminosities in the 0.1--2.4 keV band are in the range $10^{39}$--$10^{41}$ erg s$^{-1}$. Several explanations concerning the nature of these objects in terms of intermediate-mass black holes associated with globular clusters, HII regions, supernova remnants, etc.\ (Pakull \& Mirioni 2002; Angelini et al.\ 2001; Gao et al.\ 2003; Roberts et al.\ 2003; Wang 2002), local QSOs (Burbidge et al. 2003), hypothetical supermassive stars or beamed emission (King et al. 2001; K\"ording et al.\ 2002) have been proposed. Studies with {\itshape XMM-Newton} (Jenkins et al.\ 2004) point to a heterogeneous class of objects whose spectral properties are similar to those of objects with lower X-ray luminosities. Detailed studies with {\itshape Chandra} and {\itshape XMM} and the identification of counterparts in other spectral ranges (optical/infrared/radio) are essential for making progress in the field. The number of such optical identifications is still limited (e.g.\ Roberts et al.\ 2001; Wu et al.\ 2002; Liu et al.\ 2004). Some of the these counterparts have revealed objects at higher redshift than the putative parent galaxies (Maseti et al.\ 2003; Clark et al.\ 2005), and are viewed in standard big bang cosmology as contaminated background objects. Saveral major compilations of ULX sources exist, those by Colbert \& Ptak (2002) (CP02 hereafter), Swartz et al. (2004), Liu \& Mirabel (2005) and Liu \& Bregman (2005) (LB05 hereafter). The statistical analysis of the objects in the CP02 catalogue made by Irwin et al.\ (2004) indicates considerable contamination by background sources. 
A direct confirmation of these and a better understanding on the nature of ULX sources can be achieved by identifying counterparts in other bands (in particular in the optical, as is widely recognized: see for instance the review by van der Marel 2004). This motivated us to start this study searching for such possible optical counterparts in the major existing optical surveys. For example, we have identified possible optical counterparts in the DSS plates for about $\sim$50\% of the objects compiled by CP02. The typical magnitudes of such objects are 17--20 in the $b$ band and are therefore bright enough targets for spectroscopic observations with 2 to 4 m telescopes. In previous work (Arp et al.\ 2004: Guti\'errez \& L\'opez-Corredoira 2005) we have demonstrated the feasibility of such studies and present our first results with the identification and characterization of nine such {\itshape ROSAT} ULX sources. In eight cases the sources look point-like and turned out to be quasars at high redshift. The remaining object was in a spiral arm of NGC~1073 and is apparently embedded in an HII region. Here, we present further results with an analysis of the counterparts of four additional ULX sources. \section{Sample selection and observations} \subsection{Imaging} In this paper we consider the cases of IXO~32, 35, 37 and 40 (we follow the notation by CP02). Table~1 summarizes the main properties of such sources (taken from the compilation by CP02). We look for possible optical counterparts of these X-ray sources in the Digital Sky Survey (DSS) plates, the USNO catalogue, and the released Sloan Digital Sky Survey (SDSS) data. For the four cases we checked that there are point-like objects compatible with the X-ray positions. The last column in Table~1 lists the offset between the optical and X-ray coordinates. 
The fields around IXO~32, 35 and 37 were also observed with the IAC80 telescope\footnote{The IAC~80 is located at the Spanish Teide Observatory on the island of Tenerife and is operated by the Instituto de Astrof\'\i sica de Canarias} in May 2004 and December 2005. For these objects we took single exposures of 1800 s in $BR$ for IXO~32, 600 s in $BVRI$ for IXO~35, and 600 s in $R$ for IXO~37. The observations were reduced using IRAF\footnote{IRAF is the Image Reduction and Analysis Facility, written and supported by the IRAF programming group at the National Optical Astronomy Observatories (NOAO) in Tucson, Arizona.} following a standard procedure. The nights were photometric and we used Landolt stars (Landolt 1992) to perform an absolute calibration with an uncertainty (2$\sigma$) of $\leq 0.05$ mag in each filter. Figure~1 shows the images with the identification of the optical counterparts. The images (2$'$ $\times$ 2$'$) are centred on the nominal X-ray positions. In one case (IXO~32) there is another source slightly shifted (12 arcsec) from the X-ray coordinates. \subsection{Spectroscopy} The spectroscopic observations presented here were taken in February 2004 with the WHT\footnote{The William Herschel Telescope (WHT) is operated by the Isaac Newton Group and the IAC in Spain's Roque de los Muchachos Observatory}. We used the blue and the red arms of the ISIS spectrograph with the R300B and R158R grisms. The slit width was 2 arcsec. We took Cu--Ar and Cu--Ne lamp exposures with a slit width of 1 arcsec at the beginning of the night for wavelength calibration. The stability of the wavelength calibration during the night and across pointings was checked using the main sky lines. The sampling was 1.71 \AA~and 3.26 \AA~in the blue and red arms respectively. For each target, a single image was taken with exposure times of 1800 s for the counterparts of IXO~32, 37 and 40, and 900 s for the counterpart of IXO~35.
The spectra were analysed with IRAF following a standard procedure of bias subtraction, extraction of the spectra and wavelength calibration. We used the standard spectroscopic star Feige~34 (Oke 1990) to correct for the response of the configuration at different wavelengths. The star was observed only three times during two nights and the slit for the targets was not positioned at the parallactic angle, so this correction is only indicative. Given the prohibitive time needed to secure flat-field images (especially in the blue part of the spectra), we did not correct for that effect. However, we have checked that this correction would be very small ($\leq 1$ \%). None of these uncertainties is relevant for the analysis and results presented in this paper. \section{Analysis} The spectra of the four objects are presented in Figure~2. The four spectra show features that allow clear identification and characterization. Table~2 lists the main properties of these spectra. The analysis of each object is presented below. \subsection{IXO~32} This source is also listed in the LB05 catalogue as X06 (around the galaxy NGC~2775). The optical images show two point-like objects with a separation of $\sim$7 arcsec, distant $\sim$5 and $\sim$12 arcsec respectively from the nominal X-ray position. Only the brightest (the object at $\sim$12 arcsec) is listed in the USNO catalogue. We placed the slit across both objects. The spectrum of the bright object has the typical absorption lines of a star, the most prominent features being the CaII H \& K lines and the Balmer lines. The fainter object is very blue and turns out to be an active galactic nucleus/quasar. Figure~2 shows the spectrum of this object, in which the broad emission Ly-$\beta$+OVI, Ly-$\alpha$, SiIV+OIV (1400 \AA), CIV (1549 \AA) and CIII (1909 \AA) lines are obvious. From the position of the CIV (1549 \AA) line we estimate a redshift $z=2.769$.
After the spectroscopic observations we discovered that the field had been observed by SDSS and that the object situated at 3.2 arcsec from the nominal CP02 X-ray position has been catalogued as a star with magnitudes $r=18.62$ mag and $g=18.86$ mag. \subsection{IXO~35} This object is also listed in the LB05 catalogue and denoted as NGC~3226 X03. The USNO catalogue lists an object with magnitudes $b=18.8$ mag and $r=17.2$ mag at 2.8 and 2.1 arcsec from the CP02 and LB05 X-ray positions respectively. The observations taken with the IAC80 telescope show a point-like object with magnitudes 18.87, 17.97, 16.65 and 14.79 in the $B$, $V$, $R$ and $I$ bands respectively. The nearest neighbour listed in the USNO catalogue and detected with the IAC80 at $R=19.3$ mag is $\sim$27 arcsec to the SE. The source is also listed in 2MASS with magnitudes 13.24, 12.70 and 12.39 in the $J$, $H$ and $K$ bands respectively. According to the maps of Schlegel et al.\ (1998), possible corrections for galactic extinction are below 0.1 mag in the blue band. The optical spectrum of this object is dominated by strong absorption bands (VO, Na and TiO) typical of a cool star. We also detect the CaII H and K and Balmer emission lines ($H\alpha,~H\beta,~H\gamma,~H\delta$ and $H\epsilon$), although some of them are in the middle of absorption bands, making the estimation of equivalent widths uncertain. Following the calibration by Hawley et al.\ (2002), based on several photometric and spectroscopic indices, we classify the object as an M3--6 star. From this, together with the magnitude in the $J$ band, it is possible to estimate the distance and then the X-ray luminosity. The resulting distance is in the range 41 pc (for an M6 type) to 157 pc (for an M3 type), which corresponds to X-ray luminosities of $8.6\times10^{27}$ and $1.3\times10^{29}$ erg s$^{-1}$ respectively. These luminosities are within the range found by Schmitt \& Liefke (2004) for a volume-limited sample of nearby M stars.
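The pair of luminosities quoted above is just the inverse-square law evaluated at the two candidate distances; the arithmetic can be sketched as follows (the flux value is back-computed here from the quoted 41 pc figure, since the measured X-ray flux itself is not restated in the text):

```python
import math

PC_CM = 3.086e18  # one parsec in cm

def luminosity(flux_cgs, d_pc):
    """L = 4*pi*d^2*f for an isotropic source, in erg/s."""
    d_cm = d_pc * PC_CM
    return 4.0 * math.pi * d_cm ** 2 * flux_cgs

# Back out the implied X-ray flux from the quoted M6 solution (41 pc,
# 8.6e27 erg/s); this flux value is our inference, not a catalogued number.
f_x = 8.6e27 / (4.0 * math.pi * (41.0 * PC_CM) ** 2)

# The same flux placed at the M3 distance reproduces the second luminosity.
L_m3 = luminosity(f_x, 157.0)
print(f"implied flux {f_x:.1e} erg/s/cm^2, L(157 pc) = {L_m3:.1e} erg/s")
```

Scaling the quoted lower luminosity by $(157/41)^2 \approx 14.7$ indeed recovers the quoted upper value to within rounding.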
\subsection{IXO~37} The object is also listed in the LB05 catalogue with the name X02, around the galaxy IC~2597. A possible counterpart at a distance of $\sim$4 arcsec from the X-ray source appears in the optical images, with a magnitude of 19.44 in $R$ in the observations at the IAC80 telescope. The spectrum is shown in Fig.~2. We identify the forbidden narrow emission lines OII ($\lambda$3727 \AA) and OIII ($\lambda\lambda$4959, 5007 \AA) as the most important features. The redshift is $z=0.567$. At this redshift H$\beta$ lies at $\sim$7617 \AA, which we identify as a bump in the middle of a telluric band. We also identify H$\alpha$ at 10294 \AA~in a spectral region (not shown in the figure) of low detector sensitivity, severely contaminated by sky emission lines. Another emission line in the red arm is NeIII ($\lambda$3869 \AA). In the blue arm we detect the MgII ($\lambda$2799 \AA) emission line. The impossibility of measuring the fluxes of the H$\beta$ and H$\alpha$ lines precludes the application of the common diagnostic diagrams. However, based on the X-ray emission, we classify this object as a Seyfert I AGN. \subsection{IXO~40} DSS plates show only one possible optical counterpart of this ULX, at $\sim$2 arcsec from the nominal X-ray position and with magnitudes 17.9 and 19.1 in $r$ and $b$ respectively. The optical spectrum shows emission lines typical of AGN/QSOs. The main features are broad emission lines of CIII (1909 \AA) and MgII (2799 \AA) in the blue arm. OII ($\lambda$3727 \AA), NeIII ($\lambda$3869 \AA) and possibly OIII ($\lambda\lambda$4959, 5007 \AA) are detected in the red. The resulting redshift is $z=0.789$. \section{Discussion and conclusions} The poor spatial resolution of the {\itshape ROSAT} images is not an obstacle for the optical identifications presented here.
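The reliability arguments in this section rest on a simple Poisson chance-projection estimate, which can be sketched as follows (the quasar density of $\sim$3 deg$^{-2}$ and the 5 arcsec matching radius are illustrative values consistent with the discussion):

```python
import math

def chance_projection_prob(density_per_deg2, radius_arcsec):
    """Poisson probability of >=1 unrelated source falling within
    radius_arcsec of a given position, for a surface density in deg^-2."""
    r_deg = radius_arcsec / 3600.0
    mu = density_per_deg2 * math.pi * r_deg ** 2  # expected count in the circle
    return 1.0 - math.exp(-mu)                    # ~mu when mu << 1

# ~2-3 bright quasars per square degree brighter than 19 mag, as quoted below.
p_qso = chance_projection_prob(3.0, 5.0)
print(f"P(chance QSO within 5 arcsec) ~ {p_qso:.1e}")
```

Even with a generous matching radius the probability stays at the $10^{-5}$ level per source, which is the basis of the argument developed below.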
In fact, as discussed in Guti\'errez \& L\'opez-Corredoira (2005), the low density of bright quasars ($\sim$2--3 per square degree brighter than 19 mag) makes a chance projection between the X-ray sources and the quasars identified as the optical counterparts of IXO~32, 37 and 40 unlikely. A similar argument can be applied in the case of IXO~35: from the local density of M stars (Martini \& Osmer 1998), we have estimated that the probability of randomly finding such a source at $\sim 3$ arcsec from the nominal X-ray position is below $\sim 10^{-6}$. These simple arguments confirm the reliability of our identifications. The sample analysed in this paper suffers from several biases and therefore cannot be considered statistically representative of the whole sample of ULXs listed in the CP02 catalogue. In fact, the objects selected are restricted to those with a bright point-like optical counterpart ($\sim 19$ mag). For instance, we have thereby excluded a priori the possibility of detecting X-ray binary stars within the parent galaxy. The objects were also selected to be in relatively isolated regions that allow an unambiguous identification. This works against the detection of objects within star-forming complexes. The statistical implications of these and other identifications are being studied and will be addressed in a forthcoming paper (L\'opez-Corredoira \& Guti\'errez, A\&A, submitted). In any case, the results presented here reinforce the importance of multiwavelength studies of these X-ray sources as one of the most promising ways to disentangle the nature of these objects and to construct clean samples of ULX sources for further studies. \newpage \section*{Acknowledgements} The author is especially grateful to M. L\'opez-Corredoira, a close collaborator in this project, for useful suggestions and comments. We also thank J.~A. Caballero for useful hints about the properties of the M star identified as the possible counterpart of IXO~35.
The author was supported by the {\it Ram\'on y Cajal} Programme of the Spanish science ministry. \clearpage
\section{Introduction} Currently operating imaging atmospheric Cherenkov telescopes (IACTs): H.E.S.S., MAGIC and VERITAS, have reflective dishes segmented into mirror facets. The effective mirror area and the quality of the Cherenkov light shower images play a key role in the performance of these telescopes. At present, several mirror technologies are used by the different IACTs. Polished glass mirrors are used by the H.E.S.S. and VERITAS collaborations. The main issue with this technology is the degradation of the mirror's reflective layer when exposed to severe environmental conditions, which implies a need for re-coating after some time \cite{bib:foerster}. A different mirror type, consisting of diamond-milled aluminium facets with a quartz coating, is used for some of the MAGIC telescope mirrors \cite{Bastieri}. The main challenge of this technology is that the production of such mirrors is quite expensive and time-consuming. The future CTA observatory will have several tens of telescopes of at least three different types, and the currently available mirror technologies may not be sufficient for the production of the mirror facets for the CTA \cite{Acharya}. Besides the open-structure mirrors developed at INP PAS for MSTs, another type of glass cold-shaping technology is pursued at the INAF/Brera Astronomical Observatory \cite{Canestrari2013,Canestrari2014}. A different solution, also designed for MST-type telescopes and based on a closed sandwich technology, has been proposed by IRFU/CEA Saclay \cite{Brun}. The current status of the different mirror technologies designed for the CTA observatory is discussed in \cite{Pareschi}. An open-structure mirror technology has been developed at INP PAS since 2008. Prototype mirrors (full or reduced size) were manufactured for different telescope designs considered by the CTA collaboration, including mirrors with a radius of curvature of 23 meters designed for a 7 m single-mirror small-sized telescope.
The very first open-structure mirror prototypes were built at the beginning of 2009. We considered three different materials for the flat sandwich panels: aluminum, glass, and glass reinforced with carbon fibre tissue. Recently, mirror prototypes have been designed for the medium-sized telescopes (MST) \cite{Schlenstedt}, which have a classical Davies--Cotton construction \cite{DaviesCotton}. The basic feature of our mirror technology used so far is a flat, rigid support structure for the mirrors. This technology was finally used to produce the full-size MST mirrors, which are hexagonal in shape, with a size of 1.2 m flat-to-flat \cite{Dyrdaetal}. It represents a novel approach, different from the commonly applied solutions with closed aluminum honeycomb supports, which require that the side walls of the mirror be sealed perfectly to prevent water from penetrating the honeycomb and damaging the structure. \section{Technology Description} The MST mirrors should have a focal length of 16.07 m, hence a radius of curvature of 32.14 m, and a total reflectivity greater than 85\% in the wavelength range between 300 and 550 nm \cite{bib:baehr}. The CTA requirement for the MST mirror facets is that more than 80\% of the reflected Cherenkov light should be focused within 1/3 of the pixel size (50 mm including photomultiplier plus light cone), which is $\sim$ 17 mm \cite{Schlenstedt}. \begin{figure*}[!t] \centering \includegraphics[width=.6\textwidth]{fig1.eps} \caption{Open-structure composite mirror designed at INP PAS, Krakow.} \label{fig1} \end{figure*} The open-structure mirrors are to be used on the medium-sized Davies--Cotton telescope for CTA, and to ensure that a high-quality concave mirror surface is produced in the cold-slumping process, a high-precision mould is used. This mould, with a diameter of 1.6 m, is specially designed for this purpose, is equipped with vacuum and heating systems, and is mounted on a steel support.
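As a quick check of the figures quoted above (a trivial sketch; the variable names are our own), the radius of curvature and the d80 limit follow directly from the spherical-mirror relation $R = 2f$ and the one-third-pixel requirement:

```python
# The geometric relations behind the MST figures quoted above.
focal_length_m = 16.07
radius_of_curvature_m = 2.0 * focal_length_m   # R = 2f for a spherical mirror
pixel_mm = 50.0                                # photomultiplier plus light cone
d80_limit_mm = pixel_mm / 3.0                  # 80% light-containment requirement
print(radius_of_curvature_m, round(d80_limit_mm, 1))
```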
The open-structure mirror consists of a sandwich support structure and a spherical glass reflecting layer. In 2014, the INP PAS team, taking into account previous experience, designed and manufactured the first new open-structure mirror support structure prototype, as shown in Figure~\ref{fig1}. The reflective layer is made of a Borofloat 33 glass sheet \cite{bib:schott} and was coated with Al+SiO$_2$+HfO$_2$+SiO$_2$ by the German company BTE prior to gluing, which provides high durability of the reflective surface. In contrast to the previous solution \cite{Dyrdaetal}, the new type of MST mirror does not use a flat support structure; instead, the sandwich support structure consists of two convex glass panels separated by spacers, which are aluminum tubes. These tubes are glued to the convex panels with epoxy resin. The mirror is hexagonal in shape with a size of 1.2 m flat-to-flat, as previously. Both panels are made of ordinary float glass and their thickness is 2 mm. The aluminum tube spacers have a diameter of 40 mm and a length of 50 mm. To ensure the free flow of water inside the mirror, six slots are cut at both ends of the tubes (see Figure \ref{fig2}). The front panel is produced by laminating a glass sheet of thickness 2 mm, with epoxy resin, to a special reflective layer. A fibreglass tissue of thickness 0.4 mm is placed between these two layers to reinforce the structure and improve resistance to mechanical impact. In the case of the rear panel, two ordinary glass sheets with another fibreglass layer are simultaneously cold-slumped onto a convex mould. All the layers are glued together on a final vacuum mould. Three stainless steel pads are glued to the rear panel at 320 mm from the mirror centre; this interface system enables mounting of the actuators designed for the CTA mirrors. In the places where these mounting pads are glued to the rear panel, the aluminum spacers (tubes) are thickened to increase the local stiffness.
A stainless steel mesh is attached to the sidewalls to protect the sandwich structure against contamination by insects or bird waste, as shown in Figure \ref{fig2}. To ensure proper mirror installation on the telescope dish support structure, markers indicating the correct mirror orientation are used. In the last step, a special silicone rubber, resistant over a wide temperature range ($-$60 to $+$260 $^{\circ}$C), is attached to the mirror sidewalls to protect the mirror against damage during transportation and mounting. The final open-structure prototype mirror support structure is depicted in Figure \ref{fig2}. The weight of the new open-structure mirror is $\sim$32 kg, approximately 8 kg less than the previous solution, since the total amount of epoxy resin is reduced to a minimum. Thus, the new construction is much more homogeneous, and the final production process will be simpler and more effective, so the cost of the new open-structure MST mirrors will be reduced. The sequence of technological operations described above can be used to produce mirrors with a wide range of concave surfaces, but one should bear in mind that an increase in the flat-to-flat size of the mirror results in an increase of the minimum radius of curvature. \begin{figure} \centering \includegraphics[width=.8\textwidth]{fig2.eps} \caption{The front of an open-structure mirror with the reflective layer (left) and the sidewalls with the protective silicone rubber and stainless steel mesh (top right). Visible are the aluminum spacers with slots cut at both ends of the tubes (bottom right).} \label{fig2} \end{figure} \section{Test results} Eight hexagonal prototype mirrors were manufactured between November 2014 and May 2015 at INP PAS. Preliminary tests were performed on all of them to measure the Point Spread Function (PSF). The measurements were done using a test bench, which was designed and manufactured at INP PAS.
The test bench consists of a red laser emitting at a wavelength of 635 nm, a specially designed jig to mount the mirrors for measurements, and a CCD camera with software for image capture and processing. The CCD camera is equipped with a set of filters, which allows for measurements during daytime. The laser used for the PSF measurements emits at a slightly longer wavelength than specified in the CTA requirements, but our test with a blue laser (405 nm) shows very good agreement between the two light sources. The mirrors are placed at a distance equal to twice the nominal focal length (32.14 m) and a preliminary measurement of the PSF is made. The focal length of a mirror can be determined from its PSF measurements, since at the focal distance the mirror PSF is at its minimum. The results of a scan around twice the focal length are shown in Figure \ref{fig3}. Nine PSF measurements of this prototype mirror were made in the vicinity of the nominal focal length, and a quadratic function was fitted to obtain the minimum value of the PSF and hence the value of twice the focal length. The inferred value of twice the focal length for this mirror prototype is 32.15 $\pm$ 0.09 m (statistical error only), in very good agreement with the nominal value of 32.14 m. The PSF spot size d80, defined as the radius of the circle in which 80\% of the reflected light energy is contained, is 10.2 mm for this particular mirror prototype, which compares well with the CTA requirement for the MST mirrors that d80 $<$ 17 mm. \begin{figure} \centering \includegraphics[width=.6\textwidth]{fig3.eps} \caption{Scan around twice the focal length for one of the open-structure mirror prototypes. Red points denote measurement results and the grey line is the fitted function. The inferred value of twice the focal length is 32.15 m.} \label{fig3} \end{figure} At present, one of the open-structure mirror prototypes is undergoing extensive durability tests in a climate chamber at INP PAS.
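The quadratic-fit step described above can be sketched as follows (the nine measured PSF values are not reproduced in the text, so synthetic data with a minimum at the nominal 32.14 m are used here as a stand-in):

```python
import numpy as np

# Synthetic stand-in for the nine PSF measurements around twice the focal
# length: a parabola with its minimum at the nominal 2f = 32.14 m plus noise.
rng = np.random.default_rng(1)
d = np.linspace(31.6, 32.7, 9)                 # mirror-camera distance [m]
psf = 10.2 + 4.0 * (d - 32.14) ** 2 + rng.normal(0.0, 0.05, d.size)

# Fit PSF(u) = a*u^2 + b*u + c in centred coordinates; the minimum of the
# parabola sits at u = -b/(2a), which gives the inferred value of 2f.
u = d - d.mean()
a, b, c = np.polyfit(u, psf, 2)
two_f = d.mean() - b / (2.0 * a)
print(f"inferred 2f = {two_f:.2f} m")
```

Fitting in centred coordinates keeps the normal equations well conditioned; the vertex of the fitted parabola then recovers twice the focal length to within the statistical error quoted above.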
This prototype undergoes two thermal cycles per day with temperature changes from $-$20 to $+$40 $^{\circ}$C to check its geometrical stability and the stiffness of the structure. \section{Conclusions} A novel mirror technology for Cherenkov telescopes has been proposed. The advantage of this technology is that the manufacturing steps are independent of the coating processes, and hence different reflective layers can be used. The other important advantage of the mirror technology presented in this paper is its open architecture, which does not face the well-known problems of closed-structure and honeycomb technologies. Moreover, the open structure of the mirrors makes them pressure-equalize naturally when placed at high altitude. The much-simplified manufacturing technology, in comparison with previous solutions, results in a shorter production time and hence a lower price. The first two prototypes were sent to the central CTA test facility at Erlangen, where they are undergoing optical tests to measure their PSF and focal length. A fully-equipped production line has been built at INP PAS, with a production capacity of two mirrors per week. \acknowledgments{We gratefully acknowledge support from the agencies and organizations listed under Funding Agencies at this website: http://www.cta-observatory.org/.}
\section{Introduction} It is believed that the variety of magnetic fields observed in astrophysics and technology can be explained in terms of dynamo theory, e.g. \citep{HR2004}. The main idea is that the kinetic energy of conducting motions is transformed into the energy of the magnetic field. Magnetic field generation is a threshold phenomenon: it starts when the magnetic Reynolds number $\rm R_m$ reaches its critical value $\rm R_m^{\rm cr}$. After that the magnetic field grows exponentially up to the moment when it can feed back on the flow. This influence does not reduce to a simple suppression of the motions and a reduction of $\rm R_m$, but rather to a change in the spectra of the fields, closely connected to constraints caused by the conservation of magnetic energy and helicity \citep{BS2005}. Another important point is the effect of the phase shift and coherence of the physical fields before and after the onset of quenching, discussed in \citep{TB2008}. As a result, even after quenching the saturated velocity field is still large enough that $\rm R_m\gg R_m^{\rm cr}$. Moreover, a velocity field taken from the nonlinear problem (after the exponential growth of the magnetic field has stopped) can still generate an exponentially growing magnetic field, provided that the feedback of the magnetic field on the flow is omitted (the kinematic dynamo regime) \citep{CT2009, T2008, TB2008, SSCH2009}. In other words, the problem of the stability of the full dynamo equations, including the induction equation and the Navier--Stokes equation with the Lorentz force, differs from the stability problem of the single induction equation with a given saturated velocity field taken from the full dynamo solution: stability of the first problem does not guarantee stability of the second. Here we consider this kind of stability using the example of a model of the galactic dynamo in a thin disk, as well as some applications to the dynamo in a sphere.
\section{Dynamo in the thin disk} One of the simplest galactic dynamo models is a one-dimensional model in the thin disk \citep{RSS1988}: \begin{equation}\begin{array}{l}\displaystyle {\partial A\over \partial t} =\alpha B + A'' \\ \\ \displaystyle {\partial B\over \partial t} =-{\cal D} A'+ B'', \end{array}\label{sys11} \end{equation} where $A$ and $B$ are the azimuthal components of the vector potential and magnetic field, $\alpha(z)$ is the kinetic helicity, ${\cal D}$ is the dynamo number, which is a product of the amplitudes of the $\alpha$- and $\omega$-effects, and primes denote derivatives with respect to the cylindrical polar coordinate $z$. Equation (\ref{sys11}) is solved in the interval $-1\le z\le 1$ with the boundary conditions $B=0$ and $A'=0$ at $z=\pm 1$. We look for a solution of the form \begin{equation}\displaystyle (A,\,B)=e^{\gamma t} ({\cal A}(z),\, {\cal B}(z)). \label{sys12} \end{equation} Substituting (\ref{sys12}) in (\ref{sys11}) yields the following eigenvalue problem: \begin{equation}\begin{array}{l}\displaystyle \gamma {\cal A} =\alpha {\cal B} + {\cal A}'' \\ \\ \displaystyle \gamma {\cal B} =-{\cal D} {\cal A}'+ {\cal B}'', \end{array}\label{sys22} \end{equation} where the constant $\gamma$ is the growth rate. Since $\alpha(-z)=-\alpha(z)$ is an odd function of $z$, the generation equations have an important property: system (\ref{sys22}) is invariant under the transformation $z\to -z$ when \citep{Parker1971}: \begin{equation}\begin{array}{l}\displaystyle {\cal A}(-z)= {\cal A}(z), \quad {\cal B}(-z)= -{\cal B}(z) \\ \\ {\rm or} \\ \\ {\cal A}(-z)= -{\cal A}(z), \quad {\cal B}(-z)= {\cal B}(z). \end{array}\label{sys33} \end{equation} Therefore, all solutions may be divided into two groups: odd in ${\cal B}(z)$ (dipole, $\rm D$) and even in ${\cal B}(z)$ (quadrupole, $\rm Q$).
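Using the symmetry just described, the eigenvalue problem (\ref{sys22}) can be discretized in a few lines on the half-interval $0\le z\le 1$. The sketch below is an illustration only, not the solver used for the figures; the grid size, the ghost-cell treatment and $\alpha_0(z)=\sin(\pi z)$ are our choices, with quadrupole conditions ${\cal A}=0$, ${\cal B}'=0$ at $z=0$ and ${\cal A}'=0$, ${\cal B}=0$ at $z=1$:

```python
import numpy as np

def growth_rate(D, N=80):
    """Leading Re(gamma) of the discretized eigenvalue problem on 0 <= z <= 1,
    quadrupole conditions: A = 0, B' = 0 at z = 0; A' = 0, B = 0 at z = 1."""
    h = 1.0 / N
    z = (np.arange(N) + 0.5) * h                 # cell centres
    alpha = np.sin(np.pi * z)

    def laplacian(left, right):
        # Ghost cells: Dirichlet ("D") -> u_ghost = -u_edge,
        #              Neumann  ("N") -> u_ghost =  u_edge.
        L = np.diag(np.ones(N - 1), -1) - 2.0 * np.eye(N) + np.diag(np.ones(N - 1), 1)
        L[0, 0] += -1.0 if left == "D" else 1.0
        L[-1, -1] += -1.0 if right == "D" else 1.0
        return L / h**2

    LA = laplacian("D", "N")                     # A: Dirichlet at 0, Neumann at 1
    LB = laplacian("N", "D")                     # B: Neumann at 0, Dirichlet at 1

    # Central difference for A', with the same ghost-cell conventions as LA.
    D1 = np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)
    D1[0, 0] += 1.0                              # ghost A_{-1} = -A_0
    D1[-1, -1] += 1.0                            # ghost A_N = A_{N-1}
    D1 /= 2.0 * h

    M = np.block([[LA, np.diag(alpha)], [-D * D1, LB]])
    return np.linalg.eigvals(M).real.max()

# Pure diffusion (D = 0) decays; a supercritical dynamo number gives growth
# (the paper quotes D_cr ~ -8 for the non-oscillatory quadrupole mode).
print(growth_rate(0.0), growth_rate(-20.0), growth_rate(20.0))
```

Scanning ${\cal D}$ for the zero of the leading $\Re\gamma$ should then recover a quadrupole generation threshold near ${\cal D}^{\rm cr}\approx -8$, up to discretization details.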
Then we can replace $-1\le z\le 1$ with the interval $0\le z\le 1$ and the following boundary conditions at $z=0$: ${\cal A}'=0$, ${\cal B}=0$ ($\rm D$) and ${\cal A}=0$, ${\cal B}'=0$ ($\rm Q$). Usually $\alpha=\alpha_0$ with $\alpha_0(z)=\sin(\pi z)$ is used; see also \citep{Soward1978} for the $\alpha_0(z)=z$ dependence, more suitable for analytical applications. System (\ref{sys22}) has a growing solution, $\Re\gamma>0$, when $|{\cal D}|>|{\cal D}^{\rm cr}|$. For ${\cal D}<0$ the first excited mode is the quadrupole with ${\cal D}^{\rm cr}\approx -8$ and $\Im\gamma =0$: the solution is non-oscillatory\footnote{For our Galaxy the usual estimate is ${\cal D}=-10$.}. For ${\cal D}>0$ the leading mode is an oscillatory dipole, $\Im\gamma \ne 0$, with a higher generation threshold: ${\cal D}^{\rm cr}\sim 200$. Introducing a nonlinearity of the form \begin{equation}\begin{array}{l}\displaystyle \alpha(z)={\alpha_0(z)\over 1+E_m} \quad {\rm for}\quad |{ B}|\gg 1 \end{array}\label{non} \end{equation} in (\ref{sys11}), where $\displaystyle E_m=( B^2+{ A'}^2)/2$ is the magnetic energy, gives stationary solutions for the $\rm Q$ symmetry and quasi-stationary solutions for $\rm D$; see \citep{Beck} for various forms of nonlinearities. The properties of the nonlinear solution are mostly defined by the form of the first eigenfunction. Now, in the spirit of \citep{CT2009, TB2008}, we add to (\ref{sys11}) equations for a new magnetic field $(\widehat{A},\, \widehat{B})$ with the same $\alpha$ (\ref{non}), which depends on $({A},\, {B})$ and does not depend on $(\widehat{A},\, \widehat{B})$: \begin{equation}\begin{array}{l} \displaystyle {\partial A\over \partial t} =\alpha {B} + A'' \\ \\ \displaystyle {\partial B\over \partial t} =-{\cal D} A'+ {B}'' \\ \\ \displaystyle {\partial \widehat{A}\over \partial t} =\alpha \widehat{B} + \widehat{A}'' \\ \\ \displaystyle {\partial \widehat{B}\over \partial t} =-{\cal D} \widehat{A}'+ \widehat{B}''.
\end{array}\label{sys44} \end{equation} Numerical simulations demonstrate that for negative ${\cal D}$ both $(A,\, B)$ and $(\widehat{A},\, \widehat{B})$ are steady; however, the final magnitudes of $(\widehat{A},\, \widehat{B})$ depend on the initial conditions for $(\widehat{A},\, \widehat{B})$, see Fig.~\ref{Fig1}. The procedure was the following: equations (\ref{sys11}, \ref{non}) for $(A,\, B)$ were integrated up to the moment $t=t_0$, then the full system (\ref{sys44}) was simulated with the initial conditions for $(\widehat{A},\, \widehat{B})$ in the form $\displaystyle(\widehat{A},\, \widehat{B})\Big{|}_{t=t_0}=(A,\, B)\Big{|}_{t=t_0}(1+{\cal C}\varepsilon)$, where $\varepsilon \in [-0.5,\, 0.5]$ is a random variable and $\cal C$ is a constant. Both vectors $(A,\, B)$ and $(\widehat{A},\, \widehat{B})$ are stable in time; however, the final magnitude of $\widehat{ E_m}$ for ${\cal C}\ne 0$ depends slightly on ${\cal C}$. The alignment of the fields $(A,\, B)$ and $(\widehat{A},\, \widehat{B})$ follows from the linearity and homogeneity of the equations for $(\widehat{A},\, \widehat{B})$, where $\alpha(z,\, E_m)$ is given. Later we consider the stability of $(\widehat{A},\, \widehat{B})$ in more detail. \begin{figure} \centering \psfrag{t}{$t$} \psfrag{M}{$t_0$} \psfrag{h}{${\cal C}=1,\, {\cal C}=10$} \psfrag{n}{${\cal C}=0$} \psfrag{EAS}{$E_m,\, \widehat{E_m}$} \hskip -2cm \includegraphics[width=9cm]{fig1.eps} \caption{Evolution of the magnetic energy $E_m$ for $t<t_0$, governed by the system (\ref{sys11},\ref{non}) for ${\cal D}=-10$. At the moment $t_0=300$ the new magnetic field $(\widehat{A},\, \widehat{B})$, with the initial conditions defined by the constant $\cal C$, is switched on, see (\ref{sys44},\ref{non}). The plots of $E_m$ and $\widehat{E_m}$ (with ${\cal C}=0$) for $t>t_0$ coincide. All the solutions are stationary.
} \label{Fig1} \end{figure} For ${\cal D}>0$ the situation is different, resembling the instability described in \citep{CT2009, T2008, TB2008, SSCH2009} for more sophisticated models: the field $(\widehat{A},\, \widehat{B})$ oscillates and starts to grow exponentially, see Fig.~\ref{Fig2}. Note that no regular regime in the oscillations of $(\widehat{A},\, \widehat{B})$ is observed. The other specific feature is the delay of $(\widehat{A},\, \widehat{B})$ relative to $(A,\, B)$: $\displaystyle\theta\approx-{\pi\over 3}$. If $E_m$ in (\ref{non}) is averaged over space, so that $\alpha$ is steady, then the instability disappears. The question arises: does the instability depend on stationarity, or does it depend on something else? It is known that for ${\cal D}<0$ the stability of the system (\ref{sys22}, \ref{non}), which has a stationary solution, is tightly bound to the behaviour of the linear solution of (\ref{sys22}) for ${\cal D}>0$ \citep{RSS1992}. Note that for the complex form of (\ref{sys22}) this is equivalent to the solution of the conjugate problem. Let $(\widetilde{A},\, \widetilde{B})=({\cal A}+{ a},\, {\cal B}+{ b})$, where $({\cal A},\, {\cal B})$ is a solution of the nonlinear problem and $({ a},\, { b})$ is a perturbation with the same boundary conditions as for $({\cal A},\, {\cal B})$. Putting $(\widetilde{A},\, \widetilde{B})$ in (\ref{sys22}) with $\displaystyle \alpha\approx\alpha_0+{\partial\alpha\over\partial {\cal B}} { b}$ yields equations for $({ a},\, { b})$\footnote{It is usually supposed that in $\alpha\omega$-dynamo models $B\gg A'$.
}: \begin{equation}\begin{array}{l}\displaystyle \gamma a =\alpha^{\rm e} b + a'' \\ \\ \displaystyle \gamma b =-{\cal D} a'+ b'', \end{array}\label{sys333} \end{equation} where $\displaystyle\alpha^{\rm e}=\alpha+{\partial \alpha\over\partial {\cal B}}{\cal B}$ for $\displaystyle\alpha={\alpha_0\over 1 + {\cal B}^2}$ is \begin{equation}\displaystyle \alpha^{\rm e}={1-{\cal B}^2\over \left(1+{\cal B}^2\right)^2}\alpha_0\sim -{\alpha_0\over {\cal B}^2}\quad {\rm for}\quad |{\cal B}|\gg 1. \label{non1} \end{equation} The behaviour of the $\alpha\omega$-dynamo (\ref{sys22}) is defined by the sign of ${\cal D}\alpha$, and its change in the perturbed equations (\ref{sys333}) is important. In other words, instead of the nonlinear equations (\ref{sys22},\ref{non}) we come to the linear problem (\ref{sys22}) with a given $\alpha=\alpha(z,\, E_m)$ and an effective dynamo number $\displaystyle {\cal D}^{\rm e}= -{{\cal D}\over {\cal B}^2}$. \begin{figure} \psfrag{t}{$t$} \psfrag{M}{$t_0$} \psfrag{h}{${\cal C}=0$} \psfrag{n}{${\cal C}=10^{-2}$} \psfrag{EAS}{$E_m,\, \widehat{E_m}$} \hskip -1cm \includegraphics[width=9cm]{fig2.eps} \caption{Evolution of the magnetic energy $E_m$ for $t<t_0$, governed by the system (\ref{sys11},\ref{non}) for ${\cal D}=300$. At the moment $t_0=30$ the new magnetic field $(\widehat{A},\, \widehat{B})$, with the initial conditions defined by the constant $\cal C$, is switched on, see (\ref{sys44},\ref{non}). The plots of $E_m$ and $\widehat{E_m}$ (with ${\cal C}=0$) for $t>t_0$ coincide. For ${\cal C}\ne 0$, after an intermediate regime the phase shift $\theta$ between $E_m$ and $\widehat{E_m}$ increases and exponential growth of $\widehat{E_m}$ starts. } \label{Fig2} \end{figure} Then the stability of the fields $(\widehat{A},\, \widehat{B})$ for negative $\cal D$ can be explained as follows.
For negative ${\cal D}$ the solution $(\widehat{A},\, \widehat{B})$ is finite and stable because the generation threshold ${\cal D}^{\rm cr}_{+}$ for (\ref{sys22}) is much larger than $\displaystyle {\cal D}^{\rm e}$, ${\cal D}^{\rm cr}_{+}\gg {\cal D}^{\rm e}$. The field $(\widehat{A},\, \widehat{B})$ is defined up to an arbitrary factor, which corresponds to the alignment of the vectors $(A,\, B)$ and $(\widehat{A},\, \widehat{B})$. Note that ${\cal D}^{\rm cr}_{+}\ll {\cal D}^{\rm e}$ does not guarantee that $(\widehat{A},\, \widehat{B})$ will grow exponentially, due to the nonlinearity (\ref{non}). It is worth noting that the nonlinear solutions of (\ref{sys11},\ref{non}) and (\ref{sys44},\ref{non}) demonstrate similar stationary behaviour even for ${\cal D}\sim -10^{3}$, in spite of the fact that ${\cal D}^{\rm cr}_{+}$ for the quadrupole oscillatory mode at positive ${\cal D}$ is $\sim 200$. The reason is that the dynamo system tends to a state of strong magnetic field with ${\cal B}\sim {\cal D}^{1/2}$, so that $\displaystyle \alpha\sim {1\over {\cal B}^2 }$, leaving ${\cal D}^{\rm e}$ at the level of the generation threshold of the first mode. For positive $\cal D$, $(A,\, B)$, and therefore $\alpha(B)$, oscillate, and one needs additional information on the correlation of the waves. Here, instead of (\ref{non1}) we get $\displaystyle \alpha^{\rm e}\sim -{\alpha_0 \widehat{B}\over |B|^3}$. If the phase shift between $B$ and $\widehat{B}$ is negligible, then the $\alpha$-effect is saturated and the time evolution of $(A,\, B)$ and $(\widehat{A},\, \widehat{B})$ is similar. However, simulations demonstrate (Fig.~\ref{Fig2}) that the field $(\widehat{A},\, \widehat{B})$ is delayed relative to $(A,\, B)$. This is a typical situation in which parametric resonance takes place: $\alpha$ is modulated by a signal with frequency $\Omega\sim 2\omega$, $\omega=\Im\gamma$; see \citep{DP2008} for details of the spatial resonance.
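The quenched $\alpha^{\rm e}$ in (\ref{non1}) above, including its sign reversal at $|{\cal B}|=1$, is easy to confirm numerically; a minimal sketch (with the arbitrary normalization $\alpha_0=1$):

```python
import numpy as np

# Finite-difference check of the effective alpha in (non1):
# alpha_e = alpha + B * d(alpha)/dB for alpha = alpha_0 / (1 + B^2),
# with the arbitrary normalization alpha_0 = 1.
alpha0 = 1.0
B = np.linspace(0.1, 5.0, 50)
eps = 1e-6

alpha = lambda b: alpha0 / (1.0 + b ** 2)
dalpha_dB = (alpha(B + eps) - alpha(B - eps)) / (2.0 * eps)
alpha_e_numeric = alpha(B) + B * dalpha_dB
alpha_e_formula = alpha0 * (1.0 - B ** 2) / (1.0 + B ** 2) ** 2

# The quenched alpha changes sign at |B| = 1 and approaches -alpha_0/B^2.
print(np.max(np.abs(alpha_e_numeric - alpha_e_formula)))
```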
This assumption is supported by the fact that the instability disappears when a steady, time-averaged $\alpha$ is used in (\ref{non}). Note that the use of quadrupole boundary conditions in (\ref{sys44}) is not essential for the instability: the problem (\ref{sys44},\ref{non}) with periodic boundary conditions has an oscillatory solution, and the instability depends on the form of the quenching in the same way. To demonstrate what happens, we consider how the delay $\theta$ of $(\widehat{A},\, \widehat{B})$ relative to $(A,\,B)$ changes the production of $\widehat{A}^2+\widehat{B}^2$ near the threshold of generation ${\cal D}^{\rm cr}_{+}$. We start from the linear analysis of the system in the form: \begin{equation}\begin{array}{l} \displaystyle i\omega \widehat{A} =\alpha \widehat{B} -k^2 \widehat{A} \\ \\ \displaystyle i\omega \widehat{B} =-i{\cal D}^{\rm cr}_{+} k \widehat{A}-k^2 \widehat{B}. \end{array}\label{sys44aa} \end{equation} From the solvability condition for (\ref{sys44aa}) with $\alpha=\alpha_0$, $\displaystyle(k^2+i\omega)^2= -i{\cal D}^{\rm cr}_{+} k\alpha_0$, it follows that $\omega^2=k^4=1$. The other prediction of the linear analysis is the phase shift $\varphi$ between $\widehat{A}$ and $\widehat{B}$: $\displaystyle \varphi=\pm {\pi\over 4}$, which is half of that in the nonlinear regime \citep{TB2008}; in the nonlinear regime the maximum of $\widehat{A}$ occurs when $\widehat{B}$ is zero and the quenching is absent. Then, putting in (\ref{sys44}) $B=b\sin(x-t)$, $\widehat{A}=\sin(x-t+\varphi+\theta)$, $ \widehat{B}= \sin(x-t+\theta)$, and $\displaystyle\alpha={1\over 1+B^2}$, we obtain how the generation depends on $\theta$. The equation for $\widehat{B}$ does not involve the original field $(A,\,B)$, so we consider only the production of $\widehat{A}^2$. Then $\delta \widehat{A}(\varphi,\,\theta)=\alpha_0\int\limits_0^{2\pi}{\displaystyle \widehat{B}\widehat{A}\over\displaystyle 1+B^2}\, dt$.
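The production integral $\delta\widehat{A}(\varphi,\,\theta)$ can be evaluated numerically. The sketch below is our own illustration, assuming $b=1$ and $\alpha_0=1$, so the overall constants depend on this normalization; it exhibits the generic structure $h_1+h_2\tan\varphi$ of the result and the growth of $|\Pi|$ as $\varphi\to\pm\pi/2$ for $\theta\ne 0$.

```python
import numpy as np

def dA(phi, theta, b=1.0, alpha0=1.0, n=4096):
    """delta A = alpha0 * int_0^{2pi} Bhat*Ahat/(1+B^2) ds over one period,
    with B = b*sin(s), Ahat = sin(s+phi+theta), Bhat = sin(s+theta)."""
    s = np.arange(n) * (2 * np.pi / n)             # periodic nodes
    f = np.sin(s + theta) * np.sin(s + phi + theta) / (1.0 + (b * np.sin(s))**2)
    return alpha0 * (2 * np.pi / n) * f.sum()      # rectangle rule, spectrally accurate

def Pi(phi, theta):
    """Ratio of productions with and without the delay theta."""
    return dA(phi, theta) / dA(phi, 0.0)
```

For $\theta=0$ and $b=1$ the integral reduces analytically to $2\pi(1-1/\sqrt{2})\cos\varphi$, so that $\Pi\equiv 1$; for $\theta\ne 0$ a component proportional to $\sin\varphi$ appears and $|\Pi|$ grows without bound as $\varphi\to\pm\pi/2$.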
If $\displaystyle|\Pi|\gg 1$, where $\displaystyle\Pi={\delta \widehat{A}(\varphi,\,\theta)\over \delta \widehat{A}(\varphi,\,0)}$, then $(\widehat{A},\, \widehat{B})$ is unstable. The exact expression for $\delta \widehat{A}$ is: \begin{equation}\begin{array}{l} \displaystyle \delta \widehat{A}(\varphi,\,\theta)=h_1 + h_2 \tan(\varphi), \\ \\ \displaystyle h_1={ \cos(\theta)^2(4 -3 2^{1/2}) -2(2^{1/2}-1) \over 2^{1/2}(2^{1/2}-1)},\\ \\ \displaystyle h_2={\sin(2\theta)( 3 2^{1/2} -4) \over 2^{3/2}(2^{1/2}-1)}. \end{array}\label{sys44aa1} \end{equation} If $\theta=0$ then $h_2=0$ and $\delta\widehat{A}(\varphi,\,0)=h_1=2^{1/2}-4$. Then, for $\displaystyle\theta= {\pi\over 3}$, $\displaystyle h_1={2^{1/2}-10\over 4}$, $\displaystyle h_2=-{3^{1/2}\over 4}(2^{1/2}-1)$, and $\Pi$ becomes singular as $\displaystyle\varphi\to \pm {\pi\over 2}$, so the instability appears. Summarizing the results for the steady and oscillatory dynamos, we have the following predictions for the stability of the field $(\widehat{A},\, \widehat{B})$. For ${\cal D}<0$, $(A,\,B)$ is steady and $(\widehat{A},\, \widehat{B})$ is unstable when $\displaystyle\Big{|}{{\cal D}\over {\cal D}^{\rm cr}_{+} B^2}\Big{|} \gg 1$. When $(A,\,B)$ oscillates, $(\widehat{A},\, \widehat{B})$ continues to oscillate with $(A,\,B)$, increasing the phase shift between $(\widehat{A},\, \widehat{B})$ and $(A,\,B)$. Then an instability caused by parametric resonance may arise. \section{Conclusions} Here we argue that the stability of the kinematic $\alpha\omega$-dynamo problem, with the $\alpha$-effect taken from the weakly-nonlinear regime near the threshold of generation, can be predicted from knowledge of the threshold of generation of the linear problem with the opposite sign of the dynamo number. It appears that, in spite of the fact that the magnetic field has already saturated $\alpha$, it can still generate magnetic field if the spectra of the linear problem are similar for dynamo numbers $\cal D$ of opposite sign.
Since $\cal D$ depends on the product of the $\alpha$- and $\omega$-effects, a similar analysis can be performed with the $\omega$-quenching usually used in geodynamo models, see e.g. \citep{Soward1978}, as well as with the feedback of the magnetic field on diffusion. It is likely that for more complex systems a velocity field taken from the saturated regime, with many excited modes, will always generate magnetic field if the Lorentz force is omitted. Since the nonlinearity (\ref{non}) has quite a general form, we consider applications of these results to some other dynamo models. Linear analysis of the axisymmetric $\alpha\omega$-equations gives the following, see \citep{Moffatt} and references therein: for positive ${\cal D}$ (which is believed to be the case in the Earth), in the presence of the meridional velocity $U_p$ the first excited mode is a dipole with $\Im\gamma=0$. Reducing $U_p$ first leads to an oscillatory dipole solution (the regime of frequent reversals \citep{B1964}). Further reduction of $U_p$ gives the quadrupole oscillatory regime with a larger value of ${\cal D}^{\rm cr}$. For negative $\cal D$ and $U_p\ne 0$ the first mode is a quadrupole with $\Im\gamma=0$. $U_p\to 0$ gives a non-oscillatory dipole mode with decreased ${\cal D}^{\rm cr}$; see \citep{Meunier1997} for more details. In contrast to the dynamo in the disk, the thresholds of generation for positive and negative $\cal D$ in the sphere are of the same order, so the situation with the stability of the field $\widehat{\bf B}$ is uncertain and can depend on the particular form of the $\alpha$- and $\omega$-effects. In any case, stability of $\widehat{\bf B}$ in the steady regime is more likely. In accordance with \citep{CT2009}, shell models of turbulence demonstrate exponential growth of the magnetic field. This case, as well as 3D simulations of turbulence in the box, which show the same instabilities, corresponds to the oscillatory regimes and, according to our predictions, should be unstable.
In the case of the 3D dynamo in the sphere, simulations demonstrate different behaviour of $\widehat{\bf B}$ \citep{T2008, SSCH2009}. For small Rayleigh numbers, when the preferred solution is a dipole (in oscillations) and close to a single-mode structure (Case 1 in \citep{Ch2001}), $\widehat{\bf B}$ is finite. Increase of the Rossby number \citep{SSCH2009} leads to a turbulent state, and $\widehat{\bf B}$ becomes unstable. The author is grateful to A.~Brandenburg and D.~Sokoloff for discussions.
\section{Introduction} Research on compressive sensing \cite{CompressedSensing-06,Candes-CompressiveSampling-06} focuses on properties of underdetermined linear systems \begin{equation} \label{eq:Ax=b} A x = b,\qquad A \in \mathbb{R}^{m \times n},\qquad m \ll n, \end{equation} that ensure the accurate recovery of sparse solutions $x$ from observed measurements $b$. Strong assertions are based on random ensembles of measurement matrices $A$ and on measure concentration in high dimensions, which make it possible to prove good recovery properties with high probability \cite{L1LPSparseApproximate-06,ErrorCorrectingLP-05}. A common obstacle in various application fields is that the options for \emph{designing} a measurement matrix so as to exhibit desirable mathematical properties are very limited. Accordingly, recent research has also been concerned with more restricted scenarios, spurred by their relevancy to applications (cf.~Section \ref{sec:related-work}). Consequently, we consider a representative scenario, motivated by applications in experimental fluid dynamics (Fig.~\ref{fig:TomoPIV}). A suitable mathematical abstraction of this setup gives rise to a huge and severely underdetermined linear system \eqref{eq:Ax=b} that has additional properties: a \emph{very sparse} nonnegative measurement matrix $A$ with \emph{constant small support} of all column vectors, and a nonnegative sparse solution vector $x$: \begin{equation} \label{eq:Axb-properties} A \geq 0,\; x \geq 0,\qquad |\supp(\col{A}{j})| = \ell \ll m,\qquad \forall j = 1,\dotsc,n. \end{equation} Our objective is the usual one: relating accurate recovery of $x$ from given measurements $b$ to the sparsity $k = \|x\|_{0}$ of the solution $x$ and to the dimensions $m, n$ of the measurement matrix $A$. The sparsity parameter $k$ has an immediate physical interpretation (Fig.~\ref{fig:TomoPIV}). Engineers require high values of $k$, but are well aware that too high values lead to spurious solutions.
The current practice is based on a rule of thumb leading to conservatively low values of $k$. In this paper, we are concerned with working out a better compromise along with a mathematical underpinning. The techniques employed are general and specific only to the class of linear systems \eqref{eq:Ax=b}, \eqref{eq:Axb-properties}, rather than to a particular application domain. \begin{figure} \centerline{ \includegraphics[width=0.3\textwidth]{setupTomoPIV} \hspace{0.025\textwidth} \includegraphics[width=0.5\textwidth]{TomoReconstr} } \caption{ Compressive sensing in experimental fluid dynamics: A multi-camera setup gathers few projections from a sparse volume function. This scenario is described by a very large and highly underdetermined sparse linear system \eqref{eq:Ax=b} having the additional properties \eqref{eq:Axb-properties}. The sparsity parameter $k$ reflects the seeding density of a given fluid with particles. Less sparse scenarios increase the spatial resolution of subsequent studies of turbulent motions, but compromise accuracy of the reconstruction. Research is concerned with working out and mathematically substantiating the best compromise. } \label{fig:TomoPIV} \end{figure} We regard the measurement matrix $A$ as \emph{given}. Concerning the design of $A$, we can only resort to small random perturbations of the non-zero entries of $A$, thus preserving the sparse structure that encodes the underlying incidence relation of the sensor. Additionally, we exploit the fact that solution vectors $x$ can be regarded as samples from a uniform distribution over $k$-sparse vectors, which represents with sufficient accuracy the underlying physical situation. Under these assumptions, we focus on an \emph{average case analysis} of conditions under which \emph{unique recovery} of $x$ can be expected with \emph{high probability}. A corresponding tail bound implies a weak threshold effect and a criterion for adequately choosing the value of the sparsity parameter $k$.
Our results are in excellent agreement with numerical experiments and improve the state-of-the-art by a factor of three. \subsection*{Contribution and Organization} In Section \ref{sec:preliminaries}, we detail the mathematical abstraction of the imaging process and discuss directly related work. In Section \ref{sec:expanders}, we examine recent results in compressive sensing based on sparse expanders. This sets the stage for an average case analysis conducted in Section \ref{sec:weak-equivalence} and corresponding weak recovery properties, which are in sharp contrast to the poor strong recovery properties presented in Section \ref{sec:strong}. We conclude with a discussion of quantitative results and their agreement with numerical experiments in Section \ref{sec:experiments}. \subsection*{Notation} $|X|$ denotes the cardinality of a finite set $X$ and $[n] = \{1,2,\dotsc,n\}$ for $n \in \mathbb{N}$. We denote by $\|x\|_{0} = |\{i \colon x_{i} \neq 0 \}|$ the number of non-zero components of $x$ and by $\mathbb{R}_{k}^{n} = \{ x \in \mathbb{R}^{n} \colon \|x\|_{0} \leq k \}$ the set of $k$-sparse vectors. The corresponding sets of non-negative vectors are denoted by $\mathbb{R}_{+}^{n}$ and $\mathbb{R}_{k,+}^{n}$, respectively. The support of a vector $x\in \mathbb{R}^{n}$, $\mrm{supp}(x) \subseteq [n]$, is the set of indices of non-vanishing components of $x$. With $I^{+}(x) = \{i \colon x_{i} > 0\}$, $I^{0}(x) = \{i \colon x_{i} = 0\}$ and $I^{-}(x) = \{i \colon x_{i} < 0\}$, we have $\mrm{supp}(x) = I^{+}(x) \cup I^{-}(x)$ and $\|x\|_{0} = |\mrm{supp}(x)|$. For a finite set $S$, the set $\mc{N}(S)$ denotes the union of all neighbors of elements of $S$, where the corresponding relation (graph) will be clear from the context. $\mathds{1} = (1,\dotsc,1)^{\top}$ denotes the one-vector of appropriate dimension. $\col{A}{i}$ denotes the $i$-th column vector of a matrix $A$. For given index sets $I, J$, matrix $A_{I J}$ denotes the submatrix of $A$ with rows and columns indexed by $I$ and $J$, respectively.
$I^{c}, J^{c}$ denote the respective complement sets. Similarly, $b_{I}$ denotes a subvector of $b$. $\mathbb{E}[\cdot]$ denotes the expectation operation applied to a random variable and $\Pr(A)$ the probability to observe an event $A$. \newpage \section{Preliminaries} \label{sec:preliminaries} \subsection{Imaging Setup and Representation} \label{sec:setup} We refer to Figure \ref{fig_1} for an illustration of the mathematical abstraction of the scenario depicted by Figure \ref{fig:TomoPIV}. In order to handle the 2D and 3D cases in parallel, we will use the variable \begin{equation} \label{eq:def-D} D \in \{2,3\}. \end{equation} We measure the \textbf{problem size} in terms of $d \in \mathbb{N}$ and consider $n:=d^D$ \textbf{cells} in a square ($D=2$) or cube ($D=3$) and $m:=Dd^{D-1}$ \textbf{rays}, compare Fig.~\ref{fig_1}, left and right. It will be useful to denote the set of cells by $C = [n]$ and the set of rays by $R = [m]$. The incidence relation between cells and rays is given by an $m\times n$ \textbf{measurement matrix} $A_d^D$ \begin{equation} \label{eq:def-AdD} (A_d^D)_{ij}= \begin{cases} 1,& \quad \text{if $j$-th ray intersects $i$-th cell},\\ 0, & \quad \text{otherwise}, \end{cases} \end{equation} for all $i\in[m]$, $j\in[n]$. Thus, cells and rays correspond to columns and rows of $A_d^D$. The incidence relation encoded by $A_d^D$ gives rise to the equivalent representation in terms of a \textbf{bipartite graph} $G = (C,R;E)$ with left and right vertices $C$ and $R$, and edges $cr \in E$ iff $(A_{d}^{D})_{rc} = 1$. Figure \ref{fig_1} illustrates that $G$ has \textbf{constant left-degree} $\ell = D$. It will be convenient to use a separate symbol $\ell$. For a fixed vertex $i$, any adjacent vertex $j \sim i$ is called \textbf{neighbor} of $i$. For any non-negative measurement matrix $A$ and the corresponding graph, the set \[ \calN(S) = \{i \in [m] \colon i \sim j,\, j \in S \} = \{i\in[m] \colon A_{ij}>0,\, j\in S\} \] contains all neighbors of $S$.
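For illustration, the 2D matrix $A_d^2$ can be assembled directly from the incidence relation: cell $(i,j)$ of the $d\times d$ grid is met by horizontal ray $i$ and vertical ray $j$. The following sketch (our own illustration, not code from the paper) constructs $A_d^2$ and checks the constant left-degree $\ell=D=2$; note that the single linear dependency between the row-ray sums and the column-ray sums gives $\mathrm{rank}(A_d^2)=2d-1$.

```python
import numpy as np

def A_2d(d):
    """Incidence matrix of d^2 cells and 2d axis-parallel rays (D = 2)."""
    A = np.zeros((2 * d, d * d))
    for i in range(d):
        for j in range(d):
            c = i * d + j        # column index of cell (i, j)
            A[i, c] = 1.0        # horizontal ray i sums row i
            A[d + j, c] = 1.0    # vertical ray j sums column j
    return A

A = A_2d(6)
```

Each column has exactly two non-zero entries and each ray meets $d$ cells, in agreement with Fig.~\ref{fig_1}, left.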
The same notation applies to neighbors of subsets $S \subset [m]$ of right nodes. With slight abuse, we call the matrix $A_{d}^{D}$ that encodes the adjacency $r \sim c$ of vertices $r \in R$ and $c \in C$ the \textbf{adjacency matrix} of the induced bipartite graph $G$, deviating from the usual definition of the adjacency matrix of a graph, which encodes the adjacency of \emph{all} nodes $v_{i} \sim v_{j},\, V = C \cup R$. Moreover, in this sense, we will call any non-negative matrix an adjacency matrix, based on its non-zero entries. Let $A$ be the non-negative adjacency matrix of a bipartite graph with constant left degree $\ell$. The \textbf{perturbed matrix} $\tilde A$ is computed by uniformly perturbing the non-zero entries $A_{ij} > 0$ to obtain $\tilde A_{ij} \in [A_{ij}-\varepsilon,A_{ij}+\varepsilon]$, and by subsequently normalizing all column vectors of $\tilde A$. In practice, such a perturbation can be implemented by discretizing the image by radial basis functions and choosing their locations on an irregular grid, see \cite{Petra2009}. \vspace{0.25cm} The following class of graphs plays a key role in the present context and in the field of compressed sensing in general. \begin{definition}\label{def:Expander} A \textbf{$(\nu,\delta)$-unbalanced expander} is a bipartite simple graph $G = (L,R;E)$ with constant left-degree $\ell$ such that for any $X \subset L$ with $|X| \leq \nu$, the set of neighbors $\calN(X) \subset R$ of $X$ has size at least $|\calN(X)| \geq \delta \ell |X|$.
\end{definition} \subsection{Deviation Bound} We will apply the following inequalities for bounding the deviation of a random variable from its expected value based on martingales, that is on sequences of random variables $(X_{i})$ defined on a finite probability space $(\Omega, \mc{F}, \mu)$ satisfying \begin{equation} \label{eq:condition-martingale} \mathbb{E}[X_{i+1}|\mc{F}_{i}] = X_{i},\qquad \text{for all}\quad i \geq 1, \end{equation} where $\mc{F}_{i}$ denotes an increasing sequence of $\sigma$-fields in $\mc{F}$ with $X_{i}$ being $\mc{F}_{i}$-measurable. This setting applies to random variables associated to measurements that are statistically \emph{dependent} due to the intersection of projection rays (cf.~Fig.~\ref{fig_1}). \begin{theorem}[Azuma's Inequality \cite{Azuma1967,DasGupta2008}] \label{thm:Azuma} Let $(X_{i})_{i=0,1,2,\dotsc}$ be a sequence of random variables such that for each $i$, \begin{equation} \label{eq:def-ci} |X_{i}-X_{i-1}| \leq c_{i}. \end{equation} Then, for all $j \geq 0$ and any $\delta > 0$, \begin{equation} \Pr\big(|X_{j}-X_{0}| \geq \delta\big) \leq 2 \exp\Big( -\frac{\delta^{2}}{2 \sum_{i=1}^{j} c_{i}^{2}}\Big). \end{equation} \end{theorem} \begin{figure} \centerline{ \includegraphics[width=0.35\textwidth]{Setup-2D} \includegraphics[width=0.4\textwidth]{Setup-3D} } \caption{ {\bf Left:} 2D imaging geometry with $d^2$ cells and $2 d$ projection rays (here: $d=6$). The incidence relation is given by the measurement matrix $A = A_d^2$ (cf.~Eqn.~\eqref{eq:def-AdD}) which is the adjacency of a bipartite graph with constant left degree $\ell=2$. {\bf Right:} 3D imaging geometry with $d^3$ cells and $3d^2$ rays (here: $d=7$). The incidence relation given by the measurement matrix $A = A_d^3$ is the adjacency of a bipartite graph with constant left degree $\ell=3$. 
} \label{fig_1} \end{figure} \subsection{Related Work} \label{sec:related-work} Although it was shown \cite{Candes-CompressiveSampling-06} that random measurement matrices are optimal for compressive sensing, in the sense that they require a minimal number of samples to efficiently recover a $k$-sparse vector, recent work \cite{RIP-P-SMM-08, XuHassibi_Expander} tends to replace random dense matrices by adjacency matrices of ``high quality'' expander graphs. Explicit constructions of such expanders exist, but are quite involved. However, random $m\times n$ binary matrices with nonreplicative columns that have $\lfloor \ell n\rfloor$ entries equal to $1$ perform numerically extremely well, even if $\ell$ is small, as shown in \cite{RIP-P-SMM-08}. In \cite{HassibiIEEE} it is shown that perturbing the elements of adjacency matrices of expander graphs with low expansion can also improve performance. These findings complement our prior work \cite{Petra2009}, where we observed that slightly perturbing the entries of a tomographic projection matrix can improve its reconstruction performance significantly. We wish to inspect the bounds on the required sparsity that guarantee exact reconstruction of \emph{most} sparse signals, and corresponding critical parameter values similar to the weak thresholds in \cite{DonTan05, DonohoT10}. There, the authors computed sharp reconstruction thresholds for Gaussian measurements, such that for a given signal length $n$ and number of measurements $m$, the maximal sparsity value $k$ which guarantees perfect reconstruction can be determined precisely. For a matrix $A\in\mathbb{R}^{m\times n}$, Donoho and Tanner define the undersampling ratio $\delta=\frac{m}{n}\in(0,1)$ and the sparsity as a fraction of $m$, $k=\rho m$, for $\rho\in (0,1)$.
The so-called \emph{strong phase transition} $\rho_S(\delta)$ indicates the undersampling ratio $\delta$ necessary to recover \emph{all} $k$-sparse solutions, while the \emph{weak phase transition} $\rho_W(\delta)$ indicates when $x^*$ with $\|x^*\|_0\le\rho_W(\delta) \cdot m$ can be recovered with overwhelming probability by linear programming. Relevant for TomoPIV is the setting $\delta\to 0$ and $n\to\infty$, that is, severe undersampling, since the number of measurements is of order $O(10^4)$ and the discretization of the volume can be made accordingly fine. For Gaussian ensembles the strong asymptotic threshold $\rho_S(\delta)\approx (2e \log(1/\delta))^{-1}$ and the weak asymptotic threshold $\rho_W(\delta)\approx (2 \log(1/\delta))^{-1}$ hold, see e.g. \cite{DonTan05}. In this highly undersampled regime, the asymptotic thresholds are the same for nonnegative and unsigned signals. Exact sparse recovery of nonnegative vectors has also been studied in a series of recent papers \cite{HassibiIEEE, WangIEEE}, while \cite{Stojnic10a,Stojnic10b} additionally assume that all nonzero elements are equal to each other. As expected, additional information improves the recoverable sparsity thresholds. \subsubsection{Strong Recovery} The maximal sparsity $k$, depending on $m$ and $n$, such that \emph{all} sparse signals are \emph{unique} and coincide with the \emph{unique positive} solution of $Ax=b$, is investigated in \cite{DonTan05, DonohoT10} from the perspective of convex geometry by studying the face lattice of the convex polytope $\conv\{\col{A}{1},\dots,\col{A}{n},0\}$. It is related to the nullspace property for nonnegative signals in what follows. \begin{theorem}[\cite{DonTan05, HassibiIEEE, WangIEEE, Petra2009}] \label{thm:AllPOS} Let $A\in\Rmn$ be an arbitrary matrix. Then the following statements are equivalent: \begin{itemize} \item[(a)] Every $k$-sparse nonnegative vector $x^*$ is the unique positive solution of $Ax=Ax^*$.
\item[(b)] The convex polytope defined as the convex hull of the columns of $A$ and the zero vector, i.e. $\conv\{\col{A}{1},\dots,\col{A}{n},0\}$, is outwardly $k$-neighborly. \item[(c)] Every nonzero null space vector has at least $k+1$ negative (and positive) entries. \end{itemize} \end{theorem} \subsubsection{Weak Recovery} Thm.~2 in \cite{DonTan05} shows the equivalence between $(k,\epsilon)$-weak (outward) neighborliness and weak recovery, i.e. uniqueness of all except a fraction $\epsilon$ of $k$-sparse nonnegative vectors. Weak neighborliness means that $A\Delta_0^{n-1}$ has at least $(1-\epsilon)$ times as many $(k-1)$-faces as the simplex $\Delta_0^{n-1}$. A different form of weak recovery is to determine, by probabilistic nullspace analysis, the probability that a random $k$-sparse positive vector is the unique nonnegative solution. These concepts are related, for an arbitrary sparse vector with exactly $k$ positive entries, in the next theorem. \begin{theorem}\label{thm:individual_uniqueness_POS} Let $A\in\Rmn$ be an arbitrary matrix. Then the following statements are equivalent: \begin{itemize} \item[(a)] The $k$-sparse nonnegative vector $x^*$ supported on $S$, $|S|=k$, is the unique positive solution of $Ax=Ax^*$. \item[(b)] No nonzero null space vector has all its negative components in $S$. \item[(c)] $A_S\mathbb{R}^k_+$ is a $k$-face of $A\mathbb{R}^n_+$, i.e. there exists a hyperplane separating the cone generated by the linearly independent columns $\{\col{A}{j}\}_{j\in S}$ from the cone generated by the columns of the off-support $\{\col{A}{j}\}_{j\in S^c}$. \end{itemize} \end{theorem} \begin{proof} Statement (a) holds if and only if there is no $v\ne 0$ such that $Av=0$ and $v_{S^c}\ge 0$, cf.~e.g.~\cite[Thm. 1]{Man09ProbInteger}. Thus (a) $\Leftrightarrow$ (b). By \cite[Lem. 5.1]{DonohoT10}, (a) $\Leftrightarrow$ (c) holds as well.
\end{proof} If, in addition, all $k$ nonzero entries are equal to each other, then a stronger characterization holds. \begin{theorem}[{\cite[Prop.~2]{Man09ProbInteger}}] \label{thm:individual_uniqueness_BIN} Let $A\in\Rmn$ be an arbitrary matrix. Then the following statements are equivalent: \begin{itemize} \item[(a)] The $k$-sparse binary vector $x^*\in\{0,1\}^n$ supported on $S$, $|S|=k$, is the unique solution of $Ax=Ax^*$ with $x\in[0,1]^n$. \item[(b)] No nonzero null space vector has all its negative components in $S$ and its positive ones in $S^c$. \item[(c)] There exists a vector $r$ such that $\Diag(z^*)A^\top r > 0$, with $z^*:=e-2 x^*$. \item[(d)] $0 \in \mathbb{R}^{m}$ is not contained in the convex hull of the columns of $A\Diag(z^*)$, i.e. $0\notin\conv\{z^*_1\col{A}{1},\dots, z^*_n\col{A}{n},0\}$, with $z^*:=e-2 x^*$. \end{itemize} \end{theorem} \begin{proof} Uniqueness in $[0,1]^n$ implies, in particular, uniqueness in $\{0,1\}^n$. Uniqueness in $[0,1]^n$ holds, e.g. by \cite[Thm. 1]{Man09ProbInteger}, if and only if there is no $v\ne 0$ such that $Av=0$, $v_{S^c}\ge 0$ and $v_{S}\le 0$, which shows equivalence to (b). With $D:=\Diag(e-2 x^*)$ and $DD=I$, (b) can be rewritten as follows: there is no $v\ne 0$ such that $ADDv=0$, $Dv\ge 0$, $Dv\ne 0$. With $u:=Dv$, the above condition becomes: $$ ADu=0,\ u\ge 0,\ u\ne 0\ , \quad \rm{has\ no\ solution}\ , $$ which by Gordan's theorem of the alternative gives the equivalent certificate (c): \begin{equation}\label{eq:Mangasarian-primal} \exists r \quad {\rm such\ that\ } D A^\top r > 0 \ . \end{equation} In other words, a small $k$-subset of the columns of $A$ is ``flipped'' by multiplication with $-1$, and these modified columns, together with all remaining ones, can be separated from the origin, which shows equivalence to (d), i.e. $0$ is not contained in the convex hull of these points. \end{proof} Note that statement (d) is related to the necessary condition for uniqueness in \cite[Thm. 1]{WangIEEE}.
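Certificate (c) can be checked by linear programming: since the feasible set of $\Diag(z^*)A^\top r>0$ is a cone, strict feasibility is equivalent to feasibility of $\Diag(z^*)A^\top r\ge\mathds{1}$. The following sketch (our illustration, assuming the 2D matrix $A_d^2$ of \eqref{eq:def-AdD} and SciPy's \texttt{linprog}) tests a single cell, which is uniquely recoverable, against a two-cell ``switching component'' on a $2\times 2$ subgrid, which is not.

```python
import numpy as np
from scipy.optimize import linprog

def A_2d(d):
    # 2D incidence matrix: cell (i, j) meets row ray i and column ray j
    A = np.zeros((2 * d, d * d))
    for i in range(d):
        for j in range(d):
            A[i, i * d + j] = 1.0
            A[d + j, i * d + j] = 1.0
    return A

def has_certificate(A, x):
    """Feasibility of Diag(z) A^T r >= 1 with z = 1 - 2x (scaled version of (c))."""
    z = 1.0 - 2.0 * x
    M = A.T * z[:, None]                   # row j is z_j * a_j^T
    res = linprog(c=np.zeros(A.shape[0]),  # pure feasibility problem
                  A_ub=-M, b_ub=-np.ones(A.shape[1]),
                  bounds=[(None, None)] * A.shape[0], method="highs")
    return res.status == 0                 # 0: feasible, 2: infeasible

d = 4
A = A_2d(d)
x1 = np.zeros(d * d); x1[0] = 1                  # single cell (0,0)
x2 = np.zeros(d * d); x2[0] = 1; x2[d + 1] = 1   # cells (0,0) and (1,1)
```

For $x_2$ the alternative support $\{(0,1),(1,0)\}$ produces the same ray sums, so no separating $r$ can exist.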
We further comment on Thm.~\ref{thm:individual_uniqueness_BIN} (c) from a probabilistic viewpoint. Condition (c) says that all points defined by the columns of $A\Diag(e-2 x^*)$ are located in a single half space defined by a hyperplane through the origin with normal $r$. Conditions under which this is likely to hold were studied by Wendel \cite{Wendel-62}. This problem is also directly related to the basic pattern recognition problem concerning the linear classification\footnote{In this context, ``linear'' means affine decision functions.} of any dichotomy of a finite point set \cite{CoverSeparability-65}. Assume that $n$ points in $\mathbb{R}^{m}$ are in general position, that is, any subset of $m$ vectors is linearly independent, and that the distribution from which the given point set is regarded as an i.i.d.~sample set is symmetric with respect to the origin. Then condition \eqref{eq:Mangasarian-primal} holds with probability \begin{equation} \label{eq:WendelPR} \Pr(n,m) = \frac{1}{2^{n-1}} \sum_{i=0}^{m-1} \binom{n-1}{i}. \end{equation} As Figure \ref{fig:WendelPR} illustrates, $\Pr(n,m) = 1$ if $n/m \leq 1$, due to the well-known fact that any dichotomy of $m+1$ points in general position in $\mathbb{R}^{m}$ can be separated by a hyperplane \cite{Vapnik1971,Devroye1996}. For increasing dimension $m \to \infty$, separability also holds almost surely if $n/m < 2$, which can easily be deduced by applying a binomial tail bound. Accordingly, assuming that the measurement matrix $A$ conforms to these assumptions, the authors of \cite{Man09ProbInteger} conclude that an existing binary solution to \eqref{eq:Ax=b} is unique with probability \eqref{eq:WendelPR} for \emph{underdetermined} systems with ratio $m/n > 1/2$.
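The probability \eqref{eq:WendelPR} is simple to evaluate exactly. A short sketch (our own illustration): $\Pr(n,m)=1$ for $n\le m$, and, by the symmetry of the binomial coefficients of order $2m-1$, $\Pr(2m,m)=1/2$ for every $m$, which locates the transition at $n/m=2$.

```python
from math import comb

def wendel(n, m):
    """Pr(n, m) = 2^{1-n} * sum_{i=0}^{m-1} C(n-1, i), cf. eq. (WendelPR).
    comb(n-1, i) vanishes for i > n-1, so no case distinction is needed."""
    return sum(comb(n - 1, i) for i in range(m)) / 2 ** (n - 1)
```

The function is monotone in $m$ for fixed $n$ and reproduces the sharp drop of Figure \ref{fig:WendelPR} between $n/m=1$ and $n/m=2$.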
We adopt this viewpoint in Section \ref{sec:recovery-perturbed} and develop a criterion for unique recovery with high probability using the \emph{given} measurement matrix \eqref{eq:def-AdD}, based on a probabilistic average case analysis of condition \eqref{eq:Hassibi-condition} (Section \ref{sec:reduced-system}). This criterion currently best characterizes the design of tomographic scenarios (Fig.~\ref{fig_1}) with recovery performance guaranteed with high probability. We conclude this section by mentioning that \emph{exact nonasymptotic} recovery results for a $k$-sparse nonnegative vector are obtained in \cite[Thm. 1.10]{DonohoT10} by exploiting Wendel's theorem. Donoho and Tanner show that the probability of uniqueness of a $k$-sparse nonnegative vector equals $\Pr(n-m,n-k)$, provided $A$ satisfies certain conditions which do not hold in the application considered here. \begin{figure} \centerline{ \includegraphics[width=0.5\textwidth]{WendelPR} } \caption{ The probability $\Pr(n,m)$ given by \eqref{eq:WendelPR} that $n$ points in general position in $\mathbb{R}^{m}$ can be linearly separated \cite{Wendel-62}. This holds with probability $\Pr(n,m)=1$ for $n/m \leq 1$, and with $\Pr(n,m) \to 1$ if $m \to \infty$ and $1 \leq n/m < 2$. } \label{fig:WendelPR} \end{figure} \section{Expanders, Perturbation, and Weak Recovery} \label{sec:expanders} This section collects recent results on recovery properties based on expanders associated with sparse measurement matrices, possibly after a random perturbation of the non-zero matrix entries. Section \ref{sec:wang-hassibi-application} applies these results to our specific setting in a form suitable for the probabilistic analysis of recovery performance presented in Section \ref{sec:weak-equivalence}. \subsection{Expanders and Recovery} The following theorem is a slight variation of Theorem 4 in \cite{WangIEEE}, tailored to our specific setting.
\begin{theorem}\label{thm:wang} Let $A$ be the adjacency matrix of a $(\nu,\delta)$-unbalanced expander and $1 \geq \delta>\frac{\sqrt{5}-1}{2}$. Then for any $k$-sparse vector $x^*$ with $k\le \frac{\nu}{(1+\delta)}$, the solution set $\{x \colon Ax=Ax^*,x \ge 0\}$ is a singleton. \end{theorem} \begin{proof} We will show that every nonzero null space vector has \emph{at least} $\frac{\nu}{(1+\delta)}+1$ negative and positive entries. Then Theorem \ref{thm:AllPOS} will provide the desired assertion. Suppose without loss of generality that there is a vector $v\in\ker({A})\setminus \{0\}$ with \begin{equation}\label{eq:ker1} s:=|I^{-}(v)|\le \frac{\nu}{(1+\delta)} \ . \end{equation} Then \begin{equation}\label{eq:ker2} \ell |I^{-}(v)| \ge |\calN(I^{-}(v))|\ge \delta \ell s, \end{equation} where the second inequality follows by assumption due to the expansion property. Denoting by $S$ the support of $v$, $S=I^{-}(v)\cup I^{+}(v)$, we have \begin{equation}\label{eq:equalNeigh} \calN(I^{-}(v))=\calN(I^{+}(v))=\calN(S) \ , \end{equation} since otherwise $ Av\ne 0$ because $A$ is non-negative. From $\ell |I^{+}(v)|\ge |\calN(I^{+}(v))|$, \eqref{eq:equalNeigh} and \eqref{eq:ker2}, we obtain \begin{equation}\label{eq:ker3} |I^{+}(v)| \ge \delta s \ . \end{equation} Thus, \begin{equation}\label{eq:ker33} |S|=|I^{-}(v)| +|I^{+}(v)| \ge s + \delta s = (1+\delta) s. \end{equation} Let $\tilde S \subseteq S$ such that $|\tilde S|=\lfloor (\delta +1) s \rfloor$. Thus $|\tilde S|\le \nu$ and \begin{equation}\label{eq:ker4} |\calN(\tilde S)|\ge \delta \ell |\tilde S|\ge \delta \ell (\delta +1) s > s\ell\ \end{equation} provided $\delta (1+\delta) > 1 \;\Leftrightarrow\; \delta > (\sqrt{5}-1)/2$. Summarizing, we get $s\ell<|\calN(\tilde S)|\leq|\calN(S)|=|\calN(I^{-}(v))|\le s\ell$, hence a contradiction. \end{proof} The assertion of Theorem \ref{thm:wang} relies solely on the expansion property of the measurement matrix $A$.
Theorem \ref{thm:SP} below will be based on it, and in turn the results of Section \ref{sec:recovery-unperturbed}. \subsection{Perturbed Expanders and Recovery} \label{sec:Hassibi} We next describe an alternative route based on the \textbf{complete (Kruskal) rank} $r_{0} = r_{0}(A)$ of a measurement matrix $A$. This is the maximal integer $r_{0}$ such that every subset of $r_{0}$ columns of $A$ is linearly independent. While this number is combinatorially difficult to compute in practice, both the number and the corresponding recovery performance can be enhanced by relating it to a particular expansion property of the bipartite graph associated with a \emph{perturbed} measurement matrix $\tilde A$. The latter can easily be computed in practice while preserving the sparsity of $A$, i.e.~the constant left-degree $\ell$. \begin{theorem}[{\cite[Thm.~6.2]{Petra2009}, \cite[Thm.~4.1]{HassibiIEEE}}]\label{thm:Hassibi-4-1} Let $A$ be a non-negative matrix with $\ell$ non-zero entries in each column and complete rank $r_{0}=r_{0}(A)$. Then $|I^{-}(v)| \geq r_{0}/\ell$ for all nullspace vectors $v \in \ker(A)$. \end{theorem} \begin{remark} \label{rem:Hassibi-recovery} In view of Theorem \ref{thm:AllPOS}, (c), Theorem \ref{thm:Hassibi-4-1} says that all $k$-sparse non-negative vectors $x$ can be uniquely recovered if $k \leq \lceil r_{0}/\ell - 1 \rceil$. \end{remark} The following lemma asserts that a perturbation of the measurement matrix may enhance the complete rank, and hence the recovery property, provided all subsets of columns up to a related cardinality entail an expansion that is, however, \emph{less} than the one required by Theorem \ref{thm:wang}. \begin{lemma}[{\cite[Lemma 4.2]{HassibiIEEE}}]\label{lem:Hassibi-4-2} Let $A$ be a non-negative matrix with $\ell$ non-zero entries in each column.
Suppose that $|\mc{N}(X)| \geq |X|$ holds, with respect to the bipartite graph induced by $A$, for each subset $X \subset C$ of columns of cardinality $|X| \leq \tilde r_{0}$. Then there exists a perturbed matrix $\tilde A$ that has the same structure as $A$ such that its complete rank satisfies $r_{0}(\tilde A) \geq \tilde r_{0}$. \end{lemma} Theorem \ref{thm:SPCS1} below and Section \ref{sec:recovery-perturbed} will be based on Theorem \ref{thm:Hassibi-4-1} and Lemma \ref{lem:Hassibi-4-2}. \subsection{Weak Reconstruction Guarantees} \label{sec:wang-hassibi-application} We introduce some further notions used subsequently to state our results. Let $A$ denote the matrix $A_{d}^{D}$ defined by \eqref{eq:def-AdD}, and consider a subset $X \subset C$ of $|X|=k$ columns and a corresponding $k$-sparse vector $x$. Then $b = A x$ has support $\mc{N}(X)$, and we may remove from the linear system $A x = b$ the subset of rows $\mc{N}(X)^{c} = (\mc{N}(X))^{c}$ corresponding to the zero measurements $b_{r}=0,\, r \in \mc{N}(X)^{c}$. Moreover, based on the observation $\mc{N}(X)$, we know that \begin{equation}\label{eq:reduced-dimensions} X \subseteq \mc{N}(\mc{N}(X)) \qquad\text{and}\qquad \mc{N}(\mc{N}(X)^{c}) \cap X = \emptyset \ . \end{equation} Consequently, we can restrict the linear system $A x = b$ to the subset of columns $\mc{N}(\mc{N}(X)) \setminus \mc{N}(\mc{N}(X)^{c}) \subset C$. This will be detailed below by Proposition \ref{prop:redfeasSet}. \vspace{0.5cm} In practical applications, the reconstruction of a random $k$-sparse vector $x$ will be based on a reduced linear system with the above dimensions. These dimensions will be the same for \emph{all} random sets $X = \supp(x)$ contained in $\mc{N}(\mc{N}(X))$. Consequently, in view of a probabilistic average case analysis conducted in Section \ref{sec:weak-equivalence}, it suffices to measure the expansion with respect to these sets. 
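The relations \eqref{eq:reduced-dimensions} and the induced restriction of rows and columns are easy to check on a toy instance. The following sketch (Python; the choice $D=2$, $d=3$ and the sample support $X$ are illustrative assumptions, not part of the construction above) models cells as left nodes and the row and column sums as right nodes:

```python
# Bipartite graph of the 2D tomography setup for d = 3: left nodes are the
# d*d cells, right nodes are the d row sums and d column sums.
d = 3
cells = [(i, j) for i in range(d) for j in range(d)]
rays = [('row', i) for i in range(d)] + [('col', j) for j in range(d)]

def rays_of(cell_set):
    """Neighbors N(X) of a set of cells: all incident rays."""
    return {r for (i, j) in cell_set for r in (('row', i), ('col', j))}

def cells_of(ray_set):
    """Neighbors N(R') of a set of rays: all incident cells."""
    return {(i, j) for (i, j) in cells
            if ('row', i) in ray_set or ('col', j) in ray_set}

X = {(0, 0), (1, 2)}                   # support of a 2-sparse vector x
R_b = rays_of(X)                       # rays carrying nonzero measurements
R_b_c = set(rays) - R_b                # rays carrying zero measurements
C_b = cells_of(R_b) - cells_of(R_b_c)  # columns kept after the reduction

assert X <= cells_of(R_b)              # X is contained in N(N(X))
assert not (cells_of(R_b_c) & X)       # N(N(X)^c) and X are disjoint
assert X <= C_b                        # hence the restriction loses no support
print(len(R_b), len(C_b))              # row and column counts after reduction
```

The two printed numbers are the dimensions of the restricted linear system obtained by dropping the zero-measurement rows and the columns they rule out.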
Taking this into account, the following theorem tailors Theorem \ref{thm:wang} to our specific setting. \begin{theorem}\label{thm:SP} Let $A$ be the adjacency matrix of a bipartite graph such that for all random subsets $X \subset C$ of $|X| \leq k$ left nodes, the set of neighbors $\calN(X)$ of $X$ satisfies \begin{equation} \label{eq:condition-Wang} |\calN(X)| \geq \delta \ell |\calN(\calN(X))\setminus \calN(\calN(X)^c)| \qquad\text{with}\qquad \delta>\frac{\sqrt{5}-1}{2}. \end{equation} Then, for any $k$-sparse vector $x^*$, the solution set $\{x \colon Ax=Ax^*,x \ge 0\}$ is a singleton. \end{theorem} Likewise, the following theorem applies the statements of Section \ref{sec:Hassibi} to our specific setting. \begin{theorem}\label{thm:SPCS1} Let $A$ be the adjacency matrix of a bipartite graph such that for all subsets $X \subset C$ of $|X| \leq k$ left nodes, the set of neighbors $\calN(X)$ of $X$ satisfies \begin{equation} \label{eq:Hassibi-condition} |\calN(X)| \geq \delta \ell |\calN(\calN(X))\setminus \calN(\calN(X)^c)| \qquad\text{with}\qquad \delta > \frac{1}{\ell}. \end{equation} Then, for any $k$-sparse vector $x^*$, there exists a perturbation $\tilde A$ of $A$ such that the solution set $\{x \colon \tilde Ax=\tilde Ax^*,x \ge 0\}$ is a singleton. \end{theorem} The consequences of Theorems \ref{thm:SP} and \ref{thm:SPCS1} are investigated in Section \ref{sec:weak-equivalence} by working out critical values of the sparsity parameter $k$ for which the respective conditions are satisfied with high probability. \section{Strong Equivalence} \label{sec:strong} In \cite{Petra2009} we tested the properties of the discrete tomography matrix under consideration against various conditions, such as the null space property and the restricted isometry property, and predicted an extremely poor worst case performance of such a measurement system. 
In the 3D case we showed that the strong threshold on sparsity, that is the maximal sparsity level $k_0$ for which recovery of \emph{all} $k$-sparse (positive) vectors, $k\le k_0$, is guaranteed, is a constant that does not depend on the problem size $d$. \subsection{Unperturbed Systems} Given an indexing of cells and rays, we can rewrite the projection matrix $A^D_d\in\mathbb{R}^{Dd^{D-1} \times d^D}$ from \eqref{eq:def-AdD} in closed form as \begin{equation}\label{eq:A_dD} A^D_d:=\begin{cases}\left( \begin{array}{c} I_d\otimes \mathds{1}_d^\top\\ \mathds{1}_d^\top\otimes I_d \\ \end{array} \right) , & \text{if }D=2\ ,\\[5mm] \left( \begin{array}{c} \mathds{1}_d^\top \otimes I_d \otimes I_d \\ I_d\otimes \mathds{1}_d^\top \otimes I_d\\ I_d\otimes I_d\otimes \mathds{1}_d^\top \end{array} \right) , & \text{if } D=3 \ . \end{cases} \end{equation} Since for these matrices a sparse nullspace basis can be computed, we can derive the maximal sparsity via the nullspace property, as shown next. \begin{proposition}\cite[Prop. 2.2, Prop. 3.2]{Petra2009}\label{prop:RankKernelA} Let $D\in\{2,3\}$, $d\in \mathbb{N}$, $d\ge 3$ and $A^D_d$ from \eqref{eq:A_dD}. Define $B^D_d\in\mathbb{R}^{d^D\times (d-1)^D}$ as \begin{equation}\label{eq:B_D} B^D_d:=\begin{cases} \left(\begin{array}{c} -\mathds{1}_{d-1}^\top\\ I_{d-1}\end{array} \right) \otimes \left(\begin{array}{c} -\mathds{1}_{d-1}^\top\\ I_{d-1}\end{array} \right) , & \text{if }D=2,\\[5mm] \left(\begin{array}{c} -\mathds{1}_{d-1}^\top \\ I_{d-1}\end{array} \right) \otimes \left(\begin{array}{c} -\mathds{1}_{d-1}^\top \\ I_{d-1}\end{array} \right) \otimes \left(\begin{array}{c} -\mathds{1}_{d-1}^\top \\ I_{d-1}\end{array} \right), & \text{if } D=3 \ . \end{cases} \end{equation} Then the following statements hold: \begin{itemize} \item[(a)] $A^D_d B^D_d=0$. \item[(b)] Every column in $B^D_d$ has exactly $2^D$ nonzero ($2^{D-1}$ positive, $2^{D-1}$ negative) elements. \item[(c)] $B^D_d$ is a full rank matrix and $\rank(B^D_d)=(d-1)^D$. 
\item[(d)] $\ker(A^D_d)=\operatorname{span}\{B^D_d\}$, i.e. the columns of $B^D_d$ provide a basis for the null space of $A^D_d$. \item[(e)] $\rank(A^D_d)=d^D-(d-1)^D$. \item[(f)] $\sum_{i=1}^nv_i=0$ holds for all $v\in\ker(A^D_d)$. \item[(g)] The Kruskal rank of $A^D_d$ is $2^D-1$, i.e. $$\min_{\substack{v\in\ker(A^D_d)\\ v\ne 0}} \|v\|_0=2^D\ .$$ \item[(h)] Every nonzero nullspace vector has at least $2^{D-1}$ negative entries, i.e. $$\min_{\substack{v\in\ker(A^D_d)\\ v\ne 0}} |I^{-}(v)|=2^{D-1}\ .$$ \end{itemize} \end{proposition} Thus, (g) and (h) imply \begin{corollary}\label{cor:strong} For all $d\in \mathbb{N}$, $d\ge 3$, every $\left(2^{D-1}-1\right)$-sparse vector $x^{\ast}$ is the unique sparsest solution of $A^D_d x = A^D_d x^{\ast}$. Moreover, for every $\left(2^{D-1}-1\right)$-sparse non-negative vector $x^*$, the solution set $\{x \colon A^D_d x = A^D_d x^*,\, x \ge 0\}$ is a singleton. \end{corollary} This bound is tight, since we can construct two $2^{D-1}$-sparse solutions $x^1$ and $x^2$ such that $A^D_d x^1=A^D_d x^2$, compare Fig. \ref{fig:NonUnique} for the 3D case. However, when $D=3$, not every combination of $8$ or more columns of $A^3_d$ is linearly dependent. In fact, only a limited number of $k$-column combinations can be dependent without violating $\rank(A^3_d)=3d^2-3d+1$. It turns out that the fraction of linearly dependent $k$-column combinations among all $\binom{n}{k}$ possible ones is tiny for small $k$. As $k$ increases this fraction grows, and it equals $1$ only when $k>\rank(A^3_d)$. Likewise, not every $4$-sparse binary vector is non-unique. Due to the simple geometry of the problem it is not difficult to count the ``bad'' 4-sparse configurations in 3D. Since they are always located in 4 out of 8 corners of a cuboid in the $d^3$ cube, compare Fig. 
\ref{fig:NonUnique} left, and there are only two possibilities to choose them, the probability that a 4-sparse binary vector is unique equals \begin{equation*} 1-\frac{2 \binom{d}{2}^3}{\binom{d^3}{4}}= 1-\frac{ 6 (d-1)^2 }{(d^2+ d+1)(d^3-2)(d^3-3)}= 1-\mathcal{O}(d^{-6})\xrightarrow{d \to \infty} 1\ . \end{equation*} \begin{figure} \begin{tabular}{c c} \includegraphics[clip,width=0.20\textwidth]{NonUnique_gray.png}& \includegraphics[clip,width=0.30\textwidth]{ProjEx_gray.png} \end{tabular} \caption{ Two different {\em non-unique} $4$-sparse ``particle'' distributions in a $3\times 3\times 3$ volume. Both configurations (represented by black and white dots) yield identical projections in all three directions. Such non-unique configurations correspond to positive or negative entries in an $8$-sparse nullspace vector of $A^3_d$, compare to Prop. \ref{prop:RankKernelA}.} \label{fig:NonUnique} \end{figure} \begin{figure} \begin{tabular}{ccc} \includegraphics[height=0.23\textheight]{A_d5_lifted.png} & \includegraphics[height=0.23\textheight]{Nullbasis_unpertA_d5.png}& \includegraphics[height=0.23\textheight]{Nullbasis_pertA_d5.png} \end{tabular} \caption{{\bf Left:} The 3D projection matrix $A^3_5$ and, {\bf middle,} a sparse basis which spans its nullspace. {\bf Right:} If we allow a small perturbation of the nonzero entries of $A^D_d$, all corresponding nullspace vectors of the perturbed matrix will be less sparse and lie in a $d^{D-1}(d-D)$-dimensional subspace, compared to $(d-1)^D$ in the unperturbed case.} \label{fig_nullspace} \end{figure} \subsection{Perturbed Systems} The weak performance of $A^D_d$ rests upon its small Kruskal rank. In order to increase the maximal number $k$ of columns such that all $k$ (or fewer) column combinations are linearly independent, we perturb the nonzero entries of the original matrix $A^D_d$. Figure \ref{fig_nullspace}, right, indicates that perturbation leads to less sparse nullspace vectors. 
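The tightness of Corollary \ref{cor:strong} and the effect of perturbing the non-zero entries can be verified numerically on the smallest 2D instance. The following sketch (Python; the exact rational perturbation values are an arbitrary illustrative choice, not the particular construction of \cite{HassibiIEEE}) exhibits two $2$-sparse configurations with equal projections and checks that the four dependent ``corner'' columns become independent after a structure-preserving perturbation:

```python
from fractions import Fraction

def rank(M):
    """Exact matrix rank via Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, len(M)):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

d, D = 3, 2
m, n = 2 * d, d * d                 # 2D tomography: d row sums, d column sums
A = [[0] * n for _ in range(m)]
for i in range(d):
    for j in range(d):
        A[i][d * i + j] = 1         # row-sum measurements
        A[d + j][d * i + j] = 1     # column-sum measurements

# Two distinct 2-sparse binary configurations with identical projections:
# cells (0,0),(1,1) versus (0,1),(1,0).
Ax = lambda x: [sum(a * xi for a, xi in zip(row, x)) for row in A]
x1 = [1 if c in (0, 4) else 0 for c in range(n)]
x2 = [1 if c in (1, 3) else 0 for c in range(n)]
assert x1 != x2 and Ax(x1) == Ax(x2)

# The corresponding four "corner" columns carry the 2^D-sparse nullspace
# vector (+1,-1,-1,+1); they are dependent, so the Kruskal rank is 2^D-1 = 3.
corner = (0, 1, 3, 4)
assert rank([[A[i][c] for c in corner] for i in range(m)]) == 2**D - 1

# After perturbing each non-zero entry (deterministically, for illustration)
# the very same columns become linearly independent.
P = [[Fraction(100 + n * i + j, 100) if A[i][j] else 0 for j in range(n)]
     for i in range(m)]
assert rank([[P[i][c] for c in corner] for i in range(m)]) == 2**D
```

The exact rational arithmetic avoids any floating-point rank decision; the perturbation keeps the zero pattern, i.e.~the constant left-degree $\ell = D$.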
If we could estimate the Kruskal rank $\tilde r_0$ of the perturbed system, we could apply Thm. \ref{thm:Hassibi-4-1} and obtain a sparsity bound yielding strong recovery \emph{for all} $\lceil \tilde r_{0}/\ell - 1 \rceil$-sparse vectors. However, determining $\tilde r_0$ for the perturbed matrix appears to be intractable. We believe, however, that it increases with $d$, in contrast to the constant value $2^D-1$ in the case of unperturbed systems. Luckily, it will turn out in Section \ref{sec:recovery-unperturbed} that the weak recovery threshold for unperturbed systems gives a \emph{lower bound} on the strong recovery threshold for perturbed matrices, since the reduced systems will be strictly overdetermined and guaranteed to have full rank. \section{Weak Recovery}\label{sec:weak-equivalence} In this section, we consider the recovery properties of the 3D setup depicted in Fig.~\ref{fig_1} and establish conditions for weak recovery, that is conditions for unique recovery that hold \emph{on average with high probability}. We point out clearly that our conditions do \emph{not} guarantee unique recovery in \emph{each} concrete problem instance. \begin{remark} \label{rem:probability} In what follows, the phrase \textbf{with high probability} refers to values of the sparsity parameter $k$ for which random support sizes $|\supp(b)|$ concentrate around the crucial expected value $N_{R}$ according to Prop.~\ref{prop:NR0-deviation}, thus yielding a desired threshold effect. \end{remark} We first inspect in Section \ref{sec:reduced-system} the effect of sparsity on the expected dimensions of a reduced system of linear equations, along with its equivalence to the original system. Subsequently, we establish the aforementioned conditions based on Theorems \ref{thm:SP} and \ref{thm:SPCS1}, and on the \emph{expected} quantities involved in the corresponding conditions. 
In particular, we establish such uniqueness conditions for reduced underdetermined systems with undersampling ratio $m_{red}/n_{red} > (\sqrt{5}-1)/2 \approx 0.618$. Our results are in excellent agreement with numerical experiments discussed in Section \ref{sec:experiments}. \subsection{Reduced System}\label{sec:reduced-system} We formalize the system reduction described in Eqn.~\eqref{eq:reduced-dimensions}. Besides checking its equivalence to the unreduced system, we compute the expected reduced dimensions together with a deviation bound. Additionally, we determine critical values of the sparsity parameter $k$ that lead to overdetermined reduced systems. Recall from Section \ref{sec:setup} that we regard a given measurement matrix $A$ also as adjacency matrix of a bipartite graph $G = (C,R;E)$. \subsubsection{Definition and Equivalence} \begin{definition} The \textbf{reduced system} corresponding to a given non-negative vector $b$, \begin{equation} \label{eq:red-system} A_{red} x = b_{red},\qquad A_{red} \in \mathbb{R}_{+}^{m_{red} \times n_{red}}, \end{equation} results from $A, b$ by choosing the subsets of rows and columns \begin{equation} \label{eq:def-RbCb} R_{b} := \supp(b),\qquad C_{b} := \mc{N}(R_{b}) \setminus \mc{N}(R_{b}^{c}) \end{equation} with \begin{equation} \label{def:mn-red} m_{red} := |R_{b}|,\qquad n_{red} := |C_{b}|. \end{equation} \end{definition} Note that for a vector $x$ and the bipartite graph induced by the measurement matrix $A$, we have the correspondence (cf.~\eqref{eq:reduced-dimensions}) \[ X = \supp(x),\qquad R_{b} = \mc{N}(X),\qquad C_{b} = \mc{N}(\mc{N}(X)) \setminus \mc{N}(\mc{N}(X)^{c}). \] We further define \begin{equation}\label{def:feasSet} \calS^+:=\{x \colon Ax=b, x\ge 0\} \end{equation} and \begin{equation}\label{def:redfeasSet} \calS_{red}^+:=\{x \colon A_{R_b C_b}x=b_{R_b}, x\ge 0\}\ . 
\end{equation} The following proposition asserts that solving the reduced system \eqref{eq:red-system} will always recover the support of the solution to the original system $A x = b$. \begin{proposition}\label{prop:redfeasSet} Let $A\in \mathbb{R}^{m\times n}$ and $b\in\mathbb{R}^m$ have nonnegative entries only, and let $\calS^+$ and $\calS_{red}^+$ be defined by \eqref{def:feasSet} and \eqref{def:redfeasSet}, respectively. Then \begin{equation}\label{eq:feasSet} \calS^+=\{x\in \mathbb{R}^n \colon x_{(C_b)^c}=0\;\text{ and }\; x_{C_b}\in \calS_{red}^+\}. \end{equation} \end{proposition} \begin{proof} Let $S:=\{x\in \mathbb{R}^n \colon x_{(C_b)^c}=0\text{ and } x_{C_b}\in \calS^+_{red}\}$. We first show $S\subseteq \calS^+$. Let $x\in S$. Then $x\ge 0$ follows directly, and it remains to show $\sum_{j=1}^n a_{ij}x_j=b_i$ for all $i\in [m]$. Indeed, for \begin{equation*} i\in R_b :\quad \sum_{j=1}^n a_{ij}x_j= \underbrace{\sum_{j\in C_b}a_{ij}x_j}_{=b_i} +\sum_{j\in (C_b)^c}a_{ij}\underbrace{x_j}_{=0}=b_i\ , \end{equation*} whereas for \begin{equation*} i\in (R_b)^c :\quad \sum_{j=1}^n a_{ij}x_j= \sum_{j\in C_b}\underbrace{a_{ij}}_{=0}x_j +\sum_{j\in (C_b)^c}a_{ij}\underbrace{x_j}_{=0}=0=b_i\ . \end{equation*} Now let $x\in\calS^+$ and consider any $i\in (R_b)^c$. Then \begin{equation}\label{eq:x_(C_b)^c==0} 0=b_i=\sum_{j=1}^n a_{ij}x_j=\sum_{j\in C_b}\underbrace{a_{ij}}_{=0}x_j +\sum_{j\in (C_b)^c}\underbrace{a_{ij}}_{\ge 0}x_j \end{equation} holds. Since $x\ge 0$ and each column $j\in (C_b)^c$ has a positive entry in some row $i\in (R_b)^c$ (otherwise column $j$ would be zero or belong to $C_b$), we obtain from \eqref{eq:x_(C_b)^c==0} that $x_j=0$ for all $j\in (C_b)^c$. To show that $A_{R_b C_b}x_{C_b}=b_{R_b}$, consider \begin{equation*} i\in R_b:\quad \sum_{j\in C_b}a_{ij}x_j =\sum_{j\in C_b}a_{ij}x_j+ \sum_{j\in (C_b)^c}a_{ij}\underbrace{x_j}_{=0}=\sum_{j=1}^n a_{ij}x_j=b_i\ . \end{equation*} Hence, $x_{(C_b)^c}=0$ and $x_{C_b}\in\calS^+_{red}$. Thus $x\in S$. 
\end{proof} In the following two sections, we compute the expected values of the reduced system dimensions \eqref{def:mn-red}. \subsubsection{Expected Number of Non-Zero Measurements} We consider the uniform random assignment of $k$ particles to the $n = |C|$ cells $c \in C$. A single cell may be occupied by more than a single particle. This corresponds to the physical situation that real particles are very small relative to the discretization depicted by Figure \ref{fig_1}. The imaging optics enlarges the appearance of particles, and the action of physical projection rays is adequately represented by linear superposition. This scenario gives rise to a random vector $x \in \mathbb{R}_{k,+}^{n}$ with support $|\supp(x)| \leq k$. It generates a vector \begin{equation} b = A_{d}^{D} x \in \mathbb{R}_{+}^{m} \end{equation} of measurements. We are interested in the expected size of the support of $b$, \begin{equation} \label{eq:def-NR} N_{R} := \mathbb{E}[|\supp(b)|],\qquad N_{R}^{0} := m - N_{R}, \end{equation} that equals the number of projection rays $r \in R$ with non-vanishing measurements $b_{r} \neq 0$. We denote the event $b_{r} = 0$ by the binary random variable\footnote{We economize notation here by re-using the symbol $X$, a random indicator vector indexed by rays (right nodes) $r \in R$. Due to the context, there should be no danger of confusion with $X = \supp(x)$ denoting random subsets of left nodes used in other sections.} $X_{r}=1$, i.e.~$X_{r}=0$ corresponds to the event $b_{r} > 0$ that at least a single particle meets ray $r$. The probability that a single, uniformly assigned particle meets ray $r$ is \begin{equation} \label{eq:def-qd} q_{d} := \frac{d}{|C|} = \frac{d}{n} = \frac{1}{d^{D-1}}. \end{equation} For $k$ particles, the probability that exactly $i$ particles, $0 \leq i \leq k$, meet projection ray $r$ is \begin{equation} \label{eq:def-pd} \Pr(b_{r}=i) = \binom{k}{i} q_{d}^{i} p_{d}^{k-i},\qquad p_{d} := 1-q_{d}. 
\end{equation} Consequently, we have \begin{subequations} \label{eq:E-Xr} \begin{align} \Pr[X_{r}=1] &= \mathbb{E}[X_{r}] = p_{d}^{k}, \\ \Pr[X_{r}=0] &= \sum_{i=1}^{k} \binom{k}{i} q_{d}^{i} p_{d}^{k-i} = 1 - p_{d}^{k}. \end{align} \end{subequations} \begin{lemma} \label{eq:lem-NR} The expected number of non-zero measurements defined by \eqref{eq:def-NR} is \begin{equation} \label{eq:NR0-values} \begin{aligned} N_{R} &= N_{R}(k) = |R| (1-p_{d}^{k}) = D d^{D-1} \bigg(1-\Big(1-\frac{1}{d^{D-1}}\Big)^{k}\bigg), \\ N_{R}^{0} &= N_{R}^{0}(k) = |R|-N_{R} = |R| p_{d}^{k} = D d^{D-1} \Big(1-\frac{1}{d^{D-1}}\Big)^{k}. \end{aligned} \end{equation} \end{lemma} \begin{proof} Due to the linearity of expectation, summing over all rays gives \[ N_{R} = \mathbb{E}\Big[\sum_{r \in R} (1-X_{r}) \Big] = |R| (1-p_{d}^{k}). \] \end{proof} \begin{remark} \label{rem:m-red} Note that $N_{R}$ specifies the expected value of $m_{red}$ in \eqref{def:mn-red} induced by random $k$-sparse vectors $x \in \mathbb{R}_{k,+}^{n}$. See Figure \ref{fig:NR-Plot} for an illustration. \end{remark} \begin{figure} \centerline{ \includegraphics[width=0.5\textwidth]{NR-Plot} } \caption{ The expected number $N_{R}$ of non-zero measurements \eqref{eq:E-Xr}. For highly sparse scenarios (small $k$), the expected support size \eqref{eq:def-NR} of the measurement vector is $|\supp(b)| \approx 3 k$. For large values of $k$, this rate decreases due to multiple incidences of particles with projection rays. } \label{fig:NR-Plot} \end{figure} \subsubsection*{Bounding the Deviation of $N_{R}^{0}$} We are interested in how sharply the random number $X = \sum_{r \in R} X_{r}$ of zero measurements peaks around its expected value $N_{R}^{0} = \mathbb{E}[X]$ given by \eqref{eq:NR0-values}. We derive next a corresponding tail bound by regarding a sequence of $k$ randomly located cells and by bounding the difference of subsequent conditional expected values of the random variable $X$. 
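As a sanity check, the closed-form expectation \eqref{eq:NR0-values} can be compared with a direct simulation of the particle model; a minimal sketch (Python; the problem size $d=10$, sparsity $k=20$, trial count and seed are arbitrary choices):

```python
import random

def NR_exact(k, d, D=3):
    """Expected number of non-zero measurements, cf. eq. (NR0-values)."""
    return D * d ** (D - 1) * (1 - (1 - 1 / d ** (D - 1)) ** k)

def NR_sim(k, d, trials=2000, seed=0):
    """Drop k particles uniformly (with replacement) into the d^3 cells and
    count the distinct projection rays that are hit, averaged over trials."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        hit = set()
        for _ in range(k):
            x, y, z = (rng.randrange(d) for _ in range(3))
            # each cell (x,y,z) meets one ray per projection direction
            hit.update({(0, y, z), (1, x, z), (2, x, y)})
        total += len(hit)
    return total / trials

d, k = 10, 20
est, exact = NR_sim(k, d), NR_exact(k, d)
assert abs(est - exact) / exact < 0.05
```

The simulated average of $|\supp(b)|$ agrees closely with $N_{R}(k)$, reflecting that for $k=20$ particles on a $10^3$ grid multiple incidences already make $N_{R}$ noticeably smaller than $3k$.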
Theorem \ref{thm:Azuma} then provides a bound for the deviation $|X-\mathbb{E}[X]|$. Let the set of rays $R$ represent the elementary events corresponding to the observations $X_{r}=1$ or $X_{r}=0$ for each ray $r \in R$, i.e.~ray $r$ corresponds to a zero measurement or not. Let $\mc{F}_{i} \subset 2^{R},\, i=0,1,2,\dotsc$, denote the $\sigma$-field generated by the collection of subsets of $R$ that correspond to all possible events after having observed $i$ randomly selected cells. We set $\mc{F}_{0} = \{\emptyset,R\}$. Because observing cell $i+1$ just further partitions the current state based on the previously observed $i$ cells by possibly removing some ray (or rays) from the set of zero measurements, we have a nested sequence (filtration) $\mc{F}_{0} \subseteq \mc{F}_{1} \subseteq \dotsb \subseteq \mc{F}_{k}$ of sub-$\sigma$-fields of $2^{R}$. Based on this, for a fixed value of the sparsity parameter $k$, we define the sequence of random variables \begin{equation} Y_{i} = \mathbb{E}[X|\mc{F}_{i}],\quad i=0,1,\dotsc,k, \end{equation} where $Y_{i},\,i=0,1,\dotsc,k-1$, is the conditional expectation of the number of zero measurements after all $k$ cells have been selected, given the information $\mc{F}_{i}$ provided by the observation of the first $i$ randomly selected cells. Consequently, $Y_{0}=\mathbb{E}[X]=N_{R}^{0}$ due to the absence of any information, and $Y_{k} = X$ is just the observed number of zero measurements. The sequence $(Y_{i})_{i=0,\dotsc,k}$ is a martingale by construction, satisfying $\mathbb{E}[Y_{i+1}|\mc{F}_{i}]=Y_{i}$, that is condition \eqref{eq:condition-martingale}. \begin{proposition} \label{prop:NR0-deviation} Let $N_{R}^{0}=\mathbb{E}[X]$ be the expected number of zero measurements for a given sparsity parameter $k$, given by \eqref{eq:NR0-values}. 
Then, for any $\delta > 0$, \begin{equation} \label{eq:NR0-deviation} \begin{aligned} \Pr\big(|X-N_{R}^{0}| \geq \delta\big) \;&\leq\; 2 \exp\bigg( -\frac{1-p_{d}^{2}}{(1-p_{d}^{2k})} \;\frac{\delta^{2}}{2 D^{2}} \bigg) \\ & \nearrow\; 2 \exp\Big(-\frac{\delta^{2}}{2 D^{2} k}\Big) \qquad\text{if}\quad d \to \infty. \end{aligned} \end{equation} \end{proposition} This result shows that for large problem sizes $d$ occurring in applications, concentration of observations of $N_{R}^{0}$ primarily depends on the sparsity parameter $k$. As a consequence, the bound enables suitable choices $k = k(d)$ of the sparsity parameter depending on the problem size. For example, typical values \begin{equation} \label{eq:k-LaVision} \begin{aligned} k = \begin{cases} 0.05 d & \text{in 2D}, \\ 0.05 d^{2} & \text{in 3D}, \end{cases} \end{aligned} \end{equation} chosen by engineers\footnote{Personal communication.} in applications according to a rule of thumb, result in \begin{equation} \label{eq:deviation-concrete} \Pr\big(|X-N_{R}^{0}| \geq \delta\big) \;\leq\; \begin{cases} 2 \exp\Big(-\frac{5}{2 d} \delta^{2} \Big) & \text{in 2D},\\ 2 \exp\Big(-\frac{10}{9 d^{2}} \delta^{2} \Big) & \text{in 3D}. \end{cases} \end{equation} For the 3D case \eqref{eq:k-LaVision}, the probability to observe deviations from $N_{R}^{0}$ larger than $1\%$ drops below $0.01$ for problem sizes $d \geq 77$, which is common in practice. Thus, the bound \eqref{eq:NR0-deviation} is strong enough to indicate not only that \eqref{eq:k-LaVision} is a particularly sensible choice, but it also guides appropriate choices of $k$ in applications that still lead to highly concentrated observations of $N_{R}^{0}$. This is the essential prerequisite for threshold effects of unique recovery from sparse measurements. \begin{proof}[Proof (Proposition \ref{prop:NR0-deviation})] Let $R^{0}_{i-1} \subset R$ denote the subset of rays with zero measurements after the random selection of $i-1 < k$ cells. 
For a fixed ray $r \in R^{0}_{i-1}$, the probability that none of the remaining $k-(i-1)$ randomly selected cells is incident with $r$ equals \begin{equation} p_{d}^{k-(i-1)} = \mathbb{E}[X_{r}|\mc{F}_{i-1}], \end{equation} with $p_{d}$ given by \eqref{eq:def-pd}. Consequently, by linearity, the expectation $Y_{i-1}$ of the number of zero measurements, given $|R^{0}_{i-1}|$ zero measurements after the selection of $i-1$ cells, is \begin{equation} Y_{i-1} = \mathbb{E}[X|\mc{F}_{i-1}] = \sum_{r \in R^{0}_{i-1}} p_{d}^{k-(i-1)}. \end{equation} Now suppose we observe the random selection of the $i$-th cell. We distinguish two possible cases. \begin{enumerate} \item Cell $i$ is not incident with any ray $r \in R^{0}_{i-1}$. Then the number of zero measurements remains the same, and \begin{equation} Y_{i} = \sum_{r \in R^{0}_{i-1}} p_{d}^{k-i}. \end{equation} Furthermore, \begin{equation} \label{eq:Azuma-estimate-1} \begin{aligned} Y_{i}-Y_{i-1} &= \sum_{r \in R^{0}_{i-1}} \big( p_{d}^{k-i} - p_{d}^{k-(i-1)} \big) = |R^{0}_{i-1}| p_{d}^{k-i} (1-p_{d}) \\ &\leq (|R|-1) p_{d}^{k-i} q_{d}. \end{aligned} \end{equation} \item Cell $i$ is incident with $1, \dotsc,D$ rays contained in $R^{0}_{i-1}$. Let $R^{0}_{i}$ denote the set $R^{0}_{i-1}$ after removing these rays. Then \[ Y_{i} = \sum_{r \in R^{0}_{i}} p_{d}^{k-i}. \] Furthermore, since $R^{0}_{i} \subset R^{0}_{i-1}$ and $|R^{0}_{i-1} \setminus R^{0}_{i}| \leq D$, \begin{equation} \label{eq:Azuma-estimate-2} \begin{aligned} Y_{i-1} - Y_{i} &= \sum_{r \in R^{0}_{i-1} \setminus R^{0}_{i}} p_{d}^{k-(i-1)} - \sum_{r \in R^{0}_{i}} \big( p_{d}^{k-i} - p_{d}^{k-(i-1)} \big) \\ &\leq D p_{d}^{k-i+1} - \sum_{r \in R^{0}_{i}} p_{d}^{k-i} (1-p_{d}) \leq D p_{d}^{k-i+1}. \end{aligned} \end{equation} \end{enumerate} Comparing the bounds \eqref{eq:Azuma-estimate-1} and \eqref{eq:Azuma-estimate-2}, we have with $|R| q_{d} = D$, \[ (|R|-1) q_{d} p_{d}^{k-i} = (D - q_{d}) p_{d}^{k-i}, \qquad\qquad D p_{d} p_{d}^{k-i} = (D - D q_{d}) p_{d}^{k-i}. 
\] Thus, we take the larger bound \eqref{eq:Azuma-estimate-1}, drop the immaterial $-1$ in the first factor and compute \[ \sum_{i=1}^{k} (D p_{d}^{(k-i)})^{2} = D^{2} \frac{1 - p_{d}^{2 k}}{1-p_{d}^{2}}. \] Inserting $p_{d}$ from \eqref{eq:def-pd} and expanding in terms of $d^{-1}$ at $0$, we obtain \begin{align*} \frac{1 - p_{d}^{2 k}}{1-p_{d}^{2}} = \begin{cases} k + (k-k^{2}) d^{-1} + \mc{O}(d^{-2}), & \text{in 2D} \\ k + (k-k^{2}) d^{-2} + \mc{O}(d^{-4}), & \text{in 3D} \end{cases} \qquad\xrightarrow{d \to \infty} k. \end{align*} Applying Theorem \ref{thm:Azuma} completes the proof. \end{proof} \begin{figure} \centerline{ \includegraphics[width=0.5\textwidth]{NC-Plot} } \caption{ The expected number $N_{C} = \mathbb{E}[|C_{b}|]$ of cells supporting observed measurement vectors $b$, given by \eqref{eq:NC-value}. Starting with rate $N_{C} \propto k$ for very small values of $k$, it quickly increases and exceeds $N_{R}$ (Fig.~\ref{fig:NR-Plot}), thus leading to underdetermined reduced systems \eqref{eq:red-system}. } \label{fig:NC-Plot} \end{figure} \subsubsection{Expected Number of Cells} In the previous section, we computed the expected number of measurements $N_{R} = \mathbb{E}[|\supp(b)|]$ induced by a random unknown $k$-sparse vector $x$ (Lemma \ref{eq:lem-NR}) along with a tail bound for $N_{R}^{0} = |R|-N_{R}$ (Prop.~\ref{prop:NR0-deviation}). In the present section, we determine the expected number of cells corresponding to $N_{R}$, denoted by $N_{C}$. We confine ourselves to the practically more relevant 3D case. As in the previous section, $X \in \{0,1\}^{|R|}$ denotes a random vector indicating subsets of projection rays. $X_{r}=1,\, r \in R$, corresponds to a zero observation along ray $r$. For a subset of rays $R_{b} \subset R$, we say that the corresponding subset of cells $C_{b}$ in \eqref{eq:def-RbCb} \textbf{supports} $R_{b}$. 
\begin{proposition}\label{prop:NC} For a given value of the sparsity parameter $k$, the expected size of subsets of cells that support random subsets $R_{b} \subset R$ of observed non-zero measurements, is \begin{equation} \label{eq:NC-value} N_{C} = N_{C}(k) = d^{3} \bigg( 1 - 3\Big(1-\frac{1}{d^{2}}\Big)^{k} + 3\Big(1-\frac{2 d-1}{d^{3}}\Big)^{k} - \Big(1-\frac{3 d-2}{d^{3}}\Big)^{k} \bigg). \end{equation} \end{proposition} \begin{proof} We partition the set of rays $R = R_{1} \cup R_{2} \cup R_{3}$ according to the three projection images (Fig.~\ref{fig_1}) and associate with the cells $C$ the corresponding set of triples of projection rays \[ R_{1,2,3} = \big\{ (r_{1}, r_{2}, r_{3}) \colon \cap_{i=1}^{3} r_{i} \neq \emptyset,\; r_{i} \in R_{i},\; i=1,2,3 \big\}, \] with each triple intersecting in a single cell. Thus, we have $|R_{1,2,3}| = |C| = d^{3}$, and each cell $c_{ijk} = r_{i} \cap r_{j} \cap r_{k}$ belongs to the set $C_{b}$ supporting $R_{b}$ if all three of its rays carry non-zero measurements, that is if $\{r_{i}, r_{j}, r_{k}\} \subseteq R_{b}$. In terms of the random variables $X_{r}$ indicating zero-measurements by $X_{r}=1$, this means that $c_{ijk} \in C_{b}$ if $X_{r_{i}}=X_{r_{j}}=X_{r_{k}}=0$. Thus, \begin{align*} N_{C} &= \mathbb{E}\Big[ \sum_{R_{1,2,3}} (1-X_{r_{1}})(1-X_{r_{2}})(1-X_{r_{3}}) \Big] \\ &= \sum_{R_{1,2,3}} \Big( 1 - \big(\mathbb{E}[X_{r_{1}}] + \mathbb{E}[X_{r_{2}}] + \mathbb{E}[X_{r_{3}}]\big) + \sum_{1 \leq i < j \leq 3} \mathbb{E}[X_{r_{i}} X_{r_{j}}] - \mathbb{E}[X_{r_{1}} X_{r_{2}} X_{r_{3}}] \Big). \end{align*} This expression takes into account the intersection of projection rays $r_{i}, r_{j}$ (inclusion-exclusion principle) in order not to overcount the number of supporting cells. We have $\mathbb{E}[X_{r_{i}}] = p_{d}^{k} = (1-d^{-2})^{k}$ by \eqref{eq:E-Xr} and \eqref{eq:def-pd}. 
The event $X_{r_{i}} X_{r_{j}}=1$ means that both rays correspond to zero measurements, which happens with probability \[ \Big(1 - \frac{|r_{i} \cup r_{j}|}{|C|}\Big)^{k} = \Big(1 - \frac{2 d-1}{d^{3}}\Big)^{k}. \] We have three pairs of sets of rays from $R = R_{1} \cup R_{2} \cup R_{3}$, and each of the $d^{2}$ rays $r_{i} \in R_{i}$ intersects with $d$ rays $r_{j} \in R_{j}$. Finally, three intersecting rays correspond to zero measurements with probability \[ \Big(1 - \frac{|r_{1} \cup r_{2} \cup r_{3}|}{|C|}\Big)^{k} = \Big(1 - \frac{3 d-2}{d^{3}}\Big)^{k}, \] for each of the $d^{3}$ cells $c \in C$. \end{proof} \begin{remark} \label{rem:n-red} Note that $N_{C}$ specifies the expected value of $n_{red}$ in \eqref{def:mn-red} induced by random $k$-sparse vectors $x \in \mathbb{R}_{k,+}^{n}$. See Figure \ref{fig:NC-Plot} for an illustration. \end{remark} \subsubsection{Overdetermined Reduced Systems: Critical Sparsity $k$} For small values of $k$, that is for highly sparse scenarios, the expected value $N_{R}(k) \approx 3 k$ grows faster than $N_{C}(k) \approx k$. Consequently, the \emph{expected} reduced system \eqref{eq:red-system} will be overdetermined. This holds up to a critical value $k \leq k_{crit}$, because for increasing values of $k$ it is more likely that several particles are incident with some projection ray, making $N_{C}$ increase faster than $N_{R}$. \begin{proposition} \label{prop:kcrit} For $k \leq k_{crit}$, the reduced system \eqref{eq:red-system} will be overdetermined with high probability, where $k_{crit}$ solves \begin{equation} \label{eq:def-k-crit} N_{R}(k_{crit}) = N_{C}(k_{crit}) \end{equation} and $N_{R}(k_{crit}), N_{C}(k_{crit})$ are given by \eqref{eq:NR0-values} and \eqref{eq:NC-value}. \end{proposition} Figure \ref{fig:k-critical} shows the dependency $k_{crit} = k_{crit}(d)$ on the problem size $d$, as defined by \eqref{eq:def-k-crit}. 
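The critical value defined by \eqref{eq:def-k-crit} is easily determined numerically from the closed-form expressions \eqref{eq:NR0-values} and \eqref{eq:NC-value}; a small sketch for the 3D case (Python; the problem size $d=25$ is an arbitrary choice):

```python
def NR(k, d):
    """Expected number of non-zero measurements N_R(k), eq. (NR0-values), 3D."""
    return 3 * d**2 * (1 - (1 - 1 / d**2) ** k)

def NC(k, d):
    """Expected number of supporting cells N_C(k), eq. (NC-value)."""
    return d**3 * (1 - 3 * (1 - 1 / d**2) ** k
                   + 3 * (1 - (2 * d - 1) / d**3) ** k
                   - (1 - (3 * d - 2) / d**3) ** k)

def k_crit(d):
    """Largest k with N_R(k) >= N_C(k), i.e. an (expected) overdetermined
    reduced system, found by a linear scan over integer k."""
    k = 1
    while NR(k + 1, d) >= NC(k + 1, d):
        k += 1
    return k

d = 25
k = k_crit(d)
assert NR(k, d) >= NC(k, d) and NR(k + 1, d) < NC(k + 1, d)
```

By construction, the scan returns the largest sparsity level for which the expected reduced system is still (weakly) overdetermined; for a single particle, $N_{R}(1) = 3$ rays support $N_{C}(1) = 1$ cell, consistent with the rates stated above.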
\begin{figure} \centerline{ \includegraphics[width=0.5\textwidth]{k-critical} } \caption{Values $k_{crit} = k_{crit}(d)$ of the sparsity parameter such that $k \leq k_{crit}$ yields overdetermined reduced systems \eqref{eq:red-system}. For the depicted and practically relevant range of $d$, the slope of the $\log$-$\log$ curve decreases slightly from $1.65$ to $1.55$.} \label{fig:k-critical} \end{figure} \subsection{Unperturbed Systems} \label{sec:recovery-unperturbed} We consider the recovery properties of the 3D setup depicted in Fig.~\ref{fig_1}, based on Theorem \ref{thm:SP} and on the \emph{expected} quantities involved in the corresponding condition \eqref{eq:condition-Wang}, as worked out in Section \ref{sec:reduced-system}. Concerning the interpretation of the following claims, we refer to Remark \ref{rem:probability}. \begin{proposition} \label{prop:appl-Wang} The system $A x = b$, with measurement matrix $A$ given by \eqref{eq:def-AdD}, admits unique recovery of $k$-sparse non-negative vectors $x$ with high probability, if \begin{subequations} \begin{gather} \label{eq:k-unpertubed} k \leq \frac{N_{C}(k_{\delta})}{1+\delta} = \frac{1}{3 \delta (1+\delta)} N_{R}(k_{\delta}),\qquad \delta > \frac{\sqrt{5}-1}{2}, \intertext{where $k_{\delta}$ solves} N_{R}(k_{\delta}) = 3 \delta N_{C}(k_{\delta}) \label{eq:def-kdelta} \end{gather} \end{subequations} and $N_{R}(k), N_{C}(k)$ are given by \eqref{eq:NR0-values} and \eqref{eq:NC-value}. \end{proposition} \begin{proof} The assertion follows from replacing the quantities forming condition \eqref{eq:condition-Wang} by their expected values, due to Remarks \ref{rem:m-red} and \ref{rem:n-red}. 
\end{proof} \begin{remark} \label{rem:unperturbed} Equation \eqref{eq:def-kdelta} shows that unique recovery of a $k$-sparse, $k\le\frac{n_{red}}{1+\delta}$, non-negative vector can be expected using the unperturbed measurement matrix, provided the reduced system \eqref{eq:red-system} is overdetermined by a factor $m_{red} \geq 1.854 \, n_{red}$. See Figure \ref{fig:k-All} for an illustration. \end{remark} \subsection{Perturbed Systems} \label{sec:recovery-perturbed} Analogously to the previous section, we evaluate the average recovery performance of perturbed systems based on Theorem \ref{thm:SPCS1}. \begin{proposition} \label{prop:appl-Hassibi} The system $\tilde A x = b$, with perturbed measurement matrix $\tilde A$ given by \eqref{eq:def-AdD}, admits unique recovery of $k$-sparse non-negative vectors $x$ with high probability, if $k$ satisfies condition $k \leq k_{crit}$ from Prop.~\ref{prop:kcrit}, that is, if the reduced system \eqref{eq:red-system} is overdetermined. \end{proposition} \begin{proof} Immediate from Theorem \ref{thm:SPCS1}, replacing the quantities forming condition \eqref{eq:Hassibi-condition} by their expected values, and taking into account $\ell=3$ for the measurement matrix \eqref{eq:def-AdD} and the case $D=3$. \end{proof} \begin{remark} In view of this assertion and Remark \ref{rem:unperturbed}, it is remarkable that a significant gain in recovery performance can be obtained by a simple device: structure-preserving perturbation of the measurement matrix. See Figure \ref{fig:k-All} for an illustration. \end{remark} \begin{figure} \centerline{ \includegraphics[width=0.6\textwidth]{k-All-Plot} } \caption{ Critical upper bound sparsity values $k = k(d)$ that guarantee unique recovery of $k$-sparse vectors $x$ on average with high probability.
From bottom to top: $k_{\delta}$ \eqref{eq:k-unpertubed} for unperturbed matrices $A$, $k_{crit}$ \eqref{eq:def-k-crit} resulting in overdetermined reduced systems, $k_{max}$ \eqref{eq:kmax-criterion} for underdetermined perturbed matrices $A$, and fully random measurement matrices. } \label{fig:k-All} \end{figure} \subsection{Underdetermined Perturbed Systems} \label{sec:recovery-perturbed-underdet} Based on \eqref{eq:WendelPR} and the average case analysis of condition \eqref{eq:Hassibi-condition} (Section \ref{sec:reduced-system}), we devise a criterion for determining the maximal sparsity value $k$ (least sparse scenario) such that any $k$-sparse vector $x$ can be uniquely recovered with high probability using the measurement matrix $A$ given by \eqref{eq:def-AdD}. Unlike Propositions \ref{prop:appl-Wang} and \ref{prop:appl-Hassibi}, we specifically consider here less sparse scenarios that result in \emph{underdetermined} reduced systems \eqref{eq:red-system}. \begin{proposition} \label{prop:kmax} Let $A$ be a matrix satisfying the assumptions of Lemma \ref{lem:Hassibi-4-2} with $\tilde r_{0} = N_{R}(\tilde k_{max})$, where $\tilde k_{max}$ solves \begin{equation} \label{eq:kmax-definition} N_{R}(\tilde k_{max}) = \delta N_{C}(\tilde k_{max}),\qquad \delta > \frac{\sqrt{5}-1}{2}, \end{equation} with $N_{R}(k), N_{C}(k)$ given by \eqref{eq:NR0-values} and \eqref{eq:NC-value}. Then a $k$-sparse vector $x$ can be uniquely recovered with high probability, if \begin{equation} \label{eq:kmax-criterion} k \leq k_{max} = \frac{N_{R}(\tilde k_{max})}{3}. \end{equation} \end{proposition} \begin{proof} By assumption and Lemma \ref{lem:Hassibi-4-2}, Theorem \ref{thm:Hassibi-4-1} (see also Remark \ref{rem:Hassibi-recovery}) implies \eqref{eq:kmax-criterion}, thereby taking into account that Eqn.~\eqref{eq:kmax-definition} defining $\tilde k_{max}$ reflects the expected version of condition \eqref{eq:condition-Wang}, divided by the factor $3$ due to \eqref{eq:kmax-criterion}.
\end{proof} Figure \ref{fig:k-All} illustrates the value $k_{max}$ \eqref{eq:kmax-criterion} and compares it to the previous results. \vspace{0.5cm} Finally, we comment on the uniqueness condition established in \cite{Man09ProbInteger}, which corresponds to the top $k(d)$ curve in Figure \ref{fig:k-All}. This result does not apply to our setting. The reason is that a basic assumption underlying the application of \eqref{eq:WendelPR} \cite{Wendel-62,CoverSeparability-65} does \emph{not} hold. While after some perturbation the points corresponding to the columns of $\tilde A$ and the sparsity value $|I^{-}(x)|=k$ are in general position, the underlying distribution lacks symmetry with respect to the origin. As a result, we cannot establish the superior performance of ``fully'' random sensors considered in \cite{Man09ProbInteger}. \subsection{Two Cameras are Not Enough} \label{sec:2-cam} In the present section, we briefly discuss how the previously obtained bounds on sparsity apply in the 2D scenario. To this end, we first compute the expected number of nonempty cells connected to $R_{b}$ measurements generated by a $k$-sparse nonnegative vector. \begin{proposition}\label{prop:NC-2D} In 2D, the expected size of subsets of cells that support random subsets $R_{b} \subset R$ of observed non-zero measurements is \begin{equation} \label{eq:NC-value-2} N_{C} = N_{C}(k) = d^{2} \bigg( 1 - \Big(1-\frac{1}{d}\Big)^{k}\bigg)^2\ , \end{equation} for a given sparsity parameter $k$. \end{proposition} \begin{proof} We partition the set of rays $R = R_{1} \cup R_{2}$ according to the two projection images (Fig.~\ref{fig_1}), left, and associate with the cells $C$ the corresponding set of pairs of projection rays \[ R_{1,2} = \big\{ (r_{1}, r_{2}) \colon \cap_{i=1}^{2} r_{i} \neq \emptyset,\; r_{i} \in R_{i},\; i=1,2 \big\}, \] with each pair intersecting in a single cell.
Thus, we have $|R_{1,2}| = |C| = d^{2}$, and each cell $c_{ij} = r_{i} \cap r_{j}$ belongs to the set $C_{b}$ supporting $R_{b}$ if both of its rays carry non-zero measurements, $\{r_{i}, r_{j}\} \subset R_{b}$. In terms of random variables $X_{r}$ indicating zero-measurements by $X_{r}=1$, this means that $c_{ij} \in C_{b}$ if $X_{r_{i}}=X_{r_{j}}=0$. Thus, \begin{align*} N_{C} &= \mathbb{E}\Big[ \sum_{R_{1,2}} (1-X_{r_{1}})(1-X_{r_{2}}) \Big] \\ &= \sum_{R_{1,2}} \Big( 1 - \big(\mathbb{E}[X_{r_{1}}] + \mathbb{E}[X_{r_{2}}] \big) + \sum_{1 \leq i < j \leq 2} \mathbb{E}[X_{r_{i}} X_{r_{j}}] \Big), \end{align*} taking the intersection of projection rays $r_{i}, r_{j}$ into account. We obtained $\mathbb{E}[X_{r_{i}}] = p_{d}^{k} = (1-\frac{1}{d})^{k}$ in \eqref{eq:E-Xr} and \eqref{eq:def-pd}. The event that both rays correspond to zero measurements, $X_{r_{i}} X_{r_{j}}=1$, happens with probability \[ \Big(1 - \frac{|r_{i} \cup r_{j}|}{|C|}\Big)^{k} = \Big(1 - \frac{2 d-1}{d^{2}}\Big)^{k} = \Big(1 - \frac{1}{d}\Big)^{2k}. \] Inserting these expectations yields \eqref{eq:NC-value-2}. \end{proof} By Prop.~\ref{prop:NC-2D} and Lemma \ref{eq:lem-NR} we can now compute the expected ratio of the dimensions of the reduced system, further denoted by $c$. We solve the polynomial equation $N_R(k)=c\, N_C(k)$, with $N_C(k)$ given by \eqref{eq:NC-value-2}. Of interest are the values $c\in\{2\delta,1,\delta,\frac{1}{2}\}$. For example, if $c=2\delta$, we obtain guaranteed recovery of all $1$-sparse vectors, which also equals the strong threshold for the 2D case. If $c=1$, we obtain, on average, that any $k$-sparse vector $x$ with \begin{equation} \label{eq:Er-critical} k\le k_{crit}=\frac{\log\left( \frac{d-2}{d}\right)}{\log\left( \frac{d-1}{d}\right)}\approx 2\ , \end{equation} induces overdetermined reduced systems. Thus two particles can always be reconstructed, after perturbation. If $c=\frac{1}{2}$, the critical sparsity value approximately equals $4$ for arbitrary $d$. This is the best achievable bound, which is obviously useless for applications.
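The closed form \eqref{eq:NC-value-2} factorizes into a square because $1-\frac{2d-1}{d^2} = \big(1-\frac{1}{d}\big)^2$. It can be cross-checked by a small Monte Carlo simulation (a sketch with hypothetical function names; particles are placed i.i.d.\ uniformly on the grid, which reproduces exactly the zero-ray probability $(1-1/d)^k$ used above):

```python
import random

def nc_formula(k, d):
    # closed form of the 2D expected cell count, d^2 * (1 - (1-1/d)^k)^2
    return d**2 * (1 - (1 - 1/d)**k)**2

def nc_montecarlo(k, d, trials, seed=0):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        # k particles placed independently and uniformly on the d x d grid
        cells = [(rng.randrange(d), rng.randrange(d)) for _ in range(k)]
        rows = {i for i, _ in cells}   # row rays with non-zero measurement
        cols = {j for _, j in cells}   # column rays with non-zero measurement
        # a cell survives the reduction iff both of its rays are non-zero,
        # so the surviving cells form the product set rows x cols
        total += len(rows) * len(cols)
    return total / trials
```

For instance, $d=8$, $k=5$ gives $N_C \approx 15.19$ from the formula, matched by the simulation within sampling error.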
For $k=3$ it can be shown that the probability of correct recovery via the perturbed matrix $A_d^2$ is \begin{equation*} 1-\frac{2\cdot 4\cdot \binom{d}{2}\binom{d}{3}+4\cdot \binom{d}{3}^2}{\binom{d^2}{3}}= \frac{ d^2 + 6 d -10 }{3 (d^2-2)}\xrightarrow{d \to \infty} 1/3\ . \end{equation*} We mention that the expected relative values of $N_R$ and $N_C$ do not vary much with different two-camera arrangements. These highly pessimistic results can be explained by the fact that there is no expander with constant left degree $\ell$ less than $3$. \section{Numerical Experiments and Discussion} \label{sec:experiments} In this section we empirically investigate bounds on the required sparsity that guarantee unique nonnegative or binary $k$-sparse solutions. \subsection{Reduced Systems versus Analytical Sparsity Thresholds} \label{sec:frac} The workhorse of the preceding theoretical average case performance analysis of the discrete tomography matrix from \eqref{eq:A_dD} is the derivation of the expected number of nonzero rows $N_R(k)$ induced by a $k$-sparse vector, along with the number $N_C(k)$ of ``active'' cells which cannot be empty. This can also be done empirically, see Fig.~\ref{fig:Frac_2D_3D}, left, for the 2D case and right, for the 3D case. To generate the figures we varied $k\in\{1,2,\cdots, 2000\}$ and $d\in\{10,11,\cdots, 100\}$ in both 2D and 3D, and generated $500$ problem instances for each point $(k,d)$. The plots show $N_R(k,d)/N_C(k,d)$ along with the curves: $k_{\delta}$ \eqref{eq:k-unpertubed} for unperturbed matrices $A$, $k_{crit}$ \eqref{eq:def-k-crit} resulting in overdetermined reduced systems, $k_{max}$ \eqref{eq:kmax-criterion} for underdetermined perturbed matrices $A$, and $k_{opt}$, which solves \begin{equation}\label{eq:k_opt} N_R(k_{opt})=0.5 N_C(k_{opt})\ .
\end{equation} \begin{figure} \begin{center} \begin{tabular}{cc} \includegraphics[clip,width=0.45\textwidth]{2D_Final_Frac_with_curves.png} & \includegraphics[clip,width=0.45\textwidth]{3D_Final_Frac_with_curves.png}\\ \end{tabular} \end{center} \caption{Contour plots of the average fraction of the reduced systems as a function of the resolution parameter $d$ and the sparsity parameter $k$. {\bf Left:} In agreement with the results of Section 5.5, the plots for the 2D case show that level lines of $N_R(k,d)/N_C(k,d)$ are constant with varying $d$. {\bf Right:} In 3D the situation changes dramatically. Higher sparsity values are allowed for increasing values of $d$, as the derived threshold curves show. Below the blue curve $k_{\delta}$ \eqref{eq:k-unpertubed}, reconstruction for unperturbed systems is guaranteed with high probability. Below the dashed red curve $k_{crit}$ \eqref{eq:def-k-crit}, reduced systems are overdetermined. For points below the solid red curve $k_{max}$ \eqref{eq:kmax-criterion}, reconstruction is guaranteed for perturbed systems. Finally, problem instances under the green curve $k_{opt}$ \eqref{eq:k_opt} could be recovered if the reduced matrices followed a distribution symmetric with respect to the origin.} \label{fig:Frac_2D_3D} \end{figure} \subsection{Empirical Phase Transitions} We further concentrate on the 3D case. In analogy to \cite{DonTan05} we assess the so-called \emph{phase transition} $\rho$ as a function of $d$, which is inversely proportional to the undersampling ratio $\frac{m}{n}\in(0,1)$. We consider $d\in\{10,11,\dots,100\}$, the corresponding matrix $A^3_d\in\mathbb{R}^{3d^2\times d^3}$ from \eqref{eq:A_dD} and its perturbed version $\tilde A$, and the sparsity as a fraction of $d^2$, $k=\rho d^2$, for $\rho\in (0,1)$. This phase transition $\rho(d)$ indicates the necessary relative sparsity to recover a $k$-sparse solution with overwhelming probability.
More precisely, if $\|x\|_0\le\rho(d) \cdot d^2$, then with overwhelming probability a random $k$-sparse nonnegative (or binary) vector $x^*$ is the unique solution in $\calF_+:=\{x \colon Ax= Ax^*, x\ge 0\}$ or $\calF_{ 0,1}:=\{x \colon Ax= Ax^*, x\in[0,1]^n\}$, respectively. Uniqueness can be ``verified'' by minimizing and maximizing the same objective $f^\top x$ over $\calF_+$ or $\calF_{ 0,1}$, respectively. If the minimizers coincide for several random vectors $f$, we claim uniqueness. As shown in Fig.~\ref{fig:sliceA3D}, the thresholds for a unique nonnegative solution and a unique $0/1$-bounded solution are quite close. To generate the success and failure transition plots, we generated $A$ according to \eqref{eq:A_dD} and $\tilde A$ by slightly perturbing its entries, varying $d\in\{10,11,\dots,100\}$. $\tilde A$ has the same sparsity structure as $A$, but random entries drawn uniformly from the open interval $(0.9,1.1)$. We have tried different perturbation levels, all leading to similar results, and thus adopted this interval for all presented results. Then, for $\rho\in[0, 1]$, a $\rho d^2$-sparse nonnegative or binary vector was generated to compute the right-hand side measurement vector, and for each $(d,\rho)$-point $50$ random problem instances were generated. A threshold effect is clearly visible in all figures, which exhibit parameter regions where the probability of exact reconstruction is close to one; it is much stronger for the perturbed systems. The results are in excellent agreement with the derived analytical thresholds. We refer to the figure captions for detailed explanations. Finally, we refer to the summary in Figure \ref{fig:concl} for the computed sharp sparsity thresholds, which are in excellent agreement with our numerical experiments.
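The analytical threshold curves drawn in the figures can be reproduced numerically with a single crossing solver. The sketch below uses hypothetical names and assumes the inclusion--exclusion forms of $N_R$ and $N_C$ implied by the proofs of Section \ref{sec:reduced-system} (the exact definitions \eqref{eq:NR0-values} and \eqref{eq:NC-value} are not reproduced in this excerpt); $\delta$ is set to its threshold value $(\sqrt{5}-1)/2$, for which $3\delta\approx 1.854$ and $\delta(1+\delta)=1$.

```python
from math import sqrt

DELTA = (sqrt(5) - 1) / 2   # threshold value of delta in the recovery conditions

def N_R(k, d):
    # assumed form of the expected number of non-zero rays
    return 3 * d**2 * (1 - (1 - 1/d**2)**k)

def N_C(k, d):
    # assumed form of the expected number of surviving ("active") cells
    p1 = (1 - 1/d**2)**k
    p2 = (1 - (2*d - 1)/d**3)**k
    p3 = (1 - (3*d - 2)/d**3)**k
    return d**3 * (1 - 3*p1 + 3*p2 - p3)

def solve_crossing(d, c):
    # bisection on the (assumed single) crossing N_R(k) = c * N_C(k);
    # c = 1 gives k_crit, c = DELTA gives tilde_k_max
    lo, hi = 1.0, float(d**3)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if N_R(mid, d) > c * N_C(mid, d):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def k_max(d):
    # underdetermined-perturbed bound: k_max = N_R(tilde_k_max) / 3
    return N_R(solve_crossing(d, DELTA), d) / 3
```

For $d=10$, e.g., this yields $k_{crit} \approx 65$ and $k_{max}$ close to the value $66$ quoted in the figure captions below; the residual discrepancies stem from the assumed forms of $N_R$ and $N_C$.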
\begin{figure}[h] \begin{center} \includegraphics[clip,width=0.55\textwidth]{Conclusions.png}\\ \end{center} \caption{ Relative critical upper bound sparsity values $k(d)$ in the practically relevant domain $d\in(500,1500)$ that guarantee unique recovery of $k$-sparse vectors $x$ on average with high probability. From bottom to top: $k_{\delta}$ \eqref{eq:k-unpertubed} for unperturbed matrices $A$ (blue line), $k_{crit}$ \eqref{eq:def-k-crit} resulting in overdetermined reduced systems (dashed red line), $k_{max}$ \eqref{eq:kmax-criterion} and $\tilde k_{max}$ \eqref{eq:kmax-definition} for underdetermined perturbed matrices $A$ (solid red and pink line), and ideal random measurement matrices $k_{opt}$ (green line). The thin black line depicts the particle density used by engineers in practice, while the black spot corresponds to the typical resolution parameter $d=1024$. The results demonstrate that specific slight random perturbations of the TomoPIV measurement matrix considerably boost the expected reconstruction performance by at least 150\%.} \label{fig:concl} \end{figure} \begin{figure} \begin{center} \begin{tabular}{cc} \includegraphics[clip,width=0.38\textwidth]{AllResults_d10_with_legend.png} & \includegraphics[clip,width=0.3\textwidth]{Final_LindInd10.png}\\ \includegraphics[clip,width=0.38\textwidth]{AllResults_d20_with_legend.png} & \includegraphics[clip,width=0.3\textwidth]{Final_LindInd20.png}\\ \includegraphics[clip,width=0.38\textwidth]{AllResults_d30_with_legend.png}& \includegraphics[clip,width=0.3\textwidth]{Final_LindInd30.png}\\ \end{tabular} \end{center} \caption{ {\bf Left:} Recovery via the unperturbed matrix $A^3_d$ (blue curves), $d\in\{10,20,30\}$ (from top to bottom) versus the perturbed counterpart (red curves). The dash-dot line depicts the empirical probability ($500$ trials) that reduced systems are overdetermined and have full rank.
The solid line (blue: unperturbed, red: perturbed) shows the probability that a $k$-sparse nonnegative vector is unique. The dashed curve shows the probability that a $k$-sparse binary solution is the unique solution in $[0,1]^n$. Additional information like binarity gives only a slight performance boost. The curve $k_{\delta}$ \eqref{eq:k-unpertubed} correctly predicts that 18 ($d=10$), 48 ($d=20$), and 85 ($d=30$) particles are reconstructed with high probability via the unperturbed systems, and 66 ($d=10$), 181 ($d=20$), 328 ($d=30$) particles via the perturbed systems according to $k_{max}$ \eqref{eq:kmax-criterion}. However, 105 ($d=10$), 241 ($d=20$), 408 ($d=30$), predicted by $\tilde k_{max}$ from \eqref{eq:kmax-definition}, are more accurate. Division by three does not seem to be necessary. {\bf Right:} Empirical probability obtained from 10000 trials that $k$ random columns of the unperturbed matrix (solid black line) or of the perturbed matrix (dashed black line) are linearly independent.} \label{fig:sliceA3D} \end{figure} \begin{figure} \begin{center} \begin{tabular}{cc} \includegraphics[clip,width=0.43\textwidth]{ContourRankA0.png} & \includegraphics[clip,width=0.43\textwidth]{ContourRankA2_with_curve.png}\\ \includegraphics[clip,width=0.43\textwidth]{ContourLPposA0_with_curve.png} & \includegraphics[clip,width=0.43\textwidth]{ContourLPposA2_with_curves.png}\\ \includegraphics[clip,width=0.43\textwidth]{ContourLPbinA0_with_curve.png} & \includegraphics[clip,width=0.43\textwidth]{ContourLPbinA2_with_curves.png}\\ \end{tabular} \end{center} \caption{{\bf Left:} Success and failure empirical phase transitions for unperturbed systems and, {\bf right}, for perturbed systems. {\bf Top:} Probability that the reduced matrices are overdetermined and of full rank, along ({\bf right}) with the estimated relative critical sparsity level $k_{crit}$ (dashed red line) which induces overdetermined reduced matrices.
{\bf Middle:} Probability of uniqueness of a $k=\rho d^2$ sparse nonnegative vector. {\bf Bottom:} Probability of uniqueness in $[0,1]^n$ of a $k=\rho d^2$ sparse binary vector. The blue curve depicts again $k_{\delta}$ \eqref{eq:k-unpertubed}, the dashed red curve $k_{crit}$ \eqref{eq:def-k-crit}, the solid red curve $k_{max}$ \eqref{eq:kmax-criterion} and $\tilde k_{max}$ \eqref{eq:kmax-definition}, and the green curve $k_{opt}$ \eqref{eq:k_opt}. In the case of the perturbed matrix $\tilde A$, exact recovery is possible \emph{beyond} overdetermined reduced matrices. Moreover, $\tilde k_{max}$ follows the empirical phase transition for perturbed systems most accurately. } \label{fig:contour3D} \end{figure} \section{Conclusions} The main contribution of this work is the transfer of recent results on compressive sensing via expander graphs with bad expansion properties to the discrete tomography problem. In particular, we consider a sparse binary measurement matrix, which encodes the incidence relation between projection rays and image discretization cells, along with its slightly perturbed counterpart. While the expected expansion of the underlying graph does not change with perturbation, the recovery performance can be boosted significantly. We investigate the average performance in exactly recovering sparse nonnegative signals by analyzing the properties of reduced systems obtained by eliminating zero measurements and related redundant discretization cells. We compute sharp sparsity thresholds, such that the maximal admissible sparsity can be determined precisely for both perturbed and unperturbed scenarios. Our theoretical analysis suggests that a similar procedure can be applied to different geometries. \bibliographystyle{plain}
\section{Introduction} Research on spin coherence generation, manipulation and detection has become a topical area in semiconductor physics.\cite{awschalom_book,spin_phys} A promising system for studying spin coherence is an ensemble of $n$-type quantum dots (QDs). Short optical pulses can efficiently induce long-lived spin coherence in such structures, which subsequently can be traced by precession about an external magnetic field.\cite{PhysRevLett.94.227403,greilich06} Due to the combined action of a periodic train of pump pulses and the hyperfine electron-nuclear interaction, a mode-locking of electron spin precession may occur.\cite{A.Greilich07212006,A.Greilich09282007,carter:167403} As a result, an ensemble of about a million QD electron spins is pushed into a regime given by a limited number of precession modes with commensurable frequencies. This may pave the way toward large-scale spintronic applications. A fundamental question in this regard concerns the limitations of spin initialization and detection by optical pulses. In previous studies, pumping and probing of spin excitations were done by short optical pulses with durations no longer than a few picoseconds, much shorter than the period of spin precession about the magnetic field.\cite{PhysRevLett.94.227403,greilich06,A.Greilich07212006,A.Greilich09282007,carter:167403} The spin initialization is most efficient for laser pulses with area $\Theta = \pi$, requiring high peak powers generated by bulky lasers such as Ti-Sapphire oscillators, for example, pumped by intense continuous wave lasers. For applications, the use of more compact pulsed solid state lasers is appealing, which typically provide, however, considerably lower output power levels. To reach a pulse area of $\pi$ then, the pulses must have much longer duration. On the other hand, such pulses can also have a much smaller spectral width, so that a less inhomogeneous QD distribution is excited.
But the condition that these pulses are much shorter than the precession period is not necessarily fulfilled then, potentially affecting the spin coherence. Usage of lasers with reduced peak powers may also be beneficial in other respects, for example by reducing the importance of non-linear optical processes such as two- or multi-photon absorption, which may serve as potential sources of spin decoherence. Here we address this problem by reporting on mode-locked spin coherence initialization and detection in $n$-type singly-charged QDs, using pump and probe pulses with durations up to 80~ps. We demonstrate experimentally that the efficiency of initialization and detection depends strongly on the ratio of laser pulse duration to spin precession period. The experimental data are in good agreement with predictions based on a microscopic model. The paper is organized as follows: In Sec.~\ref{Sec:exp} the optical techniques are described, and the experimental results are presented in Sec.~\ref{Sec:results}. Sec.~\ref{Sec:model} provides the theoretical background, and the comparison between experiment and theory is discussed in Sec.~V. \section{Sample and experiment} \label{Sec:exp} We study the spin coherence in an (In,Ga)As/GaAs self-assembled QD ensemble, grown by molecular-beam epitaxy. The sample contains $20$ layers of (In,Ga)As dots, separated by $60$~nm GaAs barriers. The QD density in each layer is about $10^{10}$~dots/cm$^2$. $\delta$-sheets of Si donors are positioned $20$~nm below each QD layer with a dopant density roughly equal to the dot density to achieve an average occupation of one resident electron per QD. The sample was thermally annealed at a temperature of $945$~$^\circ$C for $30$~s. It is mounted in a superconducting split-coil magnet cryostat which allows application of magnetic fields $B$ up to $6$~T. The sample is cooled down to $T=6$~K by helium contact gas. At this temperature, the ground state photoluminescence maximum is at $1.398$~eV.
For monitoring the spin precession, an external magnetic field is applied perpendicular to the light propagation direction (Voigt geometry). The spin precession is traced by time-resolved pump-probe techniques. A Ti:Sapphire laser emits pulses at a repetition rate of $75.75$~MHz, corresponding to a repetition period $T_R=13.2$~ns. The range of pulse durations that can be covered with this laser extends from less than 100~fs up to 80~ps. In all cases the precise pulse duration depends on the laser adjustment, with variations on the order of 10\%. In the case of sub-ps pulses this is not relevant for the physics described below, but for the few-10-ps pulses this leads to slight variations of the measured spin coherence signal, without affecting the general conclusions. We also note that the spectral width of the pulses decreases inversely with increasing pulse duration, but the pulses are not Fourier-limited for pulse durations exceeding 2~ps. The laser beam is split into a pump beam and a probe beam, both having the same photon energies resonant to the ground state photoluminescence peak so that they excite the singlet trion transition. Spin polarization of resident electrons and electron-hole complexes is induced by the pump beam, which is modulated by a photoelastic modulator, varying between left- and right-handed circular polarization at a frequency of $50$~kHz. The intensity of the pump is about $5$ times higher than that of the linearly polarized probe beam. Independent of the pump pulse duration, the laser output power was adjusted in order to obtain maximal signal amplitude, which is achieved by a pump pulse area of about $\Theta = \pi$. After transmission through the sample the probe beam is split into two orthogonal polarizations, whose intensities are detected by a balanced photodiode bridge.
Depending on the $z$-component of the spin polarization (where $z$ is the light propagation direction), the plane of linear polarization of the probe beam is rotated due to the Faraday rotation (FR) effect, which leads to a variation of the intensities of the two split beams. By polarizing the two beams appropriately, either Faraday rotation or ellipticity is measured.\cite{glazov2010a} The time delay between pump and probe pulses is tuned by a mechanical delay line up to $13$~ns with a precision of about $20$~fs. This setup is used as long as resonant pump and probe pulses of the same duration are applied. When varying these durations relative to each other, the setup is modified. One laser is then used as pump (probe) only, while a second laser is used as probe (pump). The duration of the pulses emitted from this second laser is fixed at 2~ps. Both lasers are synchronized with an accuracy of about 100~fs by using one of them as master laser for the second laser, whose pulse repetition rate is adjusted accordingly. The photon energies of the pump and probe pulses are kept in resonance with an accuracy of $0.1$~meV. \section{Experimental results} \label{Sec:results} Figure \ref{fig:exp1} shows time resolved Faraday rotation signals, where the durations of pump and probe pulses are equal, $\tau_{\rm pump} = \tau_{\rm probe}$. Different panels correspond to different pulse durations of 2, 10, 30, and 80~ps. In each case different magnetic field strengths $B$ are applied. \begin{figure}[htbp] \includegraphics[width=\linewidth]{fig1} \caption{Faraday rotation signals measured as function of time delay between pump and probe at different magnetic fields. The equal pump and probe pulse durations were $\tau_{\rm pump} = \tau_{\rm probe} =2$~ps (a), $10$~ps (b), $30$~ps (c), and 80$\,$ps (d), as indicated by the numbers in brackets giving $(\tau_{\rm pump}, \tau_{\rm probe})$.
The noise around zero delay comes from scattered laser light, as seen particularly well for the long duration pulses.} \label{fig:exp1} \end{figure} For $2$~ps pulses strong Faraday rotation signals appear up to the highest applicable magnetic fields, so that in all cases also mode-locked spin coherence can be generated and detected, in agreement with previous reports.\cite{A.Greilich07212006} A prerequisite outlined in these studies is that the pump pulse duration is much shorter than the period of spin precession, given by $T_e = 2 \pi \hbar /(g_e \mu_B B)$, where $\hbar$ is the reduced Planck constant and $\mu_B$ is the Bohr magneton. $g_e$ is the average electron $g$ factor of the optically excited QD electron spin ensemble. For our QDs with a $g$ factor of $-0.56$ at the ground state photoluminescence maximum we obtain $T_e$ [ps] $= 127/B$[T], which gives 21 ps at $B=6$ T, still an order of magnitude longer than the pulse duration. Therefore the Larmor precession is of negligible influence for the processes of both spin coherence generation and measurement, as pump and probe do not average over distinctly varying spin orientations during precession. We have also performed experiments for pulse durations below 1 ps, and the appearance of the Faraday rotation traces (not shown) for magnetic fields up to 6 T is similar to the one for 2 ps pulses, so that generation and detection of spin coherence work efficiently also in these cases. Our setup permits us, however, also to increase the laser pulse duration to being comparable or even longer than the spin precession period. For this long pulse case the question arises to what extent the spin coherence can still be accessed. Faraday rotation traces for $10$~ps pulses are shown in Fig.~\ref{fig:exp1}(b). The curves recorded for low magnetic fields show a strong spin precession signal, but beyond 1~T the signal strength gets continuously weaker. Above about 3~T spin precession can no longer be resolved.
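The numerical relation $T_e\,[\mathrm{ps}] = 127/B\,[\mathrm{T}]$ used above follows directly from the constants (a minimal sketch; the function name is ours, and $|g_e| = 0.56$ is taken from the text):

```python
from math import pi

HBAR = 1.054571817e-34    # reduced Planck constant (J s)
MU_B = 9.2740100783e-24   # Bohr magneton (J/T)
G_E = 0.56                # modulus of the electron g factor at the PL maximum

def precession_period_ps(B):
    # T_e = 2*pi*hbar / (|g_e| * mu_B * B), converted from seconds to ps
    return 2 * pi * HBAR / (G_E * MU_B * B) * 1e12
```

This gives $T_e \approx 127.6/B$ ps$\cdot$T, i.e.\ about 21 ps at $B = 6$ T, matching the rounded relation quoted in the text.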
As a characteristic quantity for this transition we use the product of the Larmor precession frequency, $\Omega_{\rm L} = 2\pi / T_e$, and the pump pulse duration: $\Omega_{\rm L} \tau_{\rm pump}$. The field strength of 3 T corresponds to a value of 1.5 for this product. This means that during the pump pulse the spins perform about a quarter of a full revolution about the magnetic field. The characteristic value of 1.5 for this product is also found when the pulse duration is extended further. For example, for 30~ps pulses in Fig.~\ref{fig:exp1}(c) a strong spin coherence signal can be observed up to 0.6~T, beyond which the FR signal drops strongly, so that above 1~T the signal strength reaches the noise level. The 1~T field corresponds again to a product of 1.5, as confirmed for pump pulses of 80~ps, where the spin coherence signal appears up to 0.4~T only, while for higher fields it cannot be observed anymore. Note, however, that the magnetic field dependence of the Faraday rotation signal amplitude is quite complicated, because it typically shows a non-monotonic variation with $B$, as can be seen, for example, for the (30~ps, 30~ps) configuration in Fig.~\ref{fig:exp1}(c). After being strong at low fields, it drops around 0.4~T, gets stronger again around 0.6~T, and finally vanishes at higher magnetic fields. The origin of this variation is not fully clear yet, as several effects may become relevant, such as the variation of the number of mode-locked modes with increasing field, which is particularly relevant at low fields, where only a few precession modes are synchronized. In addition, the nuclear-induced electron spin precession frequency focusing effects involved in the mode locking may vary with field strength.\cite{A.Greilich09282007} Independent of that, the signal disappears at a characteristic field where $\Omega_{\rm L}\tau_{\rm pump} \sim 1.5$ for all pulse durations.
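Combining the empirical criterion $\Omega_{\rm L}\tau_{\rm pump} \approx 1.5$ with $T_e\,[\mathrm{ps}] = 127/B\,[\mathrm{T}]$ yields the threshold fields for the different pulse durations, $B_{\rm th} = 1.5\cdot 127/(2\pi\tau_{\rm pump})$. A quick check (a sketch; the function name is ours):

```python
from math import pi

def threshold_field(tau_pump_ps, product=1.5, TeB=127.0):
    # Omega_L * tau = product, with Omega_L = 2*pi/T_e and T_e[ps] = TeB/B[T],
    # solved for B: B = product * TeB / (2*pi*tau)
    return product * TeB / (2 * pi * tau_pump_ps)

for tau in (10, 30, 80):
    print(tau, round(threshold_field(tau), 2))
# roughly 3 T, 1 T and 0.4 T, in line with the observed thresholds
```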
In a nutshell, spin polarization can be excited and detected by pulses with widely varying durations up to 80~ps. The mode-locking of electron spin coherence is pronounced for any $\tau_{\rm pump}= \tau_{\rm probe}$, as evidenced by the strong signal at negative delays. The efficiency of spin coherence generation depends, however, critically on the parameter $\Omega_{\rm L} \tau_{\rm pump}$. The product $\Omega_{\rm L} \tau_{\rm pump}$ has to be smaller than about 1.5; for higher values spin coherence initialization and measurement do not work anymore. For completeness we note that the corresponding ellipticity traces look qualitatively similar to the Faraday rotation traces, even though there are quantitative differences concerning signal amplitudes and in particular the ratio of signals before and after pump pulse application.\cite{glazov2010a} In all cases dephasing of the signal is seen on time scales of a few nanoseconds. There are two reasons for this dephasing: the excitation of an inhomogeneous spin ensemble with varying $g$ factors and therefore also varying precession frequencies, and the spin precession about the randomly oriented nuclear magnetic field. The latter is important mostly at low magnetic fields, while the $g$ factor inhomogeneity becomes dominant for fields exceeding by far the nuclear field of about 10~mT.\cite{Auer} \begin{figure}[t] \includegraphics[width=0.7\linewidth]{Fig2} \caption{Ellipticity signals recorded with a probe pulse duration of 2\,ps, with the pump pulse duration increased to 10~ps in panel (a), 30~ps in panel (b) and 80~ps in panel (c), at different magnetic field strengths. The numbers in brackets give the pump and probe durations $(\tau_{\rm pump}, \tau_{\rm probe})$.} \label{fig:exp2} \end{figure} In the measurements presented so far, we use pump and probe pulses of the same duration.
For the long pulses this means that considerable spin precession of the involved carriers, either resident or photoexcited, occurs during pulse application. This concerns both the initialization of coherence and also its measurement. Ideally these two processes should be separated from one another, which requires independent variation of the pump and probe pulse durations relative to each other $(\tau_{\rm pump} \ne \tau_{\rm probe})$. To that end, we first perform experiments in which the pump pulse duration is varied, while the probe pulse duration is kept constant at 2~ps. From above we know that this pulse duration is short enough that the spin orientation can be considered as frozen. The results are shown in Fig.~\ref{fig:exp2}. Figure~\ref{fig:exp2}(a) shows the magnetic field series of ellipticity traces for 10~ps pump pulses. Note that traces measured in Faraday rotation show a similar variation with magnetic field, but the signal strength is weaker, as seen from the enhanced noise in the signals. As soon as $\Omega_{\rm L} \tau_{\rm pump}$ passes a certain threshold with increasing $B$, the coherent signal drops considerably, indicating that spin initialization does not work anymore. When determining the threshold magnetic field, care needs to be exercised. On the one hand, due to the longer pump pulse, which correspondingly has a reduced spectral width, a smaller number of spins becomes initialized. As a result, a smaller number of mode-locked spin precession modes is involved. On the other hand, the probe still collects signals from a large, partially disordered ensemble. Therefore the signals become overall weaker with increasing pump duration as compared to the short pulse excitation in Fig.~\ref{fig:exp1}. Despite the weak signal, we see that at 3~T, which was the threshold field for the (10~ps, 10~ps) configuration, the signal gets weak, but can still be observed in Fig.~\ref{fig:exp2}(a).
This indicates that the threshold field may be slightly higher than in the case of equal pulse durations, and that not only the pumping but also the probing is influential for the signal. The difference is small, however, as the spin initialization is hampered when the laser pulse duration exceeds a quarter of a precession period. Note also that the dependence of the ellipticity signal amplitude on magnetic field is smooth and shows no strong nonmonotonic variations with $B$ like those observed in Fig.~\ref{fig:exp1}. Below we will show that this dependence can be explained by accounting for the finite pulse duration. These findings are corroborated when switching to 30~ps pump pulses [results shown in Fig.~\ref{fig:exp2}(b)]. The magnetic field at which the signal drop occurs is about 1.2~T, where the product $\Omega_{\rm L} \tau_{\rm pump}$ is 1.8. This again indicates that the threshold field is slightly larger than in the duration-degenerate configuration of Fig.~\ref{fig:exp1}. In addition, as was also the case in Fig.~\ref{fig:exp2}(a), the drop of the signal amplitude before its final disappearance is much more abrupt than in Fig.~\ref{fig:exp1}. Hence, the ellipticity signal remains significant up to fields very close to the threshold field and then drops rather fast to zero. That the coherent signals remain significant up to fields close to the threshold, in contrast to the observations in Fig.~\ref{fig:exp1}, is another indication of the importance of the probing process. \begin{figure}[t] \includegraphics[width=0.7\linewidth]{Fig3} \caption{Faraday rotation signals measured for a fixed pump pulse duration of 2~ps, but varying probe pulse durations of 10~ps (a), 30~ps (b), and 80~ps (c) at different magnetic fields.
The numbers in brackets give the pump and probe durations $(\tau_{\rm pump}, \tau_{\rm probe})$.} \label{fig:exp3} \end{figure} The overall weakness of the signal for all configurations shown in Fig.~\ref{fig:exp2} becomes particularly pronounced for the 80~ps pump pulse case, which is presented in Fig.~\ref{fig:exp2}(c). Here faint spin oscillations are seen at positive delays in magnetic fields up to 0.15~T. At higher fields the noise level exceeds the signal amplitude, so that a validation of the threshold criterion is not possible, despite the long accumulation times used in our experiments. To work out the influence of the pumping, we also test the complementary situation by fixing the pump pulse duration at 2~ps and changing the probe pulse duration $\tau_{\rm probe}$. By doing so we isolate the effect of the probe on the spin coherence measurement. The probe then has a spectral width that is always smaller than that of the pump, so that it tests a spin ensemble smaller than the one addressed by the pump, but fully initialized. Typical examples are shown in Fig.~\ref{fig:exp3} for probe pulse durations of 10~ps (a), 30~ps (b) and 80~ps (c). In all cases strong signals are seen, much stronger than in Fig.~\ref{fig:exp2}, as evidenced by the smooth, almost noise-free FR traces, indicating that spin initialization works well. In contrast to the 2~ps probe pulse case, where spin coherence can be detected up to 6~T, see Fig.~\ref{fig:exp1}(a), we observe here coherent signal only up to 2~T for the 10~ps probe, up to 1~T for the 30~ps probe, and up to 0.5~T for the 80~ps probe. These data suggest that the dependence of the spin signal strength on the parameter $\Omega_{\rm L} \tau_{\rm pump}$ may be transferred also to the probe duration dependence, for which $\Omega_{\rm L} \tau_{\rm probe}$ would be the proper characteristic quantity. Also here the threshold for $\Omega_{\rm L} \tau_{\rm probe}$ is about 1.5, above which the signal drops fast to zero.
Below this threshold the signal amplitude remains considerable, as validated also by Fig.~\ref{fig:exp4}, which shows Faraday rotation traces taken for products $\Omega_{\rm L} \tau_{\rm probe}$ equal to 1 and 1.5. For that purpose, different magnetic field strengths are applied for the probe durations of 10, 30, and 80~ps, as seen from the widely varying precession frequencies. In every case, a considerable drop of the Faraday rotation signal strength is observed when a product value of unity is exceeded. The drop occurs, however, rather abruptly when approaching the threshold, similar to Fig.~\ref{fig:exp2} for ellipticity, but for all probe durations the magnetic field dependencies are smooth, showing no fluctuations. \begin{figure}[t] \includegraphics[width=0.9\linewidth]{Fig4} \caption{(Color online) Faraday rotation signals measured such that the product $\Omega_{\rm L} \tau_{\rm probe}$ is constant at a value of 1 and 1.5 (except for the 10~ps probe case, where the trace for a product value of 1.3 is shown instead of 1.5, because at 1.5 the signal is already very weak). For the different probe durations of 10, 30, and 80~ps, the magnetic field strength is adjusted correspondingly, as seen from the varying precession frequencies. The pump pulse duration is fixed at 2~ps. The numbers in brackets give the pump and probe durations $(\tau_{\rm pump}, \tau_{\rm probe})$. Magnetic fields for the curves are: (i) 2/10 ps: $B=2$~T for $\Omega_{\rm L} \tau_{\rm probe} =1$ and 2.5~T for $\Omega_{\rm L} \tau_{\rm probe} =1.3$, (ii) 2/30 ps: 0.7~T for $\Omega_{\rm L} \tau_{\rm probe} =1.1$ and 1~T for $\Omega_{\rm L} \tau_{\rm probe} =1.5$, (iii) 2/80 ps: 0.25~T for $\Omega_{\rm L} \tau_{\rm probe} =1$ and 0.4~T for $\Omega_{\rm L} \tau_{\rm probe} =1.6$.} \label{fig:exp4} \end{figure} All together, we find that the efficiency of spin initialization (measurement) depends sensitively on the pump (probe) duration.
The effects of the duration increase for pump and probe have a rather symmetric impact on the measured spin coherence signal. \section{Theoretical Model}\label{Sec:model} From the results described so far we find a strong dependence of the spin coherence signal on both the pump and probe pulse durations. Therefore a model description of such measurements needs to take into account the generation and detection of spin coherence by pulses of finite duration. Allowance is also made for the detuning between the pump/probe pulse energies and the trion resonance in an individual QD. \subsection{Basic theory}\label{sec:gen} We consider $n$-type singly-charged QDs pumped and probed by optical pulses propagating along the sample growth axis $z$. We assume that the optical frequencies of the pump, $\omega_{\rm pump}$, and probe, $\omega_{\rm probe}$, pulses are close to that of the singlet $X^-$ trion resonance with transition frequency $\omega_0$. The QD is subject to a magnetic field which is assumed to be applied along the $x$ axis in the dot plane. The magnetic field induces spin splittings of the electron and trion states. The trion splitting is neglected hereafter because the in-plane heavy-hole $g$ factor is small compared with the electron $g$ factor.\cite{Mar99} We take into account the finite durations of the pump ($\tau_{\rm pump}$) and probe ($\tau_{\rm probe}$) pulses, in contrast to Refs.~\onlinecite{shabaev:201305,economou:205415,yugova09}, where these pulses were considered as negligibly short. In particular, we assume that the pulse duration can be comparable to or even longer than the electron spin precession period in the magnetic field, $T_{e}$. It is supposed, however, that the pulses are short as compared with the relaxation times in the system, $\tau_{\rm pump}, \tau_{\rm probe} \ll \tau_{T},\tau_{QD}$, where $\tau_{T}$ is the spin relaxation time of the hole in the trion and $\tau_{QD}$ is the trion lifetime in a quantum dot.
While the former is at least in the 100~ns range at $T < 10$~K,\cite{hsk} the trion lifetime is 500~ps, as determined from time-resolved photoluminescence.\footnote{In the opposite limit the pump and probe pulses can be considered as constant wave radiation.} Therefore, the description of pumping and probing can be carried out using the Schr\"{o}dinger equation without introducing a spin density matrix. \subsection{Generation of electron spin coherence} We describe the QD state by a four component wave function $\Psi=[\psi_{1/2}, \psi_{-1/2},\psi_{3/2},\psi_{-3/2}]$ where the subscripts $\pm 1/2$ refer to the electron states and the subscripts $\pm 3/2$ refer to the heavy-hole trion states. For a $\sigma^+$ polarized pump pulse these components obey the following equations \begin{subequations} \label{schroed:pump} \begin{equation} \mathrm i \hbar \dot\psi_{1/2} = V_+^*(t)\psi_{3/2} + \frac{\hbar\Omega_{\rm L}}{2}\psi_{-1/2}, \end{equation} \begin{equation} \mathrm i \hbar \dot\psi_{-1/2} = \frac{\hbar\Omega_{\rm L}}{2}\psi_{1/2}, \end{equation} \begin{equation} \mathrm i \hbar \dot\psi_{3/2} = \hbar\omega_0\psi_{3/2}+V_+(t)\psi_{1/2}, \end{equation} \end{subequations} where $V_+(t)= e^{-\mathrm i \omega_{\rm pump} t} f_{\rm pump}(t)/\hbar$, with $f_{\rm pump}(t)$ being the smooth envelope of the pump electric field, is the time-dependent matrix element describing the interaction of a $\sigma^+$ polarized photon with a QD. This matrix element is proportional to the electric field of the pump pulse and the transition dipole matrix element.\cite{yugova09} In the following $f_{\rm pump}(t)$ is assumed to be an even function of time with its maximum at $t=0$. This time moment coincides with the pump pulse arrival.
Accounting for the Zeeman splitting in Eqs.~\eqref{schroed:pump} is a major difference between the present approach and previous treatments.\cite{shabaev:201305,economou:205415,yugova09} Note that spin pumping of a free, two-dimensional gas by pulses that are long compared with the spin precession period was considered in Ref.~\onlinecite{averkiev08}. Without optical pumping, $V_+(t)\equiv 0$, the spin system~\eqref{schroed:pump} precesses coherently about the in-plane magnetic field. It is convenient to introduce the electron spin state combinations \begin{equation} \label{x:spins} \psi_{x} = \frac{1}{\sqrt{2}}(\psi_{1/2}+\psi_{-1/2}),\quad \psi_{\bar x} = \frac{1}{\sqrt{2}}(\psi_{1/2}-\psi_{-1/2}), \end{equation} that correspond to the eigenstates in the magnetic field $\bm B \parallel x$ and evolve in time as \begin{equation} \label{precess} \psi_{x}(t) = \tilde{\psi}_x\ {\exp{({-}\mathrm i \Omega_{\rm L} t/2)}}, \quad \psi_{\bar x}(t) = \tilde{\psi}_{\bar x}\ {\exp{(\mathrm i \Omega_{\rm L} t/2)}}, \end{equation} where $\tilde{\psi}_{x}$, $\tilde{\psi}_{\bar x}$ are constants determined by the initial conditions. With optical pumping, $V_+(t) \ne 0$, the amplitudes $\tilde{\psi}_{x}$, $\tilde{\psi}_{\bar x}$ become time-dependent.\cite{ll3_eng} In order to describe the pump action on the electron spin we have to establish a link between the spin components before and after pump pulse arrival. The change of the spin with time results from two effects: the Larmor precession about the magnetic field and the effect of the pump pulse.
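To make the connection between the spinor phases of Eq.~\eqref{precess} and the classical Larmor precession picture explicit, the following short numerical sketch (our own illustration, not part of the derivation) checks that the phase factors $e^{\mp\mathrm i\Omega_{\rm L}t/2}$ rotate the spin vector, built from the amplitudes as in Eq.~\eqref{spin}, about the $x$ axis:

```python
import numpy as np

OMEGA_L = 2.0 * np.pi  # Larmor frequency (arbitrary units)

def spin(psi_x, psi_xbar):
    """Spin components from the eigenstate amplitudes, cf. Eq. (spin)."""
    sx = 0.5 * (abs(psi_x)**2 - abs(psi_xbar)**2)
    sy = -np.imag(psi_x * np.conjugate(psi_xbar))
    sz = np.real(psi_x * np.conjugate(psi_xbar))
    return np.array([sx, sy, sz])

# arbitrary normalized initial amplitudes
psi_x0, psi_xbar0 = 0.6, 0.8j
s0 = spin(psi_x0, psi_xbar0)

t = 0.37
# phase evolution according to Eq. (precess)
s_t = spin(psi_x0 * np.exp(-0.5j * OMEGA_L * t),
           psi_xbar0 * np.exp(+0.5j * OMEGA_L * t))

# the same spin, propagated as a classical rotation about the x axis
c, s_ = np.cos(OMEGA_L * t), np.sin(OMEGA_L * t)
s_rot = np.array([s0[0],
                  s0[1] * c + s0[2] * s_,
                  s0[2] * c - s0[1] * s_])
print(np.allclose(s_t, s_rot))  # True
```

The $x$ component is conserved while $(S_y, S_z)$ rotate at the rate $\Omega_{\rm L}$, as expected for free precession about $\bm B \parallel x$.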
The Larmor precession of the electron spin during time $T$ can be described by a linear operator $\mathcal R_{\Omega}(T)$.\cite{merkulov02} It is convenient to treat the spin precession separately from the optical pulse and connect the rotated electron spin vector $\bm S^{-} = \mathcal R_{\Omega}(T_0) \bm S(-T_0)$ with the electron spin vector $\bm S^{+} = \mathcal R_{\Omega}^{-1}(T_0) \bm S(T_0)$, where $T_0$ exceeds by far the pulse duration so that on the time scale of $T_0$ the pulse action can be neglected. On the quantum-mechanical level this operation is equivalent to the unitary transformation Eq.~\eqref{precess}. Hence the components of the spin vector $\bm S^{\pm}$ are given by: \begin{eqnarray} \label{spin} S_x^{\pm} &= &\frac{1}{2}\left[|\tilde{\psi}_{x}(\pm \infty)|^2 - |\tilde{\psi}_{\bar x}(\pm \infty)|^2\right],\nonumber \\ S_y^{\pm} &=& -\Im\{\tilde{\psi}_{x}(\pm \infty)\tilde{\psi}^*_{\bar x}(\pm \infty)\},\\ S_z^{\pm} &=& \Re\{\tilde{\psi}_{x}(\pm \infty)\tilde{\psi}^*_{\bar x}(\pm \infty)\}. 
\nonumber \end{eqnarray} \begin{widetext} One can show that in the limit of low pump power (see Appendix~\ref{app:transform} for details) \begin{subequations} \label{spin:transf} \begin{multline} \label{Sz} S_z^{+} =-\frac{1}{2} \Re G(\Lambda,\Omega_{\rm L}) + \Re{\left[1 -\frac{1}{2} G\left(\Lambda+\frac{\Omega_{\rm L}}{2},0\right) -\frac{1}{2} G\left(\Lambda-\frac{\Omega_{\rm L}}{2},0\right)\right]}S_{z}^{-} +\\ \frac{1}{2} \Im {\left[ G\left(\Lambda+\frac{\Omega_{\rm L}}{2},0\right) + G\left(\Lambda-\frac{\Omega_{\rm L}}{2},0\right)\right]}S_{y}^{-}, \end{multline} \begin{multline} \label{Sx} S_x^{+} =-\frac{1}{4} \Re{\left[ G\left(\Lambda+\frac{\Omega_{\rm L}}{2},0\right) - G\left(\Lambda-\frac{\Omega_{\rm L}}{2},0\right)\right]} + \Re{\left[1 -\frac{1}{2} G\left(\Lambda+\frac{\Omega_{\rm L}}{2},0\right) -\frac{1}{2} G\left(\Lambda-\frac{\Omega_{\rm L}}{2},0\right)\right]}S_{x}^{-} - \\ \Im {G(\Lambda,\Omega_{\rm L})}S_{y}^{-}, \end{multline} \begin{multline} \label{Sy} S_y^{+} = \Re{\left[1-\frac{1}{2}G\left(\Lambda+\frac{\Omega_{\rm L}}{2},0\right)-\frac{1}{2} G\left(\Lambda-\frac{\Omega_{\rm L}}{2},0\right)\right]}S_{y}^{-} + \Im {G(\Lambda,\Omega_{\rm L})}S_{x}^{-}-\\ \frac{1}{2} \Im {\left[ G\left(\Lambda+\frac{\Omega_{\rm L}}{2},0\right) + G\left(\Lambda-\frac{\Omega_{\rm L}}{2},0\right)\right]}S_{z}^{-}. \end{multline} \end{subequations} where $\Lambda=\omega_{\rm pump} - \omega_0$ is the energy detuning between pump pulse and trion resonance. Here \begin{equation} \label{g:func} G_{\rm pump}(\Lambda, \Omega) = \int_{-\infty}^{\infty} \mathrm dt f_{\rm pump}(t) \int_{-\infty}^t \mathrm dt' f_{\rm pump}(t') \mathrm \ e^{\mathrm i \Lambda (t-t')} \cos{\left[\frac{\Omega}{2}(t+t')\right]}. 
\end{equation} The function $G_{\rm pump}(\Lambda, \Omega)$ can be found analytically for Fourier-limited pulses, such as in the case of an exponential pulse, $f_{\rm pump}(t) =f_0 \mathrm e^{-|t|/\tau_{\rm pump}}$, where $f_0$ is the pump pulse amplitude that is related to its area, $\Theta = 2\int\limits_{-\infty}^\infty f_{\rm pump}(t) \mathrm d t$, by $f_0=\Theta/(4\tau_{\rm pump})$. Then one can show that \begin{equation} \label{g:res} G_{\rm pump}(\Lambda,\Omega) = \frac{\Theta^2(2+\mathrm i \Lambda\tau_{\rm pump})}{[4+(\Omega\tau_{\rm pump})^2][4-8\mathrm i \Lambda\tau_{\rm pump} -4(\Lambda\tau_{\rm pump})^2 + (\Omega\tau_{\rm pump})^2]}. \end{equation} For $\Omega \tau_{\rm pump}=0$, Eq.~\eqref{g:res} is equivalent to Eq.~(61) of Ref.~\onlinecite{yugova09}. \end{widetext} Although Eqs.~\eqref{spin:transf} are quite bulky, they allow one to identify all essential physical features caused by the pump pulse application. First of all, circularly polarized pump pulses cause electron spin orientation along the $z$ axis \begin{equation} \label{Sz+} S_z^+=-\Re{G(\Lambda,\Omega_{\rm L})/2}. \end{equation} In the absence of a magnetic field and for a resonant pulse, $S_z^+ = -\Theta^2/16$. This is equivalent to the regular spin initialization protocol based on very short pump pulses.\cite{shabaev:201305,greilich06} Due to the Zeeman splitting, the spin can acquire some degree of orientation along the $x$ axis, owing to unequal transition rates out of the magnetic-field-split sublevels, see the first term in Eq.~\eqref{Sx} \begin{equation} \label{Sx+} S_x^+=-\frac{1}{4} \Re{\left[ G\left(\Lambda+\frac{\Omega_{\rm L}}{2},0\right) - G\left(\Lambda-\frac{\Omega_{\rm L}}{2},0\right)\right]}. \end{equation} This effect arises from the detuning of the pump pulse from the center of the Zeeman-split doublet, as discussed in more detail below.
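As a sanity check of Eq.~\eqref{g:res}, the double integral of Eq.~\eqref{g:func} can be evaluated numerically for the exponential envelope. The sketch below (our own illustration, restricted to the zero-detuning slice $\Lambda=0$, where the integrand is real) reproduces the closed form $G_{\rm pump}(0,\Omega)=2\Theta^2/[4+(\Omega\tau_{\rm pump})^2]^2$:

```python
import numpy as np
from scipy.integrate import quad

TAU = 1.0               # pump duration, sets the time unit
THETA = 1.0             # pulse area
F0 = THETA / (4 * TAU)  # amplitude of f_pump(t) = F0 * exp(-|t|/TAU)

def f(t):
    return F0 * np.exp(-abs(t) / TAU)

def G_numeric(Omega, cutoff=25.0):
    """Direct evaluation of Eq. (g:func) at zero detuning, Lambda = 0."""
    def outer(t):
        inner, _ = quad(lambda tp: f(tp) * np.cos(0.5 * Omega * (t + tp)),
                        -cutoff, t, limit=200)
        return f(t) * inner
    val, _ = quad(outer, -cutoff, cutoff, limit=200)
    return val

def G_closed(Omega):
    """Eq. (g:res) evaluated at Lambda = 0."""
    return 2 * THETA**2 / (4 + (Omega * TAU)**2)**2

for Om in (0.0, 1.5, 3.0):
    print(f"Omega*tau = {Om}: numeric {G_numeric(Om):.6f}, "
          f"closed form {G_closed(Om):.6f}")
```

The two evaluations agree to the quadrature accuracy and directly show the suppression of the pump response with growing $\Omega\tau_{\rm pump}$.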
In addition, the spin is rotated by the pump pulse action: in the $(xy)$ plane, similar to the case of negligible Zeeman splitting,\cite{economou:205415,yugova09,phelps:237402} and in the $(yz)$ plane, due to the combined action of the pump pulse and the spin splitting. \begin{figure}[hptb] \includegraphics[width=0.45\textwidth]{Figure5} \caption{(Color online) Electron spin $z$ component generated by a single pump pulse as a function of the reduced magnetic field $\Omega_{\rm L} \tau_{\rm pump}$. The three curves correspond to different detunings between the pump and the trion resonance: $\Lambda\tau_{\rm pump}=0$ (black/solid), $\Lambda\tau_{\rm pump}=1$ (red/dashed), and $\Lambda\tau_{\rm pump}=2$ (blue/dotted). Pulse area $\Theta=1$.}\label{fig:init} \end{figure} \begin{figure}[hptb] \includegraphics[width=0.45\textwidth]{Figure6} \caption{Schematic illustration of electron spin initialization. Red lines indicate Zeeman-split electron sublevels with splitting $\hbar\Omega_{\rm L}$, the blue curve shows the pump pulse spectral shape. (a) and (b) give the case of a pump pulse in resonance with the center of the Zeeman doublet. (a) $\Omega_{\rm L}\tau_{\rm pump} \ll 1$: both sublevels interact strongly with the optical pulse, resulting in efficient spin orientation along the $z$ axis. (b) $\Omega_{\rm L}\tau_{\rm pump} \gg 1$: inefficient spin orientation along the $z$ axis. (c) Case of a detuned pulse, resulting in different interaction strengths with the Zeeman-split sublevels. As a result the electron spin acquires non-zero $x$ and $z$ components.}\label{fig:scheme} \end{figure} It is worth noting that if the electron spin was initially unpolarized, $\bm S^-=0$, a circularly polarized pump pulse creates an electron spin with two non-zero components, $S_z^+$ and $S_x^+$. The appearance of $S_z^+$ is related to the transfer of photon angular momentum to the electron, with an efficiency that decreases with increasing magnetic field, see Fig.~\ref{fig:init}.
Indeed, with increasing spin splitting the formation of a coherent superposition of the Zeeman-split sublevels becomes hindered, as schematically illustrated in Figs.~\ref{fig:scheme}(a) and \ref{fig:scheme}(b). This information can also be translated into the time domain: for a fixed Zeeman splitting, the action of a longer pulse corresponds to applying a spectrally narrower pulse, equivalent to going from (a) to (b), thereby reducing the spin initialization efficiency. The microscopic origin of the appearance of the in-plane spin component $S_x^+$ is shown in Fig.~\ref{fig:scheme}(c). Indeed, if the pump pulse is detuned from the ``center-of-gravity'' of the Zeeman-split doublet, the transition efficiencies from the split levels are different. As a result, the resident carrier acquires some spin polarization parallel or antiparallel to the magnetic field. Since the trion lifetime, $\tau_{QD}$, was assumed to exceed by far the pump duration, the dynamics of the coupled electron and trion spins is described in the standard way, see Ref.~\onlinecite{yugova09}, Eq.~(27), and Ref.~\onlinecite{zhu07}, Eq.~(6). Long-lived electron spin coherence appears after the trion recombination, and a steady-state distribution of the precessing spins develops as a result of the applied pump pulses.\cite{A.Greilich07212006,yugova09} \subsection{Detection of electron spin coherence} The description of spin coherence probing is rather similar to that of the generation. We assume that the probe is linearly polarized along the $x$ axis, i.e., along the magnetic field direction, as in our experiment. In this case, the coupled Schr\"{o}dinger equations describing the dynamics of electron and trion spins separate into two independent subsystems, corresponding to the optical transitions involving electrons with spin parallel to the $x$ axis and those with spin antiparallel to the $x$ axis.
It can be shown that, similarly to Refs.~\onlinecite{yugova09,glazov2010a}, the spin ellipticity, $\mathcal E$, and Faraday rotation, $\mathcal F$, signals from an ensemble of QD spins are proportional to the real and imaginary parts of the following quantity: \begin{equation} \label{Sigma} \mathcal E(t) + \mathrm i \mathcal F(t) \propto G_{\rm probe}(\omega_{\rm probe} - \omega_0,\Omega)S_z(t), \end{equation} where $G_{\rm probe}$ is defined by Eq.~(\ref{g:func}) after replacing the pump envelope $f_{\rm pump}(t)$ by the probe envelope $f_{\rm probe}(t)$. Here $S_z(t)$ is the electron spin $z$ component at the moment of probe pulse action, i.e., at the time when the probe pulse amplitude is maximal. In deriving Eq.~\eqref{Sigma} we assumed that the pump-probe delay exceeds the trion spin lifetime in the QD, $\tau_T\tau_{QD}/(\tau_T+\tau_{QD})$, which makes it possible to neglect the contribution of the trion spin polarization to the measured signal. Otherwise, an additional contribution to Eq.~\eqref{Sigma} should be taken into account, which is proportional to the hole-in-trion spin polarization.\cite{yugova09} \begin{figure}[hptb] \includegraphics[width=0.4\textwidth]{Figure7a}\\ \includegraphics[width=0.4\textwidth]{Figure7b} \caption{Ellipticity (a) and Faraday rotation (b) signals as a function of the detuning between the quantum dot resonance and the probe optical frequency. The three curves correspond to different values of the magnetic field: $\Omega_{\rm L}\tau_{\rm probe}=0$ (black/solid), $\Omega_{\rm L}\tau_{\rm probe}=1.5$ (red/dashed), and $\Omega_{\rm L}\tau_{\rm probe}=5$ (blue/dotted). The spin $z$ component is the same for all curves. The signals are given in arbitrary units.} \label{fig:signals} \end{figure} Figure~\ref{fig:signals} shows the dependence of the ellipticity and Faraday rotation signals on the detuning between the trion resonance and the probe optical frequency.
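Curves of the type shown in Fig.~\ref{fig:signals} can be generated directly from the closed form, Eq.~\eqref{g:res}, via Eq.~\eqref{Sigma}: the ellipticity follows $\Re G_{\rm probe}$ and the Faraday rotation $\Im G_{\rm probe}$, up to a common prefactor and the instantaneous $S_z$. A minimal sketch of our own, with all scales expressed in units of $\tau_{\rm probe}$:

```python
def G_probe(lam_tau, om_tau, theta=1.0):
    """Eq. (g:res) in dimensionless form:
    lam_tau = (w_probe - w0) * tau_probe, om_tau = Omega_L * tau_probe."""
    num = theta**2 * (2 + 1j * lam_tau)
    den = (4 + om_tau**2) * (4 - 8j * lam_tau - 4 * lam_tau**2 + om_tau**2)
    return num / den

# ellipticity ~ Re G, Faraday rotation ~ Im G (prefactor and S_z dropped)
for om_tau in (0.0, 1.5, 5.0):
    g = G_probe(0.0, om_tau)
    print(f"Omega_L*tau = {om_tau}: E(resonance) ~ {g.real:.4f}, "
          f"F(resonance) ~ {g.imag:.4f}")
```

At resonance the Faraday rotation vanishes for all field strengths, while the ellipticity at resonance is progressively suppressed as $\Omega_{\rm L}\tau_{\rm probe}$ grows, in line with the behavior discussed for Fig.~\ref{fig:signals}.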
Note that these signals are calculated for a given QD; no averaging over the ensemble is done. The overall behavior is similar to the one known for probing by short pulses, $\Omega_{\rm L}\tau_{\rm probe} \ll 1$ (shown by the black curves in Fig.~\ref{fig:signals}).\cite{yugova09,carter:167403} The ellipticity is maximal for degenerate probe and trion resonance, while the Faraday rotation has a zero for $\omega_0 = \omega_{\rm probe}$. With increasing magnetic field the signal strength drops strongly, both in ellipticity and in Faraday rotation, for almost all values of the detuning. The maximum of the ellipticity transforms into a minimum, and a fine structure appears for $\Omega_{\rm L}\tau_{\rm probe}=5$, which corresponds to the probe being tuned to the two Zeeman-split sublevels. This fine structure becomes also visible in the Faraday rotation signal. In addition, the spectral shape of the signal changes. The QD ensemble is inhomogeneous, and the pump pulse excites a subensemble of dots with various trion resonance frequencies.\cite{glazov2010a} Hence, the observed Faraday rotation signal should be averaged over the spin distribution. The result of this averaging depends strongly on the possible asymmetry of the spin distribution as well as on the details of the spin coherence excitation and mode-locking. Below, we show that even the simplest model, in which the inhomogeneity is ignored and the asymmetry of the quantum dot distribution is modeled as an effective detuning, describes the experimental findings well. \section{Comparison of theory and experiment} \begin{figure}[t] \includegraphics[width=1.0\linewidth]{Fig8} \caption{(Color online) Ellipticity amplitude versus magnetic field for pump pulses with a duration of 10~ps and 30~ps, probed by 2~ps pulses. Panels (a) and (b) show the signal amplitude as a function of magnetic field and reduced field $\Omega_{\rm L}\tau_{\rm pump}$, respectively.
The solid curve in panel (b) shows the dependence of $S_z^+$, the spin component amplitude right after pump pulse arrival, calculated according to Eq.~\eqref{Sz+} and normalized to its value at $\Omega_{\rm L}\tau_{\rm pump} =0$. $\omega_{\rm pump}= \omega_{\rm probe}$.} \label{fig:exp2:1} \end{figure} With this general theoretical framework we can compare the calculated magnetic field dependencies of the spin coherence signal with the measured data for different durations of the pump and probe pulses. To distinguish between the effects of pump and probe, we focus first on the experiments in which one of the pulses was made longer, while the duration of the other was fixed at 2~ps. The corresponding experimental data have been shown in Figs.~\ref{fig:exp2} and \ref{fig:exp3}, respectively. Here we need to comment on the shape of the pump-probe traces. In Ref.~\onlinecite{glazov2010a}, the ellipticity signal was shown to drop smoothly to zero when moving from zero towards negative or positive delays. For Faraday rotation, however, the signal may rise first before a drop due to dephasing is seen, when using resonant pump and probe pulses of the same duration. The Faraday rotation behavior described in Ref.~\onlinecite{glazov2010a} is observed in Fig.~\ref{fig:exp1}, where pump and probe pulses of the same duration were taken from a single laser. The signal rise at short delays is particularly pronounced at 0.2~T in the 30~ps case. When detuning pump and probe spectrally, the behavior goes back to the conventional one, as in ellipticity, with maximum signal at zero delay. While the traces in Fig.~\ref{fig:exp2} show ellipticity anyway, the traces in Fig.~\ref{fig:exp3} give Faraday rotation signals which, however, show a smooth drop when moving away from time zero, despite the targeted pump and probe energy resonance. This makes the determination of amplitudes quite simple.
We attribute this behavior for different pump and probe durations to an effective detuning of the pulses arising from their different spectral widths: the spectral components outside of the profile of the other laser lead to the effective detuning. In addition, the accuracy of putting the pulses from the two lasers in resonance was about 0.1~meV, potentially leading to another small detuning. From these data we extract the spin coherence signal amplitudes as a measure of the efficiency of either coherence generation or readout. Focusing on the generation process, we have first done this for pump durations of 10~ps and 30~ps, with the probe duration fixed at 2~ps. For determining the amplitudes, the Faraday rotation traces were fitted by exponentially damped harmonics for delay times at which all optically excited exciton complexes have decayed. The amplitudes for different magnetic fields were then normalized to the amplitude at the magnetic field where it was maximal. We also note here that, in contrast to the theoretical modeling (see below), the maximum amplitude is reached at finite magnetic fields. We attribute this to the effects of spin precession of the hole in the optically excited trion: the hole $g$ factor $g_h$ has been measured to be small, but non-zero ($g_h \approx 0.12$) in the structure under study. In low magnetic fields electron spin coherence appears due to the hole spin relaxation only, which results in depolarization of the electron left after trion recombination. In higher fields the hole precesses during the trion lifetime, which effectively leads to its spin relaxation. As a result, the long-lived electron spin coherence increases in the range of small fields.\cite{sokolova} In addition, at low external fields nuclear effects can come into play, leading to electron depolarization. Therefore we expect a maximum of the electron spin coherence signal at a finite field, $B_{max}$.
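The amplitude extraction described above amounts to a least-squares fit of an exponentially damped harmonic to the tail of each trace. The snippet below is purely illustrative: it fits synthetic data generated with assumed parameter values, not the measured traces.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def damped_cosine(t, A, T2, omega, phi):
    # exponentially damped harmonic used to fit the trace tails
    return A * np.exp(-t / T2) * np.cos(omega * t + phi)

# synthetic trace: delay in ns, starting after the trion has decayed;
# all parameter values here are illustrative assumptions
t = np.linspace(0.5, 6.0, 400)
trace = damped_cosine(t, 1.0, 3.0, 10.0, 0.3)
trace += 0.02 * rng.standard_normal(t.size)

popt, _ = curve_fit(damped_cosine, t, trace, p0=[0.5, 2.0, 10.0, 0.0])
A_fit = abs(popt[0])
print(f"fitted amplitude: {A_fit:.3f}")
```

Restricting the fit window to delays after the exciton complexes have decayed, as done in the analysis, avoids contamination of the extracted amplitude by the trion contribution.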
Experimentally this field lies in the range from 0.1 to 0.2~T. The magnetic field dependence of the normalized ellipticity amplitudes for 10~ps and 30~ps pump pulses is shown in Fig.~\ref{fig:exp2:1}(a). The amplitude for 30~ps pulses drops smoothly to zero with increasing magnetic field up to slightly more than 1~T. For 10~ps pump pulses the amplitude drop with increasing $B$ does not occur as fast, but takes place over an extended field range up to 3~T. As pointed out, this behavior can be characterized by the reduced magnetic field product $\Omega_{\rm L} \tau_{\rm pump}$, for which we had found that the spin coherence signal basically disappears when the value of 1.5 is exceeded, as confirmed by Fig.~\ref{fig:exp2:1}(b), showing the amplitude data as a function of $\Omega_{\rm L} \tau_{\rm pump}$. In this representation the data for the two pump pulse durations basically coincide and converge to zero for $\Omega_{\rm L} \tau_{\rm pump} = 1.5$, corroborating the universality of the threshold. The presence of a threshold, outlined already in the theory subsection, can be qualitatively understood in terms of the pulse duration. As schematically shown in the inset of Fig.~\ref{fig:exp2:1}(b), the trion formation is spin selective. For $\sigma^+$ polarized light the spin-up electron contributes to the trion and gets depolarized afterwards, while the spin-down electron does not participate in trion formation. These are the electrons whose spin is accumulated due to the train of pump pulses. However, if during the pump pulse action this electron spin component has time to rotate significantly, it also participates in trion formation and becomes depolarized. Thereby the pumping efficiency is diminished. The solid curve in Fig.~\ref{fig:exp2:1}(b) shows the theoretical result for $S_z^+$ as a function of reduced magnetic field calculated according to Eq.~\eqref{Sz+}. The electron spin $z$ component at the moment of pump pulse arrival is normalized to its value at $\Omega_{\rm L}\tau_{\rm pump} =0$.
The theoretical curve follows the experimental points rather well. The disappearance of the spin coherent signal at $\Omega_{\rm L}\tau_{\rm pump} =1.5$ can be seen from Fig.~\ref{fig:init}: compared to its value at zero product, the initialized spin component is reduced by a factor of about 3 for $\Omega_{\rm L}\tau_{\rm pump} = 1$, and it has basically vanished for $\Omega_{\rm L}\tau_{\rm pump} = 2$, once the threshold value of 1.5 has been crossed. The theoretical modeling was done for Fourier-limited pulses, for which we find good agreement with the data, even though the spectral width of the pulses is somewhat larger than expected from their duration. Discrepancies between experiment and theory, also in Fig.~\ref{fig9}, might arise, however, from this difference, or from the pump excitation power ($\Theta = \pi$) being higher than assumed in the theory. A similar threshold effect was observed for the influence of the probe pulse, as can be seen from the magnetic field dependence of the Faraday rotation amplitude for different probe durations, with the pump duration fixed at 2~ps, as presented in Fig.~\ref{fig9}. The probe can reflect the initialization of the spins by the pump only as long as these show a dominant preferential orientation. This is the case for probes shorter than a quarter of a precession period. Therefore the drop of the Faraday signal amplitude in Fig.~\ref{fig9}(a) occurs at higher magnetic fields when the probe pulses are shorter. Also for this situation a kind of universal behavior is found, in which not the absolute magnetic field strength is decisive, but rather the reduced magnetic field strength $\Omega_{\rm L} \tau_{\rm probe}$. Figure~\ref{fig9}(b) gives the corresponding dependence of the FR signal amplitude. Considering also the experimental accuracy, the data for the three different probe pulse durations are close to being identical, indicating a universal behavior as a function of the reduced magnetic field.
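Within the zero-detuning approximation, Eq.~\eqref{g:res} gives a simple closed expression for the suppression of the detected amplitude relative to the short-pulse limit; the small sketch below (our own illustration) evaluates it:

```python
def suppression(x):
    """Relative detected amplitude for the product x = Omega_L * tau_probe,
    from Eq. (g:res) at zero detuning: |G| scales as [4 / (4 + x^2)]^2."""
    return (4.0 / (4.0 + x**2))**2

for x in (0.5, 1.0, 1.5):
    print(f"Omega_L*tau_probe = {x}: relative amplitude {suppression(x):.2f}")
```

At $\Omega_{\rm L}\tau_{\rm probe}=1.5$ this yields a relative amplitude of about 0.41, i.e., a drop by a factor of roughly 2.4, close to the factor of 2.5 quoted for the probe below.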
This universal behavior is in accord with the calculations shown by lines in Fig.~9(b). The different calculated curves correspond to different detunings between the probe pulse and the trion resonance: $(\omega_{\rm probe} - \omega_0)\tau_{\rm probe}=0.01$ (solid), $(\omega_{\rm probe} - \omega_0)\tau_{\rm probe}=0.5$ (dashed) and $(\omega_{\rm probe} - \omega_0)\tau_{\rm probe}=1$ (dotted), as is the case also in experiment. The difference between these curves is, however, small. The drop of the probed signal amplitude with increasing magnetic field is about the same for Faraday rotation and ellipticity, as can be seen from Fig.~7. Up to $\Omega_{\rm L} \tau_{\rm probe} = 1.5$ the signal amplitude drops by a factor of 2.5 compared with the limit of short pulses. Beyond this threshold the signal drop occurs rather abruptly. The agreement of the experimental results and theoretical calculations in the simplified model demonstrates that the basic mechanisms of the spin coherence excitation and detection are well understood. \begin{figure}[t] \includegraphics[width=1.0\linewidth]{fig9} \caption{(a) Normalized Faraday rotation amplitudes as functions of magnetic field: squares correspond to $\tau_{\rm probe}=10$~ps, circles to $\tau_{\rm probe}=30$~ps and triangles to $\tau_{\rm probe}=80$~ps, while the pump pulse duration was fixed at 2~ps. (b) Normalized Faraday rotation amplitudes versus reduced magnetic field $\Omega_{\rm L}\tau_{\rm probe}$ for the data taken from panel (a). Solid, dashed and dotted curves give calculations using Eq.~\eqref{Sigma} for different detunings between the probe pulse and the trion resonance: $(\omega_{\rm probe} - \omega_0)\tau_{\rm probe}=0.01$ (solid), $0.5$ (dashed) and $1$ (dotted).}\label{fig9} \end{figure} The impact of the finite pump and probe durations is brought together in the experiments with identical pump and probe durations. 
However, a comparison of the absolute amplitude values is basically impossible, as the corresponding experiments involve different experimental schemes with either a single laser or two lasers, with different focusing on different sample positions in the different measurement runs. Also the magnitude of the pumped and probed spin ensembles varies between the configurations, which involved pulses with either the same or significantly different spectral widths. In addition, we found in Fig.~1 a non-monotonic dependence of the Faraday signal amplitude on magnetic field. Therefore we do not attempt a quantitative comparison. Still, qualitatively the picture is quite transparent. The experiments with either the pump duration or the probe duration varied demonstrate that there is a threshold field above which spin coherence is not found, see Figs.~\ref{fig:exp2:1}(b) and \ref{fig9}(b). In both cases the threshold field is similar for the same pump or probe durations. As the effects of pump and probe enter the spin coherence signal rather ``symmetrically'', a prolongation of both pulses in the duration-degenerate configuration leads to a disappearance of the signal amplitude at basically the same field strength. From the calculations we find that the combined action of pump and probe elongation leads to a signal drop by about an order of magnitude for $\Omega_{\rm L} \tau_{\rm pump} = 1.5$ and $\tau_{\rm pump} = \tau_{\rm probe}$. This explains the disappearance of the spin coherence signal at this threshold value. \section{Conclusions} To conclude, we have demonstrated theoretically and experimentally the feasibility of initializing and detecting electron spin coherence by long optical pulses with durations up to 80~ps, comparable with the electron spin precession period. 
The efficiency of the electron spin coherence measurement is determined by the ratio of the pulse duration to the spin precession period, and with increasing magnetic field the spin signals decrease. The experimental results and theoretical calculations are in good agreement. Based on this demonstration, spin initialization by compact pulsed solid state lasers with limited output power becomes feasible in low magnetic fields, to which applications would be limited anyway. \acknowledgments This work was supported by the Deutsche Forschungsgemeinschaft, the Bundesministerium f\"ur Bildung und Forschung project ``QuaHL-Rep'', the Russian Foundation for Basic Research, the ``Dynasty'' Foundation---ICFPM and the EU FP7 project Spinoptronics.
\section{Introduction} Deep reinforcement learning has shown remarkable successes in the past few years. Applications in game playing and robotics have shown the power of this paradigm, with examples such as learning to play Go from scratch or flying an acrobatic model helicopter~\citep{mnih2015human,silver2016mastering,abbeel2007application}. Reinforcement learning uses an environment from which training data is sampled; in contrast to supervised learning it does not need a large database of pre-labeled training data. This opens up many applications of machine learning for which no such database exists. Unfortunately, however, for most interesting applications many samples from the environment are necessary, and the computational cost of learning is prohibitive, a problem that is common in deep learning~\citep{lecun2015deep}. Achieving faster learning is a major goal of much current research. Many promising approaches are being explored, among them metalearning~\citep{hospedales2020meta,huisman2020deep}, transfer learning~\citep{pan2010survey}, curriculum learning~\citep{narvekar2020curriculum} and zero-shot learning~\citep{xian2017zero}. The current paper focuses on model-based methods in deep reinforcement learning. Model-based methods can reduce sample complexity. In contrast to model-free methods that sample at will from the environment, model-based methods build up a dynamics model of the environment as they sample. By using this dynamics model for policy updates, the number of necessary samples can be reduced substantially~\citep{sutton1991dyna}. Especially in robotics sample efficiency is important (in games environment samples can often be generated more cheaply). The success of the model-based approach hinges critically on the quality of the predictions of the dynamics model, and here the prevalence of deep learning presents a challenge~\citep{talvitie2015agnostic}. 
Modeling the dynamics of high dimensional problems usually requires high capacity networks that, unfortunately, require many samples for training to achieve high generalization while preventing overfitting, potentially undoing the sample efficiency gains of model-based methods. Thus, the problem statement of the methods in this survey is {\em how to train a high-capacity dynamics model with high predictive power and low sample complexity}. In addition to promising better sample efficiency than model-free methods, there is another reason for the interest in model-based methods for deep learning. Many problems in reinforcement learning are sequential decision problems, and learning the transition function is a natural way of capturing the core of long and complex decision sequences. This is what is called a forward model in game AI~\citep{risi2020chess,torrado2018deep}. When a good transition function of the domain is present, then new, unseen, problems can be solved efficiently. Hence, model-based reinforcement learning may contribute to efficient transfer learning. The contribution of this survey is to give an in-depth overview of recent methods for model-based deep reinforcement learning. We describe methods that use (1) explicit planning on given transitions, (2) explicit planning on a learned transition model, and (3) end-to-end learning of both planning and transitions. For each approach future directions are listed (specifically: latent models, uncertainty modeling, curriculum learning and multi-agent benchmarks). Many research papers have been published recently, and the field of model-based deep reinforcement learning is advancing rapidly. The papers in this survey are selected on recency and impact on the field, for different applications, highlighting relationships between papers. Since our focus is on recent work, some of the references are to preprints in arXiv (of reputable groups). 
Excellent works with necessary background information exist for reinforcement learning~\citep{sutton2018introduction}, deep learning~\citep{goodfellow2016deep}, machine learning~\citep{bishop2006pattern}, and artificial intelligence~\citep{russell2016artificial}. As we mentioned, the main purpose of the current survey is to focus on deep learning methods, with high-capacity models. Previous surveys provide an overview of the uses of classic (non-deep) model-based methods~\citep{deisenroth2013survey,kober2013reinforcement,kaelbling1996reinforcement}. Other relevant surveys into model-based reinforcement learning are~\citep{justesen2019deep,polydoros2017survey,hui2018model,wang2019benchmarking,ccalicsir2019model,moerland2020model}. The remainder of this survey is structured as follows. Section~\ref{sec:rl} provides necessary background and a familiar formalism of reinforcement learning. Section~\ref{sec:mbrl} then surveys recent papers in the field of model-based deep reinforcement learning. Section~\ref{sec:bench} introduces the main benchmarks of the field. Section~\ref{sec:dis} provides a discussion reflecting on the different approaches and provides open problems and future work. Section~\ref{sec:con} concludes the survey. \begin{figure}[h] \begin{center} \input rl \end{center} \caption{Reinforcement Learning: Agent Acting on Environment, that provides new State and Reward to the Agent}\label{fig:agent} \end{figure} \section{Background}\label{sec:rl} Reinforcement learning does not assume the presence of a database, as supervised learning does. Instead, it derives the ground truth from an internal model or from an external environment that can be queried by the learning agent, see Figure~\ref{fig:agent}. The environment provides a new state $s'$ and its reward $r'$ (label) for every action $a$ that the agent tries in a certain state $s$~\citep{sutton2018introduction}. 
In this way, as many action-reward pairs can be generated as needed, without a large hand-labeled database. Also, we can learn behavior beyond what a supervisor prepared for us to learn. Like so much of artificial intelligence, reinforcement learning draws inspiration from principles of human and animal learning~\citep{hamrick2019analogues,kahneman2011thinking}. In psychology, learning is studied as behavioral adaptation, as a result of reinforcing reward and punishment. Publications in artificial intelligence sometimes explicitly reference analogies in how learning in the two fields is described~\citep{anthony2017thinking,duan2016rl,weng2020meta}. Supervised learning frequently studies regression and classification problems. In reinforcement learning most problems are decision and control problems. Often problems are sequential decision problems, in which a goal is reached after a sequence of decisions is taken (behavior). In sequential decision making, the dynamics of the world are taken into consideration. Sequential decision making is a step-by-step approach in which earlier decisions influence later decisions. Before we continue, let us formalize key concepts in reinforcement learning. \subsection{Formalizing Reinforcement Learning}\label{sec:mdp} Reinforcement learning problems are often modeled formally as a Markov Decision Process (MDP). First we introduce the basics: state, action, transition and reward. Then we introduce policy and value. Finally, we define model-based and model-free solution approaches. \begin{figure}[t] \begin{center} \input rltree \caption{Backup Diagram~\citep{sutton2018introduction}. Maximizing the reward for state $s$ is done by following the {\em transition} function to find the next state $s'$. 
Note that the policy $\pi(s,a)$ tells the first half of this story, going from $s \rightarrow a$; the transition function $T_a(s,s^\prime)$ completes the story, going from $s \rightarrow s^\prime$ (via $a$).}\label{fig:rltree} \end{center} \end{figure} A Markov Decision Process is a 4-tuple $(S, A, T_a, R_a)$ where $S$ is a finite set of states, $A$ is a finite set of actions; $A_s \subseteq A$ is the set of actions available from state $s$. Furthermore, $T_a$ is the transition function: $T_a(s,s')$ is the probability that action $a$ in state $s$ at time $t$ will lead to state $s^\prime$ at time $t+1$. Finally, $R_a(s,s^\prime)$ is the immediate reward received after transitioning from state $s$ to state $s^\prime$ due to action $a$. The goal of an MDP is to find the best decision, or action, in all states $s \in S$. The goal of reinforcement learning is to find the optimal policy $a=\pi^\star(s)$, which is the function that gives the best action $a$ in all states $s\in S$. The policy constitutes the answer to a sequential decision problem: a step-by-step prescription of which action must be taken in which state, in order to maximize reward for any given state. This policy can be found directly---model-free---or with the help of a transition model---model-based. Figure~\ref{fig:rltree} shows a diagram of the transitions. More formally, the goal of an MDP is to find a policy $\pi(s)$ that chooses an action in state $s$ that will maximize the reward. This value $V$ is the expected sum of future rewards $V^\pi(s)=E(\sum_{t=0}^\infty \gamma^t R_{\pi(s_t)}(s_t, s_{t+1}))$, discounted with parameter $\gamma$ over $t$ time periods, with $s = s_0$. The function $V^\pi(s)$ is called the value function of the state. In deep learning the policy $\pi$ is determined by the parameters $\theta$ (or weights) of a neural network, and the parameterized policy is written as $\pi_\theta$. 
There are algorithms that compute the policy $\pi$ directly, and there are algorithms that first compute the function $V^\pi(s)$. For stochastic problems, direct policy methods often work best; for deterministic problems, value-based methods are most often used~\citep{kaelbling1996reinforcement}. (A third, quite popular, approach combines the best of value and policy methods: actor-critic~\citep{sutton2018introduction,konda2000actor,mnih2016asynchronous}.) In classical, table-based, reinforcement learning there is a close relation between policy and value, since the best action of a state leads to both the best policy and the best value, and finding the other can usually be done with a simple lookup. When the value and policy function are approximated, for example with a neural network, this relation becomes weaker, and many advanced policy and value algorithms have been devised for deep reinforcement learning. Value function algorithms calculate the state-action value $Q^\pi(s,a)$. This $Q$-function gives the expected sum of discounted rewards when following action $a$ in state $s$, and afterwards policy $\pi$. The value $V(s)$ is the maximum of the $Q(s,a)$-values of that state. The optimal value function is denoted as $V^\star(s)$. The optimal policy can be found by recursively choosing the argmax action with $Q(s,a)=V^\star(s)$ in each state. To find the policy by planning, models for $T$ and $R$ must be known. When they are not known, an environment is assumed to be present that the agent can query to get the necessary reinforcing information, see Figure~\ref{fig:agent}, after~\cite{sutton2018introduction}. The samples can be used to build a model of $T$ and $R$ (model-based reinforcement learning) or they can be used to find the policy without first building the model (direct or model-free reinforcement learning). 
When sampling, the environment is in a known state $s$, and the agent chooses an action $a$ which it transmits to the environment, that responds with a new state $s^\prime$ and the corresponding reward value $r'=R_a(s, s^\prime)$. The literature provides many solution algorithms. We now very briefly discuss classical planning and model-free approaches, before we continue to survey model-based algorithms in more depth in the next section. \subsection{Planning}\label{sec:planning} Planning algorithms use the transition model to find the optimal policy, by selecting actions in states, looking ahead, and backing up reward values, see Figure~\ref{fig:rltree} and Figure~\ref{fig:plan}. \begin{figure}[t] \begin{center} \input plan \caption{Planning}\label{fig:plan} \end{center} \end{figure} \begin{algorithm} \caption{Value Iteration}\label{lst:vi} \begin{algorithmic} \State Initialize $V(s)$ to arbitrary values \Repeat \ForAll{$s$} \ForAll{$a$} \State $Q[s,a] = \sum_{s'} T_a(s,s')(R_a(s,s') + \gamma V(s'))$ \EndFor \State $V[s] = \max_a(Q[s,a])$ \EndFor \Until V converges \State return V \end{algorithmic} \end{algorithm} In planning algorithms, the agent has access to an explicit transition and reward model. In the deterministic case the transition model provides the next state for each of the possible actions in the states, it is a function $s^\prime = T_a(s)$. In the stochastic case, it provides the probability distribution $T_a(s, s^\prime)$. The reward model provides the immediate reward for transitioning from state $s$ to state $s^\prime$ after taking action $a$. Figure~\ref{fig:rltree} provides a backup diagram for the transition and reward function. The transition function moves downward in the diagram from state $s$ to $s'$, and the reward value goes upward in the diagram, backing up the value from the child state to the parent state. 
The transition function follows policy $\pi$ with action $a$, after which state $s^\prime$ is chosen with probability $p$, yielding reward $r'$. The policy function $\pi(s,a)$ concerns the top layer of the diagram, from $s$ to $a$. The transition function $T_a(s,s^\prime)$ covers both layers, from $s$ to $s^\prime$. In some domains, such as chess, there is a single deterministic state $s^\prime$ for each action $a$. Here each move leads to a single board position, simplifying the backup diagram. Together, the transition and reward functions implicitly define a space of states that can be searched for the optimal policy $\pi^\star$ and value $V^\star$. The most basic form of planning is Bellman's dynamic programming~\citep{bellman1957dynamic}, a recursive traversal of the state and action space. Value iteration is a well-known, very basic, dynamic programming method. The pseudo-code for value iteration is shown in Algorithm~\ref{lst:vi}~\citep{alpaydin2020introduction}. It traverses all actions in all states, computing the value of the entire state space. Many planning algorithms have been devised to efficiently generate and traverse state spaces, such as (depth-limited) A*, alpha-beta and Monte Carlo Tree Search (MCTS)~\citep{hart1968formal,pearl1984heuristics,korf1985depth,plaat1996best,browne2012survey,moerland2018a0c,moerland2020second}. Planning algorithms originated from exact, table-based, algorithms~\citep{sutton2018introduction} that fit in the symbolic AI tradition. For planning it is relevant to know how much of the state space must be traversed to find the optimal policy. When state spaces are too large to search fully, deep function approximation algorithms can be used to approximate the optimal policy and value~\citep{sutton2018introduction,plaat2020learning}. Planning is sample-efficient in the sense that, when the agent has a model, a policy can be found without interaction with the environment. 
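The value-iteration loop of Algorithm~\ref{lst:vi} is compact enough to sketch directly in code. The toy two-state MDP below is hypothetical, chosen only to exercise the update $Q[s,a] = \sum_{s'} T_a(s,s')(R_a(s,s') + \gamma V(s'))$; this is an illustrative sketch, not an implementation from any of the surveyed papers.

```python
import numpy as np

def value_iteration(T, R, gamma=0.9, tol=1e-8):
    """Tabular value iteration.

    T[a, s, s'] : transition probabilities T_a(s, s')
    R[a, s, s'] : immediate rewards R_a(s, s')
    Returns the value function V and the greedy policy pi.
    """
    n_actions, n_states, _ = T.shape
    V = np.zeros(n_states)
    while True:
        # Q[s, a] = sum_{s'} T_a(s, s') * (R_a(s, s') + gamma * V(s'))
        Q = np.einsum("ast,ast->sa", T, R + gamma * V[None, None, :])
        V_new = Q.max(axis=1)          # V[s] = max_a Q[s, a]
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

# Hypothetical two-state, two-action MDP for illustration.
T = np.array([[[0.9, 0.1], [0.0, 1.0]],   # action 0
              [[0.1, 0.9], [0.5, 0.5]]])  # action 1
R = np.array([[[0.0, 1.0], [0.0, 0.0]],
              [[0.0, 1.0], [1.0, 0.0]]])
V, pi = value_iteration(T, R)
```

Since $\gamma < 1$, the update is a contraction and the loop terminates; the returned `pi` is the argmax policy mentioned in Section~\ref{sec:mdp}.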
Sampling may be costly, and sample efficiency is an important concept in reinforcement learning. A sampling action taken in an environment is irreversible, since state changes of the environment can not be undone by the agent. In contrast, a planning action taken in a transition model is reversible~\citep{moerland2020framework}. A planning agent can backtrack, a sampling agent cannot. Sampling finds local optima easily. For finding global optima the ability to backtrack out of a local optimum is useful, which is an advantage for model-based planning methods. Note, however, that there are two ways of obtaining dynamics models. In some problems, the transition and reward models are given by the problem itself, such as in games where the move rules are known, as in Go and chess. Here the dynamics models follow the problem perfectly, and many steps can be planned accurately into the future, outperforming model-free sampling. In other problems the dynamics model must be learned from sampling the environment. Here the model will not be perfect, and will contain errors and biases. Planning far ahead will only work when the agent has a $T$ and $R$ model of sufficient quality. With learned models, it may be more difficult for model-based planning to achieve the performance of model-free sampling. \begin{figure}[t] \begin{center} \input free \caption{Model-Free Learning}\label{fig:free} \end{center} \end{figure} \subsection{Model-Free } When the transition or reward model is not available to the agent, the policy and value function have to be learned through querying the environment. Learning the policy or value function without a model, through sampling the environment, is called model-free learning, see Figure~\ref{fig:free}. Recall that the policy is a mapping from states to best actions. Each time a new reward is returned by the environment, the policy can be improved: the best action for the state is updated to reflect the new information. 
Algorithm~\ref{lst:free} shows the simple high-level steps of model-free reinforcement learning (later on the algorithms become more elaborate). \begin{algorithm} \caption{Model-Free Learning}\label{lst:free} \begin{algorithmic} \Repeat \State Sample env $E$ to generate data $D=(s, a, r', s')$ \State Use $D$ to update policy $\pi(s, a)$ \Until $\pi$ converges \end{algorithmic} \end{algorithm} Model-free reinforcement learning is the most basic form of reinforcement learning. It has been successfully applied to a range of challenging problems~\citep{deisenroth2013survey,kober2013reinforcement}. In model-free reinforcement learning a policy is learned from the ground up through interactions (samples) with the environment. The main goal of model-free learning is to achieve good generalization: to achieve high accuracy on test problems not seen during training. A secondary goal is to do so with good sample efficiency: to need as few environment samples as possible for good generalization. Model-free learning is essentially blind, and learning the policy and value takes many samples. A well-known model-free reinforcement learning algorithm is Q-learning~\citep{watkins1989learning}. Algorithms such as Q-learning can be used in a classical table-based setting. Deep neural networks have also been used with success in model-free learning, in domains in which samples can be generated cheaply and quickly, such as in Atari video games~\citep{mnih2015human}. Deep model-free algorithms such as Deep Q-Network (DQN)~\citep{mnih2013playing} and Proximal Policy Optimization (PPO)~\citep{schulman2017proximal} have become quite popular. PPO is an algorithm that computes the policy directly, DQN finds the value function first (Section~\ref{sec:mdp}). Although risky, an advantage of flying blind is the absence of bias. Model-free reinforcement learning can find global optima without being distracted by a biased model (it has no model). 
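A minimal tabular instantiation of the loop in Algorithm~\ref{lst:free} is Q-learning~\citep{watkins1989learning}. The tiny chain environment below is hypothetical and merely stands in for the env $E$; the sketch is illustrative, not an implementation from the surveyed work.

```python
import random

random.seed(0)  # make this sketch reproducible

def q_learning(step, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.9, eps=0.1, max_steps=50):
    """Tabular Q-learning against an environment step(s, a) -> (s', r', done)."""
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done, t = 0, False, 0
        while not done and t < max_steps:
            t += 1
            # epsilon-greedy action selection
            if random.random() < eps:
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda b: Q[s][b])
            s2, r, done = step(s, a)
            # update toward the bootstrapped target r' + gamma * max_a' Q(s', a')
            target = r + (0.0 if done else gamma * max(Q[s2]))
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q

# Hypothetical 3-state chain standing in for env E: action 1 moves right
# (reward 1.0 on reaching the last state), action 0 resets to the start.
def step(s, a):
    if a == 1:
        return (s + 1, 1.0, True) if s + 1 == 2 else (s + 1, 0.0, False)
    return 0, 0.0, False

Q = q_learning(step, n_states=3, n_actions=2)
```

Note that each environment sample is used for exactly one update and then discarded, which is precisely the inefficiency that model-based methods address.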
Learned models in model-based reinforcement learning may introduce bias, and model-based methods may not always find results as good as model-free methods can (although they do find the biased results with fewer samples). Let us look at the cost of our methods. Interaction with the environment can be costly. Especially when the environment involves the real world, such as in real robot interaction, sampling should be minimized, for reasons of cost, and to prevent wear of the robot arm. In virtual environments, on the other hand, model-free approaches have been quite successful, as we have noted in Atari and other game play~\citep{mnih2015human}. A good overview of model-free reinforcement learning can be found in~\citep{ccalicsir2019model,sutton2018introduction,kaelbling1996reinforcement}. \subsection{Model-Based } It is now time to look at model-based reinforcement learning, a method that learns the policy and value in a different way than by sampling the environment directly. Recall that the environment samples return $(s',r')$ pairs when the agent selects action $a$ in state $s$. Therefore all information is present to learn the transition model $T_a(s,s')$ and the reward model $R_a(s,s')$, for example by supervised learning. When no transition model is given by the problem, the model can be learned by sampling the environment, and be used with planning to update the policy and value as often as we like. This alternative approach of finding the policy and the value is called model-based learning. If the model is given, then no environment samples are needed and model-based methods are more sample efficient. But if the model is not given, why would we want to go this convoluted model-and-planning route, if the samples can teach us the optimal policy and value directly? The reason is that the convoluted route may be more sample efficient. 
When the complexity of learning the transition/reward model is smaller than the complexity of learning the policy model directly, and planning is fast, the model-based route can be more efficient. In model-free learning a sample is used once to optimize the policy, and then thrown away; in model-based learning the sample is used to learn a transition model, which can then be used many times in planning to optimize the policy. The sample is used more efficiently. The recent successes in deep learning caused much interest and progress in deep model-free learning. Many of the deep function approximation methods that have been so successful in supervised learning~\citep{goodfellow2016deep,lecun2015deep} can also be used in model-free reinforcement learning for approximating the policy and value function. However, there are reasons for interest in model-based methods as well. Many real world problems are long and complex sequential decision making problems, and we are now seeing efforts to make progress in model-based methods. Furthermore, the interest in lifelong learning stimulates interest in model-based learning~\citep{silver2013lifelong}. Model-based reinforcement learning is close to human and animal learning, in that all new knowledge is interpreted in the context of existing knowledge. The dynamics model is used to process and interpret new samples, in contrast to model-free learning, where all samples, old and new, are treated alike, and are not interpreted using the knowledge that has been accumulated so far in a model. After these introductory words, we are now ready to take a deeper look into recent concrete deep model-based reinforcement learning methods. \section{Survey of Model-Based Deep Reinforcement Learning}\label{sec:mbrl} The success of model-based reinforcement learning depends on the quality of the dynamics model. 
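The sample reuse described above can be made concrete with a sketch in the spirit of Dyna~\citep{sutton1991dyna}: each real sample both updates the policy directly and trains a (here deterministic, tabular) model of $T$ and $R$, which is then replayed for additional "imagined" updates. The chain environment below is hypothetical, and the sketch is only illustrative.

```python
import random

random.seed(1)  # make this sketch reproducible

def update(Q, s, a, s2, r, done, alpha, gamma):
    """One Q-update toward the bootstrapped target."""
    target = r + (0.0 if done else gamma * max(Q[s2]))
    Q[s][a] += alpha * (target - Q[s][a])

def dyna_q(step, n_states, n_actions, episodes=200, planning_steps=20,
           alpha=0.1, gamma=0.9, eps=0.1, max_steps=50):
    """Dyna-style learning: every real sample trains a tabular model of T and R,
    which is then replayed ('imagined') for extra policy updates."""
    Q = [[0.0] * n_actions for _ in range(n_states)]
    model = {}  # learned model: (s, a) -> (s', r', done)
    for _ in range(episodes):
        s, done, t = 0, False, 0
        while not done and t < max_steps:
            t += 1
            a = (random.randrange(n_actions) if random.random() < eps
                 else max(range(n_actions), key=lambda b: Q[s][b]))
            s2, r, done = step(s, a)
            model[(s, a)] = (s2, r, done)               # model learning
            update(Q, s, a, s2, r, done, alpha, gamma)  # direct RL (model-free)
            for _ in range(planning_steps):             # planning with the model
                ps, pa = random.choice(list(model))
                ps2, pr, pdone = model[(ps, pa)]
                update(Q, ps, pa, ps2, pr, pdone, alpha, gamma)
            s = s2
    return Q

# Hypothetical 3-state chain environment: action 1 moves right
# (reward 1.0 on reaching the last state), action 0 resets to the start.
def step(s, a):
    if a == 1:
        return (s + 1, 1.0, True) if s + 1 == 2 else (s + 1, 0.0, False)
    return 0, 0.0, False

Q = dyna_q(step, n_states=3, n_actions=2)
```

Each environment sample here drives `planning_steps + 1` updates instead of one, which is the sense in which the sample is used more efficiently.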
The model is typically used by planning algorithms for multiple sequential predictions, and errors in predictions accumulate quickly. We group the methods into three main approaches. First, the transitions are given and used by explicit planning; second, the transitions are learned and used by explicit planning; and third, both transitions and planning are learned end-to-end: \begin{enumerate} \item {\bf Explicit Planning on Given Transitions}\\ First, we discuss methods for problems that give us clear transition rules. In this case transition models are perfect, and classical, explicit, planning methods are used to optimize the value and policy functions of large state spaces. Recently, large and complex problems have been solved in two-agent games using self-learning methods that give rise to curriculum learning. Curriculum learning has also been used in single-agent problems. \item {\bf Explicit Planning on Learned Transitions}\\ Second, we discuss methods for problems where no clear rules exist, and the transition model must be learned from sampling the environment. (The transitions are again used with conventional planning methods.) The environment samples allow learning of high-capacity models by backpropagation. It is important that the model has as few errors as possible. Uncertainty modeling and limited lookahead can reduce the impact of prediction errors. \item {\bf End-to-end Learning of Planning and Transitions}\\ Third, we discuss the situation where both the transition model and the planning algorithm are learned from the samples, end-to-end. A neural network can be used in such a way that it performs the actual steps of certain planners, in addition to learning the transitions from the samples, as before. The model-based algorithm is learned fully end-to-end. A drawback of this approach is the tight connection between network architecture and problem type, limiting its applicability. This drawback can be resolved with the use of latent models, see below. 
\end{enumerate} In addition to the three main approaches, we now discuss two orthogonal approaches. These can be used to improve performance of the three main approaches. They are the hybrid imagination idea from Sutton's Dyna~\citep{sutton1991dyna}, and abstract, or latent, models. \begin{itemize} \item {\bf Hybrid Model-Free/Model-Based Imagination}\\ We first mention a sub-approach where environment samples are not only used to train the transition model, but also to train the policy function directly, just as in model-free learning. This hybrid approach thus combines model-based and model-free learning. It is also called \emph{imagination} because the looking ahead with the dynamics model resembles simulating or imagining environment samples outside the real environment. In this approach the imagined, or planned, ``samples'' augment the real (environment) samples. This augmentation reduces sample complexity of model-free methods. \item {\bf Latent Models}\\ Next, we discuss a sub-approach where the learned dynamics model is split into several lower-capacity, specialized, latent models. These latent models are then used with planning or imagination to find the policy. Latent models have been used with and without end-to-end model training and with and without imagination. Latent models thus build on and improve the previous approaches. 
\end{itemize} \begin{table*}[h] \begin{center}\footnotesize \begin{tabular}{llllccl} {\em Approach}&{\em Name}&{\em Learning}&{\em Planning}&{\em Hybrid}&{\em Latent}&{\em Application}\\ & & & & {\em Imagination}&{\em Models}& \\ \hline\hline Explicit Planning&TD-Gammon \citep{tesauro1995td}&Fully connected net&Alpha-beta&-&-&Backgammon\\ Given Transitions &Expert Iteration \citep{anthony2017thinking}&Policy/Value CNN&MCTS&-&-&Hex\\ (Sect.~\ref{sec:selfplay}) &Alpha(Go) Zero \citep{silver2017mastering}&Policy/Value ResNet&MCTS&-&-&Go/chess/shogi\\ &Single Agent \citep{feng2020solving}&Resnet&MCTS&-&-&Sokoban\\ \hline Explicit Planning&PILCO~\citep{deisenroth2011pilco}&Gaussian Processes& Gradient based &-&-&Pendulum\\ Learned Transitions& iLQG~\citep{tassa2012synthesis}& Quadratic Non-linear &MPC&-&-& Humanoid\\ (Sect.~\ref{sec:foresight}) & GPS~\citep{levine2014learning}& iLQG& Trajectory&-&-&Swimmer\\ & SVG~\citep{heess2015learning}&Value Gradients& Trajectory &-&-& Swimmer\\ & PETS \citep{chua2018deep}& Uncertainty Ensemble&MPC&-&-& Cheetah\\ & Visual Foresight \citep{finn2017deep}& Video Prediction&MPC&-&-&Manipulation\\ & Local Model~\citep{gu2016continuous}& Quadratic Non-linear& Short rollouts &+&-& Cheetah\\ &MVE \citep{feinberg2018model} & Samples& Short rollouts&+&-& Cheetah\\ &Meta Policy \citep{clavera2018model}&Meta-ensembles& Short rollouts &+&-& Cheetah\\ &GATS \citep{azizzadenesheli2018surprising}& Pix2pix&MCTS &+&-& Cheetah\\ &Policy Optim \citep{janner2019trust}&Ensemble& Short rollouts &+&-&Cheetah\\ &Video-prediction \citep{oh2015action}&CNN/LSTM&Action&+&+&Atari\\ &VPN \citep{oh2017value}&CNN encoder&$d$-step &+&+&Atari\\ &SimPLe \citep{kaiser2019model}&VAE, LSTM&MPC&+&+&Atari\\ &PlaNet \citep{hafner2018learning}&RSSM (VAE/RNN) & CEM&-&+&Cheetah\\ &Dreamer \citep{hafner2019dream}&RSSM+CNN& Imagine&-&+&Hopper\\ &Plan2Explore \citep{sekar2020planning}&RSSM& Planning&-&+&Hopper\\ \hline End-to-End Learning&VIN \citep{tamar2016value}&CNN&Rollout in 
network&+&-&Mazes\\ Planning \& Transitions &VProp \citep{nardelli2018value}& CNN&Hierarch Rollouts &+&-&Maze, nav\\ (Sect.~\ref{sec:e2e}) &TreeQN \citep{farquhar2018treeqn}& Tree-shape Net& Plan-functions&+&+&Box-push\\ &Planning \citep{guez2019investigation}&CNN+LSTM& Rollouts in network&+&-&Sokoban\\ &I2A \citep{racaniere2017imagination}&CNN/LSTM encoder&Meta-controller&+&+&Sokoban\\ &Predictron \citep{silver2017predictron}& $k,\gamma,\lambda$-CNN-predictr& $k$-rollout&+&+&Mazes\\ &World Model \citep{ha2018world}&VAE & CMA-ES&+&+&Car Racing\\ &MuZero \citep{schrittwieser2019mastering}&Latent&MCTS&-&+&Atari/Go\\ \hline\hline\\ \end{tabular} \caption{Overview of Deep Model-Based Reinforcement Learning Methods}\label{tab:overview} \end{center} \end{table*} The different approaches can and have been used alone and in combination, as we will see shortly. Table~\ref{tab:overview} provides an overview of all approaches and methods that we will discuss in this survey. The methods are grouped into the three main categories that were introduced above. The use of the two orthogonal approaches by the methods (imagination and latent models) is indicated in Table~\ref{tab:overview} in two separate columns. The final column provides an indication of the application that the method is used on (such as Swimmer, Chess, and Cheetah). In the next section, Sect.~\ref{sec:bench}, these applications will be explained in more depth. All methods in the table will be explained in detail in the remainder of this section (for ease of reference, we will repeat the methods of each subsection in their own table). The sections will again mention some of the applications on which they were tested. Please refer to the section on Benchmarks. Model-based methods work well for low-dimensional tasks where the transition and reward dynamics are relatively simple~\citep{sutton2018introduction}. 
While efficient methods such as Gaussian processes can learn these models quickly---with few samples---they struggle to represent complex and discontinuous systems~\citep{wang2019benchmarking}. Most current model-free methods use deep neural networks to deal with problems that have such complex, high-dimensional, and discontinuous characteristics, leading to a high sample complexity. The main challenge that the model-based reinforcement learning algorithms in this survey address is therefore as follows. For high-dimensional tasks the curse of dimensionality causes data to be sparse and variance to be high. Deep methods tend to overfit on small datasets, while model-free methods require large datasets and thus have poor sample efficiency. Model-based methods that use poor models make poor planning predictions far into the future~\citep{talvitie2015agnostic}. The challenge is to learn deep, high-dimensional transition functions from limited data that can account for model uncertainty, and to plan over these models to achieve policy and value functions that perform as well as or better than model-free methods. We will now discuss the algorithms: (1) methods that use explicit planning on given transitions, (2) methods that use explicit planning on a learned transition model, and (3) methods that use end-to-end learning of planning and transitions. We will encounter the first occurrence of the hybrid imagination and latent models approaches in the second category, on explicit planning/learned transitions. \subsection{Explicit Planning on Given Transitions}\label{sec:selfplay} The first approach in model-based learning applies when the transition and reward model is given explicitly by the rules of the problem. This is the case, for example, in games such as Go and chess. Table~\ref{tab:self} summarizes the approaches of this subsection. Note the addition of the reinforcement learning method in an extra column. With this approach, high-performing results have recently been achieved on large and complex domains.
These results have been achieved by combining classical, explicit, heuristic search planning algorithms such as Alpha-beta and MCTS~\citep{knuth1975analysis,browne2012survey,plaat2020learning}, and deep learning with self-play, achieving tabula rasa curriculum learning. Curriculum learning is based on the observation that a difficult problem is learned more quickly by first learning a sequence of easy, but related problems---just as we teach school children easy concepts (such as addition) first before we teach them harder concepts (such as multiplication, or logarithms). In self-play the agent plays against the environment, which is also the same agent with the same network, see Figure~\ref{fig:self}. The states and actions in the games are then used by a deep learning system to improve the policy and value functions. These functions are used as the selection and evaluation functions in MCTS, and thus improving them improves the quality of play of MCTS. This has the effect that as the agent is getting smarter, so is the environment. A virtuous circle of a mutually increasing level of play is the result, a natural form of curriculum learning~\citep{bengio2009curriculum}. A sequence of ever-improving tournaments is played, in which a game can be learned to play from scratch, from zero-knowledge to world champion level~\citep{silver2017mastering}. \begin{figure}[t] \begin{center} \input self \caption{Explicit Planning/Given Transitions}\label{fig:self} \end{center} \end{figure} The concept of self-play was invented in multiple places and has a long history in two-agent game playing AI. Three well-known examples are Samuel's checkers player~\citep{samuel1959some}, Tesauro's Backgammon player~\citep{tesauro1995td,tesauro2002programming} and DeepMind's Alpha(Go) Zero~\citep{silver2017mastering,silver2018general}. 
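The self-play loop described above can be illustrated with a toy example. The sketch below plays 5-stone Nim against itself: a tabular value function stands in for the neural network, and greedy negamax selection stands in for MCTS, so all names and hyperparameters are illustrative rather than AlphaZero's actual design.

```python
import random

def legal_moves(stones):
    return [m for m in (1, 2) if m <= stones]

def play_selfplay_game(values, eps=0.2):
    """One self-play game of 5-stone Nim (take 1 or 2; taking the last
    stone wins). Both sides share the same value table, so the opponent
    gets stronger as the agent does."""
    stones, player, history = 5, +1, []
    while stones > 0:
        moves = legal_moves(stones)
        if random.random() < eps:
            m = random.choice(moves)          # exploration
        else:
            # Greedy negamax: hand the opponent the worst position.
            m = min(moves, key=lambda m: values.get(stones - m, 0.0))
        history.append((stones, player))
        stones -= m
        player = -player
    return history, -player                   # -player took the last stone

def selfplay_train(n_games=2000, lr=0.1, seed=0):
    """Self-play curriculum: value estimates and level of play improve together."""
    random.seed(seed)
    values = {}   # state value from the perspective of the player to move
    for _ in range(n_games):
        history, winner = play_selfplay_game(values)
        for stones, player in history:
            target = 1.0 if player == winner else -1.0
            values[stones] = values.get(stones, 0.0) + lr * (target - values.get(stones, 0.0))
    return values
```

Because both sides read from the same improving value table, the opponent's play strengthens together with the agent's, which is the essence of the self-play curriculum.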
\begin{table*}[ht] \begin{center} \begin{tabular}{lllll} {\em Approach}&{\em Learning}&{\em Planning}& {\em Reinforcement Learning}&{\em Application}\\ \hline\hline TD-Gammon \citep{tesauro1995td}&Fully connected net&Alpha-beta& Temporal Difference&Backgammon\\ Expert Iteration \citep{anthony2017thinking}&Pol/Val CNN&MCTS &Curriculum&Hex\\ Alpha(Go) Zero \citep{silver2017mastering}&Pol/Val ResNet&MCTS&Curriculum&Go/chess/shogi\\ Single Agent \citep{feng2020solving}&ResNet&MCTS&Curriculum&Sokoban\\ \\ \end{tabular} \caption{Overview of Explicit Planning/Given Transitions Methods}\label{tab:self} \end{center} \end{table*} Let us discuss some of the self-play approaches. {\bf TD-Gammon}~\citep{tesauro1995td} is a Backgammon-playing program that uses a small neural network with a single fully connected hidden layer of just 80 hidden units and a small (two-level deep) Alpha-beta search~\citep{knuth1975analysis}. It teaches itself to play Backgammon from scratch using temporal-difference learning; the network learns the value function. TD-Gammon was the first Backgammon program to play at World-Champion level, and the first program to successfully use a self-learning curriculum approach in game playing since Samuel's checkers program~\citep{samuel1959some}. An approach similar to the AlphaGo and AlphaZero programs was presented as {\bf Expert Iteration}~\citep{anthony2017thinking}. The problem was again how to learn to play a complex game from scratch. Expert Iteration (ExIt) combines search-based planning (the expert) with deep learning (by iteration). The expert finds improvements to the current policy. ExIt uses a single multi-task neural network for the policy and the value function. The planner uses the neural network policy and value estimates to improve the quality of its plans, resulting in a cycle of mutual improvement. The planner in ExIt is MCTS, in a version with rollouts.
ExIt was used with the boardgame Hex~\citep{hayward2019hex}, and compared favorably against MoHex, a strong MCTS-only program~\citep{arneson2010monte}. A further development of ExIt is Policy Gradient Search, which uses planning without an explicit search tree~\citep{anthony2019policy}. {\bf AlphaZero}, and its predecessor AlphaGo Zero, are self-play curriculum learning programs that were developed by a team of researchers~\citep{silver2018general,silver2017mastering}. The programs are designed to play complex board games full of tactics and strategy well, specifically Go, chess, and shogi, a Japanese game similar to chess, but more complex~\citep{iida2002computer}. AlphaZero and AlphaGo Zero are self-play model-based reinforcement learning programs. The environment against which they play is the same program as the agent that is learning to play. The transition function and the reward function are defined by the rules of the game. The goal is to learn optimal policy and value functions. AlphaZero uses a single neural network, a 19-block residual network (ResNet) with a value head and a policy head. For each different game---Go, chess, shogi---it uses different input and output layers, but the hidden layers are identical, and so is the rest of the architecture and the hyperparameters that govern the learning process. The loss function is the sum of the policy loss and the value loss~\citep{wang2019alternative}. The planning algorithm is based on Monte Carlo Tree Search~\citep{browne2012survey,coulom2006efficient}, although it does not perform random rollouts. Instead, it uses the value head of the ResNet for evaluation and the policy head of the ResNet to augment the UCT selection function~\citep{kocsis2006bandit}, as in P-UCT~\citep{rosin2011multi}. \begin{figure}[t] \centering{ \input selfplay3 \caption{Self-Play/Curriculum Learning Loop}\label{fig:selfplay}} \end{figure} % The residual network is used in the evaluation and selection of MCTS.
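The P-UCT selection rule just mentioned can be written down compactly. The sketch below is a minimal illustration; the dictionary node layout and the constant \texttt{c\_puct} are our own assumptions, not AlphaZero's implementation.

```python
import math

def puct_score(q, prior, parent_visits, child_visits, c_puct=1.5):
    """P-UCT: exploit the mean action value q, and explore in proportion
    to the network's policy prior, discounted by the child's visit count."""
    return q + c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)

def select_child(node, c_puct=1.5):
    """AlphaZero-style tree policy: descend to the child with the highest
    P-UCT score (node is a plain dict here, for illustration)."""
    return max(node["children"],
               key=lambda a: puct_score(node["children"][a]["q"],
                                        node["children"][a]["prior"],
                                        node["visits"],
                                        node["children"][a]["visits"],
                                        c_puct))
```

A rarely visited child with a high policy prior can outscore a well-explored child with a better value estimate; this is how the policy head steers exploration.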
The self-play mechanism starts from a randomly initialized ResNet. MCTS is used to play a tournament of games, to generate training positions for the ResNet to be trained on, using a DQN-style replay buffer~\citep{mnih2015human}. This trained ResNet is then again used by MCTS in the next training tournament to generate training positions, and so on; see Figure~\ref{fig:selfplay}. Self-play feeds on itself in multiple ways, and achieving stable learning is a challenging task, requiring judicious tuning, exploration, and much training. AlphaZero is currently the worldwide strongest player in Go, chess, and shogi~\citep{silver2018general}. The success of curriculum learning in two-player self-play has inspired work on {\bf single-agent curriculum learning}. These single-agent approaches do not do self-play, but do use curriculum learning. Laterre et al.\ introduce the Ranked Reward method for solving bin packing problems~\citep{laterre2018ranked} and Wang et al.\ present a method for Morpion Solitaire~\citep{wang2020tackling}. Feng et al.\ use an AlphaZero-based approach to solve hard Sokoban instances~\citep{feng2020solving}. Their model is an 8-block standard residual network, with MCTS as planner. Solving Sokoban instances is a hard problem in single-agent combinatorial search. The curriculum approach, where the agent learns to solve easy instances before it tries to solve harder instances, is a natural fit. In two-player games, a curriculum is generated in self-play. Feng et al.\ create a curriculum in a different way, by constructing simpler subproblems from hard instances, using the fact that Sokoban problems have a natural hierarchical structure. As in AlphaZero, the program learns from scratch; no Sokoban heuristics are provided to the solver. This approach was able to solve harder Sokoban instances than had been solved before. \subsubsection*{Conclusion} In self-play curriculum learning the opponent has the same model as the agent.
The opponent is the environment of the agent. As the agent learns, so does its opponent, providing tougher counterplay, teaching the agent more. The agent is exposed to curriculum learning, a sequence of increasingly harder learning tasks. In this way, learning strong play has been achieved in Backgammon, Go, chess and shogi~\citep{tesauro1995temporal,silver2018general}. In two-agent search a natural idea is to duplicate the agent as the environment, creating a self-play system. Self-play has been used in planning (as minimax), with policy learning, and in combination with latent models. Self-generated curriculum learning is a powerful paradigm. Work is under way to see if it can be applied to single-agent problems as well~\citep{narvekar2020curriculum,feng2020solving,doan2019line,laterre2018ranked}, and in multi-agent (real-time strategy) games, addressing problems that generalize two-agent games (Sect.~\ref{sec:rts})~\citep{vinyals2019grandmaster}. \subsection{Explicit Planning on Learned Transitions}\label{sec:foresight} In the previous section, transition rules could be derived from the problem directly (by inspection). In many problems, this is not the case, and we have to resort to sampling the environment to learn a model of the transitions. The second category of algorithms of this survey learns the transition model by backpropagation from environment samples. This learned model is then still used by classical, explicit, planning algorithms, as before. We will discuss various approaches where the transition model is learned with supervised learning methods such as backpropagation through time~\citep{werbos1988generalization}, see Figure~\ref{fig:learn}.
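The learn-then-plan loop of this category can be sketched on a toy problem: sample a hidden one-dimensional environment, fit a linear transition model to the samples with least squares, and plan greedily over the learned model in model-predictive-control style. The environment and all names below are illustrative assumptions, not a published method.

```python
import numpy as np

def env_step(s, a):
    # Hidden environment dynamics (unknown to the agent): s' = s + 0.1 * a
    return s + 0.1 * a

def learn_model(n_samples=200, seed=0):
    """Fit a linear transition model s' ~ w_s * s + w_a * a from random samples."""
    rng = np.random.default_rng(seed)
    S = rng.uniform(-1, 1, n_samples)
    A = rng.uniform(-1, 1, n_samples)
    S2 = env_step(S, A)                       # sampled transitions
    X = np.stack([S, A], axis=1)
    w, *_ = np.linalg.lstsq(X, S2, rcond=None)
    return w                                  # ideally close to [1.0, 0.1]

def mpc_action(s, w, candidates=np.linspace(-1, 1, 21)):
    """Model-predictive control: pick the action whose predicted next state
    is closest to the goal (the origin), re-planning at every step."""
    preds = w[0] * s + w[1] * candidates
    return candidates[np.argmin(np.abs(preds))]
```

Re-planning at every step from the current state, rather than committing to a full trajectory, is the model-predictive-control pattern that recurs in iLQG, PETS, and Visual Foresight.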
\begin{figure}[t] \centering \input learn \caption{Explicit Planning/Learned Transitions}\label{fig:learn} \end{figure} \begin{algorithm}[t] \begin{algorithmic} \Repeat \State Sample env $E$ to generate data $D=(s, a, r', s')$ \State Use $D$ to learn $T_a(s,s')$ \State Use $T$ to update policy $\pi(s, a)$ by planning \Until $\pi$ converges \end{algorithmic} \caption{Explicit Planning/Learned Transitions}\label{lst:back} \end{algorithm} \begin{table*}[ht] \begin{center} \begin{tabular}{llll} {\em Approach}&{\em Learning}&{\em Planning}&{\em Application}\\ \hline\hline PILCO~\citep{deisenroth2011pilco}&Gaussian Processes& Gradient based &Pendulum\\ iLQG~\citep{tassa2012synthesis}& Quadratic Non-linear &MPC& Humanoid\\ GPS~\citep{levine2014learning}& iLQG& Trajectory&Swimmer\\ SVG~\citep{heess2015learning}&Value Gradients& Trajectory & Swimmer\\ PETS \citep{chua2018deep}& Uncertainty Ensemble&MPC& Cheetah\\ Visual Foresight \citep{finn2017deep}& Video Prediction&MPC&Manipulation\\ \\ \end{tabular} \caption{Overview of Explicit Planning/Learned Transitions Methods}\label{tab:learn} \end{center} \end{table*} Algorithm~\ref{lst:back} shows the steps of using explicit planning and transition learning by backpropagation. Table~\ref{tab:learn} summarizes the approaches of this subsection, showing both the \emph{learning} and the \emph{planning} approach. Two variants of this approach are also discussed in this subsection: hybrid imagination and latent models, see Table~\ref{tab:imag} and Table~\ref{tab:abstract}. We will first see how simple Gaussian Processes and quadratic methods can create predictive transition models. Next, precision is improved with trajectory methods, and we make the step to video prediction methods. Finally, methods that focus on uncertainty and ensemble methods will be introduced. We know that deep neural nets need much data and learn slowly, or will overfit. 
Uncertainty modeling is based on the insight that early in the training the model has seen little data, and tends to overfit, and later on, as it has seen more data, it may underfit. This issue can be mitigated by incorporating uncertainty into the dynamics models, as we shall see in the later methods~\citep{chua2018deep}. For smaller models, environment samples can be used to approximate a transition model as a {\bf Gaussian Process} of random variables. This approach is followed in PILCO, which stands for Probabilistic Inference for Learning Control, see~\citep{deisenroth2011pilco,deisenroth2013gaussian,kamthe2017data}. Gaussian Processes can accurately learn simple processes with good sample efficiency~\citep{bishop2006pattern}, although for high dimensional problems they need more samples. PILCO treats the transition model $T_a(s,s^\prime)$ as a probabilistic function of the environment samples. The planner improves the policy based on the analytic gradients relative to the policy parameters $\theta$. PILCO has been used to optimize small problems such as Mountain car and Cartpole pendulum swings, for which it works well. Although they achieve model learning using higher order model information, Gaussian Processes do not scale to high dimensional environments, and the method is limited to smaller applications. A related method uses a trajectory optimization approach with nonlinear least-squares optimization. In control theory, the linear–quadratic–Gaussian (LQG) control problem is one of the most fundamental optimal control problems. {\bf Iterative LQG}~\citep{tassa2012synthesis} is the control analog of the Gauss-Newton method for nonlinear least-squares optimization. In contrast to PILCO, the model learner uses quadratic approximation on the reward function and linear approximation of the transition function. 
The planning part of this method uses a form of online trajectory optimization, model-predictive control (MPC), in which step-by-step real-time local optimization is used, as opposed to full-problem optimization~\citep{richards2005robust}. By using many further improvements throughout the MPC pipeline, including the trajectory optimization algorithm, the physics engine, and cost function design, Tassa et al.\ were able to achieve near-real-time performance in humanoid simulated robot manipulation tasks, such as grasping. Another trajectory optimization method takes its inspiration from model-free learning. Levine and Koltun \citep{levine2013guided} introduce {\bf Guided Policy Search} (GPS) in which the search uses trajectory optimization to avoid poor local optima. In GPS, the parameterized policy is trained in a supervised way with samples from a trajectory distribution. The GPS model optimizes the trajectory distribution for cost and the current policy, to create a good training set for the policy. Guiding samples are generated by differential dynamic programming and are incorporated into the policy with regularized importance sampling. In contrast to the previous methods, GPS algorithms can train complex policies with thousands of parameters. In a sense, Guided Policy Search transforms the iLQG controller into a neural network policy $\pi_\theta$ with a trust region in which the new controller does not deviate too much from the samples~\citep{levine2014learning,finn2016guided,montgomery2016guided}. GPS has been evaluated on planar swimming, hopping, and walking, as well as simulated 3D humanoid running. Another attempt at increasing the accuracy of learned parameterized transition models in continuous control problems is {\bf Stochastic Value Gradients} (SVG)~\citep{heess2015learning}. It mitigates learned model inaccuracy by computing value gradients along the real environment trajectories instead of planned ones. 
The mismatch between predicted and real transitions is addressed with re-parametrization and backpropagation through the stochastic samples. In comparison, PILCO uses Gaussian process models to compute analytic policy gradients that are sensitive to model-uncertainty and GPS optimizes policies with the aid of a stochastic trajectory optimizer and locally-linear models. SVG in contrast focuses on global neural network value function approximators. SVG results are reported on simulated robotics applications in Swimmer, Reacher, Gripper, Monoped, Half-Cheetah, and Walker. Other methods also focus on uncertainty in high dimensional modeling, but use ensembles. Chua et al.\ propose {\bf probabilistic ensembles with trajectory sampling} (PETS)~\citep{chua2018deep}. The learned transition model of PETS has an uncertainty-aware deep network, which is combined with sampling-based uncertainty propagation. PETS uses a combination of probabilistic ensembles~\citep{lakshminarayanan2017simple}. The dynamics are modelled by an ensemble of probabilistic neural network models in a model-predictive control setting (the agent only applies the first action from the optimal sequence and re-plans at every time-step)~\citep{nagabandi2018neural}. Chua et al.\ report experiments on simulated robot tasks such as Half-Cheetah, Pusher, Reacher. Performance on these tasks is reported to approach asymptotic model-free baselines, stressing the importance of uncertainty estimation in model-based reinforcement learning. An important problem in robotics is to learn arm manipulation directly from video camera input by seeing which movements work and which fail. The video input provides a high-dimensional and difficult input and increases problem size and complexity substantially. Both Finn et al.\ and Ebert et al.\ report on learning complex robotic manipulation skills from high-dimensional raw sensory pixel inputs in a method called {\bf Visual Foresight}~\citep{finn2017deep,ebert2018visual}.
The aim of Visual Foresight is to generalize deep learning methods to never-before-seen tasks and objects. It uses a training procedure where data is sampled according to a probability distribution. Concurrently, a video prediction model is trained with the samples. This model generates the corresponding sequence of future frames based on an image and a sequence of actions, as in GPS. At test time, the least-cost sequence of actions is selected in a model-predictive control planning framework. Visual Foresight is able to perform multi-object manipulation, pushing, picking and placing, and cloth-folding tasks. \subsubsection*{Conclusion} Model learning with a single network works well for low-dimensional problems. We have seen that Gaussian Process modeling achieves sample efficiency and generalization to good policies. For high-dimensional problems, generalization and sample efficiency deteriorate, more samples are needed and policies do not perform as well. We have discussed methods for improvement by guiding policies with real samples (GPS), limiting the scope of predictions with model-predictive control, and using ensembles and uncertainty aware neural networks to model uncertainty (PETS). \subsubsection{Hybrid Model-Free/Model-Based Imagination} In the preceding subsection, we have looked at how to use environment samples to build a transition model. Many methods were covered to learn transition models with as few samples as possible. These methods are related to supervised learning methods. The transition model was then used by a planning method to optimize the policy or value function. We will now review methods that use a complementary approach, a hybrid model-based/model-free approach of using the environment samples for two purposes. Here the emphasis is no longer on learning the model but on using it effectively. 
This approach was introduced by Sutton~\citep{sutton1990integrated,sutton1991dyna} in the Dyna system, long before deep learning was used widely. Dyna uses the samples to update the policy function directly (model-free learning) and also uses the samples to learn a transition model, which is then used by planning to augment the model-free environment-samples with the model-based imagined ``samples.'' In this way the sample-efficiency of model-free learning is improved quite directly. Figure~\ref{fig:imagination} illustrates the working of the Dyna approach. (Note that now two arrows learn from the environment samples. Model-free learning is drawn bold.) \begin{figure}[t] \begin{center} \input imagination \caption{Hybrid Model-Free/Model-Based Imagination}\label{fig:imagination} \end{center} \end{figure} Dyna was introduced for a table-based approach before deep learning became popular. Originally, in Dyna, the transition model is updated directly with samples, without learning through backpropagation, however, here we discuss only deep imagination methods. Algorithm~\ref{lst:dyna} shows the steps of the algorithm (compared to Algorithm~\ref{lst:back}, the line in italics is new, from Algorithm~\ref{lst:free}). Note how the policy is updated twice in each iteration, by environment sampling, and by transition planning. 
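This double use of the samples can be sketched with tabular Dyna-Q on a tiny corridor task; the environment, reward, and hyperparameters below are illustrative.

```python
import random
from collections import defaultdict

ACTIONS = (-1, +1)

def greedy(Q, s):
    """Argmax over actions with random tie-breaking."""
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

def dyna_q(n_episodes=50, n_planning=10, alpha=0.5, gamma=0.95, eps=0.1, seed=0):
    """Tabular Dyna-Q on a corridor with states 0..5; reward 1 for reaching
    state 5. Every real step also triggers n_planning imagined updates
    drawn from the learned (here deterministic) transition model."""
    random.seed(seed)
    Q = defaultdict(float)   # state-action values
    model = {}               # learned model: (s, a) -> (r, s')
    for _ in range(n_episodes):
        s = 0
        while s != 5:
            a = random.choice(ACTIONS) if random.random() < eps else greedy(Q, s)
            s2 = min(max(s + a, 0), 5)
            r = 1.0 if s2 == 5 else 0.0
            # Model-free update from the real sample ...
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
            # ... learn the model from the same sample ...
            model[(s, a)] = (r, s2)
            # ... and plan: imagined updates replayed from the model.
            for _ in range(n_planning):
                (ps, pa), (pr, ps2) = random.choice(list(model.items()))
                Q[(ps, pa)] += alpha * (pr + gamma * max(Q[(ps2, b)] for b in ACTIONS) - Q[(ps, pa)])
            s = s2
    return Q
```

Each real step yields one model-free update plus n_planning imagined updates, which is how Dyna converts a handful of real samples into many value-function updates.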
\begin{table*}[h] \begin{center} \begin{tabular}{lllll} {\em Approach}&{\em Learning}&{\em Planning}& {\em Reinforcement Learning}&{\em Application}\\ \hline\hline Local Model~\citep{gu2016continuous}& Quadratic Non-linear& Short rollouts &Q-learning& Cheetah\\ MVE \citep{feinberg2018model} & Samples& Short rollouts& Actor-critic&Cheetah\\ Meta Policy \citep{clavera2018model}&Meta-ensembles& Short rollouts & Policy optimization & Cheetah\\ GATS \citep{azizzadenesheli2018surprising}& Pix2pix&MCTS&Deep Q Network & Cheetah\\ Policy Optim \citep{janner2019trust}&Ensemble& Short rollouts &Soft-Actor-Critic&Cheetah\\ Video predict \citep{oh2015action}&CNN/LSTM&Action&Curriculum&Atari\\ VPN \citep{oh2017value}&CNN encoder&$d$-step &$k$-step&Mazes, Atari\\ SimPLe \citep{kaiser2019model}&VAE, LSTM&MPC&PPO&Atari\\ \\ \end{tabular} \caption{Overview of Hybrid Model-Free/Model-based Imagination Methods}\label{tab:imag} \end{center} \end{table*} \begin{algorithm} \begin{algorithmic} \Repeat \State Sample env $E$ to generate data $D=(s, a, r', s')$ \State {\em Use $D$ to update policy $\pi(s, a)$} \State Use $D$ to learn $T_a(s,s')$ \State Use $T$ to update policy $\pi(s, a)$ by planning \Until $\pi$ converges \end{algorithmic} \caption{Hybrid Model-Free/Model-Based Imagination}\label{lst:dyna} \end{algorithm} We will describe five deep learning approaches that use imagination to augment the model-free samples. Table~\ref{tab:imag} summarizes these approaches. Note that in the next subsection more methods are described that also use hybrid model-free/model-based updating of the policy function. These are also listed in Table~\ref{tab:imag}. We will first see how quadratic methods are used with imagination rollouts. Next, short rollouts and ensembles are introduced to improve the precision of the predictive model.
Since imagination is a hybrid model-free/model-based approach, we will also see methods that build on successful model-free deep learning approaches such as meta-learning and generative-adversarial networks. Let us have a look at how Dyna-style imagination works in deep model-based algorithms. Earlier, we saw that linear–quadratic–Gaussian methods were used to improve model learning. Gu et al.\ merge the backpropagation iLQG approaches with Dyna-style synthetic policy rollouts~\citep{gu2016continuous}. To accelerate model-free continuous Q-learning they combine {\bf locally linear models} with local on-policy imagination rollouts. The paper introduces a version of continuous Q-learning called normalized advantage functions, accelerating the learning with imagination rollouts. Data efficiency is improved with model-guided exploration using off-policy iLQG rollouts. The approach has been tested on simulated robotics tasks such as Gripper, Half-Cheetah and Reacher. Feinberg et al.\ present {\bf model-based value expansion} (MVE) which, like the previous algorithm~\citep{gu2016continuous}, controls for uncertainty in the deep model by only allowing imagination to a fixed depth~\citep{feinberg2018model}. Value estimates are split into a near-future model-based component and a distant-future model-free component. In contrast to stochastic value gradients (SVG), MVE works without differentiable dynamics, which is important since transitions can include non-differentiable contact interactions~\citep{heess2015learning}. The planning part of MVE uses short rollouts. The overall reinforcement learning algorithm is a combined value-policy actor-critic setting~\citep{sutton2018introduction} with deep deterministic policy gradients (DDPG)~\citep{lillicrap2015continuous}. As applications, re-implementations of simulated robotics tasks were used, such as Cheetah, Swimmer and Walker.
An ensemble approach has been used in combination with gradient-based meta-learning by Clavera et al.\ who introduced Model-Based Reinforcement Learning via {\bf Meta-Policy Optimization} (MB-MPO)~\citep{clavera2018model}. This method learns an ensemble of dynamics models and then learns a policy that can be adapted quickly to any of the fitted dynamics models with one gradient step (the MAML-like meta-learning step~\citep{finn2017model}). MB-MPO frames model-based reinforcement learning as meta-learning a policy on a distribution of dynamics models, in the form of an ensemble of the real environment dynamics. The approach builds on the gradient-based meta-learning framework MAML~\citep{finn2017model}. The planning part of the algorithm samples imagined trajectories. MB-MPO is evaluated on continuous control benchmark tasks in a robotics simulator: Ant, Half-Cheetah, Hopper, Swimmer, Walker. The results reported indicate that meta-learning a policy over an ensemble of learned models approaches the level of performance of model-free methods with substantially better sample complexity. Another attempt to improve the accuracy and efficiency of dynamics models has been through generative adversarial networks~\citep{goodfellow2014generative}. Azizzadenesheli et al.\ aim to combine the successes of {\bf generative adversarial networks} with planning robot motion in model-based reinforcement learning~\citep{azizzadenesheli2018surprising}. Manipulating robot arms based on video input is an important application in AI (see also Visual Foresight in Section~\ref{sec:foresight}, and the SimPLe approach, in Section~\ref{sec:simple}). A generative dynamics model is introduced to model the transition dynamics based on the pix2pix architecture~\citep{isola2017image}. For planning, Monte Carlo Tree Search~\citep{coulom2006efficient,browne2012survey} is used. GATS is evaluated on Atari games such as Pong, and does not perform better than model-free DQN~\citep{mnih2015human}.
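The ensemble idea shared by PETS, MB-MPO, and MBPO can be sketched with a bootstrap ensemble of linear transition models, where disagreement between the members serves as an epistemic uncertainty signal. The data-generating process and model class below are illustrative, not those of the cited papers.

```python
import numpy as np

def fit_ensemble(S, A, S2, k=5, seed=0):
    """Bootstrap ensemble of linear transition models: each member is fit
    on a resample of the data, so members disagree where data is scarce."""
    rng = np.random.default_rng(seed)
    X = np.stack([S, A, np.ones_like(S)], axis=1)   # state, action, bias
    ensemble = []
    for _ in range(k):
        idx = rng.integers(0, len(S), len(S))       # bootstrap resample
        w, *_ = np.linalg.lstsq(X[idx], S2[idx], rcond=None)
        ensemble.append(w)
    return ensemble

def predict(ensemble, s, a):
    """Mean prediction plus the spread across members as uncertainty."""
    x = np.array([s, a, 1.0])
    preds = np.array([w @ x for w in ensemble])
    return preds.mean(), preds.std()
```

Far from the training data the members extrapolate differently, so the spread grows; a planner can use this signal to distrust or avoid such regions.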
Achieving a well-performing high-dimensional predictive model remains a challenge. Janner et al.\ propose in Model-Based Policy Optimization (MBPO) a new approach to {\bf short rollouts with ensembles}~\citep{janner2019trust}. In this approach the model horizon is much shorter than the task horizon. These model rollouts are combined with real samples, and matched with plausible environment observations~\citep{kalweit2017uncertainty}. MBPO uses an ensemble of probabilistic networks, as in PETS~\citep{chua2018deep}. Soft-actor-critic~\citep{haarnoja2018soft} is used as the reinforcement learning method. Experiments show that the policy optimization algorithm learns substantially faster with short rollouts than other algorithms, while retaining asymptotic performance relative to model-free algorithms. The applications used are simulated robotics tasks: Hopper, Walker, Half-Cheetah, Ant. The method surpasses the sample efficiency of prior model-based algorithms and matches the performance of model-free algorithms. \subsubsection*{Conclusion} The hybrid imagination methods aim to combine the advantages of model-free and model-based methods in a hybrid approach, augmenting ``real'' with ``imagined'' samples to improve the sample efficiency of deep model-free learning. A problem is that inaccuracies in the model may be enlarged in the planned rollouts; most methods therefore limit the lookahead to short, local rollouts. We have discussed interesting approaches combining meta-learning and generative-adversarial networks, and ensemble methods learning robotic movement directly from images. \subsubsection{Latent Models}\label{sec:abstract} The next group of methods that we describe are the latent or abstract model algorithms. Latent models are born out of the need for more accurate predictive deep models. Latent models replace the single transition model with separate, smaller, specialized representation models for the different functions in a reinforcement learning algorithm.
All of the elements of the MDP-tuple may now get their own model. Planning occurs in latent space. \begin{table*}[h] \begin{center} \begin{tabular}{lllll} {\em Approach}&{\em Learning}&{\em Planning}& {\em Reinforcement Learning}&{\em Application}\\ \hline\hline Video predict \citep{oh2015action}&CNN/LSTM&Action&Curriculum&Atari\\ VPN \citep{oh2017value}&CNN encoder&$d$-step &$k$-step&Mazes, Atari\\ SimPLe \citep{kaiser2019model}&VAE, LSTM&MPC&PPO&Atari\\ PlaNet \citep{hafner2018learning}&RSSM (VAE/RNN) & CEM & MPC & Cheetah\\ Dreamer \citep{hafner2019dream}&RSSM+CNN& Imagine & Actor-Critic & Control\\ Plan2Explore \citep{sekar2020planning}&RSSM& Planning & Few-shot & Control\\ \\ \end{tabular} \caption{Overview of Latent Modeling Methods}\label{tab:abstract} \end{center} \end{table*} Traditional deep learning models represent input states directly in a single model: the layers of neurons and filters are all related in some way to the input and output of the domain, be it an image, a sound, a text or a joystick action or arm movement. All of the MDP functions, state, value, action, reward, policy, and discount, act on this single model. Latent models, on the other hand, are not connected directly to the input and output, but are connected to other models and signals. They do not work on direct representations, but on latent, more compact, representations. The interactions are captured in three to four different models, such as observation, representation, transition, and reward models. These may be smaller, lower capacity, models. They may be trained with unsupervised or self-supervised deep learning such as variational autoencoders~\citep{kingma2013auto,kingma2019introduction} or generative adversarial networks~\citep{goodfellow2014generative}, or recurrent networks. Latent models use multiple specialized networks, one for each function to be approximated. 
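The separation into specialized modules can be made concrete with a small sketch, loosely in the spirit of VPN and Dreamer but not their actual architectures: the four linear maps below are randomly initialized stand-ins for trained encoder, transition, reward, and value networks, and all dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, LATENT_DIM, ACT_DIM = 64, 8, 2   # planning happens in 8-d, not 64-d

# Untrained stand-ins for the four specialized modules.
encoder    = rng.normal(size=(LATENT_DIM, OBS_DIM)) / np.sqrt(OBS_DIM)
transition = rng.normal(size=(LATENT_DIM, LATENT_DIM + ACT_DIM)) / np.sqrt(LATENT_DIM)
reward_w   = rng.normal(size=LATENT_DIM) / np.sqrt(LATENT_DIM)
value_w    = rng.normal(size=LATENT_DIM) / np.sqrt(LATENT_DIM)

def imagine_return(obs, actions, gamma=0.99):
    """Encode the observation once, then roll out entirely in latent space:
    predicted rewards plus a bootstrapped value, with no decoding back to
    future observations."""
    z = np.tanh(encoder @ obs)
    ret = 0.0
    for t, a in enumerate(actions):
        z = np.tanh(transition @ np.concatenate([z, a]))
        ret += gamma ** t * float(reward_w @ z)
    return ret + gamma ** len(actions) * float(value_w @ z)
```

A planner would compare imagine_return over candidate action sequences and pick the best; the point of the sketch is only structural: the observation is encoded once, and the imagined rollout never leaves the compact latent space.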
The intuition behind the use of latent models is dimension reduction: they can better specialize and thus make more precise predictions, or can better capture the essence of higher-level reasoning in the input domains, and need fewer samples (without overfitting) due to their lower capacity. Figure~\ref{fig:abstract} illustrates the abstract (latent) learning process (using the modules of Dreamer~\citep{hafner2019dream} as an example). \begin{figure}[t] \begin{center} \input abstract \caption{Latent Models}\label{fig:abstract} \end{center} \end{figure} Table~\ref{tab:abstract} summarizes the methods of this subsection (three are also mentioned in Table~\ref{tab:imag}). Quite a few different latent (abstract) model approaches have been published. Latent models work well, both for games and for robotics. Different rollout methods are proposed, such as local rollouts, differentiable imagination, and end-to-end model learning and planning. Finally, latent models are applied to transfer learning in few-shot learning. In the next subsection more methods are described that also use latent models (see Table~\ref{tab:netw} and the overview Table~\ref{tab:overview}). Let us now have a look at the latent model approaches. An important application in games and robotics is the long-range prediction of video images. Building a {\bf generative model for video data} is a challenging problem involving high-dimensional natural-scene data with temporal dynamics, introduced by~\citep{schmidhuber1991learningfovea}. In many applications next-frame prediction also depends on control or action variables, especially in games. A first paper by~\citep{oh2015action} builds a model to predict Atari games using a high-dimensional video encoding model and action-conditional transformation. The authors describe a three-step architecture, whose first step uses either a convolutional or a recurrent (LSTM) encoder. The next step performs action-conditional encoding, after which convolutional decoding takes place.
To reduce the effect of small prediction errors compounding through time, a multi-step prediction target is used. Short-term future frames are predicted and fine-tuned to predict longer-term future frames after the previous phase converges, using a curriculum that stabilizes training~\citep{bengio2009curriculum}. Oh et al.\ perform planning on an abstract, encoded representation, showing the benefit of acting in latent space. Experimental results on Atari games showed generation of visually realistic frames that remain useful for control over up to 100-step action-conditional predictions in some games. This architecture was developed further into the VPN approach, which we will describe next. The {\bf Value Prediction Network} (VPN) approach~\citep{oh2017value} integrates model-free and model-based reinforcement learning into a single abstract neural network that consists of four modules. For training, VPN combines temporal-difference search~\citep{silver2012temporal} and $n$-step Q-learning~\citep{mnih2016asynchronous}. VPN performs lookahead planning to choose actions. Classical model-based reinforcement learning predicts future observations $T_a(s,s')$. VPN plans future values without having to predict future observations, using abstract representations instead. The VPN network architecture consists of the modules: encoding, transition, outcome, and value. The encoding module is applied to the environment observation to produce a latent state $s$. The value, outcome, and transition modules work in latent space, and are recursively applied to expand the tree.\footnote{VPN uses a convolutional neural network as the encoding module. The transition module consists of an option-conditional convolution layer (see~\citep{oh2015action}). A residual connection from the previous abstract-state to the next abstract-state is used~\citep{he2016deep}. The outcome module is similar to the transition module. The value module consists of two fully-connected layers.
The number of layers and hidden units varies depending on the application domain.} It does not use MCTS, but a simpler rollout algorithm that performs planning up to a planning horizon. VPN uses imagination to update the policy. It outperforms model-free DQN on Mazes and Atari games such as Seaquest, QBert, Krull, and Crazy Climber. Value Prediction Networks are related to Value Iteration Networks and to the Predictron, which we will describe next. For robotics and games, video prediction methods are important.\label{sec:simple} Simulated policy learning, or SimPLe, uses {\bf stochastic video prediction} techniques~\citep{kaiser2019model}. SimPLe uses video frame prediction as a basis for model-based reinforcement learning. In contrast to Visual Foresight, SimPLe builds on model-free work on video prediction using variational autoencoders, recurrent world models and generative models~\citep{oh2015action,chiappa2017recurrent,leibfried2016deep} and model-based work~\citep{oh2017value,ha2018world,azizzadenesheli2018surprising}. The latent model is formed with a variational autoencoder that is used to deal with the limited horizon of past observation frames~\citep{babaeizadeh2017stochastic,bengio2015scheduled}. The model-free PPO algorithm~\citep{schulman2017proximal} is used for policy optimization. In an experimental evaluation, SimPLe is more sample efficient than the Rainbow algorithm~\citep{hessel2017rainbow} on 26 ALE games when learning Atari games with 100,000 sample steps (400k frames). Learning dynamics models that are accurate enough for planning is a long-standing challenge, especially in image-based domains. PlaNet trains a model-based agent to learn the environment dynamics from images and choose actions through planning in latent space with both deterministic and stochastic transition elements. PlaNet is introduced in {\bf Planning from Pixels}~\citep{hafner2018learning}.
PlaNet uses a Recurrent State Space Model (RSSM) that consists of a transition model, an observation model, a variational encoder and a reward model. Based on these models, a Model-Predictive Control agent is used to adapt its plan, replanning at each step. For planning, the RSSM is used by the Cross-Entropy-Method (CEM) to search for the best action sequence~\citep{karl2016deep,buesing2018learning,doerr2018probabilistic}. In contrast to many model-free reinforcement learning approaches, no explicit policy or value network is used. PlaNet is tested on tasks from MuJoCo and the DeepMind control suite: Swing-up, Reacher, Cheetah, Cup Catch. It reaches performance that is close to strong model-free algorithms. A year after the PlaNet paper was published, \citep{hafner2019dream} introduced Dream to Control: Learning Behaviors by {\bf Latent Imagination}. World models enable interpolating between past experience, and latent models predict both actions and values. The latent models in Dreamer consist of a representation model, an observation model, a transition model, and a reward model. They allow the agent to plan (imagine) the outcomes of potential action sequences without executing them in the environment. Dreamer uses an actor-critic approach to learn behaviors that consider rewards beyond the horizon, and backpropagates through the value model, similar to DDPG~\citep{lillicrap2015continuous} and Soft-actor-critic~\citep{haarnoja2018soft}. Dreamer is tested on applications from the DeepMind control suite: 20 visual control tasks such as Cup, Acrobot, Hopper, Walker, Quadruped, on which it achieves good performance. Finally, Plan2Explore~\citep{sekar2020planning} studies how reinforcement learning with latent models can be used for transfer learning, in particular few-shot and {\bf zero-shot learning}~\citep{xian2017zero}.
Plan2Explore is a self-supervised reinforcement learning method that learns a world model of its environment through unsupervised exploration, which it then uses to solve zero-shot and few-shot tasks. Plan2Explore builds on PlaNet~\citep{hafner2018learning} and Dreamer~\citep{hafner2019dream}, learning dynamics models from images with the same latent models: image encoder (convolutional neural network), dynamics (recurrent state space model), reward predictor, image decoder. With this world model, behaviors must be derived for the learning tasks. The agent first uses planning to explore and to learn a world model in a self-supervised manner. After exploration, it receives reward functions to adapt to multiple tasks such as standing, walking, running and flipping. Plan2Explore achieved good zero-shot performance on the DeepMind Control Suite (Swingup, Hopper, Pendulum, Reacher, Cup Catch, Walker) in the sense that the agent's self-supervised zero-shot performance was competitive with Dreamer's supervised reinforcement learning performance. \subsubsection*{Conclusion} In the preceding methods we have seen how a single network model can be specialized into three or four separate models. Different rollout methods were proposed, such as local rollouts and differentiable imagination. Latent, or abstract, models are a direct descendant of model learning networks, with different models for different aspects of the reinforcement learning algorithms. The latent representations have lower capacity, allowing for greater accuracy, better generalization and reduced sample complexity. The smaller latent representation models are often learned unsupervised or self-supervised, using variational autoencoders or recurrent LSTMs. Latent models were applied to transfer learning in few-shot learning.
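The latent-space planning loop used by agents such as PlaNet---searching for a good action sequence by iteratively refitting a sampling distribution over imagined rollouts---can be sketched with the cross-entropy method over a toy latent dynamics model. This is a sketch of the idea only; the dynamics and reward functions below are illustrative stand-ins, not PlaNet's learned models:

```python
import numpy as np

rng = np.random.default_rng(0)

def dynamics(z, a):
    # Toy stand-in for a learned latent transition model.
    return 0.9 * z + a

def reward(z):
    # Toy reward model: stay close to the origin of the latent space.
    return -float(z @ z)

def cem_plan(z0, horizon=10, candidates=100, elites=10, iters=5):
    """Cross-entropy method over action sequences, evaluated by imagined
    rollouts in latent space (no environment interaction)."""
    mean, std = np.zeros((horizon, 2)), np.ones((horizon, 2))
    for _ in range(iters):
        seqs = rng.normal(mean, std, size=(candidates, horizon, 2))
        returns = []
        for seq in seqs:
            z, ret = z0, 0.0
            for a in seq:
                z = dynamics(z, a)
                ret += reward(z)
            returns.append(ret)
        elite = seqs[np.argsort(returns)[-elites:]]  # keep the best sequences
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean[0]  # MPC: execute only the first action, then replan

a0 = cem_plan(np.array([2.0, -1.0]))
```

As in model-predictive control, only the first action of the refitted plan is executed; the planner is re-run from the next latent state.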
\subsection{End-to-end Learning of Planning and Transitions}\label{sec:e2e} In the previous subsection the approach is (1) to learn a transition model through backpropagation and then (2) to do conventional lookahead rollouts using a planning algorithm such as value iteration, depth-limited search, or MCTS. A larger trend in machine learning is to replace conventional algorithms by differentiable, gradient-style approaches, which are self-learning and self-adapting. Would it be possible to make the conventional rollouts differentiable as well? If updates can be made differentiable, why not planning? The final approach of this survey is indeed to learn both the transition model and the planning steps end-to-end. This means that the neural network both represents the transition model and executes the planning steps with it. Achieving this within a single neural network is a challenge, but we will see that abstract models, with latent representations, make it easier to execute planning steps in a network. When we regard the action that a neural network normally performs as a transformation and filter activity (selection, or classification), then it is easy to see that planning, which consists of state unrolling and selection, is not so far from what a neural network is normally used for. Note that especially recurrent neural networks and LSTMs contain implicit state, making their use as a planner even easier.
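The observation that recurrent state can serve as an implicit plan can be illustrated with a toy sketch: a recurrent cell is ticked several times on the same observation before an action is read out, so that the hidden state plays the role of a plan. The weights here are random, untrained stand-ins, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
HID, OBS, ACT = 16, 8, 4

# Hypothetical, untrained weights (illustration only).
W_in = rng.normal(scale=0.3, size=(OBS, HID))
W_hh = rng.normal(scale=0.3, size=(HID, HID))
W_out = rng.normal(scale=0.3, size=(HID, ACT))

def act(obs, ticks=3):
    """Run the recurrence `ticks` times on the same observation before
    emitting an action; the hidden state acts as an implicit plan."""
    h = np.zeros(HID)
    for _ in range(ticks):             # internal "planning" ticks
        h = np.tanh(obs @ W_in + h @ W_hh)
    return int(np.argmax(h @ W_out))   # greedy action from the final state

a = act(rng.normal(size=OBS))
```

With trained weights, extra internal ticks give the network more computation per decision, which is one way to read the planning behavior of the recurrent architectures discussed below.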
\begin{table*}[h] \begin{center} \begin{tabular}{lllll} {\em Approach}&{\em Learning}&{\em Planning}& {\em Reinforcement Learning}&{\em Application}\\ \hline\hline VIN \citep{tamar2016value}&CNN&Rollout in network&Value Iteration&Mazes\\ VProp \citep{nardelli2018value}& CNN&Hierarch Rollouts &Value Iteration&Navigation\\ TreeQN \citep{farquhar2018treeqn}& Tree-shape Net& Plan-functions&DQN/Actor-Critic&Box pushing\\ ConvLSTM \citep{guez2019investigation}&CNN+LSTM& Rollouts in network & A3C & Sokoban\\ I2A \citep{racaniere2017imagination}&CNN/LSTM encoder&Meta-controller& A3C&Sokoban\\ Predictron \citep{silver2017predictron}& $k,\gamma,\lambda$-CNN-predictr& $k$-rollout& $\lambda$-accum&Mazes\\ World Model \citep{ha2018world}& VAE & CMA-ES & MDN-RNN & Car Racing\\ MuZero \citep{schrittwieser2019mastering}& Latent&MCTS &Curriculum&Go/chess/shogi+Atari\\ \\ \end{tabular} \caption{Overview of End-to-End Planning/Transition Methods}\label{tab:netw} \end{center} \end{table*} Some progress has been made with this idea. One approach is to map the planning iterations onto the layers of a deep neural network, with each layer representing a lookahead step. The transition \emph{model} becomes embedded in a transition \emph{network}, see Figure~\ref{fig:network}. \begin{figure}[t] \begin{center} \input network \caption{End-to-End Planning/Transitions}\label{fig:network} \end{center} \end{figure} In this way, the planner becomes part of one large trained end-to-end agent. (In the figure the full circle is made bold to signal end-to-end learning.) Table~\ref{tab:netw} summarizes the approaches of this subsection. We will see how the iterations of value iteration can be implemented in the layers of a convolutional neural network (CNN). Next, two variations of this method are presented, and a way to implement planning with convolutional LSTM modules. All these approaches implement differentiable, trainable, planning algorithms, that can generalize to different inputs. 
The later methods use elaborate schemes with latent models so that the learning can be applied to different application domains. {\bf Value Iteration Networks} (VIN) are introduced by Tamar et al. in~\citep{tamar2016value}, see also~\citep{niu2018generalized}. A VIN is a differentiable multi-layer network that is used to perform the steps of a simple planning algorithm. The core idea is that value iteration (VI, see Algorithm~\ref{lst:vi}) or step-by-step planning can be implemented by a multi-layer convolutional network: each layer does a step of lookahead. The VI iterations are rolled out in the network layers $Q$ with $A$ channels. Through backpropagation the model learns the value iteration parameters. The aim is to learn a general model, one that can navigate in unseen environments. VIN learns a fully differentiable planning algorithm. The idea of planning by gradient descent has existed for some time; several authors have explored learning approximations of dynamics in neural networks~\citep{kelley1960gradient,schmidhuber1990line,ilin2007efficient}. VIN can be used for discrete and continuous path planning, and has been tried in grid world problems and natural language tasks. VIN has achieved generalization of finding shortest paths in unseen mazes. However, a limitation of VIN is that the number of layers of the CNN restricts the number of planning steps, restricting VINs to small and low-dimensional domains. Schleich et al.~\citep{schleich2019value} extend VINs by adding abstraction, and Srinivas et al.~\citep{srinivas2018universal} introduce universal planning networks, UPN, which generalize to modified robot morphologies. VProp, or {\bf Value Propagation}~\citep{nardelli2018value}, is another attempt at creating generalizable planners inspired by VIN. By using a hierarchical structure VProp has the ability to generalize to larger map sizes and dynamic environments.
VProp not only learns to plan and navigate in dynamic environments, but its hierarchical structure provides a way to generalize to navigation tasks where the required planning horizon and the size of the map are much larger than the ones seen at training time. VProp is evaluated on grid-worlds and also on dynamic environments and on a navigation scenario from StarCraft. A different approach is taken in TreeQN/ATreeC. Again, the aim is to create {\bf differentiable tree planning} for deep reinforcement learning~\citep{farquhar2018treeqn}. Like VIN, TreeQN focuses on combining planning and deep reinforcement learning. Unlike VIN, however, TreeQN does so by incorporating a recursive tree structure in the network. It models an MDP by incorporating an explicit encoder function, a transition function, a reward function, a value function, and a backup function (see also latent models in the next subsection). In this way, it aims to achieve the same goal as VIN, that is, to create a differentiable neural network architecture that is suitable for planning. TreeQN is based on DQN value functions; an actor-critic variant is proposed as ATreeC. TreeQN is a prelude to the latent model methods in the next subsection. In addition to being related to VIN, this approach is also related to VPN~\citep{oh2017value} and the Predictron~\citep{silver2017predictron}. TreeQN is tried on box-pushing applications, such as Sokoban, and nine Atari games. Another approach to differentiable planning is to teach a sequence of convolutional neural networks to exhibit planning behavior. A paper by~\citep{guez2019investigation} takes this approach. The paper demonstrates that a neural network architecture consisting of modules of a {\bf convolutional network and LSTM} can learn to exhibit the behavior of a planner.
In this approach the planning occurs implicitly, by the network, which the authors call model-free planning, in contrast to the previous approaches in which the network structure more explicitly resembles a planner~\citep{farquhar2018treeqn,guez2018learning,tamar2016value}. In this method model-based behavior is learned with a general recurrent architecture consisting of LSTMs and a convolutional network~\citep{schmidhuber1990making} in the form of a stack of ConvLSTM modules~\citep{xingjian2015convolutional}. For the learning of the ConvLSTM modules the A3C actor-critic approach is used~\citep{mnih2016asynchronous}. The method is tried on Sokoban and Boxworld~\citep{zambaldi2018relational}. A stack of depth $D$, repeated $N$ times (time-ticks), allows the network to plan. In harder Sokoban instances, larger capacity networks with larger depth performed better. The experiments used a large number of environment steps; future work should investigate how to achieve sample efficiency with this architecture. \subsubsection*{Conclusion} Planning networks combine planning and transition learning. They fold the planning into the network, making the planning process itself differentiable. The network then learns which planning decisions to make. Value Iteration Networks have shown how learning can transfer to mazes that have not been seen before. A drawback is that, due to the marriage of problem size and network topology, the approach has been limited to smaller problem sizes, a limitation that subsequent methods have tried to overcome. One of these approaches is TreeQN, which uses multiple smaller models and a tree-structured network. The related Predictron architecture~\citep{silver2017predictron} also learns planning end-to-end, and is applicable to different kinds and sizes of problems. The Predictron uses abstract models, and will be discussed in the next subsection.
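The mechanism underlying these planning networks---a value-iteration step applied to a whole grid at once, with one network layer per lookahead step---can be sketched in NumPy. This is an illustration of the idea behind VIN, not its implementation: explicit value-map shifts stand in for learned convolution kernels:

```python
import numpy as np

N, gamma = 5, 0.9
reward = np.full((N, N), -0.1)  # small step cost everywhere
reward[4, 4] = 1.0              # goal cell

def shift(v, dr, dc):
    """out[r, c] = v[r+dr, c+dc] where that neighbor exists, else v[r, c]
    (bumping into a wall keeps the current cell's value)."""
    out = v.copy()
    out[max(-dr, 0):N + min(-dr, 0), max(-dc, 0):N + min(-dc, 0)] = \
        v[max(dr, 0):N + min(dr, 0), max(dc, 0):N + min(dc, 0)]
    return out

v = np.zeros((N, N))
for _ in range(20):                     # each sweep = one network "layer"
    # One Q-value map per move (down, up, right, left), for all cells at once.
    q = np.stack([shift(v, dr, dc)
                  for dr, dc in [(1, 0), (-1, 0), (0, 1), (0, -1)]])
    v = reward + gamma * q.max(axis=0)  # Bellman backup = max over channels
# v[4, 4] is now the highest value; values decrease with distance to the goal.
```

In a VIN the shifts are replaced by convolutions with learned kernels, the number of sweeps becomes the number of network layers, and the whole stack is trained by backpropagation, which is why the depth of the network bounds the planning horizon.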
\subsubsection{End-to-End Planning/Transitions with Latent Models} We will now discuss latent model approaches in end-to-end learning of planning and transitions. The first abstract imagination-based approach that we discuss is the {\bf Imagination-Augmented Agent}, or I2A, by~\citep{pascanu2017learning,racaniere2017imagination,buesing2018learning}. A problem of model-based algorithms is the sensitivity of the planning to model imperfections. I2A deals with these imperfections by introducing a latent model that learns to interpret internal simulations and to adapt a strategy to the current state. I2A uses latent models of the environment, based on~\citep{chiappa2017recurrent,buesing2018learning}. The core architectural feature of I2A is an environment model, a recurrent architecture trained unsupervised from agent trajectories. I2A has four elements that together constitute the abstract model: (1) It has a manager that constructs a plan, which can be implemented with a CNN. (2) It has a controller that creates an action policy. (3) It has an environment model to do imagination. (4) Finally, it has a memory, which can be implemented with an LSTM~\citep{pascanu2017learning}. I2A uses a manager or meta-controller to choose between rolling out actions in the environment or in imagination (see~\citep{hamrick2017metacontrol}). This allows the use of models that only coarsely capture the environment dynamics, even when those models are not perfect. The I2A network uses a recurrent architecture in which a CNN is trained from agent trajectories with A3C~\citep{mnih2016asynchronous}. I2A achieves success with little data and imperfect models, optimizing point estimates of the expected Q-values of the actions in a discrete action space. I2A is applied to Sokoban and Mini-Pacman by~\citep{racaniere2017imagination,buesing2018learning}. Performance compares favorably with model-free and planning algorithms (MCTS). Pascanu et al.
apply the approach on a maze and a spaceship task~\citep{pascanu2017learning}. Planning networks (VIN) combine planning and learning end-to-end. A limitation of VIN is that the tight connection between problem domain, iteration algorithm, and network architecture limited the applicability to small grid world problems. The {\bf Predictron} introduces an abstract model to remove this limitation. The Predictron was introduced by Silver et al.\ and combines end-to-end planning and model learning~\citep{silver2017predictron}. As with~\citep{oh2017value}, the model is an abstract model that consists of four components: a representation model, a next-state model, a reward model, and a discount model. All models are differentiable. The goal of the abstract model in the Predictron is to facilitate value prediction (not state prediction) or prediction of pseudo-reward functions that can encode special events, such as ``staying alive'' or ``reaching the next room.'' The planning part rolls forward its internal model $k$ steps. As in the Dyna architecture, imagined forward steps can be combined with samples from the actual environment, combining model-free and model-based updates. The Predictron has been applied to procedurally generated mazes and a simulated pool domain. In both cases it outperformed model-free algorithms. Latent models of the dynamics of the environment can also be viewed as {\bf World Models}, a term used by~\citep{ha2018recurrent,ha2018world}. World Models are inspired by the manner in which humans are thought to construct a mental model of the world in which we live. World Models are generative recurrent neural networks that are trained unsupervised to generate states for simulation using a variational autoencoder and a recurrent network. They learn a compressed spatial and temporal representation of the environment.
By using features extracted from the World Model as inputs to the agent, a compact and simple policy can be trained to solve a task, and planning occurs in the compressed or simplified world. For a visual environment, World Models consist of a vision model, a memory model, and a controller. The vision model is often trained unsupervised with a variational autoencoder. The memory model is approximated with a mixture density network of a Gaussian distribution (MDN-RNN)~\citep{bishop1994mixture,graves2013generating}. The controller model is a linear model that maps directly to actions; it is optimized with the CMA-ES evolutionary strategy. Rollouts in World Models are also called {\em dreams}, to contrast them with samples from the real environment. With World Models a policy can in principle even be trained completely inside the dream, using imagination only inside the World Model, to test it out later in the actual environment. World Models have been applied experimentally to the Car Racing task and to VizDoom~\citep{kempka2016vizdoom}. Taking the development of AlphaZero further is the work on {\bf MuZero}~\citep{schrittwieser2019mastering}. Board games are well suited for model-based methods because the transition function is given by the rules of the game. However, would it be possible to do well if the rules of the game were {\em not} given? In MuZero a new architecture is used to learn transition functions for a range of different games, from Atari to board games. MuZero learns the transition model for all games from interaction with the environment, with one architecture that is able to learn different transition models. As with the Predictron~\citep{silver2017predictron} and Value Prediction Networks~\citep{oh2017value}, MuZero has an abstract model with different modules: representation, dynamics, and prediction function. The dynamics function is a recurrent process that computes the transition (the next latent state) and the reward.
The prediction function computes the policy and value functions. For planning, MuZero uses a version of MCTS, without the rollouts, and with P-UCT as selection rule, using information from the abstract model as input for node selection. MuZero can be regarded as joining the Predictron with self-play. It performs well on Atari games and on board games, learning to play the games from scratch, after having learned the rules of the games from the environment. \subsubsection*{Conclusion} Latent models represent states with a number of smaller, latent models, allowing planning to happen in a smaller latent space. They are useful for explicit planning and for end-to-end learnable planning, as in the Predictron~\citep{silver2017predictron}. Latent models allow end-to-end planning to be applied to a broader range of applications, beyond small mazes. The Predictron creates an abstract planning network, in the spirit of, but without the limitations of, Value Iteration Networks~\citep{tamar2016value}. The World Models interpretation links latent models to the way in which humans create mental models of the world that we live in. After this detailed survey of model-based methods---the agents---it is time to discuss our findings and draw conclusions. Before we do so, we will first look at one of the most important elements for reproducible reinforcement learning research, the benchmark. \begin{figure}[t] \begin{center} \includegraphics[width=6cm]{Cart-pendulum} \caption{Cartpole Pendulum~\citep{sutton2018introduction}}\label{fig:cartpole} \end{center} \end{figure} \section{Benchmarks}\label{sec:bench} Benchmarks---the environments---play a key role in artificial intelligence. Without them, progress cannot be measured, and results cannot be compared in a meaningful way. The benchmarks define the kind of intelligence that our artificial methods should approach.
For reinforcement learning, Mountain car and Cartpole are well-known small problems that characterize the kind of problem to solve (see Figure~\ref{fig:cartpole}). Chess has been called the Drosophila of AI~\citep{landis2001aleksandr}. In addition to Mountain car and chess, a range of benchmark applications has been used to measure the progress of artificial intelligence methods. Some of the benchmarks are well-known and have been driving progress. In image recognition, the ImageNet sequence of competitions has stimulated great progress~\citep{fei2009imagenet,krizhevsky2012imagenet,guo2016deep}. The current focus on reproducibility in reinforcement learning emphasizes the importance of benchmarks~\citep{henderson2017deep,islam2017reproducibility,khetarpal2018re}. Most papers that introduce new model-based reinforcement learning algorithms perform some form of experimental evaluation of the algorithm. Still, since papers use different versions and hyper-parameter settings, comparing algorithm performance remains difficult in practice. A recent benchmarking study compared the performance of 14 algorithms and some baseline algorithms on a number of MuJoCo~\citep{todorov2012mujoco} robotics benchmarks~\citep{wang2019benchmarking}. There was no clear winner. Performance of methods varied widely from application to application. There is much room for further improvement on many applications of model-based reinforcement learning algorithms, and for making methods more robust. The use of benchmarks should become more standardized to ease meaningful performance comparisons. We will now describe benchmarks commonly used in deep model-based reinforcement learning. We will discuss five sets of benchmarks: (1) puzzles and mazes, (2) Atari arcade games such as Pac-Man, (3) board games such as Go and chess, (4) real-time strategy games such as StarCraft, and (5) simulated robotics tasks such as Half-Cheetah.
As an aside, some of these benchmarks resemble games that children and adults play to learn new skills. We will have a closer look at these sets of benchmarks, some with discrete, some with continuous action spaces. \begin{figure}[t] \begin{center} \includegraphics[width=8cm]{sokoban2} \end{center} \caption{Sokoban Puzzle~\citep{chao2013}}\label{fig:sokoban} \end{figure} \subsection{Mazes} Trajectory planning algorithms are crucial in robotics~\citep{latombe2012robot,gasparetto2015path}. There is a long tradition of using 2D and 3D path-finding problems in reinforcement learning and AI. The Taxi domain was introduced by~\citep{dietterich2000hierarchical} in the context of hierarchical problem solving, and box-pushing problems such as Sokoban have been used frequently~\citep{junghanns2001sokoban,dor1999sokoban,murase1996automatic,zhou2013tabled}, see Figure~\ref{fig:sokoban}. The action space of these puzzles and mazes is discrete. The related problems are typically NP-hard or PSPACE-hard~\citep{culberson1997sokoban,hearn2009games} and solving them requires basic path and motion planning skills. Small versions of the mazes can be solved exactly by planning; larger instances are only suitable for approximate planning or learning methods. Mazes can be used to test algorithms for ``flat'' reinforcement learning path finding problems~\citep{tamar2016value,nardelli2018value,silver2017predictron}. Grids and box-pushing games such as Sokoban can also feature rooms or subgoals, which may then be used to test algorithms for hierarchically structured problems~\citep{farquhar2018treeqn,guez2019investigation,racaniere2017imagination,feng2020solving}. The problems can be made more difficult by enlarging the grid and by inserting more obstacles. Mazes and Sokoban grids are sometimes procedurally generated~\citep{shaker2016procedural,hendrikx2013procedural,togelius2013procedural}.
The goal for the algorithms is typically to find a solution for a grid of a certain difficulty class, to find a shortest solution, or to learn to solve a class of grids by training on a different class of grids, to test transfer learning. \subsection{Board Games} Another classic group of benchmarks for planning and learning algorithms is board games. Two-person zero-sum perfect information board games such as tic tac toe, chess, checkers, Go, and shogi have been used since the 1950s as benchmarks in AI. The action space of these games is discrete. Notable achievements were in checkers, chess, and Go, where human world champions were defeated in 1994, 1997, and 2016, respectively~\citep{schaeffer1996chinook,campbell2002deep,silver2016mastering}. Other games, such as Poker~\citep{brown2018superhuman} and Diplomacy~\citep{anthony2020learning}, are used as benchmarks as well. The board games are typically used ``as is'' and are not changed for different experiments (in contrast to mazes). They are fixed benchmarks, challenging and inspirational games where the goal is often beating human world champions. In model-based deep reinforcement learning they are used for self-play methods~\citep{tesauro1995td,anthony2017thinking,schrittwieser2019mastering}. Board games have been traditional mainstays of artificial intelligence, mostly associated with the symbolic reasoning approach to AI. In contrast, the next benchmark is associated with connectionist AI. \subsection{Atari} Shortly after 2010 the Atari Learning Environment (ALE)~\citep{bellemare2013arcade} was introduced for the sole purpose of evaluating reinforcement learning algorithms on high-dimensional input, to see if end-to-end pixel-to-joystick learning would be possible. ALE has been used widely in the field of reinforcement learning, after impressive results such as~\citep{mnih2015human}.
ALE runs on top of an emulator for the classic 1980s Atari gaming console, and features more than 50 arcade games such as Pong, Breakout, Space Invaders, Pac-Man, and Montezuma's Revenge. ALE is well suited for benchmarking perception and eye-hand-coordination type skills, less so for planning. ALE is mostly used for deep reinforcement learning algorithms in sensing and recognition tasks. ALE is a popular benchmark that has been used in many model-based papers~\citep{kaiser2019model,oh2015action,oh2017value,ha2018world,schrittwieser2019mastering}, and many model-free reinforcement learning methods. As with mazes, the action space is discrete and low-dimensional---9 joystick directions and a push-button---although the input space is high-dimensional. The ALE games are quite varied in nature. There are ``easy'' eye-hand-coordination tasks such as Pong and Breakout, and there are more strategic games with long periods of movement without changes in score, such as Pitfall and Montezuma's Revenge. The goal of ALE experiments is typically to achieve a score level comparable to humans in as many of the 57 games as possible (which has recently been achieved~\citep{badia2020agent57}). After this achievement, some researchers believe that the field is ready for more challenging benchmarks~\citep{machado2018revisiting}. \subsection{Real-Time Strategy and Video Games}\label{sec:rts} The Atari benchmarks are based on simple arcade games of 35--40 years ago, most of which mainly challenge eye-hand-coordination skills. Real-time strategy (RTS) games such as StarCraft, and video games such as DOTA and Capture the Flag, provide more challenging tasks. The strategy space is large; the state space of StarCraft has been estimated at $10^{1685}$, much larger than chess ($10^{47}$) or Go ($10^{147}$).
Most RTS games are multi-player, non-zero-sum, imperfect information games that also feature high-dimensional pixel input, reasoning, team collaboration, as well as eye-hand-coordination. The environment is stochastic, and the action space is a mix of discrete and continuous actions. A very high degree of diversity is necessary to prevent specialization. Despite the challenging nature, impressive achievements have been reported recently in all three mentioned games, where human performance was matched or even exceeded~\citep{vinyals2019grandmaster,berner2019dota,jaderberg2019human}. In these efforts deep model-based reinforcement learning is combined with multi-agent and population based methods. These mixed approaches achieve impressive results on RTS games: the added diversity diminishes the specialization trap of two-agent approaches~\citep{vinyals2019grandmaster}. They may combine aspects of self-play and latent models, although often the well-tuned combination of a number of methods is credited with the high achievements in these games. The mixed approaches are not listed separately in the taxonomy of the next section. \begin{figure}[t] \begin{center} \includegraphics[width=8cm]{halfcheetah} \end{center} \caption{Half-Cheetah}\label{fig:cheetah} \end{figure} \subsection{Robotics} Reinforcement learning is a paradigm that is well-suited for modeling planning and control problems in robotics. Instead of minutely programming high-dimensional robot movements step-by-step, reinforcement learning is used to train behavior more flexibly, and possibly end-to-end from camera input to arm manipulation. Training with real-world robots is expensive and complicated. In robotics, sample efficiency is of great importance because of the cost of interaction with real-world environments and the wear of physical robot arms. For this reason virtual environments have been devised. \citet{todorov2012mujoco} introduced MuJoCo, a software suite for simulated robot behavior.
It is used extensively in reinforcement learning research. Well-known benchmark tasks are Reacher, Swimmer, Half-Cheetah, and Ant, in which the agent's task is to teach itself the appropriate movement actions, see Figure~\ref{fig:cheetah} for an example. Many model-based deep reinforcement learning methods are tested on MuJoCo~\citep{heess2015learning,chua2018deep, janner2019trust,gu2016continuous,feinberg2018model,clavera2018model,azizzadenesheli2018surprising,hafner2018learning} and other robotics tasks~\citep{tassa2012synthesis,levine2014learning,finn2017deep, hafner2019dream,sekar2020planning}. The action space of these tasks is continuous, and the emphasis in experiments is on sample efficiency. \subsubsection*{Conclusion} No discussion of empirical deep reinforcement learning is complete without the mention of OpenAI Gym~\citep{brockman2016openai}. Gym is a toolkit for developing and comparing reinforcement learning algorithms and provides a training environment for reinforcement learning agents. Gym includes interfaces to benchmark sets such as ALE and MuJoCo. Other software suites include~\citep{tassa2018deepmind,vinyals2017starcraft,yu2020meta,bellemare2013arcade,todorov2012mujoco}. Baseline implementations of many deep reinforcement learning agent algorithms can also be found at the Gym website \url{https://gym.openai.com}. Research into suitable benchmarks is active. Further interesting approaches are Procedural Content Generation~\citep{togelius2013procedural}, MuJoCo Soccer~\citep{liu2019emergent}, and the Obstacle Tower Challenge~\citep{juliani2019obstacle}. Now that we have seen the benchmarks on which our model-based methods are tested, it is time for an in-depth discussion and outlook for future work. \section{Discussion and Outlook}\label{sec:dis} Model-based reinforcement learning promises lower sample complexity.
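This sample-multiplying effect can be made concrete with a small Dyna-style sketch: every real transition is stored in a table model and replayed several times as ``imagined'' updates. The five-state chain environment and all hyperparameters below are illustrative assumptions, not taken from any of the cited papers.

```python
import random

# Dyna-style tabular Q-learning on a toy 5-state chain (illustrative only):
# action 1 moves right, action 0 moves left, reward 1 on reaching the goal.
# Each real step is stored in a table model and replayed N_PLANNING times.

N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPS, N_PLANNING = 0.5, 0.95, 0.1, 10

def step(s, a):
    s2 = min(s + 1, GOAL) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

Q = {(s, a): 0.0 for s in range(N_STATES) for a in (0, 1)}
model = {}  # (state, action) -> (reward, next state): the learned table model

random.seed(0)
for _ in range(50):  # 50 real episodes
    s, done = 0, False
    while not done:
        a = random.choice((0, 1)) if random.random() < EPS \
            else max((0, 1), key=lambda x: Q[(s, x)])
        s2, r, done = step(s, a)
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, 0)], Q[(s2, 1)]) - Q[(s, a)])
        model[(s, a)] = (r, s2)
        for _ in range(N_PLANNING):  # imagination: free replay of stored transitions
            ps, pa = random.choice(list(model))
            pr, ps2 = model[(ps, pa)]
            Q[(ps, pa)] += ALPHA * (pr + GAMMA * max(Q[(ps2, 0)], Q[(ps2, 1)]) - Q[(ps, pa)])
        s = s2

greedy = [max((0, 1), key=lambda x: Q[(s, x)]) for s in range(GOAL)]
print(greedy)  # the greedy policy should move right toward the goal in every state
```

Each environment step here triggers ten extra model-based updates, so far fewer real samples are needed than with pure model-free Q-learning on the same chain.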
Sutton's work on imagination, where a model is learned from environment samples and then used to generate extra imagined samples for free, clearly illustrates this aspect of model-based reinforcement learning. The transition model acts as a multiplier on the amount of information that is used from each environment sample. Another, and perhaps more important, aspect is generalization performance. Model-based reinforcement learning builds a dynamics model of the domain. This model can be used multiple times, for new problem instances, but also for new problem classes. By learning the transition and reward model, model-based reinforcement learning may be better at capturing the essence of a domain than model-free methods. Model-based reinforcement learning may thus be better suited for solving transfer learning problems, and for solving long sequential decision making problems, a class of problems that is important in real-world decision making. Classical table based approaches and Gaussian Process approaches have been quite successful in achieving low sample complexity for problems of moderate complexity~\citep{sutton2018introduction,deisenroth2013survey,kober2013reinforcement}. However, the topic of the current survey is \emph{deep} models, for high dimensional problems with complex, non-linear, and discontinuous functions. These application domains pose a problem for classical model-based approaches. Since high-capacity deep neural networks require many samples to achieve high generalization (without overfitting), a challenge in deep model-based reinforcement learning is to maintain low sample complexity. We have seen a wide range of approaches that attempt this goal. Let us now discuss and summarize the benchmarks, the approaches for deep models, and possible future work. \subsection{Benchmarking} Benchmarks are the lifeblood of AI. We must test our algorithms to know if they exhibit intelligent behavior.
Many of the benchmarks allow difficult decision making situations to be modeled. Two-person games allow modeling of competition. In real-world decision making, collaboration and negotiation are also important. Real-time strategy games allow collaboration, competition, and negotiation to be modeled, and multi-agent and hierarchical algorithms are being developed for these decision making situations~\citep{jaderberg2019human,kulkarni2016hierarchical,makar2001hierarchical}. Unfortunately, the wealth of choice in benchmarks makes it difficult to compare results that are reported in the literature. Not all authors publish their code. We typically need to rerun experiments with identical benchmarks to compare algorithms conclusively. Outcomes often differ from the original works, also because not all hyperparameter settings are always clear and implementation details may differ~\citep{henderson2017deep,islam2017reproducibility,wang2019benchmarking}. Authors should publish their code if our field wants to make progress. The recent attention for reproducibility in deep reinforcement learning is helping the field move in the right direction. Many papers now publish their code and the hyperparameter settings that were used in the reported experiments. \subsection{Curriculum Learning and Latent Models} Model-based reinforcement learning works well when the transition and reward models are {\bf given} by the rules of the problem. We have seen how perfect models in games such as chess and Go allow deep and accurate planning. Systems were constructed~\citep{tesauro1995td,silver2018general,anthony2017thinking} where curriculum learning facilitated tabula rasa self-learning of highly complex games of strategy; see also~\citep{bengio2009curriculum,plaat2020learning,narvekar2020curriculum}. The success of self-play has led to interest in curriculum learning in single-agent problems~\citep{feng2020solving,doan2019line,duan2016rl,laterre2018ranked}.
When the rules are not given, they might be {\bf learned}, to create a transition model. Unfortunately, the planning accuracy of learned models is less than perfect. We have seen efforts with Gaussian Processes and ensembles to improve model quality, and efforts with local planning and Model-Predictive Control, to limit the damage of imperfections in the transition model. We have also discussed, at length, latent, abstract, models, where for each of the functions of the Markov Decision Process a separate sub-module is learned. Latent models achieve better accuracy with explicit planning, as planning occurs in latent space~\citep{oh2017value,hafner2019dream,kaiser2019model}. The work on Value Iteration Networks~\citep{tamar2016value} inspired {\bf end-to-end} learning, where both the transition model and the planning algorithm are learned, end-to-end. Combined with latent models (or World Models~\citep{ha2018world}) impressive results were achieved~\citep{silver2017predictron}, and model/planning accuracy was improved to the extent that tabula rasa curriculum self-learning was achieved, in MuZero~\citep{schrittwieser2019mastering}, for chess, Go, and Atari games. End-to-end learning and latent models together allowed the circle to be closed, achieving curriculum learning self-play also for problems where the rules are not given. See Figure~\ref{fig:influence} for relations between the different approaches of this survey. The main categories are color-coded, latent methods are dashed. \begin{figure} \begin{center} \input influence \caption{Influence of Model-Based Deep Reinforcement Learning Approaches. Green: given transitions/explicit planning; Red: learned transitions/explicit planning; Yellow: end-to-end transitions and planning. Dashed: latent models. }\label{fig:influence} \end{center} \end{figure} Optimizing directly in a latent space has been successful with a generative adversarial network~\citep{volz2018evolving}.
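The local-planning idea mentioned above, Model-Predictive Control over a learned one-step model with replanning at every step, can be sketched with random shooting. The point-mass dynamics, cost function, and all constants below are hypothetical stand-ins for a learned model, chosen only for illustration.

```python
import random

# Random-shooting MPC with a one-step model (a hand-coded point-mass stand-in
# for a learned model; dynamics, cost, and constants are illustrative
# assumptions). At every step: sample action sequences, roll them out through
# the model, execute only the best first action, then replan -- which limits
# the damage of imperfections in the model.

def model(s, a):              # one-step model f(s, a) -> s'
    pos, vel = s
    return (pos + 0.1 * vel, vel + 0.1 * a)

def cost(s):                  # distance of the position to the origin
    return abs(s[0])

rng = random.Random(0)

def mpc_action(s, horizon=10, n_candidates=200):
    best_a, best_c = 0.0, float("inf")
    for _ in range(n_candidates):           # shoot random action sequences
        seq = [rng.uniform(-1, 1) for _ in range(horizon)]
        sim, c = s, 0.0
        for a in seq:                       # open-loop rollout through the model
            sim = model(sim, a)
            c += cost(sim)
        if c < best_c:
            best_c, best_a = c, seq[0]
    return best_a                           # execute the first action only

s = (1.0, 0.0)                # start one unit from the origin, at rest
for _ in range(50):           # closed loop: replan at every step
    s = model(s, mpc_action(s))
print(s)                      # the point mass should end up near the origin
```

Because only the first action of each plan is executed before replanning, errors in the model accumulate over at most one step of real interaction, which is why MPC tolerates imperfect learned models better than long open-loop plans.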
World Models are linked to neuroevolution by~\citep{risi2019deep}. For future work, the combination of curriculum learning, ensembles, and latent models appears quite fruitful. Self-play has been used to achieve further success in other challenging games, such as StarCraft~\citep{vinyals2019grandmaster}, DOTA~\citep{berner2019dota}, Capture the Flag~\citep{jaderberg2019human}, and Poker~\citep{brown2019superhuman}. In multi-agent real-time strategy games the aspect of collaboration and teams is important, and self-play model-based reinforcement learning has been combined with multi-agent, hierarchical, and population based methods. In future work, more research is needed to explore the potential of (end-to-end) planning with latent representational models more fully for larger problems, for transfer learning, and for cooperative problems. More research is needed in uncertainty-aware neural networks. For such challenging problems as real-time strategy and other video games, more combinations of deep model-based reinforcement learning with multi-agent and population based methods can be expected~\citep{vinyals2019grandmaster,risi2020chess,back1996evolutionary}. In end-to-end planning and learning, latent models introduce differentiable versions of more and more classical algorithms. For example, World Models~\citep{ha2018world} have a trio of Vision, Memory, Control models, reminding us of the Model, View, Controller design pattern~\citep{gamma2009design}, or even of classical computer architecture elements such as ALU, RAM, and Control Unit~\citep{hennessy2011computer}. Future work will show if differentiable algorithms will be found for even more classical algorithms. \section{Conclusion}\label{sec:con} Deep learning has revolutionized reinforcement learning. The new methods allow us to approach more complicated problems than before. Control and decision making tasks involving high dimensional visual input come within reach. 
In principle, model-based methods offer the advantage of lower sample complexity over model-free methods, because of their transition model. However, traditional methods, such as Gaussian Processes, that work well on moderately complex problems with few samples, do not perform well on high-dimensional problems. High-capacity models have high sample complexity, and finding methods that generalize well with low sample complexity has been difficult. In the last five years many new methods have been devised, and great success has been achieved in model-free and in model-based deep reinforcement learning. This survey has summarized the main ideas of recent papers in three approaches. Differentiable methods appear for more and more aspects of model-based reinforcement learning algorithms. Latent models condense complex problems into compact latent representations that are easier to learn. End-to-end curriculum learning of latent planning and learning models has been achieved. In the discussion we mentioned open problems for each of the approaches, where we expect worthwhile future work will occur, such as curriculum learning, uncertainty modeling, and transfer learning by latent models. Benchmarks are important to test progress, and more benchmarks of latent models can be expected. Curriculum learning has shown how complex problems can be learned relatively quickly from scratch, and latent models allow planning in efficient latent spaces. Impressive results have been reported; future work can be expected in transfer learning with latent models, and the interplay of curriculum learning with (generative) latent models, in combination with end-to-end learning of larger problems. Benchmarks in the field have also had to keep up. Benchmarks have progressed from single-agent grid worlds to multi-agent real-time strategy games and complicated camera-arm robotics manipulation tasks. Reproducibility and benchmarking studies are of great importance for real progress.
In real-time strategy games model-based methods are being combined with multi-agent, hierarchical, and evolutionary approaches, allowing the study of collaboration, competition, and negotiation. Model-based deep reinforcement learning is a vibrant field of AI with a long history before deep learning. The field is blessed with a high degree of activity, an open culture, clear benchmarks, shared code-bases~\citep{brockman2016openai,vinyals2017starcraft,tassa2018deepmind} and a quick turnaround of ideas. We hope that this survey will lower the barrier of entry even further. \section*{Acknowledgments} We thank the members of the Leiden Reinforcement Learning Group, and especially Thomas Moerland and Mike Huisman, for many discussions and insights. {\footnotesize \bibliographystyle{plainnat}
\chapter{\clearpage\thispagestyle{plain}\global\@topnum\z@ \@afterindenttrue \secdef\@chapter\@schapter} \makeatother \newtheorem{thm} {Theorem} [section] \newtheorem{prop}{Proposition} [section] \newtheorem{lem} {Lemma} [section] \newtheorem{cor} {Corollary}[section] \newtheorem{Def} {Definition} [section] \newtheorem{defs} {Definitions} [section] \newtheorem{Not} [Def] {Notation} \newtheorem{thmgl} {Theorem} \newtheorem{propgl}{Proposition} \newtheorem{lemgl} {Lemma} \newtheorem{corgl} {Corollary} \newtheorem{defgl} {Definition} \newtheorem{defsgl} [defgl]{Definitions} \newtheorem{notgl} [defgl] {Notation} \newtheorem{thmnn}{Theorem} \renewcommand{\thethmnn}{\!\!} \newtheorem{propnn}{Proposition} \renewcommand{\thepropnn}{\!\!} \newtheorem{lemnn}{Lemma} \renewcommand{\thelemnn}{\!\!} \newtheorem{cornn}{Corollary} \renewcommand{\thecornn}{\!\!} \newtheorem{defnn}{Definition} \renewcommand{\thedefnn}{\!\!} \newtheorem{defsnn}{Definitions} \renewcommand{\thedefsnn}{\!\!} \newtheorem{notnn}{Notation} \renewcommand{\thenotnn}{\!\!} \theoremstyle{definition} \newtheorem{alg} {Algorithm} [section] \newtheorem{exe} {Exercise} [section] \newtheorem{exes} {Exercises} [section] \newtheorem{rem} {Remark} [section] \newtheorem{rems} {Remarks} [section] \newtheorem{exa} [rem] {Example} \newtheorem{exas} [rem] {Examples} \newtheorem{alggl} {Algorithm} \newtheorem{exegl} {Exercise} \newtheorem{exesgl} {Exercises} \newtheorem{remgl} {Remark} \newtheorem{remsgl} {Remarks} \newtheorem{exagl} [remgl] {Example} \newtheorem{exasgl} [remgl] {Examples} \newtheorem{algnn} {Algorithm} \renewcommand{\thealgnn}{\!\!} \newtheorem{exenn} {Exercise} \renewcommand{\theexenn}{\!\!} \newtheorem{exesnn} {Exercises} \renewcommand{\theexesnn}{\!\!} \newtheorem{remnn}{Remark} \renewcommand{\theremnn}{\!\!} \newtheorem{remsnn}{Remarks} \renewcommand{\theremsnn}{\!\!} \newtheorem{exann}{Example} \renewcommand{\theexann}{\!\!} \newtheorem{exasnn}{Examples} \renewcommand{\theexasnn}{\!\!} 
\newcommand{\fo}{\footnotesize} \newcommand{\mf}{\mathfrak} \newcommand{\nts}{\negthinspace} \newcommand{\sm}{\setminus} \newcommand{\ot}{\otimes} \newcommand{\Hom}{{\rm Hom}} \newcommand{\Sym}{{\rm Sym}} \renewcommand{\Im}{{\rm Im}} \newcommand{\rad}{{\rm rad}} \newcommand{\gr}{{\rm gr}} \renewcommand{\t}{\mathfrak{t}} \renewcommand{\b}{\mathfrak{b}} \newcommand{\ad}{{\rm ad}} \newcommand{\Ad}{{\rm Ad}} \newcommand{\Sp}{{\rm Sp}} \newcommand{\WHILE}{{\bf while\ }} 
\begin{document} \title{The centre of quantum {\Large $\mf{sl}_n$}\ at a root of unity} \author{Rudolf Tange} \begin{abstract} It is proved that the centre $Z$ of the simply connected quantised universal enveloping algebra $U_{\varepsilon,P}(\mf{sl}_n)$, $\varepsilon$ a primitive $l$-th root of unity, $l$ an odd integer $>1$, has a rational field of fractions. Furthermore it is proved that if $l$ is a power of an odd prime, $Z$ is a unique factorisation domain. \end{abstract} \address{Department of Mathematics, The University of Manchester, Oxford Road, M13 9PL, UK} \email{[email protected]} \maketitle \section*{Introduction} In \cite{DCKP} De Concini, Kac and Procesi introduced the simply connected quantised universal enveloping algebra $U=U_{\varepsilon,P}({\mf{g}})$ over $\mathbb C$ at a primitive $l$-th root of unity $\varepsilon$ associated to a simple finite dimensional complex Lie algebra ${\mf{g}}$. The importance of the study of the centre $Z$ of $U$ and its spectrum ${\rm Maxspec}(Z)$ is also pointed out in \cite{DCK}. In this article we consider the following two conjectures concerning the centre $Z$ of $U$ in the case ${\mf{g}}=\mf{sl}_n$: \begin{enumerate}[1.] \item $Z$ has a rational field of fractions. \item $Z$ is a unique factorisation domain (UFD). 
\end{enumerate} The same conjectures can be made for the universal enveloping algebra $U({\mf{g}})$ of the Lie algebra ${\mf{g}}$ of a reductive group over an algebraically closed field of positive characteristic. In \cite{PrT} these conjectures were proved for ${\mf{g}}=\mf{gl}_n$ and for ${\mf{g}}=\mf{sl}_n$ under the condition that $n$ is nonzero in the field. The second conjecture was made by Braun and Hajarnavis in \cite{BHa} for the universal enveloping algebra $U({\mf{g}})$ and suggested for $U=U_{\varepsilon,P}({\mf{g}})$. There it was also proved that $Z$ is locally a UFD. In Section~\ref{s.quf} below, this conjecture is proved for $\mf{sl}_n$ under the condition that $l$ is a power of a prime ($\ne 2$). The auxiliary results and step 1 of the proof of Theorem~\ref{thm.quf}, however, hold without extra assumptions on $l$. The first conjecture was posed as a question by J.\ Alev for the universal enveloping algebra $U({\mf{g}})$. It can be considered as a first step towards a proof of a version of the Gelfand-Kirillov conjecture for $U$. Indeed the Gelfand-Kirillov conjecture for $\mf{gl}_n$ and $\mf{sl}_n$ in positive characteristic\footnote{The {\it Gelfand-Kirillov conjecture} for a Lie algebra ${\mf{g}}$ over $K$ states that the fraction field of $U({\mf{g}})$ is isomorphic to a Weyl skew field $D_n(L)$ over a purely transcendental extension $L$ of $K$.} was proved recently by J.-M.\ Bois in his PhD thesis \cite{Bois} using results in \cite{PrT} on the centres of their universal enveloping algebras (for $\mf{sl}_n$ it was required that $n\ne0$ in the field). It should be noted that the Gelfand-Kirillov conjecture for $U({\mf{g}})$ in characteristic $0$ (and in positive characteristic) is still open for ${\mf{g}}$ not of type $A$. As in \cite{PrT}, a certain semi-invariant $d$ for a maximal parabolic subgroup of ${\rm GL}_n$ will play an important r\^ole. 
Later we learned that (a version of) this semi-invariant already appeared before in the literature, see \cite{Dix}. For quantum versions, see \cite{FM1} and \cite{FM2}. \section{Preliminaries} In this section we recall some basic results, mostly from \cite{DCKP}, that are needed to prove the main results (Theorems~\ref{thm.qrat} and \ref{thm.quf}) of this article. Short proofs are added in case the results are not explicitly stated in \cite{DCKP}. \subsection{Elementary definitions} \ \\ Let ${\mf{g}}$ be a simple finite dimensional Lie algebra over $\mathbb{C}$ with Cartan subalgebra ${\mf{h}}$, let $\Phi$ be its root system relative to ${\mf{h}}$, let $(\alpha_1,\ldots,\alpha_r)$ be a basis of $\Phi$ and let $(.|.)$ be the symmetric bilinear form on ${\mf{h}}^*$ which is invariant for the Weyl group $W$ and satisfies $(\alpha|\alpha)=2$ for all short roots $\alpha$. Put $d_i=(\alpha_i|\alpha_i)/2$. The root lattice and the weight lattice of $\Phi$ are denoted by resp. $Q$ and $P$. Note that $(.|.)$ is integral on $Q\times P$. Mostly we will be in the situation where ${\mf{g}}=\mf{sl}_n$. In this case $r=n-1$ and all the $d_i$ are equal to 1. We then take ${\mf{h}}$ the subalgebra that consists of the diagonal matrices in $\mf{sl}_n$ and we take $\alpha_i=A\mapsto A_{i\,i}-A_{i+1\,i+1}:{\mf{h}}\to\mathbb{C}$. Let $l$ be an odd integer $>1$ and coprime to all the $d_i$, let $\varepsilon$ be a primitive $l$-th root of unity and let $\Lambda$ be a lattice between $Q$ and $P$. Let $U=U_{\varepsilon,\Lambda}({\mf{g}})$ be the quantised universal enveloping algebra of ${\mf{g}}$ at the root of unity $\varepsilon$ defined in \cite{DCK} and denote the centre of $U$ by $Z$. Since $U$ has no zero divisors (see \cite{DCK} 1.6-1.8), $Z$ is an integral domain. Let $U^+,U^-,U^0$ be the subalgebras of $U$ generated by resp. the $E_i$, the $F_i$ and the $K_\lambda$ with $\lambda\in\Lambda$. Then the multiplication $U^-\ot U^0\ot U^+\to U$ is an isomorphism of vector spaces. 
We identify $U^0$ with the group algebra $\mathbb{C}\Lambda$ of $\Lambda$. Note that $W$ acts on $U^0$, since it acts on $\Lambda$. Let $T$ be the complex torus $\Hom(\Lambda,\mathbb{C}^\times)$. Then $T$ can be identified with ${\rm Maxspec}(U^0)=\Hom_{\mathbb{C}\text{-Alg}}(U^0,\mathbb{C})$ and for the action of $T$ on $U^0=\mathbb{C}[T]$ by translation we have $t\cdot K_\lambda=t(\lambda)K_\lambda$. The {\it braid group} $\mathcal{B}$ acts on $U$ by automorphisms. See \cite{DCKP} 0.4. The subalgebra $Z_0$ of $U$ is defined as the smallest $\mathcal{B}$-stable subalgebra containing the elements $K_\lambda^l$, $\lambda\in\Lambda$ and $E_i^l,F_i^l$, $i=1,\ldots,r$. We have $Z_0\subseteq Z$. Put $z_\lambda=K_\lambda^l$ and let $Z_0^0$ be the subalgebra of $Z_0$ spanned by the $z_\lambda$. Then the identification of $U^0$ with $\mathbb{C}\Lambda$ gives an identification of $Z_0^0$ with $\mathbb{C}l\Lambda$. If we replace $K_\lambda$ by $z_\lambda$ in the foregoing remarks, then we obtain an identification of $T$ with ${\rm Maxspec}(Z_0^0)$. Put $Z_0^\pm=Z_0\cap U^\pm$. Then the multiplication $Z_0^-\ot Z_0^0\ot Z_0^+\to Z_0$ is an isomorphism (of algebras). See e.g. \cite{DCK} 3.3. \subsection{The Harish-Chandra centre $Z_1$ and the quantum restriction theorem} \ \\ Let $Q^{^{_{_\vee}}}$ be the dual root lattice, that is, the $\mathbb{Z}$-span of the dual root system $\Phi^{^{_{_\vee}}}$. We have $Q^{^{_{_\vee}}}\cong P^*\hookrightarrow\Lambda^*$. Denote the image of $Q^{^{_{_\vee}}}$ under the homomorphism $f\mapsto(\lambda\mapsto (-1)^{f(\lambda)}):\Lambda^*\to T$ by $Q_2^{^{_{_\vee}}}$. Then the elements $\ne1$ of $Q_2^{^{_{_\vee}}}$ are of order 2 and $U^{0 Q_2^{^{_{_\vee}}}}=\mathbb{C}(\Lambda\cap 2P)$. Since $Q_2^{^{_{_\vee}}}$ is $W$-stable, we can form the semi-direct product $\tilde{W}=W\ltimes Q_2^{^{_{_\vee}}}$ and then $U^{0 \tilde{W}}=(\mathbb{C}(\Lambda\cap 2P))^W$. 
Let $h':U=U^-\ot U^0\ot U^+\to U^0$ be the linear map taking $x\ot u\ot y$ to $\varepsilon_U(x)u\,\varepsilon_U(y)$, where $\varepsilon_U$ is the counit of $U$. Then $h'$ is a projection of $U$ onto $U^0$. Furthermore $h'(Z_0)=Z_0^0=\mathbb{C}l\Lambda$ and $h'|_{Z_0}:Z_0\to Z_0^0$ has a similar description as $h'$ and is a homomorphism of algebras. Define the shift automorphism $\gamma$ of $U^{0 Q_2^{^{_{_\vee}}}}$ by setting $\gamma(K_\lambda)=\varepsilon^{(\rho|\lambda)}K_\lambda$ for $\lambda\in\Lambda\cap 2P$. Here $\rho$ is the half sum of the positive roots. Note that $\gamma={\rm id}$ on $Z_0^{0 Q_2^{^{_{_\vee}}}}=\mathbb{C}l(\Lambda\cap 2P)$. In \cite{DCKP} p 174 and \cite{DCK} \S2, there was constructed an injective homomorphism $\overline{h}:U^{0 \tilde{W}}\to Z$, whose image is denoted by $Z_1$, such that $h'(Z_1)\subseteq U^{0 Q_2^{^{_{_\vee}}}}$ and the inverse $$h:Z_1\stackrel{\sim}{\to}U^{0 \tilde{W}}$$ of $\overline{h}$ is equal to $\gamma^{-1}\circ h'$. Note that $h=h'$ on $Z_0\cap Z_1$ and that $h'|_{Z_1}$ is a homomorphism of algebras. Since ${\rm Ker}(h')$ is stable under left and right multiplication by elements of $U^0$ and under multiplication by elements of $Z$, we can conclude that the restriction of $h'$ to the subalgebra generated by $Z_0$ and $Z_1$ is a homomorphism of algebras. From now on we assume that $\Lambda=P$. Let $G$ be the simply connected almost simple complex algebraic group with Lie algebra ${\mf{g}}$ and let $T$ be a maximal torus of $G$. We identify $\Phi$ and $W$ with the root system and the Weyl group of $G$ relative to $T$. Note that the character group $X(T)$ of $T$ is equal to $P$. In case ${\mf{g}}=\mf{sl}_n$ we take $T$ the subgroup of diagonal matrices in ${\rm SL}_n$. \subsection{Generators for $\mathbb{C}[G]^G$ and $Z_1$}\label{ss.groupinvgen} \ \\ We denote the fundamental weights corresponding to the basis $(\alpha_1,\ldots,\alpha_r)$ by $\varpi_1,\ldots,\varpi_r$. As is well known, they form a basis of $P$. 
Let $\mathbb{C}[G]$ be the algebra of regular functions on $G$. Then the restriction homomorphism $\mathbb{C}[G]\to\mathbb{C}[T]=\mathbb{C}P$ induces an isomorphism $\mathbb{C}[G]^G\stackrel{\sim}{\to}\mathbb{C}[T]^W=(\mathbb{C}P)^W$, see \cite{St1} \S6. For $\lambda\in P$ denote the basis element of $\mathbb{C}P$ corresponding to $\lambda$ by $e(\lambda)$, denote the $W$-orbit of $\lambda$ by $W\nts\cdot\nts\lambda$ and put ${\rm sym}(\lambda)=\sum_{\mu\in W\nts\cdot\nts\lambda}e(\mu)$. Then the ${\rm sym}(\varpi_i)$, $i=1,\ldots,r$ are algebraically independent generators of $(\mathbb{C}P)^W$. See \cite{Bou1} no. VI.3.4, Thm. 1. For a field $K$, we denote the vector space of all $n\times n$ matrices over $K$ by ${\rm Mat}_n={\rm Mat}_n(K)$. Now assume that $K={\mathbb C}$. In this section we denote the restriction to ${\rm SL}_n$ of the standard coordinate functionals on ${\rm Mat}_n$ by $\xi_{ij}$, $1\le i,j\le n$. Furthermore, for $i\in\{1,\ldots,n-1\}$, $s_i\in\mathbb{C}[{\rm SL}_n]$ is defined by $s_i(A)={\rm tr}(\wedge^iA)$, where $\wedge^iA$ denotes the $i$-th exterior power of $A$ and ${\rm tr}$ denotes the trace. Then $\varpi_i=(\xi_{11}\cdots\xi_{ii})|_T$ and therefore ${\rm sym}(\varpi_i)=s_i|_T$~(*), the $i$-th elementary symmetric function in the $\xi_{jj}|_T$. See \cite{PrT} 2.4. In the general case we use the restriction theorem for $\mathbb{C}[G]$ and define $s_i\in\mathbb{C}[G]^G$ by (*). So then $s_1,\ldots,s_r$ are algebraically independent generators of $\mathbb{C}[G]^G$. Identifying $U^0$ and $\mathbb{C}P$, we have $U^{0 \tilde{W}}=(\mathbb{C}2P)^W$. Put $u_i=\overline{h}({\rm sym}(2\varpi_i))$. Then $h(u_i)={\rm sym}(2\varpi_i)$ and $u_1,\ldots,u_r$ are algebraically independent generators of $Z_1$. 
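For illustration (a routine verification, not taken from \cite{PrT} or \cite{DCKP}), one can check (*) directly in the smallest case ${\mf{g}}=\mf{sl}_2$: here $W\nts\cdot\nts\varpi_1=\{\varpi_1,-\varpi_1\}$, so for $A={\rm diag}(t,t^{-1})\in T$ we have $e(\varpi_1)(A)=t$, $e(-\varpi_1)(A)=t^{-1}$ and
$${\rm sym}(\varpi_1)(A)=t+t^{-1}=\xi_{11}(A)+\xi_{22}(A)=s_1(A)={\rm tr}(A),$$
in accordance with $s_1|_T$ being the first elementary symmetric function in $\xi_{11}|_T$ and $\xi_{22}|_T$.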
\subsection{The cover $\pi$ and the intersection $Z_0\cap Z_1$} \ \\ Let $\Phi^+$ be the set of positive roots corresponding to the basis $(\alpha_1,\ldots,\alpha_r)$ of $\Phi$ and let $U_+$ resp.\ $U_-$ be the maximal unipotent subgroup of $G$ corresponding to $\Phi^+$ resp. $-\Phi^+$. If ${\mf{g}}=\mf{sl}_n$, then $U_+$ and $U_-$ consist of the upper resp. lower triangular matrices in ${\rm SL}_n$ with ones on the diagonal. Put $\mathcal{O}=U_-TU_+$. Then $\mathcal{O}$ is a nonempty open and therefore dense subset of $G$. Furthermore, the group multiplication defines an isomorphism $U_-\times T\times U_+\stackrel{\sim}{\to}\mathcal{O}$ of varieties. Put $\Omega={\rm Maxspec}(Z_0)$. In \cite{DCK} (3.4-3.6) there was constructed a group $\tilde{G}$ of automorphisms of $\hat{U}=\hat{Z_0}\ot_{Z_0}U$, where $\hat{Z}_0$ denotes the algebra of holomorphic functions on the complex analytic variety $\Omega$. The group $\tilde{G}$ leaves $\hat{Z_0}$ and $\hat{Z}=\hat{Z_0}\ot_{Z_0}Z$ stable. In particular it acts by automorphisms on the complex analytic variety $\Omega$. In \cite{DCKP} this action is called the ``quantum coadjoint action''. In \cite{DCKP} \S4 there was constructed an unramified cover $\pi:\Omega\to \mathcal{O}$ of degree $2^r$. I give a short description of the construction of $\pi$. Put $\Omega^\pm={\rm Maxspec}(Z_0^\pm)$. Then we have $\Omega=\Omega^-\times T\times\Omega^+$. Now $Z:\Omega\to T$ is defined as the projection on $T$, $X:\Omega\to U_+$ and $Y:\Omega\to U_-$ as the projection on $\Omega^\pm$ followed by some isomorphism $\Omega^\pm\stackrel{\sim}{\to}U_\pm$ and $\pi$ is defined as $YZ^2X$ (multiplication in $G$).\footnote{In \cite{DCKP} $Z^2$ is denoted by $Z$. The notation here comes from \cite{DCP}. The centre of $U$ is denoted by the same letter, but this will cause no confusion.} This means: $\pi(x)=Y(x)Z(x)^2X(x)$. 
The following proposition says something about how $\tilde{G}$ and $\pi$ are related to the ``Harish-Chandra centre'' $Z_1$ and the conjugation action of $G$ on $\mathbb{C}[G]$. For more precise statements see 5.4, 5.5 and \S6 in \cite{DCKP}. \begin{thmgl}[\cite{DCKP} Prop 6.3, Thm 6.7]\label{thm.coverHCcentre} Consider $\pi$ as a morphism to $G$. Then the comorphism $\pi^{co}:\mathbb{C}[G]\to Z_0$ is injective and the following holds: \begin{enumerate}[(i)] \item $Z^{\tilde{G}}=Z_1$.\ \footnotemark \item $\pi^{co}$ induces an isomorphism $\mathbb{C}[G]^G\stackrel{\sim}{\to}Z_0^{\tilde{G}}=Z_0\cap Z_1$. \item The monomorphism $(\mathbb{C}P)^W\stackrel{\sim}{\to}(\mathbb{C}P)^W$ obtained by combining the isomorphism in (ii) with the restriction homomorphism $\mathbb{C}[G]\to\mathbb{C}[T]={\mathbb C}P$ and $h:Z_1\to U^0={\mathbb C}P$, is given by $x\mapsto 2lx:P\to P$. In particular $h(Z_0\cap Z_1)=(\mathbb{C}2lP)^W$. \end{enumerate} \end{thmgl} \footnotetext{$\tilde{G}$ is a group of automorphisms of the algebra $\hat{U}$ and does not leave $Z$ stable. However, $S^{\tilde{G}}$ can be defined in the obvious way for every subset $S$ of $\hat{U}$.} I will give the proof of (iii). If we identify $Z_0^0$ with ${\mathbb C}[T]$, then the homomorphism $h'|_{Z_0}:Z_0\to Z_0^0$ is the comorphism of a natural embedding $T\hookrightarrow\Omega$. Now we have a commutative diagram \begin{diagram}[small,nohug] G &\lTo^{\pi} &\Omega\\ \uIntoB& &\uIntoB\\ T &\lTo^{t\mapsto t^2}&T\\ \end{diagram} Expressed in terms of the comorphisms this reads: $(x\mapsto 2x)\circ{\rm res}_{G,T}={\rm res}_{\Omega,T}\circ\pi^{co}$, where ${\rm res}_{G,T}$ and ${\rm res}_{\Omega,T}$ are the restrictions to $T$ and the comorphism of the morphism between the tori is denoted by its restrictions to the character groups. Now we identify $U^0$ with ${\mathbb C}[T]$. 
Composing both sides on the left with $x\mapsto lx$ and using $(x\mapsto lx)\circ{\rm res}_{\Omega,T}=h'|_{Z_0}:Z_0\to U^0=\mathbb{C}P$ we obtain $(x\mapsto 2lx)\circ{\rm res}_{G,T}=h'\circ\pi^{co}$. If we restrict both sides of this equality to $\mathbb{C}[G]^G$, then we can replace $h'$ by $h$ and we obtain the assertion.\\ \subsection{$Z_0$ and $Z_1$ generate $Z$}\label{ss.Z0andZ1} \begin{thmgl}[\cite{DCKP} Proposition 6.4, Theorem 6.4]\label{thm.Z0andZ1} Let $u_1,\ldots,u_r$ be the elements of $Z_1$ defined in Subsection~\ref{ss.groupinvgen}. Then the following holds: \begin{enumerate}[(i)] \item The multiplication $Z_1\ot_{Z_0\cap Z_1}Z_0\to Z$ is an isomorphism of algebras. \item $Z$ is a free $Z_0$-module of rank $l^r$ with the restricted monomials $u_1^{k_1}\cdots u_r^{k_r}$, $0\leq k_i<l$, as a basis. \end{enumerate} \end{thmgl} I give a proof of (ii). In \cite{DCKP} Prop.~6.4 it is proved that $(\mathbb{C}P)^W$ is a free $(\mathbb{C}lP)^W$-module of rank $l^r$ with the restricted monomials (exponents $<l$) in the ${\rm sym}(\varpi_i)$ as a basis. The same then holds of course for $(\mathbb{C}2P)^W$, $(\mathbb{C}2lP)^W$ and the ${\rm sym}(2\varpi_i)$. But then the same holds for $Z_1$, $Z_0\cap Z_1$ and the $u_i$ by (iii) of Theorem~\ref{thm.coverHCcentre}. So the result follows from (i). Recall that $\Omega=\Omega^-\times T\times\Omega^+$ and that $\Omega^\pm\cong U_\pm$. So $Z_0$ is a polynomial algebra in $\dim({\mf{g}})$ variables with $r$ variables inverted. In particular its Krull dimension (which coincides with the transcendence degree of its field of fractions) is $\dim({\mf{g}})$. The same then holds for $Z$, since it is a finitely generated $Z_0$-module. Let $Z_0'$ be a subalgebra of $Z_0$ containing $Z_1\cap Z_0$. Then the multiplication $Z_1\ot_{Z_0\cap Z_1}Z_0'\to Z_0'Z_1$ is an isomorphism of algebras by the above theorem.
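For ${\mf{g}}=\mf{sl}_2$ (so $r=1$) the theorem can be made completely explicit. The following display is only an illustration, using facts established in Subsection~\ref{ss.nis2} below (there $Z_1=\mathbb{C}[u]$ and $Z_0\cap Z_1=\mathbb{C}[f(u)]$ with $f$ monic of degree $l$):

```latex
Z_1=\bigoplus_{k=0}^{l-1}(Z_0\cap Z_1)\,u^k
\qquad\Longrightarrow\qquad
Z\;\cong\;Z_1\ot_{Z_0\cap Z_1}Z_0\;=\;\bigoplus_{k=0}^{l-1}Z_0\,u^k,
```

a free $Z_0$-module of rank $l=l^r$ with basis $1,u,\ldots,u^{l-1}$, in accordance with (ii).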
This gives us a way to determine generators and relations for $Z_0'Z_1$: Let $s_1,\ldots,s_r$ be the generators of ${\mathbb C}[G]^G$ defined in Subsection~\ref{ss.groupinvgen}. Then $\pi^{co}(s_1),\ldots,\pi^{co}(s_r)$ are generators of $Z_0\cap Z_1=Z_0'\cap Z_1$ by Theorem~\ref{thm.coverHCcentre}(ii). Now assume that we have generators and relations for $Z_0'$. We use for $Z_1$ the generators $u_1,\ldots,u_r$ defined in Subsection~\ref{ss.groupinvgen}. For each $i\in\{1,\ldots,r\}$ we can express $\pi^{co}(s_i)$ as a polynomial $g_i$ in the generators of $Z_0'$ and as a polynomial $f_i$ in the $u_j$. Then the generators and relations for $Z_0'$ together with the $u_i$ and the relations $f_i=g_i$ form a presentation of $Z_0'Z_1$.\footnote{This method was also used by Krylyuk in \cite{Kry} to determine generators and relations for the centre of the universal enveloping algebra $U({\mf{g}})$ of ${\mf{g}}$. Our homomorphism $\pi^{co}:\mathbb{C}[G]\to Z_0$ plays the r\^ole of Krylyuk's $G$-equivariant isomorphism $\eta:S({\mf{g}})^{(1)}\to Z_p$, where we use the notation of \cite{PrT}.} The $f_i$ can be determined as follows. Write ${\rm sym}(l\varpi_i)$ as a polynomial $f_i$ in the ${\rm sym}(\varpi_j)$. Then ${\rm sym}(2l\varpi_i)$ is the same polynomial in the ${\rm sym}(2\varpi_j)$ and $\pi^{co}(s_i)=f_i(u_1,\ldots,u_r)$ by Theorem~\ref{thm.coverHCcentre}(iii). Note that $\pi^{co}(\mathbb{C}[\mathcal{O}])=Z_0^-\mathbb{C}(2lP)Z_0^+$ and that $Z_0=\pi^{co}(\mathbb{C}[\mathcal{O}])[z_{\varpi_1},\ldots,z_{\varpi_r}]$. Now assume that $G={\rm SL}_n$. For $f\in\mathbb{C}[{\rm SL}_n]$ denote $f\circ\pi$ by $\tilde{f}$ and put $\tilde{Z_0}=\pi^{co}(\mathbb{C}[{\rm SL}_n])$. Then $\tilde{Z_0}$ is generated by the $\tilde{\xi}_{ij}$; it is a copy of $\mathbb{C}[{\rm SL}_n]$ in $Z_0$. 
Now $\mathcal{O}$ consists of the matrices $A\in{\rm SL}_n$ that have an LU-decomposition (without row permutations), that is, whose principal minors $\Delta_1(A),\ldots,\Delta_{n-1}(A)$ are nonzero. So $\mathbb{C}[\mathcal{O}]= \mathbb{C}[{\rm SL}_n][\Delta_1^{-1},\ldots,\Delta_{n-1}^{-1}]$, $\pi^{co}(\mathbb{C}[\mathcal{O}])= \tilde{Z_0}[\tilde{\Delta}^{-1}_1,\ldots,\tilde{\Delta}^{-1}_{n-1}]$ and $$Z_0=\tilde{Z_0}[z_{\varpi_1},\ldots,z_{\varpi_{n-1}}][\tilde{\Delta}^{-1}_1,\ldots, \tilde{\Delta}^{-1}_{n-1}].$$ Let ${\rm pr}_{\mathcal{O},T}$ be the projection of $\mathcal{O}$ on $T$. An easy computation shows that $\Delta_i|_\mathcal{O}=(\xi_{11}\cdots\xi_{ii})\circ{\rm pr}_{\mathcal{O},T}=\varpi_i\circ{\rm pr}_{\mathcal{O},T}$ for $i=1,\ldots,n-1$.\footnote{For two $n\times n$ matrices $A$ and $B$ we have $\wedge^k(AB)=\wedge^k(A)\wedge^k(B)$. From this it follows that if either $A$ is lower triangular or $B$ is upper triangular, then $\Delta_k(AB)=\Delta_k(A)\Delta_k(B)$.} So $\tilde{\Delta}_i=\varpi_i\circ {\rm pr}_{\mathcal{O},T}\circ\pi=\varpi_i\circ(t\mapsto t^2)\circ {\rm pr}_{\Omega,T}=2\varpi_i\circ {\rm pr}_{\Omega,T}=z_{\varpi_i}^2$. In Subsection~\ref{ss.presentation} we will determine generators and relations for $Z_0'Z_1$, where $Z_0'=\tilde{Z_0}[z_{\varpi_1},\ldots,z_{\varpi_{n-1}}]$, using the method mentioned above. \section{Rationality} We use the notation of Section 1 with the following modifications. The functions $\xi_{ij}$, $1\le i,j\le n$, now denote the standard coordinate functionals on ${\rm Mat}_n$ and for $i\in\{1,\ldots,n\}$, $s_i\in K[{\rm Mat}_n]$ is defined by $s_i(A)={\rm tr}(\wedge^iA)$ for $A\in{\rm Mat}_n$. Then $\det(x\,{\rm id}-A)=x^n+\sum_{i=1}^n (-1)^i s_i(A)x^{n-i}$. This notation is in accordance with \cite{PrT}. For $f\in\mathbb{C}[{\rm Mat}_n]$ we denote its restriction to ${\rm SL}_n$ by $f'$ and we denote $\pi^{co}(f')$ by $\tilde{f}$.
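The identity just stated is easy to confirm symbolically: $s_i(A)={\rm tr}(\wedge^iA)$ is the sum of the principal $i\times i$ minors of $A$, and these are, up to sign, the coefficients of the characteristic polynomial. A sketch for $n=3$ (Python with sympy; purely illustrative, the names are not notation from the text):

```python
import itertools
import sympy as sp

n = 3
X = sp.Matrix(n, n, lambda i, j: sp.Symbol(f'xi{i+1}{j+1}'))

def s(k, A):
    # s_k(A) = tr(wedge^k A) = sum of the principal k x k minors of A
    return sum(A.extract(list(L), list(L)).det()
               for L in itertools.combinations(range(A.rows), k))

x = sp.Symbol('x')
charpoly = sp.expand((x * sp.eye(n) - X).det())
expected = sp.expand(x**n + sum((-1)**i * s(i, X) * x**(n - i)
                                for i in range(1, n + 1)))
assert charpoly == expected
```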
So now $s_1',\ldots,s_{n-1}'$ and $\xi_{ij}'$ are the functions $s_1,\ldots,s_{n-1}$ and $\xi_{ij}$ of Subsection~\ref{ss.groupinvgen} and the $\tilde{\xi}_{ij}$ are the same as before. To prove the theorem below we need to look at the expressions of the functions $s_i$ in terms of the $\xi_{ij}$. We use that those equations are linear in $\xi_{1n},\xi_{2n},\ldots,\xi_{nn}$. The treatment is completely analogous to that in \cite{PrT} 4.1 (we use the same symbols $R$, $M$, $d$ and $x_{\bf a}$) to which we refer for more explanation. Let $R$ be the $\mathbb{Z}$-subalgebra of $\mathbb{C}[{\rm Mat}_n]$ generated by all $\xi_{ij}$ with $j\ne n$. Let $\partial_{ij}$ denote differentiation with respect to the variable $\xi_{ij}$ and set $$M= \left[\begin{array}{cccc} \partial_{1n} (s_1)&\partial_{2n}(s_1)&\ldots& \partial_{nn}(s_1)\\ \partial_{1n} (s_2)&\partial_{2n}(s_2)&\ldots& \partial_{nn}(s_2)\\ \vdots&\vdots& &\vdots \\ \partial_{1n} (s_n)&\partial_{2n}(s_n)&\ldots& \partial_{nn}(s_n) \end{array}\right],\quad {\bf c}=\left[\begin{array}{c} \xi_{1n}\\ \xi_{2n}\\ \vdots \\ \xi_{nn} \end{array}\right],\quad {\bf s}=\left[\begin{array}{c} s_1\\ s_2\\ \vdots \\ s_n \end{array}\right]. $$ Then the matrix $M$ has entries in $R$ and the following vector equation holds: \begin{equation}\label{eq.gln2} M\cdot {\bf c}\,=\,{\bf s} + {\bf r}, \quad\,\mbox{where} \ \ {\mathbf r}\in R^n. \end{equation} We denote the determinant of $M$ by $d$. For ${\bf a}=(a_1,\ldots,a_n)\in K^n$ we set $$x_{\bf a}\,=\, \left[\begin{array}{ccccc} 0&\cdots&0&0&a_n\\ 1&\cdots&0&0& a_{n-1}\\ \vdots&\ddots&\vdots&\vdots&\vdots\\ 0&\cdots&1&0&a_2 \\ 0&\cdots&0&1&a_1\end{array}\right].$$ Then the minimal polynomial of $x_{\bf a}$ equals $x^n-\sum_{i=1}^n a_ix^{n-i}$, $\det(x_{\bf a})=(-1)^{n-1}a_n$ and $d(x_{\bf{a}})=1$ (compare Lemma~3 in \cite{PrT}). \begin{thmgl}\label{thm.qrat} $Z$ has a rational field of fractions. \end{thmgl} \begin{proof} Denote the field of fractions of $Z$ by $Q(Z)$.
From Subsection~\ref{ss.Z0andZ1} it is clear that $Q(Z)$ has transcendence degree $\dim(\mf{sl}_n)=n^2-1$ over $\mathbb{C}$ and that it is generated as a field by the $n^2+2(n-1)$ variables $\tilde{\xi}_{ij}$, $u_1,\ldots,u_{n-1}$ and $z_{\varpi_1},\ldots,z_{\varpi_{n-1}}$. To prove the assertion we will show that $Q(Z)$ is generated by the $n^2-1$ elements $\tilde{\xi}_{ij}$, $i\ne j$, $j\ne n$, $u_1,\ldots,u_{n-1}$ and $z_{\varpi_1},\ldots,z_{\varpi_{n-1}}$. We will first eliminate the $n$ generators $\tilde{\xi}_{1n},\ldots,\tilde{\xi}_{nn}$ and then the $n-1$ generators $\tilde{\xi}_{11},\ldots,\tilde{\xi}_{n-1,n-1}$. Applying the homomorphism $f\mapsto\tilde{f}=\pi^{co}\circ (f\mapsto f'):\mathbb{C}[{\rm Mat}_n]\to Z_0$ to both sides of \eqref{eq.gln2} we obtain the following equations in the $\tilde{\xi}_{ij}$ and $\tilde{s}_1,\ldots,\tilde{s}_{n-1}$ \begin{equation}\label{eq.Ze} \tilde{M}\cdot \tilde{{\bf c}}\,=\,\tilde{{\bf s}} + \tilde{{\bf r}},\text{\quad where\ }\tilde{{\bf r}}\in \tilde{R}^n. \end{equation} Here $\tilde{M},\tilde{{\bf c}},\tilde{{\bf s}},\tilde{{\bf r}}$ have the obvious meaning, except that we put the last component of $\tilde{{\bf s}}$ and $\tilde{{\bf r}}$ equal to 0 resp. 1, and $\tilde{R}$ is the $\mathbb{Z}$-subalgebra of $Z_0$ generated by all $\tilde{\xi}_{ij}$ with $j\ne n$. Choosing $\bf{a}$ such that $a_n=(-1)^{n-1}$ we have $x_{\bf a}\in{\rm SL}_n$. Since $d(x_{\bf a})=1$, we have $d'\ne 0$ and therefore $\det(\tilde{M})=\tilde{d}\ne0$. Furthermore, for $i=1,\ldots,n-1$, $(\tilde{{\bf s}})_i=\tilde{s}_i\in Z_0\cap Z_1$ and $Z_1$ is generated by $u_1,\ldots,u_{n-1}$. It follows that $\tilde{\xi}_{1n},\ldots,\tilde{\xi}_{nn}$ are in the subfield of $Q(Z)$ generated by the $\tilde{\xi}_{ij}$ with $j\ne n$ and $u_1,\ldots,u_{n-1}$. Now we will eliminate the generators $\tilde{\xi}_{11},\ldots,\tilde{\xi}_{n-1,n-1}$. 
We have $$z_{\varpi_1}^2=\tilde{\Delta}_1=\tilde{\xi}_{11}$$ and for $k=2,\ldots,n-1$ we have, by the Laplace expansion rule, $$z_{\varpi_k}^2= \tilde{\Delta}_k= \tilde{\xi}_{kk}\tilde{\Delta}_{k-1}+t_k= \tilde{\xi}_{kk}z_{\varpi_{k-1}}^2+t_k,$$ where $t_k$ is in the $\mathbb{Z}$-subalgebra of $Z$ generated by the $\tilde{\xi}_{ij}$ with $i,j\le k$ and $(i,j)\ne (k,k)$. It follows by induction on $k$ that for $k=1,\ldots,n-1$, $\tilde{\xi}_{11},\ldots,\tilde{\xi}_{kk}$ are in the subfield of $Q(Z)$ generated by the $z_{\varpi_i}$ with $i\le k$ and the $\tilde{\xi}_{ij}$ with $i,j\le k$ and $i\ne j$. \end{proof} \section{Unique Factorisation}\label{s.quf} Recall that {\it Nagata's lemma} asserts the following: If $x$ is a prime element of a Noetherian integral domain $S$ such that $S[x^{-1}]$ is a UFD, then $S$ is a UFD. See \cite{Eis} Lemma 19.20. In Theorem~\ref{thm.quf} we will see that, by Nagata's lemma, it suffices to show that the algebra $Z/(\tilde{d})$ is an integral domain in order to prove that $Z$ is a UFD. To prove this we will show by induction that the two sequences of algebras (to be defined later): $$K[{\rm SL}_n]/(d')\cong\overline{A}(K)=\overline{B}_{0,0}(K)\subseteq\overline{B}_{0,1}(K)\subseteq\cdots\subseteq \overline{B}_{0,n-1}(K)=\overline{B}_0(K)$$ in characteristic $p$ and $$\overline{B}_0({\mathbb C})\subseteq\overline{B}_1({\mathbb C})\subseteq\cdots\subseteq\overline{B}_{n-1}({\mathbb C})=\overline{B}({\mathbb C})$$ consist of integral domains. Lemmas~\ref{lem.sln} and \ref{lem.slnd} are, among other things, needed to show that $\overline{A}(K)\cong K[{\rm SL}_n]/(d')$ is an integral domain. Lemmas~\ref{lem.leadmondetd} and \ref{lem.leadmonfi} are needed to obtain bases over $\mathbb Z$ (see Proposition~\ref{prop.presentation(d)}), which, in turn, is needed to pass to fields of positive characteristic and to apply mod $p$ reduction (see Lemma~\ref{lem.modp}).
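Before turning to the case $n=2$: the vector equation \eqref{eq.gln2} and the stated properties of $x_{\bf a}$ and $d$ can be verified symbolically for small $n$. A sketch for $n=3$ (Python with sympy; all names are ad hoc):

```python
import sympy as sp

n = 3
X = sp.Matrix(n, n, lambda i, j: sp.Symbol(f'xi{i+1}{j+1}'))
x = sp.Symbol('x')

# read off s_1,...,s_n from det(x*id - X) = x^n + sum_i (-1)^i s_i x^(n-i)
cp = sp.Poly((x * sp.eye(n) - X).det(), x)
s = [(-1)**k * cp.coeff_monomial(x**(n - k)) for k in range(1, n + 1)]

# M_{ki} = d(s_k)/d(xi_{in}); the vector r = M*c - s must lie in R^n,
# i.e. contain no variables from the last column
M = sp.Matrix(n, n, lambda k, i: sp.diff(s[k], X[i, n - 1]))
c = sp.Matrix([X[i, n - 1] for i in range(n)])
r = (M * c - sp.Matrix(s)).expand()
last_col = {X[i, n - 1] for i in range(n)}
assert all(entry.free_symbols.isdisjoint(last_col) for entry in r)

# the matrix x_a: char poly x^n - sum_i a_i x^(n-i), det = (-1)^(n-1) a_n,
# and d(x_a) = 1, where d = det(M)
a1, a2, a3 = sp.symbols('a1 a2 a3')
xa = sp.Matrix([[0, 0, a3], [1, 0, a2], [0, 1, a1]])
assert sp.expand((x * sp.eye(n) - xa).det()) == x**n - a1*x**2 - a2*x - a3
assert sp.expand(xa.det()) == (-1)**(n - 1) * a3
at_xa = {X[i, j]: xa[i, j] for i in range(n) for j in range(n)}
assert sp.expand(M.det().subs(at_xa)) == 1
```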
\subsection{The case $n=2$}\label{ss.nis2} \ \\ In this subsection we show that the centre of $U_{\varepsilon,P}(\mf{sl}_2)$ is always a UFD, without any extra assumptions on $l$. The standard generators for $U=U_{\varepsilon,P}(\mf{sl}_2)$ are $E,F,K_\varpi$ and $K_\varpi^{-1}$. Put $K=K_\alpha=K_\varpi^2$, $z_1=z_\varpi=K_\varpi^l$, $z=z_\alpha=z_1^2=K^l$. Furthermore, following \cite{DCKP} 3.1, we put $c=(\varepsilon-\varepsilon^{-1})^l$, $x=-cz^{-1}E^l$, $y=cF^l$. Then $x,y$ and $z_1$ are algebraically independent over $\mathbb C$ and $Z_0={\mathbb C}[x,y,z_1][z_1^{-1}]$ (see \cite{DCKP} \S 3). We have $U^0=\mathbb{C}[K_\varpi,K_\varpi^{-1}]$ and $U^{0\tilde{W}}=\mathbb{C}[K,K^{-1}]^W=\mathbb{C}[K+K^{-1}]$. Identifying $U^0$ and $\mathbb{C}P$, we have ${\rm sym}(2\varpi)=K+K^{-1}$ and ${\rm sym}(2l\varpi)=z+z^{-1}$. Put $u=\overline{h}({\rm sym}(2\varpi))$. By the restriction theorem for $U$, $Z_1$ is a polynomial algebra in $u$. Denote the trace map on ${\rm Mat}_2$ by ${\rm tr}$. Then ${\rm tr}|_T={\rm sym}(\varpi)$. By the restriction theorem for ${\mathbb C}[G]$ and Theorem~\ref{thm.coverHCcentre}(ii), $\tilde{{\rm tr}}$ generates $Z_0\cap Z_1$. Furthermore $\tilde{{\rm tr}}=\overline{h}(z+z^{-1})$, by Theorem~\ref{thm.coverHCcentre}(iii). Let $f\in\mathbb{C}[u]$ be the polynomial with $z+z^{-1}=f(K+K^{-1})$. Then $\tilde{{\rm tr}}=f(u)$. From the formulas in \cite{DCKP} 5.2 it follows that $\tilde{{\rm tr}}=-zxy+z+z^{-1}$. By the construction from Subsection~\ref{ss.Z0andZ1} (we take $Z_0'=Z_0$), $Z$ is isomorphic to the quotient of the localised polynomial algebra $\mathbb{C}[x,y,z_1,u][z_1^{-1}]$ by the ideal generated by $-z_1^2xy+z_1^2+z_1^{-2}-f(u)$. Clearly $x,u$ and $z_1$ generate the field of fractions of $Z$. In particular they are algebraically independent. So $Z[x^{-1}]$ is isomorphic to the localised polynomial algebra $\mathbb{C}[x,z_1,u][z_1^{-1},x^{-1}]$ and therefore a UFD.
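Aside: the polynomial $f$ can be generated by the standard three-term recursion for $p_m=K^m+K^{-m}$ as a polynomial in $u=K+K^{-1}$. A quick sketch (Python with sympy, shown for $l=3$, where $f(u)=u^3-3u$; this is only an illustration):

```python
import sympy as sp

K, u = sp.symbols('K u')

def f_poly(l):
    # p_m = K^m + K^-m as a polynomial in u = K + K^-1:
    # p_0 = 2, p_1 = u, p_{m+1} = u*p_m - p_{m-1}
    p_prev, p_cur = sp.Integer(2), u
    for _ in range(l - 1):
        p_prev, p_cur = p_cur, sp.expand(u * p_cur - p_prev)
    return p_cur

l = 3
f = f_poly(l)
assert f == u**3 - 3*u
# the defining property z + z^-1 = f(K + K^-1) with z = K^l
assert sp.expand(K**l + K**-l - f.subs(u, K + 1/K)) == 0
# f is monic of odd degree l (this is used in the irreducibility argument)
assert sp.degree(f, u) == l and sp.LC(f, u) == 1
```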
By Nagata's lemma it suffices to show that $x$ is a prime element in $Z$. But $Z/(x)$ is isomorphic to the quotient of $\mathbb{C}[y,z_1,u][z_1^{-1}]$ by the ideal generated by $z_1^2+z_1^{-2}-f(u)$. This ideal is also generated by $z_1^4-f(u)z_1^2+1$. So it suffices to show that $z_1^4-f(u)z_1^2+1$ is irreducible in $\mathbb{C}[y,z_1,u][z_1^{-1}]$. From the fact that $f$ is of odd degree $l>0$ (see e.g. Lemma~\ref{lem.leadmonfi} below), one easily deduces that $z_1^4-f(u)z_1^2+1$ is irreducible in $\mathbb{C}[z_1,u]$ and therefore also in $\mathbb{C}[y,z_1,u]$. Clearly $z_1^4-f(u)z_1^2+1$ is not invertible in $\mathbb{C}[y,z_1,u][z_1^{-1}]$, so it is also irreducible in this ring. \subsection{${\rm SL}_n$ and the function $d$}\label{ss.slnd} \ \\ Part (i) of the next lemma is needed for the proof of Lemma~\ref{lem.sln} and part (ii) is needed for the proof of Theorem~\ref{thm.quf}. The Jacobian matrices below consist of the partial derivatives of the functions in question with respect to the variables $\xi_{ij}$. \begin{lemgl}\label{lem.jac} \begin{enumerate}[(i)] \item There exists a matrix $A\in{\rm SL}_n({\mathbb Z})$ such that $\Delta_{n-1}(A)=0$ and such that some second order minor of the Jacobian matrix of $(\det,\Delta_{n-1})$ is $\pm1$ at $A$. \item If $n\ge3$, then there exists a matrix $A\in{\rm SL}_n({\mathbb Z})$ such that $d(A)=0$ and such that some $2n$-th order minor of the Jacobian matrix of $(s_1,\ldots,s_n,d,\Delta_1,\ldots,$\\ $\Delta_{n-1})$ is $\pm1$ at $A$. \end{enumerate} \end{lemgl} \begin{proof} The computations below are very similar to those in \cite{PrT} Section~6. We denote by $\mathcal{X}$ the $n\times n$ matrix $(\xi_{ij})$ and for an $n\times n$ matrix $B=(b_{ij})$ and $\Lambda_1 ,\Lambda_2\subseteq\{1,\ldots,n\}$ we denote by $B_{\Lambda_1,\,\Lambda_2}$ the matrix $(b_{ij})_{i\in\Lambda_1,j\in\Lambda_2}$, where the indices are taken in the natural order.
In the computations below we will use the following two facts:\\ For $\Lambda_1,\Lambda_2\subseteq\{1,\ldots,n\}$ with $|\Lambda_1|=|\Lambda_2|$ we have $$\partial_{ij}\big(\det(\mathcal{X}_{\Lambda_1,\,\Lambda_2})\big)= \begin{cases} (-1)^{n_1(i)+n_2(j)}\det(\mathcal{X}_{\Lambda_1\sm\{i\},\,\,\Lambda_2\sm\{j\}})& \mbox{when}\ \, (i,j)\in (\Lambda_1\times\Lambda_2),\\ 0&\mbox{when}\ \,(i,j)\not\in (\Lambda_1\times \Lambda_2), \end{cases}$$ where $n_1(i)$ denotes the position in which $i$ occurs in $\Lambda_1$ and similarly for $n_2(j)$.\\ For $k\le n$ we have $s_k=\sum_{\Lambda}\det(\mathcal{X}_{\Lambda,\,\Lambda})$ where the sum ranges over all $k$-subsets $\Lambda$ of $\{1,\ldots,n\}$.\\ (i).\ We let $A$ be the following $n\times n$-matrix: $$A=\left[\begin{array}{ccccc} 0&0&\cdots&0&-1\\ 0&1&\cdots&0&0\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&\cdots&1&0\\ 1&0&\cdots&0&0\end{array}\right].$$ Clearly $\det(A)=1$ and $\Delta_{n-1}(A)=0$. From the above two facts it is easy to deduce that $\left[\begin{array}{cc} \partial_{1\,n}\det&\partial_{1\,1}\det\\ \partial_{1\,n}\Delta_{n-1}&\partial_{1\,1}\Delta_{n-1} \end{array}\right]$ is equal to $\left[\begin{array}{cc} \pm1&0\\ 0&\pm1 \end{array}\right]$ at $A$.\\ (ii).\ Put $\alpha=\big((1\,1),(2\,2),(2\,3),\ldots,(2\,n-1),(n\,n),(n-1\,n),\ldots,(2\,n),(2\,1),(1\,2)\big)$, and let $\alpha_i$ denote the $i$-th component of $\alpha$. We let $A$ be the following $n\times n$-matrix: $$A=\left[\begin{array}{cccccc} 1&0&0&\cdots&0&\nts\nts(-1)^n\nts\nts\\ 0&1&0&\cdots&0&0\\ 1&1&0&\cdots&0&0\\ 0&0&1&\cdots&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&0&\cdots&1&0\end{array}\right].$$ The columns of the Jacobian matrix are indexed by the pairs $(i,j)$ with $1\le i,j\le n$. Let $M_\alpha$ be the $2n$-square submatrix of the Jacobian matrix consisting of the columns with indices from $\alpha$. By permuting in $A$ the first row to the last position and interchanging the first two columns, we see that $\det(A)=1$. 
We will show that $d(A)=0$ and that the minor $d_{\alpha}:=\det(M_\alpha)$ of the Jacobian matrix is nonzero at $A$. First we consider the $\Delta_k$, $k\in\{1,\ldots,n-1\}$. By inspecting the matrix $A$ and using the fact that $\partial_{ij}\Delta_k=0$ if $i>k$ or $j>k$, we deduce the following facts: $(\partial_{2\,i}\Delta_k)(A)= \begin{cases} \pm1&\text{if $i=k$,}\\ 0&\text{if $i>k$,} \end{cases}$\quad for $i,k\in\{1,\ldots,n-1\}$, $i\ne1$,\quad $(\partial_{1\,1}\Delta_1)(A)=1$,\\ $(\partial_{1\,2}\Delta_k)(A)=(\partial_{2\,1}\Delta_k)(A)=0$ for all $k\in\{1,\ldots,n-1\}$ and\\ $(\partial_{i\,n}\Delta_k)(A)=0$ for all $k\in\{1,\ldots,n-1\}$ and all $i\in\{1,\ldots,n\}$. Now we consider the $s_k$. Let $i\in\{1,\ldots,n\}$ and let $\Lambda\subseteq\{1,\ldots,n\}$. Assume that $\partial_{i n}\big(\det(\mathcal{X}_{\Lambda,\,\Lambda})\big)$ is nonzero at $A$. Then we have: \begin{enumerate}[$\bullet$] \item $i,n\in\Lambda$; \item $j\in\Lambda\Rightarrow j-1\in\Lambda$ for all $j$ with $4\leq j\leq n$ and $j\ne i$, since otherwise there would be a zero row (in $\mathcal{X}_{\Lambda\sm\{i\},\,\Lambda\sm\{n\}}(A)=A_{\Lambda\sm\{i\},\,\Lambda\sm\{n\}}$); \item $j\in\Lambda\Rightarrow j+1\in\Lambda$ for all $j$ with $3\leq j\leq n-1$, since otherwise there would be a zero column. \end{enumerate} \noindent First assume that $i\ge3$ and that $|\Lambda|\le n-i+1$. Then it follows that $\Lambda=\{i,\ldots,n\}$ and that $\partial_{i n}\big(\det(\mathcal{X}_{\Lambda,\,\Lambda})\big)(A)=\pm1$. Next assume that $i=2$. Then it follows that either $\Lambda=\{2,\ldots,n\}$ or $\Lambda=\{1,\ldots,n\}$. In the first case we have $\partial_{i n}\big(\det(\mathcal{X}_{\Lambda,\,\Lambda})\big)(A)=(-1)^{1+n-1}=(-1)^n$. In the second case we have $\partial_{i n}\big(\det(\mathcal{X}_{\Lambda,\,\Lambda})\big)(A)=(-1)^{2+n}=(-1)^n$. Now assume that $i=1$. Then it follows that either $\Lambda=\{1,3,\ldots,n\}$ or $\Lambda=\{1,\ldots,n\}$.
In the first case we have $\partial_{i n}\big(\det(\mathcal{X}_{\Lambda,\,\Lambda})\big)(A)=(-1)^{1+n-1}=(-1)^n$. In the second case we have $\partial_{i n}\big(\det(\mathcal{X}_{\Lambda,\,\Lambda})\big)(A)=(-1)^{1+n}\cdot(-1)=(-1)^n$. So for $i\in\{1,\ldots,n-1\}$ and $k\in\{1,\ldots,n\}$ we have: $$(\partial_{i n}s_k)(A)= \begin{cases} \pm1 &\text{if $i\ge3$ and $i+k=n+1$,}\\ 0 &\text{if $i\ge3$ and $i+k<n+1$,}\\ (-1)^n &\text{if $i\in\{1,2\}$ and $k\in\{n-1,n\}$,}\\ 0 &\text{if $i\in\{1,2\}$ and $k<n-1$.} \end{cases} $$ It follows from the above equalities that in $M(A)$ the first $2$ columns are equal. So $d(A)=\det(M(A))=0$. Let $\Lambda\subseteq\{1,\ldots,n\}$. Assume that $\partial_{1\,2}\big(\det(\mathcal{X}_{\Lambda,\,\Lambda})\big)$ is nonzero at $A$. Then $1,2\in\Lambda$ and the first row is zero, a contradiction. So $\partial_{1\,2}\big(\det(\mathcal{X}_{\Lambda,\,\Lambda})\big)$ is zero at $A$. Now assume that $\partial_{2\,1}\big(\det(\mathcal{X}_{\Lambda,\,\Lambda})\big)$ is nonzero at $A$. Then \begin{enumerate}[$\bullet$] \item $1,2\in\Lambda$; \item $n\in\Lambda$, since otherwise the first row would be zero; \item $j\in\Lambda\Rightarrow j-1\in\Lambda$ for all $j$ with $4\leq j\leq n$, since otherwise there would be a zero row. \end{enumerate} So $\Lambda=\{1,\ldots,n\}$ and $\partial_{2\,1}\big(\det(\mathcal{X}_{\Lambda,\,\Lambda})\big)(A)=\pm1$. Thus we have $(\partial_{1\,2}s_k)(A)=0$ for all $k\in\{1,\ldots,n\}$ and $(\partial_{2\,1}s_k)(A)= \begin{cases} \pm1&\text{if $k=n$,}\\ 0 &\text{otherwise.} \end{cases}$ Finally, we consider the function $d$. Let $i\in\{1,\ldots,n\}$, let $\Lambda\subseteq\{1,\ldots,n\}$ and assume that $\partial_{1\,2}\partial_{i n}\big(\det(\mathcal{X}_{\Lambda,\,\Lambda})\big)$ is nonzero at $A$. Then we have: \begin{enumerate}[$\bullet$] \item $1,2,i,n\in\Lambda$ and $i\ne 1$; \item $i=2$, since otherwise the first row would be zero.
\item $j\in\Lambda\Rightarrow j-1\in\Lambda$ for all $j$ with $4\leq j\leq n$, since otherwise there would be a zero row. \end{enumerate} It follows that $i=2$, $\Lambda=\{1,\ldots,n\}$ and $\partial_{1\,2}\partial_{i n}\big(\det(\mathcal{X}_{\Lambda,\,\Lambda})\big)(A)=\pm1$. So for $i,k\in\{1,\ldots,n\}$ we have: $$(\partial_{1\,2}\partial_{i n}s_k)(A)= \begin{cases} \pm 1 &\text{if $(i,k)=(2,n)$,}\\ 0 &\text{if $(i,k)\ne(2,n)$.} \end{cases} $$ We have \begin{equation}\label{eq.d2} d\,=\sum_{\pi\in {\mathfrak S}_n} {\rm sgn}(\pi)\,\partial_{\pi(1),n}(s_1)\,\cdots\,\partial_{\pi(n),n}(s_n). \end{equation} So, by the above, $(\partial_{1\,2}d)(A)\,=$ $$\Big(\sum {\rm sgn}(\pi)\partial_{\pi(1) n}(s_1)\partial_{\pi(2)n}(s_2)\cdots\partial_{\pi(n-1)n}(s_{n-1}) \partial_{1\,2}\partial_{2\,n}(s_n)\Big)(A),$$ where the sum is over all $\pi\in\mathfrak{S}_n$ with $\pi(n)=2$. From what we know about the $\partial_{i n}s_k$ we deduce that the only permutation that survives in the above sum is given by $(\pi(1),\ldots,\pi(n))=(n,n-1,\ldots,3,1,2)$ and that $(\partial_{1\,2}d)(A)=\pm1$. If we permute the rows of $M_\alpha(A)$ in the order given by $\Delta_1,\ldots,\Delta_{n-1},s_1,\ldots,s_n,d$ and take the columns in the order given by $\alpha$, then the resulting matrix is lower triangular with $\pm1$'s on the diagonal. So we can conclude that $d_\alpha(A)=\det(M_\alpha(A))=\pm1$. \end{proof} \noindent In the remainder of this subsection $K$ denotes an algebraically closed field. It is well known that the algebra of regular functions $K[G]$ of a simply connected semi-simple algebraic group $G$ is a UFD (see \cite{Po} the corollary to Proposition~1), but the elementary proof below provides a way to show that $d'$ and the $\Delta_i'$ are irreducible elements of $K[{\rm SL}_n]$. I did not know how to use the fact that $K[{\rm SL}_n]$ is a UFD to simplify the proof that $\Delta_{n-1}'$ is irreducible.
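For $n=3$ the conclusion of Lemma~\ref{lem.jac}(ii) can be confirmed by direct computation. The sketch below (Python with sympy; purely a check, not part of the argument, with the tuple $\alpha$ written out for $n=3$) verifies $\det(A)=1$, $d(A)=0$ and $d_\alpha(A)=\pm1$:

```python
import itertools
import sympy as sp

n = 3
X = sp.Matrix(n, n, lambda i, j: sp.Symbol(f'xi{i+1}{j+1}'))

def s(k):
    # s_k = sum of the principal k x k minors of the generic matrix X
    return sum(X.extract(list(L), list(L)).det()
               for L in itertools.combinations(range(n), k))

# d = det of the matrix of partials of s_1,...,s_n w.r.t. the last column
M = sp.Matrix(n, n, lambda k, i: sp.diff(s(k + 1), X[i, n - 1]))
d = sp.expand(M.det())
Delta = [X[:k, :k].det() for k in range(1, n)]            # Delta_1, Delta_2

# alpha for n = 3: ((1,1),(2,2),(3,3),(2,3),(2,1),(1,2)), 1-based pairs
alpha = [(1, 1), (2, 2), (3, 3), (2, 3), (2, 1), (1, 2)]
funcs = [s(1), s(2), s(3), d] + Delta                     # 2n = 6 functions
J = sp.Matrix([[sp.diff(f, X[i - 1, j - 1]) for (i, j) in alpha]
               for f in funcs])

A = sp.Matrix([[1, 0, (-1)**n], [0, 1, 0], [1, 1, 0]])
at_A = {X[i, j]: A[i, j] for i in range(n) for j in range(n)}
assert A.det() == 1
assert d.subs(at_A) == 0
assert abs(J.subs(at_A).det()) == 1
```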
Modifying the terminology of \cite{Eis} \S 16.6, we define the {\it Jacobian ideal} of an $m$-tuple of polynomials $\varphi_1,\ldots,\varphi_m$ as the ideal generated by the $k\times k$ minors of the Jacobian matrix of $\varphi_1,\ldots,\varphi_m$, where $k$ is the height of the ideal generated by the $\varphi_i$. \begin{lemgl}\label{lem.sln} $K[{\rm SL}_n]$ is a unique factorisation domain and $\Delta'_{n-1}$ is an irreducible element of $K[{\rm SL}_n]$. \end{lemgl} \begin{proof} From the Laplace expansion for $\det$ with respect to the last row or the last column it is clear that we can eliminate $\xi_{nn}$ using the relation $\det=1$, if we make $\Delta'_{n-1}$ invertible. So we have an isomorphism of $K[{\rm SL}_n][\Delta_{n-1}'^{\ -1}]$ with the localised polynomial algebra $K[(\xi_{ij})_{(i,j)\ne(n,n)}][\Delta_{n-1}^{-1}]$. Since the latter algebra is a UFD, it suffices, by Nagata's lemma, to prove that $\Delta'_{n-1}$ is prime in $K[{\rm SL}_n]$, i.e. that $(\Delta_{n-1},\det-1)$ generates a prime ideal in $K[{\rm Mat}_n]$. First we show that the closed subvariety $\mathcal V$ of ${\rm Mat}_n$ defined by this ideal is irreducible. Let $\mathcal{X}$ be the matrix introduced above and let $\alpha_1,\ldots,\alpha_{n-2}$ be variables. For a matrix $A$ denote by $A^{(i,j)}$ the matrix which is obtained from $A$ by deleting the $i$-th row and the $j$-th column. Let ${\mathcal X}_\alpha$ be the $n\times n$ matrix which is obtained by replacing in $\mathcal X$ the $(n-1)$-th column of ${\mathcal X}^{(n,n)}$ by the linear combination $\sum_{j=1}^{n-2}\alpha_j(\xi_{ij})_i$ of the first $n-2$ columns of ${\mathcal X}^{(n,n)}$. Then $\det({\mathcal X}_\alpha^{(n,j)})=\pm \alpha_j\det({\mathcal X}^{(n,n-1)})$ for all $j\in\{1,\ldots,n-2\}$ and $\det({\mathcal X}_\alpha^{(n,n)})=0$.
So, by the Laplace expansion rule \begin{align*} \det({\mathcal X}_\alpha)-1&=\sum_{j=1}^{n-1}\pm\xi_{nj}\det({\mathcal X_\alpha}^{(n,j)})-1\\ &=\pm\xi_{n\,n-1}\det({\mathcal X}^{(n,n-1)})+\sum_{j=1}^{n-2}\pm\alpha_j\xi_{nj}\det({\mathcal X}^{(n,n-1)})-1 \end{align*} Let $K[{\mathcal X}_\alpha]$ be the polynomial ring in the variables that occur in ${\mathcal X}_\alpha$. If we consider $\det({\mathcal X}_\alpha)-1$ as a polynomial in the variable $\xi_{n,n-1}$, then it is linear and its leading coefficient is $\pm\det({\mathcal X}^{(n,n-1)})$ which is irreducible and does not divide the constant term $\sum_{j=1}^{n-2}\pm\alpha_j\xi_{nj}\det({\mathcal X}^{(n,n-1)})-1$. So $\det({\mathcal X}_\alpha)-1$ is irreducible in $K[{\mathcal X}_\alpha]$ and it defines an irreducible closed subvariety ${\mathcal V}_1$ of an $n^2-1$ dimensional affine space with coordinate functionals $\xi_{ij}$, $j\ne n-1$, $\xi_{n\,n-1}$, $\alpha_1,\ldots,\alpha_{n-2}$. Let $H$ be the algebraic group of $n\times n$ matrices $(a_{ij})$ of determinant $1$ with $a_{nn}=1$ and $a_{in}=a_{ni}=0$ for all $i\in\{1,\ldots,n-1\}$. Then $H\cong{\rm SL}_{n-1}$ and for every $A\in{\mathcal V}$ there exists an $S\in H$ such that in $(AS)^{(n,n)}$ the last column is a linear combination of the others. So the morphism of varieties $(u,S)\mapsto {\mathcal X}_\alpha(u)S:{\mathcal V}_1\times H\to{\rm Mat}_n$ has image $\mathcal V$. Now the irreducibility of $\mathcal V$ follows from the irreducibility of ${\mathcal V}_1\times H$. It remains to show that $(\det-1,\Delta_{n-1})$ is a radical ideal of $K[{\rm Mat}_n]$, i.e. that $K[{\rm Mat}_n]/(\det-1,\Delta_{n-1})$ is reduced. We know that $\Delta_{n-1}\ne0$ on the irreducible variety ${\rm SL}_n$, so $\dim({\mathcal V})=n^2-2$ and $K[{\rm Mat}_n]/(\det-1,\Delta_{n-1})$ is Cohen-Macaulay (see \cite{Eis} Proposition 18.13). 
By Theorem 18.15 in \cite{Eis} it suffices to show that the closed subvariety of $\mathcal V$ defined by the Jacobian ideal of $\det-1,\Delta_{n-1}$ is of codimension $\ge1$. Since $\mathcal V$ is irreducible this follows from Lemma~\ref{lem.jac}(i). \end{proof} \begin{lemgl}\label{lem.slnd} \begin{enumerate}[(i)] \item $d$ is an irreducible element of $K[{\rm Mat}_n]$. \item The invertible elements of $K[{\rm SL}_n]$ are the nonzero scalars. \item $d',\Delta'_1,\ldots,\Delta'_{n-1}$ are mutually inequivalent irreducible elements of $K[{\rm SL}_n]$. \end{enumerate} \end{lemgl} \begin{proof} (i).\ The proof of this is completely analogous to that of Proposition 3 in \cite{PrT}. One now has to work with the maximal parabolic subgroup $P$ of ${\rm GL}_n$ that consists of the invertible matrices $(a_{ij})$ with $a_{n\,i}=0$ for all $i<n$. The element $d$ is then a semi-invariant of $P$ with the weight $\det\cdot\xi_{nn}^{-n}$ (the restriction of this weight to the maximal torus of diagonal matrices is $n\varpi_{n-1}$).\\ (ii) and (iii).\ Consider the isomorphism $K[{\rm SL}_n][\Delta_{n-1}'^{\ -1}]\cong K[(\xi_{ij})_{(i,j)\ne(n,n)}][\Delta_{n-1}^{-1}]$ from the proof of the above lemma. It maps $d',\Delta'_1,\ldots,\Delta'_{n-1}$ to respectively $d,\Delta_1,\ldots,\Delta_{n-1}$, since these polynomials do not contain the variable $\xi_{nn}$. The invertible elements of $K[(\xi_{ij})_{(i,j)\ne(n,n)}][\Delta_{n-1}^{-1}]$ are the elements $\alpha\Delta_{n-1}^k$, $\alpha\in K\sm\{0\}$, $k\in{\mathbb Z}$, since $\Delta_{n-1}$ is irreducible in $K[(\xi_{ij})_{(i,j)\ne(n,n)}]$. So the invertible elements of $K[{\rm SL}_n][\Delta_{n-1}'^{\ -1}]$ are the elements $\alpha\Delta_{n-1}'^{\ k}$, $\alpha\in K\sm\{0\}$, $k\in{\mathbb Z}$. By Lemma~\ref{lem.sln} $\Delta_{n-1}'$ is irreducible in $K[{\rm SL}_n]$, so the invertible elements of $K[{\rm SL}_n]$ are the nonzero scalars.
Since $d$ and the $\Delta_i$ are not scalar multiples of each other, all that remains is to show that $d'$ and $\Delta'_1,\ldots,\Delta'_{n-2}$ are irreducible. We only do this for $d'$; the argument for the $\Delta_i'$ is completely similar. Since $d$ is prime in $K[(\xi_{ij})_{(i,j)\ne(n,n)}]$ and $d$ does not divide $\Delta_{n-1}$, it follows that $d$ is prime in $K[(\xi_{ij})_{(i,j)\ne(n,n)}][\Delta_{n-1}^{-1}]$ and therefore that $d'$ is prime in $K[{\rm SL}_n][\Delta_{n-1}'^{\ -1}]$. To show that $d'$ is prime in $K[{\rm SL}_n]$ it suffices to show that for every $f\in K[{\rm SL}_n]$, $\Delta'_{n-1}f\in(d')$ implies $f\in(d')$. So assume that $\Delta'_{n-1}f=gd'$ (*) for some $f,g\in K[{\rm SL}_n]$. If we take ${\bf a}\in K^n$ such that $a_n=(-1)^{n-1}$, then we have $x_{\bf a}\in{\rm SL}_n$, $d'(x_{\bf a})=1$ and $\Delta'_{n-1}(x_{\bf a})=0$. So $\Delta'_{n-1}$ does not divide $d'$. But then, by Lemma~\ref{lem.sln}, $\Delta'_{n-1}$ divides $g$. Cancelling a factor $\Delta'_{n-1}$ on both sides of (*), we obtain that $f\in(d')$. \end{proof} \subsection{Generators and relations and a $\mathbb Z$-form for $\tilde{Z_0}[z_{\varpi_1},\ldots,z_{\varpi_{n-1}}]Z_1$}\label{ss.presentation} \ \\ For the basics about monomial orderings and Gr\"obner bases I refer to \cite{CLO}. \begin{lemgl}\label{lem.leadmondetd} If we give the monomials in the variables $\xi_{ij}$ the lexicographic monomial ordering for which $\xi_{n\,n}>\xi_{n\,n-1}>\cdots>\xi_{n1}>\xi_{n-1\,n}>\cdots> \xi_{n-1\,1}>\cdots>\xi_{11}$, then $\det$ has leading term $\pm\xi_{n\,n}\cdots\xi_{2\,2}\xi_{1\,1}$ and $d$ has leading term $\pm\xi_{n\,n-1}^{n-1}\cdots\xi_{3\,2}^2\xi_{2\,1}$. \end{lemgl} \begin{proof} I leave the proof of the first assertion to the reader.
For the second assertion we use the notation and the formulas of Subsection~\ref{ss.slnd}. The leading term of a nonzero polynomial $f$ is denoted by ${\rm LT}(f)$. Let $i\in\{1,\ldots,n\}$ and $\Lambda\subseteq\{1,\ldots,n\}$ with $|\Lambda|=k\ge2$ and assume that $\partial_{i\,n}\big(\det(\mathcal{X}_{\Lambda,\,\Lambda})\big)\ne0$. Then $i,n\in\Lambda$. Now we use the fact that no monomial in $\partial_{i\,n}\big(\det(\mathcal{X}_{\Lambda,\,\Lambda})\big)$ contains a variable with row index equal to $i$ or with column index equal to $n$ or a product of two variables which have the same row or column index. First assume that $i>n-k+1$. Then $${\rm LT}(\partial_{i\,n}\big(\det(\mathcal{X}_{\Lambda,\,\Lambda})\big))\le\pm\xi_{n\,n-1}\cdots\xi_{i+1\,i}\xi_{i-1\,i-1}\cdots\xi_{n-k+1\,n-k+1}$$ with equality if and only if $\Lambda=\{n,n-1,\ldots,n-k+1\}$. Now assume that $i=n-k+1$. Then $${\rm LT}(\partial_{i\,n}\big(\det(\mathcal{X}_{\Lambda,\,\Lambda})\big))\le\pm\xi_{n\,n-1}\cdots\xi_{n-k+2\,n-k+1}$$ with equality if and only if $\Lambda=\{n,n-1,\ldots,n-k+1\}$. Finally assume that $i<n-k+1$. Then $${\rm LT}(\partial_{i\,n}\big(\det(\mathcal{X}_{\Lambda,\,\Lambda})\big))\le\pm\xi_{n\,n-1}\cdots\xi_{n-k+3\,n-k+2}\xi_{n-k+2\,i}$$ with equality if and only if $\Lambda=\{n,n-1,\ldots,n-k+2,i\}$. So for $i,k\in\{1,\ldots,n\}$ with $k\ge2$ we have: $${\rm LT}(\partial_{i n}s_k)= \begin{cases} \pm\xi_{n\,n-1}\cdots\xi_{i+1\,i}\xi_{i-1\,i-1}\cdots\xi_{n-k+1\,n-k+1} &\text{if $i+k>n+1$,}\\ \pm\xi_{n\,n-1}\cdots\xi_{n-k+2\,n-k+1} &\text{if $i+k=n+1$,}\\ \pm\xi_{n\,n-1}\cdots\xi_{n-k+3\,n-k+2}\xi_{n-k+2\,i} &\text{if $i+k<n+1$.} \end{cases} $$ In particular ${\rm LT}(\partial_{i n}s_k)\le\pm\xi_{n\,n-1}\cdots\xi_{n-k+1\,n-k+1}$ with equality if and only if $i+k=n+1$.
But then, by equation~\eqref{eq.d2}, ${\rm LT}(d)={\rm LT}(\partial_{n\,n}s_1){\rm LT}(\partial_{n-1\,n}s_2)\cdots{\rm LT}(\partial_{1\,n}s_n)$\\ $=\pm\xi_{n\,n-1}^{n-1}\cdots\xi_{3\,2}^2\xi_{2\,1}$ \end{proof} Recall that the degree reverse lexicographical ordering on the monomials $u^\alpha=u_1^{\alpha_1}\cdots u_k^{\alpha_k}$ in the variables $u_1,\ldots,u_k$ is defined as follows: $u^\alpha > u^\beta$ if $\deg(u^\alpha)>\deg(u^\beta)$ or $\deg(u^\alpha)=\deg(u^\beta)$ and $\alpha_i<\beta_i$ for the last index $i$ with $\alpha_i\ne\beta_i$. \begin{lemgl}\label{lem.leadmonfi} Let $f_i\in\mathbb{Z}[u_1,\ldots,u_{n-1}]$ be the polynomial such that ${\rm sym}(l\varpi_i)=$\\ $f_i({\rm sym}(\varpi_1),\ldots,{\rm sym}(\varpi_{n-1}))$. If we give the monomials in the $u_i$ the degree reverse lexicographic monomial ordering for which $u_1>\cdots>u_{n-1}$, then $f_i$ has leading term $u_i^l$. Furthermore, the monomials that appear in $f_i-u_i^l$ are of total degree $\le l$ and have exponents $<l$.\ \footnotemark \end{lemgl} \footnotetext{So our $f_i$ are related to the polynomials $P_i=x_i^l-\sum_\mu d_{i\mu}x_\mu$ from the proof of Proposition 6.4 in \cite{DCKP} as follows: $P_i=f_i(x_1,\ldots,x_{n-1})-{\rm sym}(l\varpi_i)$. In particular $d_{i0}={\rm sym}(l\varpi_i)$ and $d_{i\mu}\in{\mathbb Z}$ for all $\mu\in P\sm\{0\}$ (we are, of course, in the situation that ${\mf{g}}=\mf{sl}_n$).} \begin{proof} Let $\sigma_i$ be the $i$-th elementary symmetric function in the variables $x_1,\ldots,x_n$ and let $\lambda_i\in P=X(T)$ be the character $A\mapsto A_{ii}$ of $T$. Then ${\rm sym}(\varpi_i)=\sigma_i(e(\lambda_1),\ldots,e(\lambda_n))$ for $i\in\{1,\ldots,n-1\}$. So the $f_i$ can be found as follows. For $i\in\{1,\ldots,n-1\}$, determine $F_i\in\mathbb{Z}[u_1,\ldots,u_n]$ such that $\sigma_i(x_1^l,\ldots,x_n^l)=F_i(\sigma_1,\ldots,\sigma_n)$. Then $f_i=F_i(u_1,\ldots,u_{n-1},1)$. 
It now suffices to show that for $i\in\{1,\ldots,n-1\}$, $F_i-u_i^l$ is a $\mathbb Z$-linear combination of monomials in the $u_j$ that have exponents $<l$, are of total degree $\le l$ and that contain some $u_j$ with $j>i$ (the monomials that contain $u_n$ will become of total degree $<l$ when $u_n$ is replaced by 1). Fix $i\in\{1,\ldots,n-1\}$. Consider the following properties of a monomial in the $x_j$: \begin{enumerate}[(x1)] \item the monomial contains at least $i+1$ variables. \item the exponents are $\le l$. \item the number of exponents equal to $l$ is $\le i$. \end{enumerate} and the following properties of a monomial in the $u_j$: \begin{enumerate}[(u1)] \item the monomial contains a variable $u_j$ for some $j>i$. \item the total degree is $\le l$. \item the exponents are $<l$. \end{enumerate} Let $h$ be a symmetric polynomial in the $x_i$ and let $H$ be the polynomial in the $u_i$ such that $h=H(\sigma_1,\ldots,\sigma_n)$. Give the monomials in the $x_i$ the lexicographic monomial ordering for which $x_1>\cdots>x_n$. We will show by induction on the leading monomial of $h$ that if each monomial that appears in $h$ has property (x1) resp. property (x2) resp. properties (x1), (x2) and (x3), then each monomial that appears in $H$ has property (u1) resp. property (u2) resp. properties (u1), (u2) and (u3). Let $x^\alpha:=x_1^{\alpha_1}\cdots x_n^{\alpha_n}$ be the leading monomial of $h$. Then $\alpha_1\ge\alpha_2\ge\cdots\ge\alpha_n$. Put $\beta=(\alpha_1-\alpha_2,\ldots,\alpha_{n-1}-\alpha_n,\alpha_n)$. Let $k$ be the last index for which $\alpha_k\ne 0$. Then $\beta=(\alpha_1-\alpha_2,\ldots,\alpha_{k-1}-\alpha_k,\alpha_k,0,\ldots,0)$. If $x^\alpha$ has property (x1), then $k\ge i+1$, $u^\beta$ has property (u1) and the monomials that appear in $\sigma^\beta$ have property (x1), since $\sigma_k$ appears in $\sigma^\beta$.
If $x^\alpha$ has property (x2), then $\alpha_1\le l$, $u^\beta$ is of total degree $\alpha_1\le l$ and the monomials that appear in $\sigma^\beta$ have exponents $\le\beta_1+\cdots+\beta_k=\alpha_1\le l$. Now assume that $x^\alpha$ has properties (x1), (x2) and (x3). For $j<k$ we have $\beta_j=\alpha_j-\alpha_{j+1}<l$, since $\alpha_{j+1}\ne0$. So we have to show that $\beta_k=\alpha_k<l$. If $\alpha_k$ were equal to $l$, then we would have $\alpha_1=\cdots=\alpha_k=l$, by (x2). This contradicts (x3), since we have $k\ge i+1$ by (x1). Finally we show that the monomials that appear in $\sigma^\beta$ have property (x3). If $\alpha_1<l$, then all these monomials have exponents $<l$. So assume $\alpha_1=l$. Let $j$ be the smallest index for which $\beta_j\ne 0$. Then the number of exponents equal to $l$ in a monomial that appears in $\sigma^\beta$ is $\le j$. On the other hand $\alpha_1=\cdots=\alpha_j=l$. So we must have $j\le i$, since $x^\alpha$ has property (x3). Now we can apply the induction hypothesis to $h-c\sigma^\beta$, where $c$ is the leading coefficient of $h$. The assertion about $F_i-u_i^l$ now follows, because the monomials that appear in $\sigma_i(x_1^l,\ldots,x_n^l)-\sigma_i^l$ have the properties (x1), (x2) and (x3). \end{proof} From now on we denote $z_{\varpi_i}$ by $z_i$.\footnote{In \cite{DCKP} and \cite{DCP} $z_{\alpha_i}$ is denoted by $z_i$.} Let $\mathbb{Z}[{\rm SL}_n]$ be the $\mathbb Z$-subalgebra of $\mathbb{C}[{\rm SL}_n]$ generated by the $\xi_{ij}'$ and $A$ be the $\mathbb Z$-subalgebra of $Z$ generated by the $\tilde{\xi}_{ij}$. So $A=\pi^{co}(\mathbb{Z}[{\rm SL}_n])$. Let $B$ be the $\mathbb Z$-subalgebra generated by the elements $\tilde{\xi}_{ij}$, $u_1,\ldots,u_{n-1}$ and $z_1,\ldots,z_{n-1}$. For a commutative ring $R$ we put $A(R)=R\ot_{\mathbb Z}A$ and $B(R)=R\ot_{\mathbb Z}B$. Clearly we can identify $A({\mathbb C})$ with $\tilde{Z_0}$. 
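To illustrate the construction of the $f_i$ in Lemma~\ref{lem.leadmonfi}, consider the smallest interesting case $n=2$, $l=3$ (an illustration only, not part of the argument above):

```latex
% Toy case n = 2, l = 3.  With \sigma_1 = x_1 + x_2 and \sigma_2 = x_1 x_2:
\[
\sigma_1(x_1^3,x_2^3) = x_1^3 + x_2^3 = \sigma_1^3 - 3\sigma_1\sigma_2,
\qquad
\sigma_2(x_1^3,x_2^3) = (x_1x_2)^3 = \sigma_2^3,
\]
% so F_1 = u_1^3 - 3 u_1 u_2 and f_1 = F_1(u_1,1) = u_1^3 - 3 u_1.
% As the lemma asserts, f_1 has leading term u_1^l, and its remaining
% monomial u_1 has exponent < l and total degree < l.  Moreover
% f_1 \equiv u_1^3 \pmod{3}, which is the reduction f_i = u_i^l mod p
% used in Step 2 of the proof of Theorem \ref{thm.quf}.
```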
In the proposition below "natural homomorphism" means a homomorphism that maps $\xi_{ij}$ to $\tilde{\xi}_{ij}$ and, if this applies, the variables $u_i$ and $z_i$ to the equally named elements of $Z$. The polynomials $f_i$ below are the ones defined in Lemma~\ref{lem.leadmonfi}. \begin{propgl}\label{prop.presentation} The following holds: \begin{enumerate}[(i)] \item The kernel of the natural homomorphism from the polynomial algebra\\ $\mathbb{Z}[(\xi_{ij})_{ij},u_1,\ldots,u_{n-1},z_1,\ldots,z_{n-1}]$ to $B$ is generated by the elements\\ $\det-1,f_1-s_1,\ldots,f_{n-1}-s_{n-1},z_1^2-\Delta_1,\ldots,z_{n-1}^2-\Delta_{n-1}$. \item The homomorphism $B(\mathbb{C})\to Z$, given by the universal property of ring transfer, is injective. \item $A$ is a free $\mathbb{Z}$-module and $B$ is a free $A$-module with the monomials\\ $u_1^{k_1}\cdots u_{n-1}^{k_{n-1}}z_1^{m_1}\cdots z_{n-1}^{m_{n-1}}$, $0\leq k_i<l$, $0\leq m_i<2$ as a basis. \item $A[z_1,\ldots,z_{n-1}]\cap Z_1=A\cap Z_1={\mathbb Z}[\tilde{s}_1,\ldots,\tilde{s}_{n-1}]$ and $B\cap Z_1$ is a free $A\cap Z_1$-module with the monomials $u_1^{k_1}\cdots u_{n-1}^{k_{n-1}}$, $0\leq k_i<l$ as a basis. \end{enumerate} \end{propgl} \begin{proof} Let $Z_0'$ be the $\mathbb C$-subalgebra of $Z$ generated by the $\tilde{\xi}_{ij}$ and $z_1,\ldots,z_{n-1}$. As we have seen in Subsection~\ref{ss.Z0andZ1}, the $z_i$ satisfy the relations $z_i^2=\tilde{\Delta_i}$. The $\tilde{\Delta_i}$ are part of a generating transcendence basis of the field of fractions ${\rm Fr}(\tilde{Z_0})$ of $\tilde{Z_0}$ by arguments very similar to those at the end of the proof of Theorem~\ref{thm.qrat}. This shows that the monomials $z_1^{m_1}\cdots z_{n-1}^{m_{n-1}}$, $0\leq m_i<2$, form a basis of ${\rm Fr}(Z_0')$ over ${\rm Fr}(\tilde{Z_0})$ and of $Z_0'$ over $\tilde{Z_0}$. 
It follows that the kernel of the natural homomorphism from the polynomial algebra $\mathbb{C}[(\xi_{ij})_{ij},z_1,\ldots,z_{n-1}]$ to $Z_0'$ is generated by the elements $\det-1,z_1^2-\Delta_1,\ldots,z_{n-1}^2-\Delta_{n-1}$. So we have generators and relations for $Z_0'$. By the construction from Subsection~\ref{ss.Z0andZ1} we then obtain that the kernel $I$ of the natural homomorphism from the polynomial algebra $\mathbb{C}[(\xi_{ij})_{ij},u_1,\ldots,u_{n-1},z_1,\ldots,z_{n-1}]$ to $Z_0'Z_1$ is generated by the elements $\det-1,f_1-s_1,\ldots,f_{n-1}-s_{n-1},z_1^2-\Delta_1,\ldots,z_{n-1}^2-\Delta_{n-1}$. Now we give the monomials in the variables $(\xi_{ij})_{ij},u_1,\ldots,u_{n-1},z_1,\ldots,z_{n-1}$ a monomial ordering which is the lexicographical product of an arbitrary monomial ordering on the monomials in the $z_i$, the monomial ordering of Lemma~\ref{lem.leadmonfi} on the monomials in the $u_i$ and the monomial ordering of Lemma~\ref{lem.leadmondetd} on the $\xi_{ij}$.\ \footnote{so the $z_i$ are greater than the $u_i$ which are greater than the $\xi_{ij}$} Then the ideal generators mentioned above have leading monomials $\xi_{n\,n}\cdots\xi_{2\,2}\xi_{1\,1},$ $u_1^l,\ldots,u_{n-1}^l, z_1^2,\ldots,z_{n-1}^2$ and the leading coefficients are all $\pm1$. Since the leading monomials have gcd $1$, the ideal generators form a Gr\"obner basis; see \cite{CLO} Ch.~2 \S~9 Theorem~3 and Proposition~4, for example. Since the leading coefficients are all $\pm1$, it follows from the division with remainder algorithm that the ideal of $\mathbb{Z}[(\xi_{ij})_{ij},u_1,\ldots,u_{n-1},z_1,\ldots,z_{n-1}]$ generated by these elements consists of the polynomials in $I$ that have integral coefficients and that it has the $\mathbb Z$-span of the monomials that are not divisible by any of the above leading monomials as a direct complement. This proves (i) and (ii).\\ (iii).\ The canonical images of the above monomials form a $\mathbb Z$-basis of $B$. 
These monomials are the products of the monomials in the $\xi_{ij}$ that are not divisible by $\xi_{n\,n}\cdots\xi_{2\,2}\xi_{1\,1}$ and the restricted monomials mentioned in the assertion. The canonical images of the monomials in the $\xi_{ij}$ that are not divisible by $\xi_{n\,n}\cdots\xi_{2\,2}\xi_{1\,1}$ form a $\mathbb Z$-basis of $A$.\\ (iv).\ As we have seen, the monomials with exponents $<2$ in the $z_i$ form a basis of the $\tilde{Z_0}$-module $Z_0'$. So $A[z_1,\ldots,z_{n-1}]\cap \tilde{Z_0}=A$. Therefore, by Theorem~\ref{thm.coverHCcentre}(ii), $A[z_1,\ldots,z_{n-1}]\cap Z_1=A\cap Z_1=\pi^{co}({\mathbb Z}[{\rm SL}_n]^{{\rm SL}_n})$. Now $({\mathbb Z}P)^W={\mathbb Z}[{\rm sym}(\varpi_1),\ldots,{\rm sym}(\varpi_{n-1})]$ (see \cite{Bou1} no. VI.3.4, Thm. 1.) and the $s_i'$ are in ${\mathbb Z}[{\rm SL}_n]$, so ${\mathbb Z}[{\rm SL}_n]^{{\rm SL}_n}={\mathbb Z}[s_1',\ldots,s_{n-1}']$ by the restriction theorem for ${\mathbb C}[{\rm SL}_n]$. This proves the first assertion. From the proof of Theorem~\ref{thm.Z0andZ1} we know that the given monomials form a basis of $Z_1$ over $Z_0\cap Z_1$ and a basis of $Z$ over $Z_0$. So an element of $Z$ is in $Z_1$ if and only if its coefficients with respect to this basis are in $Z_0\cap Z_1$. The second assertion now follows from (iii). \end{proof} By (ii) of the above proposition we may identify $B({\mathbb C})$ with $\tilde{Z_0}[z_1,\ldots,z_{n-1}]Z_1$ and $B({\mathbb C})[\tilde{\Delta}^{-1}_1,\ldots,\tilde{\Delta}^{-1}_{n-1}]$ with $Z$. Put $\overline{Z}=Z/(\tilde{d})$. For the proof of Theorem~\ref{thm.quf} we need a version for $\overline{Z}$ of Proposition~\ref{prop.presentation}. First we introduce some more notation. For $u\in Z$ we denote the canonical image of $u$ in $\overline{Z}$ by $\overline{u}$. For $f\in\mathbb{C}[{\rm Mat}_n]$ we write $\overline{f}$ instead of $\overline{\tilde{f}}$. 
Let $\overline{A}$ be the $\mathbb Z$-subalgebra of $\overline{Z}$ generated by the $\overline{\xi}_{ij}$ and let $\overline{B}$ be the $\mathbb Z$-subalgebra generated by the elements $\overline{\xi}_{ij}$, $\overline{u}_1,\ldots,\overline{u}_{n-1}$ and $\overline{z}_1,\ldots,\overline{z}_{n-1}$. For a commutative ring $R$ we put $\overline{A}(R)=R\ot_{\mathbb Z}\overline{A}$ and $\overline{B}(R)=R\ot_{\mathbb Z}\overline{B}$. \renewcommand{\thepropnn}{$\overline{\text{\ref{prop.presentation}}}$} \begin{propnn}\label{prop.presentation(d)} The following holds: \begin{enumerate}[(i)] \item The kernel of the natural homomorphism from the polynomial algebra\\ $\mathbb{Z}[(\xi_{ij})_{ij},u_1,\ldots,u_{n-1},z_1,\ldots,z_{n-1}]$ to $\overline{B}$ is generated by the elements\\ $\det-1,d,f_1-s_1,\ldots,f_{n-1}-s_{n-1},z_1^2-\Delta_1,\ldots,z_{n-1}^2-\Delta_{n-1}$. \item The kernel of the natural homomorphism $\mathbb{Z}[{\rm Mat}_n]\to\overline{A}$ is $(\det-1,d)$. \item The homomorphism $\overline{B}(\mathbb{C})\to \overline{Z}$, given by the universal property of ring transfer, is injective. \item $\overline{A}$ is a free $\mathbb{Z}$-module and $\overline{B}$ is a free $\overline{A}$-module with the monomials\\ $\overline{u}_1^{k_1}\cdots \overline{u}_{n-1}^{k_{n-1}}\overline{z}_1^{m_1}\cdots \overline{z}_{n-1}^{m_{n-1}}$, $0\leq k_i<l$, $0\leq m_i<2$ as a basis. \item The $\overline{A}$-span of the monomials $\overline{u}_1^{k_1}\cdots \overline{u}_{n-1}^{k_{n-1}}$, $0\leq k_i<l$, is closed under multiplication. \end{enumerate} \end{propnn} \renewcommand{\thepropnn}{\!\!} \begin{proof} From Lemma~\ref{lem.slnd}(iii) we deduce that $\big(A({\mathbb C})[\tilde{\Delta}_1^{-1},\ldots,\tilde{\Delta}_{n-1}^{-1}]\tilde{d}\big)\cap A({\mathbb C})=A({\mathbb C})\tilde{d}$. 
From this it follows, using the $A({\mathbb C})$-basis of $B({\mathbb C})$, that $(Z\tilde{d})\cap B({\mathbb C})$, which is the kernel of the natural homomorphism $B({\mathbb C})\to\overline{Z}$, equals $B({\mathbb C})\tilde{d}$. From (i) and (ii) of Proposition~\ref{prop.presentation} or from its proof it now follows that the kernel of the natural homomorphism from the polynomial algebra\\ $\mathbb{C}[(\xi_{ij})_{ij},u_1,\ldots,u_{n-1},z_1,\ldots,z_{n-1}]$ to $\overline{Z}$ is generated by the elements $\det-1,d,f_1-s_1,\ldots,f_{n-1}-s_{n-1},z_1^2-\Delta_1,\ldots,z_{n-1}^2-\Delta_{n-1}$. Again using the $A({\mathbb C})$-basis of $B({\mathbb C})$ we obtain that $(B({\mathbb C})\tilde{d})\cap A({\mathbb C})=A({\mathbb C})\tilde{d}$. From this it follows that the kernel of the natural homomorphism $\mathbb{C}[{\rm Mat}_n]\to\overline{Z}$ is generated by $\det-1$ and $d$. By Lemma~\ref{lem.leadmondetd} we have ${\rm LT}(d)=\pm\xi_{n\,n-1}^{n-1}\cdots\xi_{3\,2}^2\xi_{2\,1}$ which has gcd $1$ with the leading monomials of the other ideal generators, so the ideal generators mentioned above form a Gr\"obner basis over $\mathbb Z$. Now (i)-(iv) follow as in the proof of Proposition~\ref{prop.presentation}.\\ (v).\ This follows from the fact that the remainder modulo the Gr\"obner basis of a polynomial in $\mathbb{Z}[(\xi_{ij})_{ij},u_1,\ldots,u_{n-1}]$ is again in $\mathbb{Z}[(\xi_{ij})_{ij},u_1,\ldots,u_{n-1}]$. \end{proof} By (ii) and (iii) of the above proposition $\overline{A}$ and $\overline{B}({\mathbb C})[\overline{\Delta}^{\ -1}_1,\ldots,\overline{\Delta}^{\ -1}_{n-1}]$ can be identified with respectively ${\mathbb Z}[{\rm Mat}_n]/(\det-1,d)$ and $\overline{Z}$. From (iv) it follows that, for any commutative ring $R$, $\overline{A}(R)$ embeds in $\overline{B}(R)$. \subsection{The theorem} \begin{lemgl}\label{lem.algnum} Let $A$ be an associative algebra with 1 over a field $F$ and let $L$ be an extension of $F$. 
Assume that for every finite extension $F'$ of $F$, $F'\ot_FA$ has no zero divisors. Then the same holds for $L\ot_FA$. \end{lemgl} \begin{proof} Assume that there exist $a,b\in L\ot_FA\sm\{0\}$ with $ab=0$. Let $(e_i)_{i\in I}$ be an $F$-basis of $A$ and let $c_{ij}^k\in F$ be the structure constants. Write $a=\sum_{i\in I}\alpha_ie_i$ and $b=\sum_{i\in I}\beta_ie_i$. Let $I_a$ resp. $I_b$ be the set of indices $i$ such that $\alpha_i\ne0$ resp. $\beta_i\ne0$ and let $J$ be the set of indices $k$ such that $c_{ij}^k\ne0$ for some $(i,j)\in I_a\times I_b$. Then $I_a$ and $I_b$ are nonempty and $I_a$, $I_b$ and $J$ are finite. Take $i_a\in I_a$ and $i_b\in I_b$. Since $ab=0$, the following equations over $F$ in the variables $x_i$, $i\in I_a$, $y_i$, $i\in I_b$, $u$ and $v$ have a solution over $L$: \begin{align*} &\sum_{i\in I_a, j\in I_b}c_{ij}^kx_iy_j=0\text{ for all }k\in J,\\ &x_{i_a}u=1, y_{i_b}v=1. \end{align*} But then they also have a solution over a finite extension $F'$ of $F$ by Hilbert's Nullstellensatz. This solution gives us nonzero elements $a',b'\in F'\ot_FA$ with $a'b'=0$. \end{proof} \begin{lemgl}\label{lem.modp} Let $R$ be the valuation ring of a nontrivial discrete valuation of a field $F$ and let $K$ be its residue class field. Let $A$ be an associative algebra with 1 over $R$ which is free as an $R$-module and let $L$ be an extension of $F$. Assume that for every finite extension $K'$ of $K$, $K'\ot_RA$ has no zero divisors. Then the same holds for $L\ot_RA$. \end{lemgl} \begin{proof} Assume that there exist $a,b\in L\ot_RA\sm\{0\}$ with $ab=0$. By the above lemma we may assume that $a,b\in F'\ot_RA\sm\{0\}$ for some finite extension $F'$ of $F$. Let $(e_i)_{i\in I}$ be an $R$-basis of $A$. Let $\nu$ be an extension to $F'$ of the given valuation of $F$, let $R'$ be the valuation ring of $\nu$, let $K'$ be the residue class field and let $\delta\in R'$ be a uniformiser for $\nu$. 
Note that $R'$ is a local ring and a principal ideal domain (and therefore a UFD) and that $K'$ is a finite extension of $K$ (see e.g. \cite{Cohn} Chapter 8 Theorem 5.1). By multiplying $a$ and $b$ by suitable integral powers of $\delta$ we may assume that their coefficients with respect to the basis $(e_i)_{i\in I}$ are in $R'$ and not all divisible by $\delta$ (in $R'$). By passing to the residue class field $K$ we then obtain nonzero $a',b'\in K'\ot_{R'}(R'\ot_RA)=K'\ot_RA$ with $a'b'=0$. \end{proof} \begin{remnn} The above lemmas also hold if we replace "zero divisors" by "nonzero nilpotent elements". \end{remnn} For $t\in\{0,\ldots,n-1\}$ let $\overline{B}_t$ be the $\mathbb Z$-subalgebra generated by the elements $\overline{\xi}_{ij}$, $\overline{u}_1,\ldots,\overline{u}_{n-1}$ and $\overline{z}_1,\ldots,\overline{z}_t$. So $\overline{B}_{n-1}=\overline{B}$. For a commutative ring $R$ we put $\overline{B}_t(R)=R\ot_{\mathbb Z}\overline{B}_t$. From (iv) and (v) of Proposition~\ref{prop.presentation(d)} we deduce that the monomials $\overline{u}_1^{k_1}\cdots \overline{u}_{n-1}^{k_{n-1}}\overline{z}_1^{m_1}\cdots \overline{z}_t^{m_t}$, $0\leq k_i<l$, $0\leq m_i<2$ form a basis of $\overline{B}_t$ over $\overline{A}$. So for any commutative ring $R$ we have bases for $\overline{B}_t(R)$ over $\overline{A}(R)$ and over $R$. Note that $\overline{B}_t(R)$ embeds in $\overline{B}(R)$, since the $\mathbb Z$-basis of $\overline{B}_t$ is part of the $\mathbb Z$-basis of $\overline{B}$. \begin{thmgl}\label{thm.quf} If $l$ is a power of an odd prime $p$, then $Z$ is a unique factorisation domain. \end{thmgl} \begin{proof} We have seen in Subsection~\ref{ss.nis2} that for $n=2$ it holds without any extra assumptions on $l$, so assume that $n\ge3$. For the elimination of variables in the proof of Theorem~\ref{thm.qrat} we only needed the invertibility of $\tilde{d}$, so $Z[\tilde{d}^{-1}]$ is isomorphic to a localisation of a polynomial algebra and therefore a UFD. 
So, by Nagata's lemma, it suffices to prove that $\tilde{d}$ is a prime element of $Z$, i.e. that $\overline{Z}=Z/(\tilde{d})$ is an integral domain. We do this in 5 steps. \noindent 1.\ $\overline{B}(K)$ is reduced for any field $K$. We may assume that $K$ is algebraically closed. Since $\overline{B}(K)$ is a finite $\overline{A}(K)$-module it follows that $\overline{B}(K)$ is integral over $\overline{A}(K)\cong K[{\rm Mat}_n]/(\det-1,d)$. So its Krull dimension is $n^2-2$. By Proposition~\ref{prop.presentation(d)}, $\overline{B}(K)$ is isomorphic to the quotient of a polynomial ring over $K$ in $n^2+2(n-1)$ variables by an ideal $I$ which is generated by $2n$ elements. So $\overline{B}(K)$ is Cohen-Macaulay (see \cite{Eis} Proposition 18.13). Let $\mathcal V$ be the closed subvariety of $n^2+2(n-1)$-dimensional affine space defined by $I$. By Theorem 18.15 in \cite{Eis} it suffices to show that the closed subvariety of $\mathcal V$ defined by the Jacobian ideal of $\det-1,d, f_1-s_1,\ldots,f_{n-1}-s_{n-1}, z_1^2-\Delta_1,\ldots,z_{n-1}^2-\Delta_{n-1}$ is of codimension $\ge1$. By Lemmas~\ref{lem.slnd} and \ref{lem.sln}, $(\det-1,d)$ is a prime ideal of $K[{\rm Mat}_n]$. So we have an embedding $K[{\rm Mat}_n]/(\det-1,d)\to K[\mathcal V]$ which is the comorphism of a finite surjective morphism of varieties ${\mathcal V}\to V(\det-1,d)$, where $V(\det-1,d)$ is the closed subvariety of ${\rm Mat}_n$ that consists of the matrices of determinant 1 on which $d$ vanishes. This morphism maps the closed subvariety of $\mathcal V$ defined by the Jacobian ideal of $\det-1,d, f_1-s_1,\ldots,f_{n-1}-s_{n-1}, z_1^2-\Delta_1,\ldots,z_{n-1}^2-\Delta_{n-1}$ into the closed subvariety of $V(\det-1,d)$ defined by the ideal generated by the $2n$-th order minors of the Jacobian matrix of $(s_1,\ldots,s_n, d, \Delta_1,\ldots,\Delta_{n-1})$ with respect to the variables $\xi_{ij}$.
This follows easily from the fact that $s_n=\det$ and that the $z_j$ and $u_j$ do not appear in the $s_i$ and $\Delta_i$. Since finite morphisms preserve dimension (see e.g. \cite{Eis} Corollary~9.3), it suffices to show that the latter variety is of codimension $\ge 1$ in $V(\det-1,d)$. Since $V(\det-1,d)$ is irreducible, this follows from Lemma~\ref{lem.jac}(ii). \noindent 2.\ $\overline{B}_0(K)$ is an integral domain for any field $K$ of characteristic $p$. We may assume that $K$ is algebraically closed. From the construction of the $f_i$ (see the proof of Lemma~\ref{lem.leadmonfi}) and the additivity of the $p$-th power map in characteristic $p$ it follows that $f_i=u_i^l$ mod $p$. So the kernel of the natural homomorphism from the polynomial algebra $K[(\xi_{ij})_{ij},u_1,\ldots,u_{n-1},z_1,\ldots,z_{n-1}]$ to $\overline{B}(K)$ is generated by the elements $\det-1,d,u_1^l-s_1,\ldots,u_{n-1}^l-s_{n-1}$ and the $\overline{A}(K)$-span of the monomials $\overline{u}_1^{k_1}\cdots \overline{u}_t^{k_t}$, $0\leq k_i<l$, is closed under multiplication for each $t\in\{0,\ldots,n-1\}$. We show by induction on $t$ that $\overline{B}_{0,t}(K):=\overline{A}(K)[\overline{u}_1,\ldots,\overline{u}_t]$ is an integral domain for $t=0,\ldots,n-1$. For $t=0$ this follows from Lemma~\ref{lem.slnd} and Proposition~\ref{prop.presentation(d)}(ii). Let $t\in\{1,\ldots,n-1\}$ and assume that it holds for $t-1$. Clearly $\overline{B}_{0,t}(K)=\overline{B}_{0,t-1}(K)[\overline{u}_t]\cong \overline{B}_{t-1}(K)[x]/(x^l-\overline{s}_t)$. So it suffices to prove that $x^l-\overline{s}_t$ is irreducible over the field of fractions of $\overline{B}_{0,t-1}(K)$. By the Vahlen-Capelli criterion or a more direct argument, it suffices to show that $\overline{s}_t$ is not a $p$-th power in the field of fractions of $\overline{B}_{0,t-1}(K)$. So assume that $\overline{s}_t=(v/w)^p$ for some $v,w\in\overline{B}_{0,t-1}(K)$ with $w\ne0$. Then we have $v^p=\overline{s}_tw^p=\overline{u}_t^lw^p$. 
So with $l'=l/p$, we have $(v-\overline{u}_t^{l'}w)^p=0$. But then $v-\overline{u}_t^{l'}w=0$ by Step 1. Now recall that $v$ and $w$ can be expressed uniquely as $\overline{A}(K)$-linear combinations of monomials in $\overline{u}_1,\ldots,\overline{u}_{t-1}$ with exponents $<l$. If such a monomial appears with a nonzero coefficient in $w$, then $\overline{u}_t^{l'}$ times this monomial appears with the same coefficient in the expression of $0=v-\overline{u}_t^{l'}w$ as an $\overline{A}(K)$-linear combination of restricted monomials in $\overline{u}_1,\ldots,\overline{u}_{n-1}$. Since this is impossible, we must have $w=0$. A contradiction. \noindent 3.\ $\overline{B}_0({\mathbb C})$ is an integral domain. This follows immediately from Step 2 and Lemma~\ref{lem.modp} applied to the $p$-adic valuation of $\mathbb Q$ and with $L=\mathbb C$. \noindent 4.\ $\overline{B}_t({\mathbb C})$ is an integral domain for $t=0,\ldots,n-1$. We prove this by induction on $t$. For $t=0$ it is the assertion of Step 3. Let $t\in\{1,\ldots,n-1\}$ and assume that it holds for $t-1$. Clearly $\overline{B}_t({\mathbb C})=\overline{B}_{t-1}({\mathbb C})[\overline{z}_t]\cong \overline{B}_{t-1}({\mathbb C})[x]/(x^2-\overline{\Delta}_t)$. So it suffices to prove that $x^2-\overline{\Delta}_t$ is irreducible over the field of fractions of $\overline{B}_{t-1}({\mathbb C})$. Assume that $x^2-\overline{\Delta}_t$ has a root in this field, i.e. that $\overline{\Delta}_t=(v/w)^2$ for some $v,w\in \overline{B}_{t-1}({\mathbb C})$ with $w\ne0$. By the same arguments as in the proof of Lemma~\ref{lem.algnum} we may assume that for some finite extension $F$ of $\mathbb Q$ there exist $v,w\in \overline{B}_{t-1}(F)$ with $w\ne0$ and $w^2\overline{\Delta}_t=v^2$. Let $\nu_2$ be an extension to $F$ of the $2$-adic valuation of $\mathbb Q$, let $S_2$ be the valuation ring of $\nu_2$, let $K$ be the residue class field and let $\delta\in S_2$ be a uniformiser for $\nu_2$. 
We may assume that the coefficients of $v$ and $w$ with respect to the ${\mathbb Z}$-basis of $\overline{B}_{t-1}$ mentioned earlier are in $S_2$. Assume that the coefficients of $w$ are all divisible by $\delta$ (in $S_2$). Then $w=0$ in $\overline{B}_{t-1}(K)$ and therefore $v^2=0$ in $\overline{B}_{t-1}(K)$. But by Step 1, $\overline{B}_{t-1}(K)$ is reduced, so $v=0$ in $\overline{B}_{t-1}(K)$ and all coefficients of $v$ are divisible by $\delta$. So, by cancelling a suitable power of $\delta$ in $w$ and $v$, we may assume that not all coefficients of $w$ are divisible by $\delta$. By passing to the residue class field $K$ we then obtain $v,w\in \overline{B}_{t-1}(K)$ with $w\ne0$ and $w^2\overline{\Delta}_t=v^2$. But then $(w\overline{z}_t-v)^2=0$ in $\overline{B}_t(K)$, since $\overline{z}_t^2=\overline{\Delta}_t$ and $K$ is of characteristic $2$. The reducedness of $\overline{B}_t(K)$ (Step 1) now gives $w\overline{z}_t-v=0$ in $\overline{B}_t(K)$. Now recall that $v$ and $w$ can be expressed uniquely as $\overline{A}(K)$-linear combinations of the monomials $\overline{u}_1^{k_1}\cdots \overline{u}_{n-1}^{k_{n-1}}\overline{z}_1^{m_1}\cdots \overline{z}_{t-1}^{m_{t-1}}$, $0\leq k_i<l$, $0\leq m_i<2$. We then obtain a contradiction in the same way as at the end of Step 2. \noindent 5.\ $Z/(d)$ is an integral domain. Since $\overline{Z}=\overline{B}({\mathbb C})[\overline{\Delta}^{\ -1}_1,\ldots,\overline{\Delta}^{\ -1}_{n-1}]$ and the $\overline{\Delta}_i$ are nonzero in $\overline{A}({\mathbb C})\cong {\mathbb C}[{\rm SL}_n]/(d')$ by Lemma~\ref{lem.slnd}, this follows from Step 4. \end{proof} \begin{remsnn}\ \\ 1.\ Note that we didn't prove that $\overline{B}(K)$ is an integral domain for $K$ some algebraically closed field of positive characteristic.\\ 2.\ To attempt a proof for arbitrary odd $l>1$ I have tried the filtration with $\deg(\xi_{ij})=2l$, $\deg(z_i)=li$ and $\deg(u_i)=2i$. 
But the main problem with this filtration is that it does not simplify the relations $s_i=f_i(u_1,\ldots,u_{n-1})$ enough. \end{remsnn}
\section{\label{sec:level1}Introduction} The elastic scattering of light heavy-ions at intermediate energies has clearly demonstrated the sensitivity at large angles to the underlying optical potential through the ``appearance'' in the angular distribution of Airy oscillations associated with nuclear rainbow scattering \cite{nova4}. Such studies pin down several aspects of the optical potential, usually constructed through the double-folding procedure. Among these aspects, we mention the degree of non-locality, genuine energy-dependence, the density dependence of the effective nucleon-nucleon G-matrix, etc. Recently, the elastic scattering of halo-type nuclei, such as $^{11}$Li, $^6$He, $^{11}$Be, $^{19}$C, at intermediate energies has been studied. In such cases, due to the low beam intensity, it is rather difficult to cover the Airy region. Thus, at most, one is bound to extract, from the small-angle near/far interference region, information about the strength of the coupling to the break-up channel. The energy dependence associated with the non-locality of the local equivalent optical potential is of paramount importance in such studies. In this work, we discuss the elastic scattering of stable, weakly bound and exotic nuclei on a variety of targets, both light and heavy, in order to assess the energy-dependence. For this purpose, we use the S\~{a}o Paulo potential and the Lax interaction, discussed in detail in Refs. \cite{4,10,15}. In section II, we give an account of the optical potential and its energy-dependence. In section III, we present the data analysis. Finally, in section IV, we present a summary and concluding remarks. \section{\label{sec:level2}The optical potential} Two different phenomena, called Pauli non-locality (PNL) and Pauli blocking (PB), are important for understanding the energy-dependence of the optical potential for heavy-ion systems.
The PNL arises from quantum exchange effects and has been studied in the context of neutron-nucleus \cite{1}, alpha-nucleus \cite{2} and heavy-ion \cite{4,10,3,5,6,7,8,9,11,12,13,14} collisions. The nonlocal interaction has been used in the description of the elastic scattering process through an integro-differential equation \cite{4,10,1}. It is possible to define a local-equivalent potential which, within the usual framework of the Schroedinger differential equation, reproduces the results of the integro-differential approach \cite{10,1}. In the case of heavy-ion systems, the real part of the local-equivalent interaction is associated with the double-folding potential ($V_F$) through \cite{10}: \begin{equation} V_{N}(R) = V_{F}(R)e^{-4v^2/c^2} \end{equation} where {\em $c$\/} is the speed of light and {\em $v$\/} is the local relative velocity between the two nuclei. This model is known as the S\~{a}o Paulo potential. The velocity/energy-dependence of the potential is very important to account for the data from near-barrier to intermediate energies \cite{4,10,3,5,6,7,8,9,11,12,13}. Eq. (1) describes the effect of the PNL on the real part of the potential, but the local-equivalent potential that arises from the solution of the corresponding integro-differential equation indicates that the exchange correlation also affects the imaginary part of the optical potential \cite{4}. Another model used in the analyses of elastic scattering data is the Lax interaction \cite{15}, which is the optical limit of the Glauber high-energy approximation \cite{16}. The Lax interaction is essentially a zero-range double-folding potential used for both the real and imaginary parts of the optical potential. Similar to the S\~{a}o Paulo potential, the Lax interaction is also dependent on the nuclear densities and may be expressed in terms of the relative velocity between the two nuclei.
The imaginary part of the Lax interaction is thus written as: \begin{equation} W(R) = -\frac{1}{2} \hbar v \int \sigma_{T}^{NN}(v) \rho_{1}(\vec{r}) \rho_{2}(\vec{r}-\vec{R}) \; d\vec{r} \end{equation} where $\sigma_{T}^{NN}(v)$ is an energy-dependent spin-isospin-averaged total nucleon-nucleon cross-section. Eq. (2) has been derived from multiple-scattering theories and should be valid for stable (non-exotic) nuclei at high energies. For lower energies, Eq. (2) must be corrected in order to take into account the closure of the phase space due to the Pauli exclusion principle. This phenomenon, known as Pauli blocking (PB), can be simulated in Eq. (2) by introducing a further dependence of $\sigma_{T}^{NN}$ on the densities of the nuclei \cite{15}. Usually, the PB is expected to be essential at small internuclear distances due to the corresponding large overlap of the nuclei. Since the PB can significantly distort the densities in the overlap region, it should affect both the real and imaginary parts of the optical potential. In this work, we have assumed these two semi-phenomenological models for the real (Eq. 1) and imaginary (Eq. 2) parts of the optical potential. We have analyzed several elastic scattering angular distributions for systems with reduced mass between 3 and 34. As extensively discussed in Ref. \cite{10}, Eq. (1) can be used in several different frameworks that provide very similar results in data analyses. In the present work, we use Eq. (1) within the zero-range approach for the effective nucleon-nucleon interaction ($v_{nn}(\vec{r})=V_0 \delta(\vec{r})$) with the matter densities assumed in the folding calculations (see \cite{10}). This approach is equivalent \cite{10} to the more usual procedure of using the M3Y nucleon-nucleon interaction with the nucleon densities of the nuclei. For $\sigma_{T}^{NN}$ in Eq. (2), we have interpolated values from the corresponding experimental results of Ref. \cite{nova2}.
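The two ingredients above can be sketched numerically. The following fragment is an illustration only, not the actual calculation of this work: the velocity factor $e^{-4v^2/c^2}$ of Eq. (1) is evaluated with the asymptotic non-relativistic relative velocity (whereas $v$ in Eq. (1) is the local velocity, which depends on the potential itself), and the folding integral of Eq. (2) is computed by brute-force quadrature with Gaussian densities, chosen only because their folding has a closed form; all parameter values ($\sigma_{T}^{NN}$, widths, $v/c$) are placeholders.

```python
import math
import numpy as np

AMU = 931.494       # MeV/c^2, atomic mass unit
HBAR_C = 197.327    # MeV fm

def sp_velocity_factor(e_lab_per_a, a_p, a_t):
    """exp(-4 v^2/c^2) of Eq. (1), with v the asymptotic non-relativistic
    relative velocity (the dependence of the local velocity on the
    potential itself is ignored in this sketch)."""
    e_cm = e_lab_per_a * a_p * a_t / (a_p + a_t)   # MeV, c.m. energy
    mu = AMU * a_p * a_t / (a_p + a_t)             # MeV/c^2, reduced mass
    return math.exp(-4.0 * 2.0 * e_cm / mu)        # 2 E_cm / mu = (v/c)^2

def lax_w(r_sep, a1, b1, a2, b2, sigma_nn, v_over_c, box=12.0, n=97):
    """Imaginary Lax potential of Eq. (2) by 3D quadrature with Gaussian
    densities rho_i(r) = A_i (pi b_i^2)^{-3/2} exp(-r^2/b_i^2); their
    folding has the closed form
    A1 A2 (pi (b1^2 + b2^2))^{-3/2} exp(-R^2 / (b1^2 + b2^2))."""
    x = np.linspace(-box, box, n)
    dx = x[1] - x[0]
    X, Y, Z = np.meshgrid(x, x, x, indexing='ij')
    rho1 = a1 * (np.pi * b1**2) ** -1.5 * np.exp(-(X**2 + Y**2 + Z**2) / b1**2)
    rho2 = a2 * (np.pi * b2**2) ** -1.5 * np.exp(
        -(X**2 + Y**2 + (Z - r_sep)**2) / b2**2)   # separation R along z
    overlap = np.sum(rho1 * rho2) * dx**3          # fm^-3
    return -0.5 * HBAR_C * v_over_c * sigma_nn * overlap   # MeV (sigma_nn in fm^2)

# illustrative numbers for 12C + 12C (placeholder sigma_nn, widths, v/c):
factor = sp_velocity_factor(100.0, 12, 12)  # suppression of V_F at 100 MeV/nucleon
w_4fm = lax_w(4.0, 12, 1.6, 12, 1.6, sigma_nn=4.0, v_over_c=0.3)
```

The factor decreases with bombarding energy, which is the weakening of the real part at intermediate energies discussed above, and the quadrature can be checked against the closed-form Gaussian overlap.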
\section{\label{sec:level3}Data Analysis} Table 1 presents all systems that have been analyzed in the present work. The data have been obtained from Refs. \cite{20,21,22,23,24,25,26,27,28}. Eqs. (1) and (2) involve the folding of the nuclear densities. In an earlier paper \cite{10}, we presented an extensive systematics of heavy-ion densities. In that work, the Fermi distribution was assumed to describe the densities. The systematics indicates that the radii of the matter distributions are well represented by: \begin{equation} R_0=1.31 A^{1/3} - 0.84 \; \mbox{fm}, \end{equation} where $A$ is the number of nucleons of the nucleus. The densities present an average diffuseness value of $a=0.56$ fm. Owing to specific nuclear structure effects (single-particle and/or collective), the parameters $R_0$ and $a$ show small variations around the corresponding average values throughout the periodic table. In the present work, we have assumed Eq. (3) for all nuclei and allowed $a$ to vary around its average value in order to obtain the best data fits. The only exception is the $^4$He nucleus, for which the shape of the corresponding matter density was assumed to be similar to the charge density obtained from electron scattering experiments \cite{19}. Of course, the use of a Fermi-type density is not well justified for light nuclei, both stable and unstable. However, we decided to use this universal form in order to assess the adequacy and limitations of our model. The values obtained for the diffuseness of the nuclei are shown in Table 2. In a consistent manner, these values present very small variations around the average value obtained in the previous systematics: $a=0.56$ fm. We emphasize the much greater value obtained for the diffuseness of the exotic $^6$He in comparison with its partner $^4$He. Indeed, the diffuseness of $^6$He is comparable with the values obtained for heavy ions. Similar results have already been observed in other works \cite{11,12}. 
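The Fermi parametrization and the radius systematics of Eq. (3) can be sketched as follows. This is an illustrative Python sketch (not the analysis code); the RMS radius of the two-parameter Fermi shape is obtained by direct quadrature.

```python
import math

def fermi_density(r, R0, a):
    # two-parameter Fermi (Woods-Saxon) shape, un-normalized
    return 1.0 / (1.0 + math.exp((r - R0) / a))

def radius_systematics(A):
    # Eq. (3): R0 = 1.31 A^(1/3) - 0.84 fm
    return 1.31 * A ** (1.0 / 3.0) - 0.84

def rms_radius(R0, a, rmax=25.0, n=5000):
    """RMS radius  sqrt( int rho r^4 dr / int rho r^2 dr )  by midpoint rule;
    the normalization of the density cancels in the ratio."""
    num = den = 0.0
    dr = rmax / n
    for i in range(n):
        r = (i + 0.5) * dr
        f = fermi_density(r, R0, a)
        num += f * r ** 4 * dr
        den += f * r ** 2 * dr
    return math.sqrt(num / den)

# illustrative: matter RMS radius for a hypothetical A = 58 nucleus
# with the average diffuseness a = 0.56 fm
r_rms = rms_radius(radius_systematics(58), 0.56)
```

Varying only $a$ around $0.56$ fm, as done in the fits, changes the surface tail of the density while Eq. (3) fixes the half-density radius.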
Table 2 also presents the root-mean-square (RMS) radii for the matter densities and for charge distributions extracted from electron scattering experiments \cite{19}. The RMS radii for the matter densities agree with the values for charge distributions within about 5\%, except for $^{12}$C, where a difference of 10\% was found. For the imaginary part of the optical potential we have adopted Eq. (2), without PB, multiplied by a normalization factor $N_I$. The corresponding predictions for some elastic scattering angular distributions are shown in Figs. (1-3). In these figures, the dashed lines represent the predictions with $N_I=1$ while the solid ones correspond to the results obtained considering $N_I$ as a free parameter. The best-fit values obtained for $N_I$ are presented in Table 1 and Fig. 4. A strong reduction of absorption is observed for systems with small reduced mass. As already commented in section I, PNL and PB affect both real and imaginary parts of the optical potential. The detected reduction of absorption could partially arise from the effect of the PB. In order to illustrate this point, Fig. (5 - Bottom) shows $W(R)$ obtained from Eq. (2), for the $^{12}$C + $^{12}$C system at two different energies, with (solid lines) and without (dashed lines) including the effect of PB. The calculation of $W(R)$ with PB was performed considering a density-dependent nucleon-nucleon total cross section according to Refs. \cite{15,20}. Clearly, the blocking reduces $W(R)$ in an internal region of distances and almost no effect is observed in the surface region. This behavior is connected with the large overlap of the densities for small distances that makes the blocking very effective. In Fig. (5 - Top) we show the results of a notch test, where we have included a spline with Gaussian shape in the imaginary potential and calculated the variation of the chi-square as a function of the position of this perturbation. 
This test has the purpose of determining the region of sensitivity that is relevant for the elastic scattering process. Just as a guide, the position of the s-wave barrier radius is also indicated in the same figure. In the region of sensitivity, the imaginary potential with PB is on average less intense than the result without blocking. This fact is probably connected with the value $N_I \approx 0.6$ obtained for the $^{12}$C + $^{12}$C system. Also in Fig. (5), one can observe that the region of sensitivity is more internal for the higher energy. However, the difference between $W(R)$ with and without blocking is smaller for the higher energy. Probably, these two effects cancel each other out and one obtains approximately the same $N_I$ value for the two energies (see Table 1). In Fig. (6) we present the reaction cross sections for the $^{12}$C + $^{12}$C system in a very wide energy range (from Refs. \cite{24,29,30,31,32,33,34}). The lines represent the predictions obtained with $N_I=1$ and $N_I=0.6$, where the smaller value was obtained from the elastic scattering data fits. Clearly, the value $N_I=0.6$ also provides a better reproduction of the reaction data in the energy region studied in this work: $25 \le E \le 120$ MeV/nucleon, which corresponds to $300 \le E_{Lab} \le 1440$ MeV for $^{12}$C + $^{12}$C. In Fig. (7), we present the notch test for systems with different reduced masses, but for approximately the same bombarding energy. Again as a guide, the positions of the corresponding s-wave barrier radii are indicated in the figure. For the heaviest system, the region of sensitivity is close to the barrier radius and therefore it is in the surface region. The lighter systems present sensitivity regions much more internal in comparison with the corresponding barrier radii. 
In fact, it is well known that the scattering between light heavy ions probes more efficiently the internal internuclear distance region \cite{nova4}, and the present results of the notch test just confirm this point. Thus, the simple approach of using Eqs. (1) and (2), with $N_I=1$, fails for lighter systems that are sensitive to inner distances. The discussion about Fig. 5 clearly shows that the effect of PB on the imaginary part of the potential should be partially responsible for this behavior of light systems, but the present analysis cannot discern whether such behavior is also connected with effects of PNL on the imaginary part and/or even of PB on the real part of the optical potential. On the other hand, considering our results for heavier systems, the analysis clearly indicates that the surface region of the optical potential is well represented by Eq. (1), for the real part that includes the PNL effect, and by Eq. (2), for the imaginary part without the PB effect. In order to check the consistency between the present and earlier works, we have calculated the volume integral of the real part of the potential, Eq. (4), and the reaction cross sections that are connected with the absorptive part of the potential. \begin{equation} J_R= \frac{4\pi}{A_1 A_2} \int V(R) R^2 dR \end{equation} The values obtained for $J_R$ and $\sigma_R$ (see Table 1) are similar to those of earlier works (from Refs. \cite{20,21,22,23,24,26,28,37,38,39,40,42,43,44}). In the present systematics we have included the weakly bound $^{6,7}$Li nuclei and also the exotic $^{6}$He. In some works, due to the break-up process or effects of the halo, these projectiles have been pointed out as responsible for a different behavior in comparison with nuclear reactions involving only normal stable nuclei. In fact, our systematics for $N_I$ also indicates a slightly greater absorption for systems involving $^{6,7}$Li in comparison with other systems with similar reduced masses (see Fig. 4). 
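The volume integral of Eq. (4) is a one-dimensional quadrature once $V(R)$ is known. The following Python sketch (ours, with a toy Gaussian potential standing in for the real part) illustrates the normalization by $A_1 A_2$:

```python
import math

def volume_integral(V, A1, A2, rmax=15.0, n=3000):
    """Eq. (4): J_R = (4 pi / A1 A2)  int V(R) R^2 dR  (MeV fm^3),
    evaluated by the midpoint rule on [0, rmax]."""
    total = 0.0
    dR = rmax / n
    for i in range(n):
        R = (i + 0.5) * dR
        total += V(R) * R * R * dR
    return 4.0 * math.pi * total / (A1 * A2)

# illustrative: toy attractive Gaussian potential for a 12C + 12C-like system
V_toy = lambda R: -50.0 * math.exp(-(R / 3.0) ** 2)
J_R = volume_integral(V_toy, 12, 12)
```

For the toy Gaussian the integral has the closed form $J_R=-\pi^{3/2}V_0 a^3/(A_1A_2)$, which the quadrature reproduces and which can be used to check the implementation.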
On the other hand, as already commented, the diffuseness obtained in this work for $^6$He is much greater than the value extracted for $^4$He from electron scattering experiments. \section{\label{sec:level4}Summary and Conclusions} In summary, we have analyzed elastic scattering angular distributions for several systems. The real part of the optical potential was assumed to be energy-dependent due to the PNL that arises from quantum exchange effects. For the imaginary part, we have adopted the Lax interaction, which also presents a very well established energy dependence connected with the total nucleon-nucleon cross section. In the imaginary part, we have also considered a normalization factor with the aim of simulating the effect of PB, which arises from the exclusion principle and prevents scattered nucleons from occupying filled states. The notch test indicated that lighter systems present greater sensitivity to the internal region of internuclear distances, where PB is expected to be essential due to the large overlap of the nuclei. For heavier systems it was possible to obtain a good data description with $N_I=1$. This result indicates that, in the surface region, the PNL and PB do not significantly affect the imaginary part of the potential, while Eq. (1) correctly describes the PNL effect on the real part. For lighter systems, however, reasonable accounts of the data were obtained only by considering $N_I$ as a free parameter. This result is compatible with the expected reduction of the absorption due to PB and also with the fact that light nuclei have a smaller density of states. The present analysis, however, cannot discern whether such behavior is also connected with effects of PNL on the imaginary part and/or even of PB on the real part of the optical potential. 
\begin{acknowledgments} This work was partially supported by Financiadora de Estudos e Projetos (FINEP), Funda\c{c}\~{a}o de Amparo \`a Pesquisa do Estado de S\~ao Paulo (FAPESP), and Conselho Nacional de Desenvolvimento Cient\'{\i}fico e Tecnol\'ogico (CNPq). \end{acknowledgments}
\section{Introduction}\label{s:Introduction} Flow control of turbulent flows, in particular with the purpose of drag reduction, is a recurrent research objective that has regained interest during the last decade \citep{Noack2019control}. Machine learning, in particular, plays a key role in the development of sophisticated and efficient flow-control algorithms \citep{BruntonNoackKoumoutsakos2020} and presents a possible solution to the difficulties imposed by the non-linearity, time-dependence, and high dimensionality inherent to the Navier-Stokes equations. Control of wake flows, in particular, has attracted significant interest due to its relevance in a wide variety of research and industrial applications. Control strategies commonly target two main types of drag: the skin friction drag, caused by the viscosity of the fluid interacting with the wall surface, and the wake drag, which originates behind the body. Passive drag reduction methods, such as the classical dimples on the surface of a sphere \citep{Bearman1993dimples} to delay transition or the use of splitter plates in the wake \citep{Ozono1999splitter} to suppress the vortex shedding, have proven quite successful. Nonetheless, during these last decades, hardware/software advances have been pushing towards active methods instead, which exploit the potential advantage of tuning the action according to the flow state. Active drag reduction techniques initially focused on increasing base pressure to reduce pressure drag, using transpiration and vibration techniques for such an objective. Continuous or pulsating base bleeds could be used to modify the flow in the separated region. For the latter, drag reduction could be achieved with zero net mass addition, with maximum effectiveness at a frequency twice the K\'arm\'an shedding frequency \citep{Williams1989}. 
Among the wide variety of active drag reduction techniques that have proven effective, small jets have been shown to be very efficient, enabling separation control with weak actuation \citep{glezer2011control}. The design of effective flow control techniques is a challenging objective, especially when the solution is based only on limited velocity or pressure data extracted from the fluid flow \citep{duriez2017book}. In its essence, the flow control problem can be described as a functional optimization problem, in which the state of the dynamical system has to be inferred from a limited number of observable quantities. The objective is to find a control function that minimizes (or maximizes) a cost (or reward) function, based on the desired features for the controlled flow configuration. Considering the categorization of control strategies, one of the main classifications is based on the existence of a model to describe the system to be controlled, distinguishing between model-based and model-free control. The latter encloses the kind of control strategies where an optimized control law is extracted without imposing any model of the dynamical system. The popularity of these approaches has been growing considerably in the last decade, thanks to the popularization of and advances in machine learning techniques. Two of the most prominent model-free control techniques from the machine learning literature are Reinforcement Learning \citep[RL]{sutton2018reinforcement} and Genetic Programming \citep[GP]{koza1994}. Reinforcement Learning is a learning technique focused on optimizing a decision-making process interactively, which makes it a preferred choice in flow control over its alternatives. On the other hand, Genetic Programming algorithms are focused on recombining good control policies by systematic testing, exploiting the ones with the best results and exploring possible alternatives in the solution space. 
Genetic Programming, originally pioneered by \citet{koza1994}, belongs to the family of Evolutionary Algorithms (EA), which share a common workflow: a population of individuals, called a generation, compete at a given task with a well-defined cost function, and evolve based on a set of rules, promoting the most successful strategies to the next generation \citep{banzhaf1998genetic, duriez2017book}. It constitutes a powerful regression technique able to re-discover and combine flow control strategies, which has been proven useful in the cases of multi-frequency forcing, direct feedback control and controls based on ARMAX (Autoregressive Moving Average eXogenous) models, without any physics information \citep{cornejomaceda2019pamm,cornejo2021gMLC}. Machine Learning Control (MLC) \citep{duriez2017book} based on tree-based Genetic Programming has been able to develop laws from small to moderate complexity, e.g. the phasor control, threshold-level based control, periodic or multi-frequency forcing, including jet mixing optimization with multiple independently unsteady minijets placed upstream of the nozzle exit \citep{zhou_artificial_2020}, the analysis of the effect of a single unsteady minijet for control \citep{wu_jet_2018}, drag reduction past bluff bodies \citep{li2019prf}, shear flow separation control \citep{gautier2015MLC}, reduction of vortex-induced vibrations of a cylinder \citep{Ren2019pof}, mixing layer control \citep{Parezanovic2016jfm}, and wake stabilization of complex shedding flows \citep{raibaudo2019pof}, among others. Some relevant improvements have been made to the MLC framework, such as the integration of a Linear-based Genetic Programming (LGP) algorithm \citep{li2017GP}, which is the chosen option in this study. 
Recently, a faster version of MLC based on the addition of intermediate gradient-descent steps between the generations (gMLC) has been developed \citep{cornejo2021gMLC}.

Reinforcement Learning is based on an agent learning an optimized policy based on the different inputs and outputs. The key feature of RL is that the only information available for the algorithm is given in the form of a penalty/reward concerning a certain action performed by the model, but no prior information is available on the best action to take. In Deep RL (DRL), the agent is modelled by an Artificial Neural Network (ANN) which needs to be trained \citep{rabault2019DRL}. ANNs have a long history of use to parametrize control laws \citep{Lee1997NN}, or to find the optimized flow control strategy for several problems such as swimming of fish schoolings \citep{gazzola2014RL}, control of unmanned aerial vehicles \citep{bohn2019DRL}, and optimization of glider trajectory taking ascendance \citep{reddy_learning_2016}. On the other hand, the exploitation of DRL specifically on flow control is relatively recent. The first applications targeted the control of the shedding wake of a cylinder in simulations \citep{rabault2019DRL,rabault2019JFM} and in experiments \citep{Fan2020}. DRL has been employed in tuning the heat transport in a two-dimensional Rayleigh–Benard convection \citep{Beintema2020}, in the control of the interface of unsteady liquid films \citep{Belus2019} and in stabilizing the wake past a cylinder by imposing a rotation on two control cylinders located at both sides \citep{Xu2020joh}. Recently, \citet{li2021ReLe} investigated how to use and embed physical information of the flow in the DRL control of the wake of a confined cylinder, and \citet{paris2021} explored the utilization of DRL for optimal sensor layout to control the flow past a cylinder. 
It can be argued that LGP and DRL share many similarities, up to the point in which many fitness functions in LGP can be considered as DRL systems \citep{Banzhaf1997}. The recent ongoing developments of DRL and LGP in flow control applications open up relevant questions on their applicability in experimental environments, where the number of sensors is limited and data are likely to be corrupted by noise. Furthermore, the generalization of the identified policies is often hindered by the challenges of interpreting the control laws. This work sheds light on the main features of Deep Reinforcement Learning and Linear Genetic Programming Control (LGPC) in this direction. To the authors' best knowledge, only the recent contribution by \citet{pino2022comparative} performs a comparative assessment of machine learning methods for flow control. That work focuses on the comparison of the relation of DRL and LGPC with optimal control for a reduced set of sensors. The robustness to noise and the effect of initial conditions of such algorithms are not discussed. The present study aims to address the performance of DRL and LGPC in the simple scenario of the control of the 2D shedding wake of a cylinder at a low Reynolds number under the conditions of a limited number of sensors. The robustness of both processes to noise contamination on the sensor data and to variable initial conditions for training individuals is assessed. Finally, an interpretation of the control actions using a cluster-based technique is provided. For this purpose, the same DRL framework and simulation environment used by \citet{rabault2019DRL} is considered, and compared against the LGPC environment developed by \citet{li2017GP}. It is to be noted that this contribution is a proof of concept. The simulation environment proposed by \citet{rabault2019DRL} was chosen given its simplicity and affordability for the extensive analysis herein presented. 
Nonetheless, the application of akin algorithms to similar environments, though with more challenging conditions, was investigated by \citet{Tang2020pof}, achieving a robust control in the flow past the confined cylinder at multiple Reynolds numbers and concluding that the drag reduction increases with the Reynolds number. Additionally, \citet{Ren2021pof} exploited the framework by \citet{rabault2019DRL} at weakly turbulent conditions (Re = 1000) with a drag reduction of 30\%. Nonetheless, in these studies, 236 and 151 probes are considered respectively, and the robustness to noise is not explored. The present article is structured as follows: Section \ref{s:Methodology} defines the main methodologies applied to implement the different machine learning models, the simulation environment and the problem description. Results are collected and described in Section \ref{s:result}, while the interpretation of the achieved controls and their performance is outlined in Section~\ref{s:interpretability}. Ultimately, the conclusions of the study are drawn in Section \ref{s:Conclusions}. \begin{figure*}[t] \centering \includegraphics[width=0.9\linewidth]{Figures/1.pdf} \caption{Description of the numerical setup adapted from \citet{rabault2019JFM}. (a) Sketch of the numerical domain and the non-structured mesh, defining the diameter $D=1$, length $L=20$, and height $H=4.1$. The sensor arrangement is shown for (b) 5 and (c) 11 probes, with probes identified as \sy{black}{o*} and actuators as \sy{redR}{t*}. (d) Detail of the cylinder and the jet actuators definition.} \label{fig:NumericalSetup} \end{figure*} \section{Methodology}\label{s:Methodology} This Section is focused on the main methodologies to define the environments in which the Machine Learning models will be tested, as well as the conditions imposed. 
Although the methodologies to be compared present important differences, they share several fundamental characteristics that must be highlighted before delving into each of the independent methods. \subsection{Simulation Environment} The active drag reduction is performed in a 2D Direct Numerical Simulation (DNS) environment that builds upon that of \citet{rabault2019JFM}, differing only in the sensor strategy. The environment is described here for completeness. The geometry of the simulation, adapted from state-of-the-art benchmarks \cite{schafer_benchmark_1996}, consists of a cylinder of diameter $D$ immersed in a box of total length $L=22D$ (along the $x$-axis) and height $H=4.1D$ (along the $y$-axis) as shown in figure~\ref{fig:NumericalSetup}(a). The inflow velocity (on the left wall of the domain) is modelled as a parabolic profile so that the mean velocity magnitude results in $U$. A no-slip boundary condition is imposed on the top and bottom walls of the channel, and also on the solid cylinder walls. An outflow Dirichlet boundary condition is imposed on the right wall of the domain. The Reynolds number based on the mean velocity magnitude and cylinder diameter ($Re=\frac{{U} D}{\nu}$, with $\nu$ the kinematic viscosity) is set to $Re=100$. This choice is based on the previous work by \citet{rabault2019JFM}, which was a pioneering application of DRL to flow control and is the baseline for comparison within this study. Working with higher Reynolds numbers would have implied a tremendous increase in computational cost, making unaffordable the whole set of simulations required for changing the initial conditions and assessing the noise robustness that will be discussed in the following. The control action is performed by two jets (1 and 2) controlled through a non-dimensional mass flow rate $Q_{\rm jet}$ by imposing a parabolic velocity profile with a jet width of $\omega = 10^\circ$. 
The jets are perpendicular to the cylinder wall and located at angles $\theta_1 = 90^\circ$ and $\theta_2 = 270^\circ$ relative to the flow direction as shown in figure~\ref{fig:NumericalSetup}(d), which guarantees that all the promoted drag reduction is the result of indirect flow control, rather than direct injection of momentum~\citep{rabault2019JFM}. To prevent numerical instability while presenting a more realistic scenario, the total mass flow rate injected by the jets is zero, i.e. $Q_{\rm jet_1} = -Q_{\rm jet_2}$. Note, however, that the cylinder does not present physical cavities on its surface, meaning that there is no physical interference of the jet slot with the flow field. The simulation environment is based on the open-source finite-element framework FEniCS \cite{logg2012book} version 2017.2.0, solving the unsteady Navier–Stokes equations by DNS. Computations are performed on an unstructured mesh generated with Gmsh \cite{geuzaine2009gmsh}. The mesh is refined around the cylinder and is composed of $9262$ triangular elements (see figure~\ref{fig:NumericalSetup}(a)). A non-dimensional, constant numerical time step $dt=5 \times 10^{-3}$ is used. The CFL condition is enforced in the problematic zones, that is, close to the actuation jets, by imposing a maximum jet mass flow rate ($|Q_{\rm jet}|<Q_{\rm jet_{max}}$). The flow control framework developed by \citet{rabault2019JFM} was conceived to work with either velocity or pressure probes as sensing. Since pressure probes are often more difficult to install in a customary location in an experimental application, it was preferred to choose the velocity probes, which resemble what could be extracted from hot-wire anemometry (HWA). Two sensor configurations are considered in the present study, with $5$ and $11$ probes which report the local value of the horizontal and vertical components of the velocity field (see figure~\ref{fig:NumericalSetup}(b) and \ref{fig:NumericalSetup}(c), respectively). 
The probes are located in the wake of the cylinder to enable the controller to learn from the vortex shedding pattern. \subsection{Formulation of the optimization problem} \begin{figure*} \centering \includegraphics[width=0.9\linewidth]{Figures/2.pdf} \caption{Implementation of the control algorithms. (a) Sketch of the closed-loop control based on Deep Reinforcement Learning. (b) Sketch of the closed-loop control based on Linear Genetic Programming.} \label{fig:LearningLoop} \end{figure*} The drag reduction of a 2D cylinder wake flow is a classical optimization problem with a simple target, i.e. reducing drag. The control problem is formulated as a regression problem, i.e. to find the control law which optimizes a given cost function $J$ \citep{duriez2017book}. The proposed cost function has been shaped as a combination of drag and lift coefficients, modifying the one proposed by \citet{rabault2019JFM} as follows: \begin{equation} J= 1+\left\langle C_{D}\right\rangle_{T} - \left\langle C_{D_0}\right\rangle_{T}+0.2\left|\left\langle C_{L}\right\rangle_{T}\right| \label{Eq:J} \end{equation} where $\langle C_{D_0}\rangle_T = 3.206$ is the drag coefficient of the unforced flow, and $\langle \cdot \rangle_T$ indicates the sliding average back in time over a duration corresponding to one vortex shedding cycle $T$ of the unforced vortex shedding flow. The cost function in equation~\ref{Eq:J} has been shown to perform better than using just the instantaneous drag coefficient, i.e. $J(t) = C_D(t)$. First, the average values of the lift and drag coefficients over one vortex shedding cycle reduce the oscillations of the objective function, which has been reported to improve learning speed and stability \citep{rabault2019JFM}. Secondly, the drag reduction is defined as the increment with respect to the unforced flow ($\Delta C_D = \left\langle C_{D}\right\rangle_{T} - \left\langle C_{D_0}\right\rangle_{T}$), which is intended to be as negative as possible. 
Thirdly, a penalization term based on the lift coefficient is considered to prevent the controller from finding undesired asymmetric solutions \citep{rabault2019JFM}. Finally, adding unity to the cost function provides a bias to prevent negative $J$ values that could affect the convergence of the learning process. Ultimately, the algorithm tries to minimise $J$, by reducing drag while maintaining low lift components. From an optimization perspective, the resultant control is the strategy that minimizes the cost function with a control law $\bm{b}(t) = {K}(\bm{s}(t))$, where $\bm{b}(t) = (b_1(t), ...,b_{N_b}(t))^T$ comprises $N_b$ actuation commands, and $\bm{s}(t) = (s_1(t), ..., s_{N_s}(t))^T$ consists of $N_s$ sensor signals. In this study, the actuation $b$ is performed with two jets at the bottom and top sides of the cylinder, which are related through the net zero flux condition ($N_b = 1$), using velocity probes ($N_s = 5$ or $11$) as sensing. The control problem is equivalent to finding ${K}^{*}$ such that \begin{equation}\label{eq:Optproblem} \begin{aligned} {K}^{*}(\bm{s}) = \underset{{K}}{\arg\min}~~ J({K}(\bm{s})) \end{aligned} \end{equation} The optimized feedback ${K}^{*}$ is computed following LGPC and DRL, as described in the following sections. The resultant control laws map $N_s$ sensor signals into $N_b$ actuation commands. The resultant control action is subjected to a smoothing operation to ensure continuous control signals without abrupt alterations in the pressure and velocity due to the use of an incompressible solver. The control action is then adjusted from one time step in the simulation to the next by \begin{equation}\label{eq:Qsmooth} Q_{\rm jet}(t)=Q_{\rm jet}(t-dt)+\alpha\left[{b}(t)-Q_{\rm jet}(t-dt)\right], \end{equation} with $\alpha=0.1$ a numerical parameter, $Q_{\rm jet}(t)$ the jet actuation used by the plant at time instant $t$, and ${b}(t)$ the actuation proposed by the machine learning model at time instant $t$. 
\subsection{Deep Reinforcement Learning} Deep Reinforcement Learning (DRL) is an ML control algorithm in which an agent (managed by an ANN) learns the best control by interacting with the environment to exchange information in a closed-loop process (see figure~\ref{fig:LearningLoop}(a)). In the present work, the plant or environment is the above-mentioned simulation environment, which interacts with the agent through three channels: the observation or sensor state, $\bm{s}$ ($N_s$ point measurements of velocity); the action, $b$ ($N_b$ values of mass flow rate to impose on the jets); and the reward, $r$ (the cost function in equation \ref{Eq:J} based on $C_D$ and $C_L$, i.e. $r=J$). Based on the sensing data and the reward of the current state, DRL trains an ANN to find the optimized closed-loop control strategies that maximize the expected reward. The DRL framework is the same as in \citet{rabault2019JFM}, in which the agent uses the proximal policy optimization (PPO) method \citep{schulman_proximal_2017} to perform the learning. The PPO method is episode-based, so that it iteratively learns by applying a certain control for a limited amount of time (episode duration) before analysing the rewards and sensors, and resuming learning with a new episode. The considered architecture for the ANN is relatively simple, being composed of two dense layers of $512$ fully-connected neurons, the input layer to acquire data from the probes, and the output layer to generate data for the two jets. For more details, readers are referred to \citet{rabault2019JFM}. The training loop of DRL is sketched in Figure \ref{fig:LearningLoop}(a). At the beginning of the learning process, the PPO explores purely random controls to assess the values of the reward function. This initial approach makes it difficult to learn the necessity of setting time-correlated, continuous control signals. 
To solve this issue, \citet{rabault2019JFM} implemented the agent such that the control value provided by the network is kept constant for a duration of $50$ numerical time steps. Therefore, the PPO agent interacts and updates the ANN coefficients only every $50$ time steps, which is the duration of a fixed actuation. This numerical trick, together with the smoothing described in equation \ref{eq:Qsmooth}, provides a continuous control signal. \subsection{Linear Genetic Programming} Linear Genetic Programming \citep{Wahde2008book} is an evolutionary algorithm that applies biologically-inspired operations to select the fittest individuals for a given purpose. The control laws are effectively mappings between the outputs (sensor information) and the inputs (actuation) of a dynamical system. In the following, the control laws are also referred to as \textit{individuals} to comply with the evolutionary terminology. LGP is able to learn control laws in a model-free manner, optimizing both the structure of the function and its parameters. In practice, the control laws are internally represented by a matrix encoding a list of instructions. Each row of the matrix codes for a mathematical operation from a set of input registers, constants and operations and stores the result in a memory register. The matrix is then read linearly, sequentially modifying the memory registers, hence the name of the method. The control law is then read in the first register. LGPC is here selected as the preferred option for its simpler implementation of the genetic operators for Multiple-Input-Multiple-Output control \citep{cornejomacedaPhD}. The learning process, sketched in Figure \ref{fig:LearningLoop}(b), is divided into an outer loop devoted to evolving the generations and an inner loop to evaluate all the individuals in a real-time control process. 
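The register-machine reading of the instruction matrix described above can be illustrated with a minimal interpreter. This Python sketch is ours (not the LGPC implementation); the instruction encoding, register count, and protected division are illustrative assumptions:

```python
import math

def run_program(instructions, sensors, n_registers=8):
    """Interpret a linear program: each instruction is a row
    (dest, op, src1, src2); sources index a register file initialized
    with the sensor values. The rows are executed sequentially and the
    control law is read in the first register, as in LGP."""
    r = [0.0] * n_registers
    r[:len(sensors)] = sensors
    ops = {
        '+': lambda a, b: a + b,
        '-': lambda a, b: a - b,
        '*': lambda a, b: a * b,
        '/': lambda a, b: a / b if abs(b) > 1e-12 else 1.0,  # protected division
        'sin': lambda a, b: math.sin(a),    # unary: second source ignored
        'tanh': lambda a, b: math.tanh(a),
    }
    for dest, op, s1, s2 in instructions:
        r[dest] = ops[op](r[s1], r[s2])
    return r[0]
```

Crossover and mutation then act directly on the rows of this instruction list, which is what makes the linear representation convenient for the genetic operators.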
First, an initial population of individuals is randomly generated and evaluated with a Monte-Carlo optimization to explore the control law space. We recall that the individuals are analytical functions of the input data, i.e. the velocity sensor signals. A measure of the performance of each individual is given by its cost $J$ (equation~\ref{Eq:J}). Once the entire population is evaluated, the next generation of individuals is created with genetic operators (crossover, mutation, replication) applied to the best-performing individuals. The best individuals are selected with the tournament selection method. Crossover combines two individuals and generates a new pair of individuals by randomly exchanging their instructions. This operation contributes to the exploitation of the learned data by recombining well-performing individuals. The mutation operation randomly modifies elements of one given control law to explore potentially new and better minima. Replication is the memory operator: it ensures that good structures are not lost in the evolution process. Finally, an elitism operation carries the best individuals of one generation over to the next, ensuring that the performance does not degrade from one generation to the next. The genetic operators (crossover, mutation and replication) are chosen following respective probabilities ($P_c,P_m,P_r$). The process is repeated for every new generation until the stopping criterion is met. In this study, all the training processes have been performed for a fixed number of generations $N_{g} = 15$, as explained later. Among the wide variety of custom settings when dealing with LGPC, the most relevant parameters for proper performance and convergence of the algorithm are the population size, the number of generations, the tournament selection size and the genetic operators' probabilities ($P_c,P_m,P_r$). LGPC parameters are chosen following the recommendations of \citet{duriez2017book} and \citet{li2017GP}.
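The selection-and-evolution step described above can be sketched as follows, a minimal illustration with the probabilities and tournament size listed in table~\ref{tab:GAparameters}; `crossover` and `mutate` are placeholders for the actual LGPC operators.

```python
import random

def tournament(population, costs, size=7):
    """Pick the lowest-cost individual among `size` random contestants."""
    contestants = random.sample(range(len(population)), size)
    winner = min(contestants, key=lambda i: costs[i])
    return population[winner]

def next_generation(population, costs, crossover, mutate,
                    p_c=0.6, p_m=0.3, p_r=0.1, n_elite=1):
    """Build the next generation with elitism and probabilistic operators."""
    ranked = sorted(range(len(population)), key=lambda i: costs[i])
    new_pop = [population[i] for i in ranked[:n_elite]]   # elitism
    while len(new_pop) < len(population):
        r = random.random()
        if r < p_c:                                       # crossover
            a, b = (tournament(population, costs) for _ in range(2))
            new_pop.extend(crossover(a, b))               # returns a new pair
        elif r < p_c + p_m:                               # mutation
            new_pop.append(mutate(tournament(population, costs)))
        else:                                             # replication
            new_pop.append(tournament(population, costs))
    return new_pop[:len(population)]
```

Since $P_c + P_m + P_r = 1$, a single uniform draw selects the operator; elitism guarantees that the best individual of each generation survives unchanged.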
They are summarized in table~\ref{tab:GAparameters}. \begin{table}[h] \centering \begin{tabular}{lc} \toprule Number of controllers & 1 \\ Number of sensors & 5,11 \\ Population size & 100 \\ Number of generations & 15 \\ Tournament selection size & 7 \\ Crossover probability & 0.6 \\ Mutation probability & 0.3 \\ Replication probability & 0.1 \\ Elitism & 1 \\ Operations & $+$, $-$, $\times$, $\div$, $\sin$, $\cos$, $\tanh$ \\ \bottomrule \end{tabular} \caption{Selection of LGPC parameters.} \label{tab:GAparameters} \end{table} An important difference between the DRL and the LGPC algorithms is related to their learning process during an episode. In DRL, the agent builds its internal representation of how the flow in a given state will be affected by actuation, and how this will affect the reward value. This is done globally at the end of the episode and also each time the actuation changes. The agent is updated according to the expected reward (total or partial, respectively), which accounts not only for the immediate value just after the actuation, but also for the medium/long-term reward. This means that the neural network underlying the DRL agent is capable of learning both from the committed errors and from the future effect associated with each actuation. In LGPC, there is no change of the actuation law during the simulation run, as it corresponds to a single individual mapping the inputs to the outputs. It is therefore interesting to incorporate time-delayed sensor signals. Following \citet{cornejo2021gMLC}, the values of the probes at $1/4, 1/2$ and $3/4$ of the shedding period are included, as well as the instantaneous value. Assuming a periodic flow, the addition of such time delays enables the reconstruction of the flow phase. \subsection{Training standards} The training process of both DRL and LGPC is performed with the same parameters, which were chosen based on the recommendations by \citet{rabault2019JFM} and extensive empirical analysis.
The duration of the simulation (or episode duration, according to DRL nomenclature) is set to $T_{sim} = 20.0$, which translates into approximately 6 vortex shedding periods, and corresponds to 4000 numerical time steps. Note, however, that the cost function in equation~\ref{Eq:J} is evaluated over the last shedding period (650 numerical time steps), in which a new steady state is expected to be reached upon actuation. Regarding the action, the DRL agent adjusts the policy every $50$ numerical time steps, which means that the control is updated $80$ times during the episode. The transition from the current action to the updated following action is smooth and continuous based on the smoothing described in equation~\ref{eq:Qsmooth}. On the other hand, LGPC provides analytical control laws that are continuous and fully dependent on the sensing data, which means that the control action is adjusted every numerical time step, with mild variations thanks to the smoothing. Given the intrinsic differences between LGPC and DRL, a training standard must be set to guarantee a fair comparison. The criterion followed is to keep the same computational time during the training process. The PPO agent is able to learn a fully-stabilized control after approximately $400$ epochs (corresponding to $32000$ sample actions); however, the convergence rate is lower when applying noise to the probes. On the other hand, for LGPC with a pool size of $100$ individuals per generation, a converged control law is achieved before the $10^{th}$ generation both with and without noise consideration. Both for LGPC and DRL, it is straightforward to assess that the main computational cost comes from the plant, i.e. from the fluid mechanic simulation of the 2D cylinder wake. The evaluation of the sensing, the update of the agent and the generation of new individuals are operations with negligible time consumption.
Based on these figures of merit, it was decided to set the training duration to $1500$ episodes in the case of DRL and $15$ generations of $100$ individuals in the case of LGPC. This common criterion guarantees that the computational effort is the same for both algorithms, since a total of $1500$ simulations are launched for each method. For the investigation of robustness to noise, a perturbation with Gaussian distribution is added to the probes such that the input used by the DRL agent or the LGPC control laws is altered. The noise implementation is the following, \begin{equation} u_i(t) \leftarrow u_i(t) + \varepsilon \, \digamma_i(t) \qquad \forall \, i = 1,2, \dots , N_s, \end{equation} where $u_i(t)$ is the velocity value, $\digamma_i(t)$ is a normally distributed random value for probe $i$ at time instant $t$, and $\varepsilon$ is the noise level or intensity. Three noise levels are considered, i.e. $1\%,5\%$ and $10\%$ of the freestream velocity. In contrast, the mean state quantities used to compute the cost function (i.e., $C_D$ and $C_L$) are not altered by noise, since the averaging operator in the cost function would render the noise of minor relevance. \subsection{Control law visualization}\label{Sec:Method_ControlLawVisu} \begin{figure} \centering \includegraphics[width=\linewidth]{Figures/3.pdf} \caption{Control interpretation methodology. The sensor vector is built from the vertical and horizontal components of the measured velocity. The elements of the sensor vector are grouped in clusters (three clusters (1,2,3) are presented here for clarity). High probability transitions are depicted with darker colours. $\gamma_1$ and $\gamma_2$ define the projection plane for the proximity map. The actuation magnitude is represented by rectangles in the network model; red (blue) for a positive (negative) actuation with respect to the mean value.
See the text for more details.} \label{fig:ApproxMethod} \end{figure} A cluster-based interpretation method is considered to gain insight into the actuation mechanisms involved in the control learned by DRL and LGPC. Cluster-based methods have recently been applied to build network models able to reproduce the main characteristics of fluid flows and dynamical systems, such as temporal evolution and fluctuation levels \citep{Fernex2021,LiH2021jfm}. Understanding the relationship between the control inputs and the corresponding actuation command is not an easy task, especially with a large number of inputs. Thus, clustering is employed to extract representative states from the sensors. Figure~\ref{fig:ApproxMethod} summarizes the main steps of the control interpretation methodology described below. For this study, the metric employed for the cluster analysis is the one induced by the $L^2$ norm. The sensor time series are reduced to $10$ clusters. For each cluster, its centroid is computed as the average of all states in the cluster. The $10$ centroids then represent the main states of the flow. Moreover, the transition information from one flow state to another is gathered in a probability transition matrix ($P=[p_{i,j}]_{i,j}$) that encodes the probability of jumping from one cluster to another. The probability of transition from cluster $i$ to cluster $j$ is defined by $p_{i,j}={n_{i,j}}/{n_i}$, where $n_{i,j}$ is the number of states from cluster $i$ that transition to cluster $j$ and $n_i$ is the total number of states in cluster $i$. The centroids combined with the transition matrix allow for building a network model of the controlled flow based only on the controller inputs. On the other hand, the sensor time series are projected on a 2D proximity map with classical multidimensional scaling \citep[MDS;][]{Kaiser2017ifac,LiA2022jfm,foroozan2021unsupervised}.
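The transition-matrix construction $p_{i,j}=n_{i,j}/n_i$ defined above can be assembled directly from the time-ordered sequence of cluster labels; a minimal sketch, assuming the labels are precomputed (e.g. by $k$-means under the $L^2$ metric stated above):

```python
import numpy as np

def transition_matrix(labels, n_clusters=10):
    """P[i, j] = n_ij / n_i from a time-ordered cluster-label sequence.

    n_ij counts consecutive pairs (labels[k] = i, labels[k+1] = j);
    rows of clusters that are never left are filled with zeros.
    """
    counts = np.zeros((n_clusters, n_clusters))
    for i, j in zip(labels[:-1], labels[1:]):
        counts[i, j] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums,
                     out=np.zeros_like(counts), where=row_sums > 0)
```

Each row of $P$ is then a discrete probability distribution over the arrival clusters, which is what the network model in figure~\ref{fig:ApproxMethod} visualizes.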
MDS is a powerful tool for dimensionality reduction that projects the data onto the two directions ($\gamma_1$,$\gamma_2$) of maximum dispersion of the feature vector distance matrix. The distance matrix is computed with the same metric as the clustering. Combining the network model and the proximity map allows a reconstruction of the flow phase space. Finally, the average actuation performed in each cluster is associated with its centroid. The resulting 2D visualization represents the dynamics of the flow alongside the actuation performed, allowing an easy interpretation of the control actuation mechanism. \begin{table*}[t] \centering \begin{tabular}{>{\centering}p{0.125\linewidth-2\tabcolsep} >{\centering}p{0.125\linewidth-2\tabcolsep} >{\centering}p{0.125\linewidth-2\tabcolsep} >{\centering}p{0.125\linewidth-2\tabcolsep} >{\centering}p{0.125\linewidth-2\tabcolsep} >{\centering}p{0.125\linewidth-2\tabcolsep} >{\centering\arraybackslash}p{0.125\linewidth-2\tabcolsep}} \toprule Algorithm & Probes & Noise & $\langle J \rangle$ & $\langle C_D \rangle$ & $\langle C_L \rangle$ & $std(Q_{jet}) (\times 10^4)$ \\ \midrule \multicolumn{3}{c}{-- No Control --} & 1.160 & 3.206 & 0.022 & 0 \\ \midrule DRL & 5 & 0\% & 0.992 & 3.085 & -0.207 & 5.744 \\ DRL & 11 & 0\% & 0.807 & 2.958 & 0.138 & 6.526 \\ DRL & 11 & 1\% & 0.808 & 2.961 & -0.095 & 6.385 \\ DRL & 11 & 5\% & 0.811 & 2.961 & 0.001 & 8.828 \\ DRL & 11 & 10\% & 0.843 & 2.982 & 0.043 & 10.382 \\ \midrule LGPC & 5 & 0\% & 0.966 & 3.065 & -0.271 & 10.225 \\ LGPC & 11 & 0\% & 0.846 & 2.954 & 0.058 & 20.285 \\ LGPC & 11 & 1\% & 0.797 & 2.946 & -0.129 & 12.912 \\ LGPC & 11 & 5\% & 0.984 & 2.997 & 0.233 & 24.446 \\ LGPC & 11 & 10\% & 0.901 & 2.984 & 0.155 & 12.246 \\ \bottomrule \end{tabular} \caption{Summary of results.} \label{tab:SummaryResults} \end{table*} \begin{table*}[t] \centering \begin{tabular}{>{\centering}p{0.08\linewidth-2\tabcolsep} >{\centering}p{0.08\linewidth-2\tabcolsep}
>{\centering\arraybackslash}p{0.76\linewidth-2\tabcolsep}} \toprule Probes & Noise & Control Law ($Q_{jet} \times 10^2$)\\ \midrule 5 & 0\% & $\begin{array}{c} \sin\left(\tanh\left(\left( \tanh\left(\left(\left(\left( u_4\left(t-\frac{3T}{4}\right)+v_0\left(t-\frac{T}{2}\right)\right)+\left(\left(v_0+\left(u_2-u_4\left(t-\frac{T}{2}\right)\right)\right) \cdot v_3\left(t-\frac{3T}{4}\right)\right)\right)+v[1]\right)\right) \right.\right. \right. \cdot \\ \left.\left.\left. \left(\left(\cos\left(\left(\left(u_4\left(t-\frac{3T}{4}\right)+v_0\left(t-\frac{T}{2}\right)\right)+\left(\left(v_0-\left(0-\left(u_2-u_4\left(t-\frac{T}{2}\right)\right)\right)\right) \right. \right.\right.\right.\right.\right.\right.\right. \cdot \\ \left.\left.\left.\left.\left.\left.\left.\left. v_3\left(t-\frac{3T}{4}\right)\right)\right)\right)-v_2\left(t-\frac{3T}{4}\right)\right)+\left(v[2]+\left(u_4\left(t-\frac{T}{2}\right)-u_4\left(t-\frac{3T}{4}\right)\right)\right)\right)\right)\right)\right) \end{array}$ \\ \midrule 11 & 0\% & $v_6+\left(v_2-v_2\left(t-\frac{3T}{4}\right)\right)$ \\ \midrule 11 & 1\% & $\tanh\left(\sin\left(\sin\left(v_6\right)\right)\right)-\cos\left(u_7\left(t-\frac{T}{4}\right)\right)$ \\ \midrule 11 & 5\% & $\tanh\left(\tanh\left(v_6\right)\right)+v_2$ \\ \midrule 11 & 10\% & $0.72965 \cdot \left( \sin\left( u_4\left(t-\frac{T}{4} \right)\right) - v_4 \left(t-\frac{T}{4}\right) \right) \cdot \left( \tanh\left(v_7\right) + \sin\left(v_6\right) \right)$ \\ \bottomrule \end{tabular} \caption{Control laws from LGPC.} \label{tab:ControlLawsMLC} \end{table*} \section{Performance analysis}\label{s:result} In this section, the performances of reinforcement learning and linear genetic programming are analyzed. In each training episode, the initial condition is randomly selected, thus replicating an experimental scenario, in which full control of the initial condition to start the actuation is difficult to achieve. 
The particular case where the starting trigger can be set at a specific flow condition is included in Appendix A. \subsection{Controller in the absence of noise}\label{ss:variableIC} \begin{figure*} \centering \includegraphics[width=0.9\linewidth]{Figures/4.pdf} \caption{Influence of the initial condition on the control performance. Results are shown for DRL (a-b) and LGPC (c-d). The probability density function (a,c) and the $C_D$ envelope (b,d) are computed from the $C_D$ profiles extracted for 55 equispaced initial phases over the whole unperturbed shedding cycle. Distributions are shown for 5 \sy{redRr}{rec} and 11 \sy{blueRr}{rec} probes.} \label{fig:Clean_rIC} \end{figure*} DRL and LGPC are first trained on clean data, i.e. in the absence of noise. It is important to remark that the performance of each selected actuation depends on the corresponding flow condition when the actuation is started, i.e. the same control law (or weight distribution for the ANN) determines different performances if run under different initial conditions. \begin{figure*} \centering \includegraphics[width=0.85\linewidth]{Figures/5.pdf} \caption{Evolution of $J$, $C_D$, $C_L$, and $Q_j$ upon the actuation of DRL \lcap{-}{blueR}, and LGPC \lcap{-}{redR} for 5 and 11 probes. Unforced case \lcap{-}{greyR} shown for reference.} \label{fig:Clean_rIC_history} \end{figure*} The probability distribution function (PDF) of the drag coefficient for the final selected actuation after training is illustrated in Figure \ref{fig:Clean_rIC}(a) for the case of the DRL, and in Figure \ref{fig:Clean_rIC}(c) for LGPC, both considering 5 and 11 probes. These distributions are obtained by analyzing the $C_D$ values obtained by running the simulation with 55 equispaced initial phases over the whole unperturbed shedding cycle and averaging over the last $650$ simulation steps, corresponding to $3.25t^*$ (where $t^* = t U_\infty/D$ and $U_\infty$ is the freestream velocity), i.e.
a shedding period of the unforced configuration. This is the same time interval adopted for the computation of the cost function $J$. The PDF demonstrates that the DRL is less sensitive to the effect of the initial condition compared to LGPC. This result is not surprising, considering that the agent of the DRL is more complex than the control laws obtained through LGPC (see Table \ref{tab:ControlLawsMLC}), thus it is potentially more adaptable to variable initial conditions. While this is a desirable aspect of DRL, on the downside it comes at the expense of a less interpretable control policy. A bit more surprisingly, DRL shows a degradation in performance when passing from $11$ to $151$ probes, the latter being the case evaluated by \citet{rabault2019JFM}. One hypothesis is that this is due to the insufficiently complex architecture of the ANN, since the same architecture is used regardless of the sensing configuration. The input probe number is an order of magnitude larger for 151 probes, thus possibly requiring a more powerful network for the agent and more intense training. Nonetheless, it can be concluded that increasing the number of probes to a reasonable extent seems to reduce the variability of the drag coefficient, thus delivering reliable and robust action against variable initial conditions. This is also observed in Figure \ref{fig:Clean_rIC}(b), where the envelope of the drag coefficient history over the set of analyzed initial conditions is presented. The shaded region is centred on the mean drag coefficient at each instant, and the half-width is set to one standard deviation of the drag coefficient. The mean values corresponding to such initial conditions are reported in Table \ref{tab:SummaryResults}. Observing the results for LGPC, it can be seen that it benefits mainly in terms of average drag coefficient when increasing the probe number (see Figure \ref{fig:Clean_rIC}(d)), although there is no significant impact on the dispersion of the drag coefficient.
The time history of the cost function $J$, the drag and lift coefficients $C_D,C_L$ and the flow rate of the actuator $Q_j$ are reported in Figure \ref{fig:Clean_rIC_history} for the final selected actuation after training of DRL and LGPC. The initial condition is selected among the $50$ tested cases as the one resulting in a final drag coefficient closest to the mean value. The results are presented for $5$ and $11$ probes, including the case without actuation as a reference. In the limit of low probe number ($5$ probes), LGPC is observed to have slightly superior performance to DRL in terms of $C_D$. The average lift coefficient in the final phases of the observation horizon is in both cases weakly negative (i.e. LGPC and DRL converge to an asymmetric flow configuration determined by the control action). This effect is slightly more significant for LGPC, thus showing that the actuation also aims to alter the flow symmetry to reduce drag. In terms of the actuation flow rate, DRL converges to a substantially lower standard deviation of the flow rate of a single jet (used here as a parameter, since the net mass flow rate is zero), meaning that it requires less power for the actuation. The differences are more significant for the case with $11$ probes. The actuation obtained with DRL features a faster convergence to the asymptotic drag and lift coefficient, with minimal fluctuations of the latter. This is achieved with strong action in the initial phase, which rapidly damps to a significantly lower intensity to counteract the triggering of the shedding. The actuation identified by LGPC, on the other hand, needs a significantly longer time to converge. The performances in terms of final drag coefficient are similar to DRL, although the oscillations of the lift coefficient and the flow rate of the actuators are more significant. The simplicity of the obtained control law is nonetheless remarkable (see Table \ref{tab:ControlLawsMLC}).
For the case of 11 probes with a random initial condition, LGPC converges to a law that involves only two probes, both using the crosswise velocity component, located in the middle and on one side of the cylinder (see Figure \ref{fig:NumericalSetup} for probe numbering). Remarkably, LGPC is able to identify the flow symmetry and exploit time-delayed feedback (for probe 2 the control law also includes a time-delayed signal with a delay of $3/4$ of the period). While the identified control law seems less robust to the effect of the initial conditions, it leads to identifying a subset of probes that is sufficient to obtain effective control. \subsection{Robustness to measurement noise} \label{ss:noise} In this section, the robustness of DRL and LGPC in the presence of noise is addressed. Additive noise with Gaussian distribution is included in the sensor data. Three noise levels are investigated, i.e. $1\%,5\%$ and $10\%$ of the freestream velocity. \begin{figure*} \centering \includegraphics[width=0.9\linewidth]{Figures/6.pdf} \caption{Influence of the initial condition on the control performance in the presence of noise. Results are shown for DRL (a-b) and LGPC (c-d). The probability density function (a,c) and the $C_D$ envelope (b,d) are computed from the $C_D$ profiles extracted for 55 equispaced initial phases over the whole unperturbed shedding cycle. Distributions are shown for 1\% \sy{greenRr}{rec}, 5\% \sy{blueRr}{rec}, and 10\% \sy{redRr}{rec} noise level.} \label{fig:Noise_rIC_pdf} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.9\linewidth]{Figures/7.pdf} \caption{Evolution of $J$, $C_D$, $C_L$, and $Q_j$ upon the actuation of the DRL \lcap{-}{blueR}, and LGPC \lcap{-}{redR} controller in the presence of 1\%, 5\% and 10\% noise levels.
Unforced case \lcap{-}{greyR} shown for reference.} \label{fig:Noise_rIC_history} \end{figure*} Similarly to \S~\ref{ss:variableIC}, the final actuation policy obtained after training is tested under a range of initial conditions for the case of $11$ probes. The PDF of the drag coefficient, as well as its time evolution and the corresponding dispersion for different initial conditions, are illustrated in Figure \ref{fig:Noise_rIC_pdf}(a,b). For the DRL, the scatter around the mean drag coefficient is not significantly affected in the initial steps of the actuation ($t^*<5$), in which the actuation is aiming to displace the flow configuration from one limit cycle to another. For larger times, the dispersion around the mean drag coefficient seems to increase with the noise level, as expected. The final achieved performance shows only a minor degradation for noise levels up to $5\%$, while the penalty becomes more significant at the $10\%$ noise level. Figure \ref{fig:Noise_rIC_history} reports the evolution with time of the cost function, drag and lift coefficients and actuator flow rate for different noise levels. The initial condition is chosen according to the PDF of the drag coefficient: it is set to be the one yielding the drag coefficient closest to the mean. It can be observed that the agent is capable of obtaining, in all tested cases, a drag coefficient with relatively small fluctuations, while the lift coefficient experiences larger variations with increasing noise level. This is expected, since the lift coefficient is directly affected by the asymmetry introduced by the actuation, whose flow rate is directly related by the agent to the signal recorded by the probes. The drag coefficient, on the other hand, is related to the wake configuration and thus the effect is partially damped. Interestingly enough, the results reported in Table \ref{tab:SummaryResults} also show that the average lift coefficient is close to zero, i.e.
the noise prevents the agent from tricking the policy into achieving drag reduction by introducing asymmetries. For the case of LGPC, the effect of noise appears more significant, as illustrated in Figure \ref{fig:Noise_rIC_pdf}(c,d). According to the results in Table \ref{tab:SummaryResults}, the drag coefficient is quite significantly affected by increasing noise. Interestingly, for the noise levels of $1\%$ and $5\%$, the obtained control law is relatively simple and still identifies that 2 probes are sufficient to perform an effective control action. In particular, probe 6 and an off-axis probe (either 2 or 7) are selected in both cases. The history of the cost function, force coefficients and actuator flow rate are also presented in Figure \ref{fig:Noise_rIC_history}. It is indeed confirmed that, for the case of low noise, the optimization successfully reduces the drag coefficient and the oscillations of the lift coefficient, with even more satisfying results than for the case without noise. For larger noise levels, the control law identified by LGPC is not capable of reducing the oscillations of the lift coefficient, thus inevitably affecting also the share of the drag coefficient ascribed to vortex shedding. A direct undesirable consequence of the simplicity of the control law is that, with increasing noise level, the action is less effective. The different effects of noise between DRL and LGPC can be ascribed on one side to the different complexity of the policy, and on the other side to the procedure to determine the action. As described in \S \ref{s:Methodology}, the action selected by the DRL agent is obtained by weighting the current ANN output with the previous step control action, thus introducing a certain smoothing effect. It can be speculated that low-pass filtering of the probe signal could improve LGPC performance.
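As an illustration of this speculation (not part of the present study), such a low-pass filter could take the form of a first-order exponential moving average applied to each probe signal before it enters the control law; the filter constant `alpha` is a hypothetical choice.

```python
def ema_filter(signal, alpha=0.1):
    """First-order low-pass: y[k] = (1 - alpha) * y[k-1] + alpha * x[k].

    Small alpha damps the high-frequency measurement noise at the cost
    of a phase lag in the feedback signal.
    """
    y = [signal[0]]
    for x in signal[1:]:
        y.append((1 - alpha) * y[-1] + alpha * x)
    return y
```

The attendant phase lag would itself alter a phase-dependent control, which is one reason the trade-off is left here as an open question.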
Nonetheless, for fairness of comparison, we maintained the same implementation of LGPC presented originally by \citet{li2017GP}, and leave this as an object for future study. The robustness study against noise for the 5 probes configuration leads to similar conclusions as for the 11 probes case. Hence, for the sake of brevity, it is not included. With a smaller number of probes, a higher sensitivity to noise is observed, as filtering out the noise becomes more difficult. The probes in the wake of the cylinder have proven to be the most relevant for feedback, because the vortex shedding is more pronounced there and the signal-to-noise ratio is better. Intriguingly, LGPC has shown lower performance degradation than DRL when using only 5 probes. This is not surprising, since all the control laws extracted with LGPC in the 11 probe case lead to parsimonious use of probes, rarely exceeding 3 or 4 sensors. \begin{figure*}[t] \centering \includegraphics[width=0.9\linewidth]{Figures/8.pdf} \caption{Visualization of the laws/policies learned by DRL (top) and LGPC (bottom) for the 5-sensor configuration. (left) The proximity maps of the sensor signals of the controlled regime show each cluster in a different colour with their centroids denoted by numbers (1-10). (centre) The transition matrix associated with the clustering process. (right) The control network. The yellow boxes set the maximum range of $b$ during the control and the red and blue boxes indicate positive and negative levels (the sign of $b$). The dashed line and red/blue background indicate the assumed separation between the actuation regions.} \label{fig:CL_S5} \end{figure*} \begin{figure}[h!] \centering \includegraphics[width=0.99\linewidth]{Figures/9.pdf} \caption{Control network for laws/policies learned by DRL (top) and LGPC (bottom) for the 5 (left) and 11 (right) probes.
For more details see the caption of Figure~\ref{fig:CL_S5}.} \label{fig:CL_S} \end{figure} \section{Control results and discussion \label{s:interpretability}} In this section, the controls achieved by DRL and LGPC are analysed with the clustering method described in \S~\ref{Sec:Method_ControlLawVisu}. \begin{figure*}[t] \centering \includegraphics[width=0.9\linewidth]{Figures/10.pdf} \caption{Proximity map of the clustered sensors and centroids for the velocity time series extracted from the DRL/LGPC controls with increasing noise. The noise intensity increases from left to right: $0\%$, $1\%$, $5\%$ and $10\%$. The dashed black line divides the reconstructed phase space into two regions: one of positive actuation and one of negative actuation (the sign of $b$), denoted respectively by a red $+$ and a blue $-$. The $\gamma_1$ and $\gamma_2$ axes have been reflected such that the red $+$ is located in the top right corner for all cases.} \label{fig:CL_Noise} \end{figure*} For all cases the transient has been excluded from the analysis by removing the first $3000$ time steps, corresponding to $15t^*$. It must be remarked, however, that for the LGPC cases with 0\% and 1\%-level noise, the transient lasts for more than the observed $60$ time units. Figure~\ref{fig:CL_S5} shows the cluster-based interpretation methodology applied to the laws/policies learned by DRL and LGPC for the 5-sensor cases without noise. The interpretation methodology is applied independently to each case. First, note that the proximity maps of the sensor time series are reminiscent of a phase space. Such a visualization is not surprising, as the controlled flow remains mainly periodic and can therefore be represented in a 2D space. The transition matrices display high probabilities for the self-transitions (diagonal values). This is due to the high sampling rate, which excessively populates each centroid.
The transition states are then under-represented, hence the low values of the transition probabilities outside the diagonal. Thus, the sensor time series have been subsampled to 1/5th of the data to artificially increase the transition probabilities located outside the diagonal. This operation is done to render the transition matrix easier to read, while it changes neither the results nor the interpretations. For the DRL control case, note that each cluster has only one arrival cluster. This is translated in the control network by a cycle that can be interpreted as a limit cycle in the phase space. For the LGPC case, a limit cycle can also be inferred from the data. However, two centroids close to the centre of the limit cycle (centroids 7 and 8) play a role in short-circuiting the limit cycle. Concerning the control achieved, in both cases the plane is divided into two regions: one of positive actuation including half the centroids and one of negative actuation including the other half. Furthermore, note that the actuation level increases with the distance to the dividing line. Such organization of the actuation around a limit cycle indicates that the actuation mechanism employed for drag reduction is phasor control, meaning that the control depends on the flow phase. In addition to this visualization, an analysis combining the two dynamics (not included here for brevity) is performed. For this, the time series have been appended and 20 centroids have been chosen for the description of the flow. The resulting proximity map shows the two limit cycles on the same plane, almost concentric, revealing that the controlled flows are dynamically close. However, the probability transition matrix loses the dynamic relationship between the centroids, as some clusters contain states from both learning strategies. Figure~\ref{fig:CL_S} displays the control network for both numbers of sensors (5 and 11).
Note that the overall shape and relations between the centroids remain the same when going from $5$ sensors to $11$ sensors. Again, the main limit cycle is divided into two regions of positive and negative actuation, revealing once more a phasor control mechanism. Finally, the impact of noise on the actuation mechanism is investigated. Figure~\ref{fig:CL_Noise} displays the time series and the centroids projected in the proximity map for the controlled flows with increasing noise. First, note that going from $0\%$ to $1\%$, the noise disturbs the smooth distribution of the data. For DRL, as the noise level increases the clusters start to overlap so much that for the $10\%$-level noise case, visually separating the clusters becomes impossible on the proximity map. The resulting control network, not plotted because of its complexity, shows that for the post-transient regime the dynamics are mainly driven by the noise. Nonetheless, note that the centroids are distributed around an ellipsoid, indicating a periodicity in the flow. As for the LGPC cases, the proximity maps successfully describe the controlled dynamics. For the 0\% and 1\%-noise levels, the centroids describe a spiral, which is consistent with the long transients (see the corresponding lift coefficient $C_L$ in figures~\ref{fig:Noise_rIC_history} and \ref{fig:Noise_rIC_pdf}). For the 5\%-noise level, the persistent high-amplitude oscillations plotted in figure~\ref{fig:Noise_rIC_history} are represented by a clear limit cycle in the proximity map, and for the 10\%-noise level, the centroids describe a cycle blurred by the noise, which is consistent with the low-amplitude oscillations of the lift coefficient in the post-transient regime (figure~\ref{fig:Noise_rIC_history}). In summary, in all the cases, the flow includes a periodic behaviour.
Regarding the actuation command, all the proximity maps can be separated into two regions, one of positive and one of negative actuation, indicating that both DRL and LGPC managed to learn a phasor control regardless of the noise level. \section{Conclusions}\label{s:Conclusions} Two machine-learning-based strategies which minimize the drag of a cylinder exhibiting a vortex-shedding wake are evaluated. The performance of Deep Reinforcement Learning and Linear Genetic Programming Control in identifying effective control policies has been assessed in the realistic conditions of a limited number of sensors and noise contamination of the sensed data. The training is performed using random initial conditions, in an attempt to reproduce an experimental scenario in which it is not feasible to control the full flow state when the actuation is started. It is observed that, in the absence of noise, the average performance achieved by DRL and LGPC is similar in terms of average lift and drag coefficients, although with a significant advantage of DRL in terms of the standard deviation of the mass flow rate of the actuation. It must be remarked, however, that the amount of power needed for control has not been included in the loss function. Furthermore, the policy obtained by DRL appears to be more robust in terms of dependency on the initial condition. On the other hand, LGPC achieves significantly more compact and interpretable control laws, which also identify only subsets of the probes as being relevant to define control actions. In particular, for the cases with 11 probes and low to moderate noise levels, LGPC identifies that only 2 probes are sufficient to define a sufficiently robust control law. The potential reduction in sensorization complexity is a very desirable feature for experimental application. Regarding the effect of noise, DRL shows superior performance in terms of robustness to sensor noise up to relatively high noise levels (10\%). 
LGPC, on the other hand, is able to identify control laws that are effective in reducing the average drag coefficient, although maintaining a larger level of fluctuations around the mean lift and drag coefficients if compared to DRL, and in general a larger standard deviation of the actuator mass flow rate. This superior robustness of DRL can certainly be ascribed to the higher complexity of the adopted agent if compared to the control policies identified by LGPC. Also in the presence of noise, LGPC converges to compact control laws and automatically identifies only a few significant probes for the control, almost independently of the noise level within the tested range. Finally, an analysis based on clustering of the sensor data using MDS has been carried out to interpret the control laws obtained by DRL and LGPC. In the absence of noise, the convergence to a limit cycle in both cases is rather evident, as is a clear relation between the phase of the shedding and the control actuation. This is an indicator that both solutions converge to phasor control, which was an expected result for this simple flow configuration. Although the number of sensors might seem high for a real application, it is remarkable that this study has already considerably reduced the number of probes compared to other recent contributions~\citep{Ren2021pof,rabault2019DRL,rabault2019JFM}. Having a smaller set of probes is viable but not recommended for a proof of concept such as the one presented herein, since the information provided to the controller would be very limited and hence the solution would probably be suboptimal. The chosen sets of sensors allow testing the performance of both algorithms and also evaluating their relevance for the final controller. Consequently, one of the main conclusions of the study is the capability of LGPC to identify a subset of probes as the most relevant, which suggests the possibility of further reduction. 
This is a milestone for future implementations in a real-world application or experiment. Any comparison contains subjective biases associated with the computational load, the number of parameters, the complexity of the control problem, and even the experience of the authors with the various approaches. Also, each approach could have been improved: e.g., DRL has many architectures with different performances, and LGPC consistently profits from subplex iterations~\cite{cornejo2021gMLC}. Even the very formulation of the control ansatz will affect the performance. Yet, our study already points to desirable features of two different machine learning approaches. Future machine learning control can be expected to integrate the fast adaptive learning of DRL, the analytical laws and interpretability of LGPC, the fast optimization of cluster-based control for smooth control laws~\cite{Nair2019jfm} and the mathematically rigorous framework of Bayesian optimization~\cite{Blanchard2022ams}, to name only a few aspects. \begin{acknowledgments} Work produced with the support of a 2020 Leonardo Grant for Researchers and Cultural Creators, BBVA Foundation, grant n. IN[20]\_ING\_ING\_0163. The Foundation takes no responsibility for the opinions, statements and contents of this project, which are entirely the responsibility of its authors. Funding of the National Natural Science Foundation of China (NSFC) under grants 12172109 and 12172111 and a Natural Science \& Engineering grant of the Guangdong province, China, is gratefully acknowledged. \end{acknowledgments} \section*{Data Availability Statement} The data that support the findings of this study are available from the corresponding author upon reasonable request.
\section*{Introduction} \label{sec:introduction} For data assimilation problems there are different ways of utilizing the available observations. While certain data assimilation algorithms, for instance, the ensemble Kalman filter (EnKF, see, for example, \cite{Aanonsen-ensemble-2009,Evensen2006}), assimilate the observations sequentially in time, other data assimilation algorithms may instead collect the observations at different time instants and assimilate them simultaneously. Examples in this aspect include the ensemble smoother (ES, see, for example, \cite{Evensen2000}) and its iterative variants. The EnKF has been widely used for reservoir data assimilation (history matching) problems since its introduction to the community of petroleum engineering \cite{naevdal2002near}. The applications of the ES to reservoir data assimilation problems have also been investigated recently (see, for example, \cite{skjervheim2011ensemble}). Compared to the EnKF, the ES has certain technical advantages, including, for instance, avoiding the restarts associated with each update step in the EnKF for certain reservoir simulators (e.g., ECLIPSE$^\copyright$ \cite{eclipse2010}) and also having fewer variables to update. The former benefit (avoiding restarts) may result in a significant reduction of simulation time in certain circumstances \cite{skjervheim2011ensemble}, while the latter (having fewer variables) can reduce the amount of computer memory in use. 
To further improve the performance of the ES, some iterative ensemble smoothers (iES) are suggested in the literature, in which the iterations are carried out in the forms of certain iterative optimization algorithms, e.g., the Gauss-Newton \cite{chen2012ensemble,Gu2007-iterative,Li2009-iterative} or the Levenberg-Marquardt method \cite{chen2013-levenberg,emerick2012ensemble,Iglesias2013regularizing,Lorentzen2011-iterative,luo2014alternative,Luo2014ensemble}\footnote{These optimization algorithms are also applied to iterative EnKFs, e.g., in some of the aforementioned works.}, or in the context of adaptive Gaussian mixture (AGM, see \cite{stordal2014Iterative}). In \cite{Iglesias2013regularizing,Luo2014ensemble}, the iteration formulae are adopted following the regularized Levenberg-Marquardt (RLM) algorithm in the deterministic inverse problem theory (see, for example, \cite{Engl2000-regularization,kaltenbacher2008iterative}). Essentially these formulae aim to find a single solution of the inverse problem, and the gradient involved in the iteration is obtained either through the adjoint model \cite{Iglesias2013regularizing}, or through a stochastic approximation method \cite{Luo2014ensemble}. In \cite{emerick2012ensemble}, by contrast, the iteration formula is derived based on the idea that the final result of the iES should be equal to the estimate of the EnKF, at least for linear systems with Gaussian model and observation errors. Consequently, this algorithm is called \textit{ensemble smoother with multiple data assimilation} (ES-MDA for short). On the other hand, in \cite{chen2013-levenberg} an iteration formula is obtained based on the standard Levenberg-Marquardt (LM) algorithm. By discarding a model term in the standard LM algorithm, an approximate iteration formula is derived, which is similar to that in \cite{emerick2012ensemble}. For distinction, we call it the approximate Levenberg-Marquardt ensemble randomized maximum likelihood (aLM-EnRML) algorithm. 
In this work we show that an iteration formula similar to those used in the ES-MDA and the aLM-EnRML can be derived by applying the RLM to find an ensemble of solutions via solving a minimum-average-cost problem \cite{luo2014alternative}. The gradient involved in the iteration is obtained in a way similar to the computation of the Kalman gain matrix in the EnKF. This derivation not only leads to an alternative theoretical tool in understanding and analyzing the behaviour of the aforementioned iES, but also provides insights and guidelines for further developments of the iES algorithm\footnote{For instance, instead of using the RLM algorithm, one could apply other types of regularized inversion methods, e.g., the regularized Gauss-Newton or conjugate gradient algorithm \cite{Engl2000-regularization,kaltenbacher2008iterative}, to solve the minimum-average-cost problem. This will thus lead to more types of iterative ES algorithms.}. As an example, we derive an alternative implementation of the iterative ES based on the RLM algorithm. Three numerical examples are then used to illustrate the performance of this new algorithm and compare it to the aLM-EnRML. \section*{Methodologies} \subsection*{ES-MDA and aLM-EnRML} Let $\mathbf{m}$ denote an $m$-dimensional reservoir model that contains the petrophysical properties to be estimated, $\mathbf{g}$ the reservoir simulator, and $\mathbf{d}^o$ a $p$-dimensional vector that contains all available observations in a certain time interval. Here $\mathbf{d}^o$ is assumed to contain certain measurement errors, with zero mean and covariance $\mathbf{C}_d$. 
In the context of iterative ensemble smoothing, suppose that there is an ensemble $\mathbf{M}^i \equiv \{ \mathbf{m}_j^i \}_{j=1}^{{N_e}}$ of ${N_e}$ reservoir models available at the $i$th iteration step, then an iES updates $\mathbf{M}^i$ to its counterpart $\mathbf{M}^{i+1} \equiv \{ \mathbf{m}_j^{i+1} \}_{j=1}^{{N_e}}$ at the next iteration step via a certain iteration formula, which is the focus of our discussion below. For convenience of discussion later, let us define the following square root matrix $\mathbf{S}_m^i$ with respect to the model $\mathbf{m}$ (\textit{model square root} for short): {\small \begin{linenomath*} \begin{IEEEeqnarray}{clc} \label{eq:model_sqrt} & \mathbf{S}_m^i = \frac{1}{\sqrt{{N_e}-1}}\left[\mathbf{m}_1^i - \bar{\mathbf{m}}^i,\dotsb, \mathbf{m}_{N_e}^i - \bar{\mathbf{m}}^i \right] \, , & \quad \bar{\mathbf{m}}^i = \frac{1}{{N_e}} \sum_{j=1}^{{N_e}} \mathbf{m}_j^i \, , \end{IEEEeqnarray} \end{linenomath*} } in the sense that the product $\mathbf{S}_m^i \left( \mathbf{S}_m^i \right)^T$ is equal to the sample covariance of $\mathbf{M}^i$. The idea in the ES-MDA is to make the final iES estimate equal the estimate of the EnKF, at least for linear systems with Gaussian model and observation errors (some thoughts are also provided in Appendix \ref{sec:appdendix_iES} in situations with nonlinearity and/or non-Gaussianity). 
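The model square root of Eq. (\ref{eq:model_sqrt}) is straightforward to implement; the numpy sketch below (illustrative, not the authors' code) also verifies that $\mathbf{S}_m^i \left( \mathbf{S}_m^i \right)^T$ reproduces the sample covariance of the ensemble:

```python
import numpy as np

def model_square_root(M):
    """Model square root S_m of Eq. (model_sqrt): ensemble members are
    stored as columns of M; the ensemble mean is removed and the result
    is scaled by 1/sqrt(Ne - 1), so that S_m @ S_m.T equals the sample
    covariance of the ensemble."""
    Ne = M.shape[1]
    return (M - M.mean(axis=1, keepdims=True)) / np.sqrt(Ne - 1)

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 10))   # 10 members of a 4-dimensional model
S = model_square_root(M)
C = np.cov(M)                      # sample covariance (ddof = 1)
```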
To this end, an iteration formula is derived in terms of \cite{emerick2012ensemble} {\small \begin{linenomath*} \begin{IEEEeqnarray}{clc} \label{eq:linear_ls} & \mathbf{m}_j^{i+1} = \mathbf{m}_j^{i} + \mathbf{S}_m^i \left( \mathbf{S}_d^i \right)^T \left( \mathbf{S}_d^i \left( \mathbf{S}_d^i \right)^T + \gamma^i \, \mathbf{C}_{d} \right)^{-1} \left( \mathbf{d}_j^o - \mathbf{g} \left( \mathbf{m}_j^{i} \right) \right),~ i = 1, 2, \dotsb , I; & \\ \label{eq:esmad_Sdi}& \mathbf{S}_d^i = \dfrac{1}{\sqrt{{N_e}-1}} \left[\mathbf{g}\left(\mathbf{m}_1^{i}\right) - \overline{\mathbf{g}\left(\mathbf{m}_j^{i}\right)}, \dotsb, \mathbf{g}\left(\mathbf{m}_{N_e}^{i}\right) - \overline{\mathbf{g}\left(\mathbf{m}_j^{i}\right)} \right] \, ; \\ & \overline{\mathbf{g}\left(\mathbf{m}_j^{i}\right)} = \dfrac{1}{{N_e}} \sum\limits_{j=1}^{N_e} \mathbf{g}\left(\mathbf{m}_j^{i}\right) \, . & \end{IEEEeqnarray} \end{linenomath*} } The total number of iteration steps $I$ is chosen before the iteration starts; $\mathbf{d}_j^o$ ($j = 1, \dotsb, {N_e}$) are the perturbations of $\mathbf{d}^o$, similar to those in the EnKF; and $\gamma^i$ ($i = 1, \dotsb , I$) satisfy {\small \begin{linenomath*} \begin{IEEEeqnarray}{clc} \label{eq:ES-MDA_constraints} \gamma^i \geq 1 \text{~ and ~} \sum\limits_{i=1}^I \dfrac{1}{\gamma^i} = 1 \, . & \end{IEEEeqnarray} \end{linenomath*} } For convenience, we also call $\mathbf{S}_d^i$ \textit{data square root} hereafter, in the sense that $\mathbf{S}_d^i (\mathbf{S}_d^i)^T$ is equal to the sample covariance matrix of the simulated data $\{ \mathbf{g}\left(\mathbf{m}_j^{i}\right) \}_{j=1}^{{N_e}}$. 
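A single ES-MDA update of Eq. (\ref{eq:linear_ls}) can be sketched as follows. This numpy sketch is illustrative, not the authors' code; the $\sqrt{\gamma}$-inflated observation perturbations follow common ES-MDA practice, and all names and dimensions are assumptions:

```python
import numpy as np

def esmda_step(M, D_sim, d_obs, C_d, gamma, rng):
    """One ES-MDA update following Eq. (linear_ls): M stores the ensemble
    models as columns, D_sim the corresponding simulated data g(m_j),
    d_obs the observation vector, C_d the observation-error covariance,
    and gamma the coefficient of the current assimilation step."""
    Ne = M.shape[1]
    Sm = (M - M.mean(axis=1, keepdims=True)) / np.sqrt(Ne - 1)
    Sd = (D_sim - D_sim.mean(axis=1, keepdims=True)) / np.sqrt(Ne - 1)
    K = Sm @ Sd.T @ np.linalg.inv(Sd @ Sd.T + gamma * C_d)
    # Observation perturbations as in the EnKF; the sqrt(gamma) inflation
    # follows common ES-MDA practice.
    D_obs = d_obs[:, None] + np.sqrt(gamma) * rng.multivariate_normal(
        np.zeros(len(d_obs)), C_d, size=Ne).T
    return M + K @ (D_obs - D_sim)

# The coefficients must satisfy Eq. (ES-MDA_constraints): sum_i 1/gamma_i = 1.
gammas = [4.0, 4.0, 4.0, 4.0]

rng = np.random.default_rng(1)
M0 = rng.standard_normal((2, 20))         # 20 members, 2 parameters
d_obs = np.array([1.0, -1.0])
C_d = 0.01 * np.eye(2)
M1 = esmda_step(M0, M0.copy(), d_obs, C_d, gammas[0], rng)  # identity g
```

With the identity forward model used here, one step pulls the ensemble mean toward the observation.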
In the Levenberg-Marquardt ensemble randomized maximum likelihood (LM-EnRML) method \cite{chen2013-levenberg}, history matching is recast as a weighted least-squares problem, in the sense that each model, say $\mathbf{m}_j^{i}$ (the $j$th ensemble model at the $i$th iteration step), is updated to the one $\mathbf{m}_j^{i+1}$ at the next iteration step by solving the following minimization problem: { {\small \begin{linenomath*} \begin{IEEEeqnarray}{clc} \label{eq:individual_cost} & \underset{\mathbf{m}}{\operatorname{argmin}} \, \left( \mathbf{d}_j^o - \mathbf{g} \left( \mathbf{m} \right) \right)^T \mathbf{C}_{d}^{-1} \left( \mathbf{d}_j^o - \mathbf{g} \left( \mathbf{m} \right) \right) + \left( \mathbf{m} - \mathbf{m}_j^{0}\right)^T \left( \mathbf{C}_m^0 \right)^{-1} \left( \mathbf{m} - \mathbf{m}_j^{0}\right) , & \end{IEEEeqnarray} \end{linenomath*} } } with ${\mathbf{C}_m^0} = \mathbf{S}_m^0 \left(\mathbf{S}_m^0\right)^T$, and $\mathbf{S}_m^0$ given by Eq. (\ref{eq:model_sqrt}). Applying the Levenberg-Marquardt algorithm \cite{marquardt1963algorithm} to (\ref{eq:individual_cost}), one has the following iteration formula to update the model \cite{chen2013-levenberg} {\small \begin{linenomath*} \begin{IEEEeqnarray}{crlc} \label{eq:lm_sln} & \mathbf{m}_j^{i+1} = & \, \mathbf{m}_j^{i} + \Delta\mathbf{m}_j^{i} \, , \\ & \Delta\mathbf{m}_j^{i} = & \, \left[ \left( \mathbf{C}_m^0 \right)^{-1} + \left( \mathbf{G}^i_j \right)^T \mathbf{C}_{d}^{-1} \mathbf{G}^i_j \right]^{-1} \left[ \left( \mathbf{G}^i_j \right)^T \mathbf{C}_{d}^{-1} \left( \mathbf{d}_j^o - \mathbf{g} \left( \mathbf{m}_j^{i} \right) \right) - \left( \mathbf{C}_m^0 \right)^{-1} \left( \mathbf{m}_j^{i} - \mathbf{m}_j^{0} \right) \right] \nonumber \\ & \approx & \, \left[ (1+\lambda^i) \left( \mathbf{C}_m^i \right)^{-1} + \left( \mathbf{G}^i_j \right)^T \mathbf{C}_{d}^{-1} \mathbf{G}^i_j \right]^{-1} \left[ \left( \mathbf{G}^i_j \right)^T \mathbf{C}_{d}^{-1} \left( \mathbf{d}_j^o - \mathbf{g} \left( 
\mathbf{m}_j^{i} \right) \right) - \left( \mathbf{C}_m^0 \right)^{-1} \left( \mathbf{m}_j^{i} - \mathbf{m}_j^{0} \right) \right] \, , \nonumber \end{IEEEeqnarray} \end{linenomath*} } where ${\mathbf{C}_m^i} = \mathbf{S}_m^i \left(\mathbf{S}_m^i\right)^T$, $\mathbf{G}^i_j$ is the Jacobian of $\mathbf{g}$ evaluated at $\mathbf{m}_j^{i}$, and $\lambda^i$ a positive scalar. In the course of deriving the third line of Eq. (\ref{eq:lm_sln}), the first $\left( \mathbf{C}_m^0 \right)^{-1}$ is approximated by $(1+\lambda^i) \left( \mathbf{C}_m^i \right)^{-1}$, in accordance with the Levenberg-Marquardt algorithm \cite{chen2013-levenberg}. In large-scale problems, it can be expensive to evaluate the term $\left( \mathbf{C}_m^0 \right)^{-1} \left( \mathbf{m}_j^{i} - \mathbf{m}_j^{0} \right)$, because the model size $m$ is usually much larger than the observation size $p$ (i.e., $m \gg p$). Hence, discarding the term $\left( \mathbf{C}_m^0 \right)^{-1} \left( \mathbf{m}_j^{i} - \mathbf{m}_j^{0} \right)$, using the Sherman-Morrison-Woodbury formula \cite{sherman1950adjustment} and some algebra, one has {\small \begin{linenomath*} \begin{IEEEeqnarray}{crlc} \label{eq:discarding_a_term_enrml} & \mathbf{m}_j^{i+1} \approx \mathbf{m}_j^{i} + \mathbf{C}_m^i (\mathbf{G}^i_j)^T \left[ \mathbf{G}^i_j \mathbf{C}_m^i (\mathbf{G}^i_j)^T + (1+\lambda^i) \, \mathbf{C}_{d} \right]^{-1} \left( \mathbf{d}_j^o - \mathbf{g} \left( \mathbf{m}_j^{i} \right) \right) \, . & & \end{IEEEeqnarray} \end{linenomath*} } As will be shown later (see Eq. (\ref{eq:wls_rlm_sln_con})), with a suitable choice for $\lambda^i$, the iteration formula in Eq. (\ref{eq:discarding_a_term_enrml}) is consistent with the one derived from the RLM algorithm. Under suitable technical conditions (see, for example, \cite{jin2010regularized}), $\mathbf{m}_j^{i}$ will converge to a solution of the equation $\mathbf{g} \left( \mathbf{m} \right) = \mathbf{d}_j^o$ as $i \rightarrow +\infty$. 
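The Sherman-Morrison-Woodbury step used here can be checked numerically: after the prior-mismatch term is discarded, the inverse-form gain $\left[ \gamma \mathbf{C}_m^{-1} + \mathbf{G}^T \mathbf{C}_d^{-1} \mathbf{G} \right]^{-1} \mathbf{G}^T \mathbf{C}_d^{-1}$ (with $\gamma$ playing the role of $1+\lambda^i$) equals the observation-space form $\mathbf{C}_m \mathbf{G}^T \left[ \mathbf{G} \mathbf{C}_m \mathbf{G}^T + \gamma \mathbf{C}_d \right]^{-1}$, which only requires inverting a $p \times p$ matrix. A numpy sketch with illustrative dimensions:

```python
import numpy as np

# Numerical check of the matrix identity behind Eq. (discarding_a_term_enrml).
rng = np.random.default_rng(0)
m_dim, p_dim, gamma = 6, 3, 2.5
G = rng.standard_normal((p_dim, m_dim))
A = rng.standard_normal((m_dim, m_dim)); C_m = A @ A.T + m_dim * np.eye(m_dim)
B = rng.standard_normal((p_dim, p_dim)); C_d = B @ B.T + p_dim * np.eye(p_dim)

# Model-space (m x m) inverse form of the gain ...
inverse_form = np.linalg.solve(
    gamma * np.linalg.inv(C_m) + G.T @ np.linalg.inv(C_d) @ G,
    G.T @ np.linalg.inv(C_d))
# ... and the observation-space (p x p) Sherman-Morrison-Woodbury form.
smw_form = C_m @ G.T @ np.linalg.inv(G @ C_m @ G.T + gamma * C_d)
```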
Following the EnKF, one may introduce a further approximation to the gain matrix in Eq. (\ref{eq:discarding_a_term_enrml}), such that {\small \begin{linenomath*} \begin{IEEEeqnarray}{crlc} \label{eq:gain_approx_enrml} & \mathbf{C}_m^i (\mathbf{G}^i_j)^T \left[ \mathbf{G}^i_j \mathbf{C}_m^i (\mathbf{G}^i_j)^T + (1+\lambda^i) \, \mathbf{C}_{d} \right]^{-1} \approx \mathbf{S}_m^i \left( \mathbf{S}_d^i \right)^T \left( \mathbf{S}_d^i \left( \mathbf{S}_d^i \right)^T + (1+\lambda^i) \, \mathbf{C}_{d} \right)^{-1} \, . & & \end{IEEEeqnarray} \end{linenomath*} } Therefore, with the above approximations, one has the following final iteration formula {\small \begin{linenomath*} \begin{IEEEeqnarray}{crlc} \label{eq:final_iteration_form_enrml} & \mathbf{m}_j^{i+1} = \mathbf{m}_j^{i} + \mathbf{S}_m^i \left( \mathbf{S}_d^i \right)^T \left( \mathbf{S}_d^i \left( \mathbf{S}_d^i \right)^T + (1+\lambda^i) \, \mathbf{C}_{d} \right)^{-1} \left( \mathbf{d}_j^o - \mathbf{g} \left( \mathbf{m}_j^{i} \right) \right) \, . & & \end{IEEEeqnarray} \end{linenomath*} } Comparing Eqs. (\ref{eq:final_iteration_form_enrml}) and (\ref{eq:linear_ls}), it is clear that the iteration formulae become identical when $(1+\lambda^i)$ in Eq. (\ref{eq:final_iteration_form_enrml}) is replaced by $\gamma^i$ in Eq. (\ref{eq:linear_ls}). For distinction, we call the algorithm in Eq. (\ref{eq:final_iteration_form_enrml}) \textit{approximate Levenberg-Marquardt ensemble randomized maximum likelihood} (aLM-EnRML) method. In the aLM-EnRML, starting from an initial value $\lambda^0$, the subsequent values of $\lambda^i$ will decrease if the average data mismatch with respect to the models is reduced, otherwise the values of $\lambda^i$ will increase instead. The iteration process will stop if the maximum iteration number is reached, or if the relative change of the average data mismatch in two consecutive iteration steps is lower than a given threshold. 
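The $\lambda^i$ adjustment and the stopping logic described above can be sketched as two small helper functions. The adjustment factors and thresholds below are illustrative choices, not values prescribed by the text:

```python
def update_lambda(lmbda, new_mismatch, old_mismatch,
                  decrease=0.5, increase=2.0):
    """Adjust lambda as in the aLM-EnRML: decrease it when the average
    data mismatch went down, increase it otherwise.  The factors 0.5
    and 2.0 are illustrative."""
    return lmbda * decrease if new_mismatch < old_mismatch else lmbda * increase

def should_stop(new_mismatch, old_mismatch, iteration,
                max_iter=20, rel_tol=1e-4):
    """Stop when the maximum iteration number is reached, or when the
    relative change of the average data mismatch in two consecutive
    steps drops below a threshold."""
    rel_change = abs(new_mismatch - old_mismatch) / max(old_mismatch, 1e-30)
    return iteration >= max_iter or rel_change < rel_tol
```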
\subsection*{The regularized Levenberg-Marquardt (RLM) algorithm and an extension} In the sequel we first introduce the regularized Levenberg-Marquardt (RLM, as used in \cite{Iglesias2013regularizing,Luo2014ensemble}) algorithm in the context of deterministic inverse problems theory, and then consider an extension of the RLM algorithm as an ensemble data assimilation method. In the course of deduction, the similarities between the ES-MDA/aLM-EnRML and the extended method in the current study will become clear, while some implications in the extended method will also be discussed. Note that in a recent work \cite{Iglesias2014iterative}, the author also considers using the RLM algorithm in the context of ensemble smoothing. Compared to \cite{Iglesias2014iterative}, the current study has different focuses, e.g., in terms of the way in approximating the gradient of the cost function of the minimization problem, and the parameter rule used in the RLM algorithm (see (\ref{eq:par_rule}) later)\footnote{To summarize, \cite{Iglesias2014iterative,Iglesias2013regularizing,Luo2014ensemble} and the current work are all related to the RLM algorithm. \cite{Iglesias2014iterative,Iglesias2013regularizing} use the parameter rule based on, for example, the work \cite{hanke1997regularizing}, while \cite{Luo2014ensemble} and the current work use the parameter rule based on \cite{jin2010regularized}. Although different in the concrete forms, both parameter rules are proven to lead to local convergence of the RLM algorithm \cite{hanke1997regularizing,jin2010regularized}. In addition, \cite{Iglesias2013regularizing,Luo2014ensemble} both aim to find a single solution, but employ different methods in approximating the gradient of the cost function, as discussed previously. 
In contrast, \cite{Iglesias2014iterative} and the current study aim at multiple solutions in the context of iES, and differ from each other in the ways of constructing the data square root matrices (e.g., \cite{Iglesias2014iterative} uses the same way as in the EnKF). In Appendix \ref{sec:appdendix_iES}, we also provide an alternative point of view to interpret the iES algorithm derived in the current study.}. Readers are referred to \cite{Iglesias2014iterative} for more detail. In the conventional deterministic inverse problem theory (see, for example, \cite{Engl2000-regularization,kaltenbacher2008iterative}), one aims to find a single set of parameters (e.g., a reservoir model $\mathbf{m}$) whose simulated output (e.g., $\mathbf{g} (\mathbf{m})$) matches the observation $\mathbf{d}^o$. Intuitively, one can search for such a model by solving the following weighted least-squares problem {\small \begin{linenomath*} \begin{IEEEeqnarray}{cc} \label{eq:wls_original} \underset{\mathbf{m}}{\operatorname{argmin}} \left( \mathbf{d}^o - \mathbf{g} \left( \mathbf{m} \right) \right)^T \mathbf{C}_{d}^{-1} \left( \mathbf{d}^o - \mathbf{g} \left( \mathbf{m} \right) \right) \, , \end{IEEEeqnarray} \end{linenomath*} } which aims to minimize the \textit{data mismatch} $\left( \mathbf{d}^o - \mathbf{g} \left( \mathbf{m} \right) \right)^T \mathbf{C}_{d}^{-1} \left( \mathbf{d}^o - \mathbf{g} \left( \mathbf{m} \right) \right)$, where $\mathbf{C}_{d}^{-1}$ is the weight matrix associated with the data mismatch term. 
For convenience of discussion later, let us define $\Vert \mathbf{d} \Vert_{2} \equiv \sqrt{\mathbf{d}^T \mathbf{d}}$ and $\Vert \mathbf{d} \Vert_{\mathbf{C}_{d}} \equiv \sqrt{\mathbf{d}^T \mathbf{C}_{d}^{-1} \mathbf{d}}$ for a vector $\mathbf{d}$, then it is clear that $\left( \mathbf{d}^o - \mathbf{g} \left( \mathbf{m} \right) \right)^T \mathbf{C}_{d}^{-1} \left( \mathbf{d}^o - \mathbf{g} \left( \mathbf{m} \right) \right) = \Vert \mathbf{d}^o - \mathbf{g} \left( \mathbf{m} \right) \Vert_{\mathbf{C}_{d}}^2 = \Vert \mathbf{C}_{d}^{-1/2}\left(\mathbf{d}^o - \mathbf{g} \left( \mathbf{m} \right) \right) \Vert_{2}^2$. When the dimension of the observation space is lower than that of the model space (as is often the case in practice), (\ref{eq:wls_original}) is an under-determined and ill-posed problem, which has some undesirable features, e.g., non-uniqueness of the solution and potentially large sensitivity of the solution to the observation, in the sense that even a very small change in the observation might lead to a large change in the solution \cite{Engl2000-regularization}. To mitigate the above problems, in deterministic inverse problems theory it is customary to introduce a certain regularization term to (\ref{eq:wls_original}). 
Here we consider Tikhonov regularization, in which a regularization term, in terms of $\left( \mathbf{m} - \mathbf{m}^{b} \right)^T \mathbf{C}_{m}^{-1} \left( \mathbf{m} - \mathbf{m}^{b} \right)$, is introduced to (\ref{eq:wls_original}), such that the weighted least-squares problem becomes {\small \begin{linenomath*} \begin{IEEEeqnarray}{cc} \label{eq:wls_stablized} \underset{\mathbf{m}}{\operatorname{argmin}} \left( \mathbf{d}^o - \mathbf{g} \left( \mathbf{m} \right) \right)^T \mathbf{C}_{d}^{-1} \left( \mathbf{d}^o - \mathbf{g} \left( \mathbf{m} \right) \right) + \gamma \left( \mathbf{m} - \mathbf{m}^{b} \right)^T \mathbf{C}_{m}^{-1} \left( \mathbf{m} - \mathbf{m}^{b} \right) \, , \end{IEEEeqnarray} \end{linenomath*} } where $\gamma$ is a positive scalar, $\mathbf{m}^{b}$ denotes a background model, and $\mathbf{C}_{m}^{-1}$ is the associated weight matrix. The choice of $\gamma$ affects the relative weights assigned to the data mismatch term and the regularization term, and thus influences the solution of (\ref{eq:wls_stablized}). For instance, if $\gamma$ is relatively large (e.g., tending to $\infty$ in the extreme case), then the obtained solution will approach $\mathbf{m}^{b}$. On the other hand, if $\gamma$ is relatively small (e.g., tending to $0$), then (\ref{eq:wls_original}) and (\ref{eq:wls_stablized}) tend to coincide with each other, and the solution of (\ref{eq:wls_stablized}) can be considered as an approximation to one of the solutions of (\ref{eq:wls_original}). Apart from $\gamma$, the matrices $\mathbf{C}_{m}$ and $\mathbf{C}_{d}$ would also affect the relative weights between the regularization and data mismatch terms. If, in addition, $\mathbf{C}_{m}$ and $\mathbf{C}_{d}$ are chosen to be the error covariance matrices of the background and the observations, respectively, then the correlations of the variables in the model and observation spaces, respectively, are also incorporated in the minimization problem. 
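For a linear forward model $\mathbf{g}(\mathbf{m}) = \mathbf{G}\mathbf{m}$, problem (\ref{eq:wls_stablized}) has the closed-form minimizer $\mathbf{m} = \mathbf{m}^b + \left( \gamma \mathbf{C}_m^{-1} + \mathbf{G}^T \mathbf{C}_d^{-1} \mathbf{G} \right)^{-1} \mathbf{G}^T \mathbf{C}_d^{-1} \left( \mathbf{d}^o - \mathbf{G} \mathbf{m}^b \right)$, and the role of $\gamma$ can be illustrated numerically. A numpy sketch with illustrative dimensions:

```python
import numpy as np

# Effect of gamma in the Tikhonov-regularized linear least-squares problem:
# small gamma drives the residual toward zero, large gamma pins the
# solution to the background m_b.
rng = np.random.default_rng(0)
m_dim, p_dim = 5, 3                       # under-determined: p < m
G = rng.standard_normal((p_dim, m_dim))
C_m, C_d = np.eye(m_dim), np.eye(p_dim)
m_b = np.zeros(m_dim)
d = rng.standard_normal(p_dim)

def tikhonov(gamma):
    H = gamma * np.linalg.inv(C_m) + G.T @ np.linalg.inv(C_d) @ G
    return m_b + np.linalg.solve(H, G.T @ np.linalg.inv(C_d) @ (d - G @ m_b))

m_small, m_large = tikhonov(1e-8), tikhonov(1e8)
```

The residual norm is monotonically increasing in $\gamma$, so the two limits bracket the intermediate solutions.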
In general, one may wish to choose a $\gamma$ such that the data mismatch of the resulting solution is comparable to the noise level of the observations \cite{Engl2000-regularization,kaltenbacher2008iterative}. The true noise level is often unknown; therefore, in practice it may be replaced by, for instance, a threshold value proportional to $p$ (i.e., the size of $\mathbf{d}^o$). Readers are referred to, for example, \cite{Iglesias2013regularizing,Luo2012-residual,Oliver2008} for the rationales behind this choice. For linear systems, there are straightforward ways for one to construct a solution that has the desired data mismatch (see, for example, \cite{Luo2012-residual}). For nonlinear systems, however, one often has to rely on a certain iterative algorithm to find the solution. One such method, called the regularized Levenberg-Marquardt algorithm following \cite{jin2010regularized} (or the regularizing Levenberg-Marquardt algorithm following \cite{hanke1997regularizing}), constructs a sequence of models $\{ \mathbf{m}^i \}$ ($i=1,2,\dotsb$) by solving a linearized weighted least-squares problem {\small \begin{linenomath*} \begin{IEEEeqnarray}{ll} \label{eq:wls_rlm} \underset{\mathbf{m}^{i+1}}{\operatorname{argmin}} & \left( \mathbf{d}^o - \mathbf{g} \left( \mathbf{m}^{i} \right) -\mathbf{G}_{i} (\mathbf{m}^{i+1} - \mathbf{m}^{i}) \right)^T \mathbf{C}_{d}^{-1} \left( \mathbf{d}^o - \mathbf{g} \left( \mathbf{m}^{i} \right) -\mathbf{G}_{i} (\mathbf{m}^{i+1} - \mathbf{m}^{i}) \right) \\ & + \gamma^{i} \left( \mathbf{m}^{i+1} - \mathbf{m}^{i+1,b} \right)^T \mathbf{C}_{m}^{-1} \left( \mathbf{m}^{i+1} - \mathbf{m}^{i+1,b} \right) \, , \nonumber \end{IEEEeqnarray} \end{linenomath*} } at each iteration step. 
In (\ref{eq:wls_rlm}), $\mathbf{G}_{i}$ is the Jacobian of $\mathbf{g}$ evaluated at $\mathbf{m}^{i}$, such that $\mathbf{g} \left( \mathbf{m}^{i} \right) + \mathbf{G}_{i} (\mathbf{m}^{i+1} - \mathbf{m}^{i})$ represents a first order Taylor approximation to $\mathbf{g} \left( \mathbf{m}^{i+1} \right)$; $\gamma^{i}$ is a positive scalar as in (\ref{eq:wls_stablized}), but now changes over the iteration; $\mathbf{m}^{i+1,b}$ is the background model that may also be adaptive with the iteration. A convenient choice can be $\mathbf{m}^{i+1,b} = \mathbf{m}^{i}$, meaning that we want the new model $\mathbf{m}^{i+1}$ not to be too far away from the previous model, such that the approximation $ \mathbf{g} \left( \mathbf{m}^{i+1} \right) \approx \mathbf{g} \left( \mathbf{m}^{i} \right) + \mathbf{G}_{i} (\mathbf{m}^{i+1} - \mathbf{m}^{i})$ can be roughly valid \cite{Luo2014ensemble}. In addition, the choice $\mathbf{m}^{i+1,b} = \mathbf{m}^{i}$ also simplifies the iteration formula, as will be shown below. The solution of the weighted least-squares problem (\ref{eq:wls_rlm}) is given by {\small \begin{linenomath*} \begin{IEEEeqnarray}{rll} \label{eq:wls_rlm_sln_inv} &\mathbf{m}^{i+1} = & \, \mathbf{m}^{i} + \Delta\mathbf{m}^{i} \\ & \Delta\mathbf{m}^{i} = & \, \left[ \gamma^{i} \mathbf{C}_m^{-1} + \left( \mathbf{G}^i \right)^T \mathbf{C}_{d}^{-1} \mathbf{G}^i \right]^{-1} \left[ \left( \mathbf{G}^i \right)^T \mathbf{C}_{d}^{-1} \left( \mathbf{d}^o - \mathbf{g} \left( \mathbf{m}^{i} \right) \right) - \gamma^{i} \mathbf{C}_m^{-1} \left( \mathbf{m}^{i} - \mathbf{m}^{i+1,b} \right) \right] \, , \nonumber \end{IEEEeqnarray} \end{linenomath*} } which is similar to Eq. (\ref{eq:lm_sln}) (e.g., by replacing $1+\lambda^i$ therein with $\gamma^i$). 
By letting $\mathbf{m}^{i+1,b} = \mathbf{m}^{i}$ and with some algebra, one has the simplified iteration formula {\small \begin{linenomath*} \begin{IEEEeqnarray}{crlc} \label{eq:wls_rlm_sln_con} & \mathbf{m}^{i+1} = \mathbf{m}^{i} + \mathbf{C}_m (\mathbf{G}^i)^T \left[ \mathbf{G}^i \mathbf{C}_m (\mathbf{G}^i)^T + \gamma^i \, \mathbf{C}_{d} \right]^{-1} \left( \mathbf{d}^o - \mathbf{g} \left( \mathbf{m}^{i} \right) \right) \, . & & \end{IEEEeqnarray} \end{linenomath*} } which is thus similar to Eq. (\ref{eq:final_iteration_form_enrml}). One may also let the weight matrix $\mathbf{C}_m$ be adaptive with the iteration, which will be considered later in the context of iES. Intuitively, provided that the step size $\Delta\mathbf{m}^{i}$ is small enough (which can be controlled by $\gamma^i$) such that the first order Taylor expansion of $\mathbf{g}( \mathbf{m}^{i+1} )$ around $\mathbf{m}^{i}$ is a roughly valid approximation of $\mathbf{g}( \mathbf{m}^{i+1} )$, one has {\small \begin{linenomath*} \begin{IEEEeqnarray}{ll} \label{eq:apprx_ineq} \Vert \mathbf{g}( \mathbf{m}^{i+1} ) - \mathbf{d}^{o} \Vert_{\mathbf{C}_d}^2 & \approx \Vert \mathbf{g}\left( \mathbf{m}^{i} \right) - \mathbf{d}^{o} + \mathbf{G}^i \left(\mathbf{m}^{i+1} - \mathbf{m}^{i}\right) \Vert_{\mathbf{C}_d}^2 \nonumber \\ & = \Vert [ \mathbf{I} - \mathbf{G}^i \mathbf{C}_m (\mathbf{G}^i)^T \left[ \mathbf{G}^i \mathbf{C}_m (\mathbf{G}^i)^T + \gamma^i \, \mathbf{C}_{d} \right]^{-1} ] \left(\mathbf{g}( \mathbf{m}^{i} ) - \mathbf{d}^{o} \right) \Vert_{\mathbf{C}_d}^2 \nonumber \\ & = \Vert \gamma^i \, \mathbf{C}_{d} \left[ \mathbf{G}^i \mathbf{C}_m (\mathbf{G}^i)^T + \gamma^i \, \mathbf{C}_{d} \right]^{-1} \left(\mathbf{g}( \mathbf{m}^{i} ) - \mathbf{d}^{o} \right) \Vert_{\mathbf{C}_d}^2 \nonumber \\ & = \left( \mathbf{C}_{d}^{-1/2} \left(\mathbf{g}( \mathbf{m}^{i} ) - \mathbf{d}^{o} \right) \right)^T \left[ \gamma^i \left[ \left( \mathbf{C}_{d}^{-1/2} \mathbf{G}^i \right) \mathbf{C}_m (\mathbf{C}_{d}^{-1/2} 
\mathbf{G}^i)^T + \gamma^i \, \mathbf{I}_{p} \right]^{-1} \right]^2 \left( \mathbf{C}_{d}^{-1/2} \left(\mathbf{g}( \mathbf{m}^{i} ) - \mathbf{d}^{o} \right) \right) \nonumber \\ & = \Vert \gamma^i \left[ \left( \mathbf{C}_{d}^{-1/2} \mathbf{G}^i \right) \mathbf{C}_m (\mathbf{C}_{d}^{-1/2} \mathbf{G}^i)^T + \gamma^i \, \mathbf{I}_{p} \right]^{-1} \left( \mathbf{C}_{d}^{-1/2} \left(\mathbf{g}( \mathbf{m}^{i} ) - \mathbf{d}^{o} \right) \right) \Vert_2^2 \nonumber \\ & \leq \Vert \gamma^i \left[ \left( \mathbf{C}_{d}^{-1/2} \mathbf{G}^i \right) \mathbf{C}_m (\mathbf{C}_{d}^{-1/2} \mathbf{G}^i)^T + \gamma^i \, \mathbf{I}_{p} \right]^{-1} \Vert_{2}^2 ~ \Vert \mathbf{g}( \mathbf{m}^{i} ) - \mathbf{d}^{o} \Vert_{\mathbf{C}_d}^2 \nonumber \\ & \leq \Vert \mathbf{g}( \mathbf{m}^{i} ) - \mathbf{d}^{o} \Vert_{\mathbf{C}_d}^2 \, , \nonumber \end{IEEEeqnarray} \end{linenomath*} since $\Vert \gamma^i \left[ \left( \mathbf{C}_{d}^{-1/2} \mathbf{G}^i \right) \mathbf{C}_m (\mathbf{C}_{d}^{-1/2} \mathbf{G}^i)^T + \gamma^i \, \mathbf{I}_{p} \right]^{-1} \Vert_{2}^2 \leq 1$, where $\mathbf{I}_{p}$ denotes the $p$-dimensional identity matrix, and $\Vert \mathbf{A} \Vert_2$ represents the spectral norm of the matrix $\mathbf{A}$.} This implies that the data mismatch $\Vert \mathbf{g}( \mathbf{m}^{i+1} ) - \mathbf{d}^{o} \Vert_{\mathbf{C}_d}^2$ at the $(i+1)$-th iteration tends to be no larger than the one $\Vert \mathbf{g}( \mathbf{m}^{i} ) - \mathbf{d}^{o} \Vert_{\mathbf{C}_d}^2$ at the previous iteration. 
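For a linear forward model the first-order Taylor approximation in the chain above is exact, so the non-expansiveness of the data mismatch under one step of Eq. (\ref{eq:wls_rlm_sln_con}) can be verified numerically. A numpy sketch with illustrative dimensions:

```python
import numpy as np

# For g(m) = G m, one RLM step of Eq. (wls_rlm_sln_con) cannot increase
# the weighted data mismatch, whatever gamma > 0 is used; moreover a
# smaller gamma (a bolder step) leaves a smaller mismatch.
rng = np.random.default_rng(0)
m_dim, p_dim = 8, 4
G = rng.standard_normal((p_dim, m_dim))
C_m, C_d = np.eye(m_dim), np.eye(p_dim)
d = rng.standard_normal(p_dim)

def mismatch(m):
    r = G @ m - d
    return float(r @ np.linalg.solve(C_d, r))

def rlm_step(m, gamma):
    K = C_m @ G.T @ np.linalg.inv(G @ C_m @ G.T + gamma * C_d)
    return m + K @ (d - G @ m)

m0 = rng.standard_normal(m_dim)
m1 = rlm_step(m0, gamma=5.0)
```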
In terms of the choice of $\gamma^i$, the following parameter rule from \cite{jin2010regularized} {\small \begin{linenomath*} \begin{IEEEeqnarray}{ll} \label{eq:par_rule} & \gamma^0 > 0 \, , \nonumber \\ & \gamma^{i+1} = \rho \, \gamma^i, \text{~with~} 1/r < \rho < 1 \text{~for some scalar~} r > 1 \, , \\ & \underset{i \rightarrow +\infty}{\lim} \gamma^i = 0 \, , \nonumber \end{IEEEeqnarray} \end{linenomath*} } can be used, in which the scalar sequence $\{\gamma^i\}$ gradually reduces to zero as $i$ tends to $+\infty$, while the lower bound $1/r$ on the coefficient $\rho$ prevents any abrupt drop of $\gamma^i$ to zero. When Eq. (\ref{eq:wls_rlm_sln_con}) is used in conjunction with (\ref{eq:par_rule}), it can be shown analytically that the data mismatch of the sequence of models $\{\mathbf{m}^{i}\}$ converges to zero locally as $i \rightarrow +\infty$ and the observation noise level tends to zero, provided that the equation $\mathbf{g}(\mathbf{m}) = \mathbf{d}^{o}$ is solvable and some other conditions are satisfied \cite{jin2010regularized}. Instead of using (\ref{eq:par_rule}), one can adopt other parameter rules that also lead to local convergence of the data mismatch under similar technical conditions (see, for example, \cite{hanke1997regularizing}). When observation noise is present, however, it may not be desirable to drive the data mismatch too low, in order to prevent over-fitting the observations. In general, one may wish to stop the iteration when the data mismatch of the current iterated model is comparable to the noise level, which is often referred to as the \textit{discrepancy principle} \cite{Engl2000-regularization,kaltenbacher2008iterative}. Of course, in practice the true noise level is typically unknown. As a result, one may let the iteration process Eq.
(\ref{eq:wls_rlm_sln_con}) stop when any of the following three conditions is satisfied: (a) the data mismatch of the iterated model falls below a pre-set threshold $\beta_u^2 \, p$ for the first time, for a given positive scalar $\beta_u$, in light of the discussions in \cite{Iglesias2013regularizing,Luo2014ensemble,Luo2012-residual,Oliver2008}; or (b) the maximum iteration number is reached; or (c) the relative change of the data mismatch in two consecutive iteration steps is lower than a given threshold ($0.01\%$ in our implementation). Note that, with the stopping condition (a), the total number of (actual) iteration steps can be less than the maximum iteration number, for instance, when the RLM reaches the threshold $\beta_u^2 \, p$ before the maximum iteration number. Conditions (b) and (c) are the same as those in the aLM-EnRML, and are mainly introduced to control the runtime of the iteration process. These settings will be applied to the aLM-EnRML and our extension scheme in the experiments later, so that the comparison of both algorithms is conducted under the same experimental settings as far as possible. In the practical implementation of the RLM algorithm (and our extension scheme below), there are a few issues that need to be taken into account. Firstly, in the theoretical analysis of the RLM algorithm (see, for example, \cite{Engl2000-regularization,hanke1997regularizing,jin2010regularized,kaltenbacher2008iterative}), it is typically assumed that the initial model stays sufficiently close to a solution of the equation $\mathbf{g}(\mathbf{m}) = \mathbf{d}^{o}$, which is not necessarily true in reality. In addition, as can be seen in our previous discussion, the validity of the first order Taylor approximation $ \mathbf{g} \left( \mathbf{m}^{i+1} \right) \approx \mathbf{g} \left( \mathbf{m}^{i} \right) + \mathbf{G}_{i} (\mathbf{m}^{i+1} - \mathbf{m}^{i})$ is essential to the derivation of the RLM algorithm.
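The three stopping conditions can be collected into a simple predicate, as in the sketch below; $\beta_u = 2$ and the $0.01\%$ relative-change threshold follow the settings stated above, while the observation size $p$ and the iteration budget are illustrative placeholders:

```python
def should_stop(mismatch_new, mismatch_old, iteration,
                beta_u=2.0, p=200, max_iter=100, rel_tol=1e-4):
    """Sketch of stopping conditions (a)-(c); default values are illustrative."""
    if mismatch_new < beta_u ** 2 * p:       # (a) discrepancy-type threshold
        return True
    if iteration >= max_iter:                # (b) maximum iteration number
        return True
    rel_change = abs(mismatch_old - mismatch_new) / max(mismatch_old, 1e-300)
    if rel_change < rel_tol:                 # (c) relative change below 0.01%
        return True
    return False

assert should_stop(700.0, 900.0, 1)              # below beta_u^2 * p = 800
assert should_stop(1e5, 1e5, 100)                # iteration budget reached
assert should_stop(1e5, 1e5 * (1 + 1e-6), 5)     # relative change below 0.01%
assert not should_stop(1e5, 2e5, 5)              # none of (a)-(c) holds
```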
In practice, it is often nontrivial to obtain the Jacobian $\mathbf{G}_{i}$. If the exact Jacobian $\mathbf{G}_{i}$ is not available, then one must evaluate it based on a certain approximation scheme, which often involves a trade-off between computational efficiency and accuracy. Putting these factors together, in practice the iteration process, Eq. (\ref{eq:wls_rlm_sln_con}), may occasionally lead to a higher data mismatch for the current iterated model than for the previous one, due to, for example, an overly large step size that invalidates the first order Taylor approximation. In such cases, following \cite{chen2013-levenberg}, a simple remedy is to increase the parameter $\gamma^i$ to some extent and re-execute the iteration in Eq. (\ref{eq:wls_rlm_sln_con}) (which will then have a smaller step size). This strategy is essentially similar to the back-tracking line search method in optimization theory (see, for example, \cite{Nocedal-numerical}) and works well in our numerical studies. In the remainder of this section we consider an extended RLM scheme in the context of ensemble smoothing. As in the aLM-EnRML, instead of examining the data mismatch of a single model, we are more interested in studying how the average data mismatch of the ensemble models changes over the iteration steps. To this end, the weighted least-squares problem in (\ref{eq:wls_stablized}) is replaced with the following minimum-average-cost (MAC) problem, namely, {\small \begin{linenomath*} \begin{IEEEeqnarray}{lll} \label{eq:wls_rlm_mac} \underset{\{\mathbf{m}^{i+1}_j\}_{j=1}^{N_e}}{\operatorname{argmin}} & \dfrac{1}{N_e} \sum\limits_{j=1}^{N_e} & \, \left[ \left( \mathbf{d}^o_j - \mathbf{g} \left( \mathbf{m}^{i+1}_j \right) \right)^T \mathbf{C}_{d}^{-1} \left( \mathbf{d}^o_j - \mathbf{g} \left( \mathbf{m}^{i+1}_j \right) \right) \right. \\ & & \quad \left.
+ \gamma^{i} \left( \mathbf{m}^{i+1}_j - \mathbf{m}^{i}_j \right)^T \left( \tilde{\mathbf{C}}_{m}^{i} \right)^{-1} \left( \mathbf{m}^{i+1}_j - \mathbf{m}^{i}_j \right) \right] \, , \nonumber \end{IEEEeqnarray} \end{linenomath*} } where $\tilde{\mathbf{C}}_{m}^{i} = \tilde{\mathbf{S}}_{m}^{i} \left( \tilde{\mathbf{S}}_{m}^{i} \right)^T$ ($\tilde{\mathbf{S}}_{m}^{i}$ will be specified later), and $\mathbf{d}^o_j$, $\mathbf{m}^{i}_j$ and $\mathbf{m}^{i+1}_j$ are the same as those defined previously. For distinction, we call the resulting iES \textit{regularized Levenberg-Marquardt algorithm based on minimum-average-cost} (RLM-MAC for short). Appendix \ref{sec:appdendix_iES} also includes some thoughts that aim to interpret the RLM-MAC from the point of view of an expectation-maximization algorithm \cite{Dempster1977maximum}. To solve (\ref{eq:wls_rlm_mac}), it is necessary to linearize the forward model $\mathbf{g}$ as in the RLM algorithm. Note that in (\ref{eq:wls_rlm_mac}) there are $N_e$ simulated observation terms $\mathbf{g} \left( \mathbf{m}^{i+1}_j \right)$ ($j=1,2,\dotsb,N_e$). To reduce the computational cost in evaluating the Jacobian matrices, one may choose to linearize the forward model $\mathbf{g}$, for each ensemble member $\mathbf{m}^{i+1}_j$, around a ``common'' point, say $\mathbf{m}^{i}_c$, such that {\small \begin{linenomath*} \begin{IEEEeqnarray}{ll} \label{eq:linearization_common} \mathbf{g} \left( \mathbf{m}^{i+1}_j \right) \approx \mathbf{g} \left(\mathbf{m}^{i}_c \right) + \mathbf{G}^{i}_c \left( \mathbf{m}^{i+1}_j - \mathbf{m}^{i}_c \right) \, , \end{IEEEeqnarray} \end{linenomath*} } where $\mathbf{G}^{i}_c$ is the Jacobian of $\mathbf{g}$ at $\mathbf{m}^{i}_c$. Inserting Eq. 
(\ref{eq:linearization_common}) into (\ref{eq:wls_rlm_mac}) and with some algebra (see Appendix \ref{sec:appdendix_deduction}), one has the following iteration formula {\small \begin{linenomath*} \begin{IEEEeqnarray}{lllll} \label{eq:wls_rlm_mac_orig} & \mathbf{m}^{i+1}_j = & \, \mathbf{m}^{i}_j + \Delta\mathbf{m}^{i}_j & & \\ & \Delta\mathbf{m}^{i}_j = & \, \tilde{\mathbf{S}}_m^i (\mathbf{G}^{i}_c \tilde{\mathbf{S}}_m^i)^T \left[ (\mathbf{G}^{i}_c \tilde{\mathbf{S}}_m^i) (\mathbf{G}^{i}_c \tilde{\mathbf{S}}_m^i)^T + \gamma^i \, \mathbf{C}_{d} \right]^{-1} \left( \mathbf{d}^o_j - \mathbf{g} \left( \mathbf{m}^{i}_c \right) - \mathbf{G}^{i}_c \left( \mathbf{m}^{i}_j - \mathbf{m}^{i}_c \right) \right) \, ,& & \nonumber \end{IEEEeqnarray} \end{linenomath*} } for $j = 1, \dotsb, N_e$. If one lets {\small \begin{linenomath*} \begin{IEEEeqnarray}{ll} \label{eq:new_Smi} \tilde{\mathbf{S}}_m^i = \frac{1}{\sqrt{{N_e}-1}}\left[\mathbf{m}_1^i - {\mathbf{m}}^i_c,\dotsb, \mathbf{m}_{N_e}^i - {\mathbf{m}}^i_c \right], \end{IEEEeqnarray} \end{linenomath*} } and adopts the approximation $\mathbf{G}^{i}_c \left( \mathbf{m}^{i}_j - \mathbf{m}^{i}_c \right) \approx \mathbf{g} \left( \mathbf{m}^{i}_j \right) - \mathbf{g} \left(\mathbf{m}^{i}_c \right)$, then Eq. 
(\ref{eq:wls_rlm_mac_orig}) reduces to {\small \begin{linenomath*} \begin{IEEEeqnarray}{crlc} \label{eq:wls_rlm_mac_simplified} & \mathbf{m}^{i+1}_j = \mathbf{m}^{i}_j + \tilde{\mathbf{S}}_m^i (\tilde{\mathbf{S}}_d^i)^T \left( \tilde{\mathbf{S}}_d^i (\tilde{\mathbf{S}}_d^i)^T + \gamma^i \, \mathbf{C}_{d} \right)^{-1} \left( \mathbf{d}^o_j - \mathbf{g} \left( \mathbf{m}^{i}_j \right) \right) \, , & & \end{IEEEeqnarray} \end{linenomath*} } where {\small \begin{linenomath*} \begin{IEEEeqnarray}{ll} \label{eq:new_Sdi} \tilde{\mathbf{S}}_d^i = \frac{1}{\sqrt{{N_e}-1}}\left[\mathbf{g} \left( \mathbf{m}^{i}_1 \right) - \mathbf{g} \left(\mathbf{m}^i_c \right),\dotsb, \mathbf{g} \left( \mathbf{m}^{i}_{N_e} \right) - \mathbf{g} \left(\mathbf{m}^i_c \right) \right]. \end{IEEEeqnarray} \end{linenomath*} } Compared to the iteration formulae of ES-MDA and aLM-EnRML (e.g., Eq. (\ref{eq:final_iteration_form_enrml})), the iteration formula of RLM-MAC in general differs in the way of constructing the square root matrices $\tilde{\mathbf{S}}_m^i$ and $\tilde{\mathbf{S}}_d^i$. Possible choices for $\mathbf{m}^i_c$ could be, for instance, the mean $\bar{\mathbf{m}}^{i}$ or a member (e.g., the one closest to $\bar{\mathbf{m}}^{i}$ ) of the ensemble $\{ \mathbf{m}^{i}_j \}_{j=1}^{N_e}$. In the current work we let $\mathbf{m}^i_c = \bar{\mathbf{m}}^{i}$. Under the choice $\mathbf{m}^i_c = \bar{\mathbf{m}}^{i}$, $\tilde{\mathbf{S}}_m^i$ coincides with ${\mathbf{S}}_m^i$, therefore the resulting RLM-MAC algorithm mainly differs from the aLM-EnRML in the data square root matrix. In general, $\tilde{\mathbf{S}}_d^i$ and ${\mathbf{S}}_d^i$ may have different ``centering points'': in Eq. (\ref{eq:esmad_Sdi}), the simulated observations $\mathbf{g} \left( \mathbf{m}^{i}_j \right)$ in ${\mathbf{S}}_d^i$ center around $\overline{\mathbf{g}\left(\mathbf{m}_j^{i}\right)}$, while in $\tilde{\mathbf{S}}_d^i$ they center around $\mathbf{g} \left(\bar{\mathbf{m}}^{i} \right)$ instead. 
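A minimal sketch of one RLM-MAC iteration step, Eq. (\ref{eq:wls_rlm_mac_simplified}), with the centering point $\mathbf{m}^i_c = \bar{\mathbf{m}}^{i}$ follows; the toy weakly nonlinear forward model, dimensions and covariances are illustrative assumptions only:

```python
import numpy as np

# One RLM-MAC step with m_c^i taken as the ensemble mean. The forward
# model, dimensions and covariances below are toy assumptions.
rng = np.random.default_rng(1)
n, p, Ne = 6, 4, 30
A = rng.standard_normal((p, n))

def g(m):                          # weakly nonlinear toy forward model
    return A @ m + 0.01 * m[:p] ** 2

C_d = 0.05 * np.eye(p)
d_obs = [rng.standard_normal(p) for _ in range(Ne)]   # perturbed data d^o_j
ens = [rng.standard_normal(n) for _ in range(Ne)]     # ensemble {m_j^i}

def rlm_mac_step(ens, d_obs, gamma):
    m_c = np.mean(ens, axis=0)                        # centering point m_c^i
    g_c = g(m_c)
    Sm = np.stack([m - m_c for m in ens], axis=1) / np.sqrt(len(ens) - 1)
    Sd = np.stack([g(m) - g_c for m in ens], axis=1) / np.sqrt(len(ens) - 1)
    K = Sm @ Sd.T @ np.linalg.inv(Sd @ Sd.T + gamma * C_d)
    return [m + K @ (d - g(m)) for m, d in zip(ens, d_obs)]

def avg_mismatch(ens):
    return np.mean([(g(m) - d) @ np.linalg.solve(C_d, g(m) - d)
                    for m, d in zip(ens, d_obs)])

before = avg_mismatch(ens)
after = avg_mismatch(rlm_mac_step(ens, d_obs, gamma=1.0))
assert after < before              # the average data mismatch decreases here
```

Note that the mismatch decrease observed in this toy run is not guaranteed for strongly nonlinear forward models, which is precisely the situation the back-tracking safeguard is meant to handle.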
In certain circumstances, e.g., when the forward model $\mathbf{g}$ is linear or weakly nonlinear, $\overline{\mathbf{g}\left(\mathbf{m}_j^{i}\right)}$ and $\mathbf{g} \left(\bar{\mathbf{m}}^{i} \right)$ may be close to each other\footnote{In particular, when $\mathbf{g}$ is linear, $\overline{\mathbf{g}\left(\mathbf{m}_j^{i}\right)} = \mathbf{g} \left(\bar{\mathbf{m}}^{i} \right)$, such that the RLM-MAC and aLM-EnRML become identical.}. In some other cases, the distinction between $\overline{\mathbf{g}\left(\mathbf{m}_j^{i}\right)}$ and $\mathbf{g} \left(\bar{\mathbf{m}}^{i} \right)$ may be more substantial, and the behaviour of the aLM-EnRML and RLM-MAC may thus diverge more, as will be shown later. In terms of the performance comparison between the aLM-EnRML and RLM-MAC, however, we stress that, although one may observe the relative superiority of one algorithm over the other in some specific cases, the conclusion might not be valid in a broader sense, following the ``no free lunch theorems for optimization'' \cite{wolpert1997no}. Intuitively, since both the aLM-EnRML and RLM-MAC are local optimization algorithms and may have different search directions, the two algorithms may end up at different local optima in general, and the relative superiority between them may thus vary from case to case. Finally we note that, in the course of solving (\ref{eq:wls_rlm_mac}), we have made use of the first order Taylor approximation around a common point $\mathbf{m}^{i}_c$ in order to reduce the computational cost. In this regard, a theoretically more sound -- yet computationally more expensive -- strategy would be to linearize the forward model $\mathbf{g}$, for each ensemble model, around a certain point in its neighbourhood. A trade-off strategy may also be employed, e.g., by clustering the ensemble models into a small number of groups, and then linearizing $\mathbf{g}$ around a common point for each group.
In addition, in the context of solving a MAC problem, it is also possible to explicitly incorporate into (\ref{eq:wls_rlm_mac}) additional terms that impose certain constraints on the statistics of the ensemble members or the corresponding simulated observations, such that the iteration step may take a form beyond the Kalman update formula in Eq. (\ref{eq:wls_rlm_mac_simplified}). An investigation in this aspect will be carried out in the future. \section*{Numerical examples} After discussing the similarities and differences between the ES-MDA, aLM-EnRML and RLM-MAC in the previous section, we focus on demonstrating the potentially different behaviour of the ensemble smoothers that results from using different ``centering points'' ($\overline{\mathbf{g}\left(\mathbf{m}_j^{i}\right)}$ or $\mathbf{g} \left(\bar{\mathbf{m}}^{i} \right)$) to construct the data square root matrix. As noted previously, in the ES-MDA, the total number $I$ of iteration steps is decided before the iteration process starts, and the parameters $\gamma^i$ are dependent on $I$, in light of the constraint (\ref{eq:ES-MDA_constraints}). The aLM-EnRML and RLM-MAC do not have the above features in general: under the stopping conditions (a)-(c) in the previous section, the total numbers of iteration steps of the aLM-EnRML and RLM-MAC depend on the threshold $\beta_u^2 \, p$ and the convergence speed of the algorithms in specific problems, subject to the constraint of the maximum iteration number. In addition, the parameters $\gamma^i$ are chosen according to a different rule, which is in general independent of $I$ or the maximum iteration number. Therefore, for our purpose here, we confine ourselves to the comparison between the aLM-EnRML and the RLM-MAC.
In the implementations, both algorithms apply the truncated singular value decomposition (TSVD) to the data square root matrices (normalized by $\mathbf{C}_d^{-1/2}$) in their iteration formulae\footnote{In doing so, the iteration formulae, e.g., that in Eq. (\ref{eq:wls_rlm_mac_simplified}), are re-expressed as $\mathbf{m}^{i+1}_j = \mathbf{m}^{i}_j + \tilde{\mathbf{S}}_m^i ( \mathbf{C}_d^{-1/2} \tilde{\mathbf{S}}_d^i)^T \left( (\mathbf{C}_d^{-1/2} \tilde{\mathbf{S}}_d^i ) ( \mathbf{C}_d^{-1/2} \tilde{\mathbf{S}}_d^i)^T + \gamma^i \, \mathbf{I}_p \right)^{-1} \mathbf{C}_d^{-1/2} \left( \mathbf{d}^o_j - \mathbf{g} \left( \mathbf{m}^{i}_j \right) \right)$, where $\mathbf{I}_p$ is the $p$-dimensional identity matrix. The TSVD is applied to the product $(\mathbf{C}_d^{-1/2} \tilde{\mathbf{S}}_d^i ) ( \mathbf{C}_d^{-1/2} \tilde{\mathbf{S}}_d^i)^T$, which can have certain numerical benefits, e.g., mitigating the issue of different orders of amplitudes in the measurement data.}, and retain the leading eigenvalues which sum up to at least $99\%$ of the ``total energies'' (see \cite{chen2013-levenberg,tavakoli2010history} for the details). In the experiments below we first examine the performance of both algorithms in a strongly nonlinear system, and then apply both algorithms to a reservoir facies estimation problem and the Brugge field case. \subsection*{An initial state estimation problem in a strongly nonlinear system} Here we adopt the strongly nonlinear $40$-dimensional Lorenz 96 (L96) model \cite{Lorenz-optimal} to demonstrate the potentially different behaviour between the aLM-EnRML and RLM-MAC. The governing equations of the L96 model are given by \begin{linenomath*} \begin{equation} \label{eq:LE98} \frac{dx_k}{dt} = \left( x_{k+1} - x_{k-2} \right) x_{k-1} - x_k + F, \, k=1, \dotsb, 40. \end{equation} \end{linenomath*} In Eq.~(\ref{eq:LE98}) the variables $x_{k}$ are defined in a cyclic way such that $x_{-1}=x_{39}$, $x_{0}=x_{40}$ and $x_{41}=x_{1}$. 
The L96 model is designed to mimic baroclinic turbulence in the midlatitude atmosphere. Because of its strongly nonlinear nature, this model is often employed to test the performance of data assimilation algorithms in the weather data assimilation community. In this work we let $F=8$, with which the L96 model is chaotic. The L96 model is discretized through the fourth-order Runge-Kutta method with a constant integration step of $0.05$ (corresponding to 6 hours in real time, see \cite{Lorenz-optimal}). To initialize the model, we draw an initial state vector at random from the multivariate normal distribution $N(\mathbf{0},\mathbf{I}_{40})$, where $\mathbf{I}_{40}$ is the 40-dimensional identity matrix. To avoid the transient effect, the model states within the first 500 integration steps are discarded, and data assimilation starts after this ``spin-up'' period. For reference, hereafter we re-label the model state at time step 500 as the initial state, which has an impact on subsequent model states and their observations. In the experiment the initial state vector is assumed unknown, and will be estimated based on the observations at subsequent time steps. The assimilation time window is 10 days, and the observations are available every 24 hours. At each observation time instant (e.g., 24h, 48h etc.), the synthetic observations are generated by first applying the nonlinear function $f(x) = x^3/5$ to the odd-numbered state variables $x_1,x_3,\dotsb,x_{39}$ (overall 20 variables), and then adding a sample from the normal distribution $N(0,1)$ to each function value $f(x_k)$ ($k=1,3,\dotsb,39$). The total number of observation elements in the assimilation time window is thus $10 \times 20 = 200$.
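The experiment setup described above can be sketched as follows (an illustrative reimplementation, not the original code): the governing equations with cyclic indexing, an RK4 step with $\Delta t = 0.05$, and the observation operator $f(x) = x^3/5$ applied to the odd-numbered state variables:

```python
import numpy as np

# Illustrative reimplementation of the L96 setup described in the text.
F = 8.0

def l96_rhs(x):
    # dx_k/dt = (x_{k+1} - x_{k-2}) x_{k-1} - x_k + F; np.roll realizes the
    # cyclic convention x_{-1} = x_{39}, x_0 = x_{40}, x_{41} = x_1
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4_step(x, dt=0.05):
    k1 = l96_rhs(x)
    k2 = l96_rhs(x + 0.5 * dt * k1)
    k3 = l96_rhs(x + 0.5 * dt * k2)
    k4 = l96_rhs(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def observe(x, rng=None):
    # x_1, x_3, ..., x_39 correspond to the 0-based indices 0, 2, ..., 38
    y = x[0::2] ** 3 / 5.0
    if rng is not None:
        y = y + rng.standard_normal(y.size)   # additive N(0, 1) noise
    return y

# "spin-up": discard the first 500 integration steps
x = np.full(40, F)
x[0] += 0.01                  # small perturbation off the equilibrium x = F
for _ in range(500):
    x = rk4_step(x)
assert np.all(np.isfinite(x))
assert observe(x).size == 20  # 20 observed variables per time instant
```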
The initial background ensemble is drawn from a multivariate normal distribution whose mean and covariance are obtained from the ``climatological'' statistics of the L96 model in a long free run with 100,000 integration steps (see, for example, \cite{Luo2014ensemble} for the details). The ensemble size of the aLM-EnRML is 100. Note that in the RLM-MAC one also needs to compute $\mathbf{g} \left(\bar{\mathbf{m}}^{i} \right)$, the simulated observation of the ensemble mean, apart from those of the ensemble members. Because of this, we let the ensemble size of the RLM-MAC be 99, such that the aLM-EnRML and RLM-MAC have the same number of simulated observations at each iteration step. The maximum number of iterations is 100, and the coefficient $\beta_u = 2$. In light of the requirement that the first order Taylor approximation be roughly valid, it is suggested to start with a relatively small step size \cite{Luo2014ensemble}. As a result, the initial value $\gamma^0$ should be relatively large (but not too large, in order to limit the total number of iteration steps). In our implementation, we let $\gamma^i = \alpha^i \, \sqrt{\text{trace}(\mathbf{S}_d^i (\mathbf{S}_d^i)^T)} / N_e$, where $\text{trace}(\bullet)$ calculates the trace of a matrix, $\alpha^0 = 1$, and $\alpha^{i+1} = 0.9 \, \alpha^{i}$ if the average data mismatch of the ensemble is reduced; otherwise the $i$th iteration step is repeated with $\alpha^{i}$ doubled, for at most 5 times, following \cite{chen2013-levenberg}. In the two subsequent case studies, the same rule is applied to tune $\alpha^i$ adaptively. We stress that this should not be considered the optimal rule; one may well be able to improve it for specific applications. As the dimension of the L96 model is relatively low, we can afford to repeat the experiment 100 times. In each repetition, the initial background ensemble and the associated observations are drawn at random.
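The adaptive rule for $\gamma^i$ can be sketched as below; the function \texttt{try\_step} is a hypothetical stand-in for one smoother iteration that reports the new and old average data mismatch:

```python
import numpy as np

# Sketch of the adaptive choice gamma^i = alpha^i * sqrt(trace(S_d S_d^T)) / N_e
# with back-tracking: alpha shrinks by 0.9 after a successful step and is
# doubled (at most 5 times) when the average data mismatch fails to decrease.
def gamma_value(alpha, Sd, Ne):
    return alpha * np.sqrt(np.trace(Sd @ Sd.T)) / Ne

def adapt_alpha(try_step, alpha, max_retries=5):
    """try_step(alpha) -> (new_mismatch, old_mismatch); returns the next alpha."""
    for _ in range(max_retries):
        new, old = try_step(alpha)
        if new < old:
            return 0.9 * alpha    # success: reduce alpha for the next iteration
        alpha = 2.0 * alpha       # failure: double alpha and repeat the step
    return alpha

# toy check: a step that only "succeeds" once alpha is large enough
calls = []
def toy_step(alpha):
    calls.append(alpha)
    return (0.5, 1.0) if alpha >= 4.0 else (1.5, 1.0)

alpha = adapt_alpha(toy_step, alpha=1.0)
assert calls == [1.0, 2.0, 4.0]   # the step was repeated twice with doubled alpha
assert abs(alpha - 3.6) < 1e-12   # 0.9 * 4.0 after the successful attempt
```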
In the same repetition run, the aLM-EnRML and RLM-MAC have the same initial background ensemble, the associated observations and the parameter rule in choosing $\gamma^i$. Figures \ref{fig:L96_histogram} - \ref{fig:L96_pie_charts_rmse} report the data mismatch and RMSEs of the aLM-EnRML and RLM-MAC, obtained from 100 repetitions of the experiment\footnote{Here the data mismatch is calculated according to (\ref{eq:wls_original}), and is averaged over all ensemble members. The RMSE of a $m$-dimensional estimate $\hat{\mathbf{v}}$ with respect to its truth ${\mathbf{v}}^{tr}$ is give by $\Vert \hat{\mathbf{v}} - {\mathbf{v}}^{tr} \Vert_2 / \sqrt{m}$, where $\Vert \bullet \Vert_2$ denotes the $\ell_2$-norm.}. In terms of the data mismatch, from Figures \ref{fig:L96_histogram} and \ref{fig:L96_pie_charts_data_mismtach}, one can see that for the aLM-EnRML, its range is roughly between $10^3$ and $10^6$. Overall, $2\%$ of the data mismatch lies in the interval $[10^3,10^4]$, $51\%$ in the interval $[10^4,10^5]$, and $47\%$ in the interval $[10^5,10^6]$. For the RLM-MAC, the range of the data mismatch is roughly between $10^2$ and $10^6$. there are $11\%$ of the data mismatch contained in the interval $[10^2,10^3]$, $13\%$ in the interval $[10^3,10^4]$, $51\%$ in the interval $[10^4,10^5]$, and the remaining $25\%$ in the interval $[10^5,10^6]$. This suggests that in some cases the RLM-MAC will have data mismatch lower than the minimum one of the aLM-EnRML. Similar results can also be observed in terms of the RMSEs (Figures \ref{fig:L96_histogram} and \ref{fig:L96_pie_charts_rmse}). As can be seen in Figure \ref{fig:L96_histogram}, for the aLM-EnRML, its RMSEs seem to follow a uni-modal distribution, with its (single) peak around 3. On the other hand, for the RLM-MAC, the distribution of its RMSEs tends to be more flat, with one of its peaks being closer to zero. 
By counting the numbers of RMSEs in the sub-intervals (e.g., $[0,1]$, $[1,2]$, etc.) of the range $[0 , 6]$, the pie charts in Figure \ref{fig:L96_pie_charts_rmse} indicate that the percentages of the RMSEs of the aLM-EnRML in the sub-intervals (from low to high values) are $2\%$, $18\%$, $34\%$, $31\%$, $12\%$ and $3\%$, respectively, while the percentages with respect to the RLM-MAC are $22\%$, $18\%$, $28\%$, $16\%$, $14\%$ and $2\%$. Overall, Figures \ref{fig:L96_histogram} and \ref{fig:L96_pie_charts_rmse} also suggest that in some cases the RLM-MAC attains RMSEs lower than the minimum of the aLM-EnRML. Figure \ref{fig:L96_single_data_mismatch} shows the box plots of the data mismatch of the aLM-EnRML (left) and RLM-MAC (right) at different iteration steps in one repetition run. The ensemble average data mismatch in both iES decreases with the iteration. In this particular case, however, the data mismatch of the RLM-MAC tends to converge faster than that of the aLM-EnRML. In addition, its final data mismatch (up to the maximum iteration step in the figure) is substantially lower than that of the aLM-EnRML. We also note that the top outlier in each box plot of the aLM-EnRML does not appear to follow the trend of the other data mismatch values in the same box plot, a phenomenon not observed in the RLM-MAC. A possible explanation of this difference might be as follows: In the RLM-MAC, in order for the cost function in the MAC problem to decrease with respect to the change of each ensemble member (see the deduction in Appendix \ref{sec:appdendix_deduction}), it is natural to use the deviations \begin{linenomath*} \begin{equation} \label{eq:data_sqrt_rlmmac} \mathbf{g} \left( \mathbf{m}_j^{i} \right) - \mathbf{g} \left( \bar{\mathbf{m}}^i \right) \end{equation} \end{linenomath*} to construct the data square root matrix on top of the first order Taylor approximation.
In the aLM-EnRML, however, (\ref{eq:data_sqrt_rlmmac}) is replaced by \begin{linenomath*} \begin{equation} \label{eq:data_sqrt_aLMEnRML} \mathbf{g} \left( \mathbf{m}_j^{i} \right) - \overline{\mathbf{g}\left(\mathbf{m}_j^{i}\right)} \, . \end{equation} \end{linenomath*} The difference between (\ref{eq:data_sqrt_rlmmac}) and (\ref{eq:data_sqrt_aLMEnRML}) then depends on the degree of nonlinearity in $\mathbf{g}$. In this particular case, because of the strong nonlinearity in the L96 model, the difference between (\ref{eq:data_sqrt_aLMEnRML}) and (\ref{eq:data_sqrt_rlmmac}) may be relatively large (see Figure \ref{fig:obsDiff_boxPlot} later for a numerical investigation), such that the iteration formula based on the choice (\ref{eq:data_sqrt_aLMEnRML}) may fail to form descent directions for all the ensemble members in the aLM-EnRML. In accordance with the results in Figure \ref{fig:L96_single_data_mismatch}, the top panels of Figure \ref{fig:L96_rmse_spread} show the box plots of RMSEs of the ensemble members with respect to the initial conditions at different iteration steps in the aLM-EnRML (left) and RLM-MAC (right), and the bottom panels indicate the associated ensemble spreads\footnote{Suppose that $\{\mathbf{m}_j\}_{j=1}^{N_e}$ is an ensemble of models with the ensemble size being $N_e$, and that $m_{j,s}$ is the $s$th element of the vector $\mathbf{m}_j$. Let $\sigma^2_s$ be the sample variance of the $s$th elements of all ensemble members; then the ensemble spread is defined as $\sqrt{\sum_{s=1}^{m}\sigma^2_s / m}$, where $m$ is the dimension of the ensemble members $\mathbf{m}_j$.}. For both the aLM-EnRML and RLM-MAC, the RMSEs drop at the first few iteration steps, bounce back subsequently, and then drop again until they converge (although a slight oscillation can also be observed around iteration step 85 in the aLM-EnRML). In this particular case, the final RMSEs of the RLM-MAC tend to be lower than those of the aLM-EnRML.
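The ensemble spread defined in the footnote above can be computed as follows (the unbiased sample variance is an assumption, since the text does not specify the normalization):

```python
import numpy as np

# Ensemble spread as defined in the footnote: sqrt( sum_s sigma_s^2 / m ),
# with sigma_s^2 the sample variance of the s-th component over the ensemble.
def ensemble_spread(ens):
    ens = np.asarray(ens, dtype=float)      # shape (N_e, m)
    var = ens.var(axis=0, ddof=1)           # per-component sample variance
    return float(np.sqrt(var.mean()))

# two members, per-component sample variance 2.0 -> spread sqrt(2)
assert abs(ensemble_spread([[0.0, 0.0], [2.0, 2.0]]) - np.sqrt(2.0)) < 1e-12
```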
The ensemble spread of the aLM-EnRML monotonically decreases with the iteration, while that of the RLM-MAC reaches its minimum around iteration step 10, and then slightly increases afterwards. The final ensemble spreads of both algorithms are close to each other, and both are underestimated in comparison with the corresponding RMSEs. One possible reason for this underestimation is that in both smoothers the ensemble members tend to converge to regions surrounding certain local minima, so that the ensemble spreads are relatively small, while the RMSEs are determined by the distances between the local minima and the truth, and thus are not necessarily as small as the corresponding ensemble spreads. Apart from this, we envision that the effects of finite ensemble size (e.g., sampling errors and correlations among ensemble members \cite{naevdal2011quantifying}) may also contribute to the underestimation phenomenon. Figure \ref{fig:obsDiff_boxPlot} depicts the normalized differences $\mathbf{C}_d^{-1/2} \left( \mathbf{g} \left(\bar{\mathbf{m}}^{i} \right) - \overline{\mathbf{g}\left(\mathbf{m}_j^{i}\right)} \right)$ at different iteration steps, based on the corresponding ensemble members of the RLM-MAC. Note that in this case the box plots indicate the spatial distribution of the elements of the vector $\mathbf{C}_d^{-1/2} \left( \mathbf{g} \left(\bar{\mathbf{m}}^{i} \right) - \overline{\mathbf{g}\left(\mathbf{m}_j^{i}\right)} \right)$, whose size is equal to the number of observations (rather than the ensemble size).
In addition, in the box plots, $\mathbf{g} \left(\bar{\mathbf{m}}^{i} \right)$ and $\overline{\mathbf{g}\left(\mathbf{m}_j^{i}\right)}$ are computed from the same ensemble of the RLM-MAC at each iteration step, rather than one (e.g., $\mathbf{g} \left(\bar{\mathbf{m}}^{i} \right)$) from the ensemble of the RLM-MAC and the other (e.g., $\overline{\mathbf{g}\left(\mathbf{m}_j^{i}\right)}$) from the corresponding ensemble of the aLM-EnRML. From Figure \ref{fig:obsDiff_boxPlot} one can see that the spreads of the normalized differences are relatively large at the first few iteration steps. This may be because the initial ensemble spreads are relatively large (see the lower right panel of Figure \ref{fig:L96_rmse_spread}), and the chaotic nature of the L96 model tends to further amplify them. However, as the number of iterations increases, the ensemble members of the RLM-MAC tend to converge toward a certain local minimum, and the ensemble spread decreases rapidly. As a result, $\mathbf{g} \left(\bar{\mathbf{m}}^{i} \right)$ approaches $ \overline{\mathbf{g}\left(\mathbf{m}_j^{i}\right)}$, such that the normalized differences decrease and tend to center around zero. \subsection*{A reservoir facies estimation problem} Here we consider a simple 2D facies model (see Figure \ref{fig:true_facies}) previously studied in \cite{lorentzen2012history}. The reservoir consists of $45 \times 45$ gridblocks, with only oil and water inside. The porosity of the reservoir is 0.1 in each gridblock, but the permeability varies spatially: it is 10000 md in the channels, and 500 md in the background. In the field there are 8 water injectors (I1 - I8) and 8 producers (P1 - P8), and their locations are marked as white dots in Figure \ref{fig:true_facies}. The injectors are constrained by injection rates (15.9 m$^3$/day in each), while the producers are constrained by bottom hole pressures (BHP, 399 bar in each).
The measurements used for history matching are the oil and water production rates in the producers and BHP in the injectors. To generate the synthetic measurements, we use the reference facies model in Figure \ref{fig:true_facies} as the input to ECLIPSE$^\copyright$ \cite{eclipse2010} and run the simulation for 1900 days. The measurements are then taken as the outputs of ECLIPSE$^\copyright$ every 190 days, plus certain Gaussian noise (whose variances are 0.16 m$^3$/day for production rates, and 0.07 bar for BHP). To enhance the connectivities of the estimated channels, eight statistical measures used in \cite{lorentzen2012history} are also assimilated into the model. These statistics measurements include the sample mean values of: (1) area of bodies (a body is defined as neighboring connected gridblocks with the same facies type, see \cite{lorentzen2012history}); (2) volume of bodies; (3) area-to-volume ratio of bodies; (4) maximal body extension in x-direction; (5) maximal body extension in y-direction; (6) $75$th percentile for the extensions in the x-direction; (7) number of bodies; and (8) fraction of the total volume of the facies with respect to the total volume of the field, where all the mean values are evaluated with respect to the initial background ensemble \cite{lorentzen2012history}. Accordingly, in the course of assimilation, the measurement error variances associated with these statistics measurements are taken as their sample variances that are also evaluated with respect to the initial background ensemble. We have also run both the aLM-EnRML and RLM-MAC when the statistics measurements are not included in assimilation. In this case, the final data mismatch values are close to what we show in Figure \ref{fig:facies_obj} later, but the corresponding estimated channels tend to be less connected (results not shown), consistent with the observation in \cite{lorentzen2012history}. 
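As an illustration of the body statistics, the sketch below labels the ``bodies'' of one facies type by a breadth-first flood fill and returns their areas; the 4-neighbour connectivity is an assumption here, as \cite{lorentzen2012history} may define the neighbourhood differently:

```python
import numpy as np
from collections import deque

# A "body" is a set of neighbouring connected gridblocks with the same facies
# type. The 4-neighbour connectivity below is an illustrative assumption.
def label_bodies(facies, facies_type=1):
    """Return a list of body areas (in gridblocks) for one facies type."""
    mask = (np.asarray(facies) == facies_type)
    seen = np.zeros_like(mask, dtype=bool)
    sizes = []
    ny, nx = mask.shape
    for start in zip(*np.nonzero(mask)):
        if seen[start]:
            continue
        # breadth-first flood fill from an unvisited gridblock of this type
        queue, size = deque([start]), 0
        seen[start] = True
        while queue:
            y, x = queue.popleft()
            size += 1
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                yy, xx = y + dy, x + dx
                if 0 <= yy < ny and 0 <= xx < nx and mask[yy, xx] and not seen[yy, xx]:
                    seen[yy, xx] = True
                    queue.append((yy, xx))
        sizes.append(size)
    return sizes

facies = np.array([[1, 1, 0, 0],
                   [0, 1, 0, 1],
                   [0, 0, 0, 1]])
sizes = sorted(label_bodies(facies))
assert sizes == [2, 3]   # two bodies, with areas 2 and 3 gridblocks
assert len(sizes) == 2   # the "number of bodies" statistic
```

The remaining statistics (extensions in the x- and y-directions, area-to-volume ratios, and so on) can be derived from the same labeling in an analogous way.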
We follow \cite{lorentzen2012history} and use the signed distance function in the level set method to characterize the facies. When using an iES for history matching, there is no need to augment the model with the corresponding state variables as in the EnKF. Therefore, for the problem considered here, the initial ensemble only contains the signed distance function values at each gridblock, which are converted from the facies models generated by SNESIM \cite{strebelle2002conditional} through the MATLAB level set toolbox (version 1.1, \cite{mitchell2008flexible}). As in the previous example, the ensemble size in the aLM-EnRML is 100, while that in the RLM-MAC is 99. In addition, the aforementioned parameter rule $\gamma^i = \alpha^i \, \sqrt{\text{trace}(\mathbf{S}_d^i (\mathbf{S}_d^i)^T)} / N_e$ is also adopted in this example. The maximum number of iterations is 20, and the coefficient $\beta_u = 2$. A few initial models are shown in the first column of Figure \ref{fig:facies_models} (a, d, g), and the corresponding models obtained at the final iteration steps of the aLM-EnRML and RLM-MAC are plotted in the second column (b, e, h) and the third column (c, f, i), respectively. As can be seen in these images, despite the clear difference among the initial models, all the updated models of the aLM-EnRML and RLM-MAC bear certain similarities to each other, and they also, to some extent, resemble the reference model in Figure \ref{fig:true_facies}. Figure \ref{fig:facies_scores} reports the average scores of the initial ensemble (panel (a)) and the final ensembles of the aLM-EnRML (panel (b)) and RLM-MAC (panel (c)) in matching the facies of the reference model. The score is calculated as follows: at each gridblock, if a model matches the facies, it has a score of 1; otherwise it has a score of 0. The final score is then averaged over all the models in the ensemble. Therefore, a higher score means that more models have the correct estimated facies.
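The scoring rule just described can be sketched as:

```python
import numpy as np

# Average facies-matching score: at each gridblock a model scores 1 if its
# facies type matches the reference and 0 otherwise; the resulting score map
# is averaged over all ensemble members.
def facies_score(ensemble, reference):
    """ensemble: (N_e, ny, nx) integer facies codes; reference: (ny, nx)."""
    ensemble = np.asarray(ensemble)
    return (ensemble == np.asarray(reference)).mean(axis=0)

ref = np.array([[1, 0], [0, 1]])
models = [np.array([[1, 0], [0, 0]]),    # matches 3 of 4 gridblocks
          np.array([[1, 1], [0, 1]])]    # matches 3 of 4 gridblocks
score = facies_score(models, ref)
assert score.shape == (2, 2)
assert score[0, 0] == 1.0                # both models correct at this gridblock
assert abs(score.mean() - 0.75) < 1e-12
```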
From Figure \ref{fig:facies_scores} it is seen that, compared with the scores in the initial ensemble, the aLM-EnRML and RLM-MAC tend to improve the facies estimation in the channels, but also have deteriorated matching in certain areas (e.g., that around the injector I2). In addition, in this particular case, it seems that the RLM-MAC tends to perform better than the aLM-EnRML in terms of matching the facies. Figure \ref{fig:facies_obj} indicates the data mismatch of the aLM-EnRML (left) and RLM-MAC (right) as functions of the number of iteration steps. The aLM-EnRML stops at the 10th iteration step due to the stopping condition (c) mentioned earlier (relative change of data mismatch less than $0.01\%$), while the RLM-MAC stops at the maximum iteration step. Despite this difference, in this case the aLM-EnRML and RLM-MAC appear to have comparable performance, in terms of the convergence speed and the final data mismatch value. For the aLM-EnRML, its final data mismatch is $2197.71 \pm 234.63$, where the number before $\pm$ is the mean, and that after $\pm$ is the standard deviation (STD). On the other hand, for the RLM-MAC, its final data mismatch is $1864.27 \pm 98.99$, slightly lower than that of the aLM-EnRML. Following \cite{lorentzen2012history}, Figures \ref{fig:facies_forecasts_WOPR} and \ref{fig:facies_forecasts_WWPR} show the performance of history matching oil and water production rates in producers P5 and P7, for the initial ensemble and the ensembles of the aLM-EnRML and RLM-MAC at their final iteration steps, respectively. In both figures, compared to the results with respect to the initial ensemble, those obtained from the final ensembles of the aLM-EnRML and RLM-MAC appear to have reduced ensemble spread and improved data mismatch. The aLM-EnRML performs better than the RLM-MAC in history matching producer P5, but the situation is reversed in producer P7. 
{\color{black} In the initial ensemble the oil production rates in P5 and the water production rates in P7 tend to be lower than the reference values (which may be related to the bias in the facies distribution of the initial ensemble, as highlighted in Figure \ref{subfig:fieldScore0_1}), and this bias trend remains in the final ensembles obtained by the aLM-EnRML and RLM-MAC. Similarly, in the initial ensemble the oil production rates in P7 and the water production rates in P5 tend to be higher than the reference values. This bias trend is clearly visible in the final ensemble of the aLM-EnRML, but seems to be better mitigated in the final ensemble of the RLM-MAC.} The top panels of Figure \ref{fig:facies_rmse_spread} show the box plots of RMSEs (in permeability values) of ensemble models with respect to the reference model at different iteration steps in the aLM-EnRML (left) and RLM-MAC (right), and the bottom ones indicate the corresponding spreads in the ensemble. Similar to the situation in Figure \ref{fig:L96_rmse_spread}, oscillations of the RMSEs (e.g., in terms of the medians of the box plots) are observed in both iES before the RMSE values enter certain plateaus. In this particular case, the final RMSEs of the RLM-MAC tend to be lower than those of the aLM-EnRML. On the other hand, the ensemble spread of the aLM-EnRML tends to decrease with the iteration, while that of the RLM-MAC exhibits a slight U-turn behaviour twice before it enters a plateau. The final ensemble spread of the RLM-MAC is higher than that of the aLM-EnRML, and both are underestimated in comparison with the corresponding RMSEs. Figure \ref{fig:facies_obsDiff_boxPlot} shows the normalized differences $\mathbf{C}_d^{-1/2} \left( \mathbf{g} \left(\bar{\mathbf{m}}^{i} \right) - \overline{\mathbf{g}\left(\mathbf{m}_j^{i}\right)} \right)$ at different iteration steps, calculated based on the ensemble models of the RLM-MAC. 
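For reference, the normalized difference $\mathbf{C}_d^{-1/2} \left( \mathbf{g} \left(\bar{\mathbf{m}}^{i} \right) - \overline{\mathbf{g}\left(\mathbf{m}_j^{i}\right)} \right)$ can be sketched as follows, assuming (as is common) a diagonal $\mathbf{C}_d$; the function name and argument layout are ours, for illustration only.

```python
import numpy as np

def normalized_mean_difference(g_of_mean, g_ensemble, sd):
    """Normalized difference C_d^{-1/2} ( g(mean(m)) - mean(g(m_j)) ).

    g_of_mean: simulated data of the ensemble-mean model, shape (p,);
    g_ensemble: simulated data of all ensemble members, shape (N_e, p);
    sd: observation-error standard deviations (C_d assumed diagonal).
    For a linear simulator this difference vanishes identically, so its
    magnitude indicates the degree of nonlinearity felt by the ensemble.
    """
    return (g_of_mean - g_ensemble.mean(axis=0)) / sd
```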
Compared to Figure \ref{fig:obsDiff_boxPlot}, it is seen that the normalized differences remain relatively small over the iterations, which may be because in this case the phase flows are non-turbulent, and they tend not to amplify the differences among the ensemble models (in contrast to the turbulent nature of the L96 model in the previous subsection). \subsection*{History matching study in the Brugge field case} Here we compare the history matching performance of the aLM-EnRML and RLM-MAC in the Brugge benchmark case study \cite{peters2010results}. The simulation model of the Brugge field case has $60048$ ($139 \times 48 \times 9$) gridblocks, with $44550$ of them being active. In the case study there are 20 producers and 10 water injectors. The dataset we used is from TNO \cite{peters2010results} and contains 20 years' production data\footnote{The second decade's production data are provided by TNO based on IRIS' proposed production optimization strategy for that period, see, for example, \cite{peters2010results}.}. The first decade's data are collected at 127 time instants, while the second decade's at 185 time instants. Following \cite{chen2013-levenberg}, we use the production data at 20 out of 127 time instants in the first decade for history matching, and the rest of the 20 years' production data for out-of-sample cross validation, including, for example, predicting the oil production rates during the second decade based on the history-matched models. In the course of history matching, the producers and injectors are controlled by the liquid rate (LRAT) target. The production data consist of the historical oil production rates (WOPRH) and water cuts (WWCTH) at 20 producers, and the historical bottom hole pressure (WBHPH) at all 30 wells. Therefore the total number of production data is $1400$. The standard deviations for WOPRH, WWCTH and WBHPH are 100 stb/day, 0.05 and 50 psi, respectively. 
The model variables to be updated include porosity (PORO), permeability (PERMX, PERMY, PERMZ) and net-to-gross (NTG) ratio at all active gridblocks. Consequently, the total number of parameters is $222,750$. In the aLM-EnRML we use the 104 model realizations from the initial ensemble provided by TNO \cite{peters2010results}, while in the RLM-MAC we only use 103 of them, after dropping an ensemble member at random (so that the aLM-EnRML and RLM-MAC have the same number of forward runs at each iteration). In both iES, the parameter rule is $\gamma^i = \alpha^i \, \text{trace}(\mathbf{S}_d^i (\mathbf{S}_d^i)^T) / N_e$, with the maximum number of iterations being 20, and the coefficient $\beta_u = 0.5$. In all experiment runs we use ECLIPSE$^\copyright$ \cite{eclipse2010} for reservoir simulation. Figure \ref{fig:brugge_permx_layers12} indicates the distributions of PERMX on layers 1 and 2. The top images are from a realization of the reservoir models in the initial ensemble, the middle ones from the corresponding reservoir model updated by the aLM-EnRML, and the bottom ones from the corresponding model obtained by the RLM-MAC. Although the final data mismatch values of the aLM-EnRML and RLM-MAC are close to each other (see Figure \ref{fig:brugge_data_mismatch_hm}), there do not seem to be substantial similarities between the resulting PERMX images in Figure \ref{fig:brugge_permx_layers12}. This may be due to the difference between the two iES in the way they approximate the gradient of the objective function, and hence in the subsequent search directions, as we have discussed previously. Figure \ref{fig:brugge_data_mismatch_hm} shows the box plots of data mismatch of the aLM-EnRML (left) and RLM-MAC (right) as a function of the iteration step. The aLM-EnRML stops at the 15th iteration step due to the stopping condition on the relative change of data mismatch (less than 0.01\%), and the RLM-MAC stops at the maximum iteration step. 
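The adaptive rule $\gamma^i = \alpha^i \, \text{trace}(\mathbf{S}_d^i (\mathbf{S}_d^i)^T) / N_e$ used here can be evaluated without forming the $p \times p$ product, since the trace equals the squared Frobenius norm of $\mathbf{S}_d^i$. A minimal sketch, with a hypothetical function name:

```python
import numpy as np

def gamma_from_sd(alpha, Sd):
    """Adaptive rule gamma^i = alpha^i * trace(S_d^i (S_d^i)^T) / N_e.

    Sd: data square-root matrix of shape (p, N_e); alpha: scalar
    coefficient at the current iteration. trace(S_d S_d^T) is computed
    as the squared Frobenius norm, avoiding the p x p matrix product.
    """
    n_e = Sd.shape[1]
    return alpha * np.sum(Sd**2) / n_e
```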
The aLM-EnRML and RLM-MAC have close performance again, in terms of the convergence speed and the final data mismatch value. In this case, the final data mismatch of the aLM-EnRML is $869.90 \pm 23.38$, while that of the RLM-MAC is $928.96 \pm 25.76$, about 7\% higher in the average data mismatch value. Figure \ref{fig:brugge_wopr_hm} depicts the history matching profiles of oil production rates (WOPR) at producers BR-P-9 (top row), BR-P-13 (middle row) and BR-P-19 (bottom row) in the first 10 years, with respect to the initial ensemble of reservoir models (1st column), the history matched reservoir models of the aLM-EnRML using production data at 20 time instants (2nd column), and the counterpart history matched reservoir models of the RLM-MAC (3rd column), respectively. In the pictures, the red dots represent the historical WOPRH data at the producers, and the blue curves are the forecasts with respect to the initial ensemble (1st column), and the history matching profiles of both smoothers (2nd and 3rd columns). Consistent with the results in Figure \ref{fig:brugge_data_mismatch_hm}, the aLM-EnRML and RLM-MAC exhibit close performance in matching the WOPRH data. Similar results are also obtained for history matching the WWCTH data at BR-P-9, BR-P-13 and BR-P-19, as can be seen in Figure \ref{fig:brugge_WWCT_hm}. Figure \ref{fig:Brugge_obsDiff_boxPlot} also shows the normalized differences $\mathbf{C}_d^{-1/2} \left( \mathbf{g} \left(\bar{\mathbf{m}}^{i} \right) - \overline{\mathbf{g}\left(\mathbf{m}_j^{i}\right)} \right)$ at different iteration steps, calculated based on the ensemble models of the RLM-MAC, in the Brugge field case. Similar to Figure \ref{fig:facies_obsDiff_boxPlot}, the normalized differences remain relatively small over the iterations, in contrast to the corresponding results in the L96 model (see Figure \ref{fig:obsDiff_boxPlot}). 
This may also be because in this case the reservoir fluid dynamics is non-turbulent, and tends to be less nonlinear than that in the L96 model. Next we present some cross validation results. Figure \ref{fig:brugge_data_mismatch_validation} shows the box plots of the data mismatch values in 20 years, with respect to the reservoir models from the initial ensemble (left), the final ensemble of the aLM-EnRML using production data at 20 time instants in the first 10 years (middle), and the corresponding final ensemble of the RLM-MAC (right). In this case, the data mismatch values with respect to the initial ensemble and the final ensembles of the aLM-EnRML and RLM-MAC are $3.48 \times 10^5 \pm 2.46 \times 10^5$, $4707.93 \pm 357.83$ and $4540.31 \pm 285.34$, respectively. Therefore, the reservoir models history matched by both iES lead to substantially lower data mismatch than those of the initial ensemble. The data mismatch values of the aLM-EnRML and RLM-MAC are close to each other again, with that of the RLM-MAC being slightly lower. Figure \ref{fig:brugge_wopr_validation} depicts the WOPR profiles at producers BR-P-9 (top row), BR-P-13 (middle row) and BR-P-19 (bottom row) in 20 years, with respect to the reservoir models from the initial ensemble (1st column), the final ensemble of the aLM-EnRML (2nd column), and the corresponding final ensemble of the RLM-MAC (3rd column), respectively. In the pictures, the red dots represent the historical WOPRH data at the producers, and the blue curves are the forecasts with respect to the initial ensemble and the final ensembles obtained by both smoothers. The pictures in the 2nd and 3rd columns also contain some vertical dashed lines, which are used to separate the production periods between the first and second decades. In this case, the aLM-EnRML and RLM-MAC again exhibit close performance in cross-validating the WOPRH data at BR-P-9. 
However, in the second decade, the aLM-EnRML predicts the WOPRH data at BR-P-19 better than the RLM-MAC does, which may be related to the fact that the WWCTH data at BR-P-19 are better predicted by the aLM-EnRML (see Figure \ref{fig:brugge_WWCT_validation}). The situation is reversed for the prediction of the WOPRH data at BR-P-13 in the second decade, and the RLM-MAC outperforms the aLM-EnRML instead. This may be because the WWCTH data at BR-P-13 are better predicted by the RLM-MAC instead, as can be seen in Figure \ref{fig:brugge_WWCT_validation}. \section*{Conclusion} In this work we showed that an iteration formula similar to those used in the ensemble smoother with multiple data assimilation (ES-MDA) and the approximate Levenberg-Marquardt ensemble randomized maximum likelihood (aLM-EnRML) method can be derived from the regularized Levenberg-Marquardt (RLM) algorithm in inverse problems theory, when history matching is recast as a minimum-average-cost (MAC) problem. Specifically, to reduce the computational cost in solving the MAC problem, a simple strategy is to approximate the nonlinear simulator, for all ensemble models, through a first order Taylor expansion around a single common point, e.g., the ensemble mean at the previous iteration step as considered in the current study. In addition, to avoid directly evaluating the resulting (common) Jacobian matrix, one may follow the convention in the ensemble Kalman filter (EnKF) to approximate the product between the Jacobian matrix and the model square root matrix by the corresponding data square root matrix. In this way, the resulting iteration formula is similar to those in the ES-MDA and aLM-EnRML, but differs in the way of constructing the data square root matrix: In the ES-MDA and aLM-EnRML, the simulated data center around their sample mean, while in the RLM-MAC, they center around the simulated data of the mean of the ensemble models. 
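The distinction between the two centering conventions can be made concrete. The following sketch constructs both data square-root matrices side by side; the scaling by $1/\sqrt{N_e - 1}$ follows the usual EnKF convention, and the function name is ours.

```python
import numpy as np

def sqrt_data_matrices(g_ensemble, g_of_mean):
    """Data square-root matrices under the two centering conventions.

    In the ES-MDA/aLM-EnRML convention the simulated data are centered
    on their sample mean; in the RLM-MAC they are centered on the
    simulated data of the ensemble-mean model. g_ensemble has shape
    (N_e, p), g_of_mean shape (p,); the columns of the returned (p, N_e)
    matrices are the centered members, scaled by 1/sqrt(N_e - 1).
    """
    n_e = g_ensemble.shape[0]
    scale = 1.0 / np.sqrt(n_e - 1)
    Sd_enrml = scale * (g_ensemble - g_ensemble.mean(axis=0)).T
    Sd_rlm = scale * (g_ensemble - g_of_mean).T
    return Sd_enrml, Sd_rlm
```

For a linear simulator the two matrices coincide; any discrepancy between them is another expression of the nonlinearity of the forward model.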
In general, it is expected that one may solve the MAC problem in a more accurate way -- but possibly with a higher computational cost -- by evaluating the gradients at multiple common points. To examine the effect of different ways in constructing the data square root matrix, we compare the performance of the aLM-EnRML and RLM-MAC in three numerical examples. In the first example we consider a strongly nonlinear system (the Lorenz 96 model). In this particular case the aLM-EnRML and RLM-MAC exhibit more divergent behaviour, and the RLM-MAC tends to perform better than the aLM-EnRML. In the second and third examples we apply the aLM-EnRML and RLM-MAC to a reservoir facies estimation problem and the history matching problem in the Brugge field case, respectively. In both cases it appears that the aLM-EnRML and RLM-MAC exhibit close performance due to weaker nonlinearity in the reservoir fluid dynamics. \newpage \section*{Nomenclature} \begin{tabular}{rcl} $\mathbf{d}$ & = & data \\ $\mathbf{d}^o$ & = & observation data \\ $\mathbf{f}$ & = & function \\ $\mathbf{g}$ & = & reservoir simulator \\ $m$ & = & dimension of reservoir model \\ $\mathbf{m}$ & = & reservoir model \\ $\mathbf{m}_c$ & = & common reservoir model in the minimum-average-cost problem \\ $\bar{\mathbf{m}}$ & = & ensemble-average reservoir model \\ $N_e$ & = & ensemble size \\ $p$ & = & number of observation data\\ $p_k$ & = & probability density function (pdf) at the $k$th iteration step \\ $r$ & = & scalar coefficient larger than 1 \\ $t$ & = & unit-less pseudo-time variable in the Lorenz 96 model \\ $\mathbf{v}$ & = & generic random vector \\ $\hat{\mathbf{v}}$ & = & estimation of $\mathbf{v}$ \\ $\mathbf{v}^{tr}$ & = & true realization value of the random vector $\mathbf{v}$ \\ $x$ & = & state variable in the Lorenz 96 model \\ $\mathbf{y}$ & = & random vector that represents observations \\ $\mathbf{A}$ & = & generic matrix\\ $\mathbf{C}_d$ & = & observation error covariance matrix\\ $\mathbf{C}_m$ & 
= & model error covariance matrix as in the EnKF\\ $\tilde{\mathbf{C}}_m$ & = & modified model error covariance matrix in the RLM-MAC\\ $F$ & = & unit-less driving force in the Lorenz 96 model \\ $\mathcal{F}$ & = & functional \\ $\mathcal{G}$ & = & inverse functional of $\mathcal{F}$ \\ $\mathbf{G}$ & = & Jacobian of reservoir simulator\\ $\mathcal{G}^C$ & = & composite functional \\ $I$ & = & number of iterations in ES-MDA\\ \end{tabular} \begin{tabular}{rcl} $\mathbf{I}_p$ & = & identity matrix of size $p$\\ $\mathbf{M}$ & = & ensemble of reservoir models \\ $\mathcal{O}$ & = & objective function \\ $\bar{\mathcal{O}}$ & = & expectation of $\mathcal{O}$ \\ $\mathbf{S}_d$ & = & data square root matrix as in the EnKF \\ $\tilde{\mathbf{S}}_d$ & = & modified data square root matrix in the RLM-MAC \\ $\mathbf{S}_m$ & = & model square root matrix \\ \end{tabular} \subsection*{Greek symbols} \begin{tabular}{ccl} $\alpha$ & = & adaptive coefficient in the RLM-MAC that determines $\gamma$ \\ $\delta$ & = & Dirac delta function \\ $\gamma$ & = & adaptive coefficient in iterative ensemble smoothers \\ $\sigma^2$ & = & sample variances of the ensemble of reservoir models \\ $\rho$ & = & tapering factor of $\gamma$ \\ $\Delta m$ & = & model change \\ \end{tabular} \subsection*{Subscript} \begin{tabular}{ccl} $a$ & = & analysis \\ $b$ & = & background \\ $d$ & = & data\\ $e$ & = & ensemble \\ $j$ & = & index of ensemble member \\ $k$ & = & iteration index \\ $m$ & = & model \\ $s$ & = & index of sample variances of the ensemble of reservoir models \\ \end{tabular} \subsection*{Superscript} \begin{tabular}{ccl} $b$ & = & background \\ $i$ & = & iteration index \\ $o$ & = & observation\\ $tr$ & = & truth\\ \end{tabular} \newpage \begin{acknowledgements} We would like to thank Drs. Yan Chen (Total), Randi Valestrand (IRIS) and three anonymous reviewers for their valuable help, comments and suggestions, which have significantly improved the work. 
We also acknowledge the IRIS/CIPR cooperative research project ``Integrated Workflow and Realistic Geology'' which is funded by industry partners ConocoPhillips, Eni, Petrobras, Statoil, and Total, as well as the Research Council of Norway (PETROMAKS) for financial support, and thank Schlumberger for ECLIPSE academic licenses and the developers of SGeMS and the MATLAB level set toolbox (Ian M. Mitchell) for providing their software. \end{acknowledgements}
\section{Introduction} \label{sec:introduction} There has been considerable interest in diffusive-growth processes, including growth phenomena for a droplet on a substrate. Such growth phenomena, in which diffusion and coalescence play the major roles, are common in many areas of science and technology \cite{sc1,sc2}. In this process, each droplet diffuses and grows individually and coalesces with contacting droplets. The kinetics of these phenomena have been studied experimentally and theoretically [3-29]. Some models have been developed to explain the kinetics of these processes. One such model \cite{static88,static91,krapquasi} consists of a single, motionless three-dimensional droplet formed by diffusion and adsorption of non-coalescing monomers on a 2D substrate. In the model it is assumed that the diffusing monomers coalesce only with the large immobile growing droplet and not with each other. In \cite{static88,static91} a {\it static} approximation was used to solve the diffusion equation and an approximate description of the long time behaviour was obtained. The static approach predicted an asymptotic power law growth rate for the radius of the droplet. Because of the growth of the droplet, the present problem involves a moving boundary. Moving boundary problems in the context of the diffusion equation are referred to as Stefan problems [30-32]. The only exact solutions for these problems have been found using a {\it similarity variable} method, see for instance, [30-34] and references therein. Using this method for a droplet of dimensionality $d$ growing on a substrate of the same dimensionality, an exact scaling solution can be found. In \cite{harriso} such a solution in one dimension has been derived, which can be generalised to higher dimensions. However, the problem of a 3D droplet growing on a 2D substrate may be treated by approximate methods. A simple treatment based on a {\it quasistatic} approximation has been presented in \cite{krapquasi}. 
A {\it similarity variable} approach was used to solve the Stefan problem with moving boundary \cite{krapquasi}. The results predicted that the radius of the droplet increases as $[t/\ln(t)]^{1/3}$ asymptotically. The asymptotic growth law predicted by the static approach differs from the quasistatic answer by a slowly varying logarithmic factor. In all the models in \cite{static88,static91,krapquasi} an adsorption boundary condition at the aggregate perimeter of the droplet was considered. In \cite{mozy} a generalisation of the Smoluchowski model \cite{smol,smolhelp} for diffusional growth of colloids was presented. Smoluchowski \cite{smol} considered the process of diffusional capture of particles, assuming the growing aggregate can be modeled as a sphere. He then solved the diffusion equation with an absorbing boundary condition at the aggregate surface of the sphere. In \cite{mozy} two other approaches were considered, a phenomenological model for the boundary condition and a radiation boundary condition. Both approaches allowed for incorporation of particle detachment in the Smoluchowski model. Explicit expressions for the concentration and intake rate of particles were given in the long time limit \cite{mozy}. In this paper we consider a single, motionless three-dimensional droplet growing by adsorption of diffusing monomers on a two-dimensional substrate. The diffusing non-coalescing monomers are adsorbed at the aggregate perimeter of the droplet with different boundary conditions. Models with different boundary conditions for the concentration of monomers are considered and solved in a quasistatic approximation. For each model, the diffusion equation is solved exactly, subject to a {\it fixed} boundary. Using the mass conservation law at the aggregate perimeter of the growing droplet, we then obtain an expression for the growth rate of the {\it moving} boundary. 
Explicit asymptotic solutions in both the short and long time limits are given for the concentration, the total flux of monomers at the perimeter of the growing droplet and the growth rate of the droplet radius. This paper is organised as follows. In section 2, a model with an adsorption boundary condition is examined. In sections 3 and 4 we consider the two approaches which were introduced in \cite{mozy} to allow for particle detachment. A phenomenological model and a model with a radiation boundary condition are considered in sections 3 and 4, respectively. Another boundary condition, which assumes a constant flux of monomers at the aggregate perimeter of the droplet, is introduced in section 5. Finally, in section 6 we compare the results of the different approaches and summarise our conclusions. \section{Growth Equations with Adsorption Boundary Condition} \label{sec:Adsorption} Consider an immobile three-dimensional droplet which is initially surrounded by monodisperse droplets. The droplet lies on a two-dimensional plane substrate on which the monomers diffuse. Monomers have volume $V$ and diffuse with diffusion constant $D$. Then, the concentration of monomers at point $r$ and at time $t$, $c(r,t)$, is described by the diffusion equation \begin{equation} \label{difequ} \frac{\partial c(r,t)}{\partial t} = D\,\frac{1}{r}\,\frac{\partial}{\partial r}\left(r\,\frac{\partial c(r,t)}{\partial r}\right) \end{equation} for $r\geq R$, where $R(t)$ is the radius of the immobile growing droplet. The initial conditions are given by \begin{equation} \label{cinitial} c(r,t=0) = c_0, \end{equation} which is the initial, uniform, monomer concentration and \begin{equation} \label{Rinitial} R(t=0) = 0, \end{equation} which shows that the droplet is not present at the beginning of the process. 
We consider an adsorption boundary condition at the perimeter of the droplet \begin{equation} \label{adbc} c(r=R,t>0) = 0 \end{equation} and assume that at infinity the concentration of the monomers is finite and equal to $c_0$. Concentration gradients in the neighborhood of the droplet create a flux of monomers on the two-dimensional substrate. This flux feeds the growth of the droplet. Therefore, the rate of increase of the droplet volume is related to the total flux of monomers at the perimeter of the droplet by mass conservation, \begin{equation} \label{growthdef} \Phi (t) = \lambda \,R^2\,\frac{dR}{dt}, \end{equation} where the total flux \begin{equation}\label{intakedef} \Phi (t) = V\left[2\pi RD\left. \frac{\partial c}{\partial r} \right\vert_R\right] \end{equation} corresponds to the monomers incorporated at the perimeter of the droplet. In (\ref{growthdef}) $\lambda $ is a dimensionless factor related to the contact angle of the droplet. In order to solve (\ref{difequ}) with (2-4), we introduce the Laplace transform of the concentration, \begin{equation} \bar c(r,s) = \int_0^\infty dt\,e^{-st}c(r,t), \end{equation} which satisfies the equation \begin{equation} D\,\frac{1}{r}\,\frac{\partial}{\partial r} \left( r\,\frac{\partial \bar c}{\partial r}\right) = s\,\bar c - c_0. \end{equation} Here we have already used the initial condition (\ref{cinitial}). The general solution of this equation is given by \begin{equation} \bar c(r,s) = \frac{c_0}{s} + A(s)K_0(qr) + B(s)I_0(qr), \end{equation} where $q = \sqrt {s/D}$, and $K_0$ and $I_0$ are Modified Bessel functions of order zero. To have a finite solution as $r\to \infty $, we set $B(s) = 0$. The boundary condition (\ref{adbc}) in the Laplace transform version becomes \begin{equation} \label{cbar at r=R} \bar c(r=R,s)=0. 
\end{equation} Using (\ref{cbar at r=R}), the transformed concentration and its gradient normal to the droplet perimeter become \begin{equation} \label{cbarad} \bar c(r,s) = \frac{c_0}{s} \left[ 1 - \frac{K_0(qr)}{K_0(qR)} \right], \end{equation} \begin{equation} \label{dcbarad} \frac{\partial \bar c(r,s)}{\partial r} = \frac{c_0}{(Ds)^{1/2}} \,\frac{K_1(qr)}{K_0(qR)}. \end{equation} To find the time-dependent concentration and its radial gradient, we apply the inversion theorem to (\ref{cbarad},\ref{dcbarad}). Both (\ref{cbarad},\ref{dcbarad}) have a branch point at $s=0$, so in the inversion formula we use a contour which contains no zeros of $s$ or $K_0(qR)$. Consequently, the time-dependent concentration and the total flux at the droplet perimeter from (\ref{intakedef}) are given by \begin{equation} \label{cad} c(r,t) = \frac{2c_0}{\pi}\int_0^\infty e^{-Du^2t} \left[ \frac{J_0(Ru)N_0(ru) - J_0(ru)N_0(Ru)}{J_0^2(Ru) + N_0^2(Ru)}\right] \frac{du}{u}, \end{equation} \begin{equation} \label{intakead} \Phi (t) = \frac{8\,c_0DV}{\pi}\int_0^\infty e^{-Du^2t}\frac{1}{\left[ J_0^2(Ru) + N_0^2(Ru) \right]}\frac{du}{u}, \end{equation} where $J_0$ and $N_0$ are Bessel functions of order zero. Using (\ref{growthdef},\ref{intakead}), a differential equation for the growth rate of the droplet radius can be obtained \begin{equation} \lambda \,R^2\,\frac{dR}{dt} = \frac{8c_0DV}{\pi} \int_0^\infty e^{-Du^2t}\frac{1}{\left[ J_0^2(Ru) + N_0^2(Ru) \right]}\frac{du}{u}, \end{equation} which gives a general solution for $R$ as a function of time. We are interested in the short and long time solutions for the concentration, the total flux of monomers at the perimeter of the droplet and the growth rate of the droplet radius. 
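Before turning to the asymptotics, the exact flux integral (\ref{intakead}) can be evaluated numerically at fixed $R$. The sketch below is illustrative: the choice $c_0=D=V=1$ and the quadrature settings are ours, not the paper's, and the integrand is integrable at $u\to 0$ because $J_0^2+N_0^2$ diverges logarithmically there.

```python
import numpy as np
from scipy.special import j0, y0
from scipy.integrate import quad

def total_flux(t, R, D=1.0, c0=1.0, V=1.0):
    """Numerical evaluation of the total flux integral at fixed R.

    Integrand: exp(-D u^2 t) / (J0(Ru)^2 + N0(Ru)^2) / u, with the
    prefactor 8 c0 D V / pi. Parameter defaults are illustrative.
    """
    f = lambda u: np.exp(-D * u**2 * t) / ((j0(R * u)**2 + y0(R * u)**2) * u)
    val, _ = quad(f, 0.0, np.inf, limit=200)
    return 8.0 * c0 * D * V / np.pi * val
```

At short times this should approach the power-law flux $2 c_0 V R \sqrt{\pi D}\, t^{-1/2}$, and at long times the slow logarithmic decay.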
For small values of the time, it can be shown that the behaviours of $c(r,t)$ and $ \partial c(r,t)/\partial r$ may be determined from the behaviours of $\bar c(r,s)$ and $\partial \bar c(r,s)/\partial r$, respectively, for large values of the transform parameter $s$. Then, we expand the Bessel functions occurring in (\ref{cbarad},\ref{dcbarad}) supposing $s$ to be large. The final result for the concentration of monomers, keeping the leading time-dependent term, is \begin{equation}\label{cstad} c(r,t) \simeq c_0\left[ 1 - \left(\frac{R}{r} \right)^{1/2} Erfc\left(\frac{r-R}{2\sqrt{Dt}}\right)\right]. \end{equation} The total flux at the droplet perimeter and the growth rate of the droplet radius also in this limit using (\ref{intakedef}) and (\ref{growthdef}), respectively, are given by \begin{equation}\label{fstad} \Phi (t) \simeq 2\,c_0VR\sqrt {\pi D}\,t^{-1/2}, \end{equation} \begin{equation}\label{Rstad} R(t)\simeq \left(\frac{8\,c_0V\sqrt {\pi D}}{\lambda}\right)^{1/2}t^{1/4}. \end{equation} We see that in the short time limit, R grows as a power of the time with an exponent of $1/4$. For large values of the time, the behaviours of $c(r,t)$ and $\partial c(r,t)/\partial r$ may be determined from the behaviours of $\bar c(r,s)$ and $\partial \bar c(r,s)/\partial r$, respectively, for small values of the transform parameter $s$. We then expand the Bessel functions occurring in (\ref{cbarad},\ref{dcbarad}) supposing $s$ to be small. Keeping the leading time-dependent term, the concentration of monomers yields \begin{equation}\label{cltad} c(r,t) \simeq 2\,c_0 \,\displaystyle\frac{\ln\left(\displaystyle\frac{r}{R}\right)}{\ln\left(\displaystyle\frac{4Dt}{\sigma ^2R^2}\right)}, \end{equation} where $\sigma = e^\gamma = 1.78107\ldots$ and $\gamma = 0.57722\ldots$ is Euler's constant. 
The total flux at the droplet perimeter and the growth rate of the droplet radius also in this limit using (\ref{intakedef}) and (\ref{growthdef}), respectively, are given by \begin{equation}\label{fltad} \Phi (t) \simeq 4\pi c_0 DV\left[\ln\left(\frac{4Dt}{\sigma ^2R^2}\right) \right]^{-1}, \end{equation} \begin{equation}\label{Rltad} R(t) \simeq A\left[\frac{\tau}{\ln(\tau)}\right]^{1/3}, \end{equation} where $A = \left( 9\pi V\sigma ^2/\lambda \right)^{1/3}$ and $\tau = 4c_0Dt/\sigma ^2 $ is the dimensionless time. Up to a constant, these are the same results which were obtained by Krapivsky based on a quasistatic approximation using a similarity variable approach \cite{krapquasi}. \section{Phenomenological Rate Equation Model} One can consider various modifications of the initial and boundary conditions (2) and (4). Here we improve the model and incorporate effects other than the irreversible adsorption at $r = R$ expressed by (4), following Ref. \cite{mozy}. In this section we consider a phenomenological modification of the boundary condition (4) to allow for detachment. This was introduced in \cite{mozy}, where the relation \begin{equation} \label{pmbc} \frac{\partial c(r,t)}{\partial t} = -\,m\,c(r,t) + k \end{equation} at $r=R$ was substituted for (4). Here it is assumed that the diffusing monomers that reach the perimeter of the droplet are incorporated in the aggregate structure at the rate $mc$ proportional to their concentration at $R$. The second term in (\ref{pmbc}) corresponds to detachment and is assumed to depend only on the internal processes, so that there is no dependence on the external diffuser concentration \cite{mozy}. To solve (1) with (2,3) and (\ref{pmbc}), we go through steps similar to section 2 and only emphasize the final expressions. In the Laplace transform version, the boundary condition becomes \begin{equation} (s + m) \,\bar c(r,s) = \frac{k}{s} + c_0 \end{equation} at $r=R$. 
The concentration and the radial gradient of the concentration in this version become \begin{equation}\label{cbarpm} \bar c(r,s) = \frac{c_0}{s} - \frac{mc_0-k}{s(s+m)} \,\frac{K_0(qr)}{K_0(qR)}, \end{equation} \begin{equation}\label{dcbarpm} \frac{\partial \bar c(r,s)}{\partial r} = \frac{mc_0-k}{(Ds)^{1/2}} \,\frac{1}{(s+m)} \,\frac{K_1(qr)}{K_0(qR)}. \end{equation} Now we look for the solutions in the short and long time limits. For small values of the time, we use the asymptotic expansions of the Bessel functions in (\ref{cbarpm},\ref{dcbarpm}) for large values of $s$ and ignore $m$ in comparison to $s$ in the term $(s+m)$. Then, the concentration, the total flux at the droplet perimeter and the growth rate of the droplet radius in this limit, keeping only the leading time-dependent terms, are given by \begin{equation}\label{cstpm} c(r,t) \simeq c_0 - 4mt\left( c_0-\frac{k}{m}\right)\left( \frac{R}{r}\right)^{1/2} \, \mathrm{i}^2\mathrm{erfc}\left(\frac{r-R}{2\sqrt {Dt}}\right), \end{equation} where $\mathrm{i}^2\mathrm{erfc}$ denotes the twice-iterated integral of the complementary error function, \begin{equation}\label{fstpm} \Phi (t) \simeq 4mVR\sqrt{\pi D}\left( c_0-\frac{k}{m}\right)t^{1/2}, \end{equation} \begin{equation}\label{Rstpm} R(t)\simeq \left[ \frac{16mV}{3\lambda}\sqrt{\pi D}\left(c_0-\frac{k}{m}\right)\right]^{1/2}t^{3/4}. \end{equation} We see that in a phenomenological model, R grows as a power of the time with an exponent of $3/4$ in the short time limit. In the expressions (\ref{cstpm}-\ref{Rstpm}), in comparison with (\ref{cstad}-\ref{Rstad}) in the previous section, there is a factor $(c_0-k/m)$, which shows a reduction of the rate due to detachment, proportional to the ratio $k/m$. For large values of the time, we use the expansions of the Bessel functions in (\ref{cbarpm},\ref{dcbarpm}) supposing $s$ to be small and ignore $s$ in comparison with $m$ in the term $(s+m)$. 
Then the concentration, the total flux at the droplet perimeter and the growth rate of the droplet radius, keeping only the leading time-dependent terms, are \begin{equation}\label{cltpm} c(r,t)\simeq \frac{k}{m}+2 \left (c_0-\frac{k}{m}\right)\displaystyle\frac{\ln\left(\displaystyle\frac{r}{R}\right)}{\ln\left(\displaystyle\frac{4Dt}{\sigma ^2R^2}\right)}, \end{equation} \begin{equation}\label{fltpm} \Phi (t)\simeq 4\pi DV\left(c_0-\frac{k}{m}\right)\left[\ln\left(\frac{4Dt}{\sigma ^2R^2}\right)\right]^{-1}, \end{equation} \begin{equation}\label{Rltpm} R(t)\simeq A\left[\frac{\tau}{\ln(\tau)}\right]^{1/3}, \end{equation} where $A=(9\pi V\sigma ^2/\lambda)^{1/3}$ and $\tau = 4Dt(c_0-k/m)/\sigma ^2$. These asymptotic expressions are quite similar to the long time forms (\ref{cltad}-\ref{Rltad}) of Section 2. The only change is the reduction of the rate due to detachment, proportional to the ratio $k/m$. For a fast enough detachment, the total fluxes of the monomers at the boundary in both the short and long time limits (\ref{fstpm},\ref{fltpm}) can actually become negative. In this case, the flux does not feed the growth of the droplet and the droplet volume does not increase anymore. Therefore, the mass conservation (\ref{growthdef}) does not hold and the growth laws (\ref{Rstpm},\ref{Rltpm}) are no longer valid. In the case $c_0=k/m$, the system reaches a stationary state, and the total rate and the total flux of the monomers at the droplet perimeter vanish for all times. Consequently, there is no growth of the droplet and the concentration of the monomers remains equal to the initial concentration, $c_0$, for all times. These results can be obtained from both the short and long time expressions (\ref{cstpm}-\ref{Rstpm}) and (\ref{cltpm}-\ref{Rltpm}), respectively.
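The short time law (\ref{Rstpm}) can be cross-checked numerically. Assuming that the mass conservation law (\ref{growthdef}), which is not reproduced in this excerpt, takes the form $\lambda R^2\,dR/dt=\Phi(t)$ (the form consistent with the constant-flux result $R=(3Q/\lambda)^{1/3}t^{1/3}$ of Section 5), integrating it with the flux (\ref{fstpm}) must reproduce both the $t^{3/4}$ exponent and the prefactor in (\ref{Rstpm}). A minimal Python sketch, with arbitrary illustrative values for the physical constants:

```python
import math

# Illustrative (hypothetical) parameter values; only the structure of the law matters.
lam = 2.0                 # lambda in the assumed mass conservation law
B = 3.0                   # shorthand for 4*m*V*sqrt(pi*D)*(c0 - k/m), cf. (fstpm)

def R_exact(t):
    # Closed form (Rstpm): R(t) = [4B/(3*lam)]^(1/2) * t^(3/4)
    return math.sqrt(4.0 * B / (3.0 * lam)) * t ** 0.75

# Integrate lam*R^2*dR/dt = Phi(t) = B*R*t^(1/2), i.e. dR/dt = B*sqrt(t)/(lam*R),
# with classical RK4, starting from the exact solution at a small t0 (R(0)=0 is singular).
f = lambda t, R: B * math.sqrt(t) / (lam * R)
t0, t1, n = 1e-4, 1.0, 100000
dt = (t1 - t0) / n
t, R = t0, R_exact(t0)
for _ in range(n):
    k1 = f(t, R)
    k2 = f(t + dt / 2, R + dt / 2 * k1)
    k3 = f(t + dt / 2, R + dt / 2 * k2)
    k4 = f(t + dt, R + dt * k3)
    R += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt

rel_err = abs(R - R_exact(t1)) / R_exact(t1)
print(rel_err)            # tiny: the numerical solution matches (Rstpm)
assert rel_err < 1e-4
```

Indeed, $\lambda R\,dR=B\,t^{1/2}dt$ gives $R^2=(4B/3\lambda)\,t^{3/2}$, which is exactly (\ref{Rstpm}); the same check with $\Phi\propto R\,t^{-1/2}$ or $\Phi\propto R$ recovers the $t^{1/4}$ and $t^{1/2}$ laws of the other boundary conditions.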
\section{Radiation Boundary Condition} In this section we consider another modification of the boundary condition (4) and replace it with a radiation boundary condition \begin{equation} \alpha \,\frac{\partial c(r,t)}{\partial r}+\beta = c(r,t) \end{equation} at $r=R$, Ref. \cite{mozy}. Here it is assumed that the concentration at the boundary is a linear function of its radial derivative, with an offset given by the constant $\beta$. Again we go through steps similar to those of Section 2 and quote only the final expressions. In the Laplace transform version, the boundary condition becomes \begin{equation} \alpha \,\frac{\partial \bar c(r,s)}{\partial r}+\frac{\beta}{s} = \bar c(r,s) \end{equation} at $r=R$. The concentration and its radial gradient in this version become \begin{equation}\label{cbarra} \bar c(r,s)=\frac{c_0}{s}-\frac{(c_0-\beta)}{s} \,\frac{K_0(qr)}{K_0(qR)+\alpha qK_1(qR)}, \end{equation} \begin{equation}\label{dcbarra} \frac{\partial \bar c(r,s)}{\partial r}=\frac{(c_0-\beta)}{(Ds)^{1/2}} \,\frac{K_1(qr)}{K_0(qR)+\alpha qK_1(qR)}. \end{equation} We concentrate our attention on the solutions in the short and long time limits. For small values of the time, we use the asymptotic expansions of the Bessel functions in (\ref{cbarra},\ref{dcbarra}) to get the leading time-dependent terms for the concentration, the total flux and the droplet growth rate \begin{equation}\label{cstra} c(r,t)\simeq c_0-\frac{2}{\alpha}(c_0-\beta)\left(\frac{DR\,t}{r}\right)^{1/2}{\rm ierfc}\left(\frac{r-R}{2\sqrt {Dt}}\right), \end{equation} \begin{equation}\label{fstra} \Phi (t)\simeq \frac{2\pi RDV}{\alpha}(c_0-\beta), \end{equation} \begin{equation}\label{Rstra} R(t)\simeq \left[\frac{4\pi DV}{\alpha \lambda}(c_0-\beta)\right]^{1/2}t^{1/2}. \end{equation} Here ${\rm ierfc}(z)=\int_z^\infty {\rm Erfc}(u)\,du$ is the first repeated integral of the complementary error function. The factor $(c_0-\beta )$ in these expressions shows a reduction of the rate due to detachment, proportional to $\beta $.
We see that in the short time limit, the total flux at the droplet perimeter is time-independent and $R$ grows as a power law with an exponent equal to $1/2$. For large values of the time, we expand the Bessel functions in (\ref{cbarra},\ref{dcbarra}) supposing $s$ to be small. Consequently, the asymptotic expressions for the concentration, the total flux and the droplet growth rate are \begin{equation}\label{cltra} c(r,t)\simeq \beta + 2(c_0-\beta)\displaystyle\frac{\ln\left(\displaystyle\frac{r}{R}\right)}{\ln\left(\displaystyle\frac{4Dt}{\sigma ^2R^2}\right)}, \end{equation} \begin{equation}\label{fltra} \Phi (t)\simeq 4\pi DV(c_0-\beta)\left[\ln\left(\frac{4Dt}{\sigma ^2R^2}\right)\right]^{-1}, \end{equation} \begin{equation}\label{Rltra} R(t)\simeq A\left[\frac{\tau}{\ln(\tau)}\right]^{1/3}, \end{equation} where $A=(9\pi V\sigma ^2/\lambda)^{1/3}$ and $\tau = 4Dt(c_0-\beta )/\sigma ^2$. These long time expressions have the same forms as (\ref{cltpm}-\ref{Rltpm}) provided we identify \begin{equation} \beta = \frac{k}{m}. \end{equation} For a fast enough detachment, analogously to Section 3, the total fluxes of the monomers at the boundary (\ref{fstra},\ref{fltra}) can become negative. In this case, the growth laws (\ref{Rstra},\ref{Rltra}) do not hold anymore. In the case $c_0=\beta $, again analogously to Section 3, the system reaches a stationary state and therefore the total flux and the droplet growth rate become zero. The concentration also does not change with time and remains equal to the initial one. These can be seen from both the short and long time results (\ref{cstra}-\ref{Rstra}) and (\ref{cltra}-\ref{Rltra}), respectively. \section{Constant Flux Boundary Condition} In this section we impose a condition on the flux, assuming that the total flux of monomers at the droplet perimeter is constant.
Therefore, we replace (4) with \begin{equation}\label{cfbc} \Phi (t)=Q \end{equation} at $r=R$, where $\Phi (t)$ is given by (\ref{intakedef}) and $Q$ is a constant. Analogously to the previous sections, in the Laplace transform version the boundary condition becomes \begin{equation} 2\pi RDV\,\frac{\partial \bar c(r,s)}{\partial r}=\frac{Q}{s} \end{equation} at $r=R$. The concentration and its radial gradient in this version are \begin{equation}\label{cbarcf} \bar c(r,s)=\frac{c_0}{s}-\frac{Q}{2\pi RVD^{1/2}} \,\frac{K_0(qr)}{s^{3/2}K_1(qR)} \end{equation} and \begin{equation}\label{dcbarcf} \frac{\partial \bar c(r,s)}{\partial r}=\frac{Q}{2\pi RDV} \,\frac{K_1(qr)}{sK_1(qR)}. \end{equation} Appropriate expansions of the Bessel functions in (\ref{cbarcf}) give us the limiting forms of the concentration. For small values of the time it yields \begin{equation} c(r,t)\simeq c_0-\frac{Q}{\pi V}\left(\frac{t}{DR\,r}\right)^{1/2}{\rm ierfc}\left(\frac{r-R}{2\sqrt {Dt}}\right), \end{equation} where ${\rm ierfc}$ denotes the first repeated integral of the complementary error function, and for large values of the time it gives \begin{equation} c(r,t)\simeq c_0-\frac{Q}{4\pi DV}\,\ln\left(\frac{4Dt}{\sigma^2 r^2}\right). \end{equation} Using (\ref{growthdef},\ref{cfbc}), the droplet radius follows directly as \begin{equation} R(t)=\left(\frac{3\,Q}{\lambda}\right)^{1/3}t^{1/3}, \end{equation} for all times. \section{Conclusions} We studied the growth of a single, motionless, three-dimensional droplet that accommodates monomers at its perimeter on a 2D substrate. The noncoalescing monomers diffuse and are adsorbed at the aggregate perimeter of the droplet, subject to different boundary conditions. Models with adsorption and radiation boundary conditions, and a phenomenological model for the boundary condition, were considered and solved in a quasistatic approximation. In the model with the adsorption boundary condition, the droplet acts as a perfect absorber and the concentration of the monomers at its perimeter is zero.
In the phenomenological model, we assumed that the diffusing monomers that reach the perimeter of the droplet are incorporated in the aggregate structure at a rate proportional to their concentration at the boundary. We also added another term which corresponds to detachment. In the model with the radiation boundary condition, we assumed that the concentration at the boundary is a linear function of its radial derivative, with an extra detachment term. For each model, we solved exactly the diffusion equation for the concentration of the monomers, subject to a {\it fixed} boundary. Then, using a mass conservation law at the perimeter of the droplet, we found an expression for the growth rate of the {\it moving} boundary. All models were subject to an initially uniform concentration of monomers. Asymptotic results for the concentration, the total flux of monomers at the boundary and the growth rate of the droplet radius were obtained in both the short and long time limits. The results revealed that in both the phenomenological and radiation models, in comparison with the adsorption model, there is a reduction of the rate due to detachment. The rate can become negative if the detachment is fast enough. In this case, the total flux of the monomers at the perimeter of the droplet becomes negative. Therefore, the flux does not feed the growth of the droplet volume, and the droplet growth laws obtained in the models are no longer valid. For a detachment rate at which the total rate, and therefore the total flux, vanishes, the system reaches a stationary state and the droplet no longer grows. The same reduction of the rate was obtained in \cite{mozy}, where the incorporation of particle detachment in the Smoluchowski model of colloidal growth was considered. The results in the short time limit predicted that the radius of the droplet grows as a power of the time with different exponents for different boundary conditions.
The exponents of the power laws were $1/4$, $1/2$ and $3/4$, respectively, for the models with adsorption, radiation and phenomenological boundary conditions. We see that the growth is slowest for the adsorption boundary condition and fastest for the phenomenological model. This is because, in the phenomenological model, the monomers at the perimeter are incorporated in the aggregate structure at a finite rate proportional to their concentration there, so the boundary concentration is not depleted instantaneously as it is for a perfect absorber. The total flux of the monomers at the droplet perimeter also follows a power law, with exponents $-1/2$ and $1/2$ for the adsorption and phenomenological models, respectively, and is constant for the radiation model. Again, the flux is largest for the phenomenological model and smallest for the adsorption model. In the long time limit, the growth law for the radius of the droplet was the same for all boundary conditions. Also the concentration and the total flux had the same time dependence in all models. The only change, as noted above, was the reduction of the rate due to detachment in both the phenomenological and radiation models in comparison with the adsorption model. Asymptotic results for large values of the time showed that the radius of the droplet increases as $[t/\ln(t)]^{1/3}$ in all models. This was obtained by Krapivsky \cite{krapquasi}, where a similarity variable approach, based on a quasistatic approximation, was used to treat the growth of a droplet with an adsorption boundary condition. We saw that the time dependence of the results was the same for all the models in the long time limit and differed between models in the short time limit. This suggests that initially the flux of the monomers at the boundary, and therefore the droplet growth rate, are affected by the condition at the boundary. But in the long time limit, the system reaches a stable state and the initial effects can be ignored; therefore all the models give the same results.
This suggests that $[t/\ln(t)]^{1/3}$ is a universal asymptotic growth law for the radius of the droplet, independent of the boundary conditions. In both the phenomenological and radiation models, similarly to the results in \cite{mozy}, the value of the concentration at $r=R$ for large times, see (\ref{cltpm}) and (\ref{cltra}), is exactly equal to $\beta =k/m$, independent of $R$. This suggests that as far as the large $R$ and large time behaviours are concerned, we can use the boundary condition \begin{equation} c=\beta =\frac{k}{m} \end{equation} at $r=R$, instead of the phenomenological and radiation boundary conditions. Indeed, the value of the concentration at $r=R$ is the only parameter needed to calculate the modifications of the asymptotic behaviours due to the detachment. With this boundary condition imposed for all times, the asymptotic results (\ref{cltpm}-\ref{Rltpm}) and (\ref{cltra}-\ref{Rltra}) become exact. Therefore, a constant concentration of the monomers at $r=R$ for all times gives an exact growth law of the form $[t/\ln(t)]^{1/3}$. We also examined another model with a constant flux of monomers at $r=R$. The results showed that the radius of the droplet grows as $t^{1/3}$ for all times. Thus, the growth laws predicted by a constant concentration and by a constant flux at the boundary differ from each other by a slowly varying logarithmic factor. \newpage
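The closing remark can be made quantitative: up to model-dependent prefactors, the two growth laws differ by the factor $[t/\ln t]^{1/3}/t^{1/3}=(\ln t)^{-1/3}$, which varies extremely slowly. A short Python sketch:

```python
import math

# Ratio of the two growth laws, [t/ln t]^(1/3) / t^(1/3) = (ln t)^(-1/3),
# up to model-dependent prefactors, over nine decades in (dimensionless) time.
ks = list(range(3, 13))
ratios = [math.log(10.0 ** k) ** (-1.0 / 3.0) for k in ks]
for k, r in zip(ks, ratios):
    print(f"t = 1e{k:<2d}   ratio ~ {r:.4f}")

# Across nine decades the ratio changes only by (ln 1e12 / ln 1e3)^(1/3) = 4^(1/3):
change = ratios[0] / ratios[-1]
assert abs(change - 4.0 ** (1.0 / 3.0)) < 1e-9
assert change < 1.6
```

So over nine decades in time the two predictions drift apart by less than a factor of $1.6$.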
\section{Introduction} The basic goal of density-functional theory is to express the energy of a quantum-mechanical state in terms only of its one-particle density $\rho(\mathbf{r})$ and then to minimize the resulting functional (the `density functional') with respect to $\rho(\mathbf{r})$ (under the subsidiary condition that $\int_{\mathbb{R}^3} \rho(\mathbf{r}) d\mathbf{r} =N=$ number of electrons) in order to calculate the ground-state energy of the system, which could be an atom or a molecule or a solid. Although the first -- and by far most used and important density functional in theory, computation, and mathematical investigation of multi-electron systems -- is the Thomas-Fermi functional (Lenz \cite{Lenz1932}), strong interest in the subject was triggered by Hohenberg and Kohn \cite{HohenbergKohn1964}. We refer the reader interested in the recent developments to the books by Eschrig \cite{Eschrig2003} and Gross and Dreizler \cite{GrossDreizler1995} and the review \cite{Lieb1983D}. While this program is possible in principle, experience has shown that it is far from easy to guess the appropriate functional -- especially if one wants the functional to be universal and not simply `tuned' to the particular kind of atom or molecule under investigation. There are also pitfalls connected with the admissible class of functions to use in the variational principle \cite{Levy1982,Lieb1983D}. Whereas the external potential energy can easily be expressed in terms of the one-particle density, it is not known how to express the kinetic energy and the interaction energy in terms of $\rho(\mathbf{r})$. Going from density- to density-matrix-functional theory eliminates the first problem altogether, since all expectations of one-particle operators can be expressed in terms of the one-particle density matrix. The density matrix analogue of the Hohenberg-Kohn density-functional program was established by Gilbert \cite{Gilbert1975}. See also \cite{Levy1979}.
The most difficult component of the density-functional to estimate is the exchange-correlation energy (which we shall henceforth simply call exchange energy), and it is that energy that will concern us here. Owing to this and other difficulties, it has been the tendency recently to replace the energy as a functional of $\rho(\mathbf{r})$ by a functional of the one-body density {\it matrix}, $\gamma(\mathbf{x}, \mathbf{x}')$. In this way it is hoped to have more flexibility and achieve, hopefully, more accurate answers. Fermions have spin and it is convenient to write a particle's coordinates as $\mathbf{x}=(\mathbf{r},\sigma)$ for a pair consisting of a vector $\mathbf{r}$ in space and an integer $\sigma$ taking values from $1$ to $q$. Here $q$ is the number of spin states for the particles which -- in the physical case of electrons -- is equal to 2. (In nuclear physics one sometimes considers $q=4$.) We shall, however, call the particles electrons. Similarly we write for any function $f$ depending on space and spin variables \begin{equation} \label{int} \int f(\mathbf{x}) \, d\mathbf{x}= \sum_{\sigma=1}^q \int_{\mathbb{R}^3}\, f(\mathbf{r},\sigma)\, d\mathbf{r}, \end{equation} i.e., $\int d\mathbf{x}$ indicates integration over the whole space and summation over all spin indices. This allows us to write the density matrix $\gamma$ as an operator on the Hilbert space of spinors $\psi$ for which $\int |\psi(\mathbf{x})|^2\, d\mathbf{x}<\infty$. Its integral kernel is $\gamma(\mathbf{x},\mathbf{x}')$. The Schr\"odinger Hamiltonian we wish to consider is \begin{equation} \label{ham} H=\sum_{i=1}^N \left( - \frac{\hbar^2}{2m}\nabla_i^2 -e^2 V_c(\mathbf{r}_i)\right) + e^2R \end{equation} where \begin{equation} V_c(\mathbf{r}) = \sum_{j=1}^K \frac{Z_j}{|\mathbf{r}-\mathbf{R}_j|} \end{equation} is the Coulomb potential of $K\geq 1$ fixed nuclei acting on the $N$ electrons. 
The $j^{\rm th}$ nucleus has charge $+Z_je>0 $ and is located at some fixed point ${\mathbf R}_j \in \mathbb{R}^3$. We define the total nuclear charge by $Z\equiv \sum_{j=1}^K Z_j$. The electron-electron repulsion $R$ is given by \begin{equation} R= \sum_{1\leq i < j \leq N} |\mathbf{r}_i-\mathbf{r}_j|^{-1} \,. \end{equation} If one is interested in minimizing over the nuclear positions ${\mathbf R}_j$, one also has to take into account the nucleus-nucleus repulsion $e^2 U$, of course, which is given by \begin{equation} U=\sum_{1\leq i < j \leq K}Z_iZ_j |\mathbf{R}_i -\mathbf{R}_j|^{-1}. \end{equation} Since we will not be concerned with this question but rather consider the nuclei to be fixed, we will not take this term into consideration here. \subsection{Hartree-Fock Exchange Energy} The best known density-matrix-functional associated with \eqref{ham} is the {\it Hartree-Fock} functional \begin{equation} \label{hfeqn} \mathcal{E}^{\rm HF}(\gamma) = \frac{\hbar^2}{2m} \tr (-\nabla^2 \gamma) -e^2 \int_{\mathbb{R}^3} V_c(\mathbf{r}) \rho_\gamma(\mathbf{r}) d\mathbf{r} + e^2 D(\rho_\gamma,\rho_\gamma) -e^2 X(\gamma)\,, \end{equation} where $\rho_\gamma(\mathbf{r}) = \sum_{\sigma=1}^q\gamma(\mathbf{x},\mathbf{x}) = \sum_{\sigma=1}^q\gamma(\mathbf{r},\sigma,\mathbf{r},\sigma)$ is the particle density, \begin{equation} \label{D} D(\rho,\mu) = \frac{1}{2}\int_{\mathbb{R}^3}\int_{\mathbb{R}^3} \frac{\rho(\mathbf{r})\mu({\mathbf{r}'})}{|\mathbf{r}-{\mathbf{r}'}|} d\mathbf{r} d{\mathbf{r}'}, \end{equation} and where the exchange term is (note the sign in \eqref{hfeqn}) \begin{equation} \label{X} X(\gamma)= \frac12\int \int \, \frac{|\gamma(\mathbf{x},\mathbf{x}')|^2}{|\mathbf{r}-{\mathbf{r}'}|} \, d\mathbf{x} \, d\mathbf{x}' \ . 
\end{equation} As is well known, this functional $\mathcal{E}^{\rm HF}$ is the expectation value of $H$ in a determinantal wavefunction $\Psi $ made of orthonormal functions $\phi_i$ \begin{equation} \Psi (\mathbf{x}_1, \, \mathbf{x}_2,\ \dots \, \mathbf{x}_N) = (N!)^{-1/2}\, {\rm det} \phi_i(\mathbf{x}_j)|_{i,j=1}^N\ , \end{equation} in which case \begin{equation} \label{hfgamma} \gamma(\mathbf{x},\mathbf{x}') = \sum_{i=1}^N \phi_i(\mathbf{x}) \phi_i(\mathbf{x}')^* \ . \end{equation} It is also well known that {\it any} one-body density matrix $\gamma$ for fermions always has two properties (in addition to the obvious requirement of self-adjointness, i.e., $\gamma(\mathbf{x},\mathbf{x}') = \gamma(\mathbf{x}',\mathbf{x})^*$) which are necessary and sufficient to ensure that it comes from a normalized $N$-body state satisfying the Pauli exclusion principle, see e.g., \cite{Lieb1983D,Lieb1985}: \begin{equation} \label{props} 0\leq \gamma \leq 1 \ {\rm as \ an \ operator} \quad {\rm and} \quad \tr \gamma =N, \end{equation} where $\tr$ denotes the trace $= \int d\mathbf{x}\,\gamma(\mathbf{x},\mathbf{x}) =$ sum of the eigenvalues of $\gamma$. A simple consequence of \eqref{props} is that the spin-summed density matrix $(\tr_\sigma \gamma ) (\mathbf{r}, \mathbf{r}') = \sum_\sigma \gamma (\mathbf{r}, \sigma, \mathbf{r}' \sigma) $, which acts on functions of space alone, satisfies \begin{equation} \label{propssum} 0\leq \tr_\sigma \gamma \leq q \ {\rm as \ an \ operator} \quad {\rm and} \quad \tr (\tr_\sigma \gamma) =N. \end{equation} The HF $\gamma$ in \eqref{hfgamma} has $N$ eigenvalues equal to 1, and the rest equal to 0, but one could ignore this feature and apply \eqref{hfeqn} to any $\gamma$ satisfying \eqref{props}. If we do this, then we can define the HF energy (for all $N\geq 0$) by \begin{equation} \label{hfenergy} E^{\rm HF}(N) = \inf_\gamma \{\mathcal{E}^{\rm HF}(\gamma) : \ 0\leq \gamma \leq1, \tr \gamma =N\} \ . 
\end{equation} (We say `infimum' in \eqref{hfenergy} instead of `minimum' because there may be no actual minimizer -- as occurs when $N \gg Z = \sum_j Z_j$.) A HF energy minimizer does exist when $N< Z+1$, at least, and possibly for larger $N$'s as well \cite{LiebSimon1974,LiebSimon1977T}. It is a fact \cite{Lieb1981} (see also \cite{Bach1992}) that $ E^{\rm HF}(N) $ is the infimum over all $\gamma$'s of the determinantal form \eqref{hfgamma}, i.e., the determinantal functions always win the competition in \eqref{hfenergy}. Therefore, $E^{\rm HF}(N) \geq E_0(N)$, where $E_0(N)$ is the true ground state energy of the Hamiltonian \eqref{ham}. Thus, the HF density-matrix-functional has the advantage of providing an upper bound to $E_0 $, but it cannot do better than HF theory. We know, however, that this is often not very good, numerically, especially for dissociation energies. Another disadvantage of $\mathcal{E}^{\rm HF}$ is that the energy minimizer $\gamma^{\rm HF} $ (if there is one) may not be unique although, in some cases, it is known to be unique (see \cite{HuberSiedentop2007} for the Dirac-Fock equations). In fact it follows from Hund's rule that in many cases the spatial part of the wave function has a non-zero angular momentum and cannot, therefore, be spherically symmetric. A third point to note is that in HF theory the electron Coulomb repulsion is modeled by $D(\rho_\gamma, \rho_\gamma) - X(\gamma) $. This energy really should be $\int_{\mathbb{R}^3}\int_{\mathbb{R}^3} |\mathbf{r}-{\mathbf{r}'}|^{-1} \, \rho^{(2)}(\mathbf{r},{\mathbf{r}'})d\mathbf{r} d{\mathbf{r}'}$, however, where $\rho^{(2)}(\mathbf{r},{\mathbf{r}'})$ is the two-particle density, i.e., the spin summed diagonal part of the \textit{two-particle} density matrix. 
In effect, one is replacing $\rho^{(2)}(\mathbf{r},{\mathbf{r}'})$ by $G^{(2)}(\mathbf{r},{\mathbf{r}'}) = \frac12\rho_\gamma(\mathbf{r})\rho_\gamma({\mathbf{r}'}) - \frac12 \sum_{\sigma,\sigma'=1}^q|\gamma(\mathbf{x},\mathbf{x}')|^2$. It is not possible for this $G^{(2)}$ to be the two-body density of any state because that would require that $\int_{\mathbb{R}^3} G^{(2)}(\mathbf{r},{\mathbf{r}'}) d{\mathbf{r}'} = \frac{N-1}{2} \rho(\mathbf{r})$. This condition fails unless the state is a HF state (because even the total integral is wrong, namely, $\int\!\!\int G^{(2)} d\mathbf{r} d{\mathbf{r}'} > N(N-1)/2$ unless we have a HF state). \subsection{M\"uller's Square-Root Exchange-Correlation Energy} \label{1B} There is an alternative to $\mathcal{E}^{\rm HF}(\gamma)$, which we will call $\mathcal{E}^{\rm M} (\gamma)$ (M\"uller \cite{Muller1984}). It replaces the operator $\gamma$ in $X(\gamma) $ by $\gamma^{1/2}$. This means the {\it operator} square root (note that $\gamma$ is self-adjoint and positive as an operator, so the square root is well defined). Thus, $\gamma(\mathbf{x},\mathbf{x}') = \int d\mathbf{x}''\, \gamma^{1/2}(\mathbf{x},\mathbf{x}'') \gamma^{1/2}(\mathbf{x}'',\mathbf{x}')$. In terms of spectral representations, with eigenvalues $\lambda_i $ and orthonormal eigenfunctions $\phi_i$ (the `natural orbitals'), \begin{equation} \label{exp} \gamma(\mathbf{x},\mathbf{x}') = \sum_{i=1}^\infty \lambda_i \, \phi_i(\mathbf{x}) \phi_i(\mathbf{x}')^* \quad {\rm and} \quad \gamma^{1/2} (\mathbf{x},\mathbf{x}') = \sum_{i=1}^\infty \lambda_i^{1/2} \phi_i(\mathbf{x}) \phi_i(\mathbf{x}')^* . \end{equation} There is no simple formula for the calculation of $\gamma^{1/2}(\mathbf{x},\mathbf{x}')$ in terms of $\gamma(\mathbf{x},\mathbf{x}')$, unfortunately, but there is an integral representation, which we shall use later. 
Thus, \begin{equation} \label{mullerfunct} \mathcal{E}^{\rm M}(\gamma) = \frac{\hbar^2}{2m} \tr (-\nabla^2 \gamma) -e^2 \int_{\mathbb{R}^3} V_c(\mathbf{r}) \rho_\gamma(\mathbf{r}) d\mathbf{r} +e^2D(\rho_\gamma, \rho_\gamma) -e^2 X(\gamma^{1/2}) \ , \end{equation} and \begin{equation} \label{sqrtenergy} E^{\rm M}(N) = \inf_\gamma \{\mathcal{E}^{\rm M}(\gamma)\, : \, 0\leq \gamma \leq1, \tr \gamma =N\} \ . \end{equation} The functional $\mathcal{E}^{\rm M}(\gamma) $ was introduced by M\"uller \cite{Muller1984} and was rederived by other methods by Buijse and Baerends \cite{BuijseBaerends2002}. A similar functional was introduced by Goedecker and Umrigar \cite{GoedeckerUmrigar1998}, the chief difference being that \cite{GoedeckerUmrigar1998} attempts to remove an electron `self-energy' by omitting certain diagonal terms that arise when \eqref{sqrtenergy} is explicitly written out using the expansion of $\gamma$ into its orbitals \eqref{exp}. In particular, quite analogous to density functional theory, explicit corrections terms have been added to correct the overestimate of binding energies using M\"uller's functional (Gritsenko et al. \cite{Gritsenkoetal2005}). {}From now on we will use atomic units, i.e., $\hbar=m=e=1$. To get some idea of the magnitudes involved we can look at hydrogen. Numerical computations \cite[Figure 6]{Gritsenkoetal2005} and \cite[Figure 3.1]{Helbig2006} suggest that $E^{\rm M}(1)\approx -0.525$. This is to be compared with the true energy, $-0.5$. It might be wondered how M\"uller's exchange energy compares to the old Dirac $-\int \rho_\gamma(\mathbf{r})^{4/3} d\mathbf{r}$. As remarked after Lemma \ref{xhardy}, and as found earlier by Cioslowski and Pernal \cite{CioslowskiPernal1999}, $ X(\gamma^{1/2}) $ can not be bounded by $C\int \rho_\gamma(\mathbf{r})^{4/3} d\mathbf{r}$ for any $C$. 
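Although no closed pointwise formula expresses $\gamma^{1/2}(\mathbf{x},\mathbf{x}')$ in terms of $\gamma(\mathbf{x},\mathbf{x}')$, in a finite orbital basis the spectral representation \eqref{exp} computes it directly. A minimal numerical sketch (the matrix below is an arbitrary illustration, not a physical density matrix):

```python
import numpy as np

rng = np.random.default_rng(0)

# A random finite-dimensional "density matrix" gamma with eigenvalues in [0, 1].
n = 6
U, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
occ = rng.uniform(0.0, 1.0, size=n)            # fractional occupation numbers
gamma = (U * occ) @ U.conj().T                 # gamma = sum_i lambda_i |phi_i><phi_i|

# Operator square root via the spectral representation, eq. (exp):
# gamma^(1/2) = sum_i lambda_i^(1/2) |phi_i><phi_i|
lam_, V = np.linalg.eigh(gamma)
lam_ = np.clip(lam_, 0.0, None)                # guard against tiny negative round-off
sqrt_gamma = (V * np.sqrt(lam_)) @ V.conj().T

# Sanity checks: (gamma^(1/2))^2 = gamma, and tr gamma^(1/2) >= tr gamma
assert np.allclose(sqrt_gamma @ sqrt_gamma, gamma)
assert np.trace(sqrt_gamma).real >= np.trace(gamma).real - 1e-12
print(np.trace(gamma).real, np.trace(sqrt_gamma).real)
```

Since $\lambda_i^{1/2}\geq \lambda_i$ for $\lambda_i\in[0,1]$, one also sees $\tr \gamma^{1/2}\geq \tr\gamma$, with equality precisely when $\gamma$ is a projection.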
M\"uller \cite{Muller1984} also considered using $\gamma^{p}(\mathbf{x},\mathbf{x}') \gamma^{1-p}(\mathbf{x}',\mathbf{x})$ for some $0<p<1$ in place of $|\gamma^{1/2}(\mathbf{x},\mathbf{x}')|^2=\gamma^{1/2}(\mathbf{x},\mathbf{x}') \gamma^{1/2}(\mathbf{x}',\mathbf{x})$, which satisfies the integral condition, but he decided to take $p=1/2$ because this yields the smallest value of $X$, and hence the largest energy. (The proof is analogous to $a^p b^{1-p} + a^{1-p}b^p \geq 2\sqrt{ab}$ for positive numbers $a,b$.) M\"uller's functional \eqref{mullerfunct} has several advantages, the first of which is {\bf A.1.} The quantity that effectively replaces $\rho^{(2)}(\mathbf{r},{\mathbf{r}'})$ in the functional is now $$\frac12\rho_\gamma(\mathbf{r})\rho_\gamma({\mathbf{r}'}) -\frac12\sum_{\sigma,\sigma'=1}^q |\gamma^{1/2}(\mathbf{r},\sigma,\ {\mathbf{r}'}, \sigma')|^2 , $$ and this satisfies the correct integral condition $$ \frac{1}{2}\int \left[\rho_\gamma(\mathbf{r})\rho_\gamma ({\mathbf{r}'}) - \sum_{\sigma,\sigma'=1}^q\gamma^{1/2}(\mathbf{x},\mathbf{x}') \gamma^{1/2}(\mathbf{x}',\mathbf{x}) \right] d{\mathbf{r}'}= \frac{N-1}{2} \rho_\gamma (\mathbf{r}). $$ On the other hand, $\rho_\gamma(\mathbf{r})\rho_\gamma({\mathbf{r}'}) - \sum_{\sigma,\sigma'=1}^q|\gamma^{1/2}(\mathbf{x},\mathbf{x}')|^2$ is not necessarily positive as a \textit{function} of $\mathbf{r},{\mathbf{r}'}$, whereas the HF choice $\rho_\gamma(\mathbf{r})\rho_\gamma({\mathbf{r}'}) - \sum_{\sigma,\sigma'=1}^q|\gamma(\mathbf{x},\mathbf{x}') |^2\geq 0$ (which is true for any positive semi-definite operator). This non-positivity is a source of some annoyance. In particular, it prevents the application of a standard method \cite{Lieb1984} for proving a bound on the maximum $N$. {\bf A.2.} A special choice of $\gamma$ is a HF type of $\gamma$, namely one in which all the $\lambda_i$ are 0 or 1. In this special case $\gamma^{1/2} = \gamma$ and the value of the M\"uller energy equals the HF energy. 
Thus, the M\"uller functional is a generalization of the HF functional, and its energy satisfies $E^{\rm M}(N)\leq E^{\rm HF}(N)$ (because, as we remarked above, the minimizers for the HF problem always have this projection property). Later, we shall propose that the quantity $\widehat{{E}}^{\rm M}(N) = E^{\rm M} (N) +N/8$ should be interpreted as the binding energy; it is not obvious that $\widehat{{E}}^{\rm M}(N)$ satisfies such an inequality, however. Indeed, it does not, in general, as the hydrogen example shows ($ -0.525 + 1/8 > -0.5$). {\bf A.3.} The original M\"uller functional seems to give good numerical results when few electrons are involved. Moreover, $E^{\rm M}(N)$ appears to satisfy $E^{\rm M}(N) \leq E_0(N)$ for all electron numbers $N$, i.e., it is always a lower bound. We shall {\it prove} this inequality when $N=2$ in the last section. (Numerical accuracy for larger electron numbers seems to require appropriately modified functionals. We refer the reader interested in numerical results and improved density matrix functionals to the papers of Buijse and Baerends \cite{BuijseBaerends2002}, Staroverov and Scuseria \cite{StaroverovScuseria2002}, Herbert and Harriman \cite{HerbertHarriman2003}, Gritsenko et al. \cite{Gritsenkoetal2005}, Poater et al. \cite{Poateretal2005}, Lathiotakis et al. \cite{Lathiotakisetal2005}, and Helbig \cite{Helbig2006}.) Since we are primarily interested in the structure of the underlying theory rather than numerical results, we concentrate on the unmodified original M\"uller functional despite the above mentioned numerical deficiency for large electron number. The M\"uller functional can be viewed as a prototype of density matrix functionals with simple structures, but which are potentially useable as the basis of more elaborate functionals, e.g., \cite{GoedeckerUmrigar1998,CsanyiArias2000,Csanyietal2002,Gritsenkoetal2005}.
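The counting behind {\bf A.1} has a transparent finite-dimensional analogue: with $\tr\gamma=N$, the total weight of M\"uller's pair term is $\tfrac12[(\tr\gamma)^2-\tr(\gamma^{1/2}\gamma^{1/2})]=\tfrac12 N(N-1)$ exactly, while the HF-form replacement involves $\tr\gamma^2$, which is strictly smaller than $N$ for fractional occupations, so it overshoots $N(N-1)/2$, as noted at the end of Section 1.1. A small numerical sketch (the matrix is an arbitrary illustration, with discrete traces standing in for the integrals):

```python
import numpy as np

rng = np.random.default_rng(1)

n = 8
U, _ = np.linalg.qr(rng.normal(size=(n, n)))
occ = rng.uniform(0.0, 1.0, size=n)             # fractional occupations, 0 < lambda_i < 1
gamma = (U * occ) @ U.T
N = np.trace(gamma)                             # "particle number" tr gamma

lam_, V = np.linalg.eigh(gamma)
sqrt_gamma = (V * np.sqrt(np.clip(lam_, 0, None))) @ V.T

# Total weight of the pair term: (1/2)[(tr gamma)^2 - tr(X X)]
# with X = gamma for the HF-form exchange and X = gamma^(1/2) for Mueller's.
pair_HF = 0.5 * (N**2 - np.trace(gamma @ gamma))
pair_M = 0.5 * (N**2 - np.trace(sqrt_gamma @ sqrt_gamma))

# Mueller's choice reproduces N(N-1)/2 exactly; the HF form overshoots
# whenever some occupation is fractional (tr gamma^2 < tr gamma).
assert abs(pair_M - 0.5 * N * (N - 1)) < 1e-10
assert pair_HF > 0.5 * N * (N - 1)
print(pair_M, pair_HF, 0.5 * N * (N - 1))
```

The check uses only $\tr(\gamma^{1/2}\gamma^{1/2})=\tr\gamma=N$ versus $\tr\gamma^2<\tr\gamma$, i.e., the same mechanism as in the text.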
\subsection{Convexity and Some of its Uses} A key observation about $\mathcal{E}^{\rm M}(\gamma)$ is that it is a {\it convex functional} of $\gamma$. This means that for all $0<\lambda<1 $ and density matrices $\gamma_1, \gamma_2$ (not necessarily with the same trace and not necessarily satisfying $\gamma \leq 1$) \begin{equation} \label{convexity} \mathcal{E}^{\rm M}( \lambda \gamma_1 + (1-\lambda)\gamma_2) \leq \lambda \mathcal{E}^{\rm M} ( \gamma_1 )+ (1-\lambda)\mathcal{E}^{\rm M}(\gamma_2) \ . \end{equation} (Note that the convex combination $\lambda \gamma_1 + (1-\lambda)\gamma_2$ satisfies the conditions in \eqref{sqrtenergy} if $\gamma_1$ and $\gamma_2$ both satisfy the conditions.) The convexity is a bit surprising, given the minus sign in the exchange term of $\mathcal{E}^{\rm M}$, and it will lead to several important theorems. One is that the electron {\it density} $\rho_\gamma(\mathbf{r})$ of the minimizer (if there is one) is the same for all minimizers with the same $N$, and hence that the density of an atom is always spherically symmetric. This contrasts sharply with HF theory, whose functional \eqref{hfeqn} is {\it not} convex, and it can contradict the original Schr\"odinger theory (since an atom can have a nonzero angular momentum in its ground state). Also, the Dirac estimate for the exchange energy, $-\int \rho^{4/3}$ is not convex; it is concave, in fact! Some writers \cite{Grossetal1991} regard the retention of symmetry as a desirable property for an approximate theory; one speaks of the ``symmetry dilemma'' of HF theory (which means that while symmetry restriction of HF orbitals improves the overall symmetry it raises the minimum energy). M\"uller theory has no symmetry dilemma! {}From another perspective the sphericity of an atom might be seen as a drawback since real atoms sometimes have a non-zero angular momentum, and such states are not spherically symmetric. 
Sphericity is not a drawback, in fact, since density-matrix-functional theory deals with density-matrices obtained from {\it all} $N-$particle states, including mixed ones (because the only restriction we impose is that the eigenvalues of $\gamma$ lie between 0 and 1, and this condition precisely defines the set of $\gamma$ obtained from the set of mixed states, not the set of pure states). In the case of atoms there is always a mixed state with spherical symmetry, namely the projection onto all the ground states, divided by the degeneracy. This is the state that one sees (in principle) when looking at an atom at zero temperature (L\"uders' projection postulate \cite{Luders1951}). A second consequence of convexity is that the energy $E^{\rm M}(N) $ is always a convex function of $N$, as it is in Thomas-Fermi theory, for example \cite{LiebSimon1977,Lieb1981}. This means that as we add one electron at a time to our molecule, the (differential) binding energy steadily decreases. Such a property is {\it not} known to hold for the true Schr\"odinger energy $E_0(N)$. The convexity of $\mathcal{E}^{\rm M}(\gamma)$ is not at all obvious. All the terms except $-X(\gamma^{1/2})$ are clearly convex. In fact, the term $D(\rho_\gamma, \rho_\gamma)$ is {\it strictly} convex as a function of the density $\rho_\gamma(\mathbf{r})$ (strict inequality in \eqref{convexity} when $\rho_{\gamma_1}\neq \rho_{\gamma_2}$) since the Coulomb kernel $|\mathbf{r}-{\mathbf{r}'}|^{-1} $ is positive definite. It is this strict convexity that implies the uniqueness of $\rho_\gamma(\mathbf{r})$ when there is a minimizer. To show convexity of $\mathcal{E}(\gamma)$, therefore, we have to show concavity (like \eqref{convexity} but with the inequality reversed) of the functional $X(\gamma^{1/2})$. First, we write $|\mathbf{r}-{\mathbf{r}'}|^{-1} = \int_\Lambda B_\lambda(\mathbf{r})^*B_\lambda({\mathbf{r}'}) d\lambda $ where $\lambda $ is in some parameter-space $\Lambda$. 
There are many ways to construct such a decomposition. One way, due to Fefferman and de la Llave \cite{FeffermandelaLlave1986} and used in the sequel, takes the functions $B_\lambda$ to be characteristic functions of balls in $\mathbb{R}^3$, with $\lambda$ parametrizing their radii and centers. Another way is $|\mathbf{r}-{\mathbf{r}'}|^{-1} = C\int_{\mathbb{R}^3} |\mathbf{r}-\mathbf{z}|^{-2} |{\mathbf{r}'}-\mathbf{z}|^{-2} d\mathbf{z}$. In any case, it now suffices to prove that $\int d\mathbf{x}\, d\mathbf{x}'\, \gamma^{1/2}(\mathbf{x}, \mathbf{x}') B(\mathbf{r})^* \gamma^{1/2}(\mathbf{x}', \mathbf{x}) B({\mathbf{r}'})$ is concave in $\gamma$, for any fixed function $B(\mathbf{r})$. We can write this in abstract operator form as $\tr \gamma^{1/2} B^\dagger \gamma^{1/2} B$. The concavity of such functions of $\gamma$ was proved by Wigner and Yanase \cite{WignerYanase1964} in connection with a study of entropy. Convexity also holds for M\"uller's general $p$ functional, which we mentioned earlier. It uses $\gamma^{p}(\mathbf{x},\mathbf{x}')\gamma^{1-p}(\mathbf{x}',\mathbf{x})$ in the exchange term. The fact that $\tr \gamma^{p} B^\dagger \gamma^{1-p} B$ is concave for all $0<p<1$ was proved in \cite{Lieb1973C} and plays a role in quantum information theory \cite{NielsenChuang2000}. Another important use of the convexity of $\mathcal{E}^{\rm M}(\gamma)$ is to significantly simplify the question of the spin dependence of $\gamma(\mathbf{r},\sigma, {\mathbf{r}'}, \sigma')$. For concreteness, let us assume the usual case of two spin states ($q=2$), but the conclusion holds for any $q$.
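Before turning to spin, we note that the concavity statements just quoted are easy to probe numerically in finite dimensions. The following Python sketch (our illustration, not part of any proof; all names are ours) tests midpoint concavity of $\gamma\mapsto\tr\,\gamma^{p} B^\dagger \gamma^{1-p} B$ on random positive matrices, for $p=1/2$ (the Wigner--Yanase case) and two other values of $p$:

```python
import numpy as np

rng = np.random.default_rng(0)

def mat_power(gamma, p):
    # fractional power of a positive semi-definite matrix via eigendecomposition
    w, v = np.linalg.eigh(gamma)
    return (v * np.clip(w, 0.0, None) ** p) @ v.conj().T

def exchange_form(gamma, B, p):
    # tr(gamma^p B^dagger gamma^(1-p) B); concave in gamma for 0 < p < 1
    # (Wigner-Yanase for p = 1/2, Lieb for general p)
    return np.trace(mat_power(gamma, p) @ B.conj().T @ mat_power(gamma, 1 - p) @ B).real

def random_psd(n):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return a @ a.conj().T

for p in (0.3, 0.5, 0.7):
    for _ in range(25):
        g1, g2 = random_psd(5), random_psd(5)
        B = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
        mid = exchange_form(0.5 * (g1 + g2), B, p)
        avg = 0.5 * (exchange_form(g1, B, p) + exchange_form(g2, B, p))
        assert mid >= avg - 1e-8   # midpoint concavity
```

Random sampling of course proves nothing, but a single violation would disprove concavity; none occurs.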
In the HF problem it is not obvious how $\gamma$ should depend on $\sigma, \sigma'$ and usually one makes some standard a-priori assumption, such as that $\gamma^{\rm HF}(\mathbf{r},\sigma, {\mathbf{r}'}, \sigma') = \gamma_{\uparrow,\uparrow} (\mathbf{r},{\mathbf{r}'}) \delta_{\sigma, \uparrow} \delta_{\sigma', \uparrow} + \gamma_{\downarrow,\downarrow} (\mathbf{r},{\mathbf{r}'}) \delta_{\sigma, \downarrow} \delta_{\sigma', \downarrow} $. In the M\"uller case this problem does not arise. Note that the functional $\mathcal{E}^{\rm M}$ is invariant under simultaneous rotation of $\sigma $ and $\sigma'$ in spin-space. (This means that we regard $\gamma$ as a $2\times 2$ matrix whose elements are functions of $\mathbf{r}, {\mathbf{r}'}$. The spin rotation is then just a $2\times2$ unitary transformation of this matrix.) If we take any $\gamma(\mathbf{r},\sigma, {\mathbf{r}'}, \sigma')$ and average it over all such simultaneous rotations we will obtain a new $\widetilde\gamma$ whose energy $\mathcal{E}^{\rm M}(\widetilde\gamma)$ is at least as low as that of the original $\gamma$ (by convexity). But $\widetilde\gamma$ is clearly spin-space rotation invariant, which means it must have the form \begin{equation} \label{gammaform} \widetilde\gamma (\mathbf{r},\sigma, {\mathbf{r}'}, \sigma') = \frac12 \widehat\gamma (\mathbf{r}, {\mathbf{r}'}) \otimes \mathbb{I} \end{equation} where $\mathbb{I}$ is the $2\times 2$ identity matrix. The subsidiary conditions become \begin{equation} \label{newcondition} \tr \widehat\gamma \equiv \int \widehat\gamma (\mathbf{r},\mathbf{r}) d\mathbf{r} =N \qquad\qquad {\rm and } \qquad\qquad 0\leq \widehat\gamma \leq 2 \ . \end{equation} The change from 1 to 2 in \eqref{newcondition} is to be noted. Often $\widehat\gamma $ is called the {\it spin-summed density matrix}.
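For $q=2$ the averaging argument can be made concrete: the Haar average of $UAU^\dagger$ over $2\times2$ unitaries equals $(\tr A/2)\,\mathbb{I}$, and the same projection is already achieved by averaging over the four Pauli operators. A small numerical check in Python (our sketch, with our own names):

```python
import numpy as np

# Pauli matrices (including the identity)
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin_twirl(A):
    # average a 2x2 spin block over the Pauli group; for 2x2 matrices this
    # equals the Haar average over simultaneous spin rotations
    return sum(U @ A @ U.conj().T for U in (I2, sx, sy, sz)) / 4

rng = np.random.default_rng(1)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

# the average is (tr A / 2) * Identity: the spin dependence is wiped out,
# leaving exactly the block form of Eq. (gammaform)
assert np.allclose(spin_twirl(A), (np.trace(A) / 2) * I2)
```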
The conclusion is that to get the correct minimum energy one can always restrict attention to the simpler, spin-independent $\widehat\gamma$, but with the revised conditions \eqref{newcondition}. This is a significant simplification relative to HF theory. In much of the sequel we utilize the formal notation $\mathbf{x}$ instead of $\mathbf{r}$, but the reader should keep in mind that one can always assume that $\gamma$ has the form \eqref{gammaform} and all spin summations become trivial. A question will arise: Although it is possible to choose $\gamma $ in the form \eqref{gammaform}, are there other possibilities? They will certainly exist if $\widehat\gamma $ is not unique (but we conjecture that it is unique since its density is unique, as we said). Even if $\widehat\gamma $ is unique we still might have other possibilities, however, when $N$ is small. For example, we could take $\gamma (\mathbf{x}, \mathbf{x}') = \widehat\gamma (\mathbf{r},{\mathbf{r}'}) \times \delta_{\sigma, \uparrow} \delta_{\sigma', \uparrow} $, but this density matrix is bounded above by 1 only if $\widehat\gamma \leq 1$ (not $\leq 2$). This situation can arise if $N$ is small, but we expect that it does not arise when $N\geq 1$. In any case, we show that, for large $N$ and $Z$, $\widehat\gamma $ has at least one maximal eigenvalue, namely 2 (see Prop.~\ref{propq}). In short, whatever the M\"uller functional has to say about the energy, it probably has little to say, reliably, about the spin of the ground state. Unlike HF theory, we do not have to worry about spin here. This does not mean that HF theory is necessarily better as concerns spin. Sometimes it is \cite{Bachetal1994}, and sometimes it is not \cite{Bachetal2006}. In the atomic case $\mathcal{E}^{\rm M}(\gamma)$ is also rotationally invariant and we can apply the same logic used above for the spin to the simultaneous rotation of $\mathbf{r}, {\mathbf{r}'}$ in $\widehat\gamma (\mathbf{r},{\mathbf{r}'})$.
The conclusion is that we may assume the following computationally useful representation: \begin{equation} \widehat\gamma (\mathbf{r},{\mathbf{r}'}) = \sum_{\ell=0}^\infty \sum_{m=-\ell}^\ell \gamma_\ell(r,r') Y_{\ell, m}(\theta_\mathbf{r})\, Y_{\ell, m}^*(\theta_{\mathbf{r}'}) = \sum_{\ell=0}^\infty \frac{2\ell +1}{4\pi} \gamma_\ell(r , r') P_\ell (\cos \Theta) , \end{equation} where $r=|\mathbf{r}|, r'=|{\mathbf{r}'}|$. The $Y_{\ell, m}$ are normalized spherical harmonics, $\theta_\mathbf{r}$ denotes the angular variables of the vector $\mathbf{r}$, etc., $P_\ell $ is the $\ell^{\rm th}$ Legendre polynomial and $\Theta $ is the angle between $\mathbf{r}$ and ${\mathbf{r}'}$. Another way to say this is that we can assume that the eigenfunctions of $\widehat\gamma (\mathbf{r},{\mathbf{r}'}) $ are radial functions times spherical harmonics $Y_{\ell, m}$ and that the allowed $m$ values occur with equal weight. This observation can simplify numerical computations. Any other symmetry can be treated in a similar way. For example, in the case of a solid there is translation invariance of the lattice of nuclei. By wrapping a large, finite piece of the lattice on a torus (periodic boundary conditions) we have a finite system with translation invariance and we can conclude, as above, that we can assume that $\widehat\gamma (\mathbf{r},{\mathbf{r}'}) $ is also translation invariant, which means that $\widehat\gamma (\mathbf{r},{\mathbf{r}'}) $, viewed as a function of $\mathbf{r}+{\mathbf{r}'}$ and $\mathbf{r}-{\mathbf{r}'}$, is periodic in the variable $\mathbf{r}+{\mathbf{r}'}$. One obvious symmetry is complex conjugation ($i\to -i$) in the absence of a magnetic field. Convexity implies that in the spin-independent formulation any minimizing $\gamma $ must be real, as shown in Proposition \ref{realmin} of Section \ref{minprop}.
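The link between the two sums in the representation above is the addition theorem $\sum_{m}Y_{\ell m}(\omega)Y^*_{\ell m}(\omega') = \frac{2\ell+1}{4\pi}P_\ell(\cos\Theta)$. Here is a self-contained numerical verification for $\ell=1$ (a sketch; the explicit $Y_{1m}$ below use the Condon--Shortley convention):

```python
import numpy as np

def Y1(m, theta, phi):
    # normalized spherical harmonics for l = 1 (Condon-Shortley convention)
    if m == 0:
        return np.sqrt(3 / (4 * np.pi)) * np.cos(theta)
    if m == 1:
        return -np.sqrt(3 / (8 * np.pi)) * np.sin(theta) * np.exp(1j * phi)
    if m == -1:
        return np.sqrt(3 / (8 * np.pi)) * np.sin(theta) * np.exp(-1j * phi)

rng = np.random.default_rng(2)
theta, thetap = rng.uniform(0, np.pi, 2)
phi, phip = rng.uniform(0, 2 * np.pi, 2)

# sum over m of Y_1m(omega) Y_1m^*(omega')
lhs = sum(Y1(m, theta, phi) * np.conj(Y1(m, thetap, phip)) for m in (-1, 0, 1))

# (2l+1)/(4 pi) P_l(cos Theta), with P_1(x) = x and Theta the angle
# between the two directions
cosT = (np.sin(theta) * np.sin(thetap) * np.cos(phi - phip)
        + np.cos(theta) * np.cos(thetap))
rhs = (3 / (4 * np.pi)) * cosT

assert np.isclose(lhs.real, rhs) and np.isclose(lhs.imag, 0.0, atol=1e-12)
```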
\subsection{The M\"uller Equations\label{subsec:Mueller}} If the M\"uller functional has a minimizing $\gamma$ (with $\tr \gamma =N$) then this $\gamma$ satisfies an Euler equation. A minimizer does exist if $N\leq Z$ as we show in Theorem \ref{fulltrace}. It is not altogether a trivial matter to write down an equation satisfied by a minimizing $\gamma$. Conversely, one can ask whether a $\gamma $ that satisfies this equation is necessarily a minimizer. We partly answer these questions in several ways. \quad 1. Suppose that $\gamma$ satisfies $\tr \gamma =N $ and that $\gamma$ minimizes $\mathcal{E}^{\rm M}(\gamma)$, i.e., $\mathcal{E}^{\rm M}(\gamma) = E^{\rm M}(N)$. Then we conclude (by definition of the minimum) that \begin{equation} \label{min1} \mathcal{E}^{\rm M}((1-t)\gamma + t\gamma') \geq \mathcal{E}^{\rm M}(\gamma) \end{equation} for all admissible $\gamma'$ with $\tr \gamma' =N $ and for all $0\leq t \leq 1$. Conversely, if $\tr \gamma =N $ and if \eqref{min1} is true for all such $\gamma'$ and for {\it some} $0<t\leq 1$ (with $t$ possibly depending on $\gamma'$) then $\gamma$ is a minimizer. Alternatively, it suffices to require that for all such $\gamma'$ \begin{equation} \label{min2} \frac{d}{dt} \mathcal{E}^{\rm M}((1-t)\gamma + t\gamma') |_{t=0 } = \lim_{t \downarrow 0} \frac1t \left[ \mathcal{E}^{\rm M}((1-t)\gamma + t\gamma') - \mathcal{E}^{\rm M}(\gamma)\right] \geq 0\, . \end{equation} To see that $\gamma$ is a minimizer we exploit the convexity of the functional $\mathcal{E}^{\rm M}$, which implies that $\mathcal{E}^{\rm M}((1-t)\gamma + t\gamma') \leq (1-t)\mathcal{E}^{\rm M}(\gamma) +t\mathcal{E}^{\rm M}(\gamma'),$ and hence, from \eqref{min1} or \eqref{min2}, that $\mathcal{E}^{\rm M}(\gamma) \leq \mathcal{E}^{\rm M}(\gamma')$. 
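The convexity fact underlying this argument is that for a convex function $f$ the difference quotients $t\mapsto (f(t)-f(0))/t$ are nondecreasing; this is what makes the one-sided condition \eqref{min2} strong enough to control $f(1)-f(0)$. A toy numerical illustration in Python (our own sketch, with a generic convex function standing in for $t\mapsto \mathcal{E}^{\rm M}((1-t)\gamma+t\gamma')$):

```python
import numpy as np

f = lambda t: (t - 0.3) ** 2 + np.exp(t)   # a smooth convex function on [0, 1]

ts = np.linspace(1e-6, 1.0, 500)
q = (f(ts) - f(0.0)) / ts                  # difference quotients at t = 0

# convexity makes the quotients nondecreasing, so the right derivative
# (their infimum) exists and bounds f(1) - f(0) from below
assert np.all(np.diff(q) >= -1e-12)
assert f(1.0) - f(0.0) >= q[0] - 1e-9
```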
(Note that the convexity also implies that $\mathcal{E}^{\rm M}((1-t)\gamma + t\gamma')$ is a convex function of $t$ in the interval $[0,1]$, which, in turn, implies that the right derivative defined in \eqref{min2} always exists.) To summarize, we say that {\it the equation defining a minimizer is \eqref{min2}} (for all $\gamma'$). To make this more explicit we have to compute the derivative in \eqref{min2}. \quad 2. The variational equations are most conveniently written down in terms of $\gamma^{1/2}$, the square root of a minimizer. In Proposition~\ref{lagrange2}, we will show that $\gamma^{1/2}(\mathbf{r},{\mathbf{r}'})$ satisfies the following variational equation. Let $\varphi_\gamma$ denote the effective potential $\varphi_\gamma(\mathbf{r})= V_c(\mathbf{r}) - \int \rho_\gamma({\mathbf{r}'}) |\mathbf{r}-{\mathbf{r}'}|^{-1} d{\mathbf{r}'}$, where $\rho_\gamma(\mathbf{r})=\sum_\sigma \gamma(\mathbf{x},\mathbf{x})=\sum_\sigma \int |\gamma^{1/2}(\mathbf{x},\mathbf{x}')|^2 d\mathbf{x}'$ denotes the particle density. Then \begin{equation}\label{vareq} \left(-\mbox{$\frac12$} \nabla_\mathbf{r}^2 - \mbox{$\frac12$} \nabla_{\mathbf{r}'}^2 - \varphi_\gamma(\mathbf{r}) - \varphi_\gamma({\mathbf{r}'}) - \frac 1{|\mathbf{r}-{\mathbf{r}'}|} - 2\mu\right) \gamma^{1/2}(\mathbf{x},\mathbf{x}') = \sum_i 2 e_i \psi_i(\mathbf{x}) \psi_i(\mathbf{x}')^* \end{equation} where $\mu\leq -1/8$, $e_i\leq 0$ and $\psi_i(\mathbf{x})$ is an eigenfunction of $\gamma^{1/2}$ with eigenvalue $1$, i.e., $\int \gamma^{1/2}(\mathbf{x},\mathbf{x}') \psi_i(\mathbf{x}') d\mathbf{x}' = \psi_i(\mathbf{x})$ for all $i$. Note that the number of $\psi_i$'s corresponding to eigenvalue $1$ is necessarily less than $N$. Conversely, is it true that any $\gamma^{1/2}$ satisfying $0\leq \gamma^{1/2} \leq 1$ (as an operator) and $\tr \left(\gamma^{1/2}\right)^2 = \tr \gamma= N$ which is a solution to \eqref{vareq} under the constraints mentioned above, is a minimizer of $\mathcal{E}^{\rm M}(\gamma)$? 
Unfortunately, we can answer this question affirmatively only if we know that the density $\rho_\gamma(\mathbf{r})$ does not vanish on a set of positive measure. Presumably such a vanishing does not occur, but we do not know how to prove this and leave it as an open problem. \iffalse We note that it is not necessary to restrict one's attention to solutions $\gamma^{1/2}(\mathbf{x},\mathbf{x}')$ of \eqref{vareq} that define non-negative operators here. It is enough to look for solutions with $-1\leq \gamma^{1/2}\leq 1$, but $e_i\geq 0$ in case $\psi_i$ is an eigenfunction of $\gamma^{1/2}$ with eigenvalue $-1$. \fi \quad 3. As a practical matter it is the fact that $\gamma$ satisfies \eqref{vareq} that is important because it gives us equations for the orbitals of $\gamma$. A minimizer $\gamma$ can be expanded in natural orbitals $\psi_j(\mathbf{x})$ as $$ \gamma(\mathbf{x},\mathbf{x}')=\sum_j\lambda_j\psi_j(\mathbf{x})\psi_j(\mathbf{x}')^* $$ with corresponding occupation numbers (eigenvalues) $0<\lambda_j\leq 1$. Then $\gamma^{1/2}(\mathbf{x},\mathbf{x}')=\sum_j \lambda_j^{1/2}\psi_j(\mathbf{x})\psi_j(\mathbf{x}')^*$. Multiplying (\ref{vareq}) by $\psi_i(\mathbf{x}')$ and integrating over $\mathbf{x}'$ yields an eigenvalue equation for the $\psi_i(\mathbf{x})$, namely \begin{equation} \left[\left(-\tfrac12\nabla^2- \varphi_\gamma\right)\gamma^{1/2} + \gamma^{1/2}\left(-\tfrac12\nabla^2-\varphi_\gamma\right)\right] |\psi_i\rangle - \left(Z_\gamma+2\mu\lambda_i^{1/2}\right)|\psi_i\rangle = 2 e_i|\psi_i\rangle\,. \end{equation} Here, $Z_\gamma$ is the operator with integral kernel \begin{equation} \label{zgamma} Z_\gamma(\mathbf{x},\mathbf{x}') = \gamma^{1/2}(\mathbf{x},\mathbf{x}')|\mathbf{r}-{\mathbf{r}'}|^{-1}\,.
\end{equation} Taking the product with $\langle \psi_j|$, this implies, in particular, that \begin{equation}\label{me} \langle \psi_j| -\tfrac 12 \nabla^2 - \varphi_\gamma | \psi_i\rangle - \frac 1{\sqrt{\lambda_i}+\sqrt{\lambda_j}} \langle \psi_j| Z_\gamma |\psi_i\rangle = \left( \mu+ e_i\right) \delta_{ij}. \end{equation} (See also Pernal \cite{Pernal2005} who derived -- although merely on a formal level -- similar equations for more general functionals). \quad 4. We shall show that $\gamma $ has no zero eigenvalues unless the density $\rho_{\gamma}(\mathbf{r})$ vanishes identically on a set $\Omega$ of positive measure. We do not expect such a set to exist but we do not know how to exclude this possibility. Any non-zero, square integrable function that vanishes identically outside $\Omega$ is a zero eigenvalue eigenfunction of $\gamma$. In any case, there are no other zero eigenvalue eigenfunctions! Hence the orbitals $\psi_j(\mathbf{x})$ form a complete set in $L^2(\mathbb{R}^3\setminus \Omega)$. Formally, we can thus rewrite Eq.~(\ref{me}) as an eigenvalue equation for a linear operator $H_\gamma$ on $L^2(\mathbb{R}^3\setminus\Omega)$. Let \begin{equation} \label{eq:h1} H_\gamma = -\tfrac12 \nabla^2 -\varphi_\gamma - \mathfrak X_\gamma\,, \end{equation} where $\mathfrak X_\gamma$ is the nonlocal exchange operator with matrix elements $\langle \psi_i |\mathfrak X_\gamma|\psi_j\rangle = (\sqrt{\lambda_i}+\sqrt{\lambda_j})^{-1} \langle \psi_i | Z_\gamma|\psi_j\rangle$. Alternatively, one can write \begin{equation} \label{defxg} \mathfrak X_\gamma = \frac 1\pi \int_0 ^\infty \frac 1{\gamma+s} Z_\gamma \frac 1{\gamma+s} \sqrt{s} \,ds\,. \end{equation} The variational equations are then \begin{equation} \label{equation} H_\gamma |\psi_j\rangle = \mu| \psi_j\rangle \end{equation} for all $j$ with $0<\lambda_j<1$, where $\mu\leq -1/8$ is the chemical potential. Notice that all eigenvalues in \eqref{equation} are identical, namely $\mu$. 
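The equivalence of the two formulas for $\mathfrak X_\gamma$ rests on the scalar identity $\frac1\pi\int_0^\infty \frac{\sqrt s}{(a+s)(b+s)}\,ds = (\sqrt a+\sqrt b)^{-1}$, applied with $a=\lambda_i$, $b=\lambda_j$ in the eigenbasis of $\gamma$. A quick numerical confirmation in Python (our sketch; the cutoff and tail correction are ad-hoc choices):

```python
import numpy as np

def x_weight(a, b, cutoff=500.0, n=500_001):
    # evaluate (1/pi) * int_0^infinity sqrt(s) / ((a+s)(b+s)) ds;
    # the substitution s = u^2 makes the integrand decay like 2/u^2
    u = np.linspace(0.0, cutoff, n)
    y = 2 * u**2 / ((a + u**2) * (b + u**2))
    du = u[1] - u[0]
    val = du * (y.sum() - 0.5 * (y[0] + y[-1]))   # trapezoidal rule
    val += 2.0 / cutoff                           # tail beyond the cutoff
    return val / np.pi

for a, b in [(0.2, 0.7), (1.0, 1.0), (0.05, 0.9)]:
    assert abs(x_weight(a, b) - 1.0 / (np.sqrt(a) + np.sqrt(b))) < 1e-4
```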
In the subspace in which $\gamma$ has eigenvalue 1, which can only be finite dimensional since $\tr \gamma =N$, there is an orthonormal basis such that \begin{equation}\label{upperequation} H_\gamma |\psi_j\rangle = \left(\mu +e_j\right) |\psi_j\rangle \end{equation} with all $e_j \leq 0$. The finite collection of numbers $\mu + e_j$ constitutes all the eigenvalues of $H_\gamma $ that are less than $\mu$. The reason we say that \eqref{equation} and \eqref{upperequation} are formal is that the operator $H_\gamma$ is only formally defined by \eqref{eq:h1}. Both $\nabla^2$ and $\mathfrak X_\gamma$ are unbounded operators. Their sum is defined as a quadratic form (i.e., expectation values) but this form does not uniquely define the operator sum. If we knew that there are no zero eigenvalues then the set $\Omega $ would be empty and $\nabla^2$ would be defined as the usual Laplacian on $\mathbb{R}^3$, but if $\mathbb{R}^3 \setminus \Omega$ has a boundary there are many extensions of $\nabla^2$ with different boundary conditions, and this prevents the precise specification of \eqref{equation} and \eqref{upperequation}. There is no problem with the matrix elements in \eqref{me}, however, since the $\psi_i$ vanish on the boundary of $\mathbb{R}^3 \setminus \Omega$. On the other hand \eqref{vareq}, which is an equation for the {\it function} $\gamma^{1/2} (\mathbf{x},\mathbf{x}')$, is true on the whole space. It is not necessary to impose any boundary conditions and $\nabla^2$ is just the usual Laplacian -- whether or not the set $\Omega$ is empty. Surely $\Omega $ is empty, in fact, and the practical quantum chemist can freely use \eqref{equation} and \eqref{upperequation}. \subsection{Other Considerations about the M\"uller Functional} Let us conclude this introduction with a list of other significant questions about $ E^{\rm M}(N)$ and with statements about what we can prove rigorously. 
{\bf Q1.} If there are no nuclei at all ($K=0$), and if we try to minimize $\mathcal{E}^{\rm M}(\gamma)$ (with $\tr \gamma =N$, however) it is clear that there will be no energy minimizing $\gamma$. There will, of course, be a minimizing sequence (i.e., a sequence $\gamma_n$, $n=1,2,\dots$, such that $\mathcal{E}^{\rm M}(\gamma_n) \to E^{\rm M}(N)$ as $n\to \infty$). Such a sequence will tend to `spread out' and get smaller and smaller as it spreads (always with $\tr \gamma_n =N$). What, then, is $ E^{\rm M}(N)$? We prove that it is \textit{exactly} given by \begin{equation} E^{\rm M}(N) = -N/8 \qquad\qquad {\rm when \ all \ }Z_j=0. \end{equation} (If the units are included the energy is $-(me^4 /8\hbar^2)N$.) A similar calculation in the context of the homogeneous electron gas was done by Cioslowski and Pernal \cite{CioslowskiPernal1999}. This situation is reminiscent of Thomas-Fermi-Dirac theory \cite{Lieb1981} where, in the absence of nuclei, the energy equals $-(const.)N$. This negative energy comes from balancing the kinetic energy against the negative exchange. In such a case it is convenient to add $+(const.) \tr \gamma$ to $\mathcal{E}^{\rm M}(\gamma)$ (with $(const.) = 1/8$ in our case) in order that the resulting energy vanish when there are no nuclei. Another way to say this is that the energy, $-1/8$, is the {\it self-energy} of a particle in this theory. It has no physical or chemical meaning but we have to pay attention to it. It is the quantity \begin{equation} \label{sqrthatenergy} \widehat{{E}}^{\rm M}(N) = E^{\rm M}(N) +\frac{N}{8} \end{equation} that might properly be regarded as the energy of $N$ electrons in the presence of the nuclei, i.e., $-\widehat{{E}}^{\rm M}(N) $ is the physical binding (or dissociation) energy. We do not insist on this interpretation, however.
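The value $-1/8$ is the hydrogenic ground-state energy for nuclear charge $1/2$ in atomic units: for trial functions $e^{-\alpha r}$ one has $\langle-\tfrac12\nabla^2\rangle=\alpha^2/2$ and $\langle(2r)^{-1}\rangle=\alpha/2$, and minimizing over $\alpha$ gives $-1/8$ at $\alpha=1/2$. A short numerical sketch of this variational computation (ours):

```python
import numpy as np

# energy expectation of -(1/2)laplacian - 1/(2r) in the normalized
# exponential trial state e^{-alpha r}: alpha^2/2 - alpha/2
alphas = np.linspace(0.01, 2.0, 100_000)
energy = alphas**2 / 2 - alphas / 2

i = np.argmin(energy)
assert abs(energy[i] - (-1 / 8)) < 1e-8   # minimum is the self-energy -1/8
assert abs(alphas[i] - 1 / 2) < 1e-3      # attained at alpha = 1/2
```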
On the other hand, if we are interested in the binding energy with fixed $N$ (e.g., the binding energy of two atoms to form a molecule) then it makes no difference whether we use differences of $\widehat{{E}}^{\rm M}(N)$ or of $E^{\rm M}(N)$. The motivation here is to ensure that the ground state energy of free electrons is zero. This can be compared with the formulation in \cite{GoedeckerUmrigar1998} in which the `self-energy' correction is obtained by omitting certain diagonal terms in the energy (when the energy is written in terms of the orbitals of $\gamma$). This procedure does not have a natural physical interpretation and, more importantly, does not appear to give the zero energy condition for free electrons. This consideration leads us to the functional \begin{equation} \label{ehatfunctional} \widehat{\mathcal{E}}^{\rm M}(\gamma) = \mathcal{E}^{\rm M}(\gamma) +\frac{1}{8}\tr \gamma \end{equation} and its corresponding infimum $\widehat{{E}}^{\rm M} (N)$. Note that $\widehat{\mathcal{E}}^{\rm M}(\gamma)$ is also a convex functional of $\gamma$ since the new term $\tr \gamma /8$ is linear, and hence convex. Likewise, $\widehat{{E}}^{\rm M}(N) $ is a convex function of $N$. Having added this term, and with nuclei present, $\widehat{{E}}^{\rm M}(N)$ will qualitatively look like the Thomas-Fermi energy, $E^{\rm TF}(N)$. That is, $\widehat E^{\rm M}(0)=0$ and $\widehat E^{\rm M}(N)$ decreases monotonically, and with non-decreasing derivative, as $N$ increases \cite{LiebSimon1977}, \cite[Fig. 1]{Lieb1981}. It is bounded below, that is, \begin{equation} \label{lowerbound-eh} \widehat{{E}}^{\rm M}(N) \geq \widehat{{E}}^{\rm M}(\infty) , \end{equation} where $\widehat{{E}}^{\rm M}(\infty) $ is some finite, negative constant. We shall {\it prove} this here. These features are displayed schematically in Fig. 1. There is another feature of $E^{\rm TF}(N)$ that we believe to be true for $\widehat{{E}}^{\rm M}(N)$, but leave as an {\it open question}.
At a certain critical value, $N_c$, of the electron number $E^{\rm TF}(N) $ stops decreasing and becomes constant for all $N\geq N_c$. When $N>N_c$ the excess charge $N-N_c$ just leaks off to infinity. In TF theory $N_c $ is the neutrality point $Z =\sum Z_j$, but this need not be so in other theories. In the original Schr\"odinger theory (\ref{ham}) $N_c$ is greater than $Z$ for many atoms (since stable, negative ions exist) but we know it is less than $2Z+1$ \cite{Lieb1984}. In the Thomas-Fermi-Weizs\"acker theory, $N_c $ is approximately $Z+ (const.)$ \cite{BenguriaLieb1985,Lieb1981}. We do not know how to prove that there is a finite $N_c$ for $\widehat{{E}}^{\rm M}(N)$, but we believe there is one. \begin{figure}[ht] \includegraphics[width=13cm, height=10cm]{curve.eps} \caption{\normalsize {Schematic diagram of the energy dependence on the particle number $N$. The lower, dashed curve is the M\"uller energy $E^{\rm M}(N)$ and the upper, solid curve is $\widehat{{E}}^{\rm M}(N) = E^{\rm M}(N) + N/8$, in which the `self-energy' $-N/8$ has been subtracted. Beyond the value $N_c$ each curve is linear, whereas for $N < N_c $ each is strictly convex and there is an energy minimizing density matrix.}} \label{Fig.1} \end{figure} {\bf Q2.} The main problem that has to be addressed is whether or not there is a $\gamma$ that minimizes $\widehat{\mathcal{E}}^{\rm M}(\gamma)$ in \eqref{sqrtenergy}. If $N_c <\infty$ we know that there is {\it no} minimizer when $N>N_c$, so we obviously do not expect to prove the existence of a minimizer for all $N$. The way around this problem, as used in \cite{LiebSimon1977}, for example, is to consider the {\it relaxed problem} \begin{equation} \label{relaxed} \widehat{{E}}^{\rm M}_\leq (N) = \inf_\gamma \{\widehat{\mathcal{E}}^{\rm M}(\gamma) \, : \, 0\leq \gamma \leq1, \tr \gamma \leq N\} \ .
\end{equation} The relaxation of the number condition allows electrons to move to infinity in case $N$ is larger than the maximal number of electrons that can be bound. In Proposition \ref{minz=} we show that $\widehat{{E}}^{\rm M}_\leq (N) = \widehat{{E}}^{\rm M} (N)$ for all $N$. The difference is that while the $\widehat{{E}}^{\rm M} $ problem may not have a minimizer, we prove that the $\widehat{{E}}^{\rm M}_\leq$ problem \eqref{relaxed} has a minimizer for all $N$. The proof is more complicated in several ways than the analogous proof in TF theory \cite{LiebSimon1977,Lieb1981}. A minimizer, which we can call $\gamma_\leq(N)$, will have some particle number $\tr \gamma_\leq(N) \equiv N_\leq \leq N$. It then follows from standard arguments using convexity (and strict convexity of $D(\rho, \rho)$) that the following is true, as displayed in Fig.~1: If $N_\leq < N$ then $\gamma_\leq(N) = \gamma_\leq(N_\leq) $ and $\widehat{{E}}^{\rm M}(N) = ({\rm constant}) = \widehat{{E}}^{\rm M}(N_\leq) $, i.e., the original problem \eqref{sqrtenergy} has no minimizer. If $N_\leq =N$ then $\gamma_\leq(N) $ is also a minimizer for the original problem \eqref{sqrtenergy}. That is, the relaxed problem and the original problem give the same minimizer and the same energy. In this case, $\widehat{{E}}^{\rm M}(N) < \widehat{{E}}^{\rm M}(N')$ for all $N'< N$. The largest $N$ with this property is equal to $N_c$. It might occur to the reader that nothing said so far precludes the possibility that $N_c = 0$, but this is not so. We prove that $N_c \geq Z$, the total nuclear charge. {\bf Q3.} How many orbitals are contained in a minimizing $\gamma$? We shall prove that $\gamma$ has infinitely many positive eigenvalues. This feature also holds for the full Schr\"odinger theory (Friesecke \cite{Friesecke2003} and Lewin \cite{Lewin2004}), whereas there are only $N$ in HF theory. We believe that $\gamma$ has no zero eigenvalues (in the `spin-summed' version), but cannot prove this.
In other words, we believe that the eigenfunctions belonging to the nonzero eigenvalues span Hilbert space (they form a complete set). We can, however, prove that the eigenfunctions of the spin-summed $\gamma$ are a complete set on the support of $\rho_\gamma(\mathbf{r})$, namely on the set of $\mathbf{r} \in \mathbb{R}^3$ for which $\rho_\gamma(\mathbf{r}) >0$. Presumably, this is the whole of $\mathbb{R}^3$. This introduction is long, but we hope it serves to clarify our goals and results, since the rest of the paper is unavoidably technical. \subsection{Open Problems} For the reader's convenience we give a brief summary of some of the open problems raised by this work, some of which are discussed at various places in this paper. \begin{enumerate} \item What is the critical value of the total electron charge, $N_c$, beyond which there is no energy minimizing $\gamma$ and the energy $\widehat{{E}}^{\rm M}(N)$ is constant? Is $N_c$ finite and can one give upper and lower bounds to it? In particular is $N_c>Z$, i.e., can negative ions exist? (We prove $N_c \geq Z$ and we prove that $\widehat{{E}}^{\rm M}(N)$ is bounded below, for all $N$, by a $Z$-dependent constant.) \item Is $E^{\rm M}(N) \leq$ the true Schr\"odinger ground state energy? (We prove this for $N=2$.) Can anything be said, in this regard, about $\widehat{{E}}^{\rm M}(N) = E^{\rm M}(N) +N/8 $? \item Is the spin-summed energy minimizing $\gamma$ unique? (We prove that all minimizers have the same density $\rho(\mathbf{r})$, however.) \item Is the domain on which the unique $\rho(\mathbf{r}) >0$ equal to the whole of $\mathbb{R}^3$ (except, possibly, for sets of measure zero)? If so, this would imply that the spin-summed $\gamma$ does not have a zero eigenvalue. \iffalse \item Do all solutions to the variational equations for the eigenfunctions of $\gamma$, in Subsection~\ref{subsec:Mueller} define a minimizing $\gamma$ or are there spurious solutions? 
If the M\"uller functional were {\it strictly} convex, instead of merely convex, the answer would immediately be yes, as is the case in Thomas-Fermi theory, for example \cite{LiebSimon1977}. \fi \item What are the qualitative properties of the density $\rho(\mathbf{r})$? How does it fall off for large $|\mathbf{r}|$? What is its behavior near the nuclei? \item In this theory do atoms bind to form molecules? (Recall that there is no binding in Thomas-Fermi theory \cite{LiebSimon1977}.) \end{enumerate} \section{The Case $Z=0$} \label{sec2} As noted in the introduction the energy of {\it free} electrons $E^{\rm M}(N)$ is not zero but is proportional to $N$. To be precise, $E^{\rm M}(N) =-N/8$ (in atomic units) when there are no nuclei, and it arises from the negative exchange energy $-X(\gamma^{1/2})$. This negative energy could be $-\infty$ were it not for the positive kinetic energy, which controls it and leads to a finite result. We shall prove that the direct Coulomb repulsion term, $D(\rho_\gamma, \rho_\gamma)$, plays {\it no role} here because it is quadratic in $\gamma$, whereas the terms we are concerned with are homogeneous of order $1$. We would get $-N/8$ even if we omitted the direct term. Similarly, the value $-N/8$ is independent of the number of spin states $q$. Moreover, the assumption $\gamma\leq 1$ is {\it not} needed in the proof. In this section, $Z= \sum Z_j= 0$, and we are considering the functional \begin{equation} \label{z=0functional} \mathcal E^{\rm M}(\gamma) \equiv \tr(-\mbox{$\frac12$}\nabla^2\gamma) + D(\rho_\gamma,\rho_\gamma)-X(\gamma^{1/2}) \end{equation} and the minimal energy $E^{\rm M}(N)$ in \eqref{sqrtenergy}. We also consider the relaxed energy $E^{\rm M}_{\leq}(N)$ for which, in analogy with \eqref{relaxed}, the condition $\tr \gamma =N $ is replaced by $\tr \gamma \leq N $.
We always assume that $(-\nabla^2+1)^{1/2}\gamma^{1/2}\in\mathfrak S^2$, the set of Hil\-bert-Schmidt operators, so $\tr((1-\nabla^2)\gamma) = \int \int d\mathbf{x}\, d\mathbf{x}'\, \left(|\nabla\gamma^{1/2}(\mathbf{x}, \mathbf{x}')|^2 +|\gamma^{1/2}(\mathbf{x},\mathbf{x}')|^2\right) <\infty$. We use the usual notation for $L^p$-norms, namely $$\Vert f \Vert_p = \left(\int |f(\mathbf{x})|^p d\mathbf{x} \right)^{1/p}\ \text{and}\ \Vert f \Vert_\infty = \sup_\mathbf{x} \{|f(\mathbf{x} )|\}. $$ \begin{proposition}\label{z0} If $Z=0$, then for any $N>0$, \begin{equation} E^{\rm M}(N) = E^{\rm M}_\leq(N) = -N/8 \end{equation} and there is no minimizing $\gamma$. \end{proposition} \begin{proof} \underline{Lower bound}: We use the lower semi-boundedness of the hydrogenic Hamiltonian (i.e., for a fictitious nucleus with $Z=1/2$, located at ${\mathbf{r}'}$) \begin{equation}\label{eq:hydrogene} -\tfrac12\nabla^2_\mathbf{r} - (2|\mathbf{r}-{\mathbf{r}'}|)^{-1} \geq -\tfrac18 \end{equation} for all ${\mathbf{r}'}\in\mathbb{R}^3$, together with the fact that $D(\rho_\gamma,\rho_\gamma)\geq 0$ to get \begin{align*} \mathcal E^{\rm M}(\gamma) & \geq \frac12\iint \left(|\nabla_\mathbf{r} \gamma^{1/2}(\mathbf{x},\mathbf{x}')|^2 - \frac{|\gamma^{1/2}(\mathbf{x},\mathbf{x}')|^2}{|\mathbf{r}-{\mathbf{r}'}|}\right)\,d\mathbf{x}\,d\mathbf{x}' \\ & \geq -\frac18 \iint |\gamma^{1/2}(\mathbf{x},\mathbf{x}')|^2 \,d\mathbf{x}\,d\mathbf{x}' =-\frac18\tr\gamma. \end{align*} This proves the lower bound on $E^{\rm M}(N)$ and $E^{\rm M}_\leq (N)$. To prove the non-existence of a minimizer we denote by $g(\mathbf{r}-{\mathbf{r}'})$ the ground state of $-\nabla_{\mathbf{r}}^2-|\mathbf{r}-{\mathbf{r}'}|^{-1}$, i.e., \begin{equation}\label{eq:g} g(\mathbf{r}-{\mathbf{r}'}) = (8\pi)^{-1/2} e^{-|\mathbf{r}-{\mathbf{r}'}|/2}, \end{equation} and note that the inequality $\geq $ in \eqref{eq:hydrogene} is strict (i.e., it is $>$) as a quadratic form, except for multiples of the function $g(\mathbf{r} -{\mathbf{r}'})$.
Hence the above lower bound on $\mathcal E^{\rm M}(\gamma)$ is strict unless $\gamma^{1/2}(\mathbf{x},\mathbf{x}') = c_{\sigma\sigma'}({\mathbf{r}'})g(\mathbf{r}-{\mathbf{r}'})$. By self-adjointness, $c_{\sigma\sigma'}$ has to be a constant, and since $\gamma\in\mathfrak S^1$, the set of trace class operators, $c_{\sigma\sigma'}=0$. But this means that there exists no minimizer. \underline{Upper bound}: We define a trial density matrix $\gamma$ by defining its square root: \begin{equation}\label{eq:freetrial} \gamma^{1/2}(\mathbf{x},\mathbf{x}') = \chi (\mathbf{r})^* g(\mathbf{r}-{\mathbf{r}'}) \chi ({\mathbf{r}'}) \, q^{-1/2}\delta_{\sigma,\sigma'}. \end{equation} Here, $g$ is the same as in \eqref{eq:g} and $\chi$ is a smooth function which will be specified later. Note that this definition makes sense, since the operator whose kernel is given on the right side of \eqref{eq:freetrial} is non-negative. This follows from the positivity of $\widehat{g}$, the Fourier transform of $g$, given by \begin{equation*} \widehat g(\mathbf{p}) = \frac{8}\pi\frac1{(1+4|\mathbf{p}|^2)^2}. \end{equation*} An easy calculation shows that \begin{equation*} \tr(-\nabla^2_\mathbf{r}\gamma) = \iint \left(|\chi(\mathbf{r})|^2 |\chi({\mathbf{r}'})|^2 (-\nabla^2_\mathbf{r} g(\mathbf{r}-{\mathbf{r}'})) g(\mathbf{r}-{\mathbf{r}'}) + |\nabla \chi(\mathbf{r})|^2 g(\mathbf{r}-{\mathbf{r}'})^2\, |\chi({\mathbf{r}'})|^2 \right)d\mathbf{r} \,d{\mathbf{r}'}. \end{equation*} Using the eigenvalue equation for $g$ one finds \begin{equation*} \tr(-\nabla^2_\mathbf{r}\gamma) = 2 X(\gamma^{1/2}) - \frac14\tr\gamma + \iint |\nabla \chi(\mathbf{r})|^2 g(\mathbf{r}-{\mathbf{r}'})^2\, |\chi({\mathbf{r}'})|^2 \,d\mathbf{r}\,d{\mathbf{r}'}\,.
\end{equation*} The upper bound will follow from this if we can find functions $\chi_L$ (where $L$ is some free parameter) such that for $\gamma_L$ defined via $\chi_L$, \begin{equation}\label{eq:choicechi1} \gamma_L\leq 1, \ {\rm as\ an\ operator}, \qquad\qquad \tr\gamma_L\to N, \end{equation} \begin{equation}\label{eq:choicechi2} \iint |\nabla \chi_L(\mathbf{r})|^2 g(\mathbf{r}-{\mathbf{r}'})^2\, |\chi_L({\mathbf{r}'})|^2 \,d\mathbf{r} \, d{\mathbf{r}'} \to 0, \qquad {\rm and} \qquad \qquad D(\rho_{\gamma_L},\rho_{\gamma_L}) \to 0 \end{equation} as $L\to \infty$. We shall choose $\chi_L$ of the form $\chi_L(\mathbf{r}) = L^{-3/4}\chi(\mathbf{r}/L)$ for a fixed smooth function $\chi\geq 0$ satisfying $\|\chi\|_4^4=N$. We note that for any $L^2 $ function $\psi$ (and with $\widehat{\cdots}$ denoting the Fourier transform) \begin{equation*} (\psi,\gamma^{1/2}_L\psi) = (2\pi)^{3/2} \int \widehat g(\mathbf{p}) |\widehat{(\chi_L\psi)}(\mathbf{p})|^2\,d\mathbf{p} \leq (2\pi)^{3/2} \|\widehat g\|_\infty \|\chi_L\|_\infty^2 \|\psi\|^2_2, \end{equation*} which is less than or equal to $\|\psi\|^2_2$ for $L$ large, since $\|\chi_L\|_\infty\to 0$. This implies the first condition in \eqref{eq:choicechi1}. To check the second one, we write \begin{equation*} \tr\gamma_L= (2\pi)^{3/2} \int \widehat{(g^2)}(\mathbf{p}) | \widehat{(\chi_L^2)}(\mathbf{p})|^2\, d \mathbf{p}. \end{equation*} Now $|\widehat{(\chi_L^2)}(\mathbf{p})|^2=L^3|\widehat{(\chi^2)}(L\mathbf{p})|^2$, which converges to $N\delta(\mathbf{p})$ as $L\to\infty$ (recall that $\|\chi\|_4^4=N$). Therefore \begin{equation*} \tr\gamma_L \to (2\pi)^{3/2} \widehat{(g^2)}(0) N = N.
\end{equation*} To check the conditions in \eqref{eq:choicechi2} we estimate (again using that $\|g\|_2 =1$), \begin{equation*} \iint |\nabla \chi_L(\mathbf{r})|^2 g(\mathbf{r}-{\mathbf{r}'})^2\, \chi_L({\mathbf{r}'})^2\,d\mathbf{r} \,d{\mathbf{r}'} \leq \|\chi_L\|_\infty^2 \int |\nabla \chi_L(\mathbf{r})|^2 \,d\mathbf{r} = L^{-2} \|\chi\|_\infty^2 \|\nabla\chi\|^2. \end{equation*} Moreover, \begin{equation*} D(\rho_{\gamma_L},\rho_{\gamma_L}) = \frac1{2L} \iint \frac{\chi^2(\mathbf{r})\phi_L(\mathbf{r})\phi_L({\mathbf{r}'})\chi^2({\mathbf{r}'})}{|\mathbf{r}-{\mathbf{r}'}|}\,d\mathbf{r}\,d{\mathbf{r}'} \end{equation*} where $\phi_L(\mathbf{r}) = L^3 \int g^2(L(\mathbf{r}-{\mathbf{r}'}))\chi^2({\mathbf{r}'})\,d{\mathbf{r}'}$. Since $\phi_L(\mathbf{r})\to\chi^2(\mathbf{r})$ as $L\to \infty$, we conclude that $D(\rho_{\gamma_L},\rho_{\gamma_L})= L^{-1} D[\chi^4] + o(L^{-1})$ by dominated convergence. Hence \eqref{eq:choicechi2} holds, and the proof is complete. \end{proof} \textit{Remark:} One might ask whether $X(\gamma^{1/2})$ can be bounded from above in terms of the usual Dirac type estimate for the exchange energy, $\int\rho_\gamma(\mathbf{r})^{4/3}\, d\mathbf{r}$ (cf. \cite{Lieb1981}). However, this is not the case, as the following example shows: define $\gamma_L$, as in the proof of Proposition \ref{z0}, by $\gamma_L^{1/2}(\mathbf{x},\mathbf{x}')=L^{-3/2}\chi(\mathbf{r}/L)g(\mathbf{r}-{\mathbf{r}'})\chi({\mathbf{r}'}/L)q^{-1/2}\delta_{\sigma,\sigma'}$, and carry out calculations similar to those done above. We find that \begin{align*} X(\gamma_L^{1/2}) & \to \|\chi\|_4^4 \int\frac{|g(\mathbf{r})|^2}{2|\mathbf{r}|}\,d\mathbf{r} \ , \\ \int \rho_{\gamma_L}(\mathbf{r})^{4/3}\,d\mathbf{r} & \sim L^{-1} \|\chi\|_{16/3}^{16/3} \ , \\ \int \rho_{\gamma_L}(\mathbf{r})\,d\mathbf{r} & \to \|\chi\|_4^4 \ . \end{align*} Hence a bound in terms of the $4/3$-norm cannot hold. This example can be traced back to Cioslowski and Pernal \cite{CioslowskiPernal1999}.
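For the reader's convenience, we sketch the heuristic scaling behind these three limits (the rigorous versions are the estimates carried out above). Since $\|g\|_2=1$ and $g^2$ is concentrated on a scale of order one, while $\chi^2(\cdot/L)$ varies on the much larger scale $L$, the density of $\gamma_L$ satisfies \begin{equation*} \rho_{\gamma_L}(\mathbf{r}) = L^{-3}\chi^2(\mathbf{r}/L)\int g^2(\mathbf{r}-{\mathbf{r}'})\,\chi^2({\mathbf{r}'}/L)\,d{\mathbf{r}'} \approx L^{-3}\chi^4(\mathbf{r}/L)\,. \end{equation*} Consequently, \begin{equation*} \int \rho_{\gamma_L}(\mathbf{r})^{4/3}\,d\mathbf{r} \approx L^{-4}\int \chi^{16/3}(\mathbf{r}/L)\,d\mathbf{r} = L^{-1}\|\chi\|_{16/3}^{16/3}\,, \qquad \int \rho_{\gamma_L}(\mathbf{r})\,d\mathbf{r} \approx \|\chi\|_4^4\,, \end{equation*} in accordance with the second and third limits displayed above. The first limit follows in the same way, since on the relevant scale $|\mathbf{r}-{\mathbf{r}'}|=O(1)$ one may replace $\chi^2({\mathbf{r}'}/L)$ by $\chi^2(\mathbf{r}/L)$.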
\section{Minimizer in the Case $Z>0$} We return here, and in the remainder of this paper, to the general case in which all $Z_j >0$. We investigate the functional $\widehat{\mathcal{E}}^{\rm M}$ in \eqref{ehatfunctional} and the corresponding relaxed minimization problem given in \eqref{relaxed}. Our goal is to show that there is an energy minimizing $\gamma$ for this problem and that its trace is $\tr \gamma = N$ whenever $N \leq Z= \sum_j Z_j$. The main result of this section is contained in the following two theorems, whose elaborate proof will be given in several parts. \begin{theorem}\label{exmini} For any $Z>0$ and $N>0$ one has $\widehat E^{\rm M}_\leq (N)<0$ and the infimum in \eqref{relaxed} is attained. \end{theorem} As explained in the introduction, we do not know how to prove that the minimizer is unique. The strict convexity of the direct energy $D(\rho_\gamma,\rho_\gamma)$, however, does imply that all minimizing $\gamma$'s have the same (spin summed) density $\rho_\gamma (\mathbf{r})$. \begin{theorem}\label{fulltrace} Assume that $N\leq Z$. Then a minimizer of \eqref{relaxed} has trace $N$. \end{theorem} In particular, this result implies that in the original problem \eqref{sqrtenergy} the infimum is achieved in case $N\leq Z$. The critical number $N_c$ mentioned in the introduction is thus at least $Z$. \subsection{Proof of Theorem~\ref{exmini}} By Proposition \ref{z0}, the functional $\widehat{\mathcal{E}}^{\rm M}(\gamma)$ is non-negative if $Z=0$. By using a trial density matrix, we will first show that it assumes negative values as soon as $Z$ is positive. \begin{lemma}\label{negative} For any $Z>0$ and $N>0$ one has $\widehat E^{\rm M}_\leq (N)<0$. \end{lemma} \begin{proof} Without loss of generality we may assume that there is only one nucleus of charge $Z$ located at the origin $\mathbf{r}=0$. We use the same family $\gamma_L$ of trial density matrices as in the proof of the upper bound in Proposition \ref{z0}.
Using the same estimates, we have \begin{equation}\label{eq:z0trial} \mathcal {\widehat E}^{\rm M}(\gamma_L) = -Z \tr |\mathbf{r}|^{-1} \gamma_L + \frac 1L D[\chi^4] + o(L^{-1}) \qquad \mbox{as $L\to\infty$.} \end{equation} Since $L^3 \int g^2(L(\mathbf{r}-{\mathbf{r}'}))\chi^2({\mathbf{r}'})\,d{\mathbf{r}'} \to\chi^2(\mathbf{r})$, we have $\tr|\mathbf{r}|^{-1}\gamma_L = L^{-1} \int |\mathbf{r}|^{-1} \chi^4(\mathbf{r}) \,d\mathbf{r} + o(L^{-1})$. Hence, \begin{equation}\label{eq:ztrial} \widehat{\mathcal{E}}^{\rm M} (\gamma_L) = L^{-1}\left(- Z \int |\mathbf{r}|^{-1} \chi^4(\mathbf{r}) \,d\mathbf{r} + D[\chi^4]\right) + o(L^{-1}) \qquad \mbox{as $L\to\infty$.} \end{equation} For $Z>0$ and $N=\|\chi\|_4^4$ small enough, the expression in brackets can clearly be made negative by an appropriate choice of $\chi$, since the attraction term is linear in $\chi^4$ whereas $D[\chi^4]$ is quadratic. This shows that $\widehat{{E}}^{\rm M}_\leq (N)<0$ for small $N$, and hence for all $N$, because $\widehat{{E}}^{\rm M}_\leq$ is non-increasing in $N$ (the infimum in \eqref{relaxed} being taken over $\tr\gamma\leq N$). \end{proof} \begin{proposition}\label{exmin} Let $Z>0$ and $N>0$. There exists a minimizing sequence $\gamma_j$ for \eqref{relaxed} which converges in $\mathfrak S^1$, the space of trace-class operators, i.e., there is a $\gamma$ such that $\tr |\gamma_j -\gamma| \to 0$. \end{proposition} Before giving the proof of this proposition, we collect some useful auxiliary material. \begin{lemma}\label{xhardy} For every $\epsilon>0$ \begin{equation}\label{eq:xhardy} \iint_{\{|\mathbf{r}-{\mathbf{r}'}|<\epsilon\}} \frac{|\gamma^{1/2}(\mathbf{x},\mathbf{x}')|^2}{|\mathbf{r}-{\mathbf{r}'}|}\,d\mathbf{x}\,d\mathbf{x}' \leq 4\epsilon \tr(-\nabla^2)\gamma \end{equation} and \begin{equation}\label{eq:xhardytotal} X(\gamma^{1/2}) \leq \frac\epsilon4 \tr(-\nabla^2)\gamma + \frac1{4\epsilon} \tr\gamma\ . \end{equation} \end{lemma} \begin{proof} The first inequality can be easily deduced from Hardy's inequality, which states that \begin{equation}\label{hard} -\nabla^2 \geq \frac 1{4|\mathbf{r}|^2}\,.
\end{equation} For the second inequality, we use the well known expression for the ground state energy of the hydrogen atom, namely, \begin{equation}\label{hydr} -\nabla^2 - \frac{z}{|\mathbf{r}|} \geq -\frac{z^2}4 \ , \end{equation} from which it follows (with $z= 2/\epsilon$) that for every $\mathbf{x}'$ \begin{equation} \frac 12 \int \frac{|\gamma^{1/2}(\mathbf{x},\mathbf{x}')|^2}{|\mathbf{r}-{\mathbf{r}'}|}\, d\mathbf{x} \leq \frac{\epsilon}{4} \int |\nabla \gamma^{1/2}(\mathbf{x},\mathbf{x}') |^2 \, d\mathbf{x} +\frac{1}{4 \epsilon} \int |\gamma^{1/2}(\mathbf{x},\mathbf{x}') |^2\, d\mathbf{x} \ . \end{equation} The lemma follows by integrating over $\mathbf{x}'$. \end{proof} \begin{lemma}\label{xlocal} Let $\chi(\mathbf{r})$ satisfy $|\chi(\mathbf{r})|\leq 1$. Then \begin{equation*} X(\chi^*\gamma^{1/2} \chi) \leq X((\chi^*\gamma\chi)^{1/2}). \end{equation*} \end{lemma} \begin{proof} For convenience we introduce the characteristic function of a ball of radius $r$ centered at $\mathbf{z}$ \begin{equation} B_{\mathbf{z},r}(\mathbf{r})= \begin{cases} 1& |\mathbf{r}-\mathbf{z}|<r\\ 0& |\mathbf{r}-\mathbf{z}|\geq r. \end{cases} \end{equation} Writing the Coulomb kernel as \begin{equation} \label{eq:fefferman} |\mathbf{r}-{\mathbf{r}'}|^{-1} = \frac1\pi \int_0^\infty \int_{\mathbb{R}^3} B_{\mathbf{z},r}(\mathbf{r}) B_{\mathbf{z},r}({\mathbf{r}'}) \,d\mathbf{z}\,\frac{dr}{r^5} \end{equation} (Fefferman and de la Llave \cite{FeffermandelaLlave1986}), we get \begin{equation}\label{eq:xdecomposition} X(\delta) = \frac1{2\pi} \int_0^\infty \int_{\mathbb{R}^3} \tr \left(\delta B_{\mathbf{z},r} \delta B_{\mathbf{z},r}\right) \,d\mathbf{z}\,\frac{dr}{r^5}. 
\end{equation} It follows from $|\chi|\leq 1$ and the monotonicity of the operator square root that $$\chi^*\gamma^{1/2}\chi= \left((\chi^*\gamma^{1/2}\chi)( \chi^*\gamma^{1/2}\chi )\right)^{1/2} \leq (\chi^*\gamma^{1/2} \gamma^{1/2} \chi)^{1/2} =(\chi^* \gamma\chi)^{1/2}.$$ Hence \begin{equation*} \tr \left(\chi^*\gamma^{1/2}\chi B_{\mathbf{z},r} \chi^*\gamma^{1/2}\chi B_{\mathbf{z},r}\right) \leq \tr \left((\chi^*\gamma\chi)^{1/2} B_{\mathbf{z},r} (\chi^*\gamma\chi)^{1/2} B_{\mathbf{z},r}\right). \end{equation*} The assertion now follows from \eqref{eq:xdecomposition}. \end{proof} \begin{proof}[Proof of Proposition \ref{exmin}] We choose an arbitrary minimizing sequence $\gamma_j$ for \eqref{relaxed} and, after passing to a subsequence (if necessary), assume that $\tr\gamma_j\to\tilde N\in[0,N]$. It follows from \eqref{eq:xhardytotal} and the hydrogen bound $\tr Z_k|\mathbf{r} - \mathbf{R}_k|^{-1}\gamma \leq (Z_k\epsilon/4Z)\tr(-\nabla^2)\gamma +(Z_k Z/\epsilon)\tr\gamma$ that \begin{equation}\label{eq:kinbound} \frac 12(1-\epsilon)\tr(-\nabla^2)\gamma_j \leq \widehat{\mathcal{E}}^{\rm M}(\gamma_j) + \frac 1\epsilon(Z^2 + 1/4)\tr\gamma_j. \end{equation} Hence the sequence $(-\nabla^2+1)^{1/2}\gamma_j(-\nabla^2+1)^{1/2}$ is bounded in $\mathfrak S^1$ and, by the Banach-Alaoglu theorem (see \cite{LiebLoss2001}), there exists a $\gamma$ such that, after passing to a subsequence (if necessary), $\tr K\gamma_j\to \tr K\gamma$ for any operator $K$ such that $(-\nabla^2+1)^{-1/2}K(-\nabla^2+1)^{-1/2}$ is compact. This compactness condition is satisfied if $K$ is simply multiplication by some function $f\in L^p(\mathbb{R}^3)$ for some $3/2\leq p<\infty$ (see \cite[section 13.4]{ReedSimon1978}). In this case we have that \begin{equation}\label{eq:densityconv} \int f(\mathbf{r}) \rho_{\gamma_j}(\mathbf{r}) \,d\mathbf{r} = \tr f \gamma_j \to \tr f \gamma = \int f (\mathbf{r}) \rho_\gamma(\mathbf{r}) \,d\mathbf{r} \ .
\end{equation} In particular, we can take $f$ in \eqref{eq:densityconv} to be the Coulomb potential since this potential can be written as the sum of two functions, one of which is in $L^p(\mathbb{R}^3)$ and the other in $L^q(\mathbb{R}^3)$ with $3/2 < p < 3$ and $3< q < \infty$. Note that $0\leq\gamma\leq 1$ and, by the lower semicontinuity of the $\mathfrak S^1$-norm, \begin{equation*} M = \tr\gamma \leq \liminf_{j\to\infty}\tr\gamma_j=\tilde N\leq N. \end{equation*} We claim that $\gamma\not\equiv 0$ (and hence $M>0$). Indeed, by Lemma \ref{negative} one has $\mathcal{\widehat E}^{\rm M}(\gamma_j)\leq-\epsilon$ for some $\epsilon>0$ and all sufficiently large $j$. Hence $\tr V_c\gamma_j\geq\epsilon$ and by \eqref{eq:densityconv} also $\tr V_c\gamma\geq\epsilon$. Clearly, $\gamma_j\rightharpoonup\gamma$ in the sense of weak operator convergence. If $M=\tilde N$, then also $\tr\gamma_j\to\tr\gamma$, and thus $\gamma_j\to\gamma$ in $\mathfrak S^1$ (see Theorem A.6 in \cite{Simon1979T}) and we are done. We are thus left with the case $M<\tilde N$. Our strategy will be to construct a minimizing sequence $\gamma_j^0$ out of the $\gamma_j$ which converges to $\gamma$ in $\mathfrak S^1$. We choose a quadratic partition of unity, $(\chi^0)^2 + (\chi^1)^2 \equiv 1$, where $\chi^0$ is a smooth, symmetric decreasing function with $\chi^0({\bf 0})=1$, $\chi^0(\mathbf{r})<1$ if $|\mathbf{r}|>0$ and $\chi^0(\mathbf{r})=0$ if $|\mathbf{r}|\geq 2$. For fixed $j$, $\tr(\chi^0(\mathbf{r}/R))^2\gamma_j$ is a continuous function of $R$ which increases from $0$ to $\tr\gamma_j$. If we restrict ourselves to large $j$, then $\tr\gamma_j>M$ and we can choose an $R_j$ such that $\tr(\chi^0(\mathbf{r}/R_j))^2\gamma_j= M$. We write $\chi_j^\nu(\mathbf{r})=\chi^\nu(\mathbf{r}/R_j)$ and $\gamma_j^\nu=\chi^\nu_j\gamma_j\chi^\nu_j$ for $\nu=0,1$. We claim $R_j\to\infty$. To see this, assume the contrary, namely that there is a subsequence that converges to some $R<\infty$.
Then, for this subsequence, $\chi_j^0(\mathbf{r})^2\to \chi^0(\mathbf{r}/R)^2$ strongly in any $L^p$. Since $\rho_{\gamma_j} \rightharpoonup \rho_\gamma$ weakly in $L^p$ for $1<p<3$ by \eqref{eq:densityconv}, one has \begin{equation*} \int\chi_j^0(\mathbf{r})^2 \rho_{\gamma_j}(\mathbf{r}) \,d\mathbf{r} \to \int \chi^0(\mathbf{r}/R)^2 \rho_{\gamma}(\mathbf{r})\,d\mathbf{r}\,. \end{equation*} But, by definition, the left side is independent of $j$ and equals $\int \chi_j^0(\mathbf{r})^2 \rho_{\gamma_j}(\mathbf{r})\,d\mathbf{r} = M = \int\rho_{\gamma}(\mathbf{r})\,d\mathbf{r}$. This is a contradiction, since $\chi^0(\mathbf{r})^2<1$ almost everywhere and $\gamma\not\equiv 0$. Therefore $\lim_{j\to \infty} R_j=\infty$. We note that $\gamma_j^0\rightharpoonup\gamma$ in the sense of weak operator convergence. (It suffices to check the weak convergence on functions of compact support, since the $\gamma_j^0$ remain uniformly bounded.) By construction, $\tr\gamma_j^0=\tr\gamma$, so that $\gamma_j^0\to\gamma$ in $\mathfrak S^1$ (again by Theorem A.6 in \cite{Simon1979T}) and it remains to prove that $\gamma_j^0$ is a minimizing sequence. For the kinetic energy we use the IMS formula \cite{Cyconetal1987} \begin{equation*} \tr(-\nabla^2\gamma_j) = \tr(-\nabla^2\gamma_j^0) + \tr(-\nabla^2\gamma_j^1) - \tr[(|\nabla\chi_j^0|^2+|\nabla\chi_j^1|^2)\gamma_j]. \end{equation*} Since $R_j\to\infty$, one has $\||\nabla\chi_j^0|^2+|\nabla\chi_j^1|^2\|_\infty\to 0$ and therefore \begin{equation}\label{eq:minkin} \tr(-\nabla^2)\gamma_j = \tr(-\nabla^2)\gamma_j^0 + \tr(-\nabla^2)\gamma_j^1 + o(1). \end{equation} For the attraction term we use again that $R_j\to\infty$, so $\tr|\mathbf{r}-{\bf R}_k|^{-1}\gamma_j^1\to 0$ and \begin{equation}\label{eq:minattr} \tr|\mathbf{r}-{\bf R}_k|^{-1}\gamma_j = \tr|\mathbf{r}-{\bf R}_k|^{-1}\gamma_j^0 + o(1). 
\end{equation} For the repulsion term we use that $\rho_{\gamma_j^0}\leq \rho_{\gamma_j}$ pointwise and get \begin{equation}\label{eq:minrep} D(\rho_{\gamma_j},\rho_{\gamma_j})\geq D(\rho_{\gamma_j^0},\rho_{\gamma_j^0}). \end{equation} Finally, we turn to the exchange term, which we write as \begin{align*} X(\gamma_j^{1/2}) = X(\chi_j^0 \gamma_j^{1/2} \chi_j^0) + X(\chi_j^1 \gamma_j^{1/2} \chi_j^1) + 2 X(\chi_j^0 \gamma_j^{1/2} \chi_j^1). \end{align*} We shall show that \begin{equation}\label{eq:minx} X(\gamma_j^{1/2}) \leq X((\gamma_j^0)^{1/2}) + X((\gamma_j^1)^{1/2}) + o(1). \end{equation} It follows from Lemma \ref{xlocal} that $X(\chi_j^\nu \gamma_j^{1/2} \chi_j^\nu) \leq X((\gamma_j^\nu)^{1/2})$. To show that the off-diagonal term tends to zero we decompose, for any $\epsilon>0$, \begin{align*} X(\chi_j^0 \gamma_j^{1/2} \chi_j^1) = & \iint_{\{|\mathbf{r}-{\mathbf{r}'}|<\epsilon/2\}} \frac{|\chi_j^0(\mathbf{r})\gamma_j^{1/2}(\mathbf{x},\mathbf{x}')\chi_j^1({\mathbf{r}'})|^2}{2 |\mathbf{r}-{\mathbf{r}'}|}\,d\mathbf{x}\,d\mathbf{x}' \\ & + \iint_{\{|\mathbf{r}-{\mathbf{r}'}|\geq\epsilon/2\}} \frac{|\chi_j^0(\mathbf{r})\gamma_j^{1/2}(\mathbf{x},\mathbf{x}')\chi_j^1({\mathbf{r}'})|^2}{2 |\mathbf{r}-{\mathbf{r}'}|}\,d\mathbf{x}\,d\mathbf{x}'. \end{align*} The term with the singularity is controlled by \eqref{eq:xhardy}, \begin{align*} \iint_{\{|\mathbf{r}-{\mathbf{r}'}|<\epsilon/2\}} \frac{|\chi_j^0(\mathbf{r})\gamma_j^{1/2}(\mathbf{x},\mathbf{x}')\chi_j^1({\mathbf{r}'})|^2}{2 |\mathbf{r}-{\mathbf{r}'}|}\,d\mathbf{x}\,d\mathbf{x}' & \leq \epsilon \tr(-\nabla^2)\chi_j^0\gamma_j^{1/2}(\chi_j^1)^2\gamma_j^{1/2}\chi_j^0 \\ & \leq \epsilon \tr(-\nabla^2)\chi_j^0\gamma_j\chi_j^0 . \end{align*} \renewcommand{\thefootnote}{${1}$} This can be made arbitrarily small by choosing $\epsilon$ small. We pick some $\delta>0$ and decompose the term without singularity into two pieces, depending on whether $|{\mathbf{r}'}|<\delta R_j$ or not.
In the first case we estimate$^1$ \footnotetext{The following two paragraphs slightly differ from the published version in Phys. Rev. A \textbf{76} (2007), 052517. We are grateful to M. Tiefenbeck for pointing out an error at this point of the proof.} \begin{align}\label{neweq} & \iint_{\{|\mathbf{r}-{\mathbf{r}'}|\geq \epsilon/2, \, |{\mathbf{r}'}|<\delta R_j \}} \frac{|\chi_j^0(\mathbf{r})\gamma_j^{1/2}(\mathbf{x},\mathbf{x}')\chi_j^1({\mathbf{r}'})|^2}{2|\mathbf{r}-{\mathbf{r}'}|} \,d\mathbf{x} d\mathbf{x}' \\ \nonumber & \qquad \leq \epsilon^{-1} \iint_{\{|{\mathbf{r}'}|<\delta R_j \}} |\gamma_j^{1/2}(\mathbf{x},\mathbf{x}')\chi_j^1({\mathbf{r}'})|^2 \,d\mathbf{x} d\mathbf{x}' \\ \nonumber & \qquad = \epsilon^{-1} \tr\chi_{\{|\mathbf{r}|<\delta R_j\}} (\chi_j^1)^2 \gamma_j \\ \nonumber & \qquad \leq \epsilon^{-1} N \|\chi_{\{|\mathbf{r}|<\delta R_j\}} \chi_j^1\|_\infty^2 \,. \end{align} Since $\chi^1$ is smooth with $\chi^1(0)=0$, the supremum-norm of the function $\chi_{\{|\mathbf{r}|<\delta R_j\}} \chi_j^1$ (which is independent of $R_j$ by scaling) can be made arbitrarily small by choosing $\delta$ small. Hence the double integral \eqref{neweq} can be made arbitrarily small. In the complementary region one may argue as follows. We pick some $A$ and choose $j$ so large that $R_j> \delta^{-1} A$. 
By estimating $|\mathbf{r}-{\mathbf{r}'}|\geq \delta R_j -A$ if $|\mathbf{r}|<A$ and $|{\mathbf{r}'}|>\delta R_j$, we obtain \begin{align*} & \iint_{\{|\mathbf{r}-{\mathbf{r}'}|\geq\epsilon/2,\, |{\mathbf{r}'}|\geq \delta R_j \}} \frac{|\chi_j^0(\mathbf{r})\gamma_j^{1/2}(\mathbf{x},\mathbf{x}')\chi_j^1({\mathbf{r}'})|^2}{2 |\mathbf{r}-{\mathbf{r}'}|}\,d\mathbf{x}\,d\mathbf{x}' \\ & \qquad \leq \iint_{\{|\mathbf{r}-{\mathbf{r}'}|\geq\epsilon/2, |\mathbf{r}|\geq A \}} \frac{|\chi_j^0(\mathbf{r})\gamma_j^{1/2}(\mathbf{x},\mathbf{x}')|^2}{2 |\mathbf{r}-{\mathbf{r}'}|}\,d\mathbf{x}\,d\mathbf{x}' + \iint_{\{|\mathbf{r}|< A,\, |{\mathbf{r}'}|\geq \delta R_j \}} \frac{|\gamma_j^{1/2}(\mathbf{x},\mathbf{x}')|^2}{2 |\mathbf{r}-{\mathbf{r}'}|}\,d\mathbf{x}\,d\mathbf{x}' \\ & \qquad \leq \epsilon^{-1} \iint_{\{|\mathbf{r}|\geq A \}} \chi_j^0(\mathbf{r})^2|\gamma_j^{1/2}(\mathbf{x},\mathbf{x}')|^2 \,d\mathbf{x}\,d\mathbf{x}' + (2(\delta R_j-A))^{-1} \iint |\gamma_j^{1/2}(\mathbf{x},\mathbf{x}')|^2 \,d\mathbf{x}\,d\mathbf{x}' \\ & \qquad = \epsilon^{-1} \tr \chi_{\{|\mathbf{r}|\geq A \}}\gamma_j^0 + (2(\delta R_j-A))^{-1} \tr\gamma_j. \end{align*} Since $\gamma_j^0\to\gamma$ in $\mathfrak S^1$, one has $\tr \chi_{\{|\mathbf{r}|\geq A \}}\gamma_j^0 \to \tr\chi_{\{|\mathbf{r}|\geq A \}}\gamma$. This can be made arbitrarily small by choosing $A$ large. Since $R_j\to\infty$, the term $(2(\delta R_j-A))^{-1} \tr\gamma_j$ converges to $0$. This proves \eqref{eq:minx}. Collecting \eqref{eq:minkin}--\eqref{eq:minx} we find that \begin{equation*} \mathcal{\widehat E}^{\rm M}(\gamma_j) \geq \mathcal{\widehat E}^{\rm M}(\gamma_j^0) + \left( -\frac 12 \tr \nabla^2 \gamma_j^1 - X((\gamma_j^1)^{1/2})+ \frac1{8}\tr\gamma_j^1 \right) + o(1). \end{equation*} We have shown in the proof of Proposition \ref{z0} that the term in brackets is non-negative.
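For completeness, we note that the non-negativity of the term in brackets is also immediate from Lemma \ref{xhardy}: choosing $\epsilon=2$ in \eqref{eq:xhardytotal} gives, for any density matrix $\gamma$, \begin{equation*} X(\gamma^{1/2}) \leq \frac12 \tr(-\nabla^2)\gamma + \frac18 \tr\gamma\,. \end{equation*}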
Hence \begin{equation*} \liminf_{j\to\infty} \mathcal{\widehat E}^{\rm M}(\gamma_j) \geq \liminf_{j\to\infty} \mathcal{\widehat E}^{\rm M}(\gamma_j^0), \end{equation*} which shows that $\gamma_j^0$ is a minimizing sequence. This concludes the proof. \end{proof} \begin{proposition}\label{lsc} Let $\gamma_j\to\gamma$ in $\mathfrak S^1$. Then \begin{equation}\label{eq:lsc} \liminf_{j\to\infty} \mathcal{\widehat E}^{\rm M}(\gamma_j) \geq \mathcal{\widehat E}^{\rm M}(\gamma). \end{equation} \end{proposition} \begin{proof} The bound \eqref{eq:kinbound} shows that $E=\liminf_{j\to\infty} \mathcal{\widehat E}^{\rm M}(\gamma_j)>-\infty$. Moreover, we may assume that $E<\infty$, for otherwise there is nothing to prove. After passing to a subsequence (if necessary), we may assume that $\mathcal{\widehat E}^{\rm M}(\gamma_j)\to E$. As in the proof of Proposition \ref{exmin} one sees that, after passing to a subsequence if necessary, $\tr K\gamma_j\to \tr K\gamma$ for any operator $K$ such that $(-\nabla^2+1)^{-1/2}K(-\nabla^2+1)^{-1/2}$ is compact. In particular, \eqref{eq:densityconv} holds. By weak lower-semicontinuity we infer that \begin{equation}\label{eq:lsckin} \tr\left(-\mbox{$\frac12$} \nabla^2+1/8\right)\gamma \leq \liminf_{j\to\infty} \tr\left(-\mbox{$\frac12$}\nabla^2+1/8\right)\gamma_j. \end{equation} Now we turn to the repulsion term. Since $D(\rho_{\gamma_j},\rho_{\gamma_j})$ is bounded we may, passing to a subsequence (if necessary), assume that $\rho_{\gamma_j}$ converges weakly to some $\rho$ with respect to the $D$-scalar product. With the help of \eqref{eq:densityconv} one concludes that $\rho=\rho_\gamma$. Weak lower-semicontinuity with respect to the $D$-norm implies that \begin{equation}\label{eq:lscrep} D(\rho_\gamma,\rho_\gamma) \leq \liminf_{j\to\infty} D(\rho_{\gamma_j},\rho_{\gamma_j}).
\end{equation} The continuity of the attraction term follows from \eqref{eq:densityconv}, since $|\mathbf{r}|^{-1}\in L^{3/2} + L^p$ for $p>3$; therefore \begin{equation}\label{eq:lscattr} \lim_{j\to\infty}\tr V_c\gamma_j=\tr V_c\gamma \ . \end{equation} Finally, we prove continuity of the exchange term. As in the proof of Proposition \ref{exmin} we decompose, for any $\epsilon>0$, \begin{align*} |X(\gamma_j^{1/2}) - X(\gamma^{1/2})| & \leq \iint_{\{|\mathbf{r}-{\mathbf{r}'}|<\epsilon/2\}} \frac{|\gamma_j^{1/2}(\mathbf{x},\mathbf{x}')|^2 + |\gamma^{1/2}(\mathbf{x},\mathbf{x}')|^2}{2 |\mathbf{r}-{\mathbf{r}'}|}\,d\mathbf{x}\,d\mathbf{x}' \\ & \qquad + \iint_{\{|\mathbf{r}-{\mathbf{r}'}|\geq\epsilon/2\}} \frac{\left| |\gamma_j^{1/2}(\mathbf{x},\mathbf{x}')|^2 -|\gamma^{1/2}(\mathbf{x},\mathbf{x}')|^2\right|}{2 |\mathbf{r}-{\mathbf{r}'}|}\,d\mathbf{x}\,d\mathbf{x}' \ . \end{align*} According to Lemma \ref{xhardy} the term involving the singularity is bounded by $\epsilon \tr(-\nabla^2)(\gamma_j + \gamma )$, which can be made arbitrarily small (recall that $\tr[-\nabla^2(\gamma_j + \gamma)]$ is bounded). To treat the term without the singularity we use the fact that the mapping $K\mapsto |K|^{1/2}$ is continuous from $\mathfrak S^1$ to $\mathfrak S^2$ (see Example 2 after Theorem 2.21 in \cite{Simon1979T}).
Hence $\gamma_j^{1/2}\to\gamma^{1/2}$ in $\mathfrak S^2$, and we can bound \begin{multline}\nonumber \left( \iint_{\{|\mathbf{r}-{\mathbf{r}'}|\geq\epsilon/2\}} \frac{||\gamma_j^{1/2} (\mathbf{x},\mathbf{x}')|^2 -|\gamma^{1/2}(\mathbf{x},\mathbf{x}')|^2|}{2 |\mathbf{r}-{\mathbf{r}'}|}\,d\mathbf{x}\,d\mathbf{x}' \right)^2\\ \leq {\iint \left|\gamma_j^{1/2}(\mathbf{x},\mathbf{x}') - \gamma^{1/2}(\mathbf{x},\mathbf{x}')\right|^2 \,d\mathbf{x}\,d\mathbf{x}' \iint_{\{|\mathbf{r}-{\mathbf{r}'}|\geq\epsilon/2\}} \frac{( |\gamma_j^{1/2}(\mathbf{x},\mathbf{x}')| + |\gamma^{1/2}(\mathbf{x},\mathbf{x}')|)^2 }{4 |\mathbf{r}-{\mathbf{r}'}|^2}\,d\mathbf{x}\,d\mathbf{x}'} \\ \leq \|\gamma_j^{1/2} - \gamma^{1/2}\|_2^2\ 2 \epsilon^{-2} \tr(\gamma_j+\gamma) . \end{multline} The first factor tends to zero by the convergence of $\gamma_j^{1/2}$ mentioned before, and the second one remains bounded. Hence we have proved that \begin{equation}\label{eq:lscx} \lim_{j\to\infty} X(\gamma_j^{1/2}) = X(\gamma^{1/2}). \end{equation} By collecting \eqref{eq:lsckin}--\eqref{eq:lscx} we arrive at \eqref{eq:lsc}. \end{proof} \begin{proof}[Proof of Theorem~\ref{exmini}] The negativity $\widehat E^{\rm M}_\leq (N)<0$ was shown in Lemma~\ref{negative}. According to Proposition~\ref{exmin}, there exists a minimizing sequence that converges strongly to some $\gamma$. By Proposition~\ref{lsc}, this $\gamma$ is a minimizer of $\widehat{\mathcal{E}}^{\rm M}$. \end{proof} \subsection{Proof of Theorem~\ref{fulltrace}} Assume that $N\leq Z$. Under this assumption we shall show that a $\gamma$ minimizing $\widehat{\mathcal{E}}^{\rm M}(\gamma)$ satisfies $\tr\gamma =N$. Assuming the contrary, we shall find a trace class operator $\sigma\geq 0$ such that for $\gamma_\epsilon=(1-\epsilon\|\sigma\|)\gamma+\epsilon\sigma$ and all sufficiently small $\epsilon>0$, \begin{equation}\label{eq:fulltrace} \mathcal{\widehat E}^{\rm M}(\gamma_\epsilon)<\mathcal{\widehat E}^{\rm M}(\gamma) \ .
\end{equation} The factor $(1-\epsilon\|\sigma\|)$ guarantees that $0\leq \gamma_\epsilon\leq 1$ for $0<\epsilon\leq\|\sigma\|^{-1}$. If $\tr \gamma< N$, which we assume, then also $\tr \gamma_\epsilon <N$ for small $\epsilon$ and \eqref{eq:fulltrace} leads to a contradiction since $\gamma$ was assumed to be a minimizer. To prove \eqref{eq:fulltrace} we use convexity for the homogeneous terms in the functional $\mathcal{\widehat E}^{\rm M}$ and expand the repulsion term explicitly. This leads to \begin{equation}\label{eq:fulltrace1} \mathcal{\widehat E}^{\rm M}(\gamma_\epsilon) \leq \mathcal{\widehat E}^{\rm M}(\gamma) + \epsilon\left(\tr(-\mbox{$\frac12$}\nabla^2-\phi_\gamma+1/8)\sigma - X(\sigma^{1/2})\right) - \epsilon R_1 + \epsilon^2 R_2 \ , \end{equation} where \begin{align*} \phi_\gamma(\mathbf{r})& = V_c(\mathbf{r})-\int\frac{\rho_\gamma({\mathbf{r}'})}{|\mathbf{r}-{\mathbf{r}'}|}\,d{\mathbf{r}'} \ , \\ R_1 & = \|\sigma\| \left(\mathcal{\widehat E}^{\rm M}(\gamma) + D(\rho_\gamma,\rho_\gamma) \right) \ , \\ R_2 & = D(\rho_\sigma-\|\sigma\|\rho_\gamma,\rho_\sigma-\|\sigma\| \rho_\gamma) \ . \end{align*} Now we proceed similarly as in the proof of Proposition \ref{z0}, letting $\sigma=\sigma_L$ depend on a (large) parameter $L$. More precisely, we define $\sigma_L$ by \begin{equation}\label{eq:sigmal} \sigma_L^{1/2}(\mathbf{x},\mathbf{x}') = L^{-3/2}\chi(\mathbf{r}/L) g(\mathbf{r}-{\mathbf{r}'}) \chi({\mathbf{r}'}/L)\, q^{-1/2} \delta_{\sigma,\sigma'} \ , \end{equation} with $g$ as in \eqref{eq:g} and $\chi\geq 0$ a smooth function satisfying $\|\chi\|_4^4=1$. Asymptotically, for large $|\mathbf{r}|$, $\phi_\gamma(\mathbf{r}) \approx (Z- \tr \gamma )|\mathbf{r}|^{-1}$, which is positive by our assumption. It follows similarly to the proof of Lemma \ref{negative} that \begin{equation*} \tr(-\mbox{$\frac12$}\nabla^2-\phi_\gamma+1/8)\sigma_L - X(\sigma_L^{1/2}) = - \frac{Z-\tr \gamma }L \int |\mathbf{r}|^{-1} \chi^4(\mathbf{r}) \,d\mathbf{r} + o(L^{-1}) \ .
\end{equation*} It remains to show that the terms $R_1$ and $R_2$ are relatively small. In the proof of Proposition \ref{z0} and in \eqref{eq:z0trial} we showed that $\|\sigma_L\|=\mathcal O(L^{-3})$ and $D(\rho_{\sigma_L},\rho_{\sigma_L})=\mathcal O(L^{-1})$, which implies that $R_1= \mathcal O(L^{-3})$ and $R_2= \mathcal O(L^{-1})$. We can then choose $L$ large enough and $\epsilon$ small enough to conclude \eqref{eq:fulltrace}. This finishes the proof of Theorem~\ref{fulltrace}. \section{Further Properties} \subsection{Properties of the Minimal Energy} Recall that $E^{\rm M}(N)$ as defined in (\ref{sqrtenergy}) is the lowest energy of $\mathcal{E}^{\rm M}(\gamma)$ under the condition $\tr\gamma=N$. This energy is closely related to $\widehat{{E}}^{\rm M}_\leq(N)$ defined in (\ref{relaxed}). \begin{proposition}\label{minz=} For any $Z>0$ and $N>0$ one has $E^{\rm M}(N) = \widehat E^{\rm M}_\leq (N)-N/8$. \end{proposition} What this proposition really says is that $E^{\rm M}(N) + N/8$ is a monotone non-increasing function of $N$. This, in turn, follows from the fact that we can always add mass $\delta N$ far away from the nuclei, with an energy as close as we please to $-\delta N/8$. This was shown in the proof of Theorem \ref{fulltrace}, and we shall not repeat the argument. \begin{proposition} For any $Z>0$ the energies $\widehat E^{\rm M}_\leq(N)$ and $E^{\rm M}(N)$ are convex functions of $N$. They are strictly convex for $0<N\leq Z$. \end{proposition} \begin{proof} By Proposition \ref{minz=} it suffices to consider $\widehat E^{\rm M}_\leq(N)$. The convexity follows from the convexity of the functional. Moreover, from Theorem \ref{fulltrace} we know that minimizers for $0<N<N'\leq Z$ have different traces, and hence different densities. The \emph{strict} convexity hence follows from the strict convexity of $D(\rho,\rho)$ in $\rho$. \end{proof} We now prove that the energy is bounded from below uniformly in $N$ for fixed $Z$.
\begin{proposition} There is a constant $C>0$ (independent of $N$ and the charges and positions of the nuclei) such that for all $Z>0$ and $N>0$, $\widehat E^{\rm M}_\leq (N)\geq -CZ^3$. \end{proposition} {\it Remark:} The proof below does {\it not} use the property that $\gamma\leq 1$ and this results in the exponent $3$ , which is not optimal in the fermionic case. Without the restriction $\gamma \leq 1$, the exponent 3 is optimal, however. \begin{proof} First, let us consider the atomic case with a nucleus of charge $Z$ located at the origin ${\bf R} =0$. We consider $\psi(\mathbf{x},\mathbf{x}')=\gamma^{1/2}(\mathbf{x},\mathbf{x}')$ as a wave function in $L^2(\mathbb{R}^6)$ and find after symmetrization \begin{equation*} \mathcal{\widehat E}^{\rm M}(\gamma) = \frac12 \left\bra\psi \left| -\frac 12\nabla^2_\mathbf{r}-\frac12\nabla^2_{\mathbf{r}'}-Z|\mathbf{r}|^{-1} -Z|{\mathbf{r}'}|^{-1}-\frac 1{|\mathbf{r}-{\mathbf{r}'}|}+\frac 14\right| \psi\right\ket + D(\rho_\gamma,\rho_\gamma). \end{equation*} By the positive definiteness of the Coulomb kernel, $D(\rho_\gamma,\rho_\gamma)\geq 2D(\rho_\gamma,\sigma_Z) - D(\sigma_Z,\sigma_Z)$ for any $\sigma_Z$. Hence \begin{equation*} \mathcal{\widehat E}^{\rm M}(\gamma) \geq \frac12 \left\bra\psi\left|- \frac12 \nabla^2_\mathbf{r}-\frac 12\nabla^2_{\mathbf{r}'}-V_{Z}(\mathbf{r})-V_{Z}({\mathbf{r}'})-\frac 1{|\mathbf{r}-{\mathbf{r}'}|}+\frac 14\right|\psi\right\ket - D(\sigma_Z,\sigma_Z) \end{equation*} with $V_{Z}(\mathbf{r}) = Z|\mathbf{r}|^{-1} - \int |\mathbf{r}- \mathbf{r}'|^{-1} \sigma_Z (\mathbf{r}') d\mathbf{r}' $. We shall choose $\sigma_Z$ in such a way that \begin{equation}\label{eq:nobs} -\frac 12\nabla^2_\mathbf{r}-\frac12\nabla^2_{\mathbf{r}'}-V_{Z}(\mathbf{r})-V_{Z}({\mathbf{r}'})-\frac 1{|\mathbf{r}-{\mathbf{r}'}|}+\frac 14\geq 0. \end{equation} From this it follows that $\mathcal{\widehat E}^{\rm M}(\gamma) \geq -D(\sigma_Z,\sigma_Z)$. 
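The scaling of the direct term under dilations, which is used in the next step, can be checked by substituting $\mathbf{u}=Z\mathbf{r}$, $\mathbf{u}'=Z{\mathbf{r}'}$: for $\sigma_Z(\mathbf{r})=Z^4\sigma(Z\mathbf{r})$ one has \begin{equation*} D(\sigma_Z,\sigma_Z)=Z^{8}\cdot Z^{-6}\cdot Z\, D(\sigma,\sigma)=Z^3 D(\sigma,\sigma)\,, \end{equation*} the three factors coming from the two densities, the volume elements $d\mathbf{r}\,d{\mathbf{r}'}=Z^{-6}\,d\mathbf{u}\,d\mathbf{u}'$, and the Coulomb kernel $|\mathbf{r}-{\mathbf{r}'}|^{-1}=Z|\mathbf{u}-\mathbf{u}'|^{-1}$.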
Actually, we shall choose $\sigma_Z$ of the form $\sigma_Z(\mathbf{r})=Z^4\sigma(Z\mathbf{r})$ for some fixed $\sigma$, which yields $D(\sigma_Z,\sigma_Z)= Z^3 D(\sigma,\sigma)$. To prove \eqref{eq:nobs} we make an orthogonal change of variables, $\mathbf{s}=(\mathbf{r}-{\mathbf{r}'})/\sqrt2$, $\mathbf{t}=(\mathbf{r}+{\mathbf{r}'})/\sqrt2$, so that the operator on the left side of \eqref{eq:nobs} becomes \begin{equation*} \left(-\frac12\nabla^2_\mathbf{s}- \frac 1{\sqrt2 |\mathbf{s}|}+\frac 14\right) + \frac14\left(-\nabla^2_\mathbf{t}- 4V_Z((\mathbf{t}+\mathbf{s})/\sqrt2)\right) + \frac14\left(-\nabla^2_\mathbf{t}- 4V_Z((\mathbf{t}-\mathbf{s})/\sqrt2)\right). \end{equation*} The operator in the first brackets is non-negative (see Eq.~(\ref{hydr})). Hence it suffices to choose $\sigma$ such that the operator $-\nabla^2_\mathbf{t}- 4V_Z((\mathbf{t}+\mathbf{a})/\sqrt2)$ is non-negative for any $\mathbf{a}\in\mathbb{R}^3$. Note that $V_{Z}(\mathbf{t}) = Z^2 V(Z\mathbf{t})$ with $V(\mathbf{r}) = |\mathbf{r}|^{-1} - \int |\mathbf{r}- \mathbf{r}'|^{-1} \sigma (\mathbf{r}') d\mathbf{r}'$. After scaling and translation, we have to prove that $-\nabla^2_\mathbf{r}-8V(\mathbf{r})\geq 0$. For this we choose $\sigma$ a non-negative, spherically symmetric function with $\int \sigma\,d\mathbf{r} =1$ and with support in $\{|\mathbf{r}|\leq 1/32\}$. Then by Newton's theorem $V(\mathbf{r})=0$ for $|\mathbf{r}|\geq 1/32$, and for $|\mathbf{r}|\leq 1/32$ one has $8V(\mathbf{r})\leq 8|\mathbf{r}|^{-1}\leq 1/(4|\mathbf{r}|^2)$, so $-\nabla^2_\mathbf{r}-8V(\mathbf{r})\geq 0$ by Hardy's inequality \eqref{hard}. This concludes the proof in the atomic case. In the molecular case we proceed as follows: We recall that we are not taking account of the (fixed) nuclear repulsion $U$, and this means that we can freely place the nuclei at locations that minimize the energy $\widehat{{E}}^{\rm M}(N)$.
We assert that the best choice of the ${\bf R}_j$ is one in which they are all equal and, by translation invariance, this common point can be the origin. The problem thus reduces to the atomic case with a nucleus whose charge is the total charge $Z$. That the optimum choice is equal ${\bf R}_j$ follows from the fact that for {\it any} $\gamma$ the attractive energy for nucleus $j$ is $-\int \rho_\gamma (\mathbf{r}) |\mathbf{r}-{\bf R}_j|^{-1} d\mathbf{r}$ and the best possible energy is obtained by placing all the ${\bf R}_j$ at the point ${\bf R}$ that maximizes this integral. \end{proof} \subsection{Properties of the Minimizer} \label{minprop} \begin{proposition} \label{nullspace} Let $\gamma$ be a minimizer of \eqref{relaxed} and let $M_\gamma=\{\mathbf{r} : \rho_\gamma(\mathbf{r})>0 \}$. Then the null-space of the spin-summed density matrix, $\ker\, \tr_\sigma\gamma$, coincides with the set of $L^2(\mathbb{R}^3)$ functions that vanish identically on $M_\gamma$. \end{proposition} Another way to say this is that if $\tr_\sigma\gamma$ has a zero eigenvalue then the eigenfunction vanishes wherever the density $\rho_\gamma$ is non-zero. In particular, if $\rho_\gamma>0$ almost everywhere then $0$ is not an eigenvalue of the spin-summed density matrix $\tr_\sigma\gamma$. \begin{proof} Write $(\tr_\sigma\gamma)(\mathbf{r},\mathbf{r}')=\sum_j\lambda_j \psi_j(\mathbf{r})\psi_j(\mathbf{r}')^*$ with $\psi_j$ orthonormal and $0<\lambda_j\leq q$. Then $\mathbb{R}^3\setminus M_\gamma=\bigcap_j \{\mathbf{r} : \psi_j(\mathbf{r})= 0 \}$, and if $\phi=0$ a.e. on $M_\gamma$ then obviously $(\tr_\sigma\gamma)\phi\equiv 0$. Conversely, let $\phi\in\ker\, \tr_\sigma\gamma$ and consider \begin{equation*} \gamma_\epsilon= \tr_\sigma\gamma + \epsilon \left(|\phi\rangle\langle \phi| - |\psi_1\rangle\langle \psi_1|\right).
\end{equation*} One has $\tr\gamma_\epsilon=\tr\gamma\leq N$, $0\leq\gamma_\epsilon\leq 1$ for $0\leq\epsilon\leq \lambda_1$ and \begin{equation*} \gamma_\epsilon^{1/2} = \big(\tr_\sigma\gamma\big)^{1/2} + \sqrt\epsilon |\phi\ket\bra\phi| + \left(\sqrt{\lambda_1-\epsilon}-\sqrt{\lambda_1}\right) |\psi_1\ket\bra\psi_1| \ . \end{equation*} As noted in the introduction, it follows from convexity that minimizing $\mathcal{\widehat E}^{\rm M}$ for density matrices $0\leq \gamma\leq 1$ with $q$ spin states is equivalent to minimizing under the condition $0\leq \gamma\leq q$ without spin. Hence \begin{equation*} E_\leq ^{\rm M}(N) \leq \mathcal{\widehat E}^{\rm M}(\gamma_\epsilon) = \mathcal{\widehat E}^{\rm M}(\gamma) - \sqrt\epsilon C[\phi] +\mathcal O(\epsilon), \end{equation*} where \begin{equation*} C[\phi] = \iint \frac{\phi(\mathbf{r})^*\gamma^{1/2}(\mathbf{r},\mathbf{r}')\phi(\mathbf{r}')}{|\mathbf{r}-{\mathbf{r}'}|}\,d\mathbf{r}\,d\mathbf{r}' = \sum_j\sqrt\lambda_j \iint \frac{\phi(\mathbf{r})^*\psi_j(\mathbf{r})\psi_j(\mathbf{r}')^*\phi(\mathbf{r}')}{|\mathbf{r}-{\mathbf{r}'}|}\,d\mathbf{r}\,d\mathbf{r}' \geq 0\ . \end{equation*} Since $\gamma$ is a minimizer, one has $C[\phi]=0$, which by the positive definiteness of the Coulomb kernel means $\phi \psi_j^*=0$ a.e. for all $j$. Hence $\phi=0$ a.e. on $M_\gamma$. \end{proof} At the other end of the spectrum of $\gamma$, we comment on the eigenvalue $1$ of the minimizer. Consider the minimization problem \eqref{relaxed} without the constraint $\gamma\leq 1$, \begin{equation}\label{eq:minzb} \widehat E^{\rm boson}_\leq (N)= \inf\{ \mathcal{\widehat E}^{\rm M}(\gamma) :\ \gamma\geq 0, \,\tr\gamma\leq N\} \ . \end{equation} This energy can be interpreted as the ground state energy of $N$ {\it bosons} in the M\"uller model. Obviously, $\widehat E^{\rm boson}_\leq (N)\leq \widehat E^{\rm M}_\leq (N)$ with equality for $N\leq 1$. For large values of $N$ we expect them to differ, however. 
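The explicit square root of $\gamma_\epsilon$ used in the proof of Proposition~\ref{nullspace} above is an exact identity, because $|\phi\rangle$ is orthogonal to the range of $\tr_\sigma\gamma$. This can be checked on a finite-dimensional toy example (an illustrative low-rank density matrix in $\mathbb{R}^6$; the eigenvalues and the use of numpy are assumptions of the sketch, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)

# Orthonormal frame in R^6: psi1, psi2 span the range of gamma, phi lies in its kernel.
q, _ = np.linalg.qr(rng.standard_normal((6, 3)))
psi1, psi2, phi = q.T
lam1, lam2, eps = 0.8, 0.5, 0.2       # illustrative eigenvalues; 0 < eps < lam1

proj = lambda v: np.outer(v, v)
gamma = lam1 * proj(psi1) + lam2 * proj(psi2)
gamma_eps = gamma + eps * (proj(phi) - proj(psi1))

# Square root computed numerically via the spectral theorem...
w, v = np.linalg.eigh(gamma_eps)
sqrt_num = (v * np.sqrt(np.clip(w, 0.0, None))) @ v.T

# ...compared with the closed-form square root claimed in the proof.
sqrt_gamma = np.sqrt(lam1) * proj(psi1) + np.sqrt(lam2) * proj(psi2)
sqrt_formula = (sqrt_gamma + np.sqrt(eps) * proj(phi)
                + (np.sqrt(lam1 - eps) - np.sqrt(lam1)) * proj(psi1))

print(np.allclose(sqrt_num, sqrt_formula))  # True
```

The identity holds exactly (not just to leading order) precisely because $|\phi\rangle\langle\phi|$ commutes with $\tr_\sigma\gamma$.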
\begin{proposition}\label{propq} Assume that $\widehat E^{\rm boson}_\leq (N)< \widehat E^{\rm M}_\leq (N)$ for some $N$ and $Z$. Then any minimizer $\gamma$ of \eqref{relaxed} has at least one eigenvalue $1$. \end{proposition} \begin{proof} Assume, on the contrary, that $\gamma<1$ and let $\gamma_b$ denote a minimizer for \eqref{eq:minzb}. (The existence is shown in the same way as in the proof of Theorem \ref{exmini}.) Then $\gamma_\epsilon =(1-\epsilon)\gamma+\epsilon\gamma_b$ satisfies $\tr\gamma_\epsilon\leq N$ and $0\leq\gamma_\epsilon\leq 1$ for sufficiently small $\epsilon>0$. Moreover, by convexity, \begin{equation*} \mathcal{\widehat E}^{\rm M}(\gamma_\epsilon) \leq (1-\epsilon) \widehat E^{\rm M}_\leq (N) + \epsilon \widehat E^{\rm boson}_\leq (N) < \widehat E^{\rm M}_\leq (N)\ , \end{equation*} contradicting the fact that $\gamma$ is a minimizer. \end{proof} It is not difficult to see that $\widehat E^{\rm M}_\leq (N) \sim N^{1/3} Z^2$ for large $N$ and $Z$, while $\widehat E^{\rm boson}_\leq (N)\sim N Z^2$. Hence clearly $\widehat E^{\rm boson}_\leq (N)< \widehat E^{\rm M}_\leq (N)$ for large $N$ and $Z$. Lathiotakis et al. \cite{Lathiotakisetal2007} find numerically that in fact occupation numbers that correspond to core electrons of large atoms all have the value one. \begin{proposition}\label{realmin} Let $\gamma(\mathbf{x}, \mathbf{x}')$ be a minimizer of $\widehat{\mathcal{E}}^{\rm M}(\gamma)$ for some $N$ and let $\widehat\gamma(\mathbf{r},\mathbf{r}') = \sum_\sigma \gamma(\mathbf{r},\sigma, {\mathbf{r}'},\sigma) $ be the spin-summed minimizer. Then $\widehat\gamma(\mathbf{r},\mathbf{r}') $ is necessarily real. \end{proposition} \begin{proof} It suffices to show that $\widehat\gamma^{1/2}$ is real. Write $\widehat\gamma^{1/2}(\mathbf{r},{\mathbf{r}'}) = A(\mathbf{r},\mathbf{r}') + i B(\mathbf{r},\mathbf{r}')$, where $A$ is real and symmetric and $B$ is real and antisymmetric, whence $iB$ is self adjoint. 
Define $\delta =A^2-B^2$, noting that both $A^2 $ and $-B^2$ are positive (semidefinite). The kinetic and potential energy of $\delta $ and $\gamma$ are equal. Moreover, the densities $\rho_\gamma(\mathbf{r})$ and $\rho_\delta(\mathbf{r})$ are equal. Therefore, we just have to show that the exchange terms favor $\delta$, i.e., $X(\delta^{1/2}) \geq X(\gamma^{1/2})$, with strict inequality unless $B=0$. To prove this assertion, use the concavity of $X(\cdot)$ to conclude that $X(\delta^{1/2}) \geq X(|A|) +X(|B|)$, where $|A| = \sqrt{A^2}$ and $|B| = \sqrt{B^\dagger B } =\sqrt{-B^2}$. On the other hand $X(\gamma^{1/2} )= X(A) +X(B)$, with the obvious meaning that $X(A) = \mbox{$\frac12$} \int |A(\mathbf{r},\mathbf{r}')|^2 |\mathbf{r}-\mathbf{r}'|^{-1} d\mathbf{r}\, d{\mathbf{r}'}$ and similarly for $X(B)$. To conclude the proof we have to show that $X(|A|) \geq X(A)$ and $X(|B|) > X(B)$ if $B\neq 0$. For the first, we write $A=A_+ - A_-$ and $|A| = A_+ +A_-$, where $A_\pm$ are both positive operators. Clearly, the cross term $\int A_+(\mathbf{r},\mathbf{r}') A_-(\mathbf{r}', \mathbf{r}) |\mathbf{r}-\mathbf{r}'|^{-1} d\mathbf{r}\, d{\mathbf{r}'} \geq 0$ since $|\mathbf{r}-\mathbf{r}'|^{-1}$ is positive definite. The same argument applies to $iB=B_+ - B_-$, but now we want to show that $\int B_+(\mathbf{r},\mathbf{r}') B_-(\mathbf{r}', \mathbf{r}) |\mathbf{r}-\mathbf{r}'|^{-1} d\mathbf{r}\, d{\mathbf{r}'} >0$ unless $B=0$. To show this we use the fact that the positive definiteness of the Coulomb kernel implies that $\int \alpha (\mathbf{r},\mathbf{r}') \beta (\mathbf{r}', \mathbf{r}) |\mathbf{r}-\mathbf{r}'|^{-1} d\mathbf{r}\, d{\mathbf{r}'}$ is (operator) monotone in $\alpha$ and in $\beta$. Therefore, it suffices to show positivity for selected eigenfunctions of $B_\pm$. That is, we replace $B_+(\mathbf{r},\mathbf{r}') $ by eigenfunctions $\phi_+(\mathbf{r}) \phi_+(\mathbf{r}')^*$ and similarly we replace $B_-(\mathbf{r},\mathbf{r}') $ by $\phi_-(\mathbf{r}) \phi_-(\mathbf{r}')^*$. 
Since $iB$ is imaginary and antisymmetric, however, its positive and negative spectra are equal, apart from sign, so $B_\pm$ have the same spectrum. Moreover, $B_\pm$ are complex conjugates of each other. Therefore, for every $\phi_+(\mathbf{r})$ there is a $\phi_-(\mathbf{r})$ and the two functions are complex conjugates of each other. In short, it suffices to show strict positivity of $\int \phi(\mathbf{r})^2 (\phi(\mathbf{r}')^*)^2 |\mathbf{r}-\mathbf{r}'|^{-1} d\mathbf{r}\, d{\mathbf{r}'}$, but this is true as long as the function $\phi $ is not identically zero (since the Coulomb kernel is positive definite). \end{proof} Finally, we show that a minimizer of $\widehat{\mathcal{E}}^{\rm M}(\gamma)$ satisfies the variational equation~\eqref{vareq}, as claimed in the Introduction. \begin{proposition}\label{lagrange2} Let $\gamma$ be a minimizer of $\widehat{\mathcal{E}}^{\rm M}(\gamma)$. Then \begin{equation}\label{vareq2} \left(-\mbox{$\frac12$} \nabla_\mathbf{r}^2 - \mbox{$\frac12$} \nabla_{\mathbf{r}'}^2 - \varphi_\gamma(\mathbf{r}) - \varphi_\gamma({\mathbf{r}'}) - \frac 1{|\mathbf{r}-{\mathbf{r}'}|} - 2\mu\right) \gamma^{1/2}(\mathbf{x},\mathbf{x}') = \sum_i 2 e_i \psi_i(\mathbf{x}) \psi_i(\mathbf{x}')^*\,. \end{equation} Here, $\varphi_\gamma(\mathbf{r})= V_c(\mathbf{r}) - \int \rho_\gamma({\mathbf{r}'}) |\mathbf{r}-{\mathbf{r}'}|^{-1} d{\mathbf{r}'}$ denotes the effective potential, $\mu\leq -1/8$ is the chemical potential, $e_i\leq 0$ and the $\psi_i(\mathbf{x})$ are eigenfunctions of $\gamma$ with eigenvalue $1$. \end{proposition} \begin{proof} Let $\mu$ be the slope of a tangent to the curve $E^{\rm M}(N)$ at $N$. Since $E^{\rm M}(N)$ is convex, such a tangent always exists, although it may not be unique in case the derivative of $E^{\rm M}(N)$ is discontinuous at this point. 
Since $\gamma$ is a minimizer of $\widehat{\mathcal{E}}^{\rm M}(\gamma)$, its square-root $\gamma^{1/2}$ minimizes the expression \begin{equation}\label{dfu} {\mathcal F}(\delta) = \tr\left( -\mbox{$\frac12$} \nabla^2 - V_c(\mathbf{r}) -\mu\right) \delta^2 + D(\rho_{\delta^2},\rho_{\delta^2}) - X(\delta) \end{equation} among {\it all} $\delta$ with $0\leq \delta \leq 1$, irrespective of the trace of $\delta^2$. In fact, it is even a minimizer if one relaxes the condition $\delta \geq 0$. This follows from the fact that $X(\delta)\leq X(|\delta|)$ for any self-adjoint operator $\delta$, which was shown in the proof of the previous proposition \ref{realmin}. Consequently, $\gamma^{1/2}$ is a minimizer of \eqref{dfu} subject to the constraint $-1\leq \delta\leq 1$. From this we conclude that for any self-adjoint $\sigma$ with finite trace such that, for small $\epsilon$, $\gamma^{1/2}+\epsilon \sigma \leq 1 +$ terms of order $\epsilon^2$, \begin{equation} \left. \frac{d}{d\epsilon} {\mathcal F}(\gamma^{1/2}+\epsilon \sigma) \right|_{\epsilon=0} \geq 0\,. \end{equation} The derivative can easily be calculated to be \begin{equation} \tr \left[\left( (-\mbox{$\frac12$} \nabla^2 - \varphi_\gamma) \gamma^{1/2} + \gamma^{1/2} (-\mbox{$\frac12$} \nabla^2-\varphi_\gamma) - Z_\gamma - 2 \mu\gamma^{1/2} \right)\, \sigma \right]\,, \end{equation} where $Z_\gamma$ is defined in \eqref{zgamma}. The condition on $\sigma$ is that $\langle\psi_i|\sigma|\psi_i\rangle\leq 0$ for all $|\psi_i\rangle$ with $\gamma|\psi_i\rangle = |\psi_i\rangle$. Hence we conclude that \begin{equation} (-\mbox{$\frac12$} \nabla^2 - \varphi_\gamma) \gamma^{1/2} + \gamma^{1/2} (-\mbox{$\frac12$} \nabla^2-\varphi_\gamma) - Z_\gamma - 2 \mu\gamma^{1/2} = \sum_i 2e_i |\psi_i\rangle\langle\psi_i|\,, \end{equation} with $e_i\leq 0$. \end{proof} The variational equation~\eqref{vareq2} was obtained by varying $\gamma^{1/2}$ instead of $\gamma$. 
If $\gamma$ does not have a zero eigenvalue (which, for a spin-invariant minimizer $\gamma$, is the case if $\rho_\gamma$ does not vanish on a set of positive measure, see Prop.~\ref{nullspace}), then these variations are equivalent. Hence we conclude that \eqref{vareq2} is actually {\it equivalent} to $\gamma$ being a minimizer in case $\gamma$ has no zero eigenvalue. (See the discussion in Section~\ref{subsec:Mueller}). \subsection{Virial Theorem} A well known property of Coulomb systems is the virial theorem, which quantifies a relation between the kinetic and potential energies. We state it here for an atom. \begin{proposition}\label{virial} Let $K=1$ (i.e., consider an atom) and let $\gamma$ be a minimizer for $\widehat E^{\rm M} _\leq (N)$. Then \begin{equation}\label{eq:virial} 2 \tr(-\mbox{$\frac12$}\nabla^2\gamma) = \tr(Z|\mathbf{r}|^{-1} \gamma) - D(\rho_\gamma,\rho_\gamma) + X(\gamma^{1/2})\ . \end{equation} \end{proposition} \begin{proof} For any $\lambda>0$ the density matrix $\gamma_\lambda$ defined by $\gamma_\lambda(\mathbf{x},\mathbf{x}') = \lambda^3 \gamma(\lambda\mathbf{r},\sigma,\lambda{\mathbf{r}'},\sigma')$ is unitarily equivalent to $\gamma$ and hence satisfies $0\leq\gamma_\lambda\leq 1$ and $\tr\gamma_\lambda=\tr\gamma\leq N$. Since $\gamma$ is a minimizer, the function \begin{equation*} \mathcal{\widehat E}^{\rm M}(\gamma_\lambda) = \lambda^2 \tr(-\mbox{$\frac12$}\nabla^2\gamma) - \lambda \tr(Z|\mathbf{r}|^{-1} \gamma) + \frac 18 \tr \gamma + \lambda D(\rho_\gamma,\rho_\gamma) - \lambda X(\gamma^{1/2}) \end{equation*} has a minimum at $\lambda=1$. This implies the assertion. \end{proof} \section{The M\"uller Functional as a Lower Bound to Quantum Mechanics} We are able to show that the M\"uller energy $E^{\rm M}(N)$ (without the addition of $N/8$) is a lower bound to the true Schr\"odinger energy when $N=2$, but with arbitrarily many nuclei. The situation for $N>2 $ is open. As we remark below, our $N=2$ proof definitely fails when $N>2$. 
Consider the $N$-particle Hamiltonian (\ref{ham}) in either the symmetric or the anti-symmetric $N$-fold tensor product of $L^2(\mathbb{R}^3,\mathbb{C}^q)$. For a symmetric or anti-symmetric $\psi$ we recall that the one-particle density matrix $\gamma_\psi$ is defined by $$ \gamma_\psi(\mathbf{x},\mathbf{x}') = N \int \psi(\mathbf{x},\mathbf{x}_2,\ldots,\mathbf{x}_N)\psi(\mathbf{x}',\mathbf{x}_2,\ldots,\mathbf{x}_N)^* \, d\mathbf{x}_2\cdots d\mathbf{x}_N \ . $$ \begin{proposition}\label{lowerbound} Assume that $N=2$. Then for any symmetric or anti-symmetric normalized $\psi$, \begin{equation*} \bra\psi| H| \psi\ket \geq \mathcal E^{\rm M}(\gamma_\psi) \ . \end{equation*} \end{proposition} \begin{proof} Since $\bra\psi| \sum_{j=1}^2 (-\mbox{$\frac12$}\nabla^2_j- V_c(\mathbf{r}_j))|\psi\ket= \tr(-\mbox{$\frac12$}\nabla^2-V_c(\mathbf{r}))\gamma_\psi$, we have to prove that \begin{equation*} \int \frac{|\psi(\mathbf{x}_1,\mathbf{x}_2)|^2}{|\mathbf{r}_1-\mathbf{r}_2|}\,d\mathbf{x}_1\,d\mathbf{x}_2+ \int \frac{|\gamma_\psi^{1/2}(\mathbf{x},\mathbf{x}')|^2}{2|\mathbf{r}-{\mathbf{r}'}|}\,d\mathbf{x}\,d\mathbf{x}' \geq \int \frac{\gamma_\psi(\mathbf{x}_1,\mathbf{x}_1) \gamma_\psi(\mathbf{x}_2,\mathbf{x}_2)}{2|\mathbf{r}_1-\mathbf{r}_2|}\,d\mathbf{x}_1\,d\mathbf{x}_2\ . \end{equation*} By \eqref{eq:fefferman} it suffices to prove that for any characteristic function $\chi$ of a ball (or, more generally, for any real-valued function $\chi$) \begin{equation} 2 \int \chi(\mathbf{r}_1) |\psi(\mathbf{x}_1,\mathbf{x}_2)|^2 \chi(\mathbf{r}_2)d\mathbf{x}_1d\mathbf{x}_2 + \int \chi(\mathbf{r}) |\gamma_\psi^{1/2}(\mathbf{x},\mathbf{x}')|^2 \chi({\mathbf{r}'})d\mathbf{x} d\mathbf{x}'\label{junk} \geq \left(\int \chi(\mathbf{r}) \gamma_\psi(\mathbf{x},\mathbf{x}) d\mathbf{x}\right)^2. 
\end{equation} Introducing $\Psi$ as the (non-self-adjoint) operator in $L^2(\mathbb{R}^3)$ with kernel $\sqrt 2 \, \psi(\mathbf{x},\mathbf{x}')$, we can rewrite the previous inequality as \begin{equation}\label{eq:lowerbound1} \tr\chi\Psi^\dagger\chi\Psi + \tr\chi\gamma_\psi^{1/2}\chi\gamma_\psi^{1/2} \geq (\tr \chi\gamma_\psi)^2 \ . \end{equation} The proof of this inequality can be found in \cite{WignerYanase1963}. For completeness, we present the proof here. Note that $\Psi\Psi^\dagger=\gamma_\psi$, so $\Psi=\gamma_\psi^{1/2} \mathcal V$ for a partial isometry $\mathcal V$. Since $\psi$ is (anti-) symmetric, $\Psi^\dagger\Psi=\mathcal C\gamma_\psi\mathcal C$, where $\mathcal C$ denotes complex conjugation. Hence $\mathcal V^\dagger \gamma_\psi \mathcal V = \mathcal C\gamma_\psi\mathcal C$ and, since the square root is uniquely defined, \begin{equation}\label{eq:commutesqrt} \mathcal V^\dagger\gamma_\psi^{1/2}\mathcal V = \mathcal C \gamma_\psi^{1/2} \mathcal C \ . \end{equation} We write $\delta=\gamma_\psi^{1/2}$ for simplicity and consider the quadratic form \begin{equation*} Q(A,C)= \frac14(2\tr A^\dagger\delta C\delta + \tr A^\dagger\delta\mathcal V C\mathcal V^\dagger\delta + \tr \mathcal V A^\dagger\mathcal V^\dagger\delta C \delta) \ . \end{equation*} We consider this quadratic form on the real vector space of \emph{real} operators, i.e., operators satisfying \begin{equation}\label{eq:commuteop} \mathcal C A \mathcal C =A \ . \end{equation} Note that $Q(A,A)= \frac12(\tr A^\dagger\delta A\delta + \tr A^\dagger\delta\mathcal V A\mathcal V^\dagger\delta)$ and that, by Schwarz's inequality, \begin{equation*} (\tr A^\dagger\delta\mathcal V A\mathcal V^\dagger\delta)^2 \leq (\tr A^\dagger\delta A\delta) (\tr \mathcal V A^\dagger\mathcal V^\dagger\delta\mathcal V A\mathcal V^\dagger\delta) \ . \end{equation*} Recalling \eqref{eq:commutesqrt} and \eqref{eq:commuteop} we thus see that $Q$ is positive semi-definite. 
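The positive semi-definiteness of $Q$ is the crux of the argument. As an independent sanity check, the inequality \eqref{eq:lowerbound1} itself can be probed on a discretized toy version, with a random real symmetric matrix standing in for the (bosonic-symmetric) two-particle wavefunction and a random $0$--$1$ diagonal matrix standing in for $\chi$ (an illustrative numerical sketch, numpy assumed; it is of course no substitute for the proof):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8

# Random real symmetric "two-particle wavefunction", normalized in L^2.
psi = rng.standard_normal((n, n))
psi = (psi + psi.T) / 2
psi /= np.linalg.norm(psi)            # Frobenius norm 1
Psi = np.sqrt(2) * psi                # operator with kernel sqrt(2) psi(x, x')
gamma = Psi @ Psi.T                   # one-particle density matrix, trace 2

# gamma^{1/2} via the spectral theorem.
w, v = np.linalg.eigh(gamma)
delta = (v * np.sqrt(np.clip(w, 0.0, None))) @ v.T

chi = np.diag(rng.integers(0, 2, n).astype(float))   # discretized cutoff function

lhs = np.trace(chi @ Psi.T @ chi @ Psi) + np.trace(chi @ delta @ chi @ delta)
rhs = np.trace(chi @ gamma) ** 2
print(lhs >= rhs - 1e-12)  # True
```

Repeating the trial over many random draws of $\psi$ and $\chi$ gives the same verdict, consistent with the Wigner--Yanase argument above.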
This implies in particular that $Q(\chi,1)^2\leq Q(\chi,\chi)Q(1,1)$. This is the desired inequality \eqref{eq:lowerbound1}, since $Q(1,1)=\tr\gamma_\psi=2$, $Q(\chi,\chi)=\frac12(\tr \chi\delta \chi\delta + \tr \chi\delta\mathcal V \chi\mathcal V^\dagger\delta) = \frac12(\tr \chi\delta \chi\delta + \tr \chi\Psi \chi\Psi^\dagger)$ and \begin{equation*} Q(1,\chi)= \frac14(3\tr \chi\delta^2 + \tr \chi \mathcal V^\dagger \delta^2\mathcal V ) = \tr \chi\gamma_\psi \ . \end{equation*} Here we used \eqref{eq:commutesqrt} once more. \end{proof} The obvious generalization of inequality \eqref{junk} to $N\geq 3$ is not true, as the paper \cite{Seiringer2007} shows. But this does not mean that the M\"uller energy is not a lower bound to the true energy. There is some numerical evidence for this, as mentioned in A.3 of subsection \ref{1B}. \bigskip \underline{\textit{Acknowledgments:}} Rupert Frank and Heinz Siedentop thank the Departments of Mathematics and Physics at Princeton University for hospitality while this work was done. The following partial support is gratefully acknowledged: The Swedish Foundation for International Cooperation in Research and Higher Education (STINT) (R.F.); U.S. National Science Foundation, grants PHY 01 39984 (E.H.L and H.S.) and PHY 03 53181 (R.S.); an A.P. Sloan Fellowship (R.S.); Deutsche Forschungsgemeinschaft, grant SI 348/13-1 (H.S.).
\section{Introduction} \IEEEPARstart{A}s the semiconductor industry pushes the limits of transistor technology in a never ending pursuit of miniaturization, radiation effects have become a serious concern not only for aerospace and military applications, but also for terrestrial applications. Of the many radiation effects an integrated circuit (IC) may suffer from, Single-Event Transients (SETs) and Single-Event Upsets (SEUs) \cite{Baumann2005} are widely studied. The underlying principle is that a charged particle, upon striking the IC, may cause shifts in voltage levels at combinational or sequential elements, creating SETs or SEUs, respectively. Over the years, many efficient techniques have been used to mitigate radiation effects \cite{Kasap2020}, often making use of some notion of spatial or temporal redundancy \cite{nicolaidis, nsrec2012, jeon2012, almeida2012, pagliarini2021}. Triple Modular Redundancy (TMR), one of the most commonly employed solutions, is a technique that employs three instances of a module and adds a majority voter at their outputs. The scheme, therefore, protects against any single fault in any of the modules. TMR can be deployed with different levels of granularity \cite{almeida2012, pagliarini2021}, with diversification \cite{meubrodersubadestanford}, and also with approximation \cite{apptmr}. TMR also presents partial protection against multiple faults caused by single-event-induced charge sharing \cite{almeida2012}. However, when a fault tolerant circuit is implemented with TMR or a similar technique, its resiliency against security vulnerabilities tends to be overlooked. Recently, the field of Hardware Security has received a lot of attention and defense techniques against various adversaries have been implemented for a range of circuits. Yet, the interplay between security techniques and fault tolerance techniques is still poorly understood. 
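To fix ideas, the TMR scheme just described can be sketched in a few lines of Python: a bitwise two-out-of-three majority voter masks any single faulty replica. The module below is an arbitrary stand-in for a protected combinational block, not a circuit from this work:

```python
def majority(a: int, b: int, c: int) -> int:
    """Bitwise 2-out-of-3 majority vote over integer-encoded signals."""
    return (a & b) | (b & c) | (a & c)

def module(x: int) -> int:
    """Illustrative stand-in for the triplicated combinational module."""
    return (x ^ (x >> 1)) & 0xFF

x = 0b1011_0110
golden = module(x)

# Inject a single-event upset (one bit flip) into one replica at a time:
for flipped_bit in range(8):
    faulty = golden ^ (1 << flipped_bit)
    assert majority(faulty, golden, golden) == golden  # the fault is masked
print("all single faults masked")
```

A fault in two replicas at the same bit position would defeat the voter, which is why TMR only guarantees protection against single faults.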
In this paper, our aim is to highlight this interplay by taking an Advanced Encryption Standard (AES) crypto core as a case study. The reliability technique we are concerned with is TMR in its many forms. The security attack we are concerned with is the side-channel attack (SCA). The key finding of this paper is that \textbf{TMR appears to increase the resiliency to SCAs}. Let us now give a brief background on SCAs. \section{Background on Side-Channel Attacks} \label{sec:background} In an SCA, an adversary collects, in a non-invasive way, leakage data that can be used to discover private information and/or to gain privileged access to a circuit \cite{standaert2010}. Power consumption, timing, electromagnetic emanations, and even sound are examples of side-channels that can and have been exploited. Based on the analysis of this residual information, it is possible to perform an attack that breaks security assumptions. In this paper, our focus is on SCAs that exploit power traces as a form of leakage. These power-based SCAs can be categorized in three groups: i)~Simple Power Analysis (SPA); ii)~Differential Power Analysis (DPA); iii)~Correlation Power Analysis (CPA). SPA is a simple graph analysis of the power trace consumption over time. DPA uses statistical analyses at different times to correlate power consumption measurements with functionality. CPA uses a Hamming weight power model method~\cite{Mangard2007} for a more powerful attack. Crypto cores have been the typical targets of SCAs. In principle, the math behind the crypto function is sound and cannot be broken by formal cryptanalysis. However, the physical realization of the crypto function gives adversaries powerful information. In \cite{Maingot2009}, an evaluation of the sensitivity to DPA of several protected versions of an AES circuit is discussed. 
In \cite{Ors}, a power analysis attack on an AES hardware implementation is presented and an SCA is mounted on a physical device with the aid of a simple setup (scope and probes). The attack utilizes the power consumption during the first two clock cycles of the AES computation to discover the secret key. The reason for which the attack works is that in the considered AES implementation, an {\sc xor} operation between the plaintext and the secret key is executed in the first clock cycle. The result of this operation is saved in an \textit{intermediate register} in the second clock cycle. The adversary can devise a \textbf{hypothetical power model} to account for changes in the value of the intermediate register, i.e., the adversary can use bit changes in this register as a proxy for the behavior of the power consumption of the entire AES circuit. Even further, by simulation means, the adversary can analyse all possible changes the register might have, e.g., toggle count, in a cycle-accurate manner. This type of modelling is widely utilized in SCAs to discover the secret key in a device that implements AES. This paper focuses on power consumption information leakage to discover the secret key in an AES crypto core. We assume the AES core is meant for a high-dependability application and therefore, TMR has been applied to it. We also assume the adversary has access to power traces of the circuit under attack. For this reason, the attack is more realistic for terrestrial applications. Furthermore, our approach emulates a physical attack by obtaining detailed power traces from physical synthesis. In practice, a real attack is more complicated because the environment, board, and package become sources of noise that have to be accounted for. For more details on attack feasibility, we direct the readers to \cite{Ors}. 
\section{AES Crypto Core Implementation and Side-Channel Power Analysis Attack} \label{sec:single} A 128-bit AES crypto core from~\cite{aes_core} is used to perform a side-channel power analysis attack. Fig.~\ref{fig:block_view}(a) shows its block diagram. The AES circuit takes a 128-bit secret key ($key$) and a plaintext ($text\_in$) as inputs and produces a ciphertext as an output ($text\_out$). \begin{figure}[!t] \centering \includegraphics[width=1.0\linewidth]{Figures/block_view.pdf} \vspace{-6mm} \caption{(a)~Block diagram of the AES circuit; (b)~its layout.} \label{fig:block_view} \vspace{-6mm} \end{figure} The AES crypto core is implemented in a standard design flow which includes the synthesis of Verilog Hardware Description Language (HDL) codes of the AES circuit into a gate-level netlist. Synthesis is performed by Cadence Genus with a commercial 65 nm standard cell library. The target frequency is 500 MHz. Physical synthesis, including floorplanning, placement, and routing, is performed by Cadence Innovus. Fig.~\ref{fig:block_view}(b) presents the layout of the AES crypto core. This is our baseline implementation and is referred as single AES in the experiments that follow. The flow of our side-channel power analysis attack is illustrated in Fig.~\ref{fig:sca_flow}. Compared to the traditional design flow, extra steps were included to enable our attack. To cope with the exponential size of all possible keys i.e., $2^{128}$, the simulation data is obtained for $L$-bits of the 128-bit secret key, where $L$ is set to 8 in our experiments. Therefore, a netlist generated after logic synthesis is instantiated 256 times in a testbench, one instance for each possible 8-bit key. As a result of simulation, we obtain the \textit{simulation data set} which consists of the number of bit changes in the \textit{intermediate register} under all possible values of the $L$-bit key. 
\begin{figure}[!t] \centering \includegraphics[width=0.6\linewidth]{Figures/sca_flow.pdf} \vspace*{-1mm} \caption{Side-channel power analysis attack flow.} \label{fig:sca_flow} \vspace{-6mm} \end{figure} Another output of the simulation is a Value Change Dump (VCD) file, which annotates any changes in any signals of the design, along with the time of the change. The VCD file is used as an input of another extra step in our design flow, i.e., the power vector profile. Cadence Innovus reads the VCD file and generates a vector-based dynamic power report for any time window of interest. This power estimation is a good representation of the power dissipation of the fabricated chip because it takes into account parasitic information from extraction and representative input patterns from simulation\footnote{For readers with IC design background, we clarify that we utilize the Voltus power analysis engine of Innovus with VCD and Standard Delay Format (SDF) files. We ask the tool to generate a power estimation at every 1ns to oversample the 500 MHz frequency of operation of the circuit. This matches the capability of an adversary equipped with a typical oscilloscope.}. We obtain the \textit{power data set} which is computed as the difference of the power dissipation values of the AES crypto core in the first and second clock cycles as described in~\cite{Ors}. Note that 1000 randomly generated plaintexts were used to obtain these simulation and power data sets. Finally, for each possible key, the Pearson Correlation Coefficient (PCC) is computed between the simulation data and the power data set, and the one that leads to the maximum PCC value is determined to be the guessed key. In the text and results that follow, without loss of generality, we perform attacks on 8 bits of the secret key at a time. The same attack can be repeated 16 times to uncover the entire 128-bit secret key. 
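The correlation step of the flow above can be illustrated end to end on a toy model: the "measured" power is taken to be the number of set bits of a hypothetical intermediate register (the plaintext {\sc xor}-ed with the true key byte) plus Gaussian noise, and the PCC is computed against the same model evaluated under every possible key guess. All names, the noise level, and the leakage model are illustrative assumptions, not the actual Innovus-based flow (numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(42)
TRUE_KEY = 222          # 8 MSBs of the secret key, matching the experiments
N_TRACES = 1000

popcount = np.array([bin(v).count("1") for v in range(256)])
plaintexts = rng.integers(0, 256, N_TRACES)

# "Measured" leakage: bit count of the intermediate register plus noise.
traces = popcount[plaintexts ^ TRUE_KEY] + rng.normal(0.0, 1.0, N_TRACES)

def pcc(x, y):
    """Pearson correlation coefficient of two 1-D arrays."""
    x, y = x - x.mean(), y - y.mean()
    return (x @ y) / np.sqrt((x @ x) * (y @ y))

# Hypothetical power model evaluated under every possible 8-bit key guess.
scores = np.array([pcc(popcount[plaintexts ^ k], traces) for k in range(256)])
guessed = int(np.argmax(scores))
print(guessed)  # 222: the guess with maximal PCC is the correct key byte
```

With a moderate noise level the correct guess separates clearly from the wrong ones; raising the noise forces the attacker to collect more traces, which mirrors the plaintext counts reported below.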
The design flow is automated using Python scripting and the runtime to discover the 8 bits of the secret key in an AES is approximately 2 hours for 1000 plaintext inputs. The majority of the runtime is spent doing power analysis and the correlation calculation is much simpler in comparison. Fig.~\ref{fig:final_graph}(a) presents the PCC value for each possible key guess for the 8 Most Significant Bits (MSBs) of the secret key. In this experiment, we set 8 MSBs of the secret key to 222. Note that the attack has been successful as the highest PCC value is also 222. We note that the minimum number of plaintexts required to find the correct value of 8 MSBs of the secret key is 698, as shown in Fig.~\ref{fig:final_graph}(b). Note that at that point, when 698 plaintexts have been correlated, the green line becomes the one with the highest PCC. As more plaintexts are considered, the correlation tends to become clearer. \begin{figure*}[!hbt] \centering \includegraphics[width=1.0\linewidth]{Figures/final_graph.pdf} \vspace*{-6mm} \caption{Correlation between the simulation and power data sets and number of plaintexts necessary to discover the 8 MSBs of the secret key when they were set to 222: (a)-(b)~single AES, (c)-(d)~{\sc aes\_tmr}, and (e)-(f)~{\sc aes\_tmr\_macro}.} \label{fig:final_graph} \vspace*{-6mm} \end{figure*} \section{Side-channel Attacks on TMR schemes} \label{sec:tmr} In order to demonstrate the SCA resiliency of a TMR'd circuit, the same AES crypto core was designed under a coarse-grain TMR architecture. Two different physical designs, called {\sc aes\_tmr} and {\sc aes\_tmr\_macro}, were considered. In the {\sc aes\_tmr} design, the physical synthesis tool is allowed to perform independent optimizations in the three instances if applicable. In the {\sc aes\_tmr\_macro} design, each instance in the TMR architecture is purposefully made identical: all cells and all metal routing lines are the same for all three instances. 
Fig.~\ref{fig:TMR}(a) shows the amoeba and physical layout views of the {\sc aes\_tmr} design which has three instances of the AES crypto core with different numbers of cells, placement, and routing. Fig.~\ref{fig:TMR}(b) presents the amoeba and physical layout views of the {\sc aes\_tmr\_macro} design which has identical instances. Note that both TMR designs have the same timing constraints, core area, and pinouts for the sake of a fair comparison. The SCA described in Section~\ref{sec:single} is also applied to these TMR architectures. Fig.~\ref{fig:final_graph}(c) presents the PCC values for all possible key guesses under the {\sc aes\_tmr} architecture, where the maximum value, which is 222, denotes the correctly guessed key. Note that it is the same as the one found in the single AES crypto core, meaning that the attack has been successful. Moreover, Fig.~\ref{fig:final_graph}(d) shows the number of plaintexts necessary to determine the secret key under the {\sc aes\_tmr} architecture, i.e., 810. Note that the {\sc aes\_tmr} circuit needs a larger number of plaintexts to discover the secret key when compared to the single AES crypto core. This result is, at first glance, counterintuitive. After all, the TMR'd circuit is performing the same computation three times, which intuitively leads us to believe it would leak 3x as much information. We hypothesize that the TMR instances are acting as noise sources to one another, making the attack's convergence slightly harder. 
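The noise-source hypothesis can be explored with the same toy model: holding the trace count fixed while growing the amplitude of uncorrelated noise (a crude stand-in for the switching activity of the other two instances), the correlation obtained for the correct key shrinks, which is consistent with the larger number of plaintexts observed here. This is an illustrative simulation, not a model of the actual layouts:

```python
import numpy as np

rng = np.random.default_rng(7)
TRUE_KEY, N_TRACES = 222, 1000

popcount = np.array([bin(v).count("1") for v in range(256)])
plaintexts = rng.integers(0, 256, N_TRACES)
model = popcount[plaintexts ^ TRUE_KEY].astype(float)

def pcc(x, y):
    """Pearson correlation coefficient of two 1-D arrays."""
    x, y = x - x.mean(), y - y.mean()
    return (x @ y) / np.sqrt((x @ x) * (y @ y))

# PCC of the correct key guess as uncorrelated "instance noise" grows.
corrs = [pcc(model, model + rng.normal(0.0, s, N_TRACES)) for s in (0.5, 2.0, 8.0)]
print([round(c, 2) for c in corrs])  # monotonically decreasing
```

A weaker correlation peak means more plaintexts are needed before the correct guess separates from the wrong ones, in line with the 698 vs. 810 plaintext counts reported above.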
\begin{figure}[!t] \centering \vspace{2mm} \begin{subfigure}[t]{0.16\textwidth} \includegraphics[width=\textwidth]{Figures/tmr_ameba.pdf} \end{subfigure}% \quad \begin{subfigure}[t]{0.168\textwidth} \centering \includegraphics[width=\textwidth]{Figures/tmr.pdf} \end{subfigure}% \vspace{1mm}{\centering (a)}\vspace{1mm} \begin{subfigure}[t]{0.16\textwidth} \includegraphics[width=\textwidth]{Figures/tmr_macro_ameba.pdf} \end{subfigure}% \quad \begin{subfigure}[t]{0.1668\textwidth} \includegraphics[width=\textwidth]{Figures/tmr_macro.pdf} \end{subfigure}% \\ \vspace{1mm}{\centering (b)} \caption{Amoeba views and layouts of TMR architectures: (a)~{\sc aes\_tmr}; (b)~{\sc aes\_tmr\_macro}.}\label{fig:TMR} \vspace{-6mm} \end{figure} Fig.~\ref{fig:final_graph}(e) presents the PCC values for all possible key guesses under the {\sc aes\_tmr\_macro} architecture. The guessed key is the same as obtained under the single AES crypto core and {\sc aes\_tmr} circuit. Fig.~\ref{fig:final_graph}(f) shows the number of plaintexts necessary to determine the secret key under the {\sc aes\_tmr\_macro} architecture, i.e., 790. Note that the required number of plaintexts is higher than the one found in the single AES crypto core, but slightly smaller than the one computed in the {\sc aes\_tmr} circuit. Here, we clarify that the plaintexts are generated by a pseudorandom function, so any differences in SCA resiliency can be attributed to the circuit itself and not to the value of the chosen plaintexts. In summary, the results from Fig.~\ref{fig:final_graph} indicate that manipulating the level of disparity between the TMR instances can be used as a knob to tune SCA resiliency. We took this idea one step further by implementing the AES crypto core under another TMR architecture, called {\sc aes\_tmr\_diverse}, where each instance is physically and structurally different, but all instances remain functionally equivalent. 
To do so, we performed the physical synthesis of the same TMR'd AES crypto core with three different gate-level netlists. The first one is our baseline AES, the second one is obtained after applying the clock gating technique which is used to reduce power dissipation in parts of the circuit that are not being switched (and therefore, has an impact on SCA resiliency), and the third one is obtained after performing the retiming technique which moves the relative location of latches and registers, primarily to improve performance. Table \ref{tab:diver} shows the design details of each AES instance in terms of the number of gates, static and dynamic power. \begin{table}[t] \centering \caption{Details on each {\sc aes\_tmr\_diverse} instance.} \label{tab:diver} \vspace*{-2mm} \renewcommand{\arraystretch}{1.2} \tabcolsep=0.09cm \begin{tabular}{|l|c|c|c|} \hline {\centering{\bf AES Instance}} & {\centering{\bf \#gates}} & {\centering{\bf Static Power (µW)}} & {\centering{\bf Dynamic Power (mW)}} \\ \hline Baseline & {9826} & {0.3472} & 15.84 \\ Clock Gated & {9853} & {0.3472} & 15.12 \\ Retimed & {10187} & {0.3511} & {16.73} \\ \hline \end{tabular} \vspace*{-6mm} \end{table} Fig.~\ref{fig:TMR_diverse} shows that our SCA was unable to discover the secret key for the {\sc aes\_tmr\_diverse} version, even when 2000 random plaintexts were considered. Note that in this experiment, the utilized key, i.e., 222, remains the same, but the guessed key was 48. The correct key leads to the second highest PCC value. These results clearly show that the use of a diverse TMR architecture can increase the resiliency to SCAs. Therefore, under TMR architectures, an implementation with diverse circuits provides not only reliability, but also security when compared to a traditional TMR implementation. 
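The essence of the diverse architecture, structurally different netlists that remain functionally identical, can be captured in a deliberately simplified sketch: two gate-level realizations of the same Boolean function pass an exhaustive equivalence check yet expose different internal nodes, and hence different switching activity, to an observer. Both circuits below are toy examples, not the synthesized netlists of this work:

```python
from itertools import product

def xor3_direct(a, b, c):
    # Two-gate realization: internal node g1, then the output.
    g1 = a ^ b
    return [g1, g1 ^ c]

def xor3_nand(a, b, c):
    # NAND-based realization of the same function: more internal nodes.
    def xor_via_nand(x, y):
        n1 = 1 - (x & y)
        n2 = 1 - (x & n1)
        n3 = 1 - (y & n1)
        return [n1, n2, n3, 1 - (n2 & n3)]   # last entry equals x ^ y
    first = xor_via_nand(a, b)
    return first + xor_via_nand(first[-1], c)

# Exhaustive equivalence check over all 8 input combinations.
for a, b, c in product((0, 1), repeat=3):
    assert xor3_direct(a, b, c)[-1] == xor3_nand(a, b, c)[-1]
print("equivalent, with", len(xor3_direct(0, 0, 0)), "vs",
      len(xor3_nand(0, 0, 0)), "circuit nodes")
```

Because each realization toggles a different set of nodes, a power model fitted to one of them correlates poorly with the aggregate consumption of all three, which is one plausible reading of the {\sc aes\_tmr\_diverse} result.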
\begin{figure}[t] \vspace*{-4mm} \centering \includegraphics[width=0.75\linewidth]{Figures/diverse_corr.pdf} \vspace*{-2mm} \caption{SCA on the {\sc aes\_tmr\_diverse} circuit: (a)~correlation between the simulation and power data sets; (b)~number of plaintexts required to find 8 MSBs of the secret key.} \label{fig:TMR_diverse} \vspace*{-6mm} \end{figure} \section{Conclusion} \label{sec:conc} This paper demonstrated how a fault tolerance technique interferes with security, more precisely with the SCA resiliency. Even further, it showed how a TMR scheme with diversity can be leveraged to improve said resiliency. As it stands, the use of reliability techniques in order to increase the security of a circuit is largely unexplored territory. The possibilities for future avenues of research are plenty, including the study of redundancy schemes other than TMR. \section*{Acknowledgment} This work has been partially conducted in the project ``ICT programme'' which was supported by the European Union through the ESF. It was also partially supported by the Estonian research council grant MOBERC35. \bibliographystyle{IEEEtran}
\section{Introduction} The rotational dynamics of small anisotropic material particles (\textit{e.g.} rods or disks) in turbulent flows has been the focus of a series of recent studies, see \cite{VothARFM2017} for a review. A few state-of-the-art experiments \cite{ParsaPRL2012,ParsaPRL2014,Marcus2014,Byron2015,ni_kramel_ouellette_voth_2015,BounouaPRL2018} as well as several numerical simulations and theoretical studies \cite{chevillard_meneveau_2013,Gustavsson2014,ni_ouellette_voth_2014,CandelierPRL2016,pujara_variano_2017,GustavssonPRL2017} have highlighted their complex behaviour, which is in part inherited from the non-trivial dynamics of the velocity gradient tensor along Lagrangian trajectories in developed turbulence. Preferential alignments of particles with intrinsic orientations of the small-scale turbulence structures have been observed. For example, prolate particles preferentially align with the vorticity direction \cite{Gustavsson2014}, which also tends to be in line with the second eigenvector of the rate-of-strain tensor \cite{chevillard_meneveau_2013}. On the contrary, oblate particles are mostly orthogonal to such a direction and, as a consequence, they tumble much faster than rod-like ones \cite{ParsaPRL2012}. However, while the phenomenology of orientations is now clear for homogeneous and isotropic turbulent flows (at least for particles of weak inertia), the case of non-homogeneous turbulent flows remains much less explored \cite{VothARFM2017}. Steps in this direction have been made for prolate particles evolving in mixing layers and jets \cite{Lin2003,Lin2012}, in turbulent pipe and channel flows \cite{Lin2005,MarchioliPF2010,Marchioli2013,Zhao2015,Challabotla2015} and, more recently, in high-Reynolds-number Taylor-Couette flow, where the evolution of rigid fibers has been experimentally tracked \cite{Bakhuis2019}. 
In the present study we extend the investigation of anisotropic particle dynamics to the paradigmatic case of turbulent convection in the Rayleigh-B\'enard (RB) system, which displays both an inhomogeneous and an anisotropic flow. The present study represents a first step into the exploration of this complex system and for this reason we limit the investigation to the case of a two-dimensional convective flow advecting anisotropic particles. A similar simplifying approach has been adopted in the past for other types of flows \cite{ParsaPF2011, GuptaPRE2014}. It is however expected that the dimensionality of the system affects the statistics of rotations of anisotropic particles, as has been shown in \cite{GuptaPRE2014} for the case of two-dimensional turbulence as compared to three-dimensional developed turbulence. The present study aims at addressing the following open questions: i) How are the orientation and the rotation of rods affected by the non-homogeneity of the turbulent convective flow? Specifically, what is the effect of the coherent flow structures that characterise a thermally-driven flow, in particular the boundary layer (BL), the thermal plumes and the large scale circulation (LSC)? ii) What are the trends when varying the particle shape aspect ratio and the turbulence intensity (i.e. the Rayleigh number)? iii) Finally, in which respects does the phenomenology of rod dynamics in a 2D system differ from the one in 3D? The article is organised as follows: In section \ref{sec:method} we present the methodology adopted in this study; in particular we define the model system and concisely describe the set of numerical experiments that have been carried out. Section \ref{sec:results} will first present the basic phenomenology of the system. It will then guide the reader through the analysis of preferential alignment, tumbling rate and their dependences on the particle anisotropy and on the level of turbulence in the flow. The conclusions, Sec. 
\ref{sec:conclusions}, summarize the main findings of this study and their implications, and discuss open topics and perspectives. In the appendix \ref{sec:appendix} we provide the derivation of the predictions for the tumbling rate of anisotropic particles in two dimensions that have been checked against the numerical measurements in this work. \begin{figure}[!hb] \begin{center} \subfigure[]{\includegraphics[width=1.0\columnwidth]{visual_new.jpg}} \subfigure[]{\includegraphics[width=1.0\columnwidth]{visual_nematic.jpg}} \caption{(a) Visualisation of anisotropic particles with aspect ratio $\alpha=100$ in the Rayleigh-B\'enard convective flow at $Ra=10^9$ and $Pr=1$. The size of the particles is arbitrary. The colour maps the temperature value, while the grey curves represent the instantaneous flow streamlines. (b) Visualisation of the corresponding nematic order parameter $N$. The value 1 (red) indicates horizontal alignment, while -1 (blue) indicates vertical alignment.}\label{fig:visual} \end{center} \end{figure} \section{Method \label{sec:method}} The approach adopted in this study is numerical. We perform a numerical integration of the Boussinesq system of equations, \begin{eqnarray} \partial_t \textbf{u} + \textbf{u}\cdot \bm{\partial} \textbf{u} &=& - \bm{\partial} p/\rho_0 + \nu\ \partial^2 \textbf{u} + \beta g (T-T_0) \textbf{\^y} \label{eq:NS}\\ \bm{\partial} \cdot \textbf{u} &=& 0 \label{eq:div}\\ \partial_t T + \textbf{u}\cdot \bm{\partial} T &=& \kappa\ \partial^2 T, \label{eq:T} \end{eqnarray} where $\textbf{u}(\textbf{x},t)$ and $T(\textbf{x},t)$ are respectively the velocity and temperature fields, and the parameters are the kinematic viscosity ($\nu$), the thermal diffusivity ($\kappa$), the reference density ($\rho_0$) at temperature $T_0$, the thermal expansion coefficient with respect to the same temperature $(\beta)$ and finally the intensity of the gravitational acceleration $(g)$. 
The domain is a two-dimensional rectangle, with size $H$ in the vertical direction ($y$-axis) and $L=2H$ in the horizontal one ($x$-axis). The boundary conditions on the horizontal planes are no-slip for the velocity, $\textbf{u}=0$, and isothermal for the temperature, $T= T_0 \pm \Delta T/2$, with the larger temperature at the bottom wall. The lateral boundary conditions are periodic for all fields. The latter choice is made for simplicity, in order to have a single direction of statistical non-homogeneity in the flow, i.e., the direction perpendicular to the walls. The flow is seeded with point-like anisotropic particles with position, $\textbf{r}(t)$, and orientation, $\textbf{p}(t)$, described by the following set of equations \cite{Jeffery1922}: \begin{eqnarray} \dot{\textbf{r}} &=& \textbf{u}(\textbf{r}(t),t)\\ \dot{\textbf{p}} &=& \Omega \textbf{p} + \tfrac{\alpha^2 - 1}{\alpha^2+1} \left( \mathcal{S}\textbf{p} - (\textbf{p}\cdot \mathcal{S}\textbf{p}) \textbf{p} \right) \label{eq:Jeffery3d}, \end{eqnarray} where $\mathcal{S} = (\bm{\partial} \textbf{u}+ \bm{\partial} \textbf{u}^T)/2$ and $ \Omega=(\bm{\partial} \textbf{u}-\bm{\partial} \textbf{u}^T)/2$ represent respectively the symmetric and anti-symmetric components of the fluid velocity gradient tensor, $\bm{\partial} \textbf{u}$, and $\alpha$ is the aspect ratio of the particle, assumed to be ellipsoidal and defined as major ($l$) over minor ($d$) axis, $\alpha=l/d$. In two dimensions the orientation equation (\ref{eq:Jeffery3d}) can be conveniently simplified by introducing the orientation angle $\theta$ with respect to the horizontal axis, $\textbf{p}=(p_x,p_y) = (\cos \theta,\sin \theta )$, and taking into account the incompressibility of the flow (see appendix \ref{sec:appendix}): \begin{equation} \label{eq:theta} \dot{\theta} = \frac{1}{2} \omega -\tfrac{\alpha^2 - 1}{\alpha^2+1} \left[ S_{xx} \sin(2\theta) - S_{xy} \cos(2\theta) \right]. 
\end{equation} Note that (\ref{eq:Jeffery3d}) is invariant with respect to the transformation $\textbf{p} \to -\textbf{p}$, meaning that it describes fore-and-aft symmetric particles; as a consequence (\ref{eq:theta}) is invariant with respect to the transformation $\theta \to \theta + \pi$.\\ We finally note that, when adimensionalized, e.g. by using $H$, $\tau_{\kappa}=H^2/\kappa$ and $\Delta T$ as reference scales for length, time and temperature, the above model system has four independent parameters: the Rayleigh number $Ra=\beta g \Delta T H^3/(\nu \kappa)$, the Prandtl number $Pr=\nu/\kappa$, the geometrical aspect ratio of the domain $\Gamma=L/H$ and that of the particle, $\alpha$. However, in the forthcoming analysis it will also be convenient to consider, as a reference time-scale, the dissipative time scale of the flow, $\tau_{\eta} = \sqrt{\nu/\bar{\epsilon}}$, with $\bar{\epsilon}$ the global mean energy dissipation rate. In the RB system such a time scale can also be expressed as $\tau_{\eta} = \tau_{\kappa}/\sqrt{Ra(Nu-1)}$, where $Nu$ is the mean Nusselt number in the system. In this study we explore the dependence on the particle aspect ratio $\alpha$ and on the $Ra$ number, which parametrizes the strength of the thermal convection in the flow. We evolve $N_p=O(10^5-10^6)$ particles divided into 20 aspect-ratio types, logarithmically spaced in the interval $\alpha \in [1,100]$. The Rayleigh number spans the range $Ra \in [2.44 \times 10^5, 8 \times 10^9]$. The simulations are performed with a well-tested computational fluid dynamics code, already adopted in a series of previous studies \cite{Calzavarini2019}. Table \ref{tab:1} reports the relevant control parameters of the numerical simulations. 
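As an aside, the two-dimensional orientation equation (\ref{eq:theta}) is easy to integrate numerically for a frozen, uniform velocity gradient. The following sketch (the function names, scheme and parameter values are ours, for illustration only, and do not reproduce the simulation code) uses a standard RK4 step and checks the solid-body-rotation limit, where $\mathcal{S}=0$ and any particle rotates at $\omega/2$ regardless of its aspect ratio.

```python
import math

def theta_dot(theta, omega, Sxx, Sxy, alpha):
    # 2-D Jeffery orientation equation in angle form, aspect ratio alpha.
    lam = (alpha**2 - 1.0) / (alpha**2 + 1.0)
    return 0.5 * omega - lam * (Sxx * math.sin(2.0 * theta)
                                - Sxy * math.cos(2.0 * theta))

def integrate_orientation(theta0, omega, Sxx, Sxy, alpha, t_end,
                          n_steps=10000):
    # RK4 time stepping with a frozen (steady, uniform) velocity gradient.
    dt = t_end / n_steps
    theta = theta0
    for _ in range(n_steps):
        k1 = theta_dot(theta, omega, Sxx, Sxy, alpha)
        k2 = theta_dot(theta + 0.5 * dt * k1, omega, Sxx, Sxy, alpha)
        k3 = theta_dot(theta + 0.5 * dt * k2, omega, Sxx, Sxy, alpha)
        k4 = theta_dot(theta + dt * k3, omega, Sxx, Sxy, alpha)
        theta += dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    return theta

# Solid-body rotation (S = 0): theta grows linearly at rate omega/2,
# independently of alpha; with omega = 2 and t_end = 1, theta(1) = 1.
th = integrate_orientation(0.0, omega=2.0, Sxx=0.0, Sxy=0.0,
                           alpha=10.0, t_end=1.0)
```

In an actual simulation the gradient components would of course be evaluated along the particle trajectory at each step.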
\begin{table}[!htb] \begin{center} \begin{tabular}{c | c | c | r } Ra & $N_x \times N_y$ & $\tau / \tau_H$ & $N_p \qquad $\\ \hline $2.44 \times 10^5$ & $128 \times 64$ & 280 & $1.25 \times 10^5$ \\ $1.95 \times 10^6$ & $256 \times 128$ & 242 & $1.25 \times 10^5$ \\ $1.56 \times 10^7$ & $512 \times 256$ & 159 & $1.25 \times 10^5$\\ $1.25 \times 10^8$ & $1024 \times 512$ & 188 & $2.5 \times 10^5$\\ $1.00 \times 10^9$ & $2048 \times 1024$ & 135 & $10^6$\\ $8.00 \times 10^9$ & $4096 \times 2048$ & 22 & $4 \times 10^6$\\ \end{tabular} \end{center} \caption{Main parameters of the numerical simulations: the Rayleigh number $Ra$; the horizontal ($N_x$) and vertical ($N_y$) size of the grid; the total duration of the simulation $\tau$ in integral turnover time units $\tau_H = H/u_{rms}$; the total number of particles ($N_p$) evolved in each simulation.}\label{tab:1} \end{table} \section{Results \label{sec:results}} We begin with a detailed analysis of the particle alignment and tumbling rate as a function of the aspect ratio in prescribed flow conditions at $Ra=10^9$ and $Pr=1$. The dependence of these phenomena on the strength of the thermal forcing, parametrised by the $Ra$ number, will be addressed in a separate section. \subsection{Preferential alignment} Figure \ref{fig:visual}(a) displays a visualisation of an instantaneous configuration of highly anisotropic particles, a set of $5\times10^4$ particles with $\alpha = 100$, together with the fluid flow streamlines and a heat-map of the temperature field. One can appreciate that the particle orientation is visually correlated with the flow structures. Close to the walls particles appear preferentially horizontal, while along and inside upwelling and downwelling thermal plumes they look predominantly vertical. 
Furthermore, they seem to be influenced by the presence of a LSC flow structure, as is evident from the clear tendency to align along streamlines, and to a minor extent by the presence of secondary gyres in the system. In order to better appreciate the trend displayed by the particle orientation as a function of the local position, one can use the nematic order parameter \cite{GuptaPRE2014}, \begin{equation} N \equiv 2 \ (\textbf{p}\cdot \textbf{\^x})^2 -1 = 2 (\cos \theta)^2 - 1, \end{equation} which takes the value 1 in the case of a perfect horizontal alignment along the x-axis, and -1 if the particles are vertically aligned. The visualisation, provided in figure \ref{fig:visual}(b), shows the local value of $N$ for the same instant of time presented in panel (a). The phenomena of i) preferential alignment at the wall, ii) vertical orientation in plume-dominated regions and iii) ordering along streamlines produced by energetic vortices are now more evident. It is worth noting that, due to the random initial conditions that we adopted for the particle orientations, the quantity $N$ cannot be approximated by a smooth field, not even at long times \cite{Zhao2019}. This is the reason why in figure \ref{fig:visual}(b) we still observe dots of a different colour inside large domains of particles mostly aligned along the same direction. In order to quantitatively appreciate the mean trend displayed by the orientation as a function of the position in the system, specifically the distance from the walls, and at the same time as a function of the aspect ratio of the particles, we compute the average $\langle N \rangle (y)$, where $\langle \ldots \rangle$ is taken over time and over the particles with given $y \pm \delta y$ coordinates. 
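The binned average $\langle N \rangle (y)$ just described can be sketched as follows. The helper below and its synthetic isotropic-orientation check are our own illustration (the actual post-processing of the simulation data may differ):

```python
import numpy as np

def nematic_profile(theta, y, H, n_bins=64):
    # <N>(y): bin particles by wall-normal coordinate y and average
    # N = 2 cos^2(theta) - 1 over the particles falling in each bin.
    N = 2.0 * np.cos(theta) ** 2 - 1.0
    edges = np.linspace(0.0, H, n_bins + 1)
    idx = np.clip(np.digitize(y, edges) - 1, 0, n_bins - 1)
    prof = np.array([N[idx == i].mean() for i in range(n_bins)])
    return 0.5 * (edges[:-1] + edges[1:]), prof

# Sanity limits: theta = 0 (horizontal) gives N = 1,
# theta = pi/2 (vertical) gives N = -1, and randomly oriented
# particles should average to <N> ~ 0 at every height.
rng = np.random.default_rng(1)
y = rng.uniform(0.0, 1.0, 200000)
theta = rng.uniform(0.0, np.pi, y.size)
centers, prof = nematic_profile(theta, y, H=1.0)
```

For isotropic orientations the profile fluctuates around zero within statistical noise, which is the baseline against which the wall and bulk alignments discussed below stand out.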
The interpretation of the mean value of the nematic order parameter over a given region of space is slightly different from that of its instantaneous value: while the meaning of the limiting cases $\pm 1$ remains the same, the zero value is likely to indicate a statistically isotropic distribution (given the unsteady nature of the flow, the case in which all particles at a given height are oriented at a $\pm 45^{\circ}$ angle is unlikely). Figure \ref{fig:nematic} shows how the mean orientational ordering varies as a function of the distance from the top and bottom walls, that is to say in the direction of inhomogeneity of the flow. One can note the symmetry of the curves with respect to the mid plane, which attests to the excellent convergence of the simulations. For the isotropic particles, $\alpha = 1$, as expected there is no preferential orientation and $\langle N \rangle =0$ at any level. However, as soon as the shape anisotropy comes into play, particles tend to align preferentially horizontally next to the walls and weakly vertically in the bulk of the system. It appears that, at the considered $Ra$, for all anisotropic particle classes the statistically random orientation region occurs at roughly one third of the box height. In the case of highly anisotropic particles ($\alpha = 100$) the orientation is nearly perfectly horizontal at the system boundaries. We stress that this remarkable effect cannot be related to a direct interaction of the rods with the walls (wall-rod collisions are not implemented in our model system) but is rather a dynamical effect mediated by the properties of the fluid gradient at the particle position in that region of the domain. We will come back to this important feature later. \begin{figure}[!ht] \begin{center} \includegraphics[width=1.0\columnwidth]{nematic.pdf} \caption{Local nematic order parameter as a function of the distance from a horizontal wall in the system, for different particle aspect ratios at $Ra=10^9$, $Pr=1$. 
We compute the average $\langle N \rangle (y)$, where $\langle \ldots \rangle$ is taken over time and over the particles with given $y \pm \delta y$ coordinates, here with $ \delta y = H/2048$. It is shown that the more anisotropic the particle, the more it displays a non-random orientation. For high values of $\alpha$ the alignment is nearly perfectly parallel to the wall in flow regions close to the wall, while in the bulk a clear tendency to be perpendicular to the walls is observed.}\label{fig:nematic} \end{center} \end{figure} So far we have observed that the particles preferentially align along the Cartesian axes of the system. However, since the particles do not interact directly with the wall boundaries, this must be a consequence of the structure of the flow field in the system. In order to better understand this aspect we measure the average orientation angle of the particles with respect to a given vector $\textbf{a}$; this is done by taking $$\Theta_a = \langle \arccos{ \left| \textbf{p} \cdot \frac{\textbf{a}}{|| \textbf{a}||} \right|} \rangle,$$ where it is to be noted that $\Theta_a \in [0,\pi/2]$ due to the fore-aft symmetry of the particles. We consider the cases in which the $\textbf{a}$ vector is again the horizontal direction (x-axis) but also the fluid velocity $\textbf{u}$, the eigenvector $\textbf{e}_1$ corresponding to the largest eigenvalue of the strain rate tensor $\mathcal{S}$, and the temperature gradient $\mathbf{\partial} T$. The results reported in figure \ref{fig:orientation} illustrate the behaviour of the mean angle at increasing distance from the wall. The overall strongest alignment is found for highly anisotropic particles with the direction of the flow, $\textbf{u}$ (fig. \ref{fig:orientation}(a)). We note that such an alignment is very strong near the system boundaries, where the velocity is mostly parallel to the x axis (see fig. 
\ref{fig:orientation}(b)), but the alignment remains noticeable also in the bulk, where the velocity has a dominant vertical component. On the contrary, the alignment of the anisotropic particles with $\textbf{e}_1$ is weak, fig. \ref{fig:orientation}(c), a feature that was already observed in the case of homogeneous 2D turbulent flow \cite{GuptaPRE2014}. This can also be understood by reformulating eq. (\ref{eq:theta}) in terms of the angle $\theta_1$ formed by $\textbf{e}_1$ with the x-axis. This gives (see \ref{sec:appendix}): \begin{equation} \label{eq:theta1} \dot{\theta} = \frac{1}{2} \omega -\frac{\alpha^2 - 1}{\alpha^2+1}\sqrt{ S_{xx}^2 + S_{xy}^2}\ \sin(2(\theta-\theta_1)). \end{equation} If vorticity were absent, the above equation would have fixed points $\theta = \theta_1 + n \pi/2 $ with $n=0,1$, independently of the aspect ratio. This means that the alignments with $\textbf{e}_1$ and with the orthogonal eigenvector $\textbf{e}_2$ are equally favoured. However, the presence of vorticity, which is moreover local and time dependent, inevitably perturbs and removes such equilibrium positions. Another salient aspect is the nearly orthogonal alignment of rod-like particles with the local temperature gradient (fig. \ref{fig:orientation}(d)). This feature is related to the fact that the equation for the temperature gradient orientation $\hat{\bm{\partial}T} = \bm{\partial}T / || \bm{\partial}T ||$ shares similarities with the one for anisotropic particles. One has \begin{equation} \dot{\hat{\bm{\partial}T}} = \Omega \hat{\bm{\partial}T} - \mathcal{S} \hat{\bm{\partial}T} + (\hat{\bm{\partial}T})^T \mathcal{S}\hat{\bm{\partial}T} \hat{\bm{\partial}T} + \mathcal{O}(\kappa), \end{equation} where $\mathcal{O}(\kappa)$ denotes the diffusive terms that are linear in $\kappa$. 
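For completeness, we sketch how this equation is obtained. Taking the gradient of (\ref{eq:T}) gives, for $\textbf{G}=\bm{\partial}T$, $\dot{\textbf{G}} = -(\bm{\partial}\textbf{u})^T \textbf{G} + \kappa\, \partial^2 \textbf{G}$ along a fluid trajectory. Normalizing, $\hat{\textbf{G}} = \textbf{G}/||\textbf{G}||$, and subtracting the component of the right-hand side parallel to $\hat{\textbf{G}}$ (which only changes the norm, not the direction) yields \begin{eqnarray*} \dot{\hat{\textbf{G}}} &=& -(\bm{\partial}\textbf{u})^T \hat{\textbf{G}} + \left[ \hat{\textbf{G}}^T (\bm{\partial}\textbf{u})^T \hat{\textbf{G}} \right] \hat{\textbf{G}} + \mathcal{O}(\kappa) \\ &=& \Omega \hat{\textbf{G}} - \mathcal{S} \hat{\textbf{G}} + (\hat{\textbf{G}}^T \mathcal{S}\hat{\textbf{G}})\, \hat{\textbf{G}} + \mathcal{O}(\kappa), \end{eqnarray*} where we used $(\bm{\partial}\textbf{u})^T = \mathcal{S} - \Omega$ and $\hat{\textbf{G}}^T \Omega \hat{\textbf{G}} = 0$, recovering the equation above with $\hat{\textbf{G}} = \hat{\bm{\partial}T}$.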
It is possible to show that, when the diffusive terms are neglected, the unit vector $\hat{\bm{\partial}T}$ follows the same evolution as a vector orthogonal to $\textbf{p}$ for $\alpha \to \infty$ (see \ref{sec:appendix}). This means that, in a statistical sense, and when the effect of thermal diffusion is negligible (limit of large Prandtl number), the orientation of $\bm{\partial}T$ shall be orthogonal to that of thin rods. To our knowledge this phenomenon has never been reported or tested before. We also note that such an analogy is independent of the dimensionality, and therefore it must hold also in 3D (in the 3D case the orientation of very oblate particles, disks, will preferentially align along the thermal gradient direction). The origin of this alignment is analogous to the one that exists between the equation for the vorticity director and the Jeffery equation for a thin rod ($\alpha\to \infty$) \cite{PumirWilkinson2011}. \begin{figure}[!ht] \begin{center} \includegraphics[width=1.0\columnwidth]{alignement.pdf} \caption{Mean orientation angle with respect to the first eigenvector of the rate-of-strain tensor $\textbf{e}_1$ (a); the fluid velocity vector $\textbf{u}$ (b); the $x$ axis (c); the temperature gradient $\bm{\partial}T$ (d), for various particle aspect ratios ranging from spheres $\alpha=1$ to rods $\alpha=100$. $Ra=10^9$, $Pr=1$.}\label{fig:orientation} \end{center} \end{figure} \subsection{Tumbling rate} The observations made in the previous section can be further supported by means of a study of the rotation rate of the particles. Because this rotation is around an axis orthogonal to the particle symmetry direction, it is common to name it tumbling. We study here the quadratic tumbling rate intensity, which can be expressed in terms of the quantity $ \dot{\textbf{p}} \cdot \dot{\textbf{p}} = \dot{\theta}^2$. 
First we visualise the instantaneous value of such a quantity both for isotropic, $\alpha=1$, and highly elongated particles, $\alpha=100$, see Fig. \ref{fig:visual-tumbling}. Note that the quadratic tumbling rate of isotropic particles is by definition proportional to the local fluid vorticity, via $\dot{\theta}^2 = \omega^2 / 4$. As a result we see that $\alpha=1$ particles tumble vigorously near the walls, where the vorticity is generated, and close to vortex cores. The elongated particles clearly tumble much less at the walls, but show a similar tumbling rate distribution in the bulk, maximal inside vortices although smeared out as compared to the case of spheres.\\ \begin{figure}[!hb] \begin{center} \subfigure[]{\includegraphics[width=1.0\columnwidth]{visual_tumbling_sphere.jpg}} \subfigure[]{\includegraphics[width=1.0\columnwidth]{visual_tumbling_rod.jpg}} \caption{Visualisation of the instantaneous local value of the quadratic tumbling rate for isotropic $\alpha=1$ (a) and highly elongated particles $\alpha=100$ (b). The flow conditions are $Ra=10^9$ and $Pr=1$; the instant of time (and correspondingly the flow field) is the same as in Fig. \ref{fig:visual}. }\label{fig:visual-tumbling} \end{center} \end{figure} Such qualitative differences are again better understood by looking at the mean behaviours. Figure \ref{fig:tumbling-rate} shows the mean quadratic tumbling rate (we use the average $\langle \ldots \rangle$ with the same meaning as before) normalized by the inverse of the squared global dissipative time-scale, i.e. $\overline{\epsilon}/\nu = \tau_{\eta}^{-2}$, where $\epsilon = 2 \nu \mathcal{S}:\mathcal{S}$. Although this normalization is not the most suitable for such a type of flow, which is strongly inhomogeneous, it has the advantage of allowing for a direct comparison of the intensity of tumbling among different vertical positions. Indeed, here we clearly observe that the tumbling rate is exceptionally high in the boundary layer. 
Furthermore, it is much higher for spheres as compared to rods. This hierarchy is reversed in the bulk of the flow, where rods tumble slightly faster than spheres, Fig.~\ref{fig:tumbling-rate}(a). What happens in the bulk of the flow? As we mentioned in the introduction, in three-dimensional turbulence anisotropic particles develop correlations with the flow gradient and as a result the mean tumbling rate has a peculiar behaviour as a function of the aspect ratio of the particles. In particular, prolate particles ($\alpha > 1$) show a rapid decrease of the mean tumbling rate, $(\langle \dot{\textbf{p}} \cdot \dot{\textbf{p}} \rangle)^{1/2}$, for increasing $\alpha$ and a saturation, occurring at around $\alpha \simeq 5$, to a value which is about 80\% lower than the root-mean-square tumbling rate for spherical particles \cite{ParsaPRL2012}. In two-dimensional turbulence such an effect has been reported to reverse \cite{GuptaPRE2014}: a smooth increase of tumbling with $\alpha$ has been observed in 2D, although with a dependence on the type of forcing applied to sustain the turbulent flow. In order to better understand the phenomenology of rotation it is particularly useful to adimensionalize the quadratic tumbling rate at a given distance from the wall by the time scale based on the local energy dissipation rate, $\langle \epsilon \rangle/\nu$, which is shown in Fig. \ref{fig:tumbling-rate}(b). It appears that with this rescaling the rotation rate at the wall for spheres is close to the value 1/4, while for anisotropic particles it tends to vanish. This feature is explained by taking into account that close to the walls the shear term $\dot{\gamma} = \partial_y u_x$ is the dominant one. 
If we assume it to be independent of time and space (along the x direction) and plug it into the Jeffery equation, one gets the prediction for the tumbling rate in the case in which particles perform so-called Jeffery orbits (see \ref{sec:appendix} for a derivation): \begin{equation}\label{jeffery-tumbling} \frac{\langle \dot{\theta}^2 \rangle}{\langle\epsilon\rangle/\nu} = \frac{\alpha}{2(\alpha^2+1)} \end{equation} We observe that in the isotropic limit, $\alpha = 1$, one gets $1/4$, while in the very elongated case the rotation rate vanishes. This simplified model prediction is in excellent agreement with the simulations (see the inset of fig. \ref{fig:tumbling-rate}(b)). It is indeed known that Jeffery orbits of prolate particles are characterized by a non-uniform tumbling velocity, which reaches its minimum when the particle orientation is along the streamlines (and is maximal in the shear direction) \cite{Jeffery1922}. This phenomenon is responsible for producing the observed alignment of particles in near-wall regions. \begin{figure}[!ht] \begin{center} \subfigure[]{\includegraphics[width=1.0\columnwidth]{plot_pdot2_folded_fig1.pdf}} \subfigure[]{\includegraphics[width=1.0\columnwidth]{plot_pdot2_folded_fig2.pdf}} \caption{(a) Mean quadratic tumbling rate, $\langle \dot{\theta}^2 \rangle $, as a function of the distance from the wall $y\in [0,H/2]$ for different particle aspect ratios. The tumbling rate is normalized by means of the global energy dissipation rate $\overline{\epsilon}$. The inset reports a zoomed-in view of the wall region. (b) Same as before but with a normalization based on the local energy dissipation rate $\langle \epsilon \rangle$. The dotted line reports the no-correlation prediction (\ref{hyp:random}) for $\alpha=100$; the continuous horizontal lines give the values of the isotropic flow prediction (\ref{hyp:iso}) for $\alpha=1$ (minimum value) and $\alpha=100$ (maximum value). 
The inset reports the values (datapoints) of the normalized quadratic tumbling rate at the wall ($y = 0$) and a comparison with the prediction (\ref{jeffery-tumbling}), which describes the tumbling in a plane shear flow, also named Jeffery tumbling.} \label{fig:tumbling-rate} \end{center} \end{figure} A quantitative prediction can also be attempted for the bulk of the system, along the following lines. One can assume that in the turbulent regime i) the fluid velocity gradient components are statistically independent and ii) they are uncorrelated with the particle orientation angle. This leads to: \begin{equation}\label{hyp:random} \langle \dot{\theta}^2 \rangle = \frac{1}{4}\langle \omega^2 \rangle + \frac{1}{2}\left(\frac{\alpha^2 - 1}{\alpha^2+1}\right)^2 \left[ \langle S_{xx}^2 \rangle + \langle S_{xy}^2 \rangle \right] \end{equation} The further assumption iii) of statistical isotropy of the flow (see appendix \ref{sec:appendix}) leads to: \begin{equation}\label{hyp:iso} \frac{\langle \dot{\theta}^2 \rangle}{\langle \epsilon \rangle /\nu} = \frac{1}{4} + \frac{1}{8} \left(\frac{\alpha^2 - 1}{\alpha^2+1}\right)^2. \end{equation} Note that the above expressions correctly predict an increase of the quadratic tumbling rate with $\alpha$. However, the predicted tumbling rate appears to be quite far from what is observed in the bulk of the system, see Fig. \ref{fig:tumbling-rate}(b). The reason for this discrepancy can be principally ascribed to the assumption of statistical isotropy of the flow. Indeed, a direct test of isotropy, reported in Fig. \ref{fig:isotropy}, shows the net dominance of the vorticity in the bulk of the system. A direct comparison of eq. (\ref{hyp:random}) with the measurements captures the correct trend for the tumbling in the bulk. This is reported in Fig. \ref{fig:tumbling-rate}(b), where the dotted line corresponds to eq. (\ref{hyp:random}) for $\alpha=100$. 
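Prediction (\ref{jeffery-tumbling}) can be verified by direct quadrature over one Jeffery orbit in a steady plane shear $u_x = \dot{\gamma} y$, for which $\omega = -\dot{\gamma}$, $S_{xx}=0$, $S_{xy}=\dot{\gamma}/2$ and $\epsilon/\nu = \dot{\gamma}^2$. A sketch, with our own helper names (not the appendix derivation itself):

```python
import numpy as np

def trap(f, x):
    # Trapezoidal rule on a uniform grid.
    dx = x[1] - x[0]
    return dx * (f.sum() - 0.5 * (f[0] + f[-1]))

def tumbling_plane_shear(alpha, gamma_dot=1.0, n=200001):
    # In plane shear, |theta_dot| = (gamma_dot/2) * (1 - lam*cos(2 theta))
    # with lam = (alpha^2-1)/(alpha^2+1). The time average over one orbit
    # is <theta_dot^2>_t = (int theta_dot dtheta) / (int dtheta/theta_dot),
    # normalized here by epsilon/nu = gamma_dot^2.
    lam = (alpha**2 - 1.0) / (alpha**2 + 1.0)
    theta = np.linspace(0.0, np.pi, n)
    tdot = 0.5 * gamma_dot * (1.0 - lam * np.cos(2.0 * theta))
    return (trap(tdot, theta) / trap(1.0 / tdot, theta)) / gamma_dot**2

# Analytical prediction: alpha / (2 (alpha^2 + 1)); it gives 1/4 for
# alpha = 1 and vanishes for very elongated particles.
pred = lambda alpha: alpha / (2.0 * (alpha**2 + 1.0))
val = tumbling_plane_shear(3.0)
```

The quadrature reproduces the analytical value to machine precision, which is consistent with the agreement seen in the inset of Fig. \ref{fig:tumbling-rate}(b).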
Note that the \textit{no-correlation} prediction obviously fails in the BL and near-wall regions, where the already discussed Jeffery tumbling occurs. \begin{figure}[!ht] \begin{center} \includegraphics[width=1.0\columnwidth]{isotropy_2_folded.pdf} \caption{Check of local small-scale flow isotropy: the continuous lines represent $\langle \omega^2 \rangle$, $\langle S_{xx}^2 \rangle$ and $\langle S_{xy}^2 \rangle$ in $\langle \epsilon \rangle / \nu$ units (i.e. local dissipative units) as a function of the distance from the wall $y \in \left[ 0,H \right]$. The colour shadow around the lines indicates the standard deviation error bars. The dashed lines provide the values expected in the isotropic case, $\langle \omega^2 \rangle \nu / \langle \epsilon \rangle = 1$ and $\langle S_{xx}^2 \rangle \nu / \langle \epsilon \rangle = \langle S_{xy}^2 \rangle \nu / \langle \epsilon \rangle = 1/8$. The dotted line reports the value expected for plane shear flow, when the only non-null velocity gradient component is $\partial_y u_x$. $Ra=10^9$, $Pr=1$.} \label{fig:isotropy} \end{center} \end{figure} \subsection{Rayleigh number dependence} How general is the description we have provided so far? In this section we examine the dependence of our findings on the level of turbulence in the system. In order to do so, we compare the averaged nematic orientations and quadratic tumbling rates of isotropic ($\alpha=1$) and highly anisotropic ($\alpha=100$) particles as the $Ra$ number of the flow is varied. This is obtained by means of numerical simulations in which all the simulation parameters are kept the same except the size of the bounding box. We explore the range $Ra \in \left[2.4\times 10^5,8 \times10^9 \right] $. 
\begin{figure}[!ht] \begin{center} \includegraphics[width=1.0\columnwidth]{nematic_Ra.pdf} \caption{Local nematic order parameter as a function of the distance from a horizontal wall in the system, for particles of aspect ratio $\alpha =100$ and for Rayleigh numbers $Ra \in \left[2.4\times 10^5,8 \times10^9 \right] $.}\label{fig:nematic_Ra} \end{center} \end{figure} The results on nematic ordering for the most anisotropic particles, $\alpha = 100$, are reported in fig. \ref{fig:nematic_Ra}. One can appreciate that the wall alignment at the system boundaries is a persistent feature at any $Ra$ number. However, larger values of $Ra$ produce a thinning of such regions, which probably reflects the thinning of the kinetic BL. The bulk region of the system tends to lose any trace of preferentially vertical alignment and gets closer to a value $\langle N \rangle \sim 0$, indicating an isotropization of the orientation. Indeed the mechanism leading to the vertical alignment in the bulk for highly anisotropic particles is the same as the one producing the horizontal alignment at the walls, i.e. plane-shear-dominated tumbling occurring at the edge of the large scale circulation cells where up- or down-welling occurs. The regularity of the LSC weakens as $Ra$ increases, and so does the observed vertical alignment in the bulk. Figure \ref{fig:tumbling_Ra}(a) reports the tumbling rate for spheres and elongated particles with the global dissipative time normalization at varying $Ra$ numbers. We observe an overall enhancement of tumbling at increasing $Ra$, both in the near-wall and bulk regions. The measurements also confirm the stronger tumbling for spheres than for rods close to the walls. Figure \ref{fig:tumbling_Ra}(b), which uses the local energy dissipation rate normalization, highlights the attainment of isotropy in the bulk of the system at increasing Rayleigh number. 
The predictions (\ref{hyp:iso}), based on decorrelation from the velocity gradient and on isotropization, are nearly satisfied for the highest $Ra$ simulated. The system isotropization at the highest $Ra$ is independently confirmed by a direct check of isotropy (see Additional Material). This measurement confirms that in the asymptotic $Ra$ limit rods will tumble slightly more than spheres in the bulk of the system, just the opposite trend as compared to rods in 3D turbulent flows. \begin{figure}[!ht] \begin{center} \subfigure[]{\includegraphics[width=0.87\columnwidth]{plot_pdot2_folded_fig1_Ra.pdf}} \subfigure[]{\includegraphics[width=0.87\columnwidth]{plot_pdot2_folded_fig2_Ra.pdf}} \caption{(a) Mean quadratic tumbling rate, $\langle \dot{\theta}^2 \rangle$, as a function of the distance from the wall $y\in [0,H/2]$ at different $Ra$ numbers for $\alpha=1$ (top) and $\alpha=100$ (bottom). The tumbling rate is normalized by means of the global energy dissipation rate $\overline{\epsilon}$. (b) Same as above but with a normalization based on the local energy dissipation rate $\langle \epsilon \rangle$. The dashed horizontal lines give the values of the isotropic flow prediction (\ref{hyp:iso}), respectively $1/4$ and $3/8$ for $\alpha=1$ and $\alpha=100$.}\label{fig:tumbling_Ra} \end{center} \end{figure} \section{Conclusions \label{sec:conclusions}} In this work we have explored the rotational dynamics of anisotropic fluid tracer particles in the Rayleigh-B\'enard flow in two dimensions. We showed that elongated particles align preferentially with the direction of the fluid flow, i.e., horizontally close to the isothermal walls and dominantly vertically in the bulk. This behaviour is due to the large-scale circulation flow structure, which induces strong shear at the wall boundaries and in up/down-welling regions.
In shear-dominated regions the particles perform Jeffery orbits and therefore their rotation rate slows down for orientations parallel to the flow (and orthogonal to the shear direction). The near-wall horizontal alignment of rods persists as the Rayleigh number increases, while the vertical orientation in the bulk is progressively weakened by the corresponding increase of turbulence intensity. Furthermore, we showed that very elongated particles are nearly orthogonal to the orientation of the temperature gradient, an alignment which is independent of the system dimensionality and becomes exact in the limit of infinite Prandtl number. Tumbling rates are extremely vigorous adjacent to the walls, in particular for nearly isotropic particles. At $Ra=10^9$ the root-mean-square tumbling rate for spheres is $\mathcal{O}(10)$ times stronger than for rods. In the turbulent bulk the situation reverses and rods tumble slightly faster than isotropic particles, in agreement with earlier observations in two-dimensional turbulence. Additionally, the tumbling dynamics at the center of the system allows one to assess the level of statistical isotropy of the flow. It appears that such isotropy is not yet fully recovered at the highest Rayleigh number simulated in this study ($Ra = 8 \times 10^9$). This suggests the possibility of using rods as a proxy to estimate isotropy in two-dimensional flows. We have provided a relation that links the tumbling rate to the aspect ratio in the case of a statistically isotropic flow. We plan, in a forthcoming study, to extend our investigation to the case of anisotropic particles in a realistic three-dimensional convective system.\\ \textit{Acknowledgments} The M\'eso-centre de Calcul Scientifique Intensif de l'Universit\'e de Lille (\url{hpc.univ-lille.fr}) is acknowledged for providing computing resources.
\section{APPENDIX} \label{sec:appendix} \subsection{Equation for the dynamics of the particle orientation angle $\theta$ (or Jeffery equation in 2D)}\label{sec:jeffery-2d} The dynamics of the orientation unit vector $\textbf{p}$ of an inertialess, axisymmetric ellipsoidal particle in a spatially linear flow is described by the following equation: \begin{eqnarray} \dot{\textbf{p}} = \Omega \textbf{p} + \Lambda \left( \mathcal{S}\textbf{p} - (\textbf{p}^{T}\mathcal{S}\textbf{p})\ \textbf{p} \right) \end{eqnarray} with \begin{eqnarray} \Omega &\equiv& \frac{1}{2}\left( \bm{\partial } \bm{u} - (\bm{\partial} \bm{u})^{T}\right) , \quad \mathcal{S} \equiv \frac{1}{2}\left( \bm{\partial} \bm{u} + (\bm{\partial} \bm{u})^{T}\right),\nonumber\\ \bm{\partial} \bm{u} &=& \begin{pmatrix} \frac{\partial u_x}{\partial x} & \frac{\partial u_x}{\partial y} & \frac{\partial u_x}{\partial z}\\ \frac{\partial u_y}{\partial x} & \frac{\partial u_y}{\partial y} & \frac{\partial u_y}{\partial z}\\ \frac{\partial u_z}{\partial x} & \frac{\partial u_z}{\partial y} & \frac{\partial u_z}{\partial z} \end{pmatrix} , \quad \Lambda = \frac{\alpha^2 - 1}{\alpha^2+1}, \end{eqnarray} where $\alpha = l/d$ is the length-over-diameter aspect ratio and $ \bm{\partial} \bm{u}(\textbf{r}(t),t)$ is the fluid velocity gradient tensor at the particle position $\textbf{r}(t)$.
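As a concrete illustration, the equation above can be integrated numerically for a steady plane shear. The following Python sketch (our illustration, not the solver used for the simulations in this paper) evolves the orientation and recovers the Jeffery tumbling period $T=\pi(\alpha+1/\alpha)/\dot{\gamma}$ derived at the end of this appendix.

```python
# Illustrative sketch: integrate the Jeffery equation for a rod in the
# steady plane shear u_x = gdot*y and measure the tumbling period, to be
# compared with the analytic Jeffery result T = pi*(alpha + 1/alpha)/gdot.
import numpy as np

def tumbling_period(alpha, gdot=1.0, dt=1.0e-4):
    """Time for the orientation to rotate by pi in a plane shear du_x/dy = gdot."""
    lam = (alpha**2 - 1.0) / (alpha**2 + 1.0)
    grad_u = np.array([[0.0, gdot], [0.0, 0.0]])  # only du_x/dy is non-zero
    S = 0.5 * (grad_u + grad_u.T)                 # rate-of-strain tensor
    W = 0.5 * (grad_u - grad_u.T)                 # rotation-rate tensor
    theta, t = 0.0, 0.0                           # start aligned with the flow
    while theta > -np.pi:                         # the angle decreases in time
        p = np.array([np.cos(theta), np.sin(theta)])
        # Jeffery right-hand side: dp/dt = W p + lam (S p - (p^T S p) p)
        dp = W @ p + lam * (S @ p - (p @ S @ p) * p)
        theta += dt * (p[0] * dp[1] - p[1] * dp[0])  # dtheta/dt = (p x dp)_z
        t += dt
    return t

for alpha in (2.0, 3.0):
    print(alpha, tumbling_period(alpha), np.pi * (alpha + 1.0 / alpha))
```

With $\dot{\gamma}=1$ the measured periods should agree with $\pi(\alpha+1/\alpha)$ to within a fraction of a percent for the small time step used.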
In two dimensions the above equation can be simplified by using the following relations: \begin{widetext} \begin{eqnarray} \textbf{p} &=& \begin{pmatrix} p_x\\ p_y \end{pmatrix} = \begin{pmatrix} \cos{\theta}\\ \sin{\theta} \end{pmatrix} , \quad \\ \Omega &=& \begin{pmatrix} 0 & \Omega_{xy} \\ -\Omega_{xy} & 0 \end{pmatrix} = \begin{pmatrix} 0 & \frac{1}{2}\left( \frac{\partial u_x}{\partial y} - \frac{\partial u_y}{\partial x}\right) \\ \frac{1}{2}\left( \frac{\partial u_y}{\partial x} - \frac{\partial u_x}{\partial y}\right)& 0 \end{pmatrix} = \begin{pmatrix} 0 & -\omega/2 \\ \omega/2 & 0 \end{pmatrix} , \quad \\ \mathcal{S} &=& \begin{pmatrix} S_{xx}& S_{xy} \\ S_{xy} & S_{yy} \end{pmatrix} = \begin{pmatrix} \frac{\partial u_x}{\partial x} & \frac{1}{2}\left( \frac{\partial u_x}{\partial y} + \frac{\partial u_y}{\partial x}\right) \\ \frac{1}{2}\left( \frac{\partial u_y}{\partial x} + \frac{\partial u_x}{\partial y}\right)& \frac{\partial u_y}{\partial y} \end{pmatrix} = \begin{pmatrix} S_{xx}& S_{xy} \\ S_{xy} & - S_{xx} \end{pmatrix} \end{eqnarray} \end{widetext} where $\omega$ is the vorticity pseudo-scalar (defined by $\omega \hat{z} = \bm{\partial} \times \textbf{u}$), and the relation $S_{yy} = -S_{xx}$ is a direct consequence of the flow incompressibility, $\bm{\partial} \cdot \textbf{u} = 0$. The equations for $p_x$ and $p_y$ are redundant; we develop only the one for the $x$ component: \begin{eqnarray} \dot{p_x} &=& \Omega_{xy}\ p_y\nonumber\\ &+& \Lambda \left[ S_{xx} (p_x - p_x^3 + p_x p_y^2 ) + S_{xy} ( p_y - 2 p_x^2 p_y )\right] \end{eqnarray} By introducing the angle $\theta$, it becomes: \begin{eqnarray} \dot{\theta} &=& - \Omega_{xy} -\Lambda \left[ S_{xx} \sin(2\theta) - S_{xy} \cos(2\theta) \right] \end{eqnarray} or \begin{eqnarray}\label{eq:jef-app} \dot{\theta} &=& \frac{1}{2} \omega -\Lambda \left[ S_{xx} \sin(2\theta) - S_{xy} \cos(2\theta) \right].
\end{eqnarray} The latter equation is consistent with equation (1) in Parsa \textit{et al.} (2011) \cite{ParsaPF2011}. \subsection{Equation of particle orientation with respect to the rate-of-strain eigenvalues}\label{sec:eq-e1} The symmetric tensor $\mathcal{S}$ has two orthogonal eigenvectors $\textbf{e}_1$ and $\textbf{e}_2$ and real eigenvalues $\lambda_1, \lambda_2$, which are opposite in sign due to the incompressibility of the flow. This means that: \begin{equation} \mathcal{S} = \begin{pmatrix} S_{xx} & S_{xy}\\ S_{xy} & - S_{xx} \end{pmatrix} = (\textbf{e}_1, \textbf{e}_2) \begin{pmatrix} \lambda_1 & 0\\ 0 & -\lambda_1 \end{pmatrix} \begin{pmatrix} \textbf{e}_1^{T}\\ \textbf{e}_2^{T} \end{pmatrix} \end{equation} where $\lambda_1 = \sqrt{ S_{xx}^2 + S_{xy}^2}$. By introducing $ \textbf{e}_1 = \begin{pmatrix} \cos{\theta_1}\\ \sin{\theta_1} \end{pmatrix} $ and $ \textbf{e}_2 = \begin{pmatrix} \cos{(\theta_1+\pi/2)}\\ \sin{(\theta_1+\pi/2)} \end{pmatrix} = \begin{pmatrix} - \sin{\theta_1}\\ \cos{\theta_1} \end{pmatrix} $ one gets: \begin{eqnarray} S_{xx} &=& \sqrt{S_{xx}^2 + S_{xy}^2}\ \cos(2\theta_1)\ ,\nonumber\\ S_{xy} &=& \sqrt{S_{xx}^2 + S_{xy}^2}\ \sin(2\theta_1) \end{eqnarray} which can be plugged into eq. (\ref{eq:jef-app}) to obtain \begin{equation} \dot{\theta} = \frac{1}{2} \omega -\frac{\alpha^2 - 1}{\alpha^2+1}\sqrt{ S_{xx}^2 + S_{xy}^2}\ \sin(2(\theta-\theta_1)). \end{equation} \subsection{Lagrangian equation for the temperature gradient orientation}\label{sec:eq-gradt} Taking the gradient of the advection-diffusion equation for temperature (\ref{eq:T}) gives: \begin{equation} \dot{\bm{\partial}T} = - (\bm{\partial}\textbf{u})^T \bm{\partial}T + \kappa\ \partial^2 \bm{\partial}T, \end{equation} where the dot symbol denotes, as for the particles, the derivative in the Lagrangian reference frame.
The equation for the unit-norm vector $\hat{\bm{\partial}T} = \bm{\partial}T / || \bm{\partial}T ||$ is obtained by differentiation and by taking into account the normalization constraint $(\hat{\bm{\partial}T})^T \hat{\bm{\partial}T} = 1$: \begin{equation} \dot{\hat{\bm{\partial}T}} = - (\bm{\partial}\textbf{u})^T \hat{\bm{\partial}T} + (\hat{\bm{\partial}T})^T \mathcal{S}\hat{\bm{\partial}T} \hat{\bm{\partial}T} + \mathcal{O}(\kappa) \end{equation} or: \begin{equation} \dot{\hat{\bm{\partial}T}} = \Omega \hat{\bm{\partial}T} - \mathcal{S} \hat{\bm{\partial}T} + (\hat{\bm{\partial}T})^T \mathcal{S}\hat{\bm{\partial}T} \hat{\bm{\partial}T} + \mathcal{O}(\kappa), \end{equation} where $\mathcal{O}(\kappa)$ denotes the dissipative terms of linear order in $\kappa$. Apart from the dissipative terms, one can immediately remark the strict similarity with the Jeffery equation (\ref{eq:Jeffery3d}) for $\textbf{p}$ in the limit $\alpha\to 0$, which represents the limit of a thin oblate particle (i.e. a disk) in 3D. The scalar product between the particle orientation (\ref{eq:Jeffery3d}) and $\hat{\bm{\partial}T}$ then evolves as: \begin{eqnarray} \frac{d}{dt}(\textbf{p}^T \hat{\bm{\partial}T} ) &=& (\Lambda - 1) \textbf{p}^T\mathcal{S} \hat{\bm{\partial}T} \nonumber\\ &-& \left[ \Lambda (\textbf{p}^{T}\mathcal{S}\textbf{p}) - (\hat{\bm{\partial}T})^T \mathcal{S}\hat{\bm{\partial}T} \right] \textbf{p}^T \hat{\bm{\partial}T}\nonumber\\ &+& \mathcal{O}(\kappa) \end{eqnarray} which, neglecting the terms associated with the dissipation and in the limit $\Lambda \to 1$ ($\alpha \to \infty$), has a fixed-point solution at $\textbf{p}^T \hat{\bm{\partial}T} = 0$. Note also that in dimensionless dissipative units the diffusive terms become proportional to $Pr^{-1}$, meaning that the alignment does not weaken when the turbulence intensity (the $Ra$ number in this specific case) is changed, but depends on the ratio between diffusive and viscous processes.
Therefore exact orthogonality between $\hat{\bm{\partial}T}$ and $\textbf{p}$ can be reached only in the $Pr \to \infty$ limit. \subsection{Predictions for the mean quadratic tumbling rate in two dimensions}\label{sec:tumbl-relations} The tumbling rate in two dimensions is defined by $\langle \dot{\textbf{p}} \cdot \dot{\textbf{p}} \rangle = \langle \dot{\theta}^2\rangle$. We begin from: \begin{equation} \dot{\theta} = \tfrac{1}{2} \omega -\tfrac{\alpha^2-1}{\alpha^2+1} \left[ S_{xx} \sin(2\theta) - S_{xy} \cos(2\theta) \right] \end{equation} \paragraph{Stationary plane shear flow} \begin{figure} \begin{center} \includegraphics[width=0.3\textwidth]{cartoon.pdf} \end{center} \caption{Anisotropic particle in a plane linear shear flow, \textit{i.e.} $u_x(y)=\dot{\gamma} y$ with $\dot{\gamma}=const.>0$.}\label{fig:cartoon} \end{figure} In a plane shear flow with rate $\dot{\gamma} = \partial_y u_x$ the evolution equation for the orientation becomes \begin{equation} \dot{\theta} = \frac{1}{2} \dot{\gamma} \left(-1 + \tfrac{\alpha^2-1}{\alpha^2+1} \cos(2\theta) \right). \end{equation} With the initial condition $\theta(t_0)= \theta_0$, it can be solved as \begin{equation}\label{eq:jsolution} \tan(\theta) = \frac{1}{\alpha} \tan\left( \frac{-\dot{\gamma}(t-t_0)}{\alpha+1/\alpha} + \textrm{atan}(\alpha \tan(\theta_0)) \right); \end{equation} note that with our choice of shear direction the angle decreases with time (see Fig. \ref{fig:cartoon}). The above solution is periodic, with the period $T$ required for a rotation of $\pi$ given by: \begin{equation} T = \frac{\pi}{\dot{\gamma}}\left(\alpha+\frac{1}{\alpha}\right).
\end{equation} By differentiating (\ref{eq:jsolution}) with respect to time, and setting for simplicity $t_0=0$, $\theta_0=0$, one obtains: \begin{equation} \dot{\theta} = \frac{-\dot{\gamma} \alpha^2}{\alpha^2+1} \frac{\tan^2\left( \frac{-\dot{\gamma}t}{\alpha+1/\alpha} \right) + 1}{\tan^2\left( \frac{-\dot{\gamma}t}{\alpha+1/\alpha} \right) + \alpha^2} = -\frac{\pi\alpha}{T} \frac{\tan^2\left(-\pi \frac{t}{T} \right) + 1}{\tan^2\left( -\pi \frac{t}{T} \right) + \alpha^2} \end{equation} which can be squared and averaged over its period $T$: \begin{equation} \langle\dot{\theta}^2 \rangle = \dot{\gamma}^2 \frac{\alpha}{2(\alpha^2+1)} = \frac{\epsilon}{\nu}\ \frac{\alpha}{2(\alpha^2+1)}. \end{equation} \paragraph{Uncorrelated orientation} We assume that the fluid velocity gradient components are statistically independent and that they are uncorrelated with the particle orientation angle. By squaring and averaging eq. (\ref{eq:theta}) over time and ensembles one obtains: \begin{eqnarray} \langle \dot{\theta}^2 \rangle &=& \frac{1}{4}\langle \omega^2 \rangle + \frac{1}{2} \left(\frac{\alpha^2-1}{\alpha^2+1} \right)^2\left[ \langle S_{xx}^2 \rangle + \langle S_{xy}^2 \rangle \right] \end{eqnarray} \paragraph{Uncorrelated orientation and statistical isotropy and homogeneity} By taking into account the isotropic relations derived in Sec. \ref{sec:iso}, eq. (\ref{eq:isograd}), we find: \begin{equation} \frac{\langle \dot{\theta}^2 \rangle}{\epsilon/\nu} = \frac{1}{4} + \frac{1}{8} \left(\frac{\alpha^2-1}{\alpha^2+1} \right)^2 \end{equation} \subsection{Statistical isotropy and homogeneity in two dimensions}\label{sec:iso} The present derivation follows the one provided in Ref. \cite{Pumir2017} for 3D flows.
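Before going through the derivation, the last prediction can be checked with a short Monte Carlo sketch (ours, purely illustrative): drawing independent zero-mean Gaussian gradient components with the isotropic variances of eq. (\ref{eq:isograd}), here in units $\langle \epsilon \rangle/\nu = 1$, together with an independent, uniformly distributed orientation angle, reproduces $\langle \dot{\theta}^2 \rangle = 1/4 + \Lambda^2/8$.

```python
# Illustrative Monte Carlo check of <thetadot^2> = 1/4 + Lambda^2/8 for an
# uncorrelated orientation in isotropic 2D gradient statistics, using the
# variances <S_xx^2> = <S_xy^2> = 1/8 and <omega^2> = 1 (i.e. eps/nu = 1).
import numpy as np

rng = np.random.default_rng(1)
N = 1_000_000
s_xx = rng.normal(0.0, np.sqrt(1.0 / 8.0), N)
s_xy = rng.normal(0.0, np.sqrt(1.0 / 8.0), N)
omega = rng.normal(0.0, 1.0, N)
theta = rng.uniform(0.0, np.pi, N)   # uniform, uncorrelated orientation

for alpha in (1.0, 100.0):
    lam = (alpha**2 - 1.0) / (alpha**2 + 1.0)
    theta_dot = 0.5 * omega - lam * (s_xx * np.sin(2 * theta)
                                     - s_xy * np.cos(2 * theta))
    measured = np.mean(theta_dot**2)
    predicted = 0.25 + lam**2 / 8.0  # close to 1/4 and 3/8 for alpha = 1, 100
    print(alpha, measured, predicted)
```

The sampled averages should match the prediction to within the Monte Carlo error of a fraction of a percent.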
The general form for a fourth-order isotropic tensor is: \begin{equation} \langle \partial_i u_j \partial_k u_l \rangle = A\ \delta_{ij} \delta_{kl} + B\ \delta_{ik} \delta_{jl} + C\ \delta_{il} \delta_{jk} \end{equation} where the indices $i,j,k,l$ can all independently take the labels $x,y$. Summation over repeated indices is assumed in the following. The flow incompressibility, $\partial_i u_i=0$, implies that \begin{equation} \langle \partial_i u_i \partial_k u_l \rangle = 0, \end{equation} the homogeneity, i.e. statistical translational invariance, of the system instead implies that \begin{equation} \langle \partial_i u_j \partial_j u_i \rangle = 0, \end{equation} and finally the definition of the energy dissipation rate gives: \begin{equation} \nu \langle \partial_i u_j \partial_i u_j \rangle = \langle \epsilon \rangle. \end{equation} The three equations above lead to the system: \begin{equation} \begin{cases} 2 A + B + C = 0\\ 2 A + 2 B + 4 C = 0\\ 2 A + 4 B + 2 C = \langle \epsilon \rangle / \nu \end{cases} \end{equation} whose solution is $A=C=-\frac{\langle \epsilon \rangle}{8\nu}$ and $B=\frac{3\langle \epsilon \rangle}{8\nu}$; this leads to: \begin{equation} \langle \partial_i u_j \partial_k u_l \rangle = \frac{\langle \epsilon \rangle}{8\nu}\left( 3 \delta_{ik} \delta_{jl} - \delta_{ij} \delta_{kl} -\delta_{il} \delta_{jk}\right) \end{equation} and therefore: \begin{eqnarray} \langle (\partial_x u_x)^2 \rangle &=& \langle (\partial_y u_y)^2 \rangle = \frac{\langle \epsilon \rangle}{8\nu}\\ \langle (\partial_x u_y)^2 \rangle &=& \langle (\partial_y u_x)^2 \rangle = \frac{3\langle \epsilon \rangle}{8\nu}\\ \langle \partial_x u_y \partial_y u_x \rangle &=& -\frac{\langle \epsilon \rangle}{8\nu} \end{eqnarray} or also \begin{equation}\label{eq:isograd} \langle S_{xx}^2 \rangle = \langle S_{yy}^2 \rangle = \langle S_{xy}^2 \rangle = \frac{\langle \epsilon \rangle}{8\nu}, \quad \langle \omega^2 \rangle = \frac{\langle \epsilon \rangle}{\nu}
\end{equation} \nocite{apsrev41Control} \bibliographystyle{apsrev4-1}
\section{Introduction} Many sciences today rely heavily on the use of Monte Carlo (MC) simulation. In High Energy Physics (HEP), for example, it is used in practically every stage of an experiment, from the design of the detectors to the final analyses. This brings up the question of the precision of the MC simulation. In other words, how close are the probability distributions of the MC to those of the experimental data? Testing whether two datasets come from the same distribution is a classical problem in Statistics, and for one-dimensional datasets a large number of methods have been developed. Tests in higher dimensions have been proposed by Bickel \cite{Bickel} and Friedman and Rafsky \cite{Friedman-Rafsky}. Zech \cite{Zech} discussed a test based on the concept of minimum energy. The test proposed here belongs to a class of consistent, asymptotically distribution-free tests based on nearest neighbors (Narsky \cite{Narsky}, Bickel and Breiman \cite{Bickel-Breiman}, Henze \cite{Henze}, and Schilling \cite{Schilling}). We concentrate in this paper on comparing two datasets, either both real data or one real and one Monte Carlo, but the proposed method also allows us to test whether a dataset comes from a known theoretical distribution, as long as we can simulate data from this distribution. \section{The Method} To set the stage, let's say we have observations (events) $X_{1},\ldots,X_{n}$ from some distribution $F$, and observations $Y_{1},\ldots,Y_{m}$ from some distribution $G$. In the application we have in mind one of these would be MC-generated data and the other ``real'' data, but this is not crucial for the following. We are interested in testing $H_{0}:F=G$ vs $H_{a}:F\neq G$. The idea of our test is this: let's concentrate on one of the $X$ observations, say $X_{j}$. What is its nearest neighbor, that is, the observation closest to it?
If the null hypothesis is correct and both datasets were generated by the same probability distribution, then this nearest neighbor is equally likely to be any of the $X$ or $Y$ observations (with probabilities proportional to $n$ and $m$). If $F\neq G$ there should be regions in space where there are relatively more $X$ or $Y$ observations than could be expected otherwise. More formally, let $Z_{j}=1$ if the nearest neighbor of $X_{j}$ is from the $X$ data and $0$ otherwise. Then, under the null hypothesis, $Z_{j}$ is a Bernoulli random variable with success probability $(n-1)/(n+m-1)$. Therefore $Z=\sum_{j=1}^{n}Z_{j}$ has an approximate binomial distribution with parameters $n$ and $(n-1)/(n+m-1)$. The distribution is only approximately binomial because the $Z_{j}$'s are not independent, but for datasets of any reasonable size the dependence is very slight and can be ignored. There is an immediate generalization of the test: instead of just considering the nearest neighbor we can find the $k$ nearest neighbors. Now $Z_{j}=(Z_{j1},\ldots,Z_{jk})$ with $Z_{ji}=1$ if the $i^{th}$ nearest neighbor of $X_{j}$ is from the $X$ dataset, $0$ otherwise. Under the null hypothesis $Z=\sum_{j=1}^{n}\sum_{i=1}^{k}Z_{ji}$ again has an approximate binomial distribution with parameters $nk$ and $(n-1)/(n+m-1)$. (Actually, $\sum_{i=1}^{k}Z_{ji}$ has a hypergeometric distribution, but because we will use a $k$ much smaller than $n$ or $m$ the difference is negligible.) We can find the p-value of the test with \begin{equation*} p=P(V\geq Z), \end{equation*} where $V\sim \mathrm{Bin}(nk,(n-1)/(n+m-1))$. Especially if $n$ and $m$ are small, or if a relatively large $k$ is desired, it is possible to use a permutation-type method to estimate the null distribution. The idea is as follows: under $H_{0}$ all the events come from the same distribution, so any permutation of the events will again have the same distribution.
Therefore if we join the $X$ and $Y$ events together, shuffle them around and then split them again into $n$ and $m$ events $X^{\prime}$ and $Y^{\prime}$, these are now guaranteed to have the same distribution. Applying the k-nearest-neighbor test and repeating the above many times (say $1000$ times) gives us an estimate of the null distribution. For more on the idea of permutation tests, see Good \cite{Good}. This method achieves the correct type I error probability by construction, but it also requires a much greater computational effort. There are a number of choices to be made when using this method. First of all, there is the question of which dataset should be our $X$ data. Clearly, if $n=m$, this does not matter, but it might well otherwise. Indeed, in our application of comparing MC data to real data, we have control over the size of the MC data, although sometimes there are computational limits on its size. Next we need to decide on $k$. Again, there is no obviously optimal choice here. Finally, we need to decide on a metric to use when finding the nearest neighbors. If the observations differ greatly in ``size'' in different dimensions, the standard Euclidean metric cannot be expected to work well, because small differences in dimensions with a large spread would overwhelm small but significant differences in dimensions with a small spread. This last issue we deal with by standardizing each dimension separately, using the mean and standard deviation of the combined $X$ and $Y$ data. If the data come from a distribution that is severely skewed, other measures of location and spread (such as the median and the interquartile range) could also be used to standardize the data. In the next section we show the results of some mini MC studies which give some guidelines for these choices.
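The test as described can be sketched in a few lines of Python (an illustration only, not the C++ routine mentioned in the Implementation section; the exact binomial tail is replaced here by a normal approximation with continuity correction, which is adequate for the large values of $nk$ recommended above):

```python
# Sketch of the k-nearest-neighbor two-sample test (binomial approximation).
# Each dimension is standardized with the pooled mean and standard deviation;
# the binomial tail P(V >= Z) is approximated by a normal tail.
import numpy as np
from math import erfc, sqrt

def knn_test(x, y, k=10):
    """Return (Z, p-value) for samples x of shape (n, d) and y of shape (m, d)."""
    x, y = np.atleast_2d(x), np.atleast_2d(y)
    n, m = len(x), len(y)
    pooled = np.vstack([x, y])
    pooled = (pooled - pooled.mean(axis=0)) / pooled.std(axis=0)
    # brute-force pairwise squared distances (fine for moderate n + m)
    d2 = ((pooled[:, None, :] - pooled[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)            # a point is not its own neighbor
    nn = np.argsort(d2[:n], axis=1)[:, :k]  # k nearest neighbors of each X_j
    z = int((nn < n).sum())                 # how many of them are X points
    p_success = (n - 1) / (n + m - 1)
    mu = n * k * p_success
    sigma = sqrt(n * k * p_success * (1.0 - p_success))
    # normal approximation with continuity correction to P(V >= z)
    p_value = 0.5 * erfc((z - 0.5 - mu) / (sigma * sqrt(2.0)))
    return z, p_value

rng = np.random.default_rng(0)
x = rng.normal(size=(500, 3))
y = rng.normal(size=(500, 3))
print(knn_test(x, y, k=5))   # same distribution: p typically not small
```

For small samples, or when the exact binomial tail is preferred, the permutation method described above (or an exact binomial computation) can be substituted for the normal approximation.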
\section{Performance of this Method} If we use the binomial approximation in our test, we need to verify that the method works, that is, that it achieves the desired type I error probability $\alpha$. Of course, we are also interested in the power of the test, that is, the probability of rejecting the null hypothesis if indeed $F\neq G$. In this section we carry out several mini MC studies to investigate these questions. We start with the situation where other methods for this test exist, namely in one dimension. For comparison we use a method that is known to have generally very good power, the Kolmogorov-Smirnov (KS) test. In the first simulation we generate $n=m$ observations, from $1000$ to $20000$ in steps of $1000$, each for $X$ and $Y$ from the uniform distribution on $[0,1]$. Because of the probability integral transform this is actually a very general case, and similar conclusions hold for all other continuous distributions in one dimension. For each of these cases we use our test with $k=1, 2, 5, 10, 20, 50$ and $100$ as well as the KS test. This is repeated $10000$ times. Figure 1 shows the results, using a nominal type I error probability of $\alpha =5\%$. As we see, the true type I error probability is close to nominal but increases slowly as $k$ increases. This is due to the lack of independence between the $Z_{j}$'s. Especially if $k$ is large relative to $n$ or $m$, the true type I error probability is larger than what is acceptable. Based on this and similar simulation studies, we recommend $k=10$ if both $n$ and $m$ are at least $1000$, otherwise $k=0.01\min (n,m)$. Alternatively, one can use the permutation method described above, which has the correct type I error by construction. Even for $k=1$ we have a slightly higher than nominal type I error probability, about $5.5\%$ if $\alpha =5\%$.
This is partly due to the dependence between the $Z_{j}$'s, and partly to the discrete nature of the binomial distribution. For example, if we defined the p-value as $P(V\geq Z-1)$ we would get a true type I error probability slightly smaller than the nominal one. In any case, we believe this difference to be acceptable. Next, a simulation to study the power of the test, again in one dimension. We use the uniform distribution on $[0,1]$ for $F$ and the uniform on $[0,\theta]$ for $G$, where $\theta$ goes from $1$ to $1.1$. We generate $n=m=1000$ events and apply our test with $k=1$, $5$, $15$ and $25$ as well as the KS test. This is repeated $10000$ times. The result is shown in Figure 2. Clearly, the higher the $k$, the better the power of the test. In fact, for this example, already with $k=15$ the test has better power than the KS test! Generally speaking, in one dimension our test has power somewhat inferior to the KS test. However, our test is not meant to be used in one dimension; it is encouraging to find that it does fairly well even in that situation. How should one choose the size of the Monte Carlo dataset relative to the size of the true data? In the next simulation we generate $n=1000$ events from a uniform $[0,1]$ distribution and assume this to be the real data. Then we generate $m$ events from the same distribution, with $m$ going from $50$ to $2500$. In Figure 3 we show the results, which indicate that the MC dataset should have the same size as the real data, because in that case the true type I error probability is about the same as the nominal one. This agrees with general statistical experience, which suggests that, in two-sample problems, equal sample sizes are often preferable. Finally, we present a multi-dimensional example. We generate $n=m=1000$ events.
The $F$ distribution is a multivariate standard normal in $9$ dimensions, and the $G$ distribution a multivariate normal in $9$ dimensions with means $0$, standard deviations $1$ and correlation coefficients $\mathrm{cor}(X_{i},X_{j})=\rho$ if $|i-j|=1$ and $0$ if $|i-j|>1$, where $\rho$ goes from $0$ to $0.5$. This example illustrates the need for a test in higher dimensions: here all the marginals are standard normals, and any one-dimension-at-a-time method is certain to miss the difference between $F$ and $G$. This is shown in Figure 4, where with $k=10$ we reject the null hypothesis quite easily (at $\rho =0.5$), whereas the KS test applied to any of the marginals fails completely. \section{Implementation} A C++ routine that carries out the test is available from one of the authors at http://math.uprm.edu/\symbol{126}wrolke/. It allows the use of the binomial approximation as well as the permutation method. It uses a simple search for the k nearest neighbors. k-NN searching is a well-known problem in computer science, and more sophisticated and faster routines exist that could also be used in combination with our code. (See, for example, Friedman, Baskett and Shustek \cite{Friedman et al}.) \section{Summary} We describe a test for the equality of the distributions of two datasets in higher dimensions. The test is conceptually simple and does not suffer from the curse of dimensionality. Simulation studies show that it approximately achieves the desired type I error probability, or does so exactly at a higher computational cost. They also show that this test is capable of detecting differences between distributions that are only ``visible'' in higher dimensions.
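As a final, self-contained illustration (ours, and deliberately scaled down: two dimensions, $n=m=500$, and a stronger correlation $\rho=0.9$ so that a single run suffices), the following sketch mimics the multivariate example of the previous section: both samples have exactly standard-normal marginals, yet the nearest-neighbor count $Z$ lands far above its null mean.

```python
# Miniature of the multivariate example: X ~ N(0, I) and Y ~ N(0, C) with
# unit variances and correlation rho = 0.9. All marginals are standard
# normal, so one-dimension-at-a-time tests see nothing, but the k-NN
# count Z does. Normal approximation to the binomial tail, as in the text.
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(2)
n = m = 500
k = 10
x = rng.normal(size=(n, 2))
c = np.array([[1.0, 0.9], [0.9, 1.0]])   # correlated alternative
y = rng.multivariate_normal(np.zeros(2), c, size=m)

pooled = np.vstack([x, y])
pooled = (pooled - pooled.mean(axis=0)) / pooled.std(axis=0)
d2 = ((pooled[:, None, :] - pooled[None, :, :]) ** 2).sum(axis=-1)
np.fill_diagonal(d2, np.inf)
z = int((np.argsort(d2[:n], axis=1)[:, :k] < n).sum())

p_success = (n - 1) / (n + m - 1)
mu, sigma = n * k * p_success, sqrt(n * k * p_success * (1 - p_success))
p_value = 0.5 * erfc((z - 0.5 - mu) / (sigma * sqrt(2.0)))
print(z, mu, p_value)   # z exceeds mu by many standard deviations here
```

With the exaggerated correlation used in this sketch, the null hypothesis is rejected in a single run; the paper's study at $\rho=0.5$ in $9$ dimensions establishes the same point over $10000$ repetitions.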
{\message{amstex already loaded}\aftergroup\makeatother\endinput} {} \@ifpackageloaded{amsgen}% {\message{amsgen already loaded}\aftergroup\makeatother\endinput} {} \fi \egroup \let\DOTSI\relax \def\RIfM@{\relax\ifmmode}% \def\FN@{\futurelet\next}% \newcount\intno@ \def\iint{\DOTSI\intno@\tw@\FN@\ints@}% \def\iiint{\DOTSI\intno@\thr@@\FN@\ints@}% \def\iiiint{\DOTSI\intno@4 \FN@\ints@}% \def\idotsint{\DOTSI\intno@\z@\FN@\ints@}% \def\ints@{\findlimits@\ints@@}% \newif\iflimtoken@ \newif\iflimits@ \def\findlimits@{\limtoken@true\ifx\next\limits\limits@true \else\ifx\next\nolimits\limits@false\else \limtoken@false\ifx\ilimits@\nolimits\limits@false\else \ifinner\limits@false\else\limits@true\fi\fi\fi\fi}% \def\multint@{\int\ifnum\intno@=\z@\intdots@ \else\intkern@\fi \ifnum\intno@>\tw@\int\intkern@\fi \ifnum\intno@>\thr@@\int\intkern@\fi \int \def\multintlimits@{\intop\ifnum\intno@=\z@\intdots@\else\intkern@\fi \ifnum\intno@>\tw@\intop\intkern@\fi \ifnum\intno@>\thr@@\intop\intkern@\fi\intop}% \def\intic@{% \mathchoice{\hskip.5em}{\hskip.4em}{\hskip.4em}{\hskip.4em}}% \def\negintic@{\mathchoice {\hskip-.5em}{\hskip-.4em}{\hskip-.4em}{\hskip-.4em}}% \def\ints@@{\iflimtoken@ \def\ints@@@{\iflimits@\negintic@ \mathop{\intic@\multintlimits@}\limits \else\multint@\nolimits\fi \eat@ \else \def\ints@@@{\iflimits@\negintic@ \mathop{\intic@\multintlimits@}\limits\else \multint@\nolimits\fi}\fi\ints@@@}% \def\intkern@{\mathchoice{\!\!\!}{\!\!}{\!\!}{\!\!}}% \def\plaincdots@{\mathinner{\cdotp\cdotp\cdotp}}% \def\intdots@{\mathchoice{\plaincdots@}% {{\cdotp}\mkern1.5mu{\cdotp}\mkern1.5mu{\cdotp}}% {{\cdotp}\mkern1mu{\cdotp}\mkern1mu{\cdotp}}% {{\cdotp}\mkern1mu{\cdotp}\mkern1mu{\cdotp}}}% \def\RIfM@{\relax\protect\ifmmode} \def\RIfM@\expandafter\text@\else\expandafter\mbox\fi{\RIfM@\expandafter\RIfM@\expandafter\text@\else\expandafter\mbox\fi@\else\expandafter\mbox\fi} \let\nfss@text\RIfM@\expandafter\text@\else\expandafter\mbox\fi 
\def\RIfM@\expandafter\text@\else\expandafter\mbox\fi@#1{\mathchoice {\textdef@\displaystyle\f@size{#1}}% {\textdef@\textstyle\tf@size{\firstchoice@false #1}}% {\textdef@\textstyle\sf@size{\firstchoice@false #1}}% {\textdef@\textstyle \ssf@size{\firstchoice@false #1}}% \glb@settings} \def\textdef@#1#2#3{\hbox{{% \everymath{#1}% \let\f@size#2\selectfont #3}}} \newif\iffirstchoice@ \firstchoice@true \def\Let@{\relax\iffalse{\fi\let\\=\cr\iffalse}\fi}% \def\vspace@{\def\vspace##1{\crcr\noalign{\vskip##1\relax}}}% \def\multilimits@{\bgroup\vspace@\Let@ \baselineskip\fontdimen10 \scriptfont\tw@ \advance\baselineskip\fontdimen12 \scriptfont\tw@ \lineskip\thr@@\fontdimen8 \scriptfont\thr@@ \lineskiplimit\lineskip \vbox\bgroup\ialign\bgroup\hfil$\m@th\scriptstyle{##}$\hfil\crcr}% \def\Sb{_\multilimits@}% \def\endSb{\crcr\egroup\egroup\egroup}% \def\Sp{^\multilimits@}% \let\endSp\endSb \newdimen\ex@ \[email protected] \def\rightarrowfill@#1{$#1\m@th\mathord-\mkern-6mu\cleaders \hbox{$#1\mkern-2mu\mathord-\mkern-2mu$}\hfill \mkern-6mu\mathord\rightarrow$}% \def\leftarrowfill@#1{$#1\m@th\mathord\leftarrow\mkern-6mu\cleaders \hbox{$#1\mkern-2mu\mathord-\mkern-2mu$}\hfill\mkern-6mu\mathord-$}% \def\leftrightarrowfill@#1{$#1\m@th\mathord\leftarrow \mkern-6mu\cleaders \hbox{$#1\mkern-2mu\mathord-\mkern-2mu$}\hfill \mkern-6mu\mathord\rightarrow$}% \def\overrightarrow{\mathpalette\overrightarrow@}% \def\overrightarrow@#1#2{\vbox{\ialign{##\crcr\rightarrowfill@#1\crcr \noalign{\kern-\ex@\nointerlineskip}$\m@th\hfil#1#2\hfil$\crcr}}}% \let\overarrow\overrightarrow \def\overleftarrow{\mathpalette\overleftarrow@}% \def\overleftarrow@#1#2{\vbox{\ialign{##\crcr\leftarrowfill@#1\crcr \noalign{\kern-\ex@\nointerlineskip}$\m@th\hfil#1#2\hfil$\crcr}}}% \def\overleftrightarrow{\mathpalette\overleftrightarrow@}% \def\overleftrightarrow@#1#2{\vbox{\ialign{##\crcr \leftrightarrowfill@#1\crcr \noalign{\kern-\ex@\nointerlineskip}$\m@th\hfil#1#2\hfil$\crcr}}}% 
\def\underrightarrow{\mathpalette\underrightarrow@}% \def\underrightarrow@#1#2{\vtop{\ialign{##\crcr$\m@th\hfil#1#2\hfil $\crcr\noalign{\nointerlineskip}\rightarrowfill@#1\crcr}}}% \let\underarrow\underrightarrow \def\underleftarrow{\mathpalette\underleftarrow@}% \def\underleftarrow@#1#2{\vtop{\ialign{##\crcr$\m@th\hfil#1#2\hfil $\crcr\noalign{\nointerlineskip}\leftarrowfill@#1\crcr}}}% \def\underleftrightarrow{\mathpalette\underleftrightarrow@}% \def\underleftrightarrow@#1#2{\vtop{\ialign{##\crcr$\m@th \hfil#1#2\hfil$\crcr \noalign{\nointerlineskip}\leftrightarrowfill@#1\crcr}}}% \def\qopnamewl@#1{\mathop{\operator@font#1}\nlimits@} \let\nlimits@\displaylimits \def\setboxz@h{\setbox\z@\hbox} \def\varlim@#1#2{\mathop{\vtop{\ialign{##\crcr \hfil$#1\m@th\operator@font lim$\hfil\crcr \noalign{\nointerlineskip}#2#1\crcr \noalign{\nointerlineskip\kern-\ex@}\crcr}}}} \def\rightarrowfill@#1{\m@th\setboxz@h{$#1-$}\ht\z@\z@ $#1\copy\z@\mkern-6mu\cleaders \hbox{$#1\mkern-2mu\box\z@\mkern-2mu$}\hfill \mkern-6mu\mathord\rightarrow$} \def\leftarrowfill@#1{\m@th\setboxz@h{$#1-$}\ht\z@\z@ $#1\mathord\leftarrow\mkern-6mu\cleaders \hbox{$#1\mkern-2mu\copy\z@\mkern-2mu$}\hfill \mkern-6mu\box\z@$} \def\qopnamewl@{proj\,lim}{\qopnamewl@{proj\,lim}} \def\qopnamewl@{inj\,lim}{\qopnamewl@{inj\,lim}} \def\mathpalette\varlim@\rightarrowfill@{\mathpalette\varlim@\rightarrowfill@} \def\mathpalette\varlim@\leftarrowfill@{\mathpalette\varlim@\leftarrowfill@} \def\mathpalette\varliminf@{}{\mathpalette\mathpalette\varliminf@{}@{}} \def\mathpalette\varliminf@{}@#1{\mathop{\underline{\vrule\@depth.2\ex@\@width\z@ \hbox{$#1\m@th\operator@font lim$}}}} \def\mathpalette\varlimsup@{}{\mathpalette\mathpalette\varlimsup@{}@{}} \def\mathpalette\varlimsup@{}@#1{\mathop{\overline {\hbox{$#1\m@th\operator@font lim$}}}} \def\stackunder#1#2{\mathrel{\mathop{#2}\limits_{#1}}}% \begingroup \catcode `|=0 \catcode `[= 1 \catcode`]=2 \catcode `\{=12 \catcode `\}=12 \catcode`\\=12 
|gdef|@alignverbatim#1\end{align}[#1|end[align]] |gdef|@salignverbatim#1\end{align*}[#1|end[align*]] |gdef|@alignatverbatim#1\end{alignat}[#1|end[alignat]] |gdef|@salignatverbatim#1\end{alignat*}[#1|end[alignat*]] |gdef|@xalignatverbatim#1\end{xalignat}[#1|end[xalignat]] |gdef|@sxalignatverbatim#1\end{xalignat*}[#1|end[xalignat*]] |gdef|@gatherverbatim#1\end{gather}[#1|end[gather]] |gdef|@sgatherverbatim#1\end{gather*}[#1|end[gather*]] |gdef|@gatherverbatim#1\end{gather}[#1|end[gather]] |gdef|@sgatherverbatim#1\end{gather*}[#1|end[gather*]] |gdef|@multilineverbatim#1\end{multiline}[#1|end[multiline]] |gdef|@smultilineverbatim#1\end{multiline*}[#1|end[multiline*]] |gdef|@arraxverbatim#1\end{arrax}[#1|end[arrax]] |gdef|@sarraxverbatim#1\end{arrax*}[#1|end[arrax*]] |gdef|@tabulaxverbatim#1\end{tabulax}[#1|end[tabulax]] |gdef|@stabulaxverbatim#1\end{tabulax*}[#1|end[tabulax*]] |endgroup \def\align{\@verbatim \frenchspacing\@vobeyspaces \@alignverbatim You are using the "align" environment in a style in which it is not defined.} \let\endalign=\endtrivlist \@namedef{align*}{\@verbatim\@salignverbatim You are using the "align*" environment in a style in which it is not defined.} \expandafter\let\csname endalign*\endcsname =\endtrivlist \def\alignat{\@verbatim \frenchspacing\@vobeyspaces \@alignatverbatim You are using the "alignat" environment in a style in which it is not defined.} \let\endalignat=\endtrivlist \@namedef{alignat*}{\@verbatim\@salignatverbatim You are using the "alignat*" environment in a style in which it is not defined.} \expandafter\let\csname endalignat*\endcsname =\endtrivlist \def\xalignat{\@verbatim \frenchspacing\@vobeyspaces \@xalignatverbatim You are using the "xalignat" environment in a style in which it is not defined.} \let\endxalignat=\endtrivlist \@namedef{xalignat*}{\@verbatim\@sxalignatverbatim You are using the "xalignat*" environment in a style in which it is not defined.} \expandafter\let\csname endxalignat*\endcsname =\endtrivlist 
\def\gather{\@verbatim \frenchspacing\@vobeyspaces \@gatherverbatim You are using the "gather" environment in a style in which it is not defined.} \let\endgather=\endtrivlist \@namedef{gather*}{\@verbatim\@sgatherverbatim You are using the "gather*" environment in a style in which it is not defined.} \expandafter\let\csname endgather*\endcsname =\endtrivlist \def\multiline{\@verbatim \frenchspacing\@vobeyspaces \@multilineverbatim You are using the "multiline" environment in a style in which it is not defined.} \let\endmultiline=\endtrivlist \@namedef{multiline*}{\@verbatim\@smultilineverbatim You are using the "multiline*" environment in a style in which it is not defined.} \expandafter\let\csname endmultiline*\endcsname =\endtrivlist \def\arrax{\@verbatim \frenchspacing\@vobeyspaces \@arraxverbatim You are using a type of "array" construct that is only allowed in AmS-LaTeX.} \let\endarrax=\endtrivlist \def\tabulax{\@verbatim \frenchspacing\@vobeyspaces \@tabulaxverbatim You are using a type of "tabular" construct that is only allowed in AmS-LaTeX.} \let\endtabulax=\endtrivlist \@namedef{arrax*}{\@verbatim\@sarraxverbatim You are using a type of "array*" construct that is only allowed in AmS-LaTeX.} \expandafter\let\csname endarrax*\endcsname =\endtrivlist \@namedef{tabulax*}{\@verbatim\@stabulaxverbatim You are using a type of "tabular*" construct that is only allowed in AmS-LaTeX.} \expandafter\let\csname endtabulax*\endcsname =\endtrivlist \def\endequation{% \ifmmode\ifinner \iftag@ \addtocounter{equation}{-1} $\hfil \displaywidth\linewidth\@taggnum\egroup \endtrivlist \global\@ifnextchar*{\@tagstar}{\@tag}@false \global\@ignoretrue \else $\hfil \displaywidth\linewidth\@eqnnum\egroup \endtrivlist \global\@ifnextchar*{\@tagstar}{\@tag}@false \global\@ignoretrue \fi \else \iftag@ \addtocounter{equation}{-1} \eqno \hbox{\@taggnum} \global\@ifnextchar*{\@tagstar}{\@tag}@false% $$\global\@ignoretrue \else \eqno \hbox{\@eqnnum $$\global\@ignoretrue \fi \fi\fi } 
\newif\iftag@ \@ifnextchar*{\@tagstar}{\@tag}@false \def\@ifnextchar*{\@TCItagstar}{\@TCItag}{\@ifnextchar*{\@TCItagstar}{\@TCItag}} \def\@TCItag#1{% \global\@ifnextchar*{\@tagstar}{\@tag}@true \global\def\@taggnum{(#1)}} \def\@TCItagstar*#1{% \global\@ifnextchar*{\@tagstar}{\@tag}@true \global\def\@taggnum{#1}} \@ifundefined{tag}{ \def\@ifnextchar*{\@tagstar}{\@tag}{\@ifnextchar*{\@tagstar}{\@tag}} \def\@tag#1{% \global\@ifnextchar*{\@tagstar}{\@tag}@true \global\def\@taggnum{(#1)}} \def\@tagstar*#1{% \global\@ifnextchar*{\@tagstar}{\@tag}@true \global\def\@taggnum{#1}} }{} \def\tfrac#1#2{{\textstyle {#1 \over #2}}}% \def\dfrac#1#2{{\displaystyle {#1 \over #2}}}% \def\binom#1#2{{#1 \choose #2}}% \def\tbinom#1#2{{\textstyle {#1 \choose #2}}}% \def\dbinom#1#2{{\displaystyle {#1 \choose #2}}}% \makeatother \endinput
2,869,038,154,465
arxiv
\section{Introduction} \label{sec:intro} This article presents a framework for deriving the {\em local capacity} of a wireless ad hoc network under various medium access schemes. The local capacity is defined as the average information rate received by a node randomly located in the network. The seminal work of Gupta \& Kumar~\cite{Gupta:Kumar} and later studies, {\it e.g.}, \cite{scaling,scaling2,scaling3}, quantify the capacity of wireless networks in terms of scaling laws or bounds. These results are very important, but they provide limited insight into the interplay between the performance of various medium access schemes ({\it e.g.}, the exact achievable capacity), protocol design issues ({\it e.g.}, trade-offs between protocol overhead and performance), and network parameters. Apart from being of general interest, an additional advantage of local capacity, as compared to results on scaling laws, is that it can be derived accurately. Therefore, the local capacity framework can be used to gain better insight into the design of medium access schemes for wireless ad hoc networks. Medium access schemes in wireless ad hoc networks can be broadly classified into two main classes: continuous time access and slotted access. In this article, our main focus is on slotted medium access, although many of our results can also be applied to continuous time medium access. Within the slotted medium access category, we distinguish grid pattern, node coloring, carrier sense multiple access (CSMA) and slotted ALOHA schemes. In \S \ref{sec:context}, we will briefly summarize our motivation and related works. In the following sections, we will concisely describe the analytical methods we have developed to derive the local capacity of the above-mentioned medium access schemes in a wireless ad hoc network. 
Because these methods are developed independently, depending on the underlying design of each medium access scheme, \S \ref{sec:model} will first give a brief overview of our network model and the mathematical background of our parameters of interest. In sections \ref{sec:grid_pattern}, \ref{sec:practical} and \ref{sec:aloha}, we will discuss grid pattern, node coloring, CSMA and slotted ALOHA schemes respectively. In each case, we will first give a brief overview of the protocol overheads associated with the scheme and then discuss the methods we used to evaluate the local capacity of wireless ad hoc networks. In \S \ref{sec:simulations}, we will discuss the evaluation of these schemes and our results. The shortcomings and future extensions of our work, and concluding remarks, can be found in sections \ref{sec:future} and \ref{sec:conclude} respectively. \section{Related Works} \label{sec:context} In one of the first analyses of the capacity of medium access schemes in wireless networks, \cite{Nelson:Kleinrock} studied slotted ALOHA; despite using a very simple geometric propagation model, their result is similar to what can be obtained under a realistic SIR based interference model (non-fading, SIR threshold of $10.0$ and attenuation coefficient of $4.0$). Under a similar propagation model and assuming that all nodes are within range of each other, \cite{CSMA} evaluated the CSMA scheme and compared it with slotted ALOHA in terms of throughput. \cite{Bartek} used simulations to analyze CSMA under a realistic SIR based interference model and compared it with ALOHA (slotted and un-slotted). For the simulations, \cite{Bartek} assumed Poisson distributed transmitters with density $\lambda$, where each transmitter sends packets to an assigned receiver located at a fixed distance of $a/\sqrt{\lambda}$, for some $a>0$. 
\cite{Weber,Weber2} studied the {\em transmission capacity}, which is defined as the maximum number of successful transmissions per unit area at a specified outage probability, of ALOHA and code division multiple access (CDMA) medium access schemes. They assumed that simultaneous transmitters form a homogeneous Poisson point process (PPP) and used the same model for the locations of receivers as in~\cite{Bartek}. The fact that the receivers are not a part of the network (node distribution) model and are located at a fixed distance from the transmitters is a simplification. An accurate model of wireless networks should consider that transmitters transmit to receivers which are randomly located in their neighborhood or, in other words, their reception areas. \cite{Weber} did analyze the case where receivers are located at a random distance, but it was assumed that each transmitter employs transmit power control such that the signal power at its intended receiver is some fixed value. A PPP only accurately models an ALOHA based scheme where transmitters are independently and uniformly distributed over the network area. However, under exclusion schemes, like node coloring or CSMA, modeling simultaneous transmitters as a PPP leads to an inaccurate representation of the interference distribution. On the other hand, the correlation between the locations of simultaneous transmitters makes it extremely difficult to develop a tractable analytical model and derive a closed-form expression for the probability distribution of interference. Some of the proposed approaches are as follows. \cite{CSMA-Model} (Chapter $18$) used a Mat\'ern point process for CSMA based schemes, whereas \cite{Busson} proposed to use the Simple Sequential Inhibition ({\em SSI}) point process or an extension of {\em SSI} called {\em SSI$_k$}. 
In \cite{Guard,Guard2}, interferers are modeled as a PPP and the outage probability is obtained by excluding or suppressing some of the interferers in a guard zone around the receiver. \cite{Weber3} analyzed transmission capacity in networks with general node distributions under the restrictive hypothesis that the density of interferers is very low and, asymptotically, approaches zero. They derived bounds on the high-SIR transmission capacity of ALOHA, using a PPP, and of CSMA, using a Mat\'ern point process. Other related works include the analysis of local (single-hop) throughput and capacity with slotted ALOHA, in networks with random and deterministic node placement, and with TDMA, in $1D$ line-networks only \cite{Haenggi}. \cite{Zorzi2} determined the optimum transmission range under the assumption that interferers are distributed according to a PPP, whereas \cite{SR-ALOHA} gave a detailed analysis of the optimal probability of transmission for ALOHA, which optimizes the product of the number of simultaneously successful transmissions per unit of space and the average range of each transmission. \cite{unslotted_csma} develops an analytical model for the analysis of the saturation throughput of un-slotted CSMA with collision avoidance (CSMA/CA) in wireless networks; the numerical results of the analytical model are also verified using an event driven simulator. \cite{ieee80211_mac} uses queueing analysis to perform a comprehensive throughput and delay analysis of the IEEE 802.11 MAC protocol. \section{System Model} \label{sec:model} We present our mathematical model in \S \ref{model_assumptions} and discuss the connection of local capacity with transport capacity in \S \ref{sec:transport}. \subsection{Mathematical Model and Assumptions} \label{model_assumptions} We consider a wireless ad hoc network where an infinite number of nodes are uniformly distributed over an infinite $2D$ map. 
In slotted medium access, at any given slot, simultaneous transmitters in the network are distributed like a set of points $$ {\cal S}=\{z_1,z_2,\ldots,z_n,\ldots\}~, $$ where $z_i$ is the location of transmitter $i$. The spatial distribution of simultaneous transmitters, {\it i.e.}, the set ${\cal S}$, depends on the medium access scheme employed by the nodes. Therefore, we do not adopt any {\em universal} model for the locations of simultaneous transmitters and only assume that, in all slots, the set ${\cal S}$ has a homogeneous density equal to $\lambda$. We consider that each transmitter employs unit transmit power. The channel gain from node $i$ to node $j$ is represented by $\gamma_{ij}$, such that the received power at node $j$ is $P_i\gamma_{ij}$, where $P_i$ is the transmit power of node $i$. We ignore multi-path fading and shadowing effects and assume that the channel gain is solely determined by the distance and the attenuation coefficient. Therefore, $$ \gamma_{ij}=\frac{1}{\vert z_i-z_j\vert^{\alpha}}~, $$ where $\vert .\vert$ is the Euclidean norm and \mbox{$\alpha>2$} is the attenuation coefficient. The transmission from node $i$ to node $j$ is successful only if the following condition is satisfied \begin{equation} \frac{\vert z_i-z_j\vert^{-\alpha}}{N_0+\sum_{k\neq i}\vert z_k-z_j\vert^{-\alpha}}\geq \beta~, \label{eq:sinr_condition} \end{equation} where $N_0$ is the background noise (ambient/thermal) power and $\beta$ is the minimum signal to interference ratio (SIR) threshold required for successfully receiving the packet. We assume that the background noise power, $N_0$, is negligible. Therefore, the SIR of transmitter $i$ at any point $z$ on the plane is given by \begin{equation} S_i(z)=\frac{\vert z-z_i\vert^{-\alpha}}{\sum_{j\neq i}\vert z-z_j\vert^{-\alpha}}~. 
\label{eq:sinr} \end{equation} We call the reception area of transmitter $i$ the area of the plane, ${\cal A}(z_i,\lambda,\beta,\alpha)$, where this transmitter is received with SIR at least equal to $\beta$. The area ${\cal A}(z_i,\lambda,\beta,\alpha)$ always contains the point $z_{i}$, since the SIR is infinite there. The average size of ${\cal A}(z_i,\lambda,\beta,\alpha)$ is $\sigma(\lambda,\beta,\alpha)$, {\it i.e.}, $$ \sigma(\lambda,\beta,\alpha)={\bf E}(|{\cal A}(z_i,\lambda,\beta,\alpha)|)~,$$ where $|{\cal A}|$ is the size of an area ${\cal A}$. Note that $\sigma(\lambda,\beta,\alpha)$ does not depend on the location $z_i$ of transmitter $i$. Our principal performance metric is the local capacity, hereafter referred to simply as capacity, which is defined as the average information rate received by a receiver {\em randomly} located in the network. Consider a receiver at a random location $z$ in the network and let $N(z,\beta,\alpha)$ denote the number of reception areas it belongs to. Under general settings, the following identity has been proved in \cite{Jacquet:2009} \begin{equation} {\bf E}(N(z,\beta,\alpha))=\lambda\sigma(\lambda,\beta,\alpha)~. \label{eq:avg_no} \end{equation} ${\bf E}(N(z,\beta,\alpha))$ represents the average number of transmitters from which a receiver, randomly located in the network, can receive with SIR at least equal to $\beta$. Under the hypothesis that a node can receive at most one packet at a time, {\it e.g.}, when \mbox{$\beta>1$}, we have \mbox{$N(z,\beta,\alpha)\le 1$}. The average information rate received by the receiver, $c(z,\beta,\alpha)$, is therefore equal to ${\bf E}(N(z,\beta,\alpha))$ multiplied by the nominal capacity. Without loss of generality, we assume unit nominal capacity and obtain \begin{equation} c(z,\beta,\alpha)={\bf E}(N(z,\beta,\alpha))=\lambda\sigma(\lambda,\beta,\alpha)~. 
\label{eq:poisson_hand_over_no} \end{equation} We will derive the capacity of wireless ad hoc networks with grid pattern, node coloring, CSMA and ALOHA medium access schemes. We will also show that maximum capacity can be achieved with grid pattern schemes. Wireless networks with grid topologies are studied in, {\it e.g.}, \cite{Liu:Haenggi,Hong:Hua}, and compared to networks with randomly distributed nodes. In contrast to these works, we assume that only the simultaneous transmitters form a regular grid pattern. \subsection{Relationship of Local Capacity with Transport Capacity} \label{sec:transport} Gupta \& Kumar~\cite{Gupta:Kumar} introduced the concept of {\em transport capacity}, which measures the end-to-end sum throughput of the network multiplied by the end-to-end distance. An important aspect of their work and the following works on transport capacity is that it is not possible to compute the exact transport capacity in terms of network and system parameters, and most of the results are in the form of bounds and scaling laws, {\it i.e.}, the density of transport capacity scales as $Ck_1\sqrt{\lambda}$ bit-meters per second per unit area, where $C$ is the nominal capacity and \mbox{$k_1>0$} depends on the medium access scheme and system parameters. We consider a homogeneous traffic distribution in the network, {\it i.e.}, all nodes have equal priority and generate traffic at the same rate. Therefore, if all nodes are capable of transmitting at $C$ bits per second, the capacity of each node is $Ck_1/\sqrt{\lambda}$ bit-meters per second. It is also shown in~\cite{Gupta:Kumar} that, under general settings, the effective radius of transmission is $k_2/\sqrt{\lambda}$ for some \mbox{$k_2>0$} which also depends on the medium access scheme and system parameters. 
If each node transmits to a receiver which is randomly located within its effective radius of transmission or, in other words, its reception area, the information rate received by a receiver is constant and equal to $Ck_1/k_2$ bits per second. In this article, we evaluate the {\em average} of this information rate received by a receiver randomly located in the network, {\it i.e.}, the local capacity. Note that this capacity also incorporates the pre-constants associated with the scaling law, {\it e.g.}, $k_1$ and $k_2$, and is independent of $\lambda$, as it is invariant under any {\em homothetic transformation} of the set of transmitters. Note also that we have abstracted out the multi-hop aspect of wireless ad hoc networks, which allows us to focus on the {\em localized} capacity. Our model also captures a realistic scenario of wireless ad hoc networks where each transmitter communicates with a receiver which is randomly located in its neighborhood or reception area. \section{Grid Pattern Based Schemes} \label{sec:grid_pattern} It can be argued that optimal capacity in wireless ad hoc networks can be achieved if simultaneous transmitters are positioned in a grid pattern. However, designing a medium access scheme which ensures that simultaneous transmitters are positioned in a grid pattern is very difficult because of the limitations introduced by wave propagation characteristics and the actual node distribution. Location-aware nodes may be useful for this purpose, but the specification of a distributed medium access scheme that would allow grid pattern transmissions is beyond the scope of this article. In this section, we will first present the analysis we used to investigate the optimality of grid pattern based medium access schemes. Later, we will also discuss the analytical method we used to analyze the capacity of such schemes. 
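The claim that regular placement of simultaneous transmitters improves capacity can be previewed numerically. The following sketch is illustrative only and not part of the paper's derivation: it places transmitters either on a square grid or uniformly at random, with the same density, on a torus (to avoid boundary effects), and estimates the mean number of transmitters decodable with SIR at least $\beta$ at a randomly located receiver; the attenuation coefficient, threshold, window size and sample counts are all arbitrary assumptions.

```python
import random

random.seed(3)

ALPHA, BETA, L, K = 4.0, 1.0, 50.0, 7   # attenuation, SIR threshold, torus side, grid size (assumed)

def torus_d2(a, b):
    # squared distance on the torus, to avoid boundary effects
    dx = abs(a[0] - b[0]); dx = min(dx, L - dx)
    dy = abs(a[1] - b[1]); dy = min(dy, L - dy)
    return dx * dx + dy * dy

def mean_decoded(tx, n_rx=3000):
    # Monte Carlo estimate of the mean number of transmitters received
    # with SIR >= BETA at a uniformly random receiver location (noise neglected)
    hits = 0
    for _ in range(n_rx):
        z = (random.uniform(0, L), random.uniform(0, L))
        p = [torus_d2(z, t) ** (-ALPHA / 2) for t in tx]
        s = sum(p)
        hits += sum(1 for pi in p if pi / (s - pi) >= BETA)
    return hits / n_rx

g = L / K                                # grid spacing; both sets have density K*K / L**2
grid = [((i + 0.5) * g, (j + 0.5) * g) for i in range(K) for j in range(K)]
rand = [(random.uniform(0, L), random.uniform(0, L)) for _ in range(K * K)]

e_grid, e_rand = mean_decoded(grid), mean_decoded(rand)
print(round(e_grid, 3), round(e_rand, 3))
```

In runs of this sketch the grid estimate typically comes out higher than the random one, in line with the optimality argument developed below; both values lie in $(0,1]$, since with $\beta\geq 1$ at most one transmitter can satisfy the SIR condition at any given point.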
Grid pattern based medium access schemes may have no practical implementation, but their evaluation is interesting in order to establish an upper bound on the optimal capacity in wireless ad hoc networks. In later sections, we will also discuss more practical medium access schemes. \subsection{Optimality of Grid Pattern Based Schemes} \label{sec:grid_optimality} In this section too, we consider that an infinite number of transmitters are uniformly distributed like a set of points $$ {\cal S}=\{z_1,z_2,\ldots,z_n,\ldots\}~, $$ on an infinite $2D$ plane. The location of transmitter $i$ is denoted by $z_i$ and the center of the plane is at $(0,0)$. In order to simplify our analysis, we define a function $s_i(z)$ as $$ s_i(z)=\frac{|z-z_i|^{-\alpha}}{\sum_{j}|z-z_j|^{-\alpha}}~, $$ where \mbox{$\alpha>2$}. The function $s_i(z)$ is similar to the SIR function $S_i(z)$ in (\ref{eq:sinr}), except that the summation in the denominator also includes the numerator. In order to simplify the notations, we will omit the reference to $z$ when no ambiguity is possible. We also define a function $f(s_i)$, which can be any continuous or integrable function. For instance, here we will use $$ f(s_i)=1_{s_i(z)\geq \beta'}~, $$ for some given $\beta'$. Note that, in this case, the function is not continuous, but this does not affect the analysis. In the following discussion, we can consider without loss of generality that the value of $\beta'$ is related to $\beta$ by $$ \beta'=\frac{\beta}{\beta+1}~. $$ Indeed, $s_i=S_i/(1+S_i)$ is an increasing function of $S_i$; therefore, if transmitter $i$ is received successfully at location $z$ (in other words, with SIR at least equal to $\beta$, {\it i.e.}, \mbox{$S_i(z)\geq \beta$}), then \mbox{$s_i(z)\geq \beta'$} and $f(s_i)$ is equal to $1$. We also assume a virtual disk on the plane centered at $(0,0)$ and of radius $R$. 
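As a quick sanity check on the definitions above, the following sketch (arbitrary transmitter locations and observation point; the values $\alpha=4$ and $\beta=1$ are assumptions, not from the paper) verifies numerically that the $s_i$ sum to one, that $s_i=S_i/(1+S_i)$, and that the condition $S_i\geq\beta$ is equivalent to $s_i\geq\beta'=\beta/(\beta+1)$.

```python
ALPHA, BETA = 4.0, 1.0                        # attenuation and SIR threshold (assumed values)
TX = [(0.0, 0.0), (3.0, 1.0), (-2.0, 4.0)]    # arbitrary transmitter locations
Z = (0.5, 0.8)                                # arbitrary observation point z

p = [((Z[0] - x) ** 2 + (Z[1] - y) ** 2) ** (-ALPHA / 2) for (x, y) in TX]
tot = sum(p)

s = [pi / tot for pi in p]                    # s_i: the denominator includes the numerator
S = [pi / (tot - pi) for pi in p]             # S_i: plain SIR, numerator excluded

beta_p = BETA / (BETA + 1.0)
assert abs(sum(s) - 1.0) < 1e-12              # the s_i always sum to 1
for si, Si in zip(s, S):
    assert abs(si - Si / (1.0 + Si)) < 1e-12  # s_i = S_i / (1 + S_i)
    assert (Si >= BETA) == (si >= beta_p)     # S_i >= beta  <=>  s_i >= beta'
print([round(x, 3) for x in s])
```

Because $s_i$ is an increasing function of $S_i$, the equivalence holds for any $\beta>0$, not just the value assumed here.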
This allows us to express the density of set ${\cal S}$, $\nu({\cal S})$, in terms of the number of transmitters covered by the disk of radius $R$ or area $\pi R^2$, when $R$ approaches infinity, and it is given by a limit as $$ \nu({\cal S})=\lim_{R\to\infty}\frac{1}{\pi R^2}\sum_i 1_{|z_i|\le R}~. $$ We denote \mbox{$h(z)=\sum_i{f(s_i)}$}. Note that, $h(z)$ is equal to the number of transmitters which can be successfully received at $z$ and its maximum value shall be $1$ if \mbox{$\beta>1$}. We define ${\bf E}(h(z))$ by the limit $$ {\bf E}(h(z))=\lim_{R\to\infty}\frac{1}{\pi R^2}\int_{|z|\le R}h(z)dz^2~. $$ Note that, the integration is over an infinite plane or, in other words, over the disk of radius $R$ where $R$ approaches infinity. Also note that the notations are simplified by taking $dxdy$ equal to $dz^2$. We denote the reception area of an arbitrary transmitter $i$ as $$ \sigma_i=\int f(s_i)dz^2~, $$ and we have $$ {\bf E}(h(z))=\lim_{R\to\infty}\frac{1}{\pi R^2}\sum_i 1_{|z_i|\le R}\sigma_i=\nu({\cal S}){\bf E}(\sigma_i)~, $$ with $$ {\bf E}(\sigma_i)=\lim_{n\to\infty}\frac{1}{n}\sum_{i\le n}\sigma_i~. $$ As $R$ approaches infinity, $n$, {\it i.e.}, the number of transmitters in the set ${\cal S}$, covered by the disk of radius $R$, also approaches infinity. Our objective is to optimize ${\bf E}(h(z))$ whose definition is equivalent to the definition of ${\bf E}(N(z,K,\alpha))$ and therefore capacity, $c(z,K,\alpha)$, as well in expressions (\ref{eq:avg_no}) and (\ref{eq:poisson_hand_over_no}) respectively. \newtheorem{theorem}{Theorem} \newtheorem{lemma}{Lemma} \subsubsection{First Order Differentiation} \label{sec:grid_first_order} We denote the operator of differentiation w.r.t. $z_i$ by $\nabla_i$. For \mbox{$i\neq j$}, we have $$ \nabla_i s_j=\alpha s_is_j\frac{z-z_i}{|z-z_i|^2}~, $$ and $$ \nabla_i s_i=\alpha(s_i^2-s_i)\frac{z-z_i}{|z-z_i|^2}~. 
$$ Therefore, \begin{align} \nabla_i h(z)&=\nabla_i\sum_if(s_i) \notag \\ &=f'(s_i)\nabla_is_i+\sum_{j\neq i}f'(s_j)\nabla_is_j \notag \\ &=\alpha s_i\frac{z-z_i}{|z-z_i|^2}\Big(f'(s_i)-\sum_j s_jf'(s_j)\Big)~. \notag \end{align} Although we know that $$ \int h(z)dz^2=\infty~, $$ we nevertheless have a finite $\nabla_i \int h(z)dz^2$. In other words, the sum $\sum_{j}\nabla_i\sigma_j$ converges for all $i$. \begin{lemma} For all $j$ in ${\cal S}$, $$ \sum_{i}\nabla_i\sigma_j=0~.$$ Indeed, this is the differentiation of $\sigma_j$ when all points in ${\cal S}$ are translated by the same vector. Similarly, $$ \sum_{i}\nabla_i\int h(z)dz^2=0~. $$ \end{lemma} \begin{theorem} If the points in the set ${\cal S}$ are arranged in a grid pattern then $$ \nabla_i \int h(z)dz^2=\sum_{j}\nabla_i\sigma_j=0~, $$ and grid patterns are {\em locally} optimal. \end{theorem} \begin{proof} If ${\cal S}$ is a set of points arranged in a grid pattern, then $$ \nabla_i \int h(z)dz^2=\sum_{j}\nabla_i\sigma_j~, $$ is identical for all $i$ and, therefore, null since $$ \sum_{i}\nabla_i\int h(z)dz^2=0~. $$ One could erroneously conclude that \begin{compactitem}[-] \item all grid sets are optimal and \item all grid sets give the same ${\bf E}(h(z))$. \end{compactitem} In fact this is wrong. One could also conclude that ${\bf E}(\sigma_i)$ does not vary, but this would contradict the fact that $\nu({\cal S})$ {\em must} vary. The reason for this error is that a grid set cannot be modified into another grid set by a {\em uniformly bounded transformation}, unless the two grid sets are simply translations of each other. \end{proof} However, we have proved that grid sets are locally optimal within the class of sets that can be uniformly transformed into each other. In order to cope with unbounded transformations and to be able to transform one grid set into another, we will introduce the linear group transformation.
\subsubsection{Linear Group Transformation} Here, we assume that the points in the plane are modified according to a continuous linear transform ${\bf M}(t)$, where ${\bf M}(t)$ is a matrix with \mbox{${\bf M}(0)=\hbox{I}$}, {\it e.g.}, \mbox{${\bf M}(t)=\hbox{I}+t{\bf A}$} where ${\bf A}$ is a matrix. Without loss of generality, we only consider $\sigma_0$, {\it i.e.}, the reception area of the transmitter at $z_0$, which can be located anywhere on the plane. Under these assumptions, we have $$ \frac{\partial}{\partial t}\sigma_0=\sum_i ({\bf A} z_i.\nabla_i\sigma_0)=\hbox{\rm tr}\Big(\sum_{i} {\bf A}^Tz_i\otimes \nabla_i\sigma_0\Big)~. $$ In other words, using the identity $$ \frac{\partial \hbox{\rm tr}({\bf A}^T{\bf B})}{\partial {\bf A}}={\bf B}~, $$ the derivative of $\sigma_0$ w.r.t. the matrix ${\bf A}$ is exactly equal to $${\bf D}=\sum_{i} z_i\otimes \nabla_i\sigma_0~,$$ such that $$ {\bf D}=\left[ \begin{array}{cc} D_{xx}&D_{xy}\\ D_{yx}&D_{yy} \end{array} \right]~. $$ Therefore, we can write the following identity $$ \hbox{\rm tr}\Big({\bf A}^T \frac{\partial}{\partial {\bf A}}\sigma_0\Big)=\frac{\partial}{\partial t}\sigma_0(t,{\bf A})\Big|_{t=0}~, $$ where $\sigma_0(t,{\bf A})$ is the transformation of $\sigma_0$ under ${\bf M}(t)$, {\it i.e.}, $$ \sigma_0(t,{\bf A})=\hbox{\rm det}(\hbox{I}+t{\bf A})\sigma_0~. $$ We now assume that \mbox{${\bf M}(t)=(1+t)\hbox{I}$} with \mbox{${\bf A}=\hbox{I}$}, {\it i.e.}, the linear transform is homothetic. \begin{theorem} ${\bf D}$ is symmetric and $\hbox{\rm tr}({\bf D})=2\sigma_0$. \end{theorem} \begin{proof} Under the given transform, $$ \sigma_0(t,{\bf A})=\sigma_0(t,\hbox{I})=(1+t)^2\sigma_0~. $$ As a first property, we have $$ \hbox{\rm tr}({\bf D})=2\sigma_0~, $$ since the derivative of $\sigma_0$ w.r.t. the identity matrix $\hbox{I}$ is exactly $2\sigma_0$ ({\it i.e.}, \mbox{$\hbox{\rm tr}({\bf A}^T{\bf D})=\hbox{\rm tr}({\bf D})=\sigma_0'(0,\hbox{I})=2\sigma_0$}).
The second property, that ${\bf D}$ is a symmetric matrix, is not obvious. The easiest proof of this property is to consider the derivative of $\sigma_0$ w.r.t. the rotation generator $\hbox{J}$ given by $$ \hbox{J}=\left[ \begin{array}{cc} 0&-1\\ 1&0 \end{array} \right]~, $$ which is zero since $\hbox{J}$ is the initial derivative of a rotation and the reception area is invariant by rotation. Therefore, $$ \hbox{\rm tr}(\hbox{J}^T{\bf D})=D_{yx}-D_{xy}=0~, $$ which implies that ${\bf D}$ is symmetric. \end{proof} Note that ${\bf D}$ can also be written in the following form $$ {\bf D}=\sum\limits_{i}z_i\otimes\nabla_i\sigma_0=\int dz^{2}\sum\limits_{i}z_i\otimes\nabla_if(s_0)~. $$ Let ${\bf T}$ be defined as $$ {\bf T}=\int dz^2\sum_i (z-z_i)\otimes\nabla_i f(s_0)~, $$ such that $$ {\bf D}=\int \sum_{i} z\otimes \nabla_i f(s_0)dz^2 - {\bf T}~. $$ The purpose of these definitions will become evident in theorems $3$ and $4$. \begin{theorem} The integral $\int \sum_{i} z\otimes \nabla_i f(s_0)dz^2$ is equal to $\sigma_0\hbox{I}$ and, therefore, $$ {\bf D}=\sigma_0\hbox{I}-{\bf T}~. $$ Moreover, ${\bf T}$ is symmetric. \end{theorem} \begin{proof} From the definition of ${\bf T}$, we can see that the sum \mbox{$\sum_i (z-z_i)\otimes \nabla_i f(s_0)$} leads to a symmetric matrix since \begin{eqnarray*} {\bf T}&=&\alpha\int f'(s_0)\Big(\frac{s_0-s_0^2}{|z-z_0|^2}(z-z_0)\otimes(z-z_0)-\\ &&\sum_{i\neq 0}\frac{s_0s_i}{|z-z_i|^2}(z-z_i)\otimes(z-z_i)\Big)dz^2~, \end{eqnarray*} and the integrand is a combination of terms \mbox{$(z-z_i)\otimes(z-z_i)$}, which are symmetric matrices. This implies that ${\bf T}$ is also symmetric.
We can see that $$ \sum_{i}\nabla_i f(s_0)=-\nabla f(s_0)~, $$ and, using integration by parts, we have \begin{eqnarray*} \lefteqn{\int \sum_i z\otimes \nabla_i f(s_0)dz^2=-1\times}\\ &\Biggl[ \begin{array}{cc} \int x \frac{\partial}{\partial x} f(s_0)dxdy&\int x \frac{\partial}{\partial y} f(s_0)dxdy\\ \int y \frac{\partial}{\partial x} f(s_0)dxdy&\int y \frac{\partial}{\partial y} f(s_0)dxdy \end{array} \Biggr]= \Biggl[\begin{array}{cc} \sigma_0&0\\ 0&\sigma_0 \end{array}\Biggr]~, \end{eqnarray*} which is symmetric and equal to $\sigma_0\hbox{I}$. The sum/difference of symmetric matrices is also a symmetric matrix and, therefore, ${\bf D}$ is a symmetric matrix and ${\bf D}=\sigma_0\hbox{I}-{\bf T}$. \end{proof} Now, we will only consider grid patterns and, for a grid pattern, we have $$ {\bf E}(\sigma_i)=\sigma_0=\int f(s_0)dz^2~, $$ and \mbox{${\bf E}(h(z))=\nu({\cal S})\sigma_0$}. Under a homothetic transformation, $\nu({\cal S})$ and $\sigma_0$ are transformed but $\nu({\cal S})\sigma_0$ remains invariant. \begin{theorem} If the pattern of the points in set ${\cal S}$ is optimal w.r.t. linear transformations of the set, then ${\bf D}=\sigma_0\hbox{I}$ and ${\bf T}=0$. \end{theorem} \begin{proof} The derivative of $\sigma_0$ w.r.t. the matrix ${\bf A}$ is exactly equal to ${\bf D}$. Similarly, under the same transformation, the density transforms as $$ \nu({\cal S})(t,{\bf A})=\frac{\nu({\cal S})}{\hbox{\rm det}(\hbox{I}+t{\bf A})}~, $$ which, for ${\bf A}=\hbox{I}$, reduces to $$ \nu({\cal S})(t,\hbox{I})=\nu({\cal S})/(1+t)^2~. $$ In any case, the derivative of $\nu({\cal S})$ w.r.t. the matrix ${\bf A}$ is exactly equal to $-\hbox{I}\nu({\cal S})$. We also know that if the pattern is optimal w.r.t. linear transformations, the derivative of $\nu({\cal S})\sigma_0$ w.r.t. the matrix ${\bf A}$ must be null. This implies that $$ \nu({\cal S}){\bf D}-\hbox{I}\nu({\cal S})\sigma_0=0~, $$ which leads to ${\bf D}=\sigma_0\hbox{I}$ and ${\bf T}=0$.
\end{proof} \begin{figure*}[!t] \centering \psfrag{a}{$\sqrt{3}d$} \psfrag{b}{$2d$} \psfrag{c}{$d$} \includegraphics[scale=0.5]{figures/square_grid} \includegraphics[scale=0.5]{figures/hexagonal_grid} \includegraphics[scale=0.5]{figures/triangular_grid} \caption{Square, Hexagonal and Triangular grid patterns. The arrows (blue and red) represent the invariance of the eigenvalues w.r.t. isometric symmetries of the grids.} \label{fig:grid_layouts} \end{figure*} We know that ${\bf T}$ is symmetric and, since $\hbox{\rm tr}({\bf D})=2\sigma_0$ and ${\bf D}=\sigma_0\hbox{I}-{\bf T}$, that \mbox{$\hbox{\rm tr}({\bf T})=0$}. When a grid is optimal, we must have \mbox{${\bf T}=0$}. In any case, the matrix ${\bf T}$ must be invariant w.r.t. the isometric symmetries of the grid. On the $2D$ plane, the grid patterns which satisfy this condition are the square, hexagonal and triangular grids. The square grid is symmetric w.r.t. any horizontal or vertical axis of the grid and, in particular, w.r.t. the rotation of $\pi/2$ represented by $\hbox{J}$. Therefore, the {\em eigensystem} must be invariant by a rotation of $\pi/2$. This implies that the {\em eigenvalues} are equal and therefore null since \mbox{$\hbox{\rm tr}({\bf T})=0$}. Indeed, writing $$ {\bf T}=\left[ \begin{array}{cc} a&b\\ b&-a \end{array} \right]~, $$ the invariance \mbox{$\hbox{J}{\bf T}\hbox{J}^T={\bf T}$} gives \mbox{$-a=a$} and \mbox{$-b=b$}, hence ${\bf T}=0$. The same argument applies to the hexagonal grid, with invariance under a $\pi/3$ rotation, and to the triangular pattern, with invariance under a $2\pi/3$ rotation. \subsection{Reception Areas} \label{sec:rx_area_2} \begin{figure}[!t] \centering \psfrag{a}{$z_i$} \psfrag{b}{$z$} \psfrag{c}{$dz=J\frac{\nabla S_{i}(z)}{|\nabla S_{i}(z)|}\delta t$} \psfrag{d}{${\cal C}(z_i,\beta,\alpha)$} \psfrag{e}{$A(z_i,\lambda,\beta,\alpha)$} \includegraphics[scale=0.65]{figures/snr_gradient} \caption{Computation of the reception area of transmitter $i$.\label{fig:snr_gradient}} \end{figure} The set of simultaneous transmitters, ${\cal S}$, is a set of points arranged in a grid pattern. We consider that, for every slot, the grid pattern is the same {\em modulo} a translation.
We have covered the square, hexagonal and triangular grid layouts shown in Fig. \ref{fig:grid_layouts}. Grids are constructed from $d$, which defines the minimum distance between neighboring transmitters and can be derived from the hop-distance parameter of a typical TDMA-based scheme. The density of grid points, $\lambda$, depends on $d$. However, the capacity, $c(z,\beta,\alpha)$, is independent of the value of $d$ or, for that matter, of $\lambda$, as it is invariant under any homothetic transformation of the set of transmitters. Our aim is to compute the size of the reception area, $A(z_i,\lambda,\beta,\alpha)$, around each transmitter $i$. As a consequence of the regular grid pattern, all reception areas are the same {\em modulo} a translation (and a rotation for the hexagonal pattern), and their surface area, $\sigma(\lambda,\beta,\alpha)$, is the same. If ${\cal C}(z_i,\beta,\alpha)$ is the closed curve that forms the boundary of $A(z_i,\lambda,\beta,\alpha)$ and $z$ is a point on ${\cal C}(z_i,\beta,\alpha)$, we have \begin{equation} \sigma(\lambda, \beta,\alpha)=\frac{1}{2}\displaystyle\int\limits_{{\cal C}(z_i,\beta,\alpha)}\hbox{\rm det}(z-z_{i},dz)~, \label{eq:area_integral} \end{equation} where $\hbox{\rm det}(a,b)$ is the determinant of the vectors $a$ and $b$ and $dz$ is the vector tangent to ${\cal C}(z_i,\beta,\alpha)$ at point $z$. \mbox{$\hbox{\rm det}(z-z_i,dz)$} is the cross product of the vectors $(z-z_{i})$ and $dz$ and gives the area of the parallelogram formed by these two vectors. Note that \eqref{eq:area_integral} remains true if $z_{i}$ is replaced by any interior point of $A(z_i,\lambda,\beta,\alpha)$. The SIR $S_{i}(z)$ of transmitter $i$ at point $z$ is given by (\ref{eq:sinr}). We assume that, at point $z$, $S_{i}(z)=\beta$. At point $z$ we can also define the gradient of $S_{i}(z)$, $\nabla S_{i}(z)$, $$ \nabla S_{i}(z)=\left[\begin{array}{c} \frac{\partial}{\partial x}S_{i}(z)\\ \frac{\partial}{\partial y}S_{i}(z)\end{array}\right]~.
$$ Note that $\nabla S_i(z)$ is the inward normal to the curve ${\cal C}(z_i,\beta,\alpha)$ and points towards $z_i$. The vector $dz$ is co-linear with $J\frac{\nabla S_{i}(z)}{|\nabla S_{i}(z)|}$ where $J$ is the clockwise rotation of $\pi/2$ (or anti-clockwise rotation of $3\pi/2$) given by $$ J=\left[\begin{array}{cc} 0 & 1\\ -1 & 0\end{array}\right]~. $$ Therefore, we can fix $dz=J\frac{\nabla S_{i}(z)}{|\nabla S_{i}(z)|}\delta t$ and, in (\ref{eq:area_integral}), \begin{align*} \hbox{\rm det}(z-z_{i},dz)&=(z-z_i)\times J\frac{\nabla S_{i}(z)}{|\nabla S_{i}(z)|}\delta t\\ &=-(z-z_{i}).\frac{\nabla S_{i}(z)}{|\nabla S_{i}(z)|}\delta t~, \end{align*} where $\delta t$ is assumed to be infinitesimally small. The sequence of points $z(k)$ computed as \begin{align*} z(0) & =z\\ z(k+1) & =z(k)+J\frac{\nabla S_{i}(z(k))}{|\nabla S_{i}(z(k))|}\delta t~,\end{align*} gives a discretized and numerically convergent parametric representation of ${\cal C}(z_i,\beta,\alpha)$ by finite elements. Therefore, (\ref{eq:area_integral}) reduces to \begin{equation} \sigma(\lambda, \beta,\alpha)\approx-\frac{1}{2}\sum_{k}(z(k)-z_{i}).\frac{\nabla S_{i}(z(k))}{|\nabla S_{i}(z(k))|}\delta t~, \label{eq:area_integral_2} \end{equation} assuming that we stop the sequence $z(k)$ when it loops back on, or close to, the point $z$. The starting point, \mbox{$z(0)=z$}, can be found using Newton's method. A first approximation of $z$, required by Newton's method, can be computed by assuming a single interferer, the one nearest to transmitter $i$, as discussed in the Appendix. The negative sign in (\ref{eq:area_integral_2}) is cancelled by the dot product of the vectors $(z(k)-z_{i})$ and $\nabla S_{i}(z(k))$, which is negative since $\nabla S_{i}$ points towards $z_i$. \subsection{Capacity} The expression for the capacity is $$ c(z,\beta,\alpha)={\bf E}(N(z,\beta,\alpha))=N(z,\beta,\alpha)=\lambda\sigma(\lambda,\beta,\alpha)~, $$ where $\sigma(\lambda,\beta,\alpha)$ is computed using the method described above.
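The tracing procedure above can be sketched numerically. In the minimal sketch below, the grid size, $\delta t$ and thresholds are illustrative, the gradient of $S_i$ is evaluated by finite differences rather than analytically, and the starting point is found by bisection along the $+x$ axis instead of Newton's method:

```python
import numpy as np

# Sketch of the boundary-tracing method for a square grid of pitch d;
# parameters (d, alpha, beta, dt, grid truncation) are illustrative.
alpha, beta, d, dt = 4.0, 1.0, 1.0, 0.01
xs = np.arange(-10, 11) * d
mx, my = np.meshgrid(xs, xs)
tx = np.column_stack([mx.ravel(), my.ravel()])
i0 = int(np.argmin(np.einsum('ij,ij->i', tx, tx)))  # transmitter at the origin

def sir(z):
    r2 = np.sum((z - tx) ** 2, axis=1)
    w = r2 ** (-alpha / 2.0)
    return w[i0] / (w.sum() - w[i0])

def grad_sir(z, eps=1e-6):
    # numerical gradient of S_i at z (central finite differences)
    gx = (sir(z + [eps, 0.0]) - sir(z - [eps, 0.0])) / (2 * eps)
    gy = (sir(z + [0.0, eps]) - sir(z - [0.0, eps])) / (2 * eps)
    return np.array([gx, gy])

# z(0): point of the curve S_i(z) = beta on the +x axis, found by bisection.
lo, hi = 1e-3, d / 2.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if sir(np.array([mid, 0.0])) > beta else (lo, mid)
z0 = np.array([0.5 * (lo + hi), 0.0])

# Follow z(k+1) = z(k) + J grad/|grad| dt, accumulate the area integral,
# and stop after one full revolution around z_i.
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
z, area, ang = z0.copy(), 0.0, 0.0
prev = z0 - tx[i0]
for _ in range(20000):
    g = grad_sir(z)
    g /= np.linalg.norm(g)
    area += -0.5 * float(np.dot(z - tx[i0], g)) * dt
    z = z + J @ g * dt
    cur = z - tx[i0]
    ang += np.arctan2(prev[0] * cur[1] - prev[1] * cur[0], prev @ cur)
    prev = cur
    if ang >= 2 * np.pi:
        break
print(area)   # sigma(lambda, beta, alpha) for this grid
```

Tracking the accumulated angle around $z_i$ is a simple way to detect that the discretized curve has closed, which is more robust than testing the distance to $z(0)$ when $\delta t$ is small.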
\section{Medium Access Schemes Based on Exclusion Rules} \label{sec:practical} As we mentioned earlier, an accurate model of the interference distribution is very hard to derive for exclusion schemes because of the correlation in the locations of simultaneous transmitters, {\it e.g.}, simultaneous transmitters separated by a certain minimum distance. We also mentioned a few approaches to model the distribution of transmitters in exclusion schemes, including the Mat\'ern point process \cite{CSMA-Model,Weber3} or, more recently, the {\em SSI} point process \cite{Busson}. The classical Mat\'ern point process is a location-dependent thinning of a PPP such that the remaining points are at least at a certain distance \mbox{$b>0$} from each other. Instead of distance,~\cite{CSMA-Model,Weber3} used the received power level, a function of distance, in order to model CSMA based schemes. However, \cite{Busson} showed that the Mat\'ern point process may lead to an underestimation of the density of simultaneous transmitters and proposed an {\em SSI} point process in which a node can transmit or, in other words, can be added to the set of simultaneous transmitters ${\cal S}$, if it is at least at a certain distance \mbox{$b>0$} from all active transmitters ({\it i.e.}, the transmitters already in the set ${\cal S}$). The {\em SSI} point process can be used to study node coloring schemes but may result in a flawed representation of CSMA based schemes. In the case of CSMA based schemes, a node senses the medium to detect if the signal level of interference is below a certain threshold and only transmits if it remains below that threshold during the randomly selected back-off period. Therefore, the {\em decision} to transmit depends on all nodes which are already active and transmitting on the medium.
In order to address this inaccuracy in the {\em SSI} point process model, \cite{Busson} proposed an {\em SSI$_k$} point process which ensures that a node, before transmitting on the medium, takes into account the interference from its $k$ nearest active transmitters. However, very few analytical results are available on {\em SSI} and {\em SSI$_k$} point processes and most of the results are obtained via simulations. Therefore, we will use Monte Carlo simulation along with the analytical method proposed in \S \ref{sec:rx_area_2} to compute the capacity of node coloring and CSMA based schemes. In this section, we will discuss the models of node coloring and CSMA based schemes which we will employ in our Monte Carlo simulation later. In the following discussion, the set of all nodes in the network is ${\cal N}$. In a practical implementation, this set is finite but, in theory, it can be infinite with a uniform density. \subsection{Node Coloring Based Schemes} \label{sec:tdma} Node coloring schemes use a managed transmission scheme based on the time division multiple access (TDMA) approach. The aim is to minimize the interference between transmissions that causes packet loss. These schemes assign colors to nodes that correspond to periodic slots, {\it i.e.}, nodes that satisfy a spatial condition, based either on physical distance or on distance in terms of number of hops, will be assigned different colors. This condition is usually derived from interference models of wireless networks such as unit disk graph (UDG) models. For example, in order to avoid collisions at receivers, all nodes within $k$ hops are assigned unique colors. A typical value of $k$ is $2$. A few practical implementations of node coloring schemes are as follows. In~\cite{unified}, the authors proposed coloring based on the RAND, MNF and PMNF algorithms. In RAND, nodes are colored in a random order whereas MNF and PMNF prioritize nodes according to the number of their neighbors.
NAMA~\cite{NAMA} colors the nodes, in a $2$-hop neighborhood, randomly using a hash number. In SEEDEX~\cite{SEEDEX}, nodes use a random seed number in a $2$-hop neighborhood to randomly elect the transmitter. FPRP~\cite{FPRP} is a five-phase protocol where nodes contend to allocate slots in a $2$-hop neighborhood. DRAND is the distributed version of RAND and article~\cite{DRAND} shows that it performs better than SEEDEX and FPRP. \cite{tdma_fair} proposes a joint TDMA scheduling/load balancing algorithm for wireless multi-hop networks and shows that it can improve the performance in terms of multi-hop throughput and fairness. Most of these schemes use a unit disk graph based interference model. However, the success of a transmission depends on whether the SIR at the receiver is above a certain threshold. \cite{Derbel} is an example of a node coloring scheme which uses an SIR based interference model. Note that the tightly managed transmission scheduling in node coloring schemes has a significant overhead, {\it e.g.}, because of the control traffic or message passing required by the distributed algorithms that resolve color assignment conflicts. In this article, instead of considering any particular scheme, we will present a model which ensures that transmitters use an exclusion distance in order to avoid the use of the same slot within a certain distance. This exclusion distance is defined in terms of a Euclidean distance $d$, which may be derived from the distance parameter of a typical TDMA-based scheme. Therefore, a slot cannot be shared within a distance of $d$ or, in other words, nodes transmitting in the same slot shall be located at a distance greater than or equal to $d$ from each other. The following model of node coloring schemes constructs the set of simultaneous transmitters, ${\cal S}$, in each slot (this is supposed to be done off-line so that transmission patterns recur periodically in each slot).
\begin{compactenum} \item Initialize ${\cal M}={\cal N}$ and ${\cal S}=\emptyset$. \item Randomly select a node $s_{i}$ from ${\cal M}$ and add it to the set ${\cal S}$, {\it i.e.}, ${\cal S}={\cal S}\cup\{s_{i}\}$. Remove $s_{i}$ from the set ${\cal M}$. \item Remove all nodes from the set ${\cal M}$ which are at a distance less than $d$ from $s_{i}$. \item If the set ${\cal M}$ is non-empty, repeat from step $2$. \end{compactenum} The steps described above model a centralized or distributed node coloring scheme which {\em randomly} selects the nodes for coloring while satisfying the Euclidean distance constraint. Under the given constraints, this model also tries to maximize the number of simultaneous transmitters in each slot and should give the maximum capacity achievable with any node coloring scheme which {\em may not} prioritize the nodes for coloring, {\it e.g.}, \cite{DRAND}. \subsection{CSMA Based Schemes} \label{sec:csma} Compared to managed transmission schemes like node coloring, CSMA based schemes are simpler but are more demanding on the physical layer. Before transmitting on the channel, a node verifies that the medium is idle by sensing the signal level. If the detected signal level is below a certain threshold, the medium is assumed idle and the node transmits its packet. Otherwise, it may invoke a random back-off mechanism and wait before attempting a retransmission. CSMA/CD (CSMA with collision detection) and CSMA/CA (CSMA with collision avoidance), the latter also used in IEEE 802.11, are modifications of CSMA for performance improvement. We will adopt a model of CSMA based schemes where nodes contend for medium access at the beginning of each slot. In other words, nodes transmit only after detecting that the medium is idle. We assume that nodes defer their transmission by a tiny back-off time, from the beginning of a slot, and abort their transmission if they detect that the medium is not idle.
We also suppose that the detection time and receive-to-transmit transition times are negligible and that, in order to avoid collisions, nodes use randomly selected (but different) back-off times. Therefore, the main effect of the back-off times is to produce a random order of the nodes in competition. For the evaluation of the performance of CSMA based schemes, we will use the following simplified construction of the set of simultaneous transmitters ${\cal S}$. \begin{compactenum} \item Initialize ${\cal M}={\cal N}$ and ${\cal S}=\emptyset$. \item Randomly select a node $s_{i}$ from ${\cal M}$ and add it to the set ${\cal S}$, {\it i.e.}, ${\cal S}={\cal S}\cup\{s_{i}\}$. Remove $s_{i}$ from the set ${\cal M}$. \item Remove from the set ${\cal M}$ all nodes which can detect a combined interference signal of power higher than $\theta$ (the carrier sense threshold) from all transmitters in the set ${\cal S}$, {\it i.e.}, if $$ \sum_{s_i\in\cal{S}}|z_i-z_j|^{-\alpha}\geq \theta~, $$ remove $s_j$ from $\cal{M}$. Here, $z_i$ is the position of $s_i$ and $|z_i-z_j|$ is the Euclidean distance between $s_i$ and $s_j$. \item If the set ${\cal M}$ is non-empty, repeat from step $2$. \end{compactenum} These steps model a CSMA based scheme which requires that transmitters do not detect, during their back-off periods, an interference signal of level equal to or higher than $\theta$ before transmitting on the medium. At the end of the construction of the set ${\cal S}$, some transmitters may experience an interference signal of level higher than $\theta$. However, this behavior is in compliance with a realistic CSMA based scheme where nodes which have started their transmissions or, in other words, are already added to the set ${\cal S}$, do not consider the increase in interference level caused by later transmitters.
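The two constructions above can be sketched directly; in the minimal sketch below, the node positions, window size and parameter values are illustrative:

```python
import numpy as np

# Sketch of the two exclusion models: node coloring with exclusion distance d,
# and CSMA with carrier-sense threshold theta. All parameters are illustrative.
rng = np.random.default_rng(1)
L, n, alpha, d, theta = 100.0, 2000, 4.0, 5.0, 1e-5
nodes = rng.random((n, 2)) * L                  # the set N, uniform over an L x L square

def coloring_set(nodes, d):
    """Random sequential selection with an exclusion distance d."""
    S, M = [], list(rng.permutation(len(nodes)))
    while M:
        i = M.pop(0)                            # randomly ordered candidates
        S.append(i)
        M = [j for j in M
             if np.linalg.norm(nodes[j] - nodes[i]) >= d]
    return S

def csma_set(nodes, theta):
    """A candidate joins S only if the combined interference from S is below theta."""
    S = []
    for j in rng.permutation(len(nodes)):       # random back-off order
        if S:
            interf = np.sum(np.linalg.norm(nodes[S] - nodes[j], axis=1) ** (-alpha))
            if interf >= theta:
                continue
        S.append(int(j))
    return S

Sc = coloring_set(nodes, d)
Ss = csma_set(nodes, theta)
print(len(Sc) / L**2, len(Ss) / L**2)           # empirical densities lambda
```

Note that, as in the text, the CSMA construction only checks the interference seen by the candidate at its decision time; already-admitted transmitters are never re-examined.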
\subsection{Reception Areas} We are not aware of an analytical closed-form expression for the probability distribution of the signal level with node coloring or CSMA based schemes. Consequently, we do not have a closed-form expression for the average size of the reception area of an arbitrary transmitter with these schemes. Therefore, it is evaluated via Monte Carlo simulation using the analytical method of \S \ref{sec:rx_area_2}. The value of $d$, in the case of the node coloring scheme, or $\theta$, in the case of the CSMA based scheme, can be tuned to obtain an average transmitter density of $\lambda$. \subsection{Capacity} Similarly, $$ c(z,\beta,\alpha)={\bf E}(N(z,\beta,\alpha))=\lambda \sigma(\lambda,\beta,\alpha)~, $$ is also computed via Monte Carlo simulation. The capacity, $c(z,\beta,\alpha)$, is invariant under any homothetic transformation of the set of transmitters and, therefore, it is also independent of the values of the medium access scheme parameters $\theta$ or $d$. \section{Slotted ALOHA Scheme} \label{sec:aloha} In the slotted ALOHA scheme, nodes do not use any complicated managed transmission scheduling and transmit their packets independently (with a certain medium access probability), {\it i.e.}, in each slot, each node decides independently whether to transmit or to remain silent. Therefore, the set of simultaneous transmitters, in each slot, can be given by a uniform Poisson distribution of mean equal to $\lambda$ transmitters per unit square area~\cite{Jacquet:2009,SR-ALOHA,Weber2}. We can write \eqref{eq:sinr_condition} as $$ |z-z_{i}|^{-\alpha}\geq \beta\underset{j\neq i}{\sum}|z-z_{j}|^{-\alpha}\label{eq:snr_relation_at_z} $$ or $ {\cal W}(z,\{z_{i}\})\geq \beta {\cal W}(z,{\cal S}-\{z_{i}\})$, where ${\cal W}(z,{\cal S})=\underset{z_{j}\in {\cal S}}{\sum}|z-z_{j}|^{-\alpha}~.
$ \subsection{Distribution of Signal Levels} \begin{figure}[!t] \centering \includegraphics[scale=0.65]{figures/signal_level} \caption{Signal levels (in dB) for a random network with attenuation coefficient $\alpha=2.5$. \label{fig:signal_levels}} \end{figure} Figure \ref{fig:signal_levels} shows the function ${\cal W}(z,{\cal S})$ for $z$ varying in the plane, with ${\cal S}$ an arbitrary set of Poisson distributed transmitters and \mbox{$\alpha=2.5$}. It is clear that the closer the receiver is to the transmitter, the larger is the SIR. For each value of $\beta$ we can draw, around each transmitter, the area where its signal is received with SIR greater than or equal to $\beta$. Figure \ref{fig:reception_area} shows the reception areas for the same set ${\cal S}$ as in Fig. \ref{fig:signal_levels}, for various values of $\beta$. As can be seen, the reception areas do not overlap for $\beta>1$ since there is only one dominant signal. Our aim is to find the average size of this area and how it depends on $\lambda$, $\beta$ and $\alpha$. ${\cal W}(z,{\cal S})$ depends on ${\cal S}$ and hence is also a random variable. The random variable ${\cal W}(z,{\cal S})$ has a distribution which is invariant by translation and therefore does not depend on $z$; we thus denote it ${\cal W}(\lambda)$. Let $w(x,\lambda)$ be its density function. The set ${\cal S}$ is given by a $2D$ Poisson process with intensity $\lambda$ transmitters per slot per unit square area and the Laplace transform of $w(x,\lambda)$, $\tilde{w}(\theta,\lambda)$, can be computed exactly.
The Laplace transform, $\tilde{w}(\theta,\lambda)={\bf E}(e^{-\theta{\cal W}(\lambda)})=\exp\left(2\pi\lambda\int_0^{\infty}(e^{-\theta r^{-\alpha}}-1)rdr\right)$, satisfies the identity $$ \tilde{w}(\theta,\lambda)=\exp\left(-\pi\lambda\Gamma(1-\frac{2}{\alpha})\theta^{\frac{2}{\alpha}}\right)~, $$ where $\Gamma(.)$ is the Gamma function. Note that, in all cases, $\tilde{w}(\theta,\lambda)$ is of the form $\exp(-\lambda C\theta^{\gamma})$ where $\gamma=\frac{2}{\alpha}$, and the expression of $C$ in the case of a $2D$ map is $C=\pi\Gamma(1-\gamma)$. \begin{figure}[!t] \centering \includegraphics[clip,scale=0.65]{figures/reception_area} \caption{Reception areas for various values of the SIR threshold, $\beta=1,4,10$, for the situation of Fig. \ref{fig:signal_levels}. \label{fig:reception_area}} \end{figure} From the above formula and by applying the inverse Laplace transform, we get $$ P({\cal W}(\lambda)<x)=\frac{1}{2i\pi}\underset{-i\infty}{\overset{+i\infty}{\int}}\frac{\tilde{w}(\theta,\lambda)}{\theta}e^{\theta x}d\theta~. $$ Expanding $\tilde{w}(\theta,\lambda)=\underset{n\geq0}{\sum}\frac{(-C\lambda)^{n}}{n!}\theta^{n\gamma}$, we get $$ P({\cal W}(\lambda)<x)=\frac{1}{2i\pi}\underset{n\geq0}{\sum}\frac{(-C\lambda)^{n}}{n!}\underset{-i\infty}{\overset{+i\infty}{\int}}\theta^{n\gamma-1}e^{\theta x}d\theta~. $$ By bending the integration path towards the negative axis, \begin{align*} \frac{1}{2i\pi}\underset{-i\infty}{\overset{+i\infty}{\int}}\theta^{n\gamma-1}e^{\theta x}d\theta & =\frac{e^{i\pi n\gamma}-e^{-i\pi n\gamma}}{2i\pi}\underset{0}{\overset{\infty}{\int}}\theta^{n\gamma-1}e^{-\theta x}d\theta\\ & =\frac{\sin(\pi n\gamma)}{\pi}\Gamma(n\gamma)x^{-n\gamma}~,\end{align*} and we get \begin{equation} P({\cal W}(\lambda)<x)=\underset{n\geq0}{\sum}\frac{(-C\lambda)^{n}}{n!}\frac{\sin(\pi n\gamma)}{\pi}\Gamma(n\gamma)x^{-n\gamma}\label{eq:signal_pd}~, \end{equation} where the $n=0$ term is understood as its limit value $1$. \subsection{Reception Areas} Let $p(\lambda, r, \beta, \alpha)$ be the probability to receive a signal at distance $r$ with SIR at least equal to $\beta$.
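The series \eqref{eq:signal_pd} can be validated numerically. A minimal sketch, using the fact that for $\alpha=4$ ({\it i.e.}, $\gamma=1/2$) ${\cal W}(\lambda)$ is a one-sided L\'evy variable whose CDF has the closed form $\hbox{erfc}(C\lambda/(2\sqrt{x}))$, which the series must reproduce (the parameter values below are illustrative):

```python
import math

# Evaluate the series for P(W(lambda) < x) with C = pi * Gamma(1 - gamma),
# gamma = 2/alpha; the n = 0 term is taken at its limit value 1.
def cdf_series(x, lam, alpha, n_terms=80):
    gamma = 2.0 / alpha
    C = math.pi * math.gamma(1.0 - gamma)
    total = 1.0                                  # n = 0 term (limit value)
    for n in range(1, n_terms):
        total += ((-C * lam) ** n / math.factorial(n)
                  * math.sin(math.pi * n * gamma) / math.pi
                  * math.gamma(n * gamma) * x ** (-n * gamma))
    return total

lam, alpha, x = 0.02, 4.0, 50.0
approx = cdf_series(x, lam, alpha)
C = math.pi * math.gamma(0.5)                    # gamma = 1/2 for alpha = 4
exact = math.erfc(C * lam / (2.0 * math.sqrt(x)))
print(approx, exact)                             # the two values agree closely
```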
Therefore, we have $$ p(\lambda, r, \beta, \alpha)=P\left({\cal W}(\lambda) < \frac{r^{-\alpha}}{\beta}\right)~, $$ and the average size of the reception area around an arbitrary transmitter with SIR at least equal to $\beta$ is $$ \sigma(\lambda, \beta, \alpha)=2\pi\int p(\lambda, r, \beta, \alpha)rdr~. $$ Because of the obvious homothetic invariance, we can also write $p(\lambda,r,\beta,\alpha)=p(1,r\sqrt{\lambda},\beta,\alpha)$ and, therefore, $\sigma(\lambda,\beta,\alpha)=\frac{1}{\lambda}\sigma(1,\beta,\alpha)$. Let $\sigma(1,\beta,\alpha)=2\pi\int P({\cal W}(1)<\frac{r^{-\alpha}}{\beta})rdr$. Using integration by parts, we get $$ \sigma(1,\beta,\alpha)=\pi\alpha\int_0^{\infty}P\left({\cal W}(1)=\frac{1}{\beta r^{\alpha}}\right)\frac{r}{\beta r^\alpha}dr~. $$ We use the fact that $P({\cal W}(1)=x)=\frac{1}{2\pi i}\int_{\cal C}\tilde{w}(\theta,1)e^{\theta x}d\theta$, where ${\cal C}$ is an integration path in the definition domain of $\tilde{w}(\theta,1)$, {\it i.e.}, parallel to the imaginary axis with $\Re(\theta)>0$. By changing the variable $x=(\beta r^{\alpha})^{-1}$ and inverting the integrations, we get \begin{align} \sigma(1,\beta,\alpha)&=\frac{1}{2i}\int_{\cal C}\tilde{w}(\theta,1)\int_0^{\infty}e^{\theta x}(\beta x)^{-\gamma}dx\,d\theta \notag \\ &=\frac{1}{2i}\beta^{-\gamma}\Gamma(1-\gamma)\int_{\cal C}\tilde{w}(\theta,1)(-\theta)^{\gamma-1}d\theta~.\notag \end{align} Using $\tilde{w}(\theta,1)=\exp(-C\theta^{\gamma})$, and deforming the integration path to stick to the negative axis, we obtain \begin{align} \sigma(1,\beta,\alpha)&=\frac{e^{i\pi\gamma}-e^{-i\pi\gamma}}{2i}\beta^{-\gamma}\Gamma(1-\gamma)\int_0^{\infty}\exp(-C\theta^{\gamma})\theta^{\gamma-1}d\theta \notag \\ &=\sin(\pi \gamma)\beta^{-\gamma}\frac{\Gamma(1-\gamma)}{C\gamma}~.\notag \end{align} Using the expression for $C$, we have $$ \sigma(1,\beta,\alpha)=\frac{\sin(\pi\gamma)}{\pi\gamma}\beta^{-\gamma}~.
$$ Therefore, the average size of the reception area around an arbitrary transmitter $i$ with SIR at least equal to $\beta$ satisfies the identity \begin{equation} \sigma_{i}(\lambda,\beta,\alpha)=\frac{1}{\lambda}\frac{\sin(\frac{2}{\alpha}\pi)}{\frac{2}{\alpha}\pi}\beta^{-\frac{2}{\alpha}}~. \label{eq:poisson_area} \end{equation} The reception area $\sigma_{i}(\lambda,\beta,\alpha)$ is inversely proportional to the density of transmitters $\lambda$ and the product $\lambda\sigma_{i}(\lambda, \beta, \alpha)$ is a function of $\beta$ and $\alpha$ only. We notice that when $\alpha$ approaches infinity, $\sigma_{i}(\lambda,\beta,\infty)$ approaches $1/\lambda$. This is due to the fact that when $\alpha$ is very large, all nodes other than the closest transmitter contribute a negligible interference and, consequently, the reception areas turn out to be the Voronoi cells around the transmitters. This holds for any fixed value of $\beta$. The average size of a Voronoi cell being equal to the inverse density of the transmitters, $\frac{1}{\lambda}$, we get the asymptotic result. Note that when $\beta$ grows as $\exp(O(\alpha))$, we have $\sigma_{i}(\lambda,\beta,\alpha)\approx\frac{1}{\lambda}\exp(-\frac{2}{\alpha}\log(\beta))$, which suggests that the typical SIR as $\alpha\rightarrow\infty$ is of the order of $\exp(O(\alpha))$. Secondly, as $\alpha$ approaches $2$, $\sigma_{i}(\lambda,\beta,\alpha)$ approaches zero because $\sin(\frac{2}{\alpha}\pi)$ approaches zero. Indeed, the contribution of remote nodes tends to diverge and makes the SIR approach zero. This explains why $\sigma_{i}(\lambda,\beta,\alpha)$ approaches zero, for any fixed value of $\beta$, in this limit. \subsection{Capacity} In this case, the analytical expressions (\ref{eq:poisson_hand_over_no}) and (\ref{eq:poisson_area}) lead to \begin{equation} c(z,\beta,\alpha)={\bf E}(N(z,\beta,\alpha))=\sigma(1,\beta,\alpha)~.
\label{eq:poisson_capacity} \end{equation} \section{Evaluation and Results} \label{sec:simulations} In order to approach an infinite map, we perform numerical simulations in a very large network spread over a 2D square map with a side length of $10000$ meters. \subsection{Grid Pattern Based Schemes} In the case of grid pattern schemes, transmitters are spread over the network area in a square, hexagonal or triangular pattern. For all grid patterns, we set $d$ equal to $25$ meters, although this has no effect on the validity of our conclusions, as the capacity, $c(z,\beta,\alpha)$, is independent of $\lambda$. To avoid edge effects, we compute the size of the reception area of transmitter $i$, located in the center of the network area: \mbox{$z_{i}=(x_{i},y_{i})=(0,0)$}. The network area is large enough so that the reception area of transmitter $i$ is close to its reception area in an infinite map. $\lambda$ depends on the type of grid and is computed from the total number of transmitters spread over the network area of $10000\times10000$ square meters. In our numerical simulations, we set $\delta t=0.01$. \subsection{Medium Access Schemes Based on Exclusion Rules} \subsubsection{Node Coloring Based Schemes} The performance of node coloring based schemes is analyzed, via simulations, using the model specified in \S \ref{sec:tdma}. We set $d$ equal to $25$ meters. \subsubsection{CSMA Based Schemes} In order to evaluate the capacity of CSMA based schemes, we perform simulations using the model specified in \S \ref{sec:csma}. The carrier sense threshold, $\theta$, is set equal to $1\times 10^{-5}$. \subsubsection{Simulations} We consider that nodes are uniformly distributed over the network area of $10000\times10000$ square meters. Simultaneous transmitters, in each slot, are selected according to the model of each medium access scheme. 
Considering the practical limitations introduced by the bounded network area, we use the following Monte Carlo method to evaluate $\sigma(\lambda,\beta,\alpha)$. We {\em only} compute the size of the reception area of the transmitter located nearest to the center of the network area, and $\sigma(\lambda,\beta,\alpha)$ is the average of the results obtained with $10000$ samples of node distributions. Similarly, $\lambda$ is the average density of simultaneous transmitters obtained with these $10000$ samples of node distributions. Note that the models of the medium access schemes select simultaneous transmitters randomly and transmitters are uniformly distributed over the network area. Therefore, the Monte Carlo method, {\it i.e.}, using a large number of samples of node distributions and, for each sample, measuring only the reception area of the transmitter located nearest to the center of the network area, gives an accurate approximation of $\sigma(\lambda,\beta,\alpha)$ in an infinite map for given values of $d$ or $\theta$. It can be argued that, in the case of the CSMA based scheme, the density of simultaneous transmitters is higher on the boundaries of the network area than in the central region, because of the lower level of interference there. However, the network area is very large, and we observed that the difference in the spatial density of simultaneous transmitters between the boundaries and the central region is negligible. We also know that the capacity, $c(z,\beta,\alpha)$, is independent of $\lambda$, which depends on $d$ or $\theta$. However, if the node density is very low, it also has an impact on the packing (density) of simultaneous transmitters in the network. Therefore, $\lambda$ should be maximized to the point where no additional transmitter can be added to the network under the given values of $d$ or $\theta$. 
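As a sanity check, a Monte Carlo estimate of this kind can be compared against the slotted ALOHA closed form (\ref{eq:poisson_area}) for a density-$1$ Poisson field. The following numpy sketch (the truncation radius, integration grid, and sample counts are our own illustrative choices) draws interference samples and integrates the empirical success probability:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta = 4.0, 10.0
R, n_trials = 50.0, 2000          # truncation radius of the field / MC samples

def sample_interference():
    """One sample of W(1): shot noise of a density-1 Poisson field in a disk."""
    n = rng.poisson(np.pi * R * R)
    radii = R * np.sqrt(rng.random(n))   # uniform placement in the disk
    return np.sum(radii ** -alpha)

W = np.array([sample_interference() for _ in range(n_trials)])

# sigma(1, beta, alpha) = 2*pi * int_0^inf P(W(1) < r^-alpha / beta) r dr,
# estimated by trapezoidal integration of the empirical probability.
r = np.linspace(1e-3, 3.0, 300)
p = np.array([(W < ri ** -alpha / beta).mean() for ri in r])
f = p * r
sigma_mc = 2 * np.pi * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r))

g = 2.0 / alpha                   # closed form: sin(pi*g)/(pi*g) * beta**(-g)
sigma_cf = np.sin(np.pi * g) / (np.pi * g) * beta ** -g
print(sigma_mc, sigma_cf)         # the two agree within Monte Carlo error
```

The far-field contribution lost by truncating at $R$ is of the order of $\pi/R^2$ and therefore negligible here.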
This can be achieved by keeping the node density very high, {\it e.g.}, we observed that a node density of $1$ node per square meter is sufficient and further increasing the node density does not increase $\lambda$. To avoid edge effects, the values of $d$ or $\theta$ are chosen such that $\lambda$ is sufficiently high and edge effects have minimal impact on the central region of the network. \subsection{Slotted ALOHA Scheme} In the case of the slotted ALOHA scheme, the capacity, $c(z,\beta,\alpha)$, is computed from the analytic expressions (\ref{eq:poisson_area}) and (\ref{eq:poisson_capacity}). \begin{figure*}[!t] \centering \subfloat[$\beta$ is varying and $\alpha$ is fixed at $4.0$.]{ \hspace{-0.85 cm} \includegraphics[scale=1]{figures/capacity_K1} \hspace{-0.85 cm} \includegraphics[scale=1]{figures/capacity_K2} } \subfloat[$\beta$ is fixed at $10.0$ and $\alpha$ is varying.]{ \hspace{-0.85 cm} \includegraphics[scale=1]{figures/capacity_A1} \hspace{-0.85 cm} \includegraphics[scale=1]{figures/capacity_A2} } \caption{Capacity, $c(z,\beta,\alpha)$, of grid pattern (triangular, square and hexagonal), node coloring, CSMA and slotted ALOHA based medium access schemes. \label{fig:comparison}} \end{figure*} \subsection{Observations} The values of the SIR threshold, $\beta$, and the attenuation coefficient, $\alpha$, depend on the underlying physical layer or system parameters and are usually fixed and beyond the control of network/protocol designers. However, to give the reader an understanding of the influence of these parameters on the capacity, $c(z,\beta,\alpha)$, of different medium access schemes, we treat them as variable. Figure \ref{fig:comparison}(a) compares the capacity, $c(z,\beta,\alpha)$, of the grid pattern, node coloring, CSMA and slotted ALOHA schemes with $\beta$ varying and \mbox{$\alpha=4.0$}. Similarly, Fig. 
\ref{fig:comparison}(b) compares these medium access schemes with \mbox{$\beta=10.0$} and $\alpha$ varying. We know that as $\alpha$ approaches infinity, the reception area around each transmitter turns out to be a Voronoi cell with an average size equal to $1/\lambda$. Therefore, as $\alpha$ approaches infinity, $c(z,\beta,\alpha)$ approaches 1. For the slotted ALOHA scheme, (\ref{eq:poisson_area}) and (\ref{eq:poisson_capacity}) also arrive at the same result. For the other medium access schemes, we computed $c(z,\beta,\alpha)$ with $\alpha$ increasing up to $100$, and the results confirm that, asymptotically, $c(z,\beta,\alpha)$ approaches 1 for all schemes. \subsubsection{Optimal Local Capacity in Wireless Ad Hoc Networks} From the results, we can see that the maximum capacity in wireless ad hoc networks is obtained with the triangular grid pattern based medium access scheme. In order to quantify the improvement in capacity achieved by the triangular grid pattern scheme over the other schemes, we perform a scaled comparison of the triangular grid pattern, slotted ALOHA, node coloring and CSMA based schemes, obtained by dividing the capacity, $c(z,\beta,\alpha)$, of each of these schemes by the capacity of the triangular grid pattern scheme. Figure \ref{fig:improvement} shows the scaled comparison with $\beta$ and $\alpha$ varying. It can be observed that the triangular grid pattern scheme achieves, {\em at most}, double the capacity of the slotted ALOHA scheme. However, node coloring and CSMA based medium access schemes achieve almost \mbox{$85\sim90\%$} of the optimal capacity obtained with the triangular grid pattern scheme. 
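For the slotted ALOHA scheme, the two limiting behaviors can be read off directly from the closed form (\ref{eq:poisson_area})--(\ref{eq:poisson_capacity}); a small numpy sketch (the parameter values are merely illustrative):

```python
import numpy as np

def aloha_capacity(beta, alpha):
    # c(z, beta, alpha) = sigma(1, beta, alpha) = sin(pi*g)/(pi*g) * beta**(-g),
    # with g = 2/alpha, from the Poisson reception-area formula.
    g = 2.0 / alpha
    return np.sin(np.pi * g) / (np.pi * g) * beta ** (-g)

for alpha in (4.0, 10.0, 100.0):
    print(alpha, aloha_capacity(10.0, alpha))
# The capacity increases with alpha and tends to 1 as alpha -> infinity,
# while it vanishes as alpha -> 2 (the sin factor goes to zero).
```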
\subsubsection{Observations on Node Coloring Scheme} The triangular grid pattern based medium access scheme can be visualized as an optimal node coloring which ensures that transmitters are exactly at distance $d$ from each other, whereas, in the case of random node coloring, transmitters are selected randomly and the only condition is that they must be at a distance greater than or equal to $d$ from each other. The exclusion region around each transmitter is a circular disk of radius $d/2$ with the transmitter at its center. Note that the disks of simultaneous transmitters must not overlap. The triangular grid pattern can achieve a packing density of \mbox{$\pi/\sqrt{12}\approx0.9069$}, where the packing density is defined as the proportion of the network area covered by the disks of simultaneous transmitters. On the other hand, random packing of disks, which is the case in random node coloring, can only achieve a packing density in the range of \mbox{$0.54\sim0.56$}~\cite{disk,Busson}. We have seen in the results that even this sub-optimal packing of simultaneous transmitters by the random node coloring scheme achieves almost the same capacity as the optimal packing by the triangular grid pattern based medium access scheme. \subsubsection{Observations on CSMA Based Scheme} We observe that the capacity with the CSMA based scheme is slightly lower (by approximately $3\%$) than with the node coloring based medium access scheme, irrespective of the value of the carrier sense threshold. The reason for the slightly lower capacity with the CSMA based scheme is that its exclusion rule is based on the carrier sense threshold, rather than on the distance between simultaneous transmitters. With a carrier sense based exclusion rule, the CSMA based scheme may not pack as many transmitters in each slot as would be possible with node coloring schemes. In other words, CSMA may result in a lower packing density of simultaneous transmitters as compared to the node coloring scheme. 
This can also be observed by comparing the densities of the {\em SSI} and {\em SSI$_k$} point processes in \cite{Busson}, and it explains the slightly lower capacity of the CSMA based scheme as compared to the node coloring scheme. However, as $\alpha$ approaches infinity, $\lambda$ with the CSMA based scheme approaches the node density and the reception area around each transmitter also becomes a Voronoi cell with an average size equal to the inverse of the node density. In fact, asymptotically, as $\alpha$ approaches infinity, the capacity, $c(z,\beta,\alpha)$, approaches $1$. \begin{figure*}[!t] \centering \subfloat[$K$ is varying and $\alpha$ is fixed at $4.0$.]{ \hspace{-0.85 cm} \includegraphics[scale=1]{figures/compare_K} } \subfloat[$K$ is fixed at $10.0$ and $\alpha$ is varying.]{ \hspace{-0.85 cm} \includegraphics[scale=1]{figures/compare_A} } \caption{Scaled comparison of triangular grid pattern, node coloring, CSMA and slotted ALOHA based schemes. \label{fig:improvement}} \end{figure*} \section{Current Limitations and Future Extensions} \label{sec:future} In the future, we will extend this work to multi-hop networks. A medium access scheme which achieves higher local capacity should also be able to achieve higher end-to-end capacity in multi-hop networks. For example, consider that $\lambda$ is normalized across all medium access schemes to $1$. Then, higher local capacity means higher $\sigma(1,\beta,\alpha)$, which has an impact on the range of transmission and the number of hops required to reach the destination. The analysis to establish exact bounds on end-to-end capacity with different medium access schemes in multi-hop networks will be challenging, as we will have to take into account the impact of routing schemes on capacity as well as various parameters, like hop length, number of hops and density of simultaneous transmitters, which are interrelated. The analysis presented here does not take into account fading and shadowing effects. 
Some results with fading are available, {\it e.g.}, for a Poisson distribution of transmitters~\cite{Weber2,Bartek,Haenggi}. Our analysis, in the case of slotted ALOHA, can take fading into account by using the results of~\cite{Jacquet:2009}. Nevertheless, an analysis of all the medium access schemes discussed here under a common framework, such as local capacity, has been lacking. \section{Conclusions} \label{sec:conclude} We evaluated the performance of wireless ad hoc networks under the framework of local capacity. We used analytical tools, based on a realistic interference model, to evaluate the performance of slotted ALOHA and grid pattern based medium access schemes, and we used Monte Carlo simulation to evaluate node coloring and CSMA based schemes. Our analysis implies that the maximum local capacity in wireless ad hoc networks can be achieved with grid pattern based schemes, and our results show that the triangular grid pattern outperforms the square and hexagonal grids. Moreover, compared to slotted ALOHA, which does not use any significant protocol overhead, the triangular grid pattern can only increase the capacity by a factor of $2$ or less, whereas CSMA and node coloring can achieve almost the same capacity as the triangular grid pattern based medium access scheme. The conclusion of this work is that improvements over ALOHA are limited in performance and may be costly in terms of protocol overheads, and that CSMA or node coloring can be very good candidates. Therefore, attention should be focused on optimizing existing medium access schemes and designing efficient routing strategies in the case of multi-hop networks. Note that our results are also relevant when nodes move according to an i.i.d. mobility process such that, at any time, the distribution of nodes in the network is homogeneous.
\section{Introduction} \label{sec:intro} Generative adversarial networks (GANs)~\cite{gan} have achieved remarkable success in learning to synthesize realistic images, which is crucial for a plethora of applications in computer graphics and vision~\cite{stylegan, stylegan2}. Notably, GANs allow us to explore synthesized images and edit real images in a semantically meaningful way~\cite{interfacegan, sefa, stylespace}. Among many image categories that GAN methods have dealt with, it is not surprising that the human face is one of the most popular targets in computer graphics and vision. Recently, making GANs aware of 3D geometry has received great attention, opening up an exciting research field of 3D GANs. They tackle the ill-posed problem of learning the 3D-aware distribution of real images by explicitly modeling 3D light transport between a camera and a target object. 3D GANs enable the synthesis and editing of photographs not only in a semantically meaningful way, but also in consideration of 3D scene geometry~\cite{eg3d, pigan, stylenerf, cips, giraffe}. To date, 3D GANs have mainly been demonstrated on real-world photographs, which are the exact recordings of real-world scenes through perspective cameras. In this paper, we aim to extend the capability of 3D GANs to handle a different, but meaningful visual form: \emph{drawing}. Drawing plays a crucial role in human history by depicting both real-world and imaginary subjects with intended and/or unintended variations. Existing 2D GAN methods have been extended to cope with drawings by adapting 2D GANs pretrained on real-world photographs to drawings, so-called domain adaptation~\cite{stylegan-ada, fewshot-correspondence, cyclegan, image2image}. The adaptation strategy exploits common features between photographs and drawings, allowing us to bring the synthesis and editing capability of 2D GANs to the drawing domain~\cite{stylealign}. 
Unfortunately, extending 3D GANs to the drawing domain turns out to be more challenging, as shown in Figure~\ref{fig:teaser}. One fundamental reason for this difficulty is that drawings have intrinsic geometric ambiguity in the subject and camera pose. Artists intentionally or unintentionally assume nondeterministic geometry of subjects from an imaginary viewpoint deviating from the physical one, resulting in drawings with creative ambiguity. This further increases the ill-posedness of learning a 3D-aware image distribution of drawings and hinders the direct application to 3D GANs of the domain adaptation methods used for 2D GANs. Figure~\ref{fig:teaser} shows that applying state-of-the-art 3D GANs~\cite{stylenerf, pigan} to drawings via domain adaptation fails to synthesize faithful 3D-consistent images. This paper proposes Dr.3D{}, a novel 3D GAN domain adaptation method for portrait drawings. Dr.3D{} effectively handles the fundamental geometric ambiguity of drawings with three remedies. First, we present a deformation-aware 3D synthesis network suitable for learning the large distribution of diverse shapes in drawings. Second, we propose an alternating adaptation scheme for 3D-aware image synthesis and pose estimation, effectively reducing the learning complexity of ambiguous 3D geometries and camera poses in drawings. Third, we impose geometric priors that enable stable domain adaptation from real photographs to drawings. The resulting domain adaptation method, Dr.3D{}, is the first method that enables stable editing and synthesis of drawing images in a 3D-consistent way. We validate the effectiveness of Dr.3D{} via extensive quantitative and qualitative evaluations. \section{Related works} \paragraph{3D-aware GANs} Several recent works have extended 2D GANs to be aware of the 3D structures of subjects and camera poses. 
Voxel-based 3D GANs~\cite{hologan} directly represent 3D structures with 3D voxel grids parameterized by 3D convolutional neural networks. Unfortunately, they typically suffer from large memory requirements. Mesh-based GANs~\cite{mesh3dgan1, mesh3dgan2} lift the memory problem by using sparse meshes as a geometric representation. However, dealing with such sparse primitives with neural networks is challenging due to their unstructured data types. Recently, implicit 3D GANs~\cite{eg3d, pigan, stylenerf, giraffe, graf} have shown promising performance in terms of image fidelity and 3D consistency. GRAF and $\pi$-GAN~\cite{graf, pigan} first proposed to learn to generate neural radiance fields (NeRF)~\cite{nerf} and synthesize images via differentiable volumetric rendering. Since then, several attempts have been made to further improve the synthesis quality by incorporating feature projection and upsampling, at the expense of losing multi-view consistency~\cite{giraffe, stylenerf}. Most recently, EG3D~\cite{eg3d} demonstrates the synthesis of high-resolution 3D-aware images based on a tri-plane representation and a StyleGAN generator~\cite{stylegan2}. Although great progress has been made in 3D GANs, directly applying them to drawings fails to learn meaningful 3D structures due to the large domain gap between real photographs and drawings (Figure~\ref{fig:teaser}). \paragraph{Photo-to-Drawing Domain Adaptation} Applying GANs to drawings has often been practiced via domain adaptation in the 2D image space, where we first train a GAN model on real photographs and then finetune the model on a drawing dataset. This domain-adaptation technique has achieved notable success in synthesizing high-fidelity drawing images. Moreover, the adapted models inherit the semantically-meaningful editing capability of previous 2D GANs, thus enabling semantic editing of drawing images. 
Thus, their applications span the diverse computer graphics and vision fields, resulting in new applications such as image cartoonization~\cite{toonify, dualstylegan} and automatic caricature generation~\cite{stylecarigan}. However, extending the success of 2D GANs to 3D GANs has been challenging. Drawings have ambiguous and diverse geometric shapes and appearances, resulting in a large domain gap between real photographs and drawings, as witnessed by recent works~\cite{stylenerf}. Typical failure examples are flattened 3D shapes, inconsistent multi-view images, and low-fidelity images, as shown in Figure~\ref{fig:qualitative_comparison}. We aim to overcome this hurdle by proposing a 3D domain adaptation method designed explicitly for drawings, and we demonstrate compelling results via our stable photo-to-drawing domain adaptation. \paragraph{Non-generative 3D-aware Image Editing} Editing an input image considering its 3D structure can also be done without using generative models. For instance, fitting a 3D parametric shape model~\cite{3dmm, FLAME, bfm} to an image allows us to have a geometrically-editable 3D model textured with the image~\cite{deep3dface, DECA}. StyleRig~\cite{stylerig} proposes combining 3DMM parameters with semantic features learned in 2D GANs for image editing. Unfortunately, parametric shape models are not applicable to drawings, as the diverse 3D geometries in drawings often deviate from the representation space of existing parametric shape models. Another research direction is to reconstruct the 3D geometry of an input image based on the physics-based priors of light transport, where Unsup3d~\cite{unsup3d}, GAN2Shape~\cite{gan2shape}, and StyleGANRender~\cite{imagegansmeet} show promising results. However, these methods often fail to handle drawings because of their restrictive physics-based priors, which assume an accurate decomposition of an image into illumination, appearance, and shape that does not hold in drawings. 
\section{Background on EG3D} Before introducing our approach, we first provide a brief review of the network architecture of EG3D~\cite{eg3d}, a state-of-the-art 3D GAN network upon which our network is built. Specifically, it starts with a randomly sampled GAN latent code $z$, which turns into 2D convolutional features after passing through a StyleGAN-based feature generator~\cite{stylegan}. Generated 2D feature maps are then rearranged into 3D orthogonal feature planes, from which any 3D point can be described with the projected features. Given the features, a multi-layer perceptron (MLP) decoder predicts the color and density of a 3D point, which are subsequently used for volume rendering, resulting in an image $x_\mathrm{fake}$ and a depth map $d_\mathrm{fake}$ at a camera pose $\theta$. The feature generator and the MLP decoder are trained using a discriminator $D$, which is conditioned on the input camera pose $\theta$ to promote the generator to synthesize images that accurately reflect the camera pose. Despite its success as a 3D GAN model for realistic portrait images, directly applying EG3D to drawings results in catastrophic failures, as shown in Figure~\ref{fig:ablation_img}, due to the fundamental ambiguity in drawings. \section{Domain Adaptation to Drawings} Built upon EG3D~\cite{eg3d}, Dr.3D{} is equipped with three remedies that mitigate the ill-posedness of photo-to-drawing 3D-aware domain adaptation: (1) a deformation-aware 3D synthesis network, (2) an alternating adaptation scheme for image synthesis and pose estimation, and (3) geometric priors for adaptation to drawing. In this section, we introduce each remedy in detail. \begin{figure}[!t] \centering \includegraphics[width=0.95\linewidth]{figs/framework_v8.pdf} \vspace{-0.2cm} \caption{ Network architecture of a deformation-aware 3D synthesis network. The network consists of a deformation network, a mapping network, a feature generator, and a volume rendering module. 
The network takes latent codes $z_d$ and $z$, and a camera pose parameter $\theta$ as inputs, and synthesizes an image in a multi-view consistent way. } \label{fig:framework} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.95\linewidth]{figs/detail_v5.pdf} \vspace{-0.2cm} \caption{ Network architectures of a generator and a deformation network. The generator network is based on the StyleGAN2 generator~\cite{stylegan2}. FC: a fully-connected (FC) layer. A: an affine layer consisting of a single FC layer. Mod: a modulation layer. Demod: a demodulation layer. } \label{fig:deform_network} \end{figure} \subsection{Deformation-aware 3D Synthesis} \label{sec:deform_net} Drawings may have local shape variations that do not exist in photographs taken by cameras. To handle such domain gaps effectively, we introduce a deformation-aware 3D synthesis network $G$. Our network architecture builds on top of the EG3D network as shown in Figure~\ref{fig:framework}. To model diverse shape deformations in drawings, our network uses an additional latent code $z_d$. The deformation code $z_d$ turns into residual features via an MLP-based deformation network as shown in Figure~\ref{fig:deform_network}, which are then added to early convolutional features in the StyleGAN feature generator. Note that modulating early layers in a StyleGAN generator is known to provide large-scale changes to synthesized images~\cite{stylecarigan, dualstylegan}. This simple feature-modulation strategy allows us to model diverse shape variations in drawings. The role of the deformation-aware network is twofold. First, it introduces additional dimensions to the latent space so that local shape variations that may uniquely exist in target artistic drawing domains can be more effectively handled. Second, the residual features generated by the deformation-aware network help model the domain gap between the source and target domains more effectively. 
Specifically, as the mapping network is an MLP and the generator consists of spatially-invariant convolution operations, finetuning them cannot effectively model local deformations. To resolve this, our deformation-aware network estimates spatially-variant residual features to better handle local feature differences between the source and target domains. Moreover, the residual features help retain the original weights of the generator so that the knowledge about 3D structures learned in the original networks can be better preserved for more successful domain adaptation. Refer to the Supplemental Document for implementation details and additional analysis of the deformation network. \subsection{Alternating Adaptation of Pose Estimation and Image Synthesis} \label{sec:adapt_pose} EG3D~\cite{eg3d}, which our method builds upon, requires known camera poses associated with input training images for its training. While camera poses for real portraits can be readily estimated using an off-the-shelf pose estimation network~\cite{deep3dface,DECA}, it is not trivial to obtain camera poses for portrait drawings. Previous pose estimation networks trained on real portraits fail on drawings due to the large domain gap, and there exist no datasets with ground-truth poses of drawings to train a pose estimation network. To tackle this problem, we may also adapt a pose-estimation network trained on portrait photos to drawings so that we can estimate the poses of drawings to train a synthesis network. However, adapting a pose-estimation network and a 3D synthesis network is a chicken-and-egg problem. Adapting a pose-estimation network requires training data with ground-truth pose labels, which can be obtained by an adapted 3D synthesis network, while adapting a 3D synthesis network requires an accurately adapted pose-estimation network. 
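One way out of this chicken-and-egg problem is to alternate the two adaptations, as detailed below. The control flow can be illustrated with a toy numpy sketch in which a frozen linear "renderer" stands in for $G$ and a least-squares regressor stands in for $P$ (every component here is an illustrative stand-in, not our actual networks):

```python
import numpy as np

rng = np.random.default_rng(0)

def G(z, theta):
    # Toy stand-in for the 3D synthesis network: an 8-vector "image"
    # whose entries track the camera pose theta, plus latent variation.
    return theta + 0.1 * z

def fit_P(pairs):
    # Toy stand-in for adapting the pose-estimation network:
    # least-squares regression from images to poses on pseudo ground truth.
    X = np.stack([np.append(x, 1.0) for _, x in pairs])
    y = np.array([t for t, _ in pairs])
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return lambda x: float(np.append(x, 1.0) @ w)

# "Real drawings" with unknown poses (kept only to measure accuracy below).
true_poses = rng.uniform(-1.0, 1.0, 64)
drawings = [G(rng.standard_normal(8), t) for t in true_poses]

for it in range(3):
    # Step 1: synthesize pseudo pairs (theta, G(z, theta)) with the current G.
    pseudo = [(t, G(rng.standard_normal(8), t))
              for t in rng.uniform(-1.0, 1.0, 256)]
    P = fit_P(pseudo)                     # Step 2: adapt P on the pseudo data.
    est_poses = [P(x) for x in drawings]  # Step 3: pose the real drawings.
    # Step 4 (omitted in this toy): update G with adversarial and
    # geometric-prior losses using the estimated poses.

err = np.mean(np.abs(np.array(est_poses) - true_poses))
```

In the real method, Step 4 changes $G$, so the pseudo dataset of the next iteration reflects the drawing domain more closely and $P$ improves in turn.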
\begin{figure*}[!t] \centering \includegraphics[width=0.98\linewidth]{figs/alternative_training_v5.pdf} \vspace{-0.2cm} \caption{ Alternating adaptation. Our approach alternatingly adapts the deformation-aware 3D synthesis network and the pose-estimation network. $x_\mathrm{real}$: Portrait of Benjamin Moore McVickar, 1825 by Charles Cromwell Ingham, MetMuseum [Public Domain] via (https://bit.ly/3c9skiy). } \label{fig:alternate_optim} \end{figure*} To resolve this, we propose an alternating adaptation approach that alternatingly updates the 3D synthesis network $G$ and pose-estimation network $P$ (Figure~\ref{fig:alternate_optim}). Specifically, at each iteration of the alternating adaptation, we synthesize a pseudo ground-truth dataset using the current $G$, and update $P$ using the synthesized dataset. Then, using the updated $P$, we estimate the poses of the real drawings in a training dataset and update $G$ using the estimated poses. In this way, we can progressively adapt both $P$ and $G$ to a target drawing domain. However, at early iterations of the alternating adaptation, the poses of training images are not accurately estimated by $P$ due to the large domain gap, which may eventually lead to the failure of adaptation. To overcome this, we introduce training losses with geometric priors, which will be described in Section \ref{sec:geo_prior}, to guide the adaptation process. In the following, we describe each step of our alternating adaptation in more detail. \paragraph{Adapting 3D Synthesis Network} Given an input drawing $x_\mathrm{real}$ as a training sample, we estimate its camera pose $\theta$ using a \emph{fixed} pose estimation network. With the estimated pose $\theta$, our 3D synthesis network $G$ generates an image $x_\mathrm{fake}$ and its corresponding depth map $d_\mathrm{fake}$. To adapt $G$, we employ the adversarial loss $\mathcal{L}_{a}$ of the original EG3D~\cite{eg3d}, which is based on a conditional discriminator $D$. 
Specifically, $D$ takes either a synthetic or real image, $x_\mathrm{fake}$ or $x_\mathrm{real}$, with its corresponding camera pose $\theta$ and evaluates how realistic it is. We update both $G$ and $D$ in an adversarial-learning manner by back-propagating the loss. However, using the adversarial loss alone is not enough, as there is no guarantee that the camera pose $\theta$ is accurate, especially at early iterations of the alternating adaptation. Inaccurate pose estimation typically leads to learning flattened geometries for drawings, as shown in Figure~\ref{fig:ablation_img}. To address this issue, we introduce an additional loss $\mathcal{L}_g$ based on geometric priors, described in Section \ref{sec:geo_prior}. The 3D synthesis network $G$ is then updated by minimizing a loss defined as: \begin{equation} \mathcal{L} = \mathcal{L}_a (x_\textrm{fake}, x_\textrm{real}, \theta) + \mathcal{L}_g(x_\textrm{fake},d_\textrm{fake},\theta) . \end{equation} \paragraph{Adapting Pose-estimation Network.} We adapt the pose-estimation network $P$ while fixing the 3D synthesis network $G$. To adapt $P$, we first generate a pseudo training dataset $\Omega$ that consists of multiple pairs of randomly sampled camera poses $\theta$ and their corresponding images $x_\textrm{fake}^\theta$. We synthesize $x_\textrm{fake}^\theta$ as $x_\textrm{fake}^\theta = G(z,\theta)$, where $z$ is a randomly sampled GAN latent code. On the pseudo dataset, we finetune our pose-estimation network $P$ by minimizing the pose-estimation loss $\mathcal{L}_p$ defined as: \begin{equation} \label{eq_posenet} \begin{aligned} &\mathcal{L}_{p} = \frac{1}{|\Omega|} \sum_{\{\theta,x_\mathrm{fake}^\theta\} \in \Omega} \left\lVert\theta-P(x_\mathrm{fake}^\theta)\right\rVert^2_2. 
\end{aligned} \end{equation} As our deformation-aware 3D synthesis network $G$ continuously adapts to a drawing domain thanks to the adversarial and geometric-prior-based losses, our pose-estimation network $P$ can adapt to the drawing domain in a coordinated manner through alternating adaptation. \subsection{Additional Losses with Geometric Priors} \label{sec:geo_prior} In order to guide the alternating adaptation process to a proper solution, the loss $\mathcal{L}_g$ is defined as a combination of three losses: \begin{equation} \label{eq_prior} \begin{aligned} \mathcal{L}_{g} = \alpha \mathcal{L}_{d} + \beta \mathcal{L}_{n} + \gamma \mathcal{L}_{p}, \end{aligned} \end{equation} where $\alpha$, $\beta$ and $\gamma$ are balancing weights. $\mathcal{L}_{d}$ is a depth similarity loss, $\mathcal{L}_{n}$ is a normal smoothness loss, and $\mathcal{L}_{p}$ is the pose loss defined in Equation~\eqref{eq_posenet}. The pose loss $\mathcal{L}_{p}$ guides the 3D synthesis network to synthesize an image that matches the input camera pose $\theta$. $\mathcal{L}_d$ and $\mathcal{L}_n$ correspond to geometric priors that guide $G$ to synthesize a valid 3D geometry and an image correctly reflecting the input camera pose $\theta$. In the following, we describe the geometric priors $\mathcal{L}_d$ and $\mathcal{L}_n$ in detail, and discuss how the loss terms guide the alternating adaptation process to a proper solution. \paragraph{Depth Similarity Loss} Even though portrait drawings have intrinsic geometric ambiguity, there are still similarities between drawings and real photographs because the subject category is still the human face. This leads to our first observation: the geometry of a subject depicted in a drawing is similar to the geometry in a photograph \emph{at a high level}. 
We implement such a prior by penalizing the \emph{low-frequency} difference between the depth of a synthesized drawing $d_\mathrm{fake}$ and that of a synthesized photo $d_\mathrm{fake, photo}$: \begin{equation} \label{eq_depth} \begin{aligned} \mathcal{L}_{d} = \left\lVert k*d_\mathrm{fake}-k*d_\mathrm{fake, photo}\right\rVert^2_2, \end{aligned} \end{equation} where $k$ is a $15\times15$-sized Gaussian low-pass filter of standard deviation 5. We use a synthesis network $G_\mathrm{photo}$ trained on real FFHQ photos~\cite{stylegan} to generate its depth $d_\mathrm{fake, photo}=G_\mathrm{photo}(z,\theta)$. Note that the latent code $z$ and pose $\theta$ are the ones used for the drawing sample: $d_\mathrm{fake}=G(z,\theta)$. \begin{figure*}[!t] \centering \includegraphics[width=0.95\linewidth]{figs/qual_v1.pdf} \vspace{-0.2cm} \caption{3D-aware drawing synthesis results of our 3D synthesis network adapted to different datasets by Dr.3D{} (from left to right: historical art, ukiyo-e, caricature, and anime). } \label{fig:curated_examples} \end{figure*} \paragraph{Normal Smoothness Loss} We further penalize abrupt changes of a synthesized geometry, which is implemented as a loss function: \begin{equation} \label{eq_smoothloss} \mathcal{L}_{n} = \| \nabla n_\mathrm{fake} \|_2^2, \end{equation} where $\nabla$ is the spatial gradient operator and $n_\mathrm{fake}$ is a surface normal map computed from a synthesized depth map $d_\mathrm{fake}$. \paragraph{Effect on Alternating Adaptation} The additional losses are crucial in guiding the alternating adaptation toward a proper solution. At early iterations of the alternating adaptation process, the 3D synthesis network $G$ produces images that are close to real portrait images. As the pose estimation network $P$ can accurately estimate the camera poses of such synthesized images at early iterations, the pose loss $\mathcal{L}_p$ can force $G$ to produce images of the correct camera poses.
On the other hand, the depth similarity loss $\mathcal{L}_d$ encourages $G$ to synthesize 3D geometries that are close to their corresponding source-domain geometries. As the source-domain geometries have valid geometric structures and correctly reflect the camera poses, $\mathcal{L}_d$ guides $G$ to synthesize valid unflattened geometries that correctly reflect the camera poses during the entire adaptation process. Finally, $\mathcal{L}_n$ helps avoid degenerate 3D structures with high-frequency artifacts. Thanks to the pose loss and geometric priors, the 3D synthesis network can be adequately adapted without drifting to an improper solution, which also helps the adaptation of the pose-estimation network. \subsection{Training Details} We pretrain the 3D synthesis network $G$ and the pose-estimation network $P$ on the real portrait images of the FFHQ dataset~\cite{stylegan}. We apply horizontal flips for data augmentation. We use the Adam optimizer~\cite{adam} with learning rates of 0.0001 and 0.00125 for optimizing $P$ and $G$, respectively. The learning rate for the discriminator $D$ is 0.00075. The 3D synthesis and pose-estimation networks $G$ and $P$ are alternately trained within a mini-batch of 32 images. We freeze the first 10 layers of the discriminator $D$ for stable domain adaptation~\cite{freeze-d}. We set the weights $\alpha$, $\beta$ and $\gamma$ differently for each target drawing domain, as provided in the Supplemental Document. \section{Assessment} \label{sec:results} We conduct extensive validation of our method on four datasets of different drawing styles: historical art~\cite{stylegan-ada}, ukiyo-e~\cite{ukiyoe}, anime~\cite{anime}, and caricature~\cite{webcaricature}. For the anime dataset, we crop and align face regions using an off-the-shelf face detection method~\cite{face_detection}. We apply Dr.3D{} to each dataset and obtain an adapted 3D GAN model separately.
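For concreteness, the loss terms of Section~\ref{sec:geo_prior} can be sketched as follows. This NumPy rendition is our own illustration, not the authors' implementation; array shapes, function names, and the edge-padding choice in the low-pass filter are all assumptions, with the $15\times15$, $\sigma=5$ Gaussian kernel taken from Equation~\eqref{eq_depth}:

```python
import numpy as np

def gaussian_kernel_1d(size=15, sigma=5.0):
    # 1D Gaussian; the 15x15 kernel k of Eq. (eq_depth) is separable into two of these
    ax = np.arange(size) - size // 2
    g = np.exp(-ax**2 / (2.0 * sigma**2))
    return g / g.sum()

def lowpass(depth, size=15, sigma=5.0):
    # separable "same"-size convolution; edge padding is our own choice
    g = gaussian_kernel_1d(size, sigma)
    pad = size // 2
    d = np.pad(depth, pad, mode="edge")
    d = np.apply_along_axis(lambda r: np.convolve(r, g, mode="valid"), 1, d)
    d = np.apply_along_axis(lambda c: np.convolve(c, g, mode="valid"), 0, d)
    return d

def depth_similarity_loss(d_fake, d_fake_photo):
    # L_d: squared L2 norm of the low-frequency depth difference
    diff = lowpass(d_fake) - lowpass(d_fake_photo)
    return float(np.sum(diff**2))

def normal_smoothness_loss(n_fake):
    # L_n: squared L2 norm of the spatial gradients of a normal map of shape (H, W, 3)
    gy, gx = np.gradient(n_fake, axis=(0, 1))
    return float(np.sum(gy**2 + gx**2))

def pose_loss(poses, predicted_poses):
    # L_p: mean squared pose-estimation error over the pseudo dataset
    return float(np.mean(np.sum((poses - predicted_poses) ** 2, axis=1)))
```

In training these quantities would of course be computed on differentiable tensors; the sketch only pins down what each term penalizes.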
Figure~\ref{fig:curated_examples} shows curated examples of 3D-aware drawing synthesis for the different drawing styles, demonstrating our 3D-aware synthesis capability across diverse styles. Refer to the Supplemental Document for uncurated results. \subsection{Comparison} We compare Dr.3D{} to recent GAN-based 3D synthesis methods: StyleNeRF~\cite{stylenerf}, $\pi$-GAN~\cite{pigan} and EG3D~\cite{eg3d}. In the case of EG3D, we directly adapt EG3D from real photos to artistic drawings using camera poses estimated by an off-the-shelf pose-estimation network~\cite{DECA}. For the results of parametric fitting~\cite{DECA} and physics-based decomposition methods~\cite{unsup3d,gan2shape}, refer to the Supplemental Document. Figure~\ref{fig:qualitative_comparison} shows a qualitative comparison between previous 3D GANs and ours. Na\"{i}ve domain adaptation of the previous 3D GANs fails to handle diverse drawing shapes and appearances, resulting in low-fidelity images and flattened geometries. Dr.3D{} reconstructs plausible shapes and images of drawings, outperforming the previous methods. We further conduct quantitative analysis on the fidelity of synthesized images and shapes. The qualities of synthesized images are evaluated using FID~\cite{fid} and KID~\cite{kid}. We use $256\times256$-sized images for all the methods except $\pi$-GAN, for which we use $128\times128$-sized images due to its large memory requirement. Table~\ref{table:quantitative_img} shows the evaluation results, where Dr.3D{} achieves the best image-synthesis fidelity except for caricatures, thanks to our effective adaptation scheme. Quantitative evaluation of synthesized shapes mandates the ground-truth shapes of drawings, which are challenging to obtain in most cases. For the historical-art dataset, as done in EG3D \cite{eg3d}, we obtain the \emph{pseudo} ground-truth shapes and poses of randomly generated drawings using a parametric fitting method \cite{DECA}.
We measure the depth and pose errors by computing the MSE between the generated depths and poses and their pseudo ground truths. For the evaluation of caricatures, we utilize the 3DCaricShop dataset, which provides paired images and 3D shapes created by artists. We reconstruct caricature images using GAN-inversion and measure the depth and pose errors against the ground-truth geometries. Table~\ref{table:quantitative_shape} shows that Dr.3D{} generally performs better than the other methods in terms of shapes and poses. While StyleNeRF achieves better depth accuracy than ours for the historical-art dataset, it shows the worst pose accuracy. Also, while the table shows that EG3D achieves comparable results to ours, it tends to produce noisy and flattened shapes as shown in Figure~\ref{fig:qualitative_comparison}. \begin{table}[!t] \caption{Quantitative comparison on the image quality among $\pi$-GAN~\cite{pigan}, StyleNeRF~\cite{stylenerf}, EG3D~\cite{eg3d} and ours.} \vspace{-3mm} \begin{tabular}{@{}c|c|cccc@{}} \hline & & Hist.~art & Ukiyo-e & Anime & Caricature \\ \hline $\pi$-GAN & FID $\downarrow$ & 46.40 & 65.91 & 48.78 & 73.25 \\ & KID $\times 10^3 \downarrow$ & 26.14 & 53.79 & 28.29 & 52.15 \\ \hline StyleNeRF & FID $\downarrow$ & 34.99 & 58.52 & 27.94 & 22.53 \\ & KID $\times 10^3\downarrow$ & 14.51 & 58.72 & 12.41 & 11.72 \\ \hline EG3D & FID $\downarrow$ & 26.95 & 40.16 & 20.75 & \textbf{15.71} \\ & KID $\times 10^3\downarrow$ & 9.295 & 32.94 & 8.699 & \textbf{7.123} \\ \hline Dr.3D{} & FID $\downarrow$ & \textbf{23.42} & \textbf{37.38} & \textbf{18.74} & 19.69 \\ (Ours) & KID $\times 10^3\downarrow$ & \textbf{5.916} & \textbf{29.65} & \textbf{6.335} & 9.180 \\ \hline \end{tabular} \label{table:quantitative_img} \end{table} \begin{table}[!t] \caption{ Quantitative comparison on the shape and pose quality among $\pi$-GAN~\cite{pigan}, StyleNeRF~\cite{stylenerf}, EG3D~\cite{eg3d} and Dr.3D{}.
} \begin{tabular}{@{}c|cc|cc@{}} \hline & \multicolumn{2}{c|}{Hist.~art} & \multicolumn{2}{c}{Caricature} \\ & Depth & Pose & Depth & Pose \\ \hline $\pi$-GAN & 0.305 & 0.072 & 0.151 & 0.077 \\ StyleNeRF & \textbf{0.169}& 0.333 & 0.688 & 0.326 \\ EG3D & 0.215 & 0.054 & 0.033 & \textbf{0.047} \\ Dr.3D{} (Ours) & 0.217 & \textbf{0.030}& \textbf{0.020} & 0.070 \\ \hline \end{tabular} \label{table:quantitative_shape} \end{table} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figs/ablation_v4.pdf} \vspace{-0.5cm} \caption{Ablation study. The baseline model (EG3D~\cite{eg3d}) synthesizes a distorted image and a flattened geometry as shown in (a). While our alternating adaptation helps avoid flattened shapes as shown in (b), our deformation-aware 3D synthesis network and geometric priors further improve the synthesis quality. } \label{fig:ablation_img} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{figs/anime_z_deform.pdf} \vspace{-0.2cm} \caption{ Image synthesis results from different deformation codes $z_d$. For all the results, the same latent code $z$ is used. } \label{fig:deformation_img} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figs/novel_view_v3.pdf} \vspace{-0.2cm} \caption{Novel view synthesis of a real-world drawing. Input: Girl with a Pearl Earring, 1665 by Johannes Vermeer, WikiArt [Public Domain] via (https://bit.ly/3PE66CT). } \label{fig:synthesis_img} \end{figure} \subsection{Ablation Study} Dr.3D{} effectively deals with the intrinsic ambiguity of drawing images by means of (1) a deformation-aware 3D synthesis network, (2) alternating adaptation of pose estimation and image synthesis, and (3) geometric priors. We assess the impact of each component by starting with our baseline network, EG3D~\cite{eg3d}. Figure~\ref{fig:ablation_img} shows an ablation result. Using the original EG3D model on drawings results in a flattened shape (Figure~\ref{fig:ablation_img}(a)).
For training the EG3D model, we used the camera pose estimated from an off-the-shelf pose-estimation network~\cite{DECA}. Our alternating adaptation of the pose-estimation network and the deformation-aware 3D synthesis network enables us to recover a better 3D geometry (Figure~\ref{fig:ablation_img}(b)). Adding our deformation-aware 3D synthesis network further improves the shape-reconstruction fidelity and the quality of synthesized images as it helps capture shape and style variations in drawings (Figure~\ref{fig:ablation_img}(c)). Our full method, Dr.3D{}, with the geometric priors, results in the best synthesis quality for both image and shape (Figure~\ref{fig:ablation_img}(d)). As discussed in Section~\ref{sec:deform_net}, drawings have a larger distribution of potentially-feasible 3D shapes than photos. Our deformation network helps model such a larger distribution of drawings by expanding the representation space with an additional latent code $z_d$, which leads to higher-quality adaptation results as shown in Figure~\ref{fig:ablation_img}(c). Figure~\ref{fig:deformation_img} shows another example of the deformation network. In the figure, while the same latent code $z$ synthesizes all the images, they exhibit different details due to different deformation codes $z_d$, proving the larger representation space expanded by the deformation network. More analysis on the deformation-aware network is provided in the Supplemental Document. \begin{figure}[t] \centering \includegraphics[width=0.95\linewidth]{figs/edit_v2.pdf} \vspace{-0.2cm} \caption{ Semantic editing of input drawings. Top: male to female. Bottom: hairstyle change. Input on the top row: Gulian Verplanck, 1771 by John Singleton Copley, WikiArt [Public Domain] via (https://bit.ly/3PE6vVV). 
} \label{fig:editing} \end{figure} \subsection{3D-aware Semantic Editing of Drawings} Combining GAN inversion with Dr.3D{} enables multi-view consistent editing of real-world drawings such as novel-view synthesis and semantic editing. Figure~\ref{fig:synthesis_img} shows examples of novel-view synthesis of real-world drawings. In these examples, we estimate the camera poses of the input images using our domain-adapted pose-estimation network and invert the images to GAN latent codes using the pivotal tuning inversion method~\cite{pti}. Then, we synthesize novel views of the input images by feeding their latent codes and new camera poses to the 3D synthesis network. Dr.3D{} also enables multi-view consistent semantic editing on real-world drawings. Figure~\ref{fig:editing} shows examples of semantic editing. In these examples, we use editing vectors found by applying InterfaceGAN~\cite{interfacegan} using the original EG3D network trained on the FFHQ dataset. \section{Conclusion} This paper presented Dr.3D{}, a novel 3D GAN adaptation method from real portraits to artistic drawings. To handle the intrinsic geometric ambiguity of drawings, we proposed alternating adaptation of the pose estimation and image synthesis, a deformation-aware network, and geometric priors. We experimentally validated that our approach can successfully adapt 3D GANs to drawings for the first time. Dr.3D{} allows editing an artistic drawing in consideration of its 3D geometric structure and the semantics of its content. \paragraph{Limitations} While Dr.3D{} can produce superior results to previous methods, it may still produce flattened geometries for some latent codes for challenging domains such as anime. Refer to the Supplemental Document for a failure example. Also, our method is limited in dealing with the background region, in which 3D-consistent shared geometric features do not exist in training images.
We note that this limitation also applies to existing 3D-GAN methods including EG3D~\cite{eg3d}. One potential way to resolve this would be to divide the feature-generation procedure into two: one for the foreground and the other for the background~\cite{stylenerf}. Extending Dr.3D{} to diverse target domains including non-human faces would also be an interesting future direction. \begin{acks} This research was supported by \grantsponsor{IITP}{IITP}{} grants funded by the Korea government (MSIT) (\grantnum[]{IITP}{2021-0-02068}, \grantnum[]{IITP}{2019-0-01906}), an \grantsponsor{NRF}{NRF}{} grant funded by the Korea government (MOE) (\grantnum[]{NRF}{2022R1A6A1A03052954}), and \grantsponsor{Pebblous}{Pebblous}{}. \end{acks}
\section{Group data} \begin{table}[htb!] \centering \begin{tabular}{||c|c|c|c|c|c|c|c|c|c|c|c|c|c|c||} \hline $p$ & 2 & 3 & 5 & 7 & 11 & 13 & 17 & 19 & 23 & 29 & 31 & 41 & 59 & 71 \\ [0.5ex] \hline\hline $\overline{\mu}_{0}^{+}$ & $\tfrac{3}{2}$ & 2 & 3 & 4 & 6 & 7 & 9 & 10 & 12 & 15 & 16 & 21 & 30 & 36\\ [1ex] \hline \end{tabular} \caption{Index data for Fricke groups of prime divisor levels of $\mathbb{M}$ with $\overline{\mu}_{0}^{+} = \left[\text{PSL}(2,\mathbb{Z}),\overline{\Gamma}_{0}^{+}(p)\right]$} \label{tab:Fricke_index_data} \end{table} \begin{table}[htb!] \centering \begin{tabular} {||c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c||} \hline $p$ & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 17 & 19 & 23 & 29 & 31 & 41 & 59 & 71\\ [0.5ex] \hline\hline $\overline{\mu}_{0}$ & 3 & 4 & 6 & 6 & 12 & 8 & 12 & 12 & 18 & 12 & 24 & 14 & 18 & 20 & 24 & 30 & 32 & 42 & 60 & 72\\ [1ex] \hline \end{tabular} \caption{Index data for Hecke congruence groups of divisor levels of $\mathbb{M}$ with $\overline{\mu}_{0} = \left[\text{PSL}(2,\mathbb{Z}),\overline{\Gamma}_{0}(p)\right]$} \label{tab:Hecke_index_data} \end{table} \section{Modular re-parameterization for \texorpdfstring{$\Gamma_{0}^{+}(5)$}{Γ0(5)+}}\label{appendix:B} \noindent From \ref{derivative_1}, we find \begin{align} E_{4}^{(5^{+})}(\tau) = \left(1 - \frac{36}{13}\frac{1}{j_{5^{+}}(\tau)}\right)E_{2,5^{'}}^{2}(\tau). \end{align} We now make the following definitions \begin{align}\label{A5+_def} A_{5^{+}}(\tau) \equiv \left(\frac{E_{6}^{(5^{+})}}{E_{2,5^{'}}^{2}}\right)(\tau),\ \ \ P_{5^{+}}(\tau)\equiv \frac{36}{13}\frac{1}{j_{5^{+}}(\tau)}, \end{align} where we see that the definition of $A_{5^{+}}$ naturally follows from observing the form of the derivative of the Hauptmodul. The derivative of $P_{5^{+}}$ is found to be \begin{align} q\frac{d}{dq}P_{5^{+}}(\tau) = \left(P_{5^{+}}A_{5^{+}}\right)(\tau).
\end{align} With this we find the following expressions for $\theta_{q}$ and $\mathcal{D}^{2}$, \begin{align} \begin{split} \theta_{q} =& A_{5^{+}}\theta_{P_{5^{+}}},\\ \mathcal{D}^{2} =& \left(\theta_{q} - \frac{1}{2}E_{2}^{(5^{+})}\right)\theta_{q}\\ =& A_{5^{+}}^{2}\theta_{P_{5^{+}}}^{2} + \left[E_{2,5^{'}}^{2}\left(\frac{1250}{81}P_{5^{+}}^{2} - \frac{116}{9}(1 - P_{5^{+}})P_{5^{+}} - \frac{3}{2}(1 - P_{5^{+}})^{2}\right) + \frac{A_{5^{+}}^{2}}{1 - P_{5^{+}}}\right]\theta_{P_{5^{+}}}. \end{split} \end{align} The modular forms $E_{6}^{(5^{+})}(\tau)$ and $E_{2,5^{'}}^{2}(\tau)$ can be expressed in terms of $K_{5^{+}}\equiv \tfrac{22 + 10\sqrt{5}}{j_{5^{+}}}$, but the relationship is more complicated than what we observed previously at levels $2$ and $3$. These modular forms are related to the inverse Hauptmodul $K_{5^{+}}$ via Heun functions as shown below \begin{align} E_{6}^{(5^{+})}(\tau) = \sqrt{1 - \frac{143}{9}P_{5^{+}} - \frac{169}{81}P^{2}_{5^{+}}}\ H\ell_{5}^{6}(\tau),\ \ \ E_{2,5^{'}}^{2}(\tau) = H\ell_{5}^{4}(\tau). \end{align} Substituting this in the definition \ref{A5+_def}, we set \begin{align} A_{5^{+}}(\tau) \equiv& \frac{1}{9}\sqrt{81 - 1287P_{5^{+}} - 169P_{5^{+}}^{2}}\ H\ell_{5}^{2}(\tau)\\ =& f(P_{5^{+}})H\ell_{5}^{2}(\tau). \end{align} With $\omega_{4}(\tau) = \mu_{1}E_{2,5^{'}}^{2}(\tau) + \mu_{2}\Delta_{5}(\tau)$, the re-parameterized MLDE reads \begin{align} \begin{split} \left[\theta_{P_{5^{+}}}^{2} + \left( \frac{4345P_{5^{+}}^{2} - 1602P_{5^{+}} - 243}{162f(P_{5^{+}})^{2}} + \frac{1}{1 - P_{5^{+}}}\right) \theta_{P_{5^{+}}}+ \frac{36\mu_{1} + 13\mu_{2}P_{5^{+}}}{36f(P_{5^{+}})^{2}}\right]f(\tau) = 0.
\end{split} \end{align} \section{Useful relations in level 7 groups}\label{appendix:modular_forms} \noindent Some useful expressions of the $q$-derivatives of forms in $\Gamma_{0}(7)$ and $\Gamma_{0}^{+}(7)$ are listed below \begin{align}\label{j'_Hecke_7} \begin{split} &q\frac{d}{dq}j_{7}(\tau) =-\left(j_{7}\theta_{7}^{2}\right)(\tau),\\ &q\frac{d}{dq}\mathbf{t}(\tau) = \left(\mathbf{t}\theta_{7}^{2}\right)(\tau),\\ &q\frac{d}{dq}\mathbf{k}(\tau)= \frac{7}{24}\mathbf{k}(\tau)\left(E_{2}(\tau) - E_{2}(7\tau)\right)\\ &{}\ \ \ \ \ \ \ \ \ \ \ = \mathbf{k}(\tau)\left[E_{2}^{(7^{+})} - \left(\frac{E_{4,7^{'}}}{E_{2,7^{'}}}\right) - E_{2,7^{'}}\frac{13\mathbf{t} + 98\mathbf{t}^{2}}{1 + 13\mathbf{t} + 49\mathbf{t}^{2}}\right](\tau),\\ &q\frac{d}{dq}\Delta_{7}(\tau) =\left(\Delta_{7}E_{2}^{(7^{+})}\right)(\tau)\\ &q\frac{d}{dq}E_{2,7^{'}}(\tau) = \frac{2}{3}\left(E_{2,7^{'}}E_{2}^{(7^{+})} - E_{4,7^{'}}\right)(\tau),\\ &q\frac{d}{dq}E_{4,7^{'}}(\tau) = \frac{19}{8}\left(\frac{7^{3}E_{2}(7\tau)E_{4}(7\tau) - E_{2}(\tau)E_{4}(\tau)}{342} - E_{6,7^{'}}(\tau)\right),\\ &q\frac{d}{dq}\Delta_{7^{+},4}(\tau) = \frac{4}{3}\Delta_{7^{+},4}(\tau)\left(E_{2}^{(7^{+})} - \frac{1}{4}\frac{E_{4,7^{'}}}{E_{2,7^{'}}}\right)(\tau),\\ &q\frac{d}{dq}\Delta_{7^{+},10}(\tau) = \Delta_{10,7^{'}}(\tau)\left[2\left(E_{2}^{(7^{+})} - \frac{E_{4,7^{'}}}{E_{2,7^{'}}}\right)(\tau) + q\frac{d}{dq}\log{E_{4,7'}}(\tau)\right]\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ = \Delta_{10,7^{'}}(\tau)\left[2\left(E_{2}^{(7^{+})} - \frac{E_{4,7^{'}}}{E_{2,7^{'}}}\right)(\tau) + \frac{57}{8E_{4,7^{'}}(\tau)}q\frac{d}{dq}\left(7^{3}E_{4}(7\tau) - E_{4}(\tau)\right)\right], \end{split} \end{align} where we used $q\tfrac{d}{dq}E_{4}(\tau) = \tfrac{1}{3}\left(E_{2}E_{4} - E_{6}\right)(\tau)$ to obtain the last line. 
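The Ramanujan identity $q\tfrac{d}{dq}E_{4} = \tfrac{1}{3}\left(E_{2}E_{4} - E_{6}\right)$ invoked for the last line can be checked directly on truncated $q$-expansions. The following standalone snippet (our own verification, with exact rational arithmetic) does so:

```python
from fractions import Fraction

def sigma(k, n):
    # divisor sum sigma_k(n)
    return sum(d ** k for d in range(1, n + 1) if n % d == 0)

N = 12  # truncation order of the q-expansions
E2 = [Fraction(1)] + [-24 * Fraction(sigma(1, n)) for n in range(1, N)]
E4 = [Fraction(1)] + [240 * Fraction(sigma(3, n)) for n in range(1, N)]
E6 = [Fraction(1)] + [-504 * Fraction(sigma(5, n)) for n in range(1, N)]

def mul(a, b):
    # product of two truncated q-series
    c = [Fraction(0)] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai * bj
    return c

theta_E4 = [n * E4[n] for n in range(N)]              # q d/dq E_4
rhs = [(x - y) / 3 for x, y in zip(mul(E2, E4), E6)]  # (E_2 E_4 - E_6)/3
assert theta_E4 == rhs
```

The two truncated series agree coefficient by coefficient, e.g. at order $q$: $240 = \tfrac{1}{3}\left((-24 + 240) + 504\right)$.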
\section{Heun function relations in \texorpdfstring{$\Gamma_{0}^{+}(7)$}{Γ0(7)+} re-parameterization}\label{appendix:level_7+_simplifications} \noindent We suspect that it should be possible to simplify the covariant derivative $\mathcal{D}^{2}$ by expressing the modular forms $E_{k,7^{'}}(\tau)$ in terms of those belonging to $\Gamma_{0}(7)$, since we know that the $\text{SL}(2,\mathbb{Z})$ Eisenstein series can be expressed in terms of $\mathbf{k}(\tau)$ and $\mathbf{t}(\tau)$ as follows \cite{Liu} \begin{align}\label{forE10} \begin{split} E_{4}(\tau)=& \mathbf{k}^{\tfrac{4}{3}}\left(1 + 245\mathbf{t} + 2401\mathbf{t}^{2}\right)\left(1 + 13\mathbf{t} + 49\mathbf{t}^{2}\right)^{\tfrac{1}{3}}(\tau),\\ E_{4}(7\tau) =& \mathbf{k}^{\tfrac{4}{3}}\left(1 + 5\mathbf{t} + \mathbf{t}^{2}\right)\left(1 + 13\mathbf{t} + 49\mathbf{t}^{2}\right)^{\tfrac{1}{3}}(\tau),\\ E_{6}(\tau) =& \mathbf{k}^{2}\left(1 - 7^{2}(5 + 2\sqrt{7})\mathbf{t} - 7^{3}(21 + 8\sqrt{7})\mathbf{t}^{2}\right)\left(1 - 7^{2}(5 - 2\sqrt{7})\mathbf{t} - 7^{3}(21 - 8\sqrt{7})\mathbf{t}^{2}\right)(\tau),\\ E_{6}(7\tau) =& \mathbf{k}^{2}\left(1 + (7 + 2\sqrt{7})\mathbf{t} + (21 + 8\sqrt{7})\mathbf{t}^{2}\right)\left(1 + (7 - 2\sqrt{7})\mathbf{t} + (21 - 8\sqrt{7})\mathbf{t}^{2}\right)(\tau).
\end{split} \end{align} From this, we see that the modular forms $E_{2,7^{'}}(\tau)$, $E_{4}^{(7^{+})}(\tau)$, $E_{4,7^{'}}(\tau)$, and the logarithmic derivative of $E_{4,7^{'}}(\tau)$ can be rewritten using \ref{forE10} as follows \begin{align}\label{k,t-relations} \begin{split} E_{2,7^{'}}(\tau) =& \mathbf{k}^{\tfrac{2}{3}}\left(1 + 13\mathbf{t} + 49\mathbf{t}^{2}\right)^{\tfrac{2}{3}}(\tau),\\ E_{4}^{(7^{+})}(\tau) =& \frac{1}{5}\mathbf{k}^{\tfrac{4}{3}}\left(5 + 49\mathbf{t} + 245\mathbf{t}^{2}\right)\left(1 + 13\mathbf{t} + 49\mathbf{t}^{2}\right)^{\tfrac{1}{3}}(\tau),\\ E_{4,7^{'}}(\tau) =& \mathbf{k}^{\tfrac{4}{3}}\left(1 - 49\mathbf{t}^{2}\right)\left(1 + 13\mathbf{t} + 49\mathbf{t}^{2}\right)^{\tfrac{1}{3}}(\tau),\\ q\frac{d}{dq}\log{E_{4,7^{'}}}(\tau) =& \frac{4}{3}\left[E_{2}^{(7^{+})} - \left(\frac{E_{4,7^{'}}}{E_{2,7^{'}}}\right) - \frac{13\mathbf{t} + 98\mathbf{t}^{2}}{1 + 13\mathbf{t} + 49\mathbf{t}^{2}}E_{2,7^{'}}\right](\tau)\\ &- \left[\frac{\mathbf{t}\left(19208\mathbf{t}^{3} + 4459\mathbf{t}^{2} + 196\mathbf{t} - 13\right)}{(1- 49\mathbf{t}^{2})(1 + 13\mathbf{t}+ 49\mathbf{t}^{2})}\frac{E_{2,7^{'}}}{3}\right](\tau). \end{split} \end{align} Using these expressions, the covariant derivative $\mathcal{D}^{2}$ now reads \begin{align} \mathcal{D}^{2} = A_{7^{+}}^{2}\theta_{K_{7^{+}}}^{2} + A_{7^{+}}\left(\frac{4}{3}E_{2}^{(7^{+})} - \frac{13 + 196\mathbf{t} + 637\mathbf{t}^{2}}{(1-49\mathbf{t}^{2})(1 + 13\mathbf{t} + 49\mathbf{t}^{2})}E_{2,7^{'}}\right)\theta_{K_{7^{+}}}. \end{align} \noindent Now, since $\text{dim}\ \mathcal{M}_{4}(\Gamma_{0}^{+}(7)) = 2$, we make the choice $\omega_{4}(\tau) = \mu_{1}E_{2,7^{'}}^{2} + \mu_{2}\Delta_{7}E_{1,7^{'}}$.
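As a sanity check on \ref{forE10}, the $E_{4}(\tau)$ relation can be tested numerically. In the sketch below (our own, not a computation from \cite{Liu}), we use $\mathbf{t}=(\eta(7\tau)/\eta(\tau))^{4}$ and take $\mathbf{k}=\eta(\tau)^{7}/\eta(7\tau)$, the eta quotient we infer from the logarithmic derivative of $\mathbf{k}$ quoted in \ref{j'_Hecke_7}; both identifications are assumptions on our part:

```python
import cmath

def eta(tau, terms=80):
    # Dedekind eta via its q-product
    q = cmath.exp(2j * cmath.pi * tau)
    out = cmath.exp(2j * cmath.pi * tau / 24)
    for n in range(1, terms):
        out *= 1 - q ** n
    return out

def E4(tau, terms=80):
    # Eisenstein series E_4 via its Lambert series
    q = cmath.exp(2j * cmath.pi * tau)
    return 1 + 240 * sum(n ** 3 * q ** n / (1 - q ** n) for n in range(1, terms))

tau = 0.8j                          # a convenient point on the imaginary axis
t = (eta(7 * tau) / eta(tau)) ** 4  # the Hauptmodul-type function t of Gamma_0(7)
k = eta(tau) ** 7 / eta(7 * tau)    # assumed eta-quotient for k
lhs = E4(tau)
rhs = k ** (4 / 3) * (1 + 245 * t + 2401 * t ** 2) * (1 + 13 * t + 49 * t ** 2) ** (1 / 3)
assert abs(lhs - rhs) < 1e-9
```

On the imaginary axis all quantities are real and positive, so the fractional powers are unambiguous.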
From \cite{Sakai2014TheAO}, we have the following expressions for the Eisenstein series and the cusp form in terms of Heun's function, \begin{align} \begin{split} \Delta_{7}(\tau) =& \frac{1}{j_{7^{+}}}H\ell_{7}^{3}(\tau),\\ E_{2,7^{'}}^{2}(\tau) =& H\ell_{7}^{4}(\tau),\\ E_{6}^{(7^{+})}(\tau) =& \sqrt{\left(1 + \frac{1}{j_{7^{+}}}\right)\left(1 - \frac{27}{j_{7^{+}}}\right)}\ H\ell_{7}^{6}(\tau),\\ H\ell_{7}(\tau) \equiv& H\ell\left(-27, -2;\frac{1}{3},\frac{2}{3},1,\frac{1}{2};K_{7^{+}}(\tau)\right). \end{split} \end{align} We now want to find similar Heun function relations for the other modular forms. Let us begin with the cusp form $\Delta_{7^{+},4}(\tau)$. From its definition in \ref{level_7+}, we have \begin{align} \begin{split} \Delta_{7^{+},4}(\tau) =& \left(\sqrt{E_{2,7^{'}}}\Delta_{7}\right)(\tau) = \frac{1}{j_{7^{+}}}H\ell_{7}^{5}(\tau). \end{split} \end{align} Next, consider the Eisenstein series $E_{4}^{(7^{+})}(\tau)$, which is found to possess the following Heun function relation \begin{align} E_{4}^{(7^{+})}(\tau) = \left(E_{2,7^{'}}^{2} - \frac{16}{5}\Delta_{7^{+},4}\right)(\tau) = \left(H\ell_{7}^{4} - \frac{16}{5j_{7^{+}}}H\ell_{7}^{5}\right)(\tau). \end{align} Consider now the cusp form $\Delta_{7^{+},10}(\tau)$. From its definition in \ref{level_7+}, we see that the Eisenstein series $E_{10}^{(7^{+})}(\tau)$ is the only modular form for which we do not yet have a Heun function relation. The basis decomposition of the space of modular forms of weight $10$ reads \begin{align} \mathcal{M}_{10}(\Gamma_{0}^{+}(7)) = \mathbb{C}E_{4}^{(7^{+})}E_{6}^{(7^{+})}\oplus\mathbb{C}E_{4}^{(7^{+})}\Delta_{7^{+},6}\oplus\mathbb{C}\Delta_{7^{+},10}.
\end{align} Considering an ansatz with three unique coefficients and comparing its $q$-series expansion with that of $E_{10}^{(7^{+})}(\tau)$, we can find an explicit expression for the latter to be \begin{align} \begin{split} E_{10}^{(7^{+})}(\tau) =& \left(E_{4}^{(7^{+})}E_{6}^{(7^{+})}- \frac{137592}{41065}\frac{\Delta_{7^{+},4}}{E_{2,7^{'}}}E_{4}^{(7^{+})}E_{4,7^{'}} - \frac{63504}{4775}\frac{\Delta_{7^{+},4}^{2}}{E_{2,7^{'}}}E_{4,7^{'}}\right)(\tau)\\ =& \left(E_{4}^{(7^{+})}E_{6}^{(7^{+})}- \frac{137592}{41065}\frac{\Delta_{7^{+},4}}{E_{2,7^{'}}}\left(E_{4}^{(7^{+})}\right)^{2}\left(\frac{5(1-49\mathbf{t}^{2})}{(5 + 49\mathbf{t} + 245\mathbf{t}^{2})}\right)\right.\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \left.- \frac{63504}{4775}\frac{\Delta_{7^{+},4}^{2}}{E_{2,7^{'}}}E_{4}^{(7^{+})}\left(\frac{5(1-49\mathbf{t}^{2})}{(5 + 49\mathbf{t} + 245\mathbf{t}^{2})}\right)\right)(\tau), \end{split} \end{align} where we have used expressions in \ref{k,t-relations} to express $E_{4,7^{'}}(\tau)$ in terms of $E_{4}^{(7^{+})}(\tau)$. Using this in the definition of $\Delta_{7^{+},10}(\tau)$, we find \begin{align} \Delta_{7^{+},10}(\tau) =& \left[\frac{559}{690}\left(\frac{5(j_{7}^{2} - 49)}{245 + 49j_{7} + 5j_{7}^{2}} - \sqrt{\left(1 + \frac{1}{j_{7^{+}}}\right)\left(1 - \frac{27}{j_{7^{+}}}\right)}\right)\frac{1}{j_{7^{+}}}H\ell_{7}^{11}\right.\\ &\ \ \ \ \ \ \ \ \ \left.- \frac{5(j_{7}^{2} - 49)}{245 + 49j_{7} + 5j_{7}^{2}}\left(\frac{688}{345j_{7^{+}}^{3}}H\ell_{7}^{13} + \frac{3397}{1725j_{7^{+}}^{2}}H\ell_{7}^{12}\right)\right](\tau), \end{align} where we have expressed all $\mathbf{t}(\tau)$ in terms of $j_{7}(\tau)$, the Hauptmodul of $\Gamma_{0}(7)$. Using these relations, it should be possible to express the re-parameterized MLDE in terms of the Hauptmodul $j_{7^{+}}(\tau)$.
From this, we find the following useful relations \begin{align} \begin{split} \left(\frac{\Delta_{7}^{4}}{E_{2,7^{'}}^{6}}\right)(\tau) =& \mathfrak{a} = \frac{1}{j_{7^{+}}^{4}(\tau)},\\ \left(\frac{\left(E_{6}^{(7^{+})}\right)^{2}}{\Delta_{7}^{2}E_{2,7^{'}}^{3}}\right)(\tau) =& \mathfrak{b} = \left(j_{7^{+}}^{2} - 26j_{7^{+}} - 27\right)(\tau),\\ \left(\frac{\left(E_{6}^{(7^{+})}\right)^{2}}{E^{6}_{2,7^{'}}}\right)(\tau) =& \mathfrak{c} = \left(1 - \frac{27}{j_{7^{+}}^{2}} - \frac{26}{j_{7^{+}}}\right)(\tau). \end{split} \end{align} \section{\texorpdfstring{$(2,0)\ \Gamma_{0}^{+}(2)$}{(2,0)} MLDE specifics}\label{appendix:detailed_derivation} \noindent Substituting mode expansions in the MLDE, we obtain \begin{align}\label{MLDE_mode_Gamma_0_2+} q^{\alpha}\sum\limits_{n=0}^{\infty}f_{n}q^{n}\left[(n+\alpha)^{2} - (n + \alpha)\sum\limits_{m=0}^{\infty}\frac{1}{4}E^{(2^{+})}_{2,m}q^{m} + \mu\sum\limits_{m=0}^{\infty} E_{4,m}^{(2^{+})}q^{m}\right] = 0. \end{align} When $n=0,m=0$, with $E^{(2^{+})}_{2,0} = E^{(2^{+})}_{4,0} = 1$, we obtain the following indicial equation \begin{align} \alpha^{2} -\frac{1}{4} \alpha + \mu = 0. \end{align} The parameter $\mu$ can now be fixed using this equation. Let us denote the roots of this equation, in increasing order, as $\alpha_{0}$ and $\alpha_{1}$. Solving this indicial equation gives us \begin{align}\label{roots_n=2_l=0_Gamma_0_2+} \begin{split} \alpha_{0} =& \frac{1}{8}\left(1 - \sqrt{1 - 64\mu}\right) \equiv \frac{1}{8}(1 - x),\\ \alpha_{1} =& \frac{1}{8}\left(1 + \sqrt{1 - 64\mu}\right) \equiv \frac{1}{8}(1 + x), \end{split} \end{align} where we have set $x = \sqrt{1 - 64\mu}$. The smaller solution $\alpha_{0} = \tfrac{1}{8}(1-x)$ corresponds to the identity character, which behaves as $f(\tau) \sim q^{\tfrac{1-x}{8}}\left(1 + \mathcal{O}(q)\right)$. We know that the identity character $\chi_{0}$, associated with a primary of weight $h_{0} = 0$, behaves as $\chi_{0}\sim q^{-\tfrac{c}{24}}\left(1 + \mathcal{O}(q)\right)$.
Comparing the two behaviours, we obtain the following expression for the central charge \begin{align}\label{central_charge_x} c = 3(x-1). \end{align} To find the conformal dimension $h$, we compare the behaviours with the larger solution for $\alpha$, i.e. $f(\tau)\sim q^{\tfrac{1 + x}{8}}\left(1 +\mathcal{O}(q)\right)$ and $\chi\sim q^{-\tfrac{c}{24} + h}\left(1 + \mathcal{O}(q)\right)$. This gives us \begin{align} h = \frac{x}{4}. \end{align} Next, to obtain a recurrence relation among the coefficients $f_{n}$, we use the Cauchy product of two infinite series, which reads \begin{align} \left(\sum\limits_{i=0}^{\infty}\alpha_{i}\right)\cdot\left(\sum\limits_{j=0}^{\infty}\beta_{j}\right) = \sum\limits_{k=0}^{\infty}\gamma_{k},\ \gamma_{k} = \sum\limits_{p=0}^{k}\alpha_{p}\beta_{k-p}. \end{align} This gives us \begin{align} \begin{split} \left(\sum\limits_{m=0}^{\infty}E^{(2^{+})}_{4,m}(\tau)q^{m}\right)\left(\sum\limits_{n=0}^{\infty}f_{n}q^{n}\right) =& \sum\limits_{k=0}^{\infty}\left(\sum\limits_{p=0}^{k}E^{(2^{+})}_{4,p}f_{k-p}\right)q^{k},\\ \left(\sum\limits_{m=0}^{\infty}E^{(2^{+})}_{2,m}q^{m}\right)\left(\sum\limits_{n=0}^{\infty}f_{n}q^{n}(n+\alpha)\right) =& \sum\limits_{k=0}^{\infty}\left(\sum\limits_{p=0}^{k}E^{(2^{+})}_{2,p}(k+\alpha - p)f_{k-p}\right)q^{k}. \end{split} \end{align} Substituting this back into the MLDE, we get \begin{align}\label{recursion_l=0_Gamma_0_2+} f_{n} = \left((n + \alpha)^{2} - \frac{1}{4}(n + \alpha) + \mu\right)^{-1}\sum\limits_{m=1}^{n}\left(\frac{(n + \alpha - m)}{4}E^{(2^{+})}_{2,m} - \mu E^{(2^{+})}_{4,m}\right)f_{n-m}.
\end{align} When $n = 1$, we solve for the ratio $\tfrac{f_{1}}{f_{0}}$ in terms of $\alpha$ with coefficients $E^{(2^{+})}_{2,1} = -8$ and $E_{4,1}^{(2^{+})} = 48$ to obtain \begin{align} m^{(i)}_{1} = \frac{f^{(i)}_{1}}{f^{(i)}_{0}} = \frac{8\alpha_{i}(-7 + 24\alpha_{i})}{3 + 8\alpha_{i}}, \end{align} for $ i = 0,1$, corresponding to the ratios $m^{(0)}_{1}$ and $m^{(1)}_{1}$ evaluated at $\alpha_{0}$ and $\alpha_{1}$ respectively. Restricting to $i=0$ and using $\alpha_{0} = -\tfrac{c}{24}$, we find \begin{align} m_{1}^{(0)} = \frac{c(7+c)}{9-c}. \end{align} Dropping the script associated with the index $i$, we see that for $m_{1}\geq 0$, we require $c<9$. Hence, for two-character theories with $\ell = 0$, we have the bound $c< 9$. Rewriting this relation, we have \begin{align} c^{2} + c(m_{1} + 7) = 9m_{1}. \end{align} Solving this quadratic equation, we see that for $c$ to be rational, the square root of its discriminant, \begin{align} \sqrt{49 + 50m_{1} + m_{1}^{2}}, \end{align} has to be rational. Now, since we require $m_{1}\in\mathbb{Z}$, we must demand that this square root is an integer, and this gives us \begin{align}\label{recast_prior} \left(m_{1} + 25\right)^{2} - 576 = k^{2}, \end{align} where $k\in\mathbb{Z}$; since the quadratic is monic with integer coefficients, $c$ is then an integer. We define $p$ such that we shift $k$ by an integer amount that absorbs the first two terms on the left-hand side \begin{align} p = 25 + m_{1} - k, \end{align} and we recast \ref{recast_prior} as follows \begin{align} m_{1} + 25 = \frac{576 + p^{2}}{2p} = \frac{288}{p} + \frac{p}{2}. \end{align} This tells us that $p$ must be even and that it must divide $288$. Restricting $k$ to be positive, we see that we have $k\geq m_{1}$, which implies that $p\leq 25$. We conclude that all possible values of $m_{1}$ are found by those values of $p$ below $26$ that divide $288$ and are even. The list of these values is $p = \{2,4,6,8,12,16,18,24\}$.
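This divisor analysis is easy to verify mechanically. The following snippet (our own check, using exact arithmetic) enumerates the even divisors $p<26$ of $288$ and recovers $(m_{1}, c, h, \mu)$ from $m_{1}+25 = \tfrac{288}{p}+\tfrac{p}{2}$, $c^{2}+c(m_{1}+7)=9m_{1}$, $c = 3(x-1)$, $h = \tfrac{x}{4}$, and $\mu = \tfrac{1-x^{2}}{64}$:

```python
from fractions import Fraction
from math import isqrt

table = {}
for p in range(2, 26, 2):
    if 288 % p:
        continue
    m1 = 288 // p + p // 2 - 25          # from m1 + 25 = 288/p + p/2
    disc = (m1 + 7) ** 2 + 36 * m1       # discriminant of c^2 + (m1 + 7)c - 9 m1 = 0
    k = isqrt(disc)
    assert k * k == disc                  # the square root is indeed an integer
    c = (-(m1 + 7) + k) // 2              # the root with c > -7
    x = Fraction(c, 3) + 1                # invert c = 3(x - 1)
    table[p] = (m1, c, x / 4, (1 - x ** 2) / 64)   # (m1, c, h, mu)
```

For instance, `table[12]` gives $(m_{1}, c, h, \mu) = (5, 3, \tfrac{1}{2}, -\tfrac{3}{64})$, and `table[24]` gives the non-unitary solution $(m_{1}, c) = (-1, -3)$ discarded below.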
We note that at $p = 24$, we obtain $(m_{1}, c, h) = (-1, -3,0)$ which we ignore since we only want unitary theories. Table \ref{tab:theory_Fricke_2+} contains CFT data. \begin{table}[htb!] \centering \begin{tabular}{||c|c|c|c|c||} \hline $p$ & $\mu$ & $m_{1}$ & $c$ & $h$\\ [0.5ex] \hline\hline 2 & $-\tfrac{7}{36}$ & 120 & 8 & $\tfrac{11}{12}$\\[0.5ex] 4 & $-\tfrac{91}{576}$ & 49 & 7 & $\tfrac{5}{6}$\\[0.5ex] 6 & $-\tfrac{1}{8}$ & 26 & 6 & $\tfrac{3}{4}$\\[0.5ex] 8 &$-\tfrac{55}{576}$ & 15 & 5 & $\tfrac{2}{3}$\\[0.5ex] 12 & $-\tfrac{3}{64}$ & 5 & 3 & $\tfrac{1}{2}$\\[0.5ex] 16 & $-\tfrac{7}{576}$ & 1 & 1 & $\tfrac{1}{3}$\\[0.5ex] 18 & $0$ & 0 & 0 & $\tfrac{1}{4}$\\[1ex] \hline \end{tabular} \caption{$c$ and $h$ data corresponding to the Fricke group $\Gamma_{0}^{+}(2)$ for the choice $\phi_{0} = \mu E^{(2^{+})}_{4}(\tau)$ with $\ell = 0$.} \label{tab:theory_Fricke_2+} \end{table} \noindent The ratio $m_{1}$ being non-negative is not sufficient proof to convince ourselves that this data might indeed be related to a CFT. Hence, we compute $m_{2}$ using recursion relation \ref{recursion_l=0_Gamma_0_2+}. If this turns out to be negative or fractional, then we rule out the theory as a viable candidate. When $n = 2$, we get \begin{align} m_{2}^{(i)} \equiv \frac{f_{2}^{(i)}}{f_{0}^{(i)}} = \frac{4\alpha_{i}(312\alpha_{i} -83)}{8\alpha_{i} + 7} + \frac{4m_{1}^{(i)}(-1 + \alpha_{i}(-7 + 24\alpha_{i}))}{8\alpha_{i} + 7}. \end{align} The values of $m_{2}$ for $i = 0$ are tabulated in table \ref{tab:theory_Fricke_2+_m2}. \begin{table}[htb!] 
\centering \begin{tabular}{||c|c|c||} \hline $p$ & $c$ & $m_{2}$\\ [0.5ex] \hline\hline 2 & 8 & $\tfrac{6508}{13}$\\[0.5ex] 4 & 7 & 173\\[0.5ex] 6 & 6 & 79\\[0.5ex] 8 & 5 & 40\\[0.5ex] 12 & 3 & 11\\[0.5ex] 16 & 1 & 2\\[0.5ex] 18 & 0 & 0\\[1ex] \hline \end{tabular} \caption{Values of $m_{2}$ corresponding to the Fricke group $\Gamma_{0}^{+}(2)$ for the choice $\phi_{0} = \mu E^{(2^{+})}_{4}(\tau)$ with $\ell = 0$.} \label{tab:theory_Fricke_2+_m2} \end{table} \noindent We discard the case $p = 2$ since the coefficient turns out to be fractional. Furthermore, we also discard the case $p = 18$ since further computation yields all coefficients $m_{i}$ to be null valued, which makes this a trivial solution. We are now restricted to the cases $p = \{4,6,8,12,16\}$. Next, we compute $m_{3}$ using recursion relation \ref{recursion_l=0_Gamma_0_2+} and this turns out to be \begin{align} m_{3}^{(i)} \equiv \frac{f_{3}^{(i)}}{f_{0}^{(i)}} = \frac{32\alpha_{i}(168\alpha_{i} -43)}{24\alpha_{i} + 33} + \frac{8m_{1}^{(i)}(-5 + \alpha_{i}(-83 + 312\alpha_{i}))}{24\alpha_{i} + 33} + \frac{8m_{2}^{(i)}(-2 + \alpha_{i}(-7 + 24\alpha_{i}))}{24\alpha_{i} + 33}. \end{align} \noindent The integer values of $m_{3}$ for $i = 0$ are shown in table \ref{tab:theory_Fricke_2+_m3}. \begin{table}[htb!] \centering \begin{tabular}{||c|c|c||} \hline $p$ & $c$ & $m_{3}$\\ [0.5ex] \hline\hline 6 & 6 & 326\\[0.5ex] 8 & 5 & 135\\[0.5ex] 12 & 3 & 20\\ [0.5ex] 16 & 1 & 1\\[0.5ex] \hline \end{tabular} \caption{Values of $m_{3}$ corresponding to the Fricke group $\Gamma_{0}^{+}(2)$ for the choice $\phi_{0} = \mu E^{(2^{+})}_{4}(\tau)$ with $\ell = 0$.} \label{tab:theory_Fricke_2+_m3} \end{table} \noindent In a similar fashion, computing coefficients for $i = 0,1$, we obtain the coefficients of the two characters.
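The closed-form expressions for $m_{2}^{(i)}$ and $m_{3}^{(i)}$ are straightforward to evaluate in exact rational arithmetic; the sketch below (our own helper functions, with $\alpha_{0} = -c/24$) reproduces the tabulated values, including the fractional $m_{2} = \tfrac{6508}{13}$ at $p = 2$ that disqualifies that case.

```python
from fractions import Fraction as F

def m2(c, m1):
    # closed form for f2/f0 quoted above, evaluated at alpha = -c/24
    a = F(-c, 24)
    return (4 * a * (312 * a - 83) + 4 * m1 * (-1 + a * (-7 + 24 * a))) / (8 * a + 7)

def m3(c, m1, m2v):
    # closed form for f3/f0 quoted above
    a = F(-c, 24)
    num = (32 * a * (168 * a - 43)
           + 8 * m1 * (-5 + a * (-83 + 312 * a))
           + 8 * m2v * (-2 + a * (-7 + 24 * a)))
    return num / (24 * a + 33)

# (c, m1) rows of the m2 table; the p = 2 entry is fractional
assert m2(8, 120) == F(6508, 13)
assert [m2(c, m1) for c, m1 in [(7, 49), (6, 26), (5, 15), (3, 5), (1, 1)]] == [173, 79, 40, 11, 2]
# (c, m1, m2) rows of the m3 table
assert [m3(*t) for t in [(6, 26, 79), (5, 15, 40), (3, 5, 11), (1, 1, 2)]] == [326, 135, 20, 1]
```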
\section{Modular forms for levels 5 \& 7}\label{appendix: Mod_5_and_7} $\mathbf{\Gamma_{0}^{+}(5)}:$\\ \noindent The Fricke group of level $5$ is generated by $\Gamma_{0}^{+}(5) = \langle\left(\begin{smallmatrix}1 & 1\\ 0 & 1\end{smallmatrix}\right), W_{5}, \left(\begin{smallmatrix} & 1\\ 5 & 2\end{smallmatrix}\right)\rangle$. The elliptic points in the fundamental domain are at $\tfrac{i}{\sqrt{5}}$ (also known as the Fricke involution point), $\rho_{5,1} = -\tfrac{1}{2}+\tfrac{i}{2\sqrt{5}}$, and $\rho_{5,2} = \tfrac{-2 + i}{5}$, and this group has a cusp at $\tau = i\infty$. A non-zero $f\in\mathcal{M}_{k}^{!}(\Gamma_{0}^{+}(5))$ satisfies the following valence formula \begin{align}\label{valence_Fricke_5} \nu_{\infty}(f) + \frac{1}{2}\nu_{\tfrac{i}{\sqrt{5}}}(f) + \frac{1}{2}\nu_{\rho_{5,1}}(f) + \frac{1}{2}\nu_{\rho_{5,2}}(f) + \sum\limits_{\substack{p\in\Gamma_{0}^{+}(5)\backslash\mathbb{H}^{2}\\ p\neq \tfrac{i}{\sqrt{5}},\rho_{5,1}, \rho_{5,2}}}\nu_{p}(f) = \frac{k}{4}. \end{align} The Hauptmodul and modular forms of $\Gamma_{0}^{+}(5)$ read \begin{align} \begin{split} j_{5^{+}}(\tau) =& \left(\frac{\left(E_{2,5^{'}}\right)^{2}}{\Delta_{5}}\right)(\tau) = \left(\frac{E_{4}^{(5^{+})}}{\Delta_{5}}\right)(\tau) - \frac{36}{13} = \left(\frac{\eta(\tau)}{\eta(5\tau)}\right)^{6} + 5^{3}\left(\frac{\eta(5\tau)}{\eta(\tau)}\right)^{6} + 22,\\ \Delta_{5}(\tau) =& \left(\eta(\tau)\eta(5\tau)\right)^{4}\in\mathcal{S}_{4}(\Gamma_{0}^{+}(5)),\\ E_{2,5^{'}}(\tau) =& \frac{5E_{2}(5\tau) - E_{2}(\tau)}{4}\in\mathcal{M}_{2}(\Gamma_{0}(5)),\\ E_{k}^{(5^{+})}(\tau) =& \frac{5^{\tfrac{k}{2}}E_{k}(5\tau) + E_{k}(\tau)}{5^{\tfrac{k}{2}} + 1}\in\mathcal{M}_{k}(\Gamma_{0}^{+}(5)),\ \text{for}\ k\geq 4\ \&\ k\in2\mathbb{Z}.
\end{split} \end{align} $\mathbf{\Gamma_{0}^{+}(7)}:$\\ \noindent The Fricke group of level $7$ is generated by $\Gamma_{0}^{+}(7) = \langle\left(\begin{smallmatrix}1 & 1\\ 0 & 1\end{smallmatrix}\right), W_{7}, \left(\begin{smallmatrix}3 & 1\\ 7 & 2\end{smallmatrix}\right)\rangle$. The elliptic points in the fundamental domain are at $\tfrac{i}{\sqrt{7}}$ (also known as the Fricke involution point), $\rho_{7,1} = -\tfrac{1}{2}+\tfrac{i\sqrt{7}}{10}$, and $\rho_{7,2} = \tfrac{-5 + i\sqrt{3}}{14}$, and this group has a cusp at $\tau = i\infty$. A non-zero $f\in\mathcal{M}_{k}^{!}(\Gamma_{0}^{+}(7))$ satisfies the following valence formula \begin{equation}\label{valence_formula_Fricke} \nu_{\infty}(f) + \frac{1}{2}\nu_{\tfrac{i}{\sqrt{7}}}(f) + \frac{1}{2}\nu_{\rho_{7,1}}(f) + \frac{1}{3}\nu_{\rho_{7,2}}(f) + \sum\limits_{\substack{p\in\ X_{0}^{+}(7)\\ p\neq \tfrac{i}{\sqrt{7}},\rho_{7,1},\rho_{7,2}}}\nu_{p}(f) = \frac{k}{3}. \end{equation} The Hauptmodul and some semi-modular and cusp forms in the Hecke group $\Gamma_{0}(7)$ are defined as follows \cite{Umasankar:2022kzs} \begin{align}\label{Hecke_def} \begin{split} j_{7}(\tau) =& \left(\frac{\eta(\tau)}{\eta(7\tau)}\right)^{4},\\ \theta_{7}(\tau) =& \sqrt{E_{2,7^{'}}(\tau)} = \left(\frac{7E_{2}(7\tau) - E_{2}(\tau)}{6}\right)^{\frac{1}{2}},\\ \mathbf{k}(\tau) =& \frac{\eta^{7}(\tau)}{\eta(7\tau)},\ \ \ \mathbf{t}(\tau) = \frac{1}{j_{7}(\tau)},\\ \Delta_{7}(\tau) =& \left(\mathbf{k}\mathbf{t}\right)(\tau)\in\mathcal{S}_{3}(\Gamma_{0}(7)). \end{split} \end{align} We now express the Hauptmodul and some semi-modular and cusp forms in $\Gamma_{0}^{+}(7)$ in terms of the definitions in \ref{Hecke_def} \begin{align}\label{level_7+} \begin{split} j_{7^{+}}(\tau) =& \left(\frac{E_{1,7^{'}}^{3}}{\Delta_{7}}\right)(\tau) = \left(\frac{E_{4}^{(7^{+})}}{\Delta_{7^{+},4}}\right)(\tau) - \frac{16}{5} =\left(j_{7} + \frac{49}{j_{7}} + 13\right)(\tau),\\ E_{1,7^{'}}(\tau) =& \sqrt{E_{2,7^{'}}(\tau)} = \theta_{7}(\tau),\\ \Delta_{7^{+},4}(\tau) =&
\left(\theta_{7}\mathbf{t}\mathbf{k}\right)(\tau) = \left(\sqrt{E_{2,7^{'}}}\Delta_{7}\right)(\tau)\in\mathcal{S}_{4}(\Gamma_{0}^{+}(7)),\\ \Delta_{7^{+},10}(\tau) =& \frac{559}{690}\left(\frac{41065}{137592}\left(E^{(7^{+})}_{4}(\tau)E^{(7^{+})}_{6}(\tau) - E^{(7^{+})}_{10}(\tau)\right) - E^{(7^{+})}_{6}(\tau)\Delta_{7^{+},4}(\tau)\right)\\ =& \left(\Delta_{7^{+},4}^{2}\frac{E_{4,7^{'}}}{E_{2,7^{'}}}\right)(\tau)\in\mathcal{S}_{10}(\Gamma_{0}^{+}(7)),\\ \Delta_{7^{+},6}(\tau) =& \left(\frac{\Delta_{7^{+},10}}{\Delta_{7^{+},4}}\right)(\tau) = \left(\Delta_{7^{+},4}\frac{E_{4,7^{'}}}{E_{2,7^{'}}}\right)(\tau)\in\mathcal{S}_{6}(\Gamma_{0}^{+}(7)), \end{split} \end{align} where \begin{align} \begin{split} E_{k,7^{'}}(\tau) \equiv& \frac{7^{\tfrac{k}{2}}E_{k}(7\tau) - E_{k}(\tau)}{7^{\tfrac{k}{2}} - 1},\ \text{for}\ k\in2\mathbb{Z},\\ E_{k}^{(7^{+})}(\tau) \equiv& \frac{7^{\tfrac{k}{2}}E_{k}(7\tau) + E_{k}(\tau)}{7^{\tfrac{k}{2}} + 1},\ \text{for}\ k\geq 2\ \&\ k\in2\mathbb{Z}. \end{split} \end{align} \section{Discussion}\label{sec:Discussion} \subsection{Fricke Monsters lurk in the Borcherds product} \noindent From MLDE analysis for groups $\Gamma_{0}^{+}(2)$ and $\Gamma_{0}^{+}(3)$, we found that the identity characters with unstable or trivial descendants possess coefficients equal to a McKay-Thompson series of a certain conjugacy class of $\mathbb{M}$. There exists a Borcherds product interpretation for this observation, as pointed out in \cite{Duncan:2022afh} (see table $10$). We summarize this observation here. Consider the following infinite product expansion of the $\text{SL}(2,\mathbb{Z})$ Hauptmodul, \begin{align}\label{j-product} j(\tau) - j(z) = q_{\tau}^{-1}\prod\limits_{\substack{m>0\\ n\in\mathbb{Z}}}\left(1 - q_{\tau}^{m}q_{z}^{n}\right)^{c(mn)}, \end{align} where $q_{a} = e^{2\pi i a}$, and $c(r)$ is the $r^{\text{th}}$ Fourier coefficient of $J(\tau) = j(\tau) - 744$.
When $m=1$, $n=-1$, the corresponding factor combines with the prefactor to give $q_{\tau}^{-1}\left(1 - q_{\tau}q_{z}^{-1}\right) = q_{\tau}^{-1} - q_{z}^{-1}$, which indeed matches the antisymmetric leading behaviour of the LHS, although it may not look it at first sight. This $j$-product formula, also called the Koike-Norton-Zagier infinite product identity, was independently discovered by Borcherds, Koike, Norton, and Zagier. It is a generalization of the Weyl denominator formula for finite-dimensional Lie algebras, which reads \begin{align}\label{Weyl_denominator_formula} \sum\limits_{w\in \mathcal{W}}\varepsilon(w)w(P) = q_{\rho}^{-1}\prod\limits_{\alpha>0}\left(1 - q_{\alpha}\right)^{M(\alpha)}, \end{align} where $\mathcal{W}$ denotes the Weyl group, $\varepsilon(w) = (-1)^{\ell(w)}$ is the sign function with $\ell(w)$ denoting the length of the Weyl group element, $w(P) = q_{\rho}^{-1}\sum_{k}\varepsilon(k)q_{k}$ where the index $k$ runs over all the sums of pairwise orthogonal simple roots, $\rho$ is the Weyl vector of a group $G$, $\alpha$ are the roots of the Lie algebra, and $M(\alpha)$, which is always unity, is the multiplicity. Comparing \ref{Weyl_denominator_formula} and \ref{j-product}, we see that the infinite product over all positive vectors corresponds to the product over all positive roots, the $q_{\tau}^{-1}$ is matched by the $q_{\rho}^{-1}$, and expanding the difference of the Hauptmoduln, we obtain an alternating sum that is matched by the alternating sum in the Weyl formula. We direct the reader to section $4$ of \cite{Carnahan2012GeneralizedMI} for a more detailed explanation of the same.
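The product formula \ref{j-product} can be verified order by order. After dividing out the $q_{\tau}^{-1}$ prefactor and the single factor with $n = -1$, the remaining product over $m, n\geq 1$ must equal $\left(J(q_{\tau}) - J(q_{z})\right)/\left(q_{\tau}^{-1} - q_{z}^{-1}\right)$, which is a genuine power series in $q_{\tau}$ and $q_{z}$. The following sketch (ours; the truncation scheme is a choice) checks this in the box of exponents at most $2$, using the standard coefficients $c(1), \ldots, c(4)$ of $J$.

```python
from math import comb

M = 2                     # keep terms q_t^a q_z^b with 0 <= a, b <= M
# Fourier coefficients c(n) of J = j - 744
c = {1: 196884, 2: 21493760, 3: 864299970, 4: 20245856256}

def mul(p, q):
    # truncated product of two bivariate polynomials stored as {(a, b): coeff}
    r = {}
    for (a1, b1), v1 in p.items():
        for (a2, b2), v2 in q.items():
            a, b = a1 + a2, b1 + b2
            if a <= M and b <= M:
                r[(a, b)] = r.get((a, b), 0) + v1 * v2
    return {k: v for k, v in r.items() if v}

# LHS: prod_{m,n >= 1} (1 - q_t^m q_z^n)^{c(mn)}
lhs = {(0, 0): 1}
for m in range(1, M + 1):
    for n in range(1, M + 1):
        e = c.get(m * n, 0)
        factor = {(j * m, j * n): (-1) ** j * comb(e, j)
                  for j in range(min(M // m, M // n) + 1) if comb(e, j)}
        lhs = mul(lhs, factor)

# RHS: (J(q_t) - J(q_z)) / (q_t^{-1} - q_z^{-1})
#    = 1 - sum_k c(k) sum_{i=0}^{k-1} q_t^{i+1} q_z^{k-i}
rhs = {(0, 0): 1}
for k, ck in c.items():
    for i in range(k):
        a, b = i + 1, k - i
        if a <= M and b <= M:
            rhs[(a, b)] = rhs.get((a, b), 0) - ck

assert lhs == rhs
```

In particular, the coefficient of $q_{\tau}^{2}q_{z}^{2}$ encodes the classical relation $\binom{c(1)}{2} - c(4) = -c(3)$ among the coefficients of $J$.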
The generalized Borcherds product is obtained by taking a product sum of the $j$-product over binary quadratic forms $Q = [a,b,c] = ax^{2} + bxy + cy^{2}$, where $a,b,c\in\mathbb{Z}$, belonging to the orbit space $\mathcal{Q}_{D}^{(N)}/\Gamma_{0}^{+}(N)$ (we note that a similar construction is also applicable to Hecke groups), where $D = b^{2} - 4ac$ is the discriminant of $Q$ and $\mathcal{Q}_{D}$ denotes the space of binary quadratic forms with $D<0$ and $a>0$. For a fixed choice of the discriminant $D_{0}$, this takes the following form \begin{align}\label{Borcherds_result} \prod\limits_{Q\in\mathcal{Q}_{D_{0}}^{(N)}/\Gamma_{0}^{+}(N)}\left(j_{N^{+}}(\tau) - j_{N^{+}}(\alpha_{Q})\right)^{\frac{1}{\vert s_{Q}\vert}} = \left(j_{N^{+}}(\tau) + \mathcal{N}\right)^{x}, \end{align} where the root $\alpha_{Q}$ is the complex multiplication point of $Q$, $s_{Q}$ is the stabilizer in $\Gamma_{0}^{+}(N)/\mathbb{Z}_{2}$, and $\mathcal{N}\in\mathbb{Z}$, $x\in\mathbb{Q}$. The value of $j_{N^{+}}(\alpha_{Q})$ depends solely on the $\Gamma_{0}^{+}(N)$-equivalence class of $Q$. For the case of $N = 2$, we notice that the set $(\mathcal{N},x) = \left\{\left(0,\tfrac{1}{4}\right), \left(0,\tfrac{1}{2}\right), (104,1)\right\}$ corresponds to the solutions with central charges $c = 6$ (\ref{8C_character_1}, \ref{8C_character_2}, \ref{8C_character_3}), $c = 12$ (\ref{4B_character_1}, \ref{4B_character_2}), and to the single-character solution with central charge $c = 24$ (\ref{j_104_c=24}) respectively. Next, when $N = 3$, we find that the set $(\mathcal{N},x) = \left\{\left(0,\tfrac{1}{6}\right), \left(0,\tfrac{1}{3}\right), \left(0,\tfrac{1}{2}\right)\right\}$ corresponds to solutions with central charges $c = 4$ (\ref{18b_character_1}, \ref{18b_character_2}), $c = 8$ (\ref{9a_character_1}), and $c = 12$ (\ref{6b_character_1}).\\ \noindent From \cite{Duncan:2022afh}, we can also predict the admissible solutions with unstable descendants for higher-level Fricke groups.
This, along with the results thus far for Fricke levels $p = 2,3$, is summarized in table \ref{tab: Summary}. Of the $14$ prime divisor levels, only $10$ turn out to possess admissible identity solutions with an unstable descendant. At this point, we do not have enough information to state if these solutions correspond to new characters not found by the single-character analysis; for this, we would need to resort to a complete analysis. \begin{table}[htb!] \begin{tabular}{||c|c|c|c||} \hline Level $p$ & Conjugacy class & $c$ & Hauptmodul construction\\[0.5ex] \hline\hline \multirow{2}{*}{2} & 8C & 6 & $j_{2^{+}}^{\tfrac{1}{4}}$\\[0.5ex] & 4B & 12 & $j_{2^{+}}^{\tfrac{1}{2}}$\\[1ex] \hline \multirow{3}{*}{3} & 18b & 4 & $j_{3^{+}}^{\tfrac{1}{6}}$\\[0.5ex] & 9a & 8 & $j_{3^{+}}^{\tfrac{1}{3}}$\\[0.5ex] & 6b & 12 & $j_{3^{+}}^{\tfrac{1}{2}}$\\[1ex] \hline 5 & 10a & 12 & $j_{5^{+}}^{\tfrac{1}{2}}$\\[1ex] \hline 7 & 21C & 8 & $\left(j_{7^{+}} + 13\right)^{\tfrac{1}{3}}$ \\[1ex] \hline 13 & 39B & 8 & $\left(j_{13^{+}} + 5\right)^{\tfrac{1}{3}}$ \\[1ex] \hline \end{tabular} \caption{This table lists all the admissible solutions to $n = 2$ MLDEs, for Fricke groups of prime divisor levels of $\mathbb{M}$, that possess an unstable descendant. The Hauptmoduln are constructed for higher-level groups by using \ref{Borcherds_result} as a guide and adjusting the exponent to obtain the correct sequence. The central charges are found by reading off the value in the exponent of the identity characters (Hauptmodul constructions here), i.e. $j_{p^{+}}^{x} = q^{-\tfrac{c}{24}}(1 + \ldots)$.} \label{tab: Summary} \end{table} \subsection{General formula for identity character with $c$-chronology} \noindent For the case of $\Gamma_{0}^{+}(2)$, we found in \ref{Hecke_1_c=1} and \ref{Hecke_1_c=2} that the identity characters with ascending central charges in two- and three-character theories can be expressed in terms of modular forms of the Hecke group $\Gamma_{0}(2)$.
We find the following general formula for all the identities one might expect to find with ascending central charges in higher-character theories \begin{align}\label{c-chronology_Fricke_2} \begin{split} \chi^{(2^{+})}_{0}(\tau) =& \left(\mathfrak{p}_{2^{+}}\Delta_{2}^{-\tfrac{1}{24}}\right)^{c},\\ \mathfrak{p}_{2^{+}}(\tau) \equiv& \left(\frac{\left(j_{2}\Delta_{2}^{\infty}\right)(3\tau)}{\left(j_{2}\Delta_{2}^{\infty}\right)^{\tfrac{1}{3}}(\tau)}\right)^{\tfrac{1}{8}}. \end{split} \end{align} We can write down a similar expression for $c$-chronology in $\Gamma_{0}^{+}(3)$ as follows \begin{align}\label{c-chronology_Fricke_3} \begin{split} \chi^{(3^{+})}_{0}(\tau) =& \left(\mathfrak{p}_{3^{+}}\Delta_{3}^{-\tfrac{1}{6}}\right)^{c},\\ \mathfrak{p}_{3^{+}}(\tau) \equiv& \left(\frac{\Delta_{3}^{0}(2\tau)}{\left(\Delta_{3}^{0}\right)^{\tfrac{1}{2}}(\tau)}\right)^{\tfrac{1}{3}}. \end{split} \end{align} Analysis of higher-character theories of $\Gamma_{0}^{+}(2)$ and $\Gamma_{0}^{+}(3)$ would reveal more such solutions. It would be interesting to study further why only the identity character corresponding to a theory with a low central charge possesses such a nice closed-form expression. On the other hand, MLDE analysis of $\Gamma_{0}(2)$ and $\Gamma_{0}(3)$ could reveal some of these solutions since the Fricke groups are supergroups of the Hecke groups.
\subsection{From Fricke solutions to Schellekens' list} \noindent Schellekens classified all $c = 24$ single-character theories and found that there is a total of $71$ CFTs, all of which possess a partition function of the form \cite{Schellekens:1992db} \begin{align} \begin{split} &Z(\tau) = J(\tau) + \mathcal{N},\ \ \ \ \ \mathcal{N} =12m,\\ &m\in\left\{0, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 13, 14, 16, 18, 20, 22, 24, 25, 26, 28, 30, 32, 34, 38, 46, 52, 62, 94\right\}. \end{split} \end{align} For $\Gamma_{0}^{+}(2)$, the partition function corresponding to the general solution to $c = 24$ single-character theories we have established in \ref{j_104_c=24} is $Z_{2^{+}}(\tau) = j_{2^{+}}(\tau) + \mathcal{N}$, where $\mathcal{N}\geq -104$. The only solutions we have come across are \ref{bilin_non_0}, \ref{bilin_non_1}, \ref{bilin_non_2}, \ref{the_one_with_N_as15987}, and the $\ell = 4$ solution with $\mathcal{N} = 0$ found in \cite{Umasankar:2022kzs}. This gives us the list $\mathcal{N}\in\{-80, -64, -32, 0, 15987\}$. The relationship between Klein's function, i.e.\ the Hauptmodul of the modular group, and the Hauptmodul of the Fricke group of level $2$ reads \begin{align}\label{j_to_Fricke_2} j(\tau) = \frac{1}{2}\left[\left(j_{2^{+}} - 81\right)\sqrt{j_{2^{+}}(256 - j_{2^{+}})} + \left(j^{2}_{2^{+}} - 207j_{2^{+}} + 3456\right)\right](\tau). \end{align} This helps us map $Z_{2^{+}}(\tau)$ to $Z(\tau)$ for $\mathcal{N}_{2^{+}} = 0$ but does not act as a general map. It would be interesting to complete the list and find a general map between the single-character partition functions.
Additionally, since we have the following relations \begin{align}\label{j-j_Hecke_Fricke} \begin{split} j_{2^{+}}(\tau) =& \left(j_{2} + 4096j_{2}^{-1} + 128\right)(\tau),\\ j(\tau) =& \left(\frac{\left(256 + j_{2}\right)^{3}}{j_{2}^{2}}\right)(\tau), \end{split} \end{align} we can utilize these relationships to better understand the maps among the partition functions following an exhaustive analysis of the Hecke group of level $2$. \subsection{Hecke images} \noindent We obtain the following Hecke relations between some of our single-character solutions using the Hecke operators constructed in \cite{Harvey:2018rdc}. For single-character theories, the conductor $N$ is the smallest positive integer such that $\left(e^{-\frac{i\pi c}{12}}\right)^N = 1$. Now, if \begin{align} \chi = \sum\limits_{n\in\mathbb{Z}} a(n) q^{\frac{n}{N}}, \label{nonH} \end{align} then, acting on the above with a Hecke operator $T_p$, where $\gcd(p,N)=1$, we get \begin{align} T_p(\chi) = \sum\limits_{n\in\mathbb{Z}} a^{(p)}(n) q^{\frac{n}{N}}, \label{afH} \end{align} where \begin{align} a^{(p)}(n) = \begin{cases} &pa(pn) \, \, \, \, \, \text{if} \, \, p \, \hspace{-4pt}\not|\hspace{2pt} \, n \\ &pa(pn) + a(\frac{n}{p}) \, \, \, \, \, \text{if} \, \, p \, | \, n \end{cases} \end{align} We report a few Hecke images of single-character solutions we found in Fricke groups of levels $2$ and $3$ here.\\ $\mathbf{\Gamma^{+}_0(2)}$:\\ \noindent The conductor is $N=4$ for $j_{2^{+}}^{\frac{1}{4}}$. \begin{align} &T_3\left(j_{2^{+}}^{\frac{1}{4}}\right) = j_{2^{+}}^{\frac{3}{4}}, \label{hF21} \\ &T_5\left(j_{2^{+}}^{\frac{1}{4}}\right) = j_{2^{+}}^{\frac{1}{4}}\left(j_{2^{+}}-130\right), \label{hF22} \\ &T_7\left(j_{2^{+}}^{\frac{1}{4}}\right) = j_{2^{+}}^{\frac{3}{4}}\left(j_{2^{+}}-182\right). \label{hF23} \end{align} $\mathbf{\Gamma^{+}_0(3)}$:\\ \noindent The conductor is $N=3$ for $j_{3+}^{\frac{1}{3}}$.
\begin{align} &T_2\left(j_{3+}^{\frac{1}{3}}\right) = j_{3+}^{\frac{2}{3}}, \label{hF31} \\ &T_5\left(j_{3+}^{\frac{1}{3}}\right) = j_{3+}^{\frac{2}{3}}\left(j_{3+}-70\right), \label{hF32} \\ &T_7\left(j_{3+}^{\frac{1}{3}}\right) = j_{3+}^{\frac{1}{3}}\left(j_{3+}^2 -98j_{3+}+917\right). \label{hF33} \end{align} \section{\texorpdfstring{$\mathbf{\Gamma_{0}^{+}(2)}$}{Γ0(2)+}}\label{sec:Gamma_0_2+} \input{Fricke_2/Single_character_solutions.tex} \input{Fricke_2/Two_character_Gamma_0_2+.tex} \input{Fricke_2/Three_character_Fricke2} \subsection{Two-character solutions} \noindent From the dimension of the space of modular forms listed in \ref{dimension_Fricke_2}, we obtain the following expression for the number of free parameters \begin{align}\label{number_of_parameters_Gamma_0_2+} \#(\mu) = \begin{cases} \left\lfloor\left.\frac{\ell}{4}\right\rfloor\right. + \left\lfloor\left.\frac{\ell + 1}{4}\right\rfloor\right. + \left\lfloor\left.\frac{\ell + 2}{4} \right\rfloor\right. + 2,\ &2\ell\not\equiv 2\ (\text{mod}\ 8),\ 2\ell>2,\\ \left\lfloor\left.\frac{\ell}{4}\right\rfloor\right. + \left\lfloor\left.\frac{\ell + 1}{4}\right\rfloor\right. + \left\lfloor\left.\frac{\ell + 2}{4}\right\rfloor\right. -1,\ &2\ell\equiv 2\ (\text{mod}\ 8),\ 2\ell>2. \end{cases} \end{align} \subsubsection{\texorpdfstring{$\ell = 0$}{l = 0}} \noindent The $n = 2$ MLDE reads \begin{align}\label{MLDE_n=2_Fricke_2} \left[\mathcal{D}^{2} + \phi_{1}(\tau) \mathcal{D} + \phi_{0}(\tau)\right]f(\tau) = 0, \end{align} where $\phi_{0}(\tau)$ and $\phi_{1}(\tau)$ are modular forms of weights $4$ and $2$ respectively. The space $\mathcal{M}_{2}(\Gamma_{0}^{+}(2))$ is zero-dimensional and there are no modular forms of weight $2$ in $\Gamma_{0}^{+}(2)$. Hence, we set $\phi_{1}(\tau) = 0$. The space $\mathcal{M}_{4}(\Gamma_{0}^{+}(2))$ is one-dimensional and hence, we set $\phi_{0}(\tau) = \mu_{1} E_{4}^{(2^{+})}(\tau)$. 
For these choices, the second-order MLDE takes the following form \begin{align}\label{MLDE_n=2_Gamma_0_2+} \left[\mathcal{D}^{2} + \mu_{1} E_{4}^{(2^{+})}(\tau)\right]f(\tau) = 0, \end{align} where $\mu_{1}$ is an independent parameter. Now, since the covariant derivative transforms a weight $r$ modular form into one of weight $r+2$, the double covariant derivative of a weight $0$ form is \begin{align} \begin{split} \mathcal{D}^{2} = \mathcal{D}_{(2)}\mathcal{D}_{(0)} =& \left(\frac{1}{2\pi i}\frac{d}{d\tau} - \frac{1}{4}E^{(2^{+})}_{2}(\tau)\right)\frac{1}{2\pi i}\frac{d}{d\tau}\\ =& \Tilde{\partial}^{2} - \frac{1}{4}E^{(2^{+})}_{2}(\tau)\Tilde{\partial}, \end{split} \end{align} where $\Tilde{\partial}=\frac{1}{2\pi i}\frac{\partial}{\partial\tau}$. \\ \\ The MLDE \ref{MLDE_n=2_Gamma_0_2+} now reads \begin{align}\label{MLDE_n=2_Gamma_0_2+_expanded} \left[\Tilde{\partial}^{2} - \frac{1}{4}E^{(2^{+})}_{2}(\tau)\Tilde{\partial} + \mu_{1} E_{4}^{(2^{+})}(\tau)\right]f(\tau) = 0. \end{align} This equation can be solved by making the following mode expansion substitution for the character $f(\tau)$ and the other modular forms, \begin{align}\label{series_defs_Gamma_0_2+} \begin{split} f(\tau) =& q^{\alpha}\sum\limits_{n= 0}^{\infty}f_{n}q^{n},\\ E_{2}^{(2^{+})}(\tau) =& \sum\limits_{n=0}^{\infty}E_{2,n}^{(2^{+})}q^{n}\\ =& 1 - 8 q - 40 q^2 - 32 q^3 - 104 q^4 + \ldots,\\ E_{4}^{(2^{+})}(\tau) =& \sum\limits_{n=0}^{\infty}E_{4,n}^{(2^{+})}q^{n}\\ =& 1 + 48 q + 624 q^2 + 1344 q^3 + 5232 q^4 + \ldots \end{split} \end{align} Substituting these Fourier expansions, we get the following recursion relation \begin{align} &\left[(\alpha+i)^2 - \frac{1}{4}(\alpha+i) + \mu_1\right]a_i = \sum\limits_{k=1}^i\left[\frac{1}{4}(\alpha+i-k)E^{(2^{+})}_{2,k}-\mu_1E_{4,k}^{(2^{+})}\right]a_{i-k}, \label{recursion0} \end{align} where the $a_i$ are the Fourier coefficients of the character's $q$-series expansion.
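As a sanity check on \ref{recursion0}, one can iterate it in exact rational arithmetic using the $q$-expansions of $E_{2}^{(2^{+})}$ and $E_{4}^{(2^{+})}$ quoted above; the sketch below (our own helper, not the code used for the scans) reproduces the leading coefficients of the $c = 1$ and $c = 3$ identity characters found below.

```python
from fractions import Fraction as F

# q-expansions of E_2^{(2+)} and E_4^{(2+)} to the order quoted above
E2 = [1, -8, -40, -32, -104]
E4 = [1, 48, 624, 1344, 5232]

def identity_character(c, order=4):
    """Iterate the i >= 1 recursion for the exponent alpha_0 = -c/24."""
    alpha = F(-c, 24)
    mu1 = (alpha - 4 * alpha ** 2) / 4          # from the indicial equation
    a = [F(1)]
    for i in range(1, order + 1):
        lhs = (alpha + i) ** 2 - (alpha + i) / 4 + mu1
        rhs = sum(((alpha + i - k) / 4 * E2[k] - mu1 * E4[k]) * a[i - k]
                  for k in range(1, i + 1))
        a.append(rhs / lhs)
    return a

assert identity_character(1) == [1, 1, 2, 1, 3]   # chi_0 of the (c, h) = (1, 1/3) solution
assert identity_character(3, 2) == [1, 5, 11]     # chi_0 of the (c, h) = (3, 1/2) solution
```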
The indicial equation is obtained by setting $i=0$ and $k=0$ in \ref{recursion0}, \begin{align} \alpha^2 - \frac{1}{4}\alpha + \mu_1 = 0. \label{indicial0} \end{align} Setting $\alpha=\alpha_0$ in \ref{indicial0} we get \begin{align} \mu_1 = \frac{1}{4}\left(\alpha_0-4\alpha_0^2\right) = -\frac{c(c+6)}{576}, \label{param0} \end{align} where we have used $\alpha_0 = -\tfrac{c}{24}$. Now setting $i=1$ in \ref{recursion0} we get the following quadratic equation in $\alpha_0$, \begin{align} 9a_1 + 168\alpha_0 + 24a_1\alpha_0 - 576\alpha_0^2 = 0, \label{m1_eqn_0} \end{align} which can be recast as below if we identify $N=-24\alpha_0=c$, \begin{align} -9a_1 + 7N + a_1N + N^2 = 0. \label{N_eqn_0} \end{align} Then $a_1$ can be expressed in terms of $N$ as \begin{align} a_1 = \frac{N(N+7)}{9-N}, \label{a1_eqn_0} \end{align} which sets an upper bound on the central charge, $c<9$. Furthermore, applying the integer root theorem to \ref{N_eqn_0}, we see that $c=N\in\mathbb{Z}$. Now, checking for higher orders (up to $\mathcal{O}(q^{5000})$) for both characters, for $0<c<9$, we find the following admissible character solutions, \begin{itemize} \item $(c,h)=\left(1,\frac{1}{3}\right)$: \begin{align}\label{c=1_Fricke_2_2,0} \begin{split} \chi_0 =& q^{-\tfrac{1}{24}}(1 + q + 2q^2 + q^3 + 3q^4 + 3q^5 + 5q^6 + 5q^7 + 8q^8 + 8q^9 + 12q^{10} + \mathcal{O}(q^{11})),\\ \chi_{\frac{1}{3}} =& q^{\tfrac{7}{24}}(1 + q^2 + q^3 + 2q^4 + q^5 + 3q^6 + 2q^7 + 5q^8 + 4q^9 + 7q^{10} + \mathcal{O}(q^{11})).
\end{split} \end{align} \item $(c,h)=\left(3,\frac{1}{2}\right)$: \begin{align}\label{c=3_Fricke_2_2,0} \begin{split} \chi_0 =& q^{-\tfrac{1}{8}}(1 + 5q + 11q^2 + 20q^3 + 41q^4 + 76q^5 + 127q^6 + 211q^7 + 342q^8 + 541q^9\\ &{}\ \ \ \ \ \ \ \ + 838q^{10} + \mathcal{O}(q^{11})),\\ \chi_{\frac{1}{2}} =& q^{\tfrac{3}{8}}(1 + q + 5q^2 + 6q^3 + 16q^4 + 21q^5 + 46q^6 + 61q^7 + 117q^8 + 157q^{9}\\ &{}\ \ \ \ \ \ + 273 q^{10} + \mathcal{O}(q^{11})). \end{split} \end{align} \item $(c,h)=\left(5,\frac{2}{3}\right)$: \begin{align}\label{c=5_Fricke_2_2,0} \begin{split} \chi_0 =& q^{-\tfrac{5}{24}}(1 + 15q + 40q^2 + 135q^3 + 290q^4 + 726q^5 + 1415q^6 + 3000q^7 + 5485q^8\\ &{}\ \ \ \ \ \ \ \ \ + 10565q^9 + 18407q^{10} + \mathcal{O}(q^{11})),\\ \chi_{\frac{2}{3}} =& q^{\tfrac{11}{24}}(5 + 11q + 55q^2 + 100q^3 + 290q^4 + 535q^5 + 1235q^6 + 2160q^7 + 4400q^8\\ &{}\ \ \ \ \ \ \ + 7465q^{9} + 13945q^{10} + \mathcal{O}(q^{11})). \end{split} \end{align} \end{itemize} We also find an \textit{identity only} admissible solution, which only has a nice identity character but an unstable non-trivial character\footnote{By unstable character, we mean a character which has fractional or negative coefficients and is not a quasi-character.}. This is found at $(c,h)=\left(6,\frac{3}{4}\right)$. \begin{align}\label{8C_character_1} \begin{split} \chi_0 =& q^{-\tfrac{1}{4}}(1 + 26q + 79q^2 + 326q^3 + 755q^4 + 2106q^5 + 4460q^6 + 10284q^7\\ &{}\ \ \ \ \ \ \ \ + 20165q^8 + 41640q^9 + 77352q^{10} + \mathcal{O}(q^{11}))\\ =& j_{2^{+}}^{\tfrac{1}{4}}, \end{split} \end{align} which is the $\ell = 1$ single-character solution reappearing. This surprisingly turns out to be the McKay-Thompson series of the $8C$ conjugacy class of the Monster $\mathbb{M}$ (OEIS sequence A052241 \cite{A052241}). A detailed derivation of the admissible solutions is provided in \ref{appendix:detailed_derivation}.
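The identification $\chi_{0} = j_{2^{+}}^{1/4}$ can also be confirmed independently of the MLDE, using the $\eta$-quotient representation $j_{2^{+}} = j_{2} + 4096\,j_{2}^{-1} + 128$ with $j_{2} = \left(\eta(\tau)/\eta(2\tau)\right)^{24}$ quoted elsewhere in the text; the sketch below (a truncated power-series computation of our own) builds the $q$-series of $q\,j_{2^{+}}$ and extracts its fourth root.

```python
from fractions import Fraction as F
from math import comb

N = 4   # truncation: keep q^0 .. q^{N-1}

def mul(a, b):
    out = [F(0)] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j < N:
                    out[i + j] += ai * bj
    return out

def inv(a):                       # 1/a for a series with a[0] = 1
    out = [F(0)] * N
    out[0] = F(1)
    for n in range(1, N):
        out[n] = -sum(a[k] * out[n - k] for k in range(1, n + 1))
    return out

def prod_pow24(step):             # prod_{n >= 1} (1 - q^{step n})^{24}, truncated
    out = [F(0)] * N
    out[0] = F(1)
    for n in range(1, N):
        if step * n >= N:
            break
        f = [F(0)] * N
        for j in range(N // (step * n) + 1):
            if step * n * j < N:
                f[step * n * j] = F((-1) ** j * comb(24, j))
        out = mul(out, f)
    return out

# A = q*j_2 and B = q*j_{2+}, both series starting at 1
A = mul(prod_pow24(1), inv(prod_pow24(2)))
invA = inv(A)
B = [A[i] + (128 if i == 1 else 0) + (4096 * invA[i - 2] if i >= 2 else 0)
     for i in range(N)]

def log1(a):                      # log(a) for a series with a[0] = 1
    x = list(a); x[0] = F(0)
    out = [F(0)] * N
    term = [F(0)] * N; term[0] = F(1)
    for k in range(1, N):
        term = mul(term, x)
        for i in range(N):
            out[i] += F((-1) ** (k + 1), k) * term[i]
    return out

def exp0(a):                      # exp(a) for a series with a[0] = 0
    out = [F(0)] * N; out[0] = F(1)
    term = [F(0)] * N; term[0] = F(1)
    for k in range(1, N):
        term = [t / k for t in mul(term, a)]   # maintains a^k / k!
        for i in range(N):
            out[i] += term[i]
    return out

chi = exp0([x / 4 for x in log1(B)])           # B^{1/4}
assert chi == [1, 26, 79, 326]                 # leading coefficients of chi_0 above
```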
For $c=7,8$, both the characters are unstable and for $c=2,4$ there exist no $a_1\in\mathbb{Z}$ solutions to \ref{N_eqn_0}. Consider now the $c=6$, \textit{identity only} solution. It turns out $\chi_{0}^{(c=6)} = j_{2^{+}}^{\tfrac{1}{4}}$. So, a $(1,1)$ solution has appeared as a $(2,0)$ solution, a phenomenon which also keeps occurring in the $\text{SL}(2,\mathbb{Z})$ case. So, we can use this phenomenon, \textit{$j_{2^{+}}^x$ where $x\in\{\frac{1}{4},\frac{1}{2},\frac{3}{4}\}$ will always appear as a particular solution to an $(n, \ell)$ MLDE}, to reduce the number of parameters of the $(n, \ell)$ MLDE by one. For $c=6$, $\mu_1 = -\frac{1}{8}$ (using \ref{param0}). With this value of $\mu_1$, \ref{reparameterized_MLDE_2+} should yield the solution $\chi=j_{2^{+}}^{\tfrac{1}{4}}$. It is interesting to note that the identity character in \ref{c=1_Fricke_2_2,0} can be realized in terms of modular forms of the Hecke group $\Gamma_{0}(2)$ as follows \begin{align}\label{Hecke_1_c=1} \begin{split} \chi_{0}^{(c = 1)} =& \left(\frac{\left(j_{2}\Delta_{2}^{\infty}\right)(3\tau)}{\left(j_{2}\Delta_{2}^{\infty}\right)^{\tfrac{1}{3}}(\tau)}\right)^\frac{1}{8}\left(\frac{1}{\Delta_{2}(\tau)}\right)^{\tfrac{1}{24}},\\ \Delta_{2}^{\infty}(\tau) =& \frac{\eta^{16}(2\tau)}{\eta^{8}(\tau)}\in\mathcal{M}_{4}(\Gamma_{0}(2)),\ \ \Delta_{2}^{0}(\tau) = \frac{\eta^{16}(\tau)}{\eta^{8}(2\tau)}\in\mathcal{M}_{4}(\Gamma_{0}(2)),\\ \Delta_{2}(\tau) =& \Delta_{2}^{\infty}\Delta_{2}^{0}\in\mathcal{S}_{8}(\Gamma_{0}(2)),\ \ \ \ \ \ \ j_{2}(\tau) = \left(\frac{\eta(\tau)}{\eta(2\tau)}\right)^{24}. \end{split} \end{align} Also, the $q$-series expansion of this character is the sequence of the number of partitions of $n$ in which no part appears more than twice and no two parts differ by unity (OEIS sequence \cite{A070047}).
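The partition-theoretic description is easy to test; the short sketch below (our own memoized counter) counts partitions of $n$ in which each part appears at most twice and distinct parts differ by at least two, and compares against the coefficients of $\chi_{0}$ in \ref{c=1_Fricke_2_2,0}.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def count(n, mx):
    """Partitions of n with parts <= mx, each part used at most twice,
    and distinct parts differing by at least 2."""
    if n == 0:
        return 1
    total = 0
    for p in range(1, min(n, mx) + 1):
        total += count(n - p, p - 2)          # p used once
        if 2 * p <= n:
            total += count(n - 2 * p, p - 2)  # p used twice
    return total

# coefficients of chi_0 at (c, h) = (1, 1/3), read off from the q-series above
assert [count(n, n) for n in range(11)] == [1, 1, 2, 1, 3, 3, 5, 5, 8, 8, 12]
```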
\subsubsection{Bilinear pairs} \noindent Let ${}^{(2)}\mathcal{W}_1$, ${}^{(2)}\mathcal{W}_3$ and ${}^{(2)}\mathcal{W}_5$ denote the $c\in\{1,3,5\}$ admissible character-like $(2,0)$ solutions. Note that we have the following bilinear identities, \begin{align} j_{2^{+}}^{\tfrac{1}{4}} =& \chi_0^{{}^{(2)}\mathcal{W}_1}\chi_0^{{}^{(2)}\mathcal{W}_5} + 2 \, \chi_{\frac{1}{3}}^{{}^{(2)}\mathcal{W}_1}\chi_{\frac{2}{3}}^{{}^{(2)}\mathcal{W}_5}, \label{bilin0} \\ j_{2^{+}}^{\tfrac{1}{4}} =& \chi_0^{{}^{(2)}\mathcal{W}_3}\chi_0^{{}^{(2)}\mathcal{W}_3} + 16 \, \chi_{\frac{1}{2}}^{{}^{(2)}\mathcal{W}_3}\chi_{\frac{1}{2}}^{{}^{(2)}\mathcal{W}_3}, \label{bilin1} \end{align} where \ref{bilin1} is a self-dual relation and where the above bilinears realize the following relation: $(2,0)\overset{c^\mathcal{H}=6}{\underset{n_1=1}{\longleftrightarrow}}(2,0)$ (where $n_1$ is the sum of the conformal dimensions of the bilinear pair). These bilinear relations can probably help us to write down the modular invariant partition function for the $(2,0)$ admissible solutions. For instance, the below could be true, \begin{align} \mathcal{Z}_1 =&\left|\chi_0^{{}^{(2)}\mathcal{W}_1}\right|^2 + 2 \, \left|\chi_{\frac{1}{3}}^{{}^{(2)}\mathcal{W}_1}\right|^2, \label{partintn1} \\ \mathcal{Z}_3 =& \left|\chi_0^{{}^{(2)}\mathcal{W}_3}\right|^2 + 16 \, \left|\chi_{\frac{1}{2}}^{{}^{(2)}\mathcal{W}_3}\right|^2, \label{partintn3} \\ \mathcal{Z}_5 =& \left|\chi_0^{{}^{(2)}\mathcal{W}_5}\right|^2 + 2 \, \left|\chi_{\frac{2}{3}}^{{}^{(2)}\mathcal{W}_5}\right|^2.
\label{partintn5} \end{align} \subsubsection{\texorpdfstring{$\ell = 1$}{l = 1}} \noindent The MLDE in this case takes the following form \begin{align} \left[\mathcal{D}^2+\left(\mu_1\frac{\left(E_4^{(2+)}\right)^2}{E_6^{(2+)}}+\mu_2\frac{\Delta_2}{E_6^{(2+)}}\right)\mathcal{D}+\mu_3\frac{E_{10}^{(2+)}}{E_{6}^{(2+)}}\right]\chi(\tau) =& 0, \label{mldeF2_2char_l1} \\ \left[E_6^{(2+)}\theta_{q}^{2} + \left(E_6^{(2+)} - \frac{E_6^{(2+)}E_2^{(2+)}}{4} + \mu_1\left(E_4^{(2+)}\right)^2 + \mu_2\Delta_2\right)\theta_{q} + \mu_3 E_{10}^{(2+)} \right]\chi(\tau) =& 0. \label{qmldeF2_2char_l1} \end{align} We obtain the following recursion relation from the above MLDE \begin{align} &\sum\limits_{k=0}^i\left[E_{6,k}^{(2+)}(\alpha^j+i-k)(\alpha^j+i-k-1) + E_{6,k}^{(2+)}(\alpha^j+i-k)-\frac{\left(E_{6}^{(2+)}E_{2}^{(2+)}\right)_{k}}{4}(\alpha^j+i-k)\right. \nonumber\\ &\left. + \mu_1 \left(\left(E_{4}^{(2^{+})}\right)^2\right)_{k} (\alpha^j+i-k) + \mu_2\Delta_{2,k}(\alpha^j+i-k) + \mu_3 E^{(2^{+})}_{10,k}\right]a^j_{i-k} = 0, \label{recursion_l1F2_2char} \end{align} which gives the following indicial equation \begin{align} \alpha^2 + \left(\mu_1-\frac{1}{4}\right)\alpha + \mu_3 = 0. \label{indicial_l1F2_2char} \end{align} From Riemann-Roch (the sum of exponents is $0$ for $\ell = 1$), we see that $\mu_1=\frac{1}{4}$. Now, let us determine the value of $\mu_2$ by transforming the $\tau$-MLDE in \ref{mldeF2_2char_l1} into the corresponding $j_{2^{+}}$-MLDE (about $\tau=\rho_2$), \begin{align} \left[j_{2^{+}}^2\partial_{j_{2^{+}}}^2 + \frac{j_{2^{+}}}{2}\partial_{j_{2^{+}}} + \frac{j_{2^{+}}}{j_{2^{+}}-256}\left(\frac{j_{2^{+}}}{2}-(64+\mu_2)\right)\partial_{j_{2^{+}}} + \frac{\mu_3 j_{2^{+}}}{j_{2^{+}}-256}\right]\chi = 0, \label{j2+mlde_p2_l1} \end{align} where we have set $\mu_1=\frac{1}{4}$ from the $\ell=1$ indicial equation (see \ref{indicial_l1F2_2char}). Now, note that about $\tau=\rho_2$, we have $\chi_0\sim j_{2^{+}}^x$ and $j_{2^{+}}\to 0$, with $x=\frac{1}{4}$ for $\ell=1$ (see single-character solutions).
Upon substituting $\chi=j_{2^{+}}^x$ about $j_{2^{+}}=0$, the $j_{2^{+}}$-indicial equation (which is simply the vanishing of the coefficient of $j_{2^{+}}^{x}$ at $j_{2^{+}}=0$) reads \begin{align} x\left(x+\frac{\mu_2}{256}-\frac{1}{4}\right) = 0, \label{jindip2l1F2} \end{align} which yields $\mu_2 = 0$ for $x=\frac{1}{4}$. \\ \noindent Putting $i=1$ in \ref{recursion_l1F2_2char} yields\footnote{We shall at times identify $m_i:=a^0_i$.}, \begin{align} &m_1 + 40 \alpha_0 + 2 m_1 \alpha_0 - 48 \alpha_0^2 = 0, \label{m1al1}\\ &-12 m_1 + 20 N + m_1 N + N^2 = 0 \, \, \, \, (\text{where}\ N:=-24\alpha_0 = c). \label{diophNl1p2} \end{align} Applying the integer root theorem to \ref{diophNl1p2}, we note that $N=c\in\mathbb{Z}$.\\ With the above identification we can now express $m_1$ in terms of $N$, \begin{align} m_1 = \frac{N(N+20)}{12-N}, \label{m1N_p2F2l1} \end{align} which implies that $0<c< 12$. Scanning this space of $c$, we observe that there exist the following admissible solutions: \begin{itemize} \item $(c,h)=\left(4,\frac{1}{3}\right)$: \begin{align} \chi_0 &= q^{-\tfrac{1}{6}}(1 + 12q + 70q^2 + 1304q^3 + 45969q^4 + 2110036q^5 + 109853542q^6 \nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ + 6204772264q^7 + 371363466708q^8 + \mathcal{O}(q^{9})), \\ \chi_{\frac{1}{3}} &= q^{\tfrac{1}{6}}(1 - 4q - 86q^2 - 2712q^3 - 110221q^4 - 5321572q^5 - 285692130q^6 \nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ - 16484830680q^7 - 1002347627239q^8 - \mathcal{O}(q^{9})), \end{align} where $\chi_0$ can be interpreted as a $c=4$ single-character solution. Also, note that $\chi_{\frac{1}{3}}$ is a quasi-character of type II (see \cite{Chandra:2018pjq}). One can build this out of the general form of a single-character solution in $\Gamma_{0}^{+}(2)$ given in \ref{gen_j}. \item $(c,h)=\left(6,\frac{1}{2}\right)$: \begin{align}\label{8C_character_2} \begin{split} \chi_0 =& j_{2^{+}}^{\tfrac{1}{4}},\\ \chi_{\frac{1}{2}} =& \text{unstable}.
\end{split} \end{align} \end{itemize} Akin to the $(c,h) = \left(6, \tfrac{3}{4}\right)$ identity character in \ref{8C_character_1}, this identity character's $q$-series expansion turns out to be the McKay-Thompson series of the $8C$ conjugacy class of $\mathbb{M}$. Note that $j_{2^{+}}^x$ with $x\in\{\frac{1}{2},\frac{3}{4}\}$ can also appear as a solution to the above $(2,1)$ MLDE, and for these values of $x$ we get $\mu_2=-64$ and $\mu_2=-128$ respectively. \begin{itemize} \item When $\mu_2=-64$, we get from the $i=1$ equation: $(m_1+c)\left(1-\frac{c}{12}\right)=0$, which (since $m_1\geq 0$) implies $c=12$. So, this choice of $\mu_2$ only leads to one unitary single-character solution: $j_{2^{+}}^{\tfrac{1}{2}}$. \item When $\mu_2=-128$, we get from the $i=1$ equation: $m_1=\frac{N(N-44)}{12-N}$, which means the allowed range of central charge is $12<c<45$ (where $N=-24\alpha_0$). Scanning this space of $c$, we only obtain $j_{2^{+}}^{\tfrac{3}{4}}$ and three type II quasi-character solutions for $c=20,28,44$. Thus, we conclude that this value of $\mu_2$ has no admissible 2-character solutions. \end{itemize} \subsubsection{\texorpdfstring{$\ell = 2$}{l = 2}} \noindent Let us consider the case with $\ell = 2$ and $n = 2$ characters. The MLDE in this case reads \begin{align}\label{MLDE_n=2,l=2_Fricke_2} \left[\omega_{4}(\tau)\mathcal{D}^{2} + \omega_{6}(\tau) \mathcal{D} + \omega_{8}(\tau)\right]\chi(\tau) = 0. \end{align} Since $2\ell = 4 \not\equiv 2\ (\text{mod}\ 8)$, using \ref{number_of_parameters_Gamma_0_2+}, we see that we have three free parameters to deal with here. The basis decompositions of the spaces of modular forms in $\Gamma_{0}^{+}(2)$ of weights $6$ and $8$ read \begin{align} \begin{split} \mathcal{M}_{6}(\Gamma_{0}^{+}(2)) =& \mathbb{C}E_{6}^{(2^{+})},\\ \mathcal{M}_{8}(\Gamma_{0}^{+}(2)) =& \mathbb{C}\left(E_{4}^{(2^{+})}\right)^{2}\oplus\mathbb{C}\Delta_{2}.
\end{split} \end{align} We make the following choices for $\omega_{4}(\tau)$, $\omega_{6}(\tau)$, and $\omega_{8}(\tau)$ \begin{align} \begin{split} \omega_{4}(\tau) =& \nu_{1}E_{4}^{(2^{+})}(\tau),\\ \omega_{6}(\tau) =& \nu_{2}E_{6}^{(2^{+})}(\tau),\\ \omega_{8}(\tau) =& \nu_{3}\left(E_{4}^{(2^{+})}\right)^{2}(\tau) + \nu_{4}\Delta_{2}(\tau). \end{split} \end{align} Making these substitutions, we obtain the following MLDE \begin{align}\label{n=2,l=2_MLDE} \left[\Tilde{\partial}^{2} -\frac{1}{4}\left(\frac{E_{2}^{(2^{+})}}{E_{4}^{(2^{+})}}\right)\Tilde{\partial} + \mu_{1}\left(\frac{E_{6}^{(2^{+})}}{E_{4}^{(2^{+})}}\right)\Tilde{\partial} + \mu_{2}E_{4}^{(2^{+})} + \mu_{3}\left(\frac{\Delta_{2}}{E_{4}^{(2^{+})}}\right)\right]\chi(\tau) = 0, \end{align} where the three free parameters are $\mu_{1} \equiv \tfrac{\nu_{2}}{\nu_{1}}$, $\mu_{2} \equiv \tfrac{\nu_{3}}{\nu_{1}}$, and $\mu_{3}\equiv \tfrac{\nu_{4}}{\nu_{1}}$. Substituting mode expansions, we find \begin{align} &\sum\limits_{k=0}^i\left[E_{4,k}^{(2+)}(\alpha^j+i-k)(\alpha^j+i-k-1) + E_{4,k}^{(2+)}(\alpha^j+i-k)-\frac{\left(E_{4}^{(2^{+})}E_{2}^{(2^{+})}\right)_{,k}}{4}(\alpha^j+i-k)\right. \nonumber\\ &\left.+\, \mu_1E_{6,k}^{(2+)}(\alpha^j+i-k) + \mu_2\left(E_4^{(2^{+})}\right)^2_{,k} + \mu_3\Delta_{2,k}\right]a^j_{i-k} = 0. \label{recursion_l2F2_2char} \end{align} From this, we obtain the following indicial equation, \begin{align}\label{indicial_l2F2_2char} \begin{split} &\alpha^{2} + \alpha\left(\mu_{1} - \frac{1}{4}\right) + \mu_{2} = 0,\\ \alpha_{0} =& \frac{1}{2}\left(-\Tilde{\mu}_{1} - \sqrt{\Tilde{\mu}_{1}^{2} - 4\mu_{2}}\right),\\ \alpha_{1} =& \frac{1}{2}\left(-\Tilde{\mu}_{1} + \sqrt{\Tilde{\mu}_{1}^{2} - 4\mu_{2}}\right), \end{split} \end{align} where $\Tilde{\mu}_{1} \equiv \mu_{1} - \tfrac{1}{4}$. Using the Riemann-Roch identity \ref{Riemann_Roch_Gamma_0_2+} with $(n,\ell) = (2,2)$, we find \begin{align}\label{RR_alphas_l=2} \alpha_{0} + \alpha_{1} = -\frac{1}{4}.
\end{align} From \ref{indicial_l2F2_2char}, we have $\alpha_{0} + \alpha_{1} = -\Tilde{\mu}_{1}$ and \ref{RR_alphas_l=2} implies $\mu_{1} = \tfrac{1}{2}$. The roots in \ref{indicial_l2F2_2char} now read \begin{align} \begin{split} \alpha_{0} = \frac{1}{8}\left(-1 -\sqrt{1-64\mu_{2}}\right) \equiv \frac{1}{8}(-1 - x),\\ \alpha_{1} = \frac{1}{8}\left(-1 +\sqrt{1-64\mu_{2}}\right) \equiv \frac{1}{8}(-1 + x), \end{split} \end{align} where we have set $x = \sqrt{1 - 64\mu_{2}}$. From this, we can find the central charge and the conformal dimension to be \begin{align} c = 3(x+1),\ \ h = \frac{x}{4}. \end{align} We have now reduced the number of free parameters to two. We now perform modular re-parameterization to fix the parameter $\mu_{3}$. The MLDE \ref{MLDE_n=2_Fricke_2} with $\ell = 2$ has modular forms \begin{align} \begin{split} \phi_{1} =&\mu_{1}\left(\frac{E_{6}^{(2^{+})}}{E_{4}^{(2^{+})}}\right) = \mu_{1}A_{2^{+}}^{2},\\ \phi_{0} =& \mu_{2}E_{4}^{(2^{+})} + \mu_{3}\left(\frac{\Delta_{2}}{E_{4}^{(2^{+})}}\right) = \mu_{2}\frac{A_{2^{+}}^{2}}{1 - K_{2^{+}}} + \frac{\mu_{3}}{256}\frac{A_{2^{+}}^{2}K_{2^{+}}}{1 - K_{2^{+}}}, \end{split} \end{align} where we used the definitions in \ref{repara_definitions_2+}. We can recast the MLDE as follows \begin{align}\label{reparameterized_l=2} \left[\theta_{K_{2^{+}}}^{2} + \left(\mu_{1} -\frac{1 + K_{2^{+}}}{4(1- K_{2^{+}})}\right)\theta_{K_{2^{+}}} + \frac{\mu_{2}}{1-K_{2^{+}}} + \frac{\mu_{3}}{256}\frac{K_{2^{+}}}{1-K_{2^{+}}}\right]\chi(\tau) = 0. \end{align} Now, at $\ell = 2$, the single-character solution reads $f(\tau) = j_{2^{+}}^{\tfrac{1}{2}}(\tau) = 16K_{2^{+}}^{-\tfrac{1}{2}}(\tau)$. From \ref{Hauptmodul_limits_2+}, we have $j_{2^{+}}(\tau)\to 0$ at the elliptic point $\tau = \rho_{2} = \tfrac{-1+i}{2}$, or equivalently, $K_{2^{+}}(\rho_{2})\to \infty$.
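The elliptic-point limit can also be carried out symbolically. A sketch (using sympy; purely a consistency check of the parameter fixing): substituting $f = (256\,K_{2^{+}}^{-1})^{\gamma}$ into \ref{reparameterized_l=2}, so that $\theta_{K_{2^{+}}}f = -\gamma f$, and sending $K_{2^{+}}\to\infty$ gives an indicial equation in $\gamma$, which can then be solved for $\mu_{3}$ at the exponent $\gamma=\tfrac{1}{2}$ of the $\ell=2$ single-character solution:

```python
from sympy import symbols, limit, oo, Rational, solve

K, g, mu2, mu3 = symbols('K gamma mu2 mu3')

# theta_K-indicial polynomial from the reparameterized MLDE with mu_1 = 1/2,
# acting on f = (256/K)^gamma (so that theta_K f = -gamma * f)
indicial = (g**2
            - (Rational(1, 2) - (1 + K) / (4 * (1 - K))) * g
            + mu2 / (1 - K)
            + (mu3 / 256) * K / (1 - K))

# at the elliptic point K -> infinity, the mu_2 term drops out and the
# indicial equation becomes gamma^2 - (3/4) gamma - mu_3/256 = 0
indicial_at_rho2 = limit(indicial, K, oo)

print(solve(indicial_at_rho2.subs(g, Rational(1, 2)), mu3))  # [-32]
```

This reproduces the value $\mu_{3}=-32$ used in the scan that follows.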
With $\mu_{1} = \tfrac{1}{2}$, substituting $f(\tau) = (256K_{2^{+}}^{-1})^{\gamma}(\tau)$ in \ref{reparameterized_l=2}, we obtain the following indicial equation \begin{align} \begin{split} \lim\limits_{K_{2^{+}}\to \infty}\left[\gamma^{2} - \left(\frac{1}{2} - \frac{1+K_{2^{+}}}{4(1-K_{2^{+}})}\right)\gamma + \frac{\mu_{2}}{(1-K_{2^{+}})} + \frac{\mu_{3}}{256}\frac{K_{2^{+}}}{(1-K_{2^{+}})}\right] =& 0,\\ \gamma^{2} - \frac{3}{4}\gamma - \frac{1}{256}\mu_{3} =& 0. \end{split} \end{align} With $\gamma = \tfrac{1}{2}$, we find $\mu_{3} = -32$. Putting $i=1$ in \ref{recursion_l2F2_2char} yields\footnote{We shall at times identify $m_i:=a^0_i$.}, \begin{align} &-128 + 5 m_1 - 248 \alpha_0 + 8 m_1 \alpha_0 - 192 \alpha_0^2 = 0, \label{m1al}\\ &384 - 15 m_1 - 31 N + m_1 N + N^2 = 0 \, \, \, \, (\text{where } N:=-24\alpha_0 = c). \label{diophNl2p2} \end{align} Applying the integer root theorem to \ref{diophNl2p2} we note that $N=c\in\mathbb{Z}$. Using $N=-24\alpha_0$ we can now find $m_1$ in terms of $N$, \begin{align} m_1 = (16-N)+\frac{144}{15-N}, \label{m1N_p2F2l2} \end{align} which, together with $m_1\geq 0$, implies that $0<c< 15$. Scanning this space of $c$, we observe that there exist the following admissible solutions. \begin{itemize}\label{c=3,h=0_no_MT} \item $(c,h)=\left(3,0\right)$: \begin{align}\label{c=3,single_char_n=2} \begin{split} \chi_0 = q^{-\tfrac{1}{8}}(1 +& 25q + 51q^2 + 196q^3 + 297q^4 + 780q^5 + 1223q^6\\ +& 2551q^7 + 3798q^8 + 7201q^9 + 10582q^{10} + \mathcal{O}(q^{11})), \end{split} \end{align} where, since $h=0$, both characters are identical. This is a single-character solution which, surprisingly, is not ``yet" any $(1,\ell)$ solution and also does not possess a McKay-Thompson interpretation.
\item $(c,h)=\left(6,\frac{1}{4}\right)$: \begin{align}\label{8C_character_3} &\chi_0 = j_{2^{+}}^{\tfrac{1}{4}}, \\ &\chi_{\frac{1}{4}} = \text{unstable} \nonumber \end{align} \item $(c,h)=\left(7,\frac{1}{3}\right)$: \begin{align}\label{c=7_Fricke_2_2,2} &\chi_0 = j_{2^{+}}^{\tfrac{1}{4}}\otimes \chi_0^{{}^{(2)}\mathcal{W}_1} \nonumber\\ &\chi_{\frac{1}{3}} = j_{2^{+}}^{\tfrac{1}{4}}\otimes \chi_{\frac{1}{3}}^{{}^{(2)}\mathcal{W}_1} \end{align} \item $(c,h)=\left(9,\frac{1}{2}\right)$: \begin{align} &\chi_0 = j_{2^{+}}^{\tfrac{1}{4}}\otimes \chi_0^{{}^{(2)}\mathcal{W}_3} \nonumber\\ &\chi_{\frac{1}{2}} = j_{2^{+}}^{\tfrac{1}{4}}\otimes \chi_{\frac{1}{2}}^{{}^{(2)}\mathcal{W}_3} \end{align} \item $(c,h)=\left(11,\frac{2}{3}\right)$: \begin{align} &\chi_0 = j_{2^{+}}^{\tfrac{1}{4}}\otimes \chi_0^{{}^{(2)}\mathcal{W}_5} \nonumber\\ &\chi_{\frac{2}{3}} = j_{2^{+}}^{\tfrac{1}{4}}\otimes \chi_{\frac{2}{3}}^{{}^{(2)}\mathcal{W}_5} \end{align} \item $(c,h)=\left(12,\frac{3}{4}\right)$: \begin{align} \begin{split}\label{4B_character_1} \chi_0 =& j_{2^{+}}^{\tfrac{1}{2}}\\ =& q^{-\tfrac{1}{2}}(1 + 52 q + 834 q^2 + 4760 q^3 + 24703 q^4 + 94980 q^5 + 343998 q^6 + 1077496 q^7\\ &{}\ \ \ \ \ \ \ \ + 3222915 q^8 + \mathcal{O}(q^{9})),\\ \chi_{\frac{3}{4}} =& \text{unstable}. \end{split} \end{align} \end{itemize} The identity character in \ref{8C_character_3}, along with \ref{8C_character_1} and \ref{8C_character_2}, forms a set of central charge $c = 6$ admissible solutions that correspond to the $8C$ conjugacy class of $\mathbb{M}$. Also, the identity character in \ref{4B_character_1} surprisingly turns out to be the McKay-Thompson series of the $4B$ conjugacy class of $\mathbb{M}$ (OEIS sequence A007247 \cite{A007247}). Let ${}^{(2)}\mathcal{W}_7$, ${}^{(2)}\mathcal{W}_9$ and ${}^{(2)}\mathcal{W}_{11}$ denote the $c\in\{7,9,11\}$ admissible character-like $(2,2)$ solutions.
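The scan over the range $0<c<15$ described above is easy to automate. A minimal sketch (the helper name is ours): it lists the values $N=c$ for which $m_1=(16-N)+\tfrac{144}{15-N}$ is a non-negative integer. Integrality of $m_1$ is only a necessary condition; each candidate must still pass integrality and positivity of all higher $m_i$, which is what cuts the list down to the admissible $c\in\{3,6,7,9,11,12\}$ quoted above:

```python
# Necessary condition for admissibility of the (2,2) solutions with mu_3 = -32:
# m_1 = (16 - N) + 144/(15 - N) must be a non-negative integer, 0 < N = c < 15.
def scan_l2_candidates():
    candidates = {}
    for N in range(1, 15):
        if 144 % (15 - N) == 0:
            m1 = (16 - N) + 144 // (15 - N)
            if m1 >= 0:
                candidates[N] = m1
    return candidates

print(scan_l2_candidates())
# {3: 25, 6: 26, 7: 27, 9: 31, 11: 41, 12: 52, 13: 75, 14: 146}
```

The values $m_1=25$ at $c=3$ and $m_1=52$ at $c=12$ match the $q$-expansions listed above.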
Note that here too we have some bilinear identities, \begin{align} &j_{2^{+}}^{\tfrac{3}{4}} = \chi_0^{{}^{(2)}\mathcal{W}_7}\chi_0^{{}^{(2)}\mathcal{W}_{11}} + 2 \, \chi_{\frac{1}{3}}^{{}^{(2)}\mathcal{W}_7}\chi_{\frac{2}{3}}^{{}^{(2)}\mathcal{W}_{11}}, \label{bilin2} \\ &j_{2^{+}}^{\tfrac{1}{4}} = \chi_0^{{}^{(2)}\mathcal{W}_9}\chi_0^{{}^{(2)}\mathcal{W}_9} + 16 \, \chi_{\frac{1}{2}}^{{}^{(2)}\mathcal{W}_9}\chi_{\frac{1}{2}}^{{}^{(2)}\mathcal{W}_9}, \label{bilin3} \end{align} where \ref{bilin3} is a self-dual relation. Note that \ref{bilin2} and \ref{bilin3} are nothing but $j_{2^{+}}^{\tfrac{1}{2}}$ multiplied with \ref{bilin0} and \ref{bilin1} respectively. More bilinear pairs can be constructed by multiplying $j_{2^{+}}^{\tfrac{1}{4}}$ with \ref{bilin0} and \ref{bilin1}.\\ \noindent Note that $j_{2^{+}}^x$ with $x=\frac{3}{4}$ can also appear as a solution to the above $(2,2)$ MLDE, and for this value of $x$ we get $\mu_3=0$. With $\mu_3=0$, we get from the $i=1$ equation: $m_1=\frac{N(N-31)}{15-N}$, where $N=-24\alpha_0$. This gives the allowed range of central charge as $15<c<32$. Scanning this space of $c$ we get the following admissible solutions. \begin{itemize} \item $(c,h)=\left(19,\frac{4}{3}\right)$: \begin{align} \begin{split} \chi_0 =& q^{-\tfrac{19}{24}}(1 + 57q + 2147q^2 + 31540q^3 + 260243q^4 + 1691798q^5 + 8887877q^6\\ &{}\ \ \ \ \ \ \ \ \ + 41091167q^7 + 168938614q^8 + \mathcal{O}(q^{9})),\\ \chi_{\frac{4}{3}} =& q^{\tfrac{13}{24}}(133 + 2717q + 33839q^2 + 246468q^3 + 1506510q^4 + 7478629q^5 + 33354310q^6\\ &{}\ \ \ \ \ \ \ \ \ \ + 132591975q^7 + 489341675q^8 + \mathcal{O}(q^{9})).
\end{split} \end{align} \item $(c,h)=\left(21,\frac{3}{2}\right)$: \begin{align} \begin{split} \chi_0 =& q^{-\tfrac{7}{8}}(1 + 35q + 2394q^2 + 40873q^3 + 405426q^4 + 2946132q^5 + 17381133q^6\\ &{}\ \ \ \ \ \ \ \ +88016962q^7 + 395953299q^8 + \mathcal{O}(q^{9})),\\ \chi_{\frac{3}{2}} =& q^{\tfrac{5}{8}}(7 + 161q + 2160q^2 + 17409q^3 + 115003q^4 + 619101q^5 + 2962183q^6\\ &{}\ \ \ \ \ \ + 12628973q^7 + 49681296q^8 + \mathcal{O}(q^{9})). \end{split} \end{align} \item $(c,h)=\left(23,\frac{5}{3}\right)$: \begin{align} \begin{split} \chi_0 =& q^{-\tfrac{23}{24}}(1 + 23q + 3335q^2 + 67068q^3 + 793776q^4 + 6461689q^5\\ &{}\ \ \ \ \ \ \ \ \ + 42601060q^6 + 236116965q^7 + 1157910689q^8 + \mathcal{O}(q^{9})),\\ \chi_{\frac{5}{3}} =& q^{\tfrac{17}{24}}(506 + 12903q + 185725q^2 + 1639463q^3 + 11650696q^4 + 67590238q^5\\ &{}\ \ \ \ \ \ \ \ \ \ + 345549517q^6 + 1572505630q^7 + 6570405101q^8 + \mathcal{O}(q^{9})). \end{split} \end{align} \end{itemize} For $\mu_3=-112$, we get from the $i=1$ equation: $m_1=\frac{N(N-10)}{18-N}$ where $N=-24\alpha_0$. This gives the allowed range of central charge as, $9<c<18$. Scanning this space of $c$ we find no admissible solutions. Also, we note that we did not get any quasi-character solutions. \subsubsection{Non-trivial bilinear pairs} \noindent We present some more bilinear pairs that are non-trivial in the sense that they cannot be derived from \ref{bilin0} and \ref{bilin1}. Let ${}^{(2)}\mathcal{W}_{19}$, ${}^{(2)}\mathcal{W}_{21}$ and ${}^{(2)}\mathcal{W}_{23}$ denote the $c\in\{19,21,23\}$ admissible character-like (2,2) solutions (obtained with $\mu_3=0$). 
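Bilinear combinations of this kind can be verified directly at the level of $q$-series. As an illustration (coefficients copied from the $c=19$ and $c=23$ expansions above), the combination $\chi_0^{{}^{(2)}\mathcal{W}_{19}}\chi_0^{{}^{(2)}\mathcal{W}_{23}} + 2\,\chi_{\frac{4}{3}}^{{}^{(2)}\mathcal{W}_{19}}\chi_{\frac{5}{3}}^{{}^{(2)}\mathcal{W}_{23}}$ has leading power $q^{-\tfrac{19}{24}-\tfrac{23}{24}}=q^{-\tfrac{7}{4}}$, with the second product shifted by $q^{3}$ relative to it:

```python
def conv(a, b, n):
    """Coefficient of q^n in the product of two truncated q-series."""
    return sum(a[k] * b[n - k] for k in range(n + 1))

# q-expansions of the c = 19 and c = 23 characters with the leading powers
# q^{-19/24}, q^{13/24}, q^{-23/24}, q^{17/24} stripped off
chi0_19 = [1, 57, 2147, 31540, 260243]
chi43_19 = [133, 2717, 33839, 246468, 1506510]
chi0_23 = [1, 23, 3335, 67068, 793776]
chi53_23 = [506, 12903, 185725, 1639463, 11650696]

# chi_{4/3} * chi_{5/3} starts at q^{13/24 + 17/24} = q^{5/4} = q^{-7/4 + 3}
bilinear = [conv(chi0_19, chi0_23, n)
            + (2 * conv(chi43_19, chi53_23, n - 3) if n >= 3 else 0)
            for n in range(5)]
print(bilinear)  # [1, 80, 6793, 472680, 18944362]
```

which reproduces the $q$-expansion of $j_{2^{+}}^{\tfrac{3}{4}}(j_{2^{+}}-102)$.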
\begin{align} &j_{2^{+}} - 32 = \chi_0^{{}^{(2)}\mathcal{W}_5}\chi_0^{{}^{(2)}\mathcal{W}_{19}} + 2 \, \chi_{\frac{2}{3}}^{{}^{(2)}\mathcal{W}_5}\chi_{\frac{4}{3}}^{{}^{(2)}\mathcal{W}_{19}}, \label{bilin_non_0} \\ &j_{2^{+}} - 64 = \chi_0^{{}^{(2)}\mathcal{W}_3}\chi_0^{{}^{(2)}\mathcal{W}_{21}} + 256 \, \chi_{\frac{1}{2}}^{{}^{(2)}\mathcal{W}_3}\chi_{\frac{3}{2}}^{{}^{(2)}\mathcal{W}_{21}}, \label{bilin_non_1} \\ &j_{2^{+}} - 80 = \chi_0^{{}^{(2)}\mathcal{W}_1}\chi_0^{{}^{(2)}\mathcal{W}_{23}} + 2 \, \chi_{\frac{1}{3}}^{{}^{(2)}\mathcal{W}_1}\chi_{\frac{5}{3}}^{{}^{(2)}\mathcal{W}_{23}}, \label{bilin_non_2} \end{align} where the above bilinears are for the following relation: $(2,0)\overset{c^\mathcal{H}=24}{\underset{n_1=2}{\longleftrightarrow}}(2,2)$. Furthermore, we also have, \begin{align} j_{2^{+}}^{\tfrac{3}{4}}(j_{2^{+}}-102) =& \chi_0^{{}^{(2)}\mathcal{W}_{19}}\chi_0^{{}^{(2)}\mathcal{W}_{23}} + 2 \, \chi_{\frac{4}{3}}^{{}^{(2)}\mathcal{W}_{19}}\chi_{\frac{5}{3}}^{{}^{(2)}\mathcal{W}_{23}} \nonumber\\ =& q^{-\tfrac{7}{4}}\left(1 + 80 q + 6793 q^2 + 472680 q^3 + 18944362 q^4 + \mathcal{O}(q^{5})\right), \label{bilin_non_3}\\ j_{2^{+}}^{\tfrac{3}{4}}(j_{2^{+}}-112) =& \chi_0^{{}^{(2)}\mathcal{W}_{21}}\chi_0^{{}^{(2)}\mathcal{W}_{21}} + 4096 \, \chi_{\frac{3}{2}}^{{}^{(2)}\mathcal{W}_{21}}\chi_{\frac{3}{2}}^{{}^{(2)}\mathcal{W}_{21}} \nonumber\\ =& q^{-\tfrac{7}{4}}\left(1 +70 q + 6013 q^2 + 450030 q^3 + 18635582 q^4 + \mathcal{O}(q^{5})\right), \label{bilin_non_4} \end{align} where the above bilinears are for the following relation: $(2,2)\overset{c^\mathcal{H}=42}{\underset{n_1=3}{\longleftrightarrow}}(2,2)$, and \ref{bilin_non_4} is a self-dual relation.\footnote{A similar analysis should be performed with the $\frac{k}{12}$ derivative operator (the traditional one used in the case of $\text{SL}(2,\mathbb{Z})$). In this case, we can tune the $\mu_1$ parameter to even make this MLDE consistent with the Riemann-Roch.
This is because in this case there exists a weight $2$ meromorphic form contribution to the MLDE due to the allowance of poles in the coefficient functions present in the MLDE. We reserve this for future work.} \subsubsection{\texorpdfstring{$\ell = 3$}{l = 3}} \noindent The MLDE, in this case, reads\footnote{The generic forms of the $(2,3)$ and $(2,1)$ MLDEs are identical, with only the values of the parameters being different.}, \begin{align}\label{mldeF2_2char_l3} \left[\mathcal{D}^2+\left(\mu_1\frac{\left(E_4^{(2+)}\right)^2}{E_6^{(2+)}}+\mu_2\frac{\Delta_2}{E_6^{(2+)}}\right)\mathcal{D}+\mu_3\frac{E_{10}^{(2+)}}{E_{6}^{(2+)}}\right]\chi = 0. \end{align} From the above we obtain the following recursion relation, \begin{align} &\sum\limits_{k=0}^i\left[E_{6,k}^{(2^{+})}(\alpha^j+i-k)(\alpha^j+i-k-1) + E_{6,k}^{(2^{+})}(\alpha^j+i-k)-\frac{\left(E_{6}^{(2^{+})}E_{2}^{(2^{+})}\right)_{,k}}{4}(\alpha^j+i-k)\right. \nonumber\\ &\left.+\, \mu_1 (E_{4}^{(2+)})^2_{,k} (\alpha^j+i-k) + \mu_2\Delta_{2,k}(\alpha^j+i-k) + \mu_3 E_{10,k}^{(2^{+})}\right]a^j_{i-k} = 0, \end{align} which gives the following indicial equation, \begin{align} \alpha^2 + \left(\mu_1-\frac{1}{4}\right)\alpha + \mu_3 = 0. \label{indicial_l3F2_2char} \end{align} From Riemann-Roch (sum of exponents $= -\frac{1}{2}$), we see that $\mu_1=\frac{3}{4}$. Now, let us determine the value of $\mu_2$ by transforming the $\tau$-MLDE in \ref{mldeF2_2char_l3} into the corresponding $j_{2^{+}}$-MLDE (about $\tau=\rho_2$), \begin{align} \left[(j_{2^{+}})^2\partial_{j_{2^{+}}}^2 + \frac{j_{2^{+}}}{2}\partial_{j_{2^{+}}} - \frac{j_{2^{+}}}{j_{2^{+}}-256}\left(64+\mu_2\right)\partial_{j_{2^{+}}} + \frac{\mu_3 j_{2^{+}}}{j_{2^{+}}-256}\right]\chi = 0, \label{j2+mlde_p2_l3} \end{align} where we have set $\mu_1=\frac{3}{4}$ from the $\ell=3$ indicial equation (see \ref{indicial_l3F2_2char}). Note that about $\tau=\rho_2$ we have $\chi_0\sim (j_{2^{+}})^x$ with $j_{2^{+}}\to 0$, where $x=\frac{3}{4}$ for $\ell=3$ (see single-character solutions).
So, substituting $\chi=(j_{2^{+}})^x$ about $j_{2^{+}}=0$, the $j_{2^{+}}$-indicial equation (obtained by demanding that the coefficient of $(j_{2^{+}})^{x}$ vanishes at $j_{2^{+}}=0$) reads, \begin{align} x^2-\frac{x}{2}+\frac{1}{4}+\frac{\mu_2}{256} = 0, \label{jindip2l3F2} \end{align} which yields $\mu_2 = -112$ for $x=\frac{3}{4}$. A similar analysis gives $(x,\mu_2)=(\frac{1}{4},-48)$ and $(x,\mu_2)=(\frac{1}{2},-64)$ as solutions of \ref{jindip2l3F2}.\\ \noindent For $\mu_2=-48$, we get from the $i=1$ equation: $m_1=\frac{N(N+22)}{18-N}$, where $N=-24\alpha_0$. This gives the allowed range of central charge as $0<c<18$. Scanning this space of $c$ we get the following admissible solutions: \begin{itemize} \item $(c,h)=\left(6,0\right)$: \begin{align}\label{no_luck_1} \begin{split} \chi_0 =& q^{-\tfrac{1}{4}}(1 + 14q + 331q^2 + 4506q^3 + 112795q^4 + 3965662q^5\\ &{}\ \ \ \ \ \ \ \ + 175520124q^6 + 8718679572q^7 + 470448806341q^8 + \mathcal{O}(q^{9})),\\ \chi_{\text{non-id}} =& \text{unstable}, \end{split} \end{align} where the above is a single-character solution (not yet a $(1,\ell)$ solution). \end{itemize} \noindent For $\mu_2=-64$, we get from the $i=1$ equation: $m_1=\frac{N(N+14)}{18-N}$, where $N=-24\alpha_0$. This gives the allowed range of central charge as $0<c<18$. Scanning this space of $c$ we get the following admissible solutions: \begin{itemize} \item $(c,h)=\left(6,0\right)$: \begin{align}\label{no_luck_2} \begin{split} \chi_0 =& q^{-\tfrac{1}{4}}(1 + 10q + 367q^2 + 6422q^3 + 169555q^4 + 6322026q^5\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ + 286760396q^6 + 14591718924q^7 + 802225180229q^8 + \mathcal{O}(q^{9})),\\ \chi_{\text{non-id}} =& \text{unstable}, \end{split} \end{align} where the above is a single-character solution (not yet a $(1,\ell)$ solution). \item $(c,h)=\left(12,\frac{1}{2}\right)$: \begin{align}\label{4B_character_2} \begin{split} \chi_0 =& j_{2^{+}}^{\tfrac{1}{2}},\\ \chi_{\text{non-id}} =& 1.
\end{split} \end{align} \end{itemize} We note that although the identity characters in \ref{no_luck_1} and \ref{no_luck_2} possess an unstable descendant, these do not have a nice McKay-Thompson interpretation. This is similar to what we observed for the case of the identity-only solution $(c,h) = (3,0)$ in \ref{c=3,h=0_no_MT}. The identity character with a trivial descendant in \ref{4B_character_2}, along with \ref{4B_character_1}, forms a set of central charge $c = 12$ admissible solutions that correspond to the $4B$ conjugacy class of $\mathbb{M}$. \subsection{Single-character solutions} \noindent From \cite{Umasankar:2022kzs}, we find the following form for the character in an $n = 1$ theory \begin{align}\label{single-character_normie} \begin{split} &\chi(\tau) = j_{2^{+}}^{w_{\rho}}(\tau),\\ &c = 24w_{\rho} = 6\ell, \end{split} \end{align} where $w_{\rho}\in\left\{0,1,\tfrac{3}{2}\right\}$, corresponding to the three characters at $\ell = 0$, $\ell = 4$, and $\ell = 6$ respectively. Here, we will present the admissible solutions at the other values of $\ell$ and a more detailed analysis of the $\ell = 4$ \textit{bulk-pole} MLDE\footnote{Whenever the RHS of the valence formula is $\geq 1$, we shall refer to the MLDE corresponding to that $\ell$ value as a \textit{bulk-pole} MLDE.}. \subsubsection{\texorpdfstring{$\ell = 1$}{l=1}} \noindent For $\ell=1$, the RHS of the valence formula \ref{valence_Fricke_2} reads $\frac{1}{4}$ (since $k=2\ell$). The MLDE takes the following form, \begin{align}\label{F2_l1_p1_l=1} \left[\mathcal{D} + \frac{1}{4}\left(\mu_1\frac{\left(E_{4}^{(2+)}\right)^2}{E_{6}^{(2+)}}+\mu_2\frac{\Delta_2}{E_{6}^{(2+)}}\right)\right]\chi(\tau) = 0. \end{align} Now, using the indicial equation, we can set $\mu_1=1$. Consider now the ansatz $\chi=j_{2^{+}}^{\tfrac{1}{4}}$. One can readily check that this is a solution with $\mu_2=-256$. We note here that this solution re-appears as a solution to a $(2,0)$ MLDE paired up with an unstable character.
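Using $\Tilde{\partial}j_{2^{+}} = -j_{2^{+}}\,E_6^{(2+)}/E_4^{(2+)}$, the ``readily checked'' statement for $\chi=j_{2^{+}}^{\tfrac{1}{4}}$ with $(\mu_1,\mu_2)=(1,-256)$ is equivalent to the weight-$12$ relation $\left(E_4^{(2+)}\right)^3 - \left(E_6^{(2+)}\right)^2 = 256\,E_4^{(2+)}\Delta_2$, the $\Gamma_0^+(2)$ analogue of $E_4^3-E_6^2=1728\Delta$. A $q$-series sketch of this relation follows; the normalizations $E_k^{(2+)}(\tau) = \frac{E_k(\tau)+2^{k/2}E_k(2\tau)}{1+2^{k/2}}$ and $\Delta_2=\left(\eta(\tau)\eta(2\tau)\right)^8$ are our assumption here, chosen so that $\left(E_4^{(2+)}\right)^2/\Delta_2 = q^{-1}+104+4372q+\ldots$ reproduces $j_{2^{+}}$:

```python
from fractions import Fraction

N = 12  # truncation order: work modulo q^N

def mul(a, b):
    out = [Fraction(0)] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                out[i + j] += ai * bj
    return out

def sigma(n, k):
    return sum(d**k for d in range(1, n + 1) if n % d == 0)

def eisenstein(k, c):  # E_k(tau) = 1 + c * sum_n sigma_{k-1}(n) q^n
    return [Fraction(1)] + [Fraction(c * sigma(n, k - 1)) for n in range(1, N)]

def double(a):  # f(tau) -> f(2*tau)
    out = [Fraction(0)] * N
    for i, ai in enumerate(a):
        if 2 * i < N:
            out[2 * i] = ai
    return out

def fricke(k, c):  # assumed: (E_k(tau) + 2^{k/2} E_k(2 tau)) / (1 + 2^{k/2})
    w = 2 ** (k // 2)
    Ek = eisenstein(k, c)
    return [(x + w * y) / (1 + w) for x, y in zip(Ek, double(Ek))]

E4p, E6p = fricke(4, 240), fricke(6, -504)

# Delta_2 = (eta(tau) * eta(2 tau))^8 = q * prod_n (1 - q^n)^8 (1 - q^{2n})^8
Delta2 = [Fraction(0), Fraction(1)] + [Fraction(0)] * (N - 2)
for n in range(1, N):
    for step in (n, 2 * n):
        for _ in range(8):  # multiply eight times by (1 - q^step)
            Delta2 = [c - (Delta2[i - step] if i >= step else 0)
                      for i, c in enumerate(Delta2)]

lhs = [a - b for a, b in zip(mul(E4p, mul(E4p, E4p)), mul(E6p, E6p))]
rhs = [256 * c for c in mul(E4p, Delta2)]
print(lhs == rhs)  # True
```

These expansions give $E_4^{(2+)} = 1+48q+624q^2+\ldots$ and $\Delta_2 = q-8q^2+12q^3+\ldots$; the relation is precisely what makes the combination $\frac{E_4^2}{E_6}-256\frac{\Delta_2}{E_6}$ cancel $\frac{E_6}{E_4}$ in the $\ell=1$ MLDE.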
\subsubsection{\texorpdfstring{$\ell = 2$}{l=2}} \noindent The MLDE takes the following form, \begin{align} &\left[\mathcal{D}+\mu_1\frac{E_6^{(2+)}}{E_4^{(2+)}}\right]\chi(\tau) = 0, \label{1char_F2_l2}\\ &\left[E_4^{(2+)}\mathcal{D}+\mu_1E_6^{(2+)}\right]\chi(\tau) = 0. \label{1char_F2_l2_aliter} \end{align} The indicial equation ($\alpha_0+\mu_1=0$) dictates $\mu_1=\frac{1}{2}$. Consider the ansatz $\chi=j_{2^{+}}^{\tfrac{1}{2}}$. Then, $\Tilde{\partial}_\tau j_{2^{+}}^{\tfrac{1}{2}} = -\frac{1}{2}j_{2^{+}}^{\tfrac{1}{2}}\frac{E_6^{(2+)}}{E_4^{(2+)}}$. Note that this $\chi$ indeed satisfies \ref{1char_F2_l2} with $\mu_1=\frac{1}{2}$. \subsubsection{\texorpdfstring{$\ell = 3$}{l=3}} \noindent When $\ell=3$, the RHS of the valence formula \ref{valence_Fricke_2} reads $\frac{3}{4}$. The MLDE takes the following form, \begin{align} \left[\mathcal{D} + \frac{1}{4}\left(\mu_1\frac{\left(E_{4}^{(2+)}\right)^2}{E_{6}^{(2+)}}+\mu_2\frac{\Delta_2}{E_{6}^{(2+)}}\right)\right]\chi(\tau) = 0. \label{F2_l1_p1_l=3} \end{align} Now, using the indicial equation, we can set $\mu_1=3$. Consider now the ansatz $\chi=j_{2^{+}}^{\tfrac{3}{4}}$. One can readily check that this is a solution with $\mu_2=-768$. \subsubsection{\texorpdfstring{$\ell = 4$}{l=4}} \noindent From Riemann-Roch, we notice that when $\ell=4$ we get the \textit{bulk-pole} MLDE. The MLDE takes the following form, \begin{align} \left[\theta_{q}+\mu_1\frac{E_{10}^{(2+)}}{(E_{4}^{(2+)})^2+\mu_2\Delta_2}\right]\chi(\tau) &= 0, \label{1char_F2_l4} \\ \left[\left((E_{4}^{(2+)})^2+\mu_2\Delta_2\right)\theta_{q} + \mu_1 E_{10}^{(2+)}\right]\chi(\tau) &= 0. \label{1char_F2_l4_aliter} \end{align} The indicial equation dictates $\mu_1=1$. The recursion relation reads, \begin{align} \sum\limits_{k=0}^i\left[\left(E_{4}^{(2+)}\right)^2_{,k}m_{i-k}(i-k-1) + \mu_2\Delta_{2,k}m_{i-k}(i-k-1)+E^{(2^{+})}_{10,k}m_{i-k}\right] = 0.
\label{recurF2l42char} \end{align} Putting $i=1$ in \ref{recurF2l42char} gives, \begin{align} m_1 = 104 + \mu_2, \label{m1eqn_F2_2char_l4} \end{align} which, by integrality of $m_1$, implies that $\mu_2\in\mathbb{Z}$. Now putting $i=2$ in \ref{recurF2l42char} yields, \begin{align} m_2 = 4372. \label{m2eqn_F2_2char_l4} \end{align} One can continue to higher orders and check that, \begin{align}\label{j_104_c=24} \chi(\tau) &= q^{-1}\left(1 + (104+\mathcal{N})q + 4372q^2 + 96256q^3 + 1240002q^4 + \mathcal{O}(q^5)\right) \nonumber\\ &= j_{2^{+}}(\tau) + \mathcal{N} \, \, \, \, (\text{with}\ \mathcal{N}\geq -104). \end{align} A more direct way to see the above is the following. Consider $j_{2^{+}} + \mathcal{N}$ as an ansatz for \ref{1char_F2_l4_aliter}, substitution of which yields \begin{align} -j_{2^{+}}E_6^{(2+)}E_4^{(2+)} - \mu_2\Delta_2 j_{2^{+}}\frac{E_6^{(2+)}}{E_4^{(2+)}} + j_{2^{+}}E_6^{(2+)}E_4^{(2+)} + \mathcal{N}E_6^{(2+)}E_4^{(2+)} =& 0, \nonumber\\ \Rightarrow\, \, (\mu_2-\mathcal{N})E_6^{(2+)}E_4^{(2+)} =& 0. \label{const} \end{align} This implies that $j_{2^{+}} + \mathcal{N}$ is a solution to the MLDE in \ref{1char_F2_l4} if and only if $\mathcal{N}=\mu_2$; by integrality of $m_1$ in \ref{m1eqn_F2_2char_l4}, with $m_1\geq 0$, we then have $\mathcal{N}\in\mathbb{Z}$ and $\mathcal{N}\geq -104$. \subsubsection{Tensor-Product formula} \noindent Consider two admissible solutions with $p$ and $q$ characters and Wronskian indices $m$ and $n$ respectively. Their tensor product has $pq$ characters, and its Wronskian index $\Tilde{\ell}$ is given by \begin{align} \Tilde{\ell} = \frac{pq}{2}(p-1)(q-1) + pn + qm, \label{hampa_mukhi_tensor} \end{align} which is the exact analogue of the \textit{Hampapura-Mukhi} formula for the $\text{SL}(2,\mathbb{Z})$ case (see \cite{Hampapura:2015cea}). Using \ref{hampa_mukhi_tensor} we immediately note that $\left(j_{2^{+}}^{1/4}\right)^{\otimes n}$ is a $(1,n)$ admissible solution.
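The bookkeeping in \ref{hampa_mukhi_tensor} is easily iterated; a small sketch (the helper name is ours):

```python
def wronskian_index(p, m, q, n):
    # \tilde{ell} for the tensor product of a (p, m)- and a (q, n)-solution
    return (p * q * (p - 1) * (q - 1)) // 2 + p * n + q * m

# tensoring k copies of the (1, 1) solution j_{2+}^{1/4} yields a (1, k) solution
ell = 1
for _ in range(3):          # three more copies: four in total
    ell = wronskian_index(1, ell, 1, 1)
print(ell)  # 4

# a (2, 0) two-character solution tensored with j_{2+}^{1/4} is a (2, 2) solution
print(wronskian_index(2, 0, 1, 1))  # 2
```

The second example is the pattern behind the $(2,2)$ solutions of the form ${}^{(2)}\mathcal{W}\otimes j_{2^{+}}^{1/4}$ encountered earlier.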
Similarly, one can readily see that ${}^{(2)}\mathcal{W}\otimes\left(j_{2^{+}}^{1/4}\right)^{\otimes n}$ is a $(2,2n)$ admissible solution (where ${}^{(2)}\mathcal{W}$ is a $2$-character admissible solution with vanishing Wronskian index). Also, ${}^{(3)}\mathcal{W}\otimes\left(j_{2^{+}}^{1/4}\right)^{\otimes n}$ is a $(3,3n)$ admissible solution (where ${}^{(3)}\mathcal{W}$ is a $3$-character admissible solution with vanishing Wronskian index). Motivated by the $\text{SL}(2,\mathbb{Z})$ case, we predict that the most general single-character solution, with $c=6N$ and $N\in\mathbb{Z}$, can be written as (see \cite{Das:2022slz}) \begin{align} \begin{split} \chi^\mathcal{H}(\tau) =& j_{2^{+}}^{\tfrac{N}{4}-s}\left[j_{2^{+}}^s + a_1\,j_{2^{+}}^{s-1}+ a_2 \, j_{2^{+}}^{s-2} + \ldots + a_s\right],\\ =&j_{2^{+}}^{w_{\rho}}j_{2^{+}}^{-s} \, \mathfrak{P}_{s}(j_{2^{+}}), \end{split} \label{gen_j} \end{align} where we have defined $s \equiv \left \lfloor \frac{N}{4} \right \rfloor$, $c = 24w_{\rho} = 6N$, and $\mathfrak{P}_{s}(j_{2^{+}})$ is a monic polynomial of degree $s$ in $j_{2^{+}}$. \subsection{Three-character solutions} \noindent We restrict our analysis to the $(n,\ell) = (3,0)$ MLDEs and reserve the other cases for future work. The modular-invariant ODE in this case reads \begin{align} \left[\mathcal{D}^{3} + \omega_{2}(\tau)\mathcal{D}^{2} + \omega_{4}(\tau)\mathcal{D} +\omega_{6}(\tau)\right]\chi(\tau) = 0. \end{align} With the choices $\omega_{2}(\tau) = 0$, $\omega_{4}(\tau) =\mu_{1} E_{4}^{(2^{+})}(\tau)$, and $\omega_{6}(\tau) = \mu_{2} E_{6}^{(2^{+})}(\tau)$, and in terms of the derivatives $\Tilde{\partial}$, the MLDE reads \begin{align} \left[\Tilde{\partial}^{3} - \frac{3}{4}E_{2}^{(2^{+})}(\tau)\Tilde{\partial}^{2} + \left(\frac{1}{8}\left(E_{2}^{(2^{+})}\right)^{2} - \frac{1}{4}\left(\Tilde{\partial}E_{2}^{(2^{+})}\right)\right)\Tilde{\partial} + \mu_{1}E_{4}^{(2^{+})}(\tau)\Tilde{\partial} + \mu_{2}E_{6}^{(2^{+})}(\tau)\right]\chi(\tau) = 0.
\end{align} We now make use of the Ramanujan identity in \ref{Ramanujan_p=2} for $E_{2}^{(2^{+})}(\tau)$ to rewrite the MLDE as follows \begin{align} \left[\Tilde{\partial}^{3} - \frac{3}{4}E_{2}^{(2^{+})}(\tau)\Tilde{\partial}^{2} + \left(\frac{3}{4}\left(\Tilde{\partial}E_{2}^{(2^{+})}\right) + \frac{1}{8}E_{4}^{(2^{+})} + \mu_{1}E_{4}^{(2^{+})}(\tau)\right)\Tilde{\partial} + \mu_{2}E_{6}^{(2^{+})}(\tau)\right]\chi(\tau) = 0. \end{align} The recursion relation reads, \begin{align} &\left[(\alpha^j+i)(\alpha^j+i-1)(\alpha^j+i-2) + 3(\alpha^j+i)(\alpha^j+i-1) + (\alpha^j+i)\right]a^j_i \nonumber\\ &+ \sum\limits_{k=0}^i\left[-\frac{3}{4}E_{2,k}^{(2+)}a^j_{i-k}(\alpha^j+i-k)(\alpha^j+i-k-1) - \frac{3}{4}E_{2,k}^{(2+)}a^j_{i-k}(\alpha^j+i-k)\right. \nonumber\\ &\left.+ \frac{1}{32}E_{4,k}^{(2^{+})}a^j_{i-k}(\alpha^j+i-k) + \frac{3}{32}\left(E_{2}^{(2+)}\right)^2_{,k}a^j_{i-k}(\alpha^j+i-k) + \mu_1E_{4,k}^{(2+)}a^j_{i-k}(\alpha^j+i-k)\right. \nonumber\\ &\left.+ \mu_2E_{6,k}^{(2+)}a^j_{i-k}\right] = 0, \label{recursion3_2} \end{align} from which the indicial equation reads, \begin{align} \alpha^3 - \frac{3}{4}\alpha^{2} + \left(\mu_1+\frac{1}{8}\right)\alpha + \mu_2 = 0, \label{indicial3_2} \end{align} where we have defined $\Tilde{\mu}_{1}\equiv \mu_{1} + \tfrac{1}{8}$. From the indicial equation (Vieta's formulas), we find the following relations among the roots $\alpha_{i}$, $i = 0,1,2$, and the free parameters \begin{align} \begin{split} \Tilde{\mu}_{1} =& \alpha_{0}\alpha_{1} + \alpha_{1}\alpha_{2} + \alpha_{0}\alpha_{2},\\ \mu_{2} =& -\alpha_{0}\alpha_{1}\alpha_{2}. \end{split} \end{align} We can now fix $\mu_{2}$ from the indicial equation and express $\mu_{1}$ in terms of the roots $\alpha_{i}$ and the ratio $m_{1}$ obtained via the equation at $k = 1$.
At $k = 2$, we obtain the following equation \begin{align} &-24 m_1^2 + 9 m_1 m_2 - 1004 m_1 \alpha_0 - 184 m_1^2 \alpha_0 + 1064 m_2 \alpha_0 + 12 m_1 m_2 \alpha_0 + 22560 \alpha_0^2 - 6192 m_1 \alpha_0^2 \nonumber\\ &- 672 m_1^2 \alpha_0^2 + 2016 m_2 \alpha_0^2 - 141696 \alpha_0^3 - 18304 m_1 \alpha_0^3 - 512 m_1^2 \alpha_0^3 + 1024 m_2 \alpha_0^3 + 150528 \alpha_0^4 = 0. \label{al3} \end{align} We observe that if we define the variable \begin{align} N = -1176\alpha_{0}, \end{align} then the polynomial equation simplifies to (after multiplying Eq.(\ref{al3}) by $7^6\times 3^3\times 2^2$ and using the above identification) \begin{align} &-304946208 m_1^2 + 114354828 m_1 m_2 + 10847718 m_1 N + 1988028 m_1^2 N - 11495988 m_2 N \nonumber\\ &- 129654 m_1 m_2 N + 207270 N^2 - 56889 m_1 N^2 - 6174 m_1^2 N^2 + 18522 m_2 N^2 + 1107 N^3 \nonumber\\ &+ 143 m_1 N^3 + 4 m_1^2 N^3 - 8 m_2 N^3 + N^4 = 0. \label{Diop1_3_2} \end{align} By the integer root theorem, $N=-1176\alpha_0$ above has to be an integer and hence $N=49c\in\mathbb{Z}$; this tells us that the central charge has to be a rational number whose denominator divides $49$. Since $\alpha_{1}$ and $\alpha_{2}$ are to be rational, the discriminant has to be a perfect square, say $k^2$ with $k\in\mathbb{Z}$, in order to result in rational roots. Thus, we have, \begin{align} &46694888100 m_1^2 - 5006200248 m_1 N - 190591380 m_1^2 N + 77533092 N^2 \nonumber\\ &+ 17114328 m_1 N^2 + 194481 m_1^2 N^2 - 22932 N^3 - 15582 m_1 N^3 - 143 N^4 = k^2. \label{Diop2_3_2} \end{align} From this, we find the three roots of the indicial equation to read \begin{align} &\alpha_0 = -\frac{N}{1176}, \label{root0_30_F2} \\ &\alpha_1 = \frac{129654 m_1 - 11466 N + 147 m_1 N - 13 N^2 - k}{2352(147 m_1 - 13 N)}, \label{roo1_30_F2} \\ &\alpha_2 = \frac{129654 m_1 - 11466 N + 147 m_1 N - 13 N^2 + k}{2352(147 m_1 - 13 N)}.
\label{roo2_30_F2} \end{align} This equation, in which all the unknown variables $N$, $m_{1}$, and $k$ are positive integers, is a Diophantine equation.\\ \\ Now, solving \ref{Diop1_3_2} and \ref{Diop2_3_2} for the range $1\leq N\leq 2058$ yields: \begin{itemize} \item $(c,h_1,h_2)=\left(\frac{6}{7},\frac{5}{7},\frac{1}{7}\right)$: \begin{align} \chi_0 &= q^{-\tfrac{1}{28}}(1 + q^2 + q^3 + 2q^4 + q^5 + 3q^6 + 2q^7 + 4q^8 + 3q^9 + 6q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{\frac{5}{7}} &= q^{\tfrac{19}{28}}(1 + q^2 + q^4 + q^5 + 2q^6 + q^7 + 3q^8 + 2q^9 + 4q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{\frac{1}{7}} &= q^{\tfrac{3}{28}}(1 + q + q^2 + q^3 + 2q^4 + 2q^5 + 3q^6 + 3q^7 + 5q^8 + 5q^9 + 7q^{10} + \mathcal{O}(q^{11})), \end{align} where the identity character above is similar to that of a Virasoro minimal model CFT, by which we mean that $m_1=0$, since such models do not possess any Kac-Moody currents. From here on we shall use the term \textit{minimal model} to denote any admissible solution whose identity character has $m_1=0$. \item $(c,h_1,h_2)=\left(1,\frac{13}{24},\frac{1}{3}\right)$: \begin{align} \chi_0 &= q^{-\tfrac{1}{24}}(1 + q + 2q^2 + q^3 + 3q^4 + 3q^5 + 5q^6 + 5q^7 + 8q^8 + 8q^9 + 12q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{\frac{13}{24}} &= \text{unstable}, \nonumber\\ \chi_{\frac{1}{3}} &= q^{\tfrac{7}{24}}(1 + q^2 + q^3 + 2q^4 + q^5 + 3q^6 + 2q^7 + 5q^8 + 4q^9 + 7q^{10} + \mathcal{O}(q^{11})), \end{align} where the above is a two-character solution paired with an unstable character. This is the $(2,0)$ two-character solution found in \ref{c=1_Fricke_2_2,0} that makes a reappearance here.
\item $(c,h_1,h_2)=\left(2,\frac{2}{3},\frac{1}{3}\right)$: \begin{align} \chi_0 &= q^{-\tfrac{1}{12}}(1 + 2q + 5q^2 + 6q^3 + 12q^4 + 16q^5 + 29q^6 + 38q^7 + 61q^8 + 80q^9\\ \nonumber &{}\ \ \ \ \ \ \ \ \ \ \ \ \ + 121q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{\frac{2}{3}} &= q^{\tfrac{7}{12}}(1 + 2q^2 + 2q^3 + 5q^4 + 4q^5 + 11q^6 + 10q^7 + 22q^8 + 22q^9 + 41q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{\frac{1}{3}} &= q^{\tfrac{1}{4}}(1 + q + 3q^2 + 3q^3 + 8q^4 + 9q^5 + 17q^6 + 20q^7 + 36q^8 + 43q^9 + 70q^{10} + \mathcal{O}(q^{11})). \end{align} We note here that the identity character can be expressed in terms of modular forms of $\Gamma_{0}(7)$, akin to what we saw in the two-character case at low central charge $c = 1$ in \ref{Hecke_1_c=1}, as follows \begin{align}\label{Hecke_1_c=2} \chi_{0}^{(c = 2)} = \left(\frac{\left(j_{2}\Delta_{2}^{\infty}\right)(3\tau)}{\left(j_{2}\Delta_{2}^{\infty}\right)^{\tfrac{1}{3}}(\tau)}\right)^{\tfrac{1}{4}}\left(\frac{1}{\Delta_{2}(\tau)}\right)^{\tfrac{1}{12}}. \end{align} \item $(c,h_1,h_2)=\left(3,\frac{5}{8},\frac{1}{2}\right)$: \begin{align} \chi_0 &= q^{-\tfrac{1}{8}}(1 + 5q + 11q^2 + 20q^3 + 41q^4 + 76q^5 + 127q^6 + 211q^7 + 342q^8 + 541q^9 \nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ + 838q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{\frac{5}{8}} &= \text{unstable}, \nonumber\\ \chi_{\frac{1}{2}} &= q^{\tfrac{3}{8}}(1 + q + 5q^2 + 6q^3 + 16q^4 + 21q^5 + 46q^6 + 61q^7 + 117q^8 + 157q^9\\ \nonumber {}&\ \ \ \ \ \ \ \ \ \ + 273q^{10} + \mathcal{O}(q^{11})). \end{align} This is the $(2,0)$ two-character solution we found in \ref{c=3_Fricke_2_2,0} that makes a reappearance here. 
\item $(c,h_1,h_2)=\left(3,0,\frac{9}{8}\right)$: \begin{align} \chi_0 &= q^{-\tfrac{1}{8}}(1 + 25q + 51q^2 + 196q^3 + 297q^4 + 780q^5 \nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ + 1223q^6 + 2551q^7 + 3798q^8 + 7201q^9 + 10582q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{0} &= q^{-\tfrac{1}{8}}(1 + 25q + 51q^2 + 196q^3 + 297q^4 + 780q^5 \nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ + 1223q^6 + 2551q^7 + 3798q^8 + 7201q^9 + 10582q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{\frac{9}{8}} &= \text{unstable}. \end{align} This is the single character identified in \ref{c=3,single_char_n=2} that makes a reappearance here. \item $(c,h_1,h_2)=\left(\frac{30}{7},\frac{5}{7},\frac{4}{7}\right)$: \begin{align} \chi_0 &= q^{-\tfrac{5}{28}}(1 + 10q + 25q^2 + 70q^3 + 145q^4 + 330q^5 + 610q^6 + 1200q^7\nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ \ + 2095q^8 + 3800q^9 + 6336q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{\frac{5}{7}} &= q^{15/28}(4 + 5q + 30q^2 + 45q^3 + 130q^4 + 204q^5 + 480q^6 + 745q^7\nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ \ \ + 1510q^8 + 2330q^9 + 4294q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{\frac{4}{7}} &= q^{11/28}(5 + 10q + 45q^2 + 74q^3 + 205q^4 + 350q^5 + 770q^6 + 1260q^7\nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ \ \ + 2470q^8 + 3950q^9 + 7115q^{10} + \mathcal{O}(q^{11})). \end{align} \item $(c,h_1,h_2)=\left(5,\frac{17}{24},\frac{2}{3}\right)$: \begin{align} \chi_0 &= q^{-\tfrac{5}{24}}(1 + 15q + 40q^2 + 135q^3 + 290q^4 + 726q^5 \nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ \ + 1415q^6 + 3000q^7 + 5485q^8 + 10565q^9 + 18407q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{\frac{17}{24}} &= \text{unstable}, \nonumber\\ \chi_{\frac{2}{3}} &= q^{\tfrac{11}{24}}(5 + 11q + 55q^2 + 100q^3 + 290q^4 + 535q^5 \nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ + 1235q^6 + 2160q^7 + 4400q^8 + 7465q^9 + 13945q^{10} + \mathcal{O}(q^{11})). \end{align} This is the $(2,0)$ two-character solution we found in \ref{c=5_Fricke_2_2,0} that makes a reappearance here. 
\item $(c,h_1,h_2)=\left(6,1,\frac{1}{2}\right)$: \begin{align} \chi_0 &= q^{-\tfrac{1}{4}}(1 + m_1q + m_2q^2 + \mathcal{O}(q^{3})), \nonumber\\ \chi_{1} &= q^{\tfrac{3}{4}}(1 + 2q + 11q^2 + 22q^3 + 69q^4 + 134q^5 \nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ + 330q^6 + 616q^7 + 1324q^8 + 2382q^9 + 4675q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{\frac{1}{2}} &= q^{\tfrac{1}{4}}(1 + 6q + 21q^2 + 62q^3 + 162q^4 + 384q^5 \nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ + 855q^6 + 1806q^7 + 3648q^8 + 7110q^9 + 13434q^{10} + \mathcal{O}(q^{11})) \end{align} where for the above identity character we have $m_1\in\mathbb{N}\cup \{0\}$, $m_2=27+2m_1$, and $k=2247336-86436 m_1$ for $m_1< 26$, while $k=-2247336+86436 m_1$ for $m_1\geq 27$.\footnote{Such infinite one-parameter families of solutions (here the parameter being $m_1$) were first observed in \cite{Das:2021uvd}, where they were explained using an argument based on the inhomogeneous MLDE. We have checked that the same argument applies here too, with the replacement $\eta(\tau)^{2k}\leftrightarrow(\eta(\tau)\eta(2\tau))^k$.} For $m_1=26$, the identity character is just $j_{2^{+}}^{\frac{1}{4}}$. \item $(c,h_1,h_2)=\left(7,\frac{31}{24},\frac{1}{3}\right)$: \begin{align} \chi_0 &= q^{-\tfrac{7}{24}}(1 + 27q + 107q^2 + 458q^3 + 1268q^4 + 3673q^5 \nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ \ + 8722q^6 + 21061q^7 + 45251q^8 + 97657q^9 + 195561q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{\frac{31}{24}} &= \text{unstable}, \nonumber\\ \chi_{\frac{1}{3}} &= q^{\tfrac{1}{24}}(1 + 26q + 80q^2 + 353q^3 + 862q^4 + 2564q^5 \nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ + 5728q^6 + 13956q^7 + 28861q^8 + 62621q^9 + 122250q^{10} + \mathcal{O}(q^{11})). \end{align} This is the $(2,2)$ two-character solution we found in \ref{c=7_Fricke_2_2,2} that makes a reappearance here.
\item $(c,h_1,h_2)=\left(\frac{54}{7},\frac{9}{7},\frac{3}{7}\right)$: \begin{align} \chi_0 &= q^{-\tfrac{9}{28}}(1 + 27q + 135q^2 + 591q^3 + 1836q^4 + 5481q^5 \nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ \ + 14037q^6 + 35046q^7 + 79866q^8 + 178083q^9 + 374328q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{\frac{9}{7}} &= q^{\tfrac{27}{28}}(26 + 81q + 405q^2 + 1047q^3 + 3375q^4 + 7884q^5 \nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ \ + 20352q^6 + 43983q^7 + 99927q^8 + 202994q^9 + 422901q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{\frac{3}{7}} &= q^{\tfrac{3}{28}}(3 + 54q + 189q^2 + 840q^3 + 2268q^4 + 6858q^5 \nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ + 16397q^6 + 40986q^7 + 89505q^8 + 199410q^9 + 407457q^{10} + \mathcal{O}(q^{11})). \end{align} \item $(c,h_1,h_2)=\left(\frac{54}{7},\frac{10}{7},\frac{2}{7}\right)$: \begin{align} \chi_0 &= q^{-\tfrac{9}{28}}(1 + 30q + 135q^2 + 594q^3 + 1836q^4 + 5484q^5 \nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ \ + 14040q^6 + 35052q^7 + 79869q^8 + 178092q^9 + 374334q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{\frac{10}{7}} &= q^{\tfrac{31}{28}}(51 + 186q + 837q^2 + 2262q^3 + 6852q^4 + 16388q^5 \nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ \ + 40977q^6 + 89490q^7 + 199395q^8 + 407436q^9 + 838032q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{\frac{2}{7}} &= q^{-\tfrac{1}{28}}(3 + 26q + 84q^2 + 408q^3 + 1053q^4 + 3378q^5 \nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ \ + 7893q^6 + 20358q^7 + 43995q^8 + 99936q^9 + 203012q^{10} + \mathcal{O}(q^{11})). 
\end{align} \item $(c,h_1,h_2)=\left(9,\frac{11}{8},\frac{1}{2}\right)$: \begin{align} \chi_0 &= q^{-\tfrac{3}{8}}(1 + 31q + 220q^2 + 1027q^3 + 3815q^4 + 12189q^5 \nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ + 35157q^6 + 93733q^7 + 234357q^8 + 556019q^9 + 1262556q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{\frac{11}{8}} &= \text{unstable}, \nonumber\\ \chi_{\frac{1}{2}} &= q^{\tfrac{1}{8}}(1 + 27q + 110q^2 + 541q^3 + 1648q^4 + 5402q^5 \nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ + 14153q^6 + 37936q^7 + 89648q^8 + 212550q^9 + 465321q^{10} + \mathcal{O}(q^{11})). \end{align} \item $(c,h_1,h_2)=\left(10,\frac{4}{3},\frac{2}{3}\right)$: \begin{align} \chi_0 &= q^{-\tfrac{5}{12}}(1 + 30q + 305q^2 + 1470q^3 + 6230q^4 + 20952q^5 \nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ \ + 66035q^6 + 184830q^7 + 494290q^8 + 1228810q^9 + 2950340q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{\frac{4}{3}} &= q^{\tfrac{11}{12}}(25 + 110q + 671q^2 + 2210q^3 + 8125q^4 + 22730q^5 \nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ \ + 66020q^6 + 165620q^7 + 418470q^8 + 966350q^9 + 2222205q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{\frac{2}{3}} &= q^{\tfrac{1}{4}}(5 + 86q + 420q^2 + 2040q^3 + 6925q^4 + 23130q^5 \nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ + 65371q^6 + 180730q^7 + 453375q^8 + 1111940q^9 + 2560355q^{10} + \mathcal{O}(q^{11})). 
\end{align} \item $(c,h_1,h_2)=\left(10,\frac{5}{3},\frac{1}{3}\right)$: \begin{align} \chi_0 &= q^{-\tfrac{5}{12}}(1 + 40q + 305q^2 + 1490q^3 + 6250q^4 + 21002q^5 + 66075q^6 + 184940q^7\nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ \ + 494390q^8 + 1229030q^9 + 2950560q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{\frac{5}{3}} &= q^{\tfrac{5}{4}}(1 + 5q + 25q^2 + 85q^3 + 285q^4 + 806q^5 + 2230q^6 + 5595q^7\nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ + 13725q^8 + 31605q^9 + 71258q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{\frac{1}{3}} &= q^{-\tfrac{1}{12}}(5 + 60q + 245q^2 + 1372q^3 + 4480q^4 + 16330q^5 + 45605q^6 + 132230q^7 \nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ \ + 331545q^8 + 837340q^9 + 1933305q^{10} + \mathcal{O}(q^{11})). \end{align} \item $(c,h_1,h_2)=\left(11,\frac{35}{24},\frac{2}{3}\right)$: \begin{align} \chi_0 &= q^{-\tfrac{11}{24}}(1 + 41q + 509q^2 + 2686q^3 + 12605q^4 + 45402q^5 + 153461q^6 + 455033q^7\nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ \ + 1288031q^8 + 3367910q^9 + 8491985q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{\frac{35}{24}} &= \text{unstable}, \nonumber\\ \chi_{\frac{2}{3}} &= q^{\tfrac{5}{24}}(5 + 141q + 736q^2 + 4029q^3 + 14596q^4 + 52740q^5 + 157646q^6 + 462885q^7\nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ + 1221334q^8 + 3151415q^9 + 7594605q^{10} + \mathcal{O}(q^{11})) \end{align} \item $(c,h_1,h_2)=\left(\frac{78}{7},\frac{9}{7},\frac{6}{7}\right)$: \begin{align} \chi_0 &= q^{-\tfrac{39}{84}}(1 + 26q + 403q^2 + 2002q^3 + 9906q^4 + 35204q^5 + 122161q^6 + 361062q^7\nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ \ + 1038219q^8 + 2718820q^9 + 6930937q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{\frac{9}{7}} &= q^{\tfrac{23}{28}}(52 + 299q + 2002q^2 + 7501q^3 + 29406q^4 + 90506q^5 + 278408q^6 + 751907q^7\nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ \ + 2002066q^8 + 4919317q^9 + 11872484q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{\frac{6}{7}} &= q^{\tfrac{11}{28}}(26 + 352q + 2054q^2 + 9880q^3 + 37180q^4 + 127764q^5 + 388765q^6 + 1115322q^7\nonumber\\ &{}\ \ \ 
\ \ \ \ \ \ \ \ \ \ + 2969551q^8 + 7572838q^9 + 18351476q^{10} + \mathcal{O}(q^{11})). \end{align} \item $(c,h_1,h_2)=\left(12,\frac{5}{4},1\right)$: \begin{align} \chi_0 &= q^{-\tfrac{1}{2}}(1 + m_1q + m_2q^2 + \mathcal{O}(q^{3})), \nonumber\\ \chi_{\frac{5}{4}} &= \text{unstable}, \nonumber\\ \chi_{1} &= q^{\tfrac{1}{4}}(1 + 12q + 78q^2 + 376q^3 + 1509q^4 + 5316q^5 \nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ + 16966q^6 + 50088q^7 + 138738q^8 + 364284q^9 + 913824q^{10} + \mathcal{O}(q^{11})) \end{align} where for the above identity character we have $m_1\in\mathbb{N}\cup \{0\}$, $m_2=210+12m_1$, and $k=2247336-43218 m_1$ for $m_1< 52$, while $k=-2247336+43218 m_1$ for $m_1\geq 53$. For $m_1=52$, the identity character is just $j_{2^{+}}^{\frac{1}{2}}$. \item $(c,h_1,h_2)=\left(14,\frac{1}{3},\frac{13}{6}\right)$: \begin{align}\label{3_char_unstable} \chi_0 &= \text{unstable},\nonumber\\ \chi_{\frac{1}{3}} &= q^{-\tfrac{1}{4}}(1 + 26q + 79q^2 + 326q^3 + 755q^4 + 2106q^5 + 4460q^6 + 10284q^7\nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ + 20165q^8 + 41640q^9 + 77352q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{\frac{13}{6}} &= \text{unstable}. \end{align} This is the $c = 6$ single-character theory we found in \ref{8C_character_1}, which makes a reappearance as a descendant of a higher central charge three-character theory with unstable identity and second descendant. We can re-interpret the central charge (using \textit{unitary presentation} arguments in the spirit of \cite{Harvey:2018rdc}) as $c_{\text{eff}}=c-24h_1 = 14-24\times\frac{1}{3} = 6$. Hence, when the $\frac{1}{3}$ descendant is re-interpreted as a theory with central charge $c_{\text{eff}}=6$, it is indeed $j_{2^{+}}^{\frac{1}{4}}$.
\item $(c,h_1,h_2)=\left(18,\frac{1}{2},\frac{5}{2}\right)$: \begin{align} \chi_0 &= \text{Log unstable}, \nonumber\\ \chi_{\frac{1}{2}} &= \text{Log unstable}, \nonumber\\ \chi_{\frac{5}{2}} &= q^{\tfrac{7}{4}}(1 + 10q + 71q^2 + 390q^3 + 1831q^4 + 7602q^5 + 28712q^6 + 100292q^7 \nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ + 328247q^8 + 1016012q^9 + 2996354q^{10} + \mathcal{O}(q^{11})). \end{align} Above, the term \textit{Log unstable} refers to pathologies in the $q$-series coefficients of these characters when one tries to obtain them from the recursion relation. At times these arise because the corresponding hypergeometric expressions for the characters, of ${}_3F_2$ type, develop a $\log q$ behaviour when expanded about $q=0$. However, in certain cases there is no such instability, and the pathology is merely a shortcoming of the recursion-relation approach to obtaining the $q$-series coefficients. This happens, for instance, when one tries to obtain the $q$-series coefficients of the $D_{4,1}$ WZW model in the $\text{SL}(2,\mathbb{Z})$ case. To remedy this, one uses the theta-series representations of the characters of $D_{4,1}$ to find their $q$-series. However, for the congruence subgroups we are not aware whether such theta-series representations exist for the characters, and hence we have not been able to conclude whether these are genuine $\log$ singularities in the $q$-series or just artefacts of the recursion relation. In any case, we call them Log unstable here and shall return in the future to understand this feature better in the realm of congruence subgroups. \footnote{Note that we have only reported the Log unstable solutions for the three-character case; we postpone the analysis of such instabilities in the two-character case to future work. Indeed, these log instabilities do appear even in the two-character case whenever one of the conformal dimensions is a non-zero positive integer.
In the $\text{SL}(2,\mathbb{Z})$ case, whenever a descendant has a conformal dimension $\in\mathbb{N}$, the theory is called a \textit{logarithmic} CFT. Here, we do not yet know how to interpret such solutions, and we reserve this for future work as well.} \item $(c,h_1,h_2)=\left(22,\frac{2}{3},\frac{17}{6}\right)$: \begin{align} \chi_0 &= \text{unstable}, \nonumber\\ \chi_{\frac{2}{3}} &= q^{-\tfrac{1}{4}}(1 + 26q + 79q^2 + 326q^3 + 755q^4 + 2106q^5 + 4460q^6 + 10284q^7 \nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ + 20165q^8 + 41640q^9 + 77352q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{\frac{17}{6}} &= \text{unstable}. \end{align} The first descendant is precisely equal to the first descendant in \ref{3_char_unstable}, which is a reappearing single-character theory. \item $(c,h_1,h_2)=\left(24,\frac{291}{292},\frac{201}{73}\right)$: \begin{align}\label{the_one_with_N_as15987} \chi_0 &= q^{-1}(1 + 16091q + 4372q^2 + 96256q^3 + 1240002q^4 + 10698752q^5 + 74428120q^6 \nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ + 431529984q^7 + 2206741887q^8 + 10117578752q^9 + 42616961892q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{\frac{291}{292}} &= \text{unstable}, \nonumber\\ \chi_{\frac{201}{73}} &= \text{unstable}. \end{align} The identity is an $\ell = 4$, $c = 24$ single-character theory, $j_{2^{+}} + 15987$.
\item $(c,h_1,h_2)=\left(26,\frac{2}{3},\frac{10}{3}\right)$: \begin{align} \chi_0 &= q^{-\tfrac{13}{12}}(1 + 286q + 6799q^2 + 102466q^3 + 1374100q^4 + 12916124q^5 + 98031453q^6\nonumber\\ &{}\ \ \ \ \ \ \ \ \ + 619853026q^7 + 3426275008q^8 + 16919200870q^9 + 76289142098q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{\frac{2}{3}} &= q^{-\tfrac{5}{12}}(65 + 2782q + 26387q^2 + 366938q^3 + 3482752q^4 + 26387140q^5 + 174154539q^6\nonumber\\ &{}\ \ \ \ \ \ \ \ \ + 989454102q^7 + 5051982756q^8 + 23348937742q^9 + 99790466750q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{\frac{10}{3}} &= q^{\tfrac{9}{4}}(13 + 198q + 1872q^2 + 13728q^3 + 83655q^4 + 447174q^5 + 2143011q^6 + 9418266q^7\nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ + 38395656q^8 + 146938090q^9 + 531665433q^{10} + \mathcal{O}(q^{11})). \end{align} \item $(c,h_1,h_2)=\left(27,\frac{1}{8},4\right)$: \begin{align} \chi_0 &= \text{Log unstable}, \nonumber\\ \chi_{\frac{1}{8}} &= q^{-1}(69 + 9224q + 301668q^2 + 6641664q^3 + 85560138q^4 + 738213888q^5 \nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ \ + 5135540280q^6 + 29775568896q^7 + 152265190203q^8\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ \ + 698112933888q^9 + 2940570370548q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{4} &= \text{unstable}. \end{align} \item $(c,h_1,h_2)=\left(30,\frac{1}{2},4\right)$: \begin{align} \chi_0 &= \text{Log unstable}, \nonumber\\ \chi_{\frac{1}{2}} &= q^{-\frac{3}{4}}(25 + 2110q + 51953q^2 + 1015950q^3 + 15480440q^4 + 163643540q^5 \nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ \ + 1372184025q^6 + 9575867298q^7 + 58118855135q^8\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ \ + 314673021160q^9 + 1550715669375q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{4} &= q^{\frac{11}{4}}(15 + 286q + 3085q^2 + 25370q^3 + 170775q^4 + 1000690q^5 \nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ + 5228922q^6 + 24959680q^7 + 110204890q^8\nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ + 455662830q^9 + 1777830395q^{10} + \mathcal{O}(q^{11})). 
\end{align} \item $(c,h_1,h_2)=\left(30,\frac{5}{6},\frac{11}{3}\right)$: \begin{align} \chi_0 &= q^{-\tfrac{5}{4}}(1 + 670q + 21195q^2 + 261130q^3 + 4298745q^4 + 52456626q^5 \nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ +486596870q^6 + 3700866400q^7 + 23959194930q^8\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ + 137023596950q^9 + 706045556129q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{\frac{5}{6}} &= \text{unstable}, \nonumber\\ \chi_{\frac{11}{3}} &= \text{unstable}. \end{align} The above is a single-character solution (not yet a $(1,\ell)$ solution). \item $(c,h_1,h_2)=\left(30,\frac{37}{38},\frac{67}{19}\right)$: \begin{align} \chi_0 &= q^{-\tfrac{5}{4}}(1 + 3740q + 101015q^2 + 503660q^3 + 5299565q^4 + 54774476q^5 \nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ + 493062290q^6 + 3714558600q^7 + 23990766810q^8\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ + 137085503500q^9 + 706173390929q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{\frac{37}{38}} &= \text{unstable}, \nonumber\\ \chi_{\frac{67}{19}} &= \text{unstable}. \end{align} The above is a single-character solution (not yet a $(1,\ell)$ solution). \item $(c,h_1,h_2)=\left(33,\frac{7}{8},4\right)$: \begin{align} \chi_0 &= \text{unstable}, \\ \chi_{\frac{7}{8}} &= q^{-\tfrac{1}{2}}(1 + 52q + 834q^2 + 4760q^3 + 24703q^4 + 94980q^5 + 343998q^6 + 1077496q^7\nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ + 3222915q^8 + 8844712q^9 + 23381058q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{4} &= \text{unstable}. \end{align} The single-character theory found in \ref{1char_F2_l2_aliter} makes a reappearance as a descendant of a higher central charge three-character theory with unstable identity and second descendant. 
\item $(c,h_1,h_2)=\left(36,\frac{1}{4},5\right)$: \begin{align} \chi_0 &= \text{Log unstable}, \nonumber\\ \chi_{\frac{1}{4}} &= q^{-\tfrac{5}{4}}(5 + 906q + 42431q^2 + 1112574q^3 + 20696981q^4 + 260437910q^5 \nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ + 2427837286q^6 + 18493431760q^7 + 119770840554q^8\nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ + 685068701490q^9 + 3530126012485q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{5} &= \text{unstable}. \end{align} \item $(c,h_1,h_2)=\left(36,\frac{1}{2},\frac{19}{4}\right)$: \begin{align} \chi_0 &= q^{-\tfrac{3}{2}}(1 + 412q + 23926q^2 + 628600q^3 + 11629865q^4 + 185255140q^5 \nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ + 2265641766q^6 + 22044942216q^7 + 179314657672q^8\nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ + 1260100453460q^9 + 7865573552644q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{\frac{1}{2}} &= q^{-1}(13 + 1608q + 56836q^2 + 1251328q^3 + 16120026q^4 + 139083776q^5 \nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ \ + 967565560q^6 + 5609889792q^7 + 28687644531q^8\nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ \ + 131528523776q^9 + 554020504596q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{\frac{19}{4}} &= \text{unstable}. \end{align} The descendant character, $\chi_{\frac{1}{2}}$, is $13j_{2^{+}}+256$, which can be thought of as twelve copies of the single character $j_{2^{+}}$ and one copy of the single character $j_{2^{+}} + 256$. \item $(c,h_1,h_2)=\left(36,\frac{23}{28},\frac{31}{7}\right)$: \begin{align} \chi_0 &= q^{-\tfrac{3}{2}}(1 + 940q + 51382q^2 + 1068952q^3 + 14143145q^4 + 198298324q^5 \nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ + 2315791206q^6 + 22226573160q^7 + 179883575560q^8\nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ + 1261802152580q^9 + \mathcal{O}(q^{10})), \nonumber\\ \chi_{\frac{23}{28}} &= \text{unstable}, \nonumber\\ \chi_{\frac{31}{7}} &= \text{unstable}. \end{align} The above is a single-character solution (not yet a $(1,\ell)$ solution).
\item $(c,h_1,h_2)=\left(36,\frac{11}{12},\frac{13}{3}\right)$: \begin{align} \chi_0 &= q^{-3/2}(1 + 1884q + 100470q^2 + 1856248q^3 + 18636585q^4 + 221617956q^5 \nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ \ + 2405452326q^6 + 22551307272q^7 + 180900731784q^8\nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ \ + 1264844584340q^9 + 7878592968708q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{\frac{11}{12}} &= \text{unstable}, \nonumber\\ \chi_{\frac{13}{3}} &= \text{unstable}. \end{align} The above is a single-character solution (not yet a $(1,\ell)$ solution). \item $(c,h_1,h_2)=\left(36,\frac{151}{156},\frac{167}{39}\right)$: \begin{align} \chi_0 &= q^{-\tfrac{3}{2}}(1 + 4719q + 247890q^2 + 4220638q^3 + 32131185q^4 + 291650961q^5 \nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ + 2674720626q^6 + 23526541602q^7 + 183955432944q^8\nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ + 1273981548365q^9 + 7903667727228q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{\frac{151}{156}} &= \text{unstable}, \nonumber\\ \chi_{\frac{167}{39}} &= \text{unstable}. \end{align} The above is a single-character solution (not yet a $(1,\ell)$ solution). \item $(c,h_1,h_2)=\left(36,\frac{193}{196},\frac{209}{49}\right)$: \begin{align} \chi_0 &= q^{-\tfrac{3}{2}}(1 + 9760q + 510022q^2 + 8424832q^3 + 56126345q^4 + 416178784q^5 \nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ + 3153514806q^6 + 25260635520q^7 + 189387090280q^8\nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ + 1290228262880q^9 + 7948253920420q^{10}+ \mathcal{O}(q^{11})), \nonumber\\ \chi_{\frac{193}{196}} &= \text{unstable}, \nonumber\\ \chi_{\frac{209}{49}} &= \text{unstable} \end{align} The above is a single-character solution (not yet a $(1,\ell)$ solution). 
\item $(c,h_1,h_2)=\left(36,\frac{235}{236},\frac{251}{59}\right)$: \begin{align} \chi_0 &= q^{-\tfrac{3}{2}}(1 + 34966q + 1820734q^2 + 29446636q^3 + 176106905q^4 + 1038842602q^5 \nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ + 5547580686q^6 + 33931449108q^7 + 216546454456q^8\nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ + 1371465058370q^9 + 8171193731092q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{\frac{235}{236}} &= \text{unstable}, \nonumber\\ \chi_{\frac{251}{59}} &= \text{unstable}. \end{align} The above is a single-character solution (not yet a $(1,\ell)$ solution). \item $(c,h_1,h_2)=\left(39,\frac{5}{8},5\right)$: \begin{align} \chi_0 &= \text{Log unstable}, \nonumber\\ \chi_{\frac{5}{8}} &= q^{-1}(117 + 14216q + 511524q^2 + 11261952q^3 + 145080234q^4 + 1251753984q^5 \nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ + 8708090040q^6 + 50489008128q^7 + 258188800779q^8\nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ + 1183756713984q^9 + 4986184541364q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{5} &= \text{unstable}. \end{align} \item $(c,h_1,h_2)=\left(42,0,6\right)$: \begin{align} \chi_0 &= \text{Log unstable}, \nonumber\\ \chi_{0} &= \text{Log unstable}, \nonumber\\ \chi_{6} &= q^{\frac{17}{4}}(98 + 3196q + 53669q^2 + 642838q^3 + 6133855q^4 + 49500906q^5 \nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ + 350187964q^6 + 2224588380q^7 + 12909977905q^8\nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ + 69325524646q^9 + 347916765224q^{10} + \mathcal{O}(q^{11})). \end{align} \end{itemize} \section{$\mathbf{\Gamma^{+}_0(2)}$} \subsection{Riemann-Roch} The valence formula for $\Gamma^{+}_0(2)$ reads \cite{2006math......7408M}, \begin{align}\label{valence_Fricke_2} \nu_{\infty}(f) + \frac{1}{2}\nu_{\tfrac{i}{\sqrt{2}}}(f) + \frac{1}{4}\nu_{\rho_{2}}(f) + \sum\limits_{\substack{p\in\Gamma_{0}^{+}(2)\backslash\mathbb{H}^{2}\\ p\neq \tfrac{i}{\sqrt{2}},\rho_{2}}}\nu_{p}(f) = \frac{k}{8}, \end{align} where $\rho_2 := \frac{1+i}{2}$ and $\frac{i}{\sqrt{2}}$ is the fixed point of the Fricke involution.
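As a quick consistency check of Eq.(\ref{valence_Fricke_2}) (an illustration added here, using the zeros of $E_4^{2+}$ and $E_6^{2+}$ recorded in Proposition 3.1 of \cite{2006math......7408M}): the weight $6$ form $E_6^{2+}$ has simple zeros at $\tau=\frac{i}{\sqrt{2}}$ and $\tau=\rho_2$, which saturate the formula,
\begin{align}
\frac{1}{2}\cdot 1 + \frac{1}{4}\cdot 1 = \frac{3}{4} = \frac{6}{8},
\end{align}
while the weight $4$ form $E_4^{2+}$, with a double zero at $\rho_2$, gives $\frac{1}{4}\cdot 2 = \frac{1}{2} = \frac{4}{8}$.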
From Proposition 3.1 of \cite{2006math......7408M}, we note that $E^{2+}_4$ has a zero at $\tau=\rho_2$ of order $2$, while $E^{2+}_6$ has a zero at $\tau=\rho_2$ of order $1$ and a zero at $\tau=\frac{i}{\sqrt{2}}$ of order $1$.\\ The Riemann-Roch relation for $\Gamma^{+}_0(2)$ reads, \begin{align} \sum_{i=0}^{n-1} \, \alpha_i = \frac{n(n-1)}{8} - \frac{\ell}{4}, \label{RR2} \end{align} where $n$ is the number of linearly independent characters, the $\alpha_i$ are the exponents, and $\ell$ is the Wronskian index. \subsection{The operator} Consider the following transformations (where $\gamma\in\Gamma^{+}_0(2)$), \begin{align} &2E_2(2\gamma\tau) = 2(c\tau+d)^2E_2(2\tau) + \frac{6c}{i\pi}(c\tau+d), \label{E22tau} \\ &E_2(\gamma\tau) = (c\tau+d)^2E_2(\tau) + \frac{6c}{i\pi}(c\tau+d). \label{E2tau} \end{align} Taking one-third of the sum of Eqs.(\ref{E22tau}) and (\ref{E2tau}), the combination $E_2^{2+}(\tau):=\frac{1}{3}\left(2E_2(2\tau)+E_2(\tau)\right)$ transforms as \begin{align} E_2^{2+}(\gamma\tau) = (c\tau+d)^2E_2^{2+}(\tau) + \frac{4c}{i\pi}(c\tau+d). \label{E22+tau} \end{align} Eq.(\ref{E22+tau}) motivates us to define the following derivative operator in the space of $\Gamma^{+}_0(2)$, acting on weight $k$ forms, \begin{align} \Tilde{D}_k := \Tilde{\partial}_\tau - \frac{k}{8}E_2^{2+}(\tau), \label{deriv_op} \end{align} where $\Tilde{\partial}_\tau := \frac{1}{2i\pi}\partial_\tau = q\partial_q$. \\ One can check that $\Tilde{D}_k$ maps weight $k$ modular forms of $\Gamma^{+}_0(2)$ to weight $k+2$ forms. If we construct the MLDE in the space of $\Gamma^{+}_0(2)$ with the above derivative operator, then the MLDE will be consistent with the Riemann-Roch formula for $\Gamma^{+}_0(2)$. However, if we use the usual \textit{Serre-Ramanujan} derivative to write down the MLDE in $\Gamma^{+}_0(2)$, then the resulting MLDE will not be consistent with the Riemann-Roch formula for $\Gamma^{+}_0(2)$.\\ Note that if a given space of modular forms contains a weight $2$ form, then the usual \textit{Serre-Ramanujan} derivative will work and the MLDE will remain consistent with the corresponding Riemann-Roch relation.
This raises the question: will we get two different MLDEs for spaces which contain a weight $2$ modular form? \\ We note below some interesting properties of the above derivative operator, \begin{align} &\left(\Tilde{\partial}_{\tau} - \frac{1}{8}E_2^{2+}\right)E_2^{2+} = -\frac{1}{8}E_4^{2+}, \label{De2} \\ &\Tilde{D}E_4^{2+} = -\frac{1}{2}E_6^{2+}, \label{De4} \\ &\Tilde{D}E_6^{2+} = -\frac{3}{4}(E_4^{2+})^2 + 64\Delta_2, \label{De6} \end{align} where $\Delta_2(\tau)=(\eta(\tau)\eta(2\tau))^8=\frac{17}{1152}\left((E_4^{2+})^2-E_{8}^{2+}\right)$ and $\eta(\tau)$ is the Dedekind eta function. \subsection{$j^{2+}$ plane} Since $q\partial_q\Delta_2 = E_2^{2+}\Delta_2$, we have the following relations, \begin{align} &\Tilde{\partial}_\tau = -j^{2+}\frac{E_6^{2+}}{E_4^{2+}}\partial_{j^{2+}}, \label{j1_2} \\ &\Tilde{D}^2_\tau = (j^{2+})^2\left(\frac{E_6^{2+}}{E_4^{2+}}\right)^2\partial^2_{j^{2+}} + \left(\frac{1}{2}j^{2+}\left(\frac{E_6^{2+}}{E_4^{2+}}\right)^2+\frac{3j^{2+}}{4}E_4^{2+}-64E_4^{2+}\right)\partial_{j^{2+}}. \label{j1_21} \end{align} We also have, \begin{align} \frac{j^{2+}}{j^{2+}-256} = \frac{(E_4^{2+})^3}{(E_6^{2+})^2}. \label{re_F2} \end{align} The above relation also shows, \begin{align} &\lim\limits_{\tau\to\frac{i}{\sqrt{2}}} j^{2+}(\tau) = 256, \label{j2_1} \\ &\lim\limits_{\tau\to\rho_2} j^{2+}(\tau) = 0, \label{j2_2} \\ &\lim\limits_{\tau\to i\infty} j^{2+}(\tau) = \infty. \label{j2_3} \end{align} Using Eqs.(\ref{j1_2}), (\ref{j1_21}) and (\ref{re_F2}), we can now recast $\tau$-plane MLDEs for $\Gamma^{+}_0(2)$ into $j^{2+}$-plane MLDEs. We shall derive Eq.(\ref{re_F2}) later in Sec. \ref{sec2}. \subsection{$\mathbf{(1,\ell\neq 0)}$ MLDE}\label{sec1} Let us now perform some single-character computations. From Riemann-Roch we get $c=6\ell$. \\ The case $\ell=0$ is trivial, so let us start with $\ell=1$. For $\ell=1$, the RHS of the valence formula reads $\frac{1}{4}$ (since $k=2\ell$ always).
The MLDE takes the following form, \begin{align} \Tilde{D}\chi + \frac{1}{4}\left(\mu_1\frac{(E_4^{2+})^2}{E_6^{2+}}+\mu_2\frac{\Delta_2}{E_6^{2+}}\right)\chi = 0. \label{F2_l1_p1} \end{align} Using the indicial equation we can set $\mu_1=1$. Now let us take a guess, $\chi=(j^{2+})^{\frac{1}{4}}$. One can readily check that this is a solution with $\mu_2=-256$. This solution will re-appear as a solution to a $(2,0)$ MLDE, paired up with an unstable character.\\ Next let us analyse $\ell=2$. The MLDE takes the following form, \begin{align} &\left(\Tilde{D}+\mu_1\frac{E_6^{2+}}{E_4^{2+}}\right)\chi = 0, \label{1char_F2_l2} \\ &\left(E_4^{2+}\Tilde{D}+\mu_1E_6^{2+}\right)\chi = 0. \label{1char_F2_l2_aliter} \end{align} Note that in the above, $\Tilde{D}=q\partial_q$. Also, the indicial equation ($\alpha_0+\mu_1=0$) dictates $\mu_1=\frac{1}{2}$. \\ Let us take a guess, $\chi=(j^{2+})^{\frac{1}{2}}$. Then, $\Tilde{\partial}_\tau (j^{2+})^{\frac{1}{2}} = -\frac{1}{2}(j^{2+})^{\frac{1}{2}}\frac{E_6^{2+}}{E_4^{2+}}$. Note that this $\chi$ indeed satisfies Eq.(\ref{1char_F2_l2}) with $\mu_1=\frac{1}{2}$.\\ Now let us consider $\ell=3$. For $\ell=3$, the RHS of the valence formula reads $\frac{3}{4}$ (since $k=2\ell$ always). The MLDE takes the following form, \begin{align} \Tilde{D}\chi + \frac{1}{4}\left(\mu_1\frac{(E_4^{2+})^2}{E_6^{2+}}+\mu_2\frac{\Delta_2}{E_6^{2+}}\right)\chi = 0. \label{F2_l3_p1} \end{align} Using the indicial equation we can set $\mu_1=3$. Taking the guess $\chi=(j^{2+})^{\frac{3}{4}}$, one can readily check that this is a solution with $\mu_2=-768$. Next let us consider the $\ell=4$ case. From the Riemann-Roch relation, we notice that this is the \textit{bulk-pole} MLDE. The MLDE takes the following form, \begin{align} &\left(q\partial_q+\mu_1\frac{E_{10}^{2+}}{(E_4^{2+})^2+\mu_2\Delta_2}\right)\chi = 0, \label{1char_F2_l4} \\ &\left(((E_4^{2+})^2+\mu_2\Delta_2)q\partial_q + \mu_1E_{10}^{2+}\right)\chi = 0.
\label{1char_F2_l4_aliter} \end{align} The indicial equation dictates $\mu_1=1$. The recursion relation reads, \begin{align} \sum\limits_{k=0}^i\left[(E_4^{2+})^2_{,k}\,m_{i-k}(i-k-1) + \mu_2\Delta_{2,k}\,m_{i-k}(i-k-1)+E_{10,k}^{2+}\,m_{i-k}\right] = 0. \label{recurF2l42char} \end{align} Putting $i=1$ in Eq.(\ref{recurF2l42char}) gives, \begin{align} m_1 = 104 + \mu_2, \label{m1eqn_F2_2char_l4} \end{align} which implies that $\mu_2\in\mathbb{Z}$. Now putting $i=2$ in Eq.(\ref{recurF2l42char}) yields, \begin{align} m_2 = 4372. \label{m2eqn_F2_2char_l4} \end{align} One can check up to higher orders that, \begin{align} \chi &= q^{-1}(1 + (104+\mathcal{N})q + 4372q^2 + 96256q^3 + 1240002q^4 + \mathcal{O}(q^5)) \nonumber\\ &= j^{2+} + \mathcal{N} \, \, \, \, (\text{with} \, \mathcal{N}\geq -104). \end{align} \subsubsection{Tensor-Product formula} Consider two admissible solutions, one with $p$ characters and Wronskian index $m$, the other with $q$ characters and Wronskian index $n$. The tensor-product solution has $pq$ characters; denoting its Wronskian index by $\Tilde{\ell}$, we have, \begin{align} \Tilde{\ell} = \frac{pq}{2}(p-1)(q-1) + pn + qm, \label{hampa_mukhi_tensor} \end{align} which is exactly analogous to the \textit{Hampapura-Mukhi} formula for the $\text{SL}(2,\mathbb{Z})$ case (see \cite{Hampapura:2015cea}).\\ Using Eq.(\ref{hampa_mukhi_tensor}) we immediately note that $((j^{2+})^{\frac{1}{4}})^{\otimes n}$ is a $(1,n)$ admissible solution.\\ Similarly, one can readily see that ${}^{(2)}\mathcal{W}\otimes((j^{2+})^{\frac{1}{4}})^{\otimes n}$ is a $(2,2n)$ admissible solution (where ${}^{(2)}\mathcal{W}$ is a $2$-character admissible solution). \\ Also, ${}^{(3)}\mathcal{W}\otimes((j^{2+})^{\frac{1}{4}})^{\otimes n}$ is a $(3,3n)$ admissible solution (where ${}^{(3)}\mathcal{W}$ is a $3$-character admissible solution).
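The bulk-pole expansion $j^{2+} + \mathcal{N}$ obtained above can be cross-checked numerically. The following Python sketch assumes the standard eta-quotient representation $j^{2+} = t + 4096/t + 128$ with $t = (\eta(\tau)/\eta(2\tau))^{24}$; this representation is a fact quoted from the moonshine literature, not derived in the text.

```python
# Sketch: reproduce j^{2+} = q^{-1} + 104 + 4372 q + 96256 q^2 + ...
# from t = (eta(tau)/eta(2tau))^24, assuming j^{2+} = t + 4096/t + 128.

N = 8  # truncation order in q

def one_minus_q_pow24(step, N):
    """Coefficients of prod_{n>=1} (1 - q^(step*n))^24, up to q^(N-1)."""
    s = [0] * N
    s[0] = 1
    for n in range(1, N // step + 1):
        for _ in range(24):
            # multiply in-place by (1 - q^(step*n))
            for i in range(N - 1, step * n - 1, -1):
                s[i] -= s[i - step * n]
    return s

def inverse(a, N):
    """Coefficients of 1/a(q), assuming a[0] == 1."""
    b = [0] * N
    b[0] = 1
    for i in range(1, N):
        b[i] = -sum(a[k] * b[i - k] for k in range(1, i + 1))
    return b

P = one_minus_q_pow24(1, N)                     # prod (1-q^n)^24
invQ = inverse(one_minus_q_pow24(2, N), N)      # 1 / prod (1-q^(2n))^24
qt = [sum(P[i] * invQ[k - i] for i in range(k + 1)) for k in range(N)]  # q*t
iqt = inverse(qt, N)                            # 1/(q*t); so 1/t = q * iqt

# j[m] is the coefficient of q^(m-1) in j^{2+} = t + 4096/t + 128
j = [qt[0], qt[1] + 128] + [qt[m] + 4096 * iqt[m - 2] for m in range(2, N)]
```

The first coefficients come out as $1,\,104,\,4372,\,96256,\,1240002$, matching the $\mathcal{N}=0$ expansion above.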
\\ Note that, the most general single-character solution can be written as, (with $c=6N$ and $N\in\mathbb{Z}$), (see \cite{Das:2022slz}), \begin{align} \chi^\mathcal{H}(\tau) = (j^{2+})^{\frac{N}{4}-s}\left[(j^{2+})^s + a_1\,(j^{2+})^{s-1}+ a_2 \, (j^{2+})^{s-2} + \ldots\ldots + a_s\right], \label{gen_j} \end{align} where $s := \left \lfloor \frac{N}{4} \right \rfloor$. \subsection{2-character MLDE}\label{sec2} \subsubsection{$\ell=0$ analysis} With the above derivative operator as defined in Eq.(\ref{deriv_op}) now let us set up the (2,0) MLDE, \begin{align} \left(\Tilde{D}^2 + \mu_1E_4^{2+}\right)\chi = 0, \label{mlde20_0} \end{align} where $\chi$ is a weight 0 modular function for $\Gamma^{+}_0(2)$. Now substituting the Fourier expansions of the relevant terms in Eq.(\ref{mlde20_0}) we get the following recursion relation, \begin{align} &\left[(\alpha+i)^2 - \frac{1}{4}(\alpha+i) + \mu_1\right]a_i = \sum\limits_{k=1}^i\left[\frac{1}{4}(\alpha+i-k)E^{2+}_{2,k}-\mu_1E_{4,k}^{2+}\right]a_{i-k}, \label{recursion0} \end{align} where $a_i$s are Fourier coefficients of the character $q$-series expansion.\\ The indicial equation is obtained by setting $i=0$ and $k=0$ in Eq.(\ref{recursion0}), \begin{align} \alpha^2 - \frac{1}{4}\alpha + \mu_1 = 0. \label{indicial0} \end{align} Setting $\alpha=\alpha_0$ in Eq.(\ref{indicial0}) we get, \begin{align} \mu_1 = \frac{1}{4}\left(\alpha_0-4\alpha_0^2\right), \label{param0} \end{align} Now setting $i=1$ in Eq.(\ref{recursion0}) we get the following quadratic equation in $\alpha_0$, \begin{align} 9a_1 + 168\alpha_0 + 24a_1\alpha_0 - 576\alpha_0^2 = 0, \label{m1_eqn_0} \end{align} which can be recast as below if we identify $N=-24\alpha_0=c$, \begin{align} -9a_1 + 7N + a_1N + N^2 = 0, \label{N_eqn_0} \end{align} $a_1$ can now be expressed in terms of $N$ as, \begin{align} a_1 = \frac{N(N+7)}{9-N}, \label{a1_eqn_0} \end{align} which sets an upper bound on the central charge $c<9$. 
Furthermore, applying the integer root theorem to Eq.(\ref{N_eqn_0}), we see that $c=N\in\mathbb{Z}$. \\ Now checking to higher orders (up to $\mathcal{O}(q^{5000})$) for both characters, for $0<c<9$, we find the following admissible character solutions, \begin{itemize} \item $(c,h)=\left(1,\frac{1}{3}\right)$: \begin{align} &\chi_0 = q^{-1/24}(1 + q + 2q^2 + q^3 + 3q^4 + 3q^5 + 5q^6 + 5q^7 + 8q^8 + 8q^9 + 12q^{10} + \mathcal{O}(q^{11})), \nonumber\\ &\chi_{\frac{1}{3}} = q^{7/24}(1 + q^2 + q^3 + 2q^4 + q^5 + 3q^6 + 2q^7 + 5q^8 + 4q^9 + 7q^{10} + \mathcal{O}(q^{11})) \end{align} \item $(c,h)=\left(3,\frac{1}{2}\right)$: \begin{align} &\chi_0 = q^{-1/8}(1 + 5q + 11q^2 + 20q^3 + 41q^4 + 76q^5 + 127q^6 + 211q^7 + 342q^8 + 541q^9 + 838q^{10} + \mathcal{O}(q^{11})), \nonumber\\ &\chi_{\frac{1}{2}} = q^{3/8}(1 + q + 5q^2 + 6q^3 + 16q^4 + 21q^5 + 46q^6 + 61q^7 + 117q^8 + 157q^{9} + 273 q^{10} + \mathcal{O}(q^{11})) \end{align} \item $(c,h)=\left(5,\frac{2}{3}\right)$: \begin{align} \chi_0 &= q^{-5/24}(1 + 15q + 40q^2 + 135q^3 + 290q^4 + 726q^5 + 1415q^6 + 3000q^7 + 5485q^8 \nonumber\\ &+ 10565q^9 + 18407q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{\frac{2}{3}} &= q^{11/24}(5 + 11q + 55q^2 + 100q^3 + 290q^4 + 535q^5 + 1235q^6 \nonumber\\ &+ 2160q^7 + 4400q^8 + 7465q^{9} + 13945q^{10} + \mathcal{O}(q^{11})) \end{align} \end{itemize} We also find an \textit{identity-only} admissible solution, which has an admissible identity character but an unstable non-trivial character. This is found at $(c,h)=\left(6,\frac{3}{4}\right)$. \begin{align} \chi_0 &= q^{-1/4}(1 + 26q + 79q^2 + 326q^3 + 755q^4 + 2106q^5 + 4460q^6 + 10284q^7 \nonumber\\ &+ 20165q^8 + 41640q^9 + 77352q^{10} + \mathcal{O}(q^{11})) \end{align} For $c=7,8$, both the characters are unstable, and for $c=2,4$ there exist no $a_1\in\mathbb{Z}$ solutions to Eq.(\ref{N_eqn_0}). \\ Now let us examine the $c=6$ \textit{identity-only} solution. It turns out that $\chi_{0}^{c=6} = ({j^{2+}})^{\frac{1}{4}}$.
So, a $(1,1)$ solution has appeared as a $(2,0)$ solution, a phenomenon which also keeps occurring in the $\text{SL}(2,\mathbb{Z})$ case. So, we can use this phenomenon: \textit{$(j^{2+})^x$ where $x\in\{\frac{1}{4},\frac{1}{2},\frac{3}{4}\}$ will always appear as a particular solution to an $(n,l)$ MLDE}, to reduce the number of parameters of the $(n,l)$ MLDE by one. \\ For $c=6$, $\mu_1 = -\frac{1}{8}$ (using Eq.(\ref{param0})). With this value of $\mu_1$, Eq.(\ref{j2+mlde}) should yield the solution $\chi=({j^{2+}})^{\frac{1}{4}}$. This leads to the following relation (after substituting $\chi=({j^{2+}})^{\frac{1}{4}}$ in Eq.(\ref{j2+mlde})), \begin{align} \frac{j^{2+}}{j^{2+}-256} = \frac{(E_4^{2+})^3}{(E_6^{2+})^2}. \label{re_F2_again} \end{align} \subsubsection{Bilinear pairs} Let ${}^{(2)}\mathcal{W}_1$, ${}^{(2)}\mathcal{W}_3$ and ${}^{(2)}\mathcal{W}_5$ denote the $c\in\{1,2,5\}$ admissible character-like (2,0) solutions. Note that we have the following bilinear identities, \begin{align} &(j^{2+})^{\frac{1}{4}} = \chi_0^{{}^{(2)}\mathcal{W}_1}\chi_0^{{}^{(2)}\mathcal{W}_5} + 2 \, \chi_{\frac{1}{3}}^{{}^{(2)}\mathcal{W}_1}\chi_{\frac{2}{3}}^{{}^{(2)}\mathcal{W}_5}, \label{bilin0} \\ &(j^{2+})^{\frac{1}{4}} = \chi_0^{{}^{(2)}\mathcal{W}_3}\chi_0^{{}^{(2)}\mathcal{W}_3} + 16 \, \chi_{\frac{1}{2}}^{{}^{(2)}\mathcal{W}_3}\chi_{\frac{1}{2}}^{{}^{(2)}\mathcal{W}_3}, \label{bilin1} \end{align} where Eq.(\ref{bilin1}) is a self-dual relation and where the above bilinears are for the following relation: $(2,0)\overset{c^\mathcal{H}=6}{\underset{n_1=1}{\longleftrightarrow}}(2,0)$ (where $n_1=\text{sum of conformal dimensions of the bilinear pair}$).\\ These bilinear relations can probably help us to write down the modular invariant partition function for the $(2,0)$ admissible solutions.
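The bilinear identities above can be verified order by order from the $q$-expansions listed earlier; here is a rough numerical sketch of Eq.(\ref{bilin1}) using truncated series (coefficients copied from the expansions above):

```python
# Order-by-order check of the self-dual bilinear relation:
# chi_0^2 + 16 chi_{1/2}^2 for the c=3 solution should reproduce
# (j^{2+})^{1/4} = q^{-1/4}(1 + 26q + 79q^2 + 326q^3 + ...).
def mul(a, b):
    n = min(len(a), len(b))
    return [sum(a[k] * b[i - k] for k in range(i + 1)) for i in range(n)]

chi0 = [1, 5, 11, 20]   # c=3 identity character, q^{-1/8} prefactor stripped
chi_h = [1, 1, 5, 6]    # chi_{1/2}, q^{3/8} prefactor stripped

lhs = mul(chi0, chi0)                        # overall prefactor q^{-1/4}
cross = [16 * c for c in mul(chi_h, chi_h)]  # overall prefactor q^{3/4} = q^{-1/4} * q
for i, c in enumerate(cross):
    if i + 1 < len(lhs):
        lhs[i + 1] += c
print(lhs)  # [1, 26, 79, 326] -- the first coefficients of (j^{2+})^{1/4}
```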
For instance, the following could hold, \begin{align} &\mathcal{Z}_1 = \left|\chi_0^{{}^{(2)}\mathcal{W}_1}\right|^2 + 2 \, \left|\chi_{\frac{1}{3}}^{{}^{(2)}\mathcal{W}_1}\right|^2, \label{partintn1} \\ &\mathcal{Z}_3 = \left|\chi_0^{{}^{(2)}\mathcal{W}_3}\right|^2 + 16 \, \left|\chi_{\frac{1}{2}}^{{}^{(2)}\mathcal{W}_3}\right|^2, \label{partintn3} \\ &\mathcal{Z}_5 = \left|\chi_0^{{}^{(2)}\mathcal{W}_5}\right|^2 + 2 \, \left|\chi_{\frac{2}{3}}^{{}^{(2)}\mathcal{W}_5}\right|^2. \label{partintn5} \end{align} \subsubsection{$\ell=1$ analysis} Now let us move on to the $\ell=1$ case. The MLDE reads (with the $\frac{k}{8}$ derivative operator), \begin{align} &\left[\Tilde{D}^2+\left(\mu_1\frac{(E_4^{2+})^2}{E_6^{2+}}+\mu_2\frac{\Delta_2}{E_6^{2+}}\right)\Tilde{D}+\mu_3\frac{E_{10}^{2+}}{E_{6}^{2+}}\right]\chi = 0, \label{mldeF2_2char_l1} \\ &\left[E_6^{2+}\Tilde{D}^2 + \left(\mu_1(E_4^{2+})^2 + \mu_2\Delta_2\right)\Tilde{D} + \mu_3E_{10}^{2+}\right]\chi = 0, \label{mldeF2_2char_l1_aliter} \\ &\left[E_6^{2+}q^2\partial_{q}^2 + \left(E_6^{2+} - \frac{E_6^{2+}E_2^{2+}}{4} + \mu_1(E_4^{2+})^2 + \mu_2\Delta_2\right)q\partial_q + \mu_3 E_{10}^{2+} \right]\chi = 0. \label{qmldeF2_2char_l1} \end{align} From the above we obtain the following recursion relation, \begin{align} &\sum\limits_{k=0}^i\left[E_{6,k}^{2+}(\alpha^j+i-k)(\alpha^j+i-k-1) + E_{6,k}^{2+}(\alpha^j+i-k)-\frac{(E_{6}^{2+}E_{2}^{2+})_{,k}}{4}(\alpha^j+i-k)\right. \nonumber\\ &\left.+ \mu_1 (E_{4}^{2+})^2_{,k} (\alpha^j+i-k) + \mu_2\Delta_{2,k}(\alpha^j+i-k) + \mu_3 E_{10,k}^{2+}\right]a^j_{i-k} = 0, \label{recursion_l1F2_2char} \end{align} which gives the following indicial equation, \begin{align} \alpha^2 + \left(\mu_1-\frac{1}{4}\right)\alpha + \mu_3 = 0. \label{indicial_l1F2_2char} \end{align} From Riemann-Roch (sum of exponents $= 0$), we see that $\mu_1=\frac{1}{4}$.\\ Now let us determine the value of $\mu_2$ by transforming the $\tau$-MLDE in Eq.(\ref{mldeF2_2char_l1}) into the corresponding $j^{2+}$-MLDE (about $\tau=\rho_2$), \begin{align} \left[(j^{2+})^2\partial_{j^{2+}}^2 + \frac{j^{2+}}{2}\partial_{j^{2+}} + \frac{j^{2+}}{j^{2+}-256}\left(\frac{j^{2+}}{2}-(64+\mu_2)\right)\partial_{j^{2+}} + \frac{\mu_3 j^{2+}}{j^{2+}-256}\right]\chi = 0, \label{j2+mlde_p2_l1} \end{align} where we have set $\mu_1=\frac{1}{4}$ from the $\ell=1$ indicial equation (see Eq.(\ref{indicial_l1F2_2char})). Now, note that about $\tau=\rho_2$, we have $\chi_0\sim (j^{2+})^x$ and $j^{2+}\to 0$, with $x=\frac{1}{4}$ for $\ell=1$ (see $1$-character solutions). So, substituting $\chi=(j^{2+})^x$ about $j^{2+}=0$, the $j^{2+}$-indicial equation (which is basically, coefficient of $(j^{2+})^x$ $=\left.0\right|_{j^{2+}=0}$) reads, \begin{align} x\left(x+\frac{\mu_2}{256}-\frac{1}{4}\right) = 0, \label{jindip2l1F2} \end{align} which yields $\mu_2 = 0$ for $x=\frac{1}{4}$. \\ Putting $i=1$ in Eq.(\ref{recursion_l1F2_2char}) yields\footnote{We shall at times identify $m_i:=a^0_i$.}, \begin{align} &m_1 + 40 \alpha_0 + 2 m_1 \alpha_0 - 48 \alpha_0^2 = 0, \label{m1al1}\\ &-12 m_1 + 20 N + m_1 N + N^2 = 0 \, \, \, \, (\text{where}, \ N:=-24\alpha_0 = c). \label{diophNl1p2} \end{align} Applying the integer root theorem to Eq.(\ref{diophNl1p2}) we note that $N=c\in\mathbb{Z}$.\\ Using this identification we can now express $m_1$ in terms of $N$, \begin{align} m_1 = \frac{N(N+20)}{12-N}, \label{m1N_p2F2l1} \end{align} which implies that $0<c< 12$.
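The Diophantine constraint Eq.(\ref{m1N_p2F2l1}) can again be scanned directly (a rough sketch; names ours):

```python
# m_1 = N(N+20)/(12-N) for 0 < N < 12; keep only the integral cases.
hits = {N: N * (N + 20) // (12 - N)
        for N in range(1, 12) if (N * (N + 20)) % (12 - N) == 0}
print(hits)  # {4: 12, 6: 26, 8: 56, 9: 87, 10: 150, 11: 341}
```

Only $N=4$ ($m_1=12$) and $N=6$ ($m_1=26$) survive the higher-order admissibility checks, matching the two solutions listed below.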
Scanning this space of $c$, we observe that there exist the following admissible solutions. \begin{itemize} \item $(c,h)=\left(4,\frac{1}{3}\right)$: \begin{align} \chi_0 &= q^{-1/6}(1 + 12q + 70q^2 + 1304q^3 + 45969q^4 + 2110036q^5 + 109853542q^6 \nonumber\\ &+ 6204772264q^7 + 371363466708q^8 + \mathcal{O}(q^{9})), \\ \chi_{\frac{1}{3}} &= q^{1/6}(1 - 4q - 86q^2 - 2712q^3 - 110221q^4 - 5321572q^5 - 285692130q^6 \nonumber\\ &- 16484830680q^7 - 1002347627239q^8 - \mathcal{O}(q^{9})), \end{align} where $\chi_0$ can be interpreted as a $c=4$ single-character solution. Surprisingly, this single-character solution is not ``yet'' any $(1,l)$ solution. Also, note that $\chi_{\frac{1}{3}}$ is a quasi-character of type II (see \cite{Chandra:2018pjq}). \item $(c,h)=\left(6,\frac{1}{2}\right)$: \begin{align} &\chi_0 = (j^{2+})^{\frac{1}{4}}, \\ &\chi_{\frac{1}{2}} = \text{unstable} \nonumber \end{align} \end{itemize} Note that we can make use of the fact that $(j^{2+})^x$ with $x\in\{\frac{1}{2},\frac{3}{4}\}$ can also appear as a solution to the above $(2,1)$ MLDE, and hence we get, for these values of $x$, $\mu_2=-64$ and $\mu_2=-128$ respectively. \\ For $\mu_2=-64$ we get from the $i=1$ equation: $(m_1+c)\left(1-\frac{c}{12}\right)=0$, which implies $c=12$. So, this choice of $\mu_2$ only leads to one unitary single-character solution: $(j^{2+})^{\frac{1}{2}}$. \\ For $\mu_2=-128$, we get from the $i=1$ equation: $m_1=\frac{N(N-44)}{12-N}$, which means the allowed range of central charge is $12<c<45$ (where $N=-24\alpha_0$). Scanning this space of $c$, we only obtain $(j^{2+})^{\frac{3}{4}}$ and three type II quasi-character solutions for $c=20,28,44$. So, this value of $\mu_2$ has no admissible 2-character solutions. \subsubsection{$\ell=2$ analysis} Now let us move to the $\ell=2$ case.
The MLDE reads (with the $\frac{k}{8}$ derivative operator), \begin{align} &\left(\Tilde{D}^2+\mu_1\frac{E_6^{2+}}{E_4^{2+}}\Tilde{D}+\mu_2E_4^{2+}+\mu_3\frac{\Delta_2}{E_{4}^{2+}}\right)\chi = 0, \label{mldeF2_2char_l2} \\ &\left(E_4^{2+}\Tilde{D}^2 + \mu_1E_6^{2+}\Tilde{D} + \mu_2(E_4^{2+})^2 + \mu_3\Delta_2\right)\chi = 0, \label{mldeF2_2char_l2_aliter} \\ &\left[E_4^{2+}q^2\partial_{q}^2 + \left(E_4^{2+} - \frac{E_4^{2+}E_2^{2+}}{4} + \mu_1E_6^{2+}\right)q\partial_q + \mu_2(E_4^{2+})^2 + \mu_3\Delta_2\right]\chi = 0. \label{qmldeF2_2char_l2} \end{align} From the above we obtain the following recursion relation, \begin{align} &\sum\limits_{k=0}^i\left[E_{4,k}^{2+}(\alpha^j+i-k)(\alpha^j+i-k-1) + E_{4,k}^{2+}(\alpha^j+i-k)-\frac{(E_{4}^{2+}E_{2}^{2+})_{,k}}{4}(\alpha^j+i-k)\right. \nonumber\\ &\left.+ \mu_1E_{6,k}^{2+}(\alpha^j+i-k) + \mu_2(E_4^{2+})_{,k}^2 + \mu_3\Delta_{2,k}\right]a^j_{i-k} = 0, \label{recursion_l2F2_2char} \end{align} which gives the following indicial equation, \begin{align} \alpha^2 + \left(\mu_1-\frac{1}{4}\right)\alpha + \mu_2 = 0. \label{indicial_l2F2_2char} \end{align} From Riemann-Roch (sum of exponents $= -\frac{1}{4}$), we see that $\mu_1=\frac{1}{2}$.\\ Now let us determine the value of $\mu_3$ by transforming the $\tau$-MLDE in Eq.(\ref{mldeF2_2char_l2}) into the corresponding $j^{2+}$-MLDE (about $\tau=\rho_2$), \begin{align} \left[(j^{2+})^2\partial_{j^{2+}}^2 + \frac{j^{2+}}{j^{2+}-256}\left(\frac{3j^{2+}}{4}-64\right)\partial_{j^{2+}} + \frac{\mu_2 j^{2+}}{j^{2+}-256} + \frac{\mu_3}{j^{2+}-256}\right]\chi = 0, \label{j2+mlde_p2_l2} \end{align} where we have set $\mu_1=\frac{1}{2}$ from the $\ell=2$ indicial equation (see Eq.(\ref{indicial_l2F2_2char})). Now, note that about $\tau=\rho_2$, we have $\chi_0\sim (j^{2+})^x$ and $j^{2+}\to 0$, with $x=\frac{1}{2}$ for $\ell=2$ (see $1$-character solutions).
So, substituting $\chi=(j^{2+})^x$ about $j^{2+}=0$, the $j^{2+}$-indicial equation (which is basically, coefficient of $(j^{2+})^x$ $=\left.0\right|_{j^{2+}=0}$) reads, \begin{align} x^2-\frac{3}{4}x - \frac{\mu_3}{256} = 0, \label{jindip2l2F2} \end{align} which yields $\mu_3 = -32$ for $x=\frac{1}{2}$. The other root of the above equation with $\mu_3=-32$ is $x=\frac{1}{4}$. \\ Putting $i=1$ in Eq.(\ref{recursion_l2F2_2char}) yields, \begin{align} &-128 + 5 m_1 - 248 \alpha_0 + 8 m_1 \alpha_0 - 192 \alpha_0^2 = 0, \label{m1al}\\ &384 - 15 m_1 - 31 N + m_1 N + N^2 = 0 \, \, \, \, (\text{where}, \ N:=-24\alpha_0 = c). \label{diophNl2p2} \end{align} Applying the integer root theorem to Eq.(\ref{diophNl2p2}) we note that $N=c\in\mathbb{Z}$.\\ Using this identification we can now express $m_1$ in terms of $N$, \begin{align} m_1 = (16-N)+\frac{144}{15-N}, \label{m1N_p2F2l2} \end{align} which implies that $0<c< 15$. Scanning this space of $c$, we observe that there exist the following admissible solutions. \begin{itemize} \item $(c,h)=\left(3,0\right)$: \begin{align} \chi_0 &= q^{-1/8}(1 + 25q + 51q^2 + 196q^3 + 297q^4 + 780q^5 + 1223q^6 \nonumber\\ &+ 2551q^7 + 3798q^8 + 7201q^9 + 10582q^{10} + \mathcal{O}(q^{11})), \end{align} where, since $h=0$, both characters are identical. This is a $1$-character solution which surprisingly is not ``yet'' any $(1,l)$ solution.
\item $(c,h)=\left(6,\frac{1}{4}\right)$: \begin{align} &\chi_0 = (j^{2+})^{\frac{1}{4}}, \\ &\chi_{\frac{1}{4}} = \text{unstable} \nonumber \end{align} \item $(c,h)=\left(7,\frac{1}{3}\right)$: \begin{align} &\chi_0 = (j^{2+})^{\frac{1}{4}}\otimes \chi_0^{{}^{(2)}\mathcal{W}_1} \nonumber\\ &\chi_{\frac{1}{3}} = (j^{2+})^{\frac{1}{4}}\otimes \chi_{\frac{1}{3}}^{{}^{(2)}\mathcal{W}_1} \end{align} \item $(c,h)=\left(9,\frac{1}{2}\right)$: \begin{align} &\chi_0 = (j^{2+})^{\frac{1}{4}}\otimes \chi_0^{{}^{(2)}\mathcal{W}_3} \nonumber\\ &\chi_{\frac{1}{2}} = (j^{2+})^{\frac{1}{4}}\otimes \chi_{\frac{1}{2}}^{{}^{(2)}\mathcal{W}_3} \end{align} \item $(c,h)=\left(11,\frac{2}{3}\right)$: \begin{align} &\chi_0 = (j^{2+})^{\frac{1}{4}}\otimes \chi_0^{{}^{(2)}\mathcal{W}_5} \nonumber\\ &\chi_{\frac{2}{3}} = (j^{2+})^{\frac{1}{4}}\otimes \chi_{\frac{2}{3}}^{{}^{(2)}\mathcal{W}_5} \end{align} \item $(c,h)=\left(12,\frac{3}{4}\right)$: \begin{align} &\chi_0 = (j^{2+})^{\frac{1}{2}}, \\ &\chi_{\frac{3}{4}} = \text{unstable} \nonumber \end{align} \end{itemize} Let ${}^{(2)}\mathcal{W}_7$, ${}^{(2)}\mathcal{W}_9$ and ${}^{(2)}\mathcal{W}_{11}$ denote the $c\in\{7,9,11\}$ admissible character-like (2,2) solutions. Note that here also we have some bilinear identities, \begin{align} &(j^{2+})^{\frac{3}{4}} = \chi_0^{{}^{(2)}\mathcal{W}_7}\chi_0^{{}^{(2)}\mathcal{W}_{11}} + 2 \, \chi_{\frac{1}{3}}^{{}^{(2)}\mathcal{W}_7}\chi_{\frac{2}{3}}^{{}^{(2)}\mathcal{W}_{11}}, \label{bilin2} \\ &(j^{2+})^{\frac{3}{4}} = \chi_0^{{}^{(2)}\mathcal{W}_9}\chi_0^{{}^{(2)}\mathcal{W}_9} + 16 \, \chi_{\frac{1}{2}}^{{}^{(2)}\mathcal{W}_9}\chi_{\frac{1}{2}}^{{}^{(2)}\mathcal{W}_9}, \label{bilin3} \end{align} where Eq.(\ref{bilin3}) is a self-dual relation. Note, Eqs.(\ref{bilin2}) and (\ref{bilin3}) are nothing but $(j^{2+})^{\frac{1}{2}}$ multiplied with Eqs.(\ref{bilin0}) and (\ref{bilin1}) respectively.
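Before moving on, the tensor-product identifications above can be cross-checked at first order against Eq.(\ref{m1N_p2F2l2}): consistency of central charges forces the $(j^{2+})^{\frac{1}{4}}$ factor, and $m_1$ of $(j^{2+})^{\frac{1}{4}}\otimes\chi$ must equal $26+m_1(\chi)$. A rough sketch (names ours):

```python
# First-order check of the (2,2) identifications via
# m_1 = (16 - N) + 144/(15 - N) from the l=2 Diophantine relation.
def m1(N):
    assert 144 % (15 - N) == 0  # integrality of m_1
    return (16 - N) + 144 // (15 - N)

# (c, m_1) of the (2,0) building blocks W_1, W_3, W_5:
for c_w, m1_w in [(1, 1), (3, 5), (5, 15)]:
    c = c_w + 6                 # central charges add under tensoring with (j^{2+})^{1/4}
    print(c, m1(c), 26 + m1_w)  # the last two entries agree: 27, 31, 41
```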
More bilinear pairs can be constructed by multiplying Eqs.(\ref{bilin0}) and (\ref{bilin1}) by $(j^{2+})^{\frac{1}{4}}$. \\ Note that we can make use of the fact that $(j^{2+})^x$ with $x=\frac{3}{4}$ can also appear as a solution to the above $(2,2)$ MLDE, and hence we get for this value of $x$, $\mu_3=0$. \\ For $\mu_3=0$, we get from the $i=1$ equation: $m_1=\frac{N(N-31)}{15-N}$ where $N=-24\alpha_0$. This gives the allowed range of central charge as $15<c<32$. Scanning this space of $c$ we get the following admissible solutions. \begin{itemize} \item $(c,h)=\left(19,\frac{4}{3}\right)$: \begin{align} \chi_0 &= q^{-19/24}(1 + 57q + 2147q^2 + 31540q^3 + 260243q^4 + 1691798q^5 \nonumber\\ &+ 8887877q^6 + 41091167q^7 + 168938614q^8 + \mathcal{O}(q^{9})), \nonumber\\ \chi_{\frac{4}{3}} &= q^{13/24}(133 + 2717q + 33839q^2 + 246468q^3 + 1506510q^4 + 7478629q^5 + 33354310q^6 \nonumber\\ &+ 132591975q^7 + 489341675q^8 + \mathcal{O}(q^{9})) \end{align} \item $(c,h)=\left(21,\frac{3}{2}\right)$: \begin{align} \chi_0 &= q^{-7/8}(1 + 35q + 2394q^2 + 40873q^3 + 405426q^4 + 2946132q^5 \nonumber\\ &+ 17381133q^6 + 88016962q^7 + 395953299q^8 + \mathcal{O}(q^{9})), \nonumber\\ \chi_{\frac{3}{2}} &= q^{5/8}(7 + 161q + 2160q^2 + 17409q^3 + 115003q^4 + 619101q^5 + 2962183q^6 \nonumber\\ &+ 12628973q^7 + 49681296q^8 + \mathcal{O}(q^{9})) \end{align} \item $(c,h)=\left(23,\frac{5}{3}\right)$: \begin{align} \chi_0 &= q^{-23/24}(1 + 23q + 3335q^2 + 67068q^3 + 793776q^4 + 6461689q^5 \nonumber\\ &+ 42601060q^6 + 236116965q^7 + 1157910689q^8 + \mathcal{O}(q^{9})), \nonumber\\ \chi_{\frac{5}{3}} &= q^{17/24}(506 + 12903q + 185725q^2 + 1639463q^3 + 11650696q^4 + 67590238q^5 + 345549517q^6 \nonumber\\ &+ 1572505630q^7 + 6570405101q^8 + \mathcal{O}(q^{9})) \end{align} \end{itemize} For $\mu_3=-112$, we get from the $i=1$ equation: $m_1=\frac{N(N-10)}{18-N}$ where $N=-24\alpha_0$. This gives the allowed range of central charge as $9<c<18$.
Scanning this space of $c$ we find no admissible solutions. We did not find any quasi-character solutions either. \subsubsection{Non-trivial bilinear pairs} Let us give here some more bilinear pairs which are non-trivial in the sense that they cannot be derived from Eqs.(\ref{bilin0}) and (\ref{bilin1}). Let ${}^{(2)}\mathcal{W}_{19}$, ${}^{(2)}\mathcal{W}_{21}$ and ${}^{(2)}\mathcal{W}_{23}$ denote the $c\in\{19,21,23\}$ admissible character-like (2,2) solutions (obtained with $\mu_3=0$). \begin{align} &j^{2+} - 32 = \chi_0^{{}^{(2)}\mathcal{W}_5}\chi_0^{{}^{(2)}\mathcal{W}_{19}} + 2 \, \chi_{\frac{2}{3}}^{{}^{(2)}\mathcal{W}_5}\chi_{\frac{4}{3}}^{{}^{(2)}\mathcal{W}_{19}}, \label{bilin_non_0} \\ &j^{2+} - 64 = \chi_0^{{}^{(2)}\mathcal{W}_3}\chi_0^{{}^{(2)}\mathcal{W}_{21}} + 256 \, \chi_{\frac{1}{2}}^{{}^{(2)}\mathcal{W}_3}\chi_{\frac{3}{2}}^{{}^{(2)}\mathcal{W}_{21}}, \label{bilin_non_1} \\ &j^{2+} - 80 = \chi_0^{{}^{(2)}\mathcal{W}_1}\chi_0^{{}^{(2)}\mathcal{W}_{23}} + 2 \, \chi_{\frac{1}{3}}^{{}^{(2)}\mathcal{W}_1}\chi_{\frac{5}{3}}^{{}^{(2)}\mathcal{W}_{23}}, \label{bilin_non_2} \end{align} where the above bilinears are for the following relation: $(2,0)\overset{c^\mathcal{H}=24}{\underset{n_1=2}{\longleftrightarrow}}(2,2)$.
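These identities can be verified order by order from the $q$-expansions listed in the previous subsections; here is a rough numerical sketch of Eq.(\ref{bilin_non_1}) (coefficients copied from above):

```python
# chi_0^{W3} chi_0^{W21} + 256 chi_{1/2}^{W3} chi_{3/2}^{W21} should equal
# j^{2+} - 64 = q^{-1}(1 + 40q + 4372q^2 + 96256q^3 + 1240002q^4 + ...).
def mul(a, b):
    n = min(len(a), len(b))
    return [sum(a[k] * b[i - k] for k in range(i + 1)) for i in range(n)]

w3_0  = [1, 5, 11, 20, 41]            # q^{-1/8} prefactor stripped
w3_h  = [1, 1, 5, 6]                  # q^{3/8} prefactor stripped
w21_0 = [1, 35, 2394, 40873, 405426]  # q^{-7/8} prefactor stripped
w21_h = [7, 161, 2160, 17409]         # q^{5/8} prefactor stripped

lhs = mul(w3_0, w21_0)                       # overall prefactor q^{-1}
cross = [256 * c for c in mul(w3_h, w21_h)]  # overall prefactor q^{1} = q^{-1} * q^2
for i, c in enumerate(cross):
    if i + 2 < len(lhs):
        lhs[i + 2] += c
print(lhs)  # [1, 40, 4372, 96256, 1240002]
```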
\\ Furthermore, we also have, \begin{align} &(j^{2+})^{\frac{3}{4}}(j^{2+}-102) = \chi_0^{{}^{(2)}\mathcal{W}_{19}}\chi_0^{{}^{(2)}\mathcal{W}_{23}} + 2 \, \chi_{\frac{4}{3}}^{{}^{(2)}\mathcal{W}_{19}}\chi_{\frac{5}{3}}^{{}^{(2)}\mathcal{W}_{23}}, \label{bilin_non_3} \\ &(j^{2+})^{\frac{3}{4}}(j^{2+}-112) = \chi_0^{{}^{(2)}\mathcal{W}_{21}}\chi_0^{{}^{(2)}\mathcal{W}_{21}} + 256 \, \chi_{\frac{3}{2}}^{{}^{(2)}\mathcal{W}_{21}}\chi_{\frac{3}{2}}^{{}^{(2)}\mathcal{W}_{21}}, \label{bilin_non_4} \end{align} where the above bilinears are for the following relation: $(2,2)\overset{c^\mathcal{H}=42}{\underset{n_1=3}{\longleftrightarrow}}(2,2)$.\footnote{A similar analysis should be performed with the $\frac{k}{12}$ derivative operator (the usual one). In this case, we can tune the $\mu_1$ parameter to even make this MLDE consistent with the Riemann-Roch. This is because in this case there exists a weight $2$ meromorphic form contribution to the MLDE due to the allowance of poles in the coefficient functions present in the MLDE.} \subsubsection{$\ell=3$ analysis} Now let us move to the $\ell=3$ case. The MLDE reads (with the $\frac{k}{8}$ derivative operator)\footnote{The generic form of the $(2,3)$ MLDE is exactly the same as the $(2,1)$ MLDE; only the values of the parameters in the MLDE change.}, \begin{align} &\left[\Tilde{D}^2+\left(\mu_1\frac{(E_4^{2+})^2}{E_6^{2+}}+\mu_2\frac{\Delta_2}{E_6^{2+}}\right)\Tilde{D}+\mu_3\frac{E_{10}^{2+}}{E_{6}^{2+}}\right]\chi = 0, \label{mldeF2_2char_l3} \\ &\left[E_6^{2+}\Tilde{D}^2 + \left(\mu_1(E_4^{2+})^2 + \mu_2\Delta_2\right)\Tilde{D} + \mu_3E_{10}^{2+}\right]\chi = 0, \label{mldeF2_2char_l3_aliter} \\ &\left[E_6^{2+}q^2\partial_{q}^2 + \left(E_6^{2+} - \frac{E_6^{2+}E_2^{2+}}{4} + \mu_1(E_4^{2+})^2 + \mu_2\Delta_2\right)q\partial_q + \mu_3 E_{10}^{2+} \right]\chi = 0.
\label{qmldeF2_2char_l3} \end{align} From the above we obtain the following recursion relation, \begin{align} &\sum\limits_{k=0}^i\left[E_{6,k}^{2+}(\alpha^j+i-k)(\alpha^j+i-k-1) + E_{6,k}^{2+}(\alpha^j+i-k)-\frac{(E_{6}^{2+}E_{2}^{2+})_{,k}}{4}(\alpha^j+i-k)\right. \nonumber\\ &\left.+ \mu_1 (E_{4}^{2+})^2_{,k} (\alpha^j+i-k) + \mu_2\Delta_{2,k}(\alpha^j+i-k) + \mu_3 E_{10,k}^{2+}\right]a^j_{i-k} = 0, \label{recursion_l3F2_2char} \end{align} which gives the following indicial equation, \begin{align} \alpha^2 + \left(\mu_1-\frac{1}{4}\right)\alpha + \mu_3 = 0. \label{indicial_l3F2_2char} \end{align} From Riemann-Roch (sum of exponents $= -\frac{1}{2}$), we see that $\mu_1=\frac{3}{4}$. \\ Now let us determine the value of $\mu_2$ by transforming the $\tau$-MLDE in Eq.(\ref{mldeF2_2char_l3}) into the corresponding $j^{2+}$-MLDE (about $\tau=\rho_2$), \begin{align} \left[(j^{2+})^2\partial_{j^{2+}}^2 + \frac{j^{2+}}{2}\partial_{j^{2+}} - \frac{j^{2+}}{j^{2+}-256}\left(64+\mu_2\right)\partial_{j^{2+}} + \frac{\mu_3 j^{2+}}{j^{2+}-256}\right]\chi = 0, \label{j2+mlde_p2_l3} \end{align} where we have set $\mu_1=\frac{3}{4}$ from the $\ell=3$ indicial equation (see Eq.(\ref{indicial_l3F2_2char})). Now, note that about $\tau=\rho_2$, we have $\chi_0\sim (j^{2+})^x$ and $j^{2+}\to 0$, with $x=\frac{3}{4}$ for $\ell=3$ (see $1$-character solutions). So, substituting $\chi=(j^{2+})^x$ about $j^{2+}=0$, the $j^{2+}$-indicial equation (which is basically, coefficient of $(j^{2+})^x$ $=\left.0\right|_{j^{2+}=0}$) reads, \begin{align} x^2-\frac{x}{2}+\frac{1}{4}+\frac{\mu_2}{256} = 0, \label{jindip2l3F2} \end{align} which yields $\mu_2 = -112$ for $x=\frac{3}{4}$. A similar analysis gives $(x,\mu_2)=(\frac{1}{4},-48)$ and $(x,\mu_2)=(\frac{1}{2},-64)$ as solutions of Eq.(\ref{jindip2l3F2}).\\ For $\mu_2=-48$, we get from the $i=1$ equation: $m_1=\frac{N(N+22)}{18-N}$ where $N=-24\alpha_0$. This gives the allowed range of central charge as $0<c<18$.
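The scan for $\mu_2=-48$ reduces, via the integer root theorem, to a divisor condition; a minimal sketch (names ours):

```python
# m_1 = N(N+22)/(18-N) for 0 < N < 18.  Since N = 18 mod (18-N), one has
# N(N+22) = 18*40 = 720 mod (18-N): integrality forces (18-N) | 720.
hits = {N: N * (N + 22) // (18 - N)
        for N in range(1, 18) if 720 % (18 - N) == 0}
print(hits[6])  # -> 14, matching the admissible c=6 solution with chi_0 = 1 + 14q + ...
```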
Scanning this space of $c$ we get the following admissible solutions. \begin{itemize} \item $(c,h)=\left(6,0\right)$: \begin{align} \chi_0 &= q^{-1/4}(1 + 14q + 331q^2 + 4506q^3 + 112795q^4 + 3965662q^5 \nonumber\\ & + 175520124q^6 + 8718679572q^7 + 470448806341q^8 + \mathcal{O}(q^{9})), \nonumber\\ \chi_{\text{non-id}} &= \text{unstable} \end{align} where the above is a $1$-character solution (not yet a $(1,l)$ solution). \end{itemize} For $\mu_2=-64$, we get from the $i=1$ equation: $m_1=\frac{N(N+14)}{18-N}$ where $N=-24\alpha_0$. This gives the allowed range of central charge as $0<c<18$. Scanning this space of $c$ we get the following admissible solutions. \begin{itemize} \item $(c,h)=\left(6,0\right)$: \begin{align} \chi_0 &= q^{-1/4}(1 + 10q + 367q^2 + 6422q^3 + 169555q^4 + 6322026q^5 \nonumber\\ & + 286760396q^6 + 14591718924q^7 + 802225180229q^8 + \mathcal{O}(q^{9})), \nonumber\\ \chi_{\text{non-id}} &= \text{unstable} \end{align} where the above is a $1$-character solution (not yet a $(1,l)$ solution). \item $(c,h)=\left(12,\frac{1}{2}\right)$: \begin{align} \chi_0 &= (j^{2+})^{\frac{1}{2}}, \nonumber\\ \chi_{\text{non-id}} &= 1 \end{align} \end{itemize} \subsection{$j^{2+}$ analysis of $(2,0)$ solutions} With Eqs.(\ref{j1_2}) and (\ref{j1_21}) we can recast the MLDE in Eq.(\ref{mlde20_0}) as, \begin{align} \left[(j^{2+})^2\left(\frac{E_6^{2+}}{E_4^{2+}}\right)^2\partial^2_{j^{2+}} + \left(\frac{1}{2}j^{2+}\left(\frac{E_6^{2+}}{E_4^{2+}}\right)^2+\frac{3j^{2+}}{4}E_4^{2+}-64E_4^{2+}\right)\partial_{j^{2+}} + \mu_1E_4^{2+}\right]\chi = 0. \label{j2+mlde} \end{align} Now let us divide Eq.(\ref{j2+mlde}) by $E_4^{2+}$ and then use Eq.(\ref{re_F2}) to get (about $\tau=\rho_2$), \begin{align} \left[j^{2+}\left(j^{2+}-256\right)\partial_{j^{2+}}^2 + \left(\frac{5j^{2+}}{4}-192\right)\partial_{j^{2+}} + \mu_1\right]\chi = 0. \label{j2+mlde_sim} \end{align} Define $J := \frac{j^{2+}}{256}$.
Then, Eq.(\ref{j2+mlde_sim}) becomes, \begin{align} \left[J(J-1)\partial_J^2 + \frac{1}{4}\left(5J-3\right)\partial_J + \mu_1\right]\chi = 0. \label{J2+mlde} \end{align} Eq.(\ref{J2+mlde}) is a hypergeometric differential equation whose solution is (about $\tau=\rho_2$), \begin{align} \chi(J) = a \, \, {}_2F_1\left(\alpha,\beta;\frac{3}{4};J\right) + b \, \, J^{1/4} {}_2F_1\left(\beta+\frac{1}{4},\alpha+\frac{1}{4};\frac{5}{4};J\right), \label{sonlF220} \end{align} where $\alpha+\beta=\frac{1}{4}$ and $\alpha\beta=\mu_1$. Furthermore, $a$ and $b$ are integration constants. \subsection{(3,0) MLDE} The recursion relation reads, \begin{align} &\left[(\alpha^j+i)(\alpha^j+i-1)(\alpha^j+i-2) + 3(\alpha^j+i)(\alpha^j+i-1) + (\alpha^j+i)\right]a^j_i \nonumber\\ &+ \sum\limits_{k=0}^i\left[-\frac{3}{4}E_{2,k}^{2+}a^j_{i-k}(\alpha^j+i-k)(\alpha^j+i-k-1) - \frac{3}{4}E_{2,k}^{2+}a^j_{i-k}(\alpha^j+i-k)\right. \nonumber\\ &\left.- \frac{1}{32}E_{4,k}^{2+}a^j_{i-k}(\alpha^j+i-k) - \frac{3}{32}(E_{2}^{2+})^2_{,k}a^j_{i-k}(\alpha^j+i-k) + \mu_1E_{4,k}^{2+}a^j_{i-k}(\alpha^j+i-k) + \mu_2E_{6,k}^{2+}a^j_{i-k}\right] = 0. \label{recursion3_2} \end{align} From this, the indicial equation reads, \begin{align} \alpha^3 - \frac{3}{4}\alpha^2 + \left(\mu_1-\frac{1}{8}\right)\alpha + \mu_2 = 0. \label{indicial3_2} \end{align} Also, \begin{align} &-40 m_1^2 + 15 m_1 m_2 - 820 m_1 \alpha_0 - 376 m_1^2 \alpha_0 + 1880 m_2 \alpha_0 + 18 m_1 m_2 \alpha_0 + 15840 \alpha_0^2 - 11064 m_1 \alpha_0^2 \nonumber\\ &- 1104 m_1^2 \alpha_0^2 + 3216 m_2 \alpha_0^2 - 206784 \alpha_0^3 - 27456 m_1 \alpha_0^3 - 768 m_1^2 \alpha_0^3 + 1536 m_2 \alpha_0^3 + 225792 \alpha_0^4 = 0.
\end{align} The Diophantine equation reads, \begin{align} &-338829120 m_1^2 + 127060920 m_1 m_2 + 5906460 m_1 N + 2708328 m_1^2 N - 13541640 m_2 N \nonumber\\ &- 129654 m_1 m_2 N + 97020 N^2 - 67767 m_1 N^2 - 6762 m_1^2 N^2 + 19698 m_2 N^2 + 1077 N^3 \nonumber\\ &+ 143 m_1 N^3 + 4 m_1^2 N^3 - 8 m_2 N^3 + N^4 = 0, \label{Diop1_3_2} \end{align} \section{$\mathbf{\Gamma^{+}_0(3)}$} \subsection{Riemann-Roch} The valence formula for $\Gamma^{+}_0(3)$ reads \cite{2006math......7408M}, \begin{align}\label{valence_Fricke_3} \nu_{\infty}(f) + \frac{1}{2}\nu_{\tfrac{i}{\sqrt{3}}}(f) + \frac{1}{6}\nu_{\rho_{3}}(f) + \sum\limits_{\substack{p\in\Gamma_{0}^{+}(3)\backslash\mathbb{H}^{2}\\ p\neq \tfrac{i}{\sqrt{3}},\rho_{3}}}\nu_{p}(f) = \frac{k}{6}, \end{align} where $\rho_3 := -\frac{1}{2}+\frac{i}{2\sqrt{3}}$ and $\frac{i}{\sqrt{3}}$ is the Fricke involution point. From Proposition 4.3 of \cite{2006math......7408M}, we note that $E^{3+}_4$ has a zero at $\tau=\rho_3$ of order $4$, and $E^{3+}_6$ has a zero at $\tau=\rho_3$ of order $3$ and a zero at $\tau=\frac{i}{\sqrt{3}}$ of order 1. Also, $E^{3+}_8$ has a zero at $\tau=\rho_3$ of order $2$.\\ The Riemann-Roch relation for $\Gamma^{+}_0(3)$ reads, \begin{align} \sum_{i=0}^{n-1} \, \alpha_i = \frac{n(n-1)}{6} - \frac{\ell}{3}, \label{RR3} \end{align} where $n$ is the number of linearly independent characters, the $\alpha_i$s are the exponents and $\ell$ is the Wronskian index. \subsection{The operator} Consider the following transformations (where $\gamma\in\Gamma^{+}_0(3)$), \begin{align} &3E_2(3\gamma\tau) = 3(c\tau+d)^2E_2(3\tau) + \frac{6c}{i\pi}(c\tau+d), \label{E22tau_3} \\ &E_2(\gamma\tau) = (c\tau+d)^2E_2(\tau) + \frac{6c}{i\pi}(c\tau+d), \label{E2tau_3} \\ \text{leading to,} \, \, &E_2^{3+}(\gamma\tau) = (c\tau+d)^2E_2^{3+}(\tau) + \frac{3c}{i\pi}(c\tau+d) \label{E22+tau_3}.
\end{align} Eq.(\ref{E22+tau_3}) motivates us to define the following derivative operator in the space of $\Gamma^{+}_0(3)$, acting on weight $k$ forms, \begin{align} \Tilde{D}_k := \Tilde{\partial}_\tau - \frac{k}{6}E_2^{3+}(\tau), \label{deriv_op3} \end{align} where $\Tilde{\partial}_\tau := \frac{1}{2i\pi}\partial_\tau = q\partial_q$. \\ One can check that for any modular form in $\Gamma^{+}_0(3)$, $\Tilde{D}_k$ maps weight $k$ forms to weight $k+2$ forms. Now, if we construct the MLDE in the space of $\Gamma^{+}_0(3)$ with the above derivative operator, then the MLDE will be consistent with the Riemann-Roch formula for $\Gamma^{+}_0(3)$. However, if we use the usual \textit{Serre-Ramanujan} derivative to write down the MLDE in $\Gamma^{+}_0(3)$, then the resulting MLDE will not be consistent with the Riemann-Roch formula for $\Gamma^{+}_0(3)$.\\ We note below some interesting properties of the above derivative operator, \begin{align} &\left(\Tilde{\partial}_{\tau} - \frac{1}{6}E_2^{3+}\right)E_2^{3+} = -\frac{1}{6}E_4^{3+}, \label{De2_3} \\ &\Tilde{D}E_4^{3+} = -\frac{2}{3}E_6^{3+}, \label{De4_3} \\ &\Tilde{D}E_6^{3+} = -(E_4^{3+})^2 + 54\Delta_{3,8}, \label{De6_3} \\ &\Tilde{D}\Delta_{3,8} = -\frac{1}{3}\Delta_{3,10}, \label{Ddel38} \\ \text{also,} \, \, &\Delta_{3,10}E_4^{3+} = \Delta_{3,8}E_{6}^{3+}, \label{prod_eq} \\ \text{also,} \, \, &(E_4^{3+})^3 - (E_6^{3+})^2 = 108\Delta_{3,8}E_4^{3+}, \label{prod_eq1} \end{align} where $\Delta_{3,8} = \frac{41}{1728}\left((E_4^{3+})^2-E_8^{3+}\right)$ and $\Delta_{3,10} = \frac{61}{432}\left(E_4^{3+}E_6^{3+}-E_{10}^{3+}\right)$. \\ Now let us define the ``Hauptmodul'' for $\Gamma^{+}_0(3)$, \begin{align} j^{3+} = \frac{(E_4^{3+})^2}{\Delta_{3,8}}. \label{def_H_3} \end{align} We also have, \begin{align} &\frac{j^{3+}}{j^{3+}-108} = \frac{(E_4^{3+})^3}{(E_6^{3+})^2}.
\label{j-rel-e4e6_F3} \end{align} The above relation also shows, \begin{align} &\lim\limits_{\tau\to\frac{i}{\sqrt{3}}} j^{3+}(\tau) = 108, \label{j2_1_F3} \\ &\lim\limits_{\tau\to\rho_3} j^{3+}(\tau) = 0, \label{j2_2_F3} \\ &\lim\limits_{\tau\to i\infty} j^{3+}(\tau) = \infty. \label{j2_3_F3} \end{align} Now it is straightforward to show that, \begin{align} &\Tilde{\partial}_\tau = -j^{3+}\frac{E_6^{3+}}{E_4^{3+}}\partial_{j^{3+}}, \label{j1_2_F3} \\ &\Tilde{D}^2 = (j^{3+})^2\left(\frac{E_6^{3+}}{E_4^{3+}}\right)^2\partial_{j^{3+}}^2 + \left(\frac{E_4^{3+}}{2}+\frac{5}{6}\left(\frac{E_6^{3+}}{E_4^{3+}}\right)^2\right)j^{3+}\partial_{j^{3+}}. \label{j1_2_F3_1} \end{align} \subsection{$\mathbf{(1,\ell\neq 0)}$ MLDE}\label{sec1_F3} Let us do here some $1$-character computations. From Riemann-Roch we get that $c=8\ell$. \\ $\ell=0$ is trivial, so let us start with $\ell=1$. For $\ell=1$, the RHS of the valence formula reads $\frac{1}{3}$ (since $k=2$ always). The MLDE takes the following form, \begin{align} \left[\Tilde{D} + \mu_1\frac{E_4^{3+}E_6^{3+}}{(E_4^{3+})^2+\mu\Delta_{3,8}} + \mu_2\frac{\Delta_{3,10}}{(E_4^{3+})^2+\mu\Delta_{3,8}}\right]\chi = 0. \label{F3_l1_p1} \end{align} Now, using the indicial equation, we can set $\mu_1=\frac{1}{3}$. Let us take a guess, $\chi=(j^{3+})^{\frac{1}{3}}$. One can readily check that this is a solution with $\mu_2=\frac{\mu}{3}$.\\ Now let us analyse $\ell=2$. The MLDE takes the following form, \begin{align} &\left(\Tilde{D}+\mu_1\frac{E_6^{3+}}{E_4^{3+}}\right)\chi = 0, \label{1char_F3_l2} \\ &\left(E_4^{3+}\Tilde{D}+\mu_1E_6^{3+}\right)\chi = 0. \label{1char_F3_l2_aliter} \end{align} Note that in the above $\Tilde{D}=q\partial_q$. Also, the indicial equation ($\alpha_0+\mu_1=0$) dictates $\mu_1=\frac{2}{3}$.\\ Now let us take a guess, $\chi=(j^{3+})^{\frac{2}{3}}$. Then, $\Tilde{\partial}_\tau (j^{3+})^{\frac{2}{3}} = -\frac{2}{3}(j^{3+})^{\frac{2}{3}}\frac{E_6^{3+}}{E_4^{3+}}$.
Note that this $\chi$ indeed satisfies Eq.(\ref{1char_F3_l2}) with $\mu_1=\frac{2}{3}$.\\ Now let us analyse $\ell=3$. We see from the Riemann-Roch relation that this is the \textit{bulk-pole} MLDE. The MLDE takes the following form, \begin{align} &\left(\Tilde{D}+\mu_1\frac{(E_4^{3+})^2}{E_6^{3+}} + \mu_2\frac{\Delta_{3,8}}{E_6^{3+}}\right)\chi = 0, \label{1char_F3_l3} \\ &\left(E_6^{3+}\Tilde{D}+\mu_1(E_4^{3+})^2 + \mu_2\Delta_{3,8}\right)\chi = 0. \label{1char_F3_l3_aliter} \end{align} Note that in the above $\Tilde{D}=q\partial_q$. Also, the indicial equation ($\alpha_0+\mu_1=0$) dictates $\mu_1=1$.\\ Now let us take a guess, $\chi=j^{3+}+\mathcal{N}$. Then, substituting this in the differential equation in Eq.(\ref{1char_F3_l3_aliter}) we get, \begin{align} &\left(E_6^{3+}\Tilde{D}+\mu_1(E_4^{3+})^2 + \mu_2\Delta_{3,8}\right)(j^{3+}+\mathcal{N}) = \Delta_{3,8}\left[(108+\mu_2+\mathcal{N})j^{3+}+\mathcal{N}\mu_2\right] \end{align} So, we see that $j^{3+}+\mathcal{N}$ is a solution to the above differential equation iff, \begin{align} &\mu_2=0 \, \, \, \text{and} \, \, \, \mathcal{N} = -108 \longrightarrow \text{soln.:} \, \, j^{3+}-108, \label{sol1} \\ &\mu_2=-108 \, \, \, \text{and} \, \, \, \mathcal{N} = 0 \longrightarrow \text{soln.:} \, \, j^{3+}, \label{sol2} \end{align} Note that $j^{3+}-108$ has $m_1<0$ and hence is not an admissible solution. So, the only admissible solution is $j^{3+}$, which happens when $\mu_2=-108$ and $\mathcal{N}=0$. \\ So, here we do not get $j^{3+}+\mathcal{N}$, for generic $\mathcal{N}$, as a solution to the \textit{bulk-pole} MLDE. This is different from the $\text{SL}(2,\mathbb{Z})$ and $\Gamma^{+}_0(2)$ cases, and probably this happens for the following reason.
In both $\text{SL}(2,\mathbb{Z})$ and $\Gamma^{+}_0(2)$, in the \textit{bulk-pole} MLDE ($\ell=6$ and $\ell=4$, respectively), the \textit{bulk-pole} appears in the denominator ($D^r = E_4^3+\mu\Delta$ and $D^r=(E_4^{2+})^2+\mu\Delta_2$, respectively) while in $\Gamma^{+}_0(3)$ the \textit{bulk-pole} appears in the numerator ($N^r=(E_4^{3+})^2+\mu\Delta_{3,8}$). \subsection{2-character MLDE} \subsubsection{$\ell=0$ analysis} With the derivative operator defined in Eq.(\ref{deriv_op3}), let us now set up the (2,0) MLDE, \begin{align} \left(\Tilde{D}^2 + \mu_1E_4^{3+}\right)\chi = 0, \label{mlde20_3} \end{align} where $\chi$ is a weight 0 modular function for $\Gamma^{+}_0(3)$. Now substituting the Fourier expansions of the relevant terms in Eq.(\ref{mlde20_3}) we get the following recursion relation, \begin{align} &\left[(\alpha+i)^2 - \frac{1}{3}(\alpha+i) + \mu_1\right]a_i = \sum\limits_{k=1}^i\left[\frac{1}{3}(\alpha+i-k)E^{3+}_{2,k}-\mu_1E_{4,k}^{3+}\right]a_{i-k}, \label{recursion3} \end{align} where the $a_i$s are Fourier coefficients of the character $q$-series expansion.\\ The indicial equation is obtained by setting $i=0$ and $k=0$ in Eq.(\ref{recursion3}), \begin{align} \alpha^2 - \frac{1}{3}\alpha + \mu_1 = 0. \label{indicial3} \end{align} Setting $\alpha=\alpha_0$ in Eq.(\ref{indicial3}) we get, \begin{align} \mu_1 = \frac{1}{3}\left(\alpha_0-3\alpha_0^2\right). \label{param3} \end{align} Now setting $i=1$ in Eq.(\ref{recursion3}) we get the following quadratic equation in $\alpha_0$, \begin{align} 4a_1 + 60\alpha_0 + 12a_1\alpha_0 - 144\alpha_0^2 = 0, \label{m1_eqn_3_l2} \end{align} which can be recast as below if we identify $N=-12\alpha_0=c/2$, \begin{align} -4a_1 + 5N + a_1N + N^2 = 0, \label{N_eqn_3} \end{align} $a_1$ can now be expressed in terms of $N$ as, \begin{align} a_1 = \frac{N(N+5)}{4-N}, \label{a1_eqn_3} \end{align} which sets an upper bound on the central charge: $c<8$.
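The integrality requirement on $a_1$ can be scanned directly over the allowed window. A minimal sketch in plain Python (exact arithmetic via the standard library), keyed by the central charge $c=2N$:

```python
from fractions import Fraction

# a1 = N(N+5)/(4-N) with N = c/2; admissibility requires a1 to be a
# non-negative integer, and N < 4 (i.e. c < 8) keeps the denominator positive
admissible = {}
for N in range(1, 4):
    a1 = Fraction(N * (N + 5), 4 - N)
    if a1.denominator == 1 and a1 >= 0:
        admissible[2 * N] = int(a1)  # key: central charge c = 2N

print(admissible)  # {2: 2, 4: 7, 6: 24}
```

All three values of $N$ give integer $a_1$, matching the first Fourier coefficients of the solutions discussed below.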
Furthermore, applying the integer root theorem to Eq.(\ref{N_eqn_3}), we see that $c=2N$ and the central charge in this case is restricted to be an even integer. \\ Now checking higher orders (up to $\mathcal{O}(q^{5000})$) for both characters, for $0<c<8$ (even $c$), we find the following admissible character solutions, \begin{itemize} \item $(c,h)=\left(2,\frac{1}{2}\right)$: \begin{align} &\chi_0 = q^{-1/12}(1 + 2q + 2q^2 + 4q^3 + 5q^4 + 6q^5 + 11q^6 + 14q^7 + 17q^8 + 24q^9 + 31q^{10} + \mathcal{O}(q^{11})), \nonumber\\ &\chi_{\frac{1}{2}} = q^{5/12}(1 + q^2 + 2q^3 + 2q^4 + 2q^5 + 5q^6 + 4q^7 + 7q^8 + 10q^9 + 11q^{10} + \mathcal{O}(q^{11})) \end{align} \end{itemize} We also find an \textit{identity only} admissible solution, which only has a nice identity character but an unstable non-trivial character. This is found at $(c,h)=\left(4,\frac{2}{3}\right)$. \begin{align} \chi_0 &= q^{-1/6}(1 + 7q + 8q^2 + 22q^3 + 42q^4 + 63q^5 + 106q^6 + 190q^7 \nonumber\\ &+ 267q^8 + 428q^9 + 652q^{10} + \mathcal{O}(q^{11})) \end{align} For $c=6$, both the characters are unstable. \subsubsection{$\ell=1$ analysis} The $\ell=1$ MLDE takes the following form, \begin{align} \left[((E_4^{3+})^2+\mu\Delta_{3,8})\Tilde{D}^2 + (\mu_1E_4^{3+}E_6^{3+}+\mu_2\Delta_{3,10})\Tilde{D} + \mu_3 (E_4^{3+})^3 + \mu_4\frac{(\Delta_{3,8})^2}{E_4^{3+}} + \mu_5\Delta_{3,8}E_4^{3+}\right]\chi = 0. \label{mlde_F3_p2_l1} \end{align} \subsubsection{$\ell=2$ analysis} With the derivative operator defined in Eq.(\ref{deriv_op3}), let us now set up the (2,2) MLDE, \begin{align} \left(\Tilde{D}^2 + \mu_1\frac{E_6^{3+}}{E_4^{3+}}\Tilde{D} + \mu_2E_4^{3+} + \mu_3\frac{\Delta_{3,8}}{E_4^{3+}}\right)\chi = 0, \label{mlde22_3} \end{align} where $\chi$ is a weight 0 modular function for $\Gamma^{+}_0(3)$.
Now substituting the Fourier expansions of the relevant terms in Eq.(\ref{mlde22_3}) we get the following recursion relation, \begin{align} &\sum\limits_{k=0}^i\left[E_{4,k}^{3+}(\alpha^j+i-k)(\alpha^j+i-k-1) + E_{4,k}^{3+}(\alpha^j+i-k)-\frac{(E_{2}^{3+}E_{4}^{3+})_{,k}}{3}(\alpha^j+i-k)\right. \nonumber\\ &\left.+\mu_1 E_{6,k}^{3+} (\alpha^j+i-k) + \mu_2(E_4^{3+})^2_{,k} + \mu_3\Delta_{3,8,k}\right]a^j_{i-k} = 0, \label{recursion_l2F3_2char} \end{align} where the $a_i$s are Fourier coefficients of the character $q$-series expansion.\\ The indicial equation is obtained by setting $i=0$ and $k=0$ in Eq.(\ref{recursion_l2F3_2char}), \begin{align} \alpha^2 + \left(\mu_1-\frac{1}{3}\right)\alpha + \mu_2 = 0. \label{indicial_l2F3_2char} \end{align} Setting $\alpha=\alpha_0$ in Eq.(\ref{indicial_l2F3_2char}) and observing the Riemann-Roch we get, \begin{align} &\mu_1 = \frac{2}{3}, \label{param2_l2F3}\\ &\mu_2 = -\left(\alpha_0^2-\frac{\alpha_0}{3}\right), \label{param3_l2F3} \end{align} Now let us determine the value of $\mu_3$ by transforming the $\tau$-MLDE in Eq.(\ref{mlde22_3}) into the corresponding $j^{3+}$-MLDE (about $\tau=\rho_3$), \begin{align} \left[(j^{3+})^2\partial_{j^{3+}}^2 + \frac{j^{3+}}{6}\partial_{j^{3+}} + \frac{(j^{3+})^2}{2(j^{3+}-108)}\partial_{j^{3+}} + \frac{\mu_2 j^{3+}}{j^{3+}-108} + \frac{\mu_3}{j^{3+}-108}\right]\chi = 0, \label{j3+mlde_p2_l2} \end{align} where we have set $\mu_1=\frac{2}{3}$ from the $\ell=2$ indicial equation (see Eq.(\ref{indicial_l2F3_2char})). Now, note that about $\tau=\rho_3$, we have, $\chi_0\sim (j^{3+})^x$ and $j^{3+}\to 0$, with $x=\frac{2}{3}$ for $\ell=2$ (see $1$-character solutions). So, substituting $\chi=(j^{3+})^x$ about $j^{3+}=0$, the $j^{3+}$-indicial equation (which is basically, coefficient of $(j^{3+})^x$ $=\left.0\right|_{j^{3+}=0}$) reads, \begin{align} x^2-\frac{5}{6}x-\frac{\mu_3}{108} = 0, \label{jindip2l2F3} \end{align} which yields $\mu_3 = -12$ for $x=\frac{2}{3}$.
Similar analysis gives $(x,\mu_3)=(\frac{1}{3},-18)$ as a solution to Eq.(\ref{jindip2l2F3}).\\ For $\mu_3=-12$, we get from the $i=1$ equation: $m_1=9-N$ where $N=-12\alpha_0$. This gives the allowed range of central charge as, $0<c<10$ with $c\in 2\mathbb{Z}$. Scanning this space of $c$ we get the following admissible solutions. \begin{itemize} \item $(c,h)=\left(4,0\right)$: \begin{align} &\chi_0 = q^{-1/6}(1 + 7q + 8q^2 + 22q^3 + 42q^4 + 63q^5 + 106q^6 + 190q^7 + 267q^8 + 428q^9 + 652q^{10} + \mathcal{O}(q^{11})), \nonumber\\ &\chi_{\text{non-id}} = \chi_0 \end{align} \item $(c,h)=\left(6,\frac{1}{6}\right)$: \begin{align} \chi_0 &= q^{-1/4}(1 + 6q + 8q^3 + 18q^4 + 17q^6 + 54q^7 + \mathcal{O}(q^{9})), \nonumber\\ \chi_{\frac{1}{6}} &= q^{-1/12}(1 + 8q + 17q^2 + 46q^3 + 98q^4 + 198q^5 + 371q^6 + 692q^7 + 1205q^8 + \mathcal{O}(q^{9})) \end{align} \end{itemize} For $\mu_3=-18$, we get from the $i=1$ equation: $m_1=\frac{N^2-17N+108}{8-N}$ where $N=-12\alpha_0$. This gives the allowed range of central charge as, $0<c<16$ with $c\in 2\mathbb{Z}$. Scanning this space of $c$ we get the following admissible solutions.
\begin{itemize} \item $(c,h)=\left(4,0\right)$: \begin{align} &\chi_0 = q^{-1/6}(1 + 13q + 50q^2 + 76q^3 + 222q^4 + 405q^5 + 664q^6 + 1222q^7 + 2121q^8 + 3146q^9 + \mathcal{O}(q^{10})), \nonumber\\ &\chi_{\text{non-id}} = \chi_0 \end{align} \item $(c,h)=\left(8,\frac{1}{3}\right)$: \begin{align} &\chi_0 = q^{-1/3}(1 + 14q + 65q^2 + 156q^3 + 456q^4 + 1066q^5 + 2250q^6 + 4720q^7 + 9426q^8 + 17590q^9 + \mathcal{O}(q^{10})), \nonumber\\ &\chi_{\frac{1}{3}} = \text{unstable} \end{align} \item $(c,h)=\left(10,\frac{1}{2}\right)$: \begin{align} &\chi_0 = (j^{3+})^{\frac{1}{3}}\otimes \chi_0^{{}^{(3)}\mathcal{W}_2} \nonumber\\ &\chi_{\frac{1}{2}} = (j^{3+})^{\frac{1}{3}}\otimes \chi_{\frac{1}{2}}^{{}^{(3)}\mathcal{W}_2} \end{align} \item $(c,h)=\left(12,\frac{2}{3}\right)$: \begin{align} &\chi_0 = q^{-1/2}(1 + 21q + 171q^2 + 745q^3 + 2418q^4 + 7587q^5 + 20510q^6 + 51351q^7 + \mathcal{O}(q^{8})), \nonumber\\ &\chi_{\frac{2}{3}} = \text{unstable} \end{align} \end{itemize} \subsection{$j^{3+}$ analysis of $(2,0)$ solutions} With Eqs.(\ref{j1_2_F3}) and (\ref{j1_2_F3_1}) we can recast the MLDE in Eq.(\ref{mlde20_3}) as, \begin{align} \left[(j^{3+})^2\left(\frac{E_6^{3+}}{E_4^{3+}}\right)^2\partial^2_{j^{3+}} + \left(\frac{5}{6}j^{3+}\left(\frac{E_6^{3+}}{E_4^{3+}}\right)^2+\frac{j^{3+}E_4^{3+}}{2}\right)\partial_{j^{3+}} + \mu_1E_4^{3+}\right]\chi = 0. \label{j3+mlde} \end{align} Now let us divide Eq.(\ref{j3+mlde}) by $E_4^{3+}$ and then use Eq.(\ref{j-rel-e4e6_F3}) to get (about $\tau=\rho_3$), \begin{align} \left[j^{3+}\left(j^{3+}-108\right)\partial_{j^{3+}}^2 + \left(\frac{5(j^{3+}-108)}{6}+\frac{j^{3+}}{2}\right)\partial_{j^{3+}} + \mu_1\right]\chi = 0, \label{j3+mlde_sim} \end{align} Define $J := \frac{j^{3+}}{108}$. Then, Eq.(\ref{j3+mlde_sim}) becomes, \begin{align} \left[J(J-1)\partial_J^2 + \frac{1}{6}\left(8J - 5\right)\partial_J + \mu_1\right]\chi = 0.
\label{J3+mlde} \end{align} Eq.(\ref{J3+mlde}) is a hypergeometric differential equation whose solution is (about $\tau=\rho_3$), \begin{align} \chi(J) = a \, \, {}_2F_1\left(\alpha,\beta;\frac{5}{6};J\right) + b \, \, J^{1/6} {}_2F_1\left(\beta+\frac{1}{6},\alpha+\frac{1}{6};\frac{7}{6};J\right), \label{sonlF320} \end{align} where $\alpha+\beta=\frac{1}{3}$ and $\alpha\beta=\mu_1$. Furthermore, $a$ and $b$ are integration constants. \subsection{(3,0) MLDE} \section{$\mathbf{\Gamma^{+}_0(5)}$} \subsection{Riemann-Roch} The Riemann-Roch for $\Gamma^{+}_0(5)$ reads, \begin{align} \sum_{i=0}^{n-1} \, \alpha_i = \frac{n(n-1)}{4} - \frac{\ell}{2}, \label{RR5} \end{align} where $n$ is the number of linearly independent characters, the $\alpha_i$s are exponents and $\ell$ is the Wronskian index. \subsection{The operator} Consider the following transformations (where $\gamma\in\Gamma^{+}_0(5)$), \begin{align} &5E_2(5\gamma\tau) = 5(c\tau+d)^2E_2(5\tau) + \frac{6c}{i\pi}(c\tau+d), \label{E22tau5} \\ &E_2(\gamma\tau) = (c\tau+d)^2E_2(\tau) + \frac{6c}{i\pi}(c\tau+d), \label{E2tau5} \\ \text{leading to,} \, \, &E_2^{5+}(\gamma\tau) = (c\tau+d)^2E_2^{5+}(\tau) + \frac{2c}{i\pi}(c\tau+d) \label{E22+tau5}. \end{align} Eq.(\ref{E22+tau5}) motivates us to define the following derivative operator in the space of $\Gamma^{+}_0(5)$, acting on weight $k$ forms, \begin{align} \Tilde{D}_k := \Tilde{\partial}_\tau - \frac{k}{4}E_2^{5+}(\tau). \label{deriv_op5} \end{align} where $\Tilde{\partial}_\tau := \frac{1}{2i\pi}\partial_\tau = q\partial_q$. \\ One can check that for any modular form in $\Gamma^{+}_0(5)$, $\Tilde{D}_k$ maps $k$ forms to $k+2$ forms. Now if we construct the MLDE in the space of $\Gamma^{+}_0(5)$ with the above derivative operator then the MLDE will be consistent with the Riemann-Roch formula for $\Gamma^{+}_0(5)$.
However, if we use the usual \textit{Serre-Ramanujan} derivative to write down the MLDE in $\Gamma^{+}_0(5)$ then the resulting MLDE will not be consistent with the Riemann-Roch formula for $\Gamma^{+}_0(5)$.\\ We note below some interesting properties of the above derivative operator, \begin{align} &\left(\Tilde{\partial}_{\tau} - \frac{1}{4}E_2^{5+}\right)E_2^{5+} = -\frac{1}{4}E_4^{5+} + \frac{4}{13}\Delta_5, \label{De25} \\ &\Tilde{D}E_4^{5+} = -E_6^{5+}, \label{De45} \\ &\Tilde{\partial}_\tau \Delta_5 = E_2^{5+}\Delta_5, \label{DeDel5} \\ &\Tilde{D}\Tilde{E}_4^{5+} = -E_6^{5+}, \label{DetildeE45} \\ &\Tilde{D}E_6^{5+} = -\frac{3}{2}E_8^{5+} - \frac{3224}{2817}\Delta_5E_4^{5+} + \frac{29000}{2817}\Delta_5 \Tilde{E}_4^{5+}, \label{De65} \\ &\Tilde{D}E_6^{5+} = -\frac{3}{2}(E_4^{5+})^2 + \frac{464}{13}\Delta_5E_4^{5+} + \frac{20000}{169}\Delta_5^2, \label{De65_simp} \\ &\Tilde{D}E_6^{5+} = -\frac{3}{2}(\Tilde{E}_4^{5+})^2 + 44\Delta_5 \Tilde{E}_4^{5+} + 8\Delta_5^2, \label{De65_simp_E45p} \\ &\frac{(E_6^{5+})^2}{(\Tilde{E}_4^{5+})^3} = -\frac{16}{(j^{5+})^2} - \frac{44}{j^{5+}} + 1 = \frac{(j^{5+}-x_1)(j^{5+}-x_2)}{(j^{5+})^2}. \label{e63-e42} \end{align} where $\Delta_5(\tau)=(\eta(\tau)\eta(5\tau))^4$ and $j^{5+}=\frac{\Tilde{E}_4^{5+}}{\Delta_5}$. Also, $\Tilde{E}_4^{5+} := (E_{2,5^{'}})^2$. We also note, $\Delta_5=\frac{13}{36}(\Tilde{E}_4^{5+}- E_4^{5+})$ and $E_8^{5+} = (E_4^{5+})^2 + \frac{16000}{4069}\Delta_5 E_4^{5+} - \frac{88000}{4069}\Delta_5 \Tilde{E}_4^{5+}$. We used these equations to re-write Eq.(\ref{De65}) as Eq.(\ref{De65_simp}). Also, $x_1=2(11-5\sqrt{5})$ and $x_2=2(11+5\sqrt{5})$.\\ Now it is straightforward to show that, \begin{align} &\Tilde{\partial}_\tau = -j^{5+}\frac{E_6^{5+}}{\Tilde{E}_4^{5+}}\partial_{j^{5+}}, \label{j1_2_F5} \\ &\Tilde{D}^2 = (j^{5+})^2\left(\frac{E_6^{5+}}{\Tilde{E}_4^{5+}}\right)^2\partial_{j^{5+}}^2 + \left(\frac{3}{2}\Tilde{E}_4^{5+} - 44\Delta_5 - 8\frac{\Delta_5^2}{\Tilde{E}_4^{5+}}\right)j^{5+}\partial_{j^{5+}}.
\label{j1_2_F5_1} \end{align} \subsection{(2,0) MLDE} With the derivative operator defined in Eq.(\ref{deriv_op5}), let us now set up the (2,0) MLDE (note, $\Gamma^{+}_0(5)$ has a $2$-dimensional space of weight $4$ modular forms), \begin{align} \left(\Tilde{D}^2 + \mu_1E_4^{5+} + \mu_2\Delta_5\right)\chi = 0, \label{mlde20_5} \end{align} where $\chi$ is a weight 0 modular function for $\Gamma^{+}_0(5)$. Now substituting the Fourier expansions of the relevant terms in Eq.(\ref{mlde20_5}) we get the following recursion relation, \begin{align} &\left[(\alpha+i)^2 - \frac{1}{2}(\alpha+i) + \mu_1\right]a_i = \sum\limits_{k=1}^i\left[\frac{1}{2}(\alpha+i-k)E^{5+}_{2,k}-\mu_1 E_{4,k}^{5+} - \mu_2\Delta_{5,k}\right]a_{i-k}, \label{recursion05} \end{align} where the $a_i$s are Fourier coefficients of the character $q$-series expansion.\\ The indicial equation is obtained by setting $i=0$ and $k=0$ in Eq.(\ref{recursion05}), \begin{align} \alpha^2 - \frac{1}{2}\alpha + \mu_1 = 0. \label{indicial5} \end{align} We can try to do a $j^{5+}$ analysis to determine $\mu_2$. We can recast the MLDE in Eq.(\ref{mlde20_5}) in the $j^{5+}$ plane as follows, \begin{align} \partial^2_{j^{5+}}\chi + \frac{3}{2}\frac{\left(j^{5+}-y_1\right)\left(j^{5+}-y_2\right)}{j^{5+}(j^{5+}-x_1)(j^{5+}-x_2)}\partial_{j^{5+}}\chi + \frac{\mu_1 j^{5+} + \mu_2}{j^{5+}(j^{5+}-x_1)(j^{5+}-x_2)}\chi = 0, \label{j5+20_F5} \end{align} where $y_1=\frac{4}{3}(11-2\sqrt{31})$, $y_2=\frac{4}{3}(11+2\sqrt{31})$, $x_1=2(11-5\sqrt{5})$ and $x_2=2(11+5\sqrt{5})$.
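The quoted root data can be sanity-checked numerically. In the sketch below (plain Python), the relations for $x_{1,2}$ follow from Eq.(\ref{e63-e42}); the relations for $y_{1,2}$ are our derived assumption, obtained by substituting $\Delta_5=\Tilde{E}_4^{5+}/j^{5+}$ into the first-order coefficient of Eq.(\ref{j1_2_F5_1}):

```python
import math

s5, s31 = math.sqrt(5), math.sqrt(31)
x1, x2 = 2 * (11 - 5 * s5), 2 * (11 + 5 * s5)
y1, y2 = 4 * (11 - 2 * s31) / 3, 4 * (11 + 2 * s31) / 3

# x-data must reproduce 1 - 44/j - 16/j^2 = (j - x1)(j - x2)/j^2
assert abs((x1 + x2) - 44) < 1e-9
assert abs(x1 * x2 + 16) < 1e-9
# y-data: (3/2)(E4~ - (88/3) Delta - (16/3) Delta^2/E4~) corresponds to
# j^2 - (88/3) j - 16/3 = (j - y1)(j - y2)   [derived, not quoted in the text]
assert abs((y1 + y2) - 88 / 3) < 1e-9
assert abs(y1 * y2 + 16 / 3) < 1e-9
print("root data consistent")
```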
The above differential equation is a Heun differential equation with the general solution being, \begin{align} \chi &= a \, \text{HeunG}\left[-\frac{1}{2}\left(\frac{11x_2}{2}+2\right),\frac{\mu_2 x_2}{16},\frac{1}{4}(1-\sqrt{1-16\mu_1}),\frac{1}{4}(1+\sqrt{1-16\mu_1}),\frac{1}{2},\frac{1}{2},-\frac{j^{5+}x_2}{16}\right] \nonumber\\ &+ \, b \, \sqrt{\frac{j^{5+}}{x_1}}\text{HeunG}\left[-\frac{1}{2}\left(\frac{11x_2}{2}+2\right),\frac{(\mu_2-11) x_2}{16},\frac{1}{4}(3+\sqrt{1-16\mu_1}),\frac{1}{4}(3-\sqrt{1-16\mu_1}),\frac{3}{2},\frac{1}{2},-\frac{j^{5+}x_2}{16}\right]. \end{align} \section{\texorpdfstring{$\mathbf{\Gamma_{0}^{+}(3)}$}{Γ0(3)+}}\label{sec:Gamma_0_3+} \input{Fricke_3/Single_character_solutions.tex} \input{Fricke_3/Two_character_Gamma_0_3+.tex} \input{Fricke_3/Three_Char_Fricke3} \subsection{Two-character solutions} \noindent From the dimension of the space of modular forms in \ref{dimension_Fricke_3}, we obtain the following expression for the number of free parameters \begin{align}\label{number_of_parameters_Gamma_0_3+} \begin{split} \#(\mu) = \begin{cases} \left\lfloor\left.\frac{\ell}{3}\right\rfloor\right. + \left\lfloor\left.\frac{\ell + 1}{3}\right\rfloor\right. + \left\lfloor\left.\frac{\ell + 2}{3}\right\rfloor\right. + 2,\ &2\ell\not\equiv 2,6\ (\text{mod}\ 12),\ 2\ell>2\\ \left\lfloor\left.\frac{\ell}{3}\right\rfloor\right. + \left\lfloor\left.\frac{\ell + 1}{3}\right\rfloor\right. + \left\lfloor\left.\frac{\ell + 2}{3}\right\rfloor\right. -1,\ &2\ell\equiv 2, 6\ (\text{mod}\ 12),\ 2\ell>2. \end{cases} \end{split} \end{align} On the other hand, for the case of $k \equiv 2,6\ (\text{mod}\ 12), k\in2\mathbb{Z}$, we are to use the following expression \begin{align}\label{number_of_parameters_Gamma_0_3+_2,6mod12} \#(\mu) = \left\lfloor\left.\frac{\ell}{3}\right\rfloor\right. + \left\lfloor\left.\frac{\ell + 1}{3}\right\rfloor\right. + \left\lfloor\left.\frac{\ell + 2}{3}\right\rfloor\right. -1,\ 2\ell\equiv 2, 6\ (\text{mod}\ 12). 
\end{align} \subsubsection{\texorpdfstring{$\ell = 0$}{l = 0}} \noindent The space $\mathcal{M}_{2}(\Gamma_{0}^{+}(3))$ is zero-dimensional and there are no modular forms of weight $2$ in $\Gamma_{0}^{+}(3)$. Hence, we set $\phi_{1}(\tau) = 0$. The space $\mathcal{M}_{4}(\Gamma_{0}^{+}(3))$ is one-dimensional and hence, we set $\phi_{0}(\tau) = \mu_{1} E_{4}^{(3^{+})}(\tau)$. For these choices, the second-order MLDE takes the following form \begin{align}\label{MLDE_n=2_Gamma_0_3+} \left[\mathcal{D}^{2} + \mu_{1} E_{4}^{(3^{+})}(\tau)\right]\chi(\tau) = 0, \end{align} where $\mu_{1}$ is an independent parameter. Now, since the covariant derivative transforms a weight $r$ modular form into one of weight $r+2$, the double covariant derivative of a weight $0$ form is \begin{align} \begin{split} \mathcal{D}^{2} = \mathcal{D}_{(2)}\mathcal{D}_{(0)} =& \left(\frac{1}{2\pi i}\frac{d}{d\tau} - \frac{1}{3}E^{(3^{+})}_{2}(\tau)\right)\frac{1}{2\pi i}\frac{d}{d\tau}\\ =& \Tilde{\partial}^{2} - \frac{1}{3}E^{(3^{+})}_{2}(\tau)\Tilde{\partial}. \end{split} \end{align} The MLDE \ref{MLDE_n=2_Gamma_0_3+} now reads \begin{align}\label{MLDE_n=2_Gamma_0_3+_expanded} \left[\Tilde{\partial}^{2} - \frac{1}{3}E^{(3^{+})}_{2}(\tau)\Tilde{\partial} + \mu_{1} E_{4}^{(3^{+})}(\tau)\right]\chi(\tau) = 0.
\end{align} This equation can be solved by making the following mode expansion substitution for the character $\chi(\tau)$ and the other modular forms, \begin{align}\label{series_defs_Gamma_0_3+} \chi(\tau) =& q^{\alpha}\sum\limits_{n= 0}^{\infty}\chi_{n}q^{n},\\ E_{2}^{(3^{+})}(\tau) =& \sum\limits_{n=0}^{\infty}E_{2,n}^{(3^{+})}q^{n}\nonumber\\ =& 1 - 6 q - 18 q^2 - 42 q^3 - 42 q^4 + \ldots,\nonumber\\ E_{4}^{(3^{+})}(\tau) =& \sum\limits_{n=0}^{\infty}E_{4,n}^{(3^{+})}q^{n}\nonumber\\ =& 1 + 24 q + 216 q^2 + 888 q^3 + 1752 q^4 + \ldots \end{align} Substituting these expansions in the MLDE, we obtain \begin{align}\label{MLDE_mode_Gamma_0_3+} \sum\limits_{n=0}^{\infty}q^{n+\alpha}\left[(n+\alpha)^{2}\chi_{n} - \frac{1}{3}\sum\limits_{m=0}^{n}(n-m+\alpha)E^{(3^{+})}_{2,m}\chi_{n-m} + \mu\sum\limits_{m=0}^{n}E^{(3^{+})}_{4,m}\chi_{n-m}\right] = 0. \end{align} When $n=0,m=0$, with $E^{(3^{+})}_{2,0} = E^{(3^{+})}_{4,0} = 1$, we obtain the following indicial equation \begin{align} \alpha^{2} -\frac{1}{3} \alpha + \mu = 0. \end{align} Solving this indicial equation, we have \begin{align} \begin{split} \alpha_{0} =& \frac{1}{6}\left(1 - \sqrt{1 - 36\mu}\right) \equiv \frac{1}{6}(1 - x),\\ \alpha_{1} =& \frac{1}{6}\left(1 + \sqrt{1 - 36\mu}\right) \equiv \frac{1}{6}(1 + x), \end{split} \end{align} where we have set $x = \sqrt{1 - 36\mu}$. The smaller solution $\alpha_{0} = \tfrac{1}{6}(1-x)$ corresponds to the identity character which behaves as $\chi(\tau) \sim q^{\tfrac{1-x}{6}}\left(1 + \mathcal{O}(q)\right)$. We know that the identity character $\chi_{0}$, associated with a primary of weight $h_{0} = 0$, behaves as $\chi_{0}(\tau)\sim q^{-\tfrac{c}{24}}\left(1 + \mathcal{O}(q)\right)$. Comparing the two behaviours, we obtain the following expression for the central charge \begin{align}\label{central_charge_x_3+} c = 4(x-1).
\end{align} To find the conformal dimension $h$, we compare the behaviours with the larger solution for $\alpha$, i.e. $\chi(\tau)\sim q^{\tfrac{1 + x}{6}}\left(1 +\mathcal{O}(q)\right)$ and $\chi(\tau)\sim q^{-\tfrac{c}{24} + h}\left(1 + \mathcal{O}(q)\right)$. This gives us \begin{align} h = \frac{x}{3}. \end{align} Using the Cauchy product formulae, we obtain the following recurrence relation \begin{align}\label{recursion_l=0_Gamma_0_3+} \chi_{n} = \left((n + \alpha)^{2} - \frac{1}{3}(n + \alpha) + \mu\right)^{-1}\sum\limits_{m=1}^{n}\left(\frac{(n + \alpha - m)}{3}E^{(3^{+})}_{2,m} - \mu E^{(3^{+})}_{4,m}\right)\chi_{n-m}. \end{align} When $n = 1$, we solve for the ratio $\tfrac{\chi_{1}}{\chi_{0}}$ in terms of $\alpha$ with coefficients $E^{(3^{+})}_{2,1} = -6$ and $E_{4,1}^{(3^{+})} = 24$ to obtain \begin{align} m_{1}^{(i)} + 15\alpha_{i} + 3m_{1}^{(i)}\alpha_{i} - 36\alpha_{i}^{2} = 0, \end{align} for $ i = 0,1$ corresponding to ratios of $m^{(0)}_{1}$ and $m^{(1)}_{1}$ taken with respect to values $\alpha_{0}$ and $\alpha_{1}$ respectively. Restricting to $i=0$ and assuming that $\alpha_{0} = -\tfrac{c}{24}$, we can recast this equation by identifying $N=-12\alpha_0=\tfrac{c}{2}$, \begin{align} -4m_1 + 5N + m_1N + N^2 = 0, \label{N_eqn_Fricke3} \end{align} $m_1$ can now be expressed in terms of $N$ as, \begin{align} m_1 = \frac{N(N+5)}{4-N}, \label{m1_eqn_Fricke3} \end{align} which sets an upper bound on the central charge $c<8$. Furthermore, applying the integer root theorem to \ref{N_eqn_Fricke3}, we see that $c=2N$ and the central charge, in this case, is restricted to be an even integer. In terms of $c$, this reads \begin{align} m_{1} = \frac{c(c+10)}{2(8-c)}. \end{align} Dropping the subscript associated with the index $i$, we see that for $m_{1}\geq 0$, we require $c<8$. Rewriting this, we have \begin{align} c^{2} + c(2m_{1} + 10) = 16m_{1}. \end{align} This tells us that for $(n,\ell) = (2,0)$ in $\Gamma_{0}^{+}(3)$, we have $c\in2\mathbb{Z}$ and $c<8$.
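Both constraints can be verified by brute force. Inverting $m_{1}=\tfrac{N(N+5)}{4-N}$ with $N=\tfrac{c}{2}$ gives $N^{2}+(m_{1}+5)N-4m_{1}=0$, so $c=2N=-(m_{1}+5)+\sqrt{m_{1}^{2}+26m_{1}+25}$ must be a non-negative even integer. A minimal sketch in plain Python scanning $m_{1}$:

```python
from math import isqrt

# c = -(m1+5) + sqrt(m1^2 + 26*m1 + 25) must be a non-negative even integer;
# scan m1 and keep the values where the discriminant is a perfect square
pairs = []
for m1 in range(0, 500):
    disc = m1 * m1 + 26 * m1 + 25
    r = isqrt(disc)
    if r * r == disc:
        c = r - m1 - 5
        if c >= 0 and c % 2 == 0:
            pairs.append((m1, c))

print(pairs)  # [(0, 0), (2, 2), (7, 4), (24, 6)]
```

The scan (which is exhaustive here, since $(m_{1}+13)^{2}-144$ is a perfect square for only finitely many $m_{1}$) recovers exactly the $(m_{1},c)$ pairs of the table below.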
For $c$ to be rational, we demand the discriminant, $\sqrt{25 + 26m_{1} + m_{1}^{2}}$, to be rational. We further demand that the square root is an integer and this gives us \begin{align}\label{m_1_3+} (m_{1} + 13)^{2} - 144 = k^{2}, \end{align} where $k\in\mathbb{Z}$. We now set $p$ to be \begin{align} p = 13 + m_{1} - k, \end{align} and we recast \ref{m_1_3+} as follows \begin{align} m_{1} + 13 = \frac{144 + p^{2}}{2p} = \frac{72}{p} + \frac{p}{2}. \end{align} Restricting $k$ to be positive, we see that $p<13$. We conclude that all possible values of $m_{1}$ are found by those values of $p$ below $13$ that divide $72$ and are even. The list of these values reads $p = \{2,4,6,8,12\}$. We ignore the case corresponding to $p = 12$ since $m_{1} = -1$. Table \ref{tab:theory_Fricke_3+} contains CFT data. \begin{table}[htb!] \centering \begin{tabular}{||c|c|c|c|c||} \hline $p$ & 2 & 4 & 6 & 8\\ [0.5ex] \hline\hline $\mu_{1}$ & $-\tfrac{7}{48}$ & $-\tfrac{1}{12}$ & $-\tfrac{5}{144}$ & $0$\\[0.5ex] \hline\hline $m_{1}$ & 24 & 7 & 2 & 0\\[0.5ex] \hline\hline $c$ & 6 & 4 & 2 & 0\\[0.5ex] \hline\hline $h$ & $\tfrac{5}{6}$ & $\tfrac{2}{3}$ & $\tfrac{1}{2}$ & $\tfrac{1}{3}$\\[1ex] \hline \end{tabular} \caption{$c$ and $h$ data corresponding to the Fricke group $\Gamma_{0}^{+}(3)$ for the choice $\phi_{0} = \mu_{1} E^{(3^{+})}_{4}(\tau)$ with $\ell = 0$.} \label{tab:theory_Fricke_3+} \end{table} \noindent Using recursion relation \ref{recursion_l=0_Gamma_0_3+}, when $n = 2$, we obtain \begin{align} m_{2}^{(i)} \equiv \frac{\chi_{2}^{(i)}}{\chi_{0}^{(i)}} = \frac{9\alpha_{i}(36\alpha_{i} -13)}{6\alpha_{i} + 5} + \frac{3m_{1}^{(i)}(-1 + \alpha_{i}(-5 + 12\alpha_{i}))}{6\alpha_{i} + 5}. \end{align} We discard the case $p = 8$ since further computations show that all coefficients $m_{i}$ are null valued thus making this a trivial solution. The values of $m_{2}$ for $i = 0$ are tabulated in table \ref{tab:theory_Fricke_3+_m2}. \begin{table}[htb!]
\centering \begin{tabular}{||c|c|c|c||} \hline $p$ & 2 & 4 & 6\\[0.5ex] \hline\hline $c$ & 6 & 4 & 2\\[0.5ex] \hline\hline $m_{2}$ & $\tfrac{243}{7}$& 8 & 2\\[1ex] \hline \end{tabular} \caption{Values of $m_{2}$ corresponding to the Fricke group $\Gamma_{0}^{+}(3)$ for the choice $\phi_{0} = \mu_{1} E^{(3^{+})}_{4}(\tau)$ with $\ell = 0$.} \label{tab:theory_Fricke_3+_m2} \end{table} \noindent In a similar fashion, computing coefficients for $i = 0,1$, we obtain the coefficients of the two characters. Checking to higher order (up to $\mathcal{O}(q^{5000})$) for both characters, for $0<c<8$ (even $c$), we find the admissible character solution shown below. \begin{itemize} \item $(c,h)=\left(2,\frac{1}{2}\right)$: \begin{align}\label{c=2_Fricke_3} \begin{split} \chi_0 =& q^{-\tfrac{1}{12}}(1 + 2q + 2q^2 + 4q^3 + 5q^4 + 6q^5 + 11q^6 + 14q^7 + 17q^8 + 24q^9 + 31q^{10} + \mathcal{O}(q^{11})),\\ \chi_{\frac{1}{2}} =& q^{\tfrac{5}{12}}(1 + q^2 + 2q^3 + 2q^4 + 2q^5 + 5q^6 + 4q^7 + 7q^8 + 10q^9 + 11q^{10} + \mathcal{O}(q^{11})). \end{split} \end{align} \end{itemize} \noindent We also find an \textit{identity only} admissible solution, which only has a nice identity character but an unstable non-trivial character. This is found at $(c,h)=\left(4,\frac{2}{3}\right)$. \begin{align}\label{18b_character_1} \begin{split} \chi_0 = q^{-\tfrac{1}{6}}(1 +& 7q + 8q^2 + 22q^3 + 42q^4 + 63q^5 + 106q^6 + 190q^7 + 267q^8\\ +& 428q^9 + 652q^{10} + \mathcal{O}(q^{11}))\\ =& j^{\tfrac{1}{6}}_{3^{+}}. \end{split} \end{align} At central charge $c=6$, both characters are unstable.
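The $q$-expansion above can be reproduced by iterating recursion \ref{recursion_l=0_Gamma_0_3+} directly. A minimal sketch in plain Python (exact rational arithmetic) for the $(c,h)=(2,\tfrac{1}{2})$ identity character, i.e. $\alpha_{0}=-\tfrac{1}{12}$, using the Eisenstein coefficients quoted in \ref{series_defs_Gamma_0_3+}:

```python
from fractions import Fraction as F

# q-expansion coefficients quoted in the text, up to q^4
E2 = [F(1), F(-6), F(-18), F(-42), F(-42)]   # E_2^{(3+)}
E4 = [F(1), F(24), F(216), F(888), F(1752)]  # E_4^{(3+)}

alpha = F(-1, 12)              # alpha_0 = -c/24 at c = 2
mu = alpha / 3 - alpha ** 2    # indicial equation: alpha^2 - alpha/3 + mu = 0

chi = [F(1)]                   # identity character normalised to chi_0 = 1
for n in range(1, 5):
    rhs = sum(((n + alpha - m) / 3 * E2[m] - mu * E4[m]) * chi[n - m]
              for m in range(1, n + 1))
    den = (n + alpha) ** 2 - (n + alpha) / 3 + mu
    chi.append(rhs / den)

print([int(x) for x in chi])  # [1, 2, 2, 4, 5]
```

The output matches the first coefficients of $\chi_{0}$ in \ref{c=2_Fricke_3}; extending the Eisenstein expansions extends the check to higher orders.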
We here note that the identity character in \ref{c=2_Fricke_3} can be expressed in terms of (semi-)modular forms, which we shall denote by $\mathcal{S}^{0}_{k}(\Gamma)$, of the Hecke group $\Gamma_{0}(3)$ as follows \begin{align} \begin{split} \chi_{0}^{(c=2)} = \left(\frac{\Delta_{3}^{0}(2\tau)}{\left(\Delta_{3}^{0}\right)^{\tfrac{1}{2}}(\tau)}\right)^{\tfrac{1}{3}}\left(\frac{1}{\Delta_{3}(\tau)}\right)^{\tfrac{1}{12}},\ \ \ \Delta_{3}^{0}(\tau) = \frac{\eta^{9}(\tau)}{\eta^{3}(3\tau)}\in\mathcal{S}^{0}_{3}(\Gamma_{0}(3)), \end{split} \end{align} where $\Delta_{3}^{0}(\tau)$ is the $2^{\text{nd}}$ semi-modular form of weight $3$ in $\Gamma_{0}(3)$ \cite{Junichi}. \subsubsection{\texorpdfstring{$\ell=2$}{l=2}} \noindent The (2,2) MLDE takes the following form \begin{align} \left[\mathcal{D}^2 + \mu_1\frac{E_6^{(3+)}}{E_4^{(3+)}}\mathcal{D} + \mu_2E_4^{(3+)} + \mu_3\frac{\Delta_{3^{+},8}}{E_4^{(3+)}}\right]\chi(\tau) = 0, \label{mlde22_3} \end{align} where $\chi$ is a weight 0 modular function for $\Gamma^{+}_0(3)$. Substituting the Fourier expansions of the relevant terms in \ref{mlde22_3}, we obtain the following recursion relation \begin{align} &\sum\limits_{k=0}^i\left[E_{4,k}^{(3+)}(\alpha^j+i-k)(\alpha^j+i-k-1) + E_{4,k}^{(3+)}(\alpha^j+i-k)-\frac{(E_{2}^{(3+)}E_{4}^{(3+)})_{,k}}{3}(\alpha^j+i-k)\right. \nonumber\\ &\left.+\mu_1 E_{6,k}^{(3+)} (\alpha^j+i-k) + \mu_2(E_4^{(3+)})^2_{,k} + \mu_3\Delta_{3^{+},8,k}\right]a^j_{i-k} = 0, \label{recursion_l2F3_2char} \end{align} where the $a_i$s are Fourier coefficients of the character $q$-series expansion. Setting $i=0$ and $k=0$ in \ref{recursion_l2F3_2char}, we obtain the following indicial equation \begin{align} \alpha^2 + \left(\mu_1-\frac{1}{3}\right)\alpha + \mu_2 = 0. \label{indicial_l2F3_2char} \end{align} Setting $\alpha=\alpha_0$ in \ref{indicial_l2F3_2char} and observing Riemann-Roch we get, \begin{align} &\mu_1 = \frac{2}{3}, \label{param2_l2F3}\\ &\mu_2 = -\left(\alpha_0^2-\frac{\alpha_0}{3}\right).
\label{param3_l2F3} \end{align} The value of $\mu_3$ can now be determined by transforming the $\tau$-MLDE in \ref{mlde22_3} into the corresponding $j_{3^{+}}$-MLDE (about $\tau=\rho_3$), \begin{align} \left[(j_{3^{+}})^2\partial_{j_{3^{+}}}^2 + \frac{j_{3^{+}}}{6}\partial_{j_{3^{+}}} + \frac{(j_{3^{+}})^2}{2(j_{3^{+}}-108)}\partial_{j_{3^{+}}} + \frac{\mu_2 j_{3^{+}}}{j_{3^{+}}-108} + \frac{\mu_3}{j_{3^{+}}-108}\right]\chi = 0, \label{j3+mlde_p2_l2} \end{align} where we have set $\mu_1=\frac{2}{3}$ from the $\ell=2$ indicial equation (see \ref{indicial_l2F3_2char})). Now, note that about $\tau=\rho_3$, we have, $\chi_0\sim j_{3^{+}}^x$ and $j_{3^{+}}\to 0$, with $x=\frac{2}{3}$ for $\ell=2$ (see single-character solutions). Hence, substituting $\chi=j_{3^{+}}^x$ about $j_{3^{+}}=0$, the $j_{3^{+}}$-indicial equation (which is basically, coefficient of $j_{3^{+}}^x$ $=\left.0\right|_{j_{3^{+}}=0}$) reads, \begin{align} x^2-\frac{5}{6}x-\frac{\mu_3}{108} = 0, \label{jindip2l2F3} \end{align} which yields $\mu_3 = -12$ for $x=\frac{2}{3}$. Similar analysis gives $(x,\mu_3)=(\frac{1}{3},-18)$ as a solution to \ref{jindip2l2F3}. For $\mu_3=-12$, we get from the $i=1$ equation: $m_1=9-N$ where $N=-12\alpha_0$. This gives the allowed range of central charge as, $0<c<10$ with $c\in 2\mathbb{Z}$. Scanning this space of $c$ we get the following admissible solutions.
\begin{itemize} \item $(c,h)=\left(4,0\right)$: \begin{align}\label{18b_character_2} \begin{split} \chi_0 =& q^{-\tfrac{1}{6}}(1 + 7q + 8q^2 + 22q^3 + 42q^4 + 63q^5 + 106q^6 + 190q^7\\ &{}\ \ \ \ \ \ \ \ + 267q^8 + 428q^9 + 652q^{10} + \mathcal{O}(q^{11}))\\ =& j_{3^{+}}^{\tfrac{1}{6}},\\ \chi_{\text{non-id}} =& \chi_0 \end{split} \end{align} \item $(c,h)=\left(6,\frac{1}{6}\right)$: \begin{align} \begin{split} \chi_0 &= q^{-\tfrac{1}{4}}(1 + 6q + 8q^3 + 18q^4 + 17q^6 + 54q^7 + \mathcal{O}(q^{9})),\\ \chi_{\frac{1}{6}} &= q^{-\tfrac{1}{12}}(1 + 8q + 17q^2 + 46q^3 + 98q^4 + 198q^5 + 371q^6 + 692q^7 + 1205q^8 + \mathcal{O}(q^{9})). \end{split} \end{align} \end{itemize} \noindent For $\mu_3=-18$, we get from the $i=1$ equation: $m_1=\frac{N^2-17N+108}{8-N}$ where $N=-12\alpha_0$. This gives the allowed range of central charge as, $0<c<16$ with $c\in 2\mathbb{Z}$. Scanning this space of $c$ we get the following admissible solutions. \begin{itemize} \item $(c,h)=\left(4,0\right)$: \begin{align} \begin{split} \chi_0 =& q^{-\tfrac{1}{6}}(1 + 13q + 50q^2 + 76q^3 + 222q^4 + 405q^5 + 664q^6 + 1222q^7\\ &{}\ \ \ \ \ \ \ \ + 2121q^8 + 3146q^9 + \mathcal{O}(q^{10})),\\ \chi_{\text{non-id}} =& \chi_0 \end{split} \end{align} \item $(c,h)=\left(8,\frac{1}{3}\right)$: \begin{align}\label{9a_character_1} \begin{split} \chi_0 =& q^{-\tfrac{1}{3}}(1 + 14q + 65q^2 + 156q^3 + 456q^4 + 1066q^5 + 2250q^6 + 4720q^7\\ &{}\ \ \ \ \ \ \ \ \ + 9426q^8 + 17590q^9 + \mathcal{O}(q^{10})),\\ =& j_{3^{+}}^{\tfrac{1}{3}},\\ \chi_{\frac{1}{3}} =& \text{unstable}.
\end{split} \end{align} \item $(c,h)=\left(10,\frac{1}{2}\right)$: \begin{align} &\chi_0 = j_{3^{+}}^{\tfrac{1}{3}}\otimes \chi_0^{{}^{(3)}\mathcal{W}_2} \nonumber\\ &\chi_{\frac{1}{2}} = j_{3^{+}}^{\tfrac{1}{3}}\otimes \chi_{\frac{1}{2}}^{{}^{(3)}\mathcal{W}_2} \end{align} \item $(c,h)=\left(12,\frac{2}{3}\right)$: \begin{align}\label{6b_character_1} \begin{split} \chi_0 =& q^{-\tfrac{1}{2}}(1 + 21q + 171q^2 + 745q^3 + 2418q^4 + 7587q^5 + 20510q^6 + 51351q^7 + \mathcal{O}(q^{8}))\\ =& j_{3^{+}}^{\tfrac{1}{2}},\\ \chi_{\frac{2}{3}} =& \text{unstable} \end{split} \end{align} \end{itemize} We note that the identity character in \ref{18b_character_2}, and also \ref{18b_character_1}, turns out to be the McKay-Thompson series of the $18b$ conjugacy class of $\mathbb{M}$ (OEIS sequence A058537 \cite{A058537}), the identity character in \ref{9a_character_1} is the single character solution we found in \ref{9a_character_OG} which is nothing but the McKay-Thompson series of the $9a$ conjugacy class of $\mathbb{M}$ (OEIS sequence A058092 \cite{A058092}), and the identity character in \ref{6b_character_1} turns out to be the McKay-Thompson series of the $6b$ conjugacy class of $\mathbb{M}$ (OEIS sequence A007261 \cite{A007261}). \subsection{Single character solutions} \noindent From \cite{Umasankar:2022kzs}, we find the following form for the character in a $n = 1$ theory \begin{align} \begin{split} &\chi(\tau) = j_{3^{+}}^{w_{\rho}}(\tau),\\ &c = 24w_{\rho} = 8\ell, \end{split} \end{align} where $w_{\rho}\in\left\{0,\tfrac{4}{3},2\right\}$ corresponding to the three characters at $\ell = 0$, $\ell = 4$, and $\ell = 6$ respectively. Here, we will present the admissible solutions at $\ell$ values all the way to the \textit{bulk-pole}. \subsubsection{\texorpdfstring{$\ell = 1$}{l=1}} \noindent For $\ell=1$, the RHS of Valence Formula reads $\frac{1}{3}$ (since $k=2\ell$). 
The MLDE takes the following form, \begin{align} \left[\mathcal{D} + \mu_1\frac{E_4^{(3+)}E_6^{(3+)}}{\left(E_4^{(3+)}\right)^2+\mu\Delta_{3^{+},8}} + \mu_2\frac{\Delta_{3^{+},10}}{(E_4^{(3+)})^2+\mu\Delta_{3^{+},8}}\right]\chi(\tau) = 0. \label{F3_l1_p1} \end{align} Using the indicial equation we can set $\mu_1=\frac{1}{3}$. Consider now the ansatz $\chi=j_{3^{+}}^{\tfrac{1}{3}}$. One can readily check that this is an admissible central charge $c = 8$ solution with $\mu_2=\frac{\mu}{3}$. The $q$-series expansion of this character reads \begin{align}\label{9a_character_OG} \begin{split} \chi = q^{-\tfrac{1}{3}}(1 +&14 q + 65 q^2 + 156 q^3 + 456 q^4 + 1066 q^5 + 2250 q^6 + 4720 q^7\\ +& 9426 q^8 + 17590 q^9 + \mathcal{O}(q^{10})), \end{split} \end{align} which turns out to be the McKay-Thompson series of the $9a$ conjugacy class of $\mathbb{M}$ (OEIS sequence A058092 \cite{A058092}). \subsubsection{\texorpdfstring{$\ell = 2$}{l=2}} \noindent The MLDE takes the following form, \begin{align} &\left[\mathcal{D}+\mu_1\frac{E_6^{(3+)}}{E_4^{(3+)}}\right]\chi(\tau) = 0, \label{1char_F3_l2} \\ &\left[E_4^{(3+)}\mathcal{D}+\mu_1E_6^{(3+)}\right]\chi(\tau) = 0. \label{1char_F3_l2_aliter} \end{align} The indicial equation ($\alpha_0+\mu_1=0$) dictates $\mu_1=\frac{2}{3}$. Consider the ansatz, $\chi=j_{3^{+}}^{\tfrac{2}{3}}$. Then, $\Tilde{\partial}_\tau j_{3^{+}}^{\tfrac{2}{3}} = -\frac{2}{3}j_{3^{+}}^{\tfrac{2}{3}}\frac{E_6^{(3+)}}{E_4^{(3+)}}$. Note that this $\chi$ indeed satisfies \ref{1char_F3_l2} with $\mu_1=\frac{2}{3}$. \subsubsection{\texorpdfstring{$\ell = 3$}{l=3}} \noindent From Riemann-Roch, we notice that this is the \textit{bulk-pole} MLDE which takes the following form \begin{align} &\left[\mathcal{D}+\mu_1\frac{\left(E_4^{(3+)}\right)^2}{E_6^{(3+)}} + \mu_2\frac{\Delta_{3^{+},8}}{E_6^{(3+)}}\right]\chi(\tau) = 0, \label{1char_F3_l3} \\ &\left[E_6^{(3+)}\mathcal{D}+\mu_1\left(E_4^{(3+)}\right)^2 + \mu_2\Delta_{3^{+},8}\right]\chi(\tau) = 0.
\label{1char_F3_l3_aliter} \end{align} Note that in the above $\mathcal{D}=\theta_{q}$. Also, the indicial equation ($\alpha_0+\mu_1=0$) dictates $\mu_1=1$.\\ Consider now the ansatz $\chi=j_{3^{+}}+\mathcal{N}$. Substituting this into the ODE \ref{1char_F3_l3_aliter} (writing $b\equiv\mu_{2}$ and using $\mu_{1}=1$) yields \begin{align} &\left(E_6^{(3+)}\mathcal{D}+\mu_{1}\left(E_4^{(3+)}\right)^2 + \mu_{2}\Delta_{3^{+},8}\right)(j_{3^{+}}+\mathcal{N}) = \Delta_{3^{+},8}\left[(108+b+\mathcal{N})j_{3^{+}}+\mathcal{N}b\right] \end{align} So, we see that $j_{3^{+}}+\mathcal{N}$ is a solution to the above differential equation if and only if \begin{align} &b=0 \, \, \, \text{and} \, \, \, \mathcal{N} = -108 \longrightarrow \text{soln.:} \, \, j_{3^{+}}-108, \label{sol1} \\ &b=-108 \, \, \, \text{and} \, \, \, \mathcal{N} = 0 \longrightarrow \text{soln.:} \, \, j_{3^{+}}. \label{sol2} \end{align} Note that $j_{3^{+}}-108$ has $m_1<0$ and hence is not an admissible solution. So, the only admissible solution is $j_{3^{+}}$, which happens when $b=-108$ and $\mathcal{N}=0$. Thus, unlike the $\text{SL}(2,\mathbb{Z})$ or the Fricke level $2$ case, here we do not get $j_{3^{+}}+\mathcal{N}$ as a solution to the \textit{bulk-pole} MLDE. This probably happens for the following reason. In the \textit{bulk-pole} MLDEs of both $\text{SL}(2,\mathbb{Z})$ and $\Gamma^{+}_0(2)$, which appear at $\ell=6$ and $\ell=4$ respectively, the \textit{bulk-pole} shows up in the denominator ($D^r = E_4^3+\mu\Delta$ and $D^r=\left(E_4^{(2+)}\right)^2+\mu\Delta_2$ respectively), while in $\Gamma^{+}_0(3)$ the \textit{bulk-pole} appears in the numerator ($N^r=\left(E_4^{(3+)}\right)^2+\mu\Delta_{3^{+},8}$). \subsection{Three-character solutions} The $(3,0)$ MLDE reads, \begin{align} \left(\mathcal{D}^3 + \mu_1 E_4^{(3+)}\mathcal{D} + \mu_2 E_6^{(3+)}\right)\chi = 0.
\label{30charMLDE_F3} \end{align} \noindent The recursion relation corresponding to the above $(3,0)$ MLDE reads \begin{align} &\left[(\alpha^j+i)(\alpha^j+i-1)(\alpha^j+i-2) + 3(\alpha^j+i)(\alpha^j+i-1) + (\alpha^j+i)\right]a^j_i \nonumber\\ &+ \sum\limits_{k=0}^i\left[-E_{2,k}^{3+}a^j_{i-k}(\alpha^j+i-k)(\alpha^j+i-k-1) - E_{2,k}^{3+}a^j_{i-k}(\alpha^j+i-k)\right. \nonumber\\ &\left.+ \frac{1}{18}E_{4,k}^{3+}a^j_{i-k}(\alpha^j+i-k) + \frac{1}{6}(E_{2}^{3+})^2_{,k}a^j_{i-k}(\alpha^j+i-k) + \mu_1E_{4,k}^{3+}a^j_{i-k}(\alpha^j+i-k) + \mu_2E_{6,k}^{3+}a^j_{i-k}\right] = 0. \label{recursion3_3} \end{align} From this, the indicial equation reads \begin{align} \alpha^3 - \alpha^2 + \left(\mu_1+\frac{2}{9}\right)\alpha + \mu_2 = 0. \label{indicial3_3} \end{align} We also obtain the constraint \begin{align} &4 m_1 m_2 + 108 m_1 \alpha_0 - 18 m_1^2 \alpha_0 + 180 m_2 \alpha_0 + 6 m_1 m_2 \alpha_0 + 3348 \alpha_0^2 - 594 m_1 \alpha_0^2 \nonumber\\ &- 126 m_1^2 \alpha_0^2 + 360 m_2 \alpha_0^2 - 14472 \alpha_0^3 - 2268 m_1 \alpha_0^3 - 108 m_1^2 \alpha_0^3 + 216 m_2 \alpha_0^3 + 10368 \alpha_0^4 = 0. \label{al1} \end{align} The Diophantine equation, obtained by multiplying \ref{al1} by $16^3 \times 2$ and then identifying $N=-96\alpha_0$, reads \begin{align} &32768 m_1 m_2 - 9216 m_1 N + 1536 m_1^2 N - 15360 m_2 N \nonumber\\ &- 512 m_1 m_2 N + 2976 N^2 - 528 m_1 N^2 - 112 m_1^2 N^2 + 320 m_2 N^2 + 134 N^3 \nonumber\\ &+ 21 m_1 N^3 + m_1^2 N^3 - 2 m_2 N^3 + N^4 = 0. \label{Diop1_3_3} \end{align} By the integer root theorem, $N=-96\alpha_0$ above should be an integer and hence $N=4c\in\mathbb{Z}$. Next, we have that the discriminant of the cubic indicial equation should be a perfect square, say $k^2$ with $k\in\mathbb{Z}$. Thus we have, \begin{align} &262144 m_1^2 - 163840 m_1 N - 16384 m_1^2 N + 21504 N^2 \nonumber\\ &+ 8192 m_1 N^2 + 256 m_1^2 N^2 - 448 N^3 - 96 m_1 N^3 - 7 N^4 = k^2.
\label{Diop2_3_3} \end{align} The three roots of the indicial equation read, \begin{align} &\alpha_0 = -\frac{N}{96}, \label{root1_30_F3} \\ &\alpha_1 = \frac{1536 m_1 - 672 N + 16 m_1 N - 7 N^2 - 3k}{192(16 m_1 - 7 N)}, \label{roo2_30_F3} \\ &\alpha_2 = \frac{1536 m_1 - 672 N + 16 m_1 N - 7 N^2 + 3k}{192(16 m_1 - 7 N)}. \label{roo3_30_F3} \end{align} Now solving \ref{Diop1_3_3} and \ref{Diop2_3_3}, for the range $1\leq N\leq 96$, yields, \begin{itemize} \item $(c,h_1,h_2)=\left(1,\frac{7}{8},\frac{1}{4}\right)$: \begin{align} \chi_0 &= q^{-\tfrac{1}{24}}(1 + q^2 + q^3 + q^4 + q^5 + 2q^6 + q^7 + 2q^8 + 3q^9 + 3q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{\frac{7}{8}} &= \text{unstable}, \nonumber\\ \chi_{\frac{1}{4}} &= q^{\tfrac{5}{24}}(1 + q + q^3 + q^4 + q^5 + 2q^6 + 2q^7 + 2q^8 + 3q^9 + 3q^{10} + \mathcal{O}(q^{11})). \end{align} The $q$-series expansions of the identity and the second descendant of this theory can be expressed in terms of ratios of Ramanujan theta series following the sequences in \cite{A097242} (OEIS sequence A097242) and \cite{A328796} (OEIS sequence A328796) respectively. \item $(c,h_1,h_2)=\left(2,\frac{3}{4},\frac{1}{2}\right)$: \begin{align} \chi_0 &= q^{-\tfrac{1}{12}}(1 + 2q + 2q^2 + 4q^3 + 5q^4 + 6q^5 + 11q^6 + 14q^7 + 17q^8 + 24q^9 + 31q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{\frac{3}{4}} &= \text{unstable}, \nonumber\\ \chi_{\frac{1}{2}} &= q^{\tfrac{5}{12}}(1 + q^2 + 2q^3 + 2q^4 + 2q^5 + 5q^6 + 4q^7 + 7q^8 + 10q^9 + 11q^{10} + \mathcal{O}(q^{11})). \end{align} This is a $(2,0)$ solution found in \ref{c=2_Fricke_3} that makes a reappearance here. 
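This reappearance can be cross-checked in exact arithmetic: reading off $m_1 = m_2 = 2$ from the identity character and setting $N = 4c = 8$, equations \ref{Diop1_3_3} and \ref{Diop2_3_3} are satisfied with $k = 192$, and the three root formulas above return the expected exponents. A minimal sketch (helper names are ours):

```python
from fractions import Fraction
from math import isqrt

def diop1(m1, m2, N):
    # left-hand side of the first Diophantine constraint
    return (32768*m1*m2 - 9216*m1*N + 1536*m1**2*N - 15360*m2*N
            - 512*m1*m2*N + 2976*N**2 - 528*m1*N**2 - 112*m1**2*N**2
            + 320*m2*N**2 + 134*N**3 + 21*m1*N**3 + m1**2*N**3
            - 2*m2*N**3 + N**4)

def diop2(m1, N):
    # quantity that must be a perfect square k^2
    return (262144*m1**2 - 163840*m1*N - 16384*m1**2*N + 21504*N**2
            + 8192*m1*N**2 + 256*m1**2*N**2 - 448*N**3 - 96*m1*N**3 - 7*N**4)

m1, m2, N = 2, 2, 8              # c = 2 solution: chi_0 = 1 + 2q + 2q^2 + ...
assert diop1(m1, m2, N) == 0
k = isqrt(diop2(m1, N))
assert k*k == diop2(m1, N)       # perfect square, here k = 192

den = 192*(16*m1 - 7*N)
alpha0 = Fraction(-N, 96)
alpha1 = Fraction(1536*m1 - 672*N + 16*m1*N - 7*N**2 - 3*k, den)
alpha2 = Fraction(1536*m1 - 672*N + 16*m1*N - 7*N**2 + 3*k, den)
print(alpha0, alpha1, alpha2)    # -1/12 2/3 5/12, i.e. (c, h1, h2) = (2, 3/4, 1/2)
```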
\item $(c,h_1,h_2)=\left(4,1,\frac{1}{2}\right)$: \begin{align} \chi_0 &= q^{-\tfrac{1}{6}}(1 + m_1q + m_2q^2 + \mathcal{O}(q^{3})), \nonumber\\ \chi_{1} &= q^{\tfrac{5}{6}}(1 + 2q^2 + 4q^3 + 5q^4 + 8q^5 + 18q^6 + 20q^7\nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ + 36q^8 + 56q^9 + 76q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{\frac{1}{2}} &= q^{\tfrac{1}{3}}(1 + 2q + 3q^2 + 8q^3 + 13q^4 + 20q^5 + 37q^6 + 56q^7\nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ + 83q^8 + 134q^9 + 196q^{10} + \mathcal{O}(q^{11})), \end{align} where, for the above identity character, we have $m_1\in\mathbb{N}\cup \{0\}$, $m_2=8$, and $k=1792-256 m_1$ for $m_1< 7$ and $k=-1792 + 256 m_1$ for $m_1\geq 8$. \item $(c,h_1,h_2)=\left(6,\frac{5}{4},\frac{1}{2}\right)$: \begin{align} \chi_0 &= q^{-\tfrac{1}{4}}(1 + 9q + 24q^2 + 56q^3 + 135q^4 + 264q^5 + 497q^6 + 945q^7\nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ + 1656q^8 + 2830q^9 + 4815q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{\frac{5}{4}} &= \text{unstable}, \nonumber\\ \chi_{\frac{1}{2}} &= q^{\tfrac{1}{4}}(1 + 7q + 9q^2 + 31q^3 + 66q^4 + 117q^5 + 227q^6 + 436q^7 \nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ + 702q^8 + 1241q^9 + 2072q^{10} + \mathcal{O}(q^{11})). \end{align} This is a new two-character solution that we did not find in our analysis with $(2,\ell)$ MLDEs. \item $(c,h_1,h_2)=\left(7,\frac{9}{8},\frac{3}{4}\right)$: \begin{align} \chi_0 &= q^{-\tfrac{7}{24}}(1 + 7q + 35q^2 + 70q^3 + 189q^4 + 420q^5 + 833q^6 + 1631q^7\nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ \ + 3143q^8 + 5530q^9 + 9863q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{\frac{9}{8}} &= \text{unstable}, \nonumber\\ \chi_{\frac{3}{4}} &= q^{\tfrac{11}{24}}(7 + 22q + 56q^2 + 161q^3 + 343q^4 + 686q^5 + 1421q^6 + 2653q^7\nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ + 4782q^8 + 8638q^9 + 14847q^{10} + \mathcal{O}(q^{11})). \end{align} This is a two-character solution (not yet a $(2,\ell)$ solution).
\item $(c,h_1,h_2)=\left(7,\frac{7}{4},\frac{1}{8}\right)$: \begin{align} \chi_0 &= \text{unstable}, \nonumber\\ \chi_{\frac{7}{4}} &= \text{unstable}, \nonumber\\ \chi_{\frac{1}{8}} &= q^{-\tfrac{1}{6}}(1 + 7q + 8q^2 + 22q^3 + 42q^4 + 63q^5 + 106q^6 + 190q^7 \nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ + 267q^8 + 428q^9 + 652q^{10} + \mathcal{O}(q^{11})). \end{align} The stable descendant with $h = \tfrac{1}{8}$ is a $c = 4$ single-character theory found in \ref{18b_character_1} that also reappears as a two-character theory with an unstable descendant in \ref{18b_character_2}. \item $(c,h_1,h_2)=\left(8,1,1\right)$: \begin{align} \chi_0 &= q^{-\tfrac{1}{3}}(1 + m_1q + m_2q^2 + \mathcal{O}(q^{3})), \nonumber\\ \chi_{1} &= q^{\tfrac{2}{3}}(1 + 2q + 8q^2 + 22q^3 + 47q^4 + 102q^5 + 224q^6 + 422q^7 \nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ + 815q^8 + 1516q^9 + 2688q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{1} &= q^{\tfrac{2}{3}}(1 + 2q + 8q^2 + 22q^3 + 47q^4 + 102q^5 + 224q^6 + 422q^7 \nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ + 815q^8 + 1516q^9 + 2688q^{10} + \mathcal{O}(q^{11})), \end{align} where, for the above identity character, we have $m_1\in\mathbb{N}\cup \{0\}$, $m_2=37+2 m_1$ and $k=0$. Note that in the above case the two descendants are equal. This is analogous to how $D_{4,1}$ (and its GHM dual \cite{Gaberdiel:2016zke} with central charge 20) appear as 3-character solutions to the $(3,0)$ MLDE in the $\text{SL}(2,\mathbb{Z})$ case (see \cite{Das:2021uvd}). \item $(c,h_1,h_2)=\left(16,0,3\right)$: \begin{align} \chi_0 &= \text{Log unstable}, \\ \chi_{0} &= \text{Log unstable}, \nonumber\\ \chi_{3} &= q^{\tfrac{7}{3}}(1 + 7q + 35q^2 + 140q^3 + 490q^4 + 1547q^5 \nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ + 4480q^6 + 12192q^7 + 31402q^8\nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ + 77119q^9 + 182119q^{10} + \mathcal{O}(q^{11})).
\end{align} \item $(c,h_1,h_2)=\left(20,0,\frac{7}{2}\right)$: \begin{align} \chi_0 &= q^{-\tfrac{5}{6}}(1 + 47q + 440q^2 + 5782q^3 + 83517q^4 + 239124q^5 + 1070594q^6 + 4192784q^7\nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ + 14782019q^8 + 47772551q^9 + 144263738q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{0} &=q^{-\tfrac{5}{6}}(1 + 47q + 440q^2 + 5782q^3 + 83517q^4 + 239124q^5 + 1070594q^6 + 4192784q^7\nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ + 14782019q^8 + 47772551q^9 + 144263738q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{\frac{7}{2}} &= \text{unstable}. \end{align} This is a two-character solution (not yet a $(2,\ell)$ solution). \item $(c,h_1,h_2)=\left(20,\frac{1}{2},3\right)$: \begin{align} \chi_0 &= \text{Log unstable}, \\ \chi_{\frac{1}{2}} &= q^{-\tfrac{1}{3}}(1 + 14q + 65q^2 + 156q^3 + 456q^4 + 1066q^5 \nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ + 2250q^6 + 4720q^7 + 9426q^8\nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ + 17590q^9 + 32801q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{3} &= \text{unstable}. \end{align} \item $(c,h_1,h_2)=\left(24,\frac{6}{7},\frac{22}{7}\right)$: \begin{align} \chi_0 &= q^{-1}(1 + 238q + 783q^2 + 8672q^3 + 65367q^4 + 371520q^5 + 1741655q^6 + 7161696q^7\nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ + 26567946q^8 + 90521472q^9 + 288078201q^{10} + \mathcal{O}(q^{11})), \nonumber\\ \chi_{\frac{6}{7}} &= \text{unstable}, \nonumber\\ \chi_{\frac{22}{7}} &= \text{unstable}. \end{align} This is a one-character solution (not yet a $(1,\ell)$ solution). \item $(c,h_1,h_2)=\left(24,0,4\right)$: \begin{align} \chi_0 &= \text{Log unstable}, \\ \chi_{0} &= \text{Log unstable}, \nonumber\\ \chi_{4} &= q^{3}(3185 + 34398q + 231231q^2 + 1201824q^3 + 5321511q^4 + 20914816q^5 \nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ \ \ + 74740679q^6 + 247766688q^7 + 770976635q^8\nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ \ \ + 2272634846q^9 + 6394256424q^{10} + \mathcal{O}(q^{11})). 
\end{align} \end{itemize} \section{Future directions}\label{sec:Future_work} \noindent There are various interesting future directions, which we reserve for upcoming work(s); we present them briefly below while also summarizing the results of this paper.\\ \\ \textbf{\textit{Are there new theories out in the wild?}}\\ We have obtained many single-, two-, and three-character solutions corresponding to the $\Gamma^{+}_0(2)$ and $\Gamma^{+}_0(3)$ congruence subgroups of $\text{SL}(2,\mathbb{Z})$. Many of the single-character solutions have nice interpretations with regard to the conjugacy classes of $\mathbb{M}$. We noticed the reappearance of certain single-character solutions, aided by unstable descendants, as solutions to $(2,\ell)$ and $(3,0)$ MLDEs, which possess a neat description in terms of the Borcherds product. However, these solutions themselves do not describe any CFTs ``yet''. It would be nice to see if these solutions, by themselves, could be related to some known theories with restricted conformal symmetry, such that their corresponding partition functions are modular (up to a phase if required) only with respect to a congruence subgroup rather than the full modular group. For instance, we notice that many of our two-character solutions form nice bilinear pairs with many other two-character solutions to give single-character solutions. This phenomenon was very useful in identifying coset pairs in the $\text{SL}(2,\mathbb{Z})$ case (see \cite{Gaberdiel:2016zke}). This perhaps hints at the fact that there could be nice coset relations even for these solutions belonging to the congruence subgroups. In the CFT literature, it is known that there exist theories associated with the Hecke subgroup at level 2: for example, fermionic RCFTs \cite{Bae:2021mej}, SCFTs, and parafermion theories \cite{Anderson:1987ge}.
So, it would be natural to expect that there might be theories out in the wild for Fricke subgroups too.\\ \\ \textbf{\textit{Looking at the bulk and beyond for $\mathbf{\Gamma_{0}^{+}(2)}$ and $\mathbf{\Gamma_{0}^{+}(3)}$.}}\\ For the two- and three-character analyses, we have studied the MLDEs and their solutions for $\mathbf{\Gamma_{0}^{+}(2)}$ and $\mathbf{\Gamma_{0}^{+}(3)}$ up to the \textit{bulk point}. There exists a complete classification (see \cite{Chandra:2018pjq}) of \textit{admissible} two-character solutions in the $\text{SL}(2,\mathbb{Z})$ case for $\ell\geq 6$ (note, $\ell=6$ is the bulk point for $\text{SL}(2,\mathbb{Z})$). The way this classification works is that any solution above the bulk point can be written down in terms of the solutions below the bulk point. Performing an MLDE analysis beyond the bulk point for these congruence subgroups might enable us to obtain a complete classification of admissible solutions in terms of the solutions obtained above, since these lie below the bulk point.\\ \noindent \textbf{\textit{Higher-character MLDEs for $\mathbf{\Gamma_{0}^{+}(2)}$ and $\mathbf{\Gamma_{0}^{+}(3)}$?}}\\ It is natural to extend our analysis to four- and five-character MLDEs and probe the landscape, at least at $\ell = 0$, for admissible solutions (reappearances of lower-character solutions, minimal models, and genuine new solutions). Additionally, it is worth mentioning that in the case of $\Gamma_{0}^{+}(2)$, all the minimal models possess central charges $c \in\tfrac{6}{7}\times\{1, 5, 9, 13\}$, where $\tfrac{6}{7} = \tfrac{4}{7}\times \tfrac{3}{2} = c_{\mathcal{M}(7,2)}\cdot \mu_{0}^{+}$ and $\mathcal{M}(7,2)$ is a three-character minimal model with $(c,h_{1},h_{2}) = \left(\tfrac{4}{7},\tfrac{5}{7},\tfrac{1}{7}\right)$. It would be interesting to explore this relationship further and also to see if such relations show up for minimal models in higher-character theories.
The equation would be simple to work out thanks to the ease of computation with modular forms of Fricke levels $2$ and $3$. Additionally, it would be nice to provide a general prescription for modular re-parameterization for $n$-character theories at these levels (which should work out nicely since the re-parameterization setup for these groups mimics the prescription in the $\text{SL}(2,\mathbb{Z})$ case).\\ \noindent \textbf{\textit{What's up with $\mathbf{\Gamma_{0}^{+}(5)}$ and $\mathbf{\Gamma_{0}^{+}(7)}$?}}\\ Although we found that $\Gamma_{0}^{+}(7)$ is quite hard to re-parameterize, we should be able to find three-character solutions for it and for Fricke level $5$. It would be interesting to see if an analysis at $(3,0)$ could reveal some new solutions. Firstly, we could check if the identity of a particular three-character solution is expressible in terms of Hecke modular forms of levels $5$ and $7$, thereby enabling us to identify the $c$-chronology and devise a general expression akin to \ref{c-chronology_Fricke_2} and \ref{c-chronology_Fricke_3}. Secondly, we can explore if we continue to find minimal models at these levels as we did at levels $p = 2,3$. Lastly, it would be interesting to see if genuine three-character solutions exist at these levels.\\ \noindent \textbf{\textit{What about other prime divisor levels of $\mathbf{\Gamma_{0}^{+}(p)}$?}}\\ The most approachable higher Fricke level would be $p = 11$, since the space of modular forms is well studied \cite{Junichi}. Thus, an exhaustive analysis at Fricke level $11$ for single-, two-, and three-character solutions should be possible, and we can ask the same questions about finding minimal models and the existence of genuine three-character solutions here too. The space of modular forms becomes complicated at levels $p\geq 13$, which unfortunately limits the ease of calculations.
Nevertheless, at least an exhaustive single-character and an $(n,\ell) = (2,0)$ analysis for the remaining prime divisor levels could help us establish some common themes among the solutions in $\Gamma_{0}^{+}(p)$. \\ \noindent \textbf{\textit{What about Hecke groups?}}\\ The natural direction to follow post-Fricke would be to investigate the Hecke groups. Taking indications from the $c$-chronology, where the identity character could be expressed in terms of Hecke group modular forms, it would be interesting to check if a set of solutions found in our analysis for $\Gamma_{0}^{+}(p)$ reappears in $\Gamma_{0}(p)$. This is natural to expect since Fricke groups are supergroups of Hecke groups. From section \ref{sec:Two-character_Hecke_Fricke}, we see that we can associate multiple definitions of the Ramanujan-Serre covariant derivative to a particular Hecke group. This could yield a plethora of two- and three-character solutions at the same Wronskian index. It would be nice to check if there exist certain single-character solutions found with $(2,\ell)$ that possess unstable descendants. If this is the case, then there should also exist a way to read off the (unstable) theories from a Borcherds product with the Hecke Hauptmodul. Lastly, it would be especially interesting to see if we can relate the admissible solutions to existing minimal models, coset theories, or other unitary (S)CFTs.\\ \noindent \textbf{\textit{Getting lucky with other congruence subgroups.}}\\ Why leave out the congruence subgroups $\Gamma_{1}(N)$, the conjugate groups $\Gamma^{0}(N)$ and $\Gamma^{1}(N)$, and the groups $\Gamma_{0}^{0}(N)$? We could fully comprehend how admissible solutions are categorized in these congruence subgroups with the aid of MLDE analysis, but more importantly, this could provide us with new insights into the relationships of character solutions among the subgroups and between the subgroup and modular group characters.
For example, the $S$-transform of the $\Gamma_{0}(7)$ Hauptmodul, $j_{7}(\tau)$, is found to be $j_{\Gamma_{0}(7)}(S(\tau)) = 49\left(\tfrac{\eta(\tau)}{\eta\left(\tfrac{\tau}{7}\right)}\right)^{4} = 49j_{\Gamma^{0}(7)}^{-1}(\tau)$. If $j_{\Gamma_{0}(7)}(\tau)$ and $j_{\Gamma^{0}(7)}(\tau)$ are admissible single-character solutions, then we see that analyzing just the Hecke solutions can help us find the related conjugate solutions by merely taking the $S$-transform. Also, Hecke images of $\Gamma_{0}(N)$ characters can be related to those of $\Gamma^{0}(N)$ characters. There also exist other Atkin-Lehner groups $\Gamma_{0}^{*}(N)$, studied in \cite{Junichi}, that are desirable candidates to pick for MLDE analysis.\\ \noindent \textbf{\textit{A modular tower of theories.}}\\ To better comprehend the categorization of CFTs, a more systematic investigation of the two- and three-character MLDEs for the Fricke groups and the related modular towers $\Gamma_{0}^{+}(p^{n+2})$ would be interesting. The natural choice of candidates for first consideration would be the towers $\Gamma_{0}^{+}(2)\to \Gamma_{0}^{+}(4)\to \Gamma_{0}^{+}(8)\to \ldots$ (and the corresponding Hecke tower), which follows from the modular tower $X_{0}(2^{n})$, and $\Gamma_{0}^{+}(3)\to \Gamma_{0}^{+}(6)\to \Gamma_{0}^{+}(12)\to \ldots$ (and the corresponding Hecke tower), which follows from the modular tower $X^{+}_{0}(3\cdot 2^{n})$. In \cite{Umasankar:2022kzs}, an admissible single-character solution to the first levels of the modular tower $\Gamma_{0}^{+}(7^{n})$ was found in terms of the parameterization in the affine equation of the elliptic modular curve $X_{0}(49)$.
Since $\Gamma_{0}^{+}(25)$ is also a Conway-Norton ghost similar to $\Gamma_{0}^{+}(49)$, it would be interesting to examine what occurs for level $5$ Fricke groups.\\ \noindent \textbf{\textit{Finding new lattices.}}\\ It was reported in \cite{McKay2000FuchsianGA} that the Schwarzian derivative of the Hauptmoduls of genus-zero Hecke groups and their conjugates with no elliptic elements can be expressed in terms of Eisenstein series or certain $A_{4}$- and $D_{4}$-lattice theta functions. The Schwarzian of the Fricke level $2$ Hauptmodul is found to possess the following $q$-series expansion \begin{align} \begin{split} -\{j_{2^{+}}(\tau),\tau\} =& -\left(\frac{1}{\left(j_{2^{+}}'\right)^{2}}\left(2j'_{2^{+}}j'''_{2^{+}} - 3\left(j''_{2^{+}}\right)^{2}\right) \right)(\tau)\\ =& 1 +44040 q^2 + 3792000 q^3 + 536995320 q^4 + 57168341760 q^5 + \mathcal{O}(q^{6}), \end{split} \end{align} where all the coefficients are found to be positive. A first attempt at reconstructing this series would be to use the Jacobi theta-series combination $\tfrac{1}{256}\left(\theta_{3}(q^{4}) + \theta_{4}(q^{4})\right)^{8}$, which has a vanishing coefficient at order $\mathcal{O}(q)$. This ansatz, however, does not turn out to yield the required series. Thus, it could be that the series we are after is some combination of the level $2$ Eisenstein series, or it could be constructed out of other integer lattices. Finding such Schwarzian-lattice relations for all Fricke levels would form an interesting study. Additionally, one can also construct $\Theta$-series for the modular invariant partition functions \ref{partintn1}, \ref{partintn3}, and \ref{partintn5} and explore if they can be expressed as combinations of a McKay-Thompson series and a lattice $\Theta$-series akin to what was done with $\Theta_{2^{+}}(\tau)$. \\ \\ \textbf{\textit{Exhaustive Hecke Relations.}}\\ We observed a few Hecke relations (\cite{Harvey:2018rdc}) among some of our single-character solutions.
The analysis done here in terms of Hecke images is far from complete. One natural extension of the analysis provided here is to compute Hecke images of the two-character and three-character solutions obtained above. Since taking Hecke images of higher characters involves the modular $S$ and $T$ matrices, we can in principle ask the opposite question for our case considered above. Say we construct Hecke images of our two-character and three-character solutions; we would end up with some characters from this procedure. Now, using the knowledge of how Hecke operators act on characters in the $\text{SL}(2,\mathbb{Z})$ case, perhaps we can determine the \textit{restricted} modular $S$ and $T$ matrices which would govern the transformations of these congruence subgroup characters. We aim to return to this kind of Hecke image analysis for other Fricke and Hecke subgroups in the future.\\ \\ \textbf{\textit{MTCs for congruence subgroups.}}\\ It is known that every RCFT, whose partition function by definition is modular (up to a phase) with respect to the full modular group, belongs to some Modular Tensor Category (MTC) \cite{rowell2009classification}. Suppose that, by following the procedure outlined in the above point, one can find the \textit{restricted} modular $S$ and $T$ matrices for a given character-like solution. This then raises the question of which MTC this modular data would belong to. If such a structure exists, then we could find, in principle, the MTCs to which our two- and three-character solutions belong. This idea is also motivated by the fact that we observe nice bilinear relations among many of our character-like solutions. In the $\text{SL}(2,\mathbb{Z})$ case, whenever two RCFTs are related by a bilinear relation, if one belongs to an MTC then the other member of the pair belongs to the conjugate MTC.
Such relations might also exist in the case of congruence subgroups.\\ \noindent \textbf{\textit{Kaneko-Zagier Equations.}}\\ It was noted in \cite{Chandra:2018pjq} that the following Kaneko-Zagier equation, \begin{align} \left(\mathcal{D}^2_{(k)}-\frac{k(k+2)}{144}E_4(\tau)\right)f_{(k)}(\tau) = 0, \label{KZSL2z} \end{align} can be recast into the usual $(2,0)$ MMS equation, \begin{align} \left(\mathcal{D}^2_{(0)}-\frac{k(k+2)}{144}E_4(\tau)\right)f_{(0)}(\tau) = 0, \label{MMSSL2z} \end{align} by doing the following substitution, \begin{align} f_{(k)}(\tau) = \eta(\tau)^{2k}f_{(0)}(\tau), \qquad c=2k. \label{subs1} \end{align} We figured out that a similar \textit{Kaneko-Zagier type} equation can be obtained in the $\Gamma^{+}_0(2)$ case. We start with the following \textit{Kaneko-Zagier type} equation, \begin{align} \left(\mathcal{D}^2_{(k)}-\frac{k(k+2)}{64}E^{(2+)}_4(\tau)\right)f_{(k)}(\tau) = 0. \label{KZSG2} \end{align} Now we observe that $\theta_q (\eta(\tau)\eta(2\tau)) = \tfrac{1}{8}E_2^{(2+)}(\tau)(\eta(\tau)\eta(2\tau))$, which follows from $\theta_q \Delta_2 = E_2^{(2+)}\Delta_2$ (see \ref{general_cusp_derivative}). So, consider the substitution \begin{align} f_{(k)} = (\eta(\tau)\eta(2\tau))^k f_{(0)}, \qquad c=3k. \label{subs2} \end{align} This then leads to (from \ref{KZSG2}) \begin{align} \left(\mathcal{D}^2_{(0)}-\frac{k(k+2)}{64}E_4^{(2+)}(\tau)\right)f_{(0)}(\tau) = 0. \label{G2z} \end{align} In \cite{Chandra:2018pjq}, an equation of the type \ref{KZSL2z} was studied to classify quasi-characters. Motivated by this, we are interested in studying equations of the form \ref{KZSG2} to classify the quasi-characters that appear as solutions in the $\Gamma^{+}_0(2)$ case. Furthermore, from a mathematical standpoint, equations like \ref{KZSL2z} find applications in the study of supersingular $j$-invariants, hypergeometric series, and Atkin's orthogonal polynomials (see \cite{KZ}).
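The identity $\theta_q(\eta(\tau)\eta(2\tau)) = \tfrac{1}{8}E_2^{(2+)}(\tau)\,\eta(\tau)\eta(2\tau)$ used in the substitution \ref{subs2} can also be verified order by order on truncated $q$-series. A minimal sketch in exact arithmetic (the normalization $E_{2}^{(2^{+})} = \tfrac{1}{3}\left(E_{2}(\tau) + 2E_{2}(2\tau)\right)$ is an assumption on our part, chosen to be consistent with the identities \ref{Ramanujan_p=2}):

```python
from fractions import Fraction

N = 12  # truncation order: all series are kept mod q^N

def mul(a, b):
    c = [0]*N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if ai and i + j < N:
                c[i+j] += ai*bj
    return c

def sigma1(n):
    return sum(d for d in range(1, n+1) if n % d == 0)

# R(q) with eta(tau)eta(2tau) = q^{1/8} R(q), R = prod_{n>=1} (1-q^n)(1-q^{2n})
R = [1] + [0]*(N-1)
for n in range(1, N):
    for m in (n, 2*n):
        if m < N:
            fac = [1] + [0]*(N-1)
            fac[m] = -1
            R = mul(R, fac)

# assumed normalization: E_2^{(2+)} = (E_2(tau) + 2 E_2(2tau))/3
E2p = [1] + [0]*(N-1)
for n in range(1, N):
    E2p[n] = -8*sigma1(n) - (16*sigma1(n//2) if n % 2 == 0 else 0)

# theta_q(q^{1/8} R) = q^{1/8}(R/8 + theta_q R)  vs  (1/8) E_2^{(2+)} q^{1/8} R
lhs = [Fraction(c, 8) + n*c for n, c in enumerate(R)]
rhs = [Fraction(c, 8) for c in mul(E2p, R)]
assert lhs == rhs
print(R[:5], E2p[:3])  # [1, -1, -2, 1, 0] [1, -8, -40]
```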
It would be interesting to investigate a similar equation (obtained above in \ref{KZSG2}) for the $\Gamma^{+}_0(2)$ case with regard to applications similar to those in \cite{KZ}. One can in principle also set up \textit{Kaneko-Zagier type} equations for other Fricke groups. \section{Modular re-parameterization in \texorpdfstring{$\Gamma^{+}_{0}(p)$}{Γ(p)+}} \noindent Taking a hint from the trend of the definitions and the method followed for modular re-parameterization, one might be tempted to state that this should work for all prime-level Fricke groups. This, unfortunately, does not turn out to be true. We note that the procedure we followed to obtain MLDEs for levels $p = 2$ and $p = 3$ in \ref{reparameterized_MLDE_2+} and \ref{reparameterized_MLDE_3+_new} respectively mimics that done in the case of $\text{SL}(2,\mathbb{Z})$ (see, for example, \cite{Franc:2016}). A drastic difference we immediately notice at the group level is that levels $p\geq 5$, unlike $p = 2,3$, possess more than one elliptic point, excluding the Fricke involution point $\rho_{F_{p}} = \tfrac{i}{\sqrt{p}}$. At levels $p = 5,13$, we notice that the value of the Hauptmodul at the Fricke involution point becomes irrational (see table \ref{tab: Fricke_involution_point_limits}), and this affects the definition of $K_{p^{+}}(\tau)\equiv \tfrac{j_{p^{+}}(\rho_{F_{p}})}{j_{p^{+}}(\tau)}$, which in turn affects the way in which we define the Eisenstein series. The following expressions hold only when $p = 2,3$ \begin{align} E_{4}^{(p^{+})} = \frac{A_{p^{+}}^{2}}{1 - K_{p^{+}}},\ \ \ E_{6}^{(p^{+})} = \frac{A_{p^{+}}^{3}}{1 - K_{p^{+}}}. \end{align} \begin{table}[htb!]
\centering \begin{tabular}{||c|c|c|c|c|c||} \hline $p$ & 2 & 3 & 5 & 7 & 13\\[1ex] \hline\hline $j_{p^{+}}(\rho_{F_{p}})$ & 256 & 108 & 22 + 10$\sqrt{5}$ & 27 & $2\sqrt{13}$\\ [1ex] \hline \end{tabular} \caption{The values of the Hauptmoduls at the Fricke involution points for select levels of Fricke groups that possess integral coefficients in the $q$-series expansion of the Hauptmoduls.} \label{tab: Fricke_involution_point_limits} \end{table} \\ Let us consider the following general ansatz for the re-parameterized $(n,\ell) = (2,0)$ MLDE in $\Gamma_{0}^{+}(p)$ \begin{align}\label{general_ansatz} \left[\theta_{K_{p^{+}}}^{2} + \left(\mathcal{D}A_{p^{+}}\right)\theta_{K_{p^{+}}} + \sum\limits_{i = 1}^{\nu_{p^{+}}}\mu_{i}\mathcal{T}_{p^{+}, i}\right]f(\tau) = 0, \end{align} where $\nu_{p^{+}} = \text{dim}\ \mathcal{M}_{4}(\Gamma_{0}^{+}(p))$ and the $\mathcal{T}_{p^{+}, i}$ are weight $4$ modular forms, which are the Eisenstein series $E_{4}^{(p^{+})}$ for levels $p= 2,3,5,7$ but are more complicated for level $p = 13$. See \cite{Umasankar:2022kzs} for an idea of constructing such modular forms for $\Gamma_{0}^{+}(13)$. Due to the irrational nature of $K_{5^{+}}(\tau)$ and the complicated relation between the Eisenstein series and the Hauptmodul, it turns out that re-parameterization following the ansatz \ref{general_ansatz} does not perform well at level $p = 5$. This will be discussed in further detail in subsequent sections. To set up the re-parameterized MLDE, we require Ramanujan identities at each Fricke level, which we present here. We note that although these identities for levels $p = 2,3$, shown below in \ref{Ramanujan_p=2} and \ref{Ramanujan_p=3} respectively, match the expressions originally derived by Zudilin in \cite{Zudilin2003TheHE}, the emphasis we place here is on the fact that the Ramanujan-Serre covariant derivative can be used as a building block to construct these identities.
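For $p = 2$, writing $j_{2^{+}} = \left(E_{4}^{(2^{+})}\right)^{2}/\Delta_{2}$ (so that $K_{2^{+}} = 256\Delta_{2}/\left(E_{4}^{(2^{+})}\right)^{2}$), the two relations above are equivalent to the single weight-$12$ identity $\left(E_{6}^{(2^{+})}\right)^{2} = \left(E_{4}^{(2^{+})}\right)^{3} - 256\,E_{4}^{(2^{+})}\Delta_{2}$, which is easy to confirm on truncated $q$-series. A minimal sketch (the normalizations $E_{4}^{(2^{+})} = \tfrac{1}{5}(E_{4}(\tau) + 4E_{4}(2\tau))$, $E_{6}^{(2^{+})} = \tfrac{1}{9}(E_{6}(\tau) + 8E_{6}(2\tau))$ and $\Delta_{2} = (\eta(\tau)\eta(2\tau))^{8}$ are assumptions on our part, consistent with \ref{Ramanujan_p=2}):

```python
N = 10  # work mod q^N

def mul(a, b):
    c = [0]*N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if ai and i + j < N:
                c[i+j] += ai*bj
    return c

def sigma(n, k):
    return sum(d**k for d in range(1, n+1) if n % d == 0)

# assumed normalizations of the level-2 Fricke Eisenstein series
E4p = [1] + [48*sigma(n, 3) + (192*sigma(n//2, 3) if n % 2 == 0 else 0)
             for n in range(1, N)]
E6p = [1] + [-56*sigma(n, 5) - (448*sigma(n//2, 5) if n % 2 == 0 else 0)
             for n in range(1, N)]

# Delta_2 = (eta(t) eta(2t))^8 = q prod (1-q^n)^8 (1-q^{2n})^8
D2 = [0]*N
D2[1] = 1
for n in range(1, N):
    for m in (n, 2*n):
        if m < N:
            fac = [1] + [0]*(N-1)
            fac[m] = -1
            for _ in range(8):
                D2 = mul(D2, fac)

# (E_6^{(2+)})^2 = (E_4^{(2+)})^3 - 256 E_4^{(2+)} Delta_2, i.e. E_4 = A^2/(1-K)
lhs = mul(E6p, E6p)
rhs = [a - 256*b for a, b in zip(mul(mul(E4p, E4p), E4p), mul(E4p, D2))]
assert lhs == rhs
print("(E6+)^2 == (E4+)^3 - 256 E4+ Delta2 verified to order q^%d" % (N - 1))
```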
\subsection{Recipe for Ramanujan-Eisenstein identities} \noindent We can use the Ramanujan-Serre covariant derivatives we have built previously for the Fricke groups as a guide to help us find Ramanujan's Eisenstein identities, or simply the Ramanujan identities. The algorithm we shall use is the following \begin{enumerate} \item Choose a Fricke group and identify the correct value of $\kappa(r)$ from table \ref{tab:Fricke_valence}. \item Choose the Eisenstein series whose identity we want to find. Looking at the space of modular forms whose weight equals the net weight of the product of the Eisenstein series under consideration with the weight $2$ quasi-modular form $E_{2}^{(p^{+})}(\tau)$, build a linear combination of basis forms carrying the overall multiple $\kappa(r)$. Set $r = 1$ if the Eisenstein series being considered is $E_{2}^{(p^{+})}(\tau)$. \item Fix the coefficients of the linear combination by comparison to the $q$-series expansion of the $q$-derivative of the Eisenstein series considered. \end{enumerate} As an example, consider the group $\Gamma_{0}^{+}(2)$ with $\kappa(r) = \tfrac{r}{8}$. Say we are interested in finding the Ramanujan identity associated with the Eisenstein series $E_{6}^{(2^{+})}(\tau)$. Firstly, we note that $r = 6$ and the common factor of the linear combination is $\kappa(r) = \tfrac{3}{4}$. Now, since the weight of the product $E_{2}^{(2^{+})}E_{6}^{(2^{+})}$ is $8$, we pick the linear combination $a\left(E_{4}^{(2^{+})}\right)^{2} + b\Delta_{2}$. Comparing the $q$-series expansions, we find the expression shown in \ref{Ramanujan_p=2}. The Ramanujan identities in the $\text{SL}(2,\mathbb{Z})$ case read \begin{align}\label{Ramanujan_identities_SL2Z} \begin{split} \mathcal{D}_{2}E_{2}(\tau) =& -\frac{1}{12}E_{4}(\tau),\\ \mathcal{D}_{4}E_{4}(\tau) =& -\frac{1}{3}E_{6}(\tau),\\ \mathcal{D}_{6}E_{6}(\tau) =& -\frac{1}{2}E_{4}^{2}(\tau).
\end{split} \end{align} Since the ring of modular forms for $\Gamma(1) = \text{SL}(2,\mathbb{Z})$ is the polynomial ring $\mathcal{M}(\Gamma(1)) = \mathbb{C}\left[E_{4},E_{6}\right]$, we can build Eisenstein series of higher weights out of those of weights $4$ and $6$, i.e. $E_{2k}(\tau)$ is a linear combination of monomials $\left(E_{4}^{a}E_{6}^{b}\right)(\tau)$ with $4a+6b=2k$ and $a, b\in\mathbb{Z}_{\geq 0}$. This tells us that Ramanujan identities for higher weight Eisenstein series can be obtained by simply using the identities \ref{Ramanujan_identities_SL2Z}. Consider for example the Eisenstein series of weights $8$, $10$, and $12$ that can be built as follows \begin{align} \begin{split} E_{8}(\tau) =& E_{4}^{2}(\tau),\ \ \ \ E_{10}(\tau) = \left(E_{4}E_{6}\right)(\tau),\\ E_{12}(\tau) =& \frac{1}{691}\left(441E_{4}^{3} + 250E_{6}^{2}\right)(\tau). \end{split} \end{align} Using \ref{Ramanujan_identities_SL2Z}, we find the following identities \begin{align} \begin{split} \mathcal{D}_{8}E_{8}(\tau) =& -\frac{2}{3}E_{10}(\tau),\\ \mathcal{D}_{10}E_{10}(\tau) =& -\frac{1}{2}E_{4}^{3}(\tau) - \frac{1}{3}E_{6}^{2}(\tau),\\ \mathcal{D}_{12}E_{12}(\tau) =& -\left(E_{4}^{2}E_{6}\right)(\tau). \end{split} \end{align} This method, however, is not applicable to the congruence subgroups of interest, since their rings of modular forms are more complicated. We would have to study the space of modular forms of a particular weight for each subgroup in order to derive identities for the Eisenstein series of higher weights.
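The $\theta_q$-forms of the classical identities, $\theta_q E_2 = \tfrac{1}{12}(E_2^2 - E_4)$, $\theta_q E_4 = \tfrac{1}{3}(E_2E_4 - E_6)$ and $\theta_q E_6 = \tfrac{1}{2}(E_2E_6 - E_4^2)$, of which \ref{Ramanujan_identities_SL2Z} are the covariant-derivative versions, can be confirmed on truncated $q$-series with a few lines of exact arithmetic (a minimal sketch):

```python
N = 12  # truncation order

def mul(a, b):
    c = [0]*N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if ai and i + j < N:
                c[i+j] += ai*bj
    return c

def sigma(n, k):
    return sum(d**k for d in range(1, n+1) if n % d == 0)

E2 = [1] + [-24*sigma(n, 1) for n in range(1, N)]
E4 = [1] + [240*sigma(n, 3) for n in range(1, N)]
E6 = [1] + [-504*sigma(n, 5) for n in range(1, N)]

def theta(f):  # theta_q = q d/dq acting on a q-series
    return [n*c for n, c in enumerate(f)]

# classical Ramanujan identities, cleared of denominators
assert [12*c for c in theta(E2)] == [a - b for a, b in zip(mul(E2, E2), E4)]
assert [3*c for c in theta(E4)] == [a - b for a, b in zip(mul(E2, E4), E6)]
assert [2*c for c in theta(E6)] == [a - b for a, b in zip(mul(E2, E6), mul(E4, E4))]
print("Ramanujan identities verified to order q^%d" % (N - 1))
```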
\subsection{Level 2} \noindent The Ramanujan identities for $\Gamma_{0}^{+}(2)$ are found to be \begin{align}\label{Ramanujan_p=2} \mathcal{D}_{2}E_{2}^{(2^{+})}(\tau) =& -\frac{1}{8}E_{4}^{(2^{+})}(\tau),\nonumber\\ q\frac{d}{dq}E_{2}^{(2^{+})}(\tau) =& \frac{1}{8}\left(\left(E_{2}^{(2^{+})}\right)^{2} - E_{4}^{(2^{+})}\right)(\tau),\nonumber\\\\ \mathcal{D}_{4}E_{4}^{(2^{+})}(\tau) =& -\frac{1}{2}E_{6}^{(2^{+})}(\tau),\nonumber\\ q\frac{d}{dq}E_{4}^{(2^{+})}(\tau) =& \frac{1}{2}\left(E_{2}^{(2^{+})}E_{4}^{(2^{+})} - E_{6}^{(2^{+})}\right)(\tau),\nonumber\\\\ \mathcal{D}_{6}E_{6}^{(2^{+})}(\tau) =& -\frac{3}{4}\left(E_{4}^{(2^{+})}\right)^{2} + 64\Delta_{2}(\tau),\nonumber\\ q\frac{d}{dq}E_{6}^{(2^{+})}(\tau) =& \frac{3}{4}\left(E_{2}^{(2^{+})}E_{6}^{(2^{+})} - \left(E_{4}^{(2^{+})}\right)^{2} + \frac{256}{3}\Delta_{2}\right)(\tau),\nonumber\\\\ \mathcal{D}_{8}E_{8}^{(2^{+})}(\tau) =& -E_{10}^{(2^{+})}(\tau),\nonumber\\ q\frac{d}{dq}E_{8}^{(2^{+})}(\tau) =& \left(E_{2}^{(2^{+})}E_{8}^{(2^{+})} - E_{10}^{(2^{+})}\right)(\tau),\nonumber\\\\ \mathcal{D}_{10}E_{10}^{(2^{+})}(\tau) =& -\frac{5}{4}\left(E_{4}^{(2^{+})}\right)^{3}(\tau) - 192\left(E_{4}^{(2^{+})}\Delta_{2}\right)(\tau),\nonumber\\ q\frac{d}{dq}E_{10}^{(2^{+})}(\tau) =& \frac{5}{4}\left(E_{2}^{(2^{+})}E_{10}^{(2^{+})} - \left(E_{4}^{(2^{+})}\right)^{3} - \frac{768}{5}E_{4}^{(2^{+})}\Delta_{2}\right)(\tau),\nonumber\\\\ \mathcal{D}_{12}E_{12}^{(2^{+})}(\tau) =& -\frac{3}{2}\left(E_{4}^{(2^{+})}\right)^{2}(\tau)E_{6}^{(2^{+})}(\tau) + \frac{49248}{691}\left(E_{6}^{(2^{+})}\Delta_{2}\right)(\tau),\nonumber\\ q\frac{d}{dq}E_{12}^{(2^{+})}(\tau) =& \frac{3}{2}\left(E_{2}^{(2^{+})}E_{12}^{(2^{+})} - \left(E_{4}^{(2^{+})}\right)^{2}E_{6}^{(2^{+})} + \frac{32832}{691}E_{6}^{(2^{+})}\Delta_{2}\right)(\tau).
\end{align} The behaviour of the Hauptmodul $j_{2^{+}}(\tau)$ near points $\tau = \rho_{2}, \tfrac{i}{\sqrt{2}}, i\infty$ reads \begin{align}\label{Hauptmodul_limits_2+} \begin{split} j_{2^{+}}(\rho_{2}) \to& \ 0,\\ j_{2^{+}}\left(\frac{i}{\sqrt{2}}\right) \to&\ 256,\\ j_{2^{+}}\left(i\infty\right) \to&\ \infty. \end{split} \end{align} Now, the $(n,\ell) = (2,0)$ MLDE for $\Gamma_{0}^{+}(2)$ takes the following form \begin{align} \left[\mathcal{D}^{2} + \mu E_{4}^{(2^{+})}(\tau)\right]f(\tau) = 0, \end{align} where we have made the choices $\omega_{2}(\tau) = 1$ since $\text{dim}\ \mathcal{M}_{2}(\Gamma_{0}^{+}(2)) = 0$, and $\omega_{4}(\tau) = \mu E_{4}^{(2^{+})}(\tau)$. Substituting for the covariant derivative, the ODE now reads \begin{align}\label{reparameterized_MLDE} \left[\theta_{q}^{2} - \frac{1}{4}E_{2}^{(2^{+})}\theta_{q} + \mu E_{4}^{(2^{+})}\right]f(\tau) = 0, \end{align} where we have defined $\theta_{a}\equiv a\tfrac{d}{da}$, so that $\theta_{q} = q\tfrac{d}{dq} = \tfrac{1}{2\pi i}\frac{d}{d\tau}$. The $q$-derivatives of $j_{2^{+}}(\tau)$ read \begin{align}\label{j_2+_derivatives} \begin{split} q\frac{d}{dq}j_{2^{+}}(\tau) =& \frac{1}{2\pi i}\frac{d}{d\tau}j_{2^{+}}(\tau)\\ =& -\left(\frac{E_{6}^{(2^{+})}}{E_{4}^{(2^{+})}}j_{2^{+}}\right)(\tau),\\ \left(q\frac{d}{dq}\right)^{2}j_{2^{+}}(\tau) =& \frac{1}{(2\pi i)^{2}}\frac{d^{2}}{d\tau^{2}}j_{2^{+}}(\tau)\\ =& \left[\left(\frac{3}{4}E_{4}^{(2^{+})} + \frac{1}{2}\left(\frac{E_{6}^{(2^{+})}}{E_{4}^{(2^{+})}}\right)^{2}- \frac{1}{4}\frac{E_{2}^{(2^{+})}E_{6}^{(2^{+})}}{E_{4}^{(2^{+})}}\right)j_{2^{+}} - 64\frac{\Delta_{2}}{E_{4}^{(2^{+})}}j_{2^{+}}\right](\tau). \end{split} \end{align} We now make the following definitions \begin{align} \theta_{q}\equiv A_{2^{+}}\theta_{K_{2^{+}}},\ \ A_{2^{+}}\equiv \frac{E_{6}^{(2^{+})}}{E_{4}^{(2^{+})}},\ \ K_{2^{+}}\equiv \frac{256}{j_{2^{+}}}.
\end{align} The $q$-derivatives of $A_{2^{+}}$ and $K_{2^{+}}$ are found to be \begin{align} \begin{split} \theta_{q}A_{2^{+}} =& -\frac{3}{4}E_{4}^{(2^{+})} + 64\frac{\Delta_{2}}{E_{4}^{(2^{+})}} + \frac{1}{2}\left(\frac{E_{6}^{(2^{+})}}{E_{4}^{(2^{+})}}\right)^{2} + \frac{1}{4}\frac{E_{2}^{(2^{+})}E_{6}^{(2^{+})}}{E_{4}^{(2^{+})}},\\ \theta_{q}K_{2^{+}} =& A_{2^{+}}K_{2^{+}}. \end{split} \end{align} We find the following relations between the weight $4$ and weight $6$ Eisenstein series and the newly defined variables, \begin{align}\label{repara_definitions_2+} E_{4}^{(2^{+})} = \frac{A_{2^{+}}^{2}}{1 - K_{2^{+}}},\ \ \ E_{6}^{(2^{+})} =\frac{A_{2^{+}}^{3}}{1 - K_{2^{+}}},\ \ \ \Delta_{2} = \frac{1}{256}\frac{A_{2^{+}}^{4}K_{2^{+}}}{(1-K_{2^{+}})^{2}}. \end{align} Now, the covariant derivative of $A_{2^{+}}$ is found to be \begin{align} \begin{split} \mathcal{D}A_{2^{+}} =& -\frac{3}{4}E_{4}^{(2^{+})} + 64\frac{\Delta_{2}}{E_{4}^{(2^{+})}} + \frac{1}{2}\left(\frac{E_{6}^{(2^{+})}}{E_{4}^{(2^{+})}}\right)^{2}\\ =& -A_{2^{+}}^{2}\frac{1 + K_{2^{+}}}{4(1 - K_{2^{+}})}. \end{split} \end{align} Using these results, we find \begin{align}\label{derivatives_repara_2+} \begin{split} \mathcal{D} =& \theta_{q} = A_{2^{+}}\theta_{K_{2^{+}}},\\ \mathcal{D}^{2} =& \left(\theta_{q} - \frac{1}{4}E_{2}^{(2^{+})}\right)\theta_{q} = A_{2^{+}}^{2}\theta_{K_{2^{+}}}^{2} + \left(\mathcal{D}A_{2^{+}}\right)\theta_{K_{2^{+}}}. \end{split} \end{align} We can now recast the MLDE \ref{reparameterized_MLDE} as follows \begin{align}\label{reparameterized_MLDE_2+} \left[\theta_{K_{2^{+}}}^{2} - \frac{1 + K_{2^{+}}}{4(1 - K_{2^{+}})}\theta_{K_{2^{+}}} + \frac{\mu}{1 - K_{2^{+}}}\right]f(K_{2^{+}}) = 0.
\end{align} This ODE is now of the hypergeometric type and can be solved to obtain \begin{align} f(\tau) = c_{1}\left(-K_{2^{+}}\right)^{\alpha}\pFq{2}{1}{\alpha - \frac{1}{4},\ \alpha}{2\alpha + \frac{1}{4}}{K_{2^{+}}} + c_{2}\left(-K_{2^{+}}\right)^{\beta}\pFq{2}{1}{\beta - \frac{1}{4},\ \beta}{2\beta + \frac{1}{4}}{K_{2^{+}}}, \end{align} where we have defined $\alpha \equiv \tfrac{1}{8}\left(3 - \sqrt{25 - 64\mu}\right)$ and $\beta\equiv \tfrac{1}{8}\left(5 + \sqrt{25 - 64\mu}\right)$. \subsection{Level 3} \noindent The Ramanujan identities for $\Gamma_{0}^{+}(3)$ are found to be \begin{align}\label{Ramanujan_p=3} \mathcal{D}_{2}E_{2}^{(3^{+})}(\tau) =& -\frac{1}{6}E_{4}^{(3^{+})}(\tau),\nonumber\\ q\frac{d}{dq}E_{2}^{(3^{+})}(\tau) =& \frac{1}{6}\left(\left(E_{2}^{(3^{+})}\right)^{2} - E_{4}^{(3^{+})}\right)(\tau),\nonumber\\\\ \mathcal{D}_{4}E_{4}^{(3^{+})}(\tau) =& -\frac{2}{3}E_{6}^{(3^{+})}(\tau),\nonumber\\ q\frac{d}{dq}E_{4}^{(3^{+})}(\tau) =& \frac{2}{3}\left(E_{2}^{(3^{+})}E_{4}^{(3^{+})} - E_{6}^{(3^{+})}\right)(\tau),\nonumber\\\\ \mathcal{D}_{6}E_{6}^{(3^{+})}(\tau) =& -\frac{1}{2}\left(E_{4}^{(3^{+})}\right)^{2}(\tau) - \frac{1}{2}\frac{\left(E_{6}^{(3^{+})}\right)^{2}(\tau)}{E_{4}^{(3^{+})}(\tau)},\nonumber\\ q\frac{d}{dq}E_{6}^{(3^{+})}(\tau) =& \left(E_{2}^{(3^{+})}E_{6}^{(3^{+})} - \frac{1}{2}\left(E_{4}^{(3^{+})}\right)^{2} - \frac{1}{2}\frac{\left(E_{6}^{(3^{+})}\right)^{2}}{E_{4}^{(3^{+})}}\right)(\tau),\nonumber\\\\ \mathcal{D}_{8}E_{8}^{(3^{+})}(\tau) =& -\frac{4}{3}\left(E_{4}^{(3^{+})}E_{6}^{(3^{+})}\right)(\tau) + \frac{8064}{205}\frac{E_{6}^{(3^{+})}(\tau)}{\left(E_{4}^{(3^{+})}\right)^{\tfrac{1}{2}}(\tau)}\Delta_{3}(\tau),\nonumber\\ q\frac{d}{dq}E_{8}^{(3^{+})}(\tau) =& \frac{4}{3}\left(E_{2}^{(3^{+})}E_{8}^{(3^{+})} - E_{4}^{(3^{+})}E_{6}^{(3^{+})} + \frac{6048}{205}\frac{E_{6}^{(3^{+})}}{\left(E_{4}^{(3^{+})}\right)^{\tfrac{1}{2}}}\Delta_{3}\right)(\tau),\nonumber\\\\ \mathcal{D}_{10}E_{10}^{(3^{+})}(\tau) =&
-\frac{5}{3}\left(E_{4}^{(3^{+})}\right)^{3}(\tau) + \frac{7974}{61}\left(E_{4}^{(3^{+})}\right)^{\tfrac{3}{2}}(\tau)\Delta_{3}(\tau) - \frac{7776}{61}\Delta_{3}^{2}(\tau),\nonumber\\ q\frac{d}{dq}E_{10}^{(3^{+})}(\tau) =& \frac{5}{3}\left(E_{2}^{(3^{+})}E_{10}^{(3^{+})} - \left(E_{4}^{(3^{+})}\right)^{3} + \frac{23922}{305}\left(E_{4}^{(3^{+})}\right)^{\tfrac{3}{2}}\Delta_{3} - \frac{23328}{305}\Delta_{3}^{2}\right)(\tau),\nonumber\\ =& \frac{5}{3}\left(E_{2}^{(3^{+})}E_{10}^{(3^{+})} - E_{12}^{(3^{+})} + \frac{76660529}{492323680}\left(\left(E_{4}^{(3^{+})}\right)^{3} - E_{4}^{(3^{+})}E_{8}^{(3^{+})}\right) - \frac{465230304}{15385115}\Delta_{3}^{2}\right)(\tau),\nonumber\\\\ \mathcal{D}_{12}E_{12}^{(3^{+})}(\tau) =& -2\left(E_{4}^{(3^{+})}\right)^{2}(\tau)E_{6}^{(3^{+})}(\tau) + \frac{3625344}{50443}\left(E_{4}^{(3^{+})}\right)^{\tfrac{1}{2}}(\tau)E_{6}^{(3^{+})}(\tau)\Delta_{3}(\tau),\nonumber\\ q\frac{d}{dq}E_{12}^{(3^{+})}(\tau) =& 2\left(E_{2}^{(3^{+})}E_{12}^{(3^{+})} - \left(E_{4}^{(3^{+})}\right)^{2}E_{6}^{(3^{+})} + \frac{1812672}{50443}\left(E_{4}^{(3^{+})}\right)^{\tfrac{1}{2}}E_{6}^{(3^{+})}\Delta_{3}\right)(\tau). \end{align} The behaviour of the Hauptmodul $j_{3^{+}}(\tau)$ near points $\tau = \rho_{3}, \tfrac{i}{\sqrt{3}}, i\infty$ reads \begin{align}\label{Hauptmodul_limits_3+} \begin{split} j_{3^{+}}(\rho_{3}) \to& \ 0,\\ j_{3^{+}}\left(\frac{i}{\sqrt{3}}\right) \to&\ 108,\\ j_{3^{+}}\left(i\infty\right) \to&\ \infty. \end{split} \end{align} Now, the $(n,\ell) = (2,0)$ MLDE for $\Gamma_{0}^{+}(3)$ takes the following form \begin{align} \left[\mathcal{D}^{2} + \mu E_{4}^{(3^{+})}(\tau)\right]f(\tau) = 0, \end{align} where we have made the choices $\omega_{2}(\tau) = 1$ since $\text{dim}\ \mathcal{M}_{2}(\Gamma_{0}^{+}(3)) = 0$, and $\omega_{4}(\tau) = \mu E_{4}^{(3^{+})}(\tau)$.
Substituting for the covariant derivative, the ODE now reads \begin{align}\label{reparameterized_MLDE_3+} \left[\theta_{q}^{2} - \frac{1}{3}E_{2}^{(3^{+})}\theta_{q} + \mu E_{4}^{(3^{+})}\right]f(\tau) = 0. \end{align} We now make the following definitions \begin{align} \theta_{q}\equiv A_{3^{+}}\theta_{K_{3^{+}}},\ \ A_{3^{+}}\equiv \frac{E_{6}^{(3^{+})}}{E_{4}^{(3^{+})}},\ \ K_{3^{+}}\equiv \frac{108}{j_{3^{+}}}. \end{align} We find the following relations between the weight $4$ and weight $6$ Eisenstein series and the newly defined variables, \begin{align} E_{4}^{(3^{+})} = \frac{A_{3^{+}}^{2}}{1 - K_{3^{+}}},\ \ \ E_{6}^{(3^{+})} = \frac{A_{3^{+}}^{3}}{1 - K_{3^{+}}}. \end{align} Now, the covariant derivative of $A_{3^{+}}$ is found to be \begin{align} \begin{split} \mathcal{D}A_{3^{+}} =& -\frac{1}{2}E_{4}^{(3^{+})} + \frac{1}{6}\left(\frac{E_{6}^{(3^{+})}}{E_{4}^{(3^{+})}}\right)^{2}\\ =& -A_{3^{+}}^{2}\frac{2 + K_{3^{+}}}{6(1 - K_{3^{+}})}. \end{split} \end{align} Using this result, we can recast the MLDE \ref{reparameterized_MLDE_3+} as follows \begin{align}\label{reparameterized_MLDE_3+_new} \left[\theta_{K_{3^{+}}}^{2} - \frac{2 + K_{3^{+}}}{6(1 - K_{3^{+}})}\theta_{K_{3^{+}}} + \frac{\mu}{1 - K_{3^{+}}}\right]f(K_{3^{+}}) = 0. \end{align} This ODE is now of the hypergeometric type and can be solved to obtain \begin{align} f(\tau) = c_{1}\left(-K_{3^{+}}\right)^{\alpha}\pFq{2}{1}{\alpha - \frac{5}{6},\ \alpha}{2\alpha - \frac{1}{3}}{K_{3^{+}}} + c_{2}\left(-K_{3^{+}}\right)^{\beta}\pFq{2}{1}{\beta - \frac{5}{6},\ \beta}{2\beta - \frac{1}{3}}{K_{3^{+}}}, \end{align} where we have defined $\alpha \equiv \tfrac{1}{3}\left(2 - \sqrt{4 - 9\mu}\right)$ and $\beta\equiv \tfrac{1}{3}\left(2 + \sqrt{4 - 9\mu}\right)$.
\subsection{Level 5} \noindent The Ramanujan identities for $\Gamma_{0}^{+}(5)$ are found to be (see appendix \ref{appendix: Mod_5_and_7} for definitions of modular forms belonging to this group) \begin{align} \mathcal{D}_{2}E_{2}^{(5^{+})}(\tau) =& -\frac{1}{4}E_{4}^{(5^{+})}(\tau) + \frac{4}{13}\Delta_{5}(\tau),\nonumber\\ q\frac{d}{dq} E_{2}^{(5^{+})}(\tau) =&\frac{1}{4}\left(\left(E_{2}^{(5^{+})}\right)^{2} - E_{4}^{(5^{+})} + \frac{16}{13}\Delta_{5}\right)(\tau),\nonumber\\\\ \mathcal{D}_{4}E_{4}^{(5^{+})}(\tau) =& -E_{6}^{(5^{+})}(\tau),\nonumber\\ q\frac{d}{dq} E_{4}^{(5^{+})}(\tau) =& \left(E_{2}^{(5^{+})}E_{4}^{(5^{+})} - E_{6}^{(5^{+})}\right)(\tau),\nonumber\\\\ \mathcal{D}_{6}E_{6}^{(5^{+})}(\tau) =& -\frac{3}{2}\left(E_{4}^{(5^{+})}\right)^{2}(\tau) + \frac{464}{13}\left(E_{4}^{(5^{+})}\Delta_{5}\right)(\tau) + \frac{20000}{169}\Delta_{5}^{2}(\tau),\nonumber\\ q\frac{d}{dq} E_{6}^{(5^{+})}(\tau) =& \frac{3}{2}\left(E_{2}^{(5^{+})}E_{6}^{(5^{+})} - \left(E_{4}^{(5^{+})}\right)^{2} + \frac{928}{39}E_{4}^{(5^{+})}\Delta_{5} + \frac{40000}{507}\Delta_{5}^{2} \right)(\tau),\nonumber\\\\ \mathcal{D}_{8}E_{8}^{(5^{+})}(\tau) =& -2\left(E_{4}^{(5^{+})}E_{6}^{(5^{+})}\right)(\tau) + \frac{72000}{4069}\left(E_{6}^{(5^{+})}\Delta_{5}\right)(\tau),\nonumber\\ q\frac{d}{dq} E_{8}^{(5^{+})}(\tau) =& 2\left(E_{2}^{(5^{+})}E_{8}^{(5^{+})} - E_{4}^{(5^{+})}E_{6}^{(5^{+})} + \frac{36000}{4069}E_{6}^{(5^{+})}\Delta_{5}\right)(\tau),\nonumber\\\\ \mathcal{D}_{10}E_{10}^{(5^{+})}(\tau) =& -\frac{5}{2}\left(E_{4}^{(5^{+})}\right)^{3}(\tau) + \frac{537488}{6773}\left(E_{4}^{(5^{+})}\right)^{2}(\tau)\Delta_{5}(\tau) + \frac{14556000}{88049}E_{4}^{(5^{+})}(\tau)\Delta_{5}^{2}(\tau)\nonumber\\ &{}- \frac{307368000}{1144637}\Delta_{5}^{3}(\tau),\nonumber\\ q\frac{d}{dq} E_{10}^{(5^{+})}(\tau) =& \frac{5}{2}\left(E_{2}^{(5^{+})}E_{10}^{(5^{+})} - \left(E_{4}^{(5^{+})}\right)^{3} + \frac{1074976}{33865}\left(E_{4}^{(5^{+})}\right)^{2}\Delta_{5}\right.\nonumber\\ &{}\ \ \
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \left.+ \frac{5822400}{88049}E_{4}^{(5^{+})}\Delta_{5}^{2} - \frac{122947200}{1144637}\Delta_{5}^{3}\right)(\tau),\nonumber\\\\ \mathcal{D}_{12}E_{12}^{(5^{+})}(\tau) =& -3\left(E_{4}^{(5^{+})}\right)^{2}E_{6}^{(5^{+})} + \frac{298944000}{5398783}\left(E_{4}^{(5^{+})}E_{6}^{(5^{+})}\Delta_{5}\right)(\tau)\nonumber\\ &+ \frac{646164000}{70184179}\left(E_{6}^{(5^{+})}\Delta_{5}^{2}\right)(\tau),\\ q\frac{d}{dq} E_{12}^{(5^{+})}(\tau) =& 3\left(E_{2}^{(5^{+})} E_{12}^{(5^{+})} - \left( E_{4}^{(5^{+})}\right)^{2} E_{6}^{(5^{+})} + \frac{99648000}{5398783}E_{4}^{(5^{+})}E_{6}^{(5^{+})}\Delta_{5}\right.\nonumber\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \left.+ \frac{2153088000}{70184179}E_{6}^{(5^{+})}\Delta_{5}^{2}\right)(\tau). \end{align} The behaviour of the Hauptmodul $j_{5^{+}}(\tau)$ near points $\tau = \rho_{5,1}, \rho_{5,2}, \tfrac{i}{\sqrt{5}}, i\infty$ reads \begin{align} \begin{split} j_{5^{+}}(\rho_{5,1})\to&\ 22 + 10\sqrt{5},\\ j_{5^{+}}(\rho_{5,2})\to&\ 44,\\ j_{5^{+}}\left(\frac{i}{\sqrt{5}}\right)\to&\ 22 + 10\sqrt{5},\\ j_{5^{+}}(i\infty)\to&\ \infty. \end{split} \end{align} We find the derivative of the Hauptmodul to be \begin{align}\label{derivative_1} \begin{split} q\frac{d}{dq}j_{5^{+}}(\tau) =& -\frac{E_{6}^{(5^{+})}}{E_{4}^{(5^{+})}}\left(j_{5^{+}} - \frac{36}{13}\right)(\tau)\\ =& -j_{5^{+}}\left(\frac{E_{6}^{(5^{+})}}{E_{2,5^{'}}^{2}}\right)(\tau). 
\end{split} \end{align} It is easy to show that the re-parameterized derivatives take the following form \begin{align} \begin{split} \theta_{q} =& -\left(\frac{E_{6}^{(5^{+})}}{E_{2,5^{'}}^{2}}\right)\theta_{j_{5^{+}}},\\ \mathcal{D}^{2} =& \left(\theta_{q} - \frac{1}{2}E_{2}^{(5^{+})}\right)\theta_{q}\\ =& \left(\frac{E_{6}^{(5^{+})}}{E_{2,5^{'}}^{2}}\right)^{2}\theta_{j_{5^{+}}}^{2} + \left(\frac{3}{2}E_{2,5^{'}}^{2} -8\frac{\Delta_{2}^{2}}{E_{2,5^{'}}^{2}} - 44\Delta_{5} - \frac{\left(E_{6}^{(5^{+})}\right)^{2}}{E_{2,5^{'}}^{4}}\right)\theta_{j_{5^{+}}}, \end{split} \end{align} where we used $q\tfrac{d}{dq}\Delta_{5}(\tau) = \left(\Delta_{5}E_{2}^{(5^{+})}\right)(\tau)$. We find that at this level, it is easier to re-parameterize the MLDE in terms of the Hauptmodul rather than in terms of $K_{5^{+}}(\tau)$ since the Eisenstein series is related to this inverse Hauptmodul via Heun's functions, as opposed to the straightforward relation we could derive earlier by mimicking the $\text{SL}(2, \mathbb{Z})$ construction. This is explored further in appendix \ref{appendix:B}. From \cite{Sakai2014TheAO}, we have the following expressions for the Eisenstein series and the cusp form in terms of Heun's function, \begin{align} \begin{split} \Delta_{5}(\tau) =& \frac{1}{j_{5^{+}}}H\ell_{5}^{4}(\tau),\\ E_{2,5^{'}}^{2}(\tau) =& H\ell_{5}^{4}(\tau),\\ E_{6}^{(5^{+})}(\tau) =& \sqrt{1 - \frac{44}{j_{5^{+}}} - \frac{16}{j_{5^{+}}^{2}}}\ H\ell_{5}^{6}(\tau),\\ H\ell_{5}(\tau) \equiv& H\ell\left(\frac{11 + 5\sqrt{5}}{11 - 5\sqrt{5}}, -\frac{3(11 + 5\sqrt{5})}{8};\frac{1}{4},\frac{3}{4},1,\frac{1}{2};K_{5^{+}}(\tau)\right).
\end{split} \end{align} From this, we find the following useful relations \begin{align} \begin{split} \left(\frac{\Delta_{5}}{E_{2,5^{'}}^{2}}\right)(\tau) =& \mathfrak{a} = \frac{1}{j_{5^{+}}(\tau)},\\ \left(\frac{\left(E_{6}^{(5^{+})}\right)^{2}}{\Delta_{5}^{3}}\right)(\tau) =& \mathfrak{b} = \left(j_{5^{+}}^{3} - 44j_{5^{+}}^{2} - 16j_{5^{+}}\right)(\tau),\\ \left(\frac{\left(E_{6}^{(5^{+})}\right)^{2}}{E^{6}_{2,5^{'}}}\right)(\tau) =& \mathfrak{c} = \left(1 - \frac{44}{j_{5^{+}}} - \frac{16}{j_{5^{+}}^{2}}\right)(\tau). \end{split} \end{align} Now, since $\text{dim}\ \mathcal{M}_{4}(\Gamma_{0}^{+}(5)) = 2$, we choose $\omega_{4}(\tau) = \mu_{1}E_{2,5^{'}}^{2}(\tau) + \mu_{2}\Delta_{5}(\tau)$ with which the re-parameterized MLDE reads \begin{align} \begin{split} \left[\theta_{j_{5^{+}}}^{2} + \left(\frac{3}{2}\frac{1}{\mathfrak{c}} - 8\frac{\mathfrak{a}^{2}}{\mathfrak{c}} - 44\frac{\mathfrak{a}}{\mathfrak{c}} - 1\right)\theta_{j_{5^{+}}} + \frac{\mu_{1}}{\mathfrak{c}} + \frac{\mu_{2}\mathfrak{a}}{\mathfrak{c}}\right]f(\tau) = 0. \end{split} \end{align} This can be simplified to read \begin{align} \left[\theta_{j_{5^{+}}}^{2} + \left(\frac{j_{5^{+}}^{2} + 16}{2(j_{5^{+}}^{2} - 44j_{5^{+}} - 16)}\right)\theta_{j_{5^{+}}} + \frac{j_{5^{+}}(j_{5^{+}}\mu_{1} + \mu_{2})}{(j_{5^{+}}^{2} - 44j_{5^{+}} - 16)}\right]f(\tau) = 0. 
\end{align} The solution to this ODE is expressed in terms of Heun's function as shown below \begin{align} \begin{split} f(\tau) =& c_{1}\ H\ell\left[-\frac{\left(2j_{5^{+}}(\rho_{5,1}) + 4\right)}{4},\frac{\mu_{2}j_{5^{+}}(\rho_{5,1})}{16};\frac{\alpha_{-}}{4}, \frac{\alpha_{+}}{4},\frac{1}{2},\frac{1}{2};-\frac{j_{5^{+}}(\rho_{5,1})j_{5^{+}}(\tau)}{16}\right]\\ \\+& c_{2}\ H\ell\left[-\frac{\left(2j_{5^{+}}(\rho_{5,1}) + 4\right)}{4},\frac{(\mu_{2} - 11)j_{5^{+}}(\rho_{5,1})}{16};\frac{\beta_{+}}{4}, \frac{\beta_{-}}{4},\frac{3}{2},\frac{1}{2};-\frac{j_{5^{+}}(\rho_{5,1})j_{5^{+}}(\tau)}{16}\right]k(j_{5^{+}}), \end{split} \end{align} where $\alpha_{\pm}\equiv \tfrac{1 \pm \sqrt{1-16\mu_{1}}}{4}$, $\beta_{\pm}\equiv \tfrac{3 \pm \sqrt{1-16\mu_{1}}}{4}$, and $k(j_{5^{+}}) \equiv \left(\tfrac{j_{5^{+}}(\tau)}{j_{5^{+}}(\rho_{5,2}) - j_{5^{+}}(\rho_{5,1})}\right)^{\frac{1}{2}}$. \subsection{Level 7} \noindent The Ramanujan identities for $\Gamma_{0}^{+}(7)$ are found to be (see appendix \ref{appendix: Mod_5_and_7} for definitions of modular forms belonging to this group) \begin{align} \mathcal{D}_{2}E_{2}^{(7^{+})}(\tau) =& -\frac{1}{3}E_{4}^{(7^{+})}(\tau) + \frac{3}{5}\Delta_{7^{+},4}(\tau),\nonumber\\ q\frac{d}{dq}E_{2}^{(7^{+})}(\tau) =& \frac{1}{3}\left(\left(E_{2}^{(7^{+})}\right)^{2} - E_{4}^{(7^{+})} + \frac{9}{5}\Delta_{7^{+},4}\right)(\tau),\nonumber\\\\ \mathcal{D}_{4}E_{4}^{(7^{+})}(\tau) =& -\frac{4}{3}E_{6}^{(7^{+})}(\tau) + \frac{96}{215}\Delta_{7^{+},6}(\tau),\nonumber\\ q\frac{d}{dq}E_{4}^{(7^{+})}(\tau) =& \frac{4}{3}\left(E_{2}^{(7^{+})}E_{4}^{(7^{+})} - E_{6}^{(7^{+})} + \frac{72}{215}\Delta_{7^{+},6}\right)(\tau),\nonumber\\\\ \mathcal{D}_{6}E_{6}^{(7^{+})}(\tau) =& -2\left(E_{4}^{(7^{+})}\right)^{2}(\tau) + \frac{5733}{215}\left(E_{4}^{(7^{+})}\Delta_{7^{+},4}\right)(\tau) + \frac{136269}{1075}\Delta_{7^{+},4}^{2}(\tau),\nonumber\\ q\frac{d}{dq}E_{6}^{(7^{+})}(\tau) =& 2\left(E_{2}^{(7^{+})}E_{6}^{(7^{+})} -
\left(E_{4}^{(7^{+})}\right)^{2} + \frac{5733}{430}E_{4}^{(7^{+})}\Delta_{7^{+},4} + \frac{136269}{2150}\Delta_{7^{+},4}^{2}\right)(\tau),\nonumber\\\\ \mathcal{D}_{8}E_{8}^{(7^{+})}(\tau) =& -\frac{8}{3}\left(E_{4}^{(7^{+})}E_{6}^{(7^{+})}\right)(\tau) + \frac{4276032}{258215}\left(E_{4}^{(7^{+})}\Delta_{7^{+},6}\right)(\tau) + \frac{1862784}{30025}\Delta_{7^{+},10}(\tau),\nonumber\\ q\frac{d}{dq}E_{8}^{(7^{+})}(\tau) =& \frac{8}{3}\left(E_{2}^{(7^{+})}E_{8}^{(7^{+})} - E_{4}^{(7^{+})}E_{6}^{(7^{+})} + \frac{1603512}{258215}E_{4}^{(7^{+})}\Delta_{7^{+},6} + \frac{698544}{30025}\Delta_{7^{+},10}\right)(\tau),\nonumber\\\\ \mathcal{D}_{10}E_{10}^{(7^{+})}(\tau) =& -\frac{10}{3}\left(\frac{E_{4}^{(7^{+})}}{\Delta_{7^{+},4}}\right)^{4}(\tau)\Delta_{7}^{4}(\tau) + \frac{27143}{573}\left(\frac{E_{4}^{(7^{+})}}{\Delta_{7^{+},4}}\right)^{3}(\tau)\Delta_{7}^{4}(\tau)\nonumber\\ & + \frac{372057}{955}\left(\frac{E_{4}^{(7^{+})}}{\Delta_{7^{+},4}}\right)^{2}(\tau)\Delta_{7}^{4}(\tau) +\frac{310464}{955}\left(\frac{E_{4}^{(7^{+})}}{\Delta_{7^{+},4}}\right)(\tau)\Delta_{7}^{4}(\tau) + \frac{25239312}{23875}\Delta_{7}^{4}(\tau),\nonumber\\ q\frac{d}{dq}E_{10}^{(7^{+})}(\tau) =& \frac{10}{3}\left(E_{2}^{(7^{+})}E_{10}^{(7^{+})} - \left(\frac{E_{4}^{(7^{+})}}{\Delta_{7^{+},4}}\right)^{4}\Delta_{7}^{4} + \frac{27143}{1910}\left(\frac{E_{4}^{(7^{+})}}{\Delta_{7^{+},4}}\right)^{3}\Delta_{7}^{4}\right.\nonumber\\ &{}\ \ \ \ \ \ \ \ \ \left.
+ \frac{1116171}{9550}\left(\frac{E_{4}^{(7^{+})}}{\Delta_{7^{+},4}}\right)^{2}\Delta_{7}^{4} + \frac{465696}{4775}\left(\frac{E_{4}^{(7^{+})}}{\Delta_{7^{+},4}}\right)\Delta_{7}^{4} + \frac{37858968}{119375}\Delta_{7}^{4} \right)(\tau),\nonumber\\\\ \mathcal{D}_{12}E_{12}^{(7^{+})}(\tau) =& 4\left(\frac{E_{4}^{(7^{+})}}{\Delta_{7^{+},4}}\right)^{\tfrac{2}{3}}(\tau)\Delta_{7^{+},6}(\tau)\Delta_{7}^{\tfrac{8}{3}}(\tau)\left(-\left(\frac{E_{4}^{(7^{+})}}{\Delta_{7^{+},4}}\right)^{3}(\tau) + a_{1}\left(\frac{E_{4}^{(7^{+})}}{\Delta_{7^{+},4}}\right)^{2}\right.\nonumber\\ &\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \left.+ a_{2}\left(\frac{E_{4}^{(7^{+})}}{\Delta_{7^{+},4}}\right)(\tau) + a_{3}\right), \end{align} where the coefficients $a_{i}$ are rational numbers that can be fixed by comparing the $q$-series expansions. The behaviour of the Hauptmodul $j_{7^{+}}(\tau)$ near points $\tau = \rho_{7,1}, \rho_{7,2}, \tfrac{i}{\sqrt{7}}, i\infty$ reads \begin{align} \begin{split} j_{7^{+}}(\rho_{7,1})\to&\ 0,\\ j_{7^{+}}(\rho_{7,2})\to&\ 0,\\ j_{7^{+}}\left(\frac{i}{\sqrt{7}}\right)\to&\ 27,\\ j_{7^{+}}(i\infty)\to&\ \infty. \end{split} \end{align} We make the following definitions \begin{align} \begin{split} \widetilde{E_{4,7}}(\tau) \equiv& E_{2,7^{'}}^{2}(\tau),\ \ \ \widetilde{E_{6,7}} (\tau)\equiv \left(\sqrt{E_{2,7^{'}}^{3}}\frac{\Delta_{7^{+},6}}{\Delta_{7}}\right)(\tau),\\ K_{7^{+}}(\tau) \equiv& \frac{27}{j_{7^{+}}(\tau)},\ \ \ A_{7^{+}}(\tau) \equiv \left(\frac{\widetilde{E_{6,7}}}{\widetilde{E_{4,7}}}\right)(\tau) = \left(\frac{\Delta_{7^{+},10}}{E_{2,7^{'}}\Delta_{7}^{2}}\right)(\tau).
\end{split} \end{align} Using identities found in appendix \ref{appendix:modular_forms}, the derivatives of the Hauptmodul, of $K_{7^{+}}(\tau)\equiv \tfrac{27}{j_{7^{+}}(\tau)}$, and of $A_{7^{+}}(\tau)$ read \begin{align} \begin{split} q\frac{d}{dq}j_{7^{+}}(\tau) =& -j_{7^{+}}\left(\frac{\Delta_{7^{+},10}}{\Delta^{2}_{7}E_{2,7^{'}}}\right)(\tau),\\ q\frac{d}{dq}K_{7^{+}}(\tau) =& A_{7^{+}}(\tau)K_{7^{+}}(\tau),\\ q\frac{d}{dq}A_{7^{+}}(\tau) =& A_{7^{+}}\left(q\frac{d}{dq}\log E_{4,7^{'}}-\frac{2}{3}E_{2}^{(7^{+})} -\frac{4}{3} \frac{E_{4,7^{'}}}{E_{2,7^{'}}} \right)(\tau). \end{split} \end{align} Using this, it is straightforward to show that the re-parameterized derivatives read \begin{align} \begin{split} \theta_{q} =& A_{7^{+}}\theta_{K_{7^{+}}},\\ \mathcal{D}^{2} =& \left(\theta_{q} - \frac{2}{3}E_{2}^{(7^{+})}\right)\theta_{q}\\ =& A_{7^{+}}^{2}\theta_{K_{7^{+}}}^{2} + A_{7^{+}}\left(q\frac{d}{dq}\log E_{4,7^{'}} - \frac{4}{3}\frac{E_{4,7^{'}}}{E_{2,7^{'}}}\right)\theta_{K_{7^{+}}}. \end{split} \end{align} We do not present the simplification of the re-parameterized MLDE here but discuss a possible method to find the same in appendix \ref{appendix:level_7+_simplifications}. \subsection{Levels 11 and 13} \noindent We reserve the derivation of the re-parameterized MLDEs for the groups $\Gamma_{0}^{+}(11)$ and $\Gamma_{0}^{+}(13)$ for future work and report the Ramanujan identities at these levels, which have not been explored in the literature previously. We note here that the fact that the space of weight $4$ modular forms at level $13$ is three-dimensional complicates the derivation of the re-parameterized MLDE, since we had previously found a two-dimensional basis decomposition in \cite{Umasankar:2022kzs}. This suggests that there is something more complicated going on at level $13$, and a detailed analysis of the space of modular forms is required before we set up the MLDE.
This issue persists at level $11$ since the space of weight $4$ modular forms again turns out to be three-dimensional, but the basis decomposition of this space is well-understood \cite{Junichi}, and hence, setting up the MLDE wouldn't be a cumbersome task. The first few Ramanujan identities for $\Gamma_{0}^{+}(11)$ were found to be (see \cite{Umasankar:2022kzs} for basis decomposition and definitions) \begin{align} \begin{split} q\frac{d}{dq}E_{2}^{(11^{+})}(\tau) =& \frac{1}{2}\left(\left(E_{2}^{(11^{+})}\right)^{2} - E_{2,11^{'}}^{2} + \frac{24}{5}E_{2,11^{'}}\Delta_{11} + \frac{56}{25}\Delta_{11}^{2}\right)(\tau),\\ q\frac{d}{dq}E_{4}^{(11^{+})}(\tau) =& 2\left(E_{2}^{(11^{+})}E_{4}^{(11^{+})} - E_{4,11^{'}}E_{2,11^{'}} + \frac{432}{305}E_{4,11^{'}}\Delta_{11}\right)(\tau),\\ q\frac{d}{dq}E_{6}^{(11^{+})}(\tau) =& 3\left(E_{2}^{(11^{+})}E_{6}^{(11^{+})} - E_{2,11^{'}}^{4} + \frac{6578}{555}E_{2,11^{'}}^{3}\Delta_{11} + \frac{30896}{2775}E_{2,11^{'}}^{2}\Delta_{11}^{2}\right.\\ &{}\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \left.- \frac{24906}{4625}E_{2,11^{'}}\Delta_{11}^{3} + \frac{37504}{69375}\Delta_{11}^{4}\right)(\tau). \end{split} \end{align} The modular forms $E_{k,11^{'}}(\tau)$ can be related to the Eisenstein series in $\Gamma_{0}^{+}(11)$ to obtain a simplified expression. Although it is difficult to obtain identities for the Eisenstein series of weights $4$ and $6$ in level $13$, the identity corresponding to $E_{2}^{(13^{+})}$ is easy to construct. This reads (see \cite{Umasankar:2022kzs} for basis decomposition and definitions) \begin{align} q\frac{d}{dq}E_{2}^{(13^{+})}(\tau) =\frac{7}{12}\left(\left(E_{2}^{(13^{+})}\right)^{2} - E_{4}^{(13^{+})} + \frac{36}{49}\left(E_{2,13^{'}}^{2} - E_{4}^{(13^{+})}\right)\right)(\tau). 
\end{align} \subsection{Three-character re-parameterization} \noindent The $(n,\ell) = (3,0)$ MLDE for $\Gamma_{0}^{+}(2)$ takes the following form \begin{align} \left[\mathcal{D}^{3} + \mu_{1}E_{4}^{(2^{+})}(\tau)\mathcal{D} + \mu_{2}E_{6}^{(2^{+})}(\tau)\right]f(\tau) = 0. \end{align} The third-order covariant derivative is found to be \begin{align} \begin{split} \mathcal{D}^{3} =& \mathcal{D}_{(4)}\circ\mathcal{D}_{(2)}\circ\mathcal{D}_{(0)} = \mathcal{D}_{4}\circ\mathcal{D}^{2}\\ =& \left(\theta_{q} - \frac{1}{2}E_{2}^{(2^{+})}\right)\left(\theta_{q} - \frac{1}{4}E_{2}^{(2^{+})}\right)\theta_{q}\\ =& A_{2^{+}}^{3}\theta_{K_{2^{+}}}^{3} - A_{2^{+}}^{3}\frac{3(1 + K_{2^{+}})}{4(1-K_{2^{+}})}\theta_{K_{2^{+}}}^{2} + \frac{1}{8}A_{2^{+}}^{3}\theta_{K_{2^{+}}}. \end{split} \end{align} With this and \ref{derivatives_repara_2+}, we find the third-order re-parameterized MLDE for $\Gamma_{0}^{+}(2)$ to be \begin{align} \left[\theta_{K_{2^{+}}}^{3} - \frac{3(1 + K_{2^{+}})}{4(1-K_{2^{+}})}\theta_{K_{2^{+}}}^{2} + \frac{8\mu_{1} + 1 - K_{2^{+}}}{8(1-K_{2^{+}})}\theta_{K_{2^{+}}} + \frac{\mu_{2}}{1-K_{2^{+}}}\right]f(K_{2^{+}}) = 0. \end{align} The solution to this ODE, which we do not write out here, is given by ${}_{3}F_{2}$ hypergeometric functions.\\ \noindent The $(n,\ell) = (3,0)$ $\Gamma_{0}^{+}(3)$ MLDE takes the following form \begin{align} \left[\mathcal{D}^{3} + \mu_{1}E_{4}^{(3^{+})}(\tau)\mathcal{D} + \mu_{2}E_{6}^{(3^{+})}(\tau)\right]f(\tau) = 0.
\end{align} The third-order covariant derivative is found to be \begin{align} \begin{split} \mathcal{D}^{3} =& \mathcal{D}_{(4)}\circ\mathcal{D}_{(2)}\circ\mathcal{D}_{(0)} = \mathcal{D}_{4}\circ\mathcal{D}^{2}\\ =& \left(\theta_{q} - \frac{2}{3}E_{2}^{(3^{+})}\right)\left(\theta_{q} - \frac{1}{3}E_{2}^{(3^{+})}\right)\theta_{q}\\ =& A_{3^{+}}^{3}\theta_{K_{3^{+}}}^{3} - A_{3^{+}}^{3}\frac{2 + K_{3^{+}}}{2(1-K_{3^{+}})}\theta_{K_{3^{+}}}^{2} -A_{3^{+}}^{3}\frac{K_{3^{+}} - 4}{18(1-K_{3^{+}})}\theta_{K_{3^{+}}}. \end{split} \end{align} With this, the third-order re-parameterized MLDE for $\Gamma_{0}^{+}(3)$ reads \begin{align} \left[\theta_{K_{3^{+}}}^{3} - \frac{2 + K_{3^{+}}}{2(1-K_{3^{+}})}\theta_{K_{3^{+}}}^{2} + \frac{18\mu_{1} + 4 - K_{3^{+}}}{18(1-K_{3^{+}})}\theta_{K_{3^{+}}} + \frac{\mu_{2}}{1-K_{3^{+}}}\right]f(K_{3^{+}}) = 0, \end{align} whose solution is again given by ${}_{3}F_{2}$ hypergeometric functions. \section{Introduction} \noindent A 2D rational conformal field theory, RCFT, has a finite set of holomorphic characters, denoted by $\chi_i(\tau)$, and a partition function of the form: \begin{equation}\label{eq1} Z(\tau,\Bar{\tau})=\sum_{i,j=0}^{n-1}M_{ij}\,\bar{\chi}_i(\Bar{\tau})\chi_j(\tau) = |\chi_0|^2 + \sum_{i=1}^{n-1}Y_i|\chi_i|^2 \end{equation} where $n$ denotes the number of linearly independent characters. These characters are vector-valued modular forms (vvmfs) as we explain below, and we refer to $n$ as the \textit{dimension} of the vvmf. When $n=1$, there is only the identity character. In this case, we will refer to the resulting theory as a meromorphic CFT\footnote{In a meromorphic CFT, the partition function can be modular invariant up to a phase.}.
\\ \\ An RCFT classification program, called the Mathur-Mukhi-Sen (MMS) program, initiated in \cite{Mathur:1988rx, Mathur:1988na, Mathur:1988gt}, has been pursued by both mathematicians and physicists in recent times \cite{Naculich:1988xv, Kiritsis:1988kq, Bantay:2005vk, Mason:2007, Mason:2008, Bantay:2007zz, Tuite:2008pt, Bantay:2010uy, Marks:2011, Gannon:2013jua, Kawasetsu:2014, Hampapura:2015cea, Franc:2016, Gaberdiel:2016zke, Hampapura:2016mmz, Arike:2016ana, Tener:2016lcn, Mason:2018, Harvey:2018rdc, Chandra:2018pjq, Chandra:2018ezv, Bae:2018qfh, Bae:2018qym, franc2020classification, Bae:2020xzl, Mukhi:2020gnj, Kaidi:2020ecu, Das:2020wsi, Das:2021uvd, Kaidi:2021ent, Bae:2021mej, Duan:2022ltz}. This classification is based on the fact that characters are vvmfs of weight 0 and that they solve modular linear differential equations. Such equations have finitely many parameters and these can be varied to scan for solutions that satisfy the basic criteria to be those of an RCFT. These criteria correspond to the fact that each character is holomorphic in $q=e^{2\pi i\tau}$ except as $q\to 0$, and has an expansion of the form: \begin{equation} \chi_i(\tau)=q^{\alpha_i}\left(a_{0,i}+a_{1,i} \,q+a_{2,i}\,q^2+\cdots\right) \label{charexp} \end{equation} If the vvmf corresponds to a genuine RCFT, then these exponents can be identified with the central charge and (chiral) conformal dimensions via: \begin{equation} \alpha_i=-\frac{c}{24}+h_i \end{equation} with $h_0=0$ corresponding to the identity character of the RCFT. The coefficients $a_{r,i}$, $r\ge 1$, should be non-negative integers for some choice of a positive integer $a_{0,i}$ that provides the overall normalization of each character. We must choose $a_{0,0}=1$ (non-degeneracy of the vacuum or uniqueness of the ground state), while for $i\ne 0$ we take $a_{0,i}$ to be the minimum integer such that the $q$-series for the corresponding character has integral coefficients.
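The normalization and positivity conditions above are mechanical to test on a truncated $q$-expansion. The sketch below is an illustrative helper (the function names and the sample data are hypothetical, not taken from any particular MLDE): it finds the minimal integral normalization $a_{0,i}$ of a candidate character and checks admissibility of its first few coefficients.

```python
from fractions import Fraction
from math import lcm

def minimal_normalization(coeffs):
    """Smallest positive integer a0 such that a0 * coeffs has integer entries."""
    return lcm(*(Fraction(c).denominator for c in coeffs))

def is_admissible(coeffs, identity=False):
    """Admissibility test on one character's truncated q-series.

    After the minimal integral normalization, every coefficient must be a
    non-negative integer (integrality holds by construction of a0); the
    identity character must additionally have a_{0,0} = 1 (non-degeneracy
    of the vacuum).
    """
    a0 = minimal_normalization(coeffs)
    scaled = [a0 * Fraction(c) for c in coeffs]
    if identity and scaled[0] != 1:
        return False
    return all(c >= 0 for c in scaled)

# Illustrative (synthetic) data:
assert minimal_normalization([1, Fraction(3, 2), Fraction(9, 4)]) == 4
assert is_admissible([1, 3, 9, 27], identity=True)
assert not is_admissible([1, Fraction(-1, 2), 2])
```

Scanning an MLDE then amounts to running such a filter over the $q$-series produced at each point of the parameter scan; note that passing the filter yields only \textit{admissible} characters, not necessarily a genuine RCFT.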
In \cite{Chandra:2018pjq} character sets with the above properties were called \textit{admissible}. It is not, in general, the case that admissible characters correspond to an RCFT\footnote{For the $(1,6)$ MLDE, there exists an infinite set of admissible solutions labelled by an integer parameter $\mathcal{N}\geq -744$. The admissible solutions are of the form $j+\mathcal{N}$, where $j$ is the Klein $j$-invariant. However, of this infinite set, only $71$ values of $\mathcal{N}$ correspond to genuine RCFTs (see \cite{Schellekens:1992db}).}. We also define $m_i:=a_{i,0}$ and note that for a CFT, $m_1$ corresponds to the number of spin-1 generators (Kac-Moody currents) in the chiral algebra. Also for $i\ne 0$, we write $D_i=a_{0,i}$. In a CFT these are the ground-state degeneracies of the various modules other than the identity. \\ \\ In \cite{Umasankar:2022kzs}, the study of MLDEs was extended to some congruence subgroups of $\text{SL}(2,\mathbb{Z})$ where, in particular, Fricke and Hecke subgroups were studied since their theory of modular forms is well understood. Riemann-Roch relations were derived for Fricke groups of prime divisor levels of the Monster group, following which admissible single-character solutions were reported. This paper extends that work by presenting a more exhaustive analysis of the single-character solutions, and reports on admissible two- and three-character solutions to MLDEs in $\Gamma_{0}^{+}(2)$ and $\Gamma_{0}^{+}(3)$.\\ \\ The main results of the paper are as follows: \begin{itemize} \item Detailed setup of Serre-Ramanujan covariant derivatives and MLDEs for Fricke and Hecke groups. \item Ramanujan-Eisenstein identities for Fricke groups $\Gamma_{0}^{+}(p)$, for $p = 2,3,5,7,11$. \item Modular re-parameterization schemes for Fricke groups for two- and three-character MLDEs.
\item Admissible two-character MLDE solutions in the Fricke groups $\Gamma_{0}^{+}(p)$ for $p = 2,3$ up to the bulk point. \item Admissible three-character solutions to the $(3,0)$ MLDE in the Fricke groups $\Gamma_{0}^{+}(p)$ for $p = 2,3$. \item Identification of many bilinear identities among the two-character solutions. \item Construction of \textit{putative} partition functions for many two-character solutions. \item Identification of certain Fricke two-character solutions with sequences of the conjugacy classes of the Monster group and as closed-form expressions in terms of modular forms of the corresponding Hecke group. \item Generalization of the relation between the $\Theta$-series of the odd Leech lattice and $\Theta_{2^{+}}$, the series obtained via a generalized single-character solution. \end{itemize} \noindent \textbf{Outline}\\ \noindent In the next section, we provide a quick introduction to modular forms for physicists, comprising basic definitions and motivations for certain foundational concepts. This section also includes an overview of the necessary definitions of the Fricke groups of levels $p = 2,3$ required for reading the paper. In section \ref{sec:Two-character_Hecke_Fricke}, we provide a general prescription to construct Serre-Ramanujan derivatives for Fricke and Hecke groups, following which we present the re-parameterized MLDEs at the two-character level for Fricke groups of levels $p = 2,3,5,7$ and at the three-character level for levels $p = 2,3$. In sections \ref{sec:Gamma_0_2+} and \ref{sec:Gamma_0_3+}, we present an exhaustive single-character analysis and report results on admissible two- and three-character solutions for $\Gamma_{0}^{+}(2)$ and $\Gamma_{0}^{+}(3)$ respectively. In section \ref{sec:Lattice}, we generalize the lattice relation found in $\Gamma_{0}^{+}(2)$ and present some interesting lattice theta function realizations of modular forms of Fricke and Hecke groups.
Section \ref{sec:Discussion} expands on some interesting results found in the two- and three-character analysis. In section \ref{sec:Future_work}, we finish by providing a detailed list of interesting questions worth tackling in future work. All fundamental domains were obtained using the Mathematica program in \cite{kainberger}. Several appendices complement the theory and results of the main sections. \section{Lattice relations}\label{sec:Lattice} \noindent The $\Theta$-function of a $d$-dimensional lattice $\Lambda$ is a weight $\tfrac{d}{2}$ modular form defined as follows \begin{align} \Theta_{\Lambda}(\tau) = \sum\limits_{x\in\Lambda}q^{x\cdot x} = \sum\limits_{m}N(m)q^{m}. \end{align} Here, the sum runs over all the vectors $x$ in the lattice $\Lambda$ of norm $m=x\cdot x$, $N(m)$ denotes the number of vectors of norm $m$, and $q \equiv e^{2\pi i\tau}$. The lattice partition function $\mathcal{Z}$ is defined as follows \begin{align}\label{partition_theta_lattice} \mathcal{Z}(\tau) \equiv \frac{\Theta_{\Lambda}(\tau)}{\eta^{d}(\tau)}. \end{align} Given an even self-dual lattice of integer dimension $d$, it is always possible to define a holomorphic modular-covariant CFT of central charge $c = d$ whose partition function is given by \ref{partition_theta_lattice}. The packing radius $\rho$ and the kissing number $\mathscr{K}$ can be read off from the $q$-series expansion of the $\Theta$-function of a lattice $\Lambda$ as follows \begin{align} \Theta_{\Lambda}(\tau) = 1 + \mathscr{K}q^{4\rho^{2}} + \ldots \end{align} Let $\Theta_{2^{+}}(\tau) = 1 + \sum_{m=1}^{\infty}N_{2^{+}}(m)q^{m}$ denote the $\Theta$-series of the lattice corresponding to the single-character solution of central charge $c = 24$ found in $\Gamma_{0}^{+}(2)$, whose partition function is $\mathcal{Z}(\tau) = j_{2^{+}}(\tau)$.
This $\Theta$-series was found to be related to the $\Theta$-series of the odd Leech lattice $O_{24}$ as follows \cite{Umasankar:2022kzs} \begin{align}\label{odd_Leech_Fricke_2} \Theta_{O_{24}}(\tau) = 1 + \sum\limits_{n=1}^{\infty}m^{(23A)}_{n}q^{n} - \Theta_{2^{+}}\left(\frac{\tau}{2}\right), \end{align} where the $m^{(23A)}_{n}$ denote the coefficients obtained from linear combinations of character degrees of the McKay-Thompson series of class $23A$ for $\mathbb{M}$. From \ref{j_104_c=24}, we see that the partition function, $\mathcal{Z}(\tau) =j_{2^{+}}(\tau) + \mathcal{N}$, represents a large set of admissible solutions that includes the single-character solution with $\mathcal{N} = 0$ at $\ell = 4$, the non-trivial bilinear pairs in \ref{bilin_non_0}, \ref{bilin_non_1}, and \ref{bilin_non_2} with $\mathcal{N} = -32, -64, -80$ respectively, and possibly more that we have not yet found in the current analysis. With this partition function, we obtain the following $q$-series expansion of the $\Theta$-function \begin{align}\label{theta_Gamma_0_2+} \eta^{24}(\tau)\left(j_{2^{+}}(\tau) - 80 + \Tilde{a}\right) = 1 + \Tilde{a}q + (4048 - 24\Tilde{a})q^{2} + (-4096 + 252\Tilde{a})q^{3} + \ldots, \end{align} where $\Tilde{a} = \mathcal{N}a$. Now, upon setting $\Tilde{a} = 0$, we find the kissing number and packing radius to be $\mathscr{K} = 4048$ and $\rho = \tfrac{1}{\sqrt{2}}$ respectively.
Thus, a wide set of $c = 24$ single-character solutions and concocted bilinear pairs all possess the same $\Theta$-series that is related to the odd Leech lattice in $d = 24$ via the relation shown in \ref{odd_Leech_Fricke_2}.\\ \noindent The space of modular forms of weight $4$ is one-dimensional and it turns out that the modular form is related to a lattice theta function as follows \begin{align} \begin{split} \omega^{(2^{+})}_{4}(\tau) =& E_{4}^{(2^{+})}(\tau)\\ =& \tfrac{1}{4}\left(\theta_{3}^{4}(\tau) + \theta_{4}^{4}(\tau)\right)^{2}\\ =& \Theta_{\mathbf{D}_{4}\oplus\mathbf{D}_{4}}(\tau). \end{split} \end{align} The Fricke weight $4$ Eisenstein series of level $2$ is the theta function of two copies of the $\mathbf{D}_{4}$ lattice (OEIS sequence A008658 \cite{A008658}). Such lattice relations are not observed for $E_{k}^{(2^{+})}(\tau)$ with $k > 4$. Moving over to $\Gamma_{0}^{+}(3)$, we find the following connections between modular forms and lattice theta functions \begin{align} \begin{split} \omega^{(3^{+})}_{4}(\tau) =&E_{4}^{(3^{+})}(\tau)= \Theta_{4\cdot\mathbf{H}}(\tau),\\ E_{2,3^{'}}(\tau) =& \Theta_{2\cdot\mathbf{H}}(\tau). \end{split} \end{align} The Fricke weight $4$ Eisenstein series of level $3$ is the theta function of four copies of the hexagonal lattice $\mathbf{H}$ (OEIS sequence A008655 \cite{A008655}) and the modular form $E_{2,3^{'}}(\tau)\in\mathcal{M}_{2}(\Gamma_{0}(3))$ is the theta function of two copies of $\mathbf{H}$ (OEIS sequence A008653 \cite{A008653}). Such lattice relations are not observed for $E_{k}^{(p^{+})}(\tau)$ with $k > 4$ and $p = 3,5,7$. However, in $\Gamma_{0}^{+}(7)$, we find that the modular form $E_{2,7^{'}}(\tau)$ (see appendix \ref{appendix: Mod_5_and_7} for the definition) is the theta series of the square of the Kleinian lattice $\mathbb{Z}\left[\tfrac{-1+\sqrt{-7}}{2}\right]$ (OEIS sequence A028594 \cite{A028594}). Similar relations can be found for other modular forms of Hecke and Fricke groups.
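The weight $4$ identity for $\Gamma_{0}^{+}(2)$ above can be checked order by order in $q$. The sketch below is our own numerical verification (not code from the paper): it builds $E_{4}^{(2^{+})}(\tau) = \tfrac{1}{5}\left(4E_{4}(2\tau) + E_{4}(\tau)\right)$ from divisor sums and compares it with $\Theta_{\mathbf{D}_{4}\oplus\mathbf{D}_{4}} = \Theta_{\mathbf{D}_{4}}^{2}$ obtained by brute-force vector counting, assuming the even-lattice convention in which a vector $x$ contributes $q^{x\cdot x/2}$:

```python
from itertools import product

N = 6  # number of q-orders to compare

def sigma3(n):
    return sum(d**3 for d in range(1, n + 1) if n % d == 0)

# E_4(tau) = 1 + 240 sum sigma_3(n) q^n; E_4(2 tau) only has even powers
E4 = [1] + [240 * sigma3(n) for n in range(1, N)]
E4_2t = [E4[n // 2] if n % 2 == 0 else 0 for n in range(N)]

# Fricke Eisenstein series of level 2 and weight 4
E4_fricke = [(4 * E4_2t[n] + E4[n]) // 5 for n in range(N)]

# Theta series of D4 = {x in Z^4 : sum(x) even}, with x counted at q^{x.x/2}
R = 4  # coordinate cutoff, enough for all norms x.x <= 2*(N-1)
theta_D4 = [0] * N
for x in product(range(-R, R + 1), repeat=4):
    if sum(x) % 2 == 0:
        half_norm = sum(c * c for c in x) // 2  # D4 norms are always even
        if half_norm < N:
            theta_D4[half_norm] += 1

# Theta of D4 (+) D4 is the Cauchy square of the D4 series
theta_D4D4 = [sum(theta_D4[k] * theta_D4[n - k] for k in range(n + 1))
              for n in range(N)]

print(E4_fricke)
print(theta_D4D4)
```

Both lines print $1, 48, 624, 1344, 5232, 6048$, confirming the identity to this order.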
\section*{\refname}} \usepackage{color} \newcommand{\qs}[1]{\textcolor{red}{#1}} \newcommand{\qsx}[1]{\textcolor{red}{\sout{#1}}} \setlength{\paperheight}{11.5in} \usepackage[normalem]{ulem} \usepackage{enumerate} \usepackage{amsfonts} \usepackage{yfonts} \usepackage{tcolorbox} \usepackage{subfigure} \usepackage{psfrag} \usepackage[flushleft]{threeparttable} \usepackage{lscape} \definecolor{Mygrey}{gray}{0.8} \definecolor{Mywhite}{gray}{1.0} \usepackage{epsfig} \usepackage[latin1]{inputenc} \usepackage{float} \usepackage{graphicx} \usepackage{cancel} \usepackage{mathrsfs} \usepackage{amssymb} \usepackage{amsfonts} \usepackage{amsmath} \usepackage{slashed} \usepackage{braket} \usepackage{graphicx} \usepackage{bm} \usepackage{bbold} \usepackage{svg} \usepackage[ colorlinks=true, linkcolor=blue, urlcolor=blue, filecolor=black, citecolor=red, pdfstartview=FitV, pdftitle={}, pdfauthor={Arpit Das, Naveen Balaji Umasankar}, pdfsubject={}, pdfkeywords={}, pdfpagemode=UseNone, bookmarksopen=true, hyperfootnotes=false ]{hyperref} \newcommand \mathop{\rm tr} {\mbox{{\bf Tr}}} \newcommand\nnfootnote[1]{% \begin{NoHyper} \renewcommand\thefootnote{}\footnote{#1}% \addtocounter{footnote}{-1}% \end{NoHyper} } \def\partial{\partial} \def{\it e.g. }{{\it e.g. }} \def{\it i.e.}{{\it i.e.}} \def{\it c.f. }{{\it c.f.
}} \def{\it et al.}{{\it et al.}} \def\({\left(} \def\){\right)} \def\[{\left[} \def\]{\right]} \def\<{\langle} \def\>{\rangle} \def{\cal A}{{\cal A}} \def{\cal C}{{\cal C}} \def{\cal D}{{\cal D}} \def{\cal E}{{\cal E}} \def{\cal F}{{\cal F}} \def{\cal G}{{\cal G}} \def{\cal T}{{\cal T}} \def{\cal M}{{\cal M}} \def{\cal N}{{\cal N}} \def{\cal O}{{\cal O}} \def{\cal P}{{\cal P}} \def{\cal L}{{\cal L}} \def{\cal V}{{\cal V}} \def{\cal S}{{\cal S}} \def{\cal W}{{\cal W}} \def\CX{{\cal X}} \def\bbra#1{{\langle\langle}#1|} \def\kket#1{|#1\rangle\rangle} \def\rlap{\hskip0.2em/}D{\rlap{\hskip0.2em/}D} \def\rlap{\hskip0.2em/}{\CD}{\rlap{\hskip0.2em/}{{\cal D}}} \def{\cal L}{{\cal L}} \newcommand{\cit}[1]{ [{\bf\texttt{#1}}]} \newcommand{\begin{bmatrix}}{\begin{bmatrix}} \newcommand{\end{bmatrix}}{\end{bmatrix}} \def\mathop{\rm Tr}{\mathop{\rm Tr}} \def\mathop{\rm tr}{\mathop{\rm tr}} \newcommand{1\over 2}{{\ensuremath{\frac{1}{2}}}} \newcommand\p{\ensuremath{\partial}} \newcommand\evalat[2]{\ensuremath{\left.{#1}\right|_{#2}}} \newcommand\abs[1]{\ensuremath{\left\lvert{#1}\right\rvert}} \newcommand\no[1]{{{:}{#1}{:}}} \newcommand\transpose{{\ensuremath{\text{\sf T}}}} \newcommand\field[1]{{\ensuremath{\mathbb{{#1}}}}} \newcommand\order[1]{{\ensuremath{{\mathcal O}({#1})}}} \newcommand\vev[1]{{\ensuremath{\left\langle{#1}\right\rangle}}} \newcommand\anti[2]{\ensuremath{\bigl\{{#1},{#2}\bigr\}}} \newcommand\com[2]{\ensuremath{\bigl[{#1},{#2}\bigr]}} \newcommand\lie[2]{\ensuremath{\pounds_{{#1}} {#2}}} \newcommand\sfrac[2]{\ensuremath{\frac{#1}{#2}}} \newcommand\lvec[2][]{\ensuremath{\overleftarrow{{#2}_{#1}}}} \newcommand\rvec[2][]{\ensuremath{\overrightarrow{{#2}_{#1}}}} \newcommand\speceq{\ensuremath{\stackrel{\star}{=}}} \newcommand\conj[1]{{\ensuremath{\left({#1}\right)^*}}} \newcommand{\field{A}}{\field{A}} \newcommand{\field{D}}{\field{D}} \newcommand{\field{J}}{\field{J}} \newcommand{\field{G}}{\field{G}} \newcommand{\field{F}}{\field{F}}
\newcommand{\field{Q}}{\field{Q}} \newcommand{\field{H}}{\field{H}} \newcommand{\field{L}}{\field{L}} \newcommand{\field{K}}{\field{K}} \newcommand{\field{N}}{\field{N}} \newcommand{\field{P}}{\field{P}} \newcommand{\field{R}}{\field{R}} \newcommand{\field{T}}{\field{T}} \newcommand{\field{Z}}{\field{Z}} \newcommand{\begin{equation}}{\begin{equation}} \newcommand{\end{equation}}{\end{equation}} \newcommand{\begin{eqnarray}}{\begin{eqnarray}} \newcommand{\end{eqnarray}}{\end{eqnarray}} \newcommand{\begin{widetext}}{\begin{widetext}} \newcommand{\end{widetext}}{\end{widetext}} \newcommand{\nonumber\\}{\nonumber\\} \newcommand{\begin{itemize}}{\begin{itemize}} \newcommand{\end{itemize}}{\end{itemize}} \newcommand{\begin{enumerate}}{\begin{enumerate}} \newcommand{\end{enumerate}}{\end{enumerate}} \newcommand{\begin{cases}}{\begin{cases}} \newcommand{\end{cases}}{\end{cases}} \newcommand{\begin{align}}{\begin{align}} \newcommand{\end{align}}{\end{align}} \newcommand{\begin{split}}{\begin{split}} \newcommand{\end{split}}{\end{split}} \newcommand\al{{\alpha}} \newcommand\ep{\epsilon} \newcommand\sig{\sigma} \newcommand\Sig{\Sigma} \newcommand\lam{\lambda} \newcommand\Lam{\Lambda} \newcommand\om{\omega} \newcommand\Om{\Omega} \newcommand\vt{\vartheta} \newcommand\ga{{\ensuremath{{\gamma}}}} \newcommand\Ga{{\ensuremath{{\Gamma}}}} \newcommand\de{{\ensuremath{{\delta}}}} \newcommand\De{{\ensuremath{{\Delta}}}} \newcommand\vp{\varphi} \newcommand\ze{{\zeta}} \newcommand\da{{\dagger}} \newcommand\nab{{\nabla}} \newcommand\Th{{\Theta}} \def{\theta}{{\theta}} \newcommand\ra{{\rightarrow}} \newcommand\Lra{{\Longrightarrow}} \newcommand\ov{\over} \newcommand\ha{{{1\over 2}}} \newcommand\papr{{2 \pi \apr}} \newcommand\apr{{\ensuremath{{\alpha'}}}} \def\left{\left} \def\right{\right} \newcommand\sA{{\ensuremath{{\mathcal A}}}} \newcommand\sB{{\ensuremath{{\mathcal B}}}} \newcommand\sC{{\ensuremath{{\mathcal C}}}} \newcommand\sD{{\ensuremath{{\mathcal D}}}} 
\newcommand\sF{{\ensuremath{{\mathcal F}}}} \newcommand\sI{{\ensuremath{{\mathcal I}}}} \newcommand\sG{{\ensuremath{{\mathcal G}}}} \newcommand\sH{{\ensuremath{{\mathcal H}}}} \newcommand\sL{{\ensuremath{{\mathcal L}}}} \newcommand\sM{{\ensuremath{{\mathcal M}}}} \newcommand\sN{{\ensuremath{{\mathcal N}}}} \newcommand\sO{{\ensuremath{{\mathcal O}}}} \newcommand\sR{{\ensuremath{{\mathcal R}}}} \newcommand\sV{{\mathcal V}} \newcommand\sW{{\mathcal W}} \newcommand\sJ{{\mathcal J}} \newcommand\sS{{\mathcal S}} \newcommand\sP{{\mathcal P}} \newcommand\bQ{{\bf Q}} \newcommand\bT{{\bf T}} \newcommand\bB{{\bf B}} \newcommand\bC{{\bf C}} \newcommand\bR{{\bf R}} \newcommand\bX{{\bf X}} \newcommand\bY{{\bf Y}} \newcommand\bI{{\bf I}} \newcommand\bv{{\bf v}} \newcommand\bw{{\bf w}} \newcommand\bII{{\bf II}} \newcommand\bth{{\boldsymbol \theta}} \newcommand\bom{{\boldsymbol \omega}} \newcommand\bq{{\bf Q}_B} \newcommand\bfb{{\bf b}_{-1}} \newcommand\bc{{\bar c}} \newcommand\bb{{\bar b}} \newcommand\bpsi{{\bar \psi}} \renewcommand{\Im}{\textrm{Im}\,} \renewcommand{\Re}{\textrm{Re}\,} \newcommand{\ell_2}{\ell_2} \newcommand{\hat \mu_q}{\hat \mu_q} \newcommand{{\bf 1}}{{\bf 1}} \newcommand{{\vec k}}{{\vec k}} \newcommand{{\kappa}}{{\kappa}} \newcommand{SL(2,\RR)}{SL(2,\field{R})} \newcommand{SL(3,\RR)}{SL(3,\field{R})} \newcommand{SL(N,\RR)}{SL(N,\field{R})} \newcommand{\tilde m}{\tilde m} \newcommand{\tilde g}{\tilde g} \newcommand{\tilde a}{\tilde a} \newcommand\uz{{\underline{z}}} \newcommand\utau{{\underline{\tau}}} \newcommand\ut{{\underline{t}}} \newcommand\ur{{\underline{r}}} \newcommand\ui{{\underline{i}}} \newcommand\uj{{\underline{j}}} \newcommand\umu{{\underline{\mu}}} \newcommand\uy{{\underline{y}}} \newcommand{{\Phi}}{{\Phi}} \newcommand{{\chi_*}}{{\chi_*}} \newcommand{{X}}{{X}} \newcommand{\bar{A}}{\bar{A}} \def{\delta g^\tau_{\,\tau}}{{\delta g^\tau_{\,\tau}}} \def{\delta g^\x_{\,\,\x}}{{\delta g^{X}_{\,\,{X}}}} \def{\delta g^\x_{\,\,\,\tau}}{{\delta 
g^{X}_{\,\,\,\tau}}} \def\hat{\rho}{\hat{\rho}} \def\delta \hat{\chi}{\delta \hat{\chi}} \newcommand{\nb}[1]{#1} \newcommand{\bar{a}}{\bar{a}} \newcommand{\bar{\chi}}{\bar{\chi}} \newcommand\uM{{\underline{M}}} \newcommand\sT{{\mathcal{T}}} \newcommand{\NI}[1]{\textcolor{blue}{\textsf{[NBU: #1]}}} \newcommand{\AD}[1]{\textcolor{brown}{\textsf{[AD: #1]}}} \linespread{1.3} \def\oversortoftilde#1{\mathop{\vbox{\m@th\ialign{##\crcr\noalign{\kern3\p@}% \sortoftildefill\crcr\noalign{\kern3\p@\nointerlineskip}% $\hfil\displaystyle{#1}\hfil$\crcr}}}\limits} \def\sortoftildefill{$\m@th \setbox\z@\hbox{$\braceld$}% \braceld\leaders\vrule \@height\ht\z@ \@depth\z@\hfill\braceru$} \usepackage{bbold} \usepackage{multirow} \usepackage[normalem]{ulem} \useunder{\uline}{\ul}{} \usepackage{scalerel,stackengine} \stackMath \newcommand\reallywidehat[1]{% \savestack{\tmpbox}{\stretchto{% \scaleto{% \scalerel*[\widthof{\ensuremath{#1}}]{\kern-.6pt\bigwedge\kern-.6pt}% {\rule[-\textheight/2]{1 ex}{\textheight} }{\textheight }{0.5ex}}% \stackon[1pt]{#1}{\tmpbox}% } \usepackage{tcolorbox} \allowdisplaybreaks \newtheorem{theorem}{Theorem}[section] \newtheorem{corollary}{Corollary}[theorem] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{prop}{Proposition} \newmuskip\pFqmuskip \newcommand*\pFq[6][8]{% \begingroup \pFqmuskip=#1mu\relax \mathcode`\,=\string"8000 \begingroup\lccode`\~=`\, \lowercase{\endgroup\let~}\mskip\pFqmuskip {}_{#2}F_{#3}{\left[\genfrac..{0pt}{}{#4}{#5};#6\right]}% \endgroup } \newcommand{\mskip\pFqmuskip}{\mskip\pFqmuskip} \newcommand{\rotatebox{90}{$\,=$}}{\rotatebox{90}{$\,=$}} \newcommand{\equalto}[2]{\underset{\scriptstyle\overset{\mkern4mu\rotatebox{90}{$\,=$}}{#2}}{#1}} \usepackage{tikz} \tikzset{Witten diagram/.style={execute at begin picture={% \draw[blue,fill=blue!20] circle[radius=\pgfkeysvalueof{/tikz/Witten/radius}]; \path node (X){\phantom{X}}; },baseline={(X.base)}},vertex/.style={circle,fill,inner sep=1.5pt,node contents={}}, 
Witten/.cd,radius/.initial=1.5cm} \newenvironment{wittendiagram}[1][]{\begin{tikzpicture}[Witten diagram,#1]}{\end{tikzpicture}} \usepackage{tikz-cd} \usetikzlibrary{cd} \newcommand{\hspace{-4pt}\not|\hspace{2pt}}{\hspace{-4pt}\not|\hspace{2pt}} \begin{document} \title{Two- \& Three-character solutions to MLDEs and Ramanujan-Eisenstein Identities for Fricke Groups} \author{Arpit Das} \email{[email protected]} \affiliation{Centre for Particle Theory, Department of Mathematical Sciences, Durham University, South Road, Durham DH1 3LE, UK\\} \author{Naveen Balaji Umasankar} \email{[email protected], [email protected]} \affiliation{University of Amsterdam, Institute for Theoretical Physics (ITFA), Science Park 904, 1098 XH Amsterdam.} \begin{abstract} \noindent In this work we extend the study of \cite{Umasankar:2022kzs} by investigating two- and three-character MLDEs for Fricke groups at prime levels. We have constructed these higher-character MLDEs by using a \textit{novel} Serre-Ramanujan type derivative operator which maps $k$-forms to $(k+2)$-forms in $\Gamma^{+}_0(p)$. We found that this derivative construction enabled us to write down a general prescription for obtaining \textit{Ramanujan-Eisenstein} identities for these groups. After solving the MLDEs, we discovered several novel single-, two-, and three-character admissible solutions for Fricke groups at levels $2$ and $3$, some of which we have realized in terms of McKay-Thompson series and others in terms of modular forms of the corresponding Hecke groups. Among these solutions, we have identified interesting non-trivial bilinear identities. Furthermore, we could construct \textit{putative} partition functions for these theories based on the bilinear pairings, which could have a range of lattice interpretations. We also present and discuss the modular re-parameterization of MLDEs and their solutions for Fricke groups of prime levels.
\end{abstract} \vfill \today \maketitle \tableofcontents \input{Introduction/Introduction_main.tex} \input{Preliminaries/Fricke_preliminaries.tex} \input{MLDE_Fricke.tex} \input{General_modular_reparameterization.tex} \input{Fricke_2/MLDE_Fricke_2.tex} \input{Fricke_3/MLDE_Fricke_3.tex} \input{Lattice_relations} \input{Discussion.tex} \input{Future_directions.tex} \acknowledgments \noindent We thank Sunil Mukhi for helpful discussions. Additionally, AD would like to thank Chethan N. Gowdigere, Jagannath Santara and Jishu Das for insightful discussions pertaining to MLDEs. AD would also like to thank Nabil Iqbal for useful discussions on CFTs. AD would also like to express his gratitude to Sigma Samhita for help in \LaTeX-formatting. NU would like to thank Mikhail Isachenkov for multiple discussions and useful suggestions, and Miranda Cheng and Erik Verlinde for early discussions. \section{MLDEs for Hecke and Fricke groups}\label{sec:Two-character_Hecke_Fricke} \noindent In this section, we outline a general procedure to obtain \textit{novel} Serre-Ramanujan type derivative operators for congruence subgroups of $\text{SL}(2,\mathbb{Z})$. Our main focus will be to construct such derivative operators for Fricke subgroups, but this procedure can be readily generalized to other congruence subgroups. As we shall see, these derivative operators not only help us to construct the MLDEs for Fricke groups but will also enable us to write down Ramanujan identities for these subgroups. \\ \\ Consider an $(n+1)$-dimensional square matrix made up of $\chi_{0},\chi_{1},\ldots,\chi_{n-1},f$ and their derivatives; demanding that its (Wronskian) determinant vanishes yields a modular-invariant ODE satisfied by $f$.
The most general ODE that is invariant under $\Gamma_{0}^{+}(p)$ reads \begin{align}\label{general_MLDE} \left[\omega_{2\ell}(\tau)\mathcal{D}^{n} + \omega_{2\ell + 2}(\tau)\mathcal{D}^{n-1} + \ldots + \omega_{2(n-1+\ell)}(\tau)\mathcal{D} + \omega_{2(n+\ell)}(\tau)\right]f(\tau) = 0, \end{align} where the functions $\omega_{k}$ are holomorphic modular forms of weight $k$ and the derivatives here are the \textit{modified} Serre-Ramanujan covariant derivatives given by \begin{align}\label{Ramanujan-Serre} \begin{split} \mathcal{D} \equiv& \frac{1}{2\pi i}\frac{d}{d\tau} - \kappa(r) E^{(p^{+})}_{2}(\tau),\\ \mathcal{D}^{n} =& \mathcal{D}_{r+2n-2}\circ\ldots\circ\mathcal{D}_{r+2}\circ\mathcal{D}_{r}, \end{split} \end{align} where $r$ is the weight of the modular form on which the derivative acts and $\kappa(r)$, which is obtained from the valence formula, is tabulated in \ref{tab:Fricke_valence}. \\ \begin{table}[htb!] \centering \begin{tabular}{||c|c|c|c|c|c|c|c|c|c|c|c|c|c|c||} \hline $p$ & 2 & 3 & 5 & 7 & 11 & 13 & 17 & 19 & 23 & 29 & 31 & 41 & 59 & 71\\ [0.5ex] \hline\hline $\kappa(r)$ & $\tfrac{r}{8}$ & $\tfrac{r}{6}$ & $\tfrac{r}{4}$ & $\tfrac{r}{3}$ & $\tfrac{r}{2}$ & $\tfrac{7r}{12}$ & $\tfrac{3r}{4}$ & $\tfrac{5r}{6}$ & $r$ & $\tfrac{5r}{4}$ & $\tfrac{4r}{3}$ & $\tfrac{7r}{4}$ & $\tfrac{5r}{2}$ & $3r$\\ [1ex] \hline \end{tabular} \caption{Values of $\kappa(r) = \tfrac{r\overline{\mu}_{0}^{+}}{12}$ computed using index data in table \ref{tab:Fricke_index_data}.} \label{tab:Fricke_valence} \end{table} \\ The solution to the ODE \ref{general_MLDE} is given by the Frobenius ansatz which reads \begin{align} \chi_{i} = q^{\alpha_{i}}\sum\limits_{n=0}^{\infty}a_{n}^{(i)}q^{n}, \end{align} where the exponents $\alpha_{i}$, the Wronskian index $\ell$, and the order $n$ of the differential equation are related by an expression that follows from the Riemann-Roch theorem. 
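Covariant derivatives of this type can be sanity-checked numerically. As a baseline, for $\text{SL}(2,\mathbb{Z})$, where the analogous factor is $\kappa(r) = \tfrac{r}{12}$, the covariant derivative of the weight $4$ Eisenstein series satisfies the classical Ramanujan identity $\mathcal{D}E_{4} = -\tfrac{1}{3}E_{6}$. The sketch below (our own check, with Eisenstein series built from divisor sums, and the identity cleared of the overall $\tfrac{1}{3}$ to stay in integer arithmetic) verifies this order by order in $q$; the Ramanujan-Eisenstein identities obtained for Fricke groups in this paper are of exactly this type:

```python
N = 10  # number of q-orders to check

def sigma(k, n):
    return sum(d**k for d in range(1, n + 1) if n % d == 0)

# Quasi-modular E_2 and the modular forms E_4, E_6 as q-series
E2 = [1] + [-24 * sigma(1, n) for n in range(1, N)]
E4 = [1] + [240 * sigma(3, n) for n in range(1, N)]
E6 = [1] + [-504 * sigma(5, n) for n in range(1, N)]

def mul(a, b):
    """Cauchy product of two truncated q-series."""
    return [sum(a[k] * b[n - k] for k in range(n + 1)) for n in range(N)]

# Covariant derivative on the weight-4 form E_4:
#   D E_4 = q dE_4/dq - (4/12) E_2 E_4, with q d/dq acting as n * coeff.
# Multiplying the identity D E_4 = -(1/3) E_6 through by 3:
E2E4 = mul(E2, E4)
lhs = [3 * n * E4[n] - E2E4[n] for n in range(N)]
rhs = [-c for c in E6]
print(lhs == rhs)
```

The same strategy, with $E_{2}$ replaced by $E_{2}^{(p^{+})}$ and $\kappa(r)$ read from table \ref{tab:Fricke_valence}, can be used to test the corresponding Fricke-group identities.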
Note that the Fricke groups considered here do not possess any modular form of weight $2$; this can be seen from the dimension of the space of modular forms. The covariant derivative constructed here transforms a weight $r$ modular form to a weight $r+2$ modular form. Since each Fricke group has a distinct valence formula, each Fricke level would have a different formulation of the Ramanujan-Serre covariant derivatives. Let us now show the difference between the $\text{SL}(2,\mathbb{Z})$ Serre-Ramanujan covariant derivative and those of the Fricke groups. The $\text{SL}(2,\mathbb{Z})$ covariant derivative is defined as follows \begin{align}\label{Ramanujan-Serre_SL2Z} \mathcal{D}_{\Gamma(1)} \equiv& \frac{1}{2\pi i}\frac{d}{d\tau} - \frac{r}{12}E_{2}(\tau). \end{align} The Eisenstein series of a Fricke group of level $p\in\mathbb{P}$ and weight $k$ is given by \begin{align} E^{(p^{+})}_{k}(\tau)\equiv \frac{p^{\tfrac{k}{2}}E_{k}(p\tau) + E_{k}(\tau)}{p^{\tfrac{k}{2}} + 1}. \end{align} Using this in \ref{Ramanujan-Serre} with $k = 2$, we find \begin{align} \mathcal{D}_{\Gamma^{+}_{0}(p)} \equiv& \left(\frac{1}{2\pi i}\frac{d}{d\tau} - \frac{\kappa(r)}{p + 1}E_{2}(\tau)\right) - \frac{p\,\kappa(r)}{p + 1}E_{2}(p\tau). \end{align} Since $p\neq 1$ and no $\kappa(r)$ value corresponding to a prime level $p$ conspires with $\tfrac{1}{p+1}$ to yield the desired factor of $\tfrac{r}{12}$, it is impossible to express the derivatives $\mathcal{D}_{\Gamma^{+}_{0}(p)}$ in terms of $\mathcal{D}_{\Gamma(1)}$. It turns out that we obtain the factor $\tfrac{r}{12}$ in \ref{Ramanujan-Serre_SL2Z} only at the Fricke level $N = 12$. The Eisenstein series of weight $2$ at this level is defined as follows \cite{Junichi} \begin{align} E_{2,12^{+}}(\tau)\equiv \frac{12E_{2}(12\tau) - 6E_{2}(6\tau) + 4E_{2}(4\tau) + 3E_{2}(3\tau) - 2E_{2}(2\tau) + E_{2}(\tau)}{4}.
\end{align} Using this in \ref{Ramanujan-Serre}, with $\kappa(r) = \tfrac{r}{2}$, we find \begin{align}\label{Gamma_12+_derivative} \begin{split} \mathcal{D}_{\Gamma^{+}_{0}(12)} \equiv& \left(\frac{1}{2\pi i}\frac{d}{d\tau} - \frac{r}{12}E_{2}(\tau)\right) - \frac{r}{12}\left(12E_{2}(12\tau) - 6E_{2}(6\tau) + 4E_{2}(4\tau) + 3E_{2}(3\tau) - 2E_{2}(2\tau)\right)\\ =& \mathcal{D}_{\Gamma(1)} - \frac{r}{12}\mathcal{E}_{2}(\tau), \end{split} \end{align} where $\mathcal{E}_{2}(\tau)$ contains the other weight $2$ Eisenstein series. From this brief discussion, we see that when dealing with the MLDEs of higher-character theories, it is important to build the covariant derivatives out of the Eisenstein series associated with the Fricke group, even though these are themselves built out of $\text{SL}(2,\mathbb{Z})$ Eisenstein series. It is easy to see from \ref{Gamma_12+_derivative} that the other terms in $\mathcal{E}_{2}(\tau)$ can have considerable influence on the solutions to the MLDE; indeed, solving the $(n,\ell) = (2,0)$ MLDE for $\Gamma_{0}^{+}(2)$ with $\mathcal{D}_{\Gamma(1)}$ instead of $\mathcal{D}_{\Gamma_{0}^{+}(2)}$ yields spurious solutions. The algorithm we follow to classify CFTs using MLDEs is the following: \begin{enumerate} \item Postulate the MLDE for a given order $n$ and fix the number of free parameters $\mu_{i}$. \item Find the solutions to the MLDE as a power series in $q = e^{2\pi i\tau}$ of the form \begin{align}\label{q-series} \chi_{j}(\tau) = q^{\alpha_{j}}\left(m_{0}^{(j)} + m_{1}^{(j)}q + m_{2}^{(j)}q^{2} + \ldots\right), \end{align} where the exponent $\alpha_{0} = -\tfrac{c}{24}$ is negative in a unitary theory. \item Write down the $q$-series expansions for the coefficient functions present in the MLDE. Plug the $q$-series in \ref{q-series}, along with the $q$-series of the coefficient functions, into the MLDE to obtain a recursion relation. Setting $n = 0$ in the recursion relation gives us the \textit{indicial} equation.
\item Set up recursion relations to obtain the coefficients $m_{n}^{(j)}$ for $n\geq 1$. \item Verify that the coefficients $m_{n}^{(j)}$ remain non-negative integers to higher orders in $q$. Passing this test renders the character admissible and the corresponding candidate theory viable. \item Check if the candidate CFT is actually a well-defined consistent theory. \end{enumerate} The general form of the MLDE for $n=2$ and $\ell\neq0$ reads \begin{align} \left[\omega_{2\ell}(\tau)\mathcal{D}^{2} + \omega_{2\ell + 2}(\tau)\mathcal{D} + \omega_{2\ell + 4}(\tau)\right]f(\tau) = 0. \end{align} The free parameters $\mu_{i}$ in this MLDE come as coefficients of the modular forms, and their number can be determined as follows \begin{align} \#(\mu) =\text{dim}\ \mathcal{M}_{2\ell}(\Gamma_{0}^{+}(p)) + \text{dim}\ \mathcal{M}_{2\ell + 2}(\Gamma_{0}^{+}(p)) + \text{dim}\ \mathcal{M}_{2\ell + 4}(\Gamma_{0}^{+}(p)) -1. \end{align} Knowledge of the dimensions of the spaces of modular forms for each group then determines the specific count for that group.\\\\ \noindent There are certain subtleties we need to be careful of when formulating the covariant derivative that acts on modular forms belonging to Hecke groups. The key difference is that, unlike Fricke groups, Hecke groups possess a non-zero-dimensional space of weight $2$ modular forms. Thanks to this, we do not need to include a quasi-modular form in the covariant derivative to preserve the mapping from weight $r$ to weight $r+2$ forms. This being said, we could also define the covariant derivatives with the weight $2$ Eisenstein series associated with the cusps acting as the quasi-modular forms. A distinct difference between Fricke and Hecke groups is that the latter possess a cusp at $\tau = 0$ in addition to the others listed in table \ref{tab: Hecke_cusp_list}. \\ \begin{table}[htb!]
\centering \begin{tabular}{||c|c|c|c|c|c|c|c|c|c|c|c|c||} \hline $N$ & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 \\[0.5ex] \hline\hline $\tau$ & - & - & $-\tfrac{1}{2}$ & - & $-\tfrac{1}{2}$, $-\tfrac{1}{3}$ & - & $-\tfrac{1}{2}$, $-\tfrac{1}{4}$ & $-\tfrac{1}{3}$, $-\tfrac{2}{3}$ & $-\tfrac{1}{2}$, $-\tfrac{1}{5}$ & - & $-\tfrac{1}{2}$, $-\tfrac{1}{3}$, $-\tfrac{1}{4}$, $-\tfrac{1}{6}$\\[1ex] \hline \end{tabular} \caption{The non-trivial cusps of Hecke groups $\Gamma_{0}(N)$ apart from the ones at $\tau = 0,i\infty$ that are common to all levels.} \label{tab: Hecke_cusp_list} \end{table} \\ Hence, the first choice for a Ramanujan-Serre covariant derivative for $\Gamma_{0}(N)$ is one that includes the Eisenstein series associated with the cusp at $\tau = 0$. Let us make the following definition \begin{align} \begin{split} \mathcal{D} \equiv& \frac{1}{2\pi i}\frac{d}{d\tau} - \upsilon(r) E^{0}_{2,N}(\tau),\\ \mathcal{D}^{n} =& \mathcal{D}_{r+2n-2}\circ\ldots\circ\mathcal{D}_{r+2}\circ\mathcal{D}_{r}, \end{split} \end{align} where the Eisenstein series $E_{k,N}^{0}(\tau)$ is defined as follows \cite{Junichi} \begin{align} E^{0}_{k,N}(\tau) = \begin{cases} -2^{\tfrac{k}{2} - \left\lfloor\tfrac{N}{4}\right\rfloor\tfrac{k}{2}}\frac{\left(E_{k}(2\tau) - E_{k}(\tau)\right)}{2^{k} - 1},\ &N = 2,4,8,\\ \frac{-N^{\tfrac{k}{2}}\left(E_{k}(N\tau) - E_{k}(\tau)\right)}{N^{k} - 1},\ &N\in\mathbb{P},\\ \frac{6^{\tfrac{k}{2}}\left(E_{k}(6\tau) - E_{k}(3\tau) - E_{k}(2\tau) + E_{k}(\tau)\right)}{(3^{k} - 1)(2^{k} - 1)},\ &N = 6,\\ \frac{10^{\tfrac{k}{2}}\left(E_{k}(10\tau) - E_{k}(5\tau) - E_{k}(2\tau) + E_{k}(\tau)\right)}{(5^{k} - 1)(2^{k} - 1)},\ &N = 10,\\ 2^{-\tfrac{k}{2}}E^{0}_{k,6}(\tau),\ &N = 12.\\ \end{cases} \end{align} Another definition of the Ramanujan-Serre covariant derivative in $\Gamma_{0}(N)$ is one including the Eisenstein series associated with the cusp at $\tau = i\infty$, as follows \begin{align} \begin{split} \mathcal{D} \equiv& \frac{1}{2\pi i}\frac{d}{d\tau} - \upsilon(r) E^{\infty}_{2,N}(\tau),\\ \mathcal{D}^{n} =& \mathcal{D}_{r+2n-2}\circ\ldots\circ\mathcal{D}_{r+2}\circ\mathcal{D}_{r}, \end{split} \end{align} where $E^{\infty}_{k,N}(\tau)$, the Eisenstein series associated with the cusp at $\tau = i\infty$, is defined as follows \cite{Junichi} \begin{align} E^{\infty}_{k,N}(\tau) = \begin{cases} \frac{2^{k}E_{k}(N\tau) - E_{k}\left(\tfrac{N\tau}{2}\right)}{2^{k} - 1},\ &N = 2,4,8,\\ \frac{N^{k}E_{k}(N\tau) - E_{k}(\tau)}{N^{k} - 1},\ &N\in\mathbb{P},\\ \frac{6^{k}E_{k}(6\tau) - 3^{k}E_{k}(3\tau) - 2^{k}E_{k}(2\tau) + E_{k}(\tau)}{(3^{k} - 1)(2^{k} - 1)},\ &N = 6,\\ \frac{10^{k}E_{k}(10\tau) - 5^{k}E_{k}(5\tau) - 2^{k}E_{k}(2\tau) + E_{k}(\tau)}{(5^{k} - 1)(2^{k} - 1)},\ &N = 10,\\ E^{\infty}_{k,6}(2\tau),\ &N = 12,\\ \end{cases} \end{align} for $k\geq 4$. The values of $\upsilon(r)$ for levels $N\leq 12$ and for the prime divisor levels $N>12$ of $\mathbb{M}$ are tabulated in table \ref{tab:Hecke_valence}. \begin{table}[htb!] \centering \begin{tabular}{||c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c||} \hline $N$ & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 17 & 19 & 23 & 29 & 31 & 41 & 59 & 71\\ [0.5ex] \hline\hline $\upsilon(r)$ & $\tfrac{r}{4}$ & $\tfrac{r}{3}$ & $\tfrac{r}{2}$ & $\tfrac{r}{2}$ & $r$ & $\tfrac{2r}{3}$ & $r$ & $r$ & $\tfrac{3r}{2}$ & $r$ & $2r$ & $\tfrac{7r}{6}$ & $\tfrac{3r}{2}$ & $\tfrac{5r}{2}$ & $2r$ & $\tfrac{5r}{2}$ & $\tfrac{8r}{3}$ & $\tfrac{7r}{2}$ & $5r$ & $6r$\\ [1ex] \hline \end{tabular} \caption{Values of $\upsilon(r) = \tfrac{r\overline{\mu}_{0}}{12}$ computed using index data in table \ref{tab:Hecke_index_data}.} \label{tab:Hecke_valence} \end{table} \\ At level $N = 4$, we see from table \ref{tab: Hecke_cusp_list} that the fundamental domain has a cusp at $\tau = -\tfrac{1}{2}$, and there exists an Eisenstein series associated with this cusp, using which we can define yet another Ramanujan-Serre covariant derivative.
Thus, we would have four unique MLDEs at level $N = 4$ and similarly, five unique MLDEs at levels $N = 6,8,9,10$, and seven unique MLDEs at level $N = 12$. The definitions of the Eisenstein series associated with the non-trivial cusps can be found in \cite{Junichi}. \section{Two-character MLDEs for Hecke groups}\label{sec:Hecke_MLDE} \noindent There are certain subtleties we need to be careful of when formulating the covariant derivative that acts on modular forms belonging to Hecke groups. The key difference is that, unlike Fricke groups, Hecke groups possess non-zero dimensional space of weight $2$ modular forms. Thanks to this we don't need to include a quasi-modular form in the covariant derivative to preserve mapping from weight $r$ to weight $r+2$ forms. This being said, we could also define the covariant derivatives with the Eisenstein series of weight $2$ associated with the cusps to act as the quasi-modular forms. A distinct difference between Fricke and Hecke groups is that the latter possess a cusp at $\tau = 0$ in addition to others listed in table \ref{tab: Hecke_cusp_list}. \\ \begin{table}[htb!] \centering \begin{tabular}{||c|c|c|c|c|c|c|c|c|c|c|c|c||} \hline $N$ & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 \\[0.5ex] \hline\hline $\tau$ & - & - & $-\tfrac{1}{2}$ & - & $-\tfrac{1}{2}$, $-\tfrac{1}{3}$ & - & $-\tfrac{1}{2}$, $-\tfrac{1}{4}$ & $-\tfrac{1}{3}$, $-\tfrac{1}{3}$ & $-\tfrac{1}{2}$, $-\tfrac{1}{5}$ & - & $-\tfrac{1}{2}$, $-\tfrac{1}{3}$, $-\tfrac{1}{4}$, $-\tfrac{1}{6}$\\[1ex] \hline \end{tabular} \caption{The non-trivial cusps of Hecke groups $\Gamma_{0}(N)$ apart from the ones at $\tau = 0,i\infty$ that is common to all levels.} \label{tab: Hecke_cusp_list} \end{table} \\ Hence, the first choice for a Ramanujan-Serre covariant derivative for $\Gamma_{0}(N)$ is one that includes the Eisenstein series associated with the cusp at $\tau = 0$. 
Let us make the following definition \begin{align} \begin{split} \mathcal{D} \equiv& \frac{1}{2\pi i}\frac{d}{d\tau} - \upsilon(r) E^{0}_{2,N}(\tau),\\ \mathcal{D}^{n} =& \mathcal{D}_{r+2n-2}\circ\ldots\circ\mathcal{D}_{r+2}\circ\mathcal{D}_{r}, \end{split} \end{align} where the Eisenstein series $E_{k,N}^{0}(\tau)$ is defined as follows \cite{Junichi} \begin{align} E^{0}_{k,N}(\tau) = \begin{cases} -2^{\tfrac{k}{2} - \left\lfloor\left.\tfrac{N}{4}\right\rfloor\right.\tfrac{k}{2}}\frac{\left(E_{k}(2\tau) - E_{k}(\tau)\right)}{2^{k} - 1},\ &N = 2,4,8,\\ \frac{-N^{\tfrac{k}{2}}\left(E_{k}(N\tau) - E_{k}(\tau)\right)}{N^{k} - 1},\ &N\in\mathbb{P},\\ \frac{6^{\tfrac{k}{2}}\left(E_{k}(6\tau) - E_{k}(3\tau) - E_{k}(2\tau) + E_{k}(\tau)\right)}{(3^{k} - 1)(2^{k} - 1)},\ &N = 6,\\ \frac{10^{\tfrac{k}{2}}\left(E_{k}(10\tau) - E_{k}(5\tau) - E_{k}(2\tau) + E_{k}(\tau)\right)}{(5^{k} - 1)(2^{k} - 1)},\ &N = 10,\\ 2^{-\tfrac{k}{2}}E^{0}_{k,6}(\tau),\ &N = 12.\\ \end{cases} \end{align} Another definition of the Ramanujan-Serre covariant derivative for $\Gamma_{0}(N)$ is one that includes the Eisenstein series associated with the cusp at $\tau = i\infty$, as follows \begin{align} \begin{split} \mathcal{D} \equiv& \frac{1}{2\pi i}\frac{d}{d\tau} - \upsilon(r) E^{\infty}_{2,N}(\tau),\\ \mathcal{D}^{n} =& \mathcal{D}_{r+2n-2}\circ\ldots\circ\mathcal{D}_{r+2}\circ\mathcal{D}_{r}, \end{split} \end{align} where $E^{\infty}_{k,N}(\tau)$, the Eisenstein series associated with the cusp $\infty$, is defined as follows \cite{Junichi} \begin{align} E^{\infty}_{k,N}(\tau) = \begin{cases} \frac{2^{k}E_{k}(N\tau) - E_{k}\left(\tfrac{N\tau}{2}\right)}{2^{k} - 1},\ &N = 2,4,8,\\ \frac{N^{k}E_{k}(N\tau) - E_{k}(\tau)}{N^{k} - 1},\ &N\in\mathbb{P},\\ \frac{6^{k}E_{k}(6\tau) - 3^{k}E_{k}(3\tau) - 2^{k}E_{k}(2\tau) + E_{k}(\tau)}{(3^{k} - 1)(2^{k} - 1)},\ &N = 6,\\ \frac{10^{k}E_{k}(10\tau) - 5^{k}E_{k}(5\tau) - 2^{k}E_{k}(2\tau) + E_{k}(\tau)}{(5^{k} - 1)(2^{k} - 1)},\ &N = 10,\\ E^{\infty}_{k,6}(2\tau),\ &N = 12,\\ \end{cases} \end{align} for $k\geq 4$. The values of $\upsilon(r)$ for levels $N\leq 12$ and for the levels $N>12$ that are prime divisors of $\mathbb{M}$ are tabulated in table \ref{tab:Hecke_valence}. \begin{table}[htb!] \centering \begin{tabular}{||c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c||} \hline $N$ & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 17 & 19 & 23 & 29 & 31 & 41 & 59 & 71\\ [0.5ex] \hline\hline $\upsilon(r)$ & $\tfrac{r}{4}$ & $\tfrac{r}{3}$ & $\tfrac{r}{2}$ & $\tfrac{r}{2}$ & $r$ & $\tfrac{2r}{3}$ & $r$ & $r$ & $\tfrac{3r}{2}$ & $r$ & $2r$ & $\tfrac{7r}{6}$ & $\tfrac{3r}{2}$ & $\tfrac{5r}{2}$ & $2r$ & $\tfrac{5r}{2}$ & $\tfrac{8r}{3}$ & $\tfrac{7r}{2}$ & $5r$ & $6r$\\ [1ex] \hline \end{tabular} \caption{Values of $\upsilon(r) = \tfrac{r\overline{\mu}_{0}}{12}$ computed from the index data in table \ref{tab:Hecke_index_data}.} \label{tab:Hecke_valence} \end{table} \\ At level $N = 4$, we see from table \ref{tab: Hecke_cusp_list} that the fundamental domain has a cusp at $\tau = -\tfrac{1}{2}$, and there exists an Eisenstein series associated with this cusp, using which we can define yet another Ramanujan-Serre covariant derivative. Thus, we have four unique MLDEs at level $N = 4$ and, similarly, five unique MLDEs at levels $N = 6,8,9,10$, and seven unique MLDEs at level $N = 12$. The definitions of the Eisenstein series associated with the non-trivial cusps can be found in \cite{Junichi}. \section{Mathematical preliminaries}\label{appendix:A} \subsection*{Definitions} The special linear group over the integers $\text{SL}(2,\mathbb{Z})$ is defined as the set of all $2\times 2$ matrices with integer entries and unit determinant, i.e. \begin{equation} \text{SL}(2,\mathbb{Z}) = \left\{\left.\begin{pmatrix} a & b\\ c& d \end{pmatrix}\right\vert a,b,c,d\in\mathbb{Z},ad-bc=1\right\}.
\end{equation} The projective special linear group over the integers, or the modular group $\text{PSL}(2,\mathbb{Z})$, is the special linear group quotiented by $\mathbb{Z}_{2}$, i.e. \begin{equation} \text{PSL}(2,\mathbb{Z}) = \text{SL}(2,\mathbb{Z})/\mathbb{Z}_{2} = \left\{\left.\begin{pmatrix} a & b\\ c& d \end{pmatrix}\right\vert a,b,c,d\in\mathbb{Z},ad-bc=1\right\}/\{\pm 1\}. \end{equation} Since $\text{PSL}(2,\mathbb{Z})$ and $\text{SL}(2,\mathbb{Z})$ are equal up to a quotient by $\mathbb{Z}_{2}$, both can be loosely referred to as the modular group. $\text{GL}(2,\mathbb{R})$ is the general linear group over the reals, i.e. the group of all $2\times 2$ invertible matrices with real coefficients, and $\text{SL}(2,\mathbb{R})$ is its subgroup of matrices with unit determinant. Let $\reallywidehat{\mathbb{C}} \equiv \mathbb{C}\cup \{\infty\}$ denote the Riemann sphere, which is also sometimes denoted by $\mathbb{P}^{1}(\mathbb{C})$. Let us consider an element $A\in\text{SL}(2,\mathbb{Z})$, a $2\times 2$ integer matrix of the form \begin{equation} A = \begin{pmatrix} a & b\\ c & d \end{pmatrix}. \end{equation} The action of $A$ on $\reallywidehat{\mathbb{C}}$ is defined by M\"obius or fractional linear transformations in the usual way \begin{equation} A(z)\equiv \frac{az + b}{cz + d}, \end{equation} and \begin{align}\label{action_SL2Z} \begin{split} A(\infty) = \lim\limits_{y\to \infty}\frac{a(x+iy) + b}{c(x+iy) + d} = \begin{cases} \frac{a}{c},\ &c\neq 0,\\ \infty,\ &c=0. \end{cases} \end{split} \end{align} The modular group contains the following three matrices \begin{equation} I = \mathbb{1}_2,\ S = \begin{pmatrix} 0 & -1\\ 1 & 0 \end{pmatrix},\ T = \begin{pmatrix} 1 & 1\\ 0 & 1 \end{pmatrix}, \end{equation} where $\mathbb{1}_{2}$ is the $2\times 2$ identity matrix.
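By direct matrix multiplication, the generators above satisfy \begin{align} S^{2} = -\mathbb{1}_{2},\qquad ST = \begin{pmatrix} 0 & -1\\ 1 & 1 \end{pmatrix},\qquad (ST)^{2} = \begin{pmatrix} -1 & -1\\ 1 & 0 \end{pmatrix},\qquad (ST)^{3} = -\mathbb{1}_{2}, \end{align} so that in $\text{PSL}(2,\mathbb{Z})$, where $-\mathbb{1}_{2}$ is identified with $\mathbb{1}_{2}$, the element $S$ has order $2$ and $ST$ has order $3$.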
It is to be noted that different conventions are followed for the $S$ matrix, with the $-1$ placed in either of the two off-diagonal positions: the $S$ matrix is taken to be either $\left(\begin{smallmatrix}0 &-1\\ 1&\ 0\end{smallmatrix}\right)$ or $\left(\begin{smallmatrix}\ 0 &1\\ -1&0\end{smallmatrix}\right)$, and we shall stick to the former in this paper. The $S$ and the $T$ matrices generate the modular group. The discrete subgroups of $\text{PSL}(2,\mathbb{R})$ are denoted by $\overline{\Gamma}$, with the convention that $\overline{\Gamma}(1) = \text{PSL}(2,\mathbb{Z})$, and the discrete subgroups of $\text{SL}(2,\mathbb{Z})$ are denoted by $\Gamma$, with the convention that $\Gamma(1) = \text{SL}(2,\mathbb{Z})$. These subgroups are related as $\overline{\Gamma} = \Gamma/\pm I$. The discrete subgroups at level $2$, for example, are defined as follows \begin{align} \begin{split} \Gamma(2) =& \left\{A\in\text{SL}(2,\mathbb{Z})\vert A\equiv \mathbb{1}_{2}\ (\text{mod}\ 2)\right\},\\ \Gamma_{\theta} =& \left\{A\in\text{SL}(2,\mathbb{Z})\vert A\equiv \mathbb{1}_{2}\ \text{or}\ S\ (\text{mod}\ 2)\right\}, \end{split} \end{align} where $\text{SL}(2,\mathbb{Z})\supset \Gamma_{\theta}\supset \Gamma(2)$, and $\Gamma_{\theta}$ is called the theta group. The discrete subgroup $\Gamma(2)$, for example, is generated by $T^{2} = \left(\begin{smallmatrix}1 & 2\\ 0 & 1\end{smallmatrix}\right)$ and $ST^{2}S = \left(\begin{smallmatrix}-1 & \ 0\\ \ 2 & -1\end{smallmatrix}\right)$, which is denoted as $\Gamma(2) = \langle T^{2},ST^{2}S\rangle.$ There also exist special subgroups of the discrete group $\Gamma$ that we are interested in working with. These are called the congruence subgroups of level $N$, or the level $N$ principal congruence subgroups, and are denoted by $\Gamma(N)$ with $N\in\mathbb{N}$. These are defined to be the identity mod $N$, i.e.
\begin{align} \begin{split} \Gamma(N) =& \{\gamma\in\Gamma(1)\vert\ \gamma = \mathbb{1}_{2}\ (\text{mod}\ N)\}\\ =& \left.\left\{\begin{pmatrix} a & b\\ c & d \end{pmatrix}\in\text{SL}(2,\mathbb{Z})\right\vert a\equiv d\equiv 1\ (\text{mod}\ N),\ b\equiv c\equiv 0\ (\text{mod}\ N)\right\}, \end{split} \end{align} where $\Gamma(1) = \text{SL}(2,\mathbb{Z})$ as noted earlier. A congruence subgroup of level $N$ is also a congruence subgroup of any multiple of $N$, i.e. of $N' = \ell N$ for all $\ell\in\mathbb{N}$. The index of the principal congruence subgroup $\Gamma(N)$ in $\text{SL}(2,\mathbb{Z})$ and in $\text{PSL}(2,\mathbb{Z})$ is given by \begin{equation}\label{index} \begin{split} \mu =& \left[\text{SL}(2,\mathbb{Z}):\Gamma(N)\right]= N^{3}\prod\limits_{p\vert N}(1-p^{-2}),\\ \overline{\mu} =& \left[\text{PSL}(2,\mathbb{Z}):\overline{\Gamma}(N)\right] = \begin{cases} \frac{1}{2}N^{3}\prod\limits_{p\vert N}(1-p^{-2}),\ N\geq 3,\\ 6,\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ N=2. \end{cases} \end{split} \end{equation} where $p\vert N$ denotes the prime divisors of $N$. Every congruence subgroup $\Gamma$ has a finite index in $\text{SL}(2,\mathbb{Z})$. There are several other special subgroups of $\Gamma$, of which the two most relevant for our discussion are $\Gamma_{0}(N)$ and $\Gamma_{1}(N)$: the former is defined to be the subgroup of $\Gamma$ consisting of matrices whose lower left entry is $0$ mod $N$, and the latter is defined to be the subgroup of $\Gamma$ consisting of matrices that are the identity except possibly in the upper right corner mod $N$. Explicitly, we have \begin{align}\label{Hecke_subgroup} \begin{split} \Gamma_{0}(N) =& \left\{\left.\begin{pmatrix} a & b\\ c & d \end{pmatrix}\in\text{SL}(2,\mathbb{Z})\right\vert c\equiv 0\ (\text{mod}\ N)\right\} = \left\{\begin{pmatrix} * & *\\ 0 & * \end{pmatrix}\ \text{mod}\ N\right\},\\ \Gamma_{1}(N) =& \left\{\begin{pmatrix} 1 & *\\ 0 & 1 \end{pmatrix}\text{mod}\ N\right\}.
\end{split} \end{align} Here ``$*$'' stands for ``unspecified''. These definitions satisfy \begin{align} \Gamma(N)\subset\Gamma_{1}(N)\subset\Gamma_{0}(N)\subset\text{SL}(2,\mathbb{Z}). \end{align} We shall refer to $\Gamma_{0}(N)$ as Hecke subgroups from here on. When $N=1$, we have \begin{equation} \text{SL}(2,\mathbb{Z}) = \Gamma_{0}(1) = \Gamma_{1}(1) = \Gamma(1). \end{equation} The index of the Hecke subgroup $\Gamma_{0}(N)$ in $\text{SL}(2,\mathbb{Z})$ and in $\text{PSL}(2,\mathbb{Z})$, and the index of the congruence subgroup $\Gamma_{1}(N)$ in $\text{SL}(2,\mathbb{Z})$ and in $\text{PSL}(2,\mathbb{Z})$, read \cite{Cohen_modular_forms} \begin{align}\label{index_subgroups} \begin{split} \mu_{0} =& \left[\text{SL}(2,\mathbb{Z}):\Gamma_{0}(N)\right]= N\prod\limits_{p\vert N}(1+p^{-1}),\\ \overline{\mu}_{0} =& \left[\text{PSL}(2,\mathbb{Z}):\overline{\Gamma}_{0}(N)\right] = \begin{cases} N\prod\limits_{p\vert N}(1+p^{-1}),\ N\geq 3,\\ 3,\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ N=2, \end{cases}\\ \mu_{1} =& \left[\text{SL}(2,\mathbb{Z}):\Gamma_{1}(N)\right]= N^{2}\prod\limits_{p\vert N}(1-p^{-2}),\\ \overline{\mu}_{1} =& \left[\text{PSL}(2,\mathbb{Z}):\overline{\Gamma}_{1}(N)\right]= \begin{cases} \frac{N^{2}}{2}\prod\limits_{p\vert N}(1-p^{-2}),\ N\geq 3,\\ 3,\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ N=2.
\end{cases} \end{split} \end{align} Additionally, we also have the following relations \begin{align}\label{index_relations} \begin{split} &\left[\Gamma_{1}(N):\Gamma(N)\right] = N,\\ &\left[\overline{\Gamma}_{1}(N):\overline{\Gamma}(N)\right] = N,\ N>2\\ &\left[\Gamma_{1}(N):\Gamma(N^{2})\right] = N,\\ &\left[\overline{\Gamma}_{1}(N):\overline{\Gamma}(N^{2})\right] = N,\ N>2\\ &\left[\Gamma_{0}(N):\Gamma_{1}(N)\right] = \phi(N),\\ &\left[\overline{\Gamma}_{0}(N):\overline{\Gamma}_{1}(N)\right] = \frac{1}{2}\phi(N),\ N>2\\ &\left[\text{PSL}(2,\mathbb{Z}):\overline{\Gamma}_{1}(N^{2})\right] = \left[\text{PSL}(2,\mathbb{Z}):\overline{\Gamma}_{0}(N^{2})\right] = \left[\text{PSL}(2,\mathbb{Z}):\overline{\Gamma}(N)\right] = \frac{N^{3}}{2}\prod\limits_{p\vert N}\left(1 - p^{-2}\right),\ N\geq 3 \end{split} \end{align} where $\phi$ is Euler's totient (or phi) function, defined as follows \begin{align}\label{Euler_totient} \phi(N) = N\prod\limits_{p\vert N}\left(1 - \frac{1}{p}\right),\ p\in\mathbb{P}, \end{align} where the product is over the distinct prime numbers dividing $N$. The index is multiplicative: if $A,B,C$ are groups such that $A\subset B\subset C$ and the indices $[C:B]$ and $[B:A]$ are finite, then we have \begin{align} [C:A] = [C:B][B:A]. \end{align} For example, at level $N=7$, we have $\left[\Gamma_{1}(7):\Gamma(7)\right] = 7$, $\left[\Gamma_{0}(7):\Gamma_{1}(7)\right] = \phi(7) = 6$, and $\left[\text{SL}(2,\mathbb{Z}):\Gamma_{0}(7)\right] = 7\prod\limits_{p\vert 7}\left(1+p^{-1}\right) = 8$. Using the multiplicative nature of the index, we find \begin{align} \begin{split} \left[\text{SL}(2,\mathbb{Z}):\Gamma_{0}(7)\right]\left[\Gamma_{0}(7):\Gamma_{1}(7)\right]\left[\Gamma_{1}(7):\Gamma(7)\right] = 336 = \left[\text{SL}(2,\mathbb{Z}):\Gamma(7)\right], \end{split} \end{align} which matches the result obtained using definition \ref{index}. For the Hall divisor $\ell\vert N$, i.e.
in addition to $\ell$ being a divisor, $\ell$ and $\tfrac{N}{\ell}$ are coprime, the Atkin-Lehner involution at $\ell$ is defined to be any matrix of the form \begin{align} A_{\ell} = \frac{1}{\sqrt{\ell}}\begin{pmatrix} \ell\cdot a & b\\ N c& \ell\cdot d \end{pmatrix} \end{align} for $a,b,c,d\in\mathbb{Z}$ such that $\text{det}\ A_{\ell} =1$. These operators define involutions on the space of weakly holomorphic modular forms of weight $k$, denoted by $\mathcal{M}_{k}^{!}(\Gamma_{0}(N))$. Products of these involutions correspond to cusps of the Hecke group $\Gamma_{0}(N)$, and slashing by these matrices yields expansions at those cusps. The Atkin-Lehner involutions can be chosen such that they square to the identity matrix $\mathbb{1}_{2}$ and hence, they act as involutions when considered as modular transformations on the upper half-plane $\mathbb{H}^{2}$. For the special case $\ell = N$, this is called the Fricke involution, \begin{align}\label{Fricke_involution} \begin{split} W_{N} = \frac{1}{\sqrt{N}}\begin{pmatrix} 0 & -1\\ N & 0 \end{pmatrix},\\ W_{N}:\tau\mapsto -\frac{1}{N\tau}. \end{split} \end{align} Let $[\tau]$ denote the equivalence class of $\tau\in\mathbb{H}^{2*}$ under the action of the group $\Gamma$ on $\mathbb{H}^{2}$. The Fricke involution exchanges the two cusp classes $[i\infty]$ and $[0]$ while fixing its fixed point $\tau = \tfrac{i}{\sqrt{N}}$. For a positive integer $N$, the Fricke group denoted by $\Gamma^{+}_{0}(N)$ is generated by the Hecke group $\Gamma_{0}(N)$ and the Fricke involution $W_{N}$, i.e. $\Gamma_{0}^{+}(N) = \langle\Gamma_{0}(N),W_{N}\rangle$ or \begin{equation}\label{Fricke} \Gamma_{0}^{+}(N) \equiv \Gamma_{0}(N)\cup \Gamma_{0}(N)W_{N}. \end{equation} The group $\Gamma_{0}^{*}(N)$, on the other hand, is the one generated by the Hecke group and all of the Atkin-Lehner involutions, i.e. $\Gamma_{0}^{*}(N) = \langle\Gamma_{0}(N),\{A_{\ell}\}\rangle$. When $N \leq 5$ or when $N = p\in\mathbb{P}$, $\Gamma_{0}^{*}(N) = \Gamma_{0}^{+}(N)$.
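A short computation makes the relation between $W_{N}$ and $\Gamma_{0}(N)$ explicit: $W_{N}^{2} = -\mathbb{1}_{2}$, so $W_{N}$ is indeed an involution on $\mathbb{H}^{2}$, and for $\gamma = \left(\begin{smallmatrix} a & b\\ Nc & d\end{smallmatrix}\right)\in\Gamma_{0}(N)$ one finds \begin{align} W_{N}\,\gamma\, W_{N}^{-1} = \frac{1}{N}\begin{pmatrix} 0 & -1\\ N & 0 \end{pmatrix}\begin{pmatrix} a & b\\ Nc & d \end{pmatrix}\begin{pmatrix} 0 & 1\\ -N & 0 \end{pmatrix} = \begin{pmatrix} d & -c\\ -Nb & a \end{pmatrix}\in\Gamma_{0}(N), \end{align} so that $W_{N}$ normalizes $\Gamma_{0}(N)$ and the union in \ref{Fricke} is indeed a group.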
Let $M\in\mathbb{N}$ and let $h$ be the largest divisor of $24$ such that $h^{2}$ divides $M$, and write $M = Nh^{2}$. The normalizer of $\Gamma_{0}(M)$ is given by \\ \begin{align} \mathcal{N}\left(\Gamma_{0}(M)\right) = \Gamma_{0}^{+}(Nh\vert h) = \begin{pmatrix} h & 0\\ 0 & 1\end{pmatrix}^{-1}\Gamma_{0}^{+}(N)\begin{pmatrix} h & 0\\ 0 & 1\end{pmatrix}, \end{align} \\ where we notice that we recover the Fricke group when $h=1$. \begin{tcolorbox}[colback=blue!5!white,colframe=gray!75!black,title=] \begin{theorem}[\cite{Apostol}]\label{thm:Apostol_eta} For $\gamma = \left(\begin{smallmatrix}a & b\\ c & d\end{smallmatrix}\right)\in\text{SL}(2,\mathbb{Z})$ with $c>0$ and $\tau\in\mathbb{H}^{2}$, \begin{align} \begin{split} \eta(\gamma(\tau)) =& \eta\left(\frac{a\tau + b}{c\tau + d}\right) = \epsilon(a,b,c,d)\left(-i(c\tau + d)\right)^{\frac{1}{2}}\eta(\tau),\\ \epsilon(a,b,c,d) =& \text{exp}\left[\pi i\left(\frac{a+d}{12c} + s(-d,c)\right)\right],\\ s(a,b)=& \sum\limits_{\ell=1}^{b-1}\frac{\ell}{b}\left(\frac{a\ell}{b} - \left\lfloor\left.\frac{a\ell}{b}\right\rfloor\right. - \frac{1}{2}\right). \end{split} \end{align} \end{theorem} \end{tcolorbox} \noindent It follows from this theorem that if $k\vert N$ and $\gamma = \left(\begin{smallmatrix}a & b\\ Nc & d\end{smallmatrix}\right)\in\Gamma_{0}(N)$ with $c>0$, then \begin{align} \begin{split} \eta(k\gamma(\tau)) =& \eta\left(\frac{ka\tau + kb}{Nc\tau + d}\right) = \eta\left(\begin{pmatrix} a & kb\\ \tfrac{Nc}{k} & d \end{pmatrix}(k\tau)\right)\\ =& \epsilon\left(a,kb,\tfrac{cN}{k},d\right)\left(-i(Nc\tau + d)\right)^{\frac{1}{2}}\eta(k\tau). \end{split} \end{align} Under the $S$-transform, the Dedekind eta-function behaves as follows \begin{align}\label{Dedekind_S_transform} \begin{split} \eta(S(\tau)) =& (-i\tau)^{\tfrac{1}{2}}\eta(\tau),\\ \eta(N\cdot S(\tau)) =& \left(-\frac{i\tau}{N}\right)^{\tfrac{1}{2}}\eta\left(\frac{\tau}{N}\right).
\end{split} \end{align} \subsection*{Signature and genus of congruence subgroups} We consider a somewhat larger set of congruence subgroups described by \begin{equation} H(p,q,r;\chi,\tau) = \left\{\left.\begin{pmatrix} 1 + ap & bq\\ cr & 1 + dp \end{pmatrix} \in \Gamma\right\vert a,b,c,d\in\mathbb{Z},\ c\equiv \tau a\ (\text{mod}\ \chi)\right\}, \end{equation} where $p$ divides $qr$ and $\chi$ divides $\text{gcd}\left(p,\tfrac{qr}{p}\right)$. These groups include $\Gamma(N) = H(N,N,N;1,1)$, the Hecke subgroup $\Gamma_{0}(N) = H(1,1,N;1,1)$, and $\Gamma_{1}(N) = H(N,1,N;1,1)$. A subgroup of $\Gamma$ is called a regular subgroup if it contains $-I = -\mathbb{1}_{2}$. If a subgroup is not regular, it is called irregular. In general, the subgroup $H(p,q,r;\chi,\tau)$ is irregular, and the associated regular subgroup is denoted by $\pm H(p,q,r;\chi,\tau)$. Let $H(p,N;\chi) = H(p,N,1;\chi,1)$, for $p,N,\chi\in\mathbb{Z}$ such that $p\vert N$ and $\chi\vert \text{gcd}\left(p,\tfrac{N}{p}\right)$. \begin{tcolorbox}[colback=magenta!5!white,colframe=gray!75!black,title=] \begin{lemma}[\cite{Cummins}] Let $p,q,r,\chi,\tau\in\mathbb{Z}_{+}$ be such that $p\vert qr$ and $\chi\vert\text{gcd}\left(p,\tfrac{qr}{p}\right)$, and let $g = \text{gcd}(\chi,\tau)$. Then the groups $H(p,q,r;\chi,\tau)$ and $H(p,gqr;\chi/g)$ have the same signatures. \end{lemma} \end{tcolorbox} \noindent Let $\Gamma'$ be a subgroup of finite index $\mu = \mu(\Gamma')$ of the modular group $\Gamma$, or in other words, let $\Gamma'$ be a finitely generated Fuchsian group of the first kind contained in $\Gamma$.
Then, the index $\mu = \left[\Gamma:\Gamma'\right]$, generalising \ref{index}, is given by the following formula \begin{equation}\label{index_euler_char} \mu = \frac{\text{Area}(\Gamma'\backslash\mathbb{H}^{2})}{\text{Area}(\Gamma\backslash\mathbb{H}^{2})} = 6\left(2g-2 + \sum\limits_{j=1}^{v}\left(1 - \frac{1}{m_{j}}\right)\right) = 6\chi(\Gamma'), \end{equation} where $\chi(\Gamma')$ is the (negative) Euler characteristic of $\Gamma'$. The group $\Gamma$ contains elliptic elements of orders $2$ and $3$ only. Hence, let us define the following \begin{itemize} \item $\nu_{0} = \nu_{0}(\Gamma')$ be the degree of covering of Riemann surfaces $X(\Gamma')\to X(\Gamma)$ given by the index of the subgroup $\Gamma'$ in $\Gamma$, i.e. \begin{equation} \nu_{0}(\Gamma') = \mu(\Gamma'), \end{equation} \item $\nu_{2} = \nu_{2}(\Gamma')$ be the number of $\Gamma'$-inequivalent elliptic fixed points (in $\mathbb{H}^{2}$) of order $2$, \item $\nu_{3} = \nu_{3}(\Gamma')$ be the number of $\Gamma'$-inequivalent elliptic fixed points (in $\mathbb{H}^{2}$) of order $3$, \item $\nu_{\infty} = \nu_{\infty}(\Gamma')$ be the number of $\Gamma'$-inequivalent parabolic fixed points (in $\mathbb{Q}\cup \{\infty\}$) given by \begin{equation} \nu_{\infty}(\Gamma') = \nu_{\infty}^{\text{reg}}(\Gamma') + \nu_{\infty}^{\text{irr}}, \end{equation} where $\nu_{\infty}^{\text{reg}}(\Gamma')$ and $\nu_{\infty}^{\text{irr}}(\Gamma')$ are the number of regular and irregular cusps of $\Gamma'$, respectively. \item If the discriminant $D$ of the imaginary quadratic field $K = \mathbb{Q}\left(\sqrt{-D}\right)$ is $5$, $8$, or $12$, then $\nu_{5}$, $\nu_{4}$, or $\nu_{6}$, respectively, occurs. \item If the discriminant $D$ of the imaginary quadratic field $K = \mathbb{Q}\left(\sqrt{-D}\right)$ differs from $5$, $8$, and $12$, then only $\nu_{2}$ and $\nu_{3}$ occur.
\end{itemize} \noindent \begin{tcolorbox}[colback=blue!5!white,colframe=gray!75!black,title=] \begin{theorem}[\cite{Shimura}]\label{thm:genus} The genus of the curve $X(\Gamma') = \Gamma'\backslash\mathbb{H}^{2*}$ is \begin{equation} g(X(\Gamma')) = 1 + \frac{\nu_{0}(\Gamma')}{12} - \frac{\nu_{2}(\Gamma')}{4} - \frac{\nu_{3}(\Gamma')}{3} - \frac{\nu_{\infty}(\Gamma')}{2}. \end{equation} \end{theorem} \end{tcolorbox} \noindent \begin{tcolorbox}[colback=blue!5!white,colframe=gray!75!black,title=] \begin{theorem}[\cite{Kok}]\label{thm:genus_Fricke_index} Let $\Gamma'$ be an intermediate group of $\Gamma_{0}(M)\leq \mathcal{N}\left(\Gamma_{0}(M)\right) = \Gamma_{0}(Nh\vert h)$, where $M\in\mathbb{N}$ and $M = Nh^{2}$, with $h$ the largest divisor of $24$ whose square divides $M$. The genus of the curve $X(\Gamma') = \Gamma'\backslash\mathbb{H}^{2}$ reads \begin{equation}\label{genus_Fricke_eqn} g(X(\Gamma')) = 1 + \chi(\Gamma_{0}^{+}(N))\frac{\left[\Gamma_{0}^{+}(Nh\vert h):\Gamma'\right]}{2} - \frac{5\nu_{6}(\Gamma')}{12} - \frac{3\nu_{4}(\Gamma')}{8} - \frac{2\nu_{3}(\Gamma')}{6} - \frac{\nu_{2}(\Gamma')}{4} - \frac{\nu_{\infty}(\Gamma')}{2}, \end{equation} where $\chi(\Gamma_{0}^{+}(N)) = \tfrac{N}{6}\prod\limits_{p\vert N}\tfrac{p+1}{2p}$ is the (negative) Euler characteristic. \end{theorem} \end{tcolorbox} \noindent Let us consider $H$ to be one of the groups of the set $H(p,q,r;\chi,\tau)$ or $\pm H(p,q,r;\chi,\tau)$. The signature of $H$ is given by $(\mu,\nu_{2},\nu_{3},\nu_{\infty}^{\text{reg}},\nu_{\infty}^{\text{irr}})$ and the signature of $\pm H$ is given by $(\mu, \nu_{2},\nu_{3},\nu_{\infty}^{\text{reg}} + \nu_{\infty}^{\text{irr}}, 0)$. \begin{tcolorbox}[colback=blue!5!white,colframe=gray!75!black,title=] \begin{theorem}[\cite{Cummins}]\label{thm:reg_irr_cusps} For $p,q,r,\chi,\tau\in\mathbb{Z}_{+}$ such that $p\vert N$ and $\chi\vert \text{gcd}\left(p,\tfrac{N}{p}\right)$.
Let $c = c(p,N;\chi)$ be the number of orbits of the group $H(p,N;\chi)$ acting on a set related to the cusps of $\Gamma(N)$. Then, the number of regular and irregular cusps of $H(p,N;\chi)$ is given by \begin{align} \left(\nu_{\infty}^{\text{reg}},\nu_{\infty}^{\text{irr}}\right) = \begin{cases} (c,0),\ \text{if}\ p=2,\ \text{and}\ \chi = 1,\\ \ \ \ \ \ \ \ \ \text{or}\ p=1,\\ \left(\frac{2c}{5},\frac{c}{5}\right),\ \text{if}\ p=2,\ \chi = 2,\ 2\vert\vert (N/p),\\ \left(\frac{c}{4},\frac{c}{2}\right),\ \text{if}\ p=2,\ \chi = 2,\ 2^{k}\vert\vert (N/p),\ k\ \text{odd},\ k>1\\ \left(\frac{c}{3},\frac{c}{3}\right),\ \text{if}\ p=2,\ \chi = 2,\ 2^{k}\vert\vert (N/p),\ k\ \text{even}\\ \left(\frac{2c}{5},\frac{c}{5}\right),\ \text{if}\ p=4,\ \chi = 2,\ 2\nmid (N/p),\ (\text{so}\ \chi=1)\\ \left(\frac{c}{2},0\right),\ \text{otherwise} \end{cases} \end{align} \end{theorem} \end{tcolorbox} \noindent Here, we have used the notation $a\vert\vert b$ for $a,b\in\mathbb{Z}$, which is to be read as ``$a$ exactly divides $b$'', i.e. $a\vert b$ and $\text{gcd}\left(a,\tfrac{b}{a}\right)=1$. \\ \begin{tcolorbox}[colback=blue!5!white,colframe=gray!75!black,title=] \begin{theorem}\label{thm:eliptic_points} The number of elliptic points of order $2$ for $\Gamma_{0}(N)$ is given by \begin{align} \begin{split} \nu_{2}(\Gamma_{0}(N)) = \begin{cases} \prod\limits_{p\vert N}\left(1 + \left(\frac{-1}{p}\right)\right),\ &\text{if}\ 4\nmid N,\\ 0,\ &\text{if}\ 4\vert N, \end{cases} \end{split} \end{align} where $\left(\tfrac{-1}{p}\right) = \pm 1$ if $p \equiv \pm 1\ (\text{mod}\ 4)$ and is $0$ if $p = 2$, and the number of elliptic points of order $3$ for $\Gamma_{0}(N)$ is given by \begin{align} \begin{split} \nu_{3}(\Gamma_{0}(N)) = \begin{cases} \prod\limits_{p\vert N}\left(1 + \left(\frac{-3}{p}\right)\right),\ &\text{if}\ 9\nmid N,\\ 0,\ &\text{if}\ 9\vert N, \end{cases} \end{split} \end{align} where $\left(\tfrac{-3}{p}\right) = \pm 1$ if $p \equiv \pm 1\ (\text{mod}\ 3)$ and is $0$ if $p = 3$.
\end{theorem} \end{tcolorbox} \noindent \begin{tcolorbox}[colback=orange!5!white,colframe=gray!75!black,title=] \begin{prop}\label{prop:identity_element} The cases when $H(p,q,r;\chi,\tau)$ contains the element $-I = -\mathbb{1}_{2}$, i.e. is regular, are \begin{enumerate} \item $p=2,\ \chi = 2,\ \tau$ even. \item $p = 2,\ \chi=1$. \item $p=1,\ \chi = 1$. \end{enumerate} \end{prop} \end{tcolorbox} \noindent For the case of the Hecke subgroups $\Gamma_{0}(k)$ defined in \ref{Hecke_subgroup} with $k$ an odd prime, we have \begin{align} \mu =& k+1,\ \ \nu_{\infty} = 2,\\ \nu_{2} =& \begin{cases} 2,\ \text{if}\ k\equiv 1(\text{mod}\ 4),\\ 0,\ \text{if}\ k\equiv 3(\text{mod}\ 4), \end{cases}\\ \nu_{3} =& \begin{cases} 1,\ \text{if}\ k = 3\\ 2,\ \text{if}\ k\equiv 1(\text{mod}\ 3),\\ 0,\ \text{if}\ k\equiv 2(\text{mod}\ 3).\\ \end{cases} \end{align} The genus for $k=p$ prime reads \begin{equation} g(\Gamma_{0}(p)) = \begin{cases} 0,\ \text{if}\ p=2\ \text{or}\ 3,\\ \frac{p-13}{12},\ \text{if}\ p\equiv 1(\text{mod}\ 12),\\ \frac{p-n}{12},\ \text{if}\ p\equiv n(\text{mod}\ 12)\ \text{for}\ n=5\ \text{and}\ 7,\\ \frac{p+1}{12},\ \text{if}\ p\equiv 11(\text{mod}\ 12). \end{cases} \end{equation} The (negative) Euler characteristic for $k$ prime is given by \begin{equation} \chi = \frac{k+1}{6}. \end{equation} We now consider the case of the Fricke group $\Gamma^{+}_{0}(p) = \Gamma^{*}_{0}(p)$, where $p$ is $1$ or a prime for which the genus of $\Gamma_{0}^{+}(p)$ is zero, i.e. $p\in\{1,2,3,5,7,11,13,17,19,23,29,31,41,47,59,71\}$.
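As a consistency check of these formulae, take $p = 11$: since $11\equiv 3\ (\text{mod}\ 4)$ and $11\equiv 2\ (\text{mod}\ 3)$ we have $\nu_{2} = \nu_{3} = 0$, together with $\nu_{0} = \mu = 12$ and $\nu_{\infty} = 2$, so that Theorem \ref{thm:genus} gives \begin{align} g(X(\Gamma_{0}(11))) = 1 + \frac{12}{12} - 0 - 0 - \frac{2}{2} = 1, \end{align} in agreement with the value $\tfrac{p+1}{12} = 1$ quoted above for $p\equiv 11\ (\text{mod}\ 12)$.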
\\ \begin{tcolorbox}[colback=magenta!5!white,colframe=gray!75!black,title=] \begin{lemma}[\cite{Choi}]\label{lemma:Fixed_points} The number of fixed points $\#$ of the Fricke involution $W_{p}$ on $X_{0}(p)$ is given by \begin{align} \begin{split} \# = \begin{cases} \tfrac{p-1}{6},\ &p\equiv 1(\text{mod}\ 12),\\ \tfrac{p+7}{6},\ &p\equiv 5(\text{mod}\ 12),\\ \tfrac{p+5}{6},\ &p\equiv 7(\text{mod}\ 12),\\ \tfrac{p+13}{6},\ &p\equiv 11(\text{mod}\ 12).\\ \end{cases} \end{split} \end{align} \end{lemma} \end{tcolorbox} \noindent The genus of $\Gamma_{0}^{+}(p)$ reads \begin{equation}\label{genus_Fricke} g(\Gamma^{+}_{0}(p)) = \frac{1}{4}\left(2g(\Gamma_{0}(p)) + 2 - \#\right). \end{equation} \subsection*{The dimension of spaces of forms of congruence subgroups} We state useful theorems that provide us with working formulae for the dimensions of the spaces of forms. Let $\mathcal{M}_{k}(\Gamma')$ and $\mathcal{S}_{k}(\Gamma')$ be the space of modular forms and the space of cusp forms of the congruence subgroup $\Gamma'$, respectively. \section*{Case I: $\Gamma'\subseteq\Gamma$} \begin{tcolorbox}[colback=blue!5!white,colframe=gray!75!black,title=] \begin{theorem}[\cite{Shimura}]\label{thm:space_dim_G} If $k\geq 2$ is even, then the dimensions of the space of cusp forms of $\Gamma'$ and the space of modular forms are \begin{align} \begin{split} \text{dim}\ \mathcal{S}_{k}(\Gamma') =& \delta_{2,k} + \frac{k-1}{12}\nu_{0}(\Gamma') + \left(\left\lfloor\left.\frac{k}{4}\right\rfloor\right. - \frac{k-1}{4}\right)\nu_{2}(\Gamma')\\ +& \left(\left\lfloor\left.\frac{k}{3}\right\rfloor\right. - \frac{k-1}{3}\right)\nu_{3}(\Gamma') - \frac{\nu_{\infty}(\Gamma')}{2},\\ \text{dim}\ \mathcal{M}_{k}(\Gamma') =& \text{dim}\ \mathcal{S}_{k}(\Gamma') + \nu_{\infty}(\Gamma') - \delta_{2,k}.
\end{split} \end{align} If $k\geq 3$ is odd and $-I\notin \Gamma'$, then \begin{align} \begin{split} \text{dim}\ \mathcal{S}_{k}(\Gamma') =& \frac{k-1}{12}\nu_{0}(\Gamma') + \left(\left\lfloor\left.\frac{k}{3}\right\rfloor\right. - \frac{k-1}{3}\right)\nu_{3}(\Gamma') - \frac{\nu_{\infty}^{\text{reg}}(\Gamma')}{2},\\ \text{dim}\ \mathcal{M}_{k}(\Gamma') =& \text{dim}\ \mathcal{S}_{k}(\Gamma') + \nu_{\infty}^{\text{reg}}(\Gamma'), \end{split} \end{align} and the spaces $\mathcal{M}_{k}(\Gamma')$ are trivial for odd $k$ when $-I\in \Gamma'$. \end{theorem} \end{tcolorbox} \noindent \section*{Case II: $\Gamma' = \Gamma_{H}(N)\subseteq\Gamma$} \begin{tcolorbox}[colback=blue!5!white,colframe=gray!75!black,title=] \begin{theorem}[\cite{Cohen-I}]\label{thm:space_dim_G_0} Consider the congruence subgroups $\Gamma'\subset\Gamma$ defined by \begin{equation} \Gamma_{H}(N) = \left\{\left.\begin{pmatrix} a & b\\ c & d \end{pmatrix}\in\text{SL}(2,\mathbb{Z})\right\vert c\equiv 0(\text{mod}\ N),\ a,\ d\in H\right\} \end{equation} for $H$ a subgroup of the multiplicative group $(\mathbb{Z}/N\mathbb{Z})^{*}$. This includes the two cases $\Gamma_{1}(N)$ and $\Gamma_{0}(N)$, corresponding to the groups $H = \{1\}$ and $H = (\mathbb{Z}/N\mathbb{Z})^{*}$, respectively. Then, the dimensions of the space of cusp forms and the space of modular forms read \begin{align} \begin{split} \text{dim}\ \mathcal{S}_{k}(\Gamma_{H}(N)) =& 2\left\lfloor\left. \frac{k}{3}\right\rfloor\right.-1 \ \text{if}\ k\geq 3,\\ \text{dim}\ \mathcal{M}_{k}(\Gamma_{H}(N)) =& 2\left\lfloor\left. \frac{k}{3}\right\rfloor\right.+1 \ \text{if}\ k\geq 1.
\end{split} \end{align} \end{theorem} \end{tcolorbox} \noindent \section*{Case III: $\Gamma' = \Gamma^{+}_{0}(N)$} \begin{tcolorbox}[colback=blue!5!white,colframe=gray!75!black,title=] \begin{theorem}[\cite{Choi}]\label{thm:space_dim_G*0} For $k> 2$ with $k$ an even integer, the dimension of the space of cusp forms of $\Gamma'$ is \begin{align} \begin{split} \text{dim}\ \mathcal{S}_{k}(\Gamma^{+}_{0}(2)) =& \begin{cases} \left\lfloor\left.\frac{k}{8}\right\rfloor\right. - 1,\ k\equiv 2\ (\text{mod}\ 8),\\ \left\lfloor\left.\frac{k}{8}\right\rfloor\right.,\ \ \ \ \ \ k\not\equiv 2\ (\text{mod}\ 8),\\ \end{cases}\\ \text{dim}\ \mathcal{S}_{k}(\Gamma^{+}_{0}(3)) =& \begin{cases} \left\lfloor\left.\frac{k}{6}\right\rfloor\right.-1,\ k\equiv 2,6(\text{mod}\ 12),\\ \left\lfloor\left.\frac{k}{6}\right\rfloor\right.,\ \ \ \ \ \ k\not\equiv 2,6(\text{mod}\ 12),\\ \end{cases}\\ \text{dim}\ \mathcal{S}_{k}(\Gamma^{+}_{0}(p)) =& \begin{cases} \left(\tfrac{p+5}{6}\right)\left\lfloor\left.\frac{k}{4}\right\rfloor\right. + \left\lfloor\left.\frac{k}{3}\right\rfloor\right. - \frac{k}{2},\ p\equiv 1,7(\text{mod}\ 12),\\ \left(\tfrac{p + 13}{6}\right)\left\lfloor\left.\frac{k}{4}\right\rfloor\right. - \frac{k}{2},\ \ \ \ p \equiv 5,11(\text{mod}\ 12), \end{cases} \end{split} \end{align} where the third expression is for $p>3$. The dimension of the space of modular forms is \begin{align} \text{dim}\ \mathcal{M}_{k}(\Gamma^{+}_{0}(p)) = 1 + \text{dim}\ \mathcal{S}_{k}(\Gamma^{+}_{0}(p)), \end{align} for $p\in\{1,2,3,5,7,11,13,17,19,23,29,31,41,47,59,71\}$. \end{theorem} \end{tcolorbox} \noindent We follow \cite{Junichi} for the basis decompositions of the spaces of modular forms and the valence formulae of the various Fricke and Hecke groups. \noindent \section{Other groups}\label{sec:other_groups} Our investigation of single-character solutions in the Hecke groups of levels $N\leq 12$ did not reveal any admissible solutions.
In the previous section, we found quasi-characters in $\Gamma_{0}(7)$. We see this repeated in the solutions of $\Gamma_{0}(2)$, which we then use to construct the admissible solutions of $\Gamma_{0}^{+}(2)$. After this, we consider $\Gamma_{0}(3)$ as an example and present the single-character analysis to elucidate some of the issues that render the solutions inadmissible at the other Hecke group levels. What about Fricke groups of other levels? It turns out that there are no admissible single-character solutions for $\Gamma_{0}^{+}(N)$ with $N\leq 12$ except at $N = 2,3,5,7$, which are all genus-zero Moonshine groups and are the first four prime divisors of $\mathbb{M}$. \input{Gamma_0_2} \input{Gamma_0_3} \input{Gamma_1_N} \section{Preliminaries} \subsection{Lightning review of the theory of modular forms} \subsubsection{The modular group and its cousins} \noindent We define the special linear group over the integers $\text{SL}(2,\mathbb{Z})$ as the set of all $2\times 2$ matrices with integer entries and unit determinant, i.e. \begin{equation} \text{SL}(2,\mathbb{Z}) = \left\{\left.\begin{pmatrix} a & b\\ c& d \end{pmatrix}\right\vert a,b,c,d\in\mathbb{Z},ad-bc=1\right\}. \end{equation} The projective special linear group over the integers (or the modular group $\text{PSL}(2,\mathbb{Z})$), on the other hand, is the special linear group quotiented by $\mathbb{Z}_{2}$, i.e. \begin{equation} \text{PSL}(2,\mathbb{Z}) = \text{SL}(2,\mathbb{Z})/\mathbb{Z}_{2} = \left\{\left.\begin{pmatrix} a & b\\ c& d \end{pmatrix}\right\vert a,b,c,d\in\mathbb{Z},ad-bc=1\right\}/\{\pm 1\}. \end{equation} We loosely refer to both $\text{PSL}(2,\mathbb{Z})$ and $\text{SL}(2,\mathbb{Z})$ as the modular group since they are equal up to a quotient by $\mathbb{Z}_{2}$.
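Note that the quotient by $\{\pm\mathbb{1}_{2}\}$ is natural here because $\pm\gamma$ define the same fractional linear transformation, \begin{align} \frac{(-a)\tau + (-b)}{(-c)\tau + (-d)} = \frac{a\tau + b}{c\tau + d}, \end{align} so it is $\text{PSL}(2,\mathbb{Z})$ that acts faithfully on the upper half-plane.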
The action of an element $\gamma = \left(\begin{smallmatrix}a & b\\ c & d \end{smallmatrix}\right)\in\text{SL}(2,\mathbb{Z})$ on the Riemann sphere $\widehat{\mathbb{C}}\equiv \mathbb{C}\cup \{i\infty\}$ is defined by a M\"obius transform given by \begin{align}\label{action_SL2Z} \begin{split} \gamma(\tau) \equiv& \frac{a\tau + b}{c\tau + d},\\ \gamma(\infty) =& \lim\limits_{y\to \infty}\frac{a(x+iy) + b}{c(x+iy) + d} = \begin{cases} \frac{a}{c},\ c\neq 0,\\ \infty,\ c=0. \end{cases} \end{split} \end{align} The modular group is generated by the following matrices \begin{align} I = \mathbb{1}_2,\ S = \begin{pmatrix} 0 & -1\\ 1 & 0 \end{pmatrix},\ T = \begin{pmatrix} 1 & 1\\ 0 & 1 \end{pmatrix}, \end{align} where $\mathbb{1}_{2}$ is the $2\times 2$ identity matrix. We denote the discrete subgroups of $\text{SL}(2,\mathbb{Z})$ as $\Gamma$ with the convention that $\Gamma(1) = \text{SL}(2,\mathbb{Z})$. The special set of discrete subgroups of our interest are the principal congruence subgroups of level $N$ denoted by $\Gamma(N)$, where $N\in\mathbb{N}$, defined as follows \begin{align} \begin{split} \Gamma(N) =& \{\gamma\in\Gamma(1)\vert\ \gamma = \mathbb{1}_{2}\ (\text{mod}\ N)\}\\ =& \left.\left\{\begin{pmatrix} a & b\\ c & d \end{pmatrix}\in\text{SL}(2,\mathbb{Z})\right\vert a\equiv d\equiv 1\ (\text{mod}\ N),\ b\equiv c\equiv 0\ (\text{mod}\ N)\right\}. \end{split} \end{align} The index of the principal congruence subgroup $\Gamma(N)$ in $\text{SL}(2,\mathbb{Z})$ refers to the number of left (or right) cosets of $\Gamma(N)$ in $\text{SL}(2,\mathbb{Z})$ and is given by \begin{align} \mu = \left[\text{SL}(2,\mathbb{Z}):\Gamma(N)\right]= N^{3}\prod\limits_{p\vert N}(1-p^{-2}), \end{align} where $p\vert N$ denotes the prime divisors of $N$. Every congruence subgroup $\Gamma$ has a finite index in $\text{SL}(2,\mathbb{Z})$.
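Both index formulae — $\mu$ above and the Hecke-group index $\mu_{0}$ appearing shortly — are straightforward to evaluate. A minimal Python sketch (function names are ours):

```python
def prime_divisors(N: int) -> list:
    """Distinct prime divisors of N by trial division."""
    p, out = 2, []
    while p * p <= N:
        if N % p == 0:
            out.append(p)
            while N % p == 0:
                N //= p
        p += 1
    if N > 1:
        out.append(N)
    return out

def index_principal(N: int) -> int:
    """mu = [SL(2,Z) : Gamma(N)] = N^3 prod_{p|N} (1 - p^{-2})."""
    mu = N ** 3
    for p in prime_divisors(N):
        mu = mu * (p * p - 1) // (p * p)
    return mu

def index_hecke(N: int) -> int:
    """mu_0 = [SL(2,Z) : Gamma_0(N)] = N prod_{p|N} (1 + p^{-1})."""
    mu = N
    for p in prime_divisors(N):
        mu = mu * (p + 1) // p
    return mu
```

For a prime $p$ this reproduces $[\text{SL}(2,\mathbb{Z}):\Gamma_{0}(p)] = p+1$, matching the $p+1$ right cosets listed below.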
There exist several other subgroups of $\Gamma(1)$ that are defined below \begin{align}\label{congruent_subgroup_definitions} \begin{split} \Gamma_{0}(N) =& \left\{\left.\begin{pmatrix} a & b\\ c & d \end{pmatrix}\in\text{SL}(2,\mathbb{Z})\right\vert c\equiv 0\ (\text{mod}\ N)\right\} = \left\{\begin{pmatrix} * & *\\ 0 & * \end{pmatrix}\ \text{mod}\ N\right\},\\ \Gamma^{0}(N) =& \left\{\begin{pmatrix} * & 0\\ * & * \end{pmatrix}\ \text{mod}\ N\right\},\ \ \ \Gamma_{0}^{0}(N) = \left\{\begin{pmatrix} * & 0\\ 0 & * \end{pmatrix}\ \text{mod}\ N\right\},\\ \Gamma_{1}(N) =& \left\{\begin{pmatrix} 1 & *\\ 0 & 1 \end{pmatrix}\ \text{mod}\ N\right\},\ \ \ \Gamma^{1}(N) = \left\{\begin{pmatrix} 1 & 0\\ * & 1 \end{pmatrix}\ \text{mod}\ N\right\}, \end{split} \end{align} where ``$*$'' stands for ``unspecified''. It follows from the definitions that $\Gamma(N)\subset\Gamma_{1}(N)\subset\Gamma_{0}(N)\subset\text{SL}(2,\mathbb{Z})$, and analogously, $\Gamma(N)\subset\Gamma^{1}(N)\subset\Gamma^{0}(N)\subset\text{SL}(2,\mathbb{Z})$. We shall refer to the groups $\Gamma_{0}(N)$ as Hecke groups throughout this paper. The index of the Hecke group in the modular group is given by \begin{align}\label{index_Hecke} \mu_{0} = \left[\text{SL}(2,\mathbb{Z}):\Gamma_{0}(N)\right]= N\prod\limits_{p\vert N}(1+p^{-1}). \end{align} For $p\in\mathbb{P}$, the right cosets of the Hecke group in $\text{SL}(2,\mathbb{Z})$ are given by \begin{align}\label{right_cosets} \left\{\Gamma_{0}(p)\right\}\cup \left\{\Gamma_{0}(p)ST^{k}:\ 0\leq k<p\right\}.
\end{align} For a positive integer $N$, the Fricke group denoted by $\Gamma^{+}_{0}(N)$ is a supergroup of the Hecke group defined as follows \begin{equation}\label{Fricke} \Gamma_{0}^{+}(N) \equiv \Gamma_{0}(N)\cup \Gamma_{0}(N)W_{N}, \end{equation} where $W_{N}$ is called the Fricke involution with the following action \begin{align}\label{Fricke_involution} \begin{split} W_{N} = \frac{1}{\sqrt{N}}\begin{pmatrix} 0 & -1\\ N & 0 \end{pmatrix},\\ W_{N}:\tau\mapsto -\frac{1}{N\tau}. \end{split} \end{align} It follows from the definition above that special points on the fundamental domain of the Fricke groups are mapped to themselves. We refer to these points as Fricke involution points, defined as $\tau_{F} = \tfrac{i}{\sqrt{N}}$, where $N$ corresponds to the level of the group. The Fricke involution is a special case of the Atkin-Lehner involution defined as follows \begin{align} A_{\ell} = \frac{1}{\sqrt{\ell}}\begin{pmatrix} \ell\cdot a & b\\ N c& \ell\cdot d \end{pmatrix}, \end{align} for $a,b,c,d\in\mathbb{Z}$ such that $\text{det}\ A_{\ell} =1$, and where the divisor $\ell$ and $\tfrac{N}{\ell}$ are coprime. The group generated by a Hecke group of level $N$ and the $A_{\ell}$ involutions is called the Atkin-Lehner group, denoted by $\Gamma_{0}^{*}(N)$. When $N\in\mathbb{P}$, the Atkin-Lehner group is equal to the Fricke group, i.e. $\Gamma_{0}^{*}(N) = \Gamma_{0}^{+}(N)$. This distinction becomes important when we consider non-prime groups of higher levels in subsequent sections. Finding $\mu_{0}^{+}$, the Fricke group index, is a bit involved and hence, we only present the results in this work when relevant and direct the reader to \cite{Umasankar:2022kzs} for the specifics. \subsubsection{An invitation to modular forms} \noindent The word ``modular'' refers to the moduli space of complex curves (i.e. Riemann surfaces) of genus $1$. Such a curve can be represented as $\mathbb{C}/\Lambda$ where $\Lambda\subset\mathbb{C}$ is a lattice.
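Returning briefly to the Fricke involution \ref{Fricke_involution}: its two defining properties — that $\tau_{F} = \tfrac{i}{\sqrt{N}}$ is a fixed point, and that $W_{N}$ squares to the identity as a M\"obius map (as a matrix, $W_{N}^{2} = -\mathbb{1}_{2}$, which acts trivially in $\text{PSL}$) — are easy to check numerically. A small sketch (the function name is ours):

```python
import cmath

def fricke(N: int, tau: complex) -> complex:
    """Mobius action of the Fricke involution W_N : tau -> -1/(N*tau)."""
    return -1 / (N * tau)

# Fixed point tau_F = i/sqrt(N), here for the admissible level N = 5
N = 5
tau_F = 1j / cmath.sqrt(N)
tau = 0.3 + 0.8j  # arbitrary test point in the upper half-plane
```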
Let us first consider the following definition of a modular form $f$ of weight $k$ \begin{equation}\label{modulardef} f\left(\frac{az+b}{cz+d}\right) = (cz+d)^{k}f(z),\ \ \text{Im}(z)>0,\ \begin{pmatrix} a & b\\ c & d \end{pmatrix}\in\text{SL}(2,\mathbb{Z}). \end{equation} We will soon return to this definition and make it more rigorous, but for the moment let us use it as a working definition to get further along. Consider now the case when the weight is null, i.e. $k=0$. We have \begin{equation}\label{modular_def_k=0} f\left(\frac{az+b}{cz+d}\right) = f(z). \end{equation} This tells us that the modular form $f$ is invariant under the action of $\text{SL}(2,\mathbb{Z})$. The implication of considering $\text{Im}(z)>0$ is that $f$ is a function on the upper-half plane $\mathbb{H}^{2}$ that is invariant under this action. Formally, the upper-half plane is defined as follows \begin{equation} \mathbb{H}^{2} = \{z\in\mathbb{C}\vert \text{Im}(z)>0\} = \{x+iy\in\mathbb{C}\vert x,y\in\mathbb{R},y>0\}. \end{equation} But we are to ask ourselves why we should even be interested in such functions. The reason is that these are in fact functions of elliptic curves. Here, we have a function that goes from the set of elliptic curves to the complex numbers, i.e. \begin{equation} f:\{E\}\to \mathbb{C}. \end{equation} Note that we have used $\{E\}$, indicating a set of elliptic curves as opposed to just one, since in the latter case $f$ would be referred to as an elliptic function. Let $\omega_{1}, \omega_{2}$ be two periods. We now want to use these to probe the set of all integer linear combinations, i.e. $n\omega_{1} + m\omega_{2}$ with $n,m\in\mathbb{Z}$. These linear combinations form a lattice $\Lambda$.
Formally, a lattice in the complex plane $\mathbb{C}$ is a subgroup $\Lambda\subset\mathbb{C}$ of the following form \begin{equation} \Lambda\equiv \langle\omega_{1},\omega_{2}\rangle = \left\{n\omega_{1} + m\omega_{2}\ \vert\ m,n\in\mathbb{Z}\right\}, \end{equation} where $\omega_{1},\omega_{2}\in\mathbb{C}$ are $\mathbb{R}$-linearly independent. An elliptic function is one that is periodic with respect to the lattice $\langle\omega_{1},\omega_{2}\rangle$, i.e. for a function $g(z)$, being elliptic implies \begin{equation} g(z) = g(\omega_{1} + z) = g(\omega_{2} + z). \end{equation} Consider the function $g(\omega_{1},\omega_{2})$. Now, rescaling the lattice as $\omega_{1}\sim \lambda\omega_{1}$ and $\omega_{2}\sim \lambda\omega_{2}$ should not change the elliptic curve. Thus, we have \begin{equation} g(\omega_{1},\omega_{2}) = g(\lambda\omega_{1},\lambda\omega_{2}),\ \lambda\in\mathbb{C}^{\times}, \end{equation} where $\mathbb{C}^{\times} = \mathbb{C}\backslash\{0\}$ denotes the group of units of the complex numbers $\mathbb{C}$. Two lattices $\Lambda$ and $\Tilde{\Lambda}$ are called homothetic if $\Tilde{\Lambda} = \lambda\Lambda$ for some $\lambda\in\mathbb{C}^{\times}$, in which case we write $\Tilde{\Lambda} \simeq \Lambda$. We can also consider the change \begin{align}\label{shift_omega} \begin{split} \omega_{1} \to \Tilde{\omega}_{1} =& a\omega_{1} + b\omega_{2},\\ \omega_{2}\to\Tilde{\omega}_{2} =& c\omega_{1} + d\omega_{2}, \end{split} \end{align} where $\left(\begin{smallmatrix}a & b\\ c & d\end{smallmatrix}\right)\in\text{GL}(2,\mathbb{Z})$ with $\text{det} = \pm 1$. We can now ask ourselves if the following claim holds \begin{equation}\label{elliptic_claim} g(\Tilde{\omega}_{1},\Tilde{\omega}_{2})\overset{!}{=}g(\omega_{1},\omega_{2}). \end{equation} To simplify notation, we rid ourselves of two variables and just consider \begin{equation} f(z) = g(1,z), \end{equation} where we have chosen a basis with $\omega_{1} =1$ and $\omega_{2} = z$.
We can reinterpret $z$ to be $\tfrac{\omega_{2}}{\omega_{1}}$ and write \begin{equation} g(\omega_{1},\omega_{2}) = f\left(\frac{\omega_{2}}{\omega_{1}}\right). \end{equation} With this map between $f$ and $g$, notice that the invariance \ref{modular_def_k=0} translates to the claim \ref{elliptic_claim} holding good. With a shift of the form \ref{shift_omega}, here taken with $\Tilde{\omega}_{1} = d\omega_{1} + c\omega_{2}$ and $\Tilde{\omega}_{2} = b\omega_{1} + a\omega_{2}$, we have \begin{align} \begin{split} f\left(\frac{az+b}{cz+d}\right) =& f\left(\frac{a\omega_{2}+b\omega_{1}}{c\omega_{2} + d\omega_{1}}\right) = g(c\omega_{2} + d\omega_{1},\ a\omega_{2} + b\omega_{1}) = g(\Tilde{\omega}_{1},\Tilde{\omega}_{2}),\\ f(z) =& f\left(\frac{\omega_{2}}{\omega_{1}}\right) = g(\omega_{1},\omega_{2}),\\ f\left(\frac{az+b}{cz+d}\right) =& f(z)\Longleftrightarrow g(\Tilde{\omega}_{1},\Tilde{\omega}_{2}) = g(\omega_{1},\omega_{2}). \end{split} \end{align} We can now search for functions $f$ that satisfy \ref{modular_def_k=0} such that these are holomorphic and not constant. As a first step, let us search for $1$-forms instead that satisfy \begin{equation} f(z)dz = f\left(\frac{az+b}{cz+d}\right)d\left(\frac{az+b}{cz+d}\right). \end{equation} Working this out gives us the definition of a modular function of weight $2$, \begin{equation} f\left(\frac{az+b}{cz+d}\right) = (cz+d)^{2}f(z). \end{equation} It is now natural to see how modular forms of weight $k$ are motivated. We look at forms of the type $f(z)(dz)^{\tfrac{k}{2}}$ and ask when this is left invariant. This leads us to the definition \ref{modulardef}. Consider now the forms $f_{1}(z)(dz)^{\tfrac{k}{2}}$ and $f_{2}(z)(dz)^{\tfrac{k}{2}}$. Since these are invariant by themselves, so should their ratio be and hence, we have \begin{equation} \frac{f_{1}(z)(dz)^{\tfrac{k}{2}}}{f_{2}(z)(dz)^{\tfrac{k}{2}}} = \frac{f_{1}(z)}{f_{2}(z)}, \end{equation} which is a ratio of two modular forms of the same weight. This is called a modular function, and it inherits the properties of the modular forms in the ratio.
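The weight-$2$ statement above — that $d\left(\tfrac{az+b}{cz+d}\right) = (cz+d)^{-2}\,dz$ once $ad-bc=1$ — can be verified symbolically. A small sketch using sympy:

```python
import sympy as sp

a, b, c, d, z = sp.symbols('a b c d z')
gamma_z = (a * z + b) / (c * z + d)

# d(gamma z)/dz times (cz + d)^2 collapses to the determinant ad - bc,
# which equals 1 on SL(2,Z), giving the weight-2 automorphy factor.
jacobian = sp.simplify(sp.diff(gamma_z, z) * (c * z + d) ** 2)
```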
This tells us that in order to find invariant modular functions we are to probe for modular forms of the same weight and then take their ratio. We will call a function $h$ of weight $k$ a homogeneous function of a lattice $\Lambda$ if it satisfies \begin{equation}\label{homo_function} h(\lambda\Lambda) = \lambda^{-k}h(\Lambda),\ \ \lambda\in\mathbb{C}^{\times},\ \Lambda\subset\mathbb{C}. \end{equation} In terms of the basis $\{\omega_{1},\omega_{2}\}$, the definition \ref{homo_function} reads \begin{equation} h(\lambda\omega_{1},\lambda\omega_{2}) = \lambda^{-k}h(\omega_{1},\omega_{2}). \end{equation} Invariance under the shift \ref{shift_omega} demands \begin{equation} h(a\omega_{1}+b\omega_{2}, c\omega_{1}+d\omega_{2}) = h(\omega_{1},\omega_{2}). \end{equation} Yet again, defining $f(z) = h(1,z)$ so that $h(\omega_{1},\omega_{2}) = \omega_{1}^{-k}f\left(\tfrac{\omega_{2}}{\omega_{1}}\right) = \omega_{1}^{-k}f(z)$, we obtain the functional equation \ref{modulardef} for $f(z)$. Thus, we have just seen the following three different ways to understand and define modular forms of weight $k$: \begin{enumerate} \item They are functions that correspond to the functional equation \ref{modulardef}. \item They are functions that correspond to invariant differential forms $f(z)(dz)^{\tfrac{k}{2}}$, and \item They can be thought of as homogeneous functions of weight $k$ on lattices spanned by $\langle\omega_{1},\omega_{2}\rangle$. \end{enumerate} \subsubsection{Formal definitions} \noindent\begin{tcolorbox}[colback=gray!5!white,colframe=teal!75!black,title=Fundamental domain] The fundamental domain for the action of the group $\text{SL}(2,\mathbb{Z})$ on $\mathbb{H}^{2}$ is denoted by $\mathcal{F}$ and is defined by \begin{align}\label{fundamental_domain_SL2Z} \mathcal{F} \equiv \left\{-\frac{1}{2}<\text{Re}(\tau)<\frac{1}{2},\ \vert\tau\vert>1\right\}\cup\left\{-\frac{1}{2}\leq \text{Re}(\tau)\leq 0,\ \vert\tau\vert=1\right\}.
\end{align} \end{tcolorbox} \noindent The fundamental domain $\mathcal{F}$ is visualized in figure \ref{fig:SL2Z_fundamental_domain}, where the elliptic points are at $i$ and $\rho\equiv \tfrac{-1+i\sqrt{3}}{2}$. \begin{figure}[htb!] \centering \includegraphics[width = 13.5cm]{Preliminaries/SL2,Z_identifications.png} \caption{The fundamental domain of $\text{SL}(2,\mathbb{Z}) = \langle S,T\rangle$, as defined in \ref{fundamental_domain_SL2Z}, with elliptic points at $i,\rho$, and a cusp at $\tau = i\infty$. The points on vertical strips in the shaded region are identified by $\tau\mapsto \tau + 1$ and those on the circular segment about the point $\tau = i$ are identified by $\tau\mapsto -\tfrac{1}{\tau}$.} \label{fig:SL2Z_fundamental_domain} \end{figure} \\ Identifying points on the boundary of $\mathcal{F}$ by $\tau\mapsto \tau + 1$ and $\tau\mapsto-\tfrac{1}{\tau}$ yields a cylinder-like shape that is topologically homeomorphic to an open disk. Adding the cusp at $i\infty$ helps us to turn the open disk into a sphere. To better visualize the cusp, we can simply remap the fundamental domain using the transform $\tau\mapsto-\tfrac{1}{\tau}$ to map the cusp at $i\infty$ to $0$. This is shown in figure \ref{fig:SL2Z_cusp}. \begin{figure}[htb!] \centering \includegraphics[width = 13.5cm]{Preliminaries/SL2Z_cusp} \caption{The equivalent fundamental domain (right) after the application of the $S$-transform to all the points in the fundamental domain (left).} \label{fig:SL2Z_cusp} \end{figure} \\ \begin{tcolorbox}[colback=gray!5!white,colframe=teal!75!black,title=Modular functions and modular forms] Let $\gamma= \left(\begin{smallmatrix} a & b\\ c & d \end{smallmatrix}\right)$, acting by $\gamma: \tau\mapsto \frac{a\tau + b}{c\tau + d}$, vary throughout $\Gamma$.
If $f$ is meromorphic on $\mathbb{H}^{2}$ and there exists $k\in\mathbb{Z}$ such that \\ \begin{equation}\label{modular_form_def} f(\gamma(\tau)) = (c\tau + d)^{k}f(\tau),\ \forall \gamma\in\Gamma \end{equation} \\ and, in addition, $f$ is meromorphic at $\infty$, we say that $f$ is a modular function of weight $k$ for $\Gamma$. If $f$ is holomorphic on $\mathbb{H}^{2}$ and at $\infty$, we say that $f$ is a modular form of weight $k$ for $\Gamma$, and if it in addition vanishes at $\infty$, $f$ is a cusp form of weight $k$ for $\Gamma$. \end{tcolorbox} \noindent A cusp for a discrete subgroup $\Gamma$ is defined to be $\infty$ or any rational number $\tfrac{a}{c}$. From \ref{action_SL2Z}, we see that $\tfrac{a}{c}$ is clearly equivalent to $\infty$ under the action of $\text{SL}(2,\mathbb{Z})$. A weakly holomorphic modular form of weight $k$ for $\Gamma\subseteq\text{SL}(2,\mathbb{Z})$ is a holomorphic function in $\mathbb{H}^{2}$ that satisfies \ref{modular_form_def} and that has a $q$-expansion of the form \\ \begin{equation}\label{f(tau)} f(\tau) = \sum\limits_{n\geq n_{0}}a(n)q^{n}, \end{equation} \\ where $q = e^{2\pi i\tau}$ and $n_{0} = \text{ord}_{\infty}(f)\in\mathbb{Z}$ is the order of $f$ at $\infty$, i.e. the smallest integer $n$ for which $a(n)\neq 0$. In particular, a weakly holomorphic form is allowed a pole of finite order at the cusp $\infty$, while remaining holomorphic in the interior of $\mathbb{H}^{2}$. Based on the value of $n_{0}$, we can classify $f(\tau)$ as follows \begin{align} \begin{split} f(\tau) = \begin{cases} \text{holomorphic},\ &\text{if}\ n_{0}\geq 0,\\ \text{cusp form},\ &\text{if}\ n_{0}\geq 1.
\end{cases} \end{split} \end{align} The space of all holomorphic modular forms of weight $k$ that belong to a group $\Gamma$ is denoted by $\mathcal{M}_{k}(\Gamma)$, the space of all cusp forms is denoted by $\mathcal{S}_{k}(\Gamma)$, and the space of all weakly holomorphic modular forms is denoted by $\mathcal{M}_{k}^{!}(\Gamma)$. Any non-zero $f\in\mathcal{M}^{!}_{k}(\text{SL}(2,\mathbb{Z}))$ satisfies the following valence formula \begin{equation}\label{valence_formula_SL2Z} \text{ord}_{\infty}(f) + \frac{1}{2}\text{ord}_{i}(f) + \frac{1}{3}\text{ord}_{\rho}(f) + \sum\limits_{\substack{p\in\text{SL}(2,\mathbb{Z})\backslash\mathbb{H}^{2}\\ p\neq i,\rho}}\text{ord}_{p}(f) = \frac{k}{12}, \end{equation} where the sum runs over points of the quotient $\text{SL}(2,\mathbb{Z})\backslash\mathbb{H}^{2}$, which may be identified with the fundamental domain $\mathcal{F}$ defined in \ref{fundamental_domain_SL2Z}. The right-hand side of the equation tells us the number of zeros inside $\mathcal{F}$ which, for a general group $\Gamma$, is given as follows \begin{align}\label{no_zeros} \# = \text{weight}\cdot\frac{\text{index}}{12}. \end{align} Since the index of $\text{SL}(2,\mathbb{Z})$ in itself is $1$, for a weight $k$ modular form the number of zeros is $\tfrac{k}{12}$. \begin{tcolorbox}[colback=gray!5!white,colframe=teal!75!black,title=Dimension of the space of modular and cusp forms] If $k\geq 4$ is an even integer, then the dimension of the space of cusp forms and modular forms of $\text{SL}(2,\mathbb{Z})$ is \begin{align}\label{dim_SL2Z} \begin{split} \text{dim}\ \mathcal{S}_{k}(\text{SL}(2,\mathbb{Z})) =& \begin{cases} \left\lfloor\frac{k}{12}\right\rfloor - 1,\ \text{if}\ k\equiv 2\ (\text{mod}\ 12),\\ \left\lfloor\frac{k}{12}\right\rfloor,\ \ \ \ \ \ \text{if}\ k\not\equiv 2\ (\text{mod}\ 12). \end{cases}\\ \text{dim}\ \mathcal{M}_{k}(\text{SL}(2,\mathbb{Z})) =& 1 + \text{dim}\ \mathcal{S}_{k}(\text{SL}(2,\mathbb{Z})).
\end{split} \end{align} \end{tcolorbox} \subsubsection{The Eisenstein series} \noindent Consider the Weierstrass elliptic function, $\wp$, defined as follows \begin{align}\label{wp_def} \wp(z,\omega_{1},\omega_{2})\equiv \frac{1}{z^{2}} + \sum\limits_{\substack{m,n\in\mathbb{Z}\\ (m,n)\neq (0,0)}}\left(\frac{1}{(z-m\omega_{1} - n\omega_{2})^{2}} - \frac{1}{(m\omega_{1} + n\omega_{2})^{2}}\right), \end{align} where the first piece inside the sum is invariant under shifts of $z$ by $\omega_{1}$ and $\omega_{2}$ once summed over, and the second piece is added in by hand as a fudge factor that ensures convergence of the sum. Note that the term $(m\omega_{1} + n\omega_{2})^{-2}$ diverges when $(m,n) = (0,0)$, and it is for this reason that we have excluded this case in the sum and added the $z^{-2}$ at the beginning. The $\wp$-function is homogeneous, i.e. \begin{align} \wp(\lambda z,\lambda\omega_{1},\lambda\omega_{2}) = \lambda^{-2}\wp(z,\omega_{1},\omega_{2}). \end{align} We can now use the above equation to construct modular forms, and to do this we first require the Laurent series expansion in $z$ of the $\wp$-function, \begin{align} \wp(z) = \frac{1}{z^{2}} + a_{2}z^{2} + a_{4}z^{4} + \ldots \end{align} Here, the coefficients $a_{2} = a_{2}(\omega_{1},\omega_{2})$ and $a_{4} = a_{4}(\omega_{1},\omega_{2})$ depend on the lattice. Performing a rescaling by $\lambda$, we get \begin{align} \begin{split} \wp(z)\mapsto \wp(\lambda z) =& \frac{1}{(\lambda z)^{2}} + a_{2}(\lambda\omega_{1},\lambda\omega_{2})(\lambda z)^{2} + a_{4}(\lambda\omega_{1},\lambda\omega_{2})(\lambda z)^{4} + \ldots\\ =& \lambda^{-2}\left(\frac{1}{z^{2}} + a_{2}(\omega_{1},\omega_{2})z^{2} + a_{4}(\omega_{1},\omega_{2})z^{4} + \ldots\right). \end{split} \end{align} Homogeneity of $\wp(z)$ tells us that the Laurent coefficients $a_{2}(\omega_{1},\omega_{2}),a_{4}(\omega_{1},\omega_{2}),\ldots$ are also homogeneous, but of different degrees, i.e.
\begin{align} \begin{split} a_{2}:\ a_{2}(\lambda\omega_{1},\lambda\omega_{2})=& \lambda^{-4}a_{2}(\omega_{1},\omega_{2}),\\ a_{4}:\ a_{4}(\lambda\omega_{1},\lambda\omega_{2})=& \lambda^{-6}a_{4}(\omega_{1},\omega_{2}),\\ &\vdots \end{split} \end{align} Thus, in general, the coefficients $a_{2}(1,\tau),a_{4}(1,\tau),a_{6}(1,\tau),\ldots$ are modular forms of weight $4,6,8,\ldots$ respectively. Now, expanding the first piece inside the sum \ref{wp_def}, we get \\ \begin{align} \frac{1}{(z-m\omega_{1}-n\omega_{2})^{2}} = \frac{1}{(m\omega_{1} + n\omega_{2})^{2}} + \frac{2z}{(m\omega_{1} + n\omega_{2})^{3}} + \frac{3z^{2}}{(m\omega_{1} + n\omega_{2})^{4}} + \ldots \end{align} From this we see that the coefficient of $z^{k-2}$ in $\wp(z,\omega_{1},\omega_{2})$ is $\text{constant}\times\sum_{(m,n)\neq (0,0)}(m\omega_{1} + n\omega_{2})^{-k}$. Fixing $(\omega_{1},\omega_{2}) = (1,\tau)$, we obtain the definition of the holomorphic Eisenstein series (of weight $k$), \begin{align} G_{k}(\tau) \equiv \sum\limits_{(m,n)\neq(0,0)}\frac{1}{(m + n\tau)^{k}}. \end{align} The sum converges for even $k\geq 4$, and $G_{k}$ is invariant under the T-transform, $T:\tau\to \tau + 1$. We note that the Eisenstein series is defined only for even $k$ because if $k$ were odd then the series would be null-valued, since all the terms cancel out in pairs. Thanks to this periodicity, we can expand $G_{k}(\tau)$ as a function of the form $c_{0} + c_{1}q + c_{2}q^{2} + \ldots$, where $q = e^{2\pi i\tau}$ and $\{c_{i}\}$ are the Fourier coefficients which are to be found. To fix these coefficients we visualize the sum $G_{k}(\tau)$ on the upper-half plane as shown in figure \ref{fig:Eisenstein_series}. The real line can be partitioned into $(-\infty,0)\cup(0,\infty)$, which correspond to $n=0,m<0$ and $n=0,m>0$ respectively. The origin $(m,n) = (0,0)$ is excluded from the sum.
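The geometric-series expansion of $(z - m\omega_{1} - n\omega_{2})^{-2}$ quoted above can be checked symbolically, writing $w = m\omega_{1} + n\omega_{2}$. A small sympy sketch:

```python
import sympy as sp

z, w = sp.symbols('z w', nonzero=True)
# Taylor expansion of 1/(z - w)^2 around z = 0, up to (and excluding) z^3:
# it should reproduce 1/w^2 + 2z/w^3 + 3z^2/w^4
expansion = sp.series(1 / (z - w) ** 2, z, 0, 3).removeO()
expected = 1 / w ** 2 + 2 * z / w ** 3 + 3 * z ** 2 / w ** 4
```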
The contribution of the strip $(0,\infty)$ to the sum is $\tfrac{1}{1^{k}} + \tfrac{1}{2^{k}} + \ldots = \zeta(k)$ and similarly, the contribution of the strip $(-\infty,0)$ to the sum is also $\zeta(k)$. Next, the strip above the real line with points $\ldots, \tau-2,\tau-1,\tau, \tau+1,\tau+2,\ldots$ contributes $\ldots + \tfrac{1}{(\tau - 1)^{k}} + \tfrac{1}{\tau^{k}} + \tfrac{1}{(\tau + 1)^{k}} + \ldots$. The contributions of the other strips can be found by simply replacing $\tau$ by $n\tau$ with $n\in\mathbb{Z}$, $n\neq 0$. All we are to do now is to evaluate the following two sums: \begin{align*} (1)\ \sum\limits_{m\in\mathbb{Z}}\frac{1}{(m + \tau)^{k}},\ \ \ \ \ \ \ \ \ (2)\ \zeta(k) = \sum\limits_{m>0}\frac{1}{m^{k}}. \end{align*} \\ \begin{figure} \centering \includegraphics[width=12cm]{Preliminaries/Eisenstein_visualized.png} \caption{Visualizing the Eisenstein series.} \label{fig:Eisenstein_series} \end{figure} \\ Starting with the first sum, we turn to the function $\tfrac{\pi}{\tan(\pi \tau)}$ which has poles at $\ldots, -2,-1,0,1,2,\ldots$, all of which have unit residue. This function can be expressed as a sum of partial fractions as shown below \begin{align} \frac{\pi}{\tan(\pi \tau)} = \frac{1}{\tau} + \sum\limits_{m\neq 0}\left(\frac{1}{\tau - m} + \frac{1}{m}\right). \end{align} The term $m^{-1}$ plays the role of the fudge factor here that ensures convergence, while the pole at $m=0$ is written out separately as $\tfrac{1}{\tau}$ since the fudge factor would blow up there. Evaluating the sum, we get \begin{align} \frac{\pi}{\tan(\pi \tau)} = -i \pi \left(\frac{1 + e^{2\pi i\tau}}{1 - e^{2\pi i\tau}}\right) = -i\pi \left(1 + 2q + 2q^{2} + \ldots\right). \end{align} Taking the derivative $\tfrac{d}{d\tau}$ $(k-1)$-times, with $\tfrac{1}{2\pi i}\tfrac{d}{d\tau} = q\tfrac{d}{dq}$, we arrive at (for even $k$) \begin{align} (k-1)!\sum\limits_{m\in\mathbb{Z}}\frac{1}{(\tau - m)^{k}} = (-2\pi i)^{k}\left(1^{k-1}q + 2^{k-1}q^{2} + \ldots\right). \end{align} This fixes the first sum.
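The partial-fraction representation of $\pi/\tan(\pi\tau)$ is easy to test numerically by pairing the $m$ and $-m$ terms, whereupon the $m^{-1}$ counterterms cancel: $\tfrac{1}{\tau-m} + \tfrac{1}{\tau+m} = \tfrac{2\tau}{\tau^{2}-m^{2}}$. A minimal sketch:

```python
import cmath

def cot_sum(tau: complex, M: int = 100_000) -> complex:
    """Truncated partial-fraction sum 1/tau + sum_{m=1}^{M} 2*tau/(tau^2 - m^2);
    the tail falls off like 1/M."""
    s = 1 / tau
    for m in range(1, M + 1):
        s += 2 * tau / (tau * tau - m * m)
    return s

tau = 0.3 + 0.7j  # arbitrary point in the upper half-plane
closed_form = cmath.pi * cmath.cos(cmath.pi * tau) / cmath.sin(cmath.pi * tau)
```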
We now turn to the second sum, which is nothing but the Riemann $\zeta$-function with $k$ even. It can be shown that the $\zeta$-function can be expressed in terms of the Bernoulli numbers $B_{k}$ as follows \begin{align} \zeta(k) = (-1)^{\frac{k}{2} +1}\frac{B_{k}}{2}\frac{(2\pi)^{k}}{k!}. \end{align} The Bernoulli numbers $B_{k}$ are defined as the coefficients in the following power series \begin{equation} \frac{x}{e^{x}-1} = \sum\limits_{k=0}^{\infty}B_{k}\frac{x^{k}}{k!},\ \ \vert x\vert<2\pi. \end{equation} They have the property that $B_{2k+1}=0$ for $k\geq 1$. The first few Bernoulli numbers are tabulated in table \ref{tab:Bernoulli}. \\ \begin{table}[htb!] \centering \begin{tabular}{||c|c|c||} \hline Weight $k$ & Bernoulli number $B_{k}$ & $A_{k} = -\tfrac{2k}{B_{k}}$\\[0.5ex] \hline\hline $0$ & $+1$ & $0$\\[0.5ex] $1$ & $-\frac{1}{2}$ & $+4$\\[0.5ex] $2$ & $+\frac{1}{6}$ & $-24$\\[0.5ex] $4$ & $-\frac{1}{30}$ & $+240$\\[0.5ex] $6$ & $+\frac{1}{42}$ & $-504$\\[0.5ex] $8$ & $-\frac{1}{30}$ & $+480$\\[0.5ex] $10$ & $+\frac{5}{66}$ & $-264$\\[0.5ex] $12$ & $-\frac{691}{2730}$ & $+\frac{65520}{691}$\\[0.5ex] $14$ & $+\frac{7}{6}$ & $-24$\\[1ex] \hline \end{tabular} \caption{The first few Bernoulli numbers and the values of $A_{k}$ calculated from them.} \label{tab:Bernoulli} \end{table} Putting the results of the sums together, we finally have \begin{align} \begin{split} G_{k}(\tau) =& 2\zeta(k) + 2\frac{(2\pi i)^{k}}{(k-1)!}\sum\limits_{n\geq 1}\sum\limits_{d\geq 1}d^{k-1}q^{nd}\\ =& (-1)^{\frac{k}{2} +1}\frac{(2\pi)^{k}B_{k}}{k!} + 2\frac{(2\pi i)^{k}}{(k-1)!}\sum\limits_{n\geq 1}\sigma_{k-1}(n)q^{n}, \end{split} \end{align} where the factor $2$ accounts for the fact that we are summing over positive and negative $n$, the sum over $d$ is a result of the sum over $m$, and $\sigma_{k-1}(n) = \sum\limits_{d\vert n}d^{k-1}$.
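The Bernoulli numbers, the resulting values of $\zeta(k)$, and the quantities $A_{k}$ in table \ref{tab:Bernoulli} can all be regenerated from the recurrence $\sum_{j=0}^{m}\binom{m+1}{j}B_{j} = 0$ (for $m\geq 1$) that follows from the defining power series. A minimal sketch:

```python
from fractions import Fraction
from math import comb, factorial, pi

def bernoulli(n: int) -> Fraction:
    """B_n via sum_{j=0}^{m-1} C(m+1, j) B_j = -(m+1) B_m, with B_0 = 1
    (convention B_1 = -1/2, matching x/(e^x - 1))."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        s = sum(comb(m + 1, j) * B[j] for j in range(m))
        B.append(-s / (m + 1))
    return B[n]

def zeta_even(k: int) -> float:
    """zeta(k) = (-1)^{k/2 + 1} * (B_k / 2) * (2 pi)^k / k! for even k >= 2."""
    return (-1) ** (k // 2 + 1) * float(bernoulli(k)) / 2 * (2 * pi) ** k / factorial(k)

def A(k: int) -> Fraction:
    """A_k = -2k / B_k, as in the table."""
    return Fraction(-2 * k) / bernoulli(k)
```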
This can be written in the following neater form \begin{align} G_{k}(\tau) = 2\zeta(k)E_{k}(\tau), \end{align} where $E_{k}(\tau)$ is the Eisenstein series of weight $k$ given by \begin{align} E_{k}(\tau) = 1 + A_{k}\sum\limits_{n\geq 1}\frac{n^{k-1}q^{n}}{1 - q^{n}} = 1 + A_{k}\sum\limits_{n\geq 1}\sigma_{k-1}(n)q^{n},\ \ A_{k} = -\frac{2k}{B_{k}}. \end{align} Using the values of $A_{k}$ in table \ref{tab:Bernoulli}, the first few $E_{k}(\tau)$ are \begin{align} \begin{split} E_{2}(\tau) =& 1 - 24\sum\limits_{n=1}^{\infty}\frac{nq^{n}}{1-q^{n}} = 1 \ - \ \ 24\sum\limits_{n=1}^{\infty}\sigma_{1}(n)q^{n} = 1 - 24q - 72q^{2} -96q^{3} - 168q^{4} + \ldots,\\ E_{4}(\tau) =& 1 + 240\sum\limits_{n=1}^{\infty}\frac{n^{3}q^{n}}{1-q^{n}} = 1 + 240\sum\limits_{n=1}^{\infty}\sigma_{3}(n)q^{n} = 1 + 240q + 2160q^{2} + 6720 q^{3} + 17520q^{4} + \ldots,\\ E_{6}(\tau) =& 1 - 504\sum\limits_{n=1}^{\infty}\frac{n^{5}q^{n}}{1-q^{n}} = 1 - 504\sum\limits_{n=1}^{\infty}\sigma_{5}(n)q^{n} = 1 - 504q - 16632q^{2} - 122976q^{3} - 532728q^{4} + \ldots. \end{split} \end{align} We now make an important note. The Eisenstein series of weight $2$, $E_{2}(\tau)$, is not a modular form, since the defining lattice sum does not converge absolutely for $k=2$ (recall that $G_{k}(\tau)$ was defined for $k\geq 4$). The function $E_{2}(\tau)$ is rather called a quasi-modular form since it transforms as follows under a modular transformation $\gamma:\tau\mapsto \tfrac{a\tau + b}{c\tau + d}$ (the first line, valid for $\gamma\in\Gamma_{0}(p)$, records the analogous behaviour of the rescaled series $E_{2}(p\tau)$), \begin{align} \begin{split} E_{2}(p\gamma(\tau)) =& (c\tau + d)^{2}E_{2}(p\tau) + \frac{12c}{2\pi ip}(c\tau + d),\\ E_{2}(\gamma(\tau)) =& (c\tau + d)^{2}E_{2}(\tau) + \frac{12c}{2\pi i}(c\tau + d). \end{split} \end{align} We also note that the Eisenstein series discussed here are those defined on the full modular group $\text{SL}(2,\mathbb{Z})$. There also exist various other kinds of Eisenstein series for congruence subgroups and Hecke groups. These shall be defined when they are required for a calculation.
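The $q$-expansions above are quick to regenerate from the divisor sums $\sigma_{k-1}(n)$; a minimal sketch (function names are ours):

```python
def sigma(n: int, k: int) -> int:
    """Divisor power sum sigma_k(n) = sum of d^k over d | n."""
    return sum(d ** k for d in range(1, n + 1) if n % d == 0)

def eisenstein_coeffs(k: int, A_k: int, terms: int) -> list:
    """Coefficients [1, A_k*sigma_{k-1}(1), ...] of E_k up to q^terms."""
    return [1] + [A_k * sigma(n, k - 1) for n in range(1, terms + 1)]
```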
\subsubsection{Modular functions} \noindent Using these modular forms as a basis, we can define different modular functions of weight $0$ by taking the ratio of two weight $k$ forms. First, consider the ratio of two weight $8$ forms, a possible candidate being the following \begin{align} \frac{E_{4}^{2}(\tau)}{E_{8}(\tau)} = \frac{\left(1 + 240\sum\limits_{n=1}^{\infty}\sigma_{3}(n)q^{n}\right)^{2}}{1 + 480\sum\limits_{n=1}^{\infty}\sigma_{7}(n)q^{n}} = \frac{\left(1 + 240q + 2160q^{2} + \ldots\right)^{2}}{1 + 480q + 61920q^{2} + \ldots} = \frac{1 + 480q + 61920q^{2} + \ldots}{1 + 480q + 61920q^{2} + \ldots} = 1. \end{align} From this we learn that $E_{4}^{2}(\tau) = E_{8}(\tau)$. Now, consider the ratio of weight $10$ forms for which a possible candidate is the following \begin{align} \begin{split} \frac{E_{4}(\tau)E_{6}(\tau)}{E_{10}(\tau)} =& \frac{\left(1 + 240\sum\limits_{n=1}^{\infty}\sigma_{3}(n)q^{n}\right)\left(1 - 504\sum\limits_{n=1}^{\infty}\sigma_{5}(n)q^{n}\right)}{1 - 264\sum\limits_{n=1}^{\infty}\sigma_{9}(n)q^{n}}\\ =& \frac{\left(1 + 240q + \ldots\right)\left(1 - 504q + \ldots\right)}{1 - 264q + \ldots} = \frac{1 - 264q + \ldots}{1 - 264q + \ldots} = 1, \end{split} \end{align} from which we learn that $E_{4}(\tau)E_{6}(\tau) = E_{10}(\tau)$. Next, for weight $12$, there are three possible candidates, $E_{4}^{3}(\tau)$, $E_{6}^{2}(\tau)$, and $E_{12}(\tau)$, but we find that $E_{4}^{3}(\tau)\neq E_{12}(\tau)\neq E_{6}^{2}(\tau)$. The reason for this lies in the fact that the dimension of the space of modular forms of weight $12$ is different from those of weights $4$ through $10$. From \ref{dim_SL2Z}, we find \\ \begin{align} \begin{split} \text{dim}\ \mathcal{M}_{i}\left(\text{SL}(2,\mathbb{Z})\right) =& 1,\ i = 4,6,8,10,\\ \text{dim}\ \mathcal{M}_{12}\left(\text{SL}(2,\mathbb{Z})\right) =& 2. \end{split} \end{align} From this, we conclude that any two modular forms of weight $8$ (normalized to unit constant term) must be equal. The same also applies to modular forms of weight $10$.
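The identities $E_{4}^{2} = E_{8}$ and $E_{4}E_{6} = E_{10}$, the failure of any such identity at weight $12$, and the weight-$12$ linear relation $441E_{4}^{3} + 250E_{6}^{2} = 691E_{12}$ established at the end of this section can all be confirmed order by order on truncated $q$-series. A minimal sketch (helper names are ours):

```python
from fractions import Fraction

def sigma(n, k):
    """Divisor power sum sigma_k(n)."""
    return sum(d ** k for d in range(1, n + 1) if n % d == 0)

def E(k, A, N):
    """Coefficient list of E_k = 1 + A * sum sigma_{k-1}(n) q^n, up to q^N."""
    return [1] + [A * sigma(n, k - 1) for n in range(1, N + 1)]

def mul(f, g):
    """Cauchy product of two truncated q-series of the same length."""
    return [sum(f[i] * g[n - i] for i in range(n + 1)) for n in range(len(f))]

N = 12
E4, E6 = E(4, 240, N), E(6, -504, N)
E8, E10 = E(8, 480, N), E(10, -264, N)
E12 = E(12, Fraction(65520, 691), N)
E4_cubed = mul(mul(E4, E4), E4)
E6_sq = mul(E6, E6)
```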
Since the space $\mathcal{M}_{12}(\text{SL}(2,\mathbb{Z}))$ is two-dimensional, the forms need not be proportional, and this is why $E_{4}^{3}\neq E_{6}^{2} \neq E_{12}$. Hence, for the case of weight $12$, there must exist a linear relation among the forms $E_{4}^{3}$, $E_{6}^{2}$, and $E_{12}$, although all their individual $q$-series expansions are different. Taking suitable combinations of Eisenstein series, we can build new modular forms. We define the discriminant $\Delta(\tau)$ as a cusp form of weight $12$ as follows \begin{align}\label{cusp_form} \begin{split} \Delta(\tau) =& \frac{1}{1728}\left(E_{4}(\tau)^{3} - E_{6}(\tau)^{2}\right) = q\prod\limits_{n\geq 1}\left(1 -q^{n}\right)^{24} = \sum\limits_{n\geq 1}z(n)q^{n}\\ =& q - 24q^{2} + 252 q^{3} - 1472 q^{4} + \ldots, \end{split} \end{align} where the definition involving the $q$-product is called Jacobi's product formula. The Dedekind $\eta$-function is defined in the usual way, \begin{align} \eta(\tau) \equiv q^{\frac{1}{24}}\prod\limits_{n=1}^{\infty}(1-q^{n}), \end{align} which possesses the following behaviour under the $T$- and $S$-transformations \begin{align} \begin{split} \eta(\tau)\overset{T:\tau\mapsto \tau+1}{\longrightarrow} \eta(\tau+1) =& e^{\frac{2\pi i}{24}}\eta(\tau),\\ \eta(\tau)\overset{S:\tau\mapsto -\tfrac{1}{\tau}}{\longrightarrow} \eta\left(-\frac{1}{\tau}\right) =& \sqrt{\frac{\tau}{i}}\eta(\tau). \end{split} \end{align} Furthermore, under a modular transformation $\gamma:\tau\mapsto \tfrac{a\tau + b}{c\tau + d}$ (with $c>0$), the Dedekind $\eta$-function reads \\ \begin{align} \eta(\gamma(\tau)) = \epsilon(a,b,c,d)\sqrt{\frac{c\tau+d}{i}}\eta(\tau), \end{align} where $\epsilon(a,b,c,d)$ is one of the $24^{\text{th}}$ roots of unity. The cusp form \ref{cusp_form}, in terms of $\eta(\tau)$, is just $\Delta(\tau) = \eta^{24}(\tau)$.
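The two definitions of $\Delta$ in \ref{cusp_form} — the Eisenstein combination and Jacobi's product — can be compared coefficient by coefficient on truncated series. A minimal sketch:

```python
from fractions import Fraction

def sigma(n, k):
    return sum(d ** k for d in range(1, n + 1) if n % d == 0)

def mul(f, g):
    """Cauchy product of truncated q-series of equal length."""
    return [sum(f[i] * g[n - i] for i in range(n + 1)) for n in range(len(f))]

N = 8
E4 = [1] + [240 * sigma(n, 3) for n in range(1, N + 1)]
E6 = [1] + [-504 * sigma(n, 5) for n in range(1, N + 1)]
# Delta = (E4^3 - E6^2) / 1728, truncated at q^N
delta_eis = [Fraction(a - b, 1728)
             for a, b in zip(mul(mul(E4, E4), E4), mul(E6, E6))]

# Delta = q * prod_{n >= 1} (1 - q^n)^24, same truncation
prod = [1] + [0] * N
for n in range(1, N + 1):
    for _ in range(24):  # multiply by (1 - q^n) twenty-four times
        prod = [prod[m] - (prod[m - n] if m >= n else 0) for m in range(N + 1)]
delta_prod = [0] + prod[:N]  # the overall factor of q shifts everything up
```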
The discriminant $\Delta$ is closely related to the quasi-modular form $E_{2}$, which can be seen by taking the logarithmic derivative of the former as shown below \\ \begin{align} \begin{split} \frac{1}{2\pi i}\frac{d}{d\tau}\ln\Delta(\tau) =& 24\cdot\frac{1}{2\pi i}\frac{d}{d\tau}\ln\eta(\tau) = \frac{24}{2\pi i}\left(\frac{\pi i}{12} - 2\pi i\sum\limits_{n=1}^{\infty}\frac{nq^{n}}{1-q^{n}}\right)\\ =& 1 - 24\sum\limits_{n=1}^{\infty}n\sum\limits_{m=1}^{\infty}q^{nm} = 1 - 24\sum\limits_{n=1}^{\infty}\sum\limits_{m=1}^{\infty}nq^{nm}\\ =& 1 - 24\sum\limits_{n=1}^{\infty}\left(\sum\limits_{0<d\vert n}d\right)q^{n} = E_{2}(\tau). \end{split} \end{align} The fundamental relation established here is \begin{align} \frac{\Delta'(\tau)}{\Delta(\tau)} = E_{2}(\tau). \end{align} It turns out that an analogue of this holds for the congruence subgroups, with a small twist: the logarithmic derivative of the cusp form of $\Gamma_{0}(p)$, where $p\in\mathbb{P}$, is equal to the weight $2$ Eisenstein series of the corresponding Fricke group, $E_{2}^{(p^{+})}(\tau)$, i.e. \begin{align}\label{general_cusp_derivative} \frac{\Delta'(\tau)}{\Delta(\tau)} = E_{2}^{(p^{+})}(\tau),\ \Delta(\tau)\in\mathcal{S}_{k}(\Gamma_{0}(p)). \end{align} A function $f(\tau)$ of the following form \begin{align} f(\tau) = \prod\limits_{\alpha\vert N}\left(\eta(\alpha\tau)\right)^{r_{\alpha}}, \end{align} where $N\geq 1$ and each $r_{\alpha}$ is an integer, is called an $\eta$-quotient. If each $r_{\alpha}\geq 0$, then the function $f(\tau)$ is called an $\eta$-product.
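The relation $\Delta'/\Delta = E_{2}$, with $'\equiv \tfrac{1}{2\pi i}\tfrac{d}{d\tau} = q\tfrac{d}{dq}$, can likewise be tested on truncated series: writing $\Delta = qD$ with $D = \prod_{n\geq 1}(1-q^{n})^{24}$, one has $\Delta'/\Delta = 1 + (q\,dD/dq)/D$, and the division can be done by back-substitution since $D$ starts at $1$. A minimal sketch:

```python
def sigma1(n):
    """sigma_1(n) = sum of the divisors of n."""
    return sum(d for d in range(1, n + 1) if n % d == 0)

N = 8
# D = prod_{n >= 1} (1 - q^n)^24 = Delta / q, truncated at q^N
D = [1] + [0] * N
for n in range(1, N + 1):
    for _ in range(24):
        D = [D[m] - (D[m - n] if m >= n else 0) for m in range(N + 1)]

# numerator q dD/dq, then quotient Q = num / D via Q[n] = num[n] - sum Q[i] D[n-i]
num = [n * D[n] for n in range(N + 1)]
Q = [0] * (N + 1)
for n in range(N + 1):
    Q[n] = num[n] - sum(Q[i] * D[n - i] for i in range(n))

log_deriv = [1] + Q[1:]  # 1 + (q dD/dq)/D, the q-expansion of Delta'/Delta
E2 = [1] + [-24 * sigma1(n) for n in range(1, N + 1)]
```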
\begin{tcolorbox}[colback=gray!5!white,colframe=teal!75!black,title=The $j$-function] The $j$-function (also called Klein's invariant) is a modular function of weight $0$ given by \begin{align}\label{j-function} \begin{split} j(\tau) =& \frac{E_{4}(\tau)^{3}}{\Delta(\tau)} = q^{-1} + 744 + \sum\limits_{n\geq 1}c(n)q^{n}\\ =& q^{-1} + 744 + 196884 q + 21493760 q^{2} + 864299970 q^{3} + 20245856256 q^{4} + \ldots \end{split} \end{align} \end{tcolorbox} \noindent The factor $1728$ that appears in the denominator of the cusp form in \ref{cusp_form}, and hence in the numerator of the $j$-function in \ref{j-function}, is the least positive integer that ensures that the $q$-expansion of $j(\tau)$ has integral coefficients. The $j$-function is also the ``Hauptmodul'', which is German for the principal modulus, of $\text{SL}(2,\mathbb{Z})$. Notice that $j(\tau)$ has a simple pole at the cusp $\tau = i\infty$ ($q=0$). Analogously, the Hauptmodul of a genus-zero Hecke subgroup $\Gamma_{0}(N)$ has a simple pole at one cusp and a simple zero at the other. The Hauptmoduls of Hecke and Fricke subgroups come as $\eta$-quotients. For example, when $N=7$, we have $f(\tau) = \eta^{r_{1}}(\tau)\eta^{r_{7}}(7\tau)$. For $r_{1} = 4$, $r_{7} = -4$, $f(\tau) = j_{7}(\tau)$, the Hauptmodul of $\Gamma_{0}(7)$. From \ref{dim_SL2Z}, we notice that \begin{align} \text{dim}\left(\frac{\mathcal{M}_{k}(\text{SL}(2,\mathbb{Z}))}{\mathcal{S}_{k}(\text{SL}(2,\mathbb{Z}))}\right) \leq 1, \end{align} and hence, we can decompose the space of modular forms as follows \begin{align}\label{M_basis_decomp} \mathcal{M}_{k}(\text{SL}(2,\mathbb{Z})) = \mathbb{C}E_{k}\oplus \mathcal{S}_{k}(\text{SL}(2,\mathbb{Z})).
\end{align} \begin{tcolorbox}[colback=gray!5!white,colframe=teal!75!black,title=Basis decomposition of the spaces of forms] For every even integer $k\geq 4$, the space of cusp forms on $\text{SL}(2,\mathbb{Z})$ can be written as follows \begin{align} \mathcal{S}_{k}(\text{SL}(2,\mathbb{Z})) = \Delta \mathcal{M}_{k-12}(\text{SL}(2,\mathbb{Z})). \end{align} Using this in \ref{M_basis_decomp}, we have the following basis decomposition \begin{align} \mathcal{M}_{k}(\text{SL}(2,\mathbb{Z})) = E_{k-12n}\left(\mathbb{C}\left(E_{4}^{3}\right)^{n}\oplus\mathbb{C}\left(E_{4}^{3}\right)^{n-1}\Delta\oplus\mathbb{C}\left(E_{4}^{3}\right)^{n-2}\Delta^{2}\oplus\ldots\oplus\mathbb{C}\Delta^{n}\right), \end{align} where $n = \text{dim}\ \mathcal{M}_{k}(\text{SL}(2,\mathbb{Z})) - 1$. \end{tcolorbox} \noindent Going back to the case of $k=12$, we now want to establish a relation among the modular forms of weight $12$. To do this, we first notice that we have the following basis decomposition \begin{align} \mathcal{M}_{12}(\text{SL}(2,\mathbb{Z})) = \mathbb{C}E_{4}^{3}\oplus \mathbb{C}\Delta. \end{align} Consider the candidate $\mathcal{C}(\tau) = aE_{4}^{3}(\tau) + b\Delta(\tau)$, where $a,b\in\mathbb{C}$. The $q$-series expansions of this candidate and the Eisenstein series $E_{12}(\tau)$ read \begin{align} \begin{split} \mathcal{C}(\tau) =& aE_{4}^{3}(\tau) + b\Delta(\tau)\\ =& a + (720 a + b) q + (179280 a - 24 b) q^2 + (16954560 a + 252 b) q^3 + \ldots,\\ E_{12}(\tau) =& 1 + \frac{65520}{691}\sum\limits_{n=1}^{\infty}\sigma_{11}(n)q^{n} = 1 + \frac{65520}{691}\sum\limits_{n=1}^{\infty}\frac{n^{11}q^{n}}{1-q^{n}}\\ =& 1 + \frac{1}{691}\left(65520q + 134250480q^{2} + 11606736960q^{3} + \ldots\right). \end{split} \end{align} Comparing the two series, we fix the coefficients $a = 1$, $b = -\tfrac{432000}{691}$, so that $E_{12} = E_{4}^{3} - \tfrac{432000}{691}\Delta$. Substituting $\Delta = \tfrac{1}{1728}\left(E_{4}^{3} - E_{6}^{2}\right)$, we establish the following linear relation between modular forms of weight $12$ \begin{align} 441 E_{4}^{3}(\tau) + 250E_{6}^{2}(\tau) = 691E_{12}(\tau).
\end{align} We can find similar relations for modular forms of higher weight. \subsubsection{Hecke operators} \noindent Let $f(\tau)$ be a modular function. We can realize the function $f(2\tau)$ as the action of $\alpha = \left(\begin{smallmatrix}2 & 0\\ 0 & 1 \end{smallmatrix}\right)$ on $f(\tau)$. This, however, is not invariant under the action of $\text{SL}(2,\mathbb{Z})$ but is rather invariant under $\text{SL}(2,\mathbb{Z})$ conjugated by $\alpha$, i.e. under matrices $\Tilde{\gamma} = \alpha^{-1}\gamma\alpha = \left(\begin{smallmatrix} a & \tfrac{b}{2}\\ 2c & d\end{smallmatrix}\right)$. Intersecting this set with $\text{SL}(2, \mathbb{Z})$ yields matrices $\left(\begin{smallmatrix}a & b\\ c & d\end{smallmatrix}\right)$ with $c\in2\mathbb{Z}$ or equivalently, $c \equiv 0\ (\text{mod}\ 2)$. Now, from \ref{congruent_subgroup_definitions}, we see that matrices of this form define the Hecke group $\Gamma_{0}(2) = \langle T,S^{-1}T^{-2}S\rangle$ and from \ref{index_Hecke}, we find $\mu_{0} = 3$. Hence, we see that the function $f(2\tau)$ is invariant under an index-$3$ subgroup of the modular group rather than the full group itself. Now, in order to make the modular function invariant under $\text{SL}(2,\mathbb{Z})$, we can simply sum over the cosets $\Gamma_{0}(2)\backslash\text{SL}(2,\mathbb{Z})$. The coset representatives of $\Gamma_{0}(N)\backslash\text{SL}(2,\mathbb{Z})$ are given by the following set of upper-triangular matrices, \begin{align}\label{X_n} X_{n} = \left\{\left.\begin{pmatrix} a & b\\ 0 & d\end{pmatrix}\in\mathbb{M}_{n}(2,\mathbb{Z})\right\vert 0\leq b<d\right\}, \end{align} where $\mathbb{M}_{n}(2,\mathbb{Z})$ denotes the set of $2\times 2$ matrices with integer entries and determinant $n$.
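As a quick sanity check on \ref{X_n}, the representatives can be enumerated programmatically; the count $|X_{n}| = \sum_{d\vert n}d = \sigma_{1}(n)$ reproduces, for $n = 2$, the index $\mu_{0} = 3$ quoted above. A minimal sketch:

```python
def coset_reps(n):
    """Enumerate X_n: upper-triangular integer matrices (a, b; 0, d)
    with determinant a*d = n and 0 <= b < d."""
    reps = []
    for a in range(1, n + 1):
        if n % a:
            continue
        d = n // a
        for b in range(d):
            reps.append((a, b, 0, d))
    return reps

# |X_n| equals sigma_1(n), the sum of divisors of n
assert len(coset_reps(2)) == 3
assert len(coset_reps(3)) == 4
assert len(coset_reps(4)) == 7
print(coset_reps(2))  # [(1, 0, 0, 2), (1, 1, 0, 2), (2, 0, 0, 1)]
```

The three matrices printed for $n = 2$ match the set $X_{2}$ written out in the next paragraph.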
For $n = 2$, we have $X_{n} = \left\{\left(\begin{smallmatrix}2 & 0\\ 0 & 1\end{smallmatrix}\right),\left(\begin{smallmatrix}1 & 1\\ 0 & 2\end{smallmatrix}\right),\left(\begin{smallmatrix}1 & 0\\ 0 & 2\end{smallmatrix}\right)\right\}$ and these correspond to the following modular function invariant under $\text{SL}(2,\mathbb{Z})$, \begin{align}\label{invariant_combination} \mathcal{F}(\tau) = f(2\tau) + f\left(\frac{\tau + 1}{2}\right) + f\left(\frac{\tau}{2}\right). \end{align} This provides a natural motivation for the Hecke operators $T_{n}$, which can be thought of as operators that act on a modular function $f(\tau)$ to yield an invariant combination of modular functions. Here $\mathcal{F}(\tau) = T_{2}f(\tau)$. For the next couple of $n$ values in \ref{X_n}, the action of the Hecke operator $T_{n}$ on $f(\tau)$ reads \begin{align} \begin{split} X_{3} =& \left\{\left(\begin{smallmatrix}3 & 0\\ 0 & 1\end{smallmatrix}\right),\left(\begin{smallmatrix}1 & 1\\ 0 & 3\end{smallmatrix}\right), \left(\begin{smallmatrix}1 & 2\\ 0 & 3\end{smallmatrix}\right), \left(\begin{smallmatrix}1 & 0\\ 0 & 3\end{smallmatrix}\right)\right\},\\ T_{3}f(\tau) =& f(3\tau) + f\left(\frac{\tau + 1}{3}\right) + f\left(\frac{\tau + 2}{3}\right) + f\left(\frac{\tau}{3}\right),\\ X_{4} =& \left\{\left(\begin{smallmatrix}4 & 0\\ 0 & 1\end{smallmatrix}\right),\left(\begin{smallmatrix}1 & 1\\ 0 & 4\end{smallmatrix}\right), \left(\begin{smallmatrix}1 & 2\\ 0 & 4\end{smallmatrix}\right), \left(\begin{smallmatrix}1 & 3\\ 0 & 4\end{smallmatrix}\right), \left(\begin{smallmatrix}2 & 0\\ 0 & 2\end{smallmatrix}\right), \left(\begin{smallmatrix}2 & 1\\ 0 & 2\end{smallmatrix}\right), \left(\begin{smallmatrix}1 & 0\\ 0 & 4\end{smallmatrix}\right)\right\},\\ T_{4}f(\tau) =& f(4\tau) + f\left(\frac{\tau + 1}{4}\right) + f\left(\frac{\tau + 2}{4}\right) + f\left(\frac{\tau + 3}{4}\right)\\ &{} \ \ \ \ \ \ \ + f(\tau) + f\left(\frac{2\tau + 1}{2}\right) + f\left(\frac{\tau}{4}\right),\\ &\vdots \end{split}
\end{align} Furthermore, it is easy to see that in the case of $p\in\mathbb{P}$, the Hecke operator acts as follows \begin{align} T_{p}:f(\tau) \mapsto f(p\tau) + f\left(\frac{\tau + 1}{p}\right) + \ldots + f\left(\frac{\tau + p-1}{p}\right) + f\left(\frac{\tau}{p}\right). \end{align} With $f(\tau) = \sum_{n}a_{n}q^{n}$, the sum over the $p$ cosets with $a = 1$ picks out only those terms whose exponent is divisible by $p$, since the sum over the $p^{\text{th}}$ roots of unity annihilates the rest, \begin{align} \begin{split} &f\left(\frac{\tau}{p}\right) + f\left(\frac{\tau + 1}{p}\right) + \ldots + f\left(\frac{\tau + p-1}{p}\right) = p\sum\limits_{p\vert n}a_{n}q^{\tfrac{n}{p}},\\ &f(p\tau) = \sum_{n}a_{n}q^{np}. \end{split} \end{align} For the case of $p = 3$, we have the following expansions \begin{align} \begin{split} f(\tau) =& \ldots + a_{-3}q^{-3} + a_{-2}q^{-2} + a_{-1}q^{-1} + a_{0} + a_{1}q + a_{2}q^{2} + a_{3}q^{3} + \ldots,\\ f(3\tau) =& \ldots + a_{-3}q^{-9} + a_{-2}q^{-6} + a_{-1}q^{-3} + a_{0} + a_{1}q^{3} + a_{2}q^{6} + a_{3}q^{9} + \ldots,\\ \sum\limits_{i=0}^{2}f\left(\frac{\tau + i}{3}\right) =& \ldots + 3a_{-3}q^{-1} + 3a_{0} + 3a_{3}q + \ldots, \end{split} \end{align} from which we notice that the coset sum pulls the $q$-series expansion inwards (each surviving exponent $n$ is divided by $p$) while multiplying all the coefficients by $p$, whereas $f(p\tau)$ pushes the $q$-series expansion outwards (each exponent $n$ is multiplied by $p$).
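The two operations above combine into a single rule for the coefficients of this (unnormalized) $T_{p}f$: the $q^{n}$ coefficient is $p\,a_{pn}$, plus $a_{n/p}$ when $p\vert n$. A small sketch, with coefficients stored in a dict and hypothetical helper names:

```python
def hecke_Tp(coeffs, p, n_min, n_max):
    """Unnormalized T_p f = f(p*tau) + sum_{i=0}^{p-1} f((tau+i)/p),
    acting on a q-series given as {exponent: coefficient}."""
    out = {}
    for n in range(n_min, n_max + 1):
        val = p * coeffs.get(p * n, 0)      # coset sum: q^{pn} -> p*a_{pn} q^n
        if n % p == 0:
            val += coeffs.get(n // p, 0)    # f(p*tau): q^{n/p} -> a_{n/p} q^n
        out[n] = val
    return out

f = {1: 5, 2: 7, 3: 11, 4: 13, 6: 17}
print(hecke_Tp(f, 2, 1, 3))  # {1: 14, 2: 31, 3: 34}
```

For instance, the $q^{2}$ coefficient is $2a_{4} + a_{1} = 26 + 5 = 31$, combining the inward and outward moves described above.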
\begin{tcolorbox}[colback=gray!5!white,colframe=teal!75!black,title=Hecke operator] We can succinctly define the Hecke operator $T'_{\ell} = \ell T_{\ell}$ as follows \begin{align} \begin{split} T_{\ell}f(\tau) =& \begin{cases} f(\ell \tau) + \sum\limits_{m=0}^{\ell - 1}f\left(\frac{\tau + m}{\ell}\right),\ &\ell\in\mathbb{P},\\ \sum\limits_{d\vert \ell}\sum\limits_{m=0}^{d- 1}f\left(\frac{\ell\tau + md}{d^{2}}\right),\ &\ell\in\mathbb{Z}_{+}, \end{cases} \end{split} \end{align} and the coefficients $a_{-\ell}$, for $0\leq \ell\leq k$, come from \begin{align} \begin{split} q^{-k}\prod\limits_{n=2}^{\infty}\left(1-q^{n}\right)^{-1} =& \sum\limits_{\ell = -k}^{\infty}a_{\ell}q^{\ell}\\ =& q^{-k}\left(1 + q^{2} + q^{3} + 2q^{4} + 2q^{5} + 4q^{6} + \ldots\right). \end{split} \end{align} \end{tcolorbox} \noindent Suppose now we want to use modular forms of non-zero weight instead of modular functions. Then, we simply consider the invariant form $f(\tau)\left(d\tau\right)^{\tfrac{k}{2}}$ for the Hecke operator to act on. For the case of $p=2$, instead of the invariant linear combination shown in \ref{invariant_combination}, we now have \begin{align} \begin{split} \mathscr{F}(\tau) =& f(2\tau)\left(d(2\tau)\right)^{\tfrac{k}{2}} + f\left(\frac{\tau + 1}{2}\right)\left(d\left(\frac{\tau + 1}{2}\right)\right)^{\tfrac{k}{2}} + f\left(\frac{\tau}{2}\right)\left(d\left(\frac{\tau}{2}\right)\right)^{\tfrac{k}{2}}\\ =& \left[2^{\tfrac{k}{2}}f(2\tau) + 2^{-\tfrac{k}{2}}f\left(\frac{\tau + 1}{2}\right) + 2^{-\tfrac{k}{2}}f\left(\frac{\tau}{2}\right)\right]\left(d\tau\right)^{\tfrac{k}{2}}, \end{split} \end{align} from which we see that the inclusion of $\left(d\tau\right)^{\tfrac{k}{2}}$ results in powers of $p$ multiplying the coset representatives acting on the modular form.
Formally, we can define the Hecke operator and its Fourier expansion as follows \begin{align} \begin{split} T_{n}f(\tau) \equiv& \sum\limits_{\substack{ad = n\\ 0\leq b< d}}f\left(\frac{a\tau + b}{d}\right),\\ T_{n}f(\tau)=& \sum\limits_{m}\left(\sum\limits_{d\vert (m,n)}d^{k-1}c\left(\frac{mn}{d^{2}}\right)\right)q^{m}, \end{split} \end{align} where $(m,n)\equiv \text{gcd}(m,n)$ and $c(m)$ denote the Fourier coefficients of $f$. \subsection{\texorpdfstring{$\mathbf{\Gamma_{0}^{+}(2)}$}{ Γ0(2)+}} \noindent The Fricke group of level $2$ is $\Gamma_{0}^{+}(2) = \langle\left(\begin{smallmatrix}1 & 1\\ 0 & 1\end{smallmatrix}\right), W_{2}\rangle$. A non-zero $f\in\mathcal{M}_{k}^{!}(\Gamma_{0}^{+}(2))$ satisfies the following valence formula \begin{align}\label{valence_Fricke_2} \nu_{\infty}(f) + \frac{1}{2}\nu_{\tfrac{i}{\sqrt{2}}}(f) + \frac{1}{4}\nu_{\rho_{2}}(f) + \sum\limits_{\substack{p\in\Gamma_{0}^{+}(2)\backslash\mathbb{H}^{2}\\ p\neq \tfrac{i}{\sqrt{2}},\rho_{2}}}\nu_{p}(f) = \frac{k}{8}, \end{align} where $\tfrac{i}{\sqrt{2}}$ and $\rho_{2} = \tfrac{-1+i}{2}$ are elliptic points and this group has a cusp at $\tau = \infty$. The dimension of the space of modular forms reads \begin{align}\label{dimension_Fricke_2} \text{dim}\ \mathcal{M}_{k}(\Gamma_{0}^{+}(2)) = \begin{cases}\left\lfloor\frac{k}{8}\right\rfloor,\ &k \equiv 2\ (\text{mod}\ 8)\\ \left\lfloor\frac{k}{8}\right\rfloor + 1,\ &k\not\equiv2\ (\text{mod}\ 8), \end{cases} \end{align} where $k>2$ and $k\in2\mathbb{Z}$. From \cite{Umasankar:2022kzs}, we know that the Riemann-Roch theorem takes the following form \begin{align}\label{Riemann_Roch_Gamma_0_2+} \begin{split} \sum\limits_{i=0}^{n-1}\alpha_{i} =& -\frac{nc}{24} + \sum\limits_{i}\Delta_{i}\\ =&\frac{1}{8}n(n-1) - \frac{1}{4}\ell.
\end{split} \end{align} The Eisenstein series of $\Gamma_{0}^{+}(2)$, which for weight $k = 2$ is a quasi-modular form, is defined as follows \begin{align} E_{k}^{(2^{+})}(\tau) \equiv \frac{2^{\tfrac{k}{2}}E_{k}(2\tau) + E_{k}(\tau)}{2^{\tfrac{k}{2}} + 1}, \ k\geq 2. \end{align} We note that only at this level do we observe the relation $E_{10}^{(2^{+})} = E_{4}^{(2^{+})}E_{6}^{(2^{+})}$. Using the transformation formula $E_{2}(p\gamma(\tau)) = (c\tau + d)^{2}E_{2}(p\tau) + \tfrac{12c}{2\pi ip}(c\tau + d),\ \gamma\in\Gamma_{0}^{+}(2)$, we find \begin{align} \begin{split} E_{2}^{(2^{+})}(\gamma(\tau)) =& (c\tau + d)^{2}E_{2}^{(2^{+})}(\tau) + \frac{8c}{2\pi i}(c\tau + d), \end{split} \end{align} with the action $\gamma(\tau) = \tfrac{a\tau + b}{c\tau + d}$. The Hauptmodul and the cusp form of this group are defined as follows \begin{align}\label{Hauptmodul_Gamma_0_2+} \begin{split} j_{2^{+}}(\tau) =& \left(\frac{\left(E_{4}^{(2^{+})}\right)^{2}}{\Delta_{2}}\right)(\tau) = \left(\left(\frac{\eta(\tau)}{\eta(2\tau)}\right)^{12} + 2^{6}\left(\frac{\eta(2\tau)}{\eta(\tau)}\right)^{12}\right)^{2},\\ \Delta_{2}(\tau) =& \left(\eta(\tau)\eta(2\tau)\right)^{8}\in\mathcal{S}_{8}(\Gamma_{0}^{+}(2)). \end{split} \end{align} \subsection{\texorpdfstring{$\mathbf{\Gamma_{0}^{+}(3)}$}{ Γ0(3)+}} \noindent The Fricke group of level $3$ is $\Gamma_{0}^{+}(3) = \langle\left(\begin{smallmatrix}1 & 1\\ 0 & 1\end{smallmatrix}\right), W_{3}\rangle$. A non-zero $f\in\mathcal{M}_{k}^{!}(\Gamma_{0}^{+}(3))$ satisfies the following valence formula \begin{align}\label{valence_Fricke_3} \nu_{\infty}(f) + \frac{1}{2}\nu_{\tfrac{i}{\sqrt{3}}}(f) + \frac{1}{4}\nu_{\rho_{3}}(f) + \sum\limits_{\substack{p\in\Gamma_{0}^{+}(3)\backslash\mathbb{H}^{2}\\ p\neq \tfrac{i}{\sqrt{3}},\rho_{3}}}\nu_{p}(f) = \frac{k}{6}, \end{align} where $\tfrac{i}{\sqrt{3}}$ and $\rho_{3} = -\tfrac{1}{2}+\tfrac{i}{2\sqrt{3}}$ are elliptic points and this group has a cusp at $\tau = \infty$.
The dimension of the space of modular forms reads \begin{align}\label{dimension_Fricke_3} \text{dim}\ \mathcal{M}_{k}(\Gamma_{0}^{+}(3)) = \begin{cases}\left\lfloor\frac{k}{6}\right\rfloor,\ &k \equiv 2,6\ (\text{mod}\ 12)\\ \left\lfloor\frac{k}{6}\right\rfloor + 1,\ &k\not\equiv2,6\ (\text{mod}\ 12), \end{cases} \end{align} where $k>2$ and $k\in2\mathbb{Z}$. From \cite{Umasankar:2022kzs}, we know that the Riemann-Roch theorem takes the following form \begin{align} \begin{split} \sum\limits_{i=0}^{n-1}\alpha_{i} =& -\frac{nc}{24} + \sum\limits_{i}\Delta_{i}\\ =& \frac{1}{6}n(n-1) - \frac{1}{3}\ell. \end{split} \end{align} The Eisenstein series of $\Gamma_{0}^{+}(3)$, which for weight $k = 2$ is a quasi-modular form, is defined as follows \begin{align} E_{k}^{(3^{+})}(\tau)\equiv\frac{3^{\tfrac{k}{2}}E_{k}(3\tau) + E_{k}(\tau)}{3^{\tfrac{k}{2}} + 1},\ k\geq 2. \end{align} When $k = 2$, this transforms as follows under $\gamma\in\Gamma_{0}^{+}(3)$ \begin{align} \begin{split} E_{2}^{(3^{+})}(\gamma(\tau)) =& (c\tau + d)^{2}E_{2}^{(3^{+})}(\tau) + \frac{6c}{2\pi i}(c\tau + d). \end{split} \end{align} The Hauptmodul and other modular forms in this group are defined as follows \begin{align} j_{3^{+}}(\tau) =& \left(\frac{\left(E_{2,3^{'}}\right)^{3}}{\Delta_{3}}\right)(\tau) = \left(\left(\frac{\eta(\tau)}{\eta(3\tau)}\right)^{6} + 3^{3}\left(\frac{\eta(3\tau)}{\eta(\tau)}\right)^{6}\right)^{2},\nonumber\\ \Delta_{3}(\tau) =& \left(\eta(\tau)\eta(3\tau)\right)^{6}\in\mathcal{S}_{6}(\Gamma_{0}(3)),\nonumber\\ \Delta_{3^{+}, 8}(\tau) =& \frac{41}{1728}\left(\left(E_{4}^{(3^{+})}\right)^{2} - E_{8}^{(3^{+})}\right)(\tau)\in\mathcal{S}_{8}(\Gamma_{0}^{+}(3)),\nonumber\\ \Delta_{3^{+}, 10}(\tau) =& \frac{61}{432}\left(E_{4}^{(3^{+})}E_{6}^{(3^{+})} - E_{10}^{(3^{+})}\right)(\tau)\in\mathcal{S}_{10}(\Gamma_{0}^{+}(3)),\nonumber\\ \Delta_{3^{+},12}(\tau) =& \left(
\Delta_{3^{+},8}E_{4}^{(3^{+})}\right)(\tau)\in\mathcal{S}_{12}(\Gamma_{0}^{+}(3)),\nonumber\\ \Delta_{3^{+}, 14}(\tau) =& \left( \Delta_{3^{+},10}E_{4}^{(3^{+})}\right)(\tau)\in\mathcal{S}_{14}(\Gamma_{0}^{+}(3)). \end{align} \section{Three-character theories (note: section yet to be completed)} Let us now consider three-character theories with $\ell = 0$. The modular invariant differential equation reads \begin{align} \left[\omega_{0}(\tau)\mathcal{D}^{3} + \omega_{2}(\tau)\mathcal{D}^{2} + \omega_{4}(\tau)\mathcal{D} + \omega_{6}(\tau)\right]f(\tau) = 0. \end{align} Since there are no modular forms of weight $2$ in $\Gamma_{0}^{+}(2)$, we set $\omega_{2}(\tau) = 0$ and make the choices $\omega_{4}(\tau) =\mu_{1} E_{4}^{(2^{+})}(\tau)$ and $\omega_{6}(\tau) = \mu_{2} E_{6}^{(2^{+})}(\tau)$. In terms of the derivative $\Tilde{\partial}$, the MLDE reads \begin{align} \left[\Tilde{\partial}^{3} - \frac{1}{6}\left(\Tilde{\partial}E_{2}(\tau)\right)\Tilde{\partial} - \frac{1}{2}E_{2}\Tilde{\partial}^{2} + \frac{1}{18}E^{2}_{2}\Tilde{\partial} + \mu_{1}E_{4}^{(2^{+})}(\tau)\Tilde{\partial} + \mu_{2}E_{6}^{(2^{+})}(\tau)\right]f(\tau) = 0. \end{align} We now make use of the Ramanujan identity for the Eisenstein series, $\Tilde{\partial}E_{2} = \tfrac{E_{2}^{2} - E_{4}}{12}$, to rewrite the MLDE as follows \begin{align} \left[\Tilde{\partial}^{3} + \frac{1}{2}\left(\Tilde{\partial}E_{2}(\tau)\right)\Tilde{\partial} - \frac{1}{2}E_{2}(\tau)\Tilde{\partial}^{2} + \frac{1}{18}E_{4}(\tau)\Tilde{\partial} + \mu_{1}E_{4}^{(2^{+})}(\tau)\Tilde{\partial} + \mu_{2}E_{6}^{(2^{+})}(\tau)\right]f(\tau) = 0.
\end{align} We make the following mode expansion substitutions for $f(\tau)$ and the modular forms in the above MLDE \begin{align} \begin{split} f(\tau) =& q^{\alpha}\sum\limits_{n= 0}^{\infty}f_{n}q^{n},\\ E_{k}(\tau) =& \sum\limits_{n=0}^{\infty}E_{k,n}q^{n},\\ E_{4}^{(2^{+})}(\tau) =& \sum\limits_{n=0}^{\infty}E_{4,n}^{(2^{+})}q^{n}\\ =& 1 + 48 q + 624 q^2 + 1344 q^3 + 5232 q^4 + \ldots,\\ E^{(2^{+})}_{6}(\tau) =& \sum\limits_{n=0}^{\infty}E^{(2^{+})}_{6,n}q^{n}\\ =& 1 - 56 q - 2296 q^2 - 13664 q^3 - 73976 q^4 + \ldots,\\ \Tilde{\partial}E_{2}(\tau) =& q\frac{d}{dq}E_{2}(\tau) = \sum\limits_{n = 0}^{\infty}\mathfrak{e}_{n}q^{n}\\ =& -24 q - 144 q^2 - 288 q^3 - 672 q^4 - 720 q^5 + \ldots \end{split} \end{align} We now have the recursion relation \begin{align}\label{m_1_equation} \begin{split} &\left((n + \alpha)^{3} - \frac{1}{2}(n + \alpha)^{2} + \left(\mu_{1} + \frac{1}{18}\right)(n + \alpha) + \mu_{2}\right)f_{n} =\\ &\sum\limits_{m=1}^{n}\left(\frac{(n + \alpha - m)^{2}}{2}E_{2,m} - \left(\frac{1}{2}\mathfrak{e}_{m} + \frac{1}{18}E_{4,m} + \mu_{1}E^{(2^{+})}_{4,m}\right) (n + \alpha - m) - \mu_{2}E^{(2^{+})}_{6,m}\right)f_{n-m}. \end{split} \end{align} For $n = 0$, we get the indicial equation \begin{align}\label{indicial_cubic} \alpha^{3} - \frac{1}{2}\alpha^{2} + \Tilde{\mu}_{1}\alpha + \mu_{2} = 0, \end{align} where we have defined $\Tilde{\mu}_{1}\equiv \mu_{1} + \tfrac{1}{18}$. For $n = 1$, we obtain the relation \begin{align}\label{m_1_finding_equation} \left(3\alpha^{2} + 2\alpha + \frac{5}{9} + \mu_{1}\right)m_{1} + \left(56\alpha^{3} - 16\alpha^{2} + 104\alpha\mu_{1} + \frac{40}{9}\alpha\right) = 0, \end{align} where $ m_{1} \equiv \tfrac{f_{1}}{f_{0}}$. From these equations, we find \begin{align} \begin{split} \mu_{1} =& \alpha_{0}\alpha_{1} + \alpha_{1}\alpha_{2} + \alpha_{0}\alpha_{2} - \frac{1}{18},\\ \mu_{2} =& -\alpha_{0}\alpha_{1}\alpha_{2}, \end{split} \end{align} where $\alpha_{i}$ with $i=0,1,2$ are the roots of the cubic equation.
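The level-2 Eisenstein expansions quoted above follow directly from $E_{k}^{(2^{+})} = \tfrac{2^{k/2}E_{k}(2\tau)+E_{k}(\tau)}{2^{k/2}+1}$ and can be checked in a few lines of Python (a sketch using exact rational arithmetic):

```python
from fractions import Fraction

def sigma(n, k):
    """Divisor sum sigma_k(n)."""
    return sum(d**k for d in range(1, n + 1) if n % d == 0)

N = 5  # truncation order
E4 = [1] + [240 * sigma(n, 3) for n in range(1, N)]
E6 = [1] + [-504 * sigma(n, 5) for n in range(1, N)]

def fricke_eisenstein(E, p, k):
    """Coefficients of (p^{k/2} E_k(p tau) + E_k(tau)) / (p^{k/2} + 1)."""
    w = p ** (k // 2)
    return [Fraction(w * (E[n // p] if n % p == 0 else 0) + E[n], w + 1)
            for n in range(N)]

assert fricke_eisenstein(E4, 2, 4) == [1, 48, 624, 1344, 5232]
assert fricke_eisenstein(E6, 2, 6) == [1, -56, -2296, -13664, -73976]
```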
With $\mu_{2}$ obtained through the indicial cubic equation, we find the following explicit expression for $\mu_{1}$ from \ref{m_1_finding_equation} \begin{align} \mu_{1} = -\frac{504 \alpha_{0}^{3} + \alpha_{0}^{2}(27m_{1} - 144) + \alpha_{0}(40 + 18m_{1}) + 5m_{1}}{9(m_{1} + 104\alpha_{0})}. \end{align} We now consider the outcome of the recursion relation for $n =2$ and substitute the known expressions of $\mu_{1}$ and $\mu_{2}$ to obtain \begin{align} \begin{split} 0 &=225792\alpha_{0}^{4} + \alpha_{0}^{3}\left(-206784 - 27456m_{1} - 768m_{1}^{2} + 1536m_{2}\right)\\ &+ \alpha_{0}^{2}\left(15840 - 11064m_{1} - 1104m_{1}^{2} + 3216m_{2}\right)\\ &+ \alpha_{0}(-820m_{1} - 376m_{1}^{2} + 1880m_{2}+ 18m_{1}m_{2}) - (40m_{1}^{2} - 15m_{1}m_{2}). \end{split} \end{align} We observe that if we define the variable \begin{align}\label{N_def} N = -28224\alpha_{0}, \end{align} then the polynomial equation simplifies to \begin{align}\label{poly_quartic} \begin{split} 0 &= N^{4} + \left(25848 + 3432m_{1} + 96m_{1}^{2} - 192m_{2}\right)N^{3}\\ &+\left(55883520 - 39033792m_{1} - 3894912m_{1}^{2}+11346048m_{2}\right)N^{2}\\ &+ \left(81650903040m_{1} + 37439926272m_{1}^{2} -187199631360m_{2}-1792336896m_{1}m_{2}\right)N\\ &- \left(112415370117120m_{1}^{2} - 42155763793920m_{1}m_{2}\right). \end{split} \end{align} Since this quartic in $N$ is monic with integer coefficients, the integer-root theorem applies: $N$ not only has to be rational but must in fact be an integer. Since $\alpha_{0} = -\tfrac{c}{24}$, we obtain the relation $c = \tfrac{N}{1176}$, from which we see that the central charge has to be a rational number whose denominator divides $1176$.
Substituting $\mu_{1}$ and $\mu_{2}$ in \ref{indicial_cubic}, with the definition of $N$ in \ref{N_def}, we get \begin{align} \begin{split} 0 &=\left(\alpha+\frac{N}{28224}\right)\left((1725954048 N - 468397375488m_{1})\alpha^{2} \right.\\ &\left.+ (-862977024 N - 61152 N^2 + 234198687744m_{1} + 16595712 N m_{1})\alpha\right.\\ &\left.+(1176 N^{2}m_{1} - 41489280 Nm_{1} + 234198687744 + N^{3} + 21168N^{2} + 22127616 N)\right). \end{split} \end{align} We see from this that $\alpha = \alpha_{0} = -\tfrac{N}{28224}$ is one of the solutions. Let us denote the other solutions by $\alpha_{1}$ and $\alpha_{2}$ so that $\alpha_{1} - \alpha_{0}$ and $\alpha_{2} - \alpha_{0}$ are the conformal weights $\Delta_{1}$ and $\Delta_{2}$. Hence, $\alpha_{1}$ and $\alpha_{2}$ should also be rational, which implies that the discriminant of the quadratic factor, which is an integer, has to be a perfect square in order for the roots to be rational. This yields \begin{align}\label{k_equation} \begin{split} k^{2} = &22308830199742464m_{1}^{2} - 89463898718208m_{1} N - 3161682284544 m_{1}^{2}N + 26752287744 N^{2}\\ &+ 12148061184 m_{1}N^{2} + 112021056m_{1}^{2}N^{2} - 1834560N^{3} - 373968m_{1}N^{3} - 143N^{4}, \end{split} \end{align} where $k$ is a positive integer. This is a Diophantine equation in the positive unknowns $N$, $m_{1}$, and $k$. The roots are found to be \begin{align} \frac{-49787136m_{1} + 183456 N -3528 m_{1}N + 13N^{2} \pm k}{56448(13 N - 3528 m_{1})}, \end{align} where the smaller solution corresponds to $\alpha_{1}$ and the larger one corresponds to $\alpha_{2}$. We now solve \ref{poly_quartic} for a fixed value of $N$ (chosen among the divisors of $28224$) and obtain the solution set $(N,m_{1},m_{2})$. We then keep those values of $m_{1}$ and $N$ that result in positive integer values of $k$ satisfying \ref{k_equation}.
By following this procedure, we find values for $N$, $m_{1}$, $m_{2}$, and $k$, all of which simultaneously solve equations \ref{poly_quartic} and \ref{k_equation}. The only solution we could find was at $N = 14112$, for which we obtain the values listed in table \ref{tab:N_value}. \begin{table}[htb!] \centering \begin{tabular}{c|c|c|c} $N$ & $m_{1}$ & $m_{2}$ & $k$\\ \hline & & & \\ 14112 & 58 & 66 & 796594176\\ \end{tabular} \caption{Simultaneous solution for $0< N\leq 28224$.} \label{tab:N_value} \end{table} For the above values, we find roots \begin{align} (\alpha_{0},\alpha_{1},\alpha_{2}) = \left(-\frac{1}{2},-\frac{1}{6},\frac{7}{6}\right). \end{align} From this we find the central charge and conformal dimensions $(c,\Delta_{1},\Delta_{2}) = \left(12,\tfrac{1}{3},\tfrac{5}{3}\right)$, and free parameters $(\Tilde{\mu}_{1},\mu_{2}) = \left(-\tfrac{25}{36},-\tfrac{7}{72}\right)$. Thus far, we have only investigated the vacuum character and found the $q$-series expansion \begin{align} \chi_{0}(\tau) = q^{-\tfrac{1}{2}}\left(1 + 58q + 66q^{2} + \ldots\right). \end{align} In order to establish that this is indeed an admissible solution, we should check the positivity of the higher-order Fourier coefficients and repeat the same process for the characters $\chi_{\tfrac{1}{3}}(\tau)$ and $\chi_{\tfrac{5}{3}}(\tau)$. The following is a general expression we obtain for the ratio $m_{1}$ from \ref{m_1_finding_equation}, \begin{align} m_{1}^{(i)} \equiv \frac{f_{1}^{(i)}}{f_{0}^{(i)}} = \ldots \end{align} where $i = 0,1,2$.
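As a consistency check (a sketch using Python's exact `fractions` arithmetic), the explicit root formula above can be evaluated at the values in table \ref{tab:N_value}, and the central charge recovered from $c = N/1176$:

```python
from fractions import Fraction as F

N, m1, k = 14112, 58, 796594176

# roots of the quadratic factor: (num +/- k) / den
num = -49787136*m1 + 183456*N - 3528*m1*N + 13*N**2
den = 56448*(13*N - 3528*m1)
roots = sorted([F(num - k, den), F(num + k, den)])

assert roots == [F(-1, 6), F(7, 6)]   # alpha_1, alpha_2
alpha0 = F(-N, 28224)
assert alpha0 == F(-1, 2)             # alpha_0 = -N/28224
assert F(N, 1176) == 12               # central charge c = N/1176
assert roots[0] - alpha0 == F(1, 3)   # Delta_1
assert roots[1] - alpha0 == F(5, 3)   # Delta_2
```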
From this we find $m_{1}^{(1)} = \ldots$ \section{Two-character theories for \texorpdfstring{$\Gamma_{0}^{+}(7)$}{Γ0(7)+}}\label{appendix:D} In the general form of the MLDE for $n=2$, the free parameters $\mu_{i}$ appear as coefficients of the modular forms, and their number can be determined as follows \begin{align}\label{number_of_parameters_F} \begin{split} \#(\mu) =&\text{dim}\mathcal{M}_{2\ell}(\Gamma^{+}_{0}(7)) + \text{dim}\mathcal{M}_{2\ell + 2}(\Gamma^{+}_{0}(7)) + \text{dim}\mathcal{M}_{2\ell + 4}(\Gamma^{+}_{0}(7)) -1\\ =& 2\left(\left\lfloor\frac{\ell}{2}\right\rfloor + \left\lfloor\frac{\ell+1}{2}\right\rfloor + \left\lfloor\frac{\ell+2}{2}\right\rfloor\right) + \left\lfloor\frac{2\ell}{3}\right\rfloor + \left\lfloor\frac{2\ell+2}{3}\right\rfloor + \left\lfloor\frac{2\ell+4}{3}\right\rfloor -3\ell -1. \end{split} \end{align} \subsubsection*{\texorpdfstring{$\ell=0$}{Lg}} Consider the simple case with no poles, $\ell = 0$, and two characters. The MLDE in this case reads \begin{align} \left[\mathcal{D}^{2} + \phi_{1}(\tau)\mathcal{D} + \phi_{0}(\tau) \right]f(\tau) = 0, \end{align} where $\phi_{0}(\tau)$ and $\phi_{1}(\tau)$ are modular forms of weights $4$ and $2$ respectively. Since $\Gamma_{0}^{+}(7)$ admits no modular form of weight $2$, the MLDE simplifies to \begin{align}\label{MLDE_n=2} \left[\mathcal{D}^{2} + \phi_{0}(\tau)\right]f(\tau) = 0.
\end{align} We now have three natural choices for $\phi_{0}(\tau)$, namely \begin{align}\label{choices_phi} \begin{split} \phi_{0}(\tau) =& \mu E^{(7^{+})}_{4}(\tau) = \mu\Theta_{4,7}(\tau)\\ =& \mu\left(1 + \tfrac{24}{5}q + \tfrac{216}{5}q^{2} + \tfrac{672}{5}q^{3} + \tfrac{1752}{5}q^{4} + \ldots\right),\\ \phi_{0}(\tau) =& \mu \Theta^{2}_{2,7}(\tau) = \mu\theta_{7}^{4}(\tau)\\ =& \mu\left(1 + 8q + 40q^{2} + 128q^{3} + 328q^{4} + \ldots\right),\\ \phi_{0}(\tau) =& \mu\Delta_{7^{+},4}(\tau) = \mu\theta_{7}(\tau)j_{7}^{-1}(\tau)\mathbf{k}(\tau)\\ =& \mu\left(q - q^{2} - 2q^{3} - 7q^{4} + \ldots\right). \end{split} \end{align} Here $\mu$ is a free parameter which will be fixed later. The space $\mathcal{M}_{4}\left(\Gamma_{0}^{+}(7)\right)$ is two-dimensional and we notice that all three of the above choices can be generated by $\Theta_{2,7}^{2}(\tau)$ and $\Theta_{4,7}(\tau)$ as shown below \begin{align} \begin{split} E_{4}^{(7^{+})}(\tau) =& a\Theta_{2,7}^{2}(\tau) + b\Theta_{4,7}(\tau),\ a = 0,\ b = 1,\\ \Delta_{7^{+},4}(\tau) =& a\Theta_{2,7}^{2}(\tau) + b\Theta_{4,7}(\tau),\ a = \frac{5}{16},\ b = -\frac{5}{16}. \end{split} \end{align} Lastly, we note that $E_{4}^{(7^{+})}(\tau)$ is a quasi-modular form unlike $\Theta_{2,7}^{2}(\tau)$ (which is a modular form) since it transforms as follows under a modular transformation $\gamma(\tau) = \tfrac{a\tau + b}{c\tau + d}$, \begin{align} \begin{split} E_{k}(p\gamma(\tau)) =& (c\tau + d)^{2}E_{k}(p\tau) + \frac{12c}{2\pi ip}(c\tau + d),\\ E_{4}^{(7^{+})}(\gamma(\tau)) =& (c\tau + d)^{2}E_{4}^{(7^{+})}(\tau) + \frac{48c}{25}\frac{(c\tau + d)}{2\pi i}. \end{split} \end{align} Hence, we only consider the second choice for $\phi_{0}(\tau)$ in our calculation. From the basis \ref{Fricke_basis}, we find \begin{align} \begin{split} \mathcal{M}_{4}(\Gamma_{0}^{+}(7)) =& \theta_{7}(\tau)\left(\mathbb{C}\theta_{7}^{3}\oplus\mathbb{C}\mathbf{k}\mathbf{t}\right)\\ =& \mathbb{C}\Theta_{2,7}^{2}\oplus\mathbb{C}\theta_{7}\mathbf{k}\mathbf{t}.
\end{split} \end{align} Hence, the most general choice for $\phi_{0}(\tau)$ would be \begin{align} \phi_{0}(\tau) = \mu_{1}\Theta_{2,7}^{2}(\tau) + \mu_{2}\left(\theta_{7}\mathbf{k}\mathbf{t}\right)(\tau). \end{align} We now have two free parameters, which matches the result of \ref{number_of_parameters_F} for $\ell = 0$. Now, since the covariant derivative transforms a weight $r$ modular form into one of weight $r+2$, the double covariant derivative of a weight $0$ form is \begin{align} \begin{split} \mathcal{D}^{2} = \mathcal{D}_{(2)}\mathcal{D}_{(0)} =& \left(\frac{1}{2\pi i}\frac{d}{d\tau} - \frac{1}{6}E_{2}(\tau)\right)\frac{1}{2\pi i}\frac{d}{d\tau}\\ =& \Tilde{\partial}^{2} - \frac{1}{6}E_{2}(\tau)\Tilde{\partial}. \end{split} \end{align} The MLDE \ref{MLDE_n=2} now reads \begin{align} \left[\Tilde{\partial}^{2} - \frac{1}{6}E_{2}(\tau)\Tilde{\partial} + \phi_{0}(\tau)\right]f(\tau) =0. \end{align} This equation can be solved by making the following mode expansion substitutions for the character $f(\tau)$, the Eisenstein series $E_{k}(\tau)$, and the modular forms $\Theta_{2,7}^{2}(\tau)$ and $\left(\theta_{7}\mathbf{k}\mathbf{t}\right)(\tau)$, \begin{align}\label{series_defs} \begin{split} f(\tau) =& q^{\alpha}\sum\limits_{n= 0}^{\infty}f_{n}q^{n},\\ E_{k}(\tau) =& \sum\limits_{n=0}^{\infty}E_{k,n}q^{n},\\ \Theta_{2,7}^{2}(\tau) =& \sum\limits_{n=0}^{\infty}\Theta_{n}q^{n}\\ =& 1 + 8 q + 40 q^2 + 128 q^3 + 328 q^4 + \ldots,\\ \left(\theta_{7}\mathbf{k}\mathbf{t}\right)(\tau) =& \sum\limits_{n=0}^{\infty}\mathfrak{a}_{n}q^{n}\\ =& 0 + q - q^2 - 2 q^3 - 7 q^4 + \ldots \end{split} \end{align} Substituting these expansions in the MLDE, we get \begin{align}\label{MLDE_series_expressions} q^{\alpha}\sum\limits_{n=0}^{\infty}f_{n}q^{n}\left[\left(n + \alpha\right)^{2} + \sum\limits_{m=0}^{\infty}\left(\mu_{1}\Theta_{m} + \mu_{2}\mathfrak{a}_{m} - \frac{(n+\alpha)}{6}E_{2,m}\right)q^{m}\right] = 0.
\end{align} We first restrict ourselves to the case where $\mu_{1} = \mu \neq 0$ and $\mu_{2} = 0$. When $n=0$, $m=0$, with $E_{2,0} = 1 = \Theta_{0}$, we get the indicial equation \begin{align}\label{alpha_eqn} \begin{split} &\alpha^{2} - \frac{\alpha}{6} + \mu = 0,\\ &\alpha_{0} = \frac{1}{12}\left(1 -\sqrt{1 + 144\mu}\right) \equiv \frac{1}{12}\left(1 - x\right),\\ &\alpha_{1} = \frac{1}{12}\left(1+\sqrt{1 + 144\mu}\right)\equiv \frac{1}{12}(1+x), \end{split} \end{align} where we set $x = +\sqrt{1 + 144\mu}$. The smaller solution $\alpha_{0} = \tfrac{1}{12}(1-x)$ corresponds to the identity character, which behaves as $f(\tau) \sim q^{\tfrac{1-x}{12}}\left(1 + \mathcal{O}(q)\right)$. From \ref{character_behaviour}, the identity character $\chi_{0}$, associated with a primary of weight $\Delta_{0} = 0$, behaves as $\chi_{0}\sim q^{-\tfrac{c}{24}}\left(1 + \mathcal{O}(q)\right)$. Comparing the two behaviours, we obtain the following expression for the central charge \begin{align} c = 2(x-1). \end{align} To find the conformal dimension $\Delta$, we compare the behaviours with the larger solution for $\alpha$, i.e. $f(\tau)\sim q^{\tfrac{1 + x}{12}}\left(1 +\mathcal{O}(q)\right)$ and $\chi\sim q^{-\tfrac{c}{24} + \Delta}\left(1 + \mathcal{O}(q)\right)$. This gives us \begin{align} \Delta = \frac{x}{6}. \end{align} Next, to obtain a recurrence relation among the coefficients $f_{n}$, we use the Cauchy product of two infinite series, which reads \begin{align} \left(\sum\limits_{i=0}^{\infty}\alpha_{i}\right)\cdot\left(\sum\limits_{j=0}^{\infty}\beta_{j}\right) = \sum\limits_{k=0}^{\infty}\gamma_{k},\ \gamma_{k} = \sum\limits_{p=0}^{k}\alpha_{p}\beta_{k-p}.
\end{align} This gives us \begin{align}\label{product_formulae} \begin{split} \left(\sum\limits_{m=0}^{\infty}E_{2,m}q^{m}\right)\left(\sum\limits_{n=0}^{\infty}f_{n}q^{n}(n+\alpha)\right) =& \sum\limits_{k=0}^{\infty}\left(\sum\limits_{p=0}^{k}E_{2,p}(k+\alpha - p)f_{k-p}\right)q^{k}\\ \left(\sum\limits_{m=0}^{\infty}\Theta_{m}(\tau)q^{m}\right)\left(\sum\limits_{n=0}^{\infty}f_{n}q^{n}\right) =& \sum\limits_{k=0}^{\infty}\left(\sum\limits_{p=0}^{k}\Theta_{p}f_{k-p}\right)q^{k}. \end{split} \end{align} Relabelling and substituting this back into the MLDE, we get \begin{align} \begin{split} q^{\alpha}\sum\limits_{n=0}^{\infty}q^{n}\left((n+\alpha)^{2} - \frac{(n+\alpha)}{6}E_{2,0} + \mu \Theta_{0}\right)f_{n} = q^{\alpha}\sum\limits_{n=0}^{\infty}q^{n}&\left(\sum\limits_{m=1}^{n}\left(\frac{1}{6}E_{2,m}(n+\alpha - m) - \mu\Theta_{m}\right)f_{n-m}\right). \end{split} \end{align} With $E_{k,0} = 1 = \Theta_{0}$ and \ref{alpha_eqn}, we get \begin{align}\label{recursion_l=0} f_{n} = \left(n^{2} + 2\alpha n - \frac{n}{6}\right)^{-1}\sum\limits_{m=1}^{n}\left(\frac{1}{6}E_{2,m}(n+\alpha - m) - \mu\Theta_{m}\right)f_{n-m}. \end{align} When $n=1$ and $\alpha = \alpha_{0} = \tfrac{1}{12}(1-x)$, this simplifies to the following expression \begin{align} \begin{split} m_{1}^{(0)}\equiv \frac{f_{1}^{(0)}}{f_{0}^{(0)}} = \frac{x^{2}+6x-7}{3(6-x)}, \end{split} \end{align} where we used $E_{2,1} = -24$ and $\Theta_{1}= 8$. Solving for $x$, we obtain \begin{align} x = \frac{1}{2}\left(-(3m_{1} + 6) \pm \sqrt{(3m_{1}+6)^{2} + 4(18m_{1} + 7)}\right). \end{align} Since $2(x-1)$ is the central charge of the theory, $x$ must be rational. With $m_{1}^{(0)} = m_{1}$ and using the fact that $m_{1}$ is an integer, we conclude that at least one of the roots is rational. We now write \begin{align}\label{to_be_recast} (3m_{1} + 18)^{2} - 260 = k^{2}, \end{align} where $k\in\mathbb{Z}$.
Defining \begin{align} p \equiv 18 + 3m_{1} - k, \end{align} we can recast \ref{to_be_recast} as follows \begin{align} 3m_{1} + 18 = \frac{p^{2} +260}{2p} = \frac{130}{p} + \frac{p}{2}. \end{align} This tells us that $p$ must be even, and that it must divide $130$. Restricting $k$ to be positive, we note that $k^{2} = 9m_{1}^{2} + 108m_{1} + 64 \geq 9m_{1}^{2}$, so that $k\geq 3m_{1}$, which implies $p\leq 18$; since $p = 18$ would require $108m_{1} + 64 = 0$, we in fact have $p<18$. We conclude that all possible values of $m_{1}$ are found from those even values of $p$ below $18$ that divide $130$. The list of these values is $p = 2,10$. Table \ref{tab:theory_Fricke} collects the corresponding CFT data. \begin{table}[htb!] \centering \begin{tabular}{c|c|c|c|c} $p$ & $\mu$ & $m_{1}$ & $c$ & $\Delta$\\ \hline & & & & \\ 2 & $-\tfrac{1}{6}$ & 16 & 8 & $\tfrac{5}{6}$\\ 10 & $0$ & 0 & 0 & $\tfrac{1}{6}$\\ \end{tabular} \caption{$c$ and $\Delta$ data corresponding to the Fricke group $\Gamma_{0}^{+}(7)$ for the choice $\phi_{0} = \mu\Theta_{2,7}^{2}(\tau)$ with $\ell = 0$.} \label{tab:theory_Fricke} \end{table} The non-negativity of $m_{1}$ alone is not sufficient to conclude that this data is indeed related to a CFT. Hence, we compute $m_{2}$ using the recursion relation \ref{recursion_l=0}. If this turns out to be negative or fractional, then we rule out the theory as a viable candidate. For $n = 2$, we get \begin{align} m_{2}^{(i)} \equiv \frac{f_{2}^{(i)}}{f_{0}^{(i)}} = -\frac{12(3\alpha_{i} + 10\mu)}{12\alpha_{i} + 11} - \frac{12m_{1}^{(i)}(\alpha_{i} + 1 + 2\mu)}{12\alpha_{i} + 11}. \end{align} For $i = 0$, the values of $m_{2}^{(0)}$ corresponding to our candidate theories turn out to be $0$ for $p = 10$ and a negative fraction for $p = 2$, which we therefore rule out. Now, we proceed to check if $m_{3}^{(i)}$ is positive as well for the surviving candidate theory.
For $n = 3$, the recursion relation \ref{recursion_l=0} yields \begin{align} m_{3}^{(i)} \equiv \frac{f_{3}^{(i)}}{f_{0}^{(i)}} = -\frac{32(\alpha_{i} + 8\mu)}{12\alpha_{i} + 17} - \frac{8m_{1}^{(i)}(3\alpha_{i} + 3 + 10\mu)}{12\alpha_{i} + 17} - \frac{8m_{2}^{(i)}(\alpha_{i} + 2 + 2\mu)}{12\alpha_{i} + 17}. \end{align} For $i = 0$, we obtain $m_{3}^{(0)} = \tfrac{224}{39}$, which is fractional, and thus we rule out this theory. We have found that there exists no CFT at $n = 2$ and $\ell = 0$.\\\\ \noindent For the case of $\mu_{1}\neq 0$ and $\mu_{2} \neq 0$, from \ref{MLDE_series_expressions} we find the following expressions for the free parameters by matching order by order in $q$ \begin{align} \begin{split} \mathcal{O}(1):&\ \mu_{1} = \frac{\alpha_{i}}{6} - \alpha_{i}^{2},\\ \mathcal{O}(q):&\ \mu_{2} = \frac{8}{3}\left(3\alpha_{i}^{2} - 2\alpha_{i}\right). \end{split} \end{align} for $i=0,1$. When $i=0$, $\alpha_{0} = -\tfrac{c}{24}$ and this fixes \begin{align} \mu_{1} = -\frac{c(c+4)}{576},\ \mu_{2} = \frac{c^{2}+16c}{72}. \end{align} From \ref{central_charge_valence_Fricke}, for $n = 2,\ \ell = 0$, we have $-\tfrac{c}{12} + \Delta = \tfrac{2}{3}$ which fixes the conformal dimension to be $\Delta = \tfrac{8 + c}{12}$. But it turns out that this case too yields inadmissible solutions. This makes sense, since the parameter $\mu_{2}$ is associated with a cusp form which, as we have seen in multiple instances in this work, is notorious for rendering solutions inadmissible.
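The checks above are easy to reproduce numerically. The following sketch (our own illustration, not part of the original analysis; it uses exact rational arithmetic, the $q$-expansions quoted in \ref{series_defs}, and fixes $\mu$ through the indicial constraint $\mu = \alpha/6 - \alpha^{2}$) runs the Diophantine search for $p$ and then the recursion \ref{recursion_l=0}:

```python
from fractions import Fraction as F

# q-expansions quoted in the text: E_2 = 1 - 24*sum(sigma_1(n) q^n) and
# Theta_{2,7}^2 = 1 + 8q + 40q^2 + 128q^3 + 328q^4 + ...
E2 = [F(1), F(-24), F(-72), F(-96), F(-168)]
THETA = [F(1), F(8), F(40), F(128), F(328)]

def ratios(alpha, nmax):
    """Coefficient ratios m_n = f_n/f_0 from the recursion, with mu fixed
    by the indicial equation alpha^2 - alpha/6 + mu = 0."""
    mu = alpha / 6 - alpha ** 2
    f = [F(1)]
    for n in range(1, nmax + 1):
        rhs = sum((E2[m] * (n + alpha - m) / 6 - mu * THETA[m]) * f[n - m]
                  for m in range(1, n + 1))
        f.append(rhs / (F(n * n) + 2 * n * alpha - F(n, 6)))
    return f

# Diophantine search: even values p < 18 dividing 130
results = {}
for p in [q for q in range(2, 18, 2) if 130 % q == 0]:
    m1 = (F(130, p) + F(p, 2) - 18) / 3   # 3 m1 + 18 = 130/p + p/2
    k = 18 + 3 * m1 - p                   # so that k^2 = (3 m1 + 18)^2 - 260
    x = (-(3 * m1 + 6) + k) / 2           # rational root of the quadratic in x
    results[p] = (m1, 2 * (x - 1), x / 6, ratios((1 - x) / 12, 2))
```

Both admissible values of $p$ are recovered, together with the corresponding $(m_{1}, c, \Delta)$; for $p = 2$ the recursion reproduces $m_{1} = 16$ but returns a negative $m_{2}$.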
\section*{References}} \journal{Journal of Multivariate Analysis} \begin{document} \begin{frontmatter} \title{Second order statistics of robust estimators of scatter. \\ Application to GLRT detection for elliptical signals\tnoteref{t1}} \tnotetext[t1]{ Couillet's work is supported by the ERC MORE EC--120133. Pascal's work is supported by the DGA grant no. 2013.60.0011.00.470.75.01.} \author[supelec]{Romain Couillet} \ead{[email protected]} \address[supelec]{Telecommunication department, Sup\'elec, Gif sur Yvette, France} \author[kaust]{Abla Kammoun} \ead{[email protected]} \address[kaust]{King Abdullah University of Science and Technology, Saudi Arabia} \author[sondra]{Fr\'ed\'eric Pascal} \ead{[email protected]} \address[sondra]{SONDRA Laboratory, Sup\'elec, Gif sur Yvette, France} \begin{abstract} A central limit theorem for bilinear forms of the type $a^*\hat{C}_N(\rho)^{-1}b$, where $a,b\in\CC^N$ are unit norm deterministic vectors and $\hat{C}_N(\rho)$ a robust-shrinkage estimator of scatter parametrized by $\rho$ and built upon $n$ independent elliptical vector observations, is presented. The fluctuations of $a^*\hat{C}_N(\rho)^{-1}b$ are found to be of order $N^{-\frac12}$ and to be the same as those of $a^*\hat{S}_N(\rho)^{-1}b$ for $\hat{S}_N(\rho)$ a matrix of a theoretically tractable form. This result is exploited in a classical signal detection problem to provide an improved detector which is both robust to elliptical data observations (e.g., impulsive noise) and optimized across the shrinkage parameter $\rho$. \end{abstract} \begin{keyword} random matrix theory \sep robust estimation \sep central limit theorem \sep GLRT.
\end{keyword} \end{frontmatter} \section{Introduction} \label{sec:intro} In the wake of the growing interest in large dimensional data analysis in machine learning, in a recent series of articles \citep{COU13,COU13b,COU14,ZHA14,KAR13}, several estimators from the field of robust statistics (dating back to the seventies) started to be explored under the assumption of commensurately large sample ($n$) and population ($N$) dimensions. Robust estimators were originally designed to turn classical estimators into outlier- and impulsive noise-resilient estimators, which is of considerable importance in the recent big data paradigm. Among these estimation methods, robust regression was studied in \citep{KAR13} which reveals that, in the large $N,n$ regime, the difference in norm between estimated and true regression vectors (of size $N$) tends almost surely to a positive constant which depends on the nature of the data and of the robust regressor. In parallel, and of more interest to the present work, \citep{COU13,COU13b,COU14,ZHA14} studied the limiting behavior of several classes of robust estimators $\hat{C}_N$ of scatter (or covariance) matrices $C_N$ based on independent zero-mean elliptical observations $x_1,\ldots,x_n\in\CC^N$. Precisely, \citep{COU13} shows that, letting $N/n<1$ and $\hat{C}_N$ be the (almost sure) unique solution to \begin{align*} \hat{C}_N &= \frac1n \sum_{i=1}^n u\left( \frac1Nx_i^*\hat{C}_N^{-1}x_i \right) x_ix_i^* \end{align*} under some appropriate conditions on the nonnegative function $u$ (corresponding to Maronna's M-estimator \citep{MAR76}), $\Vert\hat{C}_N-\hat{S}_N\Vert\asto 0$ in spectral norm as $N,n\to\infty$ with $N/n\to c\in(0,1)$, where $\hat{S}_N$ follows a standard random matrix model (such as studied in \citep{CHO95,HAC13}). In \citep{ZHA14}, the important scenario where $u(x)=1/x$ (referred to as Tyler's M-estimator) is treated.
It is in particular shown for this model that for identity scatter matrices the spectrum of $\hat{C}_N$ converges weakly to the Mar\v{c}enko--Pastur law \citep{MAR67} in the large $N,n$ regime. Finally, for $N/n\to c\in(0,\infty)$, \citep{COU14} studied yet another robust estimation model defined, for each $\rho\in(\max\{0,1-n/N\},1]$, by $\hat{C}_N=\hat{C}_N(\rho)$, unique solution to \begin{align} \label{eq:hatCN} \hat{C}_N(\rho) &= \frac1n \sum_{i=1}^n \frac{x_ix_i^*}{\frac1Nx_i^*\hat{C}_N^{-1}(\rho)x_i} + \rho I_N. \end{align} This estimator, proposed in \citep{PAS13}, corresponds to a hybrid robust-shrinkage estimator reminiscent of Tyler's M-estimator of scale \citep{TYL87} and of Ledoit--Wolf's shrinkage estimator \citep{LED04}. This estimator is particularly suited to scenarios where $N/n$ is not small, for which other estimators are badly conditioned if not undefined. For this model, it is shown in \citep{COU14} that $\sup_{\rho}\Vert\hat{C}_N(\rho)-\hat{S}_N(\rho)\Vert\asto 0$ where $\hat{S}_N(\rho)$ also follows a classical random matrix model. The aforementioned approximations $\hat{S}_N$ of the estimators $\hat{C}_N$, the structure of which is well understood (as opposed to $\hat{C}_N$ which is only defined implicitly), allow for both a good apprehension of the limiting behavior of $\hat{C}_N$ and more importantly for a better usage of $\hat{C}_N$ as an appropriate substitute for sample covariance matrices in various estimation problems in the large $N,n$ regime. The convergence in norm $\Vert\hat{C}_N-\hat{S}_N\Vert\asto 0$ is indeed sufficient in many cases to produce new consistent estimation methods based on $\hat{C}_N$ by simply replacing $\hat{C}_N$ by $\hat{S}_N$ in the problem defining equations.
For example, the results of \citep{COU13b} led to the introduction of novel consistent estimators based on functionals of $\hat{C}_N$ (of the Maronna type) for power and direction-of-arrival estimation in array processing in the presence of impulsive noise or rare outliers \citep{COU14c}. Similarly, in \citep{COU14}, empirical methods were designed to estimate the parameter $\rho$ which minimizes the expected Frobenius norm $\tr [(\hat{C}_N(\rho)-C_N)^2]$, of interest for various outlier-prone applications dealing with non-small ratios $N/n$.\footnote{Other metrics may also be considered as in e.g.\@ \citep{YAN14} with $\rho$ chosen to minimize the return variance in a portfolio optimization problem.} Nonetheless, while the convergence $\Vert\hat{C}_N-\hat{S}_N\Vert\asto 0$ (which comes with no particular speed) helps in producing novel consistent estimates by substituting $\hat{S}_N$ for $\hat{C}_N$, it is in general not sufficient to assess the performance of these estimates for large but finite $N,n$. Indeed, when second order results such as central limit theorems need be established, say at rate $N^{-\frac12}$, to proceed similarly to the replacement of $\hat{C}_N$ by $\hat{S}_N$ in the analysis, one would ideally demand that $\Vert\hat{C}_N-\hat{S}_N\Vert=o(N^{-\frac12})$; but such a result, we believe, unfortunately does not hold. This constitutes a severe limitation in the exploitation of robust estimators as their performance as well as optimal fine-tuning often rely on second order performance. Concretely, for practical purposes in the array processing application of \citep{COU14c}, one may naturally ask which choice of the $u$ function is optimal to minimize the variance of (consistent) power and angle estimates. This question remains unanswered to this point for lack of better theoretical results. \bigskip The main purpose of the article is twofold.
From a technical aspect, taking the robust shrinkage estimator $\hat{C}_N(\rho)$ defined by \eqref{eq:hatCN} as an example, we first show that, although the convergence $\Vert\hat{C}_N(\rho)-\hat{S}_N(\rho)\Vert\asto 0$ (from \citep[Theorem~1]{COU14}) may not be extensible to a rate $O(N^{1-\varepsilon})$, one has the bilinear form convergence $N^{1-\varepsilon} a^*(\hat{C}_N^k(\rho)-\hat{S}_N^k(\rho))b\asto 0$ for each $\varepsilon>0$, each $a,b\in\CC^N$ of unit norm, and each $k\in\ZZ$. This result implies that, if $\sqrt{N}a^*\hat{S}_N^k(\rho)b$ satisfies a central limit theorem, then so does $\sqrt{N}a^*\hat{C}_N^k(\rho)b$ with the same limiting variance. This result is of fundamental importance to any statistical application based on such quadratic forms. Our second contribution is to exploit this result for the specific problem of signal detection in impulsive noise environments via the generalized likelihood-ratio test, particularly suited for radar signals detection under elliptical noise \citep{CON95,PAS13}. In this context, we determine the shrinkage parameter $\rho$ which minimizes the probability of false detections and provide an empirical consistent estimate for this parameter, thus improving significantly over traditional sample covariance matrix-based estimators. The remainder of the article introduces our main results in Section~\ref{sec:results} which are proved in Section~\ref{sec:proof}. Technical elements of proof are provided in the appendix. {\it Notations:} In the remainder of the article, we shall denote $\lambda_1(X),\ldots,\lambda_n(X)$ the real eigenvalues of $n\times n$ Hermitian matrices $X$. The norm notation $\Vert \cdot\Vert$ being considered is the spectral norm for matrices and Euclidean norm for vectors. The symbol $\imath$ is the complex $\sqrt{-1}$. \section{Main Results} \label{sec:results} Let $N,n\in\NN$, $c_N\triangleq N/n$, and $\rho \in (\max\{0,1-c_N^{-1}\},1]$. 
Let also $x_1,\ldots,x_n\in\CC^N$ be $n$ independent random vectors defined by the following assumptions. \begin{assumption}[Data vectors] \label{ass:x} For $i\in\{1,\ldots,n\}$, $x_i=\sqrt{\tau_i}A_Nw_i=\sqrt{\tau_i}z_i$, where \begin{itemize} \item $w_i\in\CC^N$ is Gaussian with zero mean and covariance $I_N$, independent across $i$; \item $A_NA_N^*\triangleq C_N\in\CC^{N\times N}$ is such that $\nu_N\triangleq \frac1N\sum_{i=1}^N{\bm\delta}_{\lambda_i(C_N)}\to \nu$ weakly, $\limsup_N\Vert C_N\Vert <\infty$, and $\frac1N\tr C_N=1$; \item $\tau_i>0$ are random or deterministic scalars. \end{itemize} \end{assumption} Under Assumption~\ref{ass:x}, letting $\tau_i=\tilde{\tau}_i/\Vert w_i\Vert^2$ for some $\tilde{\tau}_i$ independent of $w_i$, $x_i$ belongs to the class of elliptically distributed random vectors. Note that the normalization $\frac1N\tr C_N=1$ is not a restrictive constraint since the scalars $\tau_i$ may absorb any other normalization. It is well established in robust estimation theory that, even if the $\tau_i$ are independent, independent of the $w_i$, and such that $\lim_n \frac1n \sum_{i=1}^n\tau_i=1$ a.s., the sample covariance matrix $\frac1n\sum_{i=1}^n x_ix_i^*$ is in general a poor estimate for $C_N$. Robust estimators of scatter were designed for this purpose \citep{MAR76,TYL87}. In addition, if $N/n$ is nontrivial, a linear shrinkage of these robust estimators against the identity matrix often helps in regularizing the estimator as established in e.g., \citep{PAS13,CHE11}. The robust estimator of scatter considered in this work, which we denote $\hat{C}_N(\rho)$, is defined (originally in \citep{PAS13}) as the unique solution to \begin{align*} \hat{C}_N(\rho) &= (1-\rho) \frac1n \sum_{i=1}^n \frac{x_ix_i^*}{\frac1Nx_i^*\hat{C}_N^{-1}(\rho)x_i} + \rho I_N.
\end{align*} \subsection{Theoretical Results} The asymptotic behavior of this estimator was studied recently in \citep{COU14} in the regime where $N,n\to\infty$ in such a way that $c_N\to c\in(0,\infty)$. We first recall the important results of this article, which shall lay down the main concepts and notations of the present work. First define \begin{align*} \hat{S}_N(\rho) &= \frac1{\gamma_N(\rho)}\frac{1-\rho}{1-(1-\rho)c_N} \frac1n\sum_{i=1}^n z_iz_i^* + \rho I_N \end{align*} where $\gamma_N(\rho)$ is the unique solution to \begin{align*} 1 &= \int \frac{t}{\gamma_N(\rho)\rho + (1-\rho)t}\nu_N(dt). \end{align*} For any $\kappa>0$ small, define $\mathcal R_\kappa\triangleq [\kappa+\max\{0,1-c^{-1}\},1]$. Then, from \cite[Theorem~1]{COU14}, as $N,n\to\infty$ with $c_N\to c\in(0,\infty)$, \begin{align*} \sup_{\rho \in \mathcal R_\kappa} \left\Vert \hat{C}_N(\rho) - \hat{S}_N(\rho) \right\Vert \asto 0. \end{align*} A careful analysis of the proof of \cite[Theorem~1]{COU14} (which is performed in Section~\ref{sec:proof}) reveals that the above convergence can be refined as \begin{align} \label{eq:cv12-eps} \sup_{\rho \in \mathcal R_\kappa} N^{\frac12-\varepsilon} \left\Vert \hat{C}_N(\rho) - \hat{S}_N(\rho) \right\Vert \asto 0 \end{align} for each $\varepsilon>0$. This suggests that (well-behaved) functionals of $\hat{C}_N(\rho)$ fluctuating at a slower speed than $N^{-\frac12+\varepsilon}$ for some $\varepsilon>0$ follow the same statistics as the same functionals with $\hat{S}_N(\rho)$ in place of $\hat{C}_N(\rho)$. However, this result is quite weak as most limiting theorems (starting with the classical central limit theorems for independent scalar variables) deal with fluctuations of order $N^{-\frac12}$ and sometimes in random matrix theory of order $N^{-1}$. In our opinion, the convergence speed \eqref{eq:cv12-eps} cannot be improved to a rate $N^{-\frac12}$. 
Nonetheless, thanks to an averaging effect documented in Section~\ref{sec:proof}, the fluctuation of special forms of functionals of $\hat{C}_N(\rho)$ can be proved to be much slower. Although among these functionals we could have considered linear functionals of the eigenvalue distribution of $\hat{C}_N(\rho)$, our present concern (driven by more obvious applications) is rather on bilinear forms of the type $a^*\hat{C}_N^k(\rho)b$ for some $a,b\in\CC^N$ with $\Vert a\Vert=\Vert b\Vert=1$, $k\in\ZZ$. Our first main result is the following. \begin{theorem}[Fluctuation of bilinear forms] \label{th:bilin} Let $a,b\in\CC^N$ with $\Vert a\Vert=\Vert b\Vert=1$. Then, as $N,n\to\infty$ with $c_N\to c\in(0,\infty)$, for any $\varepsilon>0$ and every $k\in\ZZ$, \begin{align*} \sup_{\rho\in\mathcal R_\kappa} N^{1-\varepsilon} \left| a^*\hat{C}_N^{k}(\rho)b - a^*\hat{S}_N^{k}(\rho)b \right| &\asto 0. \end{align*} \end{theorem} Some comments and remarks are in order. First, we recall that central limit theorems involving bilinear forms of the type $a^*\hat{S}_N^k(\rho)b$ are classical objects in random matrix theory (see e.g.\@ \citep{KAM09,MES08} for $k=-1$), particularly common in signal processing and wireless communications. These central limit theorems in general show fluctuations at speed $N^{-\frac12}$. This indicates, taking $\varepsilon<\frac12$ in Theorem~\ref{th:bilin} and using the fact that almost sure convergence implies weak convergence, that $a^*\hat{C}_N^k(\rho)b$ exhibits the same fluctuations as $a^*\hat{S}_N^k(\rho)b$, the latter being classical and tractable while the former is quite intricate at the outset, due to the implicit nature of $\hat{C}_N(\rho)$. Of practical interest to many applications in signal processing is the case where $k=-1$. In the next section, we present a classical generalized maximum likelihood signal detection problem in impulsive noise, for which we shall characterize the shrinkage parameter $\rho$ that achieves minimal false alarm rates.
\subsection{Application to Signal Detection} In this section, we consider the hypothesis testing scenario by which an $N$-sensor array receives a vector $y\in\CC^N$ according to the following hypotheses \begin{align*} y &= \left\{ \begin{array}{ll} x &,~\mathcal H_0 \\ \alpha p + x&,~\mathcal H_1 \end{array} \right. \end{align*} in which $\alpha>0$ is some unknown scaling factor, while $p\in\CC^N$ is deterministic and known at the sensor array (which often corresponds to a steering vector arising from a specific known angle), and $x$ is an impulsive noise distributed as $x_1$ according to Assumption~\ref{ass:x}. For convenience, we shall take $\Vert p\Vert=1$. Under $\mathcal H_0$ (the null hypothesis), a noisy observation from an impulsive source is observed while under $\mathcal H_1$ both information and noise are collected at the array. The objective is to decide on $\mathcal H_1$ versus $\mathcal H_0$ upon the observation $y$ and prior pure-noise observations $x_1,\ldots,x_n$ distributed according to Assumption~\ref{ass:x}. When $\tau_1,\ldots,\tau_n$ and $C_N$ are unknown, the corresponding generalized likelihood ratio test, derived in \citep{CON95}, reads \begin{align*} T_N(\rho) &\overset{\mathcal H_1}{\underset{\mathcal H_0}{\gtrless}} \Gamma \end{align*} for some detection threshold $\Gamma$ where \begin{align*} T_N(\rho) \triangleq \frac{|y^*\hat{C}_N^{-1}(\rho)p|}{\sqrt{y^*\hat{C}_N^{-1}(\rho)y}\sqrt{p^*\hat{C}_N^{-1}(\rho)p}}. \end{align*} More precisely, \citep{CON95} derived the detector $T_N(0)$, which is only valid when $n\geq N$. The relaxed detector $T_N(\rho)$ allows for a better conditioning of the estimator, in particular for $n\simeq N$. In \citep{PAS13}, $T_N(\rho)$ is used explicitly in a space-time adaptive processing setting but only simulation results were provided.
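In code, evaluating $T_N(\rho)$ from any positive definite scatter estimate is immediate (a sketch of our own; by the Cauchy--Schwarz inequality for the inner product $(a,b)\mapsto a^*\hat{C}_N^{-1}(\rho)b$, the statistic always lies in $[0,1]$):

```python
import numpy as np

def glrt_statistic(y, p, C_hat):
    """T_N(rho) = |y^* C^{-1} p| / (sqrt(y^* C^{-1} y) * sqrt(p^* C^{-1} p))
    for a positive definite scatter estimate C_hat."""
    Ci_y = np.linalg.solve(C_hat, y)
    Ci_p = np.linalg.solve(C_hat, p)
    num = np.abs(np.vdot(y, Ci_p))        # vdot conjugates its first argument
    den = np.sqrt(np.real(np.vdot(y, Ci_y)) * np.real(np.vdot(p, Ci_p)))
    return num / den
```

The value $1$ is attained exactly when $y$ is proportional to $p$, and small values are expected under $\mathcal H_0$.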
Alternative metrics for similar array processing problems involve the signal-to-noise ratio loss minimization rather than likelihood ratio tests; in \citep{ABR13a,ABR13b}, the authors exploit the estimators $\hat{C}_N(\rho)$ but restrict themselves to the less tractable finite dimensional analysis. Our objective is to characterize the false alarm performance of the detector. That is, provided $\mathcal H_0$ is the actual scenario (i.e.\@ $y=x$), we shall evaluate $P(T_N(\rho)>\Gamma)$. Since it shall appear that, under $\mathcal H_0$, $T_N(\rho)\asto 0$ for every $\rho$, by dominated convergence $P(T_N(\rho)>\Gamma)\to 0$ for every fixed $\Gamma>0$, which does not say much about the actual test performance for large but finite $N,n$. To avoid such empty statements, we shall then consider the non-trivial case where $\Gamma=N^{-\frac12}\gamma$ for some fixed $\gamma>0$. In this case our objective is to characterize the false alarm probability \begin{align*} P\left( T_N(\rho) > \frac{\gamma}{\sqrt{N}} \right). \end{align*} Before providing this result, we need some further reminders from \citep{COU14}. First define \begin{align*} \underline{\hat{S}}_N(\rho) &\triangleq (1-\rho)\frac1n\sum_{i=1}^n z_iz_i^* + \rho I_N. \end{align*} Then, from \cite[Lemma~1]{COU14}, for each $\rho\in (\max\{0,1-c^{-1}\},1]$, \begin{align*} \frac{\hat{S}_N(\rho)}{\rho+\frac1{\gamma_N(\rho)}\frac{1-\rho}{1-(1-\rho)c}} = \underline{\hat{S}}_N(\underline\rho) \end{align*} where \begin{align*} \underline\rho \triangleq \frac{{\rho}}{\rho+\frac1{\gamma_N(\rho)}\frac{1-\rho}{1-(1-\rho)c}}. \end{align*} Moreover, the mapping $\rho\mapsto \underline\rho$ is continuously increasing from $(\max\{0,1-c^{-1}\},1]$ onto $(0,1]$.
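For a discrete spectral distribution $\nu_N$ (e.g., the eigenvalues of a known $C_N$), both $\gamma_N(\rho)$ and the induced $\underline\rho$ are easily evaluated numerically, since the right-hand side of the equation defining $\gamma_N(\rho)$ is decreasing in $\gamma_N(\rho)$. A minimal sketch of our own, by bisection on that monotone equation:

```python
import numpy as np

def gamma_N(rho, lambdas):
    """Unique gamma > 0 solving 1 = (1/N) sum_i lambda_i / (gamma*rho + (1-rho)*lambda_i),
    where lambdas are the eigenvalues of C_N (normalized so that their mean is 1)."""
    if rho == 1.0:
        return float(np.mean(lambdas))   # the equation reduces to gamma = (1/N) tr C_N
    g = lambda x: np.mean(lambdas / (x * rho + (1.0 - rho) * lambdas)) - 1.0
    lo, hi = 0.0, 1.0
    while g(hi) > 0.0:                   # g is decreasing in x; bracket the root
        hi *= 2.0
    for _ in range(100):                 # plain bisection
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

def rho_bar(rho, lambdas, c):
    """The mapping rho -> underline(rho) built from gamma_N(rho)."""
    gamma = gamma_N(rho, lambdas)
    return rho / (rho + (1.0 - rho) / (gamma * (1.0 - (1.0 - rho) * c)))
```

With the parameters of the simulations below ($[C_N]_{ij}=0.7^{|i-j|}$, $c=1/2$), the computed mapping is indeed increasing in $\rho$ and maps onto $(0,1]$.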
From classical random matrix considerations (see e.g.\@ \citep{SIL95}), letting $Z=[z_1,\ldots,z_n]\in\CC^{N\times n}$, the empirical spectral distribution\footnote{That is the normalized counting measure of the eigenvalues.} of $(1-\rho)\frac1nZ^*Z$ almost surely admits a weak limit $\mu$. The Stieltjes transform $m(z)\triangleq \int (t-z)^{-1}\mu(dt)$ of $\mu$ at $z\in\CC\setminus {\rm Supp}(\mu)$ is the unique complex solution with positive (resp.\@ negative) imaginary part if $\Im[z]>0$ (resp.\@ $\Im[z]<0$) and unique real positive solution if $\Im[z]=0$ and $\Re[z]<0$ to \begin{align*} m(z) &= \left( -z + c\int \frac{(1-\rho)t}{1+(1-\rho)tm(z)} \nu(dt) \right)^{-1}. \end{align*} We denote $m'(z)$ the derivative of $m(z)$ with respect to $z$ (recall that the Stieltjes transform of a positively supported measure is analytic, hence continuously differentiable, away from the support of the measure). With these definitions in place and with the help of Theorem~\ref{th:bilin}, we are now ready to introduce the main result of this section. \begin{theorem}[Asymptotic detector performance] \label{th:T} Under hypothesis $\mathcal H_0$, as $N,n\to\infty$ with $c_N\to c\in(0,\infty)$, \begin{align*} \sup_{\rho\in\mathcal R_\kappa} \left| P\left( T_N(\rho) > \frac{\gamma}{\sqrt{N}} \right) - \exp \left( - \frac{\gamma^2}{2\sigma_N^2(\underline\rho)} \right) \right| &\to 0 \end{align*} where $\rho\mapsto\underline\rho$ is the aforementioned mapping and \begin{align*} \sigma_N^2(\underline{\rho}) &\triangleq \frac12 \frac{ p^*C_NQ_N^2(\underline\rho)p}{ p^*Q_N(\underline\rho)p \cdot \frac1N\tr C_NQ_N(\underline\rho)\cdot \left(1-c (1-\rho)^2 m(-\underline{\rho})^2 \frac1N\tr C_N^2Q_N^2(\underline\rho)\right) } \end{align*} with $Q_N(\underline\rho)\triangleq (I_N + (1-\underline{\rho})m(-\underline{\rho})C_N)^{-1}$. 
\end{theorem} Otherwise stated, $\sqrt{N}T_N(\rho)$ is uniformly well approximated by a Rayleigh distributed random variable $R_N(\underline\rho)$ with parameter $\sigma_N(\underline\rho)$. Simulation results are provided in Figure~\ref{fig:hist_detector_20} and Figure~\ref{fig:hist_detector_100} which corroborate the results of Theorem~\ref{th:T} for $N=20$ and $N=100$, respectively (for a single value of $\rho$ though). Comparatively, it is observed, as one would expect, that larger values for $N$ induce improved approximations in the tails of the approximating distribution. \begin{figure}[h!] \centering \begin{tabular}{cc} \begin{tikzpicture}[font=\footnotesize,scale=.7] \renewcommand{\axisdefaulttryminticks}{4} \tikzstyle{every major grid}+=[style=densely dashed] \tikzstyle{every axis y label}+=[yshift=-10pt] \tikzstyle{every axis x label}+=[yshift=5pt] \tikzstyle{every axis legend}+=[cells={anchor=west},fill=white, at={(0.98,0.98)}, anchor=north east, font=\scriptsize ] \begin{axis}[ xmin=0, ymin=0, xmax=4, bar width=3pt, grid=major, ymajorgrids=false, scaled ticks=true, ylabel={Density} ] \addplot+[ybar,mark=none,color=black,fill=gray!40!white] coordinates{ (0.05,0.055)(0.15,0.138)(0.25,0.263)(0.35,0.324)(0.45,0.42)(0.55,0.516)(0.65,0.544)(0.75,0.626)(0.85,0.621)(0.95,0.631)(1.05,0.639)(1.15,0.618)(1.25,0.625)(1.35,0.588)(1.45,0.515)(1.55,0.512)(1.65,0.417)(1.75,0.395)(1.85,0.331)(1.95,0.3)(2.05,0.232)(2.15,0.186)(2.25,0.143)(2.35,0.093)(2.45,0.081)(2.55,0.065)(2.65,0.053)(2.75,0.031)(2.85,0.019)(2.95,0.01)(3.05,0.006)(3.15,0.002)(3.25,0.)(3.35,0.001)(3.45,0.)(3.55,0.)(3.65,0.)(3.75,0.)(3.85,0.)(3.95,0.)(4.05,0.)(4.15,0.)(4.25,0.)(4.35,0.)(4.45,0.)(4.55,0.)(4.65,0.)(4.75,0.)(4.85,0.)(4.95,0.)(5.05,0.) 
}; \addplot[black,smooth,line width=1pt] plot coordinates{ (0.,0.)(0.1,0.109902)(0.2,0.21619)(0.3,0.315448)(0.4,0.40464)(0.5,0.481262)(0.6,0.543458)(0.7,0.590088)(0.8,0.620745)(0.9,0.635727)(1.,0.635965)(1.1,0.62292)(1.2,0.598449)(1.3,0.564673)(1.4,0.523829)(1.5,0.478146)(1.6,0.429733)(1.7,0.380485)(1.8,0.332025)(1.9,0.285669)(2.,0.24241)(2.1,0.202932)(2.2,0.167636)(2.3,0.136674)(2.4,0.109997)(2.5,0.087403)(2.6,0.068576)(2.7,0.053135)(2.8,0.040662)(2.9,0.030736)(3.,0.02295)(3.1,0.01693)(3.2,0.012338)(3.3,0.008885)(3.4,0.006321)(3.5,0.004445)(3.6,0.003088)(3.7,0.00212)(3.8,0.001439)(3.9,0.000965)(4.,0.00064)(4.1,0.000419)(4.2,0.000271)(4.3,0.000174)(4.4,0.00011)(4.5,0.000069)(4.6,0.000042)(4.7,0.000026)(4.8,0.000016)(4.9,0.000009)(5.,0.000006) }; \legend{ {Empirical hist.\@ of $T_N(\rho)$},{Distribution of $R_N(\underline\rho)$} } \end{axis} \end{tikzpicture} & \begin{tikzpicture}[font=\footnotesize,scale=.7] \renewcommand{\axisdefaulttryminticks}{4} \tikzstyle{every major grid}+=[style=densely dashed] \tikzstyle{every axis y label}+=[yshift=-10pt] \tikzstyle{every axis x label}+=[yshift=5pt] \tikzstyle{every axis legend}+=[cells={anchor=west},fill=white, at={(0.98,0.02)}, anchor=south east, font=\scriptsize ] \begin{axis}[ xmin=0, ymin=0, xmax=4, ymax=1, bar width=1.5pt, grid=major, ymajorgrids=false, scaled ticks=true, mark repeat=10, ylabel={Cumulative distribution} ] \addplot[black,mark=*] coordinates{ 
(0.,0.)(0.006969,0.01)(0.140717,0.02)(0.20486,0.03)(0.244764,0.04)(0.283584,0.05)(0.315349,0.06)(0.346699,0.07)(0.377842,0.08)(0.406186,0.09)(0.429656,0.1)(0.451618,0.11)(0.475542,0.12)(0.500291,0.13)(0.520232,0.14)(0.539641,0.15)(0.559926,0.16)(0.577158,0.17)(0.596035,0.18)(0.616435,0.19)(0.638308,0.2)(0.657572,0.21)(0.673424,0.22)(0.68921,0.23)(0.704637,0.24)(0.719824,0.25)(0.736868,0.26)(0.752971,0.27)(0.768967,0.28)(0.784846,0.29)(0.801679,0.3)(0.819511,0.31)(0.835377,0.32)(0.848875,0.33)(0.866909,0.34)(0.880432,0.35)(0.89928,0.36)(0.913679,0.37)(0.932125,0.38)(0.948459,0.39)(0.965796,0.4)(0.977908,0.41)(0.992774,0.42)(1.010217,0.43)(1.023342,0.44)(1.039632,0.45)(1.053273,0.46)(1.068361,0.47)(1.086139,0.48)(1.105446,0.49)(1.121136,0.5)(1.136039,0.51)(1.151077,0.52)(1.168546,0.53)(1.184198,0.54)(1.200874,0.55)(1.21731,0.56)(1.233584,0.57)(1.248894,0.58)(1.266999,0.59)(1.28289,0.6)(1.297469,0.61)(1.314827,0.62)(1.331183,0.63)(1.34982,0.64)(1.366348,0.65)(1.381957,0.66)(1.398737,0.67)(1.418269,0.68)(1.435907,0.69)(1.456034,0.7)(1.476974,0.71)(1.496225,0.72)(1.514232,0.73)(1.530731,0.74)(1.551749,0.75)(1.570442,0.76)(1.591612,0.77)(1.614273,0.78)(1.63925,0.79)(1.664769,0.8)(1.685157,0.81)(1.709925,0.82)(1.733259,0.83)(1.759881,0.84)(1.787953,0.85)(1.816399,0.86)(1.844413,0.87)(1.872439,0.88)(1.906016,0.89)(1.938662,0.9)(1.974716,0.91)(2.008711,0.92)(2.050453,0.93)(2.097168,0.94)(2.146087,0.95)(2.201662,0.96)(2.268444,0.97)(2.365899,0.98)(2.481182,0.99)(2.634444,1.) 
}; \addplot[black,smooth] plot coordinates{ (0.,0.)(0.1,0.00551)(0.2,0.02186)(0.3,0.048514)(0.4,0.084613)(0.5,0.129022)(0.6,0.180384)(0.7,0.237194)(0.8,0.297868)(0.9,0.36082)(1.,0.424522)(1.1,0.487569)(1.2,0.548725)(1.3,0.606949)(1.4,0.661424)(1.5,0.711554)(1.6,0.756962)(1.7,0.797473)(1.8,0.833086)(1.9,0.863948)(2.,0.890323)(2.1,0.912556)(2.2,0.931049)(2.3,0.946228)(2.4,0.958527)(2.5,0.968364)(2.6,0.976133)(2.7,0.982192)(2.8,0.986859)(2.9,0.990409)(3.,0.993078)(3.1,0.995058)(3.2,0.996511)(3.3,0.997564)(3.4,0.998318)(3.5,0.998851)(3.6,0.999224)(3.7,0.999481)(3.8,0.999657)(3.9,0.999776)(4.,0.999855)(4.1,0.999908)(4.2,0.999942)(4.3,0.999963)(4.4,0.999977)(4.5,0.999986)(4.6,0.999992)(4.7,0.999995)(4.8,0.999997)(4.9,0.999998)(5.,0.999999) }; \legend{ {Empirical dist.\@ of $T_N(\rho)$},{Distribution of $R_N(\underline\rho)$} } \end{axis} \end{tikzpicture} \end{tabular} \caption{Histogram distribution function of the $\sqrt{N}T_N(\rho)$ versus $R_N(\underline\rho)$, $N=20$, $p=N^{-\frac12}[1,\ldots,1]^\trans$, $[C_N]_{ij}=0.7^{|i-j|}$, $c_N=1/2$, $\rho=0.2$.} \label{fig:hist_detector_20} \end{figure} \begin{figure}[h!] 
\centering \begin{tabular}{cc} \begin{tikzpicture}[font=\footnotesize,scale=.7] \renewcommand{\axisdefaulttryminticks}{4} \tikzstyle{every major grid}+=[style=densely dashed] \tikzstyle{every axis y label}+=[yshift=-10pt] \tikzstyle{every axis x label}+=[yshift=5pt] \tikzstyle{every axis legend}+=[cells={anchor=west},fill=white, at={(0.98,0.98)}, anchor=north east, font=\scriptsize ] \begin{axis}[ xmin=0, ymin=0, xmax=4, bar width=3pt, grid=major, ymajorgrids=false, scaled ticks=true, ylabel={Density} ] \addplot+[ybar,mark=none,color=black,fill=gray!40!white] coordinates{ (0.05,0.062)(0.15,0.152)(0.25,0.292)(0.35,0.332)(0.45,0.412)(0.55,0.515)(0.65,0.576)(0.75,0.592)(0.85,0.614)(0.95,0.669)(1.05,0.638)(1.15,0.617)(1.25,0.615)(1.35,0.546)(1.45,0.532)(1.55,0.426)(1.65,0.391)(1.75,0.378)(1.85,0.331)(1.95,0.252)(2.05,0.225)(2.15,0.185)(2.25,0.144)(2.35,0.125)(2.45,0.098)(2.55,0.063)(2.65,0.066)(2.75,0.052)(2.85,0.032)(2.95,0.021)(3.05,0.017)(3.15,0.007)(3.25,0.008)(3.35,0.006)(3.45,0.003)(3.55,0.)(3.65,0.003)(3.75,0.002)(3.85,0.)(3.95,0.001)(4.05,0.)(4.15,0.)(4.25,0.)(4.35,0.)(4.45,0.)(4.55,0.)(4.65,0.)(4.75,0.)(4.85,0.)(4.95,0.)(5.05,0.) 
}; \addplot[black,smooth,line width=1pt] plot coordinates{ (0.,0.)(0.1,0.109131)(0.2,0.214698)(0.3,0.313332)(0.4,0.402036)(0.5,0.478332)(0.6,0.540381)(0.7,0.587044)(0.8,0.617904)(0.9,0.633237)(1.,0.633945)(1.1,0.621449)(1.2,0.597572)(1.3,0.564395)(1.4,0.524123)(1.5,0.478956)(1.6,0.430981)(1.7,0.382081)(1.8,0.333873)(1.9,0.287674)(2.,0.244483)(2.1,0.204995)(2.2,0.169624)(2.3,0.138538)(2.4,0.111702)(2.5,0.088927)(2.6,0.069911)(2.7,0.054281)(2.8,0.041628)(2.9,0.031536)(3.,0.023602)(3.1,0.017452)(3.2,0.01275)(3.3,0.009204)(3.4,0.006566)(3.5,0.004629)(3.6,0.003225)(3.7,0.002221)(3.8,0.001511)(3.9,0.001017)(4.,0.000676)(4.1,0.000444)(4.2,0.000289)(4.3,0.000185)(4.4,0.000118)(4.5,0.000074)(4.6,0.000046)(4.7,0.000028)(4.8,0.000017)(4.9,0.00001)(5.,0.000006) }; \legend{ {Empirical hist.\@ of $T_N(\rho)$},{Density of $R_N(\underline\rho)$} } \end{axis} \end{tikzpicture} & \begin{tikzpicture}[font=\footnotesize,scale=.7] \renewcommand{\axisdefaulttryminticks}{4} \tikzstyle{every major grid}+=[style=densely dashed] \tikzstyle{every axis y label}+=[yshift=-10pt] \tikzstyle{every axis x label}+=[yshift=5pt] \tikzstyle{every axis legend}+=[cells={anchor=west},fill=white, at={(0.98,0.02)}, anchor=south east, font=\scriptsize ] \begin{axis}[ xmin=0, ymin=0, xmax=4, ymax=1, bar width=1.5pt, grid=major, ymajorgrids=false, scaled ticks=true, mark repeat=10, ylabel={Cumulative distribution} ] \addplot[black,mark=*] coordinates{ 
(0.,0.)(0.004147,0.01)(0.134196,0.02)(0.194839,0.03)(0.231715,0.04)(0.268037,0.05)(0.29753,0.06)(0.326162,0.07)(0.361765,0.08)(0.387635,0.09)(0.415288,0.1)(0.441611,0.11)(0.464966,0.12)(0.485413,0.13)(0.509661,0.14)(0.529458,0.15)(0.548931,0.16)(0.567224,0.17)(0.585663,0.18)(0.606538,0.19)(0.622153,0.2)(0.639809,0.21)(0.657092,0.22)(0.674893,0.23)(0.691094,0.24)(0.709848,0.25)(0.723726,0.26)(0.742494,0.27)(0.759849,0.28)(0.776713,0.29)(0.792786,0.3)(0.809041,0.31)(0.826186,0.32)(0.843159,0.33)(0.860687,0.34)(0.876359,0.35)(0.891876,0.36)(0.908243,0.37)(0.921541,0.38)(0.935376,0.39)(0.950716,0.4)(0.966973,0.41)(0.982105,0.42)(0.996863,0.43)(1.010958,0.44)(1.031895,0.45)(1.04803,0.46)(1.059607,0.47)(1.074379,0.48)(1.090857,0.49)(1.105972,0.5)(1.12501,0.51)(1.14239,0.52)(1.157615,0.53)(1.171878,0.54)(1.188458,0.55)(1.204927,0.56)(1.222216,0.57)(1.237269,0.58)(1.251976,0.59)(1.271192,0.6)(1.287183,0.61)(1.301946,0.62)(1.319419,0.63)(1.338576,0.64)(1.355493,0.65)(1.377815,0.66)(1.394742,0.67)(1.415819,0.68)(1.433655,0.69)(1.451259,0.7)(1.468387,0.71)(1.486074,0.72)(1.509745,0.73)(1.534089,0.74)(1.555325,0.75)(1.579848,0.76)(1.601953,0.77)(1.626237,0.78)(1.650685,0.79)(1.678268,0.8)(1.704434,0.81)(1.728554,0.82)(1.757243,0.83)(1.782361,0.84)(1.810967,0.85)(1.840125,0.86)(1.869583,0.87)(1.904995,0.88)(1.944813,0.89)(1.984082,0.9)(2.025297,0.91)(2.068876,0.92)(2.11431,0.93)(2.168671,0.94)(2.231536,0.95)(2.303031,0.96)(2.383405,0.97)(2.477321,0.98)(2.621018,0.99)(2.800403,1.) 
}; \addplot[black,smooth] plot coordinates{ (0.,0.)(0.1,0.005472)(0.2,0.021707)(0.3,0.04818)(0.4,0.084042)(0.5,0.128172)(0.6,0.179233)(0.7,0.235735)(0.8,0.296114)(0.9,0.358798)(1.,0.422273)(1.1,0.485146)(1.2,0.546184)(1.3,0.60435)(1.4,0.658826)(1.5,0.709012)(1.6,0.754524)(1.7,0.795178)(1.8,0.830964)(1.9,0.86202)(2.,0.888599)(2.1,0.91104)(2.2,0.929735)(2.3,0.945108)(2.4,0.957585)(2.5,0.967584)(2.6,0.975496)(2.7,0.981679)(2.8,0.986451)(2.9,0.99009)(3.,0.99283)(3.1,0.99487)(3.2,0.996369)(3.3,0.997458)(3.4,0.99824)(3.5,0.998795)(3.6,0.999184)(3.7,0.999453)(3.8,0.999638)(3.9,0.999762)(4.,0.999846)(4.1,0.999901)(4.2,0.999937)(4.3,0.999961)(4.4,0.999976)(4.5,0.999985)(4.6,0.999991)(4.7,0.999995)(4.8,0.999997)(4.9,0.999998)(5.,0.999999) }; \legend{ {Empirical dist.\@ of $T_N(\rho)$},{Distribution of $R_N(\underline\rho)$} } \end{axis} \end{tikzpicture} \end{tabular} \caption{Histogram distribution function of the $\sqrt{N}T_N(\rho)$ versus $R_N(\underline\rho)$, $N=100$, $p=N^{-\frac12}[1,\ldots,1]^\trans$, $[C_N]_{ij}=0.7^{|i-j|}$, $c_N=1/2$, $\rho=0.2$.} \label{fig:hist_detector_100} \end{figure} The result of Theorem~\ref{th:T} provides an analytical characterization of the performance of the GLRT for each $\rho$ which suggests in particular the existence of values for $\rho$ which minimize the false alarm probability for given $\gamma$. Note in passing that, independently of $\gamma$, minimizing the false alarm rate is asymptotically equivalent to minimizing $\sigma_N^2(\underline\rho)$ over $\rho$. However, the expression of $\sigma_N^2(\underline\rho)$ depends on the covariance matrix $C_N$ which is unknown to the array and therefore does not allow for an immediate online choice of an appropriate $\underline\rho$. To tackle this problem, the following proposition provides a consistent estimate for $\sigma_N^2(\underline\rho)$ based on $\hat{C}_N(\rho)$ and $p$. 
\begin{proposition}[Empirical performance estimation] \label{prop:1} For $\rho\in(\max\{0,1-c_N^{-1}\},1)$ and $\underline\rho$ defined as above, let $\hat{\sigma}_N^2(\underline\rho)$ be given by \begin{align*} \hat{\sigma}_N^2(\underline\rho) &\triangleq \frac12 \frac{1-\underline\rho \cdot \frac{p^*\hat{C}_N^{-2}(\rho)p}{p^*\hat{C}_N^{-1}(\rho)p}\cdot \frac1N\tr \hat{C}_N(\rho)}{\left( 1-c + c\underline\rho \frac1N\tr\hat{C}_N^{-1}(\rho)\cdot \frac1N\tr \hat{C}_N(\rho) \right)\left( 1 - \underline\rho \frac1N\tr\hat{C}_N^{-1}(\rho)\cdot \frac1N\tr \hat{C}_N(\rho)\right)}. \end{align*} Also let $\hat{\sigma}_N^2(1)\triangleq \lim_{\underline\rho\uparrow 1}\hat{\sigma}_N^2(\underline\rho)$. Then we have \begin{align*} \sup_{\rho \in \mathcal R_\kappa} \left| \sigma_N^2(\underline\rho) - \hat{\sigma}_N^2(\underline\rho) \right| &\asto 0. \end{align*} \end{proposition} Since both the estimation of $\sigma_N^2(\underline\rho)$ in Proposition~\ref{prop:1} and the convergence in Theorem~\ref{th:T} are uniform over $\rho\in\mathcal R_\kappa$, we have the following result. \begin{corollary}[Empirical performance optimum] \label{co:1} Let $\hat{\sigma}_N^2(\underline\rho)$ be defined as in Proposition~\ref{prop:1} and define $\hat{\rho}_N^*$ as any value satisfying \begin{align*} \hat{\rho}_N^* &\in \argmin_{ \rho\in \mathcal R_\kappa } \left\{ \hat{\sigma}_N^2(\underline\rho) \right\} \end{align*} (this set being in general a singleton). Then, for every $\gamma>0$, \begin{align*} P\left( \sqrt{N}T_N(\hat{\rho}_N^*) > \gamma \right) - \inf_{\rho\in \mathcal R_\kappa} \left\{ P\left( \sqrt{N}T_N(\rho) > \gamma \right) \right\} &\to 0. \end{align*} \end{corollary} This last result states that, for $N,n$ sufficiently large, it is increasingly close-to-optimal to use the detector $T_N(\hat{\rho}_N^*)$ in order to reach minimal false alarm probability. 
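Since the expression in Proposition~\ref{prop:1} only involves traces of $\hat{C}_N(\rho)$, $\hat{C}_N^{-1}(\rho)$ and quadratic forms in $p$, the estimate is straightforward to compute online once $\hat{C}_N(\rho)$ is available. A minimal numerical sketch (all function and variable names are ours; $\hat{C}_N(\rho)$ is assumed to have been computed beforehand and $c=c_N=N/n$):

```python
import numpy as np

def sigma2_hat(C_hat, p, rho_u, c):
    """Evaluate the empirical variance estimate sigma_hat_N^2(rho_u) of
    Proposition 1 (sketch). C_hat: robust shrinkage estimate C_hat_N(rho),
    p: steering vector, rho_u: underline{rho}, c: ratio c_N = N/n."""
    N = C_hat.shape[0]
    Cinv = np.linalg.inv(C_hat)
    # (p^* C^{-2} p) / (p^* C^{-1} p)
    quad = np.real(p.conj() @ Cinv @ Cinv @ p) / np.real(p.conj() @ Cinv @ p)
    trC = np.real(np.trace(C_hat)) / N      # (1/N) tr C_hat
    trCinv = np.real(np.trace(Cinv)) / N    # (1/N) tr C_hat^{-1}
    num = 1.0 - rho_u * quad * trC
    den = (1.0 - c + c * rho_u * trCinv * trC) * (1.0 - rho_u * trCinv * trC)
    return 0.5 * num / den
```

For a sanity check, taking $\hat{C}_N(\rho)=I_N$ makes all traces and the quadratic-form ratio equal to $1$, so the formula collapses to $\frac12(1-\underline\rho)/\big((1-c+c\underline\rho)(1-\underline\rho)\big)$.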
A practical graphical confirmation of this fact is provided in Figure~\ref{fig:FAR} where, in the same scenario as in Figures~\ref{fig:hist_detector_20}--\ref{fig:hist_detector_100}, the false alarm rates for various values of $\gamma$ are depicted. In this figure, the black dots correspond to the actual values of $P(\sqrt{N}T_N(\rho)>\gamma)$, empirically obtained from $10^6$ Monte Carlo simulations. The solid curves are the approximating values $\exp(-\gamma^2/(2\sigma_N(\underline\rho)^2))$. Finally, the white dots with error bars correspond to the mean and standard deviation of $\exp(-\gamma^2/(2\hat\sigma_N(\underline\rho)^2))$ for each $\underline\rho$. It is first interesting to note that the estimates $\hat\sigma_N(\underline\rho)$ are quite accurate, especially for large $N$, with standard deviations small enough to provide good estimates, already for small $N$, of the false-alarm-minimizing $\rho$. However, similarly to Figures~\ref{fig:hist_detector_20}--\ref{fig:hist_detector_100}, we observe a particularly weak approximation in the (small) $N=20$ setting for large values of $\gamma$, corresponding to tail events, while for $N=100$ these values are better recovered. This behavior is explained by the fact that $\gamma=3$ is not small compared to $\sqrt{N}$ when $N=20$. Nonetheless, from an error rate viewpoint, it is observed that errors of order $10^{-2}$ are rather well approximated for $N=100$. In Figure~\ref{fig:FAR2}, we examine this observation further by displaying $P(T_N(\hat{\rho}_N^*)>\Gamma)$ and its approximation $\exp(-N\Gamma^2/(2\hat\sigma_N^2(\underline\rho)))$ for $N=20$ and $N=100$, for various values of $\Gamma$.
This figure shows that even errors of order $10^{-4}$ are well approximated for large $N$, while only errors of order $10^{-2}$ can be evaluated for small $N$.\footnote{Note that a comparison against alternative algorithms that would use no shrinkage (i.e., by setting $\rho=0$) or that would not implement a robust estimate is not provided here, being of little relevance. Indeed, a proper selection of $c_N$ to a large value or $C_N$ with condition number close to one would provide an arbitrarily large gain for shrinkage-based methods, while an arbitrarily heavy-tailed choice of the $\tau_i$ distribution would provide a huge performance gain for robust methods. It is therefore not possible to compare such methods on fair grounds.} \begin{figure}[h!] \centering \begin{tabular}{cc} \begin{tikzpicture}[font=\footnotesize,scale=.7] \renewcommand{\axisdefaulttryminticks}{4} \tikzstyle{every major grid}+=[style=densely dashed] \tikzstyle{every axis y label}+=[yshift=-10pt] \tikzstyle{every axis x label}+=[yshift=5pt] \tikzstyle{every axis legend}+=[cells={anchor=west},fill=white, at={(0.98,0.02)}, anchor=south east, font=\scriptsize ] \begin{semilogyaxis}[ xmin=0, xmax=1, ymax=1, grid=major, ymajorgrids=false, scaled ticks=true, xlabel={$\rho$}, ylabel={$P(\sqrt{N}T_N(\rho)>\gamma)$} ] \addplot[black] plot coordinates{
(0.010000,0.132470)(0.020000,0.129832)(0.030000,0.127406)(0.040000,0.125179)(0.050000,0.123139)(0.060000,0.121277)(0.070000,0.119582)(0.080000,0.118045)(0.090000,0.116660)(0.100000,0.115417)(0.110000,0.114311)(0.120000,0.113336)(0.130000,0.112486)(0.140000,0.111756)(0.150000,0.111141)(0.160000,0.110639)(0.170000,0.110244)(0.180000,0.109954)(0.190000,0.109766)(0.200000,0.109677)(0.210000,0.109685)(0.220000,0.109788)(0.230000,0.109983)(0.240000,0.110270)(0.250000,0.110648)(0.260000,0.111114)(0.270000,0.111669)(0.280000,0.112311)(0.290000,0.113040)(0.300000,0.113856)(0.310000,0.114758)(0.320000,0.115747)(0.330000,0.116822)(0.340000,0.117984)(0.350000,0.119233)(0.360000,0.120570)(0.370000,0.121996)(0.380000,0.123511)(0.390000,0.125116)(0.400000,0.126811)(0.410000,0.128599)(0.420000,0.130480)(0.430000,0.132456)(0.440000,0.134528)(0.450000,0.136696)(0.460000,0.138964)(0.470000,0.141332)(0.480000,0.143802)(0.490000,0.146376)(0.500000,0.149055)(0.510000,0.151841)(0.520000,0.154736)(0.530000,0.157743)(0.540000,0.160862)(0.550000,0.164095)(0.560000,0.167446)(0.570000,0.170915)(0.580000,0.174504)(0.590000,0.178216)(0.600000,0.182052)(0.610000,0.186014)(0.620000,0.190104)(0.630000,0.194324)(0.640000,0.198675)(0.650000,0.203159)(0.660000,0.207776)(0.670000,0.212530)(0.680000,0.217420)(0.690000,0.222448)(0.700000,0.227615)(0.710000,0.232922)(0.720000,0.238368)(0.730000,0.243955)(0.740000,0.249682)(0.750000,0.255549)(0.760000,0.261556)(0.770000,0.267703)(0.780000,0.273987)(0.790000,0.280408)(0.800000,0.286965)(0.810000,0.293655)(0.820000,0.300476)(0.830000,0.307425)(0.840000,0.314500)(0.850000,0.321698)(0.860000,0.329014)(0.870000,0.336445)(0.880000,0.343987)(0.890000,0.351635)(0.900000,0.359384)(0.910000,0.367230)(0.920000,0.375167)(0.930000,0.383190)(0.940000,0.391293)(0.950000,0.399471)(0.960000,0.407718)(0.970000,0.416027)(0.980000,0.424394)(0.990000,0.432813)(1.000000,0.441279) }; \addplot[black,only marks,mark=o,mark options={scale=0.75},error bars/.cd,y dir=both,y 
explicit, error bar style={mark size=1.5pt}] plot coordinates{ (0.050000,0.122858)+-(0.004658,0.004658)(0.100000,0.115290)+-(0.007070,0.007070)(0.150000,0.111293)+-(0.008363,0.008363)(0.200000,0.110090)+-(0.009027,0.009027)(0.250000,0.111235)+-(0.009386,0.009386)(0.300000,0.114445)+-(0.009656,0.009656)(0.350000,0.119667)+-(0.009976,0.009976)(0.400000,0.126863)+-(0.010424,0.010424)(0.450000,0.136131)+-(0.011170,0.011170)(0.500000,0.147467)+-(0.012326,0.012326)(0.550000,0.161167)+-(0.013813,0.013813)(0.600000,0.177344)+-(0.015758,0.015758)(0.650000,0.196270)+-(0.018054,0.018054)(0.700000,0.218062)+-(0.020671,0.020671)(0.750000,0.242979)+-(0.023689,0.023689)(0.800000,0.270952)+-(0.026959,0.026959)(0.850000,0.301740)+-(0.030677,0.030677)(0.900000,0.334893)+-(0.034650,0.034650)(0.950000,0.369402)+-(0.038847,0.038847) }; \addplot[black,only marks,mark=*,mark options={scale=0.75}] plot coordinates{ (0.050000,0.102554)(0.100000,0.095524)(0.150000,0.092681)(0.200000,0.091376) (0.250000,0.093394)(0.300000,0.097212)(0.350000,0.103109)(0.400000,0.111273)(0.450000,0.121610)(0.500000,0.134944)(0.550000,0.152715)(0.600000,0.172006)(0.650000,0.197016)(0.700000,0.226062)(0.750000,0.258595)(0.800000,0.296503)(0.850000,0.339433)(0.900000,0.384739)(0.950000,0.432905)(1.000000,0.478739) }; \addplot[black] plot coordinates{ 
(0.010000,0.010587)(0.020000,0.010118)(0.030000,0.009698)(0.040000,0.009321)(0.050000,0.008982)(0.060000,0.008680)(0.070000,0.008409)(0.080000,0.008168)(0.090000,0.007954)(0.100000,0.007764)(0.110000,0.007598)(0.120000,0.007453)(0.130000,0.007328)(0.140000,0.007221)(0.150000,0.007132)(0.160000,0.007060)(0.170000,0.007003)(0.180000,0.006962)(0.190000,0.006935)(0.200000,0.006922)(0.210000,0.006924)(0.220000,0.006938)(0.230000,0.006966)(0.240000,0.007007)(0.250000,0.007061)(0.260000,0.007128)(0.270000,0.007208)(0.280000,0.007302)(0.290000,0.007409)(0.300000,0.007530)(0.310000,0.007665)(0.320000,0.007814)(0.330000,0.007979)(0.340000,0.008158)(0.350000,0.008354)(0.360000,0.008566)(0.370000,0.008796)(0.380000,0.009043)(0.390000,0.009310)(0.400000,0.009596)(0.410000,0.009903)(0.420000,0.010232)(0.430000,0.010584)(0.440000,0.010960)(0.450000,0.011362)(0.460000,0.011790)(0.470000,0.012247)(0.480000,0.012734)(0.490000,0.013253)(0.500000,0.013805)(0.510000,0.014392)(0.520000,0.015017)(0.530000,0.015681)(0.540000,0.016388)(0.550000,0.017138)(0.560000,0.017936)(0.570000,0.018783)(0.580000,0.019682)(0.590000,0.020636)(0.600000,0.021649)(0.610000,0.022724)(0.620000,0.023863)(0.630000,0.025072)(0.640000,0.026352)(0.650000,0.027710)(0.660000,0.029147)(0.670000,0.030669)(0.680000,0.032279)(0.690000,0.033983)(0.700000,0.035785)(0.710000,0.037690)(0.720000,0.039702)(0.730000,0.041826)(0.740000,0.044068)(0.750000,0.046432)(0.760000,0.048924)(0.770000,0.051549)(0.780000,0.054312)(0.790000,0.057218)(0.800000,0.060272)(0.810000,0.063479)(0.820000,0.066845)(0.830000,0.070374)(0.840000,0.074071)(0.850000,0.077939)(0.860000,0.081985)(0.870000,0.086210)(0.880000,0.090619)(0.890000,0.095215)(0.900000,0.100002)(0.910000,0.104981)(0.920000,0.110156)(0.930000,0.115527)(0.940000,0.121096)(0.950000,0.126865)(0.960000,0.132834)(0.970000,0.139003)(0.980000,0.145372)(0.990000,0.151942)(1.000000,0.158710) }; \addplot[black,only marks,mark=o,mark options={scale=0.75},error bars/.cd,y dir=both,y 
explicit, error bar style={mark size=1.5pt}] plot coordinates{ (0.050000,0.008954)+-(0.000743,0.000743)(0.100000,0.007786)+-(0.001034,0.001034)(0.150000,0.007211)+-(0.001170,0.001170)(0.200000,0.007047)+-(0.001250,0.001250)(0.250000,0.007217)+-(0.001323,0.001323)(0.300000,0.007694)+-(0.001418,0.001418)(0.350000,0.008505)+-(0.001558,0.001558)(0.400000,0.009696)+-(0.001762,0.001762)(0.450000,0.011363)+-(0.002074,0.002074)(0.500000,0.013608)+-(0.002542,0.002542)(0.550000,0.016628)+-(0.003196,0.003196)(0.600000,0.020636)+-(0.004120,0.004120)(0.650000,0.025945)+-(0.005360,0.005360)(0.700000,0.032905)+-(0.006995,0.006995)(0.750000,0.042004)+-(0.009156,0.009156)(0.800000,0.053704)+-(0.011903,0.011903)(0.850000,0.068460)+-(0.015448,0.015448)(0.900000,0.086600)+-(0.019812,0.019812)(0.950000,0.108034)+-(0.025015,0.025015) }; \addplot[black,only marks,mark=*,mark options={scale=0.75}] plot coordinates{ (0.050000,0.001664)(0.100000,0.001371)(0.150000,0.001260)(0.200000,0.001244)(0.250000,0.001266)(0.300000,0.001392)(0.350000,0.001612)(0.400000,0.001974)(0.450000,0.002550)(0.500000,0.003313)(0.550000,0.004514)(0.600000,0.006323)(0.650000,0.009090)(0.700000,0.013042)(0.750000,0.019099)(0.800000,0.027746)(0.850000,0.040506)(0.900000,0.057496)(0.950000,0.080889)(1.000000,0.108458) }; \node at (axis cs:0.4,0.3) {$\gamma=2$}; \draw (axis cs:0.4,0.12) ellipse [black,x radius=2,y radius=0.5]; \node at (axis cs:0.4,0.03) {$\gamma=3$}; \draw (axis cs:0.4,0.005) ellipse [black,x radius=2,y radius=1.5]; \legend{ {Limiting theory},{Empirical estimator},{Detector} } \end{semilogyaxis} \end{tikzpicture} & \begin{tikzpicture}[font=\footnotesize,scale=.7] \renewcommand{\axisdefaulttryminticks}{4} \tikzstyle{every major grid}+=[style=densely dashed] \tikzstyle{every axis y label}+=[yshift=-10pt] \tikzstyle{every axis x label}+=[yshift=5pt] \tikzstyle{every axis legend}+=[cells={anchor=west},fill=white, at={(0.98,0.02)}, anchor=south east, font=\scriptsize ] \begin{semilogyaxis}[ xmin=0, xmax=1, 
ymax=1, grid=major, ymajorgrids=false, scaled ticks=true, xlabel={$\rho$}, ylabel={$P(\sqrt{N}T_N(\rho)>\gamma)$} ] \addplot[black] plot coordinates{ (0.010000,0.132556)(0.020000,0.130002)(0.030000,0.127659)(0.040000,0.125515)(0.050000,0.123558)(0.060000,0.121777)(0.070000,0.120163)(0.080000,0.118708)(0.090000,0.117404)(0.100000,0.116244)(0.110000,0.115221)(0.120000,0.114330)(0.130000,0.113565)(0.140000,0.112922)(0.150000,0.112395)(0.160000,0.111983)(0.170000,0.111680)(0.180000,0.111483)(0.190000,0.111391)(0.200000,0.111401)(0.210000,0.111510)(0.220000,0.111717)(0.230000,0.112020)(0.240000,0.112418)(0.250000,0.112910)(0.260000,0.113495)(0.270000,0.114171)(0.280000,0.114940)(0.290000,0.115800)(0.300000,0.116752)(0.310000,0.117795)(0.320000,0.118930)(0.330000,0.120157)(0.340000,0.121477)(0.350000,0.122890)(0.360000,0.124398)(0.370000,0.126000)(0.380000,0.127699)(0.390000,0.129495)(0.400000,0.131391)(0.410000,0.133386)(0.420000,0.135483)(0.430000,0.137683)(0.440000,0.139989)(0.450000,0.142401)(0.460000,0.144923)(0.470000,0.147555)(0.480000,0.150300)(0.490000,0.153159)(0.500000,0.156136)(0.510000,0.159233)(0.520000,0.162451)(0.530000,0.165793)(0.540000,0.169262)(0.550000,0.172859)(0.560000,0.176587)(0.570000,0.180449)(0.580000,0.184447)(0.590000,0.188584)(0.600000,0.192861)(0.610000,0.197281)(0.620000,0.201846)(0.630000,0.206559)(0.640000,0.211421)(0.650000,0.216435)(0.660000,0.221602)(0.670000,0.226924)(0.680000,0.232402)(0.690000,0.238039)(0.700000,0.243834)(0.710000,0.249789)(0.720000,0.255905)(0.730000,0.262182)(0.740000,0.268620)(0.750000,0.275218)(0.760000,0.281977)(0.770000,0.288895)(0.780000,0.295972)(0.790000,0.303204)(0.800000,0.310590)(0.810000,0.318129)(0.820000,0.325815)(0.830000,0.333647)(0.840000,0.341621)(0.850000,0.349732)(0.860000,0.357975)(0.870000,0.366345)(0.880000,0.374838)(0.890000,0.383447)(0.900000,0.392165)(0.910000,0.400988)(0.920000,0.409907)(0.930000,0.418917)(0.940000,0.428010)(0.950000,0.437180)(0.960000,0.446419)(0.970000,0.455720)(0.9800
00,0.465078)(0.990000,0.474484)(1.000000,0.483934) }; \addplot[black,only marks,mark=o,mark options={scale=0.75},error bars/.cd,y dir=both,y explicit, error bar style={mark size=1.5pt}] plot coordinates{ (0.050000,0.123520)+-(0.001961,0.001961)(0.100000,0.116258)+-(0.003004,0.003004)(0.150000,0.112500)+-(0.003619,0.003619)(0.200000,0.111525)+-(0.003995,0.003995)(0.250000,0.113046)+-(0.004201,0.004201)(0.300000,0.116926)+-(0.004334,0.004334)(0.350000,0.122988)+-(0.004459,0.004459)(0.400000,0.131404)+-(0.004595,0.004595)(0.450000,0.142259)+-(0.004695,0.004695)(0.500000,0.155536)+-(0.005049,0.005049)(0.550000,0.171997)+-(0.005367,0.005367)(0.600000,0.191571)+-(0.005818,0.005818)(0.650000,0.214513)+-(0.006541,0.006541)(0.700000,0.241244)+-(0.007450,0.007450)(0.750000,0.271917)+-(0.008836,0.008836)(0.800000,0.306142)+-(0.010523,0.010523)(0.850000,0.344681)+-(0.012411,0.012411)(0.900000,0.386030)+-(0.015349,0.015349)(0.950000,0.429722)+-(0.018687,0.018687) }; \addplot[black,only marks,mark=*,mark options={scale=0.75}] plot coordinates{ (0.050000,0.123990)(0.100000,0.115930)(0.150000,0.111060)(0.200000,0.109690)(0.250000,0.111230)(0.300000,0.114650)(0.350000,0.120460)(0.400000,0.129600)(0.450000,0.141420)(0.500000,0.154690)(0.550000,0.174360)(0.600000,0.194010)(0.650000,0.218770)(0.700000,0.246950)(0.750000,0.279720)(0.800000,0.318310)(0.850000,0.355070)(0.900000,0.398220)(0.950000,0.447030)(1.000000,0.492040) }; \addplot[black] plot coordinates{ 
(0.010000,0.010602)(0.020000,0.010148)(0.030000,0.009741)(0.040000,0.009377)(0.050000,0.009051)(0.060000,0.008760)(0.070000,0.008501)(0.080000,0.008271)(0.090000,0.008068)(0.100000,0.007890)(0.110000,0.007735)(0.120000,0.007601)(0.130000,0.007487)(0.140000,0.007392)(0.150000,0.007314)(0.160000,0.007254)(0.170000,0.007210)(0.180000,0.007182)(0.190000,0.007168)(0.200000,0.007170)(0.210000,0.007186)(0.220000,0.007216)(0.230000,0.007260)(0.240000,0.007318)(0.250000,0.007390)(0.260000,0.007476)(0.270000,0.007577)(0.280000,0.007692)(0.290000,0.007823)(0.300000,0.007968)(0.310000,0.008129)(0.320000,0.008306)(0.330000,0.008500)(0.340000,0.008712)(0.350000,0.008942)(0.360000,0.009190)(0.370000,0.009459)(0.380000,0.009748)(0.390000,0.010059)(0.400000,0.010394)(0.410000,0.010752)(0.420000,0.011136)(0.430000,0.011547)(0.440000,0.011987)(0.450000,0.012457)(0.460000,0.012959)(0.470000,0.013494)(0.480000,0.014065)(0.490000,0.014675)(0.500000,0.015324)(0.510000,0.016017)(0.520000,0.016754)(0.530000,0.017540)(0.540000,0.018376)(0.550000,0.019267)(0.560000,0.020214)(0.570000,0.021223)(0.580000,0.022295)(0.590000,0.023436)(0.600000,0.024649)(0.610000,0.025938)(0.620000,0.027308)(0.630000,0.028764)(0.640000,0.030310)(0.650000,0.031951)(0.660000,0.033693)(0.670000,0.035541)(0.680000,0.037501)(0.690000,0.039578)(0.700000,0.041779)(0.710000,0.044110)(0.720000,0.046578)(0.730000,0.049188)(0.740000,0.051947)(0.750000,0.054862)(0.760000,0.057940)(0.770000,0.061188)(0.780000,0.064612)(0.790000,0.068219)(0.800000,0.072015)(0.810000,0.076007)(0.820000,0.080202)(0.830000,0.084605)(0.840000,0.089223)(0.850000,0.094060)(0.860000,0.099121)(0.870000,0.104413)(0.880000,0.109938)(0.890000,0.115701)(0.900000,0.121704)(0.910000,0.127951)(0.920000,0.134445)(0.930000,0.141185)(0.940000,0.148174)(0.950000,0.155412)(0.960000,0.162900)(0.970000,0.170636)(0.980000,0.178621)(0.990000,0.186853)(1.000000,0.195330) }; \addplot[black,only marks,mark=o,mark options={scale=0.75},error bars/.cd,y dir=both,y 
explicit, error bar style={mark size=1.5pt}] plot coordinates{ (0.050000,0.009048)+-(0.000322,0.000322)(0.100000,0.007900)+-(0.000456,0.000456)(0.150000,0.007341)+-(0.000528,0.000528)(0.200000,0.007201)+-(0.000576,0.000576)(0.250000,0.007425)+-(0.000617,0.000617)(0.300000,0.008010)+-(0.000663,0.000663)(0.350000,0.008974)+-(0.000728,0.000728)(0.400000,0.010414)+-(0.000816,0.000816)(0.450000,0.012448)+-(0.000921,0.000921)(0.500000,0.015215)+-(0.001107,0.001107)(0.550000,0.019077)+-(0.001335,0.001335)(0.600000,0.024311)+-(0.001655,0.001655)(0.650000,0.031357)+-(0.002145,0.002145)(0.700000,0.040842)+-(0.002831,0.002831)(0.750000,0.053472)+-(0.003899,0.003899)(0.800000,0.069831)+-(0.005378,0.005378)(0.850000,0.091197)+-(0.007359,0.007359)(0.900000,0.117723)+-(0.010475,0.010475)(0.950000,0.149908)+-(0.014589,0.014589) }; \addplot[black,only marks,mark=*,mark options={scale=0.75}] plot coordinates{ (0.050000,0.008520)(0.100000,0.007550)(0.150000,0.006520)(0.200000,0.006560)(0.250000,0.006420)(0.300000,0.006850)(0.350000,0.008040)(0.400000,0.009110)(0.450000,0.011040)(0.500000,0.014580)(0.550000,0.017780)(0.600000,0.023820)(0.650000,0.030230)(0.700000,0.041990)(0.750000,0.054280)(0.800000,0.071530)(0.850000,0.092110)(0.900000,0.121480)(0.950000,0.157840)(1.000000,0.197780) }; \node at (axis cs:0.4,0.3) {$\gamma=2$}; \draw (axis cs:0.4,0.12) ellipse [black,x radius=2,y radius=0.5]; \node at (axis cs:0.4,0.025) {$\gamma=3$}; \draw (axis cs:0.4,0.01) ellipse [black,x radius=2,y radius=0.5]; \legend{ {Limiting theory},{Empirical estimator},{Detector} } \end{semilogyaxis} \end{tikzpicture} \end{tabular} \caption{False alarm rate $P(\sqrt{N}T_N(\rho)>\gamma)$, $N=20$ (left), $N=100$ (right), $p=N^{-\frac12}[1,\ldots,1]^\trans$, $[C_N]_{ij}=0.7^{|i-j|}$, $c_N=1/2$.} \label{fig:FAR} \end{figure} \begin{figure}[h!] 
\centering \begin{tikzpicture}[font=\footnotesize] \renewcommand{\axisdefaulttryminticks}{4} \tikzstyle{every major grid}+=[style=densely dashed] \tikzstyle{every axis y label}+=[yshift=-10pt] \tikzstyle{every axis x label}+=[yshift=5pt] \tikzstyle{every axis legend}+=[cells={anchor=west},fill=white, at={(0.98,0.98)}, anchor=north east, font=\scriptsize ] \begin{semilogyaxis}[ xmin=0, ymin=1e-4, xmax=1, ymax=1, grid=major, ymajorgrids=false, scaled ticks=true, xlabel={$\Gamma$}, mark repeat=2, ylabel={$P(T_N(\rho)>\Gamma)$} ] \addplot[black] plot coordinates{ (0.,1.)(0.01,0.998896)(0.02,0.995589)(0.03,0.990103)(0.04,0.982474)(0.05,0.97275)(0.06,0.960997)(0.07,0.94729)(0.08,0.931716)(0.09,0.914376)(0.1,0.895377)(0.11,0.874837)(0.12,0.852881)(0.13,0.82964)(0.14,0.805251)(0.15,0.779853)(0.16,0.753589)(0.17,0.726602)(0.18,0.699035)(0.19,0.671028)(0.2,0.642722)(0.21,0.614251)(0.22,0.585744)(0.23,0.557328)(0.24,0.529119)(0.25,0.501229)(0.26,0.473761)(0.27,0.446809)(0.28,0.420461)(0.29,0.394792)(0.3,0.369873)(0.31,0.345761)(0.32,0.322507)(0.33,0.300153)(0.34,0.278732)(0.35,0.258268)(0.36,0.238778)(0.37,0.220272)(0.38,0.202751)(0.39,0.186212)(0.4,0.170645)(0.41,0.156033)(0.42,0.142358)(0.43,0.129595)(0.44,0.117715)(0.45,0.106688)(0.46,0.096481)(0.47,0.087058)(0.48,0.078381)(0.49,0.070414)(0.5,0.063117)(0.51,0.056451)(0.52,0.050377)(0.53,0.044858)(0.54,0.039856)(0.55,0.035333)(0.56,0.031254)(0.57,0.027585)(0.58,0.024293)(0.59,0.021346)(0.6,0.018716)(0.61,0.016373)(0.62,0.014292)(0.63,0.012448)(0.64,0.010818)(0.65,0.009381)(0.66,0.008117)(0.67,0.007007)(0.68,0.006036)(0.69,0.005188)(0.7,0.004449)(0.71,0.003807)(0.72,0.003251)(0.73,0.002769)(0.74,0.002354)(0.75,0.001997)(0.76,0.00169)(0.77,0.001427)(0.78,0.001202)(0.79,0.001011)(0.8,0.000848)(0.81,0.00071)(0.82,0.000593)(0.83,0.000494)(0.84,0.000411)(0.85,0.000341)(0.86,0.000282)(0.87,0.000233)(0.88,0.000192)(0.89,0.000158)(0.9,0.00013)(0.91,0.000106)(0.92,0.000087)(0.93,0.000071)(0.94,0.000057)(0.95,0.000047)(0.96,0.000038)(0
.97,0.00003)(0.98,0.000025)(0.99,0.00002)(1.,0.000016) }; \addplot[black,only marks,mark=*,mark options={scale=0.75}] plot coordinates{ (0.,1.)(0.01,0.99896)(0.02,0.995809)(0.03,0.990564)(0.04,0.983483)(0.05,0.974368)(0.06,0.963289)(0.07,0.950217)(0.08,0.935453)(0.09,0.919055)(0.1,0.900819)(0.11,0.881349)(0.12,0.860069)(0.13,0.837701)(0.14,0.813929)(0.15,0.789225)(0.16,0.763256)(0.17,0.736593)(0.18,0.709161)(0.19,0.681532)(0.2,0.652937)(0.21,0.624299)(0.22,0.595142)(0.23,0.565768)(0.24,0.537037)(0.25,0.508517)(0.26,0.479995)(0.27,0.45221)(0.28,0.424418)(0.29,0.397055)(0.3,0.370642)(0.31,0.344963)(0.32,0.320174)(0.33,0.296156)(0.34,0.273264)(0.35,0.251097)(0.36,0.230089)(0.37,0.210095)(0.38,0.191335)(0.39,0.173287)(0.4,0.156354)(0.41,0.140724)(0.42,0.126194)(0.43,0.112588)(0.44,0.099988)(0.45,0.088543)(0.46,0.077981)(0.47,0.068355)(0.48,0.059736)(0.49,0.051963)(0.5,0.044954)(0.51,0.038702)(0.52,0.033099)(0.53,0.028047)(0.54,0.023645)(0.55,0.019754)(0.56,0.016417)(0.57,0.013608)(0.58,0.011183)(0.59,0.009115)(0.6,0.00741)(0.61,0.005953)(0.62,0.004697)(0.63,0.003682)(0.64,0.002875)(0.65,0.002209)(0.66,0.001709)(0.67,0.001276)(0.68,0.000938)(0.69,0.000692)(0.7,0.000487)(0.71,0.000338)(0.72,0.000227)(0.73,0.000163)(0.74,0.000123)(0.75,0.000076)(0.76,0.00005)(0.77,0.00003)(0.78,0.000014)(0.79,0.000006)(0.8,0.000002)(0.81,0.000001)(0.82,0.)(0.83,0.)(0.84,0.)(0.85,0.)(0.86,0.)(0.87,0.)(0.88,0.)(0.89,0.)(0.9,0.)(0.91,0.)(0.92,0.)(0.93,0.)(0.94,0.)(0.95,0.)(0.96,0.)(0.97,0.)(0.98,0.)(0.99,0.)(1.,0.) 
}; \addplot[black] plot coordinates{ (0.,1.)(0.01,0.994528)(0.02,0.978292)(0.03,0.951819)(0.04,0.915955)(0.05,0.871823)(0.06,0.820761)(0.07,0.764257)(0.08,0.703876)(0.09,0.641191)(0.1,0.577714)(0.11,0.51484)(0.12,0.453802)(0.13,0.395635)(0.14,0.341159)(0.15,0.290974)(0.16,0.245462)(0.17,0.20481)(0.18,0.169025)(0.19,0.13797)(0.2,0.111391)(0.21,0.088952)(0.22,0.070257)(0.23,0.054886)(0.24,0.04241)(0.25,0.032412)(0.26,0.024501)(0.27,0.018318)(0.28,0.013547)(0.29,0.009908)(0.3,0.007168)(0.31,0.005129)(0.32,0.00363)(0.33,0.002541)(0.34,0.00176)(0.35,0.001205)(0.36,0.000816)(0.37,0.000547)(0.38,0.000362)(0.39,0.000237)(0.4,0.000154)(0.41,0.000099)(0.42,0.000063)(0.43,0.000039)(0.44,0.000024)(0.45,0.000015)(0.46,0.000009)(0.47,0.000005)(0.48,0.000003)(0.49,0.000002)(0.5,0.000001)(0.51,0.000001)(0.52,0.)(0.53,0.)(0.54,0.)(0.55,0.)(0.56,0.)(0.57,0.)(0.58,0.)(0.59,0.)(0.6,0.)(0.61,0.)(0.62,0.)(0.63,0.)(0.64,0.)(0.65,0.)(0.66,0.)(0.67,0.)(0.68,0.)(0.69,0.)(0.7,0.)(0.71,0.)(0.72,0.)(0.73,0.)(0.74,0.)(0.75,0.)(0.76,0.)(0.77,0.)(0.78,0.)(0.79,0.)(0.8,0.)(0.81,0.)(0.82,0.)(0.83,0.)(0.84,0.)(0.85,0.)(0.86,0.)(0.87,0.)(0.88,0.)(0.89,0.)(0.9,0.)(0.91,0.)(0.92,0.)(0.93,0.)(0.94,0.)(0.95,0.)(0.96,0.)(0.97,0.)(0.98,0.)(0.99,0.)(1.,0.) 
}; \addplot[black,only marks,mark=*,mark options={scale=0.75}] plot coordinates{ (0.000000,1.000000)(0.010000,0.994368)(0.020000,0.978268)(0.030000,0.952258)(0.040000,0.917284)(0.050000,0.872770)(0.060000,0.821488)(0.070000,0.766126)(0.080000,0.706967)(0.090000,0.643454)(0.100000,0.580648)(0.110000,0.517506)(0.120000,0.455490)(0.130000,0.396165)(0.140000,0.341170)(0.150000,0.290833)(0.160000,0.244757)(0.170000,0.202599)(0.180000,0.166316)(0.190000,0.135318)(0.200000,0.108255)(0.210000,0.085529)(0.220000,0.066516)(0.230000,0.051258)(0.240000,0.039086)(0.250000,0.029266)(0.260000,0.021645)(0.270000,0.015920)(0.280000,0.011507)(0.290000,0.008071)(0.300000,0.005601)(0.310000,0.003907)(0.320000,0.002626)(0.330000,0.001749)(0.340000,0.001136)(0.350000,0.000720)(0.360000,0.000436)(0.370000,0.000256)(0.380000,0.000152)(0.390000,0.000100)(0.400000,0.000045)(0.410000,0.000024)(0.420000,0.000014)(0.430000,0.000010)(0.440000,0.000003)(0.450000,0.000003)(0.460000,0.000000)(0.470000,0.000000)(0.480000,0.000000)(0.490000,0.000000)(0.500000,0.000000)(0.510000,0.000000)(0.520000,0.000000)(0.530000,0.000000)(0.540000,0.000000)(0.550000,0.000000)(0.560000,0.000000)(0.570000,0.000000)(0.580000,0.000000)(0.590000,0.000000)(0.600000,0.000000)(0.610000,0.000000)(0.620000,0.000000)(0.630000,0.000000)(0.640000,0.000000)(0.650000,0.000000)(0.660000,0.000000)(0.670000,0.000000)(0.680000,0.000000)(0.690000,0.000000)(0.700000,0.000000)(0.710000,0.000000)(0.720000,0.000000)(0.730000,0.000000)(0.740000,0.000000)(0.750000,0.000000)(0.760000,0.000000)(0.770000,0.000000)(0.780000,0.000000)(0.790000,0.000000)(0.800000,0.000000)(0.810000,0.000000)(0.820000,0.000000)(0.830000,0.000000)(0.840000,0.000000)(0.850000,0.000000)(0.860000,0.000000)(0.870000,0.000000)(0.880000,0.000000)(0.890000,0.000000)(0.900000,0.000000)(0.910000,0.000000)(0.920000,0.000000)(0.930000,0.000000)(0.940000,0.000000)(0.950000,0.000000)(0.960000,0.000000)(0.970000,0.000000)(0.980000,0.000000)(0.990000,0.000000)(1.000000,0.000000)
 }; \node at (axis cs:0.2,0.004) {$N=100$}; \draw (axis cs:0.32,0.004) ellipse [black,x radius=2,y radius=0.5]; \node at (axis cs:0.75,0.01) {$N=20$}; \draw (axis cs:0.65,0.004) ellipse [black,x radius=2,y radius=1.5]; \legend{ {Limiting theory},{Detector}} \end{semilogyaxis} \end{tikzpicture} \caption{False alarm rate $P(T_N(\hat{\rho}_N^*)>\Gamma)$ for $N=20$ and $N=100$, $p=N^{-\frac12}[1,\ldots,1]^\trans$, $[C_N]_{ij}=0.7^{|i-j|}$, $c_N=1/2$.} \label{fig:FAR2} \end{figure} \section{Proof} \label{sec:proof} In this section, we shall successively prove Theorem~\ref{th:bilin}, Theorem~\ref{th:T}, Proposition~\ref{prop:1}, and Corollary~\ref{co:1}. Of utmost interest is the proof of Theorem~\ref{th:bilin}, which occupies most of the section, the proof of a key lemma being deferred to Appendix~\ref{app:key_lemma}. Before delving into the core of the proofs, let us introduce some notation that shall be used throughout the section. First recall from \citep{COU14} that we can write, for each $\rho\in(\max\{0,1-c_N^{-1}\},1]$, \begin{align*} \hat{C}_N(\rho)=\frac{1-\rho}{1-(1-\rho)c_N} \frac1n\sum_{i=1}^n \frac{z_iz_i^*}{\frac1Nz_i^*\hat{C}_{(i)}^{-1}(\rho)z_i} + \rho I_N \end{align*} where $\hat{C}_{(i)}(\rho)=\hat{C}_N(\rho)-(1-\rho)\frac1n\frac{z_iz_i^*}{\frac1Nz_i^*\hat{C}_N^{-1}(\rho)z_i}$. Now, we define \begin{align*} \alpha(\rho) &= \frac{1-\rho}{1-(1-\rho)c_N} \\ d_i(\rho) &= \frac1Nz_i^* \hat{C}_{(i)}^{-1}(\rho)z_i = \frac1Nz_i^* \left( \alpha(\rho) \frac1n\sum_{j\neq i} \frac{z_jz_j^*}{d_j(\rho)} + \rho I_N \right)^{-1}z_i \\ \tilde d_i(\rho) &= \frac1Nz_i^* \hat{S}_{(i)}^{-1}(\rho)z_i = \frac1Nz_i^* \left( \alpha(\rho) \frac1n\sum_{j\neq i} \frac{z_jz_j^*}{\gamma_N(\rho)} + \rho I_N \right)^{-1}z_i. \end{align*} By uniqueness of $\hat{C}_N$ and by the relation to $\hat{C}_{(i)}$ above, $d_1(\rho),\ldots,d_n(\rho)$ are clearly uniquely defined by their $n$ implicit equations. We shall also drop the parameter $\rho$ for readability whenever there is no ambiguity.
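As an aside for the numerically minded reader, the fixed-point characterization of $\hat{C}_N(\rho)$ recalled above is easy to implement. The following minimal Python sketch (all names ours) iterates what we understand to be the original, non leave-one-out form of the defining equation in \citep{COU14}, $\hat{C} \mapsto (1-\rho)\frac1n\sum_i z_iz_i^*/(\frac1N z_i^*\hat{C}^{-1}z_i)+\rho I_N$, rather than the $\hat{C}_{(i)}$ rewriting used in the proof; this is an illustration, not the implementation behind the figures.

```python
import numpy as np

def robust_shrinkage(Z, rho, tol=1e-12, max_iter=500):
    """Fixed-point iteration for the regularized robust scatter estimator
    C_hat_N(rho) (sketch): iterate
    C <- (1-rho) (1/n) sum_i z_i z_i^* / ((1/N) z_i^* C^{-1} z_i) + rho I_N
    until convergence. Z: N x n matrix with columns z_i."""
    N, n = Z.shape
    C = np.eye(N, dtype=complex)
    for _ in range(max_iter):
        Cinv = np.linalg.inv(C)
        # d[i] = (1/N) z_i^* C^{-1} z_i
        d = np.real(np.einsum('ij,jk,ki->i', Z.conj().T, Cinv, Z)) / N
        C_new = (1 - rho) * (Z / d) @ Z.conj().T / n + rho * np.eye(N)
        if np.linalg.norm(C_new - C) < tol:
            return C_new
        C = C_new
    return C
```

In practice the iteration converges quickly for $\rho$ bounded away from $0$, and the output can be checked a posteriori by plugging it back into the defining equation.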
\subsection{Bilinear form equivalence} In this section, we prove Theorem~\ref{th:bilin}. As shall become clear, the proof unfolds similarly for each $k\in\ZZ\setminus\{0\}$ and we can therefore restrict ourselves to a single value of $k$. As Theorem~\ref{th:T} relies on $k=-1$, for consistency, we take $k=-1$ from now on. Thus, our objective is to prove that, for $a,b\in\CC^N$ with $\Vert a\Vert=\Vert b\Vert=1$, and for any $\varepsilon>0$, \begin{align*} \sup_{\rho\in \mathcal R_\kappa} N^{1-\varepsilon}\left| a^*\hat{C}_N^{-1}(\rho)b - a^*\hat{S}_N^{-1}(\rho)b \right| \asto 0. \end{align*} For this, temporarily dropping the parameter $\rho$, first write \begin{align} \label{eq:firsteq} a^*\hat{C}_N^{-1}b - a^*\hat{S}_N^{-1}b &= a^*\hat{C}_N^{-1} \left( \frac{\alpha}n \sum_{i=1}^n \left[ \frac1{\gamma_N} - \frac1{d_i} \right] z_iz_i^* \right) \hat{S}_N^{-1}b \\ \label{eq:secondeq} &= \frac{\alpha}n \sum_{i=1}^n a^*\hat{C}_N^{-1} z_i\frac{d_i-\gamma_N}{\gamma_Nd_i}z_i^*\hat{S}_N^{-1}b. \end{align} In \citep{COU14}, where it is shown that $\Vert \hat{C}_N-\hat{S}_N\Vert\asto 0$ (that is, that the spectral norm of the inner parenthesis in \eqref{eq:firsteq} vanishes), the core of the proof was to show that $\max_{1\leq i\leq n}|d_i-\gamma_N|\asto 0$ which, along with the convergence of $\gamma_N$ away from zero and the almost sure boundedness of $\Vert\frac1n\sum_{i=1}^nz_iz_i^*\Vert$ for all large $N$ (from e.g.\@ \citep{SIL98}), gives the result. A thorough inspection of the proof in \citep{COU14} reveals that $\max_{1\leq i\leq n}|d_i-\gamma_N|\asto 0$ can be improved to $\max_{1\leq i\leq n}N^{\frac12-\varepsilon}|d_i-\gamma_N|\asto 0$ for any $\varepsilon>0$, but that this speed cannot be improved beyond $N^{\frac12}$.
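Equation \eqref{eq:firsteq} is an instance of the elementary resolvent identity $A^{-1}-B^{-1}=A^{-1}(B-A)B^{-1}$, applied with $A=\hat{C}_N$ and $B=\hat{S}_N$. A quick numerical sanity check of this identity on toy matrices (standing in for the estimators, which they are not):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
# two well-conditioned symmetric positive definite matrices
A = rng.standard_normal((N, N)); A = A @ A.T + N * np.eye(N)
B = rng.standard_normal((N, N)); B = B @ B.T + N * np.eye(N)

# resolvent identity: A^{-1} - B^{-1} = A^{-1} (B - A) B^{-1}
lhs = np.linalg.inv(A) - np.linalg.inv(B)
rhs = np.linalg.inv(A) @ (B - A) @ np.linalg.inv(B)
err = np.abs(lhs - rhs).max()
```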
The latter statement is rather intuitive since $\gamma_N$ is essentially a sharp deterministic approximation for $\frac1N\tr \hat{C}_N^{-1}$ while $d_i$ is a quadratic form on $\hat{C}_{(i)}^{-1}$; classical random matrix results on the fluctuations of such quadratic forms, see e.g.\@ \citep{KAM09}, indeed show that these fluctuations are of order $N^{-\frac12}$. As a consequence, $\max_{1\leq i\leq n}N^{1-\varepsilon}|d_i-\gamma_N|$ and thus $N^{1-\varepsilon}\Vert \hat{C}_N-\hat{S}_N\Vert$ are not expected to vanish for small $\varepsilon$. This being said, when it comes to bilinear forms, for which we naturally have $N^{\frac12-\varepsilon}|a^*\hat{C}_N^{-1}b - a^*\hat{S}_N^{-1}b|\asto 0$, viewing the difference in absolute values as the $n$-term average \eqref{eq:secondeq}, one may expect the fluctuations of $d_i-\gamma_N$ to be sufficiently loosely dependent across $i$ to further increase the speed of convergence from $N^{\frac12-\varepsilon}$ to $N^{1-\varepsilon}$ (the best one could expect from a law-of-large-numbers viewpoint if the $d_i-\gamma_N$ were truly independent). It turns out that this intuition is correct. Nonetheless, working directly with \eqref{eq:secondeq} would be quite involved, as it features the rather intractable terms $d_i$ (random solutions to an implicit equation). As in \citep{COU14}, our approach will consist in first approximating $d_i$ by a much more tractable quantity. Letting $\gamma_N$ be this approximation is however not good enough this time, since $\gamma_N-d_i$ is a non-obvious quantity of amplitude $O(N^{-\frac12})$ which, due to this intractability, we shall not be able to average across $i$ into an $O(N^{-1})$ quantity. Thus, we need a refined approximation of $d_i$, which we shall take to be $\tilde{d}_i$ defined above.
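The averaging heuristic just described, individual quadratic-form fluctuations of order $N^{-\frac12}$ averaging out to order $N^{-1}$ across indices, can be illustrated on the simplest toy case $\frac1N\Vert z_i\Vert^2-1$ with i.i.d.\ standard entries (illustration only: the $d_i-\gamma_N$ of the proof are weakly dependent, not independent as below):

```python
import numpy as np

rng = np.random.default_rng(1)
N, reps = 100, 500

# one quadratic form ||z||^2 / N - 1: fluctuations of order N^{-1/2}
q_single = (rng.standard_normal((reps, N)) ** 2).mean(axis=1) - 1.0

# average of n = N independent such forms: fluctuations of order N^{-1}
q_avg = (rng.standard_normal((reps, N, N)) ** 2).mean(axis=(1, 2)) - 1.0

# the gain in scale should be about sqrt(N) = 10
ratio = q_single.std() / q_avg.std()
```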
Intuitively, since $\tilde{d}_i$ is also a quadratic form closely related to $d_i$, we expect $d_i-\tilde{d}_i$ to be of order $O(N^{-1})$, which we shall indeed observe. With this approximation in place, $d_i$ can be replaced by $\tilde{d}_i$ in \eqref{eq:secondeq}, which now becomes a more tractable random variable (as it involves no implicit equation) that fluctuates around $\gamma_N$ at the expected $O(N^{-1})$ speed. Let us then introduce the variable $\tilde{d}_i$ in \eqref{eq:firsteq} to obtain \begin{align*} a^*\hat{C}_N^{-1}b - a^*\hat{S}_N^{-1}b &= a^*\hat{C}_N^{-1} \left( \frac{\alpha}n \sum_{i=1}^n \left[ \frac1{\gamma_N} - \frac1{\tilde{d}_i} \right] z_iz_i^* \right) \hat{S}_N^{-1}b \nonumber \\ &+ a^*\hat{C}_N^{-1} \left( \frac{\alpha}n \sum_{i=1}^n \left[ \frac1{\tilde{d}_i} - \frac1{d_i} \right] z_iz_i^* \right) \hat{S}_N^{-1}b \\ &\triangleq \xi_1+\xi_2. \end{align*} We will now show that $\xi_1=\xi_1(\rho)$ and $\xi_2=\xi_2(\rho)$ vanish at the appropriate speed, uniformly so on $\mathcal R_\kappa$. We first work on $\xi_1(\rho)$, from which we wish to remove the explicit dependence on $\hat{C}_N$.
We have \begin{align*} \xi_1 &= a^*\hat{C}_N^{-1} \left( \frac{\alpha}n \sum_{i=1}^n \left[ \frac1{\gamma_N} - \frac1{\tilde{d}_i} \right] z_iz_i^* \right) \hat{S}_N^{-1}b \\ &= a^*\hat{S}_N^{-1} \left( \frac{\alpha}n \sum_{i=1}^n \left[ \frac1{\gamma_N} - \frac1{\tilde{d}_i} \right] z_iz_i^* \right) \hat{S}_N^{-1}b \nonumber \\ &+ a^*(\hat{C}_N^{-1}-\hat{S}_N^{-1}) \left( \frac{\alpha}n \sum_{i=1}^n \left[ \frac1{\gamma_N} - \frac1{\tilde{d}_i} \right] z_iz_i^* \right) \hat{S}_N^{-1}b \\ &= a^*\hat{S}_N^{-1} \left( \frac{\alpha}n \sum_{i=1}^n \frac{\tilde{d}_i-\gamma_N}{\gamma_N^2} z_iz_i^* \right) \hat{S}_N^{-1}b \nonumber \\ &- a^*\hat{S}_N^{-1} \left( \frac{\alpha}n \sum_{i=1}^n \frac{(\tilde{d}_i-\gamma_N)^2}{\gamma_N^2\tilde{d}_i} z_iz_i^* \right) \hat{S}_N^{-1}b \nonumber \\ &+ a^*(\hat{C}_N^{-1}-\hat{S}_N^{-1}) \left( \frac{\alpha}n \sum_{i=1}^n \left[ \frac1{\gamma_N} - \frac1{\tilde{d}_i} \right] z_iz_i^* \right) \hat{S}_N^{-1}b \\ &\triangleq \xi_{11} + \xi_{12} + \xi_{13}. \end{align*} The terms $\xi_{12}$ and $\xi_{13}$ exhibit products of two terms that are expected to be of order $O(N^{-\frac12})$ and which are thus easily handled. As for $\xi_{11}$, it no longer depends on $\hat{C}_N$ and is therefore a standard random variable which, although involved, is technically tractable via standard random matrix methods. In order to show that $N^{1-\varepsilon}\max\{|\xi_{12}|,|\xi_{13}|\}\asto 0$ uniformly in $\rho$, we use the following lemma. \begin{lemma} \label{le:1} For any $\varepsilon>0$, \begin{align*} \max_{1\leq i\leq n}\sup_{\rho\in\mathcal R_\kappa}N^{\frac12-\varepsilon}|\tilde{d}_i(\rho)-\gamma_N(\rho)| &\asto 0 \\ \max_{1\leq i\leq n}\sup_{\rho\in\mathcal R_\kappa}N^{\frac12-\varepsilon}|d_i(\rho)-\gamma_N(\rho)| &\asto 0. 
\end{align*} \end{lemma} Note that, while the first result is a standard, easily established, random matrix result, the second result is the aforementioned refinement of the core result in the proof of \citep[Theorem~1]{COU14}. \begin{proof}[Proof of Lemma~\ref{le:1}] We start by proving the first statement. From \cite[p.~17]{COU14} (taking $w=-\gamma_N\rho \alpha^{-1}$), we have, for each $p\geq 2$ and for each $1\leq k\leq n$, \begin{align*} \EE\left[ \left| \tilde{d}_k(\rho) - \gamma_N(\rho) \right|^p\right] &= O(N^{-\frac{p}2}) \end{align*} where the bound does not depend on $\rho>\max\{0,1-1/c\}+\kappa$. Let now $\max\{0,1-1/c\}+\kappa=\rho_0<\ldots<\rho_{\lceil\sqrt{n}\rceil}=1$ be a regular sampling of $\mathcal R_\kappa$ in $\lceil\sqrt{n}\rceil$ intervals. We then have, from Markov's inequality and the union bound on $n(\lceil\sqrt{n}\rceil+1)$ events, for $C>0$ given, \begin{align*} P \left( \max_{1\leq k\leq n,0\leq i\leq \lceil\sqrt{n}\rceil} \left| N^{\frac12-\varepsilon}( \tilde{d}_k(\rho_i) - \gamma_N(\rho_i)) \right| > C \right) &\leq KN^{-p\varepsilon+\frac32} \end{align*} for some $K>0$ only dependent on $p$ and $C$. From the Borel--Cantelli lemma, we then have $\max_{k,i} | N^{\frac12-\varepsilon}( \tilde{d}_k(\rho_i) - \gamma_N(\rho_i))|\asto 0$ as long as $-p\varepsilon+3/2<-1$, which is obtained for $p>5/(2\varepsilon)$.
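This grid argument, combined with the Lipschitz extension that follows, is used repeatedly below; it rests on the elementary bound $\sup_{\rho}|g(\rho)|\leq\max_i|g(\rho_i)|+L\delta$ for an $L$-Lipschitz function $g$ and a grid of mesh $\delta$. A toy numerical check of this bound, on an arbitrary smooth function unrelated to the quantities of the proof:

```python
import numpy as np

rng = np.random.default_rng(2)
coeffs = rng.standard_normal(5)

def f(x):
    # a smooth (hence Lipschitz on [0, 1]) random trigonometric polynomial
    return sum(c * np.sin((k + 1) * np.pi * x) for k, c in enumerate(coeffs))

# explicit Lipschitz constant: |f'(x)| <= sum_k (k+1) * pi * |c_k|
L = sum((k + 1) * np.pi * abs(c) for k, c in enumerate(coeffs))

delta = 1e-3
grid = np.arange(0.0, 1.0 + delta / 2, delta)   # mesh-delta grid of [0, 1]
grid_bound = np.abs(f(grid)).max() + L * delta  # grid maximum + Lipschitz slack

x_fine = np.linspace(0.0, 1.0, 100001)          # fine-grid proxy for the true sup
true_sup = np.abs(f(x_fine)).max()
```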
Using $|\gamma_N(\rho)-\gamma_N(\rho')|\leq K|\rho-\rho'|$ for some constant $K$ and each $\rho,\rho'\in\mathcal R_\kappa$ (see \citep[top of Section~5.1]{COU14}) and similarly $\max_{1\leq k\leq n}|\tilde{d}_k(\rho)-\tilde{d}_k(\rho')|\leq K|\rho-\rho'|$ for all large $n$ a.s.\@ (obtained by explicitly writing the difference and using the fact that $\Vert z_k\Vert^2/N$ is asymptotically bounded almost surely), we get \begin{align*} \max_{1\leq k\leq n}\sup_{\rho\in\mathcal R_\kappa}N^{\frac12-\varepsilon}|\tilde{d}_k(\rho)-\gamma_N(\rho)| &\leq \max_{k,i} N^{\frac12-\varepsilon} | \tilde{d}_k(\rho_i) - \gamma_N(\rho_i) | + KN^{-\varepsilon} \\ &\asto 0. \end{align*} The second result relies on revisiting the proof of \cite[Theorem~1]{COU14} incorporating the convergence speed on $\tilde{d}_k-\gamma_N$. For convenience and compatibility with similar derivations that appear later in the proof, we slightly modify the original proof of \cite[Theorem~1]{COU14}. We first define $f_i(\rho)=d_i(\rho)/\gamma_N(\rho)$ and relabel the $d_i(\rho)$ in such a way that $f_1(\rho)\leq \ldots\leq f_n(\rho)$ (the ordering may then depend on $\rho$). Then, we have by definition of $d_n(\rho)=\gamma_N(\rho) f_n(\rho)$ \begin{align*} \gamma_N(\rho) f_n(\rho) &= \frac1Nz_n^*\left( \alpha(\rho) \frac1n\sum_{i<n} \frac{z_iz_i^*}{\gamma_N(\rho) f_i(\rho)} + \rho I_N \right)^{-1}z_n \\ &\leq \frac1Nz_n^*\left( \alpha(\rho) \frac1{f_n(\rho)} \frac1n\sum_{i<n} \frac{z_iz_i^*}{\gamma_N(\rho)} + \rho I_N \right)^{-1}z_n \end{align*} where we used $f_n(\rho)\geq f_i(\rho)$ for each $i$. The above is now equivalent to \begin{align*} \gamma_N(\rho) &\leq \frac1Nz_n^* \left( \alpha(\rho) \frac1n\sum_{i<n} \frac{z_iz_i^*}{\gamma_N(\rho)} + f_n(\rho) \rho I_N \right)^{-1}z_n. 
\end{align*} We now make the assumption that there exists $\eta>0$ and a sequence $\{\rho^{(n)}\}\in\mathcal R_\kappa$ such that $f_n(\rho^{(n)})>1+N^{\eta-\frac12}$ infinitely often, which is equivalent to saying $d_n(\rho^{(n)})>\gamma_N(\rho^{(n)})(1+N^{\eta-\frac12})$ infinitely often (i.o.). Then, from this assumption and the first convergence result above, \begin{align} \label{eq:gammaineq} \gamma_N(\rho^{(n)}) &\leq \frac1Nz_n^*\left( \alpha(\rho^{(n)}) \frac1n\sum_{i<n} \frac{z_iz_i^*}{\gamma_N(\rho^{(n)})} + \rho^{(n)} (1+N^{\eta-\frac12}) I_N \right)^{-1}z_n \nonumber \\ &= \tilde{d}_n(\rho^{(n)}) - N^{\eta-\frac12} \frac1Nz_n^*\left( \frac1n\sum_{i<n} \frac{\alpha(\rho^{(n)}) z_iz_i^*}{\rho^{(n)}\gamma_N(\rho^{(n)})} + (1+N^{\eta-\frac12}) I_N \right)^{-1} \nonumber \\ &\times\left( \frac1n\sum_{i<n} \frac{\alpha(\rho^{(n)}) z_iz_i^*}{\gamma_N(\rho^{(n)})} + \rho^{(n)} I_N \right)^{-1}z_n. \end{align} Now, by the first result of the lemma, letting $0<\varepsilon<\eta$, we have \begin{align*} \left| \tilde{d}_n(\rho^{(n)}) - \gamma_N(\rho^{(n)}) \right| &\leq \max_{\rho\in\mathcal R_\kappa} \left| \tilde{d}_n(\rho) - \gamma_N(\rho) \right| \leq N^{\varepsilon-\frac12} \end{align*} for all large $n$ a.s., so that, for these large $n$, $\tilde{d}_n(\rho^{(n)}) \leq \gamma_N(\rho^{(n)}) + N^{\varepsilon-\frac12}$. Applying this inequality to the first right-hand side term of \eqref{eq:gammaineq}, and using the almost sure boundedness of the rightmost right-hand side term, entails \begin{align*} 0 \leq N^{\varepsilon-\frac12} - KN^{\eta-\frac12} \end{align*} for some $K>0$ for all large $n$ a.s. But, since $\varepsilon<\eta$, $N^{\varepsilon-\frac12} - KN^{\eta-\frac12}<0$ for all large $N$, which contradicts the inequality. Thus, our initial assumption is wrong and therefore, for each $\eta>0$, we have for all large $n$ a.s., $d_n(\rho)<\gamma_N(\rho)+N^{\eta-\frac12}$ uniformly on $\rho\in\mathcal R_\kappa$.
The same computation can be performed for $d_1(\rho)$ by assuming that $f_1(\rho^{\prime(n)})<1-N^{\eta-\frac12}$ i.o.\@ over some sequence $\rho^{\prime(n)}$; by reversing all inequalities in the derivation above, we similarly conclude by contradiction that $d_1(\rho)>\gamma_N(\rho)-N^{\eta-\frac12}$ for all large $n$, uniformly so in $\mathcal R_\kappa$. Together, both results finally lead, for each $\varepsilon>0$, to \begin{align*} \max_{1\leq k\leq n} \sup_{\rho\in\mathcal R_\kappa} \left| N^{\frac12-\varepsilon} \left( d_k(\rho) - \gamma_N(\rho) \right) \right| &\asto 0 \end{align*} obtained by fixing $\varepsilon$, taking $\eta$ such that $0<\eta<\varepsilon$, and using $\max_k \sup_{\rho} |d_k(\rho)-\gamma_N(\rho)|<N^{\eta-\frac12}$ for all large $n$ a.s. \end{proof} Thanks to Lemma~\ref{le:1}, expressing $\hat{C}_N^{-1}(\rho)-\hat{S}_N^{-1}(\rho)$ as a function of $d_i(\rho)-\gamma_N(\rho)$ and using the (almost sure) boundedness of the various terms involved, we finally get $N^{1-\varepsilon}\xi_{12}\asto 0$ and $N^{1-\varepsilon}\xi_{13}\asto 0$ uniformly on $\rho$. \bigskip It then remains to handle the more delicate term $\xi_{11}$, which can be further expressed as \begin{align*} \xi_{11} &= \frac{\alpha}{\gamma_N^2}a^*\hat{S}_N^{-1}\left( \frac1n \sum_{i=1}^n (\tilde{d}_i-\gamma_N) z_iz_i^* \right)\hat{S}_N^{-1}b \nonumber \\ &= \frac{\alpha}{\gamma_N^2} \frac1n\sum_{i=1}^n a^*\hat{S}_N^{-1}z_iz_i^*\hat{S}_N^{-1}b \left( \tilde{d}_i - \gamma_N \right). \end{align*} For this, we will resort to the following lemma, whose proof is postponed to Appendix~\ref{app:key_lemma}. \begin{lemma} Let $\first$ and $\second$ be random or deterministic vectors, independent of $z_1,\cdots,z_n$, such that $\max\left(\EE[\|\first\|^{k}], \EE[\|\second\|^k]\right) \leq K$ for some $K>0$ and all integers $k$.
Then, for each integer $p$, \begin{align*} \EE\left[\left|\frac{1}{n}\sum_{i=1}^n \first^*\SN z_iz_i^* \SN \second \left(\frac{1}{N}z_i^* \Si z_i-\gammanrho\right) \right|^{2p}\right] =O\left(N^{-2p}\right). \end{align*} \label{le:keylemma1} \end{lemma} By Markov's inequality and the union bound, similarly to the proof of Lemma~\ref{le:1}, we get from Lemma~\ref{le:keylemma1} (with $c=a$ and $d=b$) that, for each $\eta>0$ and for each integer $p\geq 1$, \begin{align*} P \left( \sup_{\rho\in \{\rho_0<\ldots<\rho_{\lceil\sqrt{n}\rceil}\}} N^{1-\varepsilon} |\xi_{11}| > \eta \right) &\leq K N^{-p\varepsilon+\frac12} \end{align*} with $K$ depending only on $\eta$, and $\rho_0<\ldots<\rho_{\lceil\sqrt{n}\rceil}$ a regular sampling of $\mathcal R_\kappa$. Taking $p>3/(2\varepsilon)$, we finally get from the Borel--Cantelli lemma that \begin{align*} N^{1-\varepsilon} \xi_{11} &\asto 0 \end{align*} uniformly on $\{\rho_0,\ldots,\rho_{\lceil\sqrt{n}\rceil}\}$ and finally, using Lipschitz arguments as in the proof of Lemma~\ref{le:1}, uniformly on $\mathcal R_\kappa$. Putting all results together, we finally have \begin{align*} \sup_{\rho\in\mathcal R_\kappa} N^{1-\varepsilon} |\xi_1(\rho)| &\asto 0 \end{align*} which concludes the first part of the proof. \bigskip We now continue with $\xi_2(\rho)$. In order to prove $N^{1-\varepsilon}\xi_2(\rho)\asto 0$ uniformly on $\rho\in \mathcal R_\kappa$, it is sufficient (thanks to the boundedness of the various terms involved) to prove that \begin{align*} \max_{1\leq i\leq n}\sup_{\rho\in\mathcal R_\kappa} \left| N^{1-\varepsilon} \left(\tilde{d}_i(\rho) - d_i(\rho)\right)\right| &\asto 0. \end{align*} To obtain this result, we first need the following fundamental proposition.
\begin{proposition} \label{prop:2} For any $\varepsilon>0$, \begin{align*} \max_{1\leq k\leq n}\sup_{\rho \in\mathcal R_\kappa} \left| N^{1-\varepsilon} \left( \tilde{d}_k(\rho) - \frac1N z_k^*\left( \alpha(\rho) \frac1n \sum_{i\neq k} \frac{z_iz_i^*}{\tilde{d}_i(\rho)} + \rho I_N \right)^{-1}z_k \right) \right| \asto 0. \end{align*} \end{proposition} \begin{proof} By expanding the definition of $\tilde{d}_k$, first observe that \begin{align*} &\tilde{d}_k - \frac1N z_k^*\left( \alpha \frac1n \sum_{i\neq k} \frac{z_iz_i^*}{\tilde{d}_i} + \rho I_N \right)^{-1}z_k \\ &=\alpha \frac1n\sum_{i\neq k} \frac1N z_k^* \hat{S}_{(k)}^{-1} z_iz_i^* \frac{\gamma_N - \tilde{d}_i}{\gamma_N\tilde{d}_i} \left( \alpha \frac1n \sum_{j\neq k} \frac{z_jz_j^*}{\tilde{d}_j} + \rho I_N \right)^{-1}z_k. \end{align*} Similar to the derivation of $\xi_1$, we now proceed to approximating $\tilde{d}_i$ in the central denominator and each $\tilde{d}_j$ in the rightmost inverse matrix by the non-random $\gamma_N$. We obtain (from Lemma~\ref{le:1}) \begin{align*} &\tilde{d}_k - \frac1N z_k^*\left( \alpha \frac1n \sum_{i\neq k} \frac{z_iz_i^*}{\tilde{d}_i} + \rho I_N \right)^{-1}z_k \\ &= \frac{\alpha}{\gamma_N^2} \frac1n\sum_{i\neq k} \frac1N z_k^* \hat{S}_{(k)}^{-1} z_iz_i^* (\gamma_N - \tilde{d}_i) \hat{S}_{(k)}^{-1} z_k + o(N^{\varepsilon-1}) \end{align*} almost surely, for $\varepsilon>0$ and uniformly so on $\rho$. The objective is then to show that the first right-hand side term is $o(N^{\varepsilon-1})$ almost surely and that this holds uniformly on $k$ and $\rho$. This is achieved by applying Lemma~\ref{le:keylemma1} with $c=d=z_k$. 
Indeed, Lemma~\ref{le:keylemma1} ensures that, for each integer $p$,\footnote{Note that, strictly speaking, Lemma~\ref{le:keylemma1} should be applied here with $n-1$ in place of $n$; but since $1/n-1/(n-1)=O(n^{-2})$, this does not affect the result.} \begin{align*} \EE\left[\left|\frac{1}{n}\sum_{i\neq k}\frac{1}{N}z_k^*\hat{S}_{(k)}^{-1}(\rho) z_iz_i^* \hat{S}_{(k)}^{-1}(\rho)z_k\left(\frac{1}{N}z_i^*\hat{S}_{(i,k)}^{-1}(\rho)z_i-\gamma_N(\rho)\right)\right|^p\right]=O(N^{-p}). \end{align*} From this lemma, applying Markov's inequality, we have for each $k$, \begin{align*} P \left( N^{1-\varepsilon} \left| \frac1n\sum_{i\neq k} \frac1Nz_k^* \hat{S}_{(k)}^{-1}z_iz_i^*\hat{S}_{(k)}^{-1}z_k \left( \frac1Nz_i^*\hat{S}_{(i,k)}^{-1}z_i - \gamma_N \right) \right| > \eta \right) \leq K N^{-p\varepsilon} \end{align*} for some $K>0$ only dependent on $\eta>0$. Applying the union bound on the $n(n+1)$ events for $k=1,\ldots,n$ and for $\rho\in\{\rho_0,\ldots,\rho_n\}$, a regular $n$-discretization of $\mathcal R_\kappa$, we then have \begin{align*} &P \left( \max_{k,j} N^{1-\varepsilon} \left| \frac1n\sum_{i\neq k} \frac1Nz_k^* \hat{S}_{(k)}^{-1}z_iz_i^*\hat{S}_{(k)}^{-1}z_k \left( \frac1Nz_i^*\hat{S}_{(i,k)}^{-1}z_i - \gamma_N(\rho_j) \right) \right| > \eta \right) \nonumber \\ &\leq K N^{-p\varepsilon+2}. \end{align*} Taking $p>3/\varepsilon$, the right-hand side is summable, so that the Borel--Cantelli lemma applies and we finally get \begin{align*} \max_{k,j} \left| N^{1-\varepsilon} \left( \tilde{d}_k(\rho_j) - \frac1N z_k^*\left( \alpha(\rho_j) \frac1n \sum_{i\neq k} \frac{z_iz_i^*}{\tilde{d}_i(\rho_j)} + \rho_j I_N \right)^{-1}z_k \right) \right| \asto 0. \end{align*} Using the $\rho$-Lipschitz property (which holds for all large $n$ a.s.) on both terms in the above difference concludes the proof of the proposition. \end{proof} The crux of the proof for the convergence of $\xi_2$ starts now.
In a similar manner to the proof of Lemma~\ref{le:1}, we define $\tilde{f}_i(\rho)=d_i(\rho)/\tilde{d}_i(\rho)$ and reorder the indexes in such a way that $\tilde{f}_1(\rho)\leq \ldots\leq \tilde{f}_n(\rho)$ (this ordering depending on $\rho$). Then, by definition of $d_n(\rho)=\tilde{f}_n(\rho)\tilde{d}_n(\rho)$, \begin{align*} \tilde{d}_n(\rho) \tilde{f}_n(\rho) &= \frac1Nz_n^*\left( \alpha(\rho) \frac1n\sum_{i<n} \frac{z_iz_i^*}{\tilde{d}_i(\rho)\tilde{f}_i(\rho)} + \rho I_N \right)^{-1}z_n \\ &\leq \frac1Nz_n^*\left( \alpha(\rho) \frac1{\tilde{f}_n(\rho)} \frac1n\sum_{i<n} \frac{z_iz_i^*}{\tilde{d}_i(\rho)} + \rho I_N \right)^{-1}z_n \end{align*} where we used $\tilde{f}_n(\rho)\geq \tilde{f}_i(\rho)$ for each $i$. This inequality is equivalent to \begin{align*} \tilde{d}_n(\rho) &\leq \frac1Nz_n^* \left( \alpha(\rho) \frac1n\sum_{i<n} \frac{z_iz_i^*}{\tilde{d}_i(\rho)} + \tilde{f}_n(\rho) \rho I_N \right)^{-1}z_n. \end{align*} Assume now that, over some sequence $\{\rho^{(n)}\}\in\mathcal R_\kappa$, $\tilde{f}_n(\rho^{(n)})>1+N^{\eta-1}$ infinitely often for some $\eta>0$ (or equivalently, $d_n(\rho^{(n)})>\tilde{d}_n(\rho^{(n)})(1+N^{\eta-1})$ i.o.). Then we would have \begin{align*} \tilde{d}_n(\rho^{(n)}) &\leq \frac1Nz_n^*\left( \alpha(\rho^{(n)}) \frac1n\sum_{i<n} \frac{z_iz_i^*}{\tilde{d}_i(\rho^{(n)})} + \rho^{(n)} (1+N^{\eta-1}) I_N \right)^{-1}z_n \\ &=\frac1Nz_n^*\left( \alpha(\rho^{(n)}) \frac1n\sum_{i<n} \frac{z_iz_i^*}{\tilde{d}_i(\rho^{(n)})} + \rho^{(n)} I_N \right)^{-1}z_n \nonumber \\ &- N^{\eta-1} \frac1Nz_n^*\left( \frac1n\sum_{i<n} \frac{\alpha(\rho^{(n)}) z_iz_i^*}{\rho^{(n)}\tilde{d}_i(\rho^{(n)})} + (1+N^{\eta-1}) I_N \right)^{-1} \left( \frac1n\sum_{i<n} \frac{\alpha(\rho^{(n)}) z_iz_i^*}{\tilde{d}_i(\rho^{(n)})} + \rho^{(n)} I_N \right)^{-1}z_n.
\end{align*} But, by Proposition~\ref{prop:2}, letting $0<\varepsilon<\eta$, we have, for all large $n$ a.s., \begin{align*} \frac1Nz_n^*\left( \alpha(\rho^{(n)}) \frac1n\sum_{i<n} \frac{z_iz_i^*}{\tilde{d}_i(\rho^{(n)})} + \rho^{(n)} I_N \right)^{-1}z_n \leq \tilde{d}_n(\rho^{(n)}) + N^{\varepsilon-1} \end{align*} which, along with the uniform boundedness of the $\tilde{d}_i$ away from zero, leads to \begin{align*} \tilde{d}_n(\rho^{(n)}) &\leq \tilde{d}_n(\rho^{(n)}) + N^{\varepsilon-1} - KN^{\eta-1} \end{align*} for some $K>0$. But, as $N^{\varepsilon-1} - KN^{\eta-1}<0$ for all large $N$, we obtain a contradiction. Hence, for each $\eta>0$, we have for all large $n$ a.s., $d_n(\rho)<\tilde{d}_n(\rho)+N^{\eta-1}$ uniformly on $\rho\in\mathcal R_\kappa$. Proceeding similarly with $d_1(\rho)$, and exploiting the fact that $\limsup_n \sup_\rho \max_i \tilde{d}_i(\rho)<\infty$ a.s.\@, we finally have, for each $0<\varepsilon<\frac12$, that \begin{align*} \max_{1\leq k\leq n} \sup_{\rho\in\mathcal R_\kappa}\left| N^{1-\varepsilon} \left( d_k(\rho) - \tilde{d}_k(\rho) \right) \right| &\asto 0 \end{align*} (for this, take an $\eta$ such that $0<\eta<\varepsilon$ and use $\max_k \sup_\rho |d_k(\rho)-\tilde{d}_k(\rho)|<N^{\eta-1}$ for all large $n$ a.s.). Getting back to $\xi_2$, we now have \begin{align*} N^{1-\varepsilon} |\xi_2(\rho)| &= N^{1-\varepsilon} \left|a^*\hat{C}_N^{-1}(\rho) \left( \frac{\alpha(\rho)}n \sum_{i=1}^n \frac{d_i(\rho)-\tilde{d}_i(\rho)}{d_i(\rho)\tilde{d}_i(\rho)} z_iz_i^* \right)\hat{S}_N^{-1}(\rho)b\right|.
\end{align*} But, from the above result, \begin{align*} N^{1-\varepsilon} \left\Vert \frac{\alpha(\rho)}n \sum_{i=1}^n \frac{d_i(\rho)-\tilde{d}_i(\rho)}{d_i(\rho)\tilde{d}_i(\rho)} z_iz_i^* \right\Vert &\leq N^{1-\varepsilon} \max_{1\leq k\leq n} \left| \frac{d_k(\rho)-\tilde{d}_k(\rho)}{d_k(\rho)\tilde{d}_k(\rho)} \right| \left\Vert \frac{\alpha(\rho)}n \sum_{i=1}^n z_iz_i^* \right\Vert \\ &\asto 0 \end{align*} uniformly so on $\rho\in\mathcal R_\kappa$ which, along with the boundedness of $\Vert \hat{C}_N^{-1}\Vert$, $\Vert \hat{S}_N^{-1}\Vert$, $\Vert a\Vert$, and $\Vert b\Vert$, finally gives $N^{1-\varepsilon} \xi_2\asto 0$ uniformly on $\rho\in\mathcal R_\kappa$ as desired. \bigskip We have then proved that for each $\varepsilon>0$, \begin{align*} \sup_{\rho\in\mathcal R_\kappa}\left| N^{1-\varepsilon} \left( a^*\hat{C}_N^{-1}(\rho)b - a^*\hat{S}_N^{-1}(\rho)b \right)\right| \asto 0 \end{align*} which proves Theorem~\ref{th:bilin} for $k=-1$. The generalization to arbitrary $k$ is rather immediate. Writing recursively $\hat{C}_N^k-\hat{S}_N^k=\hat{C}_N^{k-1}(\hat{C}_N-\hat{S}_N)+(\hat{C}_N^{k-1}-\hat{S}_N^{k-1})\hat{S}_N$ for positive $k$, or $\hat{C}_N^k-\hat{S}_N^k=\hat{C}_N^k(\hat{S}_N-\hat{C}_N)\hat{S}_N^{-1}+(\hat{C}_N^{k+1}-\hat{S}_N^{k+1})\hat{S}_N^{-1}$ for negative $k$, \eqref{eq:firsteq} becomes a finite sum of terms that can be treated almost exactly as in the proof. This concludes the proof of Theorem~\ref{th:bilin}. \subsection{Fluctuations of the GLRT detector} This section is devoted to the proof of Theorem~\ref{th:T}, which shall fundamentally rely on Theorem~\ref{th:bilin}. The proof will be established in two steps. First, we shall prove the convergence for each $\rho\in\mathcal R_\kappa$, which we then generalize to the uniform statement of the theorem. Let us then fix $\rho\in\mathcal R_\kappa$ for the moment.
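As it is a purely algebraic identity, the positive-$k$ telescoping recursion used above can be sanity-checked numerically on arbitrary square matrices (a toy check, independent of the estimators):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 6
C = rng.standard_normal((N, N))
S = rng.standard_normal((N, N))

def mpow(M, k):
    return np.linalg.matrix_power(M, k)

# C^k - S^k = C^{k-1} (C - S) + (C^{k-1} - S^{k-1}) S, for k >= 1
errs = [np.abs(mpow(C, k) - mpow(S, k)
               - (mpow(C, k - 1) @ (C - S)
                  + (mpow(C, k - 1) - mpow(S, k - 1)) @ S)).max()
        for k in range(1, 6)]
```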
In anticipation of the eventual replacement of $\hat{C}_N(\rho)$ by $\underline{\hat{S}}_N(\underline\rho)$, we start by studying the fluctuations of the bilinear forms involved in $T_N(\rho)$ but with $\hat{C}_N(\rho)$ replaced by $\underline{\hat{S}}_N(\underline\rho)$ (note that $T_N(\rho)$ remains constant when scaling $\hat{C}_N(\rho)$ by any constant, so that replacing $\hat{C}_N(\rho)$ by $\underline{\hat{S}}_N(\underline\rho)$ instead of by $\underline{\hat{S}}_N(\underline\rho)\cdot\frac1N\tr \hat{S}_N(\rho)$ as one would expect comes with no effect). Our first goal is to show that the vector $\sqrt{N}(\Re[y^*\underline{\hat S}^{-1}_N(\underline\rho) p],\Im[y^*\underline{\hat S}^{-1}_N(\underline\rho) p])$ is asymptotically well approximated by a zero mean Gaussian vector with given covariance matrix. To this end, let us denote $A=[y~p]\in\CC^{N\times 2}$ and $Q_N=Q_N(\underline\rho)=(I_N+(1-\underline{\rho}) m(-\underline{\rho})C_N)^{-1}$. Then, from \cite[Lemma~5.3]{CHA12} (adapted to our current notations and normalizations), for any Hermitian $B\in\CC^{2\times 2}$ and for any $u\in\RR$, \begin{align} &\EE\left[\exp\left( \imath \sqrt{N} u \tr BA^*\left[\underline{\hat{S}}_N(\underline\rho)^{-1}-\frac1{\underline\rho}Q_N(\underline\rho) \right]A \right) ~\Big|~y\right] \nonumber \\ &= \exp\left(-\frac12 u^2 \Delta_N^2(B;y;p) \right) + O(N^{-\frac12}) \label{eq:characteristic_fun} \end{align} where we denote by $\EE[\cdot |y]$ the conditional expectation with respect to the random vector $y$ and where \begin{align*} \Delta_N^2(B;y;p) &\triangleq \frac{cm(-\underline{\rho})^2(1-\underline{\rho})^2\tr \left( A B A^* C_NQ_N^2(\underline\rho)\right)^2}{\underline{\rho}^2 \left(1-c m(-\underline{\rho})^2(1-\underline{\rho})^2 \frac1N\tr C_N^2Q_N^2(\underline\rho)\right) } . 
\end{align*} Also, we have, from classical central limit results on Gaussian random variables, \begin{align*} \EE\left[\exp\left( \imath \sqrt{N} u \tr B \left[A^*Q_N(\underline\rho)A - \Gamma_N \right]\right)\right] &= \exp \left( -\frac12u^2 \Delta_N^{\prime 2}(B;p) \right) + O(N^{-\frac12}) \end{align*} where \begin{align*} \Gamma_N &\triangleq \frac1{\underline\rho}\begin{bmatrix} \frac1N\tr C_NQ_N(\underline\rho)& 0 \\ 0 & p^*Q_N(\underline\rho)p \end{bmatrix} \\ \Delta_N^{\prime 2}(B;p) &\triangleq \frac{B_{11}^2}{\underline\rho^2} \frac1N\tr C_N^2Q_N^2(\underline\rho) + \frac{2|B_{12}|^2}{\underline\rho^2} p^*C_NQ_N^2(\underline\rho)p. \end{align*} Besides, the $O(N^{-\frac12})$ terms in the right-hand side of \eqref{eq:characteristic_fun} remain $O(N^{-\frac12})$ under expectation over $y$ (for this, see the proof of \cite[Lemma~5.3]{CHA12}). Altogether, we then have \begin{align*} & \EE\left[\exp\left( \imath \sqrt{N} u \tr B\left[ A^*\underline{\hat{S}}_N^{-1}(\underline\rho)A - \Gamma_N\right] \right)\right] \nonumber \\ &= \EE \left[\exp\left(-\frac12 u^2 \Delta_N^2(B;y;p)\right) \right] \exp\left(-\frac12 u^2 \Delta_N^{\prime 2}(B;p) \right) + O(N^{-\frac12}).
\end{align*} Note now that \begin{align*} A^* C_NQ_N^2(\underline\rho)A - \Upsilon_N &\asto 0 \end{align*} where \begin{align*} \Upsilon_N &\triangleq \begin{bmatrix} \frac1N\tr C_N^2 Q_N^2(\underline\rho) & 0 \\ 0 & p^*C_NQ_N^2(\underline\rho) p \end{bmatrix} \end{align*} so that, by dominated convergence, we obtain \begin{align*} &\EE\left[\exp\left( \imath \sqrt{N} u \tr B\left[ A^*\underline{\hat{S}}_N^{-1}(\underline\rho)A - \Gamma_N\right] \right)\right] \nonumber \\ &= \exp\left(-\frac12 u^2 \left[ \Delta_N^2(B;p) + \Delta_N^{\prime 2}(B;p) \right]\right) + o(1) \end{align*} where we defined \begin{align*} \Delta_N^2(B;p) &\triangleq \frac{cm(-\underline{\rho})^2(1-\underline{\rho})^2 \tr \left( B \Upsilon_N\right)^2}{\underline{\rho}^2 \left(1-c m(-\underline{\rho})^2(1-\underline{\rho})^2 \frac1N\tr C_N^2Q_N^2(\underline\rho)\right)}. \end{align*} By a generalized L\'evy's continuity theorem argument (see e.g.\@ \cite[Proposition~6]{HAC06}) and the Cramer-Wold device, we conclude that \begin{align*} \sqrt{N}\left(y^*\underline{\hat{S}}_N^{-1}(\underline{\rho})y,\Re[y^*\underline{\hat{S}}_N^{-1}(\underline{\rho})p],\Im[y^*\underline{\hat{S}}_N^{-1}(\underline{\rho})p]\right) - Z_N &= o_P(1) \end{align*} where $Z_N$ is a Gaussian random vector with mean and covariance matrix prescribed by the above approximation of $\sqrt{N} \tr BA^*\underline{\hat{S}}_N^{-1}A$ for each Hermitian $B$. 
In particular, taking $B_1\in\left\{ \left[\begin{smallmatrix} 0 & \frac12 \\ \frac12 & 0\end{smallmatrix}\right],\left[\begin{smallmatrix} 0 & \frac{\imath}2 \\ -\frac{\imath}2 & 0\end{smallmatrix}\right]\right\}$ to retrieve the asymptotic variances of $\sqrt{N}\Re[y^*\underline{\hat{S}}_N^{-1}(\underline{\rho})p]$ and $\sqrt{N}\Im[y^*\underline{\hat{S}}_N^{-1}(\underline{\rho})p]$, respectively, gives \begin{align*} \Delta_N^2(B_1;p) &=\frac1{2 \underline{\rho}^2} p^*C_NQ_N^2(\underline\rho)p \frac{c m(-\underline{\rho})^2(1-\underline{\rho})^2 \frac1N\tr C_N^2Q_N^2(\underline\rho) }{1-c m(-\underline{\rho})^2(1-\underline{\rho})^2 \frac1N\tr C_N^2Q_N^2(\underline\rho)} \\ \Delta_N^{\prime 2}(B_1;p) &=\frac1{2\underline{\rho}^2} p^*C_NQ_N^2(\underline\rho)p \end{align*} and thus $\sqrt{N}(\Re[y^*\underline{\hat{S}}_N^{-1}(\underline{\rho})p],\Im[y^*\underline{\hat{S}}_N^{-1}(\underline{\rho})p])$ is asymptotically equivalent to a Gaussian vector with zero mean and covariance matrix \begin{align*} (\Delta_N^2(B_1;p)+\Delta_N^{\prime 2}(B_1;p))I_2 &= \frac1{2\underline{\rho}^2} \frac{p^*C_NQ_N^2(\underline\rho)p}{1-c m(-\underline{\rho})^2(1-\underline{\rho})^2 \frac1N\tr C_N^2Q_N^2(\underline\rho)} I_2. \end{align*} We are now in a position to apply Theorem~\ref{th:bilin}. Recalling that $\hat{S}_N^{-1}(\rho) (\rho + \frac1{\gamma_N(\rho)} \frac{1-\rho}{1-(1-\rho)c}) =\underline{\hat{S}}_N^{-1}(\underline\rho)$, we have, by Theorem~\ref{th:bilin} for $k=-1$, \begin{align*} \sqrt{N} A^*\left[\hat{C}_N^{-1}(\rho)-\frac{\underline{\hat{S}}_N(\underline\rho)^{-1}}{\rho + \frac1{\gamma_N(\rho)} \frac{1-\rho}{1-(1-\rho)c}}\right] A &\asto 0. \end{align*} Since almost sure convergence implies weak convergence, $\sqrt{N} A^*\hat{C}_N^{-1}(\rho)A$ has the same asymptotic fluctuations as $\sqrt{N} A^*\underline{\hat{S}}_N^{-1}(\underline\rho)A/(\frac1N\tr \hat{S}_N(\rho))$.
Also, as $T_N(\rho)$ remains identical when scaling $\hat{C}_N^{-1}(\rho)$ by $\frac1N\tr \hat{S}_N(\rho)$, only the fluctuations of $\sqrt{N} A^*\underline{\hat{S}}_N^{-1}(\underline\rho)A$ are of interest, and these were derived previously. We then finally conclude by the delta method (or more directly by Slutsky's lemma) that \begin{align*} \sqrt{\frac{N}{y^*\hat{C}_N^{-1}(\rho)y p^*\hat{C}_N^{-1}(\rho)p}} \begin{bmatrix} \Re\left[ y^*\hat{C}_N^{-1}(\rho)p \right] \\ \Im\left[ y^*\hat{C}_N^{-1}(\rho)p \right] \end{bmatrix} - \sigma_N(\underline{\rho}) Z' = o_P(1) \end{align*} for some $Z'\sim \mathcal N(0,I_2)$ and \begin{align*} \sigma_N^2(\underline\rho) &\triangleq \frac12 \frac{ p^*C_NQ_N^2(\underline\rho)p}{ p^*Q_N(\underline\rho)p\cdot \frac1N\tr C_NQ_N(\underline\rho)\cdot \left(1-c m(-\underline{\rho})^2(1-\underline{\rho})^2 \frac1N\tr C_N^2Q_N^2(\underline\rho)\right) }. \end{align*} It follows that, for $\gamma>0$, \begin{align} \label{eq:conv_proba_rho} P\left( T_N(\rho) > \frac{\gamma}{\sqrt{N}} \right) - \exp \left( - \frac{\gamma^2}{2\sigma_N^2(\underline\rho)} \right) \to 0 \end{align} as desired. \bigskip The second step of the proof is to generalize \eqref{eq:conv_proba_rho} to uniform convergence across $\rho\in\mathcal R_\kappa$. To this end, in a manner somewhat similar to the above, we shall transfer the distribution $P(\sqrt{N}T_N(\rho) > \gamma)$ to $P(\sqrt{N}\underline T_N(\rho) > \gamma)$ by exploiting the uniform convergence of Theorem~\ref{th:bilin}, where we defined \begin{align*} \underline T_N(\rho) &\triangleq \frac{\left|y^*\underline{\hat S}_N^{-1}(\underline\rho) p\right|}{\sqrt{y^*\underline{\hat S}_N^{-1}(\underline\rho) y}\sqrt{p^*\underline{\hat S}_N^{-1}(\underline\rho) p}} \end{align*} and exploit a $\rho$-Lipschitz property of $\sqrt{N}\underline T_N(\rho)$ to reduce the uniform convergence over $\mathcal R_\kappa$ to a uniform convergence over finitely many values of $\rho$.
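For reference, the exponential form in \eqref{eq:conv_proba_rho} is the Rayleigh tail: if $X,Y$ are independent $\mathcal N(0,\sigma^2)$ variables, then $P(\sqrt{X^2+Y^2}>\gamma)=\exp(-\gamma^2/(2\sigma^2))$. A quick Monte Carlo check of this classical fact:

```python
import numpy as np

rng = np.random.default_rng(3)
sigma, gamma, reps = 0.8, 1.2, 200000

# norm of an i.i.d. N(0, sigma^2) pair is Rayleigh(sigma) distributed
X = sigma * rng.standard_normal(reps)
Y = sigma * rng.standard_normal(reps)
emp = np.mean(np.hypot(X, Y) > gamma)
theo = np.exp(-gamma**2 / (2 * sigma**2))
```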
The $\rho$-Lipschitz property we shall need is as follows: for each $\varepsilon>0$ \begin{align} \label{eq:tightness_condition} \lim_{\delta\to 0} \lim_{N\to\infty} P\left( \sup_{\substack{\rho,\rho'\in\mathcal R_\kappa \\ |\rho-\rho'|<\delta} } \sqrt{N}\left|T_N(\rho)-T_N(\rho')\right| > \varepsilon \right) &= 0. \end{align} Let us prove this result. By Theorem~\ref{th:bilin}, since almost sure convergence implies convergence in probability, we have \begin{align*} P\left( \sup_{\rho\in\mathcal R_\kappa} \sqrt{N}\left|T_N(\rho)-\underline T_N(\rho)\right| > \varepsilon \right) &\to 0. \end{align*} In view of this result, it is sufficient to prove \eqref{eq:tightness_condition} for $\underline T_N(\rho)$ in place of $T_N(\rho)$. Let $\eta>0$ be small and let $\mathcal A_N^\eta\triangleq \{\exists \underline\rho \in \mathcal R_\kappa,\ y^*\underline{\hat{S}}_N^{-1}(\underline\rho)y\,p^*\underline{\hat{S}}_N^{-1}(\underline\rho)p<\eta\}$. Developing the difference $\underline T_N(\rho)-\underline T_N(\rho')$ and bounding the denominators according to whether $\mathcal A_N^\eta$ holds or not, we may write \begin{align*} & P\left( \sup_{\substack{\rho,\rho'\in\mathcal R_\kappa \\ |\rho-\rho'|<\delta} } \sqrt{N}\left|\underline T_N(\rho)-\underline T_N(\rho')\right| > \varepsilon \right) \nonumber \\ &\leq P\left( \mathcal A_N^\eta \right) + P\left( \sup_{\substack{\rho,\rho'\in\mathcal R_\kappa \\ |\rho-\rho'|<\delta} } \sqrt{N} V_N(\rho,\rho')>\varepsilon\eta \right) \end{align*} where \begin{align*} V_N(\rho,\rho') &\triangleq \left| y^*\underline{\hat{S}}_N^{-1}(\underline\rho)p \right| \sqrt{y^*\underline{\hat{S}}_N^{-1}(\underline\rho')y}\sqrt{p^*\underline{\hat{S}}_N^{-1}(\underline\rho')p} \nonumber \\ &- \left| y^*\underline{\hat{S}}_N^{-1}(\underline\rho')p \right| \sqrt{y^*\underline{\hat{S}}_N^{-1}(\underline\rho)y}\sqrt{p^*\underline{\hat{S}}_N^{-1}(\underline\rho)p}. 
\end{align*} From classical random matrix results, $P( \mathcal A_N^\eta )\to 0$ for a sufficiently small choice of $\eta$. To prove that $\lim_\delta \limsup_n P(\sup_{|\rho-\rho'|<\delta} \sqrt{N} V_N(\rho,\rho')>\varepsilon\eta)=0$, it is then sufficient to show that \begin{align} \label{eq:rho-rho'} \lim_{\delta\to 0} \limsup_n P\left( \sup_{\substack{\rho,\rho'\in\mathcal R_\kappa \\ |\rho-\rho'|<\delta}} \sqrt{N}|y^*\underline{\hat S}_N(\underline\rho)^{-1}p-y^*\underline{\hat S}_N(\underline\rho')^{-1}p| > \varepsilon' \right) = 0 \end{align} for any $\varepsilon'>0$ and similarly for $y^*\underline{\hat S}_N(\underline\rho)^{-1}y-y^*\underline{\hat S}_N(\underline\rho')^{-1}y$ and $p^*\underline{\hat S}_N(\underline\rho)^{-1}p-p^*\underline{\hat S}_N(\underline\rho')^{-1}p$. Let us prove \eqref{eq:rho-rho'}, the other two results following essentially the same line of argument. For this, by \cite[Corollary~16.9]{KAL02} (see also \cite[Theorem~12.3]{BIL68}), it is sufficient to prove, say \begin{align*} \sup_{\substack{\rho,\rho'\in \mathcal R_\kappa\\ \rho\neq \rho'}} \sup_n \frac{\EE \left[N\left|y^*\underline{\hat S}_N(\underline\rho)^{-1}p-y^*\underline{\hat S}_N(\underline\rho')^{-1}p \right|^2\right]}{|\rho-\rho'|^2} < \infty. \end{align*} But then, remarking that \begin{align*} &\sqrt{N}\left( y^*\underline{\hat S}_N(\underline\rho)^{-1}p-y^*\underline{\hat S}_N(\underline\rho')^{-1}p \right) \nonumber \\ &= (\underline\rho'-\underline\rho) \sqrt{N} y^*\underline{\hat S}_N(\underline\rho)^{-1}\left( I_N - \frac1n\sum_{i=1}^n z_iz_i^* \right)\underline{\hat S}_N(\underline\rho')^{-1}p \end{align*} this reduces to showing that \begin{align*} \sup_{\rho,\rho'\in \mathcal R_\kappa} \sup_n \EE \left[ N \left| y^*\underline{\hat S}_N(\underline\rho)^{-1}\left( I_N - \frac1n\sum_{i=1}^n z_iz_i^* \right)\underline{\hat S}_N(\underline\rho')^{-1}p \right|^2\right] <\infty. 
\end{align*} Conditioning first on $z_1,\ldots,z_n$, this further reduces to showing \begin{align*} \sup_{\rho,\rho'\in \mathcal R_\kappa} \sup_n \EE \left[ \left\Vert \underline{\hat S}_N(\underline\rho)^{-1}\left( I_N - \frac1n\sum_{i=1}^n z_iz_i^* \right)\underline{\hat S}_N(\underline\rho')^{-1}p \right\Vert^2\right] <\infty. \end{align*} But this is yet another standard random matrix result, obtained, e.g., by noticing that \begin{align*} \left\Vert \underline{\hat S}_N(\underline\rho)^{-1}\left( I_N - \frac1n\sum_{i=1}^n z_iz_i^* \right)\underline{\hat S}_N(\underline\rho')^{-1}p \right\Vert^2 &\leq \frac1{\kappa^4}\left\Vert I_N - \frac1n\sum_{i=1}^n z_iz_i^* \right\Vert^2 \end{align*} which remains of uniformly finite expectation (left norm is vector Euclidean norm, right norm is matrix spectral norm). This completes the proof of \eqref{eq:tightness_condition}. \smallskip Getting back to our original problem, let us now take $\varepsilon>0$ arbitrary, let $\rho_1<\ldots<\rho_K$ be a regular sampling of $\mathcal R_\kappa$, and set $\delta=1/K$. Then by \eqref{eq:conv_proba_rho}, $K$ being fixed, for all $n>n_0(\varepsilon)$, \begin{align} \label{eq:Kfoldconv} \max_{1\leq k\leq K} \left| P\left( T_N(\rho_k) > \frac{\gamma}{\sqrt{N}} \right) - \exp\left( -\frac{\gamma^2}{2\sigma_N^2(\rho_k)} \right) \right| &< \varepsilon. \end{align} Also, from \eqref{eq:tightness_condition}, for small enough $\delta$, \begin{align*} &\max_{1\leq k\leq K} P\left( \sup_{\substack{\rho\in\mathcal R_\kappa \\ |\rho-\rho_k|<\delta}} \sqrt{N}|T_N(\rho)-T_N(\rho_k)| > \gamma \zeta \right) \nonumber \\ &\leq P\left( \sup_{\substack{\rho,\rho'\in\mathcal R_\kappa \\ |\rho-\rho'|<\delta}} \sqrt{N}|T_N(\rho)-T_N(\rho')| > \gamma \zeta \right) \\ &< \varepsilon \end{align*} for all large $n>n_0'(\varepsilon,\zeta)>n_0(\varepsilon)$ where $\zeta>0$ is also taken arbitrarily small. 
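The resolvent identity behind the Lipschitz estimate, together with the $\kappa^{-4}$ bound coming from $\underline{\hat S}_N(\underline\rho)\succeq \underline\rho I_N$, can be checked numerically. In the sketch below the dimensions and the Gaussian samples are arbitrary placeholders for the $z_i$.

```python
import numpy as np

rng = np.random.default_rng(1)
N, n = 10, 20
kappa = 0.1  # stand-in for the lower bound on rho over R_kappa (assumption)

Z = (rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n))) / np.sqrt(2)
M = Z @ Z.conj().T / n  # (1/n) sum_i z_i z_i^*

def S(rho):
    # underline{hat S}_N(rho) = (1 - rho) (1/n) sum_i z_i z_i^* + rho I_N
    return (1 - rho) * M + rho * np.eye(N)

rho, rho_p = 0.3, 0.55
# Resolvent identity: S(rho)^{-1} - S(rho')^{-1}
#   = (rho' - rho) S(rho)^{-1} (I - M) S(rho')^{-1}
lhs = np.linalg.inv(S(rho)) - np.linalg.inv(S(rho_p))
rhs = (rho_p - rho) * np.linalg.inv(S(rho)) @ (np.eye(N) - M) @ np.linalg.inv(S(rho_p))
assert np.allclose(lhs, rhs)

# Since S(rho) >= rho I_N in the Loewner order, ||S(rho)^{-1}|| <= 1/rho <= 1/kappa,
# giving the kappa^{-4} ||I - M||^2 bound on the squared norm below.
p = rng.standard_normal(N) + 1j * rng.standard_normal(N)
p /= np.linalg.norm(p)
v = np.linalg.inv(S(rho)) @ (np.eye(N) - M) @ np.linalg.inv(S(rho_p)) @ p
assert np.linalg.norm(v) ** 2 <= np.linalg.norm(np.eye(N) - M, 2) ** 2 / kappa ** 4
```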
Thus we have, for each $\rho\in \mathcal R_\kappa$ and for $n>n_0'(\varepsilon,\zeta)$ \begin{align*} P\left( T_N(\rho) > \frac{\gamma}{\sqrt{N}} \right) &\leq P\left( T_N(\rho_i) > \frac{\gamma(1-\zeta)}{\sqrt{N}} \right)+ P\left( \sqrt{N} |T_N(\rho)-T_N(\rho_i)|>\gamma\zeta\right) \\ &\leq P\left( T_N(\rho_i) > \frac{\gamma(1-\zeta)}{\sqrt{N}} \right) + \varepsilon \end{align*} for $i\leq K$ the unique index such that $|\rho-\rho_i|<\delta$ and where the inequality holds uniformly on $\rho\in \mathcal R_\kappa$. Similarly, reversing the roles of $\rho$ and $\rho_i$, \begin{align*} P\left( T_N(\rho) > \frac{\gamma}{\sqrt{N}} \right) &\geq P\left( T_N(\rho_i) > \frac{\gamma(1+\zeta)}{\sqrt{N}} \right) - \varepsilon. \end{align*} As a consequence, by \eqref{eq:Kfoldconv}, for $n>n_0'(\varepsilon,\zeta)$, uniformly on $\rho\in\mathcal R_\kappa$, \begin{align*} P\left( T_N(\rho) > \frac{\gamma}{\sqrt{N}} \right) &\leq \exp\left( -\frac{\gamma^2(1-\zeta)^2}{2\sigma_N^2(\rho_i)} \right) + 2\varepsilon \\ P\left( T_N(\rho) > \frac{\gamma}{\sqrt{N}} \right) &\geq \exp\left( -\frac{\gamma^2(1+\zeta)^2}{2\sigma_N^2(\rho_i)} \right) - 2\varepsilon \end{align*} which, by continuity of the exponential and of $\rho\mapsto \sigma_N(\rho)$,\footnote{Note that it is unnecessary to ensure $\liminf_N \sigma_N(\rho)>0$ as the exponential would tend to zero anyhow in this scenario.} letting $\zeta$ and $\delta$ small enough (up to growing $n_0'(\varepsilon,\zeta)$), leads to \begin{align*} \sup_{\rho\in\mathcal R_\kappa}\left| P\left( \sqrt{N} T_N(\rho) > \gamma \right) - \exp\left( -\frac{\gamma^2}{2\sigma_N^2(\rho)} \right)\right| &\leq 3\varepsilon \end{align*} for all $n>n_0'(\varepsilon,\zeta)$, which completes the proof. \subsection{Around empirical estimates} This section is dedicated to the proof of Proposition~\ref{prop:1} and Corollary~\ref{co:1}. We start by showing that $\hat{\sigma}^2_N(1)$ is well defined. 
It is easy to observe that the ratio defining $\hat{\sigma}^2_N(\underline\rho)$ tends to an indeterminate form (zero over zero) as $\underline\rho\uparrow 1$. Applying l'Hospital's rule to the ratio, using the differentiation $\frac{d}{d\underline\rho} \underline{\hat{S}}^{-1}_N(\underline\rho)=-\underline{\hat{S}}^{-2}_N(\underline\rho)(I_N-\frac1n\sum_i z_iz_i^*)$ and the limit $\underline{\hat{S}}^{-1}_N(\underline\rho)\to I_N$ as $\underline\rho\uparrow 1$, we end up with \begin{align*} \hat{\sigma}^2_N(\underline\rho) \to \frac12 \frac{p^*\left( \frac1n\sum_{i=1}^n z_iz_i^* \right)p}{\frac1N\tr \left( \frac1n\sum_{i=1}^n z_iz_i^* \right)}. \end{align*} Letting $\varepsilon>0$ be arbitrary, since $p^*\frac1n\sum_i z_iz_i^*p - p^*C_Np\asto 0$ and $\frac1N\tr \frac1n\sum_i z_iz_i^*\asto 1$ as $n\to\infty$, we immediately have, by continuity of both $\sigma^2_N(\underline\rho)$ and $\hat\sigma^2_N(\underline\rho)$, \begin{align*} \sup_{\rho \in (1-\kappa,1]}\left|\hat{\sigma}^2_N(\underline\rho) - \sigma^2_N(\underline\rho) \right| &\leq \varepsilon \end{align*} for all large $n$ almost surely. From now on, it then suffices to prove Proposition~\ref{prop:1} on the complementary set $\mathcal R_\kappa'\triangleq [\kappa+\min\{0,1-c^{-1}\},1-\kappa]$. For this, we first recall the following results borrowed from \citep{COU14}: \begin{align*} \sup_{\rho\in\mathcal R_\kappa} \left\Vert \frac{\hat{C}_N(\rho)}{\frac1N\tr \hat{C}_N(\rho)} - \underline{\hat{S}}_N(\underline\rho) \right\Vert &\asto 0. 
\end{align*} Also, for $z\in\CC\setminus \RR^+$, defining \begin{align*} \underline{\underline{\hat S}}_N(z)&\triangleq (1-\underline\rho)\frac1n\sum_{i=1}^n z_iz_i^* -z I_N \end{align*} (so in particular $\underline{\underline{\hat S}}_N(-\underline\rho)=\underline{\hat S}_N(\underline\rho)$, for all $\underline\rho\in \mathcal R_\kappa$), we have, with $\mathcal C$ a compact set of $\CC\setminus\RR^+$ and any integer $k$, \begin{align*} \sup_{\bar z\in\mathcal C} \left| \frac{d^k}{dz^k} \left\{ \frac1N\tr \underline{\underline{\hat S}}_N^{-1}(z) - \frac1N\tr \left(-z \left[I_N + (1-\underline\rho)m_N(z)C_N\right]\right)^{-1} \right\}_{z=\bar z} \right| &\asto 0 \\ \sup_{\bar z\in\mathcal C} \left| \frac{d^k}{dz^k} \left\{ p^*\underline{\underline{\hat S}}_N^{-1}(z)p - p^*\left(-z \left[I_N + (1-\underline\rho)m_N(z)C_N\right]\right)^{-1}p \right\}_{z=\bar z} \right| &\asto 0 \end{align*} where $m_N(z)$ is defined as the unique solution with positive (resp.\@ negative) imaginary part if $\Im[z]>0$ (resp.\@ $\Im[z]<0$) or unique positive solution if $z<0$ of \begin{align*} m_N(z) &= \left( -z + c \int \frac{(1-\underline\rho)t}{1+(1-\underline\rho)tm_N(z)}\nu_N(dt) \right)^{-1} \end{align*} (this follows directly from \citep{SIL95}). 
This expression of $m_N(z)$ can be rewritten in the more convenient form \begin{align*} m_N(z) &= -\frac{1-c}z + c \int \frac{\nu_N(dt)}{-z-z(1-\underline\rho)tm_N(z)} \\ &= -\frac{1-c}z + c \frac1N\tr \left(-z \left[I_N + (1-\underline\rho)m_N(z)C_N\right]\right)^{-1} \end{align*} so that, from the above relations \begin{align*} \sup_{\rho \in\mathcal R_\kappa'} \left| m_N(-\underline\rho) - \left( \frac{1-c_N}{\underline\rho} + c_N \frac1N\tr \hat{C}_N^{-1}(\rho)\cdot\frac1N\tr \hat{C}_N(\rho) \right) \right| &\asto 0 \\ \sup_{\rho \in\mathcal R_\kappa'} \left| \int \frac{t\nu_N(dt)}{1+(1-\underline\rho)m_N(-\underline\rho)t} - \frac{ 1 - \underline\rho \frac1N\tr \hat{C}_N^{-1}(\rho)\cdot\frac1N\tr \hat{C}_N(\rho)}{(1-\underline\rho)m_N(-\underline\rho)} \right| &\asto 0. \end{align*} Differentiating along $z$ the first defining identity of $m_N(z)$, we also recall that \begin{align*} m_N'(z) &= \frac{m_N^2(z)}{1-c \int \frac{m_N(z)^2(1-\underline\rho)^2t^2\nu_N(dt)}{(1+(1-\underline\rho)tm_N(z))^2}}. \end{align*} Now, remark that \begin{align*} p^*\underline{\hat S}_N(\underline\rho)^{-2}p &= \frac{d}{dz} \left[ p^*\underline{\underline{\hat S}}_N(z)^{-1}p \right]_{z=-\underline\rho} \end{align*} which (by analyticity) is uniformly well approximated by \begin{align*} & \frac{d}{dz} \left[ p^*\left( -z \left[ I_N + (1-\underline\rho)m_N(z) C_N\right]\right)^{-1}p \right]_{z=-\underline\rho} \nonumber \\ &= \frac1{\underline\rho^2} p^*Q_N(\underline\rho)p - \frac1{\underline\rho} (1-\underline\rho) m_N'(-\underline\rho)p^*C_NQ^2_N(\underline\rho)p \\ &= \frac1{\underline\rho^2} p^*Q_N(\underline\rho)p - \frac1{\underline\rho} (1-\underline\rho) \frac{m_N^2(-\underline\rho)p^*C_NQ^2_N(\underline\rho)p}{1-c m_N(-\underline\rho)^2(1-\underline\rho)^2 \frac1N\tr C_N^2Q^2_N(\underline\rho)}. \end{align*} (recall that $Q_N(\underline\rho)=\left(I_N+(1-\underline\rho)m_N(-\underline\rho)C_N \right)^{-1}$). 
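In the scalar case $C_N = I_N$ (so that $\nu_N = \delta_1$), the defining fixed-point equation for $m_N(z)$ and its rewritten form can be verified numerically; a small sketch, with arbitrary parameter values:

```python
import numpy as np

c, rho = 0.5, 0.3
z = -rho  # z < 0: m_N(z) is the unique positive solution of the fixed point

# Iterate the defining equation m = ( -z + c (1-rho) t / (1 + (1-rho) t m) )^{-1}
# with t = 1 (scalar case C_N = I_N, i.e. nu_N = delta_1).
m = 1.0
for _ in range(500):
    m = 1.0 / (-z + c * (1 - rho) / (1 + (1 - rho) * m))
# Residual of the defining equation after convergence.
assert abs(m - 1.0 / (-z + c * (1 - rho) / (1 + (1 - rho) * m))) < 1e-12

# Rewritten form: m = -(1-c)/z + c (1/N) tr( -z [I_N + (1-rho) m C_N] )^{-1},
# which for C_N = I_N reduces to the scalar expression below.
rewritten = -(1 - c) / z + c / (-z * (1 + (1 - rho) * m))
assert abs(m - rewritten) < 1e-10
print(m)
```

The iteration converges for $z<0$ since the map is a contraction on the positive half-line for these parameter values.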
We then conclude \begin{align*} &\sup_{\rho\in\mathcal R_\kappa'} \left| \frac{p^*C_NQ^2_N(\underline\rho)p}{1-c m_N(-\underline\rho)^2(1-\underline\rho)^2 \frac1N\tr C_N^2Q^2_N(\underline\rho)} \right. \nonumber \\ &\qquad \left. - \frac{p^*\hat{C}_N^{-1}(\rho)p\cdot \frac1N\tr \hat{C}_N(\rho)-\underline\rho p^*\hat{C}_N^{-2}(\rho)p\cdot \left( \frac1N\tr \hat{C}_N(\rho)\right)^2}{(1-\underline\rho)m_N(-\underline\rho)^2} \right| \asto 0. \end{align*} Putting all results together, we obtain the expected result. \bigskip It now remains to prove Corollary~\ref{co:1}. This is easily performed thanks to Theorem~\ref{th:T} and Proposition~\ref{prop:1}. From these, we indeed have the three relations \begin{align*} P\left( \sqrt{N} T_N(\hat\rho_N^*) > \gamma \right) - \exp\left(- \frac{\gamma^2}{2 \sigma_N^2(\underline{\hat\rho}_N^*)} \right) &\asto 0 \\ P\left( \sqrt{N} T_N(\rho_N^*) > \gamma \right) - \exp\left(- \frac{\gamma^2}{2 \sigma_N^2(\underline\rho_N^*)} \right) &\to 0 \\ \exp\left(- \frac{\gamma^2}{2 \sigma_N^2(\underline{\hat\rho}_N^*)} \right) - \exp\left(- \frac{\gamma^2}{2 \sigma_N^{*2}} \right) &\asto 0 \end{align*} where $\rho_N^*$ denotes any element of the argmin over $\rho$ of $P(\sqrt{N}T_N(\rho)>\gamma)$ (and $\underline\rho_N^*$ its associated value through the mapping $\rho\mapsto\underline\rho$) and $\sigma_N^{*2}$ the minimum of $\sigma_N^2(\underline\rho)$ (i.e.\@ the minimizer of $\exp(- \frac{\gamma^2}{2 \sigma_N^2(\underline\rho)})$). Note that the first two relations rely fundamentally on the uniform convergence $\sup_{\rho\in\mathcal R_\kappa}|P\left( \sqrt{N} T_N(\rho) > \gamma \right)-\exp\left(-\gamma^2/(2\sigma_N^2(\underline\rho)) \right)|\asto 0$. 
By definition of $\rho_N^*$ and $\sigma_N^{*2}$, we also have \begin{align*} \exp\left(- \frac{\gamma^2}{2 \sigma_N^{*2}} \right) &\leq \min\left\{ \exp\left(- \frac{\gamma^2}{2 \sigma_N^2(\underline{\hat\rho}_N^*)} \right) , \exp\left(- \frac{\gamma^2}{2 \sigma_N^2(\underline{\rho}_N^*)} \right) \right\} \\ P\left( \sqrt{N} T_N(\rho_N^*) > \gamma \right) &\leq P\left( \sqrt{N} T_N(\hat\rho_N^*) > \gamma \right). \end{align*} Putting things together then gives \begin{align*} P\left( \sqrt{N} T_N(\hat\rho_N^*) > \gamma \right) - P\left( \sqrt{N} T_N(\rho_N^*) > \gamma \right) \asto 0 \end{align*} which is the expected result.
\section{Introduction} \label{s.intro} \subsection{The Coulomb gas and a model computation.}\ \label{s.intro1} For $d \geq 2$, the $d$-dimensional Coulomb gas (or one-component plasma) at inverse temperature $\beta \in (0,\infty)$ is a probability measure on point configurations $X_N = (x_1,\ldots,x_N) \in (\mathbb{R}^d)^N$ given by \begin{equation} \label{e.P1def} \P^{W}_{N,\beta}(dX_N) = \frac{1}{\mathcal Z} \exp\(-\beta \mathcal H^{W}(X_N)\) dX_N \end{equation} where $dX_N$ is Lebesgue measure on $(\mathbb{R}^d)^N$, $\mathcal Z$ is a normalizing constant, and \begin{equation} \label{e.mclHdef} \mathcal H^{W}(X_N) = \frac12 \sum_{1 \leq i \ne j \leq N} \mathsf{g}(x_i - x_j) + \sum_{i = 1}^N W(x_i) \end{equation} is the Coulomb energy of $X_N$ with confining potential $W$. The kernel $\mathsf{g}$ is the Coulomb interaction given by \begin{equation} \mathsf{g}(x) = \begin{cases} -\log |x| \quad &\text{if } d = 2 \\ \frac{1}{|x|^{d-2}} \quad &\text{if } d \geq 3. \end{cases} \end{equation} We will typically consider $W = V_N := N^{2/d}V(N^{-1/d} \cdot)$ for a potential $V$ satisfying certain conditions. The normalization in $V_N$ is chosen so that the typical interstitial distance is of size $O(1)$, i.e.\ the Coulomb gas $\P^{V_N}_{N,\beta}$ is on the ``blown-up'' scale. Throughout the paper, we will work under the assumption that $\Delta W$ exists and is bounded on $\mathbb{R}^d$, and that $W(x) \to \infty$ sufficiently fast as $|x| \to \infty$, though see \rref{A1} for comments on how this can be loosened significantly. For some results, we will need additional assumptions on $W$. The kernel $\mathsf{g}$ represents the repulsive interaction between two positive electric point charges, and so the Coulomb gas exhibits a competition between particle repulsion, given by the first sum in \eref{mclHdef}, and particle confinement, given by the second sum in \eref{mclHdef}. 
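For readers who wish to experiment numerically, the energy \eref{mclHdef} transcribes directly into code; a minimal sketch in which the toy configurations and the choice $W = 0$ are purely illustrative:

```python
import numpy as np

def g(x, d):
    # Coulomb kernel: -log|x| for d = 2, |x|^{-(d-2)} for d >= 3
    r = np.linalg.norm(x)
    return -np.log(r) if d == 2 else r ** -(d - 2)

def H(X, W, d):
    # (1/2) sum_{i != j} g(x_i - x_j) + sum_i W(x_i)
    n_pts = len(X)
    pair = sum(g(X[i] - X[j], d) for i in range(n_pts)
               for j in range(n_pts) if i != j)
    return 0.5 * pair + sum(W(x) for x in X)

# Two charges at distance 1 in d = 2 with W = 0: the interaction -log 1 vanishes.
X = [np.array([0.0, 0.0]), np.array([1.0, 0.0])]
assert H(X, lambda x: 0.0, 2) == 0.0

# Two charges at distance 2 in d = 3: the energy is g(2) = 1/2.
X3 = [np.array([0.0, 0.0, 0.0]), np.array([2.0, 0.0, 0.0])]
assert H(X3, lambda x: 0.0, 3) == 0.5
```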
The behavior of $X_N$ at the macroscopic scale (i.e.\ in a box of side length $O(N^{1/d})$) is largely dictated by the equilibrium measure $\mueq$, a compactly supported probability measure on $\mathbb{R}^d$ solving a variational problem involving $V$. The particles $N^{-1/d} X_N$ have macroscopic density given by $N \mueq$ with high probability as $N \to \infty$. In particular, the rescaled points condense on the {\it droplet}, i.e.\ the support of $\mueq$. Even on mesoscales $O(N^\alpha)$, $0 < \alpha < 1/d$, the equilibrium measure gives a good approximation for particle density. Letting $\mueq^N$ be defined by $\mueq^N(A) = N\mueq(N^{-\frac1d}A)$ for measurable $A \subset \mathbb{R}^d$, one can form the ``fluctuation measure'' $$ \mathrm{fluct}(dx) = \sum_{i=1}^N \delta_{x_i}(dx) - \mueq^N(dx), $$ which, despite being of size $O(N)$ in total variation, is typically of size $O(1)$ when considered as a functional on smooth functions. We will apply a transport-type argument, called {\it isotropic averaging}, to give upper bounds for $\P^W_{N,\beta}$ on a variety of events, all concerning the overcrowding of particles. This terminology and general method were first used in \cite{L21}, but a technical issue limited its applicability. A main contribution of this paper is to salvage the method in a different context and demonstrate the method's power in a wide range of scenarios. We will now define a certain {\it isotropic averaging operator} and give a model computation that we will refer to throughout the paper. Given an index set $\mathcal{I} \subset \{1,2,\ldots,N\}$ and a rotationally symmetric probability measure $\nu$ on $\mathbb{R}^d$, we define $$ \mathrm{Iso}_{\mathcal{I}, \nu} F((x_i)_{i \in \mathcal{I}})= \int_{(\mathbb{R}^d)^{\mathcal{I}}} F\( (x_i + y_i )_{i \in \mathcal{I}}\) \prod_{i \in \mathcal{I}} \nu(dy_i) $$ for any nice enough function $F : (\mathbb{R}^d)^{\mathcal{I}} \to \mathbb{R}$. 
By convention, we also consider the operator $\mathrm{Iso}_{\mathcal{I},\nu}$ acting on functions of $X_N$, or more generally any set of labeled coordinates, by convolution with $\nu$ on the coordinates with labels in $\mathcal{I}$. For example, we have $$ \mathrm{Iso}_{\mathcal{I}, \nu} F(X_N)= \int_{(\mathbb{R}^d)^{\mathcal{I}}} F\( X_N + (y_i \mathbbm{1}_{\mathcal{I}}(i) )_{i =1}^N\) \prod_{i \in \mathcal{I}} \nu(dy_i) $$ by convention, and $\mathrm{Iso}_{\mathcal{I},\nu} F(x_1) = F(x_1)$ if $1 \not \in \mathcal{I}$, while $\mathrm{Iso}_{\mathcal{I},\nu} F(x_1) = F \ast \nu(x_1)$ otherwise. An important observation is that the kernel $\mathsf{g}$ is superharmonic everywhere and harmonic away from the origin, and thus $$ \mathrm{Iso}_{\mathcal{I}, \nu} \mathsf{g}(x_i - x_j) \leq \mathsf{g}(x_i - x_j) $$ for any $i,j$. Intuitively, the isotropic averaging operator replaces each point charge $x_i$, $i \in \mathcal{I}$, by a charge distribution shaped like $\nu$ centered at $x_i$. Newton's theorem says that the electric interaction between two disjoint, radial, unit charge distributions is the same as the interaction between two point charges located at the respective centers. More generally, if the charge distributions are not disjoint, then the interaction is milder than that of the point charge system (this is because $\mathsf{g}(r)$ is decreasing in $r$ and the electric field generated by a uniform spherical charge is $0$ in the interior). Consider an event $E$ which we wish to show to be unlikely. For definiteness, we let $E$ be the event that $B_r(z)$ contains at least $2$ particles, for some $r \ll 1$ and $z \in \mathbb{R}^d$. We can write $E$ in terms of events $E_{\mathcal{I}}$ for index sets $\mathcal{I}\subset \{1,\ldots,N\}$, where $E_{\mathcal{I}}$ occurs when $x_i \in B_r(z)$ for all $i \in \mathcal{I}$. 
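The superharmonicity inequality and Newton's theorem can be illustrated by Monte Carlo for $\nu$ uniform on the annulus $\mathrm{Ann}_{[1/2,1]}(0)$ (the choice made below), here in $d=3$; a sketch, with sample sizes chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
d, n_samples = 3, 200_000

def sample_annulus(n):
    # Uniform samples from Ann_{[1/2,1]}(0) in R^3: direction uniform on the
    # sphere, radius drawn with density proportional to r^2 on [1/2, 1].
    u = rng.standard_normal((n, d))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    r = (0.5 ** 3 + (1 - 0.5 ** 3) * rng.random(n)) ** (1 / 3)
    return r[:, None] * u

def iso_g(x):
    # Monte Carlo estimate of Iso g(x) = E[g(x + Y)], Y ~ nu, g(x) = 1/|x|.
    return np.mean(1.0 / np.linalg.norm(x + sample_annulus(n_samples), axis=1))

for dist in [0.1, 0.6, 1.5, 3.0]:
    x = np.array([dist, 0.0, 0.0])
    est = iso_g(x)
    # Superharmonicity: averaging against the radial measure nu decreases g.
    assert est <= 1.0 / dist + 5e-3
    # Newton's theorem: for |x| >= 1 the annular charge is disjoint from the
    # point charge at x and interacts exactly like a point charge at 0.
    if dist >= 1.0:
        assert abs(est - 1.0 / dist) < 5e-3

# Smearing both charges: even with coincident centers (infinite bare
# interaction), the interaction of the two annular charges stays below 2.
worst = np.mean(1.0 / np.linalg.norm(sample_annulus(n_samples)
                                     - sample_annulus(n_samples), axis=1))
assert worst <= 2.0
```

The last check illustrates that two non-disjoint radial charges interact mildly, in contrast with the possibly divergent point-charge interaction.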
The idea is to use a well-designed isotropic averaging operator and Jensen's inequality to write $$ \P^W_{N,\beta}(E_\mathcal{I}) = \frac{1}{\mathcal Z}\int_{E_{\mathcal{I}}} e^{-\beta \mathcal H^{W}(X_N)} dX_N \leq \frac{e^{-\beta \Delta}}{\mathcal Z} \int_{E_{\mathcal{I}}} e^{-\beta \mathrm{Iso}_{\mathcal{I},\nu} \mathcal H^{W}(X_N)} dX_N \leq \frac{e^{-\beta \Delta}}{\mathcal Z} \int_{E_{\mathcal{I}}} \mathrm{Iso}_{\mathcal{I},\nu} e^{-\beta \mathcal H^{W}(X_N)} dX_N $$ for $$ \Delta = \inf_{X_N \in E_\mathcal{I}} \mathcal H^{W}(X_N) - \mathrm{Iso}_{\mathcal{I},\nu} \mathcal H^{W}(X_N). $$ We can then consider the $L^2((\mathbb{R}^{d})^{\mathcal{I}})$-adjoint of $\mathrm{Iso}_{\mathcal{I},\nu}$, which we call $\mathrm{Iso}^\ast_{\mathcal{I},\nu}$, to bound $$ \P^{W}_{N,\beta}(E_\mathcal{I}) \leq \frac{1}{e^{\beta \Delta} \mathcal Z} \int_{(\mathbb{R}^d)^N} \mathrm{Iso}^\ast_{\mathcal{I},\nu} \mathbbm{1}_{E_\mathcal{I}}(X_N) e^{-\beta \mathcal H^{W}(X_N)}dX_N = e^{-\beta \Delta} \mathbb{E}^W_{N,\beta}[\mathrm{Iso}^\ast_{\mathcal{I},\nu} \mathbbm{1}_{E_\mathcal{I}}]. $$ We will let $\nu = |\mathrm{Ann}_{[1/2,1]}(0)|^{-1} \mathbbm{1}_{\mathrm{Ann}_{[1/2,1]}(0)}$ be the uniform probability measure on the annulus of inner radius $\frac12$ and outer radius $1$. There are now two tasks: (1) give a lower bound for $\Delta$ and (2) give an upper bound for $\mathrm{Iso}^\ast_{\mathcal{I},\nu} \mathbbm{1}_{E_\mathcal{I}}$. Regarding task (1), we expect $\Delta$ to be large: the particles initially clustered in $B_r(z)$ are ``replaced'' by annular charges of much larger length scale, which interact mildly. It is a simple calculation to see that the pairwise interaction between charged annuli is bounded by $\mathsf{g}(1/2)$ (with the abuse of notation $\mathsf{g}(x) = \mathsf{g}(|x|)$). Regarding the potential term $\sum_{i=1}^N W(x_i)$ within $\mathcal H^W(X_N)$, it will increase by $O(|\mathcal{I}|)$ after isotropic averaging since $\Delta W \leq C$. 
Therefore, we have $$ \Delta \geq {|\mathcal{I}| \choose 2} \(\mathsf{g}(2r) - \mathsf{g}\(\frac12\)\) - O(|\mathcal{I}|). $$ Regarding task (2), the adjoint term is easily computed as $$ \mathrm{Iso}^\ast_{\mathcal{I},\nu} \mathbbm{1}_{E_\mathcal{I}}(X_N) = \int_{(\mathbb{R}^d)^\mathcal{I}} \mathbbm{1}_{E_\mathcal{I}}(X_N - (y_i \mathbbm{1}_{\mathcal{I}}(i))_{i=1}^N) \prod_{i \in \mathcal{I}} \nu(dy_i) = \prod_{i \in \mathcal{I}} \nu(B_r(x_i - z)). $$ In the last equality, we used that $x_i - y_i \in B_r(z)$ is equivalent to $y_i \in B_r(x_i - z)$. Since $\| \nu \|_{L^\infty(\mathbb{R}^d)} \leq C$, the RHS above can be bounded by $C^{|\mathcal{I}|} r^{d|\mathcal{I}|} \mathbbm{1}_{\{ x_i \in B_2(z) \ \forall i \in \mathcal{I}\}}(X_N)$. Putting it all together, in $d=2$ we find $$ \P^W_{N,\beta}(\{x_i \in B_r(z) \ \forall i \in \mathcal{I}\}) \leq C^{|\mathcal{I}|} r^{d|\mathcal{I}| + \beta {|\mathcal{I}| \choose 2}} \P^{W}_{N,\beta}(\{x_i \in B_2(z) \ \forall i \in \mathcal{I} \}). $$ The factor $r^{d|\mathcal{I}|}$ represents an increase in volume of phase space occupied by the isotropically averaged configurations. Indeed, each particle initially confined to $B_r(z)$ is ``released'' into $B_2(z)$, giving an $O(r^{-d})$ factor increase in phase space volume per particle. Finally, we must deal with appropriately summing over the labels $\mathcal{I}$ in order to bound the probability of $E$. In doing so, we will lose combinatorial factors representing ``informational entropy'' fundamental to the labeling. Some form of the above computation is used for most of the main technical estimates in the paper. We must however consider different isotropic averaging operators, methods for estimating $\Delta$ and the adjoint operator, and ways of dividing events into pieces suitable for the computation. We remark that the above procedure bears resemblance to the Mermin-Wagner argument from statistical physics \cite{MW66}. We will now state some assumptions on $W$. 
The second assumption guarantees the existence of the measure $\P^{W}_{N,\beta}$, and the first assumption is convenient for our arguments but can be loosened (see \rref{A1}). \begin{equation} W \in C^2(\mathbb{R}^d) \ \text{and} \ \| \Delta W \|_{L^\infty(\mathbb{R}^d)} \leq C, \tag{A1}\label{e.A1} \end{equation} \begin{equation} \int_{\mathbb{R}^d} e^{-\beta(W(x) - \mathbbm{1}_{d=2} \log(1+|x|))}dx < \infty. \tag{A2} \label{e.A2} \end{equation} We will sometimes require the following two assumptions when we take $W = V_N$ to guarantee the existence of the equilibrium measure $\mueq$, as well as a thermal equilibrium measure needed for applying \cite{S22}, Theorem 1. \begin{equation} \begin{cases} \lim_{|x| \to \infty} V(x) + \mathsf{g}(x) = +\infty \quad &\text{if}\ d=2, \\ \lim_{|x| \to \infty} V(x) = +\infty \quad &\text{if}\ d\geq 3, \end{cases} \tag{A3} \label{e.A3} \end{equation} \begin{equation} \begin{cases} \int_{\mathbb{R}^d} e^{-\frac{\beta}{2} N^{2/d}(V(x) - \log(1+|x|))} dx + \int_{\mathbb{R}^d} e^{-\beta N^{2/d}(V(x) - \log(1+|x|))}(|x| \log(1+|x|))^2 dx < \infty \quad &\text{if}\ d=2, \\ \int_{\mathbb{R}^d} e^{-\frac{\beta}{2} N^{2/d}V(x)} dx < \infty \quad &\text{if}\ d\geq 3. \end{cases} \tag{A4} \label{e.A4} \end{equation} \begin{notn} We identify $\P^W_{N,\beta}$ with the law of a point process $X$, with the translation between $X_N$ and $X$ given by $X = \sum_{i=1}^N \delta_{x_i}$. All point processes will be assumed to be simple. We also define the ``index'' process $\mathbb X$ given by $\mathbb X(A) = \{i \ : \ x_i \in A\}$ for measurable sets $A$. For example, we have $|\mathbb X(A)| = X(A)$ always. \end{notn} \subsection{Jancovici-Lebowitz-Manificat laws.}\ Introduced in \cite{JLM93}, Jancovici-Lebowitz-Manificat (JLM) laws give the probability of large charge discrepancies in the Coulomb gas. 
The authors considered an infinite-volume {\it jellium} and studied the probability of an absolute net charge of size much larger than $R^{(d-1)/2}$ in a ball of radius $R$ as $R \to \infty$. The jellium is a Coulomb gas with a uniform negative background charge, making the whole system net neutral in charge. Since the typical net charge in $B_R(0)$ is expected to be of order $R^{(d-1)/2}$ (see \cite{MY80}), the JLM laws are large deviation results and exhibit tail probabilities with very strong decay in the charge excess. The arguments of \cite{JLM93} are based on electrostatic principles and consider several different regimes of the charge discrepancy size. We are interested in a rigorous proof of the high density versions of the JLM laws. These versions apply when $X(B_R(z))$ exceeds the expected charge $\mueq(B_R(z))$ by a large multiplicative factor $C$. They predict that \begin{equation} \label{e.JLM.pred} \P_{\mathrm{jell}}(\{X(B_R(z)) \geq Q\}) \sim \begin{cases} \exp\(-\frac{\beta}{4} Q^2 \log \frac{Q}{Q_0}\) \quad \text{if } d=2, \\ \exp\(-\frac{\beta}{4R} Q^2\) \quad \text{if } d=3, \end{cases} \end{equation} for $Q_0 = \mueq(B_R(z))$. The prediction applies for $Q \gg R^d$. Our main results prove the high density JLM law upper bounds in all dimensions in the ultra-high positive charge excess regime. We do so for $\P^W_{N,\beta}$, a Coulomb gas with potential confinement, though the result holds also for the jellium {\it mutatis mutandis}. We note that our result does not require $R \to \infty$. Indeed, it is often useful at $R=1$ as a local law upper bound valid on all of $\mathbb{R}^d$, an extension of the microscale local law in \cite{AS21} which is only proved for $z$ sufficiently far into the interior of the droplet. Note that although we do not obtain a sharp coefficient on $Q^2$ in the $d \geq 3$ case, it could be improved with additional effort in \pref{1C.LL.iso}. 
\begin{theorem}[High Density JLM Law] \label{t.1C.LL} For any $R \geq 1$, integer $\lambda \geq 100$, and integer $Q$ satisfying \begin{equation} \label{e.K.cond.1} Q \geq \begin{cases} \frac{C \lambda^2 R^2 + C\beta^{-1} }{\log(\frac14 \lambda)} \quad &\text{if } d=2,\\ C R^{d} +C \beta^{-1} R^{d-2} \quad &\text{if } d \geq 3, \end{cases} \end{equation} we have \begin{equation} \P^{W}_{N,\beta}(\{X(B_R(z)) \geq Q \}) \leq \begin{cases} e^{-\frac12 \beta \log(\frac14 \lambda) Q^2 + C(1+\beta \lambda^2 R^2) Q} \quad &\text{if } d=2,\\ e^{-2^{-d} \beta R^{-d + 2} Q(Q-1)} \quad &\text{if } d \geq 3, \end{cases} \end{equation} and the result remains true if $z$ is replaced by $x_1$. The constant $C$ depends only on the dimension. In particular if $d=2$ and $Q \geq C_{\beta,W} R^2$, we may choose $\lambda = \sqrt{\frac{Q}{R^2}}$ to see $$ \P^{W}_{N,\beta}(\{X(B_R(z)) \geq Q \}) \leq e^{-\frac{\beta}{4} \log \(\frac{Q}{R^2}\) Q^2 + C\beta Q^2 + CQ}. $$ \end{theorem} \begin{remark} The physical principles leading to the law \eref{JLM.pred} focus on the change of free energy between an unconstrained Coulomb gas and one constrained to have charge $Q$ in $B_R(z)$. For the constrained gas, the most likely particle configurations involve a build-up of positive charge on an inner boundary layer of $B_R(z)$ and a near vacuum outside of $B_R(z)$ which ``screens'' the excess charge. Since the negative charge density is bounded (in a jellium by definition and in $\P^{W}_{N,\beta}$ by $\Delta W \leq C$), the negative screening region must be extremely thick when $Q \gg R^d$. The self-energy of the negative screening region is the dominant contributor to the \eref{JLM.pred} bounds in \cite{JLM93}. In our proof, we apply an isotropic averaging operator that moves the particles within $B_R(z)$ to the bulk of the vacuum region, extracting a large energy change per particle, thus providing a different perspective on the JLM law. 
\end{remark} \tref{1C.LL} immediately allows us to generalize (\cite{AS21}, Corollary 1.1), which established the existence of limiting microscopic point processes for $(x_1,\ldots,x_N)$ re-centered around a point $z$. Prior to the work of Armstrong and Serfaty, the existence of such a process was only known in $d=2$ and $\beta=2$, where it is the Ginibre point process with an explicit correlation kernel. In \cite{AS21}, the point $z$ must be in $\Sigma_N$ and at distance at least $CN^{\frac{1}{d+2}}$ from the edge of the droplet $\partial \Sigma_N$. We are able to lift that restriction, and in particular we can take $z = z_N$ near or in $\partial \Sigma_N$, in which case one would expect a genuinely different limit from the bulk case. \begin{corollary} \label{c.point.process} For any sequence of points $z_N \in \mathbb{R}^d$, the law under $\P^{V_N}_{N,\beta}$ of the point process $\sum_{i=1}^N \delta_{x_i - z_N}$ converges along subsequences as $N \to \infty$ to a simple point process. \end{corollary} \begin{remark} Any limit from \cref{point.process} will also enjoy analogs of \tref{kpoint}, \tref{1C.LL}, and \tref{fLL.over}. \end{remark} \subsection{Clustering and the $k$-point function.}\ Perhaps the most natural use of isotropic averaging is the description of the gas below the microscale. To this end, we are interested in pointwise bounds for the {\it $k$-point correlation function} $\rho_k$, defined by \begin{equation} \label{e.kpoint.def} \int_{A_1 \times A_2 \times \cdots \times A_k} \rho_k(y_1,y_2,\ldots,y_k) dy_1 \cdots dy_k = \frac{N!}{(N-k)!} \P^W_{N,\beta}(\bigcap_{i=1}^k \{x_i \in A_i \}) \end{equation} for measurable sets $A_1,A_2,\ldots,A_k \subset \mathbb{R}^d$. The functions $\rho_k$, and their truncated versions, are objects of intense interest. They capture screening behavior and other interesting phenomena for the Coulomb gas, and satisfy many interesting relations. 
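The normalization in \eref{kpoint.def} can be illustrated on an iid toy model (not a Coulomb gas): by exchangeability, $\int_{A_1\times A_2}\rho_2$ equals the expected number of ordered pairs of distinct particles in $A_1\times A_2$, which for $N$ iid uniform points on $[0,1]^2$ is $N(N-1)|A_1||A_2|$. A Monte Carlo sketch:

```python
import numpy as np

rng = np.random.default_rng(4)
N, n_config = 5, 100_000

# N iid uniform points on [0,1]^2 per configuration (a toy gas, not Coulomb).
X = rng.random((n_config, N, 2))
in_A1 = (X[..., 0] < 0.5) & (X[..., 1] < 0.5)   # A1 = [0,1/2]^2, |A1| = 1/4
in_A2 = X[..., 0] >= 0.5                         # A2 = [1/2,1]x[0,1], |A2| = 1/2
n1, n2 = in_A1.sum(axis=1), in_A2.sum(axis=1)

# A1 and A2 are disjoint, so n1 * n2 counts ordered pairs (i, j), i != j, with
# x_i in A1 and x_j in A2; its mean estimates the integral of rho_2 over A1 x A2.
est = (n1 * n2).mean()
assert abs(est - N * (N - 1) * 0.25 * 0.5) < 0.05
print(est)
```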
For example, in the physics literature, they are known to capture the charge screening behavior of the gas and satisfy ``sum rules'' and BBGKY equations \cite{GLM80,M88}. For $d=2$, spatial oscillations of $\rho_1$ are expected to occur for large enough $\beta$ (\cite{CSA20}). Starting at $\beta > 2$, the oscillations occur near the edge of the droplet, and as $\beta$ increases the oscillations penetrate the bulk of the droplet (numerically, they are present at $\beta = 200$) (\cite{CSA20}). This phenomenon is part of a debated ``freezing'' or ``crystallization'' transition in the two-dimensional Coulomb gas (\cite{KK16}). Many results on $\rho_k$ are known when integrated on the microscale or higher, though these results are often stated in terms of integration of the empirical measure $X$ against test functions. We will not comprehensively review previous results, but only mention that \cite{AS21} proves, among other things, that $\int_{B_1(z)} \rho_1(y) dy$ is uniformly bounded in $N$ for $z$ sufficiently far inside the droplet. We will be interested in pointwise bounds on $\rho_k(y_1,\ldots,y_k)$, particularly when some of the $y_i$ are sub-microscopically close together. One should see $\rho_k \to 0$ as $y_1 \to y_2$ due to the repulsion between particles. There are no previously existing rigorous results for pointwise values for general $\beta$; even boundedness of $\rho_1$ was until now unproved. \begin{theorem}[$k$-point function] \label{t.kpoint} We have that \begin{equation} \rho_1(y) \leq C \end{equation} for some constant $C$ independent of $N$ and $y$. We also have \begin{equation} \rho_k(y_1,y_2,\ldots,y_k) \leq C \prod_{1 \leq i < j \leq k} (1 \wedge |y_i - y_j|^{\beta}) \quad \text{if } d=2 \end{equation} and \begin{equation} \rho_k(y_1,y_2,\ldots, y_k) \leq C \exp\(-\beta \mathcal H^0(y_1,\ldots,y_k) \) \quad \text{if } d\geq 3.
\end{equation} \end{theorem} The following bound on sub-microscopic particle clusters is easily derived by integrating \tref{kpoint}. We point out the enhanced $r^{2d-2}$ factor in \eref{cluster.centered.3.Q2}, which will be crucial for \tref{etak.tight}. \begin{theorem}[Particle Clusters] \label{t.1C.cluster} Let $Q$ be a positive integer. There exists a constant $C$, dependent only on $\beta$, $Q$, and $W$, such that for all $r > 0$ we have \begin{equation} \label{e.cluster.fixed.2} \P^W_{N,\beta}(\{X(B_r(z)) \geq Q \}) \leq C r^{2Q + \beta \binom{Q}{2}} \quad \text{if } d=2, \end{equation} and \begin{equation} \label{e.cluster.fixed.3} \P^W_{N,\beta}(\{X(B_r(z)) \geq Q \}) \leq Cr^{dQ} e^{-\frac{\beta}{2^{d-2}} \cdot \frac{1}{r^{d-2}}\binom{Q}{2}}\quad \text{if } d \geq 3. \end{equation} We also have for $Q \geq 2$ and $d = 2$ that \begin{equation} \label{e.cluster.centered.2} \P^W_{N,\beta}(\{X(B_r(x_1)) \geq Q \}) \leq C r^{2(Q-1) + \beta \binom{Q}{2}} \end{equation} and $d \geq 3$ that \begin{equation} \label{e.cluster.centered.3} \P^W_{N,\beta}(\{X(B_r(x_1)) \geq Q \}) \leq Cr^{d(Q-1)} e^{-\frac{\beta}{2^{d-2}} \cdot \frac{1}{r^{d-2}}\binom{Q-1}{2}} e^{-\beta \frac{1}{r^{d-2}} (Q-1)}. \end{equation} In the case of $Q=2$ and $d\geq 3$ this can be improved to \begin{equation} \label{e.cluster.centered.3.Q2} \P^W_{N,\beta}(\{X(B_r(x_1)) \geq 2 \}) \leq Cr^{2d-2} e^{-\frac{\beta}{r^{d-2}}}. \end{equation} \end{theorem} We remark that the $k=1$ and $Q=1$ cases of \tref{kpoint} and \tref{1C.cluster}, respectively, are instances of {\it Wegner estimates}. In the context of $\beta$-ensembles on the line, Wegner estimates were proved in \cite{BMP21}, and for Wigner matrices in \cite{ELS10}. The $Q=2$ cases of \tref{1C.cluster} are {\it particle repulsion estimates}. These estimates, as well as eigenvalue minimal gaps, have been considered for random matrices in many articles, e.g. \cite{NTV17,T13}. 
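To illustrate the integration argument behind \eref{cluster.fixed.2} (a brief sketch; the remaining bounds are obtained similarly): on the event $\{X(B_r(z)) \geq Q\}$ the $Q$th falling factorial of $X(B_r(z))$ is at least $Q!$, so Markov's inequality and the standard identity expressing factorial moments through correlation functions give $$ \P^W_{N,\beta}(\{X(B_r(z)) \geq Q \}) \leq \frac{1}{Q!} \int_{B_r(z)^Q} \rho_Q(y_1,\ldots,y_Q) dy_1 \cdots dy_Q \leq C r^{2Q + \beta \binom{Q}{2}} \quad \text{if } d = 2, $$ where the last inequality bounds each factor $1 \wedge |y_i - y_j|^{\beta}$ from \tref{kpoint} by $(2r)^{\beta}$ and integrates out the volume $|B_r(z)|^Q \leq C r^{2Q}$.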
\begin{remark} We claim that our results in \tref{kpoint} are essentially optimal and that \tref{1C.cluster} is optimal if $d=2$ or $Q=2$. For $d,Q \geq 3$, one can improve \tref{1C.cluster} by more carefully integrating \tref{kpoint}, though an optimal, explicit bound for all $Q$ would be difficult to state. Our claim is evidenced by the tightness results we prove in \tref{etak.tight} and by computations for merging points with fixed $N$. \end{remark} \subsection{Minimal particle gaps.}\ We will also study the law of the $k$th smallest particle gap $\eta_{k}$, i.e.\ \begin{equation} \label{e.etak.def} \eta_k(X_N) = \text{the}\ k\text{th smallest element of}\ \{|x_i - x_j| \ : \ i,j \in \{1,\ldots,N\}, i \ne j\}. \end{equation} Note that the particle gaps $|x_i - x_j|$, $i \ne j$, are almost surely unique under $\P^W_{N,\beta}$. Previously, the order of $\eta_1$ was investigated in dimension $d=2$ in \cite{A18} and \cite{AR22}. The latter article proves that $\eta_1 \geq (N \log N)^{-\frac{1}{\beta}}$ with high probability as $N \to \infty$ for all $\beta > 1$. It is also proved that $\eta_1 > C^{-1}$ with high probability if $\beta = \beta_N$ grows like $\log N$. This suggests that the gas ``freezes'', even at the level of the extremal statistic $\eta_1$, in this temperature regime. We remark that \tref{1C.cluster} already improves this bound. \begin{corollary} \label{c.eta1.bd.d2} Let $d=2$. We have \begin{equation} \label{e.eta1.bd.d2} \P^W_{N,\beta}(\{ \eta_1 \leq \gamma N^{-\frac{1}{2+\beta}} \}) \leq C \gamma^{2 + \beta} \quad \forall \gamma > 0 \end{equation} for a constant $C$ independent of $N$. \end{corollary} \begin{proof} This follows from \tref{1C.cluster}, specifically \eref{cluster.centered.2} with $Q = 2$ and $r = \gamma N^{-\frac{1}{2+\beta}}$, and a union bound.
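In more detail (spelling out the union bound): with $r = \gamma N^{-\frac{1}{2+\beta}}$, the event $\{\eta_1 \leq r\}$ forces $X(B_{2r}(x_i)) \geq 2$ for some $i$, so exchangeability and \eref{cluster.centered.2} give $$ \P^W_{N,\beta}(\{\eta_1 \leq r\}) \leq N \P^W_{N,\beta}(\{X(B_{2r}(x_1)) \geq 2 \}) \leq C N r^{2+\beta} = C \gamma^{2+\beta}. $$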
\end{proof} Furthermore, an examination of the proof's dependence on $\beta$ shows we can let $\beta = \beta_N = c_0 \log N$ and take a constant $C$ equal to $e^{C\beta} = N^{C}$ on the RHS of \eref{eta1.bd.d2}. Letting $\gamma = N^{1/(2+\beta)} \gamma'$ for a small enough $\gamma' > 0$ in \cref{eta1.bd.d2} shows that $\eta_1$ is bounded below by a constant independent of $N$ with high probability, offering an alternative proof of the freezing regime identified in \cite{A18, AR22}. An identical idea using \eref{cluster.centered.3} in place of \eref{cluster.centered.2} proves that the gas freezes in dimension $d \geq 3$ in the $\log N$ inverse temperature regime as well. It is natural to wonder whether \cref{eta1.bd.d2} is sharp in some sense, and whether there are versions for $\eta_k$ and $d \geq 3$. For the eigenvalues of random matrices, the law of $\eta_k$ has been of great interest. While one-dimensional (in the most studied cases), these models are particularly relevant to the $d=2$ case because the interaction between eigenvalues is also given by $\mathsf{g} = -\log$ for certain ensembles. In \cite{BAB13}, the authors prove for the CUE and GUE ensembles that $N^{4/3} \eta_k$ is tight and has limiting law with density proportional to $x^{3k-1} e^{-x^3}$. Note that the interstitial distance is order $N^{-1}$ for these ensembles, whereas for us it will be order $1$. The proof uses determinantal correlation kernel methods. Recently, the extremal statistics of generalized Hermitian Wigner matrices were studied by Bourgade in \cite{B22} with dynamical methods. It is proved that $\eta_1$ is also of order $N^{-4/3}$ and the rescaled limiting law is identical to the GUE case. For a symmetric Wigner matrix, \cite{B22} proves that the minimal gap is of order $N^{-3/2}$. The papers \cite{FW21, FTW19} prove even more detailed results on the minimal particle gaps for certain random matrix models.
Specifically, for the GOE ensemble and the circular $\beta$ ensemble with positive integer $\beta$, the joint limiting law of the minimal particle gaps is determined. Actually, convergence of a point process containing the data of the minimal particle gaps and their locations is proved, showing in particular that the gap locations are asymptotically Poissonian. Once again, the methods crucially rely on certain exact identities unavailable in our case. The only optimal $d=2$ result is in \cite{SJ12}. Here, the authors consider the $k$th smallest eigenvalue gap of certain normal random matrix ensembles. The Ginibre ensemble, after rescaling the eigenvalues by $N^{1/2}$, corresponds to $\P^{V_N}_{N,\beta}$ for a certain quadratic $V$ and $\beta = 2$. Using determinantal correlation kernel methods, \cite{SJ12} proves that, for the Ginibre ensemble, $N^{\frac{1}{4}} \eta_k$ is tight as $N \to \infty$ (with interstitial distance $O(1)$), and its limiting law on $\mathbb{R}$ has density proportional to $x^{4k-1} e^{-x^4}$. \tref{etak.tight} extends the tightness result to all $\beta$. While we are unable to exactly identify or prove uniqueness of the rescaled limiting law, we are able to identify the correct order of $\eta_k$ and give some tail bounds. For $d \geq 3$, we are able to quite precisely identify the order of $\eta_k$ and its fluctuations. We must first state some extra assumptions on $V$. These assumptions are only used to prove \lref{lonely}, which is used in proving lower bounds on $\eta_k$. In particular, our upper bounds on $\eta_k$ below hold for $\P^{W}_{N,\beta}$.
For \tref{etak.tight}, we assume $W = V_N$ and \begin{equation} \tag{A5} \label{e.A5} \int_{\mathbb{R}^d} e^{-\beta (V(x) - \log(1+|x|))} dx < \infty \quad \text{if }d=2, \end{equation} \begin{equation} \tag{A6} \label{e.A6} \exists \kappa > 0 \quad \liminf_{|x| \to \infty} \frac{V(x)}{|x|^\kappa} > 0 \quad \text{if } d \geq 3. \end{equation} \begin{theorem} \label{t.etak.tight} Let $W = V_N$ with conditions \eref{A1}, \eref{A3} satisfied, in $d=2$ condition \eref{A5} satisfied, and in $d \geq 3$ condition \eref{A6} satisfied. Then, in $d=2$ the law of $N^{\frac{1}{2+\beta}} \eta_k$ is tight as $N \to \infty$. Moreover, we have $$ \limsup_{N \to \infty} \P^{V_N}_{N,\beta}(\{N^{\frac{1}{2+\beta}} \eta_k \leq \gamma\}) \leq C \gamma^{k(2 + \beta)}, $$ $$ \limsup_{N \to \infty} \P^{V_N}_{N,\beta}(\{N^{\frac{1}{2+\beta}} \eta_k \geq \gamma\}) \leq C\gamma^{-\frac{4+2\beta}{4+\beta}} $$ for all $\gamma > 0$. In $d\geq 3$, let $Z_k$ be defined by $$ \eta_k = \(\frac{\beta}{\log N} \)^{\frac{1}{d-2}} \( 1 + \frac{2d-2}{(d-2)^2} \frac{ \log \log N}{\log N} + \frac{Z_k}{(d-2) \log N} \). $$ Then the law of $Z_k$ is tight as $N \to \infty$. We have $$ \limsup_{N \to \infty} \P^{V_N}_{N,\beta}(\{Z_k \leq -\gamma\}) \leq C e^{-k \gamma}, $$ $$ \limsup_{N \to \infty} \P^{V_N}_{N,\beta}(\{Z_k \geq \gamma\}) \leq C e^{- \frac12 \gamma} $$ for $\gamma > 0$. \end{theorem} \begin{proof} The theorem follows easily from combining \pref{etak.isbig} and \pref{etak.issmall} from section \sref{etak}. \end{proof} \begin{remark} The estimates for $\eta_k$ in \tref{etak.tight} can be understood by the following Poissonian ansatz. Consider $x_i$, $i=1,\ldots,N$, to be an i.i.d.\ family with $x_1$ having law with density proportional to $e^{-\beta \mathsf{g}(x)} dx$ on $B_1(0) \subset \mathbb{R}^d$. If we let $\eta_k$ be the $k$th smallest element of $\{|x_i| : i =1,\ldots,N\}$, then the order of $\eta_k$ agrees with \tref{etak.tight}.
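In $d=2$, for instance, the toy model is explicit (we include the computation for orientation): $$ \P(\{|x_1| \leq t\}) = \frac{\int_{B_t(0)} |x|^{\beta} dx}{\int_{B_1(0)} |x|^{\beta} dx} = t^{2+\beta}, \quad t \in [0,1], $$ so that $\P(\{\eta_1 \geq t\}) = (1 - t^{2+\beta})^N$ is of constant order precisely when $t$ is of order $N^{-\frac{1}{2+\beta}}$, matching the rescaling in \tref{etak.tight}. A similar computation in $d \geq 3$, now with a Laplace-type approximation of $\int_{B_t(0)} e^{-\beta |x|^{-d+2}} dx$, recovers the leading order $\(\frac{\beta}{\log N}\)^{\frac{1}{d-2}}$ for $\eta_1$.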
\end{remark} \subsection{Discrepancy bounds.}\ Our previously mentioned results all concerned either sub-microscopic length scales or high particle densities, but we can in fact effectively use isotropic averaging on mesoscopic length scales and under only slight particle density excesses. We are interested in the discrepancy \begin{equation} \mathrm{Disc}(\Omega) = X(\Omega) - N\mueq(N^{-\frac{1}{d}} \Omega), \quad \Omega \subset \mathbb{R}^d \end{equation} which measures the deficit or excess of particles with reference to the equilibrium measure in a measurable set $\Omega$. It is also useful to define \begin{equation} \mathrm{Disc}_{W}(\Omega) = X(\Omega) - \frac{1}{c_d} \int_{\Omega} \Delta W(x) dx \end{equation} where $c_d$ is such that $\Delta \mathsf{g} = -c_d \delta_0$. Note that $\mathrm{Disc}$ and $\mathrm{Disc}_W$ agree when $\Omega \subset \Sigma_N := N^{\frac1d}\supp \mueq$. In \cite{AS21}, it is proved that the discrepancy in $\Omega = B_R(z)$ is typically not more than $O(R^{d-1})$ in size whenever $z$ is sufficiently far in the interior of the droplet and $R$ is sufficiently large. The proof involves a multiscale argument inspired by stochastic homogenization \cite{AJM19}, as well as a technical screening procedure. More generally, \cite{AS21} gives local bounds on the electric energy, which provide a technical basis for central limit theorems \cite{S22} among other things. The upper bound of $O(R^{d-1})$ reflects the fact that the dominant error contribution within the argument comes from surface terms appearing in the multiscale argument. The JLM laws \cite{JLM93} predict that the discrepancy in $B_R(z)$ is actually typically of size $O(R^{(d-1)/2})$; in particular, the Coulomb gas is {\it hyperuniform}. This motivates the search for other methods to prove discrepancy bounds that, with enough refinement, may be able to overcome surface error terms. We present a new method based on isotropic averaging.
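For context, we recall the standard definition (not specific to this paper): a point process is called hyperuniform when its number variance grows more slowly than the volume, i.e.\ $$ \mathrm{Var}\(X(B_R(z))\) = o(R^d) \quad \text{as } R \to \infty, $$ in contrast with the $O(R^d)$ variance of a Poisson process of the same intensity; typical discrepancies of size $O(R^{(d-1)/2})$ correspond to surface-order variance $O(R^{d-1})$.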
Since isotropic averaging naturally controls overcrowding, we first prove the following bound on the positive part of the discrepancy. While it is usually weaker than the best known bound, it is interesting because of its independent proof and because it has weaker restrictions on the location of $z$ than \cite{AS21}. We will restrict to $d=2$ due to a technical issue in extending \pref{energymin} to higher dimensions. Also for technical reasons in \pref{energymin}, we will need to approximate the equilibrium measure $\mueq$ by a constant in $B_{2R}(z)$, and the resulting error begins to dominate for $R \geq N^{3/10}$. We have therefore chosen to restrict to small enough mesoscales. \begin{theorem} \label{t.fLL.over} Let $d=2$, $R \geq 2$ and $z \in \mathbb{R}^d$. Suppose either $\Delta W$ is constant in $B_{2R}(z)$ or both of the following: firstly that $$ \| \nabla \Delta W \|_{L^\infty(B_{2R}(z))} \leq CN^{-1/2}, $$ and secondly $R \leq N^{3/10}$. Assume also $\Delta W(x) \geq C^{-1}$ for $x \in B_{2R}(z)$. Then we have \begin{equation} \P^{W}_{N,\beta}(\{\mathrm{Disc}_W(B_R(z)) \geq T R^{4/3}\log R \}) \leq e^{-c T R^2} + e^{-c T^2 R^{4/3}} \end{equation} for some $c > 0$ and all large enough $T \geq 1$. \end{theorem} While \tref{fLL.over} controls only positive discrepancies, when combined with an estimate on the fluctuations of smooth linear statistics, it can also give lower bounds. Indeed, when $B_R(z)$ has a large discrepancy, the physically realistic scenario is that positive charge excess builds up either on the inside or outside of $\partial B_R(z)$. We call a thin annulus with the positive charge buildup a ``screening region'', the existence of which is implied by rigidity for smooth linear statistics. We use rigidity from \cite{S22}, which we note does not require ``heavy lifting''; e.g.\ it is independent of the multi-scale argument from \cite{AS21}. \pref{fLL.upbd} proves a stronger version of \tref{fLL.over} that applies to screening regions.
We will need some additional assumptions to apply results from \cite{S22} and \cite{AS19}. We refer to the introduction of \cite{S22} for commentary on the conditions. While the conditions hold in significant generality, our main purpose is to demonstrate the usefulness of \tref{fLL.over} rather than optimize the conditions on $V$. \begin{theorem} \label{t.fLL.improved} Let $d=2$, $R \in [2,N^{13/50}]$, and $z \in \mathbb{R}^d$ be such that $B_{2R}(z) \subset \{x \in \Sigma_N \ : \ \dist(x,\partial\Sigma_N) \geq C_0 N^{1/6}\}$ for a large constant $C_0$. Assume that $W = V_N$ is such that $\eref{A1}-\eref{A4}$ hold, $V \in C^{7}(\mathbb{R}^d)$, the droplet $\Sigma = \supp \mueq$ has $C^{1}$ boundary, and $\Delta V \geq C^{-1} > 0$ in a neighborhood of $\Sigma$. Finally, assume there exists a constant $K$ such that $$ \mathsf{g} \ast \mueq + V - K \geq C^{-1} \min(\dist(x,\Sigma)^2,1). $$ Then we have $$ \P^{V_N}_{N,\beta}(\{|\mathrm{Disc}(B_R(z))| \geq T R^{14/13}\log R\}) \leq e^{-cR^{14/13} T} + e^{-cR^{4/39}T^2} $$ for $T \gg 1$ and some $c > 0$. \end{theorem} We note that by applying isotropic averaging to a screening region, not only do we obtain bounds on the absolute discrepancy, we also give a sharper bound on the positive part than \tref{fLL.over}, albeit with some extra restrictions on $R$ and $z$. \subsection{Notation.}\ \label{s.notation} We will now introduce some notation and conventions used throughout the paper. First, we recall the point process $X$ and ``index'' process $\mathbb X$ introduced at the end of \sref{intro1}, and that all point processes are assumed to be simple. It will also be useful to define the probability measure $\P^{W,U}_{N,\beta} := \P^{W+U}_{N,\beta}$ for superharmonic potentials $U$. For definiteness, we will allow $U$ to be of the form $\mathsf{g} \ast \nu$ for a finite nonnegative measure $\nu$ with compact support.
In any estimates relying purely on isotropic averaging, i.e.\ totally self-contained to the present article, all implicit constants can be taken independent of $U$. For estimates relying on results external to the present article, we will take $U = 0$ and $W = V_N = N^{2/d} V(N^{-1/d} \cdot)$. Implicit constants $C$ will change from line to line and may depend on $W$ and $d$ without further comment. In some sections, we will also allow $C$ to depend on $\beta$ and $\beta^{-1}$. A numbered constant like $C_0, C_1$ will be fixed, but may need to be taken large depending on various parameters. For positive quantities $a,b$, we write $a \gg b$ to mean that $a \geq C b$ for a large enough constant $C > 0$, and $a \ll b$ for $a \leq C^{-1} b$ for large enough $C > 0$. For brevity we will sometimes write $\P$ for $\P^{W,U}_{N,\beta}$ and $\mathbb{E}$ for $\mathbb{E}^{W,U}_{N,\beta}$. This will only be done in proofs or sections where the probability measure is fixed throughout. We will write $\mathsf{g}(s)$ to mean $-\log s$ in dimension $2$ or $|s|^{-d+2}$ in $d \geq 3$ when $s > 0$. For a measure with a Lebesgue density, we often denote the density with the same symbol as the measure, e.g.\ $\nu(x)$ as the density of $\nu(dx)$. When it exists, we let $\mueq$ be the equilibrium measure associated to $V$, and let $\Sigma = \supp \mueq$ be the droplet. Note that $\mueq$ is a probability measure and $\Sigma$ has length scale $1$. We let $\mueq^N$ be the blown-up equilibrium measure with $\mueq^N(A) = N \mueq(N^{-1/d}A)$ for Borel sets $A$, and $\Sigma_N = \supp \mueq^N = N^{1/d} \supp \mueq$ be the blown-up droplet. Finally, we define \begin{equation} \mu(dx) = \frac{1}{c_d} \Delta W(x) dx \end{equation} where $c_d$ is such that $\Delta \mathsf{g} = -c_d \delta_0$. When we take $W = V_N$, we have that $\mu = \mueq^N$ on the blown-up droplet $\Sigma_N$.
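For orientation, consider the standard quadratic example (included here purely as an illustration; it is not used elsewhere): in $d = 2$ with $V(x) = |x|^2$, one has $c_2 = 2\pi$ and $\Delta V = 4$, so $\mueq(dx) = \frac{2}{\pi} \mathbbm{1}_{B_{1/\sqrt{2}}(0)}(x) dx$ and $\Sigma$ is the closed disk of radius $\frac{1}{\sqrt{2}}$, while $\Delta V_N = 4$ so that $\mu$ has constant density $\frac{2}{\pi}$; up to normalization conventions, this $V$ with $\beta = 2$ corresponds to the Ginibre ensemble.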
We let $B_R(z) \subset \mathbb{R}^d$ be the open Euclidean ball of radius $R$ centered at $z$, and for a nonempty interval $I \subset \mathbb{R}^{\geq 0}$, we let $\mathrm{Ann}_I(z)$ be the annulus consisting of points $x$ with $|x-z| \in I$. We use $|\cdot|$ to denote both Lebesgue measure for subsets of $\mathbb{R}^d$ and cardinality for finite sets. It will be clear from context which is meant. \begin{remark} \label{r.A1} Since many of our results hold for $\P^{W,U}_{N,\beta}$, we can often loosen condition \eref{A1} to just $\Delta W \leq C$ by absorbing the negative part of $\Delta W$ into $\Delta U$. Furthermore, the points $X_N$ are typically confined to a vicinity of the droplet, and so versions of our results hold with only $\Delta W$ locally bounded provided one supplies an appropriate argument to control contributions of points far from the equilibrium measure. In $d=2$, for example, Ameur \cite{A21} has proved strong confinement estimates. \end{remark} \subsubsection*{Acknowledgments.}\ The author would like to thank his advisor, Sylvia Serfaty, for her encouragement and comments. The author was partially supported by NSF grant DMS-2000205. \section{High Density Estimates} In this section, we prove the JLM law \tref{1C.LL}. We will take a potential $W$ satisfying \eref{A1} and \eref{A2} throughout. We will state our results for $\P^{W,U}_{N,\beta}$ for a superharmonic potential $U$. Given a configuration with a high density of particles in some $B_r(z)$, we will transport the particles to a larger ball $B_R(z)$ using the model computation in \sref{intro1}. If the density within $B_r(z)$ is high enough, the transport will give both a large energy and volume benefit (\pref{1C.LL.iso}), but has an entropy cost due to the mixing of particles within $B_r(z)$ and $\mathrm{Ann}_{(r,R]}(z)$. We find that the entropy cost is controlled as long as $B_R(z)$ does not have a higher particle density than $B_r(z)$ (\pref{1C.LL.iterate}).
If $B_R(z)$ does have a higher particle density, we iterate to the larger scale $R$ and repeat the process. Note that the below computation leading up to \eref{iso.W} and \eref{iso.subharmonic} will be used often in slightly different contexts. \begin{proposition} \label{p.1C.LL.iso} Consider $0 < r < \frac{1}{10} R$, and $\nu_R$ the uniform measure on $\mathrm{Ann}_{[\frac12 R, R-2r]}(0)$. We have \begin{equation} \label{e.LL.iso.energy} \mathrm{Iso}_{\mathbb X(B_r(z)), \nu_R} \mathcal H^{W+U}(X_N) \leq \mathcal H^{W+U}(X_N) + CR^2X(B_r(z)) + {X(B_r(z)) \choose 2}\(\mathsf{g}\(\frac R2\)-\mathsf{g}(2r)\), \end{equation} and for any index sets $\mathcal{N}, \mathcal{M} \subset \{1,\ldots,N\}$ we have \begin{equation} \label{e.LL.adjoint} \mathrm{Iso}_{\mathcal{N},\nu_R}^\ast \mathbbm{1}_{\{\mathbb X(B_r(z)) = \mathcal{N}\} \cap \{\mathbb X(B_R(z)) = \mathcal{M} \}} \leq e^{C|\mathcal{N}|}\(\frac{r}{R}\)^{d|\mathcal{N}|} \mathbbm{1}_{\{ \mathbb X(B_R(z)) = \mathcal{M} \}}. \end{equation} \end{proposition} \begin{proof} We first prove \eref{LL.iso.energy} by considering the effect of the isotropic averaging operator on $\mathsf{g}(x_i - x_j)$ and $W(x_i)$. Let $\sigma$ denote $(d-1)$-dimensional Hausdorff measure. First, since $$ -\Delta \mathsf{g} = c_d \delta_0 $$ for $c_d = \sigma(\partial B_1)(d-2 + \mathbbm{1}_{d=2})$, we have for any $s > 0$ and $y \in \mathbb{R}^d$ by Green's third identity that $$ \frac{1}{c_d} \int_{B_s(y)} \mathsf{g}(x-y) \Delta W(x) dx + W(y) = \frac{1}{c_d s} \int_{\partial B_s(y)}\( \mathsf{g}(x-y) \nabla W(x) \cdot (x-y) - W(x) \nabla \mathsf{g}(x-y) \cdot (x-y) \) \sigma(dx). $$ Using the divergence theorem, the RHS can be simplified further to $$ \frac{\mathsf{g}(s)}{c_d} \int_{B_s(y)} \Delta W(x) dx + \frac{d-2 + \mathbbm{1}_{d=2}}{c_d s^{d-1}} \int_{\partial B_s(y)}W(x) \sigma(dx). 
$$ After a rearrangement, we see $$ \frac{1}{\sigma(\partial B_s)} \int_{\partial B_s(y)}W(x) \sigma(dx) = W(y) + \frac{1}{c_d} \int_{B_s(y)} (\mathsf{g}(x-y) - \mathsf{g}(s)) \Delta W(x) dx, $$ and we can then integrate against $\sigma(\partial B_s) ds$ with $y = x_1$ to see $$ \mathrm{Iso}_{\{1\},\nu_R} W(x_1) = W(x_1) + \frac{1}{c_d |\mathrm{Ann}_{[\frac12 R, R-2r]}(0)|} \int_{\frac12 R}^{R-2r} \sigma(\partial B_s) \int_{B_s(x_1)} (\mathsf{g}(x-x_1) - \mathsf{g}(s)) \Delta W(x) dx ds. $$ If we bound $\Delta W \leq C$, one can check by explicit integration that \begin{equation} \label{e.iso.W} \mathrm{Iso}_{\{1\},\nu_R} W(x_1) \leq W(x_1) + CR^2. \end{equation} A similar computation, this time using $\Delta \mathsf{g} \leq 0$ or $\Delta U \leq 0$, shows that \begin{equation} \label{e.iso.subharmonic} \mathrm{Iso}_{\mathbb X(B_r(z)), \nu_R} \mathsf{g}(x_i - x_j) \leq \mathsf{g}(x_i - x_j), \quad \mathrm{Iso}_{\mathbb X(B_r(z)), \nu_R} U(x_i) \leq U(x_i)\quad \forall i,j \in \{1,\ldots,N\}. \end{equation} Finally, by Newton's theorem, the Coulomb interaction between a sphere of unit charge and radius $s$ and a point charge is bounded above by $\mathsf{g}(s)$. It follows from superposition that $$ \mathrm{Iso}_{\mathbb X(B_r(z)), \nu_R} \mathsf{g}(x_i - x_j) \leq \mathsf{g}\(\frac{R}{2}\) $$ whenever $i \in \mathbb X(B_r(z))$, particularly whenever $i,j \in \mathbb X(B_r(z))$ in which case $\mathsf{g}(x_i - x_j) \geq \mathsf{g}(2r)$. Putting the above results together proves \eref{LL.iso.energy}. We now consider \eref{LL.adjoint}. Let $E = \{\mathbb X(B_r(z)) = \mathcal{N}\} \cap\{\mathbb X(B_R(z)) = \mathcal{M}\}$. 
For any nice enough nonnegative function $F$ of $(x_i)_{i \in \mathcal{N}}$ and fixed $(x_i)_{i \not \in \mathcal{N}}$, we have \begin{align*} \lefteqn{ \int_{(\mathbb{R}^d)^{\mathcal{N}}} \mathbbm{1}_{E}(X_N) \mathrm{Iso}_{\mathcal{N},\nu_R}F((x_i)_{i \in \mathcal{N}}) \prod_{i \in \mathcal{N}} dx_i} \quad & \\ &= \int_{(B_r(z))^{\mathcal{N}}} \int_{(\mathbb{R}^d)^{\mathcal{N}}} \mathbbm{1}_{E}(X_N) F((x_i + y_i)_{i \in \mathcal{N}}) \prod_{i \in \mathcal{N}} \nu_R(dy_i)\prod_{i \in \mathcal{N}} dx_i \\ &= \int_{(\mathbb{R}^d)^{\mathcal{N}}} \( \int_{\prod_{i \in \mathcal{N}} B_r(x_i - z)} \mathbbm{1}_{E}(X_N - (y_i \mathbbm{1}_{\mathcal{N}}(i))_{i=1}^N) \prod_{i \in \mathcal{N}} \nu_R(dy_i) \)F((x_i)_{i \in \mathcal{N}})\prod_{i \in \mathcal{N}} dx_i. \end{align*} where we applied a change of variables $x_i \to x_i - y_i$ and used that $x_i - y_i \in B_r(z)$ is equivalent to $y_i \in B_r(x_i - z)$. Since $|y_i| \leq R-2r$ for $y_i \in \supp \nu_R$, one can bound for all such $y_i$ $$ \mathbbm{1}_{E}(X_N - (y_i \mathbbm{1}_{\mathcal{N}}(i))_{i=1}^N) \leq \mathbbm{1}_{\{\mathbb X(B_R(z)) = \mathcal{M}\}}(X_N). $$ Together with $\| \nu_R \|_{L^\infty(\mathbb{R}^d)} \leq C R^{-d}$, we see $$ \int_{(\mathbb{R}^d)^{\mathcal{N}}} \mathbbm{1}_{E}(X_N) \mathrm{Iso}_{\mathcal{N},\nu_R}F((x_i)_{i \in \mathcal{N}}) \prod_{i \in \mathcal{N}} dx_i \leq e^{C|\mathcal{N}|}\(\frac{r}{R}\)^{d|\mathcal{N}|} \int_{(\mathbb{R}^d)^{\mathcal{N}}} \mathbbm{1}_{\{\mathbb X(B_R(z)) = \mathcal{M}\}}(X_N) F((x_i)_{i \in \mathcal{N}}) \prod_{i \in \mathcal{N}} dx_i. $$ This establishes \eref{LL.adjoint}. \end{proof} We are ready to prove the main iterative estimate that establishes \tref{1C.LL}. \begin{proposition} \label{p.1C.LL.iterate} Let $0 < r < R$ be such that $\lambda := \frac{R}{r} \geq 10$. 
Then we have that $$ \P(\{X(B_r(z)) \geq Q \}) \leq \P(\{X(B_R(z)) \geq \lambda^d Q \}) + e^{C(1+\beta \lambda^2 r^2) Q - \beta {Q \choose 2}(\mathsf{g}(2r) - \mathsf{g}(\lambda r/2))} $$ for all $z \in \mathbb{R}^d$ and integers $Q$ with \begin{equation} Q \geq \begin{cases} 1+ \frac{C \lambda^2 r^2 + C\beta^{-1} }{\log(\frac14 \lambda)} \quad &\text{if } d=2,\\ 1+C \lambda^2 r^{d} +C \beta^{-1} r^{d-2} \quad &\text{if } d \geq 3. \end{cases} \end{equation} \end{proposition} \begin{proof} For simplicity, we consider $\lambda$ an integer. Let $\mathcal{N} \subset \mathcal{M}$ be index sets of size $n$ and $m$, respectively. We apply the isotropic averaging model computation detailed in \sref{intro1} and \pref{1C.LL.iso} to see $$ \P(\{\mathbb X(B_r(z)) = \mathcal{N}\} \cap \{\mathbb X(B_R(z)) = \mathcal{M} \}) \leq e^{C(1+\beta R^2)n - \beta {n \choose 2}(\mathsf{g}(2r) - \mathsf{g}(R/2))} \lambda^{-dn} \P(\{\mathbb X(B_R(z)) = \mathcal{M}\}). $$ By particle exchangeability, we have $$ \P(\{X(B_r(z)) = n\} \cap \{X(B_R(z)) = m\}) = {N \choose m} {m \choose n}\P(\{\mathbb X(B_r(z)) = \mathcal{N}\} \cap \{\mathbb X(B_R(z)) = \mathcal{M} \}), $$ $$ \P(\{X(B_R(z)) = m\}) = {N \choose m}\P(\{ \mathbb X(B_R(z)) = \mathcal{M}\}), $$ whence $$ \P(\{X(B_r(z)) = n\} \cap \{X(B_R(z)) = m\}) \leq e^{C(1+\beta R^2)n - \beta {n \choose 2}(\mathsf{g}(2r) - \mathsf{g}(R/2))} {m \choose n} \lambda^{-dn}\P(\{ X(B_R(z)) = m\}). $$ By Stirling's approximation, we can estimate for $2 \leq n \leq m$ that $$ {m \choose n} \leq \sqrt{\frac{m}{n(m-n)}}e^{Cn} e^{n \log(m/n)} \leq e^{Cn} e^{n \log(m/n)}. $$ The last inequality uses that $\log(1 + \cdot)$ is subadditive on $[0,\infty)$. Indeed, we have $$ \log(m) - \log(1+n) - \log(m - n) \leq 0 $$ and $|\log n - \log(1+n)| \leq 1$. If $m \leq \lambda^d n$, we can then bound ${m \choose n} \lambda^{-dn} \leq e^{Cn}$.
We thus have \begin{align*} \P(\{X(B_r(z)) \geq Q \} \cap \{X(B_R(z)) \leq Q\lambda^d \}) &\leq \sum_{n = Q}^{Q \lambda^d} e^{C(1+\beta R^2)n - \beta {n \choose 2}(\mathsf{g}(2r) - \mathsf{g}(R/2))} \sum_{m = n}^{Q \lambda^d} \P(\{ X(B_R(z)) = m\}) \\ &\leq \sum_{n = Q}^{Q \lambda^d} e^{C(1+\beta R^2)n - \beta {n \choose 2}(\mathsf{g}(2r) - \mathsf{g}(R/2))}. \end{align*} We can bound the ratio between successive terms above as $$ e^{C(1+\beta R^2) - \beta n(\mathsf{g}(2r) - \mathsf{g}(R/2))} \leq \frac12 $$ if $Q \log(\lambda/4) \geq C\beta^{-1} + C r^2 \lambda^2$ in $d=2$ and if $Q \geq C \beta^{-1}r^{d-2} + C\lambda^2 r^d$ in $d \geq 3$. We conclude $$ \P(\{X(B_r(z)) \geq Q \} \cap \{X(B_R(z)) \leq Q\lambda^d \}) \leq e^{C(1+\beta \lambda^2 r^2) Q - \beta {Q \choose 2}(\mathsf{g}(2r) - \mathsf{g}(\lambda r/2))}, $$ and the proposition follows. \end{proof} We conclude the section with a proof of the high density JLM laws. \begin{proof}[Proof of \tref{1C.LL}] In $d=2$, we let $\lambda \geq 10$ be arbitrary, and in $d \geq 3$ we set $\lambda = 10$. We apply \pref{1C.LL.iterate} iteratively to a series of radii $r_k = \lambda r_{k-1}$, $k \geq 1$, with $r_0 = R$ to achieve $$ \P(\{X(B_R(z)) \geq Q \} ) \leq \limsup_{k \to \infty} \P(\{X(B_{r_k}(z)) \geq \lambda^{dk} Q \} ) + \sum_{k=0}^\infty e^{a_k} $$ for $$ a_k = {C(1+\beta \lambda^{2k+2} R^2) \lambda^{dk}Q - \beta { \lambda^{dk} Q \choose 2}(\mathsf{g}(2\lambda^{k}R) - \mathsf{g}(\lambda^{k+1} R/2))}. $$ Note that $$ a_{k+1} - a_k \leq -\beta \frac{\lambda^{2d(k+1)} \log(\lambda/4) Q^2}{2} + C(1 + \beta \lambda^{2k+4}R^2)\lambda^{d(k+1)}Q. $$ We can use $Q \log(\lambda/4) \geq C \lambda^2 R^2 + C \beta^{-1}$ for a large enough $C$ to easily see $$ a_{k+1} - a_k \leq -\beta \frac{\lambda^{2d(k+1)} \log(\lambda/4) Q^2}{4} \leq - \log 2. $$ We obtain $$ \P(\{X(B_R(z)) \geq Q \} ) \leq 2e^{a_0} \leq e^{- \beta { Q \choose 2}(\mathsf{g}(2R) - \mathsf{g}(\lambda R/2)) + C(1+\beta \lambda^2 R^2) Q}. 
$$ The desired result for balls centered at a fixed $z$ follows from some routine simplifications. Finally, it will be useful to have versions of the overcrowding estimates for $z = x_1$. For this, note that conditioning $\P^{W,U}_{N,\beta}$ on $x_1$ gives a new measure $\P^{W,U+\mathsf{g}(x_1 - \cdot)}_{N-1,\beta}$. We can then apply our results to this $(N-1)$-particle Coulomb gas with modified potential. Actually, we could even extract an extra beneficial term in \eref{LL.iso.energy} from strict superharmonicity of $\mathsf{g}(x_1-\cdot)$, but it is mostly inconsequential for the large $Q$ results. We omit the details. \end{proof} \section{Clustering Estimates} The goal of this section is to prove \tref{kpoint} and \tref{1C.cluster}. Our idea is similar to that of the previous section, except we will work with submicroscopic scales and transport particles distances of order $1$. We will precisely compute energy and volume gains associated to the transport and control entropy costs using \tref{1C.LL} with $R=1$. Here, it will be important that we work with measures $\P^{W,U}_{N,\beta}$, since changing $U$ will effectively allow us to condition the gas without deteriorating the estimates. \subsection{$k$-point function bounds}\ We will first upper-bound the $k$-point function $\rho_k(y_1,\ldots,y_k)$, $y_1,\ldots,y_k \in \mathbb{R}^d$, which was defined in \eref{kpoint.def}. In particular, we will prove \tref{kpoint}. Our results will hold for the $k$-point function of the gas $\P^{W,U}_{N,\beta}$. Note that $$ \rho_k(y_1,\ldots,y_k) = \frac{N!}{(N-k)!} \lim_{r \to 0} r^{-dk} |B_1(0)|^{-k} \P^{W,U}_{N,\beta}\(\bigcap_{i=1}^k \{x_i \in B_r(y_i)\}\) $$ and so we will consider the probability in the limit on the RHS for small $r > 0$. Let $\nu$ be the uniform probability measure on the annulus $\mathrm{Ann}_{[\frac12+2r,1-2r]}(0)$. 
We have $$ \P^{W,U}_{N,\beta}\(\bigcap_{i=1}^k \{x_i \in B_r(y_i)\}\) = \P^{W,U}_{N,\beta}\(\bigcap_{i=2}^k \{x_i \in B_r(y_i)\}\) \P^{W,U}_{N,\beta}\( \{x_1 \in B_r(y_1)\} \ | \ \{x_i \in B_r(y_i) \ \forall i=2,\ldots,k\}\). $$ The conditional probability can be further rewritten as a superposition of Coulomb gases with modified superharmonic part of the potential: \begin{equation} \label{e.kpoint.superposition} \P^{W,U}_{N,\beta}\( \{x_1 \in B_r(y_1)\} \ | \ \{x_i \in B_r(y_i) \ \forall i=2,\ldots,k\}\) = \mathbb{E}^\ast \left[ \P^{W,U + U_{x_2,\ldots,x_k}}_{N-k+1,\beta}(\{x_1 \in B_r(y_1)\}) \right] \end{equation} where $\mathbb{E}^\ast$ is an expectation over $(x_2,\ldots,x_k)$ with $x_i \in B_r(y_i)$ for $i=2,\ldots,k$ a.s., the tuple $(x_1,x_{k+1},x_{k+2},\ldots,x_N)$ is distributed by the probability measure $\P^{W,U + U_{x_2,\ldots,x_k}}_{N-k+1,\beta}$, and $U_{x_2,\ldots,x_k}$ is the electric potential generated by $x_2,\ldots,x_k$, i.e. $$ U_{x_2,\ldots,x_k}(y) = \sum_{i=2}^k \mathsf{g}(y-x_i). $$ \begin{lemma} \label{l.kpoint.iso} With the definitions above, and $0 < r < \frac{1}{100}$ and $r < \frac{1}{100} \min_{i \ne j} |y_i - y_j|$, we have \begin{align} \label{e.kpoint.delta} &\inf_{\substack{(x_1,x_{k+1},x_{k+2},\ldots,x_N) \in \mathbb{R}^{d(N-k+1)}\\ x_1 \in B_r(y_1)}} \mathcal H^{W + U + U_{x_2,\ldots,x_k}}(x_1,x_{k+1},\ldots,x_N) - \mathrm{Iso}_{\{1\},\nu} \mathcal H^{W + U + U_{x_2,\ldots,x_k}}(x_1,x_{k+1},\ldots,x_N) \\ \notag &\quad \geq -Ck + \sum_{i=2}^k \max(\mathsf{g}(|y_1 - y_i| + 2r),0). \end{align} We also have $$ \mathrm{Iso}^\ast_{\{1\},\nu} \mathbbm{1}_{\{x_1 \in B_r(y_1)\}}(x_1,x_{k+1},\ldots,x_N) \leq Cr^d \mathbbm{1}_{\{ x_1 \in B_2(y_1) \}}(x_1,x_{k+1},\ldots,x_N). $$ \end{lemma} \begin{proof} Fix $x_2,\ldots,x_k$ with $x_i \in B_r(y_i)$ and let $U' = U + U_{x_2,\ldots,x_k}$ and $X_{N,k-1} = (x_1,x_{k+1},x_{k+2},\ldots,x_N)$.
We begin by estimating, for all $\ell \ne j$, that $$ \mathrm{Iso}_{\{1\},\nu} \mathsf{g}(x_\ell - x_j) \leq \mathsf{g}(x_\ell - x_j) $$ since $\mathsf{g}$ is superharmonic. We now quantify the strictness of the above inequality with $\ell = 1$ and $j \in \{2,\ldots,k\}$. Assume $|y_1 - y_j| \leq \frac18$, so that $|x_1 - x_j| \leq \frac14$. Thus for any spherical shell $S$ within the annulus $\mathrm{Ann}_{[\frac12, 1]}(x_1)$, the point $x_j$ lies in its interior, meaning that $$ \frac{1}{\text{volume}_{d-1}(S)}\int_S \mathsf{g}(x_1 + y - x_j) dy = \mathsf{g}(\diam(S)/2) $$ by Gauss' theorem, where the integral is over $(d-1)$-dimensional surface measure. Integrating over shells shows \begin{align} \mathrm{Iso}_{\{1\},\nu} \mathsf{g}(x_1 - x_j) &= \frac{1}{|\mathrm{Ann}_{[\frac12+2r,1-2r]}(0)|} \int_{\mathrm{Ann}_{[\frac12+2r, 1-2r]}(0)} \mathsf{g}(x_1 + y - x_j) dy \leq C \end{align} for a dimensional constant $C$. It remains to consider the $W$ and $U$ terms. Since $U$ is superharmonic in $x_1$, we have $\mathrm{Iso}_{\{1\},\nu} U(x_1) \leq U(x_1)$. By \eref{iso.W} and $\Delta W \leq C$, we find $$ \mathrm{Iso}_{\{1\},\nu} \sum_{i=k+1}^N W(x_i) = \sum_{i=k+1}^N W(x_i) $$ and $$ \mathrm{Iso}_{\{1\},\nu} W(x_1) \leq W(x_1) + C. $$ Putting it all together, we conclude \begin{align} \mathrm{Iso}_{\{1\},\nu} \mathcal H^{W,U'}(X_{N,k-1}) &\leq Ck + \mathcal H^{W,U'}(X_{N,k-1}) - \sum_{\substack{j=2 \\ |x_1 - x_j| \leq \frac14 }}^k \mathsf{g}(x_1 - x_j) \\ \notag &\leq Ck + \mathcal H^{W,U'}(X_{N,k-1}) - \sum_{j=2}^k \max(\mathsf{g}(|y_1 - y_j| + 2r), 0), \end{align} as desired. Let $F$ be a nice enough function of $X_{N,k-1}$.
By the change of variables $(x_1,x_{k+1},\ldots,x_N) \to (x_1 - y, x_{k+1},\ldots,x_N)$ we have \begin{align} &\int_{\mathbb{R}^{d(N-k+1)}} \mathbbm{1}_{\{x_1 \in B_r(y_1)\}}(X_{N,k-1}) \mathrm{Iso}_{\{1\},\nu}F(X_{N,k-1}) dX_{N,k-1} \\ \notag &\quad = \int_{\mathbb{R}^d} \int_{\mathbb{R}^{d(N-k+1)}} \mathbbm{1}_{\{x_1 \in B_r(y_1)\}}(X_{N,k-1}) F(x_1 + y, x_{k+1},\ldots,x_N) dX_{N,k-1} \nu(dy) \\ \notag &\quad = \int_{\mathbb{R}^{d(N-k+1)}} \( \int_{\mathbb{R}^d} \mathbbm{1}_{\{x_1 \in B_r(y_1 + y)\}}(X_{N,k-1}) \nu(dy) \)F(X_{N,k-1}) dX_{N,k-1}. \end{align} Thus \begin{align*} \mathrm{Iso}^\ast_{\{1\},\nu} \mathbbm{1}_{\{x_1 \in B_r(y_1)\}}(X_{N,k-1}) = \nu(B_r(x_1 - y_1)) &\leq \frac{Cr^d}{|\mathrm{Ann}_{[\frac12, 1]}(0)|} \mathbbm{1}_{\{ |x_1 - y_1| \in [\frac12-r,1+r] \}}(X_{N,k-1}) \\ &\leq Cr^d \mathbbm{1}_{\{ x_1 \in B_2(y_1) \}}(X_{N,k-1}). \end{align*} \end{proof} We are now ready to prove \tref{kpoint}. \begin{proof}[Proof of \tref{kpoint}.] We will prove a more general theorem for the $k$-point function of $\P^{W,U}_{N,\beta}$. Constants will be independent of $U$. Consider the representation \eref{kpoint.superposition} for a small $r > 0$. Fix $x_2,\ldots,x_k$, $x_i \in B_r(y_i)$, and let $X_{N,k-1} = (x_1,x_{k+1},\ldots,x_N)$ and $U' = U + U_{x_2,\ldots,x_k}$. For fixed $x_2,\ldots,x_k$, $x_i \in B_r(y_i)$, let $\mathcal Z'$ be a normalizing constant so that $$ \P^{W,U'}_{N-k+1,\beta} = \frac{1}{\mathcal Z'} e^{-\beta \mathcal H^{W,U'}(X_{N,k-1})} dX_{N,k-1}.
$$ We can apply \lref{kpoint.iso} and our isotropic averaging procedure to see \begin{align} \P^{W,U'}_{N-k+1,\beta}(\{x_1 \in B_r(y_1)\}) &= \frac{1}{\mathcal Z'} \int_{\mathbb{R}^{d(N-k+1)}} \mathbbm{1}_{x_1 \in B_r(y_1)} e^{-\beta \mathcal H^{W,U'}(X_{N,k-1})} dX_{N,k-1} \\ &\leq \frac{e^{-\beta \Delta_{r,1}}}{\mathcal Z'}\int_{\mathbb{R}^{d(N-k+1)}} \mathbbm{1}_{x_1 \in B_r(y_1)} e^{-\beta \mathrm{Iso}_{\{1\},\nu} \mathcal H^{W,U'}(X_{N,k-1})} dX_{N,k-1} \\ &\leq \frac{e^{-\beta \Delta_{r,1}}}{\mathcal Z'}\int_{\mathbb{R}^{d(N-k+1)}} \mathbbm{1}_{x_1 \in B_r(y_1)} \mathrm{Iso}_{\{1\},\nu} e^{-\beta \mathcal H^{W,U'}(X_{N,k-1})} dX_{N,k-1} \\ &= \frac{e^{-\beta \Delta_{r,1}}}{\mathcal Z'}\int_{\mathbb{R}^{d(N-k+1)}} \mathrm{Iso}_{\{1\},\nu}^\ast \mathbbm{1}_{x_1 \in B_r(y_1)} e^{-\beta \mathcal H^{W,U'}(X_{N,k-1})} dX_{N,k-1} \\ &\leq Ce^{-\beta \Delta_{r,1}} r^d \P^{W,U'}_{N-k+1,\beta}(\{x_1 \in B_2(y_1)\}) \end{align} for $\Delta_{r,1}$ equal to the RHS of \eref{kpoint.delta}. Taking an expectation over $x_2,\ldots,x_k$ proves $$ \P^{W,U}_{N,\beta}\(\bigcap_{i=1}^k \{x_i \in B_r(y_i)\}\) \leq Ce^{-\beta \Delta_{r,1}} r^d \P^{W,U}_{N,\beta}\(\{x_1 \in B_2(y_1)\} \cap \bigcap_{i=2}^k \{x_i \in B_r(y_i)\}\). $$ If $k > 1$, we will iterate the argument, this time using $\mathrm{Iso}_{\{2\},\nu}$ in place of $\mathrm{Iso}_{\{1\},\nu}$. We write $$ \P^{W,U}_{N,\beta}\(\{x_1 \in B_2(y_1)\} \cap \bigcap_{i=2}^k \{x_i \in B_r(y_i)\}\) = \P^{W,U}_{N,\beta}(\{x_1 \in B_2(y_1)\})\mathbb{E}^{\ast \ast} \left[ \P^{W,U_1}_{N-1,\beta}\(\bigcap_{i=2}^k \{x_i \in B_r(y_i)\}\) \right] $$ for some expectation $\mathbb{E}^{\ast \ast}$ over a new random superharmonic potential $U_1 = U + \mathsf{g}(x_1-\cdot)$. The law of $(x_2,\ldots,x_N)$ is given by $\P^{W,U_1}_{N-1,\beta}$ in the RHS.
After applying isotropic averaging to $x_2$, we find $$ \P^{W,U}_{N,\beta}\(\{x_1 \in B_2(y_1)\} \cap \bigcap_{i=2}^k \{x_i \in B_r(y_i)\}\) \leq C e^{-\beta\Delta_{r,2}} r^d \P^{W,U}_{N,\beta}\(\{x_1 \in B_2(y_1)\} \cap \{x_2 \in B_2(y_2)\} \cap \bigcap_{i=3}^k \{x_i \in B_r(y_i)\}\) $$ for $$ \Delta_{r,2} = -C(k-1) + \sum_{i=3}^k \max(\mathsf{g}(|y_i - y_2| + 2r),0). $$ We proceed with the iteration until each of $x_1,\ldots,x_k$ has had an isotropic averaging argument applied to it. We find that $$ \P^{W,U}_{N,\beta}\(\bigcap_{i=1}^k \{x_i \in B_r(y_i)\}\) \leq e^{Ck^2} \exp\(-\frac{\beta}{2} \sum_{\substack{i,j=1 \\ i \ne j}}^k \max(\mathsf{g}(|y_i - y_j| + 2r),0)\) r^{dk} \P^{W,U}_{N,\beta}\(\bigcap_{i=1}^k \{x_i \in B_2(y_i)\}\). $$ All that remains is to bound the probability in the RHS. Note that $$ \P^{W,U}_{N,\beta}\(\bigcap_{i=1}^k \{x_i \in B_2(y_i)\}\) = \frac{(N-k)!}{N!}\sum_{i_1,\ldots,i_k \text{ distinct}} \P^{W,U}_{N,\beta}\(\bigcap_{\ell=1}^k \{x_{i_\ell} \in B_2(y_{\ell})\}\). $$ We compute $$ \sum_{i_1,\ldots,i_k \text{ distinct}} \mathbbm{1}_{\bigcap_{\ell=1}^k \{x_{i_\ell} \in B_2(y_{\ell})\}} \leq \sum_{n=0}^\infty n^k \mathbbm{1}_{E_n} \quad \text{where} \quad E_n = \left \{\sup_{i=1,\ldots,k} X(B_2(y_i)) = n \right\}. $$ Indeed, if $E_n$ occurs then the LHS sum is bounded by the number of ways to choose one of the at most $n$ particles in each $B_2(y_{\ell})$. We apply a union bound and the local law \tref{1C.LL} to see $$ \P^{W,U}_{N,\beta}(E_n) \leq \sum_{i=1}^k \P(\{X(B_2(y_i)) \geq n\}) \leq Cke^{-c_0 \beta n^2} $$ for some $c_0 > 0$. Thus $$ \P^{W,U}_{N,\beta}\(\bigcap_{i=1}^k \{x_i \in B_2(y_i)\}\) \leq C k \frac{(N-k)!}{N!}\sum_{n=0}^\infty n^k e^{-c_0 \beta n^2}\leq C_{\beta,k} \frac{(N-k)!}{N!}. $$ Collecting the estimates and letting $r \to 0$ proves the theorem. \end{proof} \subsection{Clustering Estimates.}\ We now apply our $k$-point function estimates to prove the clustering estimates of \tref{1C.cluster}.
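For orientation, we record what the $k$-point bound yields in $d=2$ under the normalization $\mathsf{g}(x) = -\log|x|$: letting $r \to 0$ gives $$ \rho_k(y_1,\ldots,y_k) \leq C_{\beta,k} \prod_{1 \leq i < j \leq k} \min(1, |y_i - y_j|)^{\beta}, $$ since $\max(\mathsf{g}(|y_i - y_j|),0) = -\log \min(1,|y_i - y_j|)$. This is the form of the bound used in the proof below.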
\begin{proof}[Proof of \tref{1C.cluster}.] We will need slightly different arguments based on $d=2$ or $d \geq 3$. We integrate the results of \tref{kpoint}. We have $$ \P(\{X(B_r(z)) \geq Q\}) \leq \frac{1}{Q!} \sum_{i_1,\ldots,i_Q \ \text{distinct}} \P\(\bigcap_{\ell=1}^Q \{ x_{i_\ell} \in B_r(z) \} \) = \frac{1}{Q!} \int_{(B_r(z))^{Q}} \rho_Q(y_1,\ldots,y_Q) dy_1 \cdots dy_Q. $$ Thus, for a $Q$ dependent constant $C$, we have $$ \P(\{X(B_r(z)) \geq Q\}) \leq C r^{Qd} \| \rho_Q \|_{L^\infty((B_r(z))^Q)}. $$ In $d=2$ one estimates $$ \| \rho_Q \|_{L^\infty((B_r(z))^Q)} \leq C r^{\beta {Q \choose 2}} $$ which establishes \eref{cluster.fixed.2}. In $d \geq 3$, we can estimate $$ \| \rho_Q \|_{L^\infty((B_r(z))^Q)} \leq C e^{-\frac{\beta}{2^{d-2}r^{d-2}} {Q \choose 2}} $$ since $B_r(z)$ has diameter $2r$, establishing \eref{cluster.fixed.3}. To prove the estimates with $z = x_1$, we consider $(x_2,\ldots,x_N)$ drawn from the Coulomb gas $\P^{W,U_{x_1}}_{N-1,\beta}$ where $U_{x_1}(y) = \mathsf{g}(x_1 - y)$. The proof of \tref{kpoint} applies to this gas, except we have an extra term when applying our isotropic averaging operator to $U_{x_1}$ coming from particle repulsion generated by $x_1$. We can prove \begin{equation} \rho_{Q-1}(y_2,\ldots,y_Q | y_1) \leq \begin{cases} C\prod_{1 \leq i < j \leq Q} \min(1, |y_i - y_j|^\beta) \quad &\text{if}\ d=2, \\ C e^{-\beta \mathcal H^0(y_1,\ldots,y_Q)} \quad &\text{if}\ d \geq 3, \end{cases} \end{equation} where $\rho_{Q-1}(y_2,\ldots,y_Q | y_1)$ is the $(Q-1)$-point function of $\P^{W,U_{x_1}}_{N-1,\beta}$ with $x_1 = y_1$. 
Then since $$ \P(\{X(B_r(x_1)) \geq Q\}) \leq C r^{(Q-1)d} \sup_{y_1 \in \mathbb{R}^d} \| \rho_{Q-1}(\cdot | y_1) \|_{L^\infty((B_r(y_1))^{Q-1})}, $$ we obtain $$ \P(\{X(B_r(x_1)) \geq Q\}) \leq Cr^{(Q-1)d + \beta {Q \choose 2}} \quad \text{if}\ d= 2, $$ and \begin{equation}\label{e.cluster.centered.naive.3} \P(\{X(B_r(x_1)) \geq Q\}) \leq C r^{(Q-1)d} e^{-\beta \frac{Q-1}{r^{d-2}} - \beta \frac{1}{2^{d-2} r^{d-2}} {Q-1 \choose 2}} \quad \text{if}\ d\geq 3. \end{equation} In the latter estimate, we used that $|y_i - y_1| \leq r$ in our $L^\infty(B_r(y_1)^{Q-1})$ bound on $\rho_{Q-1}(\cdot|y_1)$, and $|y_i - y_j| \leq 2r$ for $i,j \geq 2$. We expect one can obtain significant improvements to the $d \geq 3$ bound by more accurately estimating the minimum value of $\mathcal H^0(y_1,\ldots,y_Q)$ and the volume of the set of near-minimizers within $(B_r(z))^Q$. We will now do so for the case of $Q=2$ and $z = x_1$ since it is relevant to the minimal separation problem. Let $d \geq 3$. We find that \begin{align} \P(\{X(B_r(x_1)) \geq 2\}) \leq C \sup_{y_1 \in \mathbb{R}^d} \int_{B_r(y_1)} e^{-\frac{\beta}{|y_1 - y_2|^{d-2}}} dy_2 &\leq C r^d \int_{B_1(0)} e^{-\frac{\beta}{r^{d-2} |y|^{d-2}}} dy \\ \notag &\leq Cr^d \int_0^1 s^{d-1} e^{-\frac{\beta}{r^{d-2}} \frac{1}{s^{d-2}}} ds. \end{align} For any $\alpha > 0$, we can compute $$ \int_0^1 s^{d-1} e^{-\frac{\alpha}{s^{d-2}}} ds \leq \int_0^1 \frac{1}{s^{d-1}} e^{-\frac{\alpha}{s^{d-2}}} ds = \frac{e^{-\alpha}}{(d-2)\alpha} \leq \frac{e^{-\alpha}}{\alpha}. $$ Applying this at $\alpha = r^{-d+2} \beta$ allows us to conclude $$ \P(\{X(B_r(x_1)) \geq 2\}) \leq Cr^{2d-2} e^{-\frac{\beta}{r^{d-2}}}. $$ Notice that this is significantly stronger than \eref{cluster.centered.naive.3} with $Q=2$. The ``volume factor'' has been reduced from $r^d$ to $r^{2d-2}$. \end{proof} \section{Minimal Separation} \label{s.etak} In this section, we prove the minimal separation theorem \tref{etak.tight} via \pref{etak.isbig} and \pref{etak.issmall}.
We begin by showing that $\eta_k$ is not smaller than expected. The result follows relatively easily from the estimates of \tref{1C.cluster}. \begin{proposition} \label{p.etak.isbig} There is an absolute constant $C_0 > 0$ such that in $d=2$ we have \begin{equation} \label{e.etak.isbig.2} \P^W_{N,\beta}(\{\eta_k \leq \gamma N^{-\frac{1}{2+\beta}} \}) \leq C \gamma^{(2+\beta)k} + CN^{-\frac{2+2\beta}{2+\beta}} \gamma^{4+3\beta} + CN^{-7k}\gamma^{C_0(1+\beta)k} \quad \forall \gamma > 0, \end{equation} and in $d \geq 3$ we have \begin{equation} \label{e.etak.isbig.3} \P^W_{N,\beta}\left (\left \{\eta_k \leq \(\frac{\beta}{\log N - \frac{2d-2}{d-2} \log \log N +\gamma}\)^{\frac{1}{d-2}}\right\} \right) \leq \frac{C}{\sqrt{N}} + Ce^{-k\gamma} \quad \forall \gamma > 0. \end{equation} The constant $C$ depends on $\beta$, $W$, and $k$. \end{proposition} \begin{proof} Let $r \in (0,1)$. Let $S$ be the event that there exists $i \in \{1,\ldots,N\}$ such that $B_{r}(x_i)$ contains $3$ or more particles $x_j$. We have \begin{equation} \label{e.3cluster.bd} \P^{W,U}_{N,\beta}(S) \leq CN\P(\{X(B_r(x_1)) \geq 3\}) \leq \begin{cases} CNr^{4+3\beta} \quad &\text{if}\ d=2, \\ C N r^{2d} e^{-\frac{2\beta}{r^{d-2}}} \quad &\text{if}\ d\geq 3, \end{cases} \end{equation} by a union bound and \tref{1C.cluster}. Let $E_{k,r}$ be the event that $\eta_k \leq r$. Also, let $a_j< b_j$ be random indices such that $\eta_j = |x_{a_j} - x_{b_j}|$ for $j=1,\ldots,k$, which are well-defined almost surely. Furthermore, on the event $S^c \cap E_{k,r}$, we have that $\{a_j, b_j\} \cap \{a_{\ell}, b_\ell\} = \varnothing$ for $\ell \ne j$ almost surely. Let $F_j = \bigcap_{\ell=1}^j \{a_\ell = 2\ell - 1 \} \cap \{b_{\ell} = 2\ell\}$. By exchangeability, we have $$ \P^{W,U}_{N,\beta}(E_{k,r} \cap S^c) \leq {N \choose 2}^{k} \P^{W,U}_{N,\beta}(E_{k,r} \cap F_k) \leq {N \choose 2}^{k} \P^{W,U}_{N,\beta}(\bigcap_{\ell=1}^k \{|x_{2\ell} - x_{2 \ell-1}| \leq r\}).
$$ Let $Y = X - \sum_{i=1}^{2k-2} \delta_{x_i}$, i.e.\ the point process with the first $2k-2$ points deleted. We estimate \begin{align*} \lefteqn{ \P^{W,U}_{N,\beta}(\bigcap_{\ell=1}^k \{|x_{2\ell} - x_{2 \ell-1}| \leq r \}) } \quad & \\ &= \P^{W,U}_{N,\beta}(\bigcap_{\ell=1}^{k-1} \{|x_{2\ell} - x_{2 \ell-1}| \leq r \}) \P^{W,U}_{N,\beta}(\{|x_{2k} - x_{2k-1}| \leq r\}|\bigcap_{\ell=1}^{k-1} \{|x_{2\ell} - x_{2 \ell-1}| \leq r \} ), \end{align*} and furthermore $$ \P^{W,U}_{N,\beta}(\{|x_{2k} - x_{2k-1}| \leq r\} |\bigcap_{\ell=1}^{k-1} \{|x_{2\ell} - x_{2 \ell-1}| \leq r \}) = \mathbb{E}^{k-1} \left[ \P^{W,U_{k-1}}_{N-2k+2,\beta}(\{ |x_{2k} - x_{2 k-1}| \leq r \} ) \right] $$ for the random potential $U_{k-1} = U + \sum_{\ell = 1}^{k-1} \(\mathsf{g}(x_{2\ell} - \cdot) + \mathsf{g}(x_{2\ell-1} - \cdot)\)$. The expectation $\mathbb{E}^{k-1}$ is over the law of the points $x_{\ell}$, $\ell \in \{1,\ldots,2k-2\}$, conditioned to be pairwise close as above, and $\P^{W,U_{k-1}}_{N-2k+2,\beta}$ is a measure on the particles $(x_{2k-1},x_{2k},\ldots,x_N)$. For any positive integer $n$, we have by exchangeability that $$ \P^{W,U_{k-1}}_{N-2k+2,\beta}(\{|x_{2k} - x_{2 k-1}| \leq r \}) \leq \frac{C_n}{N} \P^{W,U_{k-1}}_{N-2k+2,\beta}(\{ Y(B_r(x_{2k})) \geq 2 \}) + \P^{W,U_{k-1}}_{N-2k+2,\beta}(\{ Y(B_r(x_{2k})) \geq n \}). $$ We apply \tref{1C.cluster} to bound each piece above.
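Concretely, \tref{1C.cluster} applied to the gas $\P^{W,U_{k-1}}_{N-2k+2,\beta}$, with the ball centered at the particle $x_{2k}$, yields $$ \P^{W,U_{k-1}}_{N-2k+2,\beta}(\{ Y(B_r(x_{2k})) \geq 2 \}) \leq \begin{cases} Cr^{2+\beta} \quad &\text{if}\ d=2, \\ Cr^{2d-2} e^{-\frac{\beta}{r^{d-2}}} \quad &\text{if}\ d\geq 3, \end{cases} $$ together with the analogous bound for the event $\{ Y(B_r(x_{2k})) \geq n \}$.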
Collecting the estimates, we have proved \begin{equation*} \P^{W,U}_{N,\beta}(\bigcap_{\ell=1}^k \{|x_{2\ell} - x_{2 \ell-1}| \leq r \}) \leq C \(\frac{1}{N} r^{2+\beta} + r^{2(n-1) + \beta {n \choose 2}}\)\P^{W,U}_{N,\beta}(\bigcap_{\ell=1}^{k-1} \{|x_{2\ell} - x_{2 \ell-1}| \leq r \}) \quad \text{if}\ d=2 \end{equation*} and \begin{equation*} \P^{W,U}_{N,\beta}(\bigcap_{\ell=1}^k \{|x_{2\ell} - x_{2 \ell-1}| \leq r \}) \leq C \(\frac{1}{N} r^{2d-2} e^{-\frac{\beta}{r^{d-2}}} + r^{(n-1)d} e^{-\frac{c\beta n^2}{r^{d-2}}} \)\P^{W,U}_{N,\beta}(\bigcap_{\ell=1}^{k-1} \{|x_{2\ell} - x_{2 \ell-1}| \leq r \}) \quad \text{if}\ d\geq 3 \end{equation*} for some dimensional constant $c > 0$. We can iterate this estimate to see $$ \P^{W,U}_{N,\beta}(\bigcap_{\ell=1}^k \{|x_{2\ell} - x_{2 \ell-1}| \leq r \}) \leq \begin{cases} C\(\frac{1}{N} r^{2+\beta} + r^{2(n-1) + \beta {n \choose 2}}\)^k \quad &\text{if}\ d=2, \\ C \(\frac{1}{N} r^{2d-2} e^{-\frac{\beta}{r^{d-2}}} + r^{(n-1)d} e^{-\frac{c\beta n^2}{r^{d-2}}} \)^k \quad &\text{if}\ d\geq 3. \end{cases} $$ We conclude the general argument by noting \begin{equation} \label{e.etak.isbig.conclude} \P^{W,U}_{N,\beta}(E_{k,r}) \leq \P^{W,U}_{N,\beta}(S) + \P^{W,U}_{N,\beta}(E_{k,r} \cap S^c) \leq \P^{W,U}_{N,\beta}(S) + \binom{N}{2}^k \P^{W,U}_{N,\beta}(\bigcap_{\ell=1}^k \{|x_{2\ell} - x_{2 \ell-1}| \leq r \}). \end{equation} The probability of $S$ is bounded in \eref{3cluster.bd}. We now choose specific $r$ and $n$ to conclude the proposition. We must consider $d=2$ and $d\geq 3$ separately. For $d=2$, we choose $r = \gamma N^{-\frac{1}{2+\beta}}$ and $n = 10$ to see $$ \P^{W,U}_{N,\beta}(S) \leq CN^{-\frac{2+2\beta}{2+\beta}} \gamma^{4+3\beta}, \quad \P^{W,U}_{N,\beta}(\bigcap_{\ell=1}^k \{|x_{2\ell} - x_{2 \ell-1}| \leq r \}) \leq CN^{-2k} \gamma^{(2+\beta)k} + C N^{-9k} \gamma^{C_0(1+\beta)k} $$ for some $C_0 > 0$, and \eref{etak.isbig.2} follows. 
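Indeed, with this choice of $r$ the exponents can be checked directly: $$ Nr^{4+3\beta} = \gamma^{4+3\beta} N^{1 - \frac{4+3\beta}{2+\beta}} = \gamma^{4+3\beta} N^{-\frac{2+2\beta}{2+\beta}}, \qquad {N \choose 2}^{k} N^{-2k} \gamma^{(2+\beta)k} \leq \gamma^{(2+\beta)k}, $$ and the term $N^{-9k} \gamma^{C_0(1+\beta)k}$ is handled in the same way, producing the $N^{-7k}$ term of \eref{etak.isbig.2}.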
For $d \geq 3$, in proving \eref{etak.isbig.3} we will always have $r^{d-2} \leq \frac{4\beta}{3\log N}$ if we consider $N \geq C$. It follows that $$ \P^{W,U}_{N,\beta}(\bigcap_{\ell=1}^k \{|x_{2\ell} - x_{2 \ell-1}| \leq r \}) \leq C\frac{r^{(2d-2)k}}{N^k} e^{-\frac{\beta k}{r^{d-2}}} \leq C N^{-k} \exp\(-\frac{\beta k}{r^{d-2}} - \frac{2d-2}{d-2} k \log \log N\), $$ if $n$ is chosen large enough, and $$ \P^{W,U}_{N,\beta}(S) \leq CN \exp\(-2\beta\(\frac{3\log N}{4\beta}\)\) \leq CN^{-\frac12}. $$ We can then let $r = \(\frac{\beta}{\log N - \frac{2d-2}{d-2}\log \log N + \gamma}\)^{\frac{1}{d-2}}$, and after a short computation using \eref{etak.isbig.conclude}, we find \eref{etak.isbig.3} holds. \end{proof} Our next goal is to prove that $\eta_k$, properly normalized, is tight as $N \to \infty$. To do this, we must create close particle pairs using isotropic averaging and estimate the energy and entropy cost. For $i \in \{1,\ldots,N\}$, let $L_{i,R}$ be the event that $\min_{j \ne i} |x_j - x_i| \geq R$, i.e.\ that the point $x_i$ is "lonely". Clearly, if $R$ is much larger than the interstitial distance, the event $L_{i,R}$ is rare. We use \cite{CHM18} to quantify this fact. The below result is their Theorem 1.5, written in our blown-up coordinates and corresponding to their inverse temperature chosen as $N^{-1+2/d} \beta$ in terms of our $\beta$. \begin{theorem}[\cite{CHM18}, Theorem 1.5] Let $W = V_N$ with $V$ satisfying \eref{A1}, \eref{A3}, \eref{A5} in $d=2$, and \eref{A6} in $d \geq 3$. Recall the blown-up equilibrium measure $\mueq^N$ from \sref{notation}. 
We have \begin{equation} \label{t.CHM} \P^{V_N}_{N,\beta}(\{d_{\mathrm{BL},N}(X,\mueq^N) \geq N\sqrt{\log N} r\}) \leq e^{-c N \log N r^2} \end{equation} for any $r \geq c^{-1}$ for some $\beta$-dependent $c > 0$, where, for nonnegative measures $\nu_1$ and $\nu_2$ of mass $N$, we define \begin{equation*} d_{\mathrm{BL},N}(\nu_1,\nu_2) := \sup_{\substack{ f \in C_N^{0,1}(\mathbb{R}^d) \\ \| f \|_{C_N^{0,1}(\mathbb{R}^d)} \leq 1}} \int_{\mathbb{R}^d} f(x)(\nu_1 - \nu_2)(dx), \end{equation*} and $C_N^{0,1}(\mathbb{R}^d)$ is the space of bounded Lipschitz functions with norm $$ \| f \|_{C_N^{0,1}(\mathbb{R}^d)} = \sup_{x \in \mathbb{R}^d} N^{-1/d}|f(x)| + \sup_{x \ne y \in \mathbb{R}^d} \frac{|f(x) - f(y)|}{|x-y|}. $$ \end{theorem} \begin{lemma} \label{l.lonely} Let $W = V_N$ with $V$ satisfying \eref{A1}, \eref{A3}, \eref{A5} in $d=2$, and \eref{A6} in $d \geq 3$. We have $$ \P^{W,U}_{N,\beta}(L_{1,R}) \leq C R^{-d} + CN^{-\frac1d} \sqrt{\log N} $$ for all $R \geq 1$. \end{lemma} \begin{proof} Define $\varphi(x) = \max(0,1-N^{-1/d}\dist(x,\Sigma_N))$. One can easily check $$ \| \varphi \|_{C^{0,1}_N(\mathbb{R}^d)} \leq CN^{-1/d}. $$ Therefore $$ \left | \int_{\mathbb{R}^d} \varphi(x) (X - \mueq^N)(dx) \right | \leq \min\(CN^{-\frac1d} d_{\mathrm{BL},N}(X,\mueq^N), N\), $$ and so for $r = c^{-1}$ large enough we have by \tref{CHM} \begin{align} \label{e.CHM} \mathbb{E}\left[ \left | \int_{\mathbb{R}^d} \varphi(x) (X - \mueq^N)(dx) \right | \right] &\leq N \P^{V_N}_{N,\beta}(\{d_{\mathrm{BL},N}(X,\mueq^N) \geq N\sqrt{\log N} r\}) + CN^{1-\frac1d} \sqrt{\log N} r \\ \notag &\leq Ne^{-cN \log N r^2} + CN^{1-\frac1d}\sqrt{\log N} r \leq CN^{1-\frac1d}\sqrt{\log N}. \end{align} We will use \eref{CHM} to show that most points are typically within $\supp \varphi$.
Indeed, we have $$ \int_{\mathbb{R}^d} \varphi(x)(X - \mueq^N)(dx) \leq -X((\supp \varphi)^c), \qquad \mathbb{E}\left[ X((\supp \varphi)^c) \right] = \sum_{i=1}^N \P^{V_N}_{N,\beta}(\{x_i \not \in \supp \varphi\}), $$ and so taking expectations and using \eref{CHM} together with exchangeability shows $\P^{V_N}_{N,\beta}(\{x_1 \not \in \supp \varphi\}) \leq CN^{-\frac1d}\sqrt{\log N}$. Let $\xi_i = \min(R,\min_{j \ne i} |x_i - x_j|)$, so the event $L_{i,R}$ is equivalent to $\xi_i = R$. Since the balls $B_{\xi_i/2}(x_i)$ are disjoint, we must have $$ |\{i : x_i \in \supp \varphi, \xi_i = R\}| \cdot R^d \leq C|\{x : \dist(x,\supp \varphi) \leq R\}| \leq CN, $$ where we assumed WLOG $R \leq N^{\frac1d}$ and used $\supp \mueq$ compact in the last inequality. Thus $$ \P^{V_N}_{N,\beta}(L_{1,R} \cap \{x_1 \in \supp \varphi\}) \leq CR^{-d}, $$ and the lemma follows. \end{proof} For a configuration $X_N$, let $\phi(i) = \phi_{X_N}(i) \in \{1,\ldots,N\}$ be the index of the closest particle to $i$, i.e.\ $|x_i - x_{\phi(i)}| = \min_{j \ne i} |x_i - x_j|$. This is almost surely well-defined. Also, let $T_{k,r}$ be the event that the cardinality of $\{\{i,j\} \ : \ |x_i - x_j| < r, i \ne j\}$ is at most $k$. Define the random indices $a_\ell < b_\ell$ such that $\eta_\ell = |x_{a_\ell} - x_{b_{\ell}}|$. The next proposition uses isotropic averaging to move $x_i$ and $x_{\phi(i)}$ closer together. \begin{proposition} \label{p.createpairs} Let $r \in (0,1)$ and let $\nu$ be a rotationally symmetric probability measure supported in $\overline{B_r(0)}$ with a Lebesgue density, also denoted $\nu$.
For any $R \geq 1$ and integer $n \geq 3$ and $k \in \{1,\ldots,N\}$, we have \begin{align} \P^{W,U}_{N,\beta}(\{\eta_k \geq r\}) &\leq \frac{2k-2}{N} + \P^{W,U}_{N,\beta}(L_{1,R}) \\ \notag &\quad +{C e^{C \beta r^2 + \beta \Delta_{\nu}}} \| \nu \|_{L^\infty(\mathbb{R}^d)} M_{r,R}\(\frac{k+n}{N} + \P^{W,U}_{N,\beta}(\{X(B_r(x_1)) \geq 3\}) + N\P^{W,U}_{N,\beta}(\{X(B_r(x_1)) \geq n\})\), \end{align} where $\Delta_\nu = \int_{\mathbb{R}^d} \mathsf{g}(y) \nu(dy)$ and $M_{r,R} = \int_{\mathrm{Ann}_{[r,R]}(0)} e^{-\beta \mathsf{g}(y)} dy$. The constant $C$ depends only on $W$ and $n$. \end{proposition} \begin{proof} We will abbreviate $\P = \P^{W,U}_{N,\beta}$ and $\mathbb{E} = \mathbb{E}^{W,U}_{N,\beta}$ throughout the proof. First note that $T_{k-1,r} = \{\eta_k \geq r\}$. Furthermore, on $T_{k-1,r}$ we have $|x_1 - x_{\phi(1)}| \geq r$ unless $a_\ell = 1$ or $b_\ell = 1$ for some $\ell \in \{1,\ldots,k-1\}$. We will furthermore want to fix the label $\phi(1)$ and ensure $x_1$ is not $R$-lonely, which inspires the bound \begin{align} \label{e.Tkr.breakdown} \P(T_{k-1,r}) &\leq (N-1)\P(\{|x_1 - x_2| \geq r\} \cap \{\phi(1) = 2\} \cap L_{1,R}^c \cap T_{k-1,r}) + \P(L_{1,R}) \\ \notag &\quad + \P(\{\exists \ell \in \{1,\ldots,k-1\} \ a_\ell = 1 \ \text{or}\ b_\ell = 1 \}). \end{align} The $N-1$ factor comes from the fact that $\phi(1)$ is equally likely to be each of $\{2,\ldots,N\}$. The first probability on the RHS of \eref{Tkr.breakdown} is suitable to apply a transport procedure to move $x_2$ closer to $x_1$, but first we will bound the last term. By exchangeability, we have \begin{equation} \label{e.albl.bd} \P(\{\exists \ell \in \{1,\ldots,k-1\} \ a_\ell = 1 \ \text{or}\ b_\ell = 1 \}) \leq \frac{2k-2}{N} \end{equation} since $\{a_\ell : \ell=1,\ldots,k-1\} \cup \{b_\ell : \ell=1,\ldots,k-1\}$ is a random subset of $\{1,\ldots,N\}$ of size at most $2k-2$.
We condition on $x_3,x_4,\ldots,x_N$ to rewrite \begin{align} \label{e.etak.issmall.cond} \P(\{|x_1 - x_2| \geq r\} \cap \{\phi(1) = 2\} \cap L_{1,R}^c \cap T_{k-1,r}) &\leq \P(T'_{k-1,r} \cap \{|x_1 - x_2| \in [r,R]\} \cap \{\phi(1) = 2\}) \\ \notag &= \mathbb{E}\left[ \mathbbm{1}_{T'_{k-1,r}} \P(\{|x_1 - x_2| \in [r,R]\} \cap \{\phi(1) = 2\} | x_3,x_4,\ldots,x_N) \right], \end{align} where $T'_{k-1,r}$ is the event that the cardinality of $\{\{i,j\} \subset \{3,\ldots,N\} \ : \ |x_i - x_j| < r, i \ne j \}$ is at most $k-1$. We next define a new type of isotropic averaging operator to apply to the conditional probability above. For a rotationally symmetric probability measure $\nu$ on $\mathbb{R}^d$, define the "mimic" operators \begin{align*} \mathrm{Mim}_{1,2,\nu} F(x_1,x_2) &= \int_{\mathbb{R}^d} F(x_1,x_1 + y)\nu(dy), \\ \mathrm{Mim}_{2,1,\nu} F(x_1,x_2) &= \int_{\mathbb{R}^d} F(x_2 + y,x_2)\nu(dy). \end{align*} Note that $$ \mathrm{Mim}_{1,2,\nu} \sum_{j=3}^N \mathsf{g}(x_2 - x_j) \leq \sum_{j=3}^N \mathsf{g}(x_1 - x_j) $$ and $$ \mathrm{Mim}_{1,2,\nu} U(x_2) \leq U(x_1). $$ We also have $$ \mathrm{Mim}_{1,2,\nu} W(x_2) \leq W(x_1) + C r^2 $$ since $\supp \nu \subset \overline{B_r(0)}$. Define $$ \Delta_{\nu} = \mathrm{Mim}_{1,2,\nu} \mathsf{g}(x_1 - x_2) = \mathrm{Mim}_{2,1,\nu} \mathsf{g}(x_1 - x_2) $$ which depends only on $\nu$. Analogous results as above hold for $\mathrm{Mim}_{2,1,\nu}$. It follows that $$ e^{-\beta \mathrm{Mim}_{1,2,\nu} \mathcal H^{W + U}(X_N) } + e^{-\beta \mathrm{Mim}_{2,1,\nu} \mathcal H^{W + U}(X_N) } \geq e^{-C\beta r^2} e^{\beta (\mathsf{g}(x_1 - x_2) - \Delta_\nu) } e^{-\beta \mathcal H^{W+U}(X_N)}. $$ Indeed, the first summand on the LHS dominates the RHS in the event that $$ \sum_{j=3}^N \mathsf{g}(x_1 - x_j) + U(x_1) + W(x_1) \leq \sum_{j=3}^N \mathsf{g}(x_2 - x_j) + U(x_2) + W(x_2) $$ since $x_2$ "mimics" $x_1$ in the operator $\mathrm{Mim}_{1,2,\nu}$. 
When the reverse inequality is true, the second summand on the LHS dominates. The adjoints are easily computed as \begin{align*} \mathrm{Mim}_{1,2,\nu}^\ast F(x_1,x_2) &= \nu(x_2 - x_1) \int_{\mathbb{R}^d} F(x_1,y)dy, \\ \mathrm{Mim}_{2,1,\nu}^\ast F(x_1,x_2) &= \nu(x_1 - x_2) \int_{\mathbb{R}^d} F(y,x_2)dy. \end{align*} We will now apply the isotropic averaging procedure. Let $X_{N,3} = (x_3,\ldots,x_N)$, and consider the conditional probability measure $\P(\cdot | X_{N,3})$. For a normalizing factor $\mathcal Z(X_{N,3})$, we have a.s.\ \begin{align*} \P(\{|x_1 - x_2| \in [r,R]\} \cap \{\phi(1) = 2\} | X_{N,3}) &= \frac{1}{\mathcal Z(X_{N,3})} \iint_{\mathbb{R}^d \times \mathbb{R}^d} \mathbbm{1}_{\{|x_1-x_2| \in [r,R]\} \cap \{\phi(1) = 2\}}(x_1,x_2)e^{-\beta \mathcal H^{W+U}(X_N)} dx_1 dx_2 \\ \notag &\leq \frac{e^{C\beta r^2 + \beta \Delta_\nu}}{\mathcal Z(X_{N,3})} (A_1 + A_2), \end{align*} where (with a slight abuse of notation) $$ A_1 = \iint_{\mathbb{R}^d \times \mathbb{R}^d} \mathrm{Mim}_{1,2,\nu}^\ast \( e^{-\beta \mathsf{g}(x_1 - x_2)} \mathbbm{1}_{\{|x_1-x_2| \in [r,R]\} \cap \{\phi(1) = 2\}} \)(x_1,x_2) e^{-\beta \mathcal H^{W+U}(X_N)} dx_1 dx_2 $$ and $A_2$ identical but with $\mathrm{Mim}_{2,1,\nu}^\ast$ in the place of $\mathrm{Mim}_{1,2,\nu}^\ast$. Note that $\mathrm{Mim}_{1,2,\nu}^\ast$ is monotonic and $$ \mathbbm{1}_{\{|x_1-x_2| \in [r,R]\} \cap \{\phi(1) = 2\}} \leq \mathbbm{1}_{\{|x_1-x_2| \in [r,R]\}}, $$ whence $$ \mathrm{Mim}_{1,2,\nu}^\ast \( e^{-\beta \mathsf{g}(x_1 - x_2)} \mathbbm{1}_{\{|x_1-x_2| \in [r,R]\} \cap \{\phi(1) = 2\}} \)(x_1,x_2) \leq \nu(x_2 - x_1) \int_{\mathrm{Ann}_{[r,R]}(x_1)} e^{-\beta \mathsf{g}(x_1 - y)} dy. $$ Define $M_{r,R} = \int_{\mathrm{Ann}_{[r,R]}(0)} e^{-\beta \mathsf{g}(y)}dy$.
Since $\nu$ is supported in $\overline{B_r(0)}$, the RHS above is a.s.\ bounded by $$ \| \nu \|_{L^\infty(\mathbb{R}^d)} M_{r,R} \mathbbm{1}_{B_r(0)}(x_1 - x_2), $$ and we find $$ \frac{A_1}{\mathcal Z(X_{N,3})} \leq M_{r,R} \| \nu \|_{L^\infty(\mathbb{R}^d)} \P(\{ |x_1 - x_2| < r \} | X_{N,3}). $$ We can prove an identical bound for $A_2$ with the same argument, and using the bounds in \eref{etak.issmall.cond} shows \begin{equation} \label{e.mimic.bd} \P(\{|x_1 - x_2| \geq r\} \cap \{\phi(1) = 2\} \cap L_{1,R}^c \cap T_{k-1,r}) \leq \kappa \P( \{|x_1 - x_2| < r\} \cap {T'_{k-1,r}}) \end{equation} where $$ \kappa := e^{C \beta r^2 + \beta \Delta_\nu}M_{r,R} \| \nu \|_{L^\infty(\mathbb{R}^d)}. $$ We wish to partially recover the information $\phi(1) = 2$ after the transport, i.e.\ on the RHS of \eref{mimic.bd}. To do so, we use the bound $$ \P( \{|x_1 - x_2| < r\} \cap \{\phi(1) \ne 2\}) \leq \frac{C_n}{N} \P(\{X(B_r(x_1)) \geq 3\}) + \P(\{X(B_r(x_1)) \geq n\}), $$ finding \begin{align} \label{e.etak.isbig.mimic.bd2} \lefteqn{ \P(\{|x_1 - x_2| \geq r\} \cap \{\phi(1) = 2\} \cap L_{1,R}^c \cap T_{k-1,r}) } \quad & \\ \notag &\leq \kappa \(\P( \{|x_1 - x_2| < r\} \cap \{\phi(1) = 2\}\cap {T'_{k-1,r}}) + \frac{C_n}{N} \P(\{X(B_r(x_1)) \geq 3\}) + \P(\{X(B_r(x_1)) \geq n\})\). \end{align} Let $T'_{k-1,r}(j)$ be the event that $\{x_i : i \in \{2,\ldots,N\}, i \ne j\}$ contains at most $k-1$ pairs of points within distance $r$. For example, $T'_{k-1,r}(2) = T'_{k-1,r}$. Since the events $\{\phi(1) = j\}$, $j=2,\ldots,N$, are disjoint up to probability $0$, we see \begin{align} \label{e.etak.isbig.gain1} \P( \{|x_1 - x_2| < r\} \cap \{\phi(1) = 2\}\cap {T'_{k-1,r}}) &= \frac{1}{N-1} \sum_{j=2}^N \P( \{|x_1 - x_j| < r\} \cap \{\phi(1) = j\}\cap {T'_{k-1,r}(j)}) \\ \notag &\leq \frac{1}{N-1}\P(\{|x_1 - x_{\phi(1)}| < r\} \cap T'_{k-1,r}(\phi(1))).
\end{align} On the event $\{|x_1 - x_{\phi(1)}| < r\} \cap T'_{k-1,r}(\phi(1))$, the index $1$ is exceptional since there are likely very few pairs of particles within distance $r$ of each other, so we expect the probability of the event to be of order $O(1/N)$. To be precise, if $X(B_r(x_i)) \leq n$ for all $i \in \{1,\ldots,N\}$, then $T'_{k-1,r}(\phi(1))$ occurring implies $T_{k-1+2n,r}$. Thus \begin{align} \label{e.etak.isbig.gain2} \P ( {\{|x_1 - x_{\phi(1)}| < r \}} \cap {T'_{k-1,r}(\phi(1))}) &\leq \P(\{\exists i \ X(B_r(x_i)) \geq n\}) + \P ( {\{|x_1 - x_{\phi(1)}| < r \}} \cap T_{k-1+2n,r}) \\ \notag &\leq N \P(\{X(B_r(x_1)) \geq n\}) + \P ( {\{|x_1 - x_{\phi(1)}| < r \}} \cap T_{k-1+2n,r}). \end{align} Since, by definition of $T_{k-1+2n,r}$, we have pointwise a.s.\ $$ \sum_{i=1}^N \mathbbm{1}_{\{|x_i - x_{\phi(i)}| < r \}} \mathbbm{1}_{T_{k-1+2n, r}} \leq 2k-2+4n, $$ we can apply exchangeability to see $$ \P ( {\{|x_1 - x_{\phi(1)}| < r \}} \cap T_{k-1+2n,r}) \leq \frac{2k-2+4n}{N}. $$ Collecting this estimate, \eref{etak.isbig.gain2}, \eref{etak.isbig.gain1}, and \eref{etak.isbig.mimic.bd2}, we have $$ \P(\{|x_1 - x_2| \geq r\} \cap \{\phi(1) = 2\} \cap L_{1,R}^c \cap T_{k-1,r}) \leq \kappa\(\frac{2k-2+4n}{N(N-1)} + \frac{C_n}{N} \P(\{X(B_r(x_1)) \geq 3\}) + 2\P(\{X(B_r(x_1)) \geq n\})\). $$ Finally, plugging this bound into \eref{Tkr.breakdown} along with \eref{albl.bd} proves the proposition. \end{proof} \begin{proposition} \label{p.etak.issmall} Consider $W = V_N$ with $V$ satisfying \eref{A1}, \eref{A3}, \eref{A5} in $d=2$, and \eref{A6} in $d \geq 3$. In $d=2$, the law of $N^{\frac{1}{2+\beta}}\eta_k$ is tight as $N \to \infty$ and $\limsup_{N \to \infty} \P^{V_N}_{N,\beta}(\{N^{\frac{1}{2+\beta}} \eta_k \geq \gamma\}) \leq C\gamma^{-\frac{4+2\beta}{4+\beta}}$ for $\gamma > 0$.
For $d \geq 3$, let $Z_k$ be defined by $$ \eta_k = \(\frac{\beta}{\log N} \)^{\frac{1}{d-2}} \( 1 + \frac{2d-2}{(d-2)^2} \frac{ \log \log N}{\log N} + \frac{Z_k}{(d-2) \log N} \). $$ Then we have $\limsup_{N \to \infty} \P^{V_N}_{N,\beta}(\{ Z_k \geq \gamma \}) \leq Ce^{-\gamma/2}$. \end{proposition} \begin{proof} Both results are consequences of \pref{createpairs}, \lref{lonely}, and our clustering result \tref{1C.cluster}. We adopt the notation from \pref{createpairs}. For the $d=2$ result, choose $r = \gamma N^{-\frac{1}{2+\beta}}$ for $\gamma \geq 1$, $n = 5$ (say), and let $\nu$ be the uniform probability measure on $\mathrm{Ann}_{[r/2,r]}(0)$. Without loss of generality, we assume $r \leq 1$. We compute \begin{align*} &M_{r,R} \leq CR^{2+\beta} \\ &\Delta_{\nu} \leq -\log r + C \\ &\| \nu \|_{L^\infty(\mathbb{R}^2)} \leq Cr^{-2}. \end{align*} Applying \pref{createpairs}, \lref{lonely}, and \tref{1C.cluster}, we see \begin{equation} \P^{V_N}_{N,\beta}(\{\eta_k \geq r\}) \leq CN^{-\frac1d}\sqrt{\log N} + CR^{-2} + C R^{2+\beta} \gamma^{-2-\beta}e^{C\beta N^{-\frac{2}{2+\beta}}\gamma^2}\(1 + N^{1-\frac{4+3\beta}{2+2\beta}} \gamma^{c_\beta}\) \end{equation} for a constant $C$ depending on $k$ and some constant $c_\beta > 0$. Taking $\limsup_{N \to \infty}$ of both sides and optimizing in $R$ proves that $$ \limsup_{N \to \infty} \P^{V_N}_{N,\beta}(\{N^{\frac{1}{2+\beta}} \eta_k \geq \gamma\}) \leq CR^{-d} + CR^{2+\beta} \gamma^{-2-\beta} \leq C\gamma^{-\frac{4+2\beta}{4+\beta}}. $$ For the $d \geq 3$ result, choose $r = \(\frac{\beta}{\log N - \frac{2d-2}{d-2} \log \log N - \gamma}\)^{\frac{1}{d-2}}$ for $\gamma > 0$ and WLOG assume $r \leq 1$. 
Let $\nu$ be the uniform measure on the annulus $\mathrm{Ann}_{[(1 -(\log N)^{-1})r, r]}(0)$, and compute \begin{align*} &M_{r,R} \leq CR^d \\ &\Delta_\nu \leq \frac{1}{(1-(\log N)^{-1})^{d-2} r^{d-2}} = \frac{1}{r^{d-2}} \(1 + \frac{d-2}{\log N} + C(\log N)^{-2}\) \\ &\| \nu \|_{L^\infty(\mathbb{R}^d)} \leq \frac{C\log N}{r^d}. \end{align*} We can then estimate \begin{align*} \frac{e^{C \beta r^2 + \beta \Delta_{\nu}}}{N} M_{r,R} \| \nu \|_{L^\infty(\mathbb{R}^d)} &\leq CR^d \exp\( \frac{\beta}{r^{d-2}} \(1 + \frac{d-2}{\log N} \) + \log \log N - d \log r - \log N\) \\ &\leq C R^d \exp\(- \gamma + d-2 - \frac{2d-2}{d-2} \frac{\log \log N}{\log N} \) \leq CR^d e^{-\gamma}. \end{align*} Thus we have \begin{align*} \P^{V_N}_{N,\beta}(\{\eta_k \geq r\}) &\leq CN^{-\frac1d} \sqrt{\log N} + CR^{-d} + CR^d e^{-\gamma}(1+N\P^{V_N}_{N,\beta}(\{X(B_r(x_1)) \geq 3\}) + N^2\P^{V_N}_{N,\beta}(\{X(B_r(x_1)) \geq n\})) \\ &\leq CN^{-\frac1d} \sqrt{\log N} + CR^{-d} + CR^d e^{-\gamma}(1+Ne^{-\frac{2\beta}{r^{d-2}}} + N^2 e^{-\frac{(n-1) \beta}{r^{d-2}}}) \\ &\leq CN^{-\frac1d} \sqrt{\log N} + CR^{-d} + CR^d e^{-\gamma}(1+e^{C\gamma - \log N + C\log \log N}), \end{align*} where we chose $n=4$. We conclude $$ \limsup_{N \to \infty} \P^{V_N}_{N,\beta}(\{\eta_k \geq r\}) \leq CR^{-d} + CR^d e^{-\gamma} \leq Ce^{-\gamma/2}. $$ To conclude the result on $Z_k$, note that $$ r = \(\frac{\beta}{\log N}\)^{\frac{1}{d-2}} \(1 + \frac{2d-2}{(d-2)^2} \frac{\log \log N}{\log N} + \frac{\gamma}{(d-2) \log N}\) + O((\log N)^{-\frac{1}{d-2} - 2 + \varepsilon}) $$ for any $\varepsilon > 0$. \end{proof} \section{Discrepancy Bounds} \label{s.discrepancy} In this section, we prove \tref{fLL.over} and \tref{fLL.improved}. We will specialize to $d=2$, though results are possible for $d \geq 3$ if a version of \pref{energymin} can be proved. All implicit constants $C$ may depend on $\beta$ and on various characteristics of $W$ or $V$. 
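For concreteness, recall that in $d=2$ one may take the Coulomb kernel to be $$ \mathsf{g}(x) = -\log|x|, \qquad \Delta \mathsf{g} = -2\pi \delta_0, $$ assuming the standard normalization, so that the constant $c_2$ in the notation introduced next equals $2\pi$.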
We let $\mu(dx) = \frac{1}{c_d} \Delta W(x)dx$ throughout, where $\Delta \mathsf{g} = -c_d \delta_0$. It will be necessary to consider more general domains $\Omega$ for our discrepancy bounds. We will work with a domain $\Omega \subset \mathbb{R}^d$ that is an {\it $\alpha$-thin annulus} for fixed $\alpha \in (0,1]$. To be precise, we will take $\Omega$ to be either a ball of radius $R$ (if $\alpha = 1$) or the annulus $\Omega = \mathrm{Ann}_{[R-\alpha R,R]}(z)$ (for $\alpha \in (0,1)$). This means there exists some $C > 0$ such that \begin{equation} \label{e.thin.annular} C^{-1} \alpha R^{d} \leq |\Omega| \leq C \alpha R^{d} \quad \text{where}\ R := \diam(\Omega). \end{equation} We will use $R$ for $\diam(\Omega)$ throughout this section. We assume $\alpha R \geq C$ for a constant $C$ depending on $\beta$ and $\| \Delta W \|_{L^\infty(\mathbb{R}^d)}$, and that \begin{equation} \Delta W(x) \geq C^{-1} \quad \forall x \in \Omega_R. \end{equation} We also define thickened and thinned versions of $\Omega$. For $s \geq 0$, define $\Omega_s = \Omega \cup \{x \in \mathbb{R}^d : \dist(x,\partial \Omega) \leq s\}$, and for $s < 0$, define $\Omega_s = \{x \in \Omega : \dist(x,\partial \Omega) \geq |s|\}$. We will consider the event $E_{\rho,r,M}$ for parameters $\rho,r > 0$ and an integer $M > 0$. It is defined by $$ E_{\rho,r,M} = \{X(\Omega) \geq \mu(\Omega) + \rho |\Omega|\} \cap \{X(\Omega_{5r} \setminus \Omega_{-5r}) \leq M R^{d-1}r \}. $$ Let $\phi : \mathbb{R}^d \to \mathbb{R}$ be a smooth, nonnegative, rotationally symmetric function with $\int_{\mathbb{R}^d} \phi(x) dx = 1$ and support within $B_1(0)$. For $s > 0$, define $\phi_s(x) = s^{-d} \phi(x/s)$, and $\psi_s = \phi_s \ast \phi_1$. For a measure (or function) $\lambda$, we will write $\Phi_s \lambda$ for $\phi_s \ast \lambda$ when it makes sense. We will apply the isotropic averaging argument with operator $\mathrm{Iso}_{\mathbb X(\Omega), \psi_r}$.
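For orientation, with the normalization $\mathsf{g}(x) = -\log|x|$ in $d=2$, the deficit kernel $\mathsf{g} - \Phi_s \mathsf{g}$ is nonnegative, supported in $B_s(0)$, and has $L^1$ norm of order $s^2$, a fact used repeatedly below. For the building block $\mathsf{g}_s(x) = (\mathsf{g}(x) - \mathsf{g}(s))_+$ appearing in the proof of \lref{smooth.iso} below, one has $\| \mathsf{g}_s \|_{L^1(\mathbb{R}^2)} = \pi s^2/2$ exactly; the following quadrature sketch (illustrative only, not part of the argument) confirms the constant:

```python
import math

# In d = 2, g(x) = -log|x| and g_s(x) = (log(s/|x|))_+ has
# ||g_s||_{L^1(R^2)} = pi * s^2 / 2; averaging over s <= r recovers the
# O(r^2) bound on ||g - Phi_r g||_{L^1} used in the text.
def l1_norm_gs(s, n=100_000):
    # Midpoint quadrature of the radial integral int_0^s 2*pi*rho*log(s/rho) drho.
    h = s / n
    return sum(2 * math.pi * (j + 0.5) * h * math.log(s / ((j + 0.5) * h)) * h
               for j in range(n))

for s in (0.5, 1.0, 3.0):
    exact = math.pi * s * s / 2
    assert abs(l1_norm_gs(s) - exact) / exact < 1e-4
print("L1 norm of g_s matches pi*s^2/2")
```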
The analysis mainly hinges on a precise lower bound for \begin{equation} \label{e.LL.Delta} \Delta_{\rho,r,M} := \inf_{X_N \in E_{\rho,r,M}} \mathcal H^{W+U}(X_N) - \mathrm{Iso}_{\mathbb X(\Omega), \psi_r} \mathcal H^{W + U}(X_N). \end{equation} We want to be able to take $\rho$ very small and still achieve $\Delta_{\rho,r,M} \gg 1$. \subsection{Discrepancy upper bound.}\ In this subsection, we prove \tref{fLL.over}. We will prove our overcrowding bound for $\P^{W,U}_{N,\beta}$ on $\alpha$-thin annuli $\Omega$ under suitable conditions on the parameters. We will first control \eref{LL.Delta} in terms of an ``energy'' functional defined on measures. We will only need to consider measures $\nu$ with bounded densities, so for simplicity we restrict to this case from the start. For $r > 0$, define \begin{equation} \label{e.Erf.def} \mathcal E_r(\nu) = \frac12 \iint_{\mathbb{R}^d \times \mathbb{R}^d} (\mathsf{g} - \Phi_r^2 \mathsf{g})(x-y) \nu(dx)\nu(dy). \end{equation} We let $q \in \mathbb{R}$ denote a constant and also write $q(x) = q$ and $q(dx) = q dx$. We let $\nu_{|A}$ denote the restriction of $\nu$ to a set $A$. Define \begin{equation} \mathcal E_r(\nu;q) = \mathcal E_r(\nu + q_{|\Omega_{2r} \setminus \Omega}) - \iint_{\mathbb{R}^d \times \mathbb{R}^d} (\mathsf{g} - \Phi_r \mathsf{g})(x-y)q(x) (\nu(y) + q_{|\Omega_{r} \setminus \Omega}(y)) dx dy. \end{equation} We will relate $\Delta_{\rho,r,M}$ to $\mathcal E_r(q + \rho;q)$ where $q$ is a good approximation to $\mu$ near $\Omega$. Actually, we would like to relate it to $\mathcal E_r(\mu + \rho; \mu)$, suitably defined, but for technical reasons it is easier to replace $\mu$ by a constant. \begin{remark} The reason for the structure of $\psi_r$ is that \eref{Erf.def} includes the diagonal of $\mathbb{R}^d \times \mathbb{R}^d$, and the $\phi_1$ convolution within $\psi_r$ will eventually serve to smooth the empirical measure $X$.
The smoothing, a form of renormalization taming the formal infinity within $\mathcal E_r(X)$, is a well-known strategy in Coulomb gas theory. It is also the source of the limiting error in our method. \end{remark} \begin{lemma} \label{l.smooth.iso} We have \begin{align*} \mathrm{Iso}_{\{1,2\}, \psi_r} \mathsf{g}(x_1 - x_2) &= (\phi_1 \ast \phi_1 \ast \phi_r \ast \phi_r \ast \mathsf{g})(x_1 - x_2), \\ \mathrm{Iso}_{\{1,2\}, \psi_r} \mathsf{g}(x_1 - x_3) &\leq \mathsf{g}(x_1 - x_3). \end{align*} Furthermore, \begin{equation} \label{e.smooth.iso.pot} \mathrm{Iso}_{\{1\},\psi_r} W(x_1) = W(x_1) + \int_{\mathbb{R}^d} (\mathsf{g} - \Phi_1 \mathsf{g})(y - x_1) \mu(dy) + \iint_{\mathbb{R}^d \times \mathbb{R}^d} (\mathsf{g} - \Phi_r \mathsf{g})(y - x) \phi_1(x - x_1) dx \mu(dy). \end{equation} \end{lemma} \begin{proof} The first two equations follow immediately from the definition of $\mathrm{Iso}_{\{1,2\}, \psi_r}$ and superharmonicity of $\mathsf{g}$; see the proof of \pref{1C.LL.iso} for a similar calculation. Considering now isotropic averaging of the potential and letting $\hat \theta = (\cos \theta, \sin \theta)$, we have \begin{equation} \frac{1}{2\pi} \int_0^{2\pi} W(z + r \hat \theta) d\theta = W(z) + \int_{B_{r}(z)} (\mathsf{g}(x - z) - \mathsf{g}(r))\mu(dx). \end{equation} Defining $\mathsf{g}_s(x) = (\mathsf{g}(x) - \mathsf{g}(s))_+$, we compute \begin{equation} \phi_r \ast W(z) = \int_{\mathbb{R}^2} \phi_{r}(y) W(z + y) dy = \int_{\mathbb{R}^2} \phi_r(y) \frac{1}{2\pi} \int_0^{2\pi}W(z+ |y| \hat{\theta}) d\theta dy = W(z) + \(\int_{\mathbb{R}^2} \phi_{r}(y) \mathsf{g}_{|y|} dy\) \ast \mu(z).
\end{equation} Note that $\mathsf{g}_s = \mathsf{g} - \delta^{(s)} \ast \mathsf{g}$, where $\delta^{(s)}$ is the Dirac delta ``smeared'' evenly on the circle $\partial B_s(0)$, and so $$ \int_{\mathbb{R}^2} \phi_{r}(y) \mathsf{g}_{|y|}(x) dy = \mathsf{g}(x) - \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \mathsf{g}(x-z) \phi_r(y) \delta^{(|y|)}(dz) dy = \mathsf{g}(x) - \int \mathsf{g}(x-z) \phi_r(z) dz = (\mathsf{g} - \Phi_r \mathsf{g})(x). $$ In the last line, we used that $\int (\delta^{(|y|)}(dz) \phi_r(y)) dy = \phi_r(z)dz$, formally, since $\phi$ is radially symmetric. By applying the above computation twice, we have \begin{align*} \psi_r \ast W(x_i) &= \phi_r \ast (\phi_1 \ast W)(x_i) = (\phi_1 \ast W)(x_i) + ((\mathsf{g} - \Phi_r \mathsf{g}) \ast \phi_1 \ast \mu)(x_i) \\ &= W(x_i) + (\mathsf{g} - \Phi_1 \mathsf{g}) \ast \mu(x_i) + ((\mathsf{g} - \Phi_r \mathsf{g}) \ast \phi_1 \ast \mu)(x_i). \end{align*} The last line with $i=1$ gives \eref{smooth.iso.pot}. \end{proof} \begin{lemma} \label{l.twostep.energy.avg} Let $r > 1$ and $q \in \mathbb{R}$. We have \begin{align} \label{e.twostep.energy.avg} \lefteqn{\mathcal H^{W+U}(X_N) - \mathrm{Iso}_{\mathbb X(\Omega), \psi_r} \mathcal H^{W+U}(X_N) } \quad & \\ \notag &\geq \mathcal E_r((\Phi_1 X_{| \Omega})_{|\Omega}; q) + \iint_{\mathbb{R}^d \times \Omega} (\mathsf{g} - \Phi_r \mathsf{g})(y-x) (q - \mu)(dx) (\Phi_1 X_{| \Omega})(dy) - \mathrm{Error}_{\mathrm{bl}} - \mathrm{Error}_{\mathrm{vol}}, \end{align} where \begin{align} \label{e.twostep.energy.errorbl} \mathrm{Error}_{\mathrm{bl}} &\leq Cr^2 |q| X(\Omega \setminus \Omega_{-3r}) + Cr^3 q^2 R^{d-1} \\ \notag &\quad + Cr^2 ( \| \mu \|_{L^\infty(\Omega_{2r})} + |q|) X(\Omega \setminus \Omega_{-1}), \end{align} and \begin{equation} \label{e.twostep.energy.errorvol} \mathrm{Error}_{\mathrm{vol}} \leq -X(\Omega)\mathsf{g}(4r) + CX(\Omega).
\end{equation} \end{lemma} \begin{proof} By \lref{smooth.iso}, we have \begin{equation*} \mathrm{Iso}_{\mathbb X(\Omega),\psi_r} \mathcal H^0(X_N) \leq \frac12 \sum_{\substack{i \ne j \\ \{i,j\} \not \subset \mathbb X(\Omega)}} \mathsf{g}(x_i - x_j) + \frac12 \sum_{i,j \in \mathbb X(\Omega)} (\Phi_1^2 \Phi_r^2 \mathsf{g})(x_i - x_j) - \frac12 X(\Omega) \Phi_1^2 \Phi_r^2 \mathsf{g}(0). \end{equation*} We then use that $\Phi_1$ is self-adjoint to write $$ \frac12 \sum_{i,j \in \mathbb X(\Omega)} (\Phi_1^2 \Phi_r^2 \mathsf{g})(x_i - x_j) = \frac12 \iint_{\mathbb{R}^d \times \mathbb{R}^d} \Phi_r^2 \mathsf{g}(x - y) (\Phi_1 X_{|\Omega})(dx)(\Phi_1 X_{|\Omega})(dy). $$ We can also estimate $$ \frac12 \sum_{\substack{i,j \in \mathbb X(\Omega) \\ i \ne j}} \mathsf{g}(x_i - x_j) \geq \frac12 \iint_{\mathbb{R}^d \times \mathbb{R}^d} \mathsf{g}(x - y) (\Phi_1 X_{|\Omega})(dx)(\Phi_1 X_{|\Omega})(dy) - \frac12 X(\Omega) \Phi_1^2 \mathsf{g}(0) $$ using $\Phi_1^2 \mathsf{g} \leq \mathsf{g}$ pointwise and duality as above. It follows that \begin{align} \mathcal H^0(X_N) - \mathrm{Iso}_{\mathbb X(\Omega),\psi_r} \mathcal H^0(X_N) &\geq \frac12 \iint_{\mathbb{R}^d \times \mathbb{R}^d} (\mathsf{g} - \Phi_r^2 \mathsf{g})(x - y) (\Phi_1 X_{|\Omega})(dx)(\Phi_1 X_{|\Omega})(dy) \\ \notag &\quad + \frac12 X(\Omega) (\Phi_1^2 \Phi_r^2 \mathsf{g} (0)-\Phi_1^2 \mathsf{g}(0)). \end{align} We can compare the double integral above to $\mathcal E_r((\Phi_1 X_{|\Omega})_{|\Omega} + q_{|\Omega_{2r} \setminus \Omega})$. They will only differ by boundary layer terms. We start by simply restricting the integral to $\Omega \times \Omega$ using $\mathsf{g} - \Phi_r^2 \mathsf{g} \geq 0$.
After doing so, we find $$ \frac12 \iint_{\mathbb{R}^d \times \mathbb{R}^d} (\mathsf{g} - \Phi_r^2 \mathsf{g})(x - y) (\Phi_1 X_{|\Omega})^{\otimes 2} (dx,dy) \geq \mathcal E_r((\Phi_1 X_{|\Omega})_{|\Omega} + q_{|\Omega_{2r} \setminus \Omega}) - T_1 - T_2 $$ where \begin{align*} T_1 &= \iint_{\Omega \times \mathbb{R}^d} (\mathsf{g} - \Phi_r^2 \mathsf{g})(x-y) \Phi_1 X_{|\Omega}(dx) q_{|\Omega_{2r} \setminus \Omega}(dy) \leq Cr^2 |q| X(\Omega \setminus \Omega_{-2r-1}) \\ T_2 &= \iint_{\mathbb{R}^d \times \mathbb{R}^d} (\mathsf{g} - \Phi_r^2 \mathsf{g})(x-y) q_{|\Omega_{2r} \setminus \Omega}(dx) q_{|\Omega_{2r} \setminus \Omega}(dy) \leq C r^3 q^2 R^{d-1}. \end{align*} We also bound $$ \frac12 X(\Omega) (\Phi_1^2 \Phi_r^2 \mathsf{g} (0)-\Phi_1^2 \mathsf{g}(0)) \geq X(\Omega) (\mathsf{g}(4r) - C). $$ We now handle the potential terms. As usual, we can handle the $U$ terms by superharmonicity. For the $W$ term, we use \lref{smooth.iso} to see \begin{align} \label{e.W.iso.1} \lefteqn{ \mathrm{Iso}_{\mathbb X(\Omega),\psi_r} \sum_{i \in \mathbb X(\Omega)} W(x_i)} \quad & \\ \notag &= \sum_{i \in \mathbb X(\Omega)} W(x_i) + \iint_{\mathbb{R}^d \times \mathbb{R}^d} (\mathsf{g} - \Phi_1 \mathsf{g})(x-y) X_{|\Omega}(dx) \mu(dy) + \iint_{\mathbb{R}^d \times \mathbb{R}^d} (\mathsf{g} - \Phi_r \mathsf{g})(x-y) (\Phi_1 X_{|\Omega})(dx) \mu(dy). \end{align} The middle term on the RHS in \eref{W.iso.1} contributes to the volume error $\mathrm{Error}_{\mathrm{vol}}$. It is bounded by $$ \iint_{\mathbb{R}^d \times \mathbb{R}^d} (\mathsf{g} - \Phi_1 \mathsf{g})(x-y) X_{|\Omega}(dx) \mu(dy) \leq C \| \mu \|_{L^\infty(\Omega_1)} X(\Omega). $$ For the last term in \eref{W.iso.1}, we replace $\mu$ by $q$ to generate the term $$ \iint_{\mathbb{R}^d \times \mathbb{R}^d} (\mathsf{g} - \Phi_r \mathsf{g})(x-y) (\Phi_1 X_{|\Omega})(dx) (q - \mu)(dy) $$ in \eref{twostep.energy.avg}.
We then estimate \begin{align*} \lefteqn{ \iint_{\mathbb{R}^d \times \mathbb{R}^d} (\mathsf{g} - \Phi_r \mathsf{g})(x-y) (\Phi_1 X_{|\Omega})(dx) q(dy) } \quad & \\ &\leq \iint_{\Omega \times \mathbb{R}^d} (\mathsf{g} - \Phi_r \mathsf{g})(x-y) (\Phi_1 X_{|\Omega})(dx) q(dy) + Cr^2 |q| X(\Omega \setminus \Omega_{-1}) \\ &\leq \iint_{\mathbb{R}^d \times \mathbb{R}^d} (\mathsf{g} - \Phi_r \mathsf{g})(x-y) ((\Phi_1 X_{|\Omega})_{|\Omega} + q_{|\Omega_r \setminus \Omega})(dx) q(dy) + Cr^2 |q| X(\Omega \setminus \Omega_{-1}), \end{align*} where we used $\| \mathsf{g} - \Phi_r \mathsf{g} \|_{L^1(\mathbb{R}^d)} \leq Cr^2$ and the fact that the mass of $\Phi_1 X_{|\Omega}$ lying outside of $\Omega$ is bounded by $X(\Omega \setminus \Omega_{-1})$. Assembling the above estimates proves the lemma. \end{proof} We now turn to studying minimizers of $\mathcal E_r$ conditioned on the weight given to $\Omega$. The following proposition is the key technical result of \sref{discrepancy}, and it is the reason for considering the precise form of the energy $\mathcal E_r(\cdot;q)$. \begin{proposition} \label{p.energymin} Let $\Omega$ be an $\alpha$-thin annulus and $q \in \mathbb{R}$. We have \begin{equation} \label{e.energymin.inf} \inf_{\nu : \nu(\Omega)= q |\Omega|} \mathcal E_r(\nu;q) \geq 0 \end{equation} where the infimum is over measures $\nu$ supported on $\overline \Omega$ with a bounded Lebesgue density. \end{proposition} \begin{proof} The first step is to prove nonnegativity of $\mathcal E_r(\cdot)$ defined in \eref{Erf.def}. This is equivalent to proving that $\mathsf{g} - \Phi_r^2 \mathsf{g}$ is a positive-definite kernel. Note that this kernel can be written as a superposition of kernels of the form $$ x \mapsto \mathsf{g}_s(x) := \max(\mathsf{g}(x) - \mathsf{g}(s),0), \quad s > 0. $$ It thus suffices to prove that $\mathsf{g}_s$ is positive definite. For $d=2$, this is a standard fact from Gaussian multiplicative chaos (see e.g. \cite{RV13}, Section 2).
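The decomposition into the kernels $\mathsf{g}_s$ rests on the circle-average identity $\delta^{(s)} \ast \mathsf{g}(x) = -\log \max(|x|,s)$ from the proof of \lref{smooth.iso}, which is easy to confirm numerically (arbitrary sample points; not part of the argument):

```python
import math

def circle_avg_g(x, s, n=4096):
    # Average g(x - s*e^{it}) = -log|x - s*e^{it}| over the circle of radius s.
    total = 0.0
    for j in range(n):
        t = 2 * math.pi * (j + 0.5) / n
        dx, dy = x[0] - s * math.cos(t), x[1] - s * math.sin(t)
        total += -0.5 * math.log(dx * dx + dy * dy)
    return total / n

# One point inside the circle, one outside; the average should be
# -log(max(|x|, s)) in both cases.
diffs = []
for x, s in [((0.3, 0.1), 1.0), ((2.0, 1.0), 0.7)]:
    predicted = -math.log(max(math.hypot(*x), s))
    diffs.append(abs(circle_avg_g(x, s) - predicted))
print(diffs)  # both differences are at numerical-precision level
```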
In higher dimensions, we do not know if $\mathsf{g} - \Phi_r^2 \mathsf{g}$ is positive definite, and this is why we must specialize to $d=2$ in this section. Let $\nu$ be a measure supported on $\overline{\Omega}$ with a bounded Lebesgue density and $\nu(\Omega) = q|\Omega|$. We use that $\mathcal E_r(\cdot;q)$ is quadratic to expand \begin{align} \label{e.Er.quadratic} \mathcal E_r(\nu;q) &= \mathcal E_r(q_{|\Omega};q) + \mathcal E_r(q_{|\Omega}-\nu) \\ \notag &\quad + \iint_{\mathbb{R}^d \times \mathbb{R}^d} (\mathsf{g} - \Phi_r^2 \mathsf{g})(x-y) (\nu - q_{|\Omega})(dx) (q_{|\Omega_{2r}})(dy) - \iint_{\mathbb{R}^d \times \mathbb{R}^d} (\mathsf{g} - \Phi_r \mathsf{g})(x-y) q(dx) (\nu - q_{|\Omega})(dy). \end{align} We claim that both terms on the last line are $0$. Indeed, since $q$ is constant, we have \begin{equation} \label{e.qconv} \int_{\mathbb{R}^d} (\mathsf{g} - \Phi_r \mathsf{g})(x-y)q(y) dy = q c_{1,\phi,r},\quad \int_{\mathbb{R}^d} (\mathsf{g} - \Phi_r^2 \mathsf{g})(x-y)q(y) dy = q c_{2,\phi,r} \end{equation} for constants $c_{1,\phi,r}, c_{2,\phi,r}$ independent of $x$, and the same holds for $q_{|\Omega_{2r}}$ in place of $q$ as long as $x \in \Omega$. The claim follows. Since $\mathcal E_r(q_{|\Omega}-\nu) \geq 0$, we have proved that the infimum in \eref{energymin.inf} is attained at $\nu = q_{|\Omega}$. It remains to compute $\mathcal E_r(q_{|\Omega};q)$. We write $\mathcal E_r(q_{|\Omega};q)= T_1 + T_2 - T_3$ for \begin{align*} T_1 &= \frac12\iint_{\mathbb{R}^d \times \mathbb{R}^d} (\mathsf{g} - \Phi_r \mathsf{g})(x-y)q_{|\Omega_{2r}}(x)q_{|\Omega_{2r}}(y)dxdy, \\ T_2 &= \frac12\iint_{\mathbb{R}^d \times \mathbb{R}^d} \Phi_r (\mathsf{g} - \Phi_r \mathsf{g})(x-y) q_{|\Omega_{2r}}(x)q_{|\Omega_{2r}}(y)dxdy, \\ T_3 &= \iint_{\mathbb{R}^d \times \mathbb{R}^d} (\mathsf{g} - \Phi_r \mathsf{g})(x-y) q(x) q_{|\Omega_r}(y) dxdy. \end{align*} Note that $\mathsf{g} - \Phi_r \mathsf{g} \geq 0$.
We can then bound $ T_1 \geq \frac12 q^2 c_{1,\phi,r} |\Omega_r| $ and use $\Phi_r \mathbbm{1}_{\Omega_{2r}} \geq \mathbbm{1}_{\Omega_r}$ to see $$ T_2 = \frac{q^2}{2} \iint_{\mathbb{R}^d \times \mathbb{R}^d} (\mathsf{g} - \Phi_r \mathsf{g})(x-y) \Phi_r \mathbbm{1}_{\Omega_{2r}}(x) \mathbbm{1}_{\Omega_{2r}}(y) dx dy \geq \frac{q^2}{2} \iint_{\Omega_r \times \Omega_{2r}} (\mathsf{g} - \Phi_r \mathsf{g})(x-y) dx dy \geq \frac{c_{1,\phi,r} q^2}{2} |\Omega_r|. $$ Similarly, we have $T_3 \leq q^2 c_{1,\phi,r} |\Omega_r|$, and combining these bounds gives $$ \mathcal E_r(q_{|\Omega};q) = T_1 + T_2 - T_3 \geq \tfrac12 q^2 c_{1,\phi,r} |\Omega_r| + \tfrac12 q^2 c_{1,\phi,r} |\Omega_r| - q^2 c_{1,\phi,r} |\Omega_r| = 0, $$ which finishes the proof. \end{proof} \begin{proposition} \label{p.fLL.Delta} Let $\Omega$ be an $\alpha$-thin annulus. Assume the parameters $\rho,\alpha > 0$ and $M \geq 2$ and constant $q > 0$ satisfy \begin{align} \label{e.Mrho.bd} M + \log(\alpha R) &\leq C_0^{-1} (\alpha R)^{2/3} \rho, \\ \label{e.alphaR.bd} C_0^3 &\leq \alpha R, \\ \label{e.rho.lwrbd} C_0^2 &\leq (\alpha R)^{2/3} \rho, \\ \label{e.logRr.bd} \log R &\leq C_0^{-1} (\alpha R)^{\frac23}, \\ \label{e.qmu.bd} \| \mu - q \|_{L^\infty(\Omega_R)} &\leq C_0^{-1} \rho, \\ \label{e.rho.upbd} \rho &\leq \sqrt{C_0} \end{align} for a large enough constant $C_0$ depending on $(\inf_{\Omega} \mu)^{-1}$, $\sup_{\Omega_{R}} \mu$, $\beta$. Let $r = (\alpha R)^{\frac13}$. Then we have \begin{equation} \Delta_{\rho,r,M} - \beta^{-1} \alpha R^d \log(C \rho \alpha R^d) \geq C_0^{-1} \alpha^{2/3} R^{2/3} \rho \mu(\Omega). \end{equation} The second term on the LHS accounts for an entropy cost appearing in the proof of \pref{fLL.upbd}. \end{proposition} \begin{proof} Let $X_N \in E_{\rho,r,M}$ be arbitrary, and note that $|q| \leq C + C_0^{-1} \rho \leq C + 1$. Define $q_X = q + m_X$ for $$ m_X := \frac{1}{|\Omega|}(\Phi_1 X_{|\Omega} (\Omega) - q |\Omega|). $$ We will apply \lref{twostep.energy.avg} to $X_N$ in the event $E_{\rho,r,M}$ with constant $q_X$.
Since $q_X|\Omega| = \Phi_1 X_{|\Omega}(\Omega)$, by \pref{energymin} we have $$ \mathcal E_r((\Phi_1 X_{|\Omega})_{|\Omega}; q_X) \geq 0. $$ We compute $$ \iint_{\mathbb{R}^d \times \Omega} (\mathsf{g} - \Phi_r \mathsf{g})(y-x) (q_X - \mu)(dx) (\Phi_1 X_{| \Omega})(dy) \geq c_{\phi} r^2 q_X |\Omega| m_X - c_\phi r^2 \| q - \mu \|_{L^\infty(\Omega_r)} q_X(\Omega), $$ for some constant $c_{\phi} > 0$. We also have $$ q_X|\Omega| = \Phi_1 X_{|\Omega}(\Omega) \geq X(\Omega) - X(\Omega \setminus \Omega_{-1}), $$ and $$ m_X \geq \rho - \frac{X(\Omega \setminus \Omega_{-1})}{|\Omega|} - \| \mu - q \|_{L^\infty(\Omega)} \geq \rho - M \frac{r}{\alpha R} - \| \mu - q \|_{L^\infty(\Omega)}. $$ Define $T_0 = c_\phi r^2 X(\Omega) \rho$. We will show that $T_0$ is the dominant contribution to $\Delta_{\rho,r,M}$, bounding all other error terms by $\frac12 T_0$ in total, which will prove the proposition. By \lref{twostep.energy.avg} and the above, we have $$ \Delta_{\rho,r,M} \geq T_0 - Cr^2 m_X X(\Omega \setminus \Omega_{-1}) - Cr^2 X(\Omega) M \frac{r}{\alpha R} - C r^2\| \mu - q \|_{L^\infty(\Omega)}X(\Omega) - \mathrm{Error}_{\mathrm{vol}} - \mathrm{Error}_{\mathrm{bl}}, $$ where we recalled the error terms defined in \lref{twostep.energy.avg}. Using $m_X \leq C\sqrt{C_0}$, $\mu(\Omega) \geq C^{-1} |\Omega|$ and $M \leq C_0^{-1}(\alpha R)^{2/3} \rho$, we have $$ r^2 m_X X(\Omega \setminus \Omega_{-1}) \leq C \sqrt{C_0} \alpha^{2/3} R^{2/3} M R^{d-1} \alpha^{1/3} R^{1/3} \leq C\sqrt{C_0} M |\Omega| \leq C C_0^{-1/2} (\alpha R)^{2/3} \rho X(\Omega) \leq \frac{1}{100} T_0 $$ for $C_0$ large enough. Using \eref{Mrho.bd}, we see $$ Cr^2 X(\Omega) M \frac{r}{\alpha R} = Cr^2 X(\Omega) M \alpha^{-2/3} R^{-2/3} \leq C C_0^{-1} r^2 X(\Omega) \rho \leq \frac{1}{100} T_0. $$ By \eref{qmu.bd}, we have $$ C r^2\| \mu - q \|_{L^\infty(\Omega)}X(\Omega) \leq \frac{1}{100}T_0. $$ It remains to handle $\mathrm{Error}_\mathrm{vol}$ and $\mathrm{Error}_\mathrm{bl}$ and the entropy term.
We have $$ \mathrm{Error}_\mathrm{bl} = T_1 + T_2 $$ for $$ T_1 := Cr^2 X(\Omega \setminus \Omega_{-3r}) \leq CM \alpha R^{d} \leq C C_0^{-1} r^2 |\Omega| \rho \leq C C_0^{-1} r^2 X(\Omega) \rho \leq \frac{1}{100}T_0, $$ where we used \eref{Mrho.bd}, and $$ T_2 := Cr^3 R^{d-1} = C \alpha R^d \leq C C_0^{-1} |\Omega|r^2 \rho \leq \frac{1}{100} T_0, $$ where we used \eref{rho.lwrbd} to say $r^2 \rho \geq C_0$. We have $$ \mathrm{Error}_{\mathrm{vol}} = CX(\Omega) - X(\Omega) \mathsf{g}(4r). $$ Since $r^2 \rho \geq C_0$, we have $CX(\Omega) \leq \frac{1}{100}T_0$. For $d \geq 3$, we have $\mathsf{g}(4r) \leq C$ since $\alpha R \geq C_0$. In $d=2$, we have by \eref{Mrho.bd} that $$ -\mathsf{g}(4r) \leq \log 4 + \frac13\log(\alpha R) \leq 4 + C_0^{-1} r^2 \rho $$ so that $-X(\Omega) \mathsf{g}(4r)$ is dominated by $\frac{1}{100} T_0$. Finally, we consider $$ T_3 := \beta^{-1} \alpha R^d \log(C \rho \alpha R^d). $$ Note that (by \eref{Mrho.bd} applied twice and \eref{rho.upbd}) $$ M \alpha^{1/3} R^{d-2/3} \log M \leq C_0^{-1} |\Omega| \rho \log M \leq C_0^{-1} |\Omega| \rho \log(C_0^{-1/2} r^2). $$ Since $\log(C_0^{-1/2} r^2) \leq r^2$, this is dominated by $\beta \frac{1}{100} T_0$. For the other part, we have $$ M \alpha^{1/3} R^{d-2/3} \log \mu(\Omega) \leq C C_0^{-1} \rho |\Omega| \log |\Omega| \leq \frac{\beta}{100} T_0, $$ where we used \eref{logRr.bd}. \end{proof} We are finally ready to prove our overcrowding estimate. \begin{proposition} \label{p.fLL.upbd} Let $\Omega$ be an $\alpha$-thin annulus, and suppose, for a large enough constant $C_0$ depending on $(\inf_\Omega \mu)^{-1}$, $\sup_{\Omega_R} \mu$, and $\beta$, that $\rho$ and $\alpha$ satisfy \begin{align} \label{e.fLL.upbd.a1} (\alpha R)^{2/3} &\geq C_0^2 + C_0\log R - C_0 \log \alpha \\ \label{e.fLL.upbd.a2} (\alpha R)^{2/3} \rho &\geq C_0^3 + C_0\log (\alpha R). \end{align} Assume further that there exists a constant $q$ with $\| \mu - q\|_{L^\infty(\Omega_R)} \leq C_0^{-1} \min(\rho,1)$. 
Then we have \begin{equation} \P^{W,U}_{N,\beta} (\{X(\Omega) \geq \mu(\Omega) + \rho |\Omega| \}) \leq e^{-\beta c(\alpha R)^{2/3} \rho(\mu(\Omega) + \rho|\Omega|)} + e^{-c \beta(\alpha R)^{\frac13(d+2)} (\alpha R)^{\frac43} \rho^2}, \end{equation} for some $c > 0$. \end{proposition} \begin{proof} We first reduce to the case $\rho \leq \sqrt{C_0}$, which is needed to apply \pref{fLL.Delta}; the complementary case is handled by the high-density local law \tref{1C.LL}. Let $(A_\lambda)_{\lambda \in \Lambda}$ be a covering of $\Omega$ by balls of radius $\alpha R$ of cardinality at most $C\alpha^{-d+1}$ to see \begin{align*} \P^{W,U}_{N,\beta} (\{X(\Omega) \geq \mu(\Omega) + \rho |\Omega| \}) &\leq \P^{W,U}_{N,\beta} (\{X(\Omega) \geq C^{-1}\rho |\Omega| \}) \\ &\leq |\Lambda| \sup_{\lambda \in \Lambda} \P^{W,U}_{N,\beta}(\{X(A_\lambda) \geq C^{-1} \alpha^{d-1} \rho |\Omega| \}). \end{align*} When $\rho \geq \sqrt{C_0}$, we have that $\alpha^{d-1} \rho |\Omega| \geq C^{-1}\sqrt{C_0} |A_\lambda|$, so we may apply \tref{1C.LL} to see $$ \P^{W,U}_{N,\beta} (\{X(\Omega) \geq \mu(\Omega) + \rho |\Omega| \}) \leq e^{-c \beta (\alpha R)^{d+2}} $$ for some $c > 0$, where we used $\alpha R \gg -\log \alpha$. We may therefore assume $\rho \leq \sqrt{C_0}$ in what follows. Let $M = C_0^{-1}(\alpha R)^{2/3} \rho$ and $r = (\alpha R)^{1/3}$. We have $$ \P^{W,U}_{N,\beta} (\{X(\Omega) \geq \mu(\Omega) + \rho |\Omega| \}) \leq \P^{W,U}_{N,\beta} (E_{\rho, r, M}) + \P^{W,U}_{N,\beta} (\{ X(\Omega_{5r} \setminus \Omega_{-5r}) \geq C_0^{-1} \alpha^{1/3} R^{d-\frac23} M \}). $$ We control the latter probability using \tref{1C.LL}. Let $\{ \tilde A_\lambda \}_{\lambda \in \tilde \Lambda}$ be a finite covering of $\Omega_{5r}\setminus \Omega_{-5r}$ by balls of radius $r$ of cardinality at most $C r^{-d+1} R^{d-1} = C\alpha^{\frac13(-d+1)}R^{\frac23 (d-1)}$.
We have $$ \P^{W,U}_{N,\beta} (\{ X(\Omega_{5r} \setminus \Omega_{-5r}) \geq C_0^{-1} \alpha^{1/3} R^{d-\frac23} M \}) \leq \sum_{\lambda \in \tilde \Lambda} \P^{W,U}_{N,\beta} (\{ X(\tilde A_\lambda) \geq C^{-1} C_0^{-1} (\alpha R)^{\frac13 d} M \}). $$ Since $$ C_0^{-1} M = C_0^{-2} (\alpha R)^{\frac23} \rho \geq C_0 \quad \text{by }\eref{fLL.upbd.a2}, $$ we can apply \tref{1C.LL} to see $$ \P^{W,U}_{N,\beta} (\{ X(\Omega_{5r} \setminus \Omega_{-5r}) \geq C_0^{-1} \alpha^{1/3} R^{d-\frac23} M \}) \leq e^{-c\beta (\alpha R)^{\frac13(d+2)}}, $$ where we used $\alpha R \geq C_0(\log R - \log \alpha)$ from \eref{fLL.upbd.a1}. Next, we apply the isotropic averaging argument to bound the probability of $E_{\rho,r,M}$. Let $n_\rho = \lceil \rho |\Omega| + \mu(\Omega) \rceil$ and $M_\rho = \lceil C_0^{-1} \alpha R^d \rceil$. We write $$ E_{\rho,r,M} \subset \bigcup_{m = 0}^{M_\rho} \bigcup_{n = n_\rho}^N \bigcup_{\substack{\mathcal{M} \subset \{1,\ldots,N\} \\ |\mathcal{M}| = m}} \bigcup_{\substack{\mathcal{N} \subset \{1,\ldots,N\} \\ |\mathcal{N}| = n}} E_{\mathcal{M}, \mathcal{N}, r, M} $$ for $$ E_{\mathcal{M}, \mathcal{N}, r, M} = \{\mathbb X(\Omega) = \mathcal{N} \} \cap \{\mathbb X(\Omega_{5r} \setminus \Omega) = \mathcal{M} \} \cap \{ X(\Omega_{5r} \setminus \Omega_{-5r} ) \leq M rR^{d-1}\}. $$ Note that $$ \mathrm{Iso}_{\mathcal{N}, \psi_r}^\ast \mathbbm{1}_{E_{\mathcal{M}, \mathcal{N}, r, M}} \leq \mathbbm{1}_{\{\mathbb X(\Omega_{5r}) = \mathcal{M}\cup \mathcal{N}\}}.
$$ Indeed, for a nice enough function $F$ of $(x_i)_{i \in \mathcal{N}}$ and for any realization of $(x_i)_{i \not \in \mathcal{N}}$, we have \begin{align*} \lefteqn{\int_{(\mathbb{R}^d)^{\mathcal{N}}} \mathbbm{1}_{E_{\mathcal{M}, \mathcal{N}, r, M}}(X_N) \mathrm{Iso}_{\mathcal{N}, \psi_r} F((x_i)_{i \in \mathcal{N}}) \prod_{i \in \mathcal{N}} dx_i} \quad &\\ &= \int_{(\mathbb{R}^d)^{\mathcal{N}}} \int_{\Omega^{\mathcal{N}}} \mathbbm{1}_{E_{\mathcal{M}, \mathcal{N}, r, M}}(X_N) F((x_i + y_i)_{i \in \mathcal{N}}) \prod_{i \in \mathcal{N}} dx_i \prod_{i \in \mathcal{N}} \psi_r(y_i) dy_i \\ &= \int_{(\mathbb{R}^d)^{\mathcal{N}}} \int_{\prod_{i \in \mathcal{N}} (\Omega + y_i)} \mathbbm{1}_{E_{\mathcal{M}, \mathcal{N}, r, M}}(X_N - (y_i \mathbbm{1}_{i \in \mathcal{N}})) F((x_i)_{i \in \mathcal{N}}) \prod_{i \in \mathcal{N}} dx_i \prod_{i \in \mathcal{N}} \psi_r(y_i) dy_i \\ &\leq \int_{(\mathbb{R}^d)^{\mathcal{N}}} \int_{\Omega_{5r}^{\mathcal{N}}} \mathbbm{1}_{\{\mathbb X(\Omega_{5r}) = \mathcal{M}\cup \mathcal{N}\}}(X_N) F((x_i)_{i \in \mathcal{N}}) \prod_{i \in \mathcal{N}} dx_i \prod_{i \in \mathcal{N}} \psi_r(y_i) dy_i \\ &= \int_{(\mathbb{R}^d)^{\mathcal{N}}} \mathbbm{1}_{\{\mathbb X(\Omega_{5r}) = \mathcal{M}\cup \mathcal{N}\}}(X_N)F((x_i)_{i \in \mathcal{N}}) \prod_{i \in \mathcal{N}} dx_i. \end{align*} By isotropic averaging, we conclude, for $|\mathcal{N}| = n$, that $$ \P(E_{\mathcal{M}, \mathcal{N}, r, M}) \leq e^{-\beta \Delta_{\rho_n, r, M}}\P(\{\mathbb X(\Omega_{5r}) = \mathcal{M}\cup \mathcal{N}\}) $$ where $\rho_n := \frac{n - \mu(\Omega)}{|\Omega|}$ and $\Delta_{\rho_n, r, M}$ is as in \eref{LL.Delta}. By a union bound and exchangeability, we have $$ \P(E_{\rho,r,M}) \leq \sum_{m = 0}^{M_\rho} \sum_{n = n_\rho}^N {N \choose m}{N-m \choose n} e^{-\beta \Delta_{\rho_n, r, M}} \P(\{\mathbb X(\Omega_{5r}) = \{1,\ldots,m+n\}\}). $$ Here, WLOG we assumed $\mathcal{M}$ and $\mathcal{N}$ are disjoint. 
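The counting identity used in the next step can be checked in exact arithmetic: $\binom{N}{m}\binom{N-m}{n} = \binom{N}{m+n}\binom{m+n}{m}$, and $\binom{m+n}{m} \leq (n+1)^m$ always, which matches the stated $n^m$ bound up to an inconsequential $(1+1/n)^m$ factor (values below are arbitrary illustrations):

```python
from math import comb

for N, m, n in [(50, 3, 10), (100, 7, 7), (30, 0, 12), (40, 5, 20)]:
    # Exact identity behind the reduction by exchangeability.
    assert comb(N, m) * comb(N - m, n) == comb(N, m + n) * comb(m + n, m)
    # Elementary bound on the multinomial factor:
    # binom(m+n, m) = prod_{i<=m} (n+i)/i <= (n+1)^m.
    assert comb(m + n, m) <= (n + 1) ** m
print("counting identity verified")
```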
We have $$ \P(\{\mathbb X(\Omega_{5r}) = \{1,\ldots,m+n\}\}) = \frac{1}{{N \choose m+n}}\P(\{ X(\Omega_{5r}) = m+n\}) $$ and $$ \frac{{N \choose m}{N-m \choose n}}{{N \choose m+n}} = \frac{(m+n)!}{m! n!} \leq \frac{(n+m)^m}{m!} \leq n^m $$ whenever $m \leq n$, which for us is always the case if $C_0$ is large enough. Letting $j = m+n$, we have $$ \P(E_{\rho,r,M}) \leq \sum_{n = n_\rho}^N \sum_{j=n}^{n+M_\rho} n^{j - n}e^{-\beta \Delta_{\rho_n, r, M}} \P(\{X(\Omega_{5r}) = j\}) \leq \sum_{n=n_\rho}^N n^{M_\rho}e^{-\beta \Delta_{\rho_n,r,M}}. $$ We now apply \pref{fLL.Delta} to bound $$ n^{M_\rho}e^{-\beta \Delta_{\rho_n,r,M}} \leq e^{-\frac{\beta}{2} c_\phi r^2 \rho_n (\mu(\Omega) + \rho_n |\Omega|)} $$ where $c_\phi > 0$ depends only on $\phi$. Since $r^2 \rho_n \geq C_0$, we can write $$ \sum_{n=n_\rho}^N e^{-\frac{\beta}{2} c_\phi r^2 \rho_n (\mu(\Omega) + \rho_n |\Omega|)} \leq 2e^{-\frac{\beta}{2} c_\phi r^2 \rho(\mu(\Omega) + \rho|\Omega|)}, $$ and the proof is concluded. \end{proof} It is now a simple matter to prove \tref{fLL.over}. \begin{proof}[Proof of \tref{fLL.over}] We will apply \pref{fLL.upbd} to the $1$-thin annulus $\Omega = B_R(z)$ with a certain parameter $\rho$. For the constant $q$ approximation to $\mu = \frac{1}{c_d} \Delta W$, we take $q = \mu(z)$. If $W$ is quadratic near $B_R(z)$, this approximation is exact, but we focus on the general case. One finds that $$ \| \mu - q \|_{L^\infty(B_{2R}(z))} \leq CN^{-1/2} R, $$ whence we have the restriction $\rho \gg N^{-1/2}R$ in applying \pref{fLL.upbd}. We also have the restriction $\rho \gg R^{-2/3} \log R$ from \eref{fLL.upbd.a2}. If $ R \leq N^{\frac{3}{10}}, $ then the latter restriction is the only relevant one. Setting $\rho = TR^{-2/3} \log R$ for a large $T > 0$, we achieve $$ \P(\{X(B_R(z)) \geq \mu(B_R(z)) + \rho |B_R(z)|\}) \leq e^{-c R^{8/3} \rho} + e^{-c R^{8/3} \rho^2}, $$ as desired.
\end{proof} \subsection{Upgrading the discrepancy bound.}\ In this subsection, we upgrade \tref{fLL.over} using rigidity results for smooth test functions. We will assume the conditions stated in \tref{fLL.improved} throughout. We do not spend undue effort trying to optimize our bounds in $\beta$ or $V$ since our results are generally weaker than those of \cite{AS21}. Instead, our purpose is to show how our overcrowding estimates can be upgraded using known rigidity bounds for smooth linear statistics. In particular, we demonstrate that overcrowding bounds are sufficient to bound the absolute discrepancy, rather than just the positive part of the discrepancy. The mechanism for this consists of finding a screening region of excess positive charge near the boundary of a ball with large absolute discrepancy. We note that the idea of obtaining a screening region is already present and features prominently in \cite{L21}, and we do not add anything fundamentally new to this procedure, but rather adapt it for our overcrowding estimate. For $\alpha \in (0,1]$ and $R \in (0,\infty)$, let $\xi_{R,\alpha} : \mathbb{R} \to \mathbb{R}$ be a function satisfying \begin{itemize} \item $0 \leq \xi_{R,\alpha} \leq 1$ \item $\xi_{R,\alpha}(x) = 1 \quad \forall x \in [-R,R]$ and $\xi_{R,\alpha}(x) = 0 \quad \forall x \not \in (-R-\alpha R, R+\alpha R)$ \item $\xi'_{R,\alpha}(x) \leq 0$ for $x \geq 0$. \item $\sup_{x \in \mathbb{R}} |\xi^{(k)}_{R,\alpha}(x)| \leq C_k (\alpha R)^{-k}$ for $k=1,2,3,4$. \end{itemize} In what follows, we will consider the map $x \mapsto \xi_{R,\alpha}(|x|)$ from $\mathbb{R}^2 \to \mathbb{R}$, and by abuse of notation we will write this map as $\xi_{R,\alpha}$.
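One concrete choice of $\xi_{R,\alpha}$ (an illustration only; any function with the listed properties works) glues a degree-$9$ smoothstep, whose first four derivatives vanish at the endpoints of the transition region, onto the plateau:

```python
# A concrete C^4 cutoff xi_{R,alpha}: equal to 1 on [-R, R], 0 outside
# (-R-alpha*R, R+alpha*R), monotone on each side.  The smoothstep S below has
# vanishing derivatives up to order 4 at 0 and 1, so by the chain rule the
# k-th derivative of xi is O((alpha*R)^{-k}).  Illustrative choice only.

def smoothstep4(t):
    # S(0)=0, S(1)=1, S^(j)(0)=S^(j)(1)=0 for j=1..4; S'(t)=630 t^4 (1-t)^4 >= 0.
    if t <= 0.0:
        return 0.0
    if t >= 1.0:
        return 1.0
    return ((((70 * t - 315) * t + 540) * t - 420) * t + 126) * t ** 5

def xi(x, R, alpha):
    return 1.0 - smoothstep4((abs(x) - R) / (alpha * R))

R, alpha = 100.0, 0.25
assert xi(0.0, R, alpha) == 1.0 and xi(R, R, alpha) == 1.0
assert xi(R + alpha * R, R, alpha) == 0.0
# Monotone decreasing for x >= 0, with values in [0, 1].
grid = [R + alpha * R * j / 1000 for j in range(1001)]
vals = [xi(x, R, alpha) for x in grid]
assert all(a >= b for a, b in zip(vals, vals[1:]))
assert all(0.0 <= v <= 1.0 for v in vals)
print("cutoff properties verified")
```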
\begin{theorem}[Corollary of \cite{S22}, Theorem 1] \label{t.serfaty} Under the conditions of \tref{fLL.improved}, for a large enough constant $C > 0$ and $R \geq C$, we have \begin{equation} \label{e.serfaty} \log \mathbb{E}^{V_N}_{N,\beta} \exp(t \mathrm{Fluct}(\xi_{R,\alpha})) \leq C + \frac{C}{\alpha^4 R^2} + \frac{CR^2}{N} \end{equation} for all $|t| \leq C^{-1}$. \end{theorem} \begin{proof} A more general estimate on a non-blown-up scale, i.e.\ the scale with interstitial distance of order $N^{-1/2}$, and with the thermal equilibrium measure $\mu_{\theta}$ in place of the equilibrium measure $\mueq$, is Theorem 1 of \cite{S22}. We can then use $\beta \geq C^{-1}$, transform to our length scale, and plug in the properties of $\xi_{R,\alpha}$. We also use (see \cite{AS19}, Theorem 1) $$ \|\mu_{\theta} - \mueq \|_{L^\infty(\mathbb{R}^d)} \leq \frac{C}{N} $$ to replace the thermal equilibrium measure by $\mueq$, generating the $CR^2N^{-1}$ term in \eref{serfaty}. \end{proof} The following proposition shows that one can find a ``screening region'' of excess positive charge whenever the absolute discrepancy is large in a ball. It is a natural consequence of rigidity for fluctuations of smooth linear statistics. \begin{proposition} \label{p.disc.screen.region} Fix $\delta \in (0,d)$ and $R \geq C$ for a large enough $C > 0$ and $\alpha \in (0,1]$. For any $T \geq 1$, if $\mathrm{Disc}(B_R(z)) \geq T R^\delta$ and $\mathrm{Fluct}(\xi_{R-\alpha R,\alpha}) \leq \frac{T}{10}R^{\delta}$, we can find $k \in \mathbb{Z}^{\geq 0}$ such that $$ \mathrm{Disc}(\mathrm{Ann}_{[R-\alpha_k R, R]}(z)) \geq \frac{T}{2} R^\delta $$ for $\alpha_k = \alpha - C_1^{-1} R^{-d + \delta} k$, $\alpha_k \in (0,\alpha]$, with a large enough constant $C_1$. If instead $\mathrm{Disc}(B_R(z)) \leq -TR^{\delta}$ and $\mathrm{Fluct}(\xi_{R,\alpha}) \geq -\frac{T}{10} R^{\delta}$, we can find $\alpha_k$ as above with $$ \mathrm{Disc}(\mathrm{Ann}_{[R, R+\alpha_k R]}(z)) \geq \frac{T}{2} R^\delta.
$$ \end{proposition} \begin{proof} First, note that $$ \mathrm{Disc}(B_{s+\varepsilon}(z)) - \mathrm{Disc}(B_s(z)) = \int_{\mathrm{Ann}_{[s,s+\varepsilon)}(z)} \mathrm{fluct}(dx) $$ whenever $\mathrm{fluct}$ has no atoms on $\partial B_s(z)$ or $\partial B_{s+\varepsilon}(z)$. Thus, we have by integration in spherical coordinates and integration by parts that \begin{align*} \mathrm{Fluct}(\xi_{R-\alpha R,\alpha}) &= \int_0^{R} \xi_{R-\alpha R,\alpha}(s) d(\mathrm{Disc}(B_s(z))) = -\int_0^R \frac{d}{ds} \xi_{R-\alpha R,\alpha}(s) \mathrm{Disc}(B_s(z)) ds \\ &= \mathrm{Disc}(B_{R}(z)) -\int_{R-\alpha R}^R \frac{d}{ds} \xi_{R-\alpha R,\alpha}(s) \(\mathrm{Disc}(B_s(z)) - \mathrm{Disc}(B_{R}(z))\) ds. \end{align*} Assuming now that $\mathrm{Disc}(B_R(z)) \geq T R^\delta$ and $\mathrm{Fluct}(\xi_{R-\alpha R,\alpha}) \leq \frac{T}{10} R^\delta$, since $-\frac{d}{ds}\xi_{R-\alpha R,\alpha}$ is nonnegative with total integral $1$, there must exist $s \in (R-\alpha R,R)$ such that $\mathrm{Disc}(B_s(z)) - \mathrm{Disc}(B_{R}(z)) \leq \frac{-9T}{10} R^{\delta}$. This means that $$ \mathrm{Disc}(\mathrm{Ann}_{[s,R)}(z)) \geq \frac{9T}{10} R^\delta. $$ Let $k$ be such that $R - \alpha R \leq R - \alpha_k R < s$. We have that $$ \mathrm{Disc}(\mathrm{Ann}_{(R-\alpha_k R,R)}(z)) \geq \mathrm{Disc}(\mathrm{Ann}_{[s,R)}(z)) - \mu(\mathrm{Ann}_{(R-\alpha_k R,s]}(z)) \geq \mathrm{Disc}(\mathrm{Ann}_{[s,R)}(z)) - CR^{d-1}|s - (R - \alpha_k R)|. $$ We choose $k$ such that $|s-(R- \alpha_k R)| \leq C_1^{-1}R^{-d+1 + \delta}$ for a large enough constant $C_1$ to conclude. If we instead assume $\mathrm{Disc}(B_R(z)) \leq -TR^{\delta}$ and $\mathrm{Fluct}(\xi_{R,\alpha}) \geq -\frac{T}{10} R^{\delta}$, then, since $$ \mathrm{Fluct}(\xi_{R,\alpha}) = \mathrm{Disc}(B_{R}(z)) -\int_{R}^{R+\alpha R} \frac{d}{ds} \xi_{R,\alpha}(s) \(\mathrm{Disc}(B_s(z)) - \mathrm{Disc}(B_{R}(z))\) ds, $$ we can find $s \in (R,R+\alpha R)$ with $$ \mathrm{Disc}(\mathrm{Ann}_{[R,s)}(z)) \geq \frac{9T}{10} R^\delta.
$$ Choosing $\alpha_k$ such that $s \leq R + \alpha_k R \leq R+\alpha R$ and $|s - (R+\alpha_k R)| \leq C_1^{-1} R^{-d+1+\delta}$ is sufficient to conclude. \end{proof} We are now ready to prove \tref{fLL.improved}. \begin{proof}[Proof of \tref{fLL.improved}] We will consider the case $\mathrm{Disc}(B_R(z)) \geq T R^{\delta} \log R$ for $\delta \in (1,2)$ and $T > 1$ first. We choose $\alpha = R^{-\lambda}$ for $\lambda = \frac{10}{13}$ and consider discrepancies $TR^{\delta}\log R$ for $\delta = 2 - \frac{12}{13}$ and $T$ sufficiently large. If $\mathrm{Fluct}(\xi_{R-\alpha R,\alpha}) \leq \frac{9T}{10}R^{\delta}$, we can use \pref{disc.screen.region} to find $\alpha_k \in (0,\alpha]$ such that $\mathrm{Disc}(\mathrm{Ann}_{[R-\alpha_k R, R]}(z)) \geq \frac{T}{2}R^{\delta}\log R$. By a union bound, we have \begin{align} \label{e.disc.breakdown} \lefteqn{ \P^{V_N}_{N,\beta}(\{\mathrm{Disc}(B_R(z)) \geq TR^{\delta}\log R\}) } \quad & \\ \notag & \leq \P^{V_N}_{N,\beta}(\{\mathrm{Fluct}(\xi_{R-\alpha R,\alpha}) > \frac{9T}{10}R^{\delta}\}) + C \alpha^{-1} R^{2-\delta} \sup_{\alpha_k} \P( \mathrm{Disc}(\mathrm{Ann}_{[R-\alpha_k R, R]}(z)) \geq \frac{T}{2}R^{\delta}\log R ), \end{align} where the supremum is over all $\alpha_k = \alpha - C_1^{-1} R^{\delta-2}k$, $k \in \mathbb{Z}$ and $\alpha_k \in [0,\alpha)$. Note that we have $\alpha_k \geq C^{-1} R^{-2+\delta} \gg R^{-1}$ always. Next, we bound the supremum in \eref{disc.breakdown}. Let $\alpha' \in [C^{-1}R^{-2+\delta},\alpha]$ and let $\Omega$ be the $\alpha'$-thin annulus $\mathrm{Ann}_{[R-\alpha'R,R]}(z)$. We bound $$ \P( \{ \mathrm{Disc}(\Omega) \geq \frac{T}{2}R^{\delta}\log R \} ) \leq \P( X(\Omega) \geq \mu(\Omega) + \rho |\Omega| ) $$ for $\rho = C^{-1} T (\alpha')^{-1} R^{\delta - 2} \log R$.
Applying \pref{fLL.upbd} shows \begin{equation} \P(\{ \mathrm{Disc}(\Omega) \geq \frac{T}{2}R^{\delta} \log R \}) \leq e^{-c\beta (\alpha' R)^{2/3} \rho(\mu(\Omega) + \rho|\Omega|)} + e^{-c \beta(\alpha' R)^{8/3} \rho^2} \end{equation} whenever $T R^{\delta - 4/3 +\frac13 \lambda}$ is very large and we can estimate $\| \mu - q \|_{L^\infty(B_{2R}(z))} \leq C_0^{-1} R^{\delta - 2}$ for a constant $q$. One can check that this is possible with $\delta = 2 - \frac{12}{13}$ and $\lambda = \frac{10}{13}$ when $T$ is sufficiently large, and with $R \leq N^{13/50}$ under the assumption $V \in C^3_{\mathrm{loc}}(\mathbb{R}^d)$. Using $\alpha' \geq C^{-1} R^{-2+\delta}$, one can bound $$ (\alpha' R)^{\frac43} \rho^2 \geq C^{-1} $$ whence $$ \P(\{ \mathrm{Disc}(\Omega) \geq \frac{T}{2}R^{\delta} \log R\}) \leq e^{-cTR^{2}} + e^{-cR^{4/39} T^2}. $$ We use this to bound the last term on the RHS of \eref{disc.breakdown}. Considering the other term in \eref{disc.breakdown}, we apply \tref{serfaty} and Chebyshev's inequality to bound $$ \P^{V_N}_{N,\beta}(\{\mathrm{Fluct}(\xi_{R-\alpha R,\alpha}) > \frac{9T}{10}R^{\delta}\}) \leq e^{-t \frac{9T}{10} R^\delta + tA} $$ for $t = C^{-1}$ and $$ A = CR^{-2+4\lambda} + CN^{-1}R^2 \leq CR^{-2+4\lambda}, $$ where we used $R \leq N^{13/50}$ in the last inequality. Finally, with $\delta = 2 - \frac{12}{13}$ and $\lambda = \frac{10}{13}$ one sees that for $T$ sufficiently large, we can bound $$ \P^{V_N}_{N,\beta}(\{\mathrm{Fluct}(\xi_{R-\alpha R,\alpha}) > \frac{9T}{10}R^{\delta}\}) \leq e^{-c T R^{\delta}} $$ for some $c > 0$. This finishes one half of the proof of \tref{fLL.improved}. The proof of the lower bound on $\mathrm{Disc}(B_R(z))$ is nearly identical, except we use fluctuation bounds on $\xi_{R,\alpha}$ to find a screening region outside of $\partial B_R(z)$ with positive discrepancy. \end{proof} \addcontentsline{toc}{section}{References} \bibliographystyle{plain}
\section{Field Theory Approach} We normalise the free boson Hamiltonian density: \begin{equation} {\cal H}={1\over 2}[(\partial_x \phi )^2+(\partial_x\theta )^2].\end{equation} The staggered components of the spin operators are represented in the conformal embedding by \begin{equation} \hbox{tr}g\vec \sigma \propto i\sigma (\sin \sqrt{\pi}\phi ,\cos \sqrt{\pi} \phi ,\cos \sqrt{\pi}\theta ).\label{gs}\end{equation} We see from this equation, and Eq. (3) of the paper, that $g$ has scaling dimension $1/8+1/4=3/8$, $(\hbox{tr}g)^2$ has dimension $1$ and $\vec J_L\cdot \vec J_R$ has dimension $2$, all correct values for the $SU(2)_2$ WZW model. The total central charge is $c=1+1/2=3/2$, also the correct value. Eq. (\ref{gs}) is also consistent with $\langle \hbox {tr}g\vec \sigma \rangle =0$ in all 3 phases, as must be the case since spin rotation symmetry is unbroken in all 3 phases. We also see from Eq. (\ref{gs}) that a spin rotation around the $z$ axis, $S^\pm_j\to e^{\pm i\alpha }S^\pm_j$, corresponds to $\phi \to \phi +\alpha /\sqrt{\pi}$, the $U(1)$ symmetry of the boson model. Thus all excitations of non-zero $S^z$ are in the boson sector. Since all bosonic excitations are gapped on the Ising critical line, it follows from $SU(2)$ symmetry that all gapless excitations must have zero total spin on the Ising line. We note that $\theta$ and $\phi$ are not simply periodic bosons but rather $(\theta ,\sigma )$ should be identified with $(\theta +\sqrt{\pi},-\sigma )$ and $(\phi ,\sigma )$ should be identified with $(\phi +\sqrt{\pi},-\sigma )$. Therefore, for $\lambda_1<0$, there are only 2 inequivalent ground states, not 4, corresponding to the sign of $\langle \sigma \sin \sqrt{\pi}\theta \rangle$. In the Haldane phase where $\langle \sigma\rangle =0$, there is only 1 ground state, the states with $\theta$ pinned at $0$ or $\sqrt{\pi}$ being equivalent.
Likewise, in the NNN Haldane phase where $\langle \sigma\rangle =0$, the states with $\theta$ pinned at $\pm \sqrt{\pi}/2$ are equivalent. The assumption that open boundary conditions in the Haldane phase impose a boundary condition $\theta (0)=\pm \sqrt{\pi}/2$ on the field theory may need further justification. This assumption can be justified close to the $c=3/2$ critical line by observing that the effective boundary magnetic field is $O(1)$ whereas the Haldane gap is very small. \section{Finite Size Spectrum} There are 3 conformal towers that can occur in the finite size spectrum (FSS) of the Ising model, labeled by the corresponding primary fields, $I$, $\epsilon$ and $\sigma$, with dimensions $0$, $1/2$ and $1/16$ respectively. The finite size spectra of the Ising model with the four different boundary conditions discussed in this paper were all worked out by Cardy.\cite{cardy86} With PBC and anti-periodic boundary conditions, direct products of conformal towers in left and right-moving sectors occur: $(I,I)+(\epsilon ,\epsilon )+(\sigma ,\sigma )$ and $(I,\epsilon )+(\epsilon ,I)+(\sigma ,\sigma )$ respectively. The corresponding energies and momenta are: \begin{eqnarray} E&=&\epsilon_0N+{2\pi v\over N}\left[-{1\over 24}+x_R+x_L\right]\nonumber \\ P&=&{2\pi \over N}[x_R-x_L] \end{eqnarray} where $\epsilon_0$ is a non-universal constant and $x_R$ and $x_L$ are dimensions of chiral operators: dimensions of primary operators plus non-negative integers. The PBC ground state has $x_R=x_L=0$ and the anti-periodic boundary conditions ground state has $x_R=x_L=1/16$. For $\uparrow ,\uparrow$ boundary conditions only the conformal tower $I$ occurs and for $\uparrow ,\downarrow$ boundary conditions, only $\epsilon$ occurs. The corresponding finite size spectrum is: \begin{equation} E=\epsilon_0N+\epsilon_1+{\pi v\over N}\left[-{1\over 48}+x\right] \end{equation} where $\epsilon_1$ is another non-universal constant and $x$ is a dimension.
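As a consistency check, the universal $1/N$ coefficients quoted in Table I follow directly from these formulas and the Ising dimensions $0$, $1/2$, $1/16$; a minimal sketch in exact rational arithmetic (function names are ours, energies in units of $\pi v/N$ with the non-universal terms subtracted):

```python
from fractions import Fraction as F

def pbc_level(xR, xL):
    # (E - eps_0*N) in units of pi*v/N for a periodic chain: 2(-1/24 + x_R + x_L)
    return 2 * (F(-1, 24) + xR + xL)

def obc_level(x):
    # (E - eps_0*N - eps_1) in units of pi*v/N for an open chain: -1/48 + x
    return F(-1, 48) + x

print(pbc_level(0, 0))                # -1/12 : PBC ground state (x_R = x_L = 0)
print(pbc_level(F(1, 16), F(1, 16)))  # 1/6   : anti-periodic ground state (sigma tower)
print(obc_level(0))                   # -1/48 : up-up ground state (tower I)
print(obc_level(F(1, 2)))             # 23/48 : up-down ground state (tower epsilon)
```

The four printed values reproduce the CFT column of Table I.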
The characters of these conformal towers, determining the multiplicities of excited states, were calculated in [\onlinecite{rocha-caridi}]. The CFT predictions for the first few states in the spectrum, for all 4 boundary conditions are given in Table I and compared to our DMRG results. \begin{table} \begin{tabular}{l||r|r} & &DMRG\\ Energy level& CFT&$J_2=0.7$\\ & Ising &$J_3=0.058$\\ \hline \hline OBC, Even, ground state&-1/48& -1/48\\ \hline OBC, Even, 1st excited state&2& 1.99\\ \hline OBC, Even, 2nd excited state&3& 2.90\\ \hline OBC, Even, 3rd excited state&4& 3.82\\ \hline OBC, Even, 4th excited state&4& 3.87\\ \hline \hline OBC, Odd, ground state&23/48 $\simeq$ 0.479& 0.477\\ \hline OBC, Odd, 1st excited state&1& 1.00 \\ \hline OBC, Odd, 2nd excited state&2& 1.98 \\ \hline OBC, Odd, 3rd excited state&3& 2.98\\ \hline OBC, Odd, 4th excited state&4& 3.97\\ \hline \hline PBC, Even, ground state&-1/12 $\simeq$ -0.0833& -0.094\\ \hline \hline PBC, Odd, ground state&1/6 $\simeq$ 0.167 & 0.196 \end{tabular} \caption{Energy levels on Ising line. Ground state refers to the $1/N$ term in the ground state energy. For excited states, the gap above the ground state is given. Results are in units of $\pi v/N$. 
Note the degeneracy of the 3rd and 4th excited states, for OBC, $N$ even, which occurs in the Ising conformal tower\cite{rocha-caridi} and is well-reproduced by our DMRG results.} \end{table} \begin{table} \begin{tabular}{l||r|r} & &DMRG\\ Energy level& CFT&$J_2=0.12$\\ & SU(2)$_2$ &$J_3=0.087$\\ \hline \hline OBC, Even, ground state $S_z^\mathrm{tot}=0$ &-1/16& -1/16\\ \hline OBC, Even, ground state $S_z^\mathrm{tot}=1$ &1& 1.027\\ \hline OBC, Even, ground state $S_z^\mathrm{tot}=2$ &2& 2.052\\ \hline OBC, Even, ground state $S_z^\mathrm{tot}=3$ &5& 5.14\\ \hline OBC, Even, ground state $S_z^\mathrm{tot}=4$ &8& 8.29\\ \hline OBC, Even, ground state $S_z^\mathrm{tot}=5$ &13& 13.50\\ \hline \hline OBC, Odd, ground state $S_z^\mathrm{tot}=1$ &7/16 & \\ &$\simeq$ 0.4375 & 0.443\\ \hline OBC, Odd, 1st excited state $S_z^\mathrm{tot}=1$ &1& 1.052\\ \hline OBC, Odd, ground state $S_z^\mathrm{tot}=2$ &2& 2.052\\ \hline OBC, Odd, ground state $S_z^\mathrm{tot}=3$ &4& 4.12\\ \hline OBC, Odd, ground state $S_z^\mathrm{tot}=4$ &8& 8.26\\ \hline OBC, Odd, ground state $S_z^\mathrm{tot}=5$ &12& 12.52 \end{tabular} \caption{Energy levels at the $SU(2)_2$ critical point. Ground state $S_z^\mathrm{tot}=0$ and odd $S_z^\mathrm{tot}=1$ refer to the $1/N$ term in the ground state energy. For the rest, the gap above the ground state is given. Results are in units of $\pi v/N$.} \end{table} \begin{figure}[h!] \includegraphics[width=0.47\textwidth]{conformal_tower_wzw} \caption{(Color online) Ground state and excitation energy at $J_2=0.12$ and $J_3=0.087$, on the critical line between the Haldane and the Dimerized phases. Upper panels: Linear scaling of the ground state energy per site with $1/N^2$ after subtracting $\varepsilon_0$ and $\varepsilon_1$ in open chains with a) even and b) odd numbers of sites $N$.
c) and d) Energy gap between the ground state and the lowest energies in different sectors of $S_z^\mathrm{tot}=0,1,...,5$ (black symbols) as a function of $1/N$ for even and odd numbers of sites. The multiplicity of the ground state and of the first excited states has been obtained by calculating excited states in the sectors $S_z^\mathrm{tot}=0$ (blue crosses) and $S_z^\mathrm{tot}=1$ (blue pluses). Insets: Conformal towers for even and odd $N$. Black and blue symbols are DMRG data for the ground states in different sectors of $S_z^\mathrm{tot}$ and for the first excited state in the sector $S_z^\mathrm{tot}=1$.} \label{fig:ct_wzw} \end{figure} For the $SU(2)_2$ WZW model there are 3 conformal towers labeled by the spin of the lowest energy states, $j=0$, $1/2$ and $1$. Finite size spectra with conformally invariant boundary conditions at both ends of the system can be determined from the corresponding boundary states, which are labeled by the primary operators.\cite{cardy89} OBC with $N$ even in our model corresponds to the $|0\rangle$ boundary state at both ends of the system and the corresponding conformal tower in the FSS is $j=0$. Going to an odd number of sites is formally analogous to the infrared fixed point spectrum of a spin-1 Kondo model and the corresponding boundary state changes to $|1\rangle$ at one end of the system.\cite{affleckludwig91} The resulting FSS contains the $j=1$ conformal tower. Thus the ground state energies of an open chain with an even or odd number of sites are: \begin{equation} E_\mathrm{even}=\varepsilon_0 N+\varepsilon_1-\frac{\pi v}{16N},\ E_\mathrm{odd}=\varepsilon_0 N+\varepsilon_1+\frac{7\pi v}{16N}. \end{equation} In order to build the conformal tower at the end point $J_2=0.12$ and $J_3=0.087$, we calculate the gap between the ground state energy and the lowest energies in different sectors of $S_z^\mathrm{tot}$. The gap scales linearly with $1/N$ and the slope gives access to the velocity.
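The coefficients $-1/16$ and $7/16$ follow from the standard open-chain expression $E=\varepsilon_0 N+\varepsilon_1+\frac{\pi v}{N}(h-\frac{c}{24})$ with $c=3/2$ (the same expression reproduces the Ising $-1/48+x$ above with $c=1/2$), assuming the standard $SU(2)_k$ conformal weights $h_j=j(j+1)/(k+2)$; a quick check in exact arithmetic (function names are ours):

```python
from fractions import Fraction as F

def wzw_weight(j, k=2):
    # conformal weight h_j = j(j+1)/(k+2) of the spin-j primary of SU(2)_k
    return F(j * (j + 1), k + 2)

def obc_wzw_level(h, c=F(3, 2)):
    # (E - eps_0*N - eps_1) in units of pi*v/N for an open chain: h - c/24
    return h - c / 24

print(obc_wzw_level(wzw_weight(0)))  # -1/16 : even chain, j = 0 tower
print(obc_wzw_level(wzw_weight(1)))  # 7/16  : odd chain, j = 1 tower
```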
By calculating excited states in the sectors $S_z^\mathrm{tot}=0,1$, we could determine the multiplicity of the two lowest energy levels. In a chain with an even number of sites, the ground state is a singlet and the first excited state is a triplet, while for a chain with an odd number of sites, the ground state is a triplet and the first excited state is degenerate and consists of one triplet and one singlet, in complete agreement with CFT predictions. The DMRG data on the scaling are presented in Fig.~\ref{fig:ct_wzw}a) and b) and summarised in Table II. \begin{figure}[h!] \includegraphics[width=0.49\textwidth]{EndPoint} \caption{(Color online) Velocity along the critical line between the Haldane and the dimerized phases extracted from the gap between the $n$-th energy level and the ground state. Red, blue and green lines show results for a) $N=50$ and b) $N=51$. Similar results for a) $N=30,24,20$ and b) $N=31,25,21$ are shown in gray (from dark thick to light thin).} \label{fig:velocity} \end{figure} We checked that the conformal tower is destroyed by moving along the critical line away from the end point. To demonstrate this, we have plotted the velocities extracted from three different excitation levels $n$ (Fig.~\ref{fig:velocity}). At the end point, all velocities are expected to be the same, implying that the conformal tower is restored. This occurs around $J_2=0.12$, in agreement with the value determined from the critical exponent (see main text). \section{Central charge from entanglement entropy at the Ising transition} For a periodic chain with $N$ sites, the entanglement entropy of a subsystem of size $n$ is defined by $S_N(n)=-\mathrm{Tr}\rho_n\ln\rho_n$, where $\rho_n$ is the reduced density matrix.
According to conformal field theory, the entanglement entropy in periodic systems depends on the size of the block according to: \begin{equation} S_N(n)=\frac{c}{3}\ln\left[\frac{N}{\pi}\sin\left(\frac{\pi n}{N}\right)\right]+s_1 \label{eq:calabrese_cardy} \end{equation} Let us first focus on the Ising transition between the NNN-Haldane and the dimerized phases. In the vicinity of this phase transition the convergence of the entanglement entropy in the DMRG algorithm is very slow. This results in large oscillations that appear on top of the curve given by Eq.~\ref{eq:calabrese_cardy}. In principle these oscillations can be removed by increasing the number of sweeps and the number of states kept in DMRG. We went up to 16 sweeps keeping up to 900 states in two-site DMRG. With these parameters, oscillations disappear only for chains smaller than 30 sites. For larger systems, we have extracted the central charge for lower and upper curves of the entanglement entropy separately, as shown in Fig.~\ref{fig:cc_pbc}a). Note that the finite-size corrections to Eq.~\ref{eq:calabrese_cardy} are minimal when the block size $n$ is as far as possible from the extreme values $1$ and $N$\cite{franca}. Therefore we discard a few points close to the edges while fitting. Alternatively, one can estimate the finite-size central charge by calculating it in the middle of the curve with only two points (see sketches with diamonds in Fig.~\ref{fig:cc_pbc}a)). Using Eq.~\ref{eq:calabrese_cardy} leads to the estimates: \begin{equation} c_{k}=\frac{3\left[S_N(\frac{N}{2}-(k+2))-S_N(\frac{N}{2}-k)\right]}{\ln\left[\cos(\frac{(k+2)\pi}{N})/\cos(\frac{k\pi}{N})\right]}, \label{eq:cc} \end{equation} where $k=0,1$ for upper and lower curves. For each system size, we then extrapolate the extracted values of the central charges with the number of states kept in DMRG algorithm. The extrapolated values of the central charge as a function of system size $N$ are shown in Fig.~\ref{fig:cc_summary}.
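On noise-free synthetic data generated from Eq.~\ref{eq:calabrese_cardy}, the two-point estimate of Eq.~\ref{eq:cc} recovers the input central charge exactly, since $\sin(\pi(N/2-k)/N)=\cos(k\pi/N)$; a minimal sketch (function names and the value of $s_1$ are ours):

```python
import numpy as np

def cc_entropy(n, N, c=0.5, s1=0.7):
    # Calabrese-Cardy entropy of a block of n sites in a periodic chain of N sites
    return (c / 3) * np.log((N / np.pi) * np.sin(np.pi * n / N)) + s1

def central_charge_midpoint(S, N, k):
    # two-point estimate of c near the chain centre (Eq. for c_k in the text)
    num = 3 * (S[N // 2 - (k + 2)] - S[N // 2 - k])
    den = np.log(np.cos((k + 2) * np.pi / N) / np.cos(k * np.pi / N))
    return num / den

N = 36
S = {n: cc_entropy(n, N) for n in range(1, N)}
print(central_charge_midpoint(S, N, 0))  # ≈ 0.5 (exact on noise-free data)
```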
They are consistent with $c=1/2$ in the thermodynamic limit. \begin{figure}[t!] \includegraphics[width=0.45\textwidth]{central_charge_pbc} \caption{(Color online) Extraction of the central charge for periodic boundary conditions and $J_2=0.7$, $J_3=0.058$. a) Example of entanglement entropy as a function of block size $n$ for $N=36$ sites and 800 states kept in DMRG. Light red and light blue lines are fits to the Calabrese-Cardy formula of Eq.~\ref{eq:calabrese_cardy}. Red and blue diamonds schematically show how the formula (\ref{eq:cc}) can be applied. b) Scaling of the central charge extracted in different ways with the number of states kept in the DMRG calculation. The lines are linear fits to the data-points (circles for the Calabrese-Cardy fit and diamonds for central charge calculated in the middle of the chain).} \label{fig:cc_pbc} \end{figure} \begin{figure}[t!] \includegraphics[width=0.45\textwidth]{central_charge_obc} \caption{(Color online) Entanglement entropy as a function of the conformal distance for $N=300$ (green), $600$ (blue) and $800$ (red) sites at $J_2=0.7$ and $J_3=0.058$. a) The solid lines are fits of the upper and lower curves to Eq.~\ref{eq:calabrese_cardy_obc}. The slopes of the fits give upper and lower limits for the central charge. b) Entanglement entropy after removing the Friedel oscillations with weight $\zeta\approx 2/9$. The data for $N=300$ and $600$ are shifted downward by 0.1 and 0.05 for clarity. } \label{fig:cc_obc} \end{figure} \begin{figure}[t!] \includegraphics[width=0.3\textwidth]{central_charge_summary} \caption{(Color online) Central charge for the Ising transition as a function of $1/N$. The light blue and red circles have been obtained with the fits of the upper and lower curves of the entanglement entropy with Calabrese-Cardy formula. The red and blue diamonds stand for the central charge extracted in the middle of each curve. All results are extrapolated with the inverse number of sweeps. 
Magenta triangles stand for the central charge extracted from the entanglement entropy in open chains.} \label{fig:cc_summary} \end{figure} \begin{figure}[t!] \includegraphics[width=0.45\textwidth]{dmrg_convergence} \caption{(Color online) Extrapolation of the DMRG results towards an infinite number of sweeps. a) Ground state energies for periodic chains with different numbers of sites. The continuation of the line is a fit linear in $1/sweep$ of the last few points. b) Ground-state energy and energy of the lowest excited states as a function of the inverse number of DMRG sweeps. Dots are DMRG results while red lines are linear fits of the last few points for each curve marked with large circles.} \label{fig:convergence} \end{figure} It is well established that the DMRG algorithm performs better for open systems, where much larger system sizes can be reached. In systems with open boundary conditions, the entanglement entropy scales with the block size according to: \begin{equation} S_N(n)=\frac{c}{6}\ln\left[\frac{2N}{\pi}\sin\left(\frac{\pi n}{N}\right)\right]+s_1+\log g \label{eq:calabrese_cardy_obc} \end{equation} Since we are dealing here with much larger system sizes, it is useful to present the results on a logarithmic scale by introducing the conformal distance $d$: \begin{equation} d=\frac{2N}{\pi}\sin\left(\frac{\pi n}{N}\right) \label{eq:conformal_distance} \end{equation} As in the case of periodic boundary conditions, large oscillations appear on top of the prediction of Eq.~\ref{eq:calabrese_cardy_obc}. However, in open systems, the oscillations are caused by Friedel oscillations and cannot be removed by increasing the number of sweeps or the number of states. Separate fits of the upper and lower curves of the entanglement entropy lead to rough estimates of the central charge: $c_\mathrm{lower}\approx0.41$ and $c_\mathrm{upper}\approx0.63$ (see Fig.~\ref{fig:cc_obc}a)).
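In terms of the conformal distance, Eq.~\ref{eq:calabrese_cardy_obc} is a straight line of slope $c/6$ in $\ln d$, so the central charge can be read off a linear fit; a sketch on synthetic oscillation-free data (variable names are ours):

```python
import numpy as np

def conformal_distance(n, N):
    # conformal distance d for a block of n sites in an open chain of N sites
    return (2 * N / np.pi) * np.sin(np.pi * n / N)

# synthetic oscillation-free OBC entropy with c = 1/2; the additive constant
# lumps together s_1 + log g
N, c, const = 300, 0.5, 0.8
n = np.arange(10, N - 9)
d = conformal_distance(n, N)
S = (c / 6) * np.log(d) + const

# slope of S against ln(d) gives c/6
slope, intercept = np.polyfit(np.log(d), S, 1)
print(6 * slope)  # ≈ 0.5
```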
In order to remove the oscillations, following Ref.~[\onlinecite{capponi}], we have subtracted the spin-spin correlation on the corresponding link from the entanglement entropy with some weight $\zeta$. Then the reduced entanglement entropy as a function of the conformal distance takes the form: \begin{equation} \tilde{S}_N(n)=\frac{c}{6}\ln d(n)+\zeta \langle {\bf S}_n{\bf S}_{n+1} \rangle+s_1+\log g \label{eq:calabrese_cardy_obc_corrected} \end{equation} The results of the numerical calculation of the central charge from the entanglement entropy for both OBC and PBC are summarized in Fig.~\ref{fig:cc_summary}. These results are consistent with $c=1/2$. \section{Convergence of energies in DMRG} Close to the critical point, the DMRG algorithm converges very slowly, especially for periodic boundary conditions. In Fig.~\ref{fig:convergence}a), we have plotted the ground-state energy of periodic chains as a function of the inverse number of sweeps. Note that we plot measurements after each passage through the system, whereas a sweep corresponds to going back and forth. So the variable sweep takes half-integer as well as integer values. The almost flat part of the curves for a large number of sweeps indicates that convergence was reached. For each curve, we have used the slope of the last few points to extrapolate the results to an infinite number of sweeps. We do up to 16 sweeps and keep up to 900 states. In the first 6-7 sweeps the number of kept states increases from 100 to approximately $90\%$ of the maximal value, in the following sweeps we jiggle the wave-function by decreasing and increasing the number of states until convergence is reached. The lack of convergence is also a problem for higher excited states even with open boundary conditions as shown in Fig.~\ref{fig:convergence}b). To estimate the excitation energies, we have extrapolated the last few points of each curve to an infinite number of sweeps with a linear fit in $1/sweep$.
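The extrapolation amounts to a linear fit of the energy against $1/\mathrm{sweep}$ and reading off the intercept; a sketch on hypothetical numbers converging linearly in $1/\mathrm{sweep}$ (the values are illustrative, not DMRG data):

```python
import numpy as np

# hypothetical energies measured after each half-sweep, converging
# linearly in 1/sweep towards E_inf
sweeps = np.array([5.0, 5.5, 6.0, 6.5, 7.0, 7.5, 8.0])
E_inf = -1.2345
energies = E_inf + 0.02 / sweeps

# linear fit of the last few points in 1/sweep; the intercept is the
# extrapolated infinite-sweep energy
slope, intercept = np.polyfit(1.0 / sweeps[-4:], energies[-4:], 1)
print(intercept)  # ≈ -1.2345
```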
We do 7-9 sweeps and the number of kept states increases linearly from 100 to 900. Therefore the scaling of the energy with the number of sweeps is equivalent to the scaling with the number of kept states. \end{document}
\section*{Introduction} Let $S=K[x_1,\ldots,x_n]$, $n\in {\bf N}$, be a polynomial ring over a field $K$ and $\mm=(x_1,\ldots,x_n)$. Let $I\supsetneq J$ be two monomial ideals of $S$ and $u\in I \setminus J$ a monomial. For $Z\subset \{x_1,\ldots ,x_n\}$ with $(J:u)\cap K[Z]=0$, let $uK[Z]$ be the linear $K$-subspace of $I/J$ generated by the elements $uf$, $f\in K[Z]$. A presentation of $I/J$ as a finite direct sum of such spaces ${\mathcal D}:\ I/J=\Dirsum_{i=1}^ru_iK[Z_i]$ is called a {\em Stanley decomposition} of $I/J$. Set $\sdepth (\mathcal{D}):=\min\{|Z_i|:i=1,\ldots,r\}$ and \[ \sdepth\ I/J :=\max\{\sdepth \ ({\mathcal D}):\; {\mathcal D}\; \text{is a Stanley decomposition of}\; I/J \}. \] Let $h$ be the height of $a=\sum_{P\in \Ass_SS/I} P$ and $r$ the minimum $t$ such that there exist $\{P_1,\ldots,P_t\}\subset \Ass_S S/I$ such that $\sum_{i=1}^t P_i=a$. We call the {\em size} of $I$ the integer $\size_SI=n-h+r-1$. Lyubeznik \cite{L} showed that $\depth_SI\geq 1+\size_SI$. If Stanley's Conjecture \cite{S} held, that is, if $\sdepth_S I/J\geq \depth_S I/J$, then we would also get $\sdepth_SI\geq 1+\size_SI$, as stated in \cite{HPV}. Unfortunately, there exists a counterexample in \cite{DGKM} to this conjecture for $I=S$, $J\not =0$, and it is possible that there are also counterexamples for $J=0$. However, the counterexample of \cite{DGKM} induces another one for $J\not =0$ and $I\not =S$ generated by $5$ monomials, which shows that our result from \cite{P1} is tight. This counterexample does not affect Question 1 from \cite{P2}. Y.-H. Shen noticed that the second statement of \cite[Lemma 3.2]{HPV} is false when $I$ is not squarefree, and so the proof from \cite{HPV} of $\sdepth_SI\geq 1+\size_SI$ is correct only when $I$ is squarefree. Since, by \cite{DGKM}, the depth is not a lower bound for the sdepth, a lower bound for the sdepth given by the size acquires a value of its own.
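For small examples, $\size_SI$ can be computed directly from the associated primes by brute force over subsets; a sketch (our own notation: each prime is represented by the set of indices of its generating variables, tested here on the data of Example \ref{e} below):

```python
from itertools import combinations

def size(n, primes):
    # primes: associated primes of S/I, each given as the set of indices
    # of the variables generating it
    a = frozenset().union(*primes)       # the sum a of all associated primes
    h = len(a)                           # its height
    for t in range(1, len(primes) + 1):  # minimal t with P_{i_1}+...+P_{i_t} = a
        if any(frozenset().union(*c) == a for c in combinations(primes, t)):
            return n - h + t - 1

# Example \ref{e}: n = 5, P_1 = (x_1,x_2), P_2 = (x_2,x_3), P_3 = (x_1,x_4,x_5);
# here P_2 + P_3 = m, so r = 2 and size = 5 - 5 + 2 - 1 = 1
print(size(5, [{1, 2}, {2, 3}, {1, 4, 5}]))  # 1
```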
The main purpose of this paper is to show the above inequality in general (see Theorem \ref{t}). The important tool in the crucial point of the proof is the application of \cite[Theorem 4.5]{IKM} (a kind of polarization) to the so-called lcm-lattice associated to $I$ (see \cite{GPW}). Unfortunately, the polarization does not behave well with the size (see e.g. \cite[Example 1.2]{F}). Since it behaves somewhat better with the so-called bigsize (very different from that introduced in \cite{P}, see Definition \ref{d}), we have to replace the size with the bigsize. Our bigsize is the right notion for a squarefree monomial ideal $I\subset S$ (see Theorem \ref{b}; an illustration of its proof is given in Examples \ref{e4}, \ref{e5}). If $I$ is not squarefree and $I^p\subset S^p$ is its polarization, then it seems that a better notion would be $\bigsize_{S^p}(I^p)-\dim S^p+\dim S$. The inequality $\sdepth_SS/I\geq \size_SI$ conjectured in \cite{HPV} was proved in \cite{T} when $I$ is squarefree and is extended in \cite{F}. Our bigsize is useless for this inequality (see Remark \ref{r}). A similar inequality is proved by Y.-H. Shen in the frame of quotients of squarefree monomial ideals \cite[Theorem 3.6]{Sh}. We owe thanks to Y.-H. Shen and S. A. Seyed Fakhari, who noticed several mistakes in some previous versions of this paper, and to B. Ichim and A. Zarojanu for a bad example. \section{Squarefree monomial ideals} The proof of the following theorem is given in \cite{HPV} in a more general form, which is correct only for squarefree ideals. For the sake of completeness we sketch it here. \begin{Theorem} (Herzog-Popescu-Vladoiu) \label{hpv} If $I$ is a squarefree monomial ideal then $$\sdepth_SI\geq \size_S(I)+1.$$ \end{Theorem} \begin{proof} Write $I=\cap_{i\in [s]}P_i$ as an irredundant intersection of monomial prime ideals of $S$ and assume that $P_1=(x_1,\ldots,x_r)$ for some $r\in [n]$. Apply induction on $s$, the case $s=1$ being trivial.
Assume that $s>1$. Using \cite[Lemma 3.6]{HVZ} we may reduce to the case when $\sum_{i\in [s]} P_i=\mm$. Set $S'=K[x_1,\ldots,x_r]$, $S''=K[x_{r+1},\ldots,x_n]$. For every nonempty proper subset $\tau\subset [s]$ set $$S_{\tau}=K[\{x_i: i\in [r], x_i\not \in \sum_{j\in \tau} P_j\}],$$ $$J_{\tau}=(\cap_{i\in [s]\setminus\tau}P_i)\cap S_{\tau},\ \ L_{\tau}=(\cap_{i\in \tau}P_i)\cap S''.$$ If $J_{\tau}\not =0$, $L_{\tau}\not =0$ define $A_{\tau}=\sdepth_{S_{\tau}} J_{\tau}+\sdepth_{S''} L_{\tau}$. Also define $A_0=\sdepth_S I_0$ for $I_0=(I\cap S')S$. By \cite[Theorem 1.6]{P} (the ideas come from \cite[Proposition 2.3]{AP}) we have $$\sdepth_SI\geq \min\{A_0, \{A_{\tau}: \emptyset \not =\tau \subset [s], J_{\tau}\not =0, L_{\tau}\not =0\}\}.$$ Using again \cite[Lemma 3.6]{HVZ} we see that if $I_0\not =0$ then $\sdepth_S I_0\geq n-r\geq \size_S(I)+1$. Fix a nonempty proper subset $\tau\subset [s]$ such that $J_{\tau}\not =0, \ L_{\tau}\not =0$. It is enough to show that $A_{\tau}\geq \size_S(I)+1,$ that is, to verify that $\sdepth_{S''}L_{\tau}\geq \size_S(I)$ because $\sdepth_{S_{\tau}}(J_{\tau})\geq 1$. Set $P_{\tau}=\sum_{i\in \tau} P_i\cap S''$, let us say $P_{\tau}=(x_{r+1},\ldots,x_e)$ for some $e\leq n$. Let $j_1<\ldots <j_t$ in $\tau$ with $t$ minimal such that $\sum_{i=1}^tP_{j_i}\cap S''=P_{\tau}$. Thus $\size_{S''}L_{\tau}=t-1+n-e$. Choose $k_1<\ldots<k_u$ in $[s]\setminus(\tau \cup \{1\})$ with $u$ minimal such that $(x_{e+1},\ldots,x_n)\subset \sum_{i=1}^uP_{k_i}$. We have $u\leq n-e$. Then $P_1+\sum_{i=1}^tP_{j_i}+\sum_{i=1}^uP_{k_i}=\mm$ and so $u+t+1\geq \size_S(I)+1$. By the induction hypothesis on $s$ we have $\sdepth_{S''}L_{\tau}\geq \size_{S''}L_{\tau}+1=t+n-e\geq t+u\geq \size_S(I).$ \hfill\ \end{proof} Now let $I\subset S$ be a monomial ideal not necessarily squarefree and $I=\cap_{i\in [s]}Q_i$ an irredundant decomposition of $I$ as an intersection of irreducible monomial ideals, $P_i=\sqrt{Q_i}$. Set $a=\sum_{i=1}^s P_i$.
Let $\nu$ be a total order on $[s]$. We say that $\nu$ is {\em admissible} if for all $i,j,k\in [s]$ with $j,k>i$ with respect to $\nu$, the inequality $\height (\sum_{p\in [i]}P_p+P_k)>\height (\sum_{p\in [i]}P_p+P_j)$ implies $j< k$. Let ${\mathcal F}=(Q_{i_k})_{k\in [t]}$ be a family of ideals from $(Q_j)_{j\in [s]}$, $t\in [s]$, $i_1<\ldots<i_t$ with respect to $\nu$ such that $P_{i_k}$ are maximal among $(P_i)_i$, and set $a_{k,{\mathcal F}}=\sum_{j=1}^{k} P_{i_j}\subset a$, $a_{0,{\mathcal F}}=0$, $a_{\mathcal F}=a_{t,{\mathcal F}}$, $t_{\mathcal F}=t$, $h_{\mathcal F}=\height a_{\mathcal F}$. For short, we speak of a family $\mathcal F$ of $I$. If $I$ is squarefree then each $P_j$ is maximal among $(P_i)$. \begin{Definition} \label{d0} A family ${\mathcal F}$ of $I$ with respect to $\nu$ is {\em admissible} if $P_{i_k}\not\subset a_{k-1,{\mathcal F}}$ for all $k\in [t]$. The admissible family ${\mathcal F}$ is {\em maximal} if $a_{\mathcal F}=a$, that is, there exists no prime ideal $P\in \Ass_SS/I$ which is not contained in $a_{\mathcal F}$. \end{Definition} \begin{Definition}\label{d} Let $\mathcal F$ be a family of $I$ with respect to $\nu$. If $t_{\mathcal F}=1$ we set $\bigsize({\mathcal F})=\dim S/P_{i_1}$. If $t_{\mathcal F}>1$ then define by recurrence $\bigsize({\mathcal F})=\min\{\bigsize({\mathcal F}'),1+\bigsize({\mathcal F}_1)\}$, where ${\mathcal F}'=(Q_{i_k})_{1\leq k< t}$ and ${\mathcal F}_1$ is the family obtained from the family $\widetilde{{\mathcal F}_1}=(Q_{i_t}+Q_{i_k})_{1\leq k< t}$ removing those ideals $ Q_{i_t}+Q_{i_k}$ which contain another ideal $ Q_{i_t}+Q_{i_{k'}}$ with $k'\in [t-1]\setminus \{k\}$. Note that ${\mathcal F}_1$ is given by $\Ass_SS/I_1$, where $I_1=\cap_{1\leq k< t}(Q_{i_t}+Q_{i_k})$, the decomposition being not necessarily irredundant.
Then ${\mathcal F}_1$ is a family of $I_1$ with respect to the order induced by $\nu$ such that, roughly speaking, $Q_{i_t}+Q_{i_k}$ is smaller than $Q_{i_t}+Q_{i_{k'}}$ if $k<k'$ with respect to $\nu$. The integer $\bigsize({\mathcal F})$ is called the {\em bigsize} of $\mathcal F$. Note that $\bigsize({\mathcal F})\leq t-1+\dim S/a_{\mathcal F}$. Set $\bigsize_{\nu}(I)=\bigsize({\mathcal F})$ for a maximal admissible family $\mathcal F$ of $I$ with respect to $\nu$. We call the {\em bigsize} of $I$ the maximum $\bigsize_S(I) $ of $\bigsize_{\nu}(I)$ for all total admissible orders $\nu$ on $[s]$. \end{Definition} \begin{Remark} \label{r'} Note that given a total admissible order $\nu$ there exists just one maximal admissible family $\mathcal F$ with respect to $\nu$, so the above definition makes sense. \end{Remark} \begin{Example}\label{ex} Let $n=6$, $P_1=(x_1,x_2,x_4)$, $P_2=(x_1,x_3,x_4,x_6)$, $P_3=(x_2,x_3,x_4,x_6)$, $P_4=(x_1,x_4,x_5,x_6)$, $P_5=(x_1,x_2,x_3,x_5,x_6)$ and set $I=\cap_{i\in [5]}P_i$. Then ${\mathcal F}=\{P_1,P_2,P_5\}$, ${\mathcal G}=\{P_1,P_3,P_4\}$ are maximal admissible families of $I$ with respect to some total admissible order on $[5]$, but $\bigsize({\mathcal F}')=\min\{3,1+1\}=2= \bigsize({\mathcal G}')$ and $\bigsize({\mathcal F}_1)=0$, $\bigsize({\mathcal G}_1)=1$ which implies $\bigsize({\mathcal F})=1<2=\bigsize({\mathcal G})$. Note that $a_{k,{\mathcal F}}=a_{k,{\mathcal G}}$ for each $k\in [3]$. \end{Example} \begin{Remark} \label{r''} Assume that $a_{\mathcal F}=(x_1,\ldots,x_r)$ for some $r\in [n]$. Set ${\tilde S}=K[x_1,\ldots,x_r]$ and let ${\tilde{\mathcal F}}=(Q_{i_k}\cap {\tilde S})_{k\in [t]}$. Then $\bigsize({\mathcal F})=n-r+\bigsize({\tilde {\mathcal F}})$. \end{Remark} \begin{Remark} \label{r0} Let ${\mathcal F}=(Q_{i_k})_{k\in [t]}$ be an admissible family of $I$ with respect to a total admissible order $\nu$ and $r\in [t-1]$.
Then ${\mathcal G}=(Q_{i_k})_{k\in [r]}$ is an admissible family of $I$ with respect to $\nu$ and $\bigsize({\mathcal F})\leq \bigsize({\mathcal G})$. \end{Remark} \begin{Remark} \label{r4} Let ${\mathcal F}=(Q_{i_k})_{k\in [t]}$ be a family of $I$ with respect to a total admissible order $\nu$. Then $\bigsize({\mathcal F})=r-1+\dim S/(P_{i_{k_1}}+\ldots +P_{i_{k_r}})$ for some $k_1<\ldots <k_r$ from $[t]$. \end{Remark} \begin{Example} \label{e} Let $n=5$, $P_1=(x_1,x_2)$, $P_2=(x_2,x_3)$, $P_3=(x_1,x_4,x_5)$ and $I=P_1\cap P_2\cap P_3$. Then ${\mathcal F}=(P_i)_{i\in [3]}$ is a maximal admissible family of $I$ with respect to the usual order $\nu$ and $\size_SI=1$ because $P_2+P_3=\mm$. Note that ${\mathcal F}'=(P_i)_{i=1,2}$ has $\bigsize({\mathcal F}')=\min\{3,1+2\}=3$ and ${\mathcal F}_1=(P_3+P_i)_{i=1,2}$ has $\bigsize({\mathcal F}_1)=1$. Thus $\bigsize({\mathcal F})=2$. The order given by $I=P_2\cap P_3\cap P_1$ is not admissible, but the order $\nu'$ given by $I=P_2\cap P_1\cap P_3$ is admissible. The family ${\mathcal G}=(P_i)_{i=2,1,3}$ has $\bigsize({\mathcal G}')=\min\{3,1+2\}=3$ and ${\mathcal G}_1=(P_3+P_i)_{i=2,1}$ has $\bigsize({\mathcal G}_1)=1$. Thus $\bigsize({\mathcal G})=\min\{3,1+1\}=2$. Similarly, the order $\nu''$ given by $\{3,1,2\}$ is total admissible, the family ${\mathcal H}=(P_i)_{i=3,1,2}$ has $\bigsize({\mathcal H}')=\min\{2,1+1\}=2$ and ${\mathcal H}_1=(P_2+P_i)_{i=3,1}$ has $\bigsize({\mathcal H}_1)=1$. Thus $\bigsize({\mathcal H})=\min\{2,1+1\}=2$ and we have $\bigsize_{\nu'',S}(I)=2$. \end{Example} \begin{Example} \label{e1} Let $n=2$, $Q_1=(x_1)$, $Q_2= (x_1^2,x_2)$ and $I=Q_1\cap Q_2$. Then $P_2$ is the only prime $P_i$ maximal among $(P_j)_{j\in [2]}$ and for ${\mathcal F}=\{P_2\}$ we have $\bigsize_S({\mathcal F})=\size_S(I)=0$. \end{Example} \begin{Example} \label{e0} Let $n=4$, $Q_1=(x_1,x_2^2)$, $Q_2= (x_2,x_3)$, $Q_3=(x_3^2,x_4)$ and $I=Q_1\cap Q_2\cap Q_3$. 
Then ${\mathcal F}=(Q_i)_{i\in [3]}$ is a maximal admissible family of $I$ with respect to the usual order $\nu$ and $\size_SI=1$ because $P_1+P_3=\mm$. Note that ${\mathcal F}'=(Q_i)_{i=1,2}$ has $\bigsize({\mathcal F}')=\min\{2,1+1\}=2$ and ${\mathcal F}_1=(Q_3+Q_i)_{i=1,2}$ has $\bigsize({\mathcal F}_1)=0$. Thus $\bigsize({\mathcal F})=\min\{2,1+0\}=1$. The order $\nu'$ given by $I=P_2\cap P_1\cap P_3$ is admissible. The family ${\mathcal G}=(Q_i)_{i=2,1,3}$ has $\bigsize({\mathcal G}')=2$ and ${\mathcal G}_1=(Q_3+Q_i)_{i=2,1}$ has $\bigsize({\mathcal G}_1)=0$. Thus $\bigsize({\mathcal G})=\min\{2,1+0\}=1$ and we have $\bigsize_{\nu',S}(I)=1$. Similarly, the order $\nu''$ given by $\{2,3,1\}$ is total admissible and $\bigsize_{\nu''}(I)=1$. Also note that the order ${\bar \nu}$ given by $\{3,2,1\}$ is total admissible, the family ${\mathcal H}=(Q_i)_{i=3,2,1}$ has $\bigsize({\mathcal H}')=\min\{2,1+1\}=2$ and ${\mathcal H}_1=(Q_1+Q_i)_{i=3,2}$ has $\bigsize({\mathcal H}_1)=0$. Thus $\bigsize({\mathcal H})=\min\{2,1+0\}=1$ and we have $\bigsize_{{\bar\nu},S}(I)=1$. \end{Example} \begin{Example} \label{e'} Let $n=6$, $P_1=(x_1,x_2)$, $P_2= (x_1,x_3)$, $P_3=(x_1,x_6)$, $P_4=(x_3,x_4)$, $P_5=(x_3,x_5)$, $P_6=(x_2,x_4)$, $P_7=(x_5,x_6)$ and $I=\cap_{i\in [7]} P_i$. Let $\nu$ be the usual order and ${\mathcal F}=(P_i)_{i\in [5]}$. Then $\mathcal F$ is maximal admissible and $\bigsize({\mathcal F})=4>\size_SI$. Taking $\nu'$ given by the order $\{7,5,3,1,4\}$ we get a maximal admissible family $\mathcal G$ with $\bigsize({\mathcal G})=3$. Thus $\bigsize_S(I)=4>3=\size_SI$. \end{Example} \begin{Lemma}\label{l0} Let $\nu$ be a total admissible order on $[s]$ and ${\mathcal F}=(Q_{i_k})_{k\in [t]} $ a family of $I$ with respect to $\nu$. Then $ \bigsize(\mathcal F)\geq \size_SI.$ \end{Lemma} \begin{proof} By Remark \ref{r4} we have $\bigsize({\mathcal F})=r-1+\dim S/(P_{i_{k_1}}+\ldots +P_{i_{k_r}})$ for some $k_1<\ldots <k_r$ from $[t]$. 
We may suppose that $\sum_{j\in [r]}P_{i_{k_j}}=(x_1,\ldots,x_e)$ for some $e\in [n]$. For each $p$ with $e<p\leq n$ choose $u_p\in [s]$ such that $x_p\in P_{u_p}$. Then $\sum_{j\in [r]}P_{i_{k_j}}+\sum_{p=e+1}^n P_{u_p}=\mm$ and so $\size I\leq r-1+\dim S/(P_{i_{k_1}}+\ldots +P_{i_{k_r}})=\bigsize({\mathcal F})$. \hfill\ \end{proof} Next we present a slight extension of Theorem \ref{hpv}. \begin{Theorem}\label{b} Let $I=\cap_{i\in [s]}P_i$ be an irredundant intersection of monomial prime ideals of $ S$. Then $\sdepth_SI\geq 1+ \bigsize_S(I)$. \end{Theorem} \begin{proof} Using \cite[Lemma 3.6]{HVZ} we may reduce to the case when $\sum_{i\in [s]} P_i=\mm$. Apply induction on $n$. Assume that $\bigsize_S(I) =\bigsize(\mathcal F)$ for a maximal admissible family ${\mathcal F}=(P_{i_k})_{k\in [t]}$ of $I$ with respect to a total admissible order $\nu$. We may suppose that $i_t=s$ and $\sum_{k\in [t-1]}P_{i_k}=(x_{r+1},\ldots,x_n)$, $r\geq 1$. Set $S'=K[x_1,\ldots,x_r]$, $S''=K[x_{r+1},\ldots,x_n]$. We may use \cite[Theorem 1.6]{P} even when $(x_1,\ldots,x_r)\not \in \Ass_SS/I$ (see \cite[Lemma 2.1]{HPV}). In the notations of Theorem \ref{hpv} we have $$\sdepth_SI\geq \min\{A_0, \{A_{\tau}: \emptyset \not =\tau \subset [s], J_{\tau}\not =0, L_{\tau}\not =0\}\}.$$ If $I_0=(I\cap S')S\not =0$ then $A_0\geq 1+(n-r)\geq 1+\dim S/P_{i_t}\geq 2+\bigsize({\mathcal F}_1)\geq 1+\bigsize_S(I) $. Now suppose that $\sdepth_SI\geq A_{\tau}$ for some $\tau\subset [s]$ with $J_{\tau}\not =0$, $ L_{\tau}\not =0$. Thus $i_k$ must be in $\tau$ for any $k\in [t-1]$ because otherwise $J_{\tau}=0$. Then ${\mathcal H}=(P_{i_k}\cap S'')_{k\in [t-1]}$ is a maximal admissible family of $L_{\tau}$ with respect to $\nu$. Note that $\bigsize(\mathcal H)\geq \bigsize({\mathcal F}_1)$. 
By induction hypothesis on $n$ we have $$\sdepth_{S''}L_{\tau}\geq 1+\bigsize_{S''}(L_{\tau})\geq 1+\bigsize(\mathcal H)\geq 1+\bigsize({\mathcal F}_1)\geq \bigsize({\mathcal F}).$$ Therefore, $$\sdepth_SI\geq A_{\tau}\geq 1+\sdepth_{S''}L_{\tau}\geq 1+\bigsize({\mathcal F})= 1+ \bigsize_S(I).$$ \hfill\ \end{proof} \begin{Example}\label{e4} We illustrate the above proof on the case of $\mathcal F$ given in Example \ref{e'}. Set $S'=K[x_5]$, $S''=K[x_1,x_2,x_3,x_4,x_6]$. Then $\tau=\{1,2,3,4,6\}$ is the only $\tau\subset [7]$ such that $J_{\tau}\not = 0$. We have $\sdepth_SI=5=1+\sdepth_{S''}L_{\tau}$. Set ${\tilde S}'=K[x_4]$, ${\tilde S}''=K[x_1,x_2,x_3,x_6]$. Then ${\tilde \tau}=\{1,2,3\}$ is the only ${\tilde \tau}\subset \tau=[7]\setminus \{5,7\}$ such that $J_{\tilde \tau}\not =0$. We have $\sdepth_{ S''}L_{\tau}=4=1+\sdepth_{{\tilde S}''}L_{\tilde \tau}$. Now set ${\hat S}'=K[x_6]$, ${\hat S}''=K[x_1,x_2,x_3]$. Then ${\hat \tau}=\{1,2\}$ is the only ${\hat \tau}\subset {\tilde \tau}=[7]\setminus \{4,5,6,7\}$ such that $J_{\hat \tau}\not =0$. We have $\sdepth_{{\tilde S}''}L_{\tilde \tau}=3=1+\sdepth_{{\hat S}''}L_{\hat \tau}$. On the other hand, ${\mathcal H}=\{P_1\cap S'', P_2\cap S'',P_3\cap S'',P_4\cap S''\}$ is a maximal admissible family of $L_{\tau}$ and we have $\bigsize({\mathcal H})=3=\bigsize({\mathcal F}_1)$. Also note that ${\mathcal P}=\{P_1\cap {\tilde S}'',P_2\cap {\tilde S}'',P_3\cap {\tilde S}''\}$ is a maximal admissible family of $L_{\tilde \tau}$ and $\bigsize({\mathcal P})=2=\bigsize({\mathcal H}_1)$. Finally, ${\mathcal E}=\{P_1\cap {\hat S}'',P_2\cap {\hat S}''\}$ is a maximal admissible family of $L_{\hat \tau}$ and $\bigsize({\mathcal E})=1=\bigsize({\mathcal P}_1)$. Therefore, we have $\sdepth_SI=1+\bigsize({\mathcal F})$, $\sdepth_{S''}L_{\tau}=1+\bigsize({\mathcal H})$, $\sdepth_{{\tilde S}''}L_{\tilde \tau}=1+\bigsize({\mathcal P})$ and $\sdepth_{{\hat S}''}L_{\hat \tau}=1+\bigsize({\mathcal E})$. 
\end{Example} \begin{Remark}\label{r} Note that in Example \ref{e4} we have $\sdepth_SS/I=3=\bigsize({\mathcal G})<4=\bigsize({\mathcal F})=\bigsize_S(I)$ which shows that the corresponding inequality for $S/I $ fails using this bigsize. As $\sdepth_{S''}S''/L_{\tau}=3$ too, we see that the proof of Theorem \ref{b} fails in the case of the module $S/I$. Thus the so-called splitting of variables for arbitrary $r$ from \cite[Proposition 2.1]{HPV} does not hold for $S/I$ (this holds for the case when $r$ is given by a so-called main prime as it is used in \cite{T}). \end{Remark} \begin{Example}\label{e5} We consider now the case of $\mathcal G$ given in Example \ref{e'}. Set $S'=K[x_4]$, $S''=K[x_1,x_2,x_3,x_5,x_6]$. Then $\tau=\{1,2,3,5,7\}$ is the only $\tau\subset [7]$ such that $J_{\tau}\not = 0$. We have $\sdepth_SI=5=1+\sdepth_{S''}L_{\tau}$. Set ${\tilde S}'=K[x_2]$, ${\tilde S}''=K[x_1,x_3,x_5,x_6]$. Then ${\tilde \tau}=\{3,5,7\}$ is the only ${\tilde \tau}\subset \tau= [7]\setminus \{4,6\}$ such that $J_{\tilde \tau}\not =0$. We have $\sdepth_{ S''}L_{\tau}=4=1+\sdepth_{{\tilde S}''}L_{\tilde \tau}$. Now set ${\hat S}'=K[x_1]$, ${\hat S}''=K[x_3,x_5,x_6]$. Then ${\hat \tau}=\{5,7\}$ is the only ${\hat \tau}\subset{\tilde \tau}= [7]\setminus \{2,4,6\}$ such that $J_{\hat \tau}\not =0$. We have $\sdepth_{{\tilde S}''}L_{\tilde \tau}=3=1+\sdepth_{{\hat S}''}L_{\hat \tau}$. On the other hand, ${\mathcal H}=\{P_7\cap S'', P_5\cap S'',P_3\cap S'',P_1\cap S''\}$ is a maximal admissible family of $L_{\tau}$ and we have $\bigsize({\mathcal H})=2=\bigsize({\mathcal G}_1)$. Also note that ${\mathcal P}=\{P_7\cap {\tilde S}'',P_5\cap {\tilde S}'',P_3\cap {\tilde S}''\}$ is a maximal admissible family of $L_{\tilde \tau}$ and $\bigsize({\mathcal P})=2>1=\bigsize({\mathcal H}_1)$. Finally, ${\mathcal E}=\{P_7\cap {\hat S}'',P_5\cap {\hat S}''\}$ is a maximal admissible family of $L_{\hat \tau}$ and $\bigsize({\mathcal E})=1=\bigsize({\mathcal P}_1)$. 
Therefore, we have $\sdepth_SI>1+\bigsize({\mathcal G})$, $\sdepth_{S''}L_{\tau}>1+\bigsize({\mathcal H})$, $\sdepth_{{\tilde S}''}L_{\tilde \tau}=1+\bigsize({\mathcal P})$ and $\sdepth_{{\hat S}''}L_{\hat \tau}=1+\bigsize({\mathcal E})$. \end{Example} \section{Bigsize and Stanley depth} Let $I\subset S$ be a monomial ideal and $I=\cap_{i\in [s]}Q_i$ an irredundant decomposition of $I$ as an intersection of irreducible monomial ideals, $P_i=\sqrt{Q_i}$. Let $G(I) $ be the minimal set of monomial generators of $I$. Assume that $\sum_{P\in \Ass_S S/I} P=\mm$. Given $j\in [n]$ let $\deg_j I$ be the maximum degree of $x_j$ in all monomials of $G(I)$. \begin{Lemma}\label{l} Suppose that $c:=\deg_nI>1$, let us say $c=\deg_nQ_j$ if and only if $j\in [e]$ for some $e\in [s]$. Assume that $Q_j=(J_j,x_n^c)$ for some irreducible ideal $J_j\subset S_n=K[x_1,\ldots,x_{n-1}]$, $j\in [e]$. Let $Q'_j=(J_j,x_n^{c-1})\subset S$, $Q''_j=(J_j,x_{n+1})\subset {\tilde S}=S[x_{n+1}]$ and set $${\tilde I}=(\cap_{i=e+1}^sQ_i{\tilde S})\cap (\cap_{i\in [e]}Q'_i{\tilde S})\cap (\cap_{i=s+1}^ {s+e}Q_i)\subset {\tilde S},$$ where $Q_i=Q''_{i-s}$ for $i>s$, the decomposition of ${\tilde I}$ being not necessarily irredundant. Then $\sdepth_{{\tilde S}}{\tilde I}\leq \sdepth_SI+1$ and $\sdepth_{\tilde S}{\tilde S}/{\tilde I}\leq \sdepth_SS/I+1$. \end{Lemma} \begin{proof} Let $L_I$, $L_{\tilde I}$ be the LCM-lattices associated to $I,{\tilde I}$. The map ${\tilde S}\to S$ given by $x_{n+1}\to x_n$ induces a surjective join-preserving map $L_{\tilde I}\to L_I$ and by \cite[Theorem 4.5]{IKM} we get $\sdepth_{\tilde S}{\tilde I}\leq \sdepth_SI+1$ and $\sdepth_{{\tilde S}}{\tilde S}/{\tilde I}\leq \sdepth_SS/I+1$. 
\hfill\ \end{proof} With the notations and assumptions of Lemma \ref{l} let $${\tilde C}=\{i\in [s]:P_i{\tilde S}\in \Ass_{\tilde S}{\tilde S}/{\tilde I}\}\cup ([s+e]\setminus [s]).$$ Choose a total admissible order $\tilde \nu$ on ${\tilde C}$ and a total admissible order $\nu$ on $[s]$ extending the restriction of ${\tilde \nu}$ to $[s]\cap {\tilde C}$. Let $\tilde{\mathcal F}=({\tilde Q}_{i_k})_{k\in [t]}$ be a family of $\tilde I$ with respect to $\tilde \nu$. Replace in $\tilde{\mathcal F}$ the ideals ${\tilde Q}_{i_k}$ by $Q_{i_k}={\tilde Q}_{i_k}\cap S$ when $P_{i_k}$ is maximal in $\Ass_SS/I$ and $\tilde{Q}_{i_k}$ is not of the form $Q'_i{\tilde S}$ or $Q''_i$ for some $i\in [e]$. When ${\tilde Q}_{i_k}$ is of the form $Q'_i{\tilde S}$ or $Q''_i$ for some $i\in [e]$ then replace in $\tilde{\mathcal F}$ the ideal ${\tilde Q}_{i_k}$ by $Q_i$. If $P_{i_k}$ is not maximal in $\Ass_SS/I$ then ${\tilde Q}_{i_k}\subset Q'_i{\tilde S}$ for some $i\in [e]$ and we replace in $\tilde{\mathcal F}$ the ideal ${\tilde Q}_{i_k}$ by $Q_i$ (such an $i$ is not unique and we fix one possible choice). Note that $x_n\in P_{i_k}$ because otherwise $Q_{i_k}\subset Q_i$ which is impossible. In this way, we get a family $\overline{\mathcal F}$ of ideals which are maximal in $\Ass_SS/I$. Sometimes $\overline{\mathcal F}$ contains the same ideal $Q_i$, $i\in [e]$, several times. Keeping each such $Q_i$ in $\overline{\mathcal F}$ only at its first appearance and removing the other occurrences, we get a family ${\mathcal F}$ of $I$ with respect to $\nu$. 
Then note that $\bigsize({\tilde{\mathcal F}})=\dim {\tilde S}/{\tilde P}_{i_1}=1+\dim S/ P_{i_1}=1+\bigsize({\mathcal F})$ when $P_{i_1}$ is maximal in $\Ass_SS/I$ and $\tilde{Q}_{i_1}$ is not of the form $Q'_i{\tilde S}$ or $Q''_i$ for some $i\in [e]$. If $\tilde{Q}_{i_1}=Q'_i{\tilde S}$ for some $i\in [e]$ then $\bigsize({\tilde{\mathcal F}})=\dim {\tilde S}/ P_i{\tilde S}=1+\dim S/ P_i=1+\bigsize({\mathcal F})$. Similarly, it happens when $\tilde{Q}_{i_1}=Q''_i$ because then $\dim {\tilde S}/{\tilde P}_{e+i}=\dim S/J_i=1+\dim S/P_i$. If $\tilde{Q}_{i_1}=Q_l{\tilde S}$ for some $l\in [s]$ such that $Q_l\subset Q'_i$ for some $i\in [e]$ and $P_l$ is not maximal in $\Ass_SS/I$ then note that $\dim {\tilde S}/P_l{\tilde S}=1+\dim S/P_l>1+\dim S/P_i$. Let $t>1$. Assume that $\bigsize(\tilde { \mathcal F})=t-1+\dim {\tilde S}/a_{\tilde{\mathcal F}}$. As above we see that $\dim {\tilde S}/a_{\tilde{\mathcal F}}\geq 1+\dim S/a_{\mathcal F}$. Let $\overline{\mathcal F}=(Q_{i_k})_{k\in [t]}$. If $\overline{\mathcal F}=\mathcal F$ then we get $\bigsize( { \mathcal F})\leq t-1+\dim S/a_{\mathcal F}\leq \bigsize(\tilde { \mathcal F})-1$. Otherwise, assume that ${\mathcal F}=(Q_{i_k})_{k\in E}$ for some $E\subsetneq [t]$. We have $\bigsize( { \mathcal F})\leq |E|-1+\dim S/a_{\mathcal F}<t-1+\dim S/a_{\mathcal F}\leq \bigsize(\tilde { \mathcal F})-1$. Then take ${\mathcal G}={\mathcal F}$. Now assume that $\bigsize(\tilde{\mathcal F})=r-1+\dim {\tilde S}/\sum_{j\in [r]} {\tilde P}_{i_{k_j}}$ for some $r\in [t-1]$ and $k_1<\ldots<k_r$ from $[t]$ (see Remark \ref{r4}). Set $\tilde{\mathcal G}=({\tilde Q}_{i_{k_j}})_{j\in [r]}$. We have $\bigsize(\tilde{\mathcal G})\leq r-1+\dim {\tilde S}/a_{\tilde{\mathcal G}}=\bigsize(\tilde{\mathcal F})$. Consider the families $\overline{\mathcal G}$, ${\mathcal G}$ corresponding to $\tilde{\mathcal G}$ similarly to $\overline{\mathcal F}$, ${\mathcal F}$ corresponding to $\tilde{\mathcal F}$. 
By induction hypothesis ($r<t$) we have $\bigsize( \tilde{\mathcal G})\geq 1+\bigsize({\mathcal G})$. Then $$\bigsize(\tilde{\mathcal F})\geq \bigsize(\tilde{\mathcal G})\geq 1+\bigsize({\mathcal G}).$$ \hfill\ \end{proof} \begin{Example} \label{e2} Let $n=4$, $Q_1=(x_1,x_2)$, $Q_2=(x_1,x_3)$, $Q_3=(x_1^2,x_2,x_3)$, $Q_4=(x_1^2,x_3,x_4)$ and $I=\cap_{i\in [4]}Q_i$. Let ${\mathcal F}=\{Q_3,Q_4\}$. Then $\size_S(I)=1$ because $P_3+P_4=\mm$. Also note that $\bigsize({\mathcal F}')= \min\{1,1+0\}=1$, $\bigsize({\mathcal F}_1)=0$ and so $\bigsize({\mathcal F})= \min\{1,1+0\}=1$. Clearly, ${\tilde I}=Q_1{\tilde S}\cap Q_2{\tilde S}\cap Q''_3\cap Q''_4$. Now $P_1{\tilde S},P_2{\tilde S}$ are maximal in $\Ass_{\tilde S}{\tilde S}/{\tilde I}$. For ${\mathcal G}=\{Q_1{\tilde S},Q_2{\tilde S},Q''_3,Q''_4\}$ we get $\bigsize({\mathcal G}')=\min\{\min\{3,1+2\}, 1+1\}=2$, $\bigsize({\mathcal G}_1)=\min\{1,1+0\}=1$ and so $\bigsize({\mathcal G})=\min\{2,1+1\}=2$. If we take ${\mathcal H}=\{Q''_3,Q''_4,Q_1\}$ then $\bigsize({\mathcal H}')=\min\{2,1+1\}=2$, $\bigsize({\mathcal H}_1)=1$ and so $\bigsize({\mathcal H})=2$. Thus $\bigsize({\mathcal G})=\bigsize({\mathcal H})=1+\bigsize({\mathcal F})$. \end{Example} \begin{Example} \label{e3} Let $n=4$, $Q_1=(x_1,x_2)$, $Q_2=(x_1^2,x_3)$, $Q_3=(x_1^2,x_4)$ and $I=\cap_{i\in [3]}Q_i$. Let ${\mathcal F}=\{Q_1,Q_2,Q_3\}$. Then we see that $\bigsize({\mathcal F})=2=\size I$. Clearly, ${\tilde I}=Q_1{\tilde S}\cap Q'_2{\tilde S}\cap Q'_3{\tilde S}\cap Q''_2\cap Q''_3$, where $Q'_2= (x_1,x_3)$, $Q'_3= (x_1,x_4)$, $Q''_2= (x_3,x_5)$, $Q''_3= (x_4,x_5)$. Then $\{Q_1{\tilde S},Q''_2,Q''_3\}$, $\{Q'_2{\tilde S},Q_1{\tilde S},Q''_3\}$, $\{Q'_3{\tilde S},Q_1{\tilde S},Q''_2\}$, $\{Q''_2, Q_1{\tilde S},Q''_3\}$, $\{Q''_3,Q_1{\tilde S},Q'_2{\tilde S}\}$ are maximal admissible families of ${\tilde I}$ but with respect to some total orders which are not admissible. 
However, $\mathcal G=\{Q''_2,Q'_2{\tilde S},Q_1{\tilde S},\\ Q'_3{\tilde S}\} $ is a maximal admissible family of ${\tilde I}$ with respect to an admissible order. Note that $\bigsize({\mathcal G}')=\min\{3,1+2\}=3$ and ${\mathcal G}_1=\{(x_1,x_3,x_4), (x_1,x_2,x_4)\}$ has bigsize $2$. Thus $\bigsize({\mathcal G})=\min\{3,1+2\}=3=1+ \bigsize({\mathcal F})$. We see that $\size_{\tilde S}{\tilde I}=2$ because $Q_1+Q''_2+Q'_3={\tilde \mm}$. \end{Example} \begin{Theorem} \label{t} Let $I$ be a monomial ideal of $S$ and $I=\cap_{i\in [s]}Q_i$ an irredundant decomposition of $I$ as an intersection of irreducible monomial ideals, $P_i=\sqrt{Q_i}$. Then $\sdepth_SI\geq \size_S I+1$. \end{Theorem} \begin{proof} Using \cite[Lemma 3.6]{HVZ} we may reduce to the case when $\sum_{P\in \Ass_S S/I} P=\mm$. If $I$ is squarefree then apply Theorem \ref{hpv}, or Theorem \ref{b} with Lemma \ref{l0}. Otherwise, assume that $c=\deg_nI>1$. By Lemma \ref{l} there exist $e$ and a monomial ideal ${\tilde I}$ such that $\sdepth_{{\tilde S}}{\tilde I}\leq\sdepth_SI+1$. Set $I^{(1)}={\tilde I}$ and $S^{(1)}={\tilde S}$. If $I^{(1)}$ is not squarefree then apply again Lemma \ref{l} for some $x_i$ with $\deg_i I^{(1)}>1$. We get $I^{(2)}=(I^{(1)})^{(1)}$, $S^{(2)}=(S^{(1)})^{(1)}$ such that $ S^{(2)}=S[x_{n+1},x_{n+2}]$, $\sdepth_{ S^{(2)}} I^{(2)}\leq\sdepth_SI+2$. Applying Lemma \ref{l} by recurrence we get some monomial ideals $I^{(j)} \subset S^{(j)}$, $j\in [r]$ for some $r$ such that $S^{(j)}=S[x_{n+1},\ldots,x_{n+j}]$, $\sdepth_{ S^{(j)}} I^{(j)}\leq\sdepth_SI+j$ and $I^{(r)}$ is a squarefree monomial ideal (thus $I^{(r)}$ is the polarization of $I$). Now, let ${\mathcal F}^{(r)}$ be a maximal admissible family of $I^{(r)}$ with respect to some total admissible order $\nu_r$ such that $\bigsize_{S^{(r)}}(I^{(r)})=\bigsize_{\nu_r}(I^{(r)})=\bigsize({\mathcal F}^{(r)})$. By Theorem \ref{b} we have $\sdepth_{S^{(r)}}I^{(r)}\geq 1+ \bigsize({\mathcal F}^{(r)})$. 
Using Lemma \ref{l4} there exists a family ${\mathcal F}^{(r-1)}$ of $I^{(r-1)}$ such that $1+\bigsize({\mathcal F}^{(r-1)})\leq \bigsize({\mathcal F}^{(r)})$. Applying Lemma \ref{l4} recursively we find a family ${\mathcal F}$ of $I$ such that $r+\bigsize({\mathcal F})\leq \bigsize({\mathcal F}^{(r)})$. Thus $$\sdepth_SI\geq \sdepth_{ S^{(r)}} I^{(r)}-r\geq \bigsize({\mathcal F}^{(r)})-r+1\geq 1+\bigsize({\mathcal F}).$$ Applying Lemma \ref{l0} we are done. \hfill\ \end{proof} \begin{Remark} Let $n=4$, $P_1=(x_1,x_2)$, $P_2= (x_1^2,x_3^2)$, $P_3=(x_2,x_4)$, $P_4=(x_3,x_4)$, and $J=\cap_{i\in [4]}P_i$. Note that the polarization of $J$ is the ideal $I$ from Examples \ref{e'}, \ref{e4}, \ref{e5}. \end{Remark}
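In the squarefree case the recursion of Definition \ref{d} is purely combinatorial: a monomial prime is determined by its set of variables, a sum of primes corresponds to the union of those sets, and $\dim S/P=n-|P|$. The following Python sketch (an illustration added here, not part of the paper) implements the recursion on this combinatorial model and reproduces the values claimed in Examples \ref{e} and \ref{e'}.

```python
# Illustrative sketch of the recursion of Definition d in the squarefree case.
# A monomial prime ideal is modelled as the frozenset of its variable indices;
# sums of primes become unions, and dim S/P = n - |P|.
def bigsize(fam, n):
    """fam: ordered list of frozensets (P_{i_1}, ..., P_{i_t}) in n variables."""
    if len(fam) == 1:
        return n - len(fam[0])
    sums = [fam[-1] | P for P in fam[:-1]]
    # F_1: drop every sum strictly containing another sum, then drop exact
    # repetitions (keeping the first occurrence), preserving the induced order.
    f1, seen = [], set()
    for i, Q in enumerate(sums):
        if any(j != i and sums[j] < Q for j in range(len(sums))):
            continue
        if Q not in seen:
            seen.add(Q)
            f1.append(Q)
    return min(bigsize(fam[:-1], n), 1 + bigsize(f1, n))

f = frozenset
# Example e: P1=(x1,x2), P2=(x2,x3), P3=(x1,x4,x5), n=5
ex_e = bigsize([f({1, 2}), f({2, 3}), f({1, 4, 5})], 5)
# Example e': F=(P1,...,P5) and G ordered {7,5,3,1,4}, n=6
ex_F = bigsize([f({1, 2}), f({1, 3}), f({1, 6}), f({3, 4}), f({3, 5})], 6)
ex_G = bigsize([f({5, 6}), f({3, 5}), f({1, 6}), f({1, 2}), f({3, 4})], 6)
```

The computed values $\bigsize({\mathcal F})=4$ and $\bigsize({\mathcal G})=3$ agree with Example \ref{e'}, and the value $2$ agrees with Example \ref{e}.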
\section{Introduction} Cluster algebras were first developed by Fomin and Zelevinsky more than a decade ago \cite{fz1}. Their structure arises in diverse parts of mathematics and theoretical physics, including Lie theory, quantum algebras, Teichm\"uller theory, discrete integrable systems and T- and Y-systems. One of the original motivations for cluster algebras came from a series of observations made by Michael Somos and others (see \cite{gale}), concerning nonlinear recurrence relations of the form \beq \label{recf} x_{n+N} \, x_n = F(x_{n+1}, \ldots , x_{n+N-1}), \eeq where $F$ is a polynomial in $N-1$ variables. The original observation of Somos was that certain choices of $F$ lead to integer sequences when all $N$ initial values are chosen to be 1. This was explained by the further observation that for such special $F$ the recurrence (\ref{recf}) exhibits the \textit{Laurent phenomenon}, meaning that all iterates are Laurent polynomials in the initial data with integer coefficients. One of the most well-known examples is the Somos-4 recurrence given by \beq\label{somos4} x_{n+4}\, x_n=x_{n+3}x_{n+1}+x_{n+2}^2, \eeq which generates the sequence $1,1,1,1,2,3,7,23,59,314,1529,8209,83313,\ldots$\footnote{ {\tt http://oeis.org/A006720}} starting from four initial 1s, while if the initial data $x_1,x_2,x_3,x_4$ are viewed as variables then the iterates $x_n$ belong to the Laurent polynomial ring $\Z[x_1^{\pm 1}, x_2^{\pm 1}, x_3^{\pm 1}, x_4^{\pm 1}]$. In this paper we consider recurrences of the general form \beq\label{arec} x_{n+N}\, x_n = \prod_{a_j\geq 0}x_{n+j}^{a_j}+ \prod_{a_j\leq 0}x_{n+j}^{-a_j}, \eeq where the indices in each product lie in the range $1\leq j\leq N-1$, with the exponents $(a_1,...,a_{N-1})$ forming an integer $(N-1)$-tuple which is palindromic, so that $a_j = a_{N-j}$. 
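The integrality claim for (\ref{somos4}) is easy to check experimentally. The following Python sketch (an illustration added here, not part of the paper) iterates Somos-4 over the exact rationals and confirms that the first terms agree with the sequence quoted above, with every denominator equal to 1.

```python
from fractions import Fraction

def somos4(n_terms):
    # x_{n+4} x_n = x_{n+3} x_{n+1} + x_{n+2}^2, started from four 1s,
    # iterated over exact rationals so that integrality is not a
    # floating-point artefact.
    x = [Fraction(1)] * 4
    while len(x) < n_terms:
        x.append((x[-1] * x[-3] + x[-2] ** 2) / x[-4])
    return x

seq = somos4(13)
# seq == [1, 1, 1, 1, 2, 3, 7, 23, 59, 314, 1529, 8209, 83313]
```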
The main purpose of this paper is to identify which recurrences of the form (\ref{arec}) can be regarded as finite-dimensional discrete integrable systems, in the sense of the standard Liouville-Arnold definition of integrability for maps \cite{maeda, veselov}. The latter requires that a map should preserve a Poisson bracket, as well as having sufficiently many first integrals that commute with respect to this bracket. A quiver is a graph consisting of a number of nodes together with arrows between the nodes. In \cite{fordy_marsh}, Fordy and Marsh showed how recurrences of the form (\ref{arec}) are produced from sequences of mutations in cluster algebras defined by quivers with a special periodicity property with respect to mutations. They define a quiver $Q$ with $N$ nodes to be \textit{cluster mutation-periodic with period} $m$ if $\mu_m\cdots \mu_{2}\cdot \mu_1 Q = \rho^m Q$, where $\mu_j$ denotes quiver mutation at node $j$ and $\rho$ denotes a cyclic permutation of the nodes. Associated with the quiver mutation there is a corresponding cluster mutation acting on a cluster ${\bf x}=(x_1, \ldots ,x_N)$ in a (coefficient-free) cluster algebra. In the period 1 case, $m=1$, the action of a suitable ordered sequence of cluster mutations on cluster variables is precisely equivalent to iteration of a recurrence of the form (\ref{arec}). A complete classification of period 1 quivers is given in \cite{fordy_marsh}: any such quiver produces a recurrence, and conversely any recurrence of the form (\ref{arec}) determines a cluster mutation-periodic quiver with period 1. \subsection{Outline of the paper} In the next section, we briefly review how the recurrences (\ref{arec}), or the equivalent birational maps in dimension $N$, come about from cluster mutations. 
The main object is the $N\times N$ skew-symmetric integer matrix $B$ corresponding to the quiver, which not only defines the exponents appearing in (\ref{arec}), but also produces a presymplectic form which is invariant under the map; this is the two-form introduced in \cite{gsvduke}. When $\det B\neq 0$, the form is symplectic, so the map automatically has a nondegenerate Poisson bracket. The main result of section 2 is Theorem \ref{torusred}, which states that (even if $\det B= 0$) it is always possible to reduce (\ref{arec}) to a symplectic map, possibly on a space of lower dimension. This provides us with the appropriate setting in which to consider Liouville integrability in the rest of the paper. In section \ref{intmaps} we consider the recurrences (\ref{arec}) in the light of the algebraic entropy test \cite{bellon_viallet}. We give details of a series of conjectures which show that the algebraic entropy can be determined explicitly from the tropical version of (\ref{arec}), expressed in terms of the max-plus algebra. From the point of view of the rest of the paper, the main result is a corollary of these conjectures (Theorem \ref{zeroe}), which classifies the cases with zero entropy into four families, labelled (i)-(iv). All of the maps in family (i) have periodic orbits, and it is a trivial task to show that they are Liouville integrable maps. The majority of the rest of the paper is devoted to families (ii), (iii) and (iv). Section \ref{primit} is concerned with the family (ii), which arises from cluster mutations applied to the primitive quivers, denoted $P_{N}^{(q)}$, which are defined for each positive integer $N$ and $q=1,\ldots , \lfloor{N/2}\rfloor$. These were introduced in \cite{fordy_marsh}, where they were shown to be the building blocks of all mutation-periodic quivers with period 1. 
They are also equivalent to affine $A$-type Dynkin quivers (or copies of such), and it was shown by Fordy and Marsh that the cluster variables in this case satisfy linear recurrence relations with constant coefficients. (This was subsequently shown for the general case of cluster algebras associated with affine Dynkin quivers, in \cite{assem} and \cite{keller_scher}.) Here we give a new proof of these linear recurrences, which relies on additional linear relations with periodic coefficients, and their associated monodromy matrices. These periodic quantities are the key to the Liouville integrability of the maps in family (ii). A large number of new examples of integrable maps arise in this construction and are explicitly presented. Section \ref{pert} deals with the family (iii), each member of which arises from a quiver which is a deformation of a primitive $P_{N}^{(q)}$, for a particular $q$ and $N$. The general properties of this family are very close to those of the primitives. In particular, the cluster variables satisfy linear recurrences with constant coefficients, and there are additional linear relations with periodic coefficients. Once again, associated monodromy arguments, and Poisson subalgebras defined by the periodic quantities, are the key to the Liouville integrability of members of this family. Again, many new examples of integrable maps arise and are explicitly presented. The family (iv) consists of Somos-type recurrences with three terms, typified by (\ref{somos4}). In section \ref{somosmaps} we outline some different approaches to understanding the Liouville integrability of the maps in this family, such as making reductions of the Hirota-Miwa equation and its Lax pair and by deriving higher bilinear relations with constant coefficients. Some of our results were announced previously in \cite{sigma}. 
\section{Symplectic maps from cluster recurrences} \label{torusaction} \setcounter{equation}{0} Given a recurrence, a major problem is to find an appropriate symplectic or Poisson structure which is invariant under the action of the corresponding finite-dimensional map. Remarkably, in the case of the cluster recurrences (\ref{arec}) this problem can be solved algorithmically. \subsection{Recurrences from periodic quivers} A quiver $Q$ with $N$ nodes and no 1-cycles or 2-cycles admits quiver mutation. The mutation $\mu_k$ at node $k$ produces a new quiver $\tilde{Q}=\mu_kQ$ which is obtained as follows: (i) reverse all arrows in/out of node $k$; (ii) if there are $p$ arrows from node $j$ to node $k$, and $q$ arrows from node $k$ to node $\ell$, then add $pq$ arrows from node $j$ to node $\ell$; (iii) remove any 2-cycles created in step (ii). An $N\times N$ skew-symmetric matrix $B$ with integer matrix elements $b_{j\ell}$ defines a quiver $Q$ with $N$ nodes, without 1-cycles or 2-cycles. Matrix mutation applied at the vertex $k$ is also denoted $\mu_k$, and starting from $B$ it produces a new matrix $\tilde{B}=\mu_kB=(\tilde{b}_{j\ell})$ defined by \beq\label{matmut} \tilde{b}_{j\ell} = \left\{ \begin{array}{ll} -b_{j\ell}\qquad & \mathrm{if} \quad j=k\quad \mathrm{or}\quad \ell=k, \\ b_{j\ell} + \frac{1}{2}(|b_{jk}|b_{k\ell} + b_{jk}|b_{k\ell}|) & \mathrm{otherwise.} \end{array} \right. \eeq This matrix mutation is equivalent to the action of $\mu_k$ on $Q$, via quiver mutation, producing the new quiver $\tilde{Q}=\mu_k Q$. 
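A direct implementation of the matrix mutation rule (\ref{matmut}) is short. The sketch below (an illustration added here, with 0-indexed nodes) applies $\mu_1$ to the matrix $B$ of the affine $A_2$ example treated later in the text (Example \ref{affineA2}) and checks that $\mu_k$ is an involution, that skew-symmetry is preserved, and that $B$ satisfies the period-1 condition $\tilde{b}_{j+1,k+1}=b_{jk}$ (indices mod $N$).

```python
# Illustrative sketch of matrix mutation (0-indexed nodes).
def mutate(B, k):
    N = len(B)
    Bt = [[0] * N for _ in range(N)]
    for j in range(N):
        for l in range(N):
            if j == k or l == k:
                Bt[j][l] = -B[j][l]
            else:
                # b_jl + (|b_jk| b_kl + b_jk |b_kl|)/2; the sum is always even
                Bt[j][l] = B[j][l] + (abs(B[j][k]) * B[k][l]
                                      + B[j][k] * abs(B[k][l])) // 2
    return Bt

# The 3-node matrix of the affine A_2 example discussed later in the text.
B = [[0, -1, -1],
     [1, 0, -1],
     [1, 1, 0]]
Bt = mutate(B, 0)
# period-1 condition: entries of mu_1 B are those of B shifted cyclically
period1 = all(Bt[(j + 1) % 3][(l + 1) % 3] == B[j][l]
              for j in range(3) for l in range(3))
```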
In addition to the matrix mutation, the cluster variables ${\bf x}=(x_1,x_2,\ldots,x_N)$ are transformed by $\mu_k$ to a new cluster $\tilde{{\bf x}}=(\tilde{x}_1,\tilde{x}_2,\ldots,\tilde{x}_N)$ in such a way that all variables except $x_k$ are left unaltered, so that $\tilde{x}_j= x_j, \; j\neq k$, and the exchange relation corresponding to the mutation $\mu_k$ is conveniently written as \beq \label{exchange} \tilde{x}_k x_k = \prod_{j=1}^N x_j^{[b_{k,j}]_+} + \prod_{j=1}^N x_j^{[-b_{k,j}]_+}, \eeq where $ [b]_+=\max (b,0). $ Note the identity $ \frac{1}{2}(a|b|-|a|b) = a[-b]_+-b[-a]_+. $ In what follows, we require that the $N\times N$ skew-symmetric matrix $B$ defines an $N$-node quiver $Q$ that is cluster mutation-periodic with period 1. All such matrices were classified in \cite{fordy_marsh}. Cluster mutation-periodicity, in the case that the period is 1, means that after applying a single step of mutation at one of the nodes, $\mu_1$ say, the quiver $\tilde{Q}$ is the same as the quiver $\rho Q$ obtained from $Q$ by applying the cyclic permutation $\rho$, given by $\rho: \, (1,2,3,\ldots,N)\mapsto (N,1,2,\ldots,N-1)$, such that the number of arrows from $j$ to $k$ in $Q$ is the same as the number of arrows from $\rho^{-1}(j)$ to $\rho^{-1}(k)$ in $\rho Q$. This periodicity requirement corresponds to explicit conditions on the matrix elements of $B$, namely that \beq\label{mutper} \tilde{b}_{j+1,k+1}=b_{jk} \eeq for all $j,k$, where the indices are read modulo $N$. Since $\rho^N = \mathrm{id}$, from the periodicity it is clear that $Q$ is preserved by the composition of $N$ mutations that cycle around its nodes, i.e. $\mu_N\cdots\mu_2\cdot \mu_1Q =Q$. Starting from $B$, one constructs the $N$th order recurrence relation \beq\label{crec} x_{n+N}x_n = \prod_{j=1}^{N-1} x_{n+j}^{[b_{1,j+1}]_+} + \prod_{j=1}^{N-1} x_{n+j}^{[-b_{1,j+1}]_+}, \qquad n=1,2,3,\ldots. 
\eeq If $B$ satisfies the conditions (\ref{mutper}), then iterating the recurrence (\ref{crec}) starting from the initial data $(x_1,\ldots,x_N)$ is precisely equivalent to applying cluster mutation $\mu_1$ at the vertex 1, followed by the subsequent mutations $\mu_2, \mu_3,\ldots, \mu_j, \mu_{j+1},\ldots$, and so on. The recurrence relation (\ref{crec}) is clearly reversible, in the sense that (as long as neither $x_n$ nor $x_{n+N}$ is zero) it can be solved both for $x_{n+N}$, to iterate forwards, and for $x_n$, to iterate backwards. This means that the index $n$ in (\ref{crec}) is allowed to take all values $n\in\Z$, and also that iteration of the recurrence is equivalent to iteration of the birational map $\varphi$ from $\C^N$ to itself, defined by \beq \label{bir} \varphi: \qquad \left(\begin{array}{c} x_1 \\ x_2 \\ \vdots \\ x_{N-1} \\ x_{N} \end{array} \right) \longmapsto \left(\begin{array}{c} x_2 \\ x_3 \\ \vdots \\ x_{N} \\ x_{N+1} \end{array} \right), \qquad \mathrm{where} \qquad x_{N+1}=\frac{ \prod_{j=1}^{N-1} x_{j+1}^{[b_{1,j+1}]_+} + \prod_{j=1}^{N-1} x_{j+1}^{[-b_{1,j+1}]_+} }{x_1}. \eeq One can decompose the map (\ref{bir}) as $\varphi =\rho^{-1}\cdot \mu_1$, where in terms of the cluster ${\bf x}=(x_j)$ the map $\mu_1$ sends $(x_1, x_2, \ldots , x_N)$ to $(\tilde{x}_1, x_2, \ldots , x_N)$, with $\tilde{x}_1$ defined according to the exchange relation (\ref{exchange}) with $k=1$, and $\rho^{-1}$ sends $(x_1, x_2, \ldots , x_N)$ to $( x_2, \ldots , x_N, x_1)$. Due to the periodicity requirement on $B$, we have $\rho^{-1}\cdot \mu_1 \, (B) = B$, so the action of $\varphi$ on this matrix is trivial. 
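As a numerical shadow of the Laurent phenomenon, one can iterate (\ref{bir}) over the exact rationals and observe that all iterates from unit initial data are integers. The sketch below (an illustration added here, 0-indexed) reads the exponents off the first row of the affine $A_2$ matrix from Example \ref{affineA2}, so it iterates $x_{n+3}x_n=x_{n+2}x_{n+1}+1$.

```python
from fractions import Fraction

def iterate(Brow, x0, n_terms):
    # x_{N+1} = (prod x_{j+1}^{[b_{1,j+1}]_+} + prod x_{j+1}^{[-b_{1,j+1}]_+}) / x_1,
    # with the exponents b_{1,j+1} read from the first row of B (0-indexed).
    N = len(Brow)
    x = [Fraction(v) for v in x0]
    while len(x) < n_terms:
        w = x[-N:]
        pos, neg = Fraction(1), Fraction(1)
        for j in range(1, N):
            pos *= w[j] ** max(Brow[j], 0)
            neg *= w[j] ** max(-Brow[j], 0)
        x.append((pos + neg) / w[0])
    return x

# first row of B from the affine A_2 example: x_{n+3} x_n = x_{n+2} x_{n+1} + 1
seq = iterate([0, -1, -1], [1, 1, 1], 12)
```

Every iterate is an integer, even though each step divides by the oldest variable.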
\subsection{The Gekhtman-Shapiro-Vainshtein bracket}\label{gsv} In \cite{gsv} it was shown that very general cluster algebras admit a linear space of Poisson brackets of log-canonical type, compatible with the cluster maps generated by mutations, and having the form \beq\label{logcan} \{x_j,x_k\}=c_{jk}\, x_{j}x_k \eeq for some skew-symmetric constant coefficient matrix $C=(c_{jk})$. Compatibility of the Poisson structure means that the cluster transformations $\mu_i$ given by (\ref{exchange}) correspond to a {\em change of coordinates}, $\tilde{\bf{ x}}=\mu_i ({\bf x}) $, with their bracket also being log-canonical, \beq\label{logcant} \{ \tilde{x}_j,\tilde{x}_k\}=\tilde{c}_{jk}\,\tilde{x}_{j}\tilde{x}_k, \eeq for another skew-symmetric constant matrix $\tilde{C}=(\tilde{c}_{jk})$. Our viewpoint is to regard the cluster transformation as a {\em birational map} ${\bf x}\mapsto \tilde{\bf{ x}}=\varphi ({\bf x})$ in $\C^N$, and require a Poisson structure that is {\it invariant} with respect to $\varphi$ (not just {\em covariant}). Therefore, in (\ref{logcant}) we require $\tilde{C}=C$. However, there may not be a non-trivial log-canonical Poisson bracket that is covariant or invariant under cluster transformations. \bex \label{affineA2} {\em Corresponding to (\ref{bir}) with $N=3$, the matrix $B$ and birational map on $\C^3$ are given by \beq \label{affA2} B=\left(\bear{ccc} 0 & -1 & -1 \\ 1 & 0 & -1 \\ 1 & 1 & 0 \eear\right),\qquad \left(\begin{array}{c} x_1 \\ x_2 \\ x_{3} \end{array} \right) \longmapsto \left(\begin{array}{c} x_2 \\ x_3 \\ (x_{2}x_3+1)/x_1 \end{array} \right). \eeq Suppose that there is an invariant Poisson bracket of the form (\ref{logcan}). The condition $\varphi^*x_j = x_{j+1}$ implies that $c_{j+\ell,k+\ell}=c_{jk}$ for all indices $j,k,\ell$ in the appropriate range, so $C$ is a Toeplitz matrix: $$ C= \left(\bear{ccc} 0 & \al & \beta \\ -\al & 0 & \al \\ -\beta & -\al & 0 \eear\right).
$$ Upon taking the bracket of both sides of the relation $x_4x_1=x_2x_3+1$ with $x_2$, one finds that $(\al-\beta )x_1x_2x_4 = -\al x_2^2x_3$, so $(\al-\beta )(x_2^2x_3+ x_2)=-\al x_2^2x_3$, which gives $\al = \beta = 0$, so the bracket is trivial. } \eex \begin{rem}[Non-invariance of symplectic leaves] \label{sleaf} {\em Even in the case where the map $\varphi$ does admit an invariant log-canonical Poisson bracket, it may be degenerate, and in that case $\varphi$ need not preserve the symplectic leaves of the bracket. For instance, the Somos-4 recurrence (\ref{somos4}) has the invariant Poisson bracket \cite{honelaur} \beq\label{s4br} \{\, x_j,x_k\,\} = (k-j)\,x_jx_k, \eeq which has rank two, but the two independent Casimirs $x_1x_3/x_2^2$, $x_2x_4/x_3^2$ are not fixed by the action of $\varphi$ (see Example \ref{s4e} below), so the symplectic leaves are not preserved. The analogous observation for Somos-5 appears in \cite{sigma}. } \end{rem} In general, we shall see that it is more useful to start with a two-form in the variables $x_j$, rather than a Poisson bivector field corresponding to a bracket. \subsection{Symplectic forms for cluster maps} Given a skew-symmetric matrix $B$, one can define the log-canonical two-form \beq \label{omega} \om =\sum_{j<k} \frac{b_{jk}}{x_jx_k}\dd x_j\wedge \dd x_k, \eeq which is just the constant skew-form $\om = \sum_{j<k} b_{jk}\, \dd z_j\wedge \dd z_k$, written in the logarithmic coordinates $z_j=\log x_j$, so it is evidently closed, but may be degenerate. In \cite{gsvduke} (see also \cite{fockgon}) it was shown that for a cluster algebra defined by a skew-symmetric integer matrix $B$, this two-form is compatible with cluster transformations, in the sense that under a mutation map $\mu_i : \, {\bf x} \mapsto \tilde{ {\bf x}}$, it transforms as $\mu_i^* \omega =\sum_{j<k}\tilde{b}_{jk} \dd\log\tilde{x}_j\wedge \dd\log\tilde{x}_k$. 
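Returning briefly to Remark \ref{sleaf}: the invariance of the bracket (\ref{s4br}) under the Somos-4 map can be confirmed pointwise by checking the Poisson map condition $J\,P(x)\,J^T = P(\varphi(x))$, where $J$ is the Jacobian of the map and $P$ the matrix of the bracket. The following Python sketch is illustrative only, with the Jacobian differentiated by hand, an arbitrarily chosen base point, and exact rational arithmetic.

```python
from fractions import Fraction

def phi(x):
    """The Somos-4 map (x1,x2,x3,x4) -> (x2,x3,x4,(x2*x4 + x3**2)/x1)."""
    x1, x2, x3, x4 = x
    return [x2, x3, x4, (x2 * x4 + x3 ** 2) / x1]

def jacobian(x):
    """Jacobian matrix of phi, entered by hand."""
    x1, x2, x3, x4 = x
    x5 = (x2 * x4 + x3 ** 2) / x1
    return [[0, 1, 0, 0],
            [0, 0, 1, 0],
            [0, 0, 0, 1],
            [-x5 / x1, x4 / x1, 2 * x3 / x1, x2 / x1]]

def P(x):
    """Matrix of the bracket (s4br): {x_j, x_k} = (k - j) x_j x_k."""
    return [[(k - j) * x[j] * x[k] for k in range(4)] for j in range(4)]

x = [Fraction(2), Fraction(3), Fraction(5), Fraction(7)]
J = jacobian(x)
Px = P(x)
JPJt = [[sum(J[i][a] * Px[a][b] * J[j][b]
             for a in range(4) for b in range(4))
         for j in range(4)] for i in range(4)]
# phi is a Poisson map for (s4br): J P(x) J^T = P(phi(x)).
assert JPJt == P(phi(x))
```

The same check at a single generic point does not prove invariance, but repeating it at several rational points gives convincing evidence, consistent with the proof in \cite{honelaur}.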
In the case that the matrix $B$ is nondegenerate, $\om$ turns out to be a symplectic form for the map $\varphi$, but in general it is a presymplectic form. For the purposes of our discussion it is convenient to present some formulae from \cite{fordy_marsh}, which give the following result. \begin{lem} \label{symplectic} Let $B$ be a skew-symmetric integer matrix. The following conditions are equivalent. \noindent {\em (i)} The matrix $B$ defines a cluster mutation-periodic quiver with period 1. \noindent {\em (ii)} The matrix elements of $B$ satisfy the relations \beq\label{reln1} b_{j,N}=b_{1,j+1}, \qquad j=1,\ldots , N-1, \eeq and \beq\label{reln2} b_{j+1,k+1}=b_{j,k}+b_{1,j+1} [-b_{1,k+1}]_+ - b_{1,k+1} [-b_{1,j+1}]_+ , \qquad 1\leq j,k \leq N-1. \eeq \noindent {\em (iii)} The two-form $\om$ is preserved by the map $\varphi$, i.e. $\varphi^* \om =\om$. \end{lem} \begin{prf} The proof of (i)$\iff$(ii) follows straight from the definition of periodicity (see the proof of Theorem 6.1 in \cite{fordy_marsh}). The implication (ii)$\implies$(iii) in Lemma \ref{symplectic} is a consequence of Theorem 2.1 in \cite{gsvduke}, after applying the permutation $\rho^{-1}$ to the coordinates. The reverse implication follows from a direct calculation, which is omitted. \end{prf} The formulae (\ref{reln1}) and (\ref{reln2}) entail that for a period 1 cluster mutation-periodic quiver, the matrix $B$ is completely determined by the elements in its first row, so that each recurrence of the form (\ref{arec}) with palindromic exponents corresponds to a matrix $B$. Theorem 6.1 in \cite{fordy_marsh} is equivalent to the following formula for $b_{jk}$: \beq \label{aform} b_{jk}=-b_{kj}= a_{k-j} +\sum_{\ell = 1}^{\max (j-1,N-k)} \left( a_\ell[-a_{\ell+k-j}]_+ -a_{\ell+k-j} [-a_\ell]_+ \right), \qquad 1\leq j< k\leq N, \eeq where the $a_j=b_{1,j+1}$ for $j=1,\ldots ,N-1$ form a palindromic integer $(N-1)$-tuple, such that $a_j=a_{N-j}$.
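As a concrete check, the formula (\ref{aform}) can be implemented directly. The sketch below (illustrative, in Python) rebuilds the matrix (\ref{s4gen}) with $c=2$ from the palindromic triple $(-1,2,-1)$ and verifies the relations (\ref{reln1}) and (\ref{reln2}).

```python
def pos(b):
    """[b]_+ = max(b, 0)."""
    return max(b, 0)

def matrix_from_palindrome(a):
    """Build B = (b_{jk}) from a palindromic (N-1)-tuple via (aform)."""
    N = len(a) + 1
    av = {j: a[j - 1] for j in range(1, N)}      # a_1, ..., a_{N-1}
    B = [[0] * N for _ in range(N)]
    for j in range(1, N + 1):
        for k in range(j + 1, N + 1):
            s = av[k - j]
            for l in range(1, max(j - 1, N - k) + 1):
                s += av[l] * pos(-av[l + k - j]) - av[l + k - j] * pos(-av[l])
            B[j - 1][k - 1], B[k - 1][j - 1] = s, -s
    return B

# The palindromic triple (-1, 2, -1) reproduces (s4gen) with c = 2.
B = matrix_from_palindrome([-1, 2, -1])
assert B == [[0, -1, 2, -1],
             [1, 0, -3, 2],
             [-2, 3, 0, -1],
             [1, -2, 1, 0]]

N = 4
# (reln1): b_{j,N} = b_{1,j+1} for j = 1, ..., N-1.
assert all(B[j - 1][N - 1] == B[0][j] for j in range(1, N))
# (reln2): b_{j+1,k+1} = b_{jk} + b_{1,j+1}[-b_{1,k+1}]_+ - b_{1,k+1}[-b_{1,j+1}]_+.
assert all(B[j][k] == B[j - 1][k - 1] + B[0][j] * pos(-B[0][k])
                     - B[0][k] * pos(-B[0][j])
           for j in range(1, N) for k in range(1, N))
```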
Apart from being skew-symmetric, $B$ is also symmetric about the skew diagonal, i.e. $b_{jk}=b_{N-k+1,N-j+1}$. Henceforth when we refer to a recurrence of the form (\ref{crec}), and the corresponding matrix $B=(b_{jk})$, we assume that the conditions (\ref{reln1}) and (\ref{reln2}) hold. \bex [Corollary 2.2 in \cite{honelaur}] {\em For integer values of $c\geq 0$, the skew-symmetric matrix \beq \label{s4gen} B=\left(\bear{cccc} 0 & -1 & c & -1 \\ 1 & 0 & -(c+1) & c \\ -c & (c+1) & 0 & -1 \\ 1 & -c & 1 & 0 \eear\right), \eeq satisfies the conditions (\ref{reln1}) and (\ref{reln2}), which means that it defines a period 1 cluster mutation-periodic quiver $Q$, in the sense of \cite{fordy_marsh}. Thus, by Lemma \ref{symplectic}, for each $c$ the map $\varphi$ corresponding to the recurrence \beq \label{7betarec} x_{n+4}\, x_{n} = x_{n+3}\, x_{n+1} + x_{n+2}^c \eeq preserves the two-form \beq \label{7sympm1is1} \omega = -\left( \frac{\dd x_1 \wedge \dd x_2}{x_1x_2}+\frac{\dd x_1 \wedge \dd x_4}{x_1x_4}+ \frac{\dd x_3 \wedge \dd x_4}{x_3x_4}\right) +c \left(\frac{\dd x_1 \wedge \dd x_3}{x_1x_3}+ \frac{\dd x_2 \wedge \dd x_4}{x_2x_4}\right) -(c+1) \frac{\dd x_2 \wedge \dd x_3}{x_2x_3}. \eeq } \eex \begin{rem}[Invariant volume form] \label{volume}{\em For all maps $\varphi$ of the form (\ref{bir}), or of the more general form (\ref{recf}), the volume $N$-form \beq\label{vol} \Omega = \frac{\dd x_1\wedge \ldots \wedge \dd x_N}{\prod_{j=1}^Nx_j}, \eeq is invariant up to a sign, depending on the parity of $N$, i.e. $\varphi^* \Omega = (-1)^N\Omega$. In the case that $\om$ is nondegenerate, which can only happen for even $N=2K$, up to overall scale this volume form is the Poincar\'e invariant $\om^K=\om\wedge\ldots \wedge\om$ ($K$ terms). 
} \end{rem} If the matrix $B$ is nondegenerate, then (up to rescaling by an overall constant) the associated log-canonical Poisson bracket (\ref{logcan}) is given by the dual bivector field \beq\label{biv} \mathcal{J} = \sum_{j<k} c_{jk}\, x_jx_k\, \frac{\partial}{\partial x_j}\wedge \frac{\partial}{\partial x_k} \qquad \mathrm{with} \qquad C=(c_{jk})=B^{-1}. \eeq In the case that $\det \,B =0$, it is necessary to consider a projection to a lower-dimensional space with a symplectic form, as follows. \begin{thm} \label{torusred} The map $\varphi$ is symplectic whenever $B$ is nondegenerate. For $\mathrm{rank} \, B = 2K \leq N$, there is a rational map $\pi$ and a symplectic birational map $\hat\varphi$ such that the diagram \beq \label{cd} \begin{CD} \C^N @>\varphi >> \C^N\\ @VV\pi V @VV\pi V\\ \C^{2K} @>\hat{\varphi}>> \C^{2K} \end{CD} \eeq is commutative, with a log-canonical symplectic form $\hat\om$ on $\C^{2K}$ that satisfies $\pi^*\hat\om = \om$. \end{thm} The proof of this theorem will occupy most of the rest of this subsection. To begin with we consider the null distribution of $\om$, which (away from the hyperplanes $x_j=0$) is generated by $N-2K$ independent commuting vector fields, each of which is of the form \beq\label{null} \frac{\partial }{\partial t} = \sum_{j=1}^N u_jx_j \, \frac{\partial}{\partial x_j} \qquad \mathrm{for} \qquad {\bf u}=(u_j)\in\mathrm{ker} \, B. \eeq Since this is an integrable distribution, Frobenius' theorem gives local coordinates $t_1, \ldots , t_{N-2K} , y_1,\ldots ,y_{2K}$ such that the integral manifolds of the null distribution are given by $y_j =$constant, $j=1,\ldots ,2K$. 
The coordinates $y_j$ must be invariants for these commuting vector fields, and can be chosen as linear functions of the logarithmic coordinates $z_j = \log x_j$, but for our purposes it is more convenient to take functions of the form $$ y= {\bf x}^{\bf v} :=\prod_j x_j^{v_j} \qquad \iff \qquad ({\bf u}, {\bf v})=0 \quad \forall \, {\bf u}\in \mathrm{ker}\, B , $$ where $(\, , )$ denotes the standard scalar product. This yields a log-canonical symplectic form in terms of $y_j$, which is the generalised Weil-Petersson form in \cite{gsvduke}. Alternatively, one can consider integral curves of the vector fields (\ref{null}), each of which is the orbit of the point ${\bf x}$ under the action of the algebraic torus $\C^*$, which acts by scaling the coordinates $x_j$. We denote this one-parameter group action by \beq\label{scale} {\bf x}\rightarrow \la^{\bf u}\cdot {\bf x}=(\la^{u_j}x_j), \qquad \la\in\C^*. \eeq Combining $N-2K$ independent vector fields of the form (\ref{null}), which (without loss of generality) can be defined by choosing $N-2K$ independent vectors ${\bf u}=(u_j)\in\mathrm{ker}\, B$ with components $u_j\in\Z$, we see that the integral manifold through a generic point ${\bf x}$ is the same as its orbit under the scaling action of the algebraic torus $(\C^*)^{N-2K}$. The coordinates $y_j={\bf x}^{{\bf v}_j}$ are the invariants under these scaling transformations, and, by choosing the ${\bf v}_j$ to be integer vectors, define a rational map $\pi$. \begin{lem} \label{symp} Suppose that the integer vectors ${\bf v}_1, \ldots , {\bf v}_{2K}$ form a basis for $\mathrm{im}\, B$, and define the rational map $$ \bear{lrcl} \pi : \quad & \C^N & \longrightarrow & \C^{2K} \\ & {\bf x} & \longmapsto & {\bf y}, \qquad y_j ={\bf x}^{{\bf v}_j}, \quad j=1,\ldots, 2K. 
\eear $$ Then the $y_j$ are a complete set of Laurent monomial invariants for all of the scaling transformations (\ref{scale}) defined by integer vectors ${\bf u}\in \mathrm{ker}\, B$, and there is a log-canonical symplectic form \beq\label{yform} \hat{\om} = \sum_{j<k} \frac{\hat{b}_{jk}}{y_jy_k}\dd y_j\wedge \dd y_k, \qquad with \qquad \pi^*\hat\om = \om . \eeq \end{lem} \begin{prf} Since the integer matrix $B$ is skew-symmetric, the vector space $\Q^N$ has an orthogonal direct sum decomposition $\Q^N = \mathrm{im}\, B \oplus \mathrm{ker}\, B$. The scaling action on Laurent monomials gives $\la^ {{\bf u}}\cdot {\bf x}^{{\bf v}} = \prod_j \la^{u_jv_j} x_j^{v_j} =\la^{({\bf u},{\bf v})} {\bf x}^{{\bf v}}$, hence ${\bf x}^{{\bf v}}$ is invariant under the overall action of $(\C^*)^{N-2K}$ if and only if $({\bf u},{\bf v})=0$ for all ${\bf u}\in \mathrm{ker}\, B$, so ${\bf v}\in\mathrm{im}\, B$, and a basis of $\mathrm{im}\, B$ gives $2K$ independent monomial invariants. Now extend the basis of $\mathrm{im}\, B$ to a basis $\{{\bf v}_j\}_{j=1}^{N}$ for $\Q^N$, so that ${\bf v}_j\in\mathrm{ker}\, B$ for $2K+1\leq j\leq N$, and let $M$ be the matrix whose rows consist of the $N$ basis vectors. Then one finds the block matrix $$ B^\natural = M^{-T}BM^{-1} =\left(\bear{cc} \hat{B} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \eear \right) $$ (with $M^{-T}=(M^{-1})^T$, and $T$ denoting transpose) where $ \hat{B} = (\hat{b}_{jk}) $ is a nondegenerate skew-symmetric $2K\times 2K$ matrix. Defining the two-form (\ref{yform}) in terms of $ \hat{B}$ gives $ \pi^*\hat\om =\om$, as required. \end{prf} The map $\pi$ and the specific form of $\hat\om$ depend on the choice of integer basis for $\mathrm{im}\, B$. Here we consider rational maps between fixed affine spaces, even if these are not defined everywhere. 
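The construction of Lemma \ref{symp} and the commutativity of the diagram (\ref{cd}) can be verified directly in the Somos-4 case treated in Example \ref{s4e} below. The following Python sketch is illustrative only: the kernel vectors, the invariants $y_1,y_2$ and the reduced map are taken from that example, the base point is arbitrary, and exact rational arithmetic is used throughout.

```python
from fractions import Fraction

def phi(x):
    """The Somos-4 map on C^4 (the case c = 2 of the recurrence (7betarec))."""
    x1, x2, x3, x4 = x
    return [x2, x3, x4, (x2 * x4 + x3 ** 2) / x1]

def proj(x):
    """The monomial invariants (s4yj): y1 = x1 x3 / x2^2, y2 = x2 x4 / x3^2."""
    x1, x2, x3, x4 = x
    return [x1 * x3 / x2 ** 2, x2 * x4 / x3 ** 2]

def phi_hat(y):
    """The reduced map (s4map): (y1, y2) -> (y2, (y2 + 1)/(y1 y2^2))."""
    y1, y2 = y
    return [y2, (y2 + 1) / (y1 * y2 ** 2)]

x = [Fraction(2), Fraction(3), Fraction(5), Fraction(7)]

# y1, y2 are invariant under the scalings (scale) generated by ker B.
for u in ([1, 1, 1, 1], [1, 2, 3, 4]):
    lam = Fraction(5, 3)
    scaled = [lam ** uj * xj for uj, xj in zip(u, x)]
    assert proj(scaled) == proj(x)

# The diagram (cd) commutes: proj . phi = phi_hat . proj.
assert proj(phi(x)) == phi_hat(proj(x))

# phi_hat preserves om_hat = dy2 ^ dy1 / (y1 y2): for a 2x2 Jacobian this
# reduces to det(J) * y1 * y2 = yhat1 * yhat2, with J entered by hand.
y = proj(x)
y1, y2 = y
J = [[0, 1],
     [-(y2 + 1) / (y1 ** 2 * y2 ** 2), -(y2 + 2) / (y1 * y2 ** 3)]]
detJ = J[0][0] * J[1][1] - J[0][1] * J[1][0]
yh = phi_hat(y)
assert detJ * y1 * y2 == yh[0] * yh[1]
```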
In the case where $B$ is nondegenerate, the diagram (\ref{cd}) is trivial for $\pi =\mathrm{id}$, but there are other non-trivial choices of $\pi$, corresponding to different integer bases for $\mathrm{im}\, B$. The following classical theorem (which is a special case of Theorem IV.1 in \cite{newman}) provides a canonical choice of $\pi$, and via Lemma \ref{symp} gives Darboux coordinates for the presymplectic form $\om$. The proof in \cite{newman} is constructive. \begin{thm}\label{normalform} If $B$ is a skew-symmetric matrix of rank $2K$ in $\mathrm{Mat}_N(\Z )$ then there are integers $h_1,\ldots ,h_K$ and a unimodular matrix $M\in\mathrm{Mat}_N(\Z )$ (a unit in $\mathrm{Mat}_N(\Z )$, with $\det M=\pm 1$) such that $B=M^TB^\natural M$, where $$ B^\natural = h_1 \, \left(\bear{cc} 0 & 1 \\ -1 & 0 \eear\right) \oplus h_2 \, \left(\bear{cc} 0 & 1 \\ -1 & 0 \eear\right) \oplus \ldots \oplus h_K \, \left(\bear{cc} 0 & 1 \\ -1 & 0 \eear\right) \oplus \mathbf{0}, $$ with $h_j|h_{j+1}$ for $j=1,\ldots ,K-1$. \end{thm} In the above result, the first $2K$ rows of the unimodular matrix $M$ provide a $\Z$-basis for the $\Z$-module $\mathrm{im}\, B_\Z=\mathrm{im}\, B\cap \Z^N$ (a sublattice of $\Z^N$), and the remaining $N-2K$ rows provide a $\Z$-basis for $\mathrm{ker}\, B_\Z$. For our purposes, it is only the choice of integer vectors spanning $\mathrm{im}\, B$ that matters for the definition of the map $\pi$, and if the integers $h_j$ are not all 1 then it is not essential to have a $\Z$-basis for $\mathrm{im}\, B_\Z$. The second part of the proof of Theorem \ref{torusred} involves showing that, when $B$ is degenerate, it is possible to choose integer vectors ${\bf v}_j$ spanning $\mathrm{im}\, B$ in a way that is compatible with the map $\varphi$. In other words, $\varphi$ should reduce to a symplectic map $\hat\varphi$ in the coordinates $y_j$, and we require that the latter map should also be birational.
The next result provides two sets of sufficient conditions on the vectors ${\bf v}_j$ which ensure birationality of the map $\hat\varphi$. \begin{lem} \label{ablem} Let $\{{\bf v}_j\}_{j=1}^{2K}$ be integer vectors that span $\mathrm{im}\, B$, and suppose that the columns of $B$ belong to $<{\bf v}_1,\ldots, {\bf v}_{2K}>_\Z$. If either {\em (a)} $\mathrm{im}\, B_\Z=<{\bf v}_1,\ldots, {\bf v}_{2K}>_\Z$, or {\em (b)} each ${\bf v}_j$ belongs to the $\Z$-span of the columns of $B$, \noindent then for $y_j({\bf x}) = {\bf x}^{{\bf v}_j}$, there exist rational functions $f_j({\bf y}), \, f^\dagger_j({\bf y}) \in \Q ({\bf y})=\Q (y_1,\ldots ,y_{2K})$ such that $\varphi^* y_j({\bf x}) =\pi^* f_j({\bf y})$, $(\varphi^{-1})^* y_j({\bf x}) =\pi^* f^\dagger_j({\bf y})$ for $j=1,\ldots , 2K$, and the map $\hat\varphi: \, {\bf y} \mapsto (f_j({\bf y}))$ is birational and symplectic. \end{lem} \begin{prf} Let $y({\bf x})={\bf x}^{\bf v}$ be one of the coordinate functions $y_j$ in the image of the map $\pi$, with $\eta=\log y$, and denote the columns of $B$ by ${\bf w}_1,\ldots , {\bf w}_N$. Then $$ \varphi^* \eta ({\bf x})=\sum_{k=1}^N v_k \varphi^* z_k =\sum_{k=1}^{N-1}v_k z_{k+1} + v_N\Big( -z_1+\sum_{j=2}^N [-b_{1j}]_+z_j + \log (1+ \exp F)\Big), $$ with $F$ given by $F:=\sum_{k=1}^{N-1} b_{1,k+1}z_{k+1}$. By the initial assumption on the ${\bf v}_j$, we can write ${\bf w}_1=\sum_{j=1}^{2K} c_j {\bf v}_j$ for $c_j\in\Z$, which implies that $\exp F=\pi^*\exp \sum_j c_j \eta_j =\pi^*\prod_j y_j^{c_j}$, the pullback of a rational function of ${\bf y}$. Then we have \beq\label{etaf} \varphi^* \eta ({\bf x})= ({\bf m}, {\bf z}) + v_N\log (1+ \exp F), \eeq where ${\bf m}$ is an integer vector with components $m_1=-v_N$, $m_j=v_{j-1}+[-b_{1j}]_+v_N$ for $2\leq j\leq N$. By definition the ${\bf w}_j$ span $\mathrm{im}\, B$, so we can write ${\bf v}=\sum_{k=1}^Nd_k{\bf w}_k$ for some $d_k\in\Q$, which in components gives $v_j=\sum_{k=1}^Nd_kb_{jk}$ for $j=1,\ldots,N$.
By using (\ref{reln1}) and (\ref{reln2}) one obtains $$ m_j = \sum_{k=1}^Nd_k(b_{j-1,k}+[-b_{1j}]_+b_{Nk}) =\sum_{k=1}^{N-1}d_k(b_{j,k+1}+[-b_{1,k+1}]_+b_{j1})-d_Nb_{j1} $$ for $j\geq 2$, while $m_1=-\sum_{k=1}^Nd_k b_{Nk}= \sum_{k=1}^{N-1}d_k b_{1,k+1}$, implying that $$ {\bf m} =\Big(\sum_{k=1}^{N-1}d_k[-b_{1,k+1}]_+ - d_N\Big){\bf w}_1 +\sum_{j=2}^Nd_{j-1}{\bf w}_j. $$ Hence we see that ${\bf m}\in\mathrm{im}\, B_\Z$. In case (a), ${\bf m}=\sum_{j=1}^{2K} \tilde{c}_j {\bf v}_j$ for $\tilde{c}_j\in\Z$, so substituting in (\ref{etaf}) and exponentiating yields $\varphi^* y({\bf x}) = \pi^* f$ with $$ f({\bf y}) = \prod_j y_j^{\tilde{c}_j} \, \Big(1+\prod_j y_j^{c_j}\Big)^{v_N}\in \Q ({\bf y}), $$ as required. In case (b), on the other hand, each $d_k\in\Z$ and the ${\bf w}_j$ are in $<{\bf v}_1,\ldots, {\bf v}_{2K}>_\Z$, so again one can expand ${\bf m}$ in this basis with coefficients $\tilde{c}_j\in\Z$, and the same formula holds for $f$. For the inverse map $\varphi^{-1}:\, (x_1,\ldots , x_N)\mapsto (x_0,x_1,\ldots, x_{N-1})$, with $$ x_0=\frac{1}{x_N}\left(\prod_{j=1}^{N-1}x_j^{[b_{1,j+1}]_+}+\prod_{j=1}^{N-1}x_j^{[-b_{1,j+1}]_+}\right), $$ the fact that $(\varphi^{-1})^* y({\bf x}) = \pi^* f^\dagger$ with $f^\dagger ({\bf y})\in \Q ({\bf y})$ follows from an almost identical calculation. The rational map $\hat\varphi$ is defined by the $2K$ functions $f_j({\bf y})$, with a rational inverse $\hat\varphi^{-1}$ given by the functions $f^\dagger_j({\bf y})$, and by construction $\hat\varphi \cdot \pi = \pi \cdot \varphi$. Combining part (iii) of Lemma \ref{symplectic} together with the result of Lemma \ref{symp} gives $\varphi^*\om -\om = \varphi^*\pi^*\hat\om-\pi^*\hat\om=\pi^*(\hat\varphi^*\hat\om-\hat\om)=0$, so $\hat\varphi^*\hat\om=\hat\om$ as required. Thus the proof of the lemma and the proof of Theorem \ref{torusred} are complete. \end{prf} \begin{rem}{\em Theorem \ref{normalform} provides a basis of $\mathrm{im}\, B$ which satisfies condition (a) above. 
To satisfy condition (b), one can choose any $2K$ independent columns (or rows) of $B$, corresponding to the $\tau$-coordinates of \cite{gsv, gsvduke}. } \end{rem} To illustrate Theorem \ref{torusred} we now present a couple of examples. \bex [Somos-4]{\em \label{s4e} The Somos-4 recurrence (\ref{somos4}) is the special case $c=2$ of (\ref{7betarec}), in which case the skew-symmetric matrix $B$ of (\ref{s4gen}) is degenerate, and $\mathrm{ker}\, B$ is spanned by the integer column vectors $ \mathbf{u}_1= (1,1,1,1)^T , \;\; \mathbf{u}_2= (1,2,3,4)^T . $ By (\ref{scale}), each of these vectors generates a scaling action on the phase space $\C^4$, with weights given by their components, so that the torus $(\C^*)^2$ acts via $(x_1,x_2,x_3,x_4)\rightarrow (\la_1\la_2\, x_1 , \la_1\la_2^2\, x_2, \la_1\la_2^3\, x_3 , \la_1\la_2^4\, x_4)$, for $(\la_1,\la_2)\in(\C^*)^2$. Then $\mathrm{im}\, B=(\mathrm{ker}\, B)^\perp=<{\bf v}_1, {\bf v}_2>$, with $ {\bf v}_1=(1,-2,1,0)^T , \;\; {\bf v}_2=(0,1,-2,1)^T, $ whose components provide the exponents for the monomial invariants \beq\label{s4yj} y_1 = \frac{x_1x_3}{x_2^2}, \qquad y_2 = \frac{x_2x_4}{x_3^2}. \eeq (These monomials also provide two independent Casimirs for the degenerate Poisson bracket (\ref{s4br}) mentioned above. The coefficients of (\ref{s4br}) are obtained from the bivector ${\bf u}_1\wedge {\bf u}_2$.) This defines the rational map $\pi : \C^4\rightarrow \C^2$ from the $x_j$ to the $y_j$. Upon computing the pullback of $\varphi$ on these monomials, one has the map \beq \label{s4map} \hat{\varphi}: \qquad \left(\begin{array}{c} y_1 \\ y_2 \end{array} \right) \longmapsto \left(\begin{array}{c} y_2 \\ (y_2+1)/(y_1y_2^2) \end{array} \right), \eeq which is of QRT type \cite{qrt1}, and preserves the symplectic form \beq\label{canon} \hat\om = \frac{1}{y_1y_2}\, \dd y_2\wedge \dd y_1 \eeq where $\om = \pi^*\hat\om$, with $\om$ given by the formula (\ref{7sympm1is1}) for $c=2$. 
} \eex Note that, in the preceding example, the chosen basis is such that every integer vector in im$\, B$ can be written as a $\Z$-linear combination of the vectors ${\bf v}_1$, ${\bf v}_2$, so the condition (a) in Lemma \ref{ablem} holds. At the same time, from the matrix (\ref{s4gen}) with $c=2$ we see that the first column of $B$ is ${\bf v}_2$, and the last column is $-{\bf v}_1$, so condition (b) holds as well. The next example shows that (a) and (b) can give inequivalent results. \bex \label{viallet} {\em In \cite{honelaur}, a singularity confinement pattern was used to obtain the sixth-order recurrence \beq \label{sixthorder} x_{n+6}\, x_n = (x_{n+5}\, x_{n+1})^2+ x_{n+4}^2x_{n+3}^4x_{n+2}^2, \eeq which is associated with mutations of a skew-symmetric matrix of rank 2, namely $$ B=\left(\bear{cccccc} 0 & -2 & 2 & 4 & 2 & -2 \\ 2 & 0 & -6 & -6 & 0 & 2 \\ -2 & 6 & 0 & -6 & -6 & 4 \\ -4 & 6 & 6 & 0 & -6 & 2 \\ -2 & 0 & 6 & 6 & 0 & -2 \\ 2 & -2 & -4 & -2 & 2 & 0 \eear\right). $$ A basis of $\mathrm{im}\, B$ corresponding to case (a) of Lemma \ref{ablem} is given in terms of the first and last columns of $B$ by ${\bf v}_1=-\frac{1}{2}{\bf w}_6$, ${\bf v}_2=\frac{1}{2}{\bf w}_1$, which gives the ${\bf y}$-coordinates $y_j = x_jx_{j+4}/(x_{j+1}x_{j+2}^2x_{j+3})$, $j=1,2$, and (up to rescaling) produces the same symplectic form as in (\ref{canon}). The corresponding symplectic map is \beq\label{vialletmap} \hat{\varphi}: \qquad \left(\begin{array}{c} y_1 \\ y_2 \end{array} \right) \longmapsto \left(\begin{array}{c} y_2 \\ (y_2^2+1)/(y_1y_2) \end{array} \right), \eeq whose singularity pattern under successive blowups was considered by Viallet \cite{viallet}. The map (\ref{vialletmap}) has positive algebraic entropy, indicating nonintegrability. (See Example \ref{trsix} in the next section.) 
However, one can take a different basis, corresponding to case (b) of Lemma \ref{ablem}, given by ${\bf v}_1'=-{\bf w}_6$, ${\bf v}_2'={\bf w}_1$, which is not a $\Z$-basis for $\mathrm{im}\, B_\Z$. From this basis, one has the coordinates $(y_1',y_2')=(y_1^2,y_2^2)$, and the map becomes $$ \varphi' : \qquad \left(\begin{array}{c} y_1' \\ y_2' \end{array} \right) \longmapsto \left(\begin{array}{c} y_2' \\ (y_2'+1)^2/(y_1'y_2') \end{array} \right). $$ The map from $(y_1,y_2)$ to $(y_1',y_2')$ is ramified, as generically there are four pairs of values $(\pm y_1,\pm y_2)$ for each pair $(y_1',y_2')$, and so the two maps $\hat\varphi$ and $\varphi '$ are not conjugate to each other. } \eex \section{Algebraic entropy and tropical recurrences} \label{intmaps} \setcounter{equation}{0} The deep connection between the integrability of maps and various weak growth properties of the iterates has been appreciated for some time (see \cite{veselov} and references). In the case of rational maps, Bellon and Viallet \cite{bellon_viallet} considered the growth of degrees of iterates, and used this to define a notion of algebraic entropy. Each component of a rational map $\varphi$ in affine space is a rational function of the coordinates, and the degree of the map, $d=\mathrm{deg}\,\varphi$, is the maximum of the degrees of the components. By iterating the map $n$ times one gets a sequence of rational functions whose degrees grow generically like $d^n$. At the $n$th step one can set $d_n=\mathrm{deg}\,\varphi^n$, and then the algebraic entropy $\mathcal{E}$ of the map is defined to be $ \mathcal{E}=\mathrm{lim}_{n\to\infty}\frac{1}{n}\log d_n. $ Generically, for a map of degree $d$, the entropy is $\log d>0$, but for special maps there can be cancellations in the rational functions that appear under iteration, which means that the entropy is smaller than expected. It is conjectured that Liouville-Arnold integrability corresponds to zero algebraic entropy. 
In an algebro-geometric setting, there are plausible arguments which indicate that zero entropy should be a necessary condition for integrability in the Liouville-Arnold sense \cite{bellon}. In the latter setting, each iteration of the map corresponds to a translation on an Abelian variety (the level set of the first integrals), and the degree is a logarithmic height function, which necessarily grows like $d_n\sim \mathrm{C}n^2$. Often the algebraic entropy of a map can only be guessed experimentally, by calculating the degree sequence $(d_n)$ up to some large $n$ and doing a numerical fit to a linear recurrence. This is increasingly impractical as the dimension increases, and provides no proof that the linear relation, with its corresponding entropy value, is correct. In dimension two, exact results are possible via intersection theory \cite{takenawa, viallet}. In the rest of this section we seek to isolate those recurrences with $\mathcal{E}=0$, by finding a condition on the exponents which should be necessary and sufficient for $\mathcal{E}>0$. The main conjecture is the following. \begin{conje} \label{entropyconj} For a birational map given by (\ref{bir}), corresponding to a recurrence of the form (\ref{crec}), the algebraic entropy is $\mathcal{E}=\log |\la_{max}|$, where, of all the roots of the two polynomials \beq\label{ppm} P_{\pm}(\la )=\la^N + 1 - \sum_{j=1}^{N-1}[ \pm b_{1,j+1}]_+\la^j , \eeq $\la_{max}$ is the one of largest magnitude. The entropy is positive if and only if \beq \label{maxe} \max \left(\, \sum_{j=1}^{N-1} [b_{1,j+1}]_+ \, , \, \sum_{j=1}^{N-1} [-b_{1,j+1}]_+ \, \right) \geq 3. \eeq \end{conje} We now give very strong evidence for the above assertion, showing how it rests on a sequence of other conjectures. For the recurrences (\ref{crec}), the key to calculating their entropy is the Laurent phenomenon, which leads to an exact recursion for the degrees of the denominators.
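Before turning to the Laurent property, the data entering Conjecture \ref{entropyconj} can be checked for the two running examples by exact polynomial arithmetic. The following Python sketch is illustrative only: it verifies that for Somos-4 both sums in (\ref{maxe}) equal 2 and the polynomials $P_\pm$ factor with all roots on the unit circle, while for the sixth-order recurrence (\ref{sixthorder}) the condition (\ref{maxe}) holds and $P_+$ has the quartic factor $\la^4-\la^3-2\la^2-\la+1$, whose largest root is located by bisection.

```python
def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists (lowest degree first)."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def p_pm(a, sign):
    """Coefficients of P_+/-(la) = la^N + 1 - sum_j [±b_{1,j+1}]_+ la^j."""
    N = len(a) + 1
    c = [0] * (N + 1)
    c[0], c[N] = 1, 1
    for j, aj in enumerate(a, start=1):
        c[j] -= max(sign * aj, 0)
    return c

# Somos-4: both sums in (maxe) equal 2 < 3, so zero entropy is predicted.
a4 = [-1, 2, -1]
assert max(sum(max(v, 0) for v in a4), sum(max(-v, 0) for v in a4)) == 2
# P_+ = (la^2 - 1)^2 and P_- = (la - 1)(la^3 - 1): every root has |la| = 1.
assert p_pm(a4, +1) == poly_mul([-1, 0, 1], [-1, 0, 1])
assert p_pm(a4, -1) == poly_mul([-1, 1], [-1, 0, 0, 1])

# Sixth-order example: the '+' sum is 8 >= 3, so positive entropy is predicted,
# and P_+ = (la^2 + la + 1)(la^4 - la^3 - 2 la^2 - la + 1).
a6 = [-2, 2, 4, 2, -2]
assert max(sum(max(v, 0) for v in a6), sum(max(-v, 0) for v in a6)) == 8
assert p_pm(a6, +1) == poly_mul([1, 1, 1], [1, -1, -2, -1, 1])

def horner(p, x):
    """Evaluate a polynomial (lowest degree first) at a float x."""
    v = 0.0
    for c in reversed(p):
        v = v * x + c
    return v

# Bisection for the largest root of the quartic factor on [2, 3].
lo, hi = 2.0, 3.0
for _ in range(60):
    mid = (lo + hi) / 2
    if horner([1, -1, -2, -1, 1], mid) > 0:
        hi = mid
    else:
        lo = mid
la_max = lo
assert 2.080 < la_max < 2.082
```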
The Laurent property for the associated cluster algebra \cite{fz} implies that the iterates have the factorized form $$x_n = \frac{\mathrm{N}_n({\bf x})} {\mathrm{M}_n({\bf x})}, \qquad \mathrm{with} \quad \mathrm{N}_n \in\Z [ {\bf x} ] = \Z [x_1,\ldots ,x_N], \quad \mathrm{M}_n =\prod_{j=1}^N x_j^{d^{(j)}_n}, $$ where the polynomials $\mathrm{N}_n$ are not divisible by $x_j$ for $1\leq j\leq N$, and $\mathrm{M}_n$ are Laurent monomials. A lower bound for the entropy is provided by the growth of degrees of denominators, and if the exponents $d_n^{(j)}$ are all positive (for large enough $n$) then the monomial $\mathrm{M}_n$ is the denominator of the Laurent polynomial $x_n$, and $\mathrm{N}_n$ is the numerator. In addition to being Laurent polynomials, the form of the exchange relations (\ref{exchange}) in a cluster algebra means that all the cluster variables are given by subtraction-free rational expressions in terms of any initial cluster ${\bf x}=(x_1,\ldots ,x_N)$. This implies that the dynamics of the $\mathrm{M}_n$ can be decoupled from the $\mathrm{N}_n$. \begin{propn} \label{troprec} For all $n$, the exponent $d^{(j)}_n$ of each variable in the Laurent monomial $\mathrm{M}_n$ satisfies the recurrence \beq \label{tropical} d_{n+N}+d_n = \max \left( \, \sum_{j=1}^{N-1} [b_{1,j+1}]_+\, d_{n+j}\, , \, \sum_{j=1}^{N-1} [-b_{1,j+1}]_+ \,d_{n+j}\, \right) , \eeq with the initial conditions $d_1=-1$, $d_2=\ldots =d_N=0$ (up to shifting the index). \end{propn} \begin{prf} Upon substituting $x_n = \mathrm{N}_n/\mathrm{M}_n$ into (\ref{crec}) and comparing monomial factors on both sides one has \beq \label{denoms} \mathrm{M}_{n+N}\mathrm{M}_n = \mathrm{lcm}\left( \, \prod_{j=1}^{N-1} \mathrm{M}_{n+j}^{[b_{1,j+1}]_+}\, , \, \prod_{j=1}^{N-1} \mathrm{M}_{n+j}^{[-b_{1,j+1}]_+}\, \right) , \eeq where $\mathrm{lcm}$ denotes the lowest common multiple. 
To be more precise, for any sequence of Laurent polynomials $(x_n)$ with positive or negative coefficients, the formula (\ref{denoms}) certainly holds provided that the two products on the right are not of equal degree in any of the variables $x_j$, for $1\leq j\leq N$. If it happens that $D:=\sum_{\ell =1}^{N-1} [b_{1,\ell+1}]_+\, d_{n+\ell}^{(j)} =\sum_{\ell=1}^{N-1} [-b_{1,\ell+1}]_+\, d_{n+\ell}^{(j)}$ for some $j$, then the coefficient of $x_j^{-D}$ in each of the two terms on the right hand side of (\ref{crec}) is a non-zero subtraction-free rational expression in the other variables $x_k$ with $1\leq k\leq N$, $k\neq j$, and the sum of these two coefficients cannot vanish. Hence no cancellations can occur between the two products of numerators $\mathrm{N}_n$ on the right, and the formula (\ref{denoms}) always holds. Taking the degree of each variable on the left and right hand sides of (\ref{denoms}) gives the same recurrence (\ref{tropical}) in each case. From the initial data $x_1,\ldots,x_N$ for (\ref{crec}) it is clear that $d_{1}^{(1)}=-1$ and $d_{n}^{(1)}=0$ for $n=2, \ldots , N$, and the degrees $d_{n}^{(j)}$ for $2\leq j \leq N$ have the same initial data but shifted along by an appropriate number of steps. \end{prf} \begin{rem} \label{zelevinsky}{\em The recurrence (\ref{tropical}) is the tropical (or ultradiscrete \cite{nobe}) analogue of the original nonlinear recurrence (\ref{crec}), in terms of the max-plus algebra. It is a special case of the recursion for the denominator vectors in a general cluster algebra, which is stated without justification as equation (7.7) in \cite{fziv}. }\end{rem} \begin{conje}\label{numers} Suppose that the sequence $d_n$ is not periodic and satisfies (\ref{tropical}) with the initial conditions as in Proposition \ref{troprec}. 
Then {\em (a)} $d_n>0$ for all $n>N$, and {\em (b)} the total degree of the numerators satisfies $\mathrm{deg}\,\mathrm{N}_n\sim \tilde{\mathrm{C}}\, d_n$ as $n\to\infty$, for some constant $\tilde{\mathrm{C}}>0$. \end{conje} \begin{rem} {\em Part (a) above follows from the first part of Conjecture 7.4 in \cite{fziv}. Part (b) implies that the growth of the denominators completely determines the algebraic entropy, since the numerators grow at the same rate; it should be a consequence of Proposition 6.1 in \cite{fziv} (graded homogeneity of cluster variables). } \end{rem} \bex[Tropical Somos-4] \label{ts4} {\em The tropical version of the Somos-4 recurrence is \beq\label{tsomos4} d_{n+4}+d_{n}=\max (d_{n+3}+d_{n+1},2d_{n+2}). \eeq With initial conditions $d_1=-1$ and $d_2=d_3=d_4=0$ this generates a sequence that begins $$ -1,0,0,0,1,1,2,3,3,5,6,7,9,10,12,14,15,18,20,22,25,27,30,33,\ldots , $$ which are the degrees (in each of the variables $x_1,x_2,x_3,x_4$) of the denominators of the Laurent polynomials generated by (\ref{somos4}). The preceding sequence has quadratic growth, $d_n\sim \mathrm{C}n^2$ as $n\to\infty$ (consistent with the growth of logarithmic height on an elliptic curve), so that the algebraic entropy is zero, and this can be proved by considering the combination \beq \label{ts4y} Y_n=d_{n+2}-2d_{n+1}+d_{n}, \eeq whose coefficients are the exponents in (\ref{s4yj}). The sequence of quantities $Y_n$ is generated by the tropical analogue of the map (\ref{s4map}), that is \beq \label{ts4map} \left(\begin{array}{c} Y_1 \\ Y_2 \end{array} \right) \longmapsto \left(\begin{array}{c} Y_2 \\ \left[ Y_2 \right]_+ -2Y_2-Y_1 \end{array} \right), \eeq and all of the orbits of the latter are periodic with period 8 (which is a special case of Nobe's results on tropical QRT maps \cite{nobe}). 
Thus, if $\mathcal{S}$ denotes the shift operator such that $\mathcal{S}f_n = f_{n+1}$ for any function of the index $n$, then applying the operator $\mathcal{S}^8-1$ to both sides of (\ref{ts4y}) implies that the sequence of degrees $d_n$ satisfies a linear relation of order 10, namely $ (\mathcal{S}^8-1)(d_{n+2}+d_{n}-2d_{n+1})=0. $ All of the roots $\la$ of the characteristic polynomial corresponding to this linear relation have modulus $|\la |=1$, and $\la =1$ is a triple root, which accounts for the growth rate of $d_n$; the fact that $d_n=O(n^2)$ also follows directly from (\ref{ts4y}), using $Y_n=O(1)$. } \eex \bex\label{trsix} {\em The tropical version of the recurrence (\ref{sixthorder}) in Example \ref{viallet} is \beq \label{tsixth} d_{n+6}+ d_n = \max (\, 2d_{n+5}+ 2d_{n+1}\, , \, 2d_{n+4}+4d_{n+3} + 2d_{n+2}\, ). \eeq With the initial conditions $d_1=-1$ and $d_2=\ldots =d_6=0$, this generates a degree sequence beginning $$ -1,0,0,0,0,0,1,2,4,8,18,38,79, 164,342,712,1482,3084,6417,13356, \ldots , $$ that grows exponentially, such that the entropy is $\mathcal{E}=\log\la_{max}$, where $\la_{max}\approx 2.08$ is the root of largest magnitude of the polynomial $\la^4-\la^3-2\la^2-\la+1$. To see this, note that the map (\ref{vialletmap}), in recurrence form, is $y_{n+2}\, y_n = y_{n+1} + y_{n+1}^{-1}$, so that its tropical analogue is \beq \label{tnonint} Y_{n+2} + Y_n = |Y_{n+1}| . \eeq From the tropical version of Lemma \ref{ablem}, an appropriate choice of basis for $\mathrm{im}\, B$ gives the reduction from (\ref{tsixth}) to (\ref{tnonint}), by setting $Y_n= d_{n+4} -d_{n+3}-2d_{n+2}-d_{n+1}+d_n$. It can be shown directly that all the orbits of (\ref{tnonint}) are periodic with period 9 \cite{period9}, and hence in this case the degrees $d_n$ satisfy a linear recurrence of order 13, that is $ (\mathcal{S}^9-1)Y_n = (\mathcal{S}^9-1)(\mathcal{S}^4-\mathcal{S}^3-2\mathcal{S}^2-\mathcal{S}+1)d_{n}=0.
$ From the periodicity of $Y_n$ it is clear that $d_{n+4}-d_{n+3}-2d_{n+2}-d_{n+1}+d_n=O(1)$, which implies that $d_n\sim \mathrm{C} \la_{max}^n$ for some $\mathrm{C}>0$. } \eex Observe that in the two examples above, the tropical recurrences (\ref{tsomos4}) and (\ref{tsixth}) both exhibit periodic behaviour, in the sense that the maximum on the right hand side is achieved periodically by the first or the second entry. If one writes ``$+$'' at the $n$th step in the case where $\sum_{j=2}^N [b_{1,j}]_+ \, d_{n+j-1} > \sum_{j=2}^N [-b_{1,j}]_+ \, d_{n+j-1}$, and ``$-$'' otherwise, then for the tropical Somos-4 recurrence (\ref{tsomos4}) one finds that the integer sequence in Example \ref{ts4} repeats the block pattern ``$+-+--+--$'' of length 8, while for (\ref{tsixth}) the repeating symbolic sequence is the block ``$++--+++--$'' of length 9. The block length is in accord with the period of the associated periodic maps for the variables $Y_n$ in each case; for other choices of initial data there are still repeating blocks of the appropriate length, but the patterns can be different. Based on a large amount of other numerical evidence, we are led to formulate the following. \begin{conje} \label{periodicity} For each recurrence (\ref{tropical}) there exists some $k\geq 1$ such that for all real solution sequences $(d_n)$, the associated symbolic sequence corresponding to {\em max} is eventually periodic with minimal period $k$. \end{conje} \begin{cor}\label{lintrop} For large enough $n$, every real solution of (\ref{tropical}) satisfies a linear recurrence relation of order $kN$ with constant coefficients. \end{cor} The above corollary results from the fact that, given Conjecture \ref{periodicity}, for large enough $n$ one can regard (\ref{tropical}) as being equivalent to the iteration of a linear recurrence relation whose coefficients vary with period $k$, and then the following result can be applied.
\begin{lem}\label{perlin} Suppose that a sequence $(s_n)$ satisfies a linear recurrence of order $\ell$ whose coefficients are periodic of period $k$, say $$ s_{n+\ell} = \sum_{r=0}^{\ell-1} c_{n}^{(r)}\, s_{n+r}, \qquad c_{n+k}^{(r)}=c_{n}^{(r)}. $$ Then the terms of the sequence also satisfy a linear recurrence of order $k\ell$ with constant coefficients, of the form \beq\label{colin} s_{n+k\ell} = \sum_{r=0}^{\ell-1} \tilde{c}_r\, s_{n+rk}. \eeq \end{lem} \begin{prf} This result should be well known in the literature on linear recurrences, but for completeness we sketch the proof. It suffices to consider the $(\ell+1)\times (\ell+1)$ matrix \beq\label{phi} \tilde{\Phi}_n = \left(\bear{cccc} s_n & s_{n+1} & \ldots & s_{n+\ell } \\ s_{n+k} & s_{n+k+1} & \ldots & s_{n+k+\ell} \\ \vdots & \vdots & \ddots & \vdots \\ s_{n+k\ell} & s_{n+k\ell+1} & \ldots & s_{n+(k+1)\ell} \eear\right) \eeq which has vanishing determinant, by virtue of the fact that the vector $(c_n^{(0)},\ldots, c_n^{(\ell-1)},-1)^T$ is in the right kernel. The recurrence with constant coefficients is obtained from the left kernel (the kernel of $\tilde{\Phi}_n^T$). \end{prf} In general, it is not easy to determine the period of the symbolic sequence, and Conjecture \ref{periodicity} seems challenging. The real orbits of the recurrences (\ref{tropical}) can be quite complicated. A further refinement of Conjecture \ref{periodicity} is possible. Let the matrices $$ {\bf M}_\pm = \left(\bear{cc} {\mathbf 0}^T & -1 \\ {\mathbf 1}_{N-1} & {\bf a}_\pm \eear\right) $$ be defined in terms of the palindromic vectors ${\bf a}_\pm$ of size $N-1$, which correspond to the exponents $[\pm a_j]_+=[\pm b_{1,j+1}]_+$ (where $ {\mathbf 0}^T$ is a zero row vector of size $N-1$, and ${\mathbf 1}_{N-1}$ is the corresponding identity matrix), so that their characteristic polynomials are $P_\pm$ as in (\ref{ppm}), respectively.
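Lemma \ref{perlin} is easy to confirm on a small example. The following Python sketch (illustrative only; the period-2 coefficient values are an arbitrary choice) checks the case $\ell=2$, $k=2$: the constants $\tilde{c}_1=\mathrm{tr}\,{\bf M}$ and $\tilde{c}_0=-\det{\bf M}$ come from the transfer matrix ${\bf M}$ over one full period, by Cayley-Hamilton.

```python
# A linear recurrence s_{n+2} = a_n s_{n+1} + b_n s_n whose coefficients
# have period k = 2 forces a constant-coefficient relation of order
# k*l = 4, taken in steps of k:  s_{n+4} = c1 * s_{n+2} + c0 * s_n.
# (The coefficient values below are an arbitrary illustration.)

a = [3, 1]   # a_n = a[n % 2]
b = [2, 5]   # b_n = b[n % 2]

s = [1, 1]
for n in range(40):
    s.append(a[n % 2] * s[n + 1] + b[n % 2] * s[n])

# Transfer matrix over one period: M = A_1 A_0 with A_i = [[a_i, b_i], [1, 0]];
# its trace and determinant do not depend on the starting parity.
c1 = a[0] * a[1] + b[0] + b[1]     # trace of M
c0 = -b[0] * b[1]                  # minus determinant of M
print(all(s[n + 4] == c1 * s[n + 2] + c0 * s[n] for n in range(len(s) - 4)))
```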
Also, let $$ {\mathbf \Pi} = \prod_{j=1,\ldots ,k}^{\rightarrow}{\bf M}_{\epsilon_j} = {\bf M}_{\epsilon_1} \ldots {\bf M}_{\epsilon_k} $$ be the path-ordered product corresponding to the symbolic sequence of length $k$ for a particular orbit of (\ref{tropical}), defined by an appropriate choice of $\epsilon_j=\pm$, and let $\rho ({\bf M}_\pm)$ and $\rho ({\mathbf \Pi})$ denote the corresponding spectral radii. \begin{conje}\label{spec} Let $k$ be the period of Conjecture \ref{periodicity}. There are two possibilities: {\em (1)} If $|\la_{max}|=\rho ({\bf M}_\pm)>\rho ({\bf M}_\mp)\geq 1$ then $k=1$, with repeated symbol $\epsilon_1=\pm$, respectively. {\em (2)} If $|\la_{max}|=\rho ({\bf M}_+)=\rho ({\bf M}_-)$ then $\rho ({\mathbf \Pi})=|\la_{max}|^k$ with $k\geq 1$. \end{conje} To see how Conjecture \ref{entropyconj} now follows from all the rest, consider the matrix $\tilde{\Phi}_n$ in the proof of Lemma \ref{perlin}, for the case $s_n=d_n$ and $\ell=N$. If $\Phi_n$ is the $N\times N$ submatrix such that det$\, \Phi_n$ is the upper left connected minor of size $N$ in (\ref{phi}), then a single iteration of (\ref{tropical}) gives $\Phi_{n+1}=\Phi_n {\bf M}_{\epsilon_1}$ for some choice of $\epsilon_1=\pm$. After $k$ iterations it follows from Conjecture \ref{periodicity} that $\Phi_{n+k}=\Phi_n {\bf \Pi}$, and if $n$ is large enough then $\Phi_{n+rk}=\Phi_n {\bf \Pi}^r$ for all $r$, for some block of symbols $\epsilon_j$ of length $k$ that is fixed up to a cyclic permutation (which depends on the choice of $n \bmod k$). By the Cayley-Hamilton theorem, if $\tilde{P}(\kappa ) = \kappa^N-\sum_{r=0}^{N-1}\tilde{c}_r\kappa^r$ is the characteristic polynomial of ${\bf \Pi}$ (which is independent of cyclic permutations of the ${\bf M}_{\epsilon_j}$), then $\tilde{P}(\mathcal{S}^k)\Phi_n = \Phi_n \tilde{P}({\bf \Pi})=0$, which shows that $\Phi_n$, and hence $d_n$, satisfies the recurrence (\ref{colin}) with $\ell =N$, for all $n$ large enough. 
Thus the characteristic roots $\la$ for the growth of $d_n$ satisfy $\tilde{P}(\la^k)=0$, which implies that $d_n \sim \mathrm{C}|\la_{max}|^n$ for $|\la_{max}|=\rho({\bf \Pi} )^{1/k}>1$, or $d_n$ has polynomial growth for $|\la_{max}|=1$. If $k=1$ holds, then in either case (1) or case (2) of Conjecture \ref{spec}, ${\bf \Pi}={\bf M}_+$ or ${\bf M}_-$, so $|\la_{max}|$ is given by one of the spectral radii of the matrices ${\bf M}_\pm$, whichever is the larger, and $\mathcal{E}=\log|\la_{max}|$. The condition in case (2) is required to ensure that the first part of the statement of Conjecture \ref{entropyconj} holds for $k>1$ as well. For the second statement in Conjecture \ref{entropyconj}, note that $P_\pm$ in (\ref{ppm}) are reciprocal polynomials (their coefficients are palindromic), so that $P_+(\la^{-1}) =\la^{-N}P_{+}(\la )$, and similarly for $P_-$. Let $$ S_{\pm} = \sum_{j=1}^{N-1} [\pm b_{1,j+1}]_+, $$ respectively. By the symmetry $B\to -B$ of the matrices (or equivalently, the freedom to replace the quiver $Q$ by its opposite), it can be assumed without loss of generality that $S_+$ is the larger of the two, so $S_{+}\geq S_{-}$, and take $S_+\geq 3$ so that condition (\ref{maxe}) holds. Now $P_+(0)=1$, and $P_+(1)= 2-S_+\leq -1$, so $P_+$ has a real root between $0$ and $1$. The reciprocal property implies that the reciprocal of a root is also a root, hence $P_+$ has a real root larger than $1$, implying that $|\la_{max}|>1$ and the entropy is positive. The cases for which (\ref{maxe}) does not hold are easily enumerated, and it can be checked directly that $|\la_{max}|=1$ in these cases. The main conclusion of this section is the following. \begin{thm}\label{zeroe} Suppose that Conjecture \ref{entropyconj} holds.
Then up to the symmetry $S_+\leftrightarrow S_-$, there are only four distinct choices of the pair $(S_+,S_-)$ for which the algebraic entropy is zero, corresponding to four different families of recurrences: \noindent {\em (i)} $(S_+,S_-)=(1,0)$: For even $N=2m$ only, the recurrence is \beq\label{per5} x_{n+2m}\, x_n = x_{n+m} +1. \eeq \noindent {\em (ii)} $(S_+,S_-)=(2,0)$: For each $N\geq 2$ and $1\leq q\leq\lfloor{N/2}\rfloor$, the recurrence is \beq \label{primrec} x_{n+N}\, x_n = x_{n+N-q}\, x_{n+q} + 1. \eeq \noindent {\em (iii)} $(S_+,S_-)=(2,1)$: For even $N=2m$ only, and $1\leq q\leq m-1$, the recurrence is \beq \label{comprec} x_{n+2m}\, x_n = x_{n+2m-q}\, x_{n+q} + x_{n+m}. \eeq \noindent {\em (iv)} $(S_+,S_-)=(2,2)$: For each $N\geq 2$ and $1\leq p <q\leq \lfloor{N/2}\rfloor$, the recurrence is \beq \label{somosN} x_{n+N}\, x_n = x_{n+N-p}\, x_{n+p} + x_{n+N-q}\, x_{n+q}. \eeq \end{thm} The simplest is case (i), where the recurrence (\ref{per5}) decouples into $m$ independent copies of the Lyness map: all the orbits are periodic, and the overall period of the sequence of $x_n$ is $5m$. For each $n$ the function $F_n=x_n+x_{n+m}+x_{n+2m}+x_{n+3m}+x_{n+4m}$ is invariant, and can be written as a function of $x_n$ and $x_{n+m}$ only, using (\ref{per5}). Moreover, the functions $F_1,F_2,\ldots,F_m$ are independent and Poisson commute with respect to the bracket corresponding to the symplectic form $\omega$, so trivially this system is also integrable in the Liouville-Arnold sense. The integrability of the families (ii),(iii) and (iv) above is discussed in subsequent sections. \section{Linearisable recurrences from primitives}\label{primit} \setcounter{equation}{0} The primitives introduced in \cite{fordy_marsh} are the simplest examples of cluster mutation-periodic quivers with period 1, and are the building blocks of all such quivers. 
The nonlinear recurrences that arise from the primitives have the form (\ref{primrec}), corresponding to case (ii) in Theorem \ref{zeroe}, and can be rewritten as \beq x_{n+N}\, x_n = x_{n+p}\, x_{n+q} + 1, \qquad p+q=N, \label{jkrec} \eeq so that for each $q=1,\ldots,\lfloor{N/2}\rfloor$ there is a different recurrence corresponding to the primitive $P_N^{(q)}$. When $p$ and $q$ are coprime, the associated quivers are mutation-equivalent to the affine Dynkin quivers $\tilde{A}_{q,p}$, corresponding to the $A_{N-1}^{(1)}$ Dynkin diagram with $q$ edges oriented clockwise and $p$ oriented anticlockwise, while if $\gcd (p,q)=k>1$ so that $p=k\hat{p}$, $q=k\hat{q}$, then the quiver is just the disjoint union of $k$ copies of $\tilde{A}_{\hat{q},\hat{p}}$. In the latter case, it is clear that the recurrence (\ref{jkrec}) is also equivalent to $k$ copies of the recurrence of order $N/k$ corresponding to the coprime integers $\hat{p},\hat{q}$. Hence it is sufficient to consider only the case where $p,q$ are coprime, which we will assume from now on. The cluster algebras generated by affine $A$-type Dynkin quivers arise from surfaces, and are of finite mutation type \cite{finitemutation}, meaning that only a finite number of distinct quivers is obtained under sequences of mutations (\ref{matmut}). However, by the classification result in \cite{fz1}, these cluster algebras are not themselves finite: the recurrence (\ref{jkrec}) generates infinitely many cluster variables starting from the initial cluster ${\bf x}=(x_1,\ldots ,x_N)$. It was conjectured recently that the cluster variables in cluster algebras obtained from affine Dynkin quivers satisfy linear recurrence relations, and this was proved for all but the exceptional types \cite{assem}. A different proof using cluster categories, valid for all affine Dynkin types, was subsequently found by Keller and Scherotzke \cite{keller_scher}. 
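The fact that the iterates are Laurent polynomials in the initial cluster is easy to observe numerically: starting from an initial cluster of 1s, every iterate of (\ref{jkrec}) is an integer. A minimal Python sketch (the choice $(p,q)=(2,3)$ is just an illustration):

```python
from fractions import Fraction

# The recurrence x_{n+N} x_n = x_{n+p} x_{n+q} + 1 with p + q = N.
# Starting from an initial cluster of 1s, every iterate is an integer,
# as guaranteed by the Laurent phenomenon for cluster algebras.
# (The choice p = 2, q = 3, N = 5 is purely illustrative.)

def iterate_jkrec(p, q, n_terms, x_init=None):
    N = p + q
    x = [Fraction(v) for v in (x_init or [1] * N)]
    while len(x) < n_terms:
        x.append((x[-N + p] * x[-N + q] + 1) / x[-N])
    return x

x = iterate_jkrec(2, 3, 30)
print(all(xi.denominator == 1 for xi in x))  # True: integrality from Laurentness
```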
In the case of $\tilde{A}_{q,p}$ (with coprime $p,q$) a proof of the corresponding linear recurrence relations was already given in \cite{fordy_marsh}. Here we present a more direct derivation of the linear recurrences arising from these affine $\tilde{A}_{q,p}$ quivers, before using the Poisson structures from section 2 to explain the Liouville integrability of the maps defined by (\ref{jkrec}). \subsection{Linear relations with periodic coefficients} The key to the properties of the nonlinear recurrence (\ref{jkrec}) is the fact that it can be written in the form \beq \label{frieze} \det \,\Psi_n = 1,\quad\mbox{where}\quad \Psi_n = \left( \bear{cc} x_n & x_{n+q} \\ x_{n+p} & x_{n+N} \eear\right). \eeq The identity (\ref{frieze}) is the frieze relation (see e.g. \cite{assem}); it implies that the iterates of (\ref{jkrec}) form an infinite frieze. Upon forming the matrix \beq\label{3by3} \tilde{\Psi}_n = \left( \bear{ccc} x_n & x_{n+q} & x_{n+2q} \\ x_{n+p} & x_{n+N} & x_{n+N+q} \\ x_{n+2p} & x_{n+N+p} & x_{n+2N} \eear\right), \eeq one can use the Dodgson condensation method \cite{1866-1} to expand the $3\times 3$ determinant in terms of its $2\times 2$ minors, as $$ \det\, \tilde{\Psi}_{n}=\frac{1}{x_{n+N}}\left( \det\Psi_n \, \det \Psi_{n+N}-\det\Psi_{n+q} \, \det \Psi_{n+p}\right)=0, $$ by (\ref{frieze}). By considering the right and left kernels of $\tilde{\Psi}_{n}$, we are led to the following result. \begin{lem} \label{JK} The iterates of the recurrence (\ref{jkrec}) satisfy the linear relations \beq\label{Jrec} x_{n+2q}-J_n\, x_{n+q} + x_n = 0, \eeq \beq\label{Krec} x_{n+2p}-K_n\, x_{n+p} + x_n = 0, \eeq whose coefficients are periodic functions of period $p,q$ respectively, that is $$ J_{n+p}=J_n, \qquad K_{n+q}=K_n, \qquad \mbox{for all } n. $$ \end{lem} \begin{prf} A vector in the (right) kernel of $\tilde{\Psi}_{n}$ can be written as ${\bf k}_n = (\tilde{J}_n, -J_n, 1)^T$, where the third entry is normalised to 1 without loss of generality.
From the first two rows of the equation $\tilde{\Psi}_{n}{\bf k}_n = 0$ we have the linear system $$ \Psi_n \, \left(\bear{c} -\tilde{J}_n \\ J_n \eear\right)=\left(\bear{c} x_{n+2q} \\ x_{n+N+q} \eear\right). $$ From Cramer's rule it follows that $$ \tilde{J}_n=-(\det\Psi_n)^{-1}\,\left| \bear{cc} x_{n+2q} & x_{n+q} \\ x_{n+N+q} & x_{n+N} \eear\right| =(\det\Psi_n)^{-1}\det\Psi_{n+q}=1, $$ and $$ J_n = \left| \bear{cc} x_n & x_{n+2q} \\ x_{n+p} & x_{n+N+q} \eear\right|, $$ where we made use of (\ref{frieze}). Hence the recurrence (\ref{Jrec}) is given by the first row of $\tilde{\Psi}_{n}{\bf k}_n = 0$. The second and third rows of the latter equation provide the $2\times 2$ linear system $$ \Psi_{n+p} \, \left(\bear{c} -\tilde{J}_n \\ J_n \eear\right)=\left(\bear{c} x_{n+N+q} \\ x_{n+2N} \eear\right), $$ whose solution gives an alternative formula for $J_n$. Noting that the coefficients in this linear system are the same as those in the first two rows of the equation $\tilde{\Psi}_{n+p}{\bf k}_{n+p} = 0$, it follows that $J_{n+p}=J_n$ for all $n$. If we consider the left kernel of $\tilde{\Psi}_{n}$ (the kernel of $\tilde{\Psi}_{n}^T$), then by symmetry we obtain the recurrence (\ref{Krec}), whose coefficient $K_n$ has period $q$. \end{prf} \br {\em In the particular case $q=1$, corresponding to $\tilde{A}_{1,N-1}$ Dynkin quivers, the coefficient $K_n$ has period 1, so $K_{n+1}=K_n=\mathcal{K}$ (constant) for all $n$. In that case the recurrence (\ref{Krec}) becomes \beq\label{k-pn1} x_{n+2N-2}+x_n=\mathcal{K} \, x_{n+N-1}, \eeq which is a {\em constant-coefficient linear difference equation} for $x_n$.
This was first shown in \cite{fordy_marsh} and is precisely of the form derived for affine $A$-type quivers in \cite{assem, keller_scher}.} \er The quantity $\mathcal{K}$ in (\ref{k-pn1}) can be considered as a Laurent polynomial of the initial data $x_1,x_2, \ldots, x_{N}$ for the map $\varphi$ associated with (\ref{jkrec}); from (\ref{k-pn1}) this is obtained explicitly by repeatedly using the nonlinear recurrence in order to write $\mathcal{K}$ in terms of lower order iterates until it is given in terms of the $N$ initial data. Moreover, the fact that it is independent of $n$ means that $\mathcal{K}$ is a first integral of $\varphi$. In the case of general coprime $p,q$, it is straightforward to obtain first integrals of the map $\varphi$ corresponding to (\ref{jkrec}). Indeed, the orbit of $J_1$ under the action of $\varphi$ generates $p$ functions $J_1, J_2, \ldots, J_p$, which can be written as rational functions of the $N$ initial data via back-substitution in the relation \beq\label{jdef} J_{n}=\frac{x_n+x_{n+2q}}{x_{n+q}}. \eeq Clearly any cyclically symmetric function of $J_1,J_2, \ldots, J_p$ is a first integral of $\varphi$ and the first $p$ elementary symmetric functions of these variables are independent functions of the $J_n$. Given that $J_1,J_2, \ldots, J_p$ are functionally independent as functions of $x_1,x_2,\ldots, x_N$, any $p$ independent cyclically symmetric functions of the $J_n$ can be picked as first integrals. Similarly, the action of $\varphi$ generates the functions $K_1, K_2, \ldots, K_q$, and any $q$ independent cyclically symmetric functions of them provide another set of independent first integrals of $\varphi$. This gives a total of $p+q=N$ first integrals. These cannot all be functionally independent, because that would imply that the map is periodic. However, the generic orbit generated by (\ref{jkrec}) is not periodic. 
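The periodicity in Lemma \ref{JK}, and hence the construction of these first integrals, can be checked by direct iteration; in the Python sketch below the initial data and the choice $(p,q)=(3,2)$ are arbitrary.

```python
from fractions import Fraction

# Orbit of x_{n+5} x_n = x_{n+3} x_{n+2} + 1  (p = 3, q = 2, N = 5),
# with generic rational initial data (chosen arbitrarily).
p, q = 3, 2
x = [Fraction(v) for v in (1, 2, 1, 3, 2)]
for n in range(40):
    x.append((x[n + p] * x[n + q] + 1) / x[n])

# J_n = (x_n + x_{n+2q}) / x_{n+q} should have period p,
# K_n = (x_n + x_{n+2p}) / x_{n+p} should have period q.
J = [(x[n] + x[n + 2 * q]) / x[n + q] for n in range(30)]
K = [(x[n] + x[n + 2 * p]) / x[n + p] for n in range(30)]
print(all(J[n + p] == J[n] for n in range(len(J) - p)))  # True
print(all(K[n + q] == K[n] for n in range(len(K) - q)))  # True
```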
Below we describe a single functional relation between these first integrals (equation (\ref{k2}) below), by considering the monodromy properties of the linear relations (\ref{Jrec}) and (\ref{Krec}). \br {\em The existence of $N-1$ independent first integrals ${\cal I}_j$, together with the volume form $\Omega$ in (\ref{vol}), means that the map $\varphi$ is Liouville integrable in a rather elementary sense. By applying a method in \cite{byrnes}, one can pick, say, the first $N-2$ integrals and obtain a Poisson bivector field $\hat{\cal J}$ which is invariant (or anti-invariant, for odd $N$), namely $$ \hat{\cal J} = \hat{\Omega}\lrcorner \dd {\cal I}_1\lrcorner \ldots \lrcorner \dd {\cal I}_{N-2}, $$ where the $N$-vector field $\hat{\Omega}$ is defined by $\hat{\Omega}\lrcorner \Omega =1$. By construction, this Poisson structure has Casimirs ${\cal I}_j$ for $j=1,\ldots , N-2$, and restricting to the two-dimensional symplectic leaves one has an integrable system with one degree of freedom, with ${\cal I}_{N-1}$ being the independent first integral. However, we shall see that the Poisson structure coming from the two-form (\ref{omega}) leads to more interesting integrable systems. } \er \subsection{Monodromy matrices and linear relations with constant coefficients}\label{2x2monodromy} The relation (\ref{Jrec}) implies that the matrix $\Psi_{n}$ satisfies \beq\label{psirel1} \Psi_{n+q}=\Psi_n\, {\bf L}_n, \qquad {\bf L}_n=\left(\bear{cc} 0 & -1 \\ 1 & J_n \eear\right). \eeq Upon taking the ordered product of the ${\bf L}_n$ over $p$ steps, shifting by $q$ each time, we have the monodromy matrix \beq\label{mdy1} {\bf M}_n:= {\bf L}_{n}{\bf L}_{n+q}\ldots{\bf L}_{n+(p-1)q}=\Psi_n^{-1}\, \Psi_{n+pq}. 
\eeq On the other hand, the recurrence (\ref{Krec}) yields \beq\label{psirel2} \Psi_{n+p}= \hat{\bf L}_n \,\Psi_n, \qquad \hat{\bf L}_n=\left(\bear{cc} 0 & 1 \\ -1 & K_n \eear\right), \eeq which gives another monodromy matrix \beq\label{mdy2} \hat{\bf M}_n:= \hat{\bf L}_{n+(q-1)p}\ldots\hat{\bf L}_{n+p}\hat{\bf L}_n=\Psi_{n+pq}\, \Psi_n^{-1}. \eeq The cyclic property of the trace implies that \beq\label{k2} \mathcal{K}_n:= \mathrm{tr}\, {\bf M}_n = \mathrm{tr}\, \hat{\bf M}_n. \eeq Also, since ${\bf L}_n$ has period $p$, shifting $n\to n+p$ in (\ref{mdy1}) and taking the trace implies that $\mathcal{K}_{n+p}=\mathcal{K}_n$. Similarly, from (\ref{mdy2}) we have $\mathcal{K}_{n+q}=\mathcal{K}_n$. Now the periods $p$ and $q$ are coprime, and since $\mathcal{K}_n$ has both these periods it follows that $\mathcal{K}_n=\mathcal{K}=$constant, for all $n$, hence $\mathcal{K}$ is an invariant of $\varphi$. From the expression (\ref{mdy1}) it further follows that $\mathcal{K}$ is a cyclically symmetric function of the $J_n$, $n=1,\ldots, p$, while from (\ref{mdy2}) it is also a cyclically symmetric function of the $K_n$, $n=1,\ldots, q$. Thus we see that the equality of the traces in (\ref{k2}) provides the aforementioned functional relation between these two sets of functions. We are now ready to show that in general the iterates of (\ref{jkrec}) satisfy a linear relation with constant coefficients. The existence of such a relation follows immediately from (\ref{Jrec}) or (\ref{Krec}), by applying Lemma \ref{perlin}, but the monodromy matrices provide more detailed information about the coefficients. \begin{thm}\label{Klin} The iterates of the nonlinear recurrence (\ref{jkrec}) satisfy the linear relation \beq\label{krec} x_{n+2pq}+x_n = \mathcal{K}\,x_{n+pq}, \eeq where $\mathcal{K}$ is the first integral defined by (\ref{k2}), with $\mathcal{K}_n=\mathcal{K}$ for all $n$. 
\end{thm} \begin{prf} Using (\ref{mdy1}) we see that $\Psi_{n+pq}=\Psi_n {\bf M}_n$, so $\Psi_{n+2pq}=\Psi_{n}{\bf M}_n{\bf M}_{n+pq}=\Psi_n {\bf M}_n^2$ by periodicity. Since ${\bf M}_n$ is a $2\times 2$ matrix, with $\det {\bf M}_n=1$ and $\mathrm{tr}\, {\bf M}_n={\cal K}$, the Cayley-Hamilton theorem yields $$ \Psi_{n+2pq}-{\cal K} \Psi_{n+pq}+\Psi_n = \Psi_n({\bf M}_n^2-{\cal K} {\bf M}_n+{\bf I})=0. $$ The $(1,1)$ component of this equation is just (\ref{krec}). \end{prf} By making use of the Chebyshev polynomials of the first and second kind, defined by $T_n(\upzeta ) = \cos (n\theta)$ and $U_n (\upzeta)=\sin ((n+1)\theta) /\sin\theta$, respectively, for $\upzeta=\cos\theta$, the linear equation (\ref{krec}) yields an exact expression for the iterates of the nonlinear recurrence. \begin{cor}[Chebyshev polynomials] \label{cheby} The recurrence (\ref{jkrec}) has the explicit solution \beq\label{chebform} x_{i + npq} = x_i\, T_n (\mathcal{K}/2) + \left(x_{i+pq}-\frac{x_i\mathcal{K}}{2}\right)\, U_{n-1}(\mathcal{K}/2), \qquad i=1,\ldots ,pq, \qquad \mbox{for all } n. \eeq \end{cor} \subsection{The structure of monodromy matrices} Here we give some relations between the elements of ${\bf M}_n$, which provide properties of a natural Poisson tensor associated with the functions $J_i$. Analogous results regarding $\hat{\bf M}_n$ and $K_i$ also hold, since the structure of ${\bf M}_n$ and $\hat{\bf M}_n$ is the same up to switching $p\leftrightarrow q$, $J_i\leftrightarrow K_i$ and taking the transpose. To simplify the presentation, we concentrate on the case of $P_{N}^{(1)}$. (Similar results hold for $P_{2m}^{(q)}$, with $p+q=2m$ and $p,q$ coprime.) The remarkable fact is that (when $N$ is even) these properties of the Poisson bracket are derived directly from the monodromy matrix. In subsection \ref{pn1-poisson} we proceed to show how the Poisson algebra of the $J_i$ is derived from the Poisson bracket between the coordinates $x_j$.
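Theorem \ref{Klin} is easy to confirm numerically. The Python sketch below takes $(p,q)=(3,2)$ and an initial cluster of 1s (an arbitrary illustrative choice), computes $\mathcal{K}$ as the trace of the monodromy matrix (\ref{mdy1}), and checks the linear relation (\ref{krec}).

```python
from fractions import Fraction

# Check of the relation x_{n+2pq} + x_n = K * x_{n+pq} for p = 3, q = 2,
# with K = tr(L_n L_{n+q} L_{n+2q}) built from J_n = (x_n + x_{n+2q})/x_{n+q}.
p, q = 3, 2
pq = p * q
x = [Fraction(1)] * (p + q)              # initial cluster of 1s
for n in range(60):
    x.append((x[n + p] * x[n + q] + 1) / x[n])

J = [(x[n] + x[n + 2 * q]) / x[n + q] for n in range(p)]  # period p

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

M = [[1, 0], [0, 1]]
for r in range(p):
    M = matmul(M, [[0, -1], [1, J[(r * q) % p]]])   # factor L_{rq}, J has period p
K = M[0][0] + M[1][1]                               # trace: the first integral K
print(K)                                            # 10 for this orbit
print(all(x[n + 2 * pq] + x[n] == K * x[n + pq]
          for n in range(len(x) - 2 * pq)))         # True
```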
For the case $q=1$, we denote the matrix ${\bf M}_n={\bf L}_n {\bf L}_{n+1}\cdots {\bf L}_{n+p-1}$ by ${\bf M}_n^{(2m)}$, when $p=2m-1, m\geq 1$, and ${\bf M}_n^{(2m+1)}$, when $p=2m, m\geq 1$. It is important in the calculations below that ${\bf M}_n^{(p+1)}$ depends only upon the variables $J_n,\dots ,J_{n+p-1}$. For the moment we do not assume any periodicity, so this is not yet truly a monodromy matrix. The calculations below give a recursive procedure for building the matrices ${\bf M}_n^{(2m)}$ and ${\bf M}_n^{(2m+1)}$. The recursion ${\bf M}_n^{(p+3)}={\bf M}_n^{(p+1)}{\bf L}_{n+p}{\bf L}_{n+p+1}$, with the short-hand notation $\RA=\RA_n^{(p+1)}$, $\tilde \RA=\RA_n^{(p+3)}$, etc., leads to \be\label{mtilde} \begin{array}{ll} \tilde \RA = J_{n+p} \RB-\RA,& \tilde \RB= (J_{n+p}J_{n+p+1}-1)\RB-J_{n+p+1}\RA,\\[3mm] \tilde \RC=J_{n+p}\RD-\RC, & \tilde \RD=(J_{n+p}J_{n+p+1}-1)\RD-J_{n+p+1}\RC, \end{array} \ee so that $\tilde {\cal K}=-{\cal K}+J_{n+p}\RB-J_{n+p+1}\RC+J_{n+p}J_{n+p+1}\RD$. \begin{lem}[Relations for $\tilde {\bf M}$] \label{brelns} The components of ${\bf M}_n^{(p+1)}$ satisfy the relations $$ \frac{\pa \RA_n^{(p+1)}}{\pa J_n}=\frac{\pa \RB_n^{(p+1)}}{\pa J_n}=0,\quad \RA_n^{(p+1)}= -\frac{\pa \RC_n^{(p+1)}}{\pa J_n}, \quad \RB_n^{(p+1)}=-\frac{\pa {\cal K}^{(p+1)}}{\pa J_n},\quad \RC_n^{(p+1)}=\frac{\pa {\cal K}^{(p+1)}}{\pa J_{n+p-1}}, $$ where ${\cal K}^{(p+1)}=\RA_n^{(p+1)}+\RD_n^{(p+1)}$. \end{lem} \begin{prf} We just prove this for the even case. For $m=1$, ${\bf M}_n^{(2)}={\bf L}_n$, which clearly satisfies these relations. The recursion (\ref{mtilde}) provides us with an inductive step. We have $$ \frac{\pa\tilde \RA}{\pa J_n} = J_{n+2m-1} \frac{\pa \RB}{\pa J_n}-\frac{\pa \RA}{\pa J_n}=0, \quad \frac{\pa\tilde \RB}{\pa J_n} = (J_{n+2m-1}J_{n+2m}-1) \frac{\pa \RB}{\pa J_n}-J_{n+2m}\frac{\pa \RA}{\pa J_n}=0.
$$ Then $$ \frac{\pa\tilde \RC}{\pa J_n} = J_{n+2m-1}\frac{\pa {\cal K}}{\pa J_n}-\frac{\pa \RC }{\pa J_n}=\RA-J_{n+2m-1} \RB=-\tilde \RA, $$ where we have used $\frac{\pa \RD}{\pa J_n}=\frac{\pa (\RA+\RD)}{\pa J_n}=\frac{\pa {\cal K}}{\pa J_n}$, and $\frac{\pa\tilde {\cal K}}{\pa J_{n+2m}}=-\RC+J_{n+2m-1}\RD=\tilde \RC$. \noindent Finally \bea \frac{\pa\tilde {\cal K}}{\pa J_n} &=& -\frac{\pa{\cal K}}{\pa J_n}+J_{n+2m-1}\frac{\pa \RB}{\pa J_n} -J_{n+2m}\frac{\pa \RC}{\pa J_n}+J_{n+2m-1}J_{n+2m}\frac{\pa {\cal K}}{\pa J_n} \nn\\ &=& (J_{n+2m-1}J_{n+2m}-1) \frac{\pa {\cal K}}{\pa J_n}+J_{n+2m} \RA= -(J_{n+2m-1}J_{n+2m}-1) \RB+J_{n+2m} \RA= -\tilde \RB, \nn \eea where again we have used $\frac{\pa \RD}{\pa J_n}=\frac{\pa {\cal K}}{\pa J_n}$. \end{prf} We can use the above relations to form {\em recursion operators} which can be used to build the functions ${\cal K}^{(p+1)}$. Starting with ${\cal K}^{(2)}=J_n$ and ${\cal K}^{(3)}=J_nJ_{n+1}-2$, we can use ${\cal K}^{(p+3)}={\cal R}^{(p)}{\cal K}^{(p+1)}$, where the {\em recursion operator} is \be\label{krecursion} {\cal R}^{(p)} = J_{n+p}J_{n+p+1}\frac{\pa^2}{\pa J_n\pa J_{n+p-1}}-J_{n+p}\frac{\pa}{\pa J_n}-J_{n+p+1}\frac{\pa}{\pa J_{n+p-1}} + (J_{n+p}J_{n+p+1}-1). \ee \br {\em An alternative formula for ${\cal K}$ is given by a link with the dressing chain: $$ \mathcal{K} = \prod_{j=1}^p \left( 1-\frac{\partial^2}{\partial J_j \partial J_{j+1}}\right) \, \prod_{k=1}^p J_k . $$ When $p$ is odd, this formula follows from the results in \cite{shabves}, by setting $\beta_i \to 0$ and $g_i \to J_i$. }\er \subsubsection{Link with the Poisson structure} In the case $N=2m$, we can use the monodromy matrix to build the Poisson bracket for the functions $J_n,\, K_n$. Staying within the context of one quiver $P_{2m}^{(1)}$ for fixed $m$, we now reinstate the periodicity ${\bf L}_{n+p}={\bf L}_n$. From (\ref{mdy1}), by periodicity, for all $k$ and $n$ we have $$ {\bf M}_{n+k}{\bf L}_{n+k}={\bf L}_{n+k}{\bf M}_{n+k+1}. 
$$ When $k=0$, this implies $$ \RA_{n+1}=\RD_n-J_n\RC_{n+1},\quad \RB_{n+1}+\RC_n=J_n(\RD_{n}-\RD_{n+1}), \quad \RC_{n+1}=-\RB_n,\quad \RD_{n+1}=\RA_n-J_n\RB_n . $$ Shifting $n$, we can write $\RC_n=-\RB_{n-1}=-\RB_{n+p-1}$. Only two of the remaining equations are independent, leading to $$ J_n\RB_n=\RA_n-\RD_{n+1},\quad \RB_{n+p-1}-\RB_{n+1}=J_n(\RD_{n+1}-\RD_n). $$ The equations for $k\neq 0$ are obtained by shifting the indices. The first equation leads to $$ \sum_{k=1}^{p-1}(-1)^{k+1}J_{n+k}\RB_{n+k}=\sum_{k=1}^{p-1}(-1)^{k+1}(\RA_{n+k}-\RD_{n+k+1})=\RA_{n+1}-\RA_{n+p}=\RA_{n+1}-\RA_n, $$ since the remaining sum consists of $$ \sum_{j=1}^{m-1}(\RA_{n+2j+1}+\RD_{n+2j+1})-\sum_{j=1}^{m-1}(\RA_{n+2j}+\RD_{n+2j})=(m-1){\cal K}-(m-1){\cal K}=0. $$ On the other hand $\RB_{n+p-1}-\RB_{n+1}=J_n(\RD_{n+1}-\RD_n)$, so $$ J_n\sum_{k=1}^{p-1}(-1)^{k+1}J_{n+k}\RB_{n+k}+\RB_{n+p-1}-\RB_{n+1}=J_n(\RA_{n+1}-\RA_{n+p}+\RD_{n+1}-\RD_n)=0. $$ If we take cyclic permutations, we obtain a matrix equation of the form $$ ({\bf P}^{(2)}+{\bf P}^{(0)}){\bf b}=0,\quad \mbox{with}\quad {\bf b} =(\RB_n,\RB_{n+1},\dots ,\RB_{n+p-1})^T= -\nabla {\cal K}, $$ where the components of the matrix ${\bf P}^{(2)}+{\bf P}^{(0)}$ are those of the Poisson tensor in Lemma \ref{jbra} below, and we have used the formula $\RB_n=-\frac{\pa {\cal K}}{\pa J_n}$ for all $n$, coming from Lemma \ref{brelns}. We can summarise these results in \begin{thm} \label{isomonodromy} The function $\cal K$, defined by (\ref{k2}) is the Casimir of the Poisson bracket of Lemma \ref{jbra}: $$ ({\bf P}^{(2)}+{\bf P}^{(0)})\nabla {\cal K}=0. $$ \end{thm} \subsection{Poisson brackets and Liouville integrability for $P_{2m}^{(1)}$}\label{pn1-poisson} It was proved in \cite{fordy_rec} that the linearisable maps coming from the primitives $P_N^{(1)}$ (the $\tilde{A}_{1,N-1}$ Dynkin quivers) are Liouville integrable when $N$ is even. 
We give the proof here, since it is the basis for understanding the integrability of the maps for the other $P_N^{(q)}$ quivers. The proof starts from the Poisson bracket for the cluster variables. The matrix $B$ in this case is nondegenerate, having the form \be \label{btau} B = \tau_N-\tau_N^T, \quad\mbox{with}\quad \tau_N= \sum_{r=1}^{N-1} {\bf E}_{r+1,r} -{\bf E}_{1,N}, \ee where ${\bf E}_{r,s}$ denotes an element of the standard basis for $gl(N)$. The ``skew rotation'' matrix $\tau_N$ plays an important role in the classification presented in \cite{fordy_marsh}. The matrix (\ref{btau}) for $N=4$ is obtained by setting $c=0$ in (\ref{s4gen}). By Theorem \ref{torusred}, the map is symplectic, and hence there is a nondegenerate Poisson bracket of the form (\ref{logcan}), with $C=B^{-1}$, up to scaling. In accordance with \cite{fordy_rec}, we take $$ C =\tau_N^T+(\tau_N^T)^3+\dots +(\tau_N^T)^{N-1}= \sum_{s=1}^{\frac{N}{2}}\sum_{r=1}^{N-2s+1} \left( {\bf E}_{r,r+2s-1} - {\bf E}_{r+2s-1,r}\right), $$ so that $CB=2\,I$, which gives \beq\label{xbra} \{ \, x_j , x_k \, \}=\left\{ \bear{ll} \mathrm{sgn}(k-j)\, x_j x_k, \qquad & k-j \quad \mathrm{odd}, \\[2mm] 0 & \mathrm{otherwise}, \eear \right. \eeq for $j,k = 1,\ldots , N$, with $\mathrm{sgn}$ denoting the sign function. The key to the Liouville integrability of the $P_N^{(1)}$ maps is the expression of the Poisson bracket between the periodic functions $J_n$, which appeared in \cite{honelaur} (for $N=4$) and \cite{fordy_rec} (for general even $N=2m$). \begin{lem} \label{jbra} For even $N=p+1$ and $q=1$, the functions $J_n$ given by (\ref{jdef}) define a Poisson subalgebra of codimension one in the algebra (\ref{xbra}), with brackets \beq\label{jbrac} \{ \, J_j, J_k\, \} = 2\, \mathrm{sgn}(k-j)\, (-1)^{j+k+1}J_jJ_k +2(\delta_{j,k+1}-\delta_{j+1,k}+\delta_{j+p-1,k}-\delta_{j,k+p-1}), \eeq for $j,k=1,\ldots, p$. 
\end{lem} The bracket for the $J_n$ is clearly a sum $\{\, ,\,\}=\{\, ,\,\}_2 + \{\, ,\,\}_0$, corresponding to the splitting of the Poisson tensor into homogeneous parts ${\bf P}^{(2)}+{\bf P}^{(0)}$, as in Theorem \ref{isomonodromy}. The quadratic bracket $\{\, ,\,\}_2$ is log-canonical, while the degree zero bracket can be defined simply by specifying its only non-zero terms as \beq\label{dzero} \{ \, J_{j+1},J_j \, \}_0 = 2 = - \{ \, J_j,J_{j+1} \, \}_0 \eeq for all $j$, with indices taken modulo $p=2m-1$. Since (\ref{jbrac}) is still a Poisson bracket after scaling $J_j\to\mu J_j $ for arbitrary $\mu$, the brackets $\{ ,\}_0$ and $\{ ,\}_2$ are compatible, so define a bi-Hamiltonian structure. This means that one can use the standard bi-Hamiltonian chain \cite{magri}, defining a sequence of functions $\mathcal{I}_j$ which satisfy $$ \{ \mathcal{I}_j, \mathcal{I}_k \}_0 = 0 = \{ \mathcal{I}_j, \mathcal{I}_k \}_2, \qquad \mbox{for all}\quad j,k, $$ where the sequence starts from $\mathcal{I}_0 = \sum_j J_j$, the Casimir of the bracket $\{ ,\}_0$ given by (\ref{dzero}), and finishes with $\mathcal{I}_{m-1}=\prod_j J_j$, the Casimir of $\{ ,\}_2$. By Theorem \ref{isomonodromy}, the function $\mathcal{K}$ defined by (\ref{k2}) is the generating function for these integrals, so that \beq\label{kgen} \mathcal{K} = \sum_{j=0}^{m-1}(-1)^{m+j+1}\mathcal{I}_j, \eeq where $\mathcal{I}_j$ is the term of degree $2j+1$ in the variables $J_i$. Since these integrals commute with respect to both brackets, they commute with respect to the sum $\{\, ,\,\}_2 + \{\, ,\,\}_0$, and hence provide $m$ commuting integrals for the map $\varphi$ of the variables $x_j$ in even dimension $N$, which implies that the map is Liouville integrable.
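These statements admit a simple numerical spot-check. For $p=3$ ($N=4$, $m=2$), (\ref{kgen}) gives $\mathcal{K}=J_1J_2J_3-(J_1+J_2+J_3)$, and the Python sketch below verifies that the tensor built from the bracket (\ref{jbrac}) annihilates $\nabla\mathcal{K}$, as in Theorem \ref{isomonodromy}; the test point is arbitrary.

```python
# Check that for p = 3 (N = 4, q = 1) the function
# K = J1*J2*J3 - (J1 + J2 + J3) satisfies (P2 + P0) grad(K) = 0,
# with the Poisson tensor entries given by the bracket for the J's.
p = 3

def sgn(x):
    return (x > 0) - (x < 0)

def d(i, j):  # Kronecker delta
    return 1 if i == j else 0

def poisson_matrix(J):
    # entries {J_j, J_k} (1-based indices j, k)
    P = [[0] * p for _ in range(p)]
    for j in range(1, p + 1):
        for k in range(1, p + 1):
            P[j - 1][k - 1] = (2 * sgn(k - j) * (-1) ** (j + k + 1) * J[j - 1] * J[k - 1]
                               + 2 * (d(j, k + 1) - d(j + 1, k)
                                      + d(j + p - 1, k) - d(j, k + p - 1)))
    return P

J = [2, 5, 7]                            # arbitrary integer test point
# gradient of K = J1 J2 J3 - (J1 + J2 + J3)
grad = [J[1] * J[2] - 1, J[0] * J[2] - 1, J[0] * J[1] - 1]
P = poisson_matrix(J)
rows = [sum(P[i][k] * grad[k] for k in range(p)) for i in range(p)]
print(rows)   # [0, 0, 0]: K is a Casimir of this bracket
```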
In summary, we have \begin{thm} For $N=2m$ and $q=1$ the map $\varphi$ defined by (\ref{jkrec}) has $m$ functionally independent Poisson commuting integrals, given by the terms of each odd homogeneous degree in the quantity ${\cal K}$, as given by equation (\ref{kgen}). The map is also superintegrable, having a total of $N-1$ independent first integrals. \end{thm} As discussed in subsection \ref{2x2monodromy}, extra first integrals are obtained by choosing $m-1$ additional cyclically symmetric functions of $J_1,\ldots , J_p$. \subsection{Primitives of the form $P_{2m+1}^{(1)}$} The recurrences (\ref{jkrec}) for $p=2m$ and $q=1$ are given by \beq\label{oddprim} x_{n+2m+1}\, x_n = x_{n+2m}\, x_{n+1} + 1. \eeq The formula (\ref{btau}) still holds, but now the matrix $B$ is singular. It has a one-dimensional kernel, spanned by the vector ${\bf u}=(1,-1,\dots ,1,-1,1)^T$, which generates the scaling symmetry \beq\label{lascala} x_j \rightarrow \la^{(-1)^{j+1}}\, x_j, \qquad \la \in \C^*, \eeq and $\mathrm{im}\, B$ is spanned by \beq \label{oddprimvj} {\bf v}_j = {\bf e}_j +{\bf e}_{j+1}, \qquad j=1,\ldots , 2m, \eeq where ${\bf e}_j$ is the $j$th standard basis vector. Hence, by Lemma \ref{symp}, the coordinates \beq\label{oddprimy} y_j = x_j\, x_{j+1}, \qquad j=1,\ldots , 2m, \eeq are invariant under the scaling (\ref{lascala}), and the degenerate form (\ref{omega}) pushes forward to a symplectic form (\ref{yform}) in dimension $2m$, whose coefficients $\hat{b}_{jk}$ are the matrix elements of $$ \hat{B} = \tau_{2m}-\tau_{2m}^2+\tau_{2m}^3-\dots +\tau_{2m}^{2m-1}, $$ where $\tau_{2m}$ is the $2m\times 2m$ version of $\tau_N$. The inverse of this is the skew-symmetric matrix $$ \hat{B}^{-1}=\tau_{2m}^T+(\tau_{2m}^T)^2+(\tau_{2m}^T)^3+\dots +(\tau_{2m}^T)^{2m-1}, $$ with all components above the diagonal equal to $1$, giving a nondegenerate Poisson bracket for the $y_j$, i.e. \beq \label{ybra} \{ \, y_j , y_k\,\} = \mathrm{sgn}(k-j)\, y_j y_k , \qquad 1\leq j,k\leq 2m. 
\eeq Upon applying the rest of Theorem \ref{torusred}, we see that (\ref{oddprim}) induces a symplectic map on the variables $y_i$, given (for $m\geq 2$) by \be \label{ymap} \hat\varphi: \, (y_1,y_2,\ldots , y_{2m-1},y_{2m}) \mapsto (y_2,y_3,\ldots , y_{2m},y_{2m+1}), \ee where $$ y_{2m+1} = y_2y_4\cdots y_{2m} (y_2y_4\cdots y_{2m}+y_3\cdots y_{2m-1})/(y_1y_3^2\cdots y_{2m-1}^2) . $$ The map is simpler for the case $m=1$, given in Example \ref{ymap-p31} below. By the general discussion above, the iterates $x_n$ satisfy the linear relation (\ref{k-pn1}), where ${\cal K}$ is the trace of the monodromy matrix ${\bf M}_n$. The latter is given in terms of the quantities \beq \label{oddjdef} J_n = \frac{x_n + x_{n+2}}{x_{n+1}}, \qquad n = 1, \ldots, 2m, \eeq which cycle with period $2m$ under the action of the recurrence (\ref{oddprim}). The polynomial ${\cal K}$ can be expanded as \beq\label{koddgen} \mathcal{K} = \sum_{j=0}^{m}(-1)^{m+j}\mathcal{I}_j, \qquad \mathrm{where} \qquad \mathcal{I}_0 = 2, \eeq and each polynomial $\mathcal{I}_j$ is homogeneous of degree $2j$ in the variables $J_n$. The non-trivial homogeneous components $\mathcal{I}_1, \ldots , \mathcal{I}_{m}$ provide $m$ first integrals for (\ref{oddprim}), and an additional $m$ independent first integrals can be obtained by choosing cyclically symmetric functions of $J_n$ for each odd degree $1,3,\ldots ,2m-1$. However, not all of these first integrals reduce to functions on the $2m$-dimensional symplectic manifold with coordinates $y_j$. Note that the scaling symmetry (\ref{lascala}) acts on the variables (\ref{oddjdef}) according to \beq\label{jscale} J_n\longrightarrow \lambda^{2(-1)^{n+1}}\, J_n. \eeq Applying this to the formula (\ref{krecursion}) shows that each component of ${\cal K}$ is invariant under scaling, which means that the $m$ first integrals $\mathcal{I}_1, \ldots , \mathcal{I}_{m}$ can be rewritten as functions of the scale-invariant variables $y_j$. 
The Liouville integrability of the map (\ref{ymap}) then follows, provided that these $m$ functions are in involution with respect to the bracket (\ref{ybra}). We shall not pursue the case of general $m$ further here, but content ourselves with presenting some low-dimensional examples. \bex[The primitive $P_3^{(1)}$] \label{ymap-p31} {\em For $m=1$ the map $\varphi$ for the $x_j$ variables is (\ref{affA2}), which is associated with $P_3^{(1)}$, the affine $A_2^{(1)}$ Dynkin quiver. The matrix $B$ has rank two, so in terms of the variables $y_j$ given by (\ref{oddprimy}) the symplectic form has the log-canonical form (\ref{canon}), and the induced map of the plane is $$ \hat\varphi: \qquad \left(\bear{c} y_1 \\ y_2 \eear\right) \mapsto \left(\bear{c} y_2 \\ y_2(y_2+1)/y_1 \eear\right). $$ Symmetric functions of the period 2 quantities $$ J_1=\frac{x_1+x_3}{x_2},\;\; J_2=\frac{x_1x_2+x_2x_3+1}{x_1x_3} $$ give two first integrals for the map (\ref{affA2}), namely $$ J_1+J_2 \qquad \mathrm{and} \qquad {\cal I}_1= J_1J_2 ={\cal K}+2=\frac{(y_1+y_2)(y_1+y_2+1)}{y_1y_2}. $$ The latter is defined on the $(y_1,y_2)$ plane, and $\hat\varphi^*{\cal I}_1={\cal I}_1$, so the symplectic map $\hat\varphi$ with one degree of freedom has an invariant function and hence is integrable. }\eex \bex[The primitive $P_5^{(1)}$] \label{ymap-p51} {\em For $m=2$ the recurrence (\ref{oddprim}) has the first integrals ${\cal I}_1$ and ${\cal I}_2$ given by the homogeneous terms of degree 2 and 4, respectively, in the expression (\ref{koddgen}): \beq\label{kp51} {\cal K} = 2-(J_1J_2+J_2J_3+J_3J_4+J_4J_1)+ J_1J_2J_3J_4={\cal I}_0 - {\cal I}_1+{\cal I}_2, \eeq where the $J_n$ are defined by (\ref{oddjdef}). Picking another pair of cyclically symmetric functions of $J_n$, of degrees 1 and 3, say, adds two more independent first integrals. 
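Returning briefly to Example \ref{ymap-p31}, the invariance of ${\cal I}_1$ under the planar map $\hat\varphi$ is easily confirmed in exact arithmetic; the short Python sketch below (illustrative only, with our own variable names) iterates the map from a sample rational point.

```python
from fractions import Fraction

# The planar map from Example ymap-p31: (y1, y2) -> (y2, y2(y2+1)/y1)
def step(y1, y2):
    return y2, y2 * (y2 + 1) / y1

# The first integral I1 = (y1 + y2)(y1 + y2 + 1)/(y1 y2)
def I1(y1, y2):
    return (y1 + y2) * (y1 + y2 + 1) / (y1 * y2)

y1, y2 = Fraction(1), Fraction(2)
c = I1(y1, y2)
for _ in range(12):
    y1, y2 = step(y1, y2)
    assert I1(y1, y2) == c   # I1 is preserved along the orbit
print(c)   # -> 6
```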
Now defining $y_j$ for $j=1,2,3,4$ by (\ref{oddprimy}), these variables are endowed with the nondegenerate Poisson bracket (\ref{ybra}), which is invariant under the map (\ref{ymap}): $$ \hat\varphi: \quad (y_1,y_2,y_3,y_4)\mapsto \Big(y_2,y_3,y_4 , y_2y_4(y_2y_4+y_3)/(y_1y_3^2)\Big). $$ To show that this map is Liouville integrable, it is necessary to verify that $\{ {\cal I}_1 , {\cal I}_2\} =0$ with respect to this bracket. The terms in the formula (\ref{kp51}) can all be expressed via the functions \beq \label{wi} w_i=J_i\, J_{i+1}, \eeq so that $ {\cal I}_1=w_1+w_2+w_3+w_4$, $ {\cal I}_2=w_1w_3$. From (\ref{jscale}) it is clear that these $w_i$ are invariant under the action of the scaling symmetry (\ref{lascala}). This means that they can be written as functions of $y_j$, viz: $$ w_1=\frac{(y_1+y_2)(y_2+y_3)}{y_2^2},\quad w_2=\frac{(y_2+y_3)(y_3+y_4)}{y_3^2}, \quad w_3=\frac{(y_3+y_4)(y_2y_3+y_1y_3^2+y_2^2y_4)}{y_1y_3^2y_4}. $$ Under the action of $\hat\varphi$, since the $J_n$ cycle with period 4 under $\varphi$, the $w_i$ transform as $$ \hat\varphi^* w_1=w_2,\;\; \hat\varphi^*w_2=w_3,\;\; \hat\varphi^*w_3=w_4=\frac{w_1w_3}{w_2},\;\; \hat\varphi^*w_4=w_1. $$ Although only the first three are independent, it is convenient to make use of $w_4$ as well. The first three $w_i$ form a three-dimensional Poisson subalgebra of the $y_j$, which is non-polynomial: $$ \{w_1,w_2\}=w_1w_2-w_1-w_2,\quad \{w_1,w_3\}=w_2-\frac{w_1w_3}{w_2}, \quad \{w_2,w_3\}=w_2w_3-w_2-w_3. $$ The Casimir of this algebra is $ {\cal I}_1-{\cal I}_2=w_1+w_2+w_3+\frac{w_1w_3}{w_2}-w_1w_3 =2-{\cal K} . $ Since ${\cal I}_1$ and ${\cal I}_2$ are both functions defined on this subalgebra, it follows that $\{ {\cal I}_1,{\cal I}_2\} = \{{\cal I}_1,{\cal K}\} =0$, so the two first integrals are in involution, as required. 
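The cycling of the $w_i$ under $\hat\varphi$, and the invariance of ${\cal I}_1$ and ${\cal I}_2$, can also be checked numerically. The sketch below (not part of the proof; it assumes the formulae for $w_1,w_2,w_3$ in terms of the $y_j$ given above, with $w_4=w_1w_3/w_2$) uses exact rationals.

```python
from fractions import Fraction

# The 4D map for P_5^(1): (y1,..,y4) -> (y2, y3, y4, y2 y4 (y2 y4 + y3)/(y1 y3^2))
def phi(y):
    y1, y2, y3, y4 = y
    return (y2, y3, y4, y2 * y4 * (y2 * y4 + y3) / (y1 * y3**2))

# The scale-invariant quantities w_i = J_i J_{i+1}, written in the y variables
def ws(y):
    y1, y2, y3, y4 = y
    w1 = (y1 + y2) * (y2 + y3) / y2**2
    w2 = (y2 + y3) * (y3 + y4) / y3**2
    w3 = (y3 + y4) * (y2*y3 + y1*y3**2 + y2**2*y4) / (y1 * y3**2 * y4)
    return (w1, w2, w3, w1 * w3 / w2)

y = tuple(Fraction(v) for v in (1, 2, 1, 3))
w = ws(y)
# under the map the w_i cycle: (w1, w2, w3, w4) -> (w2, w3, w4, w1)
assert ws(phi(y)) == (w[1], w[2], w[3], w[0])

I1, I2 = sum(w), w[0] * w[2]
for _ in range(10):
    y = phi(y)
    w = ws(y)
    assert sum(w) == I1 and w[0] * w[2] == I2   # both integrals preserved
```

At the chosen point one finds $w=(9/4,12,20,15/4)$, so ${\cal I}_1=38$ and ${\cal I}_2=45$.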
}\eex \br {\em The Poisson bracket of the four functions $w_i$ can be calculated in polynomial form as $$ \{w_1,w_2\}=w_1w_2-w_1-w_2,\quad \{w_1,w_3\}=w_2-w_4,\quad \{w_1,w_4\}=w_1+w_4-w_1w_4, $$ with the remaining brackets following from the cyclic property. This bracket has the two Casimirs $$ {\cal C}_1=w_1w_3-w_2w_4\quad\mbox{and}\quad {\cal C}_2=w_1+w_2+w_3+w_4-w_1w_3, $$ so that the three-dimensional algebra for $w_1,w_2,w_3$ arises from the constraint ${\cal C}_1=0$. } \er \subsection{Primitives $P_{N}^{(q)}$ with $q>1$} For general $q$, to obtain the quiver $P_{N}^{(q)}$ we must modify the formula (\ref{btau}) and write \be \label{btauk} B = \tau_N^q-(\tau_N^q)^T. \ee In this case, we have to take into account both sets of functions $J_n,\, n=1,\dots ,p$ and $K_n,\, n=1,\dots ,q$, defined through (\ref{Jrec}) and (\ref{Krec}) respectively, with the function $\cal K$ being given by the two formulae of (\ref{k2}). It turns out that the essential properties of the quantities $J_n$ with period $p$ can be obtained by considering the $J_n$ in the case of $P_{p+1}^{(1)}$ (the $\tilde{A}_{1,p}$ quiver), and applying a suitable permutation of indices; and similarly the properties of the $K_n$ are the same as those of the $J_n$ for $P_{q+1}^{(1)}$, up to a permutation of indices. Concentrating for the moment on the functions $J_n$, we consider the formula (\ref{mdy1}) for ${\bf M}_n$. Each of the matrices ${\bf L}_{n+\ell q}$ (after using the cyclic property $J_{n+p}=J_n$) is just one of the matrices ${\bf L}_n,\, n=1,\dots ,p$. For coprime $p,q$, each of the $J_n$ appears exactly once in this product and in a specific order, which defines a permutation $\sigma$ of the integers $(1,\dots ,p)$. For this discussion, for a general pair $p,q$ (with $q<p$) let us use ${\cal K}_{p,q}$ to mean tr$\,\,{\bf M}_n$, considered as a function of the $J_n$. Then we have \be\label{kpq} {\cal K} = {\cal K}_{p,q}(J_1,\dots ,J_p)= {\cal K}_{p,1}(J_{\sigma(1)},\dots ,J_{\sigma(p)}). 
\ee Similarly, in terms of the functions $K_n$, the formula (\ref{mdy2}) for $\hat{\bf M}_n$ defines a permutation $\hat\sigma$ of $(1,\dots ,q)$, and if we write ${\cal K}_{q,p}$ to denote tr$\,\hat{\bf M}_n$, then we have \beq\label{kpqhat} {\cal K} = {\cal K}_{q,p}(K_1,\dots ,K_q) = {\cal K}_{q,1}(K_{\hat{\sigma}(1)},\dots ,K_{\hat{\sigma}(q)}). \eeq (There is no risk of confusion between ${\cal K}_{q,p}$ and ${\cal K}_{p,q}$ once we have fixed $q<p$.) For more detailed properties of the maps defined by (\ref{jkrec}), and the associated quantities $J_n$ and $K_n$, it is necessary to consider even/odd $N$ separately. \subsubsection{The even case} In the case that $N$ is even, when $q$ and $p$ are coprime it can be shown by elementary row/column operations on the matrix (\ref{btauk}) that $\det \, B =4$. The matrix $B$ is of Toeplitz type, and invertible, with an inverse of the same type that defines a nondegenerate Poisson structure of the form (\ref{logcan}). With the choice of scale $C=2B^{-1}$, the Poisson bracket for the $x_j$ is given explicitly by \beq\label{xqbra} \{ \, x_j , x_k \, \} =\left\{ \bear{ll} \mathrm{sgn}(k-j)\, (-1)^{r+s}x_j x_k, \qquad & k-j \quad \mathrm{odd}, \\[2mm] 0 & \mathrm{otherwise}, \eear \right. \eeq where (by the Euclidean algorithm) the integers $r,s$ are uniquely determined by writing $$ \frac{1}{2}\Big(p-|k-j|\Big) = sm -r\ell, \qquad \mathrm{for}\quad 0\leq r <m, \quad 0\leq s\leq \ell, $$ in terms of the coprime integers $\ell = (p-q)/2$, $m=(p+q)/2$. (Note that $p$ and $q$ are both odd.) Consider the functions $J_n$ once again, and for fixed $n$ define the following sequences: $$ X_j = x_{n+(j-1)q}, \quad j=1,2, \ldots ; \qquad {J}^\dagger_k = J_{n+(k-1)q}=\frac{X_k+X_{k+2}}{X_{k+1}}, \quad k=1,2,\ldots . 
$$ Note that the sequence of ${J}^\dagger_k$ is also periodic with the same period: ${J}^\dagger_{k+p}={J}^\dagger_k$; in fact (up to the choice of $n$, which just gives an overall cyclic permutation) the ordering of this sequence corresponds to the permutation $\sigma$ in (\ref{kpq}), i.e. ${J}^\dagger_k=J_{\sigma (k)}$. Using the Poisson bracket (\ref{xqbra}), we compute $\{X_1,X_2\} = \{x_n,x_{n+q}\} =X_1X_2$, $\{X_1,X_3\}= \{x_n,x_{n+2q}\} = 0$, and so on, and then in each case we find that, for a suitable range of indices, the $X_j$ satisfy the same Poisson algebra (\ref{xbra}) as the $x_j$ in the $P_{p+1}^{(1)}$ case, so that the bracket is $$ \{ \, X_j , X_k \, \}=\left\{ \bear{ll} \mathrm{sgn}(k-j)\, X_j X_k, \qquad & k-j \quad \mathrm{odd}, \\[2mm] 0 & \mathrm{otherwise}, \qquad \qquad \mathrm{for} \quad | j-k|\leq p . \eear \right. $$ To verify the analogue of Lemma \ref{jbra}, it is sufficient to calculate the brackets $\{J^\dagger_1,J^\dagger_j\}$ for $j=2,\ldots ,(p+1)/2$ and then use the periodicity, which shows that the permuted quantities ${J}^\dagger_k$ satisfy $$ \{ \, J_j^\dagger, J_k^\dagger\, \} = 2\, \mathrm{sgn}(k-j)\, (-1)^{j+k+1} J_j^\dagger J_k^\dagger +2(\delta_{j,k+1}-\delta_{j+1,k}+\delta_{j+p-1,k}-\delta_{j,k+p-1}), $$ for $j,k=1,\ldots, p$. This is the same as the Poisson algebra (\ref{jbrac}) for the $J_n$ in the case of $P_{p+1}^{(1)}$. An identical argument implies that, up to the permutation $\hat\sigma$, the $q$ functions $K_n$ satisfy the algebra (\ref{jbrac}) corresponding to $P_{q+1}^{(1)}$. \begin{rem} {\em Theorem \ref{isomonodromy}, together with (\ref{k2}), implies that the same function ${\cal K}$ is simultaneously the Casimir of the $J_n$ subalgebra and of the $K_n$ subalgebra. 
} \end{rem} By further direct calculation using the bracket (\ref{xqbra}) one can verify that \beq\label{commjk} \{ J_i, K_i\} = 0 \qquad \forall\, i \qquad\implies \qquad \{ J_i, K_j\} = 0 \qquad \forall\, i,j, \eeq where the second statement follows from the first by repeatedly shifting $i\rightarrow i+1$, and using periodicity and the coprimality of $p$ and $q$. Thus, from (\ref{commjk}), we see that these two subalgebras Poisson commute with each other. The $J_n$ subalgebra provides $(p+1)/2$ commuting integrals, which are found by applying the permutation $\sigma$ to each homogeneous component of the sum (\ref{kgen}) for the case of the primitive $P_{p+1}^{(1)}$. Similarly, by applying the permutation $\hat\sigma$ one obtains $(q+1)/2$ commuting integrals for the $K_n$ subalgebra. These two sets of integrals also commute with each other, and the relation (\ref{k2}) provides a single constraint, which gives a total of $(p+1)/2+(q+1)/2-1=N/2$ independent commuting integrals, as required for Liouville integrability. \bex[The primitive $P_8^{(3)}$] \label{map-p83} {\em The matrix $C=2B^{-1}$ is Toeplitz, with top row $$ (c_{1,j}) = ( 0 ,-1, 0 , 1 , 0 , 1 , 0 ,- 1), $$ which specifies the Poisson bracket (\ref{xqbra}) in this case. This Poisson bracket is invariant under the map \beq\label{p38map} \varphi: \qquad (x_1,\ldots ,x_8)\mapsto (x_2,\ldots ,x_9), \qquad x_9=\frac{x_4x_6+1}{x_1}. \eeq The functions $J_n$, which cycle with period 5, are given by $$ J_1=\frac{x_1+x_7}{x_4},\quad J_2=\frac{x_2+x_8}{x_5},\quad J_3=\frac{x_1x_3+x_4x_6+1}{x_1x_6}, \quad J_4=\frac{x_2x_4+x_5x_7+1}{x_2x_7},\quad J_5=\frac{x_3x_5+x_6x_8+1}{x_3x_8}. $$ These form a Poisson subalgebra with brackets $ \{J_1,J_2\}=-2J_1J_2,\quad \{J_1,J_3\}=-2J_1J_3+2, $ with all other brackets following from the cyclic property and skew-symmetry. This is the algebra of $P_6^{(1)}$, after the permutation $\sigma:\, (1,2,3,4,5)\mapsto (1,4,2,5,3)$. 
It provides three commuting functions, $$ {\cal I}_0=J_1+J_2+J_3+J_4+J_5, \quad {\cal I}_1=J_1J_4J_2+J_2J_5J_3+J_3J_1J_4+ J_4J_2J_5+J_5J_3J_1, \quad {\cal I}_2=J_1J_2J_3J_4J_5, $$ which (from Theorem \ref{isomonodromy}) are the homogeneous components of the Casimir of this subalgebra, namely $$ {\cal K}=\mathrm{tr}\, {\bf M}_1 = {\cal I}_0-{\cal I}_1+{\cal I}_2. $$ The generators of the period 3 subalgebra are given by $$ K_1=\frac{x_1x_3+x_6x_8+1}{x_3x_6},\quad K_2=\frac{x_1+x_7+x_4(x_1x_2+x_6x_7)}{x_1x_4x_7},\quad K_3=\frac{x_2+x_8+x_5(x_2x_3+x_7x_8)}{x_2x_5x_8}, $$ whose Poisson bracket relations are $$ \{K_1,K_2\}=-2K_1K_2+2,\quad \{K_2,K_3\}=-2K_2K_3+2,\quad \{K_1,K_3\}=2K_1K_3-2. $$ Up to the permutation $\hat{\sigma}:\, (1,2,3)\mapsto (1,3,2)$, this is the algebra associated with $P_4^{(1)}$, with Casimir $$ {\cal K}=\mathrm{tr}\, \hat{\bf M}_1 = -\hat{\cal I}_0+\hat{\cal I}_1, \qquad \mathrm{where} \quad \hat{\cal I}_0 = K_1+K_2+K_3, \quad \hat{\cal I}_1 = K_1K_2K_3. $$ The latter two quantities commute with each other, and we have the relation $ {\cal I}_0-{\cal I}_1+{\cal I}_2+\hat{\cal I}_0- \hat{\cal I}_1=0$. Since (\ref{commjk}) holds, any four of the five functions $ {\cal I}_0,{\cal I}_1,{\cal I}_2,\hat{\cal I}_0, \hat{\cal I}_1$ provide the correct number of independent commuting first integrals to show Liouville integrability of the 8-dimensional map (\ref{p38map}). }\eex \subsubsection{The odd case} When $N$ is odd, then $p$ is odd and $q$ is even, or vice versa. With $N=2m+1$ we find that, as for the primitives $P_{2m+1}^{(1)}$, the kernel of the matrix (\ref{btauk}) is spanned by the same vector ${\bf u}=(1,-1,\dots ,1,-1,1)^T$, orthogonal to the vectors (\ref{oddprimvj}) providing the symplectic coordinates $y_j = {\bf x}^{{\bf v}_j}$, as in (\ref{oddprimy}), which are invariant under the one-parameter scaling group (\ref{lascala}). 
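Before turning to the odd case, the relations of Example \ref{map-p83} can be verified directly in exact arithmetic. The Python sketch below (illustrative only; it evaluates the stated formulae for $J_1$ and $K_1$ at shifted windows along the orbit, which agree with the remaining $J_n$, $K_n$ on orbits of the recurrence) checks the periods of the $J_n$ and $K_n$ and the single relation between the two sets of integrals.

```python
from fractions import Fraction

# Iterate x_{n+8} x_n = x_{n+3} x_{n+5} + 1 (the primitive P_8^(3)),
# 0-based indexing, starting from all ones.
x = [Fraction(1)] * 8
for n in range(40):
    x.append((x[n + 3] * x[n + 5] + 1) / x[n])

def J(n):   # J_1 = (x_1 + x_7)/x_4, shifted along the orbit (q = 3)
    return (x[n] + x[n + 6]) / x[n + 3]

def K(n):   # K_1 = (x_1 x_3 + x_6 x_8 + 1)/(x_3 x_6), shifted along the orbit
    return (x[n] * x[n + 2] + x[n + 5] * x[n + 7] + 1) / (x[n + 2] * x[n + 5])

assert all(J(n + 5) == J(n) for n in range(20))   # J_n has period 5
assert all(K(n + 3) == K(n) for n in range(20))   # K_n has period 3

J1, J2, J3, J4, J5 = (J(n) for n in range(5))
K1, K2, K3 = (K(n) for n in range(3))
I0 = J1 + J2 + J3 + J4 + J5
I1 = J1*J4*J2 + J2*J5*J3 + J3*J1*J4 + J4*J2*J5 + J5*J3*J1
I2 = J1*J2*J3*J4*J5
Ih0, Ih1 = K1 + K2 + K3, K1*K2*K3
assert I0 - I1 + I2 + Ih0 - Ih1 == 0   # the single relation between the sets
print(I0 - I1 + I2)   # the common value of tr M_n and tr Mhat_n
```

From the all-ones initial data the first integral takes the value ${\cal K}=37$.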
The symplectic form obtained via Lemma \ref{symp} gives a nondegenerate Poisson bracket for the $y_j$, of the form \beq \label{yqbra} \{ \, y_j , y_k\,\} = \epsilon_{jk}\, y_j y_k , \qquad 1\leq j,k\leq 2m. \eeq In all examples we find that the Toeplitz matrix $\hat{B}^{-1}=( \epsilon_{jk})$ has only the entries $1,-1,0$ above the diagonal, but a concise formula for these $\epsilon_{jk}$ in terms of the coprime integers $p,q$ is presently unavailable. By Theorem \ref{torusred}, we have an induced birational map $\hat\varphi$ in $2m$ dimensions, which is a Poisson map with respect to the nondegenerate bracket (\ref{yqbra}). We would like to assert that this is an integrable map. Assume for the sake of argument that $p$ is odd and $q$ is even. Then the $J_n$ can be written as functions of $y_j$, and from (\ref{yqbra}) they should satisfy the algebra (\ref{jbrac}) corresponding to $P_{p+1}^{(1)}$, hence providing $(p+1)/2$ commuting integrals. The scaling-invariant quantities $\hat{w}_n =K_n K_{n+q}$, for $1\leq n \leq q-1$, can also be written in terms of the $y_j$, and (up to the permutation $\hat\sigma$) should give a Poisson subalgebra isomorphic to that of the functions (\ref{wi}) for $P_{q+1}^{(1)}$, providing another $q/2$ commuting integrals. With the constraint (\ref{k2}), this would give $m$ independent commuting integrals in dimension $2m$, as required. For the rest of this section, we present examples of primitives $P_{N}^{(q)}$ with odd $N$ and $q>1$. 
\bex[The primitive $P_5^{(2)}$] \label{ymap-p52} {\em For $N=5$, $q=2$ the matrix (\ref{btauk}) has null vector ${\bf u}=(1,-1,1,-1,1)^T$, and im$\, B$ is spanned by the vectors $${\bf v}_1=(1,1,0,0,0)^T,\, {\bf v}_2=(0,1,1,0,0)^T,\, {\bf v}_3=(0,0,1,1,0)^T,\, {\bf v}_4=(0,0,0,1,1)^T.$$ Upon applying Lemma \ref{symp}, the Toeplitz matrix $\hat{B}^{-1}$ is specified by its first row, namely $ (\epsilon_{1,j})=(0,0, 1, 1), $ which determines the components of the nondegenerate Poisson bracket (\ref{yqbra}) for the variables $y_i=x_ix_{i+1},\; i=1,\dots ,4$. Explicitly (for indices $j<k$) this is just $$ \{y_1,y_3\}= y_1y_3,\;\; \{y_1,y_4\}= y_1y_4,\;\; \{y_2,y_4\}= y_2y_4, $$ all other brackets being zero. The Poisson bracket is invariant under the induced map \beq\label{p25map} \hat\varphi: \qquad (y_1,y_2,y_3,y_4)\mapsto \Big(y_2,y_3,y_4, y_2y_4(y_3+1)/(y_1y_3)\Big). \eeq The period 3 functions $J_n$ take the form $$ J_1=\frac{x_1+x_5}{x_3},\quad J_2=\frac{x_1x_2+x_3x_4+1}{x_1x_4},\quad J_3=\frac{x_2x_3+x_4x_5+1}{x_2x_5}. $$ Being invariant under the scaling symmetry (\ref{lascala}), they can also be written in terms of the variables $y_i$, as $$ J_1=\frac{y_1y_3+y_2y_4}{y_2y_3},\quad J_2=\frac{y_2(y_1+y_3+1)}{y_1y_3},\quad J_3=\frac{y_3(y_2+y_4+1)}{y_2y_4}. $$ The Poisson brackets between these functions follow by the cyclic property from $ \{J_1,J_2\}=1-J_1J_2, $ which, up to rescaling by a factor of 2 and applying the permutation $\sigma : (1,2,3)\mapsto (1,3,2)$, is the bracket (\ref{jbrac}) of the $J_n$ for $P_4^{(1)}$. The $J_n$ subalgebra provides a pair of first integrals in involution, namely $$ {\cal I}_0=J_1+J_2+J_3, \qquad {\cal I}_1=J_1J_2J_3, $$ which is sufficient for the map (\ref{p25map}) to be Liouville integrable. (Since these functions are totally symmetric, not just cyclically symmetric, the permutation $\sigma $ plays no role.) 
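The invariance of ${\cal I}_0$ and ${\cal I}_1$ under the map (\ref{p25map}) can be confirmed with a short exact-arithmetic computation, using the expressions for the $J_n$ in the $y_i$ variables given above (the sketch is ours, not part of the example).

```python
from fractions import Fraction

# The induced map (p25map) for P_5^(2)
def phi(y):
    y1, y2, y3, y4 = y
    return (y2, y3, y4, y2 * y4 * (y3 + 1) / (y1 * y3))

# The period 3 functions J_n written in the y variables
def Js(y):
    y1, y2, y3, y4 = y
    return ((y1*y3 + y2*y4) / (y2*y3),
            y2 * (y1 + y3 + 1) / (y1*y3),
            y3 * (y2 + y4 + 1) / (y2*y4))

y = tuple(Fraction(v) for v in (1, 2, 3, 1))
J = Js(y)
I0, I1 = sum(J), J[0] * J[1] * J[2]
for _ in range(10):
    y = phi(y)
    J = Js(y)
    assert sum(J) == I0 and J[0] * J[1] * J[2] == I1   # both preserved
```

At the chosen point the $J_n$ simply cycle, $(J_1,J_2,J_3)\mapsto(J_2,J_3,J_1)$, so that ${\cal I}_0=61/6$ and ${\cal I}_1=50/3$ are constant on the orbit.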
The period 2 quantities $K_n$, which are not invariant under the scaling (\ref{lascala}), are $$ K_1=\frac{x_1x_2+x_4x_5+1}{x_2x_4},\quad K_2=\frac{x_1+x_5+x_3(x_1x_2+x_4x_5)}{x_1x_3x_5}. $$ As for the case of $P_3^{(1)}$ in Example \ref{ymap-p31}, the product $K_1K_2$ is invariant under the scaling symmetry, so can be written in terms of the variables $y_i$. In fact, from (\ref{k2}) we have $ {\cal K} = {\cal I}_1-{\cal I}_0 = K_1K_2-2, $ where (by Theorem \ref{isomonodromy}) ${\cal K}$ is the Casimir of the bracket for the $J_n$. }\eex \bex[The Case $P_7^{(2)}$] \label{ymap-p72} {\em In the case $N=7$, $q=2$, the Toeplitz matrix $\hat{B}^{-1}$ is specified by its first row, namely $ (\epsilon_{1,j})=(0, 1, 1, 0, 0, 1), $ which determines the components of the nondegenerate Poisson bracket (\ref{yqbra}) for the variables $y_i=x_ix_{i+1},\; i=1,\dots ,6$. This bracket is preserved by the 6-dimensional map $$ \hat\varphi:\quad (y_1,\ldots , y_6)\mapsto (y_2,\ldots ,y_7), \qquad y_7= \frac{y_2y_6(y_3y_5+y_4)}{y_1y_3y_5}, $$ which (by Theorem \ref{torusred}) is induced from the map $\varphi$ defined by the recurrence (\ref{jkrec}) with $p=5$, $q=2$. The functions $J_n$, which cycle with period 5 under the action of $\varphi$, can be written in terms of $y_i$ thus: \bea && J_1=\frac{y_1y_3+y_2y_4}{y_2y_3},\quad J_2=\frac{y_2y_4+y_3y_5}{y_3y_4}, \quad J_3=\frac{y_3y_5+y_4y_6}{y_4y_5}, \nn\\ && J_4=\frac{y_2y_4+y_1y_3y_4+y_2y_3y_5}{y_1y_3y_5},\quad J_5=\frac{y_3y_5+y_2y_4y_5+y_3y_4y_6}{y_2y_4y_6}. \nn \eea The Poisson subalgebra generated by these functions is specified by $$ \{J_1,J_2\}=J_1J_2,\quad \{J_1,J_3\}=J_1J_3-1, $$ with all other brackets following from the cyclic property and skew-symmetry. 
By rescaling by a factor of 2 and applying a permutation to order these functions as $(J_1,J_3,J_5,J_2,J_4)$, this is seen to be isomorphic to the algebra of the $J_n$ for $P_6^{(1)}$, so we find the commuting functions $$ {\cal I}_0 =J_1+J_3+J_5+J_2+J_4,\quad {\cal I}_1=J_1J_3J_5+J_2J_4J_1+J_3J_5J_2+J_4J_1J_3+J_5J_2J_4,\quad {\cal I}_2=J_1J_3J_5J_2J_4. $$ Of course, the ordering is unimportant for the totally symmetric functions ${\cal I}_0$ and ${\cal I}_2$. The period 2 quantities, which scale as $K_1\rightarrow \la^{2}K_1$, $K_2\rightarrow \la^{-2}K_2$ under (\ref{lascala}), are $$ K_1=\frac{x_2+x_6+x_4(x_1x_2+x_6x_7)}{x_2x_4x_6},\quad K_2=\frac{x_1x_3+x_1x_7+x_5x_7+x_3x_5(x_1x_2+x_6x_7)}{x_1x_3x_5x_7}. $$ From (\ref{k2}), the scaling-invariant combination $K_1K_2$ can be written in terms of $y_i$, via $$ K_1K_2-2={\cal K} ={\cal I}_0-{\cal I}_1+{\cal I}_2 . $$ }\eex \bex[The primitive $P_7^{(3)}$] \label{ymap-p73} {\em For $N=7$, $q=3$, the first row of the Toeplitz matrix $\hat{B}^{-1}$ is $ (\epsilon_{1,j})=(0, 0, 0, 1, 1,0). $ This defines the nondegenerate Poisson bracket (\ref{yqbra}), which is invariant under the map $$ \hat\varphi:\quad (y_1,\ldots , y_6)\mapsto (y_2,\ldots ,y_7), \qquad y_7= \frac{y_2y_4y_6(y_4+1)}{y_1y_3y_5} , $$ in terms of the variables $y_i=x_ix_{i+1},\; i=1,\dots ,6$. The functions $J_n$, with period 4, are not invariant under (\ref{lascala}), but $w_n=J_nJ_{n+1}, \, n=1,\dots,4$ are: \bea && w_1=\frac{(1+y_1+y_4)(y_1y_3y_5+y_2y_4y_6)}{y_1y_3y_4y_5},\quad w_2=\frac{(1+y_1+y_4)(1+y_2+y_5)}{y_1y_5}, \nn\\ && w_3=\frac{(1+y_2+y_5)(1+y_3+y_6)}{y_2y_6},\quad w_4=\frac{w_1w_3}{w_2}, \nn \eea of which only three are independent (since $w_1w_3=w_2w_4$). By periodicity and skew-symmetry, all of their Poisson brackets follow from $$ \{w_1,w_2\}=w_1+w_2-w_1w_2,\quad \{w_1,w_3\}=w_4-w_2. $$ By applying the permutation $\sigma: \, (1,2,3,4)\mapsto (1,4,3,2)$, this is seen to be isomorphic to the algebra for $P_5^{(1)}$, as in Example \ref{ymap-p51}.
Hence we have two functions in involution, namely $$ {\cal I}_1=w_1+w_4+w_3+w_2,\qquad {\cal I}_2=w_1w_3, $$ where the ordering is unimportant since both functions ${\cal I}_1$ and ${\cal I}_2$ are invariant under $\sigma$. The necessary third function in involution is derived from the quantities $K_n$, which cycle with period 3. Being invariant under the scaling (\ref{lascala}), they can be written in terms of the variables $y_i$: $$ K_1=\frac{y_3(y_1+y_5+1)}{y_2y_4},\quad K_2=\frac{y_4(y_2+y_6+1)}{y_3y_5},\quad K_3=\frac{y_1y_3y_5(y_3+1)+y_2y_4y_6(y_4+1)}{y_1y_3y_4y_6}. $$ They generate a three-dimensional Poisson subalgebra with the same relations as for the subalgebra in $P_4^{(1)}$ (up to a factor of 2), i.e. $$ \{K_1,K_2\}=K_1K_2-1,\quad \{K_2,K_3\}=K_2K_3-1,\quad \{K_1,K_3\}=-K_1K_3+1. $$ There are two first integrals that are commuting functions defined on this subalgebra, which we denote by $$ \hat{\cal I}_0=K_1+K_2+K_3, \qquad \hat{\cal I}_1=K_1K_2K_3, $$ and the joint Casimir of the two subalgebras is given by $ {\cal K} = 2-{\cal I}_1+{\cal I}_2=\hat{\cal I}_1-\hat{\cal I}_0. $ Since $\{K_i,w_j\}=0$ for all $i,j$, we may use any three of the functions ${\cal I}_1,{\cal I}_2,\hat{\cal I}_0,\hat{\cal I}_1$ to show Liouville integrability. }\eex \section{Linearisable recurrences from $P_{2m}^{(q)}-P_{2m}^{(m)}+P_{2(m-q)}^{(m-q)}$ quivers} \setcounter{equation}{0} \label{pert} In this section we consider the family of recurrences (\ref{comprec}), as in case (iii) of Theorem \ref{zeroe}, which come from the quivers of the form $P_{2m}^{(q)}-P_{2m}^{(m)}+P_{2(m-q)}^{(m-q)}$. It is convenient to rewrite each recurrence as \beq x_{n+N}\, x_n = x_{n+p}\, x_{n+q} + x_{n+m}, \qquad p+q=N=2m, \label{pqrec} \eeq which (for fixed $m$) gives a different recurrence for each $q=1,\ldots,m-1$. 
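The claims of Example \ref{ymap-p73} can likewise be tested numerically: the sketch below (illustrative only, using the stated formulae for the $w_n$ and $K_n$ in the $y_i$ variables) checks that ${\cal I}_1,{\cal I}_2,\hat{\cal I}_0,\hat{\cal I}_1$ are invariant under the 6-dimensional map and that the joint Casimir relation holds.

```python
from fractions import Fraction

# The induced 6D map for P_7^(3)
def phi(y):
    y1, y2, y3, y4, y5, y6 = y
    return (y2, y3, y4, y5, y6, y2*y4*y6*(y4 + 1) / (y1*y3*y5))

# The scale-invariant quantities w_n = J_n J_{n+1} and the K_n, in y variables
def wK(y):
    y1, y2, y3, y4, y5, y6 = y
    w1 = (1 + y1 + y4) * (y1*y3*y5 + y2*y4*y6) / (y1*y3*y4*y5)
    w2 = (1 + y1 + y4) * (1 + y2 + y5) / (y1*y5)
    w3 = (1 + y2 + y5) * (1 + y3 + y6) / (y2*y6)
    K1 = y3 * (y1 + y5 + 1) / (y2*y4)
    K2 = y4 * (y2 + y6 + 1) / (y3*y5)
    K3 = (y1*y3*y5*(y3 + 1) + y2*y4*y6*(y4 + 1)) / (y1*y3*y4*y6)
    return (w1, w2, w3, w1*w3/w2), (K1, K2, K3)

y = tuple(Fraction(1) for _ in range(6))
w, K = wK(y)
I1, I2 = sum(w), w[0] * w[2]
Ih0, Ih1 = sum(K), K[0] * K[1] * K[2]
assert 2 - I1 + I2 == Ih1 - Ih0        # the joint Casimir relation
for _ in range(10):
    y = phi(y)
    w, K = wK(y)
    assert sum(w) == I1 and w[0] * w[2] == I2
    assert sum(K) == Ih0 and K[0] * K[1] * K[2] == Ih1
```

From the all-ones point one finds ${\cal I}_1=30$, ${\cal I}_2=54$, $\hat{\cal I}_0=10$, $\hat{\cal I}_1=36$, with ${\cal K}=26$.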
The associated matrix $B$ is given by \be \label{btau3} B = \tau_{2m}^q-(\tau_{2m}^q)^T-\tau_{2m}^m+\hat\tau_{2(m-q)}^{m-q}, \ee where $\tau_N$ is defined in (\ref{btau}), and $\hat\tau_{2(m-q)}$ denotes the $N\times N$ matrix obtained by adding $q$ left and right columns and upper and lower rows of zeros to the $2(m-q)\times 2(m-q)$ matrix $\tau_{2(m-q)}$. The simplest examples of the quivers corresponding to such $B$ are shown in Figure 1. Observe that if $\mbox{gcd}(m,q)=r>1$ then the quiver consists of $r$ disjoint copies of the same type of quiver, but with the parameters $q$ and $m$ replaced by the coprime integers $q/r$ and $m/r$, respectively, and similarly (\ref{pqrec}) decouples into $r$ copies of the corresponding recurrence. Therefore we shall assume that $\mbox{gcd}(m,q)=1$ from now on. With this assumption it follows from $p+q=2m$ that $\mbox{gcd}(p,q)=1$ or $2$ only. The case $\mbox{gcd}(p,q)=1$ has a very similar structure to that of the primitives $P_{N}^{(q)}$ for even $N$, but the case $\mbox{gcd}(p,q)=2$ has several new features, so we will need to distinguish between these two cases in due course. The family (\ref{pqrec}) has some basic properties that are analogous to those of the family in case (ii) of Theorem \ref{zeroe}, as described in the previous section. In particular, all of the recurrences (\ref{pqrec}) are linearisable, and they have two sets of periodic functions, with periods $p,q$ respectively, which lead to the construction of first integrals. \subsection{More linear relations with periodic coefficients} By analogy with (\ref{frieze}), the recurrence (\ref{pqrec}) can be written in the form \beq \label{2frieze} \det \,\Psi_n = \left| \bear{cc} x_n & x_{n+q} \\ x_{n+p} & x_{n+N} \eear\right| = x_{n+m}. \eeq The above identity is the relation for a 2-frieze \cite{morier}, and it implies that the iterates of (\ref{pqrec}) can be placed in the form an infinite 2-frieze. 
Using Dodgson condensation once again to condense a $3\times 3$ determinant, we have $$ \det\tilde{\Psi}_n =(x_{n+m}x_{n+N+m}-x_{n+m+q}x_{n+m+p})/x_{n+N}=1, $$ with $\tilde{\Psi}_n$ as in (\ref{3by3}). Then condensing the appropriate $4\times 4$ matrix $\Delta_n$ in terms of $3\times 3$ minors yields $$ \det \Delta_n = \left| \bear{cccc}x_n & x_{n+q} & x_{n+2q} & x_{n+3q}\\ x_{n+p} & x_{n+N} & x_{n+N+q} & x_{n+N+2q} \\ x_{n+2p} & x_{n+N+p} & x_{n+2N} & x_{n+2N+q} \\ x_{n+3p} & x_{n+N+2p} & x_{n+2N+p} & x_{n+3N} \eear\right|=0. $$ As in the case of the primitives considered before, the left and right kernels of the singular matrix $\Delta_n$ yield linear relations between the $x_n$. \begin{lem} \label{PQ} The iterates of the recurrence (\ref{pqrec}) satisfy the linear relations \beq\label{Prec} x_{n+3q}-J_{n+m}\,x_{n+2q}+ J_{n}\, x_{n+q} - x_n = 0, \eeq \beq\label{Qrec} x_{n+3p}-K_{n+m}\,x_{n+2p}+ K_{n}\, x_{n+p} - x_n = 0, \eeq whose coefficients are periodic functions of period $p,q$ respectively, that is $$ J_{n+p}=J_n, \qquad K_{n+q}=K_n, \qquad \mathrm{for\,\, all} \, \, n. $$ \end{lem} \begin{prf} Upon solving $\Delta_n{\bf k}_n =0$, using the first three rows, it is convenient to normalise the first entry of ${\bf k}_n$ to be $-1$, and solve a $3\times 3$ system to find the other three entries. Then from Cramer's rule and $\det\tilde{\Psi}_n =1$ the fourth entry must be $+1$, so that this vector has the form ${\bf k}_n=(-1,J_n,\hat{J}_n,1)^T$. The $3\times 3$ linear system coming from the first three rows of the equation $\Delta_{n+p}{\bf k}_{n+p} =0$ is the same as that coming from the last three rows of $\Delta_n{\bf k}_n =0$, which implies that $J_{n+p}=J_n$ and $\hat{J}_{n+p}=\hat{J}_n$.
Applying Cramer's rule in the first three rows of $\Delta_n{\bf k}_n =0$ together with Dodgson condensation also implies that \beq\label{cform} J_n =\left|\bear{ccc} x_n & x_{n+2q} & x_{n+3q}\\ x_{n+p} & x_{n+N+q} & x_{n+N+2q}\\ x_{n+2p} & x_{n+2N} & x_{n+2N+q} \eear \right| =\frac{1}{x_{n+4m-p}}\, \left|\bear{cc}E_n & x_{n+5m-2p} \\ E_{n+p} & x_{n+5m-p} \eear\right|, \eeq and similarly \beq\label{dform} \hat{J}_n = \frac{1}{x_{n+2m}}\, \left|\bear{cc}E_{n+2m-p} & x_{n+m} \\ E_{n+2m} & x_{n+m+p} \eear\right|, \qquad \mathrm{where} \qquad E_n = \left|\bear{cc}x_{n} & x_{n+2q} \\ x_{n+p} & x_{n+N+q} \eear\right|. \eeq Permuting the first and second columns of the determinant in the identity $\det\tilde{\Psi}_n =1$ and expanding in terms of $2\times 2$ minors, and then doing the same thing after permuting the second and third columns instead, leads to the formulae \beq\label{eids} \left|\bear{cc} E_{n} & x_{n+m} \\ E_{n+p} & x_{n+m+p} \eear\right| = -x_{n+p}, \qquad \left|\bear{cc}E_{n} & x_{n+3m-p} \\ E_{n+p} & x_{n+3m} \eear\right| =x_{n+4m-p}, \eeq respectively. Then the combination $x_{n+2m}x_{n+5m-p}(J_{n+m}+\hat{J}_n)$ can be rewritten as a sum of determinants, whose entries can be expanded using each of the identities (\ref{eids}), to yield $$ \bear{l} \quad \left|\bear{cc} x_{n+2m}E_{n+m} & x_{n+6m-2p} \\ E_{n+m+p} x_{n+2m} & x_{n+6m-p} \eear\right| + \left|\bear{cc} x_{n+5m-p}E_{n+2m-p} & x_{n+m} \\ x_{n+5m-p}E_{n+2m} & x_{n+m+p} \eear\right| \\ \\ = \left|\bear{cc} x_{n+2m}E_{n+m} & x_{n+6m-2p} \\ x_{n+2m+p}E_{n+m}+ x_{n+m+p} & x_{n+6m-p} \eear\right| +\left|\bear{cc} x_{n+5m-2p}E_{n+2m} +x_{n+6m-2p} & x_{n+m} \\ x_{n+5m-p}E_{n+2m} & x_{n+m+p} \eear\right| \\ \\ = E_{n+m} \left|\bear{cc} x_{n+2m} & x_{n+2m+2q} \\ x_{n+2m+p} & x_{n+2m+N+q} \eear\right| +E_{n+2m}\left|\bear{cc} x_{n+m+2q}& x_{n+m} \\ x_{n+m+N+q}& x_{n+m+p} \eear\right|=0, \eear $$ as required.
This proves (\ref{Prec}), and the relation (\ref{Qrec}) follows by symmetry, from considering the left kernel of $\Delta_n$. \end{prf} \begin{rem}\label{penta} {\em The four-term linear relations (\ref{Prec}) and (\ref{Qrec}), together with $\det\tilde{\Psi}_n =1$, should be compared with those of the pentagram map \cite{pentagram}, but there the coefficients of the second and third terms are independent. } \end{rem} \begin{rem} \label{gcd1} {\em When $q=1$ the coefficient $K_n$ has period 1, so $K_{n+1}=K_n={\cal K}$ for all $n$, and the recurrence (\ref{Qrec}) is just the {\it constant coefficient, linear difference equation} \beq\label{q1krec} x_{n+3N-3} -{\cal K}\, x_{n+2N-2}+{\cal K}\, x_{n+N-1}-x_n =0. \eeq An immediate consequence of the latter relation is an inhomogeneous version of (\ref{k-pn1}), namely $$ x_{n+2N-2} -({\cal K}-1)\, x_{n+N-1} + x_n = F_n, \qquad \mathrm{where}\qquad F_{n+N-1}= F_n, $$ for some quantity $F_n$ (which has period $p=N-1$ in this case). Thus, by a minor modification of Corollary \ref{cheby}, one can find explicit formulae for $x_n$ in terms of Chebyshev polynomials. } \end{rem} \begin{rem} \label{gcd2} {\em When $q=2$, the coefficients in the recurrence (\ref{Qrec}) have period 2, so $K_{n+2}=K_n$ for all $n$, and (since $\mbox{gcd}(m,q)=1$ implies that $m$ is odd) we have $K_{n+m}=K_{n+1}$, whence \beq\label{q2krec} x_{n+3p} -K_{n+1}\, x_{n+2p}+K_n\, x_{n+p}-x_n =0. \eeq This is a four-term linear relation, whose coefficients alternate with the parity of the index $n$. } \end{rem} The existence of the periodic quantities $J_n$ and $K_n$ means that, as for the case of primitives considered in the last section, one can construct first integrals by taking cyclically symmetric functions of each of these sets of quantities. When $\mbox{gcd}(p,q)=1$ we find one relation between these two sets of quantities, and when $\mbox{gcd}(p,q)=2$ we find two relations, which will be discussed below. 
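The constant coefficient linear relation of Remark \ref{gcd1} is easily observed numerically. The sketch below (ours, for illustration) takes the simplest case $m=2$, $q=1$, so $N=4$, $p=3$, iterates $x_{n+4}x_n=x_{n+3}x_{n+1}+x_{n+2}$, recovers ${\cal K}$ from one instance of the relation, and verifies it along the orbit.

```python
from fractions import Fraction

# Iterate x_{n+4} x_n = x_{n+3} x_{n+1} + x_{n+2} (the case m = 2, q = 1
# of (pqrec)), 0-based indexing, starting from all ones.
x = [Fraction(1)] * 4
for n in range(30):
    x.append((x[n + 3] * x[n + 1] + x[n + 2]) / x[n])

# Qrec with period 1 coefficient: x_{n+9} - K x_{n+6} + K x_{n+3} - x_n = 0,
# so K can be read off from the n = 0 instance.
K = (x[9] - x[0]) / (x[6] - x[3])
for n in range(20):
    assert x[n + 9] - K * x[n + 6] + K * x[n + 3] - x[n] == 0
print(K)   # -> 10
```

With the all-ones initial data the orbit begins $1,1,1,1,2,3,5,13,22,41,\ldots$ and the constant coefficient is ${\cal K}=10$.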
\subsection{Monodromy and linear relations with constant coefficients} \label{3x3monodromy} This subsection follows closely the discussion of subsection \ref{2x2monodromy} for the case of primitives. However, it will subsequently be necessary to refine the discussion further, depending on whether $\mbox{gcd}(p,q)=1$ or $2$. The relation (\ref{Prec}) implies that the matrix $\tilde\Psi_{n}$ satisfies \beq\label{psi3rel1} \tilde\Psi_{n+q}=\tilde\Psi_n\, {\bf L}_n, \qquad {\bf L}_n=\left(\bear{ccc} 0 & 0 & 1 \\ 1 & 0 & -J_n \\ 0 & 1 & J_{n+m} \eear\right). \eeq On the other hand, the recurrence (\ref{Qrec}) yields \beq\label{psi3rel2} \tilde\Psi_{n+p}= \hat{\bf L}_n \,\tilde\Psi_n, \qquad \hat{\bf L}_n=\left(\bear{ccc} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & -K_n & K_{n+m} \eear\right). \eeq As before, upon taking the ordered product of the ${\bf L}_n$ over $p$ steps, shifting by $q$ each time, we have the monodromy matrix \beq\label{mdy31} {\bf M}_n:= {\bf L}_{n}{\bf L}_{n+q}\ldots{\bf L}_{n+(p-1)q}=\tilde\Psi_n^{-1}\,\tilde \Psi_{n+pq}. \eeq Taking the ordered product of the $\hat{\bf L}_n$ over $q$ steps, shifting by $p$ each time, gives another monodromy matrix \beq\label{mdy32} \hat{{\bf M}}_n:= \hat{\bf L}_{n+(q-1)p}\ldots\hat{\bf L}_{n+p}\hat{\bf L}_n=\tilde\Psi_{n+pq}\, \tilde\Psi_n^{-1}. \eeq From the cyclic property of the trace it follows that \beq\label{k32} \mathcal{K}_n:= \mathrm{tr}\, {\bf M}_n = \mathrm{tr}\, \hat{\bf M}_n. \eeq The periodicity of ${\bf L}_n$, together with (\ref{mdy31}), implies that $\mathcal{K}_{n+p}=\mathcal{K}_n$, and similarly, from (\ref{mdy32}), we have $\mathcal{K}_{n+q}=\mathcal{K}_n$. If the periods $p$ and $q$ are coprime, then $\mathcal{K}_n=\mathcal{K}=$constant, for all $n$, hence $\mathcal{K}$ is a first integral for the map $\varphi$ corresponding to (\ref{pqrec}). However, if $\mbox{gcd}(p,q)=2$ holds instead then we have $\mathcal{K}_{n+2}=\mathcal{K}_n$, so this quantity has period 2. 
Once again the general result of Lemma \ref{perlin} can be applied here, to show that the iterates of (\ref{pqrec}) satisfy a linear relation with constant coefficients. \begin{propn}\label{K3pq} The iterates of the nonlinear recurrence (\ref{pqrec}) satisfy a linear relation of order $3pq$ with constant coefficients. It is a four-term relation if $\mbox{gcd}(p,q)=1$, and a seven-term relation if $\mbox{gcd}(p,q)=2$. \end{propn} \begin{prf} From (\ref{Prec}), Lemma \ref{perlin} implies that $x_n$ satisfies a linear recurrence of order $3pq$ with constant coefficients, for which the gaps between indices of adjacent terms with nonzero coefficients are of size $p$. On the other hand, (\ref{Qrec}) implies a linear recurrence of the same order with gaps of size $q$. Thus the actual size of the gaps must be the lowest common multiple of $p$ and $q$. Hence, when $p$ and $q$ are coprime, the gaps are of size $pq$, giving a four-term relation, while $\mbox{gcd}(p,q)=2$ gives a seven-term relation with gaps of size $pq/2$. \end{prf} Below we provide refined versions of the preceding result, with more precise details of the coefficients, by considering the cases $\mbox{gcd}(p,q)=1$ and $\mbox{gcd}(p,q)=2$ separately. \subsection{The case $\mbox{gcd}(p,q)=1$} The discussion of the case where $p$ and $q$ are coprime is almost identical to that for the primitives $P_N^{(q)}$ with $N$ even, as in subsection 4.7 above. The integers $p$ and $q$ are both odd, and for the matrix (\ref{btau3}) in each case we find that det$\,B=4$ whenever $p$ or $q$ is divisible by 3, and det$\,B=1$ otherwise. From such a nondegenerate matrix $B$ we get an invariant Poisson bracket of the form (\ref{logcan}), which is specified uniquely (up to scale) by the Toeplitz matrix $C=B^{-1}$. However, in this case a general closed-form expression for the entries of $C$, analogous to the formula (\ref{xqbra}) for the Poisson bracket of the even primitives, is not available to us at present. 
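As a concrete illustration of Proposition \ref{K3pq} in the coprime case, the Python sketch below (a numerical check with the arbitrary initial data $1,\dots,1$) iterates the recurrence for $(p,q)=(5,3)$, treated in Example \ref{ex-p8384} below, and confirms a four-term relation with constant coefficient and gaps of size $pq=15$:

```python
from fractions import Fraction

# The case (p,q) = (5,3) of (pqrec): x_n x_{n+8} = x_{n+3} x_{n+5} + x_{n+4}.
x = [Fraction(1)] * 8
for n in range(90):
    x.append((x[n + 3] * x[n + 5] + x[n + 4]) / x[n])

# gcd(5,3) = 1, so the gaps are pq = 15 and the coefficient K is constant:
# x_{n+45} - K x_{n+30} + K x_{n+15} - x_n = 0 for all n.
K = (x[45] - x[0]) / (x[30] - x[15])
assert all(x[n + 45] - K * x[n + 30] + K * x[n + 15] - x[n] == 0
           for n in range(50))
print(int(K))  # 155 on this orbit
```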
We now consider the associated functions $J_n$ and $K_n$, of periods $p$ and $q$ respectively. From the expression (\ref{mdy31}), the first integral $\mathcal{K}$ defined by (\ref{k32}) is a cyclically symmetric polynomial in the $J_n$, $n=1,\ldots, p$, and from (\ref{mdy32}) it is also a cyclically symmetric polynomial in the $K_n$, $n=1,\ldots, q$. Thus the equality of the traces in (\ref{k32}) provides a single functional relation between these two sets of functions. Note that we also have the same phenomenon as in (\ref{kpq}) with regard to the different expressions for ${\cal K}$ as a function of the quantities $J_n$, when we compare the cases with $q=1$ and $q>1$, for the same value of $p$: up to the action of a suitable permutation $\sigma$ of $(1,\ldots, p)$, the two expressions are identical; and the analogous statement applies to ${\cal K}$ considered as a function of the $K_n$. Now observe that all of the preceding comments concerning ${\cal K}$ apply equally well to the quantity $$ \tilde{\cal K} := \mathrm{tr}\, {\bf M}^{-1}_n = \mathrm{tr}\, \hat{\bf M}^{-1}_n, $$ which is also a first integral (so this definition holds for any $n$). We would like to assert that in fact $\tilde{\cal K}={\cal K}$. To see this, note that tr$\, \hat{\bf L}_n = K_{n+m}$, and tr$\, \hat{\bf L}^{-1}_n = K_{n}$. Thus, in the case $q=1$, when $K_n={\cal K}=\,$constant, we have ${\cal K}=\mathrm{tr}\,\hat{\bf M}_n =\mathrm{tr}\,\hat{\bf M}^{-1}_n=\tilde{\cal K}$ by (\ref{mdy32}). This then implies that, in terms of functions of $J_1,\ldots ,J_p$, we have \beq\label{kt32} \mathrm{tr}\,{\bf M}_n=\mathrm{tr}\,{\bf M}^{-1}_n \eeq for $q=1$, and clearly this identity remains true when $q>1$, since (for fixed $p$) the functions ${\cal K}$ and $\tilde{\cal K}$ are obtained from the case $q=1$ by applying the same permutation $\sigma$ to both sides. This allows us to make a more precise statement than Proposition \ref{K3pq}. 
\begin{thm}\label{Klin3} When $\mbox{gcd}(p,q)=1$, the iterates of the nonlinear recurrence (\ref{pqrec}) satisfy the linear relation \beq\label{krec3} x_{n+3pq}- \mathcal{K}\,x_{n+2pq}+\mathcal{K}\,x_{n+pq}-x_n=0, \eeq where $\mathcal{K}$ is the first integral defined by (\ref{k32}). \end{thm} \begin{prf} Using (\ref{mdy31}) we see that $\tilde\Psi_{n+pq}=\tilde\Psi_n {\bf M}_n$, so $\tilde\Psi_{n+2pq}=\tilde\Psi_{n+pq}{\bf M}_n{\bf M}_{n+pq}=\tilde\Psi_n {\bf M}_n^2$, by periodicity of ${\bf M}_n$, and similarly $\tilde\Psi_{n+3pq}=\tilde\Psi_n {\bf M}_n^3$. Applying Cayley-Hamilton to both ${\bf M}_n$ and ${\bf M}_n^{-1}$, and noting that det$\,{\bf M}_n=1$ and tr$\,{\bf M}_n={\cal K}$, as well as (\ref{kt32}), yields $$ \tilde\Psi_{n+3pq}-{\cal K} \tilde\Psi_{n+2pq}+{\cal K} \tilde\Psi_{n+pq}-\tilde\Psi_n = \tilde\Psi_n({\bf M}_n^3-{\cal K} {\bf M}_n^2+{\cal K} {\bf M}_n-{\bf I})=0. $$ The $(1,1)$ component of this equation is just (\ref{krec3}). \end{prf} \br[Another construction of the integral for $P_3^{(1)}$] {\em We may think of the map $\hat\varphi$ in Example \ref{ymap-p31} as coming from the recurrence $y_{n+2}y_n=y_{n+1}^2+y_{n+1}$, which is of the same form as (\ref{pqrec}), with $p=q=m=1$, although not directly obtained from a cluster mutation. Replacing $x_n\to y_n$ and ${\cal K}\to \hat{\cal K}$ in (\ref{krec3}), and solving, leads to the first integral $$ \hat{\cal K} =\frac{y_{n+3}-y_n}{y_{n+2}-y_{n+1}}. $$ When this is written purely in terms of $y_n, y_{n+1}$, using the map, we find the previously obtained quantity $$ {\cal I}_1=\hat{\cal K}+1=\frac{(y_n+y_{n+1})(y_n+y_{n+1}+1)}{y_ny_{n+1}}. $$ } \er For the Liouville integrability of the map $\varphi$ corresponding to (\ref{pqrec}), the counting of first integrals appears to be the same as for the even primitives $P_{2m}^{(q)}$. 
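The quantities in the preceding remark are easily checked numerically. In the Python sketch below (not part of the analysis; the initial data $y_1=y_2=1$ is an arbitrary choice) the ratio defining $\hat{\cal K}$ is verified to be independent of $n$, and to agree with ${\cal I}_1-1$:

```python
from fractions import Fraction

# Iterate y_{n+2} y_n = y_{n+1}^2 + y_{n+1} from y_1 = y_2 = 1.
y = [Fraction(1), Fraction(1)]
for n in range(30):
    y.append((y[n + 1] ** 2 + y[n + 1]) / y[n])

# Khat = (y_{n+3} - y_n)/(y_{n+2} - y_{n+1}) is the same for every n ...
Khat = (y[3] - y[0]) / (y[2] - y[1])
assert all(y[n + 3] - y[n] == Khat * (y[n + 2] - y[n + 1]) for n in range(25))

# ... and I_1 = Khat + 1 agrees with the closed form in (y_n, y_{n+1}),
# which is itself invariant along the orbit.
I1 = [(y[n] + y[n + 1]) * (y[n] + y[n + 1] + 1) / (y[n] * y[n + 1])
      for n in range(25)]
assert all(v == Khat + 1 for v in I1)
print(int(Khat), int(I1[0]))  # 5 6
```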
The two sets of quantities $J_n$ and $K_n$ generate Poisson subalgebras of dimensions $p$ and $q$, which should contain $(p+1)/2$ and $(q+1)/2$ commuting integrals, respectively, including the Casimir $\cal K$, which is common to both subalgebras. Taking the constraint (\ref{k32}) into account, this produces $m=(p+q)/2$ independent commuting integrals in terms of the $x_j$, as required. However, while the above counting argument is plausible, it rests on some unproven assumptions. Everything relies on the structure of the Poisson bracket for the $J_n$ in the case $q=1$, since all other subalgebras of $J_n$ or $K_n$ should be isomorphic to one with $q=1$. Yet in general (excluding $m=2$), this Poisson bracket consists of three homogeneous parts (of degrees $0, 1$ and $2$), which (unlike the bracket (\ref{jbrac}) for the primitive $P_{2m}^{(1)}$) does not have an obvious splitting into a bi-Hamiltonian pair. Moreover, while every cyclically symmetric polynomial function of the $J_n$ is a first integral of the map $\varphi$, we do not yet have an algorithm for selecting an involutive set of them. \begin{rem}{\em In \cite{pentagram} a quadratic Poisson structure is presented for the coefficients of the four-term linear relations for twisted polygons, together with corresponding monodromy matrices and commuting first integrals for the pentagram map. However, a Dirac reduction of this bracket to the case of (\ref{Prec}) or (\ref{Qrec}) gives only the trivial bracket. A general approach to Poisson structures related to twisted polygons is described in \cite{marshall}, which should shed some light on the situation here. } \end{rem} For want of more general statements, we illustrate the foregoing discussion with several examples of the integrable systems that arise in this case. \bex[The quiver $P_{4}^{(1)}-P_{4}^{(2)}+P_{2}^{(1)}$] \label{ex-p4142} {\em This example was first studied in an ad hoc way in \cite{honelaur}. 
The appropriate matrix (\ref{btau3}), which corresponds to the first quiver in Figure 1, is obtained by setting $c=1$ in (\ref{s4gen}). The explicit form of the map is \be \label{p4142map} \varphi : \qquad (x_1,x_2,x_3,x_4) \mapsto (x_2,x_3,x_4,x_5), \qquad x_5= \frac{x_2x_4+x_3}{x_1}. \ee The invariant Poisson bracket (\ref{logcan}) for this map is given by the Toeplitz matrix $C=2B^{-1}$, with top row $ (c_{1,j})=(0,1,1,2). $ The period 3 functions $J_i$ take the form $$ J_1=\frac{x_3(x_2+x_1x_3)+x_4(x_1^2+x_2^2)}{x_1x_2x_4},\quad J_2=\frac{x_1x_4+x_2^2+x_3^2}{x_2x_3},\quad J_3=\frac{x_3(x_2+x_1x_3)+x_4(x_1x_4+x_2^2)}{x_1x_3x_4}. $$ The Poisson brackets between these three functions follow by the cyclic property from \beq\label{N4jbr} \{J_1,J_2\}=J_1J_2-2J_3. \eeq This example is exceptional, in that the bracket for the $J_i$ is the sum of only {\em two} homogeneous terms: $$ {\bf P}={\bf P}^{(2)}+{\bf P}^{(1)},\quad\mbox{where}\;\;\; {\bf P}^{(2)}=\left(\begin{array}{ccc} 0 & J_1J_2 & -J_1J_3 \\ -J_1J_2 & 0 & J_2J_3 \\ J_1J_3 & -J_2J_3 & 0 \end{array} \right), \quad {\bf P}^{(1)}=\left(\begin{array}{ccc} 0 & -2J_3 & 2J_2 \\ 2J_3 & 0 & -2J_1 \\ -2J_2 & 2J_1 & 0 \end{array} \right). $$ Each of the tensors specified by ${\bf P}^{(1)}$ and ${\bf P}^{(2)}$ satisfies the Jacobi identity, so (since their sum is a Poisson tensor) they define a compatible pair of Poisson brackets. The first integrals $$ \CH_1=J_1^2+J_2^2+J_3^2, \qquad \CH_2=J_1J_2J_3 $$ satisfy the bi-Hamiltonian ladder $$ {\bf P}^{(1)}\nabla \CH_1=0,\qquad {\bf P}^{(2)}\nabla \CH_1={\bf P}^{(1)}\nabla \CH_2, \qquad {\bf P}^{(2)}\nabla \CH_2=0, $$ so they commute with respect to the bracket defined by (\ref{N4jbr}). The quantity \beq\label{k31eq} {\cal K}=3-\CH_1+\CH_2 \eeq provides the Casimir of this bracket; following (\ref{kpq}), we will find it useful to denote this quantity by ${\cal K}_{3,1}$.
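All of the assertions in this example can be confirmed numerically. The Python sketch below (a sanity check, with the arbitrary initial data $(1,1,1,1)$) evaluates the $J_i$ along an orbit of the map (\ref{p4142map}), checks the invariance of $\CH_1$ and $\CH_2$ and the period 3 cycling of the $J_i$, and verifies that the Casimir (\ref{k31eq}) is the coefficient of the four-term linear relation:

```python
from fractions import Fraction

# Orbit of x_n x_{n+4} = x_{n+1} x_{n+3} + x_{n+2} through (1,1,1,1).
x = [Fraction(1)] * 4
for n in range(30):
    x.append((x[n + 1] * x[n + 3] + x[n + 2]) / x[n])

def Js(x1, x2, x3, x4):
    # The three period 3 functions, as printed above.
    return ((x3 * (x2 + x1 * x3) + x4 * (x1 ** 2 + x2 ** 2)) / (x1 * x2 * x4),
            (x1 * x4 + x2 ** 2 + x3 ** 2) / (x2 * x3),
            (x3 * (x2 + x1 * x3) + x4 * (x1 * x4 + x2 ** 2)) / (x1 * x3 * x4))

J1, J2, J3 = Js(*x[0:4])
H1, H2 = J1 ** 2 + J2 ** 2 + J3 ** 2, J1 * J2 * J3
K = 3 - H1 + H2                      # the Casimir K_{3,1} of (k31eq)
for n in range(20):
    a, b, c = Js(*x[n:n + 4])
    assert a ** 2 + b ** 2 + c ** 2 == H1 and a * b * c == H2  # first integrals
    assert a == (J1, J2, J3)[n % 3]                            # period 3 cycle
    assert x[n + 9] - K * x[n + 6] + K * x[n + 3] - x[n] == 0  # linear relation
print(int(K))  # 10 on this orbit
```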
}\eex \bex[The quiver $P_{6}^{(1)}-P_{6}^{(3)}+P_{4}^{(2)}$] \label{ex-p6163} {\em Mutation of the second quiver in Figure 1 gives the map $$ \varphi : \qquad (x_1,x_2,x_3,x_4,x_5,x_6) \mapsto (x_2,x_3,x_4,x_5,x_6,x_7), \qquad x_7= \frac{x_2x_6+x_4}{x_1}. $$ The invariant Poisson bracket for this map is given by (\ref{logcan}), with the coefficients specified by the Toeplitz matrix $C=B^{-1}$ with top row $ (c_{1,j})=(0,0,1,0,1,1). $ The functions $J_i$, which cycle with period 5 under the action of $\varphi$, take the form \bea && J_1=\frac{x_1x_5+x_2x_6+x_3x_4}{x_2x_5},\quad J_2=\frac{x_4(x_3+x_1x_5)+(x_1+x_3)x_2x_6}{x_1x_3x_6},\quad J_3=\frac{x_3(x_2+x_4)+x_1x_5}{x_2x_4}, \nn\\ && J_4=\frac{x_4(x_3+x_5)+x_2x_6}{x_3x_5},\quad J_5=\frac{x_4(x_3+x_1x_5)+(x_1x_5+x_2x_3)x_6}{x_1x_4x_6}. \nn \eea The Poisson brackets between these five functions follow by the cyclic property from $$ \{J_1,J_2\}=-J_1J_2-J_4+1,\quad \{J_1,J_3\}=2J_1J_3. $$ This Poisson bracket is the sum of three homogeneous terms, $$ {\bf P}={\bf P}^{(2)}+{\bf P}^{(1)}+{\bf P}^{(0)}, \quad\mbox{with}\;\;\; {\bf P}^{(2)}_{ik}=c^{(2)}_{ik}J_iJ_k,\;\;\; {\bf P}^{(1)}_{ik}=c^{(1)}_{ik}J_{k+2},\;\;\; {\bf P}^{(0)}_{ik}=c^{(0)}_{ik}, $$ where $c^{(\ell)}_{ik}$ are the Toeplitz matrices with top rows given by $$ (c^{(2)}_{1,k})=(0,-1,2,-2,1),\quad (c^{(1)}_{1,k}) =(0,-1,0,0,1), \quad (c^{(0)}_{1,k})=(0,1,0,0,-1). $$ Both ${\bf P}^{(0)}$ and ${\bf P}^{(2)}$ satisfy the Jacobi identity, but ${\bf P}^{(1)}$ does not, so we cannot think of this sum as some sort of Poisson compatibility. The Casimir for the 5-dimensional Poisson algebra generated by the $J_i$ is the trace of the monodromy matrix, as in (\ref{k32}). It can be written as the sum \beq\label{k51} {\cal K}=\CH_1-\CH_2+\CH_3, \eeq where each of the components is a first integral: $$ \CH_1=\sum_{i=1}^5(J_iJ_{i+1}-J_i),\quad \CH_2=\sum_{i=1}^5(J_iJ_{i+1}^2J_{i+2}-J_iJ_{i+1}J_{i+2}),\quad \CH_3=\prod_{i=1}^5J_i.
$$ We find that $\{ \CH_i ,\CH_j \} =0$ for all $i,j$, so the 6-dimensional Poisson map $\varphi$ has the correct number of first integrals in involution. For later comparison, we denote ${\cal K}$ in (\ref{k51}) by ${\cal K}_{5,1}$. }\eex \bex[The quiver $P_{8}^{(1)}-P_{8}^{(4)}+P_{6}^{(3)}$] \label{ex-p8184} {\em The map obtained from mutation of this quiver is $$ \varphi : \qquad (x_1,x_2,x_3,x_4,x_5,x_6,x_7,x_8) \mapsto (x_2,x_3,x_4,x_5,x_6,x_7,x_8,x_9), \qquad x_9= \frac{x_2x_8+x_5}{x_1}. $$ The corresponding non-singular matrix $B$ defines an invariant Poisson bracket (\ref{logcan}) with matrix $C=B^{-1}$. The top row of the Toeplitz matrix $C$ is $ (c_{1,j})=(0,1,0,0,1,1,0,1). $ The functions $J_i$ with period 7 can be determined from $J_1$, which takes the form $$ J_1=\frac{x_1x_6+x_2x_7+x_3x_5}{x_2x_6}. $$ The remaining six functions are obtained by applying $\varphi^*J_i=J_{i+1}$, with $(\varphi^*)^7J_i = J_i$. The Poisson bracket relations between these functions follow by the cyclic property from $$ \{J_1,J_2\}=2J_1J_2-J_5,\quad \{J_1,J_3\}=-J_1J_3+1,\quad \{J_1,J_4\}=-J_1J_4. $$ Again, this Poisson bracket is the sum of three homogeneous terms, ${\bf P}={\bf P}^{(2)}+{\bf P}^{(1)}+{\bf P}^{(0)}$, where ${\bf P}^{(0)}$ and ${\bf P}^{(2)}$ satisfy the Jacobi identity, but ${\bf P}^{(1)}$ does not. The Casimir of the Poisson subalgebra generated by the $J_i$ is again $\cal K$ (the trace of the monodromy matrix), but in this case it is not clear how to split the Casimir into four pieces that Poisson commute, as required for Liouville integrability. Of course, there are still seven functionally independent invariant functions (built from cyclically symmetric combinations of the $J_i$) and we expect that four commuting functions exist.
}\eex \begin{figure}[htb] \centering \psfrag{1}{$1$}\psfrag{2}{$2$}\psfrag{3}{$3$}\psfrag{4}{$4$}\psfrag{5}{$5$}\psfrag{6}{$6$} \subfigure[Example \ref{ex-p4142}.]{ \includegraphics[width=3.5cm]{p4142.eps}\label{subfig:p4142fig} } \qquad\qquad \subfigure[Example \ref{ex-p6163}.]{ \includegraphics[width=4cm]{p6163.eps}\label{subfig:p6163fig} } \qquad\qquad \subfigure[Example \ref{ex-p6263}.]{ \includegraphics[width=4cm]{p6263.eps}\label{subfig:p6263fig} } \caption{The first three quivers in this class.}\label{quivers} \end{figure} \bex[The quiver $P_{8}^{(3)}-P_{8}^{(4)}+P_{2}^{(1)}$] \label{ex-p8384} {\em For this quiver, with $p=5$ and $q=3$, the recurrence is $$ x_nx_{n+8}=x_{n+3}x_{n+5}+x_{n+4}. $$ The corresponding birational map $\varphi$ is Poisson with respect to a log-canonical bracket (\ref{logcan}) defined by the inverse of the associated matrix $B$. Upon setting $C=2B^{-1}$, the bracket is given by the first row of this Toeplitz matrix: $ (c_{1,j})= (0,1,1,0,-1,1,2,1). $ Since $\mbox{gcd}(p,q)=1$, we have the same phenomenon as in the discussion of (\ref{kpq}). From (\ref{mdy31}), we have $$ {\bf M}_n:= {\bf L}_{n}{\bf L}_{n+3}{\bf L}_{n+6}{\bf L}_{n+9}{\bf L}_{n+12}= {\bf L}_{n}{\bf L}_{n+3}{\bf L}_{n+1}{\bf L}_{n+4}{\bf L}_{n+2}, $$ since ${\bf L}_n$ has period 5, so the permutation $\sigma :\, (1,2,3,4,5)\mapsto (1,4,2,5,3)$ gives the trace as $$ {\cal K}={\cal K}_{5,3}(J_1,J_2,J_3,J_4,J_5)= {\cal K}_{5,1}(J_1,J_4,J_2,J_5,J_3), $$ where ${\cal K}_{5,1}$ is given by the formula (\ref{k51}) in Example \ref{ex-p6163}. Similarly, from (\ref{mdy32}) we find $$ \hat{{\bf M}}_n:= \hat{\bf L}_{n+10}\hat{\bf L}_{n+5}\hat{\bf L}_n=\hat{\bf L}_{n+1}\hat{\bf L}_{n+2}\hat{\bf L}_n, $$ where $\hat{\bf L}_n$ has period 3, so with the same notation as in (\ref{kpqhat}) we also have $$ {\cal K} = {\cal K}_{3,5}(K_1,K_2,K_3)= {\cal K}_{3,1}(K_1,K_3,K_2), $$ where ${\cal K}_{3,1}$ is given by (\ref{k31eq}) in Example \ref{ex-p4142}.
Since ${\cal K}_{3,1}$ is totally symmetric, the permutation $\hat\sigma : \, (1,2,3)\mapsto (1,3,2)$ makes no difference to the result. The explicit forms of the period 5 functions $J_i$, appearing as entries in the matrices ${\bf L}_n$, all derive from $$ J_1=\frac{x_1x_3x_8+x_3x_5x_7+x_4x_6x_8+x_4x_7}{x_3x_4x_8} $$ by acting with the map $\varphi$. From the above bracket for the $x_j$, determined by the matrix $C$, the Poisson brackets between the $J_i$ can be calculated as $$ \{J_1,J_2\}=4J_1J_2,\quad \{J_1,J_3\}=2J_1J_3+2J_2-2, $$ with all other brackets being deduced through the cyclic property. The resulting 5-dimensional Poisson subalgebra is isomorphic to that for the $J_i$ in Example \ref{ex-p6163}, as can be seen by applying the permutation $\sigma$ and rescaling by an overall factor of $2$ (which depends on the choice of scale for $c_{ij}$). Therefore, subject to this permutation, it follows that the same three functions $\CH_i$ are in involution: $$ \CH_1=\sum_{i=1}^5(J_iJ_{i+3}-J_i),\quad \CH_2=\sum_{i=1}^5(J_iJ_{i+1}J_{i+3}^2-J_iJ_{i+1}J_{i+3}),\quad \CH_3=\prod_{i=1}^5J_i . $$ The required fourth function that commutes with these $\CH_i$ is derived from the algebra of the $K_i$, which appear as entries in the matrices $\hat{\bf L}_n$. In terms of $x_j$, we have $$ K_1 = \frac{x_1}{x_6}+\frac{x_6}{x_1}+ \frac{x_2}{x_3x_6}+ \frac{x_5}{x_1x_4}+ \frac{x_8}{x_4x_7}+ \frac{x_2x_8}{x_3x_7}, $$ with $K_2=\varphi^* K_1$ and $K_3=(\varphi^*)^2 K_1$ providing the other two functions which cycle with period 3 under the action of the map. The $K_i$ generate a 3-dimensional Poisson subalgebra, whose brackets are all determined by acting with $\varphi$ on the single relation $$ \{K_1,K_2\}=-K_1K_2+2K_3. $$ This algebra is isomorphic to that of the $J_i$ in Example \ref{ex-p4142} (as is seen immediately by applying the permutation $\hat\sigma$), so it contains the two commuting quantities $$ \hat\CH_1=K_1^2+K_2^2+K_3^2,\qquad \hat\CH_2=K_1K_2K_3.
$$ The subalgebras of $J_i$ and $K_i$ share the joint Casimir $ {\cal K} = \CH_1-\CH_2+\CH_3 = 3-\hat\CH_1+\hat\CH_2, $ which gives a single relation between these two sets of functions. Since $\{K_i,J_j\}=0$, for all $i,j$, we can take any four of the first integrals $\CH_1,\CH_2,\CH_3,\hat\CH_1,\hat\CH_2$ as a commuting set, which demonstrates that this 8-dimensional map $\varphi$ is integrable in the Liouville sense. }\eex \subsection{The case $\mbox{gcd}(p,q)=2$} The discussion of the case where $p$ and $q$ are both even involves some new features. Upon setting $p=2\hat p$, $q=2\hat q$ with $\mbox{gcd}(\hat{p},\hat{q})=1$, the fact that $\mbox{gcd}(m,q)=1$ implies that $m=\hat{p}+\hat{q}$ is odd, hence either $\hat q$ is odd and $\hat p$ is even, or vice versa. For the matrix (\ref{btau3}) in each case we find that det$\,B=0$ whenever $\hat p$ or $\hat q$ is divisible by 3, and det$\,B=9$ otherwise. These two possibilities lead to quite different behaviour, so eventually we shall have to distinguish between them. For the time being we concentrate on the associated linear relations, which do not depend on whether $B$ is degenerate or not. Since $q$ is even, observe that the ordered product of matrices ${\bf L}_n$ in (\ref{mdy31}) now cycles only through indices with the same parity, so the product cycles twice through $\hat{p}=p/2$ terms. Thus we see that ${\bf M}_n$ is a perfect square, and it is convenient to take the square root $$ {\bf M}_n^{1/2}= {\bf L}_n {\bf L}_{n+q}\ldots {\bf L}_{n+(\hat{p}-1)q}, $$ and similarly for $\hat{\bf M}_n^{1/2}$. We know that in this situation the quantity ${\cal K}_n$ in (\ref{k32}) has period 2, but for our purposes it is more useful to consider the quantity \beq\label{kstar} \mathcal{K}^*_n:= \mathrm{tr}\, {\bf M}_n^{1/2} = \mathrm{tr}\, \hat{\bf M}_n^{1/2}, \eeq which cycles with period 2 for the same reasons. 
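The period 2 behaviour is easily exhibited on a concrete orbit. In the Python sketch below (a numerical check, not part of the argument; initial data $1,\dots,1$ is an arbitrary choice) we take $(p,q)=(4,2)$, i.e. the recurrence of Example \ref{ex-p6263} below, and solve two instances of the four-term relation (\ref{q2krec}) for its two unknown coefficients, which indeed alternate with the parity of $n$:

```python
from fractions import Fraction

# The case (p,q) = (4,2) of (pqrec): x_n x_{n+6} = x_{n+2} x_{n+4} + x_{n+3}.
x = [Fraction(1)] * 6
for n in range(60):
    x.append((x[n + 2] * x[n + 4] + x[n + 3]) / x[n])

# Solve two instances of x_{n+12} - K_{n+1} x_{n+8} + K_n x_{n+4} - x_n = 0
# (a 2x2 linear system, by Cramer's rule) for the two period-2 coefficients.
a, b, r = x[8], -x[4], x[12] - x[0]
c, d, s = -x[5], x[9], x[13] - x[1]
det = a * d - b * c
K1, K2 = (r * d - b * s) / det, (a * s - r * c) / det

# The same pair works for every n, alternating with the parity of n.
assert all(x[n + 12] - (K1, K2)[n % 2] * x[n + 8]
           + (K2, K1)[n % 2] * x[n + 4] - x[n] == 0 for n in range(40))
print(int(K1), int(K2))  # 7 8 on this orbit
```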
In this setting, with $\mbox{gcd}(p,q)=2$, the algebraic structure in terms of the functions $J_n$ and $K_n$ is based on that for the case $q=2$ (up to suitable permutations), similarly to the way that for $\mbox{gcd}(p,q)=1$, and for the primitives, this structure is based on the case $q=1$. Now when $q=2$ we have the quantities $K_n$ with period 2, giving $$\mathrm{tr}\, \hat{\bf M}_n ^{1/2}=\mathrm{tr}\, \hat{\bf L}_n=K_{n+1}, \qquad \mathrm{tr}\, \hat{\bf M}_n ^{-1/2}=\mathrm{tr}\, \hat{\bf L}_n^{-1}=K_{n}, $$ and hence the identity \beq\label{per2tr} \mathrm{tr}\, {\bf M}_n^{-1/2}=\mathrm{tr}\, {\bf M}_{n+1}^{1/2}= \mathcal{K}^*_{n+1} \eeq holds. For even $q>2$, with $p$ fixed, the formula for $\mathrm{tr}\, {\bf M}_n^{1/2}$ as a function of $J_1,\ldots ,J_p$ is identical to that for the case $q=2$, up to a permutation, which means that the formula (\ref{per2tr}) holds in general. \begin{thm}\label{Klin3k} When $\mbox{gcd}(p,q)=2$, the iterates of the nonlinear recurrence (\ref{pqrec}) satisfy the linear relation \beq\label{krec3k} x_{n+3pq/2}- \mathcal{K}^*_n\,x_{n+pq}+\mathcal{K}_{n+1}^*\,x_{n+pq/2}-x_n=0, \eeq where $\mathcal{K}_n^*$ is the period 2 quantity defined by (\ref{kstar}). \end{thm} \begin{prf} This follows by essentially the same argument as in the proof of Theorem \ref{Klin3}, applying Cayley-Hamilton to ${\bf M}_n^{1/2}$ and ${\bf M}_n^{-1/2}$, and making use of (\ref{per2tr}). \end{prf} \begin{rem}\label{twocas} {\em With respect to the functions $J_n$ and $K_n$, the main new feature in this case, compared with the case $\mbox{gcd}(p,q)=1$, is that here we have two quantities $\mathcal{K}_1^*$ and $\mathcal{K}_2^*$, so the identity (\ref{kstar}) for $n=1,2$ provides two independent relations between these two sets of functions. Given that these are the only relations, the cyclically symmetric functions of the $J_n$, and those of the $K_n$, together provide $N-2$ independent first integrals. 
} \end{rem} We are now ready to present a further refinement of Proposition \ref{K3pq}. \begin{thm} \label{Klin3cas} When $\mbox{gcd}(p,q)=2$, the iterates of the nonlinear recurrence (\ref{pqrec}) satisfy the constant coefficient, linear relation \beq\label{krec3c} x_{n+3pq}-{\cal B}\, x_{n+5pq/2} +\mathcal{C}\,x_{n+2pq} -{\cal D}\, x_{n+3pq/2} + {\cal C}\, x_{n+pq}-\mathcal{B}\,x_{n+pq/2}+x_n=0, \eeq where the non-trivial coefficients are first integrals, given by $$ {\cal B} =\mathcal{K}_1^*+\mathcal{K}_2^* , \qquad {\cal C} = \mathcal{K}_1^* \mathcal{K}_2^*+\mathcal{K}_1^*+\mathcal{K}_2^* , \qquad {\cal D} =(\mathcal{K}_1^*)^2+(\mathcal{K}_2^*)^2+2. $$ \end{thm} \begin{prf} For the sake of argument, suppose that $\hat q$ is odd and $\hat p$ is even. To get the seven-term relation (\ref{krec3c}), we consider the $6\times 6$ matrix that is specified in terms of its $(j,k)$ entry by $\Phi_n = (x_{n+p\hat{q}(j-1)+\hat{q}(k-1)}) $. The linear recurrence (\ref{Prec}) implies that $$ \Phi_{n+\hat q} = \Phi_n \, {\bf L}_n^*, \qquad \mathrm{where} \qquad {\bf L}_n^*=\left(\bear{cccccc} 0 & 0 & 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & -J_n \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & J_{n+m} \\ 0 & 0 & 0 & 0 & 1 & 0 \eear \right). $$ The monodromy matrix corresponding to $p$ iterations of the latter equation is $$ {\bf M}_n^* = {\bf L}_n^* \, {\bf L}_{n+\hat q}^*\ldots {\bf L}_{n+(p-1)\hat q}^*, $$ so that \beq\label{mstar} \Phi_{n+p\hat q} = \Phi_n \, {\bf M}_n^*. \eeq We need to show that the characteristic polynomial of the $6\times 6$ matrix $ {\bf M}_n^*$ is given by \beq\label{chi} \chi (\zeta ) = \zeta^6 -{\cal B}\, \zeta^5 +{\cal C}\, \zeta^4 - {\cal D}\, \zeta^3 + {\cal C}\, \zeta^2 - {\cal B}\, \zeta + 1, \eeq as once we have determined this the recurrence (\ref{krec3c}) follows immediately, by applying the Cayley-Hamilton theorem to (\ref{mstar}). 
It is convenient to conjugate all of the $6\times 6$ matrices by the permutation matrix corresponding to $(1,2,3,4,5,6)\rightarrow (1,3,5,2,4,6)$, which reduces everything to calculations with $3\times 3$ blocks. This gives $$ {\bf L}_n^* \sim \left(\bear{cc} \mathbf{0} & {\bf L}_n \\ \mathbf{1} & \mathbf{0} \eear \right), \qquad {\bf L}_n^* \, {\bf L}_{n+\hat q}^*\sim \left(\bear{cc} {\bf L}_n & \mathbf{0} \\ \mathbf{0} & {\bf L}_{n+\hat{q}} \eear \right), $$ so that for the monodromy matrix we have $$ {\bf M}_n^* \sim \left(\bear{cc} {\bf L}_n {\bf L}_{n+q}\ldots {\bf L}_{n+(\hat{p}-1)q} & \mathbf{0} \\ \mathbf{0} & {\bf L}_{n+\hat{q}} {\bf L}_{n+\hat{q}+q}\ldots {\bf L}_{n+\hat{q}+(\hat{p}-1)q} \eear \right) \sim \left(\bear{cc} {\bf M}_n^{1/2} & \mathbf{0} \\ \mathbf{0} & {\bf M}_{n+1}^{1/2} \eear \right). $$ Thus the characteristic polynomial of ${\bf M}_n^*$ factors as the product of the characteristic polynomials of ${\bf M}_n^{1/2}$ and ${\bf M}_{n+1}^{1/2}$, $$ \chi (\zeta ) = (\zeta^3 - {\cal K}_1^*\, \zeta^2 + {\cal K}_2^*\, \zeta -1)\, (\zeta^3 - {\cal K}_2^*\, \zeta^2 + {\cal K}_1^*\, \zeta -1), $$ which multiplies out to give (\ref{chi}) with the correct coefficients ${\cal B}, {\cal C},{\cal D}$. An analogous argument holds when $\hat q$ is even and $\hat p$ is odd. \end{prf} To discuss the Liouville integrability of the systems that appear when $\mbox{gcd}(p,q)=2$, it is necessary to give a separate treatment according to whether the matrix $B$ is invertible or not. \subsubsection{Nondegenerate $B$ matrix} When the matrix $B$ is nondegenerate, it appears that the Liouville integrability of the symplectic map $\varphi$ should follow by very similar arguments to those for the case $\mbox{gcd}(p,q)=1$ (or for the primitives with even $N$). The map preserves a Poisson bracket of the form (\ref{logcan}), which is specified uniquely (up to scale) by the Toeplitz matrix $C=B^{-1}$.
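The coefficients in Theorem \ref{Klin3cas} can be verified numerically. The Python sketch below (a sanity check, not part of the argument; initial data $1,\dots,1$ is an arbitrary choice) does this for $(p,q)=(4,2)$: the period 2 coefficients of (\ref{krec3k}) are solved for from two instances, and the combinations ${\cal B}$, ${\cal C}$, ${\cal D}$ then satisfy the seven-term relation (\ref{krec3c}) with gaps $pq/2=4$:

```python
from fractions import Fraction

# The case (p,q) = (4,2): x_n x_{n+6} = x_{n+2} x_{n+4} + x_{n+3}.
x = [Fraction(1)] * 6
for n in range(80):
    x.append((x[n + 2] * x[n + 4] + x[n + 3]) / x[n])

# Period-2 coefficients of (krec3k), solved from two instances of the relation.
a, b, r = x[8], -x[4], x[12] - x[0]
c, d, s = -x[5], x[9], x[13] - x[1]
det = a * d - b * c
K1, K2 = (r * d - b * s) / det, (a * s - r * c) / det

# The combinations of Theorem Klin3cas, and the seven-term relation (krec3c).
B = K1 + K2
C = K1 * K2 + K1 + K2
D = K1 ** 2 + K2 ** 2 + 2
assert all(x[n + 24] - B * x[n + 20] + C * x[n + 16] - D * x[n + 12]
           + C * x[n + 8] - B * x[n + 4] + x[n] == 0 for n in range(55))
print(int(B), int(C), int(D))  # 15 71 115 on this orbit
```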
There are the two sets of quantities, $J_n$ and $K_n$, with periods $p$ and $q$ respectively, and by Remark \ref{twocas} these provide $N-2$ independent first integrals, but it is necessary to find $m=N/2$ integrals in involution. Given that the $J_n$ generate a Poisson subalgebra of dimension $p$, and that both of the quantities ${\cal K}_1^*$ and ${\cal K}_2^*$ defined by (\ref{kstar}) are Casimirs, with the symplectic leaves being of dimension $p-2=2(\hat{p}-1)$, a further $\hat{p}-1$ independent commuting functions of the $J_n$ are required in order to define an integrable system on this subalgebra. Similarly, given that $\{J_i, K_j\}=0$ for all $i,j$, and the $K_n$ produce a $q$-dimensional subalgebra, with the same functions ${\cal K}_1^*$ and ${\cal K}_2^*$ as Casimirs, it is necessary to have an additional $\hat{q}-1$ independent functions that define an integrable system in terms of $K_n$ alone. The quantities ${\cal B}$ and ${\cal C}$ which appear as the coefficients in the linear relation (\ref{krec3c}) are first integrals, as well as being joint Casimirs for the two subalgebras, so combining these with the two sets of first integrals gives a total of $2+( \hat{p}-1) +(\hat{q}-1) =m$ independent functions in involution, as required. Since we do not have a general proof of all the foregoing assertions, here we will merely illustrate the discussion with the simplest example of this kind, which arises for $p=4$, $q=2$. \bex[The quiver $P_{6}^{(2)}-P_{6}^{(3)}+P_{2}^{(1)}$] \label{ex-p6263} {\em Mutation of the third quiver in Figure 1 gives the map \beq\label{p4q2} \varphi : \qquad (x_1,x_2,x_3,x_4,x_5,x_6) \mapsto (x_2,x_3,x_4,x_5,x_6,x_7), \qquad x_7= \frac{x_3x_5+x_4}{x_1}. \eeq The inverse of the non-singular matrix $B$ defines a Poisson bracket (\ref{logcan}), invariant with respect to $\varphi$. Taking $C=3B^{-1}$, the top row of this Toeplitz matrix is $ (c_{1,j})=(0,1,1,0,2,2) . 
$ The period 4 functions $J_i$, generated from $J_1$ by the action of the map, are \bea && J_1=\frac{x_1x_2x_6+x_2x_4x_5+x_3x_4x_6+x_3x_5}{x_2x_3x_6},\quad J_2=\frac{x_1x_6+x_2x_3+x_4x_5}{x_3x_4}, \nn\\ && J_3=\frac{x_1x_3x_4+x_2x_3x_5+x_2x_4+x_1x_5x_6}{x_1x_4x_5},\quad J_4=\frac{x_1x_3x_5+x_2x_4x_6+x_1x_2x_4x_5+x_1x_3x_4x_6+ x_2x_3x_5x_6}{x_1x_2x_5x_6}. \nn \eea The Poisson brackets between these functions all follow by the cyclic property from \beq\label{jp4q2} \{J_1,J_2\}=3J_1J_2-3,\quad \{J_1,J_3\}=3(J_2-J_4). \eeq From the first equality in (\ref{kstar}) we have the two functions $$ {\cal K}_1^*=J_2J_4-J_1-J_3, \qquad {\cal K}_2^*=J_1J_3-J_2-J_4, $$ which are both Casimir functions of the 4-dimensional algebra generated by the $J_i$, but cycle with period 2 under the map. Symmetric functions of ${\cal K}_1^*$ and ${\cal K}_2^*$ are first integrals as well as being Casimirs, so we may take the quantities $$ {\cal B}={\cal K}_1^*+{\cal K}_2^* , \qquad {\cal C}={\cal K}_1^*{\cal K}_2^* + {\cal B} , $$ as in Theorem \ref{Klin3cas}. The homogeneous components of ${\cal B}$ are automatically first integrals, and they Poisson commute: $$ {\cal B} =-\CH_1 +\CH_2 $$ where $$ \CH_1=J_1+J_2+J_3+J_4,\quad \CH_2=J_1J_3+J_2J_4 , \quad\mbox{with}\quad \{\CH_1,\CH_2\}=0. $$ Hence, we may take $\CH_1, {\cal B},{\cal C}$ as three independent first integrals in involution, which proves Liouville integrability of the 6-dimensional map (\ref{p4q2}). Clearly there are other choices, for instance $\CH_1, \CH_2,{\cal C}$, which would do just as well. } \eex \br {\em Note that in the preceding example, since $q=2$ we have $K_1={\cal K}_2^*$ and $K_2={\cal K}_1^*$, which Poisson commute with each other, so the subalgebra of the $K_i$ is trivial.
} \er \subsubsection{Degenerate $B$ matrix} When $B$ is degenerate, which only happens when either $p$ or $q$ is a multiple of 6, the maps that arise have features that make them much more like the odd primitives $P_{2m+1}^{(q)}$ than the other cases with even $N$. For these particular cases, the matrix $B$ has a two-dimensional kernel, which is spanned by two integer vectors of the form $$ {\bf u}_1 =(1,1,0,-1,-1,0, \ldots )^T, \qquad {\bf u}_2 =(0,1,1,0,-1,-1, \ldots )^T, $$ where in each vector the components continue to repeat the same blocks of six numbers, until the final block which is truncated (of length 2 or 4, since $N=2m$ is even and not a multiple of 6). Hence there is a two-parameter scaling group, which acts by \beq\label{lascala2} (x_1,x_2,x_3,x_4,x_5,x_6, \ldots ) \rightarrow (\la \,x_1,\la\, \mu \, x_2,\mu\, x_3, \la^{-1}\, x_4,\la^{-1}\mu^{-1}\,x_5,\mu^{-1}\,x_6, \ldots ), \qquad (\la ,\mu ) \in (\C^*)^2. \eeq This action extends to all the iterates $x_n$ of (\ref{pqrec}); the pattern repeats itself on each successive block of six adjacent iterates. Now $\mathrm{im}\, B$ is spanned by \beq\label{degvec} {\bf v}_j = {\bf e}_j -{\bf e}_{j+1}+{\bf e}_{j+2}, \qquad j=1,\ldots , 2m-2, \eeq where ${\bf e}_j$ is the $j$th standard basis vector. Hence, by Lemma \ref{symp}, the coordinates \beq\label{degy} y_j = \frac{x_j\,x_{j+2}} {x_{j+1}}, \qquad j=1,\ldots , 2m-2, \eeq are invariant under the scaling (\ref{lascala2}), and the degenerate form (\ref{omega}) pushes forward to a symplectic form (\ref{yform}) in dimension $2m-2$. The coefficients of the latter are obtained from a skew-symmetric matrix $\hat{B}=(\hat{b}_{jk})$, whose inverse provides a nondegenerate Poisson bracket for the $y_j$, i.e. 
\beq\label{toep} \{ \, y_j , y_k\,\} = \epsilon_{jk}\, y_j y_k , \qquad 1\leq j,k\leq 2m-2, \eeq where $\hat{B}^{-1} = (\epsilon_{jk})$ must be a Toeplitz matrix, because the coordinates (\ref{degy}) transform as $\varphi^*y_j = y_{j+1}$ under the map corresponding to (\ref{pqrec}). Upon applying the rest of Theorem \ref{torusred}, we see that this induces a symplectic map $\hat\varphi$ on the variables $y_j$. To prove the Liouville integrability of all of the maps $\hat\varphi$ that arise in this way, we require a general expression for the coefficients $\epsilon_{jk}$ in (\ref{toep}), which is presently lacking. However, it is possible to give a plausible argument for the counting of first integrals, which agrees with all examples we have checked so far. Suppose, for the sake of argument, that $6|q$, hence 6$\not | p$. The scaling action of $(\C^*)^2$ on the $x_j$, as in (\ref{lascala2}), extends to an action on the coefficients $J_n$ that appear in the linear equation (\ref{Prec}). However, the indices of the terms $x_{n+jq}$ for $j=0,1,2,3$ differ by multiples of 6, which means that they all scale the same way, and hence the period $p$ quantities $J_n$ are invariant under this scaling, and can be expressed in terms of the $y_j$ given by (\ref{degy}). Given that the $J_n$ generate a $p$-dimensional Poisson subalgebra with respect to the bracket (\ref{toep}), with two Casimirs given by the quantities ${\cal K}_1^*$ and ${\cal K}_2^*$ as in (\ref{kstar}), a further $\hat{p}-1$ independent commuting functions of the $J_n$ are needed to have an integrable system defined on this subalgebra. Similarly, the scaling action (\ref{lascala2}) extends to an action on the coefficients $K_n$ in (\ref{Qrec}), but now the terms $x_{n+jp}$ for $j=0,1,2,3$ do {\it not} all differ by multiples of 6, so they scale differently. 
This means that there is a non-trivial scaling action of the two-parameter group $(\C^*)^2$ on the quantities $K_n$, $n=1,\ldots , q$, for which there should be $q-2$ invariant monomials. The invariant monomial functions of $K_n$, which we denote by $w_i$ for $i=1,\ldots, q-2$, can also be written in terms of the original variables $x_j$. The fact that they are invariant under (\ref{lascala2}) means that these $w_i$ can be written as functions of the symplectic coordinates $y_j$ as well. Given that the $w_i$ generate a $(q-2)$-dimensional Poisson subalgebra in the $(2m-2)$-dimensional space with the bracket (\ref{toep}), and that the quantities ${\cal K}_1^*$ and ${\cal K}_2^*$ are Casimirs for this subalgebra too, an integrable system is defined on the $(q-4)$-dimensional symplectic leaves by an additional set of $\hat{q} -2$ commuting functions of the $w_i$. Supposing further that $\{J_i, w_j\}=0$ for all $i,j$, we take the quantities ${\cal B}$ and ${\cal C}$ from (\ref{krec3c}), which are both first integrals and joint Casimirs for the two subalgebras, and combining these with the two sets of first integrals gives a total of $2+( \hat{p}-1) +(\hat{q}-2) =m-1$ independent functions in involution, as required for the Liouville integrability of the symplectic map $\hat\varphi$ in dimension $2m-2$. Since we do not have a general proof of all the preceding assertions, here we will only present the simplest example of this kind, which arises for $p=6$, $q=4$. \bex[The quiver $P_{10}^{(4)}-P_{10}^{(5)}+P_{2}^{(1)}$] \label{ex-p6q4} {\em The recurrence arising from mutation of this quiver is \beq\label{p6q4} x_{n+10}\, x_n = x_{n+6}\,x_{n+4} +x_{n+5}. \eeq The matrix $B$ has a two-dimensional kernel, spanned by the integer vectors $$ {\bf u}_1 = (1,1,0,-1,-1,0,1,1,0,-1)^T, \qquad {\bf u}_2 = (0,1,1,0,-1,-1,0,1,1,0)^T , $$ which generate the action of a two-parameter scaling group on the iterates of (\ref{p6q4}), as given in (\ref{lascala2}). 
Taking the scaling-invariant variables $y_j$, given by (\ref{degy}) for $j=1,\ldots ,8$, we apply Lemma \ref{symp} to find the symplectic form $\hat\om$ expressed in these coordinates, which leads to the nondegenerate Poisson bracket specified by \beq\label{yp6q4bra} \{ y_1,y_5\} =y_1y_5, \qquad \{ y_1,y_6\} =-y_1y_6, \qquad \{ y_1,y_7\} =y_1y_7, \eeq where all other brackets are either zero or follow from the Toeplitz property/skew-symmetry. From Theorem \ref{torusred}, this Poisson bracket is preserved by the induced map \beq\label{yp6q4} \hat\varphi : \qquad (y_1,\ldots ,y_{8}) \mapsto (y_2,\ldots,y_{9}), \qquad y_{9}= \frac{y_4y_5y_6(y_5+1)}{y_1y_2y_8}. \eeq Note that (in contrast to the foregoing discussion) in this example $q$ is {\it not} a multiple of 6, but rather $p$ is. The period 6 functions $J_n$ are given by the formula $$ J_n =\frac{x_n}{x_{n+4}} +\frac{x_{n+7}}{x_{n+3}} +\frac{x_{n-1}x_{n+8}}{x_{n+3}x_{n+4}}, \quad n=1,\ldots ,6, $$ where each of the above quantities can be written as a function of $x_1,\ldots ,x_{10}$ by iterating the recurrence (\ref{p6q4}) either forwards or backwards. Since the gaps between the indices of the terms $x_j$ in the linear relation (\ref{Prec}) are not all multiples of 6, these $J_n$ are not invariant under the scaling action (\ref{lascala2}), but instead they transform as follows: $$ J_1\to \la^2\mu J_1, \quad J_2\to \la\mu^2 J_2,\quad J_3\to \la^{-1}\mu J_3, \quad J_4\to \la^{-2}\mu^{-1} J_4, \quad J_5\to \la^{-1}\mu^{-2} J_5, \quad J_6\to \la\mu^{-1} J_6 . $$ There are four independent scaling-invariant monomial functions of the $J_n$, but it is convenient to consider the five functions given by $ w_1=J_1J_4, \;\; w_2=J_2J_5, \;\; w_3=J_3J_6, \;\; w_4=J_1J_3J_5, \;\; w_5 = J_2J_4J_6, $ which satisfy the single relation \beq\label{wreln} w_1w_2w_3=w_4w_5. 
\eeq Since these $w_i$ can also be expressed in terms of $x_1,\ldots ,x_{10}$, the scaling-invariance implies that they can be written as functions of the symplectic coordinates $y_j$ as well. For instance, we have the formula $$ w_1 =\frac{(y_1y_2y_8+y_4y_5y_6+y_5y_6y_8+y_5y_6) (y_1y_2+y_1y_6+y_5y_6+y_1+y_6)}{y_1y_2y_5y_6y_8}, $$ and analogous formulae can be obtained for $w_2=\hat{\varphi}^*w_1$, $w_3=(\hat{\varphi}^*)^2w_1$ using the map (\ref{yp6q4}); there are different expressions for $w_4$ and $w_5=\hat{\varphi}^*w_4$, but these are more unwieldy so they are omitted. Calculating the Poisson brackets between these functions, we have a subalgebra with four generators $w_1,w_2,w_3,w_4$, but this is more conveniently expressed with the extra function $w_5$ included. The brackets can be determined from the relations \beq\label{5dim} \{w_1,w_2\} =w_1w_2 -w_4-w_5, \qquad \{w_1,w_4\} = w_1(w_2-w_3), \eeq with all other brackets following by applying the map and noting that $w_1$ cycles with period 3, and $w_4$ has period 2, so that $(\hat{\varphi}^*)^3w_1=w_1$ and $(\hat{\varphi}^*)^2w_4=w_4$. The 5-dimensional algebra defined by (\ref{5dim}) has three Casimirs, given by $$ {\cal K}_1^*=3-w_1-w_2-w_3+w_5, \qquad {\cal K}_2^*=3-w_1-w_2-w_3+w_4, \qquad \hat{\cal C} = w_1w_2w_3-w_4w_5, $$ where (as functions of $J_n$) the quantities ${\cal K}_1^*$ and ${\cal K}_2^*$ come from (\ref{kstar}), by taking the trace of the monodromy matrix ${\bf M}_n^{1/2}$ for $n=1,2$, while fixing $\hat{\cal C}=0$ corresponds to the constraint (\ref{wreln}). The Casimir function $ {\cal B} = {\cal K}_1^* + {\cal K}_2^* = 3-2\hat\CH_1+\hat\CH_2 $ is also a first integral, whose components $$ \hat\CH_1 = J_1J_4+J_2J_5+J_3J_6= w_1+w_2 +w_3, \qquad \hat\CH_2 = J_1J_3J_5 +J_2J_4J_6 =w_4 +w_5, $$ are homogeneous functions of the $J_n$; these components are themselves first integrals, and they Poisson commute: $\{ \hat\CH_1 ,\hat\CH_2 \}=0$.
Either of the latter two functions defines an integrable system on the two-dimensional symplectic leaves of the algebra generated by the $w_i$. The function ${\cal C} = {\cal K}_1^* {\cal K}_2^* +{\cal B}$ is another first integral that is also a Casimir. The functions $K_n$, which cycle with period 4, are specified by the formula $$ K_n= \frac{x_n}{x_{n+6}}+ \frac{x_{n+9}}{x_{n+3}}+ \frac{x_{n+1}}{x_{n+2}x_{n+6}}+ \frac{x_{n+8}}{x_{n+3}x_{n+7}}+ \frac{x_{n+1}x_{n+8}}{x_{n+2}x_{n+7}}, \qquad n=1,\ldots ,4. $$ These quantities are invariant under the scaling (\ref{lascala2}), so they can also be written in terms of the iterates of (\ref{yp6q4}): $$ K_n = \frac{y_{n+1}+y_{n+6}+y_ny_{n+1}+y_{n+1}y_{n+6}+y_{n+6}y_{n+7}}{y_{n+3}y_{n+4}}, \qquad n=1,\ldots ,4, $$ where $\hat\varphi^*y_n=y_{n+1}$ defines the sequence of $y_n$ for all $n\in\Z$. Upon using (\ref{yp6q4bra}) we find the relations $$ \{K_1,K_2\} = -K_1K_2+1, \qquad \{K_1,K_3\}=-K_2+K_4, $$ which provide all the Poisson brackets between the $K_n$ by applying the cyclic property. Comparing with (\ref{jp4q2}) and scaling by a factor of $-3$, this four-dimensional Poisson subalgebra is seen to be isomorphic to the algebra of the $J_i$ in Example \ref{ex-p6263}. Hence the Casimirs of this subalgebra are $$ {\cal K}_1^*=K_2K_4 -K_1-K_3, \qquad {\cal K}_2^*=K_1K_3 -K_2-K_4, $$ and, with ${\cal B}=-\CH_1+\CH_2$, two first integrals in involution are $$ \CH_1=K_1+K_2+K_3+K_4, \qquad \CH_2=K_1K_3+K_2K_4. $$ From (\ref{kstar}), these Casimirs are shared with the subalgebra generated by the $J_n$, and the two different formulae for ${\cal B}$ imply that the first integrals are related according to $$ 3-2\hat\CH_1+\hat\CH_2 +\CH_1-\CH_2=0. $$ Since $\{w_i,K_j\}=0$ for all $i,j$, the functions ${\cal B},{\cal C},\hat\CH_1,\CH_1$ provide four first integrals that Poisson commute, as required for Liouville integrability of the 8-dimensional symplectic map (\ref{yp6q4}). 
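As a quick numerical sanity check of this reduction (our own sketch, not part of the argument), one can iterate (\ref{p6q4}) in exact rational arithmetic and confirm that the monomials $y_n=x_nx_{n+2}/x_{n+1}$ obey the shifted form of the induced map (\ref{yp6q4}), i.e. $y_{n+8} = y_{n+3}y_{n+4}y_{n+5}(y_{n+4}+1)/(y_ny_{n+1}y_{n+7})$; the initial values below are arbitrary positive rationals.

```python
from fractions import Fraction

# Iterate x_{n+10} x_n = x_{n+6} x_{n+4} + x_{n+5} (0-based indices here)
# from arbitrary positive rational initial data.
x = [Fraction(v) for v in (1, 2, 1, 3, 1, 1, 2, 1, 1, 2)]
for n in range(30):
    x.append((x[n + 6] * x[n + 4] + x[n + 5]) / x[n])

# Scaling-invariant coordinates y_n = x_n x_{n+2} / x_{n+1}.
y = [x[j] * x[j + 2] / x[j + 1] for j in range(len(x) - 2)]

# Check the induced map on every available window of iterates.
ok = all(
    y[n + 8]
    == y[n + 3] * y[n + 4] * y[n + 5] * (y[n + 4] + 1) / (y[n] * y[n + 1] * y[n + 7])
    for n in range(20)
)
```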
} \eex \section{Integrable maps from Somos sequences} \label{somosmaps} \setcounter{equation}{0} The quadratic recurrences (\ref{somosN}) (case (iv) of Theorem \ref{zeroe}) are referred to as three-term Gale-Robinson recurrences \cite{gale, rob}. We now consider the slightly more general case where these recurrences have coefficients: \beq \label{GR} x_{n+N}\, x_n = \alpha \, x_{n+N-p}\, x_{n+p} +\beta \, x_{n+N-q}\, x_{n+q}. \eeq These coefficients can be included by adding extra nodes to the quiver for the coefficient-free recurrence (see Section 10 in \cite{fordy_marsh}). The coefficients are frozen variables, attached to these new nodes, which do not change under mutations. For three-term Gale-Robinson recurrences, one can add two extra nodes, corresponding to the parameters $\al$, $\beta$, so a quiver with $N+2$ nodes is obtained (Proposition 10.4 in \cite{fordy_marsh}). It is then straightforward to check that the presymplectic form $\om$ in (\ref{omega}), defined by the same $N\times N$ skew-symmetric matrix $(b_{jk})$ as for the coefficient-free case, is preserved by (\ref{GR}). Hence Theorem \ref{torusred} can be applied directly to the latter, to obtain a reduced symplectic map for suitable variables $y_j$. Below we outline two different approaches to showing that the map for the variables $y_j$ is integrable. The first way is to use the fact that all of the recurrences (\ref{GR}) are ordinary difference equations that arise as reductions of the Hirota-Miwa equation, which is an integrable partial difference equation with three independent variables. The Lax pair of the Hirota-Miwa equation allows one to obtain Lax pairs for its reductions, and the associated spectral curves provide first integrals in terms of the $y_j$. The second way is to find Somos-type recurrences of higher order that are satisfied by the iterates of (\ref{GR}).
The coefficients of these Somos-$k$ relations, for certain $k>N$, can also provide first integrals (analogous to the first integrals which appear as coefficients in {\it linear} recurrence relations for the iterates of the families (ii) and (iii)). Recently, Goncharov and Kenyon have found Somos recurrences arising as discrete symmetries of classical integrable systems associated with dimer models on a torus \cite{gk}. A further connection with relativistic analogues of the Toda lattice appeared in \cite{eager}. \subsection{Reductions of the Hirota-Miwa equation} The relations (\ref{GR}) all arise by reduction of the Hirota-Miwa (discrete KP) equation, which is the bilinear partial difference equation \beq\label{dkp} T_{1}\, T_{-1}=T_{2}\, T_{-2}+T_{3}\, T_{-3}. \eeq In the above, $T=T(n_1,n_2,n_3)$ is a function of three independent variables, and to denote shifts we have used $T_{\pm j} = T|_{n_j\to n_j\pm 1}$. If we set \beq\label{redntau} T(n_1,n_2,n_3)=\exp \left(\sum_{i,j}S_{ij}n_in_j\right)\, \uptau (n), \eeq where $S=(S_{ij})$ is a symmetric matrix and $n=n_0+\updelta_1n_1+\updelta_2n_2+\updelta_3n_3$, then $\uptau(n)$ satisfies the ordinary difference equation $$ \uptau (n+\updelta_1)\uptau(n-\updelta_1)=\alpha \, \uptau(n+\updelta_2)\uptau(n-\updelta_2) +\beta\, \uptau(n+\updelta_3)\uptau(n-\updelta_3) $$ where $\alpha = \exp(2(S_{22}-S_{11}))$, $\beta =\exp(2(S_{33}-S_{11}))$. Upon taking $x_n=\uptau (n-\updelta_1)$ with $\updelta_1=\frac{1}{2}N$, $\updelta_2= \frac{1}{2}(N-2p)$, $\updelta_3= \frac{1}{2}(N-2q)$, this becomes (\ref{GR}). In the combinatorics literature, equation (\ref{dkp}) is referred to as the octahedron recurrence, which has the Laurent property (shown in \cite{fz}). The Laurent property for three-term Somos (or Gale-Robinson) recurrences of the form (\ref{GR}) then follows by the reduction (\ref{redntau}). 
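The Laurent property has a concrete arithmetical consequence that is easy to test: with unit initial data and integer coefficients $\al,\beta$, every iterate of (\ref{GR}) is an integer, even though each step involves a division. A minimal sketch (our own illustration, for $N=4$, $p=1$, $q=2$) reproduces the classical Somos-4 sequence $1,1,1,1,2,3,7,23,59,\ldots$ and checks integrality for sample integer coefficients:

```python
from fractions import Fraction

def gale_robinson(N, p, q, alpha, beta, x0, terms):
    """Iterate x_{n+N} x_n = alpha x_{n+N-p} x_{n+p} + beta x_{n+N-q} x_{n+q}
    in exact rational arithmetic, starting from the N initial values x0."""
    x = [Fraction(v) for v in x0]
    for n in range(terms):
        x.append((alpha * x[n + N - p] * x[n + p] + beta * x[n + N - q] * x[n + q]) / x[n])
    return x

# Somos-4: N=4, p=1, q=2 with alpha = beta = 1 and unit initial data.
s4 = gale_robinson(4, 1, 2, 1, 1, (1, 1, 1, 1), 12)

# With (sample) integer coefficients, the Laurent property still forces integer iterates.
s4c = gale_robinson(4, 1, 2, 2, 3, (1, 1, 1, 1), 20)
```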
The Hirota-Miwa equation (\ref{dkp}) has a scalar Lax pair (see equation (3.8) in \cite{zabrodin}, for instance): it is the compatibility condition for the linear system given by \beq \label{kplax} \bear{rcrcr} T_{-1,3}\, \psi_{1,2} & + & T\, \psi_{2,3} & = & T_{2,3}\, \psi , \\ T\, \psi_{-1,2} & + & T_{-1,3}\, \psi_{2,-3} & = & T_{-1,2}\, \psi , \eear \eeq in terms of the scalar function $\psi = \psi (n_1,n_2,n_3)$, with the same notation for shifts as before. Using the latter, one can apply the reduction (\ref{redntau}) to obtain Lax pairs for all of the Somos recurrences (\ref{GR}), which leads directly to spectral curves whose coefficients are conserved quantities. Here we briefly illustrate how this works for the cases $N=4$ and $N=5$ only; reducing the Lax pair becomes more involved as $N$ increases. The general Somos-4 recurrence with coefficients is \beq\label{s4ab} x_{n+4}\, x_n = \al \, x_{n+3}\, x_{n+1} + \beta x_{n+2}^2. \eeq By taking the same monomials as in (\ref{s4yj}), this reduces to the map \beq\label{s4mapab} (y_1\, , \, y_2) \mapsto \Big( y_2\, ,\, (\alpha y_2 +\beta)/(y_1y_2^2) \Big) , \eeq which becomes (\ref{s4map}) in the case $\alpha=\beta =1$, and preserves the same symplectic form $\hat\om$. This means that (\ref{s4mapab}) has the invariant Poisson bracket \beq\label{s4bra} \{ y_1\, ,\, y_2\} =y_1\, y_2. \eeq The recurrence (\ref{s4ab}) arises from the reduction (\ref{redntau}) with $\updelta_1=2$, $\updelta_2=1$, $\updelta_3=0$. Upon taking $S_{jk}=0$ for $j\neq k$, without loss of generality, and setting $$ \psi (n_1,n_2,n_3) = \exp \Big(\sum_{j=1}^3 n_j\log\lambda_j+S_{jj}n_j^2\Big)\, \uptau (n)\, \phi (n), \qquad \mathrm{with }\qquad \lambda_1 = \frac{e^{-S_{11}}}{\sqrt{\zeta \xi} }, \quad \la_2 = \la_1\zeta, \quad \la_3 = e^{2S_{22}}\, \la_2, $$ the scalar linear equations (\ref{kplax}) reduce to a $2\times 2$ linear system for the vector ${\bf w} = (\phi (n),\phi (n+1))^T$.
Up to shifts of indices, the coefficients of the latter can all be written in terms of the $y_j$ given in (\ref{s4yj}), as well as the spectral parameters $\zeta$ and $\xi$, as follows. \bex[Somos-4 Lax pair] \label{s4lax} {\em The Lax pair for the map (\ref{s4mapab}) takes the form \beq \label{laxs4} {\bf L} \, {\mathbf w} = \xi {\mathbf w}, \qquad \tilde{{\mathbf w}} ={\bf M} \, {\mathbf w} , \eeq where the tilde denotes the shift $n\to n+1$. The matrices ${\bf L}={\bf L}(\zeta )$, ${\bf M}={\bf M}(\zeta )$ are functions of $y_j$ and the spectral parameter $\zeta$, given by $$ {\bf L} = \left(\bear{cc} -\frac{(\al y_1+\beta)}{y_1y_2} \, \zeta & -\al y_1 \zeta + \frac{(\al y_1+\beta)}{y_1y_2} \\ \frac{\al}{y_1}\, \zeta^2 -\zeta & \left(-y_1y_2-\frac{\al}{y_1}\right)\, \zeta +1 \eear \right), \qquad {\bf M} = \left(\bear{cc} 0 & 1 \\ -\frac{1}{y_1y_2}\, \zeta & \frac{1}{y_1y_2} \eear\right). $$ The discrete Lax equation $\tilde{\bf L} {\bf M}={\bf M} {\bf L}$ holds if and only if the map (\ref{s4mapab}) does. The spectral curve is $$ \mathrm{det}\, ({\bf L}(\zeta ) - \xi \, {\mathbf 1}) = \xi^2+(H_1\, \zeta -1)\xi+ \al^2\zeta^3+\beta\zeta^2 =0, $$ in which the coefficient of $\zeta \xi$ is the first integral \beq\label{s4ham} H_1=y_1y_2+\frac{\alpha}{y_1}+\frac{\alpha}{y_2}+\frac{\beta}{y_1y_2}. \eeq The level sets of $H_1$ are biquadratic curves of genus one in the $(y_1,y_2)$ plane, and the map is a particular instance of the QRT family \cite{qrt1}. } \eex Applying the reduction (\ref{redntau}) to the Hirota-Miwa equation (\ref{dkp}) with $\updelta_1=5/2$, $\updelta_2=3/2$, $\updelta_3=1/2$ leads to the general form of the Somos-5 recurrence with coefficients, which we denote by $\tilde{\alpha}$, $\tilde{\beta}$: \beq \label{s5ab} x_{n+5}\, x_n =\tilde{\alpha}\, x_{n+4}\, x_{n+1}+ \tilde{\beta}\, x_{n+3}\, x_{n+2}.
\eeq From the appropriate $B$ matrix, one obtains $y_1=x_1x_4/(x_2x_3)$, $y_2=x_2x_5/(x_3x_4)$ as coordinates in the plane (see \cite{sigma}), satisfying the Poisson bracket (\ref{s4bra}), and the corresponding map $\hat\varphi$ is also of QRT type, namely \beq\label{s5mapab} \hat\varphi : \qquad (y_1\, , \, y_2) \mapsto \Big( y_2\, ,\, (\tilde{\alpha} y_2 +\tilde{\beta})/(y_1y_2) \Big) . \eeq Similar calculations to those in the Somos-4 case yield the appropriate reduction of (\ref{kplax}). \bex[Somos-5 Lax pair] \label{s5lax} {\em The map (\ref{s5mapab}) arises as the compatibility condition $\tilde{\bf L} {\bf M}={\bf M} {\bf L}$ for a linear system of the form (\ref{laxs4}), where \beq\label{s5lm} {\bf L} ={\bf C}_0 +{\bf C}_1\, \zeta + {\bf C}_2 \, \zeta^2, \qquad {\bf M} ={\bf C}_0 + \left(\begin{array}{cc} 0 & 0 \\ -y_1 & 0 \end{array} \right)\, \zeta, \eeq $$ \mathrm{with }\qquad {\bf C}_0=\left(\begin{array}{cc} 0 & 1 \\ 0 & 1 \end{array} \right) , \quad {\bf C}_1=\left(\begin{array}{cc} -y_1 & -\left(y_2+\frac{\tilde{\alpha}}{y_1}\right) \\ -y_1 & -\left(y_2+\frac{\tilde{\alpha}}{y_1}+\frac{\tilde{\alpha}}{y_2}+\frac{\tilde{\beta}}{y_1y_2}\right)\end{array} \right) , \quad {\bf C}_2= \left(\begin{array}{cc} \tilde{\alpha} & 0 \\ \tilde{\alpha}+\frac{(\tilde{\alpha}y_1+\tilde{\beta})}{y_2} & \tilde{\alpha} \end{array} \right). $$ The coefficient of $\zeta \xi$ in the equation for the spectral curve, that is $$ \mathrm{det}\, ({\bf L}(\zeta ) - \xi \, {\mathbf 1}) = \xi^2-(2\tilde{\alpha}\zeta^2-\tilde{J}\, \zeta +1)\xi+ \tilde{\alpha}^2\zeta^4+\tilde{\beta}\zeta^3 =0, $$ gives a first integral whose level sets are cubic (also biquadratic) curves of genus one, that is \beq\label{s5j} \tilde{J}=y_1+y_2+\tilde{\alpha}\left(\frac{1}{y_1}+ \frac{1}{y_2}\right) + \frac{\tilde{\beta}}{y_1y_2}. 
\eeq } \eex \begin{rem}\label{hgsint}{\em The first integral (\ref{s4ham}) for Somos-4 can be rewritten in terms of the cluster variables, so that it becomes a ratio of homogeneous polynomials of total degree 4 in $x_1,x_2,x_3,x_4$, with the denominator just being $x_1x_2x_3x_4$. Similarly, in the case of Somos-5 the first integral (\ref{s5j}) can be rewritten as a ratio of homogeneous polynomials of degree 5, with the denominator $x_1x_2x_3x_4x_5$. It turns out that (\ref{s5ab}) has another rational first integral, also of degree 5 in terms of the cluster variables, which can be written as $$ \tilde{I} =f_1f_2f_3+ \tilde{\alpha}\left(\frac{1}{f_1}+ \frac{1}{f_2}+ \frac{1}{f_3}\right) +\frac{\tilde{\beta}}{f_1f_2f_3} \qquad \mathrm{with} \quad f_j=\frac{x_jx_{j+2}}{x_{j+1}^2} \qquad \mathrm{for} \quad j=1,2,3 $$ (see Proposition 2.3 in \cite{hones5}). However, the quantity $\tilde{I}$ is not defined on the $(y_1,y_2)$ plane, where there is only one first integral (as required for the Liouville-Arnold theorem). } \end{rem} \subsection{Bilinear relations of higher order} In \cite{swartvdp}, Swart and van der Poorten proved that sequences generated by Somos-4 recurrences also satisfy quadratic (Somos-type) relations of order $k$, for all $k\geq 4$. They also noted that for Somos-5 sequences, there are Somos-$k$ relations of all odd orders $k=5,7,9,\ldots$. Moreover, the coefficients of the higher order relations are constant along orbits, which means that, as long as they are not trivially constant, they provide first integrals. In this subsection we explain how to obtain first integrals for Somos-7 recurrences using associated quadratic (bilinear) relations of higher order. The analogous results for Somos-6 recurrences are in \cite{hones6}. 
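Both phenomena are easy to illustrate numerically (a sketch with sample coefficients, not part of the proofs): iterating the Somos-4 recurrence (\ref{s4ab}) in exact arithmetic, the first integral (\ref{s4ham}) is constant along the orbit, and the coefficients $A$, $B$ of the associated Somos-5 relation $x_{n+5}x_n = A\,x_{n+4}x_{n+1}+B\,x_{n+3}x_{n+2}$, determined from its first two instances, persist along the whole orbit.

```python
from fractions import Fraction

alpha, beta = Fraction(2), Fraction(3)  # sample coefficient values
x = [Fraction(1)] * 4
for n in range(20):
    x.append((alpha * x[n + 3] * x[n + 1] + beta * x[n + 2] ** 2) / x[n])

# First integral H1 of the reduced Somos-4 map, in y_n = x_n x_{n+2} / x_{n+1}^2.
y = [x[n] * x[n + 2] / x[n + 1] ** 2 for n in range(len(x) - 2)]
H1 = [y1 * y2 + alpha / y1 + alpha / y2 + beta / (y1 * y2) for y1, y2 in zip(y, y[1:])]

# Somos-5 relation: solve for (A, B) from the first two instances, then test the rest.
a11, a12, b1 = x[4] * x[1], x[3] * x[2], x[5] * x[0]
a21, a22, b2 = x[5] * x[2], x[4] * x[3], x[6] * x[1]
det = a11 * a22 - a12 * a21
A = (b1 * a22 - a12 * b2) / det
B = (a11 * b2 - b1 * a21) / det
somos5_holds = all(
    x[n + 5] * x[n] == A * x[n + 4] * x[n + 1] + B * x[n + 3] * x[n + 2]
    for n in range(len(x) - 5)
)
```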
To present the results concisely, it is convenient to consider the most general form of a Somos-7 recurrence, which is the four-term Gale-Robinson relation \beq\label{s7gen} x_{n+7}\, x_n = \upalpha \, x_{n+6}\, x_{n+1} + \upbeta \, x_{n+5}\, x_{n+2} +\upgamma \, x_{n+4}\, x_{n+3}. \eeq With all three terms on the right hand side, this does not arise from a cluster algebra. Nevertheless, the general Somos-7 recurrence can be obtained as a reduction of the cube recurrence (Miwa's equation), and in \cite{fz} this was used to prove the Laurent property for all four-term Gale-Robinson recurrences, including (\ref{s7gen}). In each of the cases where one of the parameters $\upalpha,\upbeta,\upgamma$ vanishes, (\ref{s7gen}) reduces to a bilinear relation with two terms on the right hand side, corresponding to a different cluster algebra in each case. Our main result on the family of recurrences (\ref{s7gen}) is stated as follows and the remainder of this subsection is devoted to its proof. \begin{thm} \label{s7ints} The Somos-7 recurrence (\ref{s7gen}) has three independent first integrals, denoted $\mathcal{H}_1,\mathcal{H}_2,\hat{I}$, which are rational functions (in fact, Laurent polynomials) of degree 7 in $x_1,\ldots, x_7$. $\hat{I}$ can be written as \beq\label{I} \begin{array}{rcl} \hat{I} & = & \upbeta \, f_1f_2f_3f_4f_5 +\upgamma (f_1f_2f_3 + f_2f_3f_4+f_3f_4f_5) +\upalpha \upbeta \left( \frac{1}{f_1}+ \frac{1}{f_2}+\frac{1}{f_3}+\frac{1}{f_4}+\frac{1}{f_5}\right) \\ && + \upalpha \upgamma \left(\frac{1}{f_1f_2f_3} + \frac{1}{ f_2f_3f_4}+\frac{1}{f_3f_4f_5} \right) +\frac{\upbeta^2} { f_1f_2f_3f_4f_5 } +\upbeta \upgamma\left( \frac{1}{f_1f_2f_3^2f_4^2f_5} +\frac{1}{f_1f_2^2f_3^2f_4f_5} \right) +\frac{\upgamma^2}{f_1f_2^2f_3^3f_4^2f_5} , \end{array} \eeq in terms of the variables $f_j=x_jx_{j+2}/x_{j+1}^2$, $j=1,2,3,4,5$, while $\mathcal{H}_1$ and $\mathcal{H}_2$ are given in terms of $y_j = x_{j}x_{j+3}/(x_{j+1}x_{j+2})$, as in Proposition \ref{higherbil} below.
For $\upalpha =0$ or $\upgamma=0$, Somos-7 reduces to a Liouville integrable map $\hat\varphi$ in four dimensions, while for $\upbeta =0$ it reduces to an integrable map of the plane. \end{thm} The equation (\ref{s7gen}) admits the action of the two-parameter scaling group $(\C^*)^2$, via \beq\label{2torus} x_n \rightarrow \la \, \mu^n\, x_n \eeq for non-zero $\la ,\mu$. The variables $f_j$ are invariants under this scaling symmetry. In terms of these variables, the Somos-7 recurrence (\ref{s7gen}) is transformed to a recurrence of fifth order, namely \beq\label{s7frec} f_{n+5}f_{n+4}^2f_{n+3}^3f_{n+2}^3f_{n+1}^2f_{n}=\upalpha \, f_{n+4}f_{n+3}^2f_{n+2}^2f_{n+1} +\upbeta \, f_{n+3}f_{n+2}+\upgamma. \eeq Being of odd order, (\ref{s7gen}) has a further scaling symmetry depending on the parity of $n$: \beq \label{extrasym} x_n\rightarrow \nu^{(-1)^n}x_n, \qquad \nu \in\C^*. \eeq The variables $f_n$ have the symmetry $f_n\rightarrow \nu^{\pm 2}f_n$ for even/odd $n$ respectively. The variables $y_j=f_jf_{j+1}$ are the invariants under the combined scaling group and lead to \beq\label{s7yrec} y_{n+4}y_{n+3}y_{n+2}^2 y_{n+1}y_{n}=\upalpha\, y_{n+3}y_{n+2} y_{n+1} +\upbeta \, y_{n+2}+\upgamma. \eeq Among the cluster algebra subcases, we shall see that in both the case $\upalpha =0$ and the case $\upgamma =0$ the 4D map defined by (\ref{s7yrec}) is symplectic, while for $\upbeta =0$, additional scaling symmetries allow reduction to the plane. The quantities $\mathcal{H}_1,\mathcal{H}_2,\hat{I}$ in Theorem \ref{s7ints} can all be written in terms of $f_j$, so the recurrence (\ref{s7frec}) has three independent first integrals. The rational function $\hat{I}$ is not invariant under (\ref{extrasym}), which means that it does not provide a first integral for (\ref{s7yrec}), but both $\mathcal{H}_1$ and $\mathcal{H}_2$ do. The quantities $\mathcal{H}_1$ and $\mathcal{H}_2$ appear in coefficients of bilinear relations of higher order.
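Both changes of variables are easy to verify numerically (our own sketch, with arbitrary sample coefficients and initial data): iterating (\ref{s7gen}) in exact arithmetic, the monomials $f_n$ satisfy (\ref{s7frec}) and the products $y_n=f_nf_{n+1}$ satisfy (\ref{s7yrec}).

```python
from fractions import Fraction

al, be, ga = Fraction(1), Fraction(2), Fraction(3)  # arbitrary sample coefficients
x = [Fraction(v) for v in (1, 1, 2, 1, 3, 1, 1)]
for n in range(25):
    x.append((al * x[n + 6] * x[n + 1] + be * x[n + 5] * x[n + 2]
              + ga * x[n + 4] * x[n + 3]) / x[n])

f = [x[j] * x[j + 2] / x[j + 1] ** 2 for j in range(len(x) - 2)]
y = [f[j] * f[j + 1] for j in range(len(f) - 1)]  # y_j = x_j x_{j+3} / (x_{j+1} x_{j+2})

# Fifth-order recurrence for the f_n.
f_rec = all(
    f[n+5] * f[n+4]**2 * f[n+3]**3 * f[n+2]**3 * f[n+1]**2 * f[n]
    == al * f[n+4] * f[n+3]**2 * f[n+2]**2 * f[n+1] + be * f[n+3] * f[n+2] + ga
    for n in range(len(f) - 5)
)
# Fourth-order recurrence for the y_n.
y_rec = all(
    y[n+4] * y[n+3] * y[n+2]**2 * y[n+1] * y[n]
    == al * y[n+3] * y[n+2] * y[n+1] + be * y[n+2] + ga
    for n in range(len(y) - 4)
)
```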
Since the Somos-7 recurrence is invariant under the three-parameter family of scalings defined by (\ref{2torus}) and (\ref{extrasym}), one expects there to be relations of odd order having the same symmetry. The first non-trivial relation is the Somos-11 recurrence (\ref{s11}) below. It can be seen that a combination of $\mathcal{H}_1$ and $\mathcal{H}_2$ appears in one of the coefficients. Since this coefficient remains constant along each orbit of (\ref{s7gen}), it provides a non-trivial first integral. A second independent integral is provided by the Somos-13 recurrence (\ref{s13}), given in the following. \begin{propn}\label{higherbil} The iterates of the Somos-7 recurrence (\ref{s7gen}) also satisfy the Somos-11 recurrence \beq\label{s11} \bear{rcl} x_{n+11}\, x_n & = & -\upbeta \, x_{n+10} \, x_{n+1}+\upgamma(\upgamma-\upalpha^2)x_{n+8}\,x_{n+3} -\upalpha \upbeta \upgamma\, x_{n+7}\, x_{n+4} \\ && +(\upalpha^5+2\upalpha \upgamma^2 +2\upbeta^3+\upbeta \mathcal{H}_1+\upalpha^2 \mathcal{H}_2)\, x_{n+6}\, x_{n+5} , \eear \eeq as well as the Somos-13 recurrence \beq\label{s13} x_{n+13}\, x_n=-\upbeta \upgamma \, x_{n+11}\,x_{n+2} +\upGamma\, x_{n+9}\, x_{n+4}+ \upDelta\, x_{n+8}\, x_{n+5} +\upTheta \, x_{n+7}\,x_{n+6}, \eeq where $$ \upGamma = \upalpha^4\upgamma+\upalpha^2\upgamma^2-\upalpha \upbeta^3+\upgamma^3 +\upalpha\upgamma \mathcal{H}_2, \quad \upDelta = \upalpha^5 \upbeta+\upalpha^3\upbeta\upgamma+4\upalpha\upbeta\upgamma^2+\upbeta^4 +\upbeta^2\mathcal{H}_1 +\upbeta(\upalpha^2+\upgamma)\mathcal{H}_2, $$ $$ \upTheta = \upalpha^7+3\upalpha^3\upgamma^2+2\upalpha^2\upbeta^3-\upalpha\upgamma^3 +\upalpha^2\upbeta \mathcal{H}_1+\upalpha^4 \mathcal{H}_2 $$ and, in terms of $y_j = x_{j}x_{j+3}/(x_{j+1}x_{j+2})$ for $j=1,2,3,4$, the first integrals $\mathcal{H}_1$ and $\mathcal{H}_2$ are given by \beq\label{s7h12} \bear{rcl} \mathcal{H}_1 & = & \upgamma y_1y_2y_3y_4 +\upalpha\upbeta (y_1y_3+y_1y_4+y_2y_4) +\upalpha\upgamma\left(y_1 +y_2+y_3+y_4 
+\frac{y_1y_3}{y_4}+\frac{y_2y_4}{y_1}\right) \\ && +\upalpha^2\upbeta\left(\frac{1}{y_1}+\frac{1}{y_2}+\frac{1}{y_3}+\frac{1}{y_4}+\frac{y_2}{y_1y_3}+\frac{y_3}{y_2y_4}\right) + \upbeta\upgamma \left( \frac{1}{y_1}+\frac{1}{y_2}+\frac{1}{y_3}+\frac{1}{y_4}\right) \\ && +\upalpha^2\upgamma \left(\frac{1}{y_1y_2}+\frac{1}{y_2y_3}+\frac{1}{y_3y_4}+\frac{1}{y_1y_3}+\frac{1}{y_2y_4} \right) +\upgamma^2\left( \frac{1}{y_1y_3}+\frac{1}{y_2y_4} \right) +\upalpha\upbeta^2\left( \frac{1}{y_1y_2y_4}+\frac{1}{y_1y_3y_4} \right) \\ && +\upalpha\upbeta\upgamma\left( \frac{2}{y_1y_2y_3y_4}+\frac{1}{y_1y_2^2y_4} +\frac{1}{y_1y_3^2y_4}\right) + \upalpha\upgamma^2 \left( \frac{1}{y_1y_2^2y_3y_4}+\frac{1}{y_1y_2y_3^2y_4} \right) , \\ \mathcal{H}_2& = & \upgamma (y_1y_2y_3 +y_2y_3y_4)+\upalpha\upbeta (y_1+y_2+y_3+y_4) + \upalpha\upgamma \left( \frac{y_1}{y_4}+\frac{y_4}{y_1}\right) + \upalpha^2 \upbeta \left( \frac{1}{y_1y_3}+\frac{1}{y_2y_4}\right) \\ && + \upbeta\upgamma \left( \frac{1}{y_1y_2}+\frac{1}{y_2y_3}+\frac{1}{y_3y_4}\right) + \upgamma (\upalpha^2+\upgamma ) \left( \frac{1}{y_1y_2y_3}+\frac{1}{y_2y_3y_4}\right) +\frac{ \upalpha \upbeta^2}{y_1y_2y_3y_4} \\ && +\upalpha\upbeta\upgamma\left( \frac{1}{y_1y_2^2y_3y_4}+\frac{1}{y_1y_2y_3^2y_4} \right) +\frac{ \upalpha \upgamma^2}{y_1y_2^2y_3^2y_4} . \eear \eeq \end{propn} Having obtained the first integrals, we now consider each of the three cases corresponding to cluster algebras separately, and explain how the reduction result of Theorem \ref{torusred} applies in each case. \vspace{.1in} \noindent {\bf The case $\mathbf{\upalpha =0}$:} When $\upalpha =0$, the recurrence (\ref{s7gen}) arises from a cluster algebra defined by a 7-node quiver. 
The latter comes from a $7\times 7$ matrix, specified in terms of its columns by $$ B=\left( -{\bf v}_3, -{\bf v}_4,{\bf v}_1+{\bf v}_2+2{\bf v}_3+{\bf v}_4, -{\bf v}_1+{\bf v}_4, -{\bf v}_1-2{\bf v}_2-{\bf v}_3-{\bf v}_4,{\bf v}_1,{\bf v}_2 \right), $$ where im$\, B$ is spanned by the vectors \beq\label{vj} {\bf v}_j={\bf e}_j-{\bf e}_{j+1}-{\bf e}_{j+2}+{\bf e}_{j+3} \qquad \mathrm{for}\qquad j=1,2,3,4, \eeq where ${\bf e}_j$ is the $j$th standard basis vector. In this case, ker$\, B$ is spanned by the three integer vectors \beq\label{kerb} {\bf u}_1=(1,1,1,1,1,1,1)^T, \qquad {\bf u}_2=(1,2,3,4,5,6,7)^T, \qquad {\bf u}_3=(1,-1,1,-1,1,-1,1)^T. \eeq These three integer vectors produce a three-dimensional group of scaling transformations, $${\bf x} \rightarrow \la^{{\bf u}_1} \cdot \mu^{{\bf u}_2}\cdot \nu^{{\bf u}_3}\cdot {\bf x},$$ which coincides with the scalings defined by (\ref{2torus}) and (\ref{extrasym}). By Lemma \ref{symp} there is a symplectic form, given in terms of the scale-invariant monomials $y_j = {\bf x}^{{\bf v}_j}$ for $j=1,2,3,4$, as $$ \hat\om = \frac{ \dd y_1 \wedge \dd y_3}{y_1y_3}+ \frac{ \dd y_2 \wedge \dd y_3}{y_2y_3}+ \frac{ \dd y_2 \wedge \dd y_4}{y_2y_4}. $$ This yields the Poisson bracket \beq\label{PBs7azero} \{y_j,y_{j+1}\}=0, \qquad \{ y_j, y_{j+2} \} = -y_j y_{j+2}, \qquad \{ y_j, y_{j+3} \} = y_j y_{j+3}. \eeq In terms of $y_j$ one finds the map $$ \hat\varphi : \qquad (y_1,y_2,y_3,y_4) \mapsto \Big(y_2,y_3,y_4,(\upbeta y_3+\upgamma)/(y_1y_2y_3^2y_4)\Big), $$ which is equivalent to iteration of (\ref{s7yrec}) with $\upalpha =0$, and preserves the nondegenerate Poisson bracket (\ref{PBs7azero}). Setting $\upalpha =0$ in the two first integrals in (\ref{s7h12}) and computing their bracket gives $ \{ \mathcal{H}_1,\mathcal{H}_2\} |_{\upalpha =0} =0, $ so this is a Liouville integrable system in 4D. 
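As a numerical illustration of this case (a sketch with sample values $\upbeta=2$, $\upgamma=3$ and arbitrary initial data), one can check that the variables $y_j=x_jx_{j+3}/(x_{j+1}x_{j+2})$ computed from an orbit of (\ref{s7gen}) with $\upalpha=0$ obey the induced 4D map, and that the restrictions of $\mathcal{H}_1$ and $\mathcal{H}_2$ from (\ref{s7h12}) to $\upalpha=0$ are constant along the orbit.

```python
from fractions import Fraction

be, ga = Fraction(2), Fraction(3)  # sample values; alpha = 0 in this case
x = [Fraction(v) for v in (1, 2, 1, 1, 3, 1, 1)]
for n in range(25):
    x.append((be * x[n + 5] * x[n + 2] + ga * x[n + 4] * x[n + 3]) / x[n])

y = [x[j] * x[j + 3] / (x[j + 1] * x[j + 2]) for j in range(len(x) - 3)]

# Induced 4D map: y_{n+4} = (beta*y_{n+2} + gamma) / (y_n y_{n+1} y_{n+2}^2 y_{n+3}).
map_ok = all(
    y[n + 4] == (be * y[n + 2] + ga) / (y[n] * y[n + 1] * y[n + 2] ** 2 * y[n + 3])
    for n in range(len(y) - 4)
)

# H1, H2 restricted to alpha = 0, evaluated on sliding windows (y_n, ..., y_{n+3}).
def H1(a, b, c, d):
    return ga*a*b*c*d + be*ga*(1/a + 1/b + 1/c + 1/d) + ga**2*(1/(a*c) + 1/(b*d))

def H2(a, b, c, d):
    return (ga*(a*b*c + b*c*d) + be*ga*(1/(a*b) + 1/(b*c) + 1/(c*d))
            + ga**2*(1/(a*b*c) + 1/(b*c*d)))

vals1 = {H1(*y[n:n + 4]) for n in range(len(y) - 3)}
vals2 = {H2(*y[n:n + 4]) for n in range(len(y) - 3)}
```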
\vspace{.1in} \noindent {\bf The case $\mathbf{\upbeta =0}$:} In this case the matrix $B$ is specified by $ B= (-\hat{{\bf v}}_2 , \hat{{\bf v}}_1,\hat{{\bf v}}_2,-\hat{{\bf v}}_1+\hat{{\bf v}}_2, -\hat{{\bf v}}_1, -\hat{{\bf v}}_2,\hat{{\bf v}}_1), $ where $\hat{{\bf v}}_j={\bf v}_j+{\bf v}_{j+1}+{\bf v}_{j+2}$ for $j=1,2$. The kernel of $B$ is 5-dimensional, being spanned by the integer vectors ${\bf u}_2$ and ${\bf u}_3$ together with $ {\bf u}_4 = (1,0,0,1,0,0,1)^T, \;\; {\bf u}_5 = (0,1,0,0,1,0,0)^T, \;\; {\bf u}_6 = (0,0,1,0,0,1,0)^T. $ These five independent integer vectors give an action of the algebraic torus $(\C^*)^5$ on ${\bf x}$ by scaling transformations, and the scalings (\ref{2torus}) and (\ref{extrasym}) form a three-parameter subgroup, since ${\bf u}_1={\bf u}_4+{\bf u}_5+{\bf u}_6$. The invariants under the full 5-parameter scaling group are $$\hat{y}_j = {\bf x}^{\hat{{\bf v}}_j} = x_jx_{j+5}/(x_{j+2}x_{j+3}) = y_jy_{j+1}y_{j+2},$$ and in terms of these one obtains the general Lyness-2 recurrence with coefficients, that is \beq\label{lyness2} \hat{y}_{n+2}\, \hat{y}_{n}=\upalpha \, \hat{y}_{n+1}+\upgamma, \eeq which is equivalent to iteration of a map in the $(\hat{y}_1,\hat{y}_2)$ plane and of QRT type \cite{qrt1}, with invariant symplectic form $(\hat{y}_1\hat{y}_2)^{-1} \dd \hat{y}_1 \wedge \dd \hat{y}_2$. The first integral $\mathcal{H}_1$ does not reduce to the plane, as it is not invariant under the full 5-dimensional scaling group, but $\mathcal{H}_2$ is fully invariant, and reduces to a first integral of (\ref{lyness2}): $$ \mathcal{H}_2|_{\upbeta =0} = \upgamma \, (\hat{y}_1+ \hat{y}_2) +\upalpha\upgamma \, \left( \frac{\hat{y}_1}{\hat{y}_2}+\frac{\hat{y}_2}{\hat{y}_1}\right) +\upgamma (\upalpha^2+\upgamma ) \left( \frac{1}{\hat{y}_1}+\frac{1}{\hat{y}_2}\right)+\frac{\upalpha\upgamma^2}{\hat{y}_1\hat{y}_2}. $$ The level sets of the latter are cubic (and biquadratic) plane curves of genus one. 
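This reduction can likewise be checked numerically (a sketch with sample values $\upalpha=2$, $\upgamma=3$): the variables $\hat{y}_j$ computed from an orbit of (\ref{s7gen}) with $\upbeta=0$ satisfy the Lyness-2 recurrence (\ref{lyness2}), and the reduced first integral $\mathcal{H}_2|_{\upbeta=0}$ is constant along the orbit.

```python
from fractions import Fraction

al, ga = Fraction(2), Fraction(3)  # sample values; beta = 0 in this case
x = [Fraction(v) for v in (1, 1, 2, 1, 1, 3, 1)]
for n in range(30):
    x.append((al * x[n + 6] * x[n + 1] + ga * x[n + 4] * x[n + 3]) / x[n])

# Lyness-2 variables yh_j = x_j x_{j+5} / (x_{j+2} x_{j+3}).
yh = [x[j] * x[j + 5] / (x[j + 2] * x[j + 3]) for j in range(len(x) - 5)]
lyness = all(yh[n + 2] * yh[n] == al * yh[n + 1] + ga for n in range(len(yh) - 2))

# The reduced first integral H2 restricted to beta = 0, on consecutive pairs.
def I(a, b):
    return ga*(a + b) + al*ga*(a/b + b/a) + ga*(al**2 + ga)*(1/a + 1/b) + al*ga**2/(a*b)

vals = {I(yh[n], yh[n + 1]) for n in range(len(yh) - 1)}
```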
\vspace{.1in} \noindent {\bf The case $\mathbf{\upgamma =0}$:} This is very similar to the case $\upalpha =0$, since the recurrence comes from the rank 4 matrix $$ B=\left( -{\bf v}_2-{\bf v}_4, {\bf v}_1+{\bf v}_2+{\bf v}_4,-{\bf v}_1+{\bf v}_2, -{\bf v}_2+{\bf v}_3, -{\bf v}_3+{\bf v}_4,-{\bf v}_1-{\bf v}_3-{\bf v}_4,{\bf v}_1+{\bf v}_3\right), $$ with the same vectors ${\bf v}_j$ as in (\ref{vj}). Since both ker$\, B$ and im$\, B$ are the same as for $\upalpha =0$, there is the same scaling group with invariants $y_j={\bf x}^{{\bf v}_j}$. The symplectic form in this case is $$ \hat\om = \frac{ \dd y_1 \wedge \dd y_2}{y_1y_2}+ \frac{ \dd y_1 \wedge \dd y_4}{y_1y_4}+ \frac{ \dd y_3 \wedge \dd y_4}{y_3y_4}, $$ and (up to an overall constant) this gives the unique log-canonical Poisson bracket \beq\label{PBs7czero} \{y_j,y_{j+1}\}=y_jy_{j+1}, \qquad \{ y_j, y_{j+2} \} = 0 = \{ y_j, y_{j+3} \} \eeq that is preserved by the map $$ \hat\varphi : \qquad (y_1,y_2,y_3,y_4) \mapsto \Big(y_2,y_3,y_4,(\upalpha y_2y_4+ \upbeta )/(y_1y_2y_3y_4)\Big). $$ The latter map in 4D corresponds to iteration of (\ref{s7yrec}) with $\upgamma =0$. Setting $\upgamma =0$ in (\ref{s7h12}) and computing the bracket using (\ref{PBs7czero}) gives $ \{ \mathcal{H}_1,\mathcal{H}_2\} |_{\upgamma =0} =0, $ so the two first integrals are in involution, as required. \begin{rem} \label{s7integ}{\em We have shown that for the two-parameter subcases of (\ref{s7gen}), the reduced map is integrable in the Liouville sense, either in 4D or 2D. However, there is a different Poisson structure in each case. When $\upalpha\upbeta\upgamma \neq 0$, it is easy to check that there is no log-canonical Poisson bracket in the variables $y_j$ that is compatible with (\ref{s7yrec}). Nevertheless, we expect that there is a compatible Poisson structure for which the two first integrals in (\ref{s7h12}) are in involution, so that (\ref{s7yrec}) defines a 4D map that is Liouville integrable.
} \end{rem} \subsubsection*{Acknowledgments:} The authors would like to thank the Isaac Newton Institute, Cambridge for hospitality in 2009 during the Programme on Discrete Integrable Systems, where this collaboration began. We are also grateful to A. Veselov and A. Zelevinsky for helpful comments. \small
\section{Introduction} White dwarfs represent the last evolutionary stage of low and intermediate mass stars, i.e. stars with masses smaller than $10\pm 2 \, M_\odot$. Most of them are composed of carbon and oxygen, but white dwarfs with masses smaller than $0.4\, M_\odot$ are made of helium and are members of close binary systems, while those more massive than $\sim 1.05\, M_\odot$ are probably made of oxygen and neon. The exact composition of the cores of carbon-oxygen white dwarfs critically depends on the evolution during the previous red giant and asymptotic giant branch phases, and more specifically on the competition between the carbon--$\alpha$ and triple-$\alpha$ reactions, on the details of the stellar evolutionary codes and on the choice of several other nuclear cross sections. In a typical case --- for instance a white dwarf of $0.58\, M_\odot$ --- the total amount of oxygen represents 62\% of the total mass, while its concentration in the central layers of the white dwarf can be as high as 85\%. In all cases, the core is surrounded by a thin layer of pure helium with a mass ranging from $10^{-2}$ to $10^{-4}\, M_\odot$. This layer is, in turn, surrounded by an even thinner layer of hydrogen with a typical mass lying in the range of $10^{-4}$ to $10^{-15}\, M_\odot$. This layer is missing in $\sim 25\%$ of the cases. From the phenomenological point of view, white dwarfs containing hydrogen are classified as DA, while the remaining ones (collectively denoted as non-DA) are classified as DO, DB, DQ, DZ and DC, depending on their spectral features. 
The origin of these spectral differences and the relationship among them has not yet been elucidated, although it is related to the initial conditions during the AGB evolutionary phase, and also to a delicate interplay between several physical processes, among which we mention gravitational and thermal diffusion, radiative levitation, convection at the H-He and He-core interfaces, proton burning, stellar winds and mass accretion from the interstellar medium --- see, for instance, Fontaine (2013), this volume. The structure of white dwarfs is sustained by the pressure of degenerate electrons and these stars cannot obtain energy from thermonuclear reactions. Therefore, their evolution can be described in terms of a simple cooling process \cite{mest52} in which the internal degenerate core acts as a reservoir of energy and the outer non-degenerate layers control the energy outflow. A simple calculation indicates that the time they take to fade and disappear beyond the capabilities of the present telescopes is very long, $\sim 10$ Gyr. Thus the populations of white dwarfs retain important information about the past history of our Galaxy. In particular, they allow one to obtain the age of the different Galactic components, namely the disk, the spheroid and the system of open and globular clusters, as well as the star formation history of the Galactic disk \cite{alth10,font01,hans03,iser98,koes02,koes90}. 
The tool to obtain such information is the luminosity function, which is defined as the number of white dwarfs of a given luminosity per unit volume and magnitude interval: \begin{equation} N(l) = \int^{M_{\rm s}}_{M_{\rm i}}\,\Phi(M)\,\Psi[T-t_{\rm cool}(l,M)-t_{\rm PS}(M)] \tau_{\rm cool}(l,M) \;dM \label{ewdlf} \end{equation} \noindent where $T$ is the age of the population under study, $l = -\log (L/L_\odot)$, $M$ is the mass of the parent star (for convenience all white dwarfs are labeled with the mass of the main sequence progenitor), $t_{\rm cool}$ is the cooling time down to luminosity $l$, $\tau_{\rm cool}=dt/dM_{\rm bol}$ is the characteristic cooling time, $M_{\rm s}$ is the maximum mass of a main sequence star able to produce a white dwarf, and $M_{\rm i}$ is the minimum mass of the main sequence stars able to produce a white dwarf of luminosity $l$, i.e., the mass that satisfies the condition $T=t_{\rm cool}(l,M) + t_{\rm PS}(M)$, where $t_{\rm PS}$ is the lifetime of the progenitor of the white dwarf. The remaining quantities, the initial mass function, $\Phi(M)$, and the star formation rate, $\Psi(t)$, are not known a priori and depend on the astronomical properties of the stellar population under study. Since the total density of white dwarfs of a given population is not usually well known, to compare the theoretical and observational luminosity functions it is customary to normalize the computed luminosity function to the bin with the smallest error bar. For instance, for the case of the disk white dwarf luminosity function it is traditionally chosen to normalize at $l=3$ \cite{iser98}. In summary, if the observed luminosity function and the evolutionary behavior of white dwarfs are well known, it is possible to obtain the age and the star formation rate of the population under study. 
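To make the structure of equation (\ref{ewdlf}) concrete, here is a minimal numerical sketch (not from the paper): it assumes a Mestel-like cooling law $t_{\rm cool}\propto L^{-5/7}$, a crude $t_{\rm PS}\propto M^{-2.5}$ main-sequence lifetime, a Salpeter initial mass function and a constant star formation rate, with purely illustrative normalisations.

```python
import numpy as np

# Toy evaluation of the white dwarf luminosity function N(l).
# All ingredients below are illustrative assumptions, NOT the paper's inputs.
T   = 13.0    # age of the population [Gyr]
t0  = 0.01    # cooling-time normalisation [Gyr] (hypothetical)
M_s = 10.0    # maximum progenitor mass [Msun]

def t_cool(l):
    """Mestel-like cooling law t_cool ~ L^(-5/7), with l = -log10(L/Lsun)."""
    return t0 * 10.0**(5.0 * l / 7.0)

def tau_cool(l):
    """Characteristic cooling time dt/dM_bol, using M_bol = 2.5 l + const."""
    return t_cool(l) * (5.0 / 7.0) * np.log(10.0) / 2.5

def t_ps(M):
    """Crude main-sequence lifetime [Gyr] (hypothetical scaling)."""
    return 10.0 * M**(-2.5)

def wdlf(l, n_quad=200):
    """N(l) for a Salpeter IMF and a constant SFR (arbitrary normalisation)."""
    dt = T - t_cool(l)
    if dt <= t_ps(M_s):          # no progenitor has had time to cool to l
        return 0.0
    M_i = max(0.8, (10.0 / dt)**(1.0 / 2.5))   # T = t_cool(l) + t_PS(M_i)
    M = np.linspace(M_i, M_s, n_quad)
    imf = M**(-2.35)             # Salpeter initial mass function
    integral = 0.5 * np.sum((imf[1:] + imf[:-1]) * np.diff(M))
    return integral * tau_cool(l)
```

Evaluating \texttt{wdlf} on a grid of $l$ reproduces the qualitative behaviour discussed below: a monotonic rise driven by the growth of $\tau_{\rm cool}$ towards low luminosities, followed by a sharp cut-off once $t_{\rm cool}>T$ (near $l\simeq 4.4$ with these toy numbers).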
\section{The observed luminosity functions} \begin{figure} \vspace*{9.2 cm} \begin{center} \special{psfile=isern_f1.ps hscale=50 vscale=50 voffset=-80 hoffset=40} \caption{Luminosity functions obtained before the era of large surveys. The different symbols represent different determinations: full circles \cite{lieb88}, full squares \cite{evan92}, open triangles \cite{ossw96}, open diamonds \cite{legg98}, open circles \cite{knox99}.} \end{center} \label{oldlf} \end{figure} The first luminosity function was derived about four decades ago \cite{weid68}, and since then it has been noticeably improved with the work of many authors --- see Fig. \ref{oldlf}. The monotonic behavior of this function clearly proves that the evolution of white dwarfs is a simple gravothermal process, while the sharp cut-off at low luminosities is the consequence of the finite age of the Galaxy. The recent availability of data from the Sloan Digital Sky Survey (SDSS) has noticeably improved the accuracy of the new luminosity functions, as many new white dwarfs were added to the very limited sample of white dwarfs with measured magnitudes, parallaxes and proper motions. For instance, the white dwarf luminosity function of Ref.~\cite{harr06} (HA-LF) was built from a sample of $\sim 6,000$ DA and non-DA white dwarfs with accurate photometry and proper motions obtained from the SDSS Data Release 3 and the USNO-B catalogue, whereas that of Ref.~\cite{dege08} (DG-LF) was constructed from a sample of $3,528$ spectroscopically-identified DA white dwarfs obtained from the SDSS Data Release 4. The discrepancies between the HA-LF and the DG-LF at low luminosities are well understood, and can be attributed to the different way in which the effective temperatures and gravities of the sample of white dwarfs were observationally determined \cite{dege08}. Furthermore, the DG-LF only considers DA white dwarfs and, at low temperatures, it is difficult to separate them from non-DA white dwarfs. 
For this reason in this work we will restrict ourselves to analyzing the DG-LF for magnitudes smaller than $M_{\rm bol} \sim 13$. At high luminosities, say magnitudes smaller than $M_{\rm bol} \sim 6$, the dispersion of both luminosity functions is very large --- see Fig. \ref{newlf}. The reason is that both luminosity functions have been built using the reduced proper motion method, which is not appropriate for bright white dwarfs. The UV-excess technique has made it possible to build a luminosity function in the range $-0.75$ to 7 (KZ-LF) \cite{krze09}. This method, however, is not adequate for dim stars and becomes rapidly incomplete outside this range of magnitudes. Fortunately, this sample overlaps with the HA-LF and, assuming continuity, it is possible to extend the luminosity function to the brightest region. \begin{figure} \vspace*{8 cm} \begin{center} \special{psfile=isern_f2.eps hscale=70 vscale=70 voffset=-140 hoffset=0} \caption{Luminosity functions obtained from large surveys. Solid squares: SDSS, all spectral types \cite{harr06}, hollow squares: SDSS, only DA white dwarfs \cite{dege08}, crosses: SDSS, hot DA white dwarfs \cite{krze09}, stars: SuperCOSMOS Sky Survey \cite{rowe11}.} \end{center} \label{newlf} \end{figure} One of the potential problems of the luminosity functions obtained from the SDSS results from the fact that the integration time is fixed and, consequently, the S/N ratio depends on the brightness of the source. This can lead to large uncertainties in the determination of the parameters of faint white dwarfs, and may introduce systematic errors \cite{limo10}. Fortunately, a completely independent luminosity function has been obtained \cite{rowe11} from the SuperCOSMOS Sky Survey, which culls white dwarfs using their proper motion. 
As can be seen in Fig.~\ref{newlf}, this luminosity function (hereafter called RH-LF) does not excessively deviate from the HA-LF at low luminosities, thus providing some hope that these luminosity functions are not affected by large systematic effects. However, for bright white dwarfs this luminosity function suffers from the same drawbacks as the HA-LF and the DG-LF luminosity functions. Thus, at bright luminosities it is better to use the KZ-LF data. Because of the overlap in the velocity distributions, the luminosity functions obtained from reduced proper motion methods are in fact a superposition of thin and thick disc objects. It has been recently shown \cite{rowe11} that, in principle, it is possible to separate both populations using kinematic arguments. This technique yielded, for the first time, luminosity functions for both populations in a self-consistent way, thus offering the possibility to provide interesting insight into the sequence of events that led to the formation of the Galactic disk --- see below. \section{An overall view of white dwarf cooling} Since white dwarfs are degenerate objects, they cannot obtain energy from nuclear burning. Therefore, their evolution can be considered as a simple cooling process. Globally, the evolution of their luminosity can be written as: \begin{equation} \label{eener} L+L_{\nu} =- \int^{M_{\rm WD}}_0 C_{\rm v}\frac{dT}{dt}\,dm - \int^{M_{\rm WD}}_0 T\Big(\frac{\partial P}{\partial T}\Big)_{V,X_0}\frac{dV}{dt}\,dm +\;\; (l_{\rm s}+e_{\rm s}) \dot{M}_{\rm s} \end{equation} where the l.h.s. of the equation represents the sinks of energy, photons and neutrinos, while the r.h.s. contains the sources of energy: the heat capacity of the star, the compressional work, and the contribution of the latent heat release and of the energy released by gravitational settling upon crystallization, times the rate of crystallization, $\dot{M}_{\rm s}$ \cite{iser98}. 
This equation has to be complemented with a relationship connecting the temperature of the core with the luminosity of the star. The evolution of white dwarfs from the planetary nebula phase to their disappearance can be roughly divided into four stages: \textbf{Neutrino cooling:} The range of luminosities of this phase is $\log (L/L_\odot) > -1.5$. This stage is very complicated because of the dependence on the initial conditions of the newly born star as well as on the complex and not yet well understood behavior of the hydrogen envelope. If the hydrogen layer is smaller than a critical value, $M_{\rm H} \le 10^{-4}$ $M_\odot$, nuclear burning via the pp--reactions quickly drops as the star cools down and never becomes dominant. Since asteroseismological observations seem to constrain the size of $M_{\rm H}$ well below this critical value, this source can be neglected. Fortunately, when neutrino emission becomes dominant, the different thermal structures converge to a unique one, ensuring the uniformity of the models with $\log (L/L_\odot)\le -1.5$. Furthermore, since the time necessary to reach this value is $\le 8\times 10^7$ years for any model, its influence on the total cooling time is negligible \cite{dant89}, except of course at large luminosities. \textbf{Fluid cooling:} This phase occurs at luminosities $-1.5 \ge \log (L/L_\odot) \ge -3$. The main source of energy is the gravothermal one. Since the plasma is not very strongly coupled ($\Gamma < 179$), its properties are reasonably well known. Furthermore, the flux of energy through the envelope is controlled by a thick non-degenerate layer with an opacity dominated by hydrogen (if present) and helium, and weakly dependent on the metal content. The main source of uncertainty is related to the chemical structure of the interior, which depends on the adopted rate of the $^{12}$C$(\alpha,\gamma)^{16}$O reaction and on the treatment given to semiconvection and overshooting. 
If this rate is high, the oxygen abundance is higher in the center than in the outer layers, thus resulting in a reduced specific heat at the central layers of the star, where the oxygen abundance can be as high as $X_{\rm O}=0.85$ \cite{sala97}. \begin{figure} \vspace*{9 cm} \begin{center} \special{psfile=isern_f3.eps hscale=70 vscale=70 voffset=-120 hoffset=25} \caption{Oxygen profile of a $0.61\, M_{\odot}$ white dwarf at the beginning of solidification (solid line) and when $\sim 15\%$ (dashed line) and $\sim 85\%$ (dotted line) of the mass has already crystallized.} \end{center} \label{fig3} \end{figure} \textbf{Crystallization:} White dwarfs with $\log (L/L_\odot) < -3$ are expected to experience a first-order phase transition, and their deep cores crystallize at these luminosities. Crystallization introduces two new sources of energy: latent heat and sedimentation. In the case of Coulomb plasmas, the latent heat is small, $\sim k_{\rm B} T_{\rm s}$ per nucleus, where $k_{\rm B}$ is the Boltzmann constant and $T_{\rm s}$ is the temperature of solidification. Its contribution to the total luminosity is small, $\sim 5$\%, but not negligible \cite{shav76}. During the crystallization process, the equilibrium chemical compositions of the solid and liquid plasmas are not equal. Therefore, if the resulting solid flakes are denser than the liquid mixture, they sink towards the central region. If they are lighter, they rise upwards and melt where the solidification temperature, which depends on the density, becomes equal to that of the isothermal core. The net effect is a migration of the heavier elements towards the central regions with the subsequent release of gravitational energy \cite{moch83,iser97}. Of course, the efficiency of the process depends on the detailed chemical composition and on the initial chemical profile, and it is maximum for a mixture made of half oxygen and half carbon uniformly distributed throughout the star \cite{iser00}. 
An additional source of energy that has to be taken into account is the gravitational diffusion of $^{22}$Ne synthesized from the initial content of $^{12}$C, $^{14}$N and $^{16}$O during the He-burning phase \cite{garc08}. \textbf{Debye cooling:} When almost all the star has solidified, the specific heat follows Debye's law. However, the outer layers still have very large temperatures as compared with the Debye temperature and, since their total heat capacity is still large enough, they prevent the sudden disappearance of the white dwarf, at least for the case of white dwarfs with thick hydrogen envelopes \cite{dant89}. \section{Cooling sequences} In this work we have adopted the BASTI models \footnote{These models can be downloaded from http://www.oa-teramo.inaf.it/BASTI} \cite{sala10}. These models can follow the diffusion of the different chemical species, convective mixing, residual nuclear burning and all the phenomena related to the crystallization process, and the evolutionary ages are in excellent agreement with other recent calculations \cite{rene10}. The parameters for the envelopes adopted here are $q_{\rm He} = 10^{-2} M_{\rm WD}$ and $q_{\rm H} = 10^{-4} M_{\rm WD}$ for the DAs and $q_{\rm He} = 10^{-3.5} M_{\rm WD}$ for the non-DAs. The choice of the chemical composition of the white dwarf interior is of critical importance, since all the factors influencing the cooling rate, specific heat, neutrino emission, crystallization temperature, sedimentation and so on, depend on the detailed chemical structure. In the present work, the chemical profile has been obtained assuming solar metallicity, convective overshooting during the main sequence and semiconvection during central He-burning, while the breathing pulses occurring at the end of the core He-burning have been suppressed \cite{sala10}. The adopted rate for the $^{12}$C$(\alpha,\gamma)^{16}$O reaction was that of Ref.~\cite{kunz02}. 
\begin{figure} \vspace*{8 cm} \begin{center} \special{psfile=isern_f4.eps hscale=70 vscale=70 voffset=-135 hoffset=0} \caption{Time evolution of the luminosity of a $0.61\, M_{\odot}$ white dwarf. The solid line corresponds to a DA model in which phase separation was included. The dotted line corresponds to the same model but disregarding phase separation. The dashed and the dotted-dashed lines represent, respectively, a non-DA model when separation is included and disregarded.} \end{center} \label{fig4} \end{figure} Fig.~\ref{fig3} displays the oxygen profiles for the CO core of a $\sim 0.6\, M_\odot$ white dwarf progenitor obtained just at the end of the AGB phase (solid line). The inner part of the core, with a constant abundance of $^{16}$O, is determined by the maximum extension of the central He-burning convective region, while beyond this region the oxygen profile is built when the thick He-burning shell is moving towards the surface. Simultaneously, gravitational contraction increases its temperature and density and, since the ratio between the $^{12}$C$(\alpha,\gamma)^{16}$O and the $3\alpha$ reaction rates is smaller for higher temperatures, the oxygen mass fraction steadily decreases in the external part of the CO core \cite{sala97}. Fig.~\ref{fig3} also displays how the abundance of oxygen gradually increases in the inner regions as the crystallization front advances in mass. The region with a flat chemical profile placed just above the solidification front is due to the convective instability induced upon crystallization. The first calculation of a phase diagram for CO mixtures was done several years ago \cite{stev80} and resulted in a eutectic shape. This result was a consequence of the assumption that the solid was entirely random. Later on \cite{segr94}, it was found that the CO phase diagram was of the spindle form. Because of this, the solid formed upon crystallization is richer in oxygen than the liquid and therefore denser. 
For a $0.6\, M_{\odot}$ white dwarf with equal amounts of carbon and oxygen, $\delta \rho /\rho \simeq 10^{-4}$. Therefore the solid settles down at the core of the star and the lighter liquid left behind is redistributed by Rayleigh-Taylor instabilities \cite{stev80,moch83,iser97,alth12}. The result is an enrichment of oxygen in the central layers and its depletion in the outer ones. Even when rotation is considered, convection is an efficient mechanism to redistribute the carbon-rich fluid away from the crystallization front, so that the liquid phase can be considered well mixed \cite{iser97}. Finally, we mention that the characteristic cooling time that appears in Eq.~(\ref{ewdlf}) not only depends on the internal energy sources or sinks but also on the photon luminosity which, in turn, depends on the transparency of the envelope. Since non-DA models are more transparent than the DA ones, they cool down much more rapidly, as can be seen in Fig.~\ref{fig4}. \section{The age of the Galactic disk} \begin{figure} \vspace*{8 cm} \begin{center} \special{psfile=isern_f5.eps hscale=70 vscale=70 voffset=-135 hoffset=0} \caption{White dwarf luminosity function for different star formation rates and different ages of the Galactic disk. The symbols corresponding to the observational data are those of Fig.~\ref{newlf}. Dashed lines, from left to right, constant SFR, and $t_{\rm disk} = 10, \, 13$~Gyr. Solid line, $ \Psi \propto (1+ \exp[(t-t_0)/\tau])^{-1}$, $\tau=3$~Gyr, $t_0= 10$ and $t_{\rm disk} = 13$ Gyr. Dotted line, $\Psi \propto \exp(-t/\tau)$, $\tau=3$~Gyr, $t_{\rm disk} = 13$~Gyr.} \end{center} \label{fig5} \end{figure} A common picture of the formation of the Milky Way is that spiral galaxies form as a consequence of the gas cooling inside a spinning dark matter halo. 
In a first stage, the gas collapses on a dynamical time scale that lasts for several hundred million years, leaving behind a spherical stellar halo, and settles down into a disk from where stars form. Galactic discs are structures that are easily destroyed by mergers with other structures of similar mass. Therefore, if a disk appears almost undamaged, it means that it has not suffered strong mergers since it was born and that its life has been reasonably quiet. It is also possible for disks formed early in the life of a spiral galaxy to have been heated by minor mergers, leading to the formation of a thick disk able to produce a new thin disk within it. According to this picture, the sequence of events leading to the formation of the presently observed structure of the Milky Way could have been the following one \cite{reid05}: i) Formation of the primitive halo at $t \sim 12 - 13$ Gyr, ii) Episodes of minor mergers of satellite systems at $t \sim 11-12$ Gyr, iii) Formation of the disk at $t \sim 10-11$ Gyr, iv) A major merger produces the formation of a thick disk at $ t \sim 9-10$ Gyr, and v) Formation of the thin disk at $ t \sim 8 $ Gyr. \begin{figure} \vspace*{8 cm} \begin{center} \special{psfile=isern_f6.eps hscale=70 vscale=70 voffset=-135 hoffset=0} \caption{White dwarf luminosity function of the thin (upper curves) and thick (lower curves) disks \cite{rowe11}. The data corresponding to the thick disk have been shifted by $-1$ for the sake of clarity. The theoretical functions have been computed for a constant star formation rate and ages of 10, 11 and 13~Gyr (from left to right, respectively).} \end{center} \label{fig6} \end{figure} In the case of the halo, the age is essentially determined by the age of the system of globular clusters, which at present is estimated to be $\sim 13$~Gyr. In the case of the disk, several indicators are used. 
The main ones are the ages of F and G stars, the luminosity function of white dwarfs, radioactive clocks and the ages of old open clusters. Some of these ages are obtained from objects in the solar neighborhood, as is the case for white dwarfs, assuming they are representative of the Galaxy as a whole, which is not the case. On the contrary, old open clusters are distributed all over the Galaxy and are considered more representative. However, since their lifetime is relatively short (several $10^8$ years), many of them could have been destroyed, so in fact they only provide a lower limit to the age of the disk; for this reason the local indicators continue to be extremely useful. Incidentally, it is worth noting that one of the key points in the determination of the age of the disk is NGC~6791, a very old, extremely metal-rich Galactic cluster. The age of this cluster estimated from the main-sequence turn-off method was different from the one estimated from the termination of the white dwarf cooling sequence. When the energy release due to the gravitational diffusion of $^{22}$Ne and the settling of oxygen upon crystallization are included, both ages coincide and turn out to be 8~Gyr \cite{garc10}. There are two important properties of the luminosity function that deserve a comment. The first one is that, after normalization, the bright part of the luminosity function $( M_{\rm bol} \le 14)$ is almost independent of the star formation rate \cite{iser09}, $N(l) \propto \langle\tau_{\rm cool}\rangle$, unless a burst of star formation occurred very recently. The second important fact that needs to be considered is that Eq.~(\ref{ewdlf}) does not satisfy Picard's theorem for the inversion of integral equations. Thus, the star formation rate cannot be directly obtained, as the uniqueness of the solution cannot be guaranteed and the final result depends on the trial function used to fit the observations. 
The first property is clearly illustrated in Fig.~\ref{fig5}, where the luminosity functions obtained using different SFRs are almost coincident in the region $ 6 \le M_{\rm bol} \le 14$. The corollary is that the BASTI models reproduce reasonably well the evolution of white dwarfs in this region. The age of the disk depends on the form adopted for the star formation rate. If a constant rate is adopted, the cutoff is compatible with an age of $\sim 13$ Gyr, i.e. the disk would have formed a short time after the primitive halo. If an exponentially declining star formation rate is adopted, it is necessary to reduce the age of the disk to $\sim 11$ Gyr (not shown in the figure) to adjust the position of the cut-off. A good fit can also be obtained if an almost constant rate lasting for the last $\sim 10$~Gyr, preceded by an exponentially growing star formation activity, is adopted. This is the same as saying that the disk started to form from the center to the periphery. Notice that the cool end of the luminosity function still shows a significant dispersion in the values, and to discriminate among these possibilities it will be necessary to improve its accuracy at low luminosities. Obviously, the determination of the ages of the thin and thick disks would be extremely helpful to test the previously described sequence of events. Fig.~\ref{fig6} displays the observed luminosity functions of the thin and thick disks, as well as the theoretical predictions assuming a constant star formation rate and different ages. The most striking feature is that both structures look as if they were coeval, since the maximum of both distributions lies approximately at the same magnitude, $M_{\rm bol} \sim 15$. Furthermore, both populations seem to be rather old, $\sim 11$~Gyr, in agreement with the luminosity functions obtained using the SDSS. 
Certainly, it is premature to extract conclusions, since the cut-off of the thick disk is not well defined and the modeling of very old white dwarfs is still plagued by uncertainties. Nevertheless, if this result turns out to be correct, it could naturally be interpreted as meaning that there was no difference between the times at which the thin and the thick disks formed, and moreover that this unique disk formed quite early in the life of the Milky Way. This, if correct, could give support to the recent discovery that sub-populations with similar [$\alpha$/Fe]$-$[Fe/H] have a smooth distribution of scale heights, thus suggesting that effectively there is no distinctive thick disk population. It is also important to notice here that in deriving the luminosity function it has been assumed that no vertical or radial migration of stars is effective. If these effects were important, it would be necessary to compute the luminosity function in the context of a complete numerical simulation of the Galaxy. \section{Conclusions} The use of white dwarfs as cosmochronometers has experienced noticeable advances during the last years, both from the theoretical and the observational points of view, and has become a reliable tool to measure the age of an ensemble population of stars if the conditions are well defined, as is the case, for instance, of NGC~6791. Furthermore, having for the first time separate luminosity functions of the thin and thick disks opens new possibilities to understand the origin and evolution of the Milky Way. However, several unsolved problems still remain. From the theoretical point of view, there are still noticeable differences among the different cooling tracks at low luminosities. These differences are probably due to the use of different boundary conditions \cite{rohr12} and different sizes and physics adopted for the envelope. 
An additional problem is our incomplete understanding of the origin and evolution of the DA/non-DA character, which could introduce some uncertainties in the determination of the theoretical luminosity function. From the observational point of view, the main problem resides in the still poor determination of the luminosity function of cool white dwarfs, as well as in the criteria to efficiently separate the different populations (thin disk, thick disk and spheroid) of Galactic white dwarfs. \vspace{0.1cm} \noindent {\sl Acknowledgements.} This research was partially supported by MICINN grants AYA2011--24704 and AYA2011--23102, by the ESF EUROGENESIS project (MICINN grants EUI2009--04167 and 04170), by the European Union FEDER funds and by the AGAUR. \bibliographystyle{epj}
\section{Introduction} \input{sections/Intro} \section{Emulator Design} \label{sec:design} \input{sections/design} \subsection{Base Model} \label{subsec:analytic_emu} \input{sections/analyticEmu} \subsection{Nonlinear Boost Emulator} \label{subsec:boost_emu} \input{sections/boostEmu} \subsection{Neural Networks as Emulators} \label{subsec:NNs} \input{sections/neuralnets} \section{Emulator Accuracy} \label{sec:accuracy} \input{sections/accuracy} \section{Mock Full Shape Analyses} \label{sec:fullshape} \input{sections/fullshape} \section{Discussion} \label{sec:disc} \input{sections/disc} \section{Conclusions} \label{sec:conclusion} \input{sections/conclusions} \section*{Acknowledgement} \input{sections/acknowledgement} \section*{Data Availability} \input{sections/data_av} \bibliographystyle{mnras} \subsubsection{Halo Occupation Distribution Model} A key ingredient when calculating the galaxy power spectrum in the HM framework is the halo occupation distribution (HOD). A HOD model describes the probability that a dark matter halo of mass $M$ hosts $N$ galaxies of a given type. For this work we use the popular five-parameter \citet{zheng_halo_2009} model. This model is split into two terms describing the occupation of central and satellite galaxies separately, where the expected central occupation is modelled as a smoothed step-function \begin{equation} \langle N_\mathrm{cen}|M \rangle = \frac{1}{2}\mathrm{erfc}\left[\frac{\ln{M_\mathrm{cut}}-{\ln{M}}}{\sqrt{2}\sigma}\right]\, , \end{equation} and the expected number of satellite galaxies is modelled as a power law \begin{equation} \langle N_\mathrm{sat}|M \rangle = \begin{cases} 0 & \text{if}\ M < \kappa M_\mathrm{cut}\, , \\ \left(\frac{M-\kappa M_\mathrm{cut}}{M_1}\right)^\alpha & \text{if}\ M > \kappa M_\mathrm{cut}\, . 
\end{cases} \end{equation} $M_\mathrm{cut}$ defines the minimum mass for a halo to host a central galaxy, $\sigma$ defines to what extent the central step-function is smoothed, the product $\kappa M_\mathrm{cut}$ defines the minimum mass for a halo to host a satellite, $M_1$ defines the typical mass for a halo to host a satellite, and $\alpha$ defines how the expected number of galaxies increases with mass. We also impose the condition that a halo cannot host a satellite galaxy without first hosting a central galaxy, such that the expected total occupation is given by \begin{equation} \langle N|M \rangle = \langle N_\mathrm{cen}|M \rangle(1 + \langle N_\mathrm{sat}|M \rangle). \end{equation} In order to calculate $P_{gg,1h}(k)$ (equation \ref{eq:p_1h}) we need to compute the expected number of pairs for a given halo mass, $\langle N(N-1)|M \rangle$. As shown in section 3.1 of \citet{zheng_theoretical_2005}, $\langle N(N-1)|M \rangle$ can be written as \begin{equation} \langle N(N-1)|M \rangle = 2\langle N_\mathrm{cen} N_\mathrm{sat}|M \rangle+\langle N_\mathrm{sat}(N_\mathrm{sat}-1)|M \rangle. \label{eq:cs_pairs} \end{equation} Under the assumption that the number of satellite galaxies follows a Poisson distribution we can write $\langle N_\mathrm{sat}(N_\mathrm{sat}-1)|M \rangle=\langle N_\mathrm{sat}|M \rangle^2$, and following \citet{miyatake_cosmological_2020} we approximate $\langle N_\mathrm{cen} N_\mathrm{sat}|M \rangle=\langle N_\mathrm{cen}|M \rangle \langle N_\mathrm{sat}|M \rangle$ under the central condition. This is a simple HOD model; in this work it is used in the emulated base model, in addition to being used to generate the nonlinear power spectra that will train the boost emulator. It should be noted that there is no requirement for the galaxy--halo connection model to be identical for the emulated base model and the nonlinear boost component emulator. 
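As an illustrative sketch (not the \texttt{matryoshka} implementation itself), the occupation functions and pair counts above translate directly into code; the default parameter values used here are the fiducial ones of table \ref{tab:HOD_param}.

```python
import math

def n_cen(M, logM_cut=13.04, sigma=0.94):
    """<N_cen|M>: smoothed step in ln M."""
    x = (logM_cut - math.log10(M)) * math.log(10.0)
    return 0.5 * math.erfc(x / (math.sqrt(2.0) * sigma))

def n_sat(M, logM_cut=13.04, logM1=14.05, kappa=0.93, alpha=0.97):
    """<N_sat|M>: power law above the satellite cutoff kappa*M_cut."""
    M_cut, M1 = 10.0**logM_cut, 10.0**logM1
    if M <= kappa * M_cut:
        return 0.0
    return ((M - kappa * M_cut) / M1)**alpha

def n_tot(M):
    """<N|M> with the central condition: satellites require a central."""
    return n_cen(M) * (1.0 + n_sat(M))

def n_pairs(M):
    """<N(N-1)|M> for Poisson satellites, with the factorisation
    <N_cen N_sat|M> ~ <N_cen|M><N_sat|M> used in the text."""
    nc, ns = n_cen(M), n_sat(M)
    return 2.0 * nc * ns + ns**2
```

By construction $\langle N_\mathrm{cen}|M_\mathrm{cut}\rangle = 1/2$, and \texttt{n\_sat} vanishes below $\kappa M_\mathrm{cut}$, matching the two equations above.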
The only requirement is that it is possible to relate the two models such that power spectrum predictions produced by the base model and those coming from the simulation agree on large scales (see section \ref{subsec:boost_absorb}). In section \ref{sec:fullshape} we conduct a series of mock power spectrum FS analyses for a BOSS CMASS \citep{dawson_baryon_2012} style power spectrum; as such, we define the extent of the HOD parameter space to be the same as that from \citet{kwan_cosmic_2015}. This parameter space was designed to cover the HOD models from the analyses of BOSS CMASS galaxies by \citet{white_clustering_2011}. The ranges of the five HOD parameters are given in table \ref{tab:HOD_param}. \begin{table} \centering \begin{tabular}{c c c} \hline Parameter & Range & Fiducial Value \\ \hline $\log{M_\mathrm{cut}}$ & $[12.9, 13.78]$ & 13.04 \\ $\log{M_1}$ & $[13.5, 14.7]$ & 14.05 \\ $\sigma$ & $[0.5, 1.2]$ & 0.94 \\ $\kappa$ & $[0.5, 1.5]$ & 0.93 \\ $\alpha$ & $[0.5, 1.5]$ & 0.97 \\ \hline \end{tabular} \caption{Table defining the ranges for each of the HOD parameters used to train the nonlinear boost emulator, along with the parameters used to calculate the mock observations for the fiducial full shape analysis in section \ref{sec:fullshape}. The extent of the HOD parameter space matches that of \citet{kwan_cosmic_2015} and is designed to cover the results of \citet{white_clustering_2011}. These results from \citet{white_clustering_2011} also define our fiducial HOD parameters.} \label{tab:HOD_param} \end{table} \subsubsection{Analytic Training Spectra} \label{subsubsec:fake_sim} For this work we aim to introduce \texttt{matryoshka}, and demonstrate how the \texttt{matryoshka} emulated base model can be focused on a suite of simulations to be used alongside the nonlinear boost component emulator that has been trained on data coming from that same suite of simulations.
Generating this training data from simulations is cheaper than running the simulations themselves, but still comes at considerable computational cost. With this in mind we opt to generate the training data for the nonlinear boost component emulator with HALOFIT \citep{takahashi_revising_2012}. Nonlinear training data is calculated via equation \ref{eq:1h+2h}, with $P_L$ in equation \ref{eq:p_2h} being replaced with the nonlinear matter power spectrum, with nonlinearities introduced via HALOFIT. This allows us to very quickly generate data for the nonlinear boost component emulator, and demonstrate \texttt{matryoshka}. We take steps to replicate the scenario where simulated training data is used, such as introducing noise into the HALOFIT training data that would be present in simulated training data, and only generating training data for the 40 Aemulus training cosmologies. In future work we will train this nonlinear boost component emulator directly on simulated training data. Following a similar procedure to \citet{zhai_aemulus_2019} we sample the HOD parameter space 50 times for each cosmology, resulting in 2000 training samples for the nonlinear boost component emulator. When calculating power spectra associated with these cosmological and HOD parameters we only include scales that would be accessible from a simulation box. The smallest $k$-mode is defined by the fundamental mode of the simulation box \begin{equation} k_\textrm{fund} = \frac{2\pi}{L_\textrm{box}}, \end{equation} with $L_\textrm{box}$ being the length of one side of the simulation box. For a simulation from the Aemulus suite with $L_\textrm{box}=1.05 \ \textrm{Gpc} \ h^{-1}$ we have $k_\textrm{fund} \approx 6 \times 10^{-3} \ h \ \mathrm{Mpc}^{-1}$. To determine the highest $k$-mode that we want to emulate we consider the smallest scales that can be accurately represented by a dark matter only (DMO) N-body simulation. 
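These scale limits are quick to evaluate numerically; a sketch using the Aemulus box size quoted above (the $N_\mathrm{mesh}=1024$ measurement mesh is an assumption, consistent with the value adopted for the training data binning):

```python
import math

L_box = 1050.0   # Mpc/h, side length of an Aemulus simulation box
N_mesh = 1024    # assumed FFT mesh for the power spectrum measurement

k_fund = 2.0 * math.pi / L_box    # fundamental mode: largest accessible scale
k_nyq = N_mesh * math.pi / L_box  # Nyquist frequency of the measurement mesh

print(f"k_fund   = {k_fund:.2e} h/Mpc")    # ~6e-3 h/Mpc
print(f"k_nyq/2  = {k_nyq / 2.0:.2f} h/Mpc")
```

Half the Nyquist frequency is a common conservative upper limit for trusting mesh-based power spectrum measurements, which is why it appears as the upper edge of the emulated $k$-range.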
Many works have studied the impact of baryonic effects on the dark matter power spectrum, by comparing the dark matter power spectrum measured from a DMO simulation and hydrodynamic counterpart \citep{schneider_quantifying_2019,arico_modelling_2020,debackere_impact_2020}, and although there is some disagreement between different hydrodynamical codes and baryonic feedback models as to how strong the impact is on the dark matter power spectrum, there is general agreement that for scales where $k \gtrsim 1 \ h \ \mathrm{Mpc}^{-1}$ the impact on the dark matter power spectrum is $>1\%$. With this in mind we consider $k\approx 1 \ h \ \mathrm{Mpc}^{-1}$ the smallest possible scale that can be accurately modelled without including baryonic effects in the simulation and thus the smallest scale we want to emulate. With these limitations that would come from simulated training data in mind we limit the analytic training data coming from HALOFIT to 127 $k$-values from $0.012\ h \ \mathrm{Mpc}^{-1}\lesssim k \lesssim 1.152\ h \ \mathrm{Mpc}^{-1}$\footnote{These values correspond to bin centres from a hypothetical power spectrum measurement covering $k_\mathrm{fund.}<k<k_\mathrm{Nyq.}/2$, where $k_\mathrm{Nyq.}=N_\mathrm{mesh} \pi / L_\mathrm{box}$ and $N_\mathrm{mesh}=1024$ such that the smallest emulated scale covers at least $k_\mathrm{Nyq.}/2$.}. As mentioned above, simulated training data comes with noise. One source of noise is the sample variance of the simulations in the suite. There are methods to reduce the impact of this sample variance on the training data. In the case of the Aemulus simulations the initial conditions for each of the training simulations are generated with different random seeds, which prevents the emulator learning the noise coming from a specific set of initial conditions.
Running phase-matched simulations effectively removes this sample variance \citep{angulo_cosmological_2016,chuang_unit_2019,klypin_suppressing_2020}; however, this method doubles the computational cost of producing the training simulations without increasing the density of the sampling in the training space. It is not clear if a suite of phase-matched simulations would outperform a suite with random phases but with twice the sampling density. Another source of noise in simulated training data comes from the procedure used to populate the simulation with galaxies. Populating simulated dark matter halos according to an HOD is a random process, as such multiple realisations of the same HOD will result in variation of the measured power spectrum. To try to remove some of this HOD realisation noise, multiple realisations are normally generated and the power spectrum from each of these realisations is averaged. This averaging will remove some but not all of the HOD realisation noise. To approximate this leftover noise we take the average of 10 random draws from a multivariate Gaussian with mean being the smooth nonlinear power spectrum calculated with HALOFIT and covariance given by \begin{equation} C_{ii} = \frac{(2\pi)^3}{V}\left[\frac{2(P(k_i) + n_g^{-1})^2}{4\pi k_i^2 dk}\right]. \label{eq:gauss_cov} \end{equation} \subsection{Base model} To test the accuracy of the emulated components of the base model we use the test subsample, make predictions for each sample in the test set and compare the predictions to the results of calculating the components analytically. These comparisons are shown in figure \ref{fig:linear_accu_comp}, where we show the percentage errors for each of the components of the base model. The green and blue regions show the 68\% and 95\% confidence intervals (CIs) respectively. We can see that these shaded regions are all centred on 0\%, indicating that all component emulators produce unbiased predictions on average.
We can also see that all component emulators are producing predictions with sub-percent accuracy for almost all 2000 samples in the test set (the prediction accuracy is better than 0.1\% for the 68\% CI). In the top two panels of figure \ref{fig:linear_accu_comp} there is a sample that performs considerably worse than the others. This sample represents a very extreme cosmology (with $w_0\approx-0.40$); even in this extreme case, the highest level of prediction error from the transfer function component emulator is $\approx 1.25\%$. \begin{figure} \includegraphics[height=21cm,keepaspectratio]{plots/component_err-v3-test.png} \caption{Percentage errors on the test set for each of the components of the base model. The panels from top to bottom correspond to the transfer function, growth factor, mass variance, and logarithmic derivative of the mass variance. The grey lines show the percentage error for each sample in the test set, the shaded regions show the 95\% and 68\% confidence intervals.} \label{fig:linear_accu_comp} \end{figure} The top panel of figure \ref{fig:linear_accu_comp} clearly shows the highest levels of error in predictions around the BAO scale ($k \sim 0.2 \ h \ \textrm{Mpc}^{-1}$), where the BAO wiggles make these scales of $T(k)$ particularly difficult to predict. A higher level of accuracy could be achieved on these scales by increasing the number of $k$-modes around these scales, however this would mean that these scales have a greater contribution to the loss function and as such the weights of the NN would be optimised to preferentially recover these scales. For this work we aim to obtain a prediction accuracy of $<1\%$ at 68\% CI (see appendix \ref{sec:accu_reqs} on accuracy requirements) on the nonlinear galaxy power spectrum from \texttt{matryoshka}. As is shown in section \ref{sec:accuracy} this is achievable without exploring this solution and therefore we leave it to future work.
\begin{figure*} \includegraphics[width=\linewidth]{plots/response_transfer.png} \caption{Plots showing the response of the transfer function to relevant parameters of our cosmological model. All panels except the bottom right show the effect of varying a single parameter of the model, the bottom right panel shows five random parameter sets. In all panels the ratio is with respect to the transfer function calculated for parameters that correspond to the mean of the training space shown in figure \ref{fig:linear_emu_space}. The solid lines show responses of transfer functions calculated with \texttt{CLASS}, the crosses show the emulator predictions for the same parameters. In all except the lower right panel the lines and crosses are coloured by the value of the parameter that is being varied, in the lower right panel the colour corresponds to the index of the test set from which the parameter sets were randomly selected.} \label{fig:transfer_grad} \end{figure*} Figure \ref{fig:transfer_grad} shows the response of the transfer function to various parameters of our cosmological model, comparing transfer functions calculated with \texttt{CLASS} to the corresponding emulator predictions. Figure \ref{fig:transfer_grad} shows that the emulator recovers the response of the transfer function well for all the relevant parameters of our model; however, there is some indication of a slight bias in the prediction of the transfer function on small scales for increasingly negative values of $w_0$. This bias is $\ll 1\%$ and only occurs for extreme values of $w_0$, so we do not expect this small bias to have a significant impact when using the emulator to make predictions for less extreme cosmologies. \subsection{Nonlinear Boost} To test the performance of the nonlinear boost emulator we calculate nonlinear boosts for the seven Aemulus test cosmologies.
As with the training data we generate power spectra for 50 different HOD parameter sets for each cosmology, which results in 350 test samples. We generate two versions of this test set: the first contains the same level of noise as the training data and the second contains no noise. The reason for generating these two versions of the same test set becomes clear when looking at figure \ref{fig:boost_accu}, which shows the percentage error on the predictions from the nonlinear boost component emulator. The green shaded region shows the prediction error (68\% CI) on the noise free test set and the green dashed line shows the prediction error on the noisy test set. We can see that for $k \gtrsim 0.8 \ h \ \mathrm{Mpc}^{-1}$ the green dashed line and shaded region agree, while on larger scales we can see that the prediction error calculated using the noisy test set very closely follows the noise level of the noisy test set. This indicates that the calculated prediction error is dominated by the noise on these scales, which is confirmed by looking at the prediction error on the noise free test set on the same scales. The NNs are able to predict the nonlinear boost at higher accuracy than the noise level of the training data because the noise is random and therefore averages out across the training set. When using a simulation suite it is not possible to generate a noise free test set. The test simulations in the Aemulus suite do however have multiple realisations, such that the noise level in the test set generated from these simulations is lower than that of the training set, allowing for an accurate assessment of the prediction accuracy. \begin{figure} \includegraphics[width=\columnwidth]{plots/boost_err.png} \caption{Prediction error from the nonlinear boost component emulator compared to the statistical error of our fiducial mock and the noise level of the training data.
The green shaded region represents the true $1\sigma$ prediction error of the emulator, the green dashed line represents the $1\sigma$ prediction error measured when evaluating the emulator predictions with noisy test data. The orange solid line shows the $1\sigma$ noise level in the nonlinear boost training data, and the red dashed line shows the inverse signal to noise ratio (SNR) of our fiducial mock observation in percent.} \label{fig:boost_accu} \end{figure} From figure \ref{fig:boost_accu} we can see that the nonlinear boost component emulator has $\lesssim 0.5\%$ prediction error from $0.01 \ h \ \mathrm{Mpc}^{-1} \lesssim k \lesssim 1 \ h \ \mathrm{Mpc}^{-1}$. The level of prediction error is largest on the smallest scales. This is where the dynamic range of the nonlinear boost is largest. This prediction error gets smaller going to larger scales, however there is a spike in prediction error at $k \sim 0.1 \ h \ \mathrm{Mpc}^{-1}$. This spike in prediction error is a result of the noise level of the training set having an impact on the NNs' ability to learn the nonlinear boost on these scales. To reduce the impact of this when producing predictions for the nonlinear galaxy power spectrum (by combining the predictions of the boost with the base model prediction), the predictions of the nonlinear boost can be smoothly ``stitched'' with those that we expect from linear theory (that being $B(k)\approx1$ on large scales) as was done in \citet{kobayashi_accurate_2020}. We are able to produce predictions for the nonlinear galaxy power spectrum with \texttt{matryoshka} with $<1\%$ error (at 68\% CI) without exploring this solution, as such we leave this for future work. We can examine the relative contributions from the emulated base model and the nonlinear boost component emulator to the prediction error on the nonlinear galaxy power spectrum.
Figure \ref{fig:relative_err} compares the $1\sigma$ prediction errors on the nonlinear galaxy power spectrum to those on: the nonlinear boost, the emulated base model (which is calculated with predictions from all the base model component emulators), and the linear matter power spectrum (which is calculated with predictions from the transfer function component emulator). We can immediately see that the prediction error from the nonlinear boost is dominating on all scales. We can also see that on small scales the contribution to the error from the linear power spectrum (and thus the transfer function) is lower than the emulated base model. This is to be expected as on these small scales the 1-halo term (equation \ref{eq:p_1h}) is dominating, and the prediction errors from the other base model component emulators are more significant. The prediction accuracy of the nonlinear boost component emulator (the dominating component of the \texttt{matryoshka} prediction error) is compared to the inverse signal to noise ratio of our fiducial mock in figure \ref{fig:boost_accu} (with a volume of $(1 \ \mathrm{Gpc} \ h^{-1})^3$ and number density of $\sim 6\times 10^{-4} \ (\mathrm{Mpc}^{-1} \ h)^3$). We can see that the prediction error from the nonlinear boost component emulator is significantly lower than the statistical error of our fiducial mock on all scales considered. This implies that the achieved level of prediction accuracy is high enough to produce predictions for the power spectrum that are consistent with the truth within the statistical errors of our mock analyses setup (see appendix \ref{sec:accu_reqs} for a discussion on how the required prediction accuracy depends on the sample considered). \begin{figure} \includegraphics[width=\columnwidth]{plots/relative_err_nonlinear_power-log-PL-v2.png} \caption{Plot showing the relative contributions to the prediction error on the nonlinear galaxy power spectrum $P_{nl}(k)$.
The prediction error from each of the component emulators contributes to the overall prediction error on $P_{nl}(k)$. For simplicity we have only shown the error coming from the nonlinear boost $B(k)$ component emulator, the error in the base model $P^\mathrm{base}(k)$ (which is calculated with predictions from all the base model component emulators), and the error in the prediction of the linear matter power spectrum $P_L(k)$ (which is calculated with predictions from the transfer function component emulator).} \label{fig:relative_err} \end{figure} Figure \ref{fig:nonlinear_grad} shows the response of the nonlinear galaxy power spectrum to the cosmological and HOD parameters of our model. We can see that the response is generally well recovered by \texttt{matryoshka}, particularly on large scales, however the response is not well recovered on small scales for all parameters. This is most apparent in the response to $w_0$, where we can see that the response is under predicted on small scales. This effect is greater the further the value of $w_0$ deviates from $-1$. It should be noted that the response of the nonlinear galaxy power spectrum to $w_0$ is very small. The difference between the power spectra for $w_0=-1.15$ and $w_0=-0.8$ is $\sim 2\%$. This response is considerably smaller than that to any of the other parameters. The result of this is that the response to $w_0$ has the least impact on the loss function. This effect is exaggerated by the noise in the training data for the nonlinear boost component emulator. The $1\sigma$ noise level at $k=0.8 \ h \ \mathrm{Mpc}^{-1}$ is $\sim 0.2\%$, which is similar to the difference between power spectra calculated with $w_0=-1.15$ and $w_0=-1.11$.
\begin{figure*} \includegraphics[width=\linewidth]{plots/response_nonlinear_galaxy_power-ALL.png} \caption{Similar to figure \ref{fig:transfer_grad}, for the response of the nonlinear galaxy power spectrum to the cosmological and HOD parameters of our model.} \label{fig:nonlinear_grad} \end{figure*} \subsection{Inclusion of Additional Effects Through B(k)} \label{subsec:boost_absorb} The base model of \texttt{matryoshka} uses fitting functions for quantities such as the halo mass function and the concentration-mass relation. Furthermore, for this work we are ignoring redshift space distortions and effects such as halo exclusion. The base model of \texttt{matryoshka} has been designed to be used alongside simulations. Effects that cause $B(k)$ to differ significantly from unity on large scales will need to be included in the base model in future work. On the other hand, effects that predominantly impact small scales (that can be challenging to model analytically in some cases) do not necessarily need to be included, as they can effectively be absorbed into the prediction of $B(k)$. For example, redshift space distortions (RSD) can be modelled on large scales with a Kaiser factor \begin{equation} P_s^\mathrm{base}(k, \mu) = P(k)(1+f\mu^2)^2\ , \end{equation} while the impact of RSD on small scales is more challenging to model analytically. To make sure the small scale RSD is included in the \texttt{matryoshka} predictions without modelling them analytically we can decompose $P_s^\mathrm{base}(k, \mu)$ into multipoles \begin{equation} P_\ell^\mathrm{base}(k) = \frac{2\ell+1}{2}\int^1_{-1}L_\ell(\mu)P_s^\mathrm{base}(k, \mu)\,d\mu\ , \end{equation} with $L_\ell$ being the $\ell$th order Legendre polynomial. We can then create nonlinear boost component emulators for each multipole $B_\ell(k) = P_\ell(k) \ / \ P^\mathrm{base}_\ell(k)$.
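As a sanity check on this decomposition, the Kaiser multipoles have well-known closed forms ($P_0/P = 1 + 2f/3 + f^2/5$, $P_2/P = 4f/3 + 4f^2/7$, $P_4/P = 8f^2/35$), which a direct Gauss--Legendre evaluation of the multipole integral reproduces; a sketch (the growth-rate value is purely illustrative):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

def kaiser_multipole(ell, f, pk=1.0, n_mu=16):
    """(2l+1)/2 * integral_{-1}^{1} L_ell(mu) P(k) (1 + f mu^2)^2 dmu."""
    mu, w = leggauss(n_mu)           # quadrature is exact for these polynomials
    L_ell = Legendre.basis(ell)(mu)  # Legendre polynomial L_ell evaluated at mu
    return 0.5 * (2 * ell + 1) * np.sum(w * L_ell * pk * (1.0 + f * mu ** 2) ** 2)

f = 0.77  # illustrative linear growth rate
p0, p2, p4 = (kaiser_multipole(ell, f) for ell in (0, 2, 4))
```

Only the monopole, quadrupole, and hexadecapole are non-zero for the Kaiser form, so checking that odd multipoles vanish is another quick test of an implementation.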
The effect of RSD on small scales will already be included in the simulated $P_\ell(k)$ such that the neglected small scale RSD in the base model is captured by the prediction of the boost $B_\ell(k)$ \begin{equation} B_\ell(k) = B_{\ell,\mathrm{NL}}(k) + c_\ell(k)\ , \end{equation} where $B_{\ell,\mathrm{NL}}(k)$ would be the scale-dependent nonlinear boost, and $c_\ell(k)$ includes corrections to the base model that accommodate the neglected effects that predominantly impact small scales. It should be noted that in order to obtain the best possible prediction accuracy we would want to keep $B_\ell(k)$, and thus $c_\ell(k)$, as small as possible. Exactly what small scale effects to include in the base model will depend on the given application of \texttt{matryoshka} and is beyond the scope of this work. \subsection{\texttt{matryoshka} Python Package} \label{subsec:python_package} Alongside this paper we also publish the \texttt{matryoshka} Python package\footnote{\url{https://matryoshka-emu.readthedocs.io/en/latest/}}. This package includes all the weights for the NNs discussed in this paper, allowing them to be used without any requirement of re-training. The Python package has been developed such that the component emulators can be used in isolation. For example the transfer function component emulator can be loaded with \texttt{matryoshka.emulator.Transfer()}, and predictions can then be made with the \texttt{.emu\_predict()} method. The transfer function emulator makes predictions in $\sim 0.0004 \ \mathrm{s}$, and a nonlinear galaxy power spectrum prediction can be made in $\sim 0.1 \ \mathrm{s}$ (this is $\sim 3 \times$ faster than a transfer function prediction that can be made using \texttt{CLASS} with the accuracy settings implemented in \texttt{nbodykit}).
It should be noted that although we have focused on training \texttt{matryoshka} to be used alongside the Aemulus simulations, it is simple to re-train any of the base model component emulators based on different parameter spaces. Functions to generate training samples and re-train the component emulators will be provided in the \texttt{matryoshka} Python package. Many of the halo model functions in \texttt{matryoshka} are modified versions of those from the Python package \texttt{halomod} \citep{murray_thehalomod_2020}. In most cases these functions have been modified to allow \texttt{matryoshka} to make batch predictions more easily. \subsubsection{Base Model Component Emulation} We emulate four quantities that allow us to greatly increase the speed of the base model predictions. Those are the matter transfer function $T(k)$, the mass variance $\sigma(M)$, the logarithmic derivative of the mass variance $\frac{d\text{ln}\sigma(M)}{d\text{ln}M} \equiv \mathcal{S}(M)$, and the linear growth function $D(z)$. To achieve the best accuracy for a prediction of the power spectrum from the HM, $T(k)$ is normally calculated using a Boltzmann code such as \texttt{CAMB} \citep{lewis_efficient_2000} or \texttt{CLASS} \citep{lesgourgues_cosmic_2011}. There are cheaper analytic alternatives, such as \citet{eisenstein_baryonic_1998}, but the accuracy on large scales from these cheaper alternatives is not high enough. When developing a boost emulator it is important that there is good agreement on large scales between the base model prediction and the prediction coming from the numerical simulation, as this is what gives the small dynamic range for $B(k)$ on large scales. The transfer function enters equations \ref{eq:p_1h} and \ref{eq:p_2h} in multiple terms, such as the matter power spectrum \begin{equation} P_L(k) = A_s k^{n_s} T^2(k)\, , \label{eq:P_L} \end{equation} where $A_s$ is the amplitude of the primordial power spectrum, and $n_s$ is the spectral index.
$T(k)$ also enters indirectly in the halo mass function $n(M)$ and halo bias $b_h(M)$ via the mass variance $\sigma(M)$, given by \begin{equation} \sigma^2(M) = \frac{1}{2 \pi^2}\int_0^\infty k^2P_L(k)W^2(kM)dk\ , \label{eq:sigma} \end{equation} and the logarithmic derivative $\mathcal{S}(M)$, given by \begin{equation} \mathcal{S}(M) = \frac{3}{2 \pi^2 r^4 \sigma^2(M)}\int_0^\infty \frac{dW^2(kM)}{dM}\frac{P_L(k)}{k^2}dk\ . \label{eq:dnls} \end{equation} Although we could just emulate $T(k)$ and evaluate equations \ref{eq:sigma} and \ref{eq:dnls} directly using the $T(k)$ emulator predictions, these calculations add non-negligible time to the base model prediction, so we decide to emulate $\sigma(M)$ and $\mathcal{S}(M)$ in addition to $T(k)$. In the equations above $W(kM)$ is the Fourier transform of the window function; throughout this work we use a top-hat window function of the form \begin{equation} W(kM) = 3\frac{\sin(kM)-kM\cos(kM)}{(kM)^3}\ . \end{equation} For our base model we use a \citet{tinker_toward_2008} halo mass function, with the form \begin{equation} n(M) = \frac{\rho_0}{M^2}f_n[\sigma(M)]\left|\mathcal{S}(M)\right|\ , \label{eq:dndm} \end{equation} where the function $f_n[\sigma(M)]$ is given by \begin{equation} f_n[\sigma(M)]=A\left[\left(\frac{b}{\sigma(M)}\right)^a+1 \right]\exp{\left(-\frac{c}{\sigma(M)^2}\right)}\ , \label{eq:hmf_func} \end{equation} and the coefficients of the function above are calibrated against simulations in \citet{tinker_toward_2008}. For the base model we use a \citet{tinker_large_2010} halo bias with the form \begin{equation} b_h(M) = 1-A\frac{\nu^a}{\nu^a+\delta_c^a}+B\nu^b+C\nu^c\ , \label{eq:halo_bias} \end{equation} where $\nu=\delta_c/\sigma(M)$, $\delta_c=1.686$, and as with equation \ref{eq:hmf_func}, the coefficients of equation \ref{eq:halo_bias} are calibrated against simulations in \citet{tinker_large_2010}.
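A minimal numerical sketch of the mass variance integral (equation \ref{eq:sigma}) with the top-hat window is given below. We write the window argument in terms of a smoothing scale $R$; the text's $W(kM)$ denotes the same window with the mass--radius mapping left implicit. The trapezoidal rule stands in for whatever quadrature a production code would use:

```python
import numpy as np

def w_tophat(x):
    """Fourier-space top-hat window, W(x) = 3(sin x - x cos x)/x^3, with W(0) = 1."""
    x = np.asarray(x, dtype=float)
    out = np.ones_like(x)
    nz = np.abs(x) > 1e-8          # avoid 0/0 at the origin
    out[nz] = 3.0 * (np.sin(x[nz]) - x[nz] * np.cos(x[nz])) / x[nz] ** 3
    return out

def sigma2(R, k, pk):
    """sigma^2 = 1/(2 pi^2) * integral k^2 P_L(k) W^2(kR) dk (trapezoidal rule)."""
    f = k ** 2 * pk * w_tophat(k * R) ** 2
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(k)) / (2.0 * np.pi ** 2)
```

In the $R \to 0$ limit the window tends to unity, so $\sigma^2 \to \int k^2 P_L(k)\,dk / 2\pi^2$, which gives a convenient analytic check for a simple test power spectrum.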
To avoid including the redshift $z$ as an extra input parameter for the linear component emulators we include all redshift dependence through the growth factor $D(z)$; $P(k,z) \propto D^2(z)$, $\sigma^2(M,z) \propto D^2(z)$, while $\mathcal{S}(M)$ is independent of redshift, since the factors of $D^2(z)$ cancel in the logarithmic derivative. This means we can emulate $T(k)$, $\sigma(M)$, and $\mathcal{S}(M)$ at redshift zero, and include all redshift dependence with an emulator for $D(z)$. These four component emulators are what make up the \textit{emulated} base model. \subsubsection{Parameter Space \& Training Data} \label{subsubsec:analytic_data} For this work we focus on the context where the base model will be trained to be used alongside a suite of numerical simulations, such as the Abacus Cosmos \citep{garrison_abacus_2018}, Aemulus \citep{derose_aemulus_2019}, Dark Quest \citep{nishimichi_dark_2019}, or Quijote \citep{villaescusa-navarro_quijote_2020} simulations. We choose to consider the Aemulus simulations. This publicly available simulation suite totals 75 dark matter only simulations, each with a volume of $( 1.05 \ \mathrm{Gpc} \ h^{-1})^3$. 40 of these simulations form a training set that has already been used to successfully train an emulator for the correlation function in redshift space \citep{zhai_aemulus_2019}. This emulator has recently been used to measure the growth rate from the eBOSS LRG sample \citep{chapman_completed_2021}. These 40 simulations sample a seven dimensional cosmological parameter space, with parameters $\{\Omega_m, \Omega_b, \sigma_8, h, n_s, N_{\text{eff}}, w_0 \}$. The samples are selected to uniformly cover a $4\sigma$ region on these seven parameters coming from analysis of cosmic microwave background (CMB), baryon acoustic oscillations (BAO), and supernovae measurements \citep[for details see section 2 of][]{derose_aemulus_2019}.
To focus our training data for our base model on the Aemulus simulations we calculate the covariance of the cosmological parameters of these 40 training simulations; we then use the Cholesky decomposition $\Sigma = \mathbfit{L}\mathbfit{L}^\mathrm{H}$ to decompose the covariance matrix into the lower triangle and corresponding conjugate transpose (indicated by the superscript H). This lower triangle is used to transform the samples that make up the Aemulus training set into an uncorrelated latent space, shown in figure \ref{fig:latent_space}. We generate 10,000 samples uniformly in each dimension of this latent space, with intervals defined by the latent space samples corresponding to the Aemulus simulations. The lower triangle $\mathbfit{L}$ can then be used to transform these latent space samples back into the cosmological parameter space. Sampling the parameter space in this way allows us to focus only on the region where we will have simulated training data to combine with our base model, and not extend the training space (at the detriment of prediction accuracy) to regions that will not be covered by simulations. It should be noted that in the case where the simulations uniformly cover the parameter space, such as the Dark Quest simulations, this procedure is not necessary. All of the 1D and 2D projections of the cosmological parameter space are shown in figure \ref{fig:linear_emu_space}. We can see that the 10,000 samples cover the region sampled by the Aemulus simulations whilst minimising sampling in regions where there are no simulations. Figure \ref{fig:linear_emu_space} also shows that our fiducial cosmology (based on the most recent Planck $\Lambda$CDM TT, TE, EE + lowE + lensing + BAO analysis, see section \ref{subsec:mock_cmass}) that will be used in the various analysis tests in section \ref{sec:fullshape} is located roughly at the centre of this seven dimensional parameter space.
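This latent-space construction can be sketched in a few lines (the toy two-dimensional covariance below stands in for the real seven-parameter suite, and for real-valued parameters the conjugate transpose $\mathbfit{L}^\mathrm{H}$ reduces to the ordinary transpose):

```python
import numpy as np

def sample_latent_box(sim_params, n_new, rng):
    """Sample uniformly in the decorrelated (latent) space spanned by a
    simulation suite, then map the samples back to the parameter space."""
    mean = sim_params.mean(axis=0)
    cov = np.cov(sim_params, rowvar=False)
    L = np.linalg.cholesky(cov)                            # cov = L L^T
    latent = np.linalg.solve(L, (sim_params - mean).T).T   # decorrelate the suite
    lo, hi = latent.min(axis=0), latent.max(axis=0)        # latent-space bounds
    new_latent = rng.uniform(lo, hi, size=(n_new, sim_params.shape[1]))
    return new_latent @ L.T + mean                         # back to parameters

rng = np.random.default_rng(42)
# Toy stand-in for 40 correlated training cosmologies (e.g. Omega_m and h):
sims = rng.multivariate_normal([0.31, 0.68], [[1e-4, 8e-5], [8e-5, 1e-4]], size=40)
new = sample_latent_box(sims, 10000, rng)
```

By construction the suite's own samples have identity covariance in the latent coordinates, and every new sample maps back inside the latent bounding box, mirroring the behaviour shown in figure \ref{fig:latent_space}.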
\begin{figure} \includegraphics[width=\columnwidth]{plots/latent_space_demo-Om-h-NEW.png} \caption{Visualisation of the procedure of focusing the \texttt{matryoshka} training data on a suite of simulations. The left panel shows the 40 Aemulus training cosmologies (blue squares) and our fiducial cosmology (grey diamond) in the uncorrelated latent space. The orange circles show 50 random samples in the latent space, and the red dotted lines show the boundaries in the latent space defined by the extreme values of the Aemulus suite. The right panel shows how these 50 samples are distributed in the cosmological parameter space.} \label{fig:latent_space} \end{figure} \begin{figure*} \includegraphics[width=\linewidth]{plots/basemodel_trainspace-AEMULUS-NEW.png} \caption{1D and 2D projections of the training space for the cosmological parameters of our model. The blue points and histograms show the Aemulus training and test cosmologies. The sampling of these training cosmologies is influenced by results from CMB, BAO, and SN experiments \citep[see section 2 of][]{derose_aemulus_2019}. The orange points and histograms show the training data for the base model of \texttt{matryoshka}. This training data has been generated to cover the same region of the parameter space as the Aemulus training cosmologies by uniformly sampling from an uncorrelated latent space defined by the Aemulus training cosmologies (see section \ref{subsubsec:analytic_data}). The grey point and solid lines show the location of the cosmology used to generate the mock power spectrum for the full shape analyses described in section \ref{sec:fullshape}.} \label{fig:linear_emu_space} \end{figure*} The 10,000 samples are split into training, validation, and test subsamples (with 6400, 1400, 2000 samples respectively).
The components of the emulated base model are trained using the training subsample, the validation subsample is used for model selection, and the prediction accuracy of the emulated base model is tested using the test subsample. Transfer functions and growth factors are calculated for the 10,000 samples using \texttt{CLASS} as implemented in the Python package \texttt{nbodykit} \citep{Hand_2018}, for 400 logarithmically spaced $k$-bins in the interval $10^{-4} <k<10 \ h \ \textrm{Mpc}^{-1}$ and at 200 linearly spaced redshifts in the interval $0<z<2$. These transfer functions are then used to calculate $\sigma^2(M)$ and $\mathcal{S}(M)$ with the Python package \texttt{hmf} \citep*{murray_hmfcalc_2013}. \begin{table} \centering \begin{tabular}{c c c} \hline Parameter & Range & Fiducial Value \\ \hline $h$ & $[0.570, 0.780]$ & 0.6766 \\ $\Omega_m$ & $[0.256, 0.353]$ & 0.30966 \\ $\Omega_b$ & $[0.0334, 0.0653]$ & 0.04897 \\ $w_0$ & $[-1.58, -0.322]$ & -1. \\ $N_\text{eff}$ & $[1.61, 5.14]$ & 3.046 \\ $\sigma_8$ & $[0.502, 1.07]$ & 0.8102 \\ $n_s$ & $[0.906, 1.01]$ & 0.9665 \\ \hline\hline Emulator & Parameters & Architecture\\ \hline $T(k)$ & $\{\Omega_m, \Omega_b, h, N_\mathrm{eff}, w_0 \}$ & 5:300:300:300 \\ $D(z)$ & $\{\Omega_m, \Omega_b, h, N_\mathrm{eff}, w_0 \}$ & 5:200:200:200 \\ $\sigma(M)$ & $\{\Omega_m, \Omega_b, h, N_\mathrm{eff}, w_0, n_s, \sigma_8 \}$ & 7:200:200:500 \\ $\mathcal{S}(M)$ & $\{\Omega_m, \Omega_b, h, N_\mathrm{eff}, w_0, n_s, \sigma_8 \}$ & 7:200:200:500 \\ \hline \end{tabular} \caption{Table defining the ranges for each of the parameters considered when constructing the base model emulators, along with the parameters used to calculate the mock observations for the fiducial full shape analysis in section \ref{sec:fullshape}. 
We also indicate which parameters are used by which emulator(s) and the architecture of each base model component emulator.} \label{tab:linear_param} \end{table} \subsection{Mock Observation} \label{subsec:mock_cmass} We produce a mock observed power spectrum designed to approximate the power spectrum of BOSS CMASS galaxies, with nonlinearities coming from HALOFIT as with the nonlinear training data (see section \ref{subsubsec:fake_sim}). The cosmological and HOD parameters corresponding to this mock observation are shown in tables \ref{tab:linear_param} and \ref{tab:HOD_param} respectively. The cosmological parameters come from the most recent Planck $\Lambda$CDM TT, TE, EE + lowE + lensing + BAO analysis \citep[table 2 in][henceforth Planck 2018]{planck_collaboration_planck_2020}. The HOD parameters are the best fit parameters that result from the small scale clustering analysis of BOSS CMASS galaxies conducted by \citet{white_clustering_2011}. The number density associated with this mock observation is $\sim 6\times 10^{-4} \ (\mathrm{Mpc}^{-1} \ h)^3$. It should be noted that this number density is slightly greater than the observed CMASS number density. This value corresponds to the number density calculated using the Planck 2018 cosmological parameters and \citet{white_clustering_2011} best fit HOD parameters with the equation \begin{equation} n_g = \int n(M)\langle N | M \rangle dM\ . \label{eq:number_dens} \end{equation} We include linearly spaced $k$-bins covering the range $0.0025 \ h \ \textrm{Mpc}^{-1} < k < 0.85 \ h \ \textrm{Mpc}^{-1}$ with $\Delta k = 0.005 \ h \ \textrm{Mpc}^{-1}$. These scales are selected such that the $k$-bins included in our fiducial analysis (with $k_\textrm{max} = 0.25 \ h \ \textrm{Mpc}^{-1}$) match those from \citet{ivanov_cosmological_2020}, which used the perturbation theory based EFTofLSS approach to analyse multipoles of the power spectra of BOSS galaxies.
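Equation \ref{eq:number_dens} amounts to a one-dimensional quadrature once $n(M)$ and $\langle N|M\rangle$ are specified. The sketch below is purely illustrative: it uses a Zheng-style occupation form, an invented power-law mass function, and example parameter values (the actual calculation uses the emulated mass function and the best fit HOD):

```python
import math

def mean_occupation(M, logMcut, sigma, logM1, alpha, kappa):
    # Toy <N|M>: central term plus a satellite power law above a cutoff
    # (Zheng-style form, assumed here purely for illustration).
    ncen = 0.5 * (1 + math.erf((math.log10(M) - logMcut) / sigma))
    Mcut, M1 = 10 ** logMcut, 10 ** logM1
    nsat = ncen * ((M - kappa * Mcut) / M1) ** alpha if M > kappa * Mcut else 0.0
    return ncen + nsat

def toy_hmf(M):
    # Invented placeholder for n(M) = dn/dM (NOT the emulated mass function).
    return 1e10 * M ** -1.9 * math.exp(-M / 1e15)

def number_density(hod, n_steps=2000):
    # n_g = \int n(M) <N|M> dM, trapezoidal rule on a log10(M) grid.
    lo, hi = 11.0, 16.0
    h = (hi - lo) / n_steps
    total = 0.0
    for i in range(n_steps + 1):
        M = 10 ** (lo + i * h)
        w = 0.5 if i in (0, n_steps) else 1.0
        occ = mean_occupation(M, *hod)
        total += w * toy_hmf(M) * occ * M * math.log(10) * h  # dM = M ln(10) dlog10(M)
    return total

# Example HOD values (illustrative only, not the White et al. best fit):
n_g = number_density((13.1, 1.0, 14.1, 0.9, 1.1))
```

Raising $\log{M_\mathrm{cut}}$ removes low-mass hosts and so lowers $n_g$, which is the sensitivity exploited in section \ref{subsec:number_dens}.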
Figure \ref{fig:mock_observation} shows our mock observation with grey points. We calculate an uncertainty for this mock observation using equation \ref{eq:gauss_cov} with a volume of $(1 \ \mathrm{Gpc} \ h^{-1})^3$. This uncertainty is shown with the grey shaded region in figure \ref{fig:mock_observation}. The \texttt{matryoshka} prediction for the fiducial parameters is shown with the orange solid line. We can see that the \texttt{matryoshka} prediction only becomes distinguishable from the truth at very small scales ($k\gtrsim 0.5 \ h \mathrm{Mpc}^{-1}$), but is still consistent with the truth at the $1\sigma$ level. The FS analyses that follow will determine to what extent this small error in the prediction of the power spectrum on small scales impacts the constrained cosmology.\\ \begin{figure} \includegraphics[width=0.95\columnwidth]{plots/mockobs_Planck18_new4.png} \caption{The top panel shows the mock CMASS power spectrum described in section \ref{subsec:mock_cmass} (grey points and shaded region), as well as the \texttt{matryoshka} prediction for the fiducial parameters (solid orange line). The vertical dashed lines show the $k_\mathrm{max}$ values of the full shape analyses. The dotted line shows the shot noise level of our mock observation. The bottom panel shows the normalised residuals comparing our mock observation to the \texttt{matryoshka} prediction.} \label{fig:mock_observation} \end{figure} \subsection{Fiducial Analysis} \label{subsec:fiducial} For our fiducial analysis we fit four out of the five HOD parameters of our model ($\log{M_\mathrm{cut}}, \sigma, \log{M_1}, \alpha$) and five out of the seven cosmological parameters ($\Omega_m, \sigma_8, h, n_s, w_0$). We fix $\kappa$ to its true value as it is not well constrained by the real space power spectrum for the scales considered in our fiducial analysis (or any of the FS analyses that follow).
The purpose of the analyses of this section is to verify that unbiased cosmology can be recovered, so fixing $\kappa$ to the truth in this way does not influence any conclusions we draw. We also fix $\Omega_b$ and $N_\mathrm{eff}$ to their true values. We do not expect to get competitive constraints on these parameters from the real space power spectrum. It is common practice to use a very tight prior on $\Omega_b$ informed by big bang nucleosynthesis, and to fix $N_\mathrm{eff}=3.046$ to align with standard model predictions. We use Markov Chain Monte Carlo (MCMC) sampling to calculate the posterior distributions of HOD and cosmological parameters. We define a Gaussian likelihood with the form \begin{equation} \ln{\mathcal{L}(d|\theta,\phi)} = -\frac{1}{2}(P-\tilde{P})^T\mathbfit{C}^{-1}(P-\tilde{P})\, , \label{eq:likelihood} \end{equation} where $P$ is the mock observed galaxy power spectrum, $\mathbfit{C}$ is the Gaussian covariance matrix calculated using equation \ref{eq:gauss_cov} shown by the grey shaded region in figure \ref{fig:mock_observation}, and $\tilde{P}$ is the emulated galaxy power spectrum. We do not include any information about the galaxy number density in the likelihood for our fiducial analysis; however, the number density is very sensitive to the HOD parameters. We explore the impact of including number density information in the likelihood in section \ref{subsec:number_dens}. We use flat priors on all of the free HOD parameters with ranges equivalent to the extent of the HOD training space (see table \ref{tab:HOD_param}). For the cosmological parameters we use a multivariate Gaussian prior with mean and covariance defined by the training samples for the emulated base model shown in figure \ref{fig:linear_emu_space}. This is a very wide prior, as mentioned in section \ref{subsubsec:analytic_data}, which covers a $4\sigma$ region coming from previous CMB, BAO, and supernovae analyses.
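Equation \ref{eq:likelihood} is straightforward to implement; the sketch below assumes, for brevity only, a diagonal covariance (so the quadratic form reduces to a weighted sum of squared residuals) and toy numbers in place of the measured and emulated spectra:

```python
def ln_like(P_obs, P_model, var_diag):
    # Gaussian log-likelihood of eq. (likelihood), assuming a diagonal
    # covariance so C^{-1} is simply 1/variance on the diagonal.
    return -0.5 * sum((o - m) ** 2 / v
                      for o, m, v in zip(P_obs, P_model, var_diag))

# Toy band powers: the log-likelihood peaks when the model matches the data.
P = [2000.0, 1500.0, 900.0]
var = [40.0 ** 2, 30.0 ** 2, 20.0 ** 2]
best = ln_like(P, P, var)                       # 0.0 by construction
worse = ln_like(P, [1.05 * p for p in P], var)  # any offset lowers it
```

In the actual analysis $\tilde{P}$ comes from the emulator and $\mathbfit{C}$ from equation \ref{eq:gauss_cov}; the diagonal form here is an assumption made to keep the sketch short.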
The use of this multivariate Gaussian prior on the cosmological parameters is necessary to assign low probability to areas of the parameter space that have not been sampled with training data, as the predictions from the emulators will not be accurate in these regions of the parameter space. MCMC sampling is done using \texttt{zeus} \citep*{karamanis_zeus_2021}, which uses ensemble slice sampling, a method that is robust when sampling from challenging distributions, as is often the case for HOD parameters. Convergence of MCMC chains is discussed in appendix \ref{sec:convergence}. The posterior distributions calculated from our fiducial analysis are shown in figure \ref{fig:kmax_corner} with blue filled contours. We can see that the true cosmological parameters are recovered within $1\sigma$, verifying that the obtained level of prediction accuracy from \texttt{matryoshka} is sufficient to return unbiased cosmology for our mock. We also show the marginalised posterior distributions for the effective galaxy bias $b_\mathrm{eff.}$ in figure \ref{fig:kmax_corner}. $b_\mathrm{eff.}$ is not a free parameter but can be calculated from the cosmological and HOD parameters with the equation \begin{equation} b_\mathrm{eff.} = \frac{1}{n_g}\int n(M)b_h(M)\langle N|M \rangle \, dM\ . \end{equation} We can see there is a strong, and expected, degeneracy between $b_\mathrm{eff.}$ and cosmological parameters that primarily impact the amplitude of the power spectrum such as $\sigma_8$. The effective bias is sensitive to the HOD parameters, which are more tightly constrained by small scales. Therefore we expect to see an improved constraint on the cosmological parameters when including smaller scales. This improvement is coming from the increase in statistical power, along with the improved constraint on the HOD parameters, and thus the effective bias.
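For intuition, $b_\mathrm{eff.}$ is an occupation-weighted average of the halo bias. The sketch below evaluates it with deliberately toy ingredients (power-law mass function, simple occupation and bias laws, invented values); the real calculation uses the emulated $n(M)$ and $b_h(M)$:

```python
import math

def occupation(M, logMcut=13.1, sigma=1.0, logM1=14.1, alpha=0.9):
    # Toy <N|M>: Zheng-style central + satellite terms (illustrative values).
    ncen = 0.5 * (1 + math.erf((math.log10(M) - logMcut) / sigma))
    nsat = ncen * (M / 10 ** logM1) ** alpha
    return ncen + nsat

def toy_hmf(M):
    # Invented placeholder for dn/dM, NOT the emulated mass function.
    return 1e10 * M ** -1.9 * math.exp(-M / 1e15)

def toy_bias(M):
    # Invented placeholder halo bias, increasing with mass.
    return 0.5 + (M / 1e14) ** 0.5

def effective_bias(n_steps=2000):
    # b_eff = (1/n_g) \int n(M) b_h(M) <N|M> dM (trapezoid in log10 M).
    lo, hi = 11.0, 16.0
    h = (hi - lo) / n_steps
    num = den = 0.0
    for i in range(n_steps + 1):
        M = 10 ** (lo + i * h)
        w = (0.5 if i in (0, n_steps) else 1.0) * toy_hmf(M) * occupation(M) * M * math.log(10) * h
        num += w * toy_bias(M)
        den += w  # den accumulates n_g, so num/den is the weighted average
    return num / den

b_eff = effective_bias()
```

Because $b_\mathrm{eff.}$ is a positively weighted average of $b_h(M)$, it always lies between the bias values at the integration limits, and shifting the occupation towards more massive haloes raises it.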
\begin{figure*} \includegraphics[width=\linewidth]{plots/corner_kmax-0.25-0.85-NF-beff-NEW.pdf} \caption{Marginalised 1D and 2D posterior distributions resulting from the full shape analyses described in sections \ref{subsec:fiducial} and \ref{subsec:increase_kmax}. The two contour levels in the off-diagonal panels represent the $1\sigma$ and $2\sigma$ regions. The grey dashed horizontal and vertical lines show the true cosmological and HOD parameters. The grey contours in the off-diagonal panels and grey dotted lines in the diagonal panels show the multivariate prior on the cosmological parameters. All model parameters not shown are fixed to the truth.} \label{fig:kmax_corner} \end{figure*} \subsection{Impact of Number Density} \label{subsec:number_dens} In our fiducial analysis we do not include any information about the number density in our likelihood. The number density is sensitive to the HOD parameters. Works such as \citet{zhou_clustering_2020} and \citet{lange_five-percent_2021} have included information about the number density via an extra term in the likelihood, such that \begin{equation} \ln{\mathcal{L}(d|\theta,\phi)} = -\frac{1}{2} \left[ (P-\tilde{P})^T\mathbfit{C}^{-1}(P-\tilde{P})+\frac{(n_g - \tilde{n}_g)^2}{\sigma_{n_g}^2}\right]\ , \label{eq:likelihood_nd} \end{equation} where $P$, $\tilde{P}$, and $\mathbfit{C}$ are the same as in equation \ref{eq:likelihood}, $n_g$ is the observed number density, $\tilde{n}_g$ is the number density predicted by the model which can be calculated via equation \ref{eq:number_dens}, and $\sigma_{n_g}$ is the uncertainty on the observed number density. \citet{miyatake_cosmological_2020} noted very little change to inferred cosmological parameters by including information about the number density when analysing projected clustering and weak lensing of a CMASS like sample.
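Adding the number density term of equation \ref{eq:likelihood_nd} is a one-line change to the likelihood. The toy sketch below (diagonal covariance, invented numbers) also shows why shrinking $\sigma_{n_g}$ acts like a tightening prior: models with the wrong $\tilde{n}_g$ are penalised increasingly heavily:

```python
def ln_like_nd(P_obs, P_model, var_diag, ng_obs, ng_model, sigma_ng):
    # Eq. (likelihood_nd): Gaussian power spectrum term (diagonal covariance
    # assumed for simplicity) plus a Gaussian number density term.
    chi2_P = sum((o - m) ** 2 / v for o, m, v in zip(P_obs, P_model, var_diag))
    chi2_n = (ng_obs - ng_model) ** 2 / sigma_ng ** 2
    return -0.5 * (chi2_P + chi2_n)

ng = 6e-4  # (Mpc^-1 h)^3, the mock value quoted in the text
# Same 10% offset in the predicted number density, two uncertainty choices:
loose = ln_like_nd([1.0], [1.0], [1.0], ng, 1.1 * ng, 0.10 * ng)  # sigma = 0.10 n_g
tight = ln_like_nd([1.0], [1.0], [1.0], ng, 1.1 * ng, 0.01 * ng)  # sigma = 0.01 n_g
```

With $\sigma_{n_g}=0.1\,n_g$ a 10 per cent offset costs only $\Delta\ln\mathcal{L}=-0.5$, while with $\sigma_{n_g}=0.01\,n_g$ the same offset is heavily disfavoured.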
To investigate the impact of the number density when doing a FS analysis of the power spectrum, we re-run our fiducial analysis with $\sigma_{n_g}=[0.1,0.05,0.01]n_g$. The decreasing values of $\sigma_{n_g}$ have a similar impact to placing tighter priors on the HOD parameters. \begin{figure} \includegraphics[width=\linewidth]{plots/corner-priornd-HOD-NF-ng-NEW.pdf} \caption{Similar to figure \ref{fig:kmax_corner}. The blue filled contours are the same as in figure \ref{fig:kmax_corner}. The empty contours show the results of the number density analyses described in section \ref{subsec:number_dens}. Each of the empty contours shows the results of a full shape analysis over the same scales as the fiducial analysis with different levels of assumed uncertainty on the number density from the mock. Only the HOD parameter posteriors are shown here as the impact on the cosmological parameters is negligible; see figure \ref{fig:setup_comp_1d} for the 1D cosmological posteriors.} \label{fig:corner_number_dens} \end{figure} \begin{figure*} \includegraphics[width=\linewidth]{plots/1Dpost-ALL-NF-beff-NEW.png} \caption{Percent constraint and marginalised 1D posteriors for each of the cosmological parameters considered in the full shape analyses of section \ref{sec:fullshape}. The green crosses and green dashed line show the percent constraint and correspond to the top x-axis. The blue points and error bars show the median and $1\sigma$ region of the marginalised posteriors and correspond to the bottom x-axis. The blue shaded region shows the width of our fiducial analysis and the black vertical dashed line shows the location of the truth.} \label{fig:setup_comp_1d} \end{figure*} The posterior distributions of the HOD parameters resulting from these analyses are shown in figure \ref{fig:corner_number_dens}, alongside the results from the fiducial analysis for comparison.
The cosmological parameters are not shown as the difference between the inferred cosmology from these analyses is minimal (the marginalised 1D posteriors for the cosmological parameters are shown in figure \ref{fig:setup_comp_1d} for reference), however we do show the marginalised posteriors for the number density predicted by our model. We can see that even for our fiducial analysis the true value of $n_g$ is well recovered. We can also see that even a relatively high value of $\sigma_{n_g}$ significantly improves the constraint on some of the HOD parameters, particularly $\log{M_\mathrm{cut}}$. This is to be expected as $\log{M_\mathrm{cut}}$ directly controls the number of central galaxies (which make up the majority of a CMASS-like sample), and thus the number density. \subsection{Increasing $k_\mathrm{max}$} \label{subsec:increase_kmax} To investigate the impact of the minimum scale included when conducting a FS analysis of the power spectrum on the constraint on the cosmological and HOD parameters, we re-run our fiducial analysis pushing the value of $k_\mathrm{max}$ to $[0.45, 0.65, 0.85] \ h \ \mathrm{Mpc}^{-1}$. The 1D and 2D marginalised posteriors resulting from these analyses are shown in figures \ref{fig:kmax_corner} and \ref{fig:setup_comp_1d}. As expected we see significant improvement in the constraint on all cosmological parameters by including smaller scales. This is particularly true for $\Omega_m$ and $\sigma_8$. Our fiducial analysis results in a $\sim 8.7\%$ ($\sim 4.8\%$) constraint on $\sigma_8$ ($\Omega_m$). Pushing the minimum scale to $k_\mathrm{max}=0.85 \ h \ \mathrm{Mpc}^{-1}$ results in a $\sim 4.9\%$ ($\sim 3.9\%$) constraint on $\sigma_8$ ($\Omega_m$), which represents a $\sim 1.8 \times$ ($\sim 1.2 \times$) improvement. 
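The improvement factors quoted above are simply the ratios of the percent constraints:

```python
# sigma_8: ~8.7% at k_max = 0.25 h/Mpc vs ~4.9% at k_max = 0.85 h/Mpc
# Omega_m: ~4.8% vs ~3.9% for the same two k_max values
impr_s8 = 8.7 / 4.9  # ~1.8x improvement
impr_om = 4.8 / 3.9  # ~1.2x improvement
```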
This improvement is coming from the higher statistical power of smaller scales in addition to the improved constraint on the HOD parameters that arises from the increasing magnitude of the response of the power spectrum to the HOD parameters shown in figure \ref{fig:nonlinear_grad}. We can see from figures \ref{fig:kmax_corner} and \ref{fig:setup_comp_1d} that there is a small ($< 1\sigma$) bias in the median values (blue squares in figure \ref{fig:setup_comp_1d}) of the marginalised posteriors from these analyses. The origin of this bias is likely the small error in the emulator predictions of the power spectrum. This effect is exaggerated by degeneracies between model parameters. For example the median value for $\sigma_8$ is over-predicted whilst $b_\mathrm{eff.}$ is under-predicted. Improving the constraint on $b_\mathrm{eff.}$ by including smaller scales (or fixing the HOD as described in section \ref{subsec:fixed_hod}) reduces the observed bias in $b_\mathrm{eff.}$ and $\sigma_8$. \subsection{Fixed HOD} \label{subsec:fixed_hod} To investigate to what level the cosmological parameter constraints are degraded by fitting the HOD parameters, we re-run our fiducial analysis with all HOD parameters fixed to the truth, and for the same $k_\mathrm{max}$ values used in section \ref{subsec:increase_kmax}. The results of these fixed HOD analyses are shown in figures \ref{fig:setup_comp_1d} and \ref{fig:fixHOD_corner}. The orange contours show the results that cover the same scales as our fiducial analysis. We can see that compared to the results from the fiducial analysis (blue filled contours) there is a significant increase in the constraint on all cosmological parameters even when the same scales are considered. The improvement on the constraint on $\sigma_8$ is $\sim 2.0\times$, which is already larger than the improvement from pushing to $k_\mathrm{max}=0.85 \ h \ \mathrm{Mpc}^{-1}$ in section \ref{subsec:increase_kmax}. 
This result is expected and demonstrates just how much fitting the HOD parameters degrades the constraint on the cosmological parameters, and highlights that accurate prior information about the HOD coming from small scale clustering studies \citep[such as][]{zheng_galaxy_2007,white_clustering_2011,parejko_clustering_2013,beutler_6df_2013,zhai_clustering_2017} can greatly improve constraints on cosmology coming from a full shape analysis of the power spectrum in the HM framework. We can also see that increasing $k_\mathrm{max}$ compared to our fiducial analysis results in further improvement to the constraint on cosmology. Pushing the fixed HOD analysis to $k_\mathrm{max}=0.45 \ h \ \mathrm{Mpc}^{-1}$ results in a $\sim 4.8\times$ improvement to the constraint on $\sigma_8$; however, we notice that the improvement from including scales smaller than this is much less significant. The dotted line in figure \ref{fig:mock_observation} shows the shot noise of our mock observation. We can see that when $k \sim 0.4 \ h \ \mathrm{Mpc}^{-1}$ the shot noise is the same magnitude as our clustering signal. As such, increasing $k_\mathrm{max}$ to values $\lesssim 0.4 \ h \ \mathrm{Mpc}^{-1}$ will result in a higher signal to noise ratio, whereas increasing $k_\mathrm{max}$ to values $\gtrsim 0.4 \ h \ \mathrm{Mpc}^{-1}$ will not. The reason we see continued gain when pushing $k_\mathrm{max}\gtrsim 0.4 \ h \ \mathrm{Mpc}^{-1}$ for the analyses of section \ref{subsec:increase_kmax} but not these fixed HOD analyses is due to the scale dependence of the response to the HOD parameters shown in figure \ref{fig:nonlinear_grad}. We can see that increasing $k_\mathrm{max}$ increases sensitivity to all the HOD parameters; however, for the cosmological parameters there is no scale dependence beyond $k_\mathrm{max}\gtrsim 0.4 \ h \ \mathrm{Mpc}^{-1}$. \begin{figure} \includegraphics[width=\linewidth]{plots/corner-fixHOD-NF-beff-New.pdf} \caption{Similar to figure \ref{fig:kmax_corner}.
The blue filled contours are the same as in figure \ref{fig:kmax_corner}. The empty contours show the results of the fixed HOD analyses described in section \ref{subsec:fixed_hod}, where only the five cosmological parameters shown are allowed to vary (all other model parameters are fixed to the truth).} \label{fig:fixHOD_corner} \end{figure}
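The shot-noise level shown by the dotted line in figure \ref{fig:mock_observation} follows directly from the mock number density quoted in section \ref{subsec:mock_cmass}, assuming Poisson shot noise $P_\mathrm{shot}=1/n_g$:

```python
n_g = 6e-4            # mock number density in (Mpc^-1 h)^3
P_shot = 1.0 / n_g    # Poisson shot noise, ~1667 (Mpc h^-1)^3
```

This is the amplitude the clustering signal crosses at $k\sim 0.4 \ h \ \mathrm{Mpc}^{-1}$ in the fixed HOD discussion above.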
\section{Introduction} The biggest obstacle to the development of quantum computing continues to be the control of quantum errors. Since the beginnings of quantum computing in the 1990s, one of the main research objectives has been to overcome this stumbling block. To address the problem, two fundamental tools were developed: quantum error correction codes~\cite{CS,St1,Go1,CRSS1,Go2,CRSS2} in combination with fault tolerant quantum computing~\cite{Sh,St2,Pr1,Go3,KLZ,Ki,AB}. These studies culminated in the proof of the quantum threshold theorem, which reads as follows: a quantum computer with a physical error rate below a certain threshold can, through application of quantum error correction schemes, suppress the logical error rate to arbitrarily low levels. However, the proof of this theorem depends on the discretized treatment of quantum errors inherited from the construction of quantum codes. We believe that the quantum error model used in the proof of the quantum threshold theorem is not general, and that the techniques developed to control quantum errors do not satisfy the golden rule of error control: correct all small errors exactly. For example, when a qubit is encoded with the 5-qubit code~\cite{BDSW,LMPZ}, it is argued, using error discretization and the fact that this code exactly corrects an error in any single qubit, that the error probability goes from $p$ to $p^2$ once the correction circuit has been applied. What actually happens, however, is that an error (small with high probability) occurs in every qubit with probability 1, and the code cannot correct these simultaneous errors. Thus an error occurs with probability 1 and, once the correction circuit is applied, it becomes undetectable. Therefore it is necessary to analyze quantum errors without discretizing them.
The procedure indicated for this is to consider quantum errors as continuous random variables and to characterize them by their corresponding density functions. In this article we analyze a specific type of error: isotropic quantum errors. An isotropic error of an $n-$qubit $\Phi$ is one in which the probability of the error $\Psi$ depends only on the distance between the two states, $\|\Psi-\Phi\|$, and not on the direction in which the imprecision $\Psi$ occurs with respect to $\Phi$. Such errors are easy to analyze due to their central symmetry with respect to $\Phi$. In~\cite{LPF} we studied the ability of an arbitrary quantum code to correct these errors, using the variance as the error measure. If $\Phi$ is the error-free $n-$qubit state, $\Psi$ the state resulting from a disturbance modeled by an isotropic quantum error and $\tilde\Phi$ the result of applying the quantum code correction circuit, assuming that the latter does not introduce new errors, the result that we prove in~\cite{LPF} is the following: $$ V(\tilde\Phi)\geq V(\Psi), $$ where $V(\tilde\Phi)=E[\|\tilde\Phi-\Phi\|^2]$ and $V(\Psi)=E[\|\Psi-\Phi\|^2]$ are the variances of the corrected state $\tilde\Phi$ and the disturbed state $\Psi$ respectively. This means that no quantum code can handle isotropic errors, or even reduce their variance. Here we are interested in analyzing the ability of quantum codes to increase fidelity against isotropic errors, since fidelity measures quantum errors while taking into account that quantum states do not change if they are multiplied by a phase factor, a fact that the variance used in~\cite{LPF} ignores.
We represent $n-$qubits as points of the unit real sphere of dimension $2d-1$ being $d=2^n$~\cite{NC}, $S^{2d-1}=\{x\in\mathbb{R}^{2d}\ |\ \|x\| =1\}$, taking coordinates with respect to the computational basis $[|0\rangle,|1\rangle,\dots,|2^n-1\rangle]$, \begin{equation} \label{For:QubitFormula} \Psi=(x_0+ix_1,x_2+ix_3,\dots,x_{2d-2}+ix_{2d-1}). \end{equation} We consider quantum computing errors as random variables with density function defined on $S^{2d-1}$. In~\cite{LPF} it is mentioned that it is easy to relate this representation to the usual representation in quantum computing by density matrices and that the representation through random variables is more accurate. We define the variance of a random variable $X$ as the mean of the quadratic deviation from the mean value $\mu$ of $X$, $V(X)=E[\|X-\mu\|^2]$. In our case, since the random variable $X$ represents a quantum computing error, the mean value of $X$ is the $n-$qubit $\Phi$ resulting from an errorless computation. Without loss of generality, we will assume that the mean value of every quantum computing error will always be $\Phi = |0\rangle$. To achieve this, it suffices to move $\Phi$ into $|0\rangle$ through a unitary transformation. Therefore, using the pure quantum states given by Formula (\ref{For:QubitFormula}), the variance of $X$ will be: \begin{equation} \label{For:DefVariance} V(X)=E\left[\|\Psi-\Phi\|^2\right]=E[2-2x_0]=2-2\int_{S^{2d-1}}x_0f(x)dx. \end{equation} Obviously the variance satisfies $V(X)\in[0,4]$. In~\cite{LP} the variance of the sum of two independent errors on $S^{2d-1}$ is presented for the first time. It is proved for isotropic errors, and conjectured in general, that: \begin{equation} \label{For:VarFormula} V(X_1+X_2)=V(X_1)+V(X_2)-\frac{V(X_1)V(X_2)}{2}.
\end{equation} Considering the representation of errors through random variables, the definition of fidelity is very simple: \begin{equation} \label{For:DefFidelity} F^2(X)=E\left[|\langle\Psi|\Phi\rangle|^2\right]=E\left[x_0^2+x_1^2\right]=\int_{S^{2d-1}}(x_0^2+x_1^2)f(x)dx. \end{equation} Then, the problem we want to address is the following: Let $\Phi_0$ be an $m-$qubit and $\Phi$ the corresponding $n-$qubit encoded by an $(n,m)-$quantum code $\mathcal C$. Suppose that the coded state $\Phi$ is changed by error, becoming the state $\Psi$. Now, to fix the error we apply the code correction circuit, obtaining the final state $\tilde\Phi$. While $\Phi$ is a pure state, $\Psi$ and $\tilde\Phi$ are random variables (mixed states). We also want to study the possibility of not using quantum codes. In this case, we suppose that the initial state $\Phi_0$ is changed by error, becoming the state $\Psi_0$. State $\Psi_0$ is also a random variable. Then our goal is to compare the fidelities of $\Psi$, $\tilde\Phi$ and $\Psi_0$: $$ F^2(\Psi)=E\left[|\langle\Psi|\Phi\rangle|^2\right],\ F^2(\tilde\Phi)=E\left[|\langle\tilde\Phi|\Phi\rangle|^2\right] \ \text{and}\ F^2(\Psi_0)=E\left[|\langle\Psi_0|\Phi_0\rangle|^2\right]. $$ In order to compare the fidelities we will assume that the correction circuit of $\mathcal C$ does not introduce new errors and does not increase the execution time. In other words, we are going to estimate the theoretical capacity of the code to correct quantum computing errors. In the case of isotropic errors we shall prove that: \begin{equation} \label{For:Result} F(\Psi_0) \geq F(\tilde\Phi) \geq F(\Psi). \end{equation} This result leads us to the conclusion that the best option to optimize fidelity against isotropic errors is not to use quantum codes. This result goes in the same direction as that obtained in~\cite{LPF}, which indicates that quantum codes do not reduce the variance against isotropic errors.
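As a side illustration (not part of the paper's argument), the definitions in Formulas (\ref{For:DefVariance}) and (\ref{For:DefFidelity}) can be estimated by direct sampling. For the uniform distribution on $S^{2d-1}$ one has $E[x_0]=0$ and $E[x_i^2]=1/(2d)$, hence $V(X)=2$ and $F^2(X)=1/d$, which a Monte Carlo estimate reproduces:

```python
import math, random

random.seed(0)

def sample_state(d):
    # Uniform point on S^(2d-1): normalise a 2d-dimensional Gaussian vector.
    v = [random.gauss(0.0, 1.0) for _ in range(2 * d)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def estimate(d, n_samples=20000):
    # V(X) = E[2 - 2 x_0] and F^2(X) = E[x_0^2 + x_1^2], with Phi = |0>.
    V = F2 = 0.0
    for _ in range(n_samples):
        x = sample_state(d)
        V += 2 - 2 * x[0]
        F2 += x[0] ** 2 + x[1] ** 2
    return V / n_samples, F2 / n_samples

V, F2 = estimate(d=4)  # n = 2 qubits, so d = 2^n = 4
# Expect V close to 2 and F^2 close to 1/d = 0.25 for the uniform distribution.
```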
However, the most widely used model of errors in quantum computing is qubit independent errors. The study of this type of quantum error is much more complex than that of isotropic errors, because it does not have the same symmetry. Despite this technical difficulty, we have proved in~\cite{LPFM} that the $5-$qubit code~\cite{BDSW,LMPZ} is not able to reduce the variance against qubit independent errors. This result, together with those obtained in~\cite{LPF} and in this article, clearly reveals the difficulty of the quantum error control challenge and shows that the continuous nature of quantum errors cannot be ignored. There are many works related to the control of quantum computing errors, in addition to those already mentioned above: general studies and surveys on the subject~\cite{Sc,Pr2,Go4,CDT,HFWH,DP,HZKBR,HKR}, works about the quantum computation threshold theorem~\cite{AGP,WFSH,ACGW,CT}, quantum error correction codes~\cite{OG,LZLX,CCHF,Gu}, concatenated quantum error correction codes~\cite{BPFHC,ES} and articles related to topological quantum codes~\cite{DPB,NJS}. Lately, quantum computing error control has focused on both coherent errors~\cite{GM,BEKP} and cross-talk errors~\cite{PSVW,BTS}. Finally, we cannot forget the hardest error to control in quantum computing, quantum decoherence~\cite{Zu}. As we have commented above, these quantum computing errors can be analyzed in the framework of random variables that has been set in~\cite{LPF,LP}. In the conclusions we analyze in more detail the characteristics of the different types of error from the point of view of their control, in view of the result obtained in this paper. The outline of the article is as follows: in section 2 we study the fidelity of the quantum states $\Psi$, $\Psi_0$ and $\tilde\Phi$; in section 3 we prove the relationship between them given by Formula (\ref{For:Result}); finally, in section 4 we analyze the conclusions that can be obtained from the proved result.
\section{Analysis of fidelity} Associated with the $(n,m)-$quantum code $\mathcal{C}$, the following parameters are defined: $d=2^n$ is the dimension of $\mathcal{C}$, $d^{\,\prime} = 2^m$ and $d^{\,\prime\prime}$ is the number of discrete errors that $\mathcal{C}$ corrects. First we are going to study how we can compare the fidelity of the quantum states $\Psi$ and $\tilde\Phi$, which are $n-$qubits encoded with the quantum code $\mathcal{C}$, and the fidelity of the state $\Psi_0$, which is an unencoded $m-$qubit state. The working scheme in these two scenarios is illustrated in Figure \ref{Fig:SchemesEncoded-NonEncoded}. We assume that the $\mathcal{C}$ correction circuit, which is applied after each quantum gate in the coded algorithm, does not introduce new errors and is ideally applied for a time $t=0$. In this way we study the theoretical capacity of $\mathcal{C}$ to control isotropic errors, that is, its capacity to increase the fidelity of the final state $\tilde\Phi$ with respect to $\Psi$, and we can compare it with the fidelity of the final state $\Psi_0$ in the scheme without the quantum code $\mathcal{C}$. \begin{figure}[h] \label{Fig:SchemesEncoded-NonEncoded}\quad \begin{center} \includegraphics[scale=0.4]{Fig_SchemesEncoded-NonEncoded.pdf} \caption{\centerline{Uncoded/coded work scheme.}} \end{center} \end{figure} We analyze the isotropic error as a decoherence error over a unit of time, which corresponds to the time it takes to apply a quantum gate in the coded algorithm. To compare it with the uncoded algorithm we have to bear in mind that the unit of time in this case will be at most the $n-$th part of the unit of time in the coded algorithm. 
To relate the probability distributions in both cases we use the following equality of variances: $$ V(E)=V(E_1+E_2+\cdots+E_n), $$ where $E$ is the decoherence error during a unit of time in the coded algorithm and $E_1$, $E_2$, \dots $E_n$ are independent decoherence errors corresponding to a unit of time in the uncoded algorithm. Using the following generalization of Formula (\ref{For:VarFormula}) demonstrated in~\cite{LP}: \begin{equation} \label{For:GeneralVarFormula} V(E_1+E_2+\cdots+E_n)=2-2\left(1-\frac{v_u}{2}\right)^n, \end{equation} where $v_u$ is the variance of each of the independent errors, we obtain the following relation of $v_u$ with the variance $v_c$ of the error $E$: \begin{equation} \label{For:RelationshipVariances} v_c=2-2\left(1-\frac{v_u}{2}\right)^n \quad \Leftrightarrow \quad v_u=2-2\left(\frac{2-v_c}{2}\right)^{1/n}. \end{equation} In the case of the normal probability distribution defined in~\cite{LPF,LP}, with the following density function: \begin{equation} \label{For:NormalDistribution} f_n(\sigma,\theta_0)=\frac{(2d-2)!!}{(2\pi)^d}\frac{(1-\sigma^2)}{(1+\sigma^2-2\sigma\cos(\theta_0))^d}, \end{equation} where the parameter $\sigma$ belongs to the interval $[0,1)$, the above variances have a very simple expression and are independent of the dimension: $v_c=2(1-\sigma_c)$ and $v_u=2(1-\sigma_u)$. The relationship between them given in Formula (\ref{For:RelationshipVariances}) translates into a very simple relationship between the corresponding sigma parameters: \begin{equation} \label{For:RelationshipSigmaParameters} \sigma_c=\sigma_u^n \quad \Rightarrow \quad \sigma_u=\sigma_c^{1/n}. \end{equation} From now on we follow the scheme proposed in~\cite{LPF}, used there to calculate the variances of the states $\Psi$ and $\tilde\Phi$, but now to calculate the fidelities of these states and of the state $\Psi_0$.
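The two relations in Formulas (\ref{For:RelationshipVariances}) and (\ref{For:RelationshipSigmaParameters}) are exact inverses of each other, which can be confirmed with a quick numerical round trip:

```python
def v_coded(v_u, n):
    # v_c = 2 - 2 (1 - v_u/2)^n, Formula (For:RelationshipVariances)
    return 2 - 2 * (1 - v_u / 2) ** n

def v_uncoded(v_c, n):
    # Inverse relation: v_u = 2 - 2 ((2 - v_c)/2)^(1/n)
    return 2 - 2 * ((2 - v_c) / 2) ** (1 / n)

n, v_u = 5, 0.02          # example: a 5-qubit encoding, small uncoded variance
v_c = v_coded(v_u, n)

# The sigma parameters of the normal distribution satisfy sigma_c = sigma_u^n:
sigma_u = 1 - v_u / 2
sigma_c = 1 - v_c / 2
```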
\subsection{Fidelity of $\Psi$ and $\Psi_0$} The state $\Psi$, described in Cartesian coordinates in Formula (\ref{For:QubitFormula}), is represented in spherical coordinates as follows: $$ \begin{array}{l} \Psi = (\theta_0,\,\theta_1,\,\dots,\,\theta_{2d-2})\quad \left\{\begin{array}{l} \vrule height 8pt depth 8pt width 0pt 0\leq\theta_0,\,\dots,\,\theta_{2d-3}\leq\pi \\ \vrule height 12pt depth 2pt width 0pt 0\leq\theta_{2d-2}\leq 2\pi \end{array}\right., \\ \vrule height 16pt depth 8pt width 0pt x_j=\sin(\theta_0)\,\cdots\,\sin(\theta_{j-1})\,\cos(\theta_j)\quad\text{for all}\quad 0\leq j\leq 2d-2, \\ \vrule height 12pt depth 8pt width 0pt x_{2d-1}=\sin(\theta_0)\,\cdots\,\sin(\theta_{2d-2}). \end{array} $$ Using this representation of $\Psi$, the fidelity introduced in Formula (\ref{For:DefFidelity}) is as follows: \begin{equation} \label{For:DefFidelitySpherical} F^2(X)=E\left[\cos^2(\theta_0)+\sin^2(\theta_0)\cos^2(\theta_1)\right]=1-E\left[\sin^2(\theta_0)\sin^2(\theta_1)\right]. \end{equation} \begin{thm} \label{Thm:FidelityPsi} The fidelity of the isotropic random variable $\Psi$ with density function $f(\theta_0)$ is equal to: \begin{equation} \label{For:FidelityPsi} F^2(\Psi) = 1 - 4 \, \dfrac{(2\pi)^{d-1}}{(2d-1)!!} \, (d-1) \, {\bar E}\left[\sin^{2d}(\theta_0)\right], \end{equation} where $\displaystyle {\bar E}\left[\sin^{2d}(\theta_0)\right]=\int_0^\pi f(\theta_0)\sin^{2d}(\theta_0)d\theta_0$. \end{thm} \begin{proof} We have to calculate the expected value of an expression that depends only on the angles $\theta_0$ and $\theta_1$, while the isotropic density function depends only on the angle $\theta_0$.
Therefore, using Formula (\ref{For:DefFidelitySpherical}): $$ \begin{array}{lll} \vrule height 14pt depth 12pt width 0pt F^2(\Psi) & = & \displaystyle 1 - |S^{2d-3}| \, {\bar E}[\sin^{2d}(\theta_0)] \, \int_0^\pi \sin^{2d-1}(\theta_1)d\theta_1 \\ \vrule height 20pt depth 14pt width 0pt & = & \displaystyle 1 - \dfrac{(2\pi)^{d-1}}{(2d-4)!!} \, 2\dfrac{(2d-2)!!}{(2d-1)!!} \, {\bar E}[\sin^{2d}(\theta_0)] \\ \vrule height 18pt depth 14pt width 0pt & = & \displaystyle 1 - 4 \, \dfrac{(2\pi)^{d-1}}{(2d-1)!!} \, (d-1) \, {\bar E}[\sin^{2d}(\theta_0)]. \\ \end{array} $$ We have used equalities from the Appendix. \end{proof} \begin{cor} \label{Cor:FidelityPsiND} The fidelity of the isotropic random variable $\Psi$ with normal distribution $f_n(\sigma_c,\theta_0)$ is equal to: \begin{equation} \label{For:FidelityPsiND} F^2(\Psi) = \dfrac{1+(d-1)\sigma_c^2}{d}. \end{equation} \end{cor} \begin{proof} Using the definition of the normal distribution given in Formula (\ref{For:NormalDistribution}) and the Appendix: $$ \begin{array}{lll} \vrule height 14pt depth 12pt width 0pt F^2(\Psi) & = & \displaystyle 1 - 4 \, \dfrac{(2\pi)^{d-1}}{(2d-1)!!} \, (d-1) \, {\bar E}[\sin^{2d}(\theta_0)] \\ \vrule height 20pt depth 14pt width 0pt & = & \displaystyle 1 - 4 \, \dfrac{(2\pi)^{d-1}}{(2d-1)!!} \, (d-1) \, \dfrac{(2d-2)!!}{(2\pi)^d} \, (1-\sigma_c^2) \, \dfrac{(2d-1)!!}{(2d)!!}\pi \\ \vrule height 18pt depth 14pt width 0pt & = & \displaystyle 1 - \dfrac{d-1}{d} \, (1-\sigma_c^2) = \dfrac{1+(d-1)\sigma_c^2}{d}. \\ \end{array} $$ \end{proof} Theorem \ref{Thm:FidelityPsi} and Corollary \ref{Cor:FidelityPsiND} also apply to state $\Psi_0$, changing the parameter $d$ to $d^\prime$. 
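Corollary \ref{Cor:FidelityPsiND} can be verified numerically: evaluating the integral of Theorem \ref{Thm:FidelityPsi} for the normal density (\ref{For:NormalDistribution}) by simple trapezoidal quadrature reproduces the closed form $F^2(\Psi)=(1+(d-1)\sigma_c^2)/d$:

```python
import math

def dfact(n):
    # Double factorial n!! (n > 0)
    r = 1
    while n > 1:
        r *= n
        n -= 2
    return r

def f_n(sigma, theta, d):
    # Normal density of Formula (For:NormalDistribution)
    return (dfact(2 * d - 2) / (2 * math.pi) ** d) * (1 - sigma ** 2) \
        / (1 + sigma ** 2 - 2 * sigma * math.cos(theta)) ** d

def fidelity_sq_numeric(sigma, d, steps=20000):
    # \bar E[sin^{2d}(theta_0)] by the trapezoidal rule, then Theorem (For:FidelityPsi)
    h = math.pi / steps
    total = 0.0
    for i in range(steps + 1):
        t = i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * f_n(sigma, t, d) * math.sin(t) ** (2 * d)
    Ebar = total * h
    return 1 - 4 * (2 * math.pi) ** (d - 1) / dfact(2 * d - 1) * (d - 1) * Ebar

d, sigma_c = 4, 0.7                          # n = 2 qubits, arbitrary sigma_c
numeric = fidelity_sq_numeric(sigma_c, d)
closed = (1 + (d - 1) * sigma_c ** 2) / d    # Corollary (For:FidelityPsiND)
```

The same check with $d$ replaced by $d^\prime$ covers the statement for $\Psi_0$.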
\begin{cor} \label{Cor:FidelityPsi_0} The fidelity of the isotropic random variable $\Psi_0$ with density function $f(\theta_0)$ is equal to: \begin{equation} \label{For:FidelityPsi_0} F^2(\Psi_0) = 1 - 4 \, \dfrac{(2\pi)^{d^\prime-1}}{(2d^\prime-1)!!} \, (d^\prime-1) \, {\bar E}\left[\sin^{2d^\prime}(\theta_0)\right], \end{equation} where $\displaystyle {\bar E}\left[\sin^{2d^\prime}(\theta_0)\right]=\int_0^\pi f(\theta_0)\sin^{2d^\prime}(\theta_0)d\theta_0$. If the probability distribution of $\Psi_0$ is normal, with density function $f_n(\sigma_u,\theta_0)$, the fidelity is equal to: \begin{equation} \label{For:FidelityPsi_0ND} F^2(\Psi_0) = \dfrac{1+(d^\prime -1)\sigma_u^2}{d^\prime}. \end{equation} \end{cor} To compare the fidelities of $\Psi_0$ and $\tilde\Phi$ we need to obtain their values as a function of their variances $v_u$ and $v_c$, respectively. The relationship between these variances obtained in Formula (\ref{For:RelationshipVariances}) will allow us to relate the fidelities of these states. \begin{thm} \label{Thm:LowerBoundFidelityPsi_0} The fidelity of the isotropic random variable $\Psi_0$ with density function $f(\theta_0)$ satisfies: \begin{equation} \label{For:LowerBoundFidelityPsi_0} F^2(\Psi_0) \geq 1 - \dfrac{2d^\prime-2}{2d^\prime-1}\left(v_u-\left(\dfrac{v_u}{2}\right)^2\right).
\end{equation} \end{thm} \begin{proof} First we prove, proceeding as in the proof of Theorem \ref{Thm:FidelityPsi}, the following: $$ \begin{array}{lll} \vrule height 14pt depth 12pt width 0pt F^2(\Psi_0) & = & \displaystyle 1 - |S^{2d^\prime-3}|\, {\bar E}[\sin^{2d^\prime}(\theta_0)] \int_0^\pi\sin^{2d^\prime-1}(\theta_1)d\theta_1 \\ \vrule height 20pt depth 14pt width 0pt & = & \displaystyle 1 - |S^{2d^\prime-3}|\, {\bar E}[\sin^{2d^\prime}(\theta_0)] \int_0^\pi\sin^{2d^\prime-3}(\theta_1)d\theta_1\, \dfrac{\displaystyle \int_0^\pi\sin^{2d^\prime-1}(\theta_1)d\theta_1}{\displaystyle \int_0^\pi\sin^{2d^\prime-3}(\theta_1)d\theta_1} \\ \vrule height 18pt depth 14pt width 0pt & = & \displaystyle 1 - \int_{S^{2d^\prime-1}}f(\theta_0)\sin^2(\theta_0)\, \dfrac{\displaystyle \int_0^\pi\sin^{2d^\prime-1}(\theta_1)d\theta_1}{\displaystyle \int_0^\pi\sin^{2d^\prime-3}(\theta_1)d\theta_1} \\ \end{array} $$ Then, using the formulas in the Appendix, we obtain: $$ F^2(\Psi_0) = \displaystyle 1 - E[\sin^2(\theta_0)]\, \dfrac{2d^\prime-2}{2d^\prime-1}. $$ Using Jensen's inequality we obtain an upper bound for $E[\sin^2(\theta_0)]$: $$ \begin{array}{lll} \vrule height 14pt depth 12pt width 0pt (E[1-\cos(\theta_0)])^2 & \leq & E[(1-\cos(\theta_0))^2] \\ \vrule height 14pt depth 12pt width 0pt & = & E[1+\cos^2(\theta_0)-2\cos(\theta_0)] \\ \vrule height 14pt depth 12pt width 0pt & = & E[2-2\cos(\theta_0)-\sin^2(\theta_0)] \\ \vrule height 14pt depth 12pt width 0pt & = & v_u - E[\sin^2(\theta_0)]. \\ \end{array} $$ And then: $$ E[\sin^2(\theta_0)] \leq v_u - (E[1-\cos(\theta_0)])^2 = v_u-\left(\dfrac{v_u}{2}\right)^2. $$ Substituting in the formula of $F^2(\Psi_0)$ the previous upper bound of $E[\sin^2(\theta_0)]$, the proof is concluded: $$ F^2(\Psi_0) \geq 1 - \dfrac{2d^\prime-2}{2d^\prime-1}\left(v_u-\left(\dfrac{v_u}{2}\right)^2\right). $$ \end{proof} \subsection{Fidelity of $\tilde\Phi$} The formula for the fidelity of the state $\tilde\Phi$ is very similar to that of the state $\Psi$, Formula (\ref{For:FidelityPsi}), although the proof is more complex because the quantum code $\mathcal{C}$ is involved. \begin{thm} \label{Thm:FidelityTildePhi} The fidelity of the isotropic random variable $\tilde\Phi$ with density function $f(\theta_0)$ is equal to: \begin{equation} \label{For:FidelityTildePhi} F^2(\tilde\Phi) = 1 - 4 \, \dfrac{(2\pi)^{d-1}}{(2d-1)!!} \, (d-d^{\prime\prime}) \, {\bar E}\left[\sin^{2d}(\theta_0)\right], \end{equation} where $\displaystyle {\bar E}\left[\sin^{2d}(\theta_0)\right]=\int_0^\pi f(\theta_0)\sin^{2d}(\theta_0)d\theta_0$. \end{thm} \begin{proof} Taking into account Theorem 3 and Corollary 1 of \cite{LPF}, the fidelity of $\tilde\Phi$ is the following: $$ F^2(\tilde\Phi) = E\left[P_0|\langle \Phi| \Pi_0\Psi \rangle|^2\right] + (d^{\prime\prime}-1) \, E\left[P_1|\langle E_1\Phi| \Pi_1\Psi \rangle|^2\right], $$ where $P_0$ and $P_1$ are the probabilities of measuring the syndromes $0$ and $1$, respectively, $\Pi_0$ and $\Pi_1$ the (normalized) projectors corresponding to the discrete errors $E_0=I$ and $E_1$ associated with the aforementioned syndromes, and $E_1\Phi=E_1|0\rangle=|2d^\prime\rangle$. The first expected value in the above expression is equal to $F^2(\Psi)$ by Formula (\ref{For:DefFidelitySpherical}) and, using Theorem \ref{Thm:FidelityPsi}, we obtain: $$ \begin{array}{lll} \vrule height 14pt depth 8pt width 0pt E\left[P_0|\langle \Phi| \Pi_0\Psi \rangle|^2\right] & = & E\left[1-\sin^2(\theta_0)\sin^2(\theta_1)\right] \\ \vrule height 16pt depth 12pt width 0pt & = & 1 - 4 \, \dfrac{(2\pi)^{d-1}}{(2d-1)!!} \, (d-1) \, {\bar E}\left[\sin^{2d}(\theta_0)\right].
\\ \end{array} $$ The second is the following: $$ E\left[P_1|\langle E_1\Phi| \Pi_1\Psi \rangle|^2\right] = E\left[\sin^2(\theta_0) \cdots \sin^2(\theta_{2d^\prime-1})\left(1-\sin^2(\theta_{2d^\prime})\sin^2(\theta_{2d^\prime+1})\right)\right]. $$ Using the Appendix, the following is obtained: $$ \begin{array}{l} \vrule height 14pt depth 8pt width 0pt \displaystyle E\left[\sin^2(\theta_0)\dots\sin^2(\theta_{2d^\prime-1})\right] = {\bar E}[\sin^{2d}(\theta_0)] \\ \vrule height 20pt depth 14pt width 0pt \displaystyle \cdot \int_0^\pi \sin^{2d-1}(\theta_1) d\theta_1 \cdots \int_0^\pi \sin^{2d-2d^\prime+1}(\theta_{2d^\prime-1}) d\theta_{2d^\prime-1} |S_{2d-2d^\prime-1}| \\ \vrule height 20pt depth 14pt width 0pt \displaystyle = {\bar E}[\sin^{2d}(\theta_0)] \, 2\dfrac{(2d-2)!!}{(2d-1)!!} \, \pi\dfrac{(2d-3)!!}{(2d-2)!!} \cdots 2\dfrac{(2d-2d^\prime)!!}{(2d-2d^\prime+1)!!} \, \dfrac{(2\pi)^{d-d^\prime}}{(2d-2d^\prime-2)!!} \\ \vrule height 20pt depth 14pt width 0pt \displaystyle = {\bar E}[\sin^{2d}(\theta_0)] \, 4 \dfrac{(2\pi)^{d-1}}{(2d-1)!!} \, (d-d^\prime). \end{array} $$ Similarly, we obtain: $$ E\left[\sin^2(\theta_0)\dots\sin^2(\theta_{2d^\prime+1})\right] = {\bar E}[\sin^{2d}(\theta_0)] \, 4 \dfrac{(2\pi)^{d-1}}{(2d-1)!!} \, (d-d^\prime-1). $$ With the last two results, the following is obtained: $$ E\left[P_1|\langle E_1\Phi| \Pi_1\Psi \rangle|^2\right] = {\bar E}[\sin^{2d}(\theta_0)] \, 4 \dfrac{(2\pi)^{d-1}}{(2d-1)!!}.
$$ Finally, we obtain the result we are looking for: $$ \begin{array}{lll} \vrule height 14pt depth 8pt width 0pt \displaystyle F^2(\tilde\Phi) & = & \displaystyle E\left[P_0|\langle \Phi| \Pi_0\Psi \rangle|^2\right] + (d^{\prime\prime}-1) E\left[P_1|\langle E_1\Phi| \Pi_1\Psi \rangle|^2\right] \\ \vrule height 20pt depth 14pt width 0pt & = & \displaystyle 1 - {\bar E}[\sin^{2d}(\theta_0)] \, 4 \dfrac{(2\pi)^{d-1}}{(2d-1)!!} \left(d-1 -(d^{\prime\prime}-1) \right) \\ \vrule height 20pt depth 14pt width 0pt & = & \displaystyle 1 - {\bar E}[\sin^{2d}(\theta_0)] \, 4 \dfrac{(2\pi)^{d-1}}{(2d-1)!!} \left(d-d^{\prime\prime}\right). \\ \end{array} $$ \end{proof} If the probability distribution of $\Psi$ is normal, the formula for the fidelity of $\tilde\Phi$ is much simpler. \begin{cor} \label{Cor:FidelityTildePsiND} If $\Psi$ has a normal probability distribution with parameter $\sigma_c$, the fidelity of $\tilde\Phi$ satisfies: \begin{equation} \label{For:FidelityTildePsiND} F^2(\tilde\Phi) = \dfrac{1+(d^\prime -1)\sigma_c^2}{d^\prime}. \end{equation} \end{cor} \begin{proof} To prove the result, it is enough to substitute in Theorem \ref{Thm:FidelityTildePhi} the value of the integral ${\bar E}[\sin^{2d}(\theta_0)]$ from the Appendix and consider that $d = d^\prime d^{\prime\prime}$. \end{proof} To compare the fidelities of $\Psi_0$ and $\tilde\Phi$ we need to obtain $F^2(\tilde\Phi)$ as a function of the variance $v_c$ of the state $\Psi$. \begin{thm} \label{Thm:UpperBoundFidelityTildePsi} If the state $\Psi$ has an isotropic distribution with density function $f(\theta_0)$ such that: \begin{equation} \label{For:Condition_f} \int_0^\pi (1-\cos(\theta_0))\cos(\theta_0)f(\theta_0)\,d\theta_0\geq 0, \end{equation} the fidelity of $\tilde\Phi$ satisfies: \begin{equation} \label{For:UpperBoundFidelityTildePsi} F^2(\tilde\Phi) \leq 1 - \dfrac{d-d^{\prime\prime}}{2d^\prime-1} \, v_c.
\end{equation} \end{thm} \begin{proof} First we prove, proceeding as in the proofs of Theorems \ref{Thm:FidelityPsi} and \ref{Thm:LowerBoundFidelityPsi_0}, the following: $$ \begin{array}{lll} \vrule height 14pt depth 12pt width 0pt F^2(\tilde\Phi) & = & \displaystyle 1 - 4 \, \dfrac{(2\pi)^{d-1}}{(2d-1)!!} \, (d-d^{\prime\prime}) \, {\bar E}\left[\sin^{2d}(\theta_0)\right] \\ \vrule height 20pt depth 14pt width 0pt & = & \displaystyle 1 - 4 \, \dfrac{(2\pi)^{d-1}}{(2d-1)!!} \, \dfrac{d-d^{\prime\prime}}{|S^{2d-2}|} \, {\bar E}\left[\sin^{2d}(\theta_0)\right] \, |S^{2d-2}| \\ \vrule height 20pt depth 14pt width 0pt & = & \displaystyle 1 - 4 \, \dfrac{(2\pi)^{d-1}}{(2d-1)!!} \, \dfrac{(2d-3)!!}{2(2\pi)^{d-1}} \, (d-d^{\prime\prime}) \, E\left[\sin^2(\theta_0)\right] \\ \vrule height 20pt depth 14pt width 0pt & = & \displaystyle 1 - 2 \, \dfrac{d-d^{\prime\prime}}{2d-1} \, E\left[\sin^2(\theta_0)\right]. \\ \end{array} $$ Now, using Formula (\ref{For:Condition_f}), we obtain the following lower bound: $$ \begin{array}{lll} \vrule height 12pt depth 8pt width 0pt E\left[\sin^2(\theta_0)\right] & = & \displaystyle E\left[(1-\cos(\theta_0))(1+\cos(\theta_0))\right] \\ \vrule height 18pt depth 12pt width 0pt & \geq & \displaystyle E\left[1-\cos(\theta_0)\right] = \dfrac{v_c}{2}. \\ \end{array} $$ The proof is concluded by introducing the previous lower bound in the expression obtained for $F^2(\tilde\Phi)$. \end{proof} \section{Relationship between the fidelities of the states $\Psi_0$, $\tilde\Phi$ and $\Psi$} The results obtained in the previous section allow us to easily prove the following theorem. \begin{thm} \label{Thm:RelationshipTildePhiPsi} If the state $\Psi$ has an isotropic distribution, the following relationship between the fidelities of $\tilde\Phi$ and $\Psi$ holds: \begin{equation} \label{For:RelationshipTildePhiPsi} F^2(\tilde\Phi) \geq F^2(\Psi).
\end{equation} \end{thm} \begin{proof} Theorems \ref{Thm:FidelityPsi} and \ref{Thm:FidelityTildePhi} allow us to prove the result directly, considering that $d-1 \geq d-d^{\prime\prime}$. \end{proof} To compare the fidelities of states $\Psi_0$ and $\tilde\Phi$ we need to use Theorems \ref{Thm:LowerBoundFidelityPsi_0} and \ref{Thm:UpperBoundFidelityTildePsi}. However, we first need an auxiliary result in order to establish the relationship between these states. \begin{lem} \label{Lem:InequalityFidelities} Given $n\in\mathbb{N}$, $n\geq 2$, and $x\in\mathbb{R}$, $0\leq x\leq 4$, the following is satisfied: $$ g(n,x)=2-2\left(1-\dfrac{x}{2}\right)^n-\left(x-\left(\dfrac{x}{2}\right)^2\right) \geq 0. $$ \end{lem} \begin{proof} The change of variable $\displaystyle y=\left(1-\dfrac{x}{2}\right)$ allows us to better analyze the function: $$ g(n,y)=1+y^2-2y^n \quad \text{and} \quad x\in[0,4] \ \Leftrightarrow \ y\in[-1,1]. $$ Since $y^n\leq |y|^n\leq y^2$ for all $y\in[-1,1]$ and $n\geq 2$, we conclude that $g(n,y)\geq 1-y^2\geq 0$ for all $y\in[-1,1]$, and this shows that: $$ g(n,x)\geq 0 \quad \text{for all} \quad x\in[0,4]. $$ \end{proof} The previous lemma allows us to obtain the main result of this article. \begin{thm} \label{Thm:RelationshipPsi_0TildePhi} If states $\Psi_0$ and $\Psi$ have isotropic distributions with variances $v_u$ and $v_c$, respectively, and the density function of $\Psi$ satisfies Formula (\ref{For:Condition_f}), the following relationship between the fidelities of $\Psi_0$ and $\tilde\Phi$ holds: \begin{equation} \label{For:RelationshipPsi_0TildePhi} F^2(\Psi_0) \geq F^2(\tilde\Phi). \end{equation} \end{thm} \begin{proof} Theorems \ref{Thm:LowerBoundFidelityPsi_0} and \ref{Thm:UpperBoundFidelityTildePsi} allow us to prove the result; it suffices to establish that the following inequality holds: $$ \dfrac{d-d^{\prime\prime}}{2d^\prime-1} \, v_c \geq \dfrac{2d^\prime-2}{2d^\prime-1}\left(v_u-\left(\dfrac{v_u}{2}\right)^2\right).
$$ Taking into account that $d=d^\prime d^{\prime\prime}$, the above inequality is equivalent to the following: $$ v_c \geq \dfrac{2}{d^{\prime\prime}}\left(v_u-\left(\dfrac{v_u}{2}\right)^2\right). $$ Since $d^{\prime\prime}\geq 2$, it suffices to prove the first of the following two inequalities: $$ v_c \geq v_u-\left(\dfrac{v_u}{2}\right)^2 \geq \dfrac{2}{d^{\prime\prime}}\left(v_u-\left(\dfrac{v_u}{2}\right)^2\right). $$ Substituting the value of $v_c$ given in Formula (\ref{For:RelationshipVariances}) and using the function $g(n,x)$ of Lemma \ref{Lem:InequalityFidelities}, we have: $$ v_c \geq v_u-\left(\dfrac{v_u}{2}\right)^2 \quad \Leftrightarrow \quad g(n,v_u)\geq 0. $$ Finally, Lemma \ref{Lem:InequalityFidelities} allows us to conclude the proof, using the fact that the variance $v_u\in[0,4]$. \end{proof} If the isotropic distributions of $\Psi$ and $\Psi_0$ are normal, the condition given in Formula (\ref{For:Condition_f}) for Theorems \ref{Thm:UpperBoundFidelityTildePsi} and \ref{Thm:RelationshipPsi_0TildePhi} is not necessary. Indeed, Corollaries \ref{Cor:FidelityPsiND}, \ref{Cor:FidelityPsi_0} and \ref{Cor:FidelityTildePsiND} clearly imply that: \begin{equation} \label{For:GlobalResult} F(\Psi_0) \geq F(\tilde\Phi) \geq F(\Psi). \end{equation} On the other hand, the condition given by Formula (\ref{For:Condition_f}) for Theorems \ref{Thm:UpperBoundFidelityTildePsi} and \ref{Thm:RelationshipPsi_0TildePhi} is a sufficient condition. However, it is not necessary because it has been obtained by underestimating the fidelity of $\Psi_0$ and overestimating that of $\tilde\Phi$. It is verified for very general isotropic distributions, such as for density functions $f(\theta_0)$ that satisfy the following: $$ f(\theta_0)=0 \quad \text{for all} \quad \theta_0\in\left(\dfrac{\pi}{2},\pi\right].
$$ \begin{figure}[h] \begin{center} \includegraphics[scale=0.22]{Fig_Fidelities_d16.pdf} \includegraphics[scale=0.22]{Fig_Fidelities_d2.pdf} \caption{\centerline{Representation of the fidelities as a function of $\sigma$.}} \label{Fig:Curves_F} \end{center} \end{figure} Figure \ref{Fig:Curves_F} shows the curves of $F^2(\Psi_0)$, $F^2(\tilde\Phi)$ and $F^2(\Psi)$ for normal isotropic distributions and $n=5$ ($d=32$), in the extreme cases $d^\prime=16$ ($d^{\prime\prime}=2$) and $d^\prime=2$ ($d^{\prime\prime}=16$). The conclusion of the study carried out in this article, in view of the results summarized in Formula (\ref{For:GlobalResult}), is that the best option to obtain the highest fidelity against isotropic errors is not to use quantum codes. On the other hand, the improvement of the fidelity of $\tilde\Phi$ versus that of $\Psi$ seems to be closely related to the dimension of the subspaces to which these states belong: $d^\prime$ for $\tilde\Phi$ versus $d$ for $\Psi$. See Theorems \ref{Thm:FidelityPsi} and \ref{Thm:FidelityTildePhi} and Corollaries \ref{Cor:FidelityPsiND} and \ref{Cor:FidelityTildePsiND}. \section{Conclusions} In this article we have analyzed the ability of quantum codes to increase the fidelity of quantum states affected by isotropic decoherence errors. The results obtained, despite being those expected for this type of quantum errors, are not good from the point of view of controlling errors in quantum computing. The ability of quantum codes to reduce errors does not compensate for the multiplication of the number of gates that they require. This fact implies that the best option against isotropic errors is not to use quantum codes. This result is similar to that obtained in~\cite{LPF}: quantum codes do not reduce the variance of isotropic errors; and in~\cite{LPFM}: the $5$-qubit quantum code does not reduce the variance of qubit independent errors.
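For normal isotropic distributions, the second inequality of Formula (\ref{For:GlobalResult}) can be checked numerically from the closed forms of Corollaries \ref{Cor:FidelityPsiND} and \ref{Cor:FidelityTildePsiND}, since $F^2=\sigma^2+(1-\sigma^2)/m$ is decreasing in the subspace dimension $m$. A minimal sketch (our illustration; the relationship between $\sigma_u$ and $\sigma_c$ is left aside):

```python
# F^2 for a normal isotropic distribution on a subspace of dimension m
# (closed forms of the corollaries above): (1 + (m-1) sigma^2)/m.
def f2(m, sigma):
    return (1 + (m - 1) * sigma**2) / m

d = 32                    # example: n = 5 qubits
for d_prime in (2, 16):   # the two extreme code-space dimensions of the figure
    for i in range(101):
        sigma = i / 100
        # encoded state (dimension d') vs unprotected state (dimension d)
        assert f2(d_prime, sigma) >= f2(d, sigma) - 1e-15
```

The inequality is strict for $\sigma<1$ and saturated at $\sigma=1$, where both fidelities equal one.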
The last result is more worrying since it negatively affects the standard error model of quantum computing. For this reason, it would be important to study the behavior of fidelity in this case. These results indicate that continuous errors must be taken into account, since it is not possible to ensure that the golden rule of error control ``correct all small errors exactly'' is fulfilled. Therefore, the study of the stochastic model of quantum errors, focused on discrete errors, must be extended to continuous errors. For future research, we believe that the continuous quantum computing error model should be further developed. The results on the ability of quantum codes to increase the fidelity or to reduce the variance of quantum errors should be extended to other types of error. It is also important to develop models of the behavior of quantum errors in highly entangled quantum systems. We need to understand better the behavior of errors in these systems, which are so important for quantum computing. Finally, all these approaches should allow a reformulation of fault-tolerant quantum computing for continuous errors. \section{Appendix} The values of the integrals that have been used throughout the article are included in this Appendix. \bigskip $ \displaystyle\int_{0}^{\pi}\sin^{k}(\theta)d\theta= \left\{\begin{array}{l} \displaystyle 2\,\frac{(k-1)!!}{k!!}\quad\mbox{}\quad k=1,\,3,\,5,\,\,\dots \\ \\ \displaystyle \pi\,\frac{(k-1)!!}{k!!}\quad\mbox{}\quad k=2,\,4,\,6,\,\,\dots \end{array}\right.
$ \bigskip $ \displaystyle\int_{0}^{\pi}\frac{\sin^{2d-2}(\theta_0)}{(1+\sigma^2-2\sigma\cos(\theta_0))^d}d\theta_0= \frac{(2d-3)!!}{(2d-2)!!}\frac{\pi}{(1-\sigma^2)}\quad\mbox{}\quad d=1,\,2,\,3,\,\,\dots $ \bigskip $ \displaystyle\int_{0}^{\pi}\frac{\cos(\theta_0)\sin^{2d-2}(\theta_0)}{(1+\sigma^2-2\sigma\cos(\theta_0))^d}d\theta_0= \frac{(2d-3)!!}{(2d-2)!!}\frac{\sigma}{(1-\sigma^2)}\pi\quad\mbox{}\quad d=1,\,2,\,3,\,\,\dots $ \bigskip $ \displaystyle\int_{0}^{\pi}\frac{\sin^{2d}(\theta_0)}{(1+\sigma^2-2\sigma\cos(\theta_0))^d}d\theta_0= \frac{(2d-1)!!}{(2d)!!}\pi\quad\mbox{}\quad d=0,\,1,\,2,\,\,\dots $ \bigskip Starting from the first integral, the surface of a unit sphere of arbitrary even $(2d)$ or odd $(2d-1)$ dimension can be calculated. \bigskip $ \begin{array}{ccl} |{\cal S}_{2d}| & = & \displaystyle \int_0^{\pi}\cdots\int_0^{\pi}\int_0^{2\pi}\sin^{2d-1}(\theta_0)\,\cdots\,\sin^{1}(\theta_{2d-2})\ d\theta_0\,\cdots\,d\theta_{2d-2}d\theta_{2d-1} \\ \\ & = & \displaystyle 2\frac{(2d-2)!!}{(2d-1)!!}\ \frac{(2d-3)!!}{(2d-2)!!}\pi\ 2\frac{(2d-4)!!}{(2d-3)!!}\ \cdots\ \frac{(2-1)!!}{2!!}\pi\ 2\frac{(1-1)!!}{1!!}\ 2\pi \\ \\ & = & \displaystyle \frac{2(2\pi)^d}{(2d-1)!!} \\ \end{array} $ \bigskip $ \begin{array}{ccl} |{\cal S}_{2d-1}| & = & \displaystyle \int_0^{\pi}\cdots\int_0^{\pi}\int_0^{2\pi}\sin^{2d-2}(\theta_0)\,\cdots\,\sin^{1}(\theta_{2d-3})\ d\theta_0\,\cdots\,d\theta_{2d-3}d\theta_{2d-2} \\ \\ & = & \displaystyle \frac{(2d-3)!!}{(2d-2)!!}\pi\ 2\frac{(2d-4)!!}{(2d-3)!!}\ \cdots\ \frac{(2-1)!!}{2!!}\pi\ 2\frac{(1-1)!!}{1!!}\ 2\pi \\ \\ & = & \displaystyle \frac{(2\pi)^d}{(2d-2)!!} \\ \end{array} $
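The Appendix formulas can be cross-checked against the standard Gamma-function expressions $\int_0^\pi\sin^k(\theta)\,d\theta=\sqrt{\pi}\,\Gamma\!\left(\tfrac{k+1}{2}\right)/\Gamma\!\left(\tfrac{k}{2}+1\right)$ and $|S^n|=2\pi^{(n+1)/2}/\Gamma\!\left(\tfrac{n+1}{2}\right)$; a short numerical check (our addition):

```python
import math

def dfact(n):
    """Double factorial n!!, with the convention 0!! = (-1)!! = 1."""
    r = 1
    while n > 1:
        r *= n
        n -= 2
    return r

def wallis(k):
    # closed form of the first Appendix integral
    return (2 if k % 2 else math.pi) * dfact(k - 1) / dfact(k)

def sphere(n):
    # |S^n| = 2 pi^((n+1)/2) / Gamma((n+1)/2)
    return 2 * math.pi**((n + 1) / 2) / math.gamma((n + 1) / 2)

for k in range(1, 20):
    exact = math.sqrt(math.pi) * math.gamma((k + 1) / 2) / math.gamma(k / 2 + 1)
    assert abs(wallis(k) - exact) < 1e-10

for d in range(1, 10):
    assert abs(2 * (2*math.pi)**d / dfact(2*d - 1) - sphere(2*d)) < 1e-9     # |S_{2d}|
    assert abs((2*math.pi)**d / dfact(2*d - 2) - sphere(2*d - 1)) < 1e-9     # |S_{2d-1}|
```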
\section{Introduction} \label{sec:intro} In the Standard Model (SM) of particle physics one fundamental complex scalar $SU(2)_L$ doublet field $\Phi$ with hypercharge $+\tfrac{1}{2}$ and a scalar potential $V(\Phi) = \mu^2 \Phi^\dagger \Phi + \lambda (\Phi^\dagger \Phi)^2$ is responsible for the spontaneous breaking of the electroweak symmetry. This Brout-Englert-Higgs (BEH) mechanism successfully explains the non-zero masses of the electroweak ($W^\pm, Z$) gauge bosons, as well as the SM fermion masses via gauge-invariant and renormalizable interactions, restores unitarity of the scattering amplitude, and predicts the existence of one fundamental scalar particle --- the Higgs boson. The discovery of a scalar boson with a mass $M \simeq 125~\mathrm{GeV}$ at the LHC in 2012 and the on-going measurements of its properties thus far confirm this SM picture --- within the current experimental precision. The discovered scalar particle is a ``SM-like'' Higgs boson. On the other hand, the shape of the scalar potential $V(\Phi)$ is yet to be confirmed experimentally.\footnote{An independent determination of the quartic interaction $\lambda$ requires the measurement of the Higgs self-coupling which will not be possible at the LHC to good-enough precision~\cite{Cepeda:2019klc}.} At first glance, the minimality of the scalar sector of the SM as well as its effectiveness in describing all current experimental data are quite convincing that we have finally completed the particle physics picture. At second glance, however, we find that the Higgs boson, or more generally, the scalar potential, is a unique place to anticipate effects from new physics beyond the SM (BSM). The Higgs field may interact with the dark matter (DM) sector through a so-called Higgs portal~\cite{Arcadi:2019lka}, and may play a crucial role in the generation of the baryon asymmetry of the Universe~\cite{Morrissey:2012db,Servant:2013uwa}. 
BSM theories addressing the so-called hierarchy problem, i.e.~the quadratic sensitivity of the Higgs mass parameter to the UV cutoff scale, which in turn requires an ``unnatural'' fine-tuning of the bare mass parameter, generally modify or extend the Higgs sector. One such theory is Supersymmetry (SUSY), and supersymmetric versions of the SM contain at least two Higgs doublets. Other aspects that may motivate an extension of the scalar sector are a possible improvement of the stability of the vacuum or an explanation of (yet-inconclusive) experimental anomalies seen in the current data (e.g., see Ref.~\cite{Wittbrodt:ALPS2019} and Refs.~\cite{Crivellin:2019mvj}, respectively, for discussions on either topic at this workshop). Quite generally, new physics in the scalar sector can lead to three observational effects: (\emph{i})~modifications of the $125~\mathrm{GeV}$ Higgs boson properties (couplings, decay rates, $CP$-properties); (\emph{ii}) existence of additional electrically neutral or charged scalar bosons; and (\emph{iii}) interactions of the Higgs boson (and other scalar bosons) with other new particles present in the BSM theory (e.g.~supersymmetric particles). Obviously, the Higgs sector is an exciting place to look for new physics effects, and needs to be studied in detail at present and future colliders. So far, experimental results from Run~I and Run~II of the LHC have been rather disillusioning. All measurements of the $125~\mathrm{GeV}$ Higgs boson properties are (within the current experimental precision) in agreement with the SM predictions, and searches for additional scalar states have not found any convincing hints of new particles. In addition, LHC searches for supersymmetric particles or other exotic new particles, DM direct detection experiments, as well as searches for electric dipole moments, have not found evidence for new physics yet. All these experimental facts lead to important constraints on the new physics landscape.
Therefore, in this work, we address the following three questions: \begin{enumerate} \setlength\itemsep{-0.2em} \item What do these (non-)observations tell us about new physics? \item How much more can we probe in the future (at the LHC)? \item Have we looked everywhere? Could we have missed a BSM signal? \end{enumerate} Regarding the coupling properties of the $125~\mathrm{GeV}$ Higgs boson, the sensitivity to new physics can be assessed in an (in principle) model-independent way in the framework of an effective field theory, as long as the new physics is too heavy or too weakly coupled to be directly accessible at the experiment (see Ref.~\cite{Mimasu:ALPS2019} for a discussion at this workshop). In contrast, in this work we focus on specific renormalizable BSM models. While such studies are by definition model-dependent, this approach is highly predictive, has no validity restrictions (beyond those inherent to the model) and --- most importantly --- enables us to study the possible complementarity of different observables, e.g.~searches for additional scalar bosons with the Higgs rate measurements, or even farther, with flavor or dark matter observables. In order to confront BSM models with the experimental results from Higgs searches and measurements we largely employ the public computer tools \texttt{HiggsBounds}~\cite{Bechtle:2013wla,Bechtle:2015pma} and \texttt{HiggsSignals}~\cite{Bechtle:2013xfa,Bechtle:2014ewa}. We discuss scalar singlet extensions of the SM in Sec.~\ref{sec:singlets} and scalar doublet extensions in Sec.~\ref{sec:doublets}. We conclude in Sec.~\ref{sec:conclusions}. \section{Models with additional scalar singlets} \label{sec:singlets} \subsection{Adding a real scalar singlet} \label{subsec:singlet} Arguably the simplest extension of the SM Higgs sector is the addition of one real scalar degree of freedom $S$ which is a singlet under the SM gauge group. 
Assuming a discrete $\mathbb{Z}_2$ symmetry, the scalar potential is given by \begin{align} V(\Phi, S) = \mu_\Phi^2 \Phi^\dagger \Phi + \mu_S^2 S^2 + \lambda_1 (\Phi^\dagger \Phi)^2 + \lambda_2 S^4 + \lambda_3 \Phi^\dagger \Phi S^2. \end{align} If $S$ does not acquire a vacuum expectation value (vev), $\langle S \rangle = 0$, $S$ is stable and thus a highly constrained DM candidate (see Refs.~\cite{Scott:ALPS2019,Athron:2018hpc} for a discussion at this workshop). No mixing between $S$ and the scalar doublet $\Phi$ occurs. In contrast, if $\langle S \rangle \ne 0$, the two scalar fields $S$ and $\Phi$ mix, forming the two physical scalar states $h_{125}$ and $h_S$ (where $h_{125}$ is identified with the observed Higgs boson), and leading to the following collider signatures: \begin{itemize} \setlength\itemsep{-0.2em} \item[(\emph{i})] A reduced signal strength of the $125~\mathrm{GeV}$ Higgs boson as the Higgs couplings to SM fermions and gauge bosons are universally suppressed by the mixing angle, $\sin\alpha$; \item[(\emph{ii})] An additional scalar boson, $h_S$, may be searched for at the LHC, produced and decaying identically as a SM Higgs boson with the same mass, but with a highly reduced signal rate; \item[(\emph{iii})] If $h_S$ is heavy enough, it can decay into two SM-like Higgs bosons, $h_S \to h_{125}h_{125}$; \item[(\emph{iv})] If $h_S$ is light enough, $h_{125}$ can decay into two light scalar bosons, $h_{125}\to h_S h_S$.
\end{itemize} \begin{figure}[t] \centering \includegraphics[width=0.46\textwidth, trim=0cm 1cm 0cm 1cm]{HS_kappaBRNP}\hfill \includegraphics[width=0.46\textwidth, trim=0cm 1cm 0cm 1cm]{Z2_CMS_original} \caption{\emph{Real scalar singlet extension of the SM:} Constraints on the mixing angle, $\kappa\equiv \sin\alpha$, and the rate of a possible new physics (NP) decay mode, $\mathrm{BR}(H \to\text{NP}) \equiv \mathrm{BR}(h_{125} \to h_S h_S)$, arising from the most recent Higgs rate measurements from ATLAS and CMS (\emph{left panel}); maximal signal rate for $pp \to h_S \to h_{125} h_{125}$ at the $13~\mathrm{TeV}$ LHC, shown as red solid [all constraints applied] and blue dotted line [only EW-scale constraints applied], compared to the current experimental limit from a combination of CMS searches~\cite{Sirunyan:2018two} (\emph{right panel}).} \label{fig:singlet} \end{figure} Detailed phenomenological studies of this model have been presented e.g.~in Refs.~\cite{Robens:2015gla,Robens:2016xkb,Ilnicka:2018def}. The imprint of the first and fourth signature on the $h_{125}$ properties can be simultaneously described in terms of a universal coupling scale factor $\kappa \equiv \sin\alpha$ and a generic branching ratio (BR) of the Higgs decay to new physics, $\mathrm{BR}(H\to \text{NP}) \equiv \mathrm{BR}(h_{125} \to h_S h_S)$. Taking into account all available Higgs signal rate measurements from LHC Run~I and Run~II (up to $137~\mathrm{fb}^{-1}$ [\emph{status: July 2019}]) we use \texttt{HiggsSignals} to obtain the $68\%~\mathrm{C.L.}$ and $95\%~\mathrm{C.L.}$ allowed parameter region shown in Fig.~\ref{fig:singlet}~(\emph{left}). 
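The interplay of the two parameters in Fig.~\ref{fig:singlet}~(\emph{left}) can be made explicit: with a universal coupling modifier $\kappa$ and an additional decay mode, every $h_{125}$ signal strength is rescaled as $\mu=\kappa^2\,(1-\mathrm{BR}(H\to\text{NP}))$. A minimal sketch (our illustration, not the \texttt{HiggsSignals} fit itself):

```python
# Universal signal-strength suppression in the Z2 singlet model (sketch):
# production and partial widths scale with kappa^2 = sin^2(alpha), and the
# new decay h_125 -> h_S h_S dilutes every SM branching ratio.
def mu(kappa, br_np):
    return kappa**2 * (1.0 - br_np)

assert mu(1.0, 0.0) == 1.0                  # SM limit
assert abs(mu(1.0, 0.072) - 0.928) < 1e-12  # quoted 95% C.L. limit on BR(H -> NP)
```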
These constraints imply that the coupling strength of the light scalar $h_S$ must be significantly reduced with respect to the SM value, $g/g_{SM} = \cos\alpha \lesssim 0.26$ or even less.\footnote{Depending on the mass of $h_S$, further constraints from direct LEP Higgs searches~\cite{Barate:2003sz} may be even stronger~\cite{Robens:2015gla,Robens:2016xkb,Ilnicka:2018def}.} More generally, we can infer from Fig.~\ref{fig:singlet}~(\emph{left}) an upper limit on the rate of a new physics decay mode of the $125~\mathrm{GeV}$ Higgs state. For instance, for a Higgs boson with identical couplings as in the SM, $\kappa=1$, we obtain $\mathrm{BR}(H\to \text{NP}) \lesssim 7.2\%$ (at $95\%~\mathrm{C.L.}$).\footnote{This limit is expected to improve to $ \mathrm{BR}(H\to \text{NP}) \le 4.3\%$ at the HL-LHC, see Sec.~6 of Ref.~\cite{Cepeda:2019klc}.} Let's turn to the case where $h_S$ is heavier than $h_{125}$ and consider the collider signature (\emph{iii}). The maximal value of the signal rate $pp \to h_S \to h_{125} h_{125}$ at the $13~\mathrm{TeV}$ LHC that we can obtain in our model within all relevant theoretical and experimental constraints\footnote{This includes requirements of perturbative unitarity, boundedness of the potential, perturbative couplings as well as consistency with Higgs search limits, rate measurements and electroweak (EW) precision observables, see Refs.~\cite{Robens:2015gla,Robens:2016xkb,Ilnicka:2018def} for details. For the blue dotted line, we only impose these constraints at the EW-scale, whereas we apply them up to a high scale $\sim \mathcal{O}(10^{10}~\mathrm{GeV})$ for the red solid line.} is shown in Fig.~\ref{fig:singlet}~(\emph{right}). We compare this with the current experimental limit from a combination of $h_S \to h_{125}h_{125}$ searches by CMS~\cite{Sirunyan:2018two}. We find that with current data, the experimental searches are not yet sensitive to this signature within the $Z_2$-symmetric real scalar singlet model. 
However, we expect that the LHC will become sensitive to parts of the parameter space with $h_S$ masses $\lesssim 500~\mathrm{GeV}$ in the high-luminosity phase. Complementary to $pp \to h_S \to h_{125} h_{125}$ searches are searches for the collider signature (\emph{ii}), most importantly, in the diboson final states, $pp\to h_S \to W^+ W^-$~and~$ZZ$. Current searches~\cite{Khachatryan:2015cwa,CMS:2017vpy,TheATLAScollaboration:2016bvt} already lead to constraints on the parameter space~\cite{Robens:2015gla,Robens:2016xkb,Ilnicka:2018def}. \subsection{Adding two real scalar singlets} \label{subsec:2singlets} We now extend the SM scalar sector by two real scalar singlet fields, $S$ and $X$.\footnote{Here we show results from Ref.~\cite{Robens:2019kga}, where this model is studied in detail.} The scalar potential is then given by \begin{align} V(\Phi, S, X) =& + \mu_\Phi^2 \Phi^\dagger \Phi + \mu_S^2 S^2 + \mu_X^2 X^2 + \lambda_\Phi (\Phi^\dagger \Phi)^2 + \lambda_S S^4 + \lambda_X X^4\nonumber \\ & + \lambda_{\Phi S} \Phi^\dagger \Phi S^2 + \lambda_{\Phi X} \Phi^\dagger \Phi X^2 + \lambda_{SX} S^2 X^2, \end{align} where we imposed a $\mathbb{Z}_2 \times \mathbb{Z}_2'$ discrete symmetry, with the following transformation properties: $\mathbb{Z}_2:\, \Phi \to \Phi,\,S\to -S,\, X \to X,$ and $\mathbb{Z}_2':\, \Phi \to \Phi,\, S\to S,\, X \to -X$. We focus here on the case that both singlet fields break the discrete symmetry spontaneously by acquiring a non-zero vev, $\langle S \rangle \ne 0$, $\langle X \rangle \ne 0$.\footnote{If one of the singlet fields has zero vev, the corresponding scalar boson is stable and thus a DM candidate.} As a result, all three neutral scalar fields mix, forming the three physical scalar states $h_i$ ($i=1,2,3$) with masses $M_i$ (with $M_1 \le M_2 \le M_3$).
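The structure of this three-state mixing can be illustrated with a small numerical sketch (our parametrization assumption: an orthogonal $3\times3$ matrix built from three successive plane rotations, with the doublet admixture of each mass eigenstate setting its coupling rescaling); the squared coupling modifiers then obey an exact sum rule:

```python
# Three-state mixing sketch: h_i = R_{ij} (phi, s, x)_j with R orthogonal.
# The coupling of h_i to SM particles scales with the doublet admixture
# R[i][0], so the squared coupling modifiers sum to one.
import math

def rot(i, j, theta):
    """3x3 rotation by theta in the (i, j) plane."""
    R = [[float(k == l) for l in range(3)] for k in range(3)]
    R[i][i] = R[j][j] = math.cos(theta)
    R[i][j] = math.sin(theta)
    R[j][i] = -math.sin(theta)
    return R

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

a1, a2, a3 = 0.3, -0.7, 1.1   # arbitrary example mixing angles
R = matmul(rot(0, 1, a1), matmul(rot(0, 2, a2), rot(1, 2, a3)))
couplings = [R[i][0] for i in range(3)]   # g(h_i)/g_SM
assert abs(sum(c * c for c in couplings) - 1.0) < 1e-12
```

The sum rule holds for any choice of the three angles, since the product of rotations is orthogonal.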
As in the previous model in Sec.~\ref{subsec:singlet}, the couplings of the three scalar bosons to SM particles are again universally reduced by the mixing, whereas the Higgs self-couplings are determined by the scalar potential parameters and the vevs through the minimization conditions. For convenience, the model can be parametrized in terms of the three Higgs masses, $M_i$, the vevs $v=246~\mathrm{GeV}$, $\langle S \rangle$, $\langle X \rangle$, and the three rotation angles. One of the Higgs states $h_i$ has to be identified with the observed Higgs boson so that its mass is fixed by $M \simeq 125~\mathrm{GeV}$. We are left with seven free model parameters. \begin{figure}[t] \centering \includegraphics[width=0.46\textwidth, trim=0cm 0.5cm 0cm 1cm]{BP1} \hfill \includegraphics[width=0.46\textwidth, trim=0cm 0.5cm 0cm 1cm]{BP6} \caption{\emph{Two real scalar singlet extension of the SM:} Benchmark planes \textbf{BP1} (\emph{left panel}) and \textbf{BP6} (\emph{right panel}) for Higgs-to-Higgs decay signatures at the LHC [taken from Ref.~\cite{Robens:2019kga}]. The hatched areas indicate excluded regions from theoretical or experimental constraints (\emph{see legend}). See text for further details.} \label{fig:singlets} \end{figure} Regarding the collider phenomenology, the model features several interesting possibilities of Higgs-to-Higgs decay signatures. Fig.~\ref{fig:singlets} displays two representative benchmark planes (BP) (taken from Ref.~\cite{Robens:2019kga}) for LHC searches for Higgs-to-Higgs decays: In the first scenario, \textbf{BP1}, $h_3$ is identified with the observed Higgs boson at $125~\mathrm{GeV}$, and Fig.~\ref{fig:singlets}~(\emph{left}) shows the branching ratio (BR) for the decay $h_3 \to h_1 h_2$, amounting to $\sim (5 -7)\%$ in most of the parameter space. It is produced at nearly identical rates as the SM Higgs boson, i.e.~its total $13~\mathrm{TeV}$ LHC production rate is $\sim 50~\mathrm{pb}$. 
The second lightest Higgs state, $h_2$, decays dominantly directly to SM particles (mostly $b\bar{b}$) if $M_2 < 2 M_1$ [\emph{below the red line}], or otherwise decays dominantly as $h_2\to h_1h_1$ [\emph{above the red line}], leading to a \emph{cascade} of Higgs-to-Higgs decays. The lightest scalar $h_1$ decays according to the SM Higgs prediction at its mass value $M_1$, i.e., mostly to $b\bar{b}$ if $M_1 \gtrsim 10~\mathrm{GeV}$. Therefore, in this benchmark scenario, the signal process is characterized by either two di-$b$-jet resonances at $M_1$ and $M_2$ in the invariant mass ($M_{bb}$) spectrum, or even three $M_{bb}$ resonances per event pointing to $M_1$. In the latter case, where $h_2\to h_1h_1$, it may even be possible to reconstruct $M_2$ from four reconstructed $b$-jets. However, the experimental challenge of this signature is the softness of the final state objects, possibly demanding the presence of an associated particle in the production process (e.g., one additional jet, $pp \to h_3 + j$, or Higgs-Strahlung, $pp\to h_3 V$ [$V = W^\pm, Z$]). In contrast, in the other benchmark plane, \textbf{BP6}, we assume $h_1$ to be at $125~\mathrm{GeV}$, and focus on the signature $pp\to h_3 \to h_2 h_2$. The $13~\mathrm{TeV}$ LHC signal rate is shown in Fig.~\ref{fig:singlets}~(\emph{right}). If $M_2 > 250~\mathrm{GeV}$, the decay $h_2\to h_1h_1$ occurs with a branching ratio of $\sim 30\%$, which in turn can lead to a spectacular cascade $pp\to h_3 \to h_2 h_2 \to h_1 h_1 h_1 h_1$, with a rate $\lesssim \mathcal{O}(10~\mathrm{fb})$. If $M_2 \le 250~\mathrm{GeV}$, the most promising signature is $pp\to h_3 \to h_2 h_2 \to W^+W^-W^+W^-$ and we find that a current ATLAS search~\cite{Aaboud:2018ksn} is already sensitive to small parts of the parameter space.
\section{Models with an additional scalar doublet} \label{sec:doublets} \subsection{The $CP$-conserving Two-Higgs-Doublet Model (2HDM)} \label{subsec:2HDM} We now extend the SM scalar sector by a second scalar $\mathrm{SU}(2)_L$ doublet field with hypercharge $+\tfrac{1}{2}$ (see Refs.~\cite{Gunion:1989we,Branco:2011iw} for reviews). With a softly-broken $\mathbb{Z}_2$ symmetry ($\Phi_1 \to \Phi_1$, $\Phi_2 \to -\Phi_2$), and assuming $CP$-conservation, the scalar potential in the \emph{general basis} reads \begin{align} V(\Phi_1,\Phi_2) = &+ m_{11}^2 {\Phi_1}^\dagger \Phi_1 + m_{22}^2 {\Phi_2}^\dagger \Phi_2 - [m_{12}^2 {\Phi_1}^\dagger \Phi_2 + \mathrm{h.c.}] +\tfrac{1}{2} \lambda_1 ({\Phi_1}^\dagger \Phi_1)^2 +\tfrac{1}{2} \lambda_2 ({\Phi_2}^\dagger \Phi_2)^2 \nonumber \\ & + \lambda_3 ({\Phi_1}^\dagger \Phi_1)({\Phi_2}^\dagger \Phi_2) + \lambda_4 ({\Phi_1}^\dagger \Phi_2)({\Phi_2}^\dagger \Phi_1) + [\tfrac{1}{2} \lambda_5 ({\Phi_1}^\dagger \Phi_2)^2 +\mathrm{h.c.}] \end{align} with all parameters being real. The particle spectrum consists of two $CP$-even neutral Higgs bosons $h$ and $H$ (with masses $M_h \le M_H$), one $CP$-odd neutral Higgs boson $A$ (with mass $M_A$) and a pair of charged Higgs bosons $H^\pm$ (with mass $M_{H^\pm}$). In order to suppress dangerous flavor-changing neutral currents (FCNCs) at tree-level, the $\mathbb{Z}_2$ symmetry can be promoted to the fermion sector (with four different possible $\mathbb{Z}_2$ charge assignments to the fermion types). In a 2HDM of Type-I only $\Phi_2$ couples to the SM fermions, whereas in Type-II $\Phi_2$ couples to up-type quarks, and $\Phi_1$ couples to down-type quarks and leptons. In the general basis, the mixing of the two $CP$-even Higgs states is described by the rotation angle $\alpha$, and we define $\tan\beta \equiv v_2/ v_1$, where $v_{1,2}$ are the vevs of the neutral components of the doublet fields $\Phi_{1,2}$.
The (SM normalized) Higgs couplings to vector bosons $V = W^\pm, Z$ for the physical $CP$-even Higgs states $h$ and $H$ are then given at tree-level by \begin{align} \frac{g_{hVV}}{g_{h_\text{SM} V V}} = \sin (\beta - \alpha) \qquad \mbox{and} \qquad \frac{g_{HVV}}{g_{h_\text{SM} V V}} = \cos (\beta - \alpha). \end{align} In the two possible \emph{alignment limits}, $\sin(\beta - \alpha) \to 1$ or $\cos(\beta - \alpha) \to 1$, the Higgs state $h$ or $H$, respectively, has identical tree-level couplings to SM particles as predicted for a SM Higgs boson. Hence, either Higgs state $h$ or $H$ can be identified as the observed Higgs state at $125~\mathrm{GeV}$. It is therefore interesting to ask: \emph{Will we ever be able to distinguish these two cases?} \begin{figure}[t] \centering \includegraphics[width=0.48\textwidth, trim=0cm 0.5cm 0cm 1cm]{c_hsmhphm_mHc_aligned}\hfill \includegraphics[width=0.48\textwidth, trim=0cm 0.5cm 0cm 1cm]{mHc_mh_BR_Hc_Wh_low} \caption{\emph{Two-Higgs doublet model (2HDM) of Type-1}, with heavier Higgs boson $H$ at $125~\mathrm{GeV}$: Higgs-to-diphoton decay rate modification as a function of charged Higgs boson mass, $M_{H^\pm}$, and coupling $g_{HH^+ H^-}/M_{H^\pm}^2$ (\emph{left panel}); minimal decay rate $\mathrm{BR}(H^\pm \to W^\pm h)$ in the $(M_{H^\pm}, M_h)$ plane (\emph{right panel}).} \label{fig:2HDM} \end{figure} It turns out that, within the 2HDM, the answer is yes, due to an interesting interplay of the neutral Higgs bosons $h$, $H$ (and possibly $A$) with the charged Higgs boson, $H^\pm$. Let us assume the heavy Higgs state $H$ to be the observed Higgs boson. The Higgs rate measurements then require the alignment limit, $\cos(\beta-\alpha) \to 1$, to be approximately realized. Interestingly, contributions from $H^\pm$ to the $H\to \gamma \gamma$ decay neither decouple with large charged Higgs mass, $M_{H^\pm}$, nor vanish in the alignment limit~\cite{Bernon:2015wef}.
The relevant coupling factor behaves as \begin{align} g_{HH^+H^-} \xrightarrow{c_{\beta-\alpha} \to 1} - \left( M_H^2 + 2 M_{H^+}^2 - 2\overline{m}^2 \right) / v \xrightarrow{M_{H^+} \gg M_H} - 2M_{H^+}^2/v, \end{align} because $\overline{m}^2 \equiv 2m_{12}^2/\sin(2\beta) \lesssim \mathcal{O}(v^2)$, as imposed by unitarity and stability conditions. It follows that, if the charged Higgs boson is heavy, it will leave an observable trace in the $H\to \gamma\gamma$ rate. This is illustrated in Fig.~\ref{fig:2HDM} (\emph{left}) for the 2HDM of Type-1, showing the rate modification $\mathrm{BR}(H\to \gamma\gamma)/\mathrm{BR}(h_\text{SM}\to \gamma\gamma)$ as a function of $M_{H^\pm}$ and $g_{HH^+H^-}/M_{H^\pm}^2$ for all allowed parameter points with $H$ at $125~\mathrm{GeV}$. We find a decay rate modification of around $-10\%$ at large $M_{H^\pm}$. This brings us to the question: How light can the charged Higgs boson be? In the 2HDM of Type-2, flavor observables --- in particular the $B \to X_s \gamma$ decay rate --- severely constrain the charged Higgs mass, $M_{H^\pm} \gtrsim 600~\mathrm{GeV}$~\cite{Arbey:2017gmh}. In contrast, in Type-1 $M_{H^{\pm}}$ is essentially unconstrained by flavor observables if $\tan\beta\gtrsim 2$. Here, LHC searches for a light (or moderately heavy) charged Higgs boson will be crucial. Indeed, as the second $CP$-even Higgs boson $h$ is very light, $M_h < M_H = 125~\mathrm{GeV}$, and the coupling $g_{H^\pm W^\mp h} \propto \cos(\beta-\alpha)$ is maximal in the alignment limit, the decay $H^\pm \to W^\pm h$ is generally dominant.
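The non-decoupling behaviour of the trilinear coupling quoted above is easy to check numerically. A minimal sketch, where the value of $\overline{m}^2$ is an illustrative assumption within the stated $\mathcal{O}(v^2)$ bound:

```python
# Alignment-limit trilinear coupling from the text, illustrating
# non-decoupling: g_{HH+H-}/M_{H+}^2 -> -2/v for M_{H+} >> M_H.
v, MH = 246.0, 125.0       # GeV (SM vev and M_H = 125 GeV)
mbar2 = 100.0 ** 2         # assumed O(v^2) value allowed by unitarity/stability

def g_HHpHm(MHp):
    return -(MH ** 2 + 2 * MHp ** 2 - 2 * mbar2) / v

for MHp in (200.0, 600.0, 2000.0):
    print(MHp, g_HHpHm(MHp) / MHp ** 2)   # approaches -2/v ~ -0.0081 GeV^-1
```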
The \emph{minimal} value of its decay rate for given values of $M_h$ and $M_{H^\pm}$ is shown in Fig.~\ref{fig:2HDM} (\emph{right}) for the allowed Type-1 parameter points.\footnote{The maximal $\text{BR}(H^\pm \to W^\pm h)$ value is close to $100\%$ in almost the whole mass plane.} Most of the current $H^\pm$ searches at the LHC, however, focus on the fermionic final states ($\tau \nu_\tau$, $tb$), which are insensitive to these scenarios. Direct searches for $H^\pm \to W^\pm h$ decay signatures will therefore be crucial to conclusively discriminate between the $h$ and $H$ interpretation of the observed Higgs state~\cite{Stefaniak:inprogress}. \subsection{The Minimal Supersymmetric Standard Model (MSSM)} \label{subsec:MSSM} At tree-level, the MSSM Higgs sector is a 2HDM of Type-2 with quartic couplings fixed by the gauge couplings. It can therefore be described in terms of two parameters, often chosen to be $M_A$ and $\tan\beta$. However, beyond tree-level, all SUSY parameters affect the Higgs sector. Besides precision Higgs mass and rate measurements, LHC searches for the heavier neutral Higgs bosons $H$ and $A$ decaying to $\tau^+\tau^-$ sensitively probe the parameter space. In Fig.~\ref{fig:MSSM_HL-LHC} we show the current and future HL-LHC sensitivity to the MSSM Higgs sector, employing the recently proposed $M_h^{125}$ and $M_h^{125}(\tilde\chi)$ benchmark scenarios~\cite{Bahl:2018zmf}, via Higgs signal rate measurements and $H/A\to \tau^+\tau^-$ searches~\cite{Cepeda:2019klc}. Heavy Higgs masses below $1~\mathrm{TeV}$ will be completely probed. In the $M_h^{125}(\tilde\chi)$ scenario the $H/A\to \tau^+\tau^-$ reach is weakened due to additional $H/A$ decay modes to light neutralinos and charginos, $H/A \to \tilde{\chi}\tilde{\chi}$. Dedicated experimental searches for these decays would be highly complementary and may improve the coverage in the moderate $\tan\beta$ region.
\begin{figure}[t] \centering \includegraphics[width=0.46\textwidth, trim=0cm 0.5cm 0cm 1cm]{mh125_S2_HBHS}\hfill \includegraphics[width=0.46\textwidth, trim=0cm 0.5cm 0cm 1cm]{mh125chi_S2_HBHS} \caption{HL-LHC prospects for the MSSM Higgs sector, presented in the $M_h^{125}$ (\emph{left panel}) and $M_h^{125}(\tilde\chi)$ scenario (\emph{right panel}), taken from Sec.~9.5 of Ref.~\cite{Cepeda:2019klc}.} \label{fig:MSSM_HL-LHC} \end{figure} \section{Conclusions} \label{sec:conclusions} We discussed the phenomenological status of simple and popular BSM Higgs sectors, including scalar singlet extensions of the SM, the 2HDM and the MSSM Higgs sector. The LHC results on the $125~\mathrm{GeV}$ Higgs boson and searches for additional Higgs states have important implications for BSM Higgs models, and imply that an approximate \emph{alignment limit} (i.e.~SM-like Higgs couplings at tree-level) is realized. Nevertheless, there is still room for new Higgs discoveries in upcoming LHC runs. Additional Higgs states can be lighter or heavier than the discovered Higgs boson, and experimental searches should aim to cover the full accessible kinematical range. Furthermore, some LHC searches only become sensitive with more data, as illustrated here for LHC searches for resonant double Higgs production in the $\mathbb{Z}_2$-symmetric singlet extension(s). We also pointed out so-far-uncovered collider signatures, including Higgs-to-Higgs decays ($h_i \to h_j h_k$ and $H^\pm \to W^\pm h$), and heavy Higgs decays to neutralinos and charginos ($H/A \to \tilde{\chi}\tilde{\chi}$). \section*{Acknowledgments} We thank the ALPS 2019 organizers for a very stimulating workshop and their hospitality. We are grateful to Jonas Wittbrodt for many helpful discussions. This work is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy -- EXC 2121 ``Quantum Universe'' -- 390833306.
\section{Introduction} In \cite{KR}, Kassel and Reutenauer study the zeta function of the Hilbert scheme of $n$ points in the two-torus. The polynomial counting ideals of codimension $n$ in the Laurent algebra in two variables turns out to have an interesting quotient, whose middle coefficient $a_{n,0}$ has a direct description: \begin{equation*} a_{n,0} =\left|\{d\,:\, d|n\,,\,\frac{\sqrt{2n}}{2}<d\leq \sqrt{2n}\}\right|. \end{equation*} We follow the notation from \cite{KR}, which the reader should also consult for more motivation. In a talk at the conference {\em Algebraic geometry and Mathematical Physics 2016}, in honour of A. Laudal's 80th birthday, Kassel discussed the results in \cite{KR} and asked whether the sequence $a_{n,0}$ is bounded or not. Evidently it grows very slowly. The sequence is included in the On-Line Encyclopedia of Integer Sequences as sequence A067742 \cite{oeis}.\\ In this short note, we will show that the sequence is unbounded. The idea is to choose $n$ such that $\sqrt{n/2}$ is a divisor, and to multiply this divisor repeatedly by a number slightly larger than one, making sure that the product still divides $n$ as long as it is smaller than $\sqrt{2n}$. \section{Unboundedness of the sequence} \begin{theorem} Let \begin{equation*} a_{n,0} =\left|\{d\,:\, d|n\,,\,\frac{\sqrt{2n}}{2}<d\leq \sqrt{2n}\}\right|. \end{equation*} Then \begin{equation*} \limsup_{n\to \infty} a_{n,0} = \infty. \end{equation*} More precisely, for any $i\geq 1$ define $s_{max}=\ln(2)/\ln(1+i^{-1})$ and \begin{equation} \label{Choicen} n(i)= 2(i+1)^{2\lceil s_{max}\rceil}\cdot i^{2\lceil s_{max}\rceil}. \end{equation} Then $\lim_{i\to\infty}a_{n(i),0} =\infty$. \end{theorem} \begin{proof} With the choice of $n(i)$ from \eqref{Choicen}, we have that \begin{equation*} \sqrt{n/2} = (i+1)^{\lceil s_{max}\rceil}\cdot i^{\lceil s_{max}\rceil}, \end{equation*} a divisor of $n(i)$.
For each $s=1,2,\dots,\lfloor s_{max}\rfloor$, consider \begin{equation*} d(s) = \sqrt{n/2}\left(\frac{i+1}{i}\right)^s = (i+1)^{\lceil s_{max}\rceil+s}\cdot i^{\lceil s_{max}\rceil-s}. \end{equation*} This divides $n(i)$ as long as $\lceil s_{max}\rceil+s\leq 2 \lceil s_{max}\rceil$ and $\lceil s_{max}\rceil-s\geq 0$, which in both cases translates simply to $s\leq \lfloor s_{max}\rfloor$. Thus we have exhibited $\lfloor s_{max}\rfloor$ distinct divisors, so that \begin{equation*} a_{n(i),0} \geq \lfloor s_{max}\rfloor. \end{equation*} Note also that $s_{max}$ is chosen so that \begin{equation*} \left(\frac{i+1}{i}\right)^{s_{max}} = 2. \end{equation*} Therefore all the $d(s)$ lie in the interval $(\sqrt{2n}/2,\sqrt{2n}]$. Since \begin{equation*} \lim_{i\to\infty}s_{max}(i) = \lim_{i\to\infty} \frac{\ln 2}{\ln(1+i^{-1})} = \infty, \end{equation*} this proves the theorem. \end{proof} The sequence $n(i)$ grows very quickly, whereas the sequence $s_{max}(i)$ grows slowly. It is likely that the minimal $n$ needed to attain a given value of $a_{n,0}$ is a lot smaller than what is constructed in the proof.\\ {\bf Acknowledgements}\\ Thanks are due to C. Kassel for telling me about this problem and encouraging me to write down the proof.
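The construction in the proof can be checked by brute force. A minimal sketch, using the exponent $2\lceil s_{max}\rceil$ on both factors of $n(i)$, which is what the divisibility argument in the proof requires:

```python
from math import ceil, floor, isqrt, log

def a_n0(n):
    """a_{n,0}: number of divisors d of n with sqrt(2n)/2 < d <= sqrt(2n)."""
    # d <= sqrt(2n)  <=>  d <= isqrt(2n);   d > sqrt(2n)/2  <=>  4*d*d > 2*n
    return sum(1 for d in range(1, isqrt(2 * n) + 1)
               if n % d == 0 and 4 * d * d > 2 * n)

def n_of_i(i):
    """n(i) of the theorem, together with the lower bound floor(s_max)."""
    s_max = log(2) / log(1 + 1 / i)
    e = ceil(s_max)
    return 2 * (i + 1) ** (2 * e) * i ** (2 * e), floor(s_max)

for i in (1, 2, 3, 4):
    n, lower_bound = n_of_i(i)
    print(i, n, a_n0(n), lower_bound)   # a_{n(i),0} >= floor(s_max(i))
```

For example, $n(2)=2\cdot 3^4\cdot 2^4=2592$ has the three divisors $48$, $54$, $72$ in the interval $(36,72]$.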
\section{Introduction} Although low mass binary stars are the most abundant stars in the galaxy (Henry et al. 1999), their intrinsic faintness inhibits their detection and study. Non-contact eclipsing binary M dwarf systems have great value, as these systems allow accurate estimates of the most basic stellar parameters: mass and radius. Only four\footnote{We refer specifically to binaries where both components are M dwarfs. There have been, however, a number of M stars whose companion is an F or G MS star (e.g. Pont et al., 2004); see Figure \ref{gr_mr}} such systems are known and have been studied in detail: YY Gem (Bopp 1974; Leung \& Schneider 1978), CM Dra (Lacy 1977; Metcalfe et al. 1996; Kozhevnikova et al. 2004), GJ 2069A (Delfosse et al. 1999; Ribas 2003), and OGLE BW03 V038\footnote{This is a very close although still detached system.} (Maceroni \& Montalban, 2004). The observed properties of each of these systems present discrepancies with the theory of low-mass stellar objects; neither the observed mass-radius nor mass-luminosity relations are well represented by existing models (Benedict 2000); see Figure \ref{gr_mr}. The problem most likely lies in the shortcomings of the physical models, owing to the lack of understanding of the complex atmospheres of such low-mass objects (Baraffe et al. 1998). Enlarging the small existing sample of such systems is therefore desirable, to allow more detailed comparisons between observations and the theory of these ubiquitous, interesting, and complex objects. Here we report preliminary analysis of a new such low-mass eclipsing binary. \section{Observations} \subsection{Photometric Observations} The recently discovered spectroscopic binary, TrES--Her0-07621 ($\alpha$=16$^h$50$^m$20.7$^s$, $\delta$=+46$^{\circ}$39$'$01$''$ (J2000), $V$=15.51 $\pm$ 0.08) was first identified through an analysis of photometric time series from the TrES (Trans-Atlantic Exoplanet Survey) network.
This network consists of three telescopes: {\it STellar Astrophysics and Research on Exoplanets} (STARE,\footnote{Observatorio del Teide, Tenerife, Spain} Brown \& Charbonneau, 1999), {\it Planet Search Survey Telescope} (PSST\footnote{Lowell Observatory, AZ, USA}, Dunham et al. 2004), and {\it Sleuth}\footnote{Palomar Observatory, CA, USA. \tt http://www.astro.caltech.edu/$\sim$ftod/tres/sleuth.html }. The telescopes are similar in their characteristics, with apertures of 10cm, 2048$\times$2048 pixel CCD detectors and fields of view of 6$^{\circ}\times$6$^{\circ}$. TrES collects long-term time-series photometry in one filter. The photometry run in question spanned 54 days, beginning May 6 2003, and was observed in a band roughly equivalent to Harris R at a cadence of 1 image every 2 minutes. The images were reduced and calibrated by an automatic package developed specifically for these data. TrES--Her0-07621 was observed by both STARE and PSST, but the latter time series proved significantly noisier. We therefore analyzed only the STARE lightcurve. The R magnitude is 14.42, with each point having a formal accuracy of 0.04 mag rms. This lightcurve contains 8781 data points, obtained in 309.5 hours over 54 days, giving a duty cycle of 23.8\%\footnote{The data are available via the STARE website {\tt http://www.hao.ucar.edu/public/research/stare/stare.html}}. A high-SNR peak in the time series' frequency spectrum at 1.79 cycles per day initiated the study of TrES--Her0-07621. Folding the star's light curve with a period of 1.1208 d showed it to be an eclipsing binary. The light curve also displays sinusoidal out-of-eclipse variations near the photometric period.
The star's infrared colors from the 2MASS\footnote{Two Micron All Sky Survey: University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology \tt http://irsa.ipac.caltech.edu/cgi-bin/Gator/nph-dd } catalog are quite red (Table \ref{facts}), and the USNO-B\footnote {SIMBAD, operated at CDS, Strasbourg, France; the NASA/IPAC Extragalactic Database (NED) and supported by JPL, California Institute of Technology, {\tt http://www.nofs.navy.mil/data/fchpix/}} catalog shows a significant proper motion. Taken together, these facts suggested that the object is a binary M dwarf, with substantial levels of magnetic activity driven by the rapid, tidally-locked rotation of the component stars; this motivated further study. \subsection{Spectroscopic Observations} In September 2004 we obtained spectroscopic observations of TrES--Her0-07621 using the High Resolution Spectrograph (HRS, Tull 1998) on the Hobby-Eberly Telescope (HET). We secured measurements at 4 epochs; each epoch contained three separate exposures taken over approximately one hour --- giving a total of 12 spectra. The analysis was carried out with standard IRAF (Tody 1993) {\tt echelle} and {\tt rv} package tools, including {\tt fxcorr}. We cross-correlated TrES--Her0-07621 with an M2 dwarf (Gl 623) template and extracted velocities for both components at four distinct phases. We adopted a radial velocity for the Gl 623 primary of -29.2 km s$^{-1}$, given the orbital phase at which the template was secured and a systemic velocity, V$_{sys}$ = -27.5 km s$^{-1}$, from Marcy \& Moore (1989). The HRS utilizes two CCDs covering the blue and red spectral regions. The data from each chip were analyzed independently, resulting in two velocity estimates. A third velocity estimate was obtained by cross-correlating an artificial H-alpha emission template with the H-alpha emission line found in each exposure.
Given the large orbital velocities, there was no blending of correlation peaks at any phase. The three velocities (blue, red, H$\alpha$) are obviously not independent determinations, but do provide an estimate of our internal error. \section{Analysis} Figure \ref{gr_het} shows the component velocities plotted against photometric phase, while figure \ref{gr_fit_res} (top panel) shows the folded photometric light curve. It is evident from the nearly-symmetrical and sinusoidal radial velocity variation and from the highly symmetrical light curve that the orbit is nearly circular, and that the component masses and surface brightnesses are similar. An initial period analysis of the entire STARE lightcurve using the technique of phase dispersion minimization refined the photometric period to 1.1209 $\pm$ 0.0006 days. We predicted and then observed an eclipse on 14 May 2004 with the 1.2m telescope at the Fred L. Whipple Observatory, AZ, USA, using Sloan filters $r$, $i$ and $z$. The long time base provided by this observation allowed us to refine the photometric period. By fitting parabolas to the lightcurves of 21 eclipses (we included only fully observed eclipses), we determined the times of minimum light (center of eclipse) with corresponding errors. For the eclipse observed on 14 May 2004, we only used the time of minimum light from the $r$ filter. We also used observations from IAC80 (see below). Using the bootstrap method we refined the period to 1.12079 $\pm$ 0.00001 days, corresponding to a precision of 1 second. The epoch of secondary minimum, T$_0$, was meanwhile determined to be 2453139.749509 (HJD) $\pm$ 0.000075. TrES--Her0-07621 has a stellar neighbor at a distance of 8'', close enough that the two objects are blended in our STARE observations (STARE has a pixel size of about 11 arcsec).
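The parabola fits used to extract the times of minimum light can be sketched as follows, on toy data; the eclipse depth, curvature, noise level and true minimum time are illustrative assumptions, not the measured values:

```python
import numpy as np

rng = np.random.default_rng(1)
t0_true = 0.0                               # assumed true eclipse center [days]
t = np.linspace(-0.02, 0.02, 41)            # sampling around the minimum
flux = 0.7 + 50.0 * (t - t0_true) ** 2      # toy parabolic eclipse bottom
flux += rng.normal(0.0, 1e-3, t.size)       # photometric noise
a, b, c = np.polyfit(t, flux, 2)            # quadratic fit
t0 = -b / (2.0 * a)                         # vertex = time of minimum light
print(t0)
```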
Observations in $R$ and $I$ Johnson filters using the IAC80 at Observatorio del Teide on 30 August 2004 provided a more realistic picture of the depth of one of the eclipses, while also allowing us to confirm the photometric period. We measured the PSF of both the binary and the neighbor using all five images outside of the eclipse. From these we derived the R fractional flux contribution from this companion star of 0.19 $\pm$ 0.04. Because the companion star is also quite red (Table \ref{facts}), the flux should be similar (to within the error) in both Johnson and Harris R filters, and so we can use this number to analyze the STARE time series. Measurement of the contamination of the eclipse signal from TrES--Her0-07621 by the companion star is important, because it must be accounted for when fitting the time-series data to estimate the stellar radii. This neighbor also has a proper motion that is similar in magnitude and direction to that of TrES--Her0-07621, indicating the possibility that TrES--Her0-07621 is at least a triple system, with the eclipsing pair of stars accompanied by a third M dwarf at a distance of hundreds of AU. Adopting the photometric period as the orbital period and introducing its associated error, we fit all 36 radial velocities (blue, red, H$\alpha$) with a Keplerian model using GaussFit (Jefferys, Fitzpatrick \& McArthur, 1988). The model is similar to that used in McArthur et al. (2004). We assume an eccentricity $e$ of 0. The resulting radial velocity semi-amplitudes are ${\rm K}_1 = 100.54 \ \pm 0.31$ km s$^{-1}$ and ${\rm K}_2 = 101.29 \ \pm 0.31$ km s$^{-1}$, giving $({\rm M}_1 + {\rm M}_2) \sin^3 i = 0.9547 \ \pm 0.0062$ M$_\odot$ and M$_1$/M$_2 = 1.0075 \ \pm 0.0044$. A formal solution including eccentricity (e = 0.006 $\pm$ 0.002) provided a better fit, reducing $\chi^2$ by 8\% while reducing the number of degrees of freedom by 3\%. However, we constrain $e$ = 0 for this analysis.
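The quoted mass sum follows from the standard Keplerian relation $({\rm M}_1+{\rm M}_2)\sin^3 i = P\,({\rm K}_1+{\rm K}_2)^3 (1-e^2)^{3/2}/(2\pi G)$, here with $e=0$; a quick numerical check:

```python
from math import pi

# Recover (M1 + M2) sin^3 i from P, K1, K2 via Kepler's relation (SI units).
G, Msun = 6.674e-11, 1.989e30
P = 1.12079 * 86400.0              # orbital period [s]
K1, K2 = 100.54e3, 101.29e3        # radial velocity semi-amplitudes [m/s]
Msin3i = P * (K1 + K2) ** 3 / (2 * pi * G) / Msun
print(round(Msin3i, 4))            # ~0.955, matching the quoted 0.9547 Msun
```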
We developed a chi-square minimization algorithm to estimate orbital parameters from the light curve, ignoring any variations between eclipses (Figure 3). The input parameters are the period $P$, component masses M$_1$ and M$_2$, limb-darkening coefficients (0.7; Claret, 1998, Table 7) and the light from a third nearby star as a fraction of the total light of the system (0.19 $\pm$ 0.04). The code solves for both radii R$_1$ and R$_2$, effective temperature ratio T$_2$/T$_1$, center of minimum eclipse T$_0$ and inclination $i$. Figure \ref{gr_fit_res} shows the resulting fit to the data. The initial estimates for R$_1$, R$_2$, T$_2$/T$_1$ and $i$ were derived from two-dimensional $\chi^2$ contour plots (while keeping the other two parameters fixed). These contour plots presented high correlations between the two radii, constraining their sum while insensitive to their difference, and between radius (R$_1$ or R$_2$) and inclination; a larger radius implies a smaller inclination. T$_2$/T$_1$ was uncorrelated with both radii and inclination, so its error is given by the corresponding value of this parameter at $\chi^2 + \sigma$ (Press et al. 1986) in the direction of its axis. However, because the other parameters are obviously not independent (R$_1$ and R$_2$, for example), the error spanned the range of radii where the contour value is $\chi^2 + \sigma$ (the full range of the error ellipsoid). Even with the component masses determined, in the absence of a T$_{eff}$ measurement we require the component absolute magnitudes to place these stars on the Mass-Luminosity Relation (MLR). From the TrES data, calibrated using stars within 1$^{\circ}$ that have measured V magnitudes from SIMBAD, we obtain a V-band apparent magnitude of 15.51 $\pm$ 0.08 for the combined 3-star system, giving V-K = 4.63 $\pm$ 0.10 (Table \ref{facts}). Assuming a wavelength-independent relative flux we estimate $\Delta$V$_{AB-C}$ = 1.72 from the difference in J, H and K magnitudes (between the neighbor and the binary).
We can also estimate $\Delta$V$_{A-B}$ (between the binary components) = 0.1 $\pm$ 0.05, based on the derived temperature and radii differences. Taking all of the above into account, we estimate component magnitudes of V$_A$ = 16.37 $\pm$ 0.1, V$_B$ = 16.56 $\pm$ 0.1 and V$_C$ = 17.43 $\pm$ 0.1, where $C$ is the stellar neighbor. From the Hawley et al. (2002) color--spectral type relations we estimate an M3 spectral type for each component. From the Hawley $M_J$--spectral type relationships we obtain component absolute magnitudes of M$_V$ = 11.18, 11.28 $\pm$ 0.3. Accepting this estimate of the luminosities, the distance modulus is $\mu \simeq 16.4-11.2 \simeq 5.2$, corresponding to d$\sim$110 pc. For this nearby system we have assumed no absorption (A$_V$ = 0). We also use the radii and effective temperature (Table \ref{tbl-orbital}) to determine luminosities, differentially with respect to the Sun (e.g. Benedict et al. 2003). With bolometric corrections as a function of temperature from Flower (1996) we obtain an average d = 118 $\pm$ 13 pc for the two components. \section{Results and Comments} Using all the derived parameters and errors, we refit the lightcurve using our code, and tested these results with the code {\it Nightfall}\footnote{http://www.lsw.uni-heidelberg.de/users/rwichman/Nightfall.html} (see below). Both codes give similar results, their differences being within the error bars. Tables \ref{tbl-system} and \ref{tbl-orbital} summarize the results. Our code does not allow for stellar spots, so we subtracted a smooth function (by a Fourier technique) to remove the out-of-eclipse variations, making a rectified lightcurve. We also constrain $e$ = 0. The top panel of Figure \ref{gr_fit_res} shows the synthetic lightcurve (continuous line) corresponding to the model fit (our code) of the folded light curve (small crosses). Phase = 0 corresponds to the secondary eclipse. The bottom panel shows the residuals of the fit.
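The distance quoted from the distance modulus can be reproduced directly, since $d = 10^{\mu/5+1}$ pc:

```python
# Distance from the distance modulus mu = m - M:  d = 10**(mu/5 + 1) parsec.
mu = 16.4 - 11.2
d_pc = 10 ** (mu / 5 + 1)
print(round(d_pc, 1))   # ~110 pc, as quoted in the text
```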
The residuals show no variation as a function of phase indicating an adequate model fit. Because our code is unable to account for spot variability, we inspected the residuals after subtracting the model fit from the unrectified lightcurve. These residuals also showed no evidence of eclipses. We also fit this unrectified lightcurve to find R$_1$, R$_2$, $i$, T$_0$ and T$_2$/T$_1$. The results varied slightly from those for the rectified light curve, but stayed within the error bars (Table \ref{tbl-orbital}). Our original (unrectified) photometric light curve contains non-uniform outside-eclipse variations. Binary systems such as TrES--Her0-07621 are often magnetically active (e.g., Strassmeier et al. 1993). While tidal effects may be important, these non-uniform variations are most likely explained by star spots. We used {\it Nightfall} to model our unrectified lightcurve, because this code allows for the presence of spots on each of the components. Our derived parameters were used as inputs and we attempted to solve for the longitude, latitude and radii of spot(s). There was no unique solution; many combinations of these spot parameters could compensate for the out-of-eclipse variations, although they always presented a 180$^{\circ}$ longitude difference. This preferred longitude difference has also been observed in other active binary systems (see e.g., Henry et al. 1995). The presence of spots can have a significant effect on the accuracy of the derived parameters, such as inclination, temperature and radii (Torres \& Ribas (2002) discuss this for the case of YY Gem). Additional observations, photometry in particular, will be necessary to increase the precision of the radii estimates as well as to learn more about the magnetic behaviour of the stars. This could then provide a link towards a better understanding of the physical processes of these low-mass objects. 
\acknowledgments We thank H\'ector V\'azquez Rami\'o (observations on IAC80), and the operating staff of STARE. The IAC80 and STARE are operated by the Instituto de Astrof\'isica de Canarias in the Spanish Observatorio del Teide. We thank Mark Everett (observations on the 48-inch telescope at Fred L. Whipple Observatory on Mount Hopkins, Arizona USA, operated by the Harvard-Smithsonian Center for Astrophysics). We also thank Rainer Wichmann for the use of his program for light-curve synthesis, {\it Nightfall}. Support for this work was provided by NASA through grants GO-09408 and GO-09407 from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. The Hobby-Eberly Telescope (HET) is a joint project of the University of Texas at Austin, the Pennsylvania State University, Stanford University, Ludwig-Maximilians-Universit\"at M\"unchen, and Georg-August-Universit\"at G\"ottingen. We thank the HET resident astronomers and telescope operators. We thank the referee for their constructive comments.
\section{Introduction} Elliptic flow is an important signature of collective dynamics in non-central heavy-ion collisions at high energies. It is driven by anisotropic pressure gradients built up during the early stage of the collision due to the geometrically anisotropic overlap zone of the colliding nuclei. Moreover, it carries information on such important issues as the equation of state (EOS) and the level of equilibration achieved. Elliptic flow manifests itself in an azimuthal anisotropy of particle yields with respect to the reaction plane. Two-particle azimuthal correlations are sensitive to elliptic flow but at large transverse momenta ($p_T$) are also expected to reveal relics of semihard scattering. We report on successful attempts to trace primeval partonic scattering at $\sqrt{s}$~=~17~GeV by two-particle azimuthal correlations of pions at moderately large $p_T$ ($p_T>$1.2~GeV/$c$). \section{Experimental setup and particle tracking} Figure~\ref{expsetup} shows the CERES experimental setup from 1996. The spectrometer covers a pseudo-rapidity range 2.1~$<\eta<$~2.65 and has full azimuthal acceptance, which is important for studies of azimuthal distributions. Although the overall design was optimized to detect low-mass dilepton pairs~\cite{Lenkeit:1999xu}, CERES offers many capabilities in hadronic physics as well. \begin{figure}[t!] \begin{center} \includegraphics[width=10.5cm]{fig1.eps} \caption{CERES experimental setup in 1996.} \label{expsetup} \end{center} \end{figure} Charged particle tracks are reconstructed on a statistical basis combining information from two radial silicon drift detectors (SDD1, SDD2) placed closely behind the target and a multi-wire proportional chamber (PADC) behind a magnetic field used for momentum determination. Charged pions are identified and distinguished from electrons by smaller ring radii in two ring-imaging Cherenkov detectors (RICH1, RICH2).
Since the RICH detectors are filled with CH$_4$ with a high Cherenkov threshold ($\gamma_{th}\simeq$~32), only pions with $p>$~4.5~GeV/$c$ produce Cherenkov light. Figure~\ref{banana} shows the correlation between the ring radius in RICH2 and the azimuthal deflection in the magnetic field, with the two islands corresponding to negatively and positively charged pions, respectively. Pion momenta are determined from the ring radius measurement due to its higher precision in comparison to the deflection in the magnetic field~\cite{Slivova:thesis}. \begin{figure}[h!] \begin{tabular}{lr} \includegraphics[height=6.0cm]{fig2.eps} & \hspace{-2.5cm} \begin{minipage}[t]{8.0cm} \vspace{-5.5cm} \caption{Correlation between the pion ring radius in RICH2 and the azimuthal deflection in the magnetic field measured between SDDs and PADC. The dashed line represents the expected correlation.} \label{banana} \end{minipage} \end{tabular} \end{figure} \section{Centrality determination} We have analyzed $41\cdot10^6$ Pb+Au collisions taken at $\sqrt{s}$~=~17~GeV. The centrality was determined offline using the number of charged particles $N_{ch}$ measured by the SDD in 2~$<\eta<$~3. The $N_{ch}$ distribution corrected for efficiency losses is shown in Fig.~\ref{multiplicity}. As the multiplicity detector used as a centrality trigger suffered from voltage instabilities, which were unfortunately not continuously monitored, \begin{figure}[t!] \begin{tabular}{lr} \hspace{-0.8cm} \includegraphics[height=6.4cm]{fig3.eps} & \hspace{-2.8cm} \begin{minipage}[t]{7.0cm} \vspace{-6.0cm} \caption{Charged particle multiplicity $N_{ch}$ measured by the silicon drift detectors in the pseudo-rapidity interval 2~$<\eta<$~3. The data are corrected for efficiency losses and compared to UrQMD calculations for various centrality selections as given in the legend.
An overall multiplicative factor of 1.03 was applied to $N_{ch}$ values from UrQMD in order to describe the data in central collisions.} \label{multiplicity} \end{minipage} \end{tabular} \end{figure} we had to use the UrQMD model~\cite{urqmd} to estimate the fraction of geometrical cross section $\sigma_{geo}$ measured. We have concluded that the measured data sample corresponds to the most central (26.0$\pm$1.5)$\%$ of $\sigma_{geo}$. The data sample was divided into 6 centrality classes summarized in Table~\ref{centrality-selection} together with the corresponding fraction of geometrical cross section $\sigma/\sigma_{geo}$, impact parameter $b$, number of participants $N_{part}$, and binary collisions $N_{coll}$ obtained from a Glauber calculation neglecting fluctuations~\cite{Eskola:1989yh}. \begin{table}[h!] \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Class & Events (10$^6$)& $N_{ch}$ & $\sigma/\sigma_{geo} (\%)$ & $b$ (fm) & $N_{part}$ & $N_{coll}$ \\ \hline \cline{1-7} 1& 7.77 & 147 & 21 - 26 & 6.8 - 7.5 & 159 & 293 \\ \hline 2& 6.58 & 198 & 17 - 21 & 6.0 - 6.8 & 189 & 368 \\ \hline 3& 5.66 & 234 & 13 - 17 & 5.3 - 6.0 & 222 & 453 \\ \hline 4& 6.06 & 273 & 9 - 13 & 4.4 - 5.3 & 255 & 542 \\ \hline 5& 6.05 & 321 & 5 - 9 & 3.4 - 4.4 & 289 & 639 \\ \hline 6& 8.16 & 395 & $<$ 5 & $<$ 3.4 & 336 & 774 \\ \hline \end{tabular} \caption{Definition of centrality classes.} \label{centrality-selection} \end{center} \end{table} \section{Collective elliptic flow} The strength of elliptic flow is commonly quantified by the second Fourier coefficient $v_2$~\cite{Poskanzer:1998} of azimuthal particle distributions with respect to the reaction plane $\Psi_R$ \begin{equation} \frac{dN}{d(\phi-\Psi_R)}=A(1+\sum_{n=1}^\infty 2\; v_n \cos (n(\phi-\Psi_R))). \label{single-rp} \end{equation} {\it A priori}, the reaction plane is unknown and is therefore estimated on an event-by-event basis from charged particle tracks measured by the SDDs using a subevent method. 
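The decomposition in Equation~\eqref{single-rp} can be illustrated numerically. The sketch below (an illustration only, not the CERES analysis code) draws azimuthal angles from a distribution with a known $v_2$ and recovers it with the standard estimator $v_2=\langle\cos 2(\phi-\Psi_R)\rangle$, assuming for simplicity that the reaction plane $\Psi_R$ is known exactly (i.e. no event plane dispersion correction is needed):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_phi(v2, psi_r, n):
    # Draw azimuthal angles from dN/d(phi - Psi_R) ~ 1 + 2 v2 cos(2(phi - Psi_R))
    # by rejection sampling with the constant envelope 1 + 2 v2.
    out = []
    while len(out) < n:
        phi = rng.uniform(0.0, 2.0 * np.pi, n)
        u = rng.uniform(0.0, 1.0 + 2.0 * v2, n)
        keep = u < 1.0 + 2.0 * v2 * np.cos(2.0 * (phi - psi_r))
        out.extend(phi[keep].tolist())
    return np.array(out[:n])

def v2_event_plane(phi, psi_r):
    # Second Fourier coefficient with respect to the reaction plane:
    # v2 = <cos 2(phi - Psi_R)>.
    return np.mean(np.cos(2.0 * (phi - psi_r)))

phi = sample_phi(0.05, 0.3, 200_000)
print(round(v2_event_plane(phi, 0.3), 3))  # close to the input v2 = 0.05
```

In the real measurement the event plane is itself estimated from the data, so the raw coefficient must additionally be divided by the event plane resolution.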
Non-uniformities in the event plane distribution are removed by standard procedures~\cite{Poskanzer:1998}. Depending on the centrality of a given collision, the r.m.s. event plane resolution is 35-40 degrees. The centrality and $p_T$ dependence of $v_2$ corrected for the event plane dispersion is shown in Fig.~\ref{v2-nch-pt}. In addition, the $v_2(p_T)$ data points are corrected for Bose-Einstein correlations~\cite{Slivova:thesis,Dinh:1999mn,Adamova:2002wi}. Since this correction procedure becomes questionable for central collisions, the centrality dependence of $v_2$ was left uncorrected. We observe that $v_2$ decreases approximately linearly with centrality and vanishes in the most central collisions, where no asymmetry of the overlap zone remains (Fig.~\ref{v2-nch-pt}a). The $p_T$ dependence of $v_2$ in semicentral collisions (Fig.~\ref{v2-nch-pt}b) shows a linear rise below $p_T$~=~1.5~GeV/$c$. Beyond $p_T\approx$~1.5~GeV/$c$ the slope decreases, possibly indicating a saturation of $v_2$ at high $p_T$ similar to observations at RHIC~\cite{RHIC-combined}. \begin{figure}[t!] \begin{tabular}{lr} \hspace*{-0.4cm} \includegraphics[height=5.5cm]{fig4a.eps} & \hspace*{-0.8cm} \includegraphics[height=5.5cm]{fig4b.eps} \end{tabular} \caption{Centrality (a) and $p_T$ dependence (b) of $v_2$ for charged pions and hadrons as indicated in the legend. Hydrodynamical calculations~\cite{Huovinen} with a phase transition at $T_c$~=~165~MeV are shown for kinetic freeze-out temperatures $T_f$~=~120~MeV (solid line) and $T_f$~=~160~MeV (dashed line). The quoted errors are statistical only.
The absolute systematic errors vary between 0.5$\%$ and 1.5$\%$ going from semicentral to central collisions.} \label{v2-nch-pt} \end{figure} A direct quantitative comparison with a hydrodynamical calculation~\cite{Huovinen} using an EOS with a first order transition to a quark gluon plasma at temperature $T_c$~=~165~MeV favors a higher freeze-out temperature $T_f$~=~160~MeV rather than a lower one, $T_f$~=~120~MeV. However, $T_f$~=~120~MeV is necessary in order to describe the $p_T$ spectra of protons. Possible explanations might be either incomplete thermalization or the need to include viscous effects in the calculations~\cite{Teaney}. \section{Azimuthal correlations at high $p_T$} We turn to the measurement of two-particle azimuthal correlations of high-$p_T$ pions. The two-particle distributions in semicentral collisions corrected for single-track reconstruction efficiency are shown in Fig.~\ref{dtheta-0-20mrad-corrected}. At small opening angles, overlapping rings in the RICH detectors cause a drop in pair reconstruction efficiency which manifests itself as a dip around $\Delta\phi\approx$~0 (Fig.~\ref{dtheta-0-20mrad-corrected}a). This instrumental effect can be cured either by using a Monte-Carlo (MC) correction or by enforcing a full ring separation by imposing a cut in the polar angle difference $\Delta\theta$. We have made a compromise between the two methods and use the separation cut $\Delta\theta>$~20~mrad (Fig.~\ref{dtheta-0-20mrad-corrected}b), which still keeps about 60$\%$ of the pion data sample while reducing sensitivity to the MC correction by a factor of four. The corrected distributions reveal a strong anisotropy with maxima at close ($\Delta\phi\approx$~0) and back-to-back ($\Delta\phi\approx\pi$) angles. \begin{figure}[t!] \begin{center} \includegraphics[height=5.8cm]{fig5.eps} \caption{Two-pion azimuthal distributions ($p_{T}>$1.2~GeV/$c$) for the centrality class C1 (a) without and (b) with a $\Delta\theta>$~20~mrad cut applied.
Open symbols: data only after the MC correction for single track reconstruction efficiency. Closed symbols: data after an additional MC correction of the finite two-track resolution.} \label{dtheta-0-20mrad-corrected} \end{center} \end{figure} \begin{figure}[b!] \begin{center} \begin{tabular}{lr} \hspace*{-0.3cm} \includegraphics[height=6.0cm]{fig6a.eps} & \hspace*{-0.7cm} \includegraphics[height=6.0cm]{fig6b.eps} \end{tabular} \caption{Comparison of $v_2$ (circles) obtained from the event plane method and $\sqrt{p_2}$ (triangles) from two-pion azimuthal correlations. (a) Centrality dependence for full azimuth (open triangles) and a range restricted to the back-to-back peak ($|\Delta\phi|\ge$~0.6~rad, closed triangles). (b) $p_T$ dependence for the centrality class C1.} \label{v2-p2-comparison} \end{center} \end{figure} The two-particle distributions can be decomposed again using a Fourier method \begin{equation} \frac{dN}{d\Delta\phi} = B(1+\sum_{n=1}^\infty 2\; p_n \cos (n\Delta\phi)), \label{2part-eq} \end{equation} where $\Delta\phi$ is an azimuthal angle difference between any two pions in a given event. If only correlations due to collective flow are present, then $p_n=v_n^2$. Figure~\ref{v2-p2-comparison} shows a comparison of the $\sqrt{p_2}$ values with $v_2$ obtained from the event plane method. We observe that the $\sqrt{p_2}$ values are systematically larger than $v_2$. A closer look at the centrality dependence shows that the anisotropy in the back-to-back region approaches zero for central collisions while the close-angle correlations persist even in the most central collisions. The gap between $v_2$ and $\sqrt{p_2}$ seems to increase with increasing $p_T$ (Fig.~\ref{v2-p2-comparison}b). Unfortunately, the statistical significance of this measurement is degraded by invoking a two-dimensional window in $p_T$. \begin{figure}[h!] 
\begin{center} \begin{tabular}{lr} \hspace*{-0.3cm} \includegraphics[height=6.0cm]{fig7a.eps} & \hspace*{-0.7cm} \includegraphics[height=6.0cm]{fig7b.eps} \end{tabular} \caption{Centrality dependence of (a) the Gaussian widths and (b) the yield of close-angle (open symbols) and back-to-back (full symbols) correlation peaks for pions with $p_T>$~1.2~GeV/$c$.} \label{sigma-yield} \end{center} \end{figure} Assuming the observed excess is due to correlations of semihard origin, we have fit the distributions with two Gaussian peaks at $\Delta\phi=0$ and $\pi$ on top of the elliptic-flow-modulated background. The fit parameters are the Gaussian amplitudes, widths and background, while $v_2$ is fixed from the event plane method. The close-angle peak remains narrow (Fig.~\ref{sigma-yield}a) at $\sigma_0$~=~(0.23$\pm$0.03)~rad averaged over the measured centrality. The corresponding average momentum perpendicular to the partonic transverse momentum~\cite{Rak:2004gk} is $\langle |j_{Ty}| \rangle$~=~(190$\pm$25)~MeV/$c$, which is similar to, although somewhat lower than, the ISR~\cite{Angelis:1980bs} and RHIC~\cite{Rak:2004gk} measurements. The back-to-back peak broadens with centrality and escapes detection in central collisions. The last measured point ($N_{coll}=$~600) corresponds to $\langle |k_{Ty}| \rangle$~=~(2.8$\pm$0.6)~GeV/$c$, which agrees well with preliminary results from central Au+Au collisions at RHIC~\cite{Rak:2004gk}. Within the statistical errors, the yield of the close-angle and back-to-back pion pairs, defined as the area under the Gaussian peak (Fig.~\ref{sigma-yield}b), grows linearly with $N_{coll}$, which supports the suggested interpretation of semihard scattering. \begin{figure}[t!] \begin{tabular}{lr} \includegraphics[height=6.0cm]{fig8.eps} & \hspace{-2.5cm} \begin{minipage}[t]{8.0cm} \vspace{-5.5cm} \caption{In-plane (top) and out-of-plane (bottom) two-pion azimuthal correlations for $p_T>$~1.2~GeV/$c$ and $\Delta\theta>$~20~mrad.
Dashed lines are the expectations for pure elliptic flow $v_2$ measured by the event plane method. Data are averaged over centrality classes C1, C2, and C3.} \label{inout} \end{minipage} \end{tabular} \end{figure} Due to the asymmetry of the overlap zone in non-central collisions, the pion yield might be suppressed if pions propagate perpendicular to the reaction plane rather than along it. We have constructed azimuthal distributions confining one of the pions to the region of $\pm\pi/4$ around the reconstructed event plane ({\it in-plane}) or perpendicular to it ({\it out-of-plane}). The distributions corrected for efficiency losses are shown in Fig.~\ref{inout}. For both in-plane and out-of-plane regions, the data lie above the expectations from elliptic flow~\cite{Bielcikova:2003ku}. After subtracting the flow contributions, we have extracted the ratios of yields in-plane with respect to those out-of-plane. This ratio is 1.32$\pm$0.37 for the close-angle peak and 1.39$\pm$0.44 for the back-to-back component. An additional systematic error of 15$\%$ was estimated due to uncertainties in the subtraction of the elliptic flow contribution. Within the given errors, we can conclude that both components show only a weak, if any, preference for the reaction plane orientation. \section{Conclusion} In summary, we have discussed properties of semihard azimuthal correlations of high-$p_T$ pions embedded in collective flow at SPS energy. The observed non-flow components, presumably of semihard origin, show similarities but also important differences to observations at RHIC~\cite{Adler:2002tq,Adams:2004wz}. The close-angle peak remains narrow at all measured centralities, consistent with fragmentation, while the back-to-back component broadens and disappears into the background in the most central collisions. In addition, there seems to be only a weak, if any, preference of semihard pion pairs for the orientation of the reaction plane.
This is different from recent findings at RHIC~\cite{Adams:2004wz} showing a stronger suppression of high-$p_T$ correlations out of the reaction plane. \section*{References}
\section{Introduction} Forecasting is a major task in statistics and is often of crucial importance for decision making. In the simple case when the quantity of interest is univariate and quantitative, point forecasting often takes the form of a regression where one aims at estimating the conditional mean (or a conditional quantile) of the response variable $Y$ given the available information encoded in a vector of covariates $X$. A point forecast is only a rough summary statistic and should at least be accompanied by an assessment of uncertainty (e.g. a standard deviation or a confidence interval). Alternatively, probabilistic forecasting and distributional regression \citep{gneitingkatz} suggest estimating the full conditional distribution of $Y$ given $X$, called the predictive distribution. In recent decades, weather forecasting has been a major motivation for the development of probabilistic forecasting. Ensemble forecasts are based on a given number of deterministic models whose parameters vary slightly in order to take into account observation errors and the incomplete physical representation of the atmosphere. This leads to an ensemble of different forecasts that together also assess the forecast uncertainty. Ensemble forecasts suffer from bias and underdispersion \citep{hamill_1997} and need to be statistically postprocessed in order to be improved. Different postprocessing methods have been proposed, such as Ensemble Model Output Statistics \citep{gneiting_raftery_2005}, Quantile Regression Forests \citep{taillardat_2019} and Neural Networks \citep{schulz_2021}, among others. Distributional regression is now widely used beyond meteorology, and recent methodological works include deep distribution regression by \cite{li_2021}, distributional random forests by \cite{Cevid_et_al_2021} and isotonic distributional regression by \cite{Henzi_et_al_2021}.
The purpose of the present paper is to extend to the framework of distributional regression the celebrated Stone's theorem \citep{Stone_1977}, which states the consistency of local weight algorithms for the estimation of the regression function. The strength of Stone's theorem is that it is fully non-parametric and model-free, with very mild assumptions that cover many important cases such as kernel algorithms and nearest neighbor methods; see e.g. \cite{gyorfi} for more details. We prove that Stone's theorem has a natural and elegant extension to distributional regression with error measured by the Wasserstein distance of order $p\geq 1$. Our result covers not only the case of a one-dimensional output $Y\in\mathbb{R}$, where the Wasserstein distance has a simple explicit form, but also the case of a multivariate output $Y\in\mathbb{R}^d$. The use of the Wasserstein distance is motivated by recent works revealing that it is a useful and powerful tool in statistics; see e.g. the review by \cite{Panaretos_Zemel_2020}. Besides this main result, we characterize, in the case $d=1$ and $p=1$, the optimal minimax rate of convergence on suitable classes of distributions. We also discuss implications of our results for the estimation of various statistics of possible interest, such as the expected shortfall or the probability weighted moment. The structure of the paper is as follows. In Section~\ref{sec:background}, we present the required background on Stone's theorem and Wasserstein spaces. Section~\ref{sec:main} gathers our main results, including the extension of Stone's theorem to distributional regression (Theorem~\ref{thm:main}), the characterization of optimal minimax rates of convergence (Theorem~\ref{thm:minimax}) and some applications (Proposition~\ref{prop:appli} and the subsequent examples). All the technical proofs are postponed to Section~\ref{sec:proofs}.
\section{Background}\label{sec:background} \subsection{Stone's theorem} In a regression framework, we observe a sample $(X_i,Y_i)$, $1\leq i\leq n$, of independent copies of $(X,Y)\in\mathbb{R}^k\times\mathbb{R}^d$ with distribution $P$. Based on this sample and assuming $Y$ integrable, the goal is to estimate the regression function \[ r(x)=\mathbb{E}[Y|X=x],\quad x\in\mathbb{R}^k. \] Local average estimators take the form \begin{equation}\label{eq:r_n} \hat r_n(x)=\sum_{i=1}^n W_{ni}(x) Y_i \end{equation} with $W_{n1}(x),\ldots,W_{nn}(x)$ the \emph{local weights} at $x$. The local weights are assumed to be measurable functions of $x$ and $X_1,\ldots,X_n$ but not to depend on $Y_1,\ldots, Y_n$, that is \begin{equation}\label{eq:X-property} W_{ni}(x)=W_{ni}(x;X_1,\ldots,X_n),\quad 1\leq i\leq n. \end{equation} For the convenience of notation, the dependency on $X_1,\ldots,X_n$ is implicit. In this paper, we focus only on the case of \emph{probability weights} satisfying \begin{equation}\label{eq:proba_weights} W_{ni}(x)\geq 0,\ 1\leq i\leq n,\quad \mbox{and}\quad \sum_{i=1}^n W_{ni}(x)=1. \end{equation} Stone's Theorem states the universal consistency of the regression estimate in $\mathrm{L}^p$-norm. \begin{theorem}[\cite{Stone_1977}]\label{thm:stone} Assume the probability weights~\eqref{eq:proba_weights} satisfy the following three conditions: \begin{itemize} \item[i)] there is $C>0$ such that $\mathbb{E}\left[\sum_{i=1}^n W_{ni}(X)g(X_i)\right]\leq C\mathbb{E}[g(X)]$ for all $n\geq 1$ and measurable $g:\mathbb{R}^k\to [0,+\infty)$ such that $\mathbb{E}[g(X)]<\infty$; \item[ii)] for all $\varepsilon>0$, $\sum_{i=1}^n W_{ni}(X) \mathds{1}_{\{\|X_i-X\|>\varepsilon \}}\to 0$ in probability as $n\to+\infty$; \item[iii)] $\max_{1\leq i\leq n} W_{ni}(X)\to 0$ in probability as $n\to+\infty$. 
\end{itemize} Then, for all $p\geq 1$ and $(X,Y)\sim P$ such that $\mathbb{E}[\|Y\|^p]<\infty$, \begin{equation}\label{eq:stone} \mathbb{E}\left[\|\hat r_n(X)-r(X)\|^p\right]\longrightarrow 0 \quad \mbox{as $n\to+\infty$}. \end{equation} Conversely, if Equation~\eqref{eq:stone} holds, then the probability weights must satisfy conditions $i)-iii)$. \end{theorem} \begin{remark} Stone's theorem is usually stated in dimension $d=1$. Since the convergence of random vectors $\hat r_n(X)\to r(X)$ in $\mathrm{L}^p$ is equivalent to convergence in $\mathrm{L}^p$ of all the components, the extension to the dimension $d\geq 2$ is straightforward. Furthermore, more general weights than probability weights can be considered: condition~\eqref{eq:proba_weights} can be dropped and replaced by the weaker assumptions that \[ |W_{ni}(X)|\leq M \quad \mbox{a.s. for some $M>0$.} \] and \[ \sum_{i=1}^n W_{ni}(X)\to 1 \mbox{ in probability}. \] Such general weights will not be considered in the present paper and we therefore stick to probability weights. The reader can refer to \cite{biau_2015} for a complete proof of Stone's theorem together with a discussion. \end{remark} \begin{example}\label{example1} The following two examples of kernel weights and nearest neighbor weights are the most important ones in the literature and we refer to \cite{gyorfi} Chapter~5 and~6 respectively for more details. \begin{itemize} \item The kernel weights are defined by \begin{equation}\label{eq:kernel-weights} W_{ni}(x)=\frac{K\Big(\frac{x-X_i}{h_n}\Big)}{\sum_{j=1}^n K\Big(\frac{x-X_j}{h_n}\Big)},\quad 1\leq i\leq n \end{equation} if the denominator is nonzero, and $1/n$ otherwise. Here the bandwidth $h_n>0$ depends only on the sample size $n$ and the function ${K:\mathbb{R}^k\to[0,+\infty)}$ is called a kernel. In this case, the estimator \eqref{eq:r_n} corresponds to the Nadaraya-Watson estimator of the regression function \citep{nadaraya_1964,watson_1964}. 
We say that $K$ is a boxed kernel if there are constants $R_2\geq R_1>0$ and $M_2\geq M_1>0$ such that \[ M_1\mathds{1}_{\{\|x\|\leq R_1\}} \leq K(x)\leq M_2\mathds{1}_{\{\|x\|\leq R_2\}},\quad x\in\mathbb{R}^k. \] Theorem~5.1 in \cite{gyorfi} states that, for a boxed kernel, the kernel weights \eqref{eq:kernel-weights} satisfy conditions $i)-iii)$ of Theorem~\ref{thm:stone} if and only if $h_n\to 0$ and $nh_n^k\to +\infty$ as $n\to+\infty$. \item The nearest neighbor (NN) weights are defined by \begin{equation}\label{eq:nearest-neighbor-weights} W_{ni}(x)=\begin{cases} \frac{1}{\kappa_n} & \mbox{if $X_i$ belongs to the $\kappa_n$-NN of $x$}\\ 0 &\mbox{otherwise} \end{cases}, \end{equation} where the number of neighbors $\kappa_n\in\{1,\ldots,n\}$ depends only on the sample size. Recall that the $\kappa_n$-NN of $x$ within the sample $(X_i)_{1\leq i\leq n}$ are obtained by sorting the distances $\|X_i-x\|$ in increasing order and keeping the $\kappa_n$ points with the smallest distances -- as discussed in \cite{gyorfi} Chapter~6, several rules can be used to break ties, such as lexicographic or random tie breaking. Theorem~6.1 in the same reference states that the nearest neighbor weights \eqref{eq:nearest-neighbor-weights} satisfy conditions $i)-iii)$ of Theorem~\ref{thm:stone} if and only if $\kappa_n\to +\infty$ and $\kappa_n/n\to 0$ as $n\to+\infty$. \end{itemize} \end{example} \begin{example}\label{example2} Interestingly, some variants of the celebrated Breiman's Random Forest \citep{Breiman_2001} produce probability weights satisfying the assumptions of Stone's theorem. In Breiman's Random Forest, the splits involve both the covariates and the response variable so that the associated weights $W_{ni}(x)=W_{ni}(x;(X_l,Y_l)_{1\leq l\leq n})$ are not in the form~\eqref{eq:X-property}.
\cite{scornet_2016} considers two simplified versions of infinite random forests where the associated weights $W_{ni}(x)$ do not depend on the response values and satisfy the so-called $X$-property, that is, they are of the form~\eqref{eq:X-property}. For totally non-adaptive forests, the trees are grown thanks to a binary splitting rule that does not use the training sample and is totally random; the author shows that the probability weights associated with the infinite forest satisfy the assumptions of Stone's theorem under the condition that the number of leaves grows to infinity at a rate smaller than $n$ and the leaf volume tends to zero in probability (see Theorem~4.1 and its proof). For $q$-quantile forests, the binary splitting rule involves only the covariates and the author shows that the weights associated with the infinite forest satisfy the assumptions of Stone's theorem provided the subsampling number $a_n$ satisfies $a_n\to+\infty$ and $a_n/n\to 0$ (see Theorem~5.1 and its proof). \end{example} \subsection{Wasserstein spaces} We recall the definition and some elementary facts on Wasserstein spaces on $\mathbb{R}^d$. More details and further results on optimal transport and Wasserstein spaces can be found in the monograph by \cite{Villani_2009}, Chapter 6. For $p\geq 1$, the Wasserstein space $\mathcal{W}_p(\mathbb{R}^d)$ is defined as the set of Borel probability measures on $\mathbb{R}^d$ having a finite moment of order $p$, i.e. such that \begin{equation}\label{eq:def-Mp} M_p(\mu)=\Big(\int_{\mathbb{R}^d} \|y\|^p\,\mu(\mathrm{d} y)\Big)^{1/p}<\infty. \end{equation} It is endowed with the distance defined, for $Q_1,Q_2\in \mathcal{W}_p(\mathbb{R}^d)$, by \begin{equation}\label{eq:wasserstein} \mathcal{W}_p(Q_1,Q_2)=\inf_{\pi\in \Pi(Q_1,Q_2)}\left(\int \|y_1-y_2\|^p\,\pi(\mathrm{d} y_1\mathrm{d} y_2)\right)^{1/p}, \end{equation} where $\Pi(Q_1,Q_2)$ denotes the set of measures on $\mathbb{R}^d\times \mathbb{R}^d$ with marginals $Q_1$ and $Q_2$.
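On the real line, the infimum in Equation~\eqref{eq:wasserstein} is attained by coupling equal quantiles, so that for two empirical measures with the same number of atoms $\mathcal{W}_p$ reduces to matching sorted samples. A minimal numerical sketch of this fact (an illustration added here, not part of the paper's material):

```python
import numpy as np

def wasserstein_p(y1, y2, p=1.0):
    # W_p between two empirical measures on R with equally many atoms:
    # the quantile coupling matches the i-th smallest atom of y1 with
    # the i-th smallest atom of y2, which attains the infimum in the
    # definition of the Wasserstein distance.
    a = np.sort(np.asarray(y1, dtype=float))
    b = np.sort(np.asarray(y2, dtype=float))
    assert a.shape == b.shape, "equal sample sizes assumed for simplicity"
    return np.mean(np.abs(a - b) ** p) ** (1.0 / p)

# Translating a sample by a constant c gives W_p = c for every p >= 1.
y = np.array([0.0, 1.0, 4.0])
print(wasserstein_p(y, y + 0.5, p=1))  # 0.5
print(wasserstein_p(y, y + 0.5, p=2))  # 0.5
```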
A couple $(Z_1,Z_2)$ of random variables with distributions $Q_1$ and $Q_2$ respectively is called a \textit{coupling}. The Wasserstein distance is thus the minimal distance $\|Z_1-Z_2\|_{\mathrm{L}^p}=\mathbb{E}[\|Z_1-Z_2\|^p]^{1/p}$ over all possible couplings. The existence of optimal couplings is ensured since $\mathbb{R}^d$ is a complete and separable metric space, so that the infimum is indeed a minimum. Wasserstein distances are generally difficult to compute, but the case $d=1$ is the exception. A simple optimal coupling is provided by the inverse probability transform: for $i=1,2$, let $Q_i\in \mathcal{W}_p(\mathbb{R})$, let $F_i$ denote its cumulative distribution function and $F_i^{-1}$ its generalized inverse (quantile function). Then, starting from a uniform random variable $U\sim \mathrm{Unif}(0,1)$, an optimal coupling is given by $(Z_1,Z_2)=(F_1^{-1}(U),F_2^{-1}(U))$. Therefore, the Wasserstein distance is explicitly given by \begin{equation}\label{eq:wasserstein1} \mathcal{W}_p(Q_1,Q_2)=\left(\int_0^1 |F_1^{-1}(u)-F_2^{-1}(u)|^p \mathrm{d} u\right)^{1/p}. \end{equation} When $p=1$, a simple change of variable yields \begin{equation}\label{eq:wasserstein2} \mathcal{W}_1(Q_1,Q_2)=\int_{-\infty}^{+\infty} |F_1(u)-F_2(u)| \mathrm{d} u. \end{equation} \section{Main results}\label{sec:main} \subsection{Stone's theorem for distributional regression} We now present the main result of the paper, which is a natural extension of Stone's theorem to the framework of distributional regression. Given a distribution $(X,Y)\sim P$ on $\mathbb{R}^k\times\mathbb{R}^d$, we denote by $F$ the marginal distribution of $Y$ and by $F_x$ its conditional distribution given $X=x$.
This conditional distribution can be estimated on a sample $(X_i,Y_i)_{1 \leq i\leq n}$ of independent copies of $(X,Y)$ by the weighted empirical distribution \begin{equation}\label{eq:def_wed} \hat F_{n,x}=\sum_{i=1}^n W_{ni}(x) \delta_{Y_i} \end{equation} where $\delta_{y}$ denotes the Dirac mass at point $y\in\mathbb{R}^d$. For probability weights satisfying \eqref{eq:proba_weights}, $\hat F_{n,x}$ is a probability measure and can be viewed as a random element in the complete and separable space $\mathcal{W}_p(\mathbb{R}^d)$. We recall that the weights $W_{ni}(x)=W_{ni}(x;X_1,\ldots,X_n)$ implicitly depend on $X_1,\ldots,X_n$ but not on $Y_1,\ldots,Y_n$. \begin{theorem}\label{thm:main} Assume the probability weights satisfy conditions $i)-iii)$ from Theorem~\ref{thm:stone}. Then, for all $p\geq 1$ and $(X,Y)$ such that $\mathbb{E}[\|Y\|^p]<\infty$, \begin{equation}\label{eq:stone-disreg} \mathbb{E}\big[\mathcal{W}_p^p(\hat F_{n,X},F_X)\big]\longrightarrow 0 \quad \mbox{as $n\to+\infty$}. \end{equation} Conversely, if Equation~\eqref{eq:stone-disreg} holds, then the probability weights must satisfy conditions $i)-iii)$. \end{theorem} It is worth noticing that \[ \mathbb{E}\left[\|\hat r_n(X)-r(X)\|^p\right] \leq \mathbb{E}\big[\mathcal{W}_p^p(\hat F_{n,X},F_X)\big] \] so that Theorem~\ref{thm:main} implies Theorem~\ref{thm:stone} in a straightforward way. The proof of Theorem~\ref{thm:main} is postponed to Section~\ref{sec:proofs}. It first considers the case $d=1$, where the Wasserstein distance is explicitly given by formula~\eqref{eq:wasserstein1}. Then, the result is extended to higher dimensions $d\geq 2$ thanks to the notion of max-sliced Wasserstein distance \citep{BG21}, which allows one to reduce the convergence of measures on $\mathbb{R}^d$ to the convergence of their one-dimensional projections (a precise statement is given in Theorem~\ref{thm:BG21} below). \subsection{Rates of convergence} We next consider rates of convergence in the minimax sense.
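Before quantifying rates, the estimator~\eqref{eq:def_wed} can be illustrated numerically with the nearest neighbor weights~\eqref{eq:nearest-neighbor-weights}. The sketch below is a toy illustration (not the paper's code) under an assumed Gaussian conditional model where the true conditional law is known:

```python
import numpy as np

rng = np.random.default_rng(1)

def knn_support(x, X, Y, k):
    # Nearest neighbor probability weights put mass W_ni(x) = 1/k on the
    # k sample points whose covariate X_i is closest to x; the weighted
    # empirical distribution \hat F_{n,x} is then uniform on the returned Y_i.
    idx = np.argsort(np.abs(X - x))[:k]
    return Y[idx]

# Assumed toy model: Y | X = x ~ N(x, 1), so the conditional mean and
# median at x = 0.5 are both equal to 0.5.
n, k = 20_000, 500
X = rng.uniform(0.0, 1.0, n)
Y = X + rng.normal(0.0, 1.0, n)

support = knn_support(0.5, X, Y, k)
# Plug-in statistics of \hat F_{n,x} approach their true conditional values
# as k grows with k/n small.
print(support.mean(), np.median(support))
```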
Note that similar questions and results have been established in \cite{PDNT_22}, where the second-order Cram\'er distance was considered, i.e. \[ \|\hat F_{n,X}-F_X\|_{L_2}^2=\int_{\mathbb{R}}|\hat F_{n,X}(y)-F_X(y)|^2\,\mathrm{d} y. \] We focus here on the Wasserstein distance $\mathcal{W}_p(\hat F_{n,X},F_X)$ and consider only the case $d=1$ and $p=1$, which allows the explicit expression~\eqref{eq:wasserstein2}. The other cases seem harder to analyze and are beyond the scope of the present paper. Our first result considers the error in Wasserstein distance when $X=x$ is fixed. \begin{proposition}\label{prop:upperbound} Assume $d=1$ and $(X,Y)\sim P$ such that $\mathbb{E}[|Y|]<\infty$. Then, \[ \mathbb{E} \big[\mathcal{W}_1(\hat F_{n,x}, F_x)\big]\leq \mathbb{E} \Big[ \sum_{i=1}^n W_{ni}(x)\mathcal{W}_1(F_{X_i},F_{x})\Big]+ M(x) \mathbb{E}\Big[\sum_{i=1}^n W_{ni}^2(x)\Big]^{1/2}, \] where $M(x)=\int_{\mathbb{R}}\sqrt{F_x(z)(1-F_x(z))}\mathrm{d}z$. \end{proposition} The first term corresponds to an approximation error due to the fact that we use a biased sample to estimate $F_x$. The more regular the model is, the smaller the approximation error is. The second term is an estimation error due to the fact that we use an empirical mean to estimate $F_x$. This estimation error is smaller if the conditional distribution has a lower dispersion (as measured by $M(x)$) or if $\sum_{i=1}^n W_{ni}^2(x)$ is small. Note that in the case of nearest neighbor weights, $1/\sum_{i=1}^n W_{ni}^2(x)$ is exactly equal to $\kappa_n$, so that this quantity is often referred to as the \textit{effective sample size} and the estimation error is proportional to the square root of the expected reciprocal effective sample size. In view of Proposition~\ref{prop:upperbound}, we introduce the following classes of functions.
\begin{definition}\label{def:class-D} Let $\mathcal{D}(H,L,M)$ be the class of distributions $(X,Y)\sim P$ on $\mathbb{R}^k\times\mathbb{R}$ satisfying: \begin{itemize} \item[a)] $X\in [0,1]^k$ a.s. and $\mathbb{E}|Y|<\infty$, \item[b)] for all $x,x'\in[0,1]^k$, $\mathcal{W}_1(F_x,F_{x'})\leq L\|x-x'\|^H$, \item[c)] for all $x\in[0,1]^k$, $\int_{\mathbb{R}} \sqrt{F_{x}(z)(1-F_{x}(z))}\,\mathrm{d} z \leq M$. \end{itemize} \end{definition} The definition of the class together with Proposition~\ref{prop:upperbound} entails that the expected error is uniformly bounded on the class $\mathcal{D}(H,L,M)$ by \begin{align} &\mathbb{E}\Big[ \mathcal{W}_1(\hat F_{n,X}, F_X)\Big]\nonumber\\ &\leq L\mathbb{E} \Big[ \sum_{i=1}^n W_{ni}(X)\| X_i-X\|^H\Big] + M \mathbb{E}\Big[\sum_{i=1}^n W_{ni}^2(X)\Big]^{1/2}.\label{eq:general-bound} \end{align} As a consequence, Proposition~\ref{prop:upperbound} allows one to derive explicit bounds uniformly on $\mathcal{D}(H,L,M)$ for the kernel and nearest neighbor methods from Example~\ref{example1}. For the sake of simplicity, we consider the uniform kernel only. \begin{corollary}\label{cor:kernel} Let $\hat F_{n,X}$ be given by the kernel method with uniform kernel $K(x)=\mathds{1}_{\{\|x\|\leq 1\}}$ and weights given by Equation~\eqref{eq:kernel-weights}. If $P\in\mathcal{D}(H,L,M)$, then \[ \mathbb{E}\big[\mathcal{W}_1(\hat F_{n,X}, F_X)\big] \leq Lh_n^H +M\sqrt{(2+1/n)c_k}(nh_n^k)^{-1/2}+Lk^{H/2}c_k(nh_n^k)^{-1} \] with $c_k=k^{k/2}$. \end{corollary} \begin{corollary}\label{cor:knn} Let $\hat F_{n,X}$ be given by the nearest neighbor method with weights given by Equation~\eqref{eq:nearest-neighbor-weights} and assume $P\in\mathcal{D}(H,L,M)$.
Then, \[ \mathbb{E}\big[\mathcal{W}_1(\hat F_{n,X}, F_X)\big] \leq \begin{cases} L8^{H/2}( \kappa_n/n)^{H/2} +M\kappa_n^{-1/2}& \mbox{if } k=1,\\ L\tilde{c}_k^{H/2}( \kappa_n/n)^{H/k} +M\kappa_n^{-1/2}& \mbox{if } k\geq 2, \end{cases} \] where $\tilde{c}_k$ depends only on the dimension $k$ and is defined in \citet[Theorem~2.4]{biau_2015}. \end{corollary} One can see that consistency holds --- i.e. the expected error tends to $0$ as $n\to+\infty$ --- as soon as $h_n\to 0$ and $nh_n^k\to+\infty$ for the kernel method and $\kappa_n/n\to 0$ and $\kappa_n\to +\infty$ for the nearest neighbor method. \medskip The next theorem provides the optimal minimax rate of convergence on the class $\mathcal{D}(H,L,M)$. We say that two sequences of positive numbers $(a_n)$ and $(b_n)$ have the same rate of convergence, denoted $a_n\asymp b_n$, if the ratios $a_n/b_n$ and $b_n/a_n$ remain bounded as $n\to+\infty$. \begin{theorem}\label{thm:minimax} The optimal minimax rate of convergence on the class $\mathcal{D}(H,L,M)$ is given by \[ \inf_{\hat F_n} \sup_{P\in \mathcal{D}(H,L,M)}\mathbb{E}[\mathcal{W}_1(\hat F_{n,X}, F_X)] \asymp n^{-H/(2H+k)}. \] \end{theorem} Theorem~\ref{thm:minimax} is the counterpart of \citet[Theorem~1]{PDNT_22}, where the minimax rate of convergence for the second-order Cram\'er distance has been considered. The strategy of proof is similar: i) we prove a lower bound by considering a suitable class of binary distributions, where the error in Wasserstein distance corresponds to an absolute error in point regression, for which the minimax lower rate of convergence is known; ii) we check that the upper bound for the kernel and/or nearest neighbor algorithm has the same rate of convergence as the lower bound, which proves that the optimal minimax rate of convergence has been identified.
In particular, our proof shows that the kernel method defined in Equation~\eqref{eq:kernel-weights} reaches the minimax rate of convergence in any dimension $k\geq 1$ with the choice of bandwidth $h_n\asymp n^{-1/(2H+k)}$; the nearest neighbor method defined in Equation~\eqref{eq:nearest-neighbor-weights} reaches the minimax rate of convergence in any dimension $k\geq2$ with the number of neighbors $\kappa_n\asymp n^{H/(H+k/2)}$. \begin{remark} Our estimate of the minimax rate of convergence holds only for $d=p=1$ and we briefly discuss what can be expected in other cases. When $p=1$ and $d\geq 2$, one may hope to use the strong equivalence between the max-sliced Wasserstein distance and the Wasserstein distance \citep[Theorem 2.3.ii]{BG21}. This requires estimating the expectation of a supremum over the sphere and this line of research is left for further work. When $p>1$, even in dimension $d=1$, it seems difficult to obtain bounds for the Wasserstein distance of order $p$ without very strong assumptions. \cite{BL19} consider the rate of convergence of the empirical distribution $\hat F_n=\frac{1}{n}\sum_{i=1}^n \delta_{Y_i}$ for an i.i.d. sample $Y_1,\ldots,Y_n$ with distribution $F$ on $\mathbb{R}$. A first consistency result (Theorem 2.14) states that $\mathbb{E}[\mathcal{W}_p^p(\hat F_n,F)]\rightarrow 0$ as soon as $F$ has a finite moment of order $p\geq 1$. Regarding rates of convergence, they show (Corollary 3.9) that for $p=1$ the standard rate of convergence holds, i.e. $\mathbb{E}[\mathcal{W}_1(\hat F_n,F)]=O(1/\sqrt{n})$, if and only if \[ J_1(F)=\int_{\mathbb{R}} \sqrt{F(z)(1-F(z))}\mathrm{d} z<\infty. \] On the other hand, rates of convergence for higher orders $p>1$ require the condition \[ J_p(F)=\int_{\mathbb{R}}\frac{[F(z)(1-F(z))]^{p/2}}{f(z)^{p-1}}\mathrm{d} z<\infty, \] where $f$ is the density of the absolutely continuous component of $F$. They show (Corollary 5.5) that the standard rate holds, i.e.
$\mathbb{E}[\mathcal{W}_p^p(\hat F_n,F)]=O(n^{-p/2})$, if and only if $J_p(F)<\infty$. However, this condition is very strong: it does not hold for the Gaussian distribution or for distributions with disconnected support. \end{remark} \subsection{Applications} We briefly illustrate Theorem~\ref{thm:main} with some applications and examples. In statistics, we commonly face the following generic situation: we are interested in a summary statistic $S$ with real values, e.g. quantiles or tail expectation, and we want to assess the effect of $X$ on $Y$ through $S$, that is, we want to estimate $S_{Y\mid X=x}$. Assuming that $S$ is well-defined for distributions on $\mathbb{R}^d$ with a finite moment of order $p\geq 1$, it can be seen as a map $S:\mathcal{W}_p(\mathbb{R}^d)\to\mathbb{R}$ and then $S_{Y\mid X=x}=S(F_x)$ with $F_x$ the conditional distribution of $Y$ given $X=x$. A natural plug-in estimate of $S_{Y\mid X=x}$ is \[ \hat S_{n,x}=S(\hat F_{n,x}) \quad \mbox{with $\hat F_{n,x}$ defined by \eqref{eq:def_wed}}. \] In this generic situation, our extension of Stone's theorem directly implies the following proposition. Recall that $M_p(\mu)$ is defined in Equation~\eqref{eq:def-Mp}. \begin{proposition}\label{prop:appli} Assume $\mathbb{E}[\|Y\|^p]<\infty$ and $\mathbb{P}(F_X\in\mathcal{C})=1$ where ${\mathcal{C}\subset \mathcal{W}_p(\mathbb{R}^d)}$ denotes the continuity set of the statistic $S:\mathcal{W}_p(\mathbb{R}^d)\to\mathbb{R}$. Then weak consistency holds, i.e. \[ \hat S_{n,X}\longrightarrow S_{Y\mid X} \quad \mbox{in probability as $n\to+\infty$.} \] If furthermore the statistic $S$ admits a bound of the form \begin{equation}\label{eq:linear-bound} |S(\mu)|\leq aM_p^q(\mu)+b,\quad \mbox{with $a,b\geq 0$ and $0<q\leq p$}, \end{equation} then consistency holds in $\mathrm{L}^{p/q}$, i.e. \[ \mathbb{E}\big[|\hat S_{n,X}- S_{Y\mid X}|^{p/q}\big]\longrightarrow 0 \quad \mbox{as $n\to+\infty$.} \] \end{proposition} \begin{example} (quantile).
For a distribution $G$ on $\mathbb{R}$, we define the associated quantile function \[ G^{-1}(\alpha)=\inf\{z\in\mathbb{R}: G(z)\geq \alpha\},\quad 0<\alpha<1. \] It is well-known that the weak convergence $G_n\stackrel{d}\to G$ implies the quantile convergence $G_n^{-1}(\alpha)\to G^{-1}(\alpha)$ at each continuity point $\alpha$ of $G^{-1}$. Equivalently, considering $\mathcal{P}(\mathbb{R})$ endowed with the weak convergence topology, the $\alpha$-quantile statistic $S_\alpha(G)=G^{-1}(\alpha)$ is continuous at $G$ as soon as $G^{-1}$ is continuous at $\alpha$. In view of this, we let $\mathcal{C}=\{G\in\mathcal{P}(\mathbb{R})\colon G^{-1} \mbox{ continuous on $(0,1)$}\}$ and assume that the conditional distribution satisfies $\mathbb{P}(F_X\in\mathcal{C})=1$. Then weak convergence holds for the conditional quantiles, i.e. \[ \hat F_{n,X}^{-1}(\alpha)\to F_X^{-1}(\alpha)\quad \mbox{in probability}. \] Note that no integrability condition is needed here because we can apply Proposition~\ref{prop:appli} to the transformed data $(X_i,\tilde Y_i)_{1\leq i\leq n}$, where $\tilde Y_i=\mathrm{tan}^{-1}(Y_i)$ is bounded, so that convergence in Wasserstein distance is equivalent to weak convergence. If furthermore $Y$ is $p$-integrable, then the bound \begin{align*} |S_\alpha(G)|^p&\leq \frac{1}{\alpha}\int_0^\alpha |G^{-1}(u)|^p \mathrm{d} u + \frac{1}{1-\alpha}\int_\alpha^1 |G^{-1}(u)|^p \mathrm{d} u \\ &\leq \Big(\frac{1}{\alpha}+\frac{1}{1-\alpha}\Big) M_p^p(G) \end{align*} implies the strengthened convergence \[ \hat F_{n,X}^{-1}(\alpha)\to F_X^{-1}(\alpha)\quad \mbox{in $\mathrm{L}^p$}. \] \end{example} \begin{example} (tail expectation) The tail expectation above level $\alpha\in (0,1)$ is the risk measure defined for $G\in\mathcal{W}_1(\mathbb{R})$ by \[ S_\alpha(G)=\frac{1}{1-\alpha}\int_\alpha^1 G^{-1}(u)\,\mathrm{d} u.
\] The name comes from the equivalent definition \[ S_{\alpha}(G)=\mathbb{E}[Y\mid Y>G^{-1}(\alpha)],\quad Y\sim G, \] which holds when $G^{-1}$ is continuous at $\alpha$. One can see that \begin{align*} |S_{\alpha}(G_1)-S_{\alpha}(G_2)|&\leq \frac{1}{1-\alpha}\int_\alpha^1 |G^{-1}_1(u)-G^{-1}_2(u)|\,\mathrm{d} u\\ &\leq \frac{1}{1-\alpha}\int_0^1 |G^{-1}_1(u)-G^{-1}_2(u)|\,\mathrm{d} u\\ &= \frac{1}{1-\alpha}\mathcal{W}_1(G_1,G_2), \end{align*} so that $S_\alpha$ is Lipschitz continuous with respect to the Wasserstein distance $\mathcal{W}_1$. As a consequence, the conditional tail expectation $S_\alpha(F_x)$ can be estimated in a consistent way by the plug-in estimator $S_\alpha(\hat F_{n,x})$ since \[ \mathbb{E}[|S_\alpha(\hat F_{n,X})-S_\alpha(F_X)|]\leq \frac{1}{1-\alpha}\mathbb{E}[\mathcal{W}_1(\hat F_{n,X},F_X)]\longrightarrow 0. \] \end{example} \begin{example} (probability weighted moment) A similar result holds for the probability weighted moment of order $p,q>0$ \citep{prob_weig_mome}, defined by \[ S_{p,q}(G)=\int_0^1 G^{-1}(u) u^p(1-u)^q\,\mathrm{d} u,\quad G\in\mathcal{W}_1(\mathbb{R}). \] The name comes from the equivalent definition \[ S_{p,q}(G)=\mathbb{E}[YG(Y)^p(1-G(Y))^q],\quad Y\sim G, \] which holds when $G^{-1}$ is continuous on $(0,1)$. One can again check that the statistic $S_{p,q}$ is Lipschitz continuous with respect to the Wasserstein distance $\mathcal{W}_1$ since \begin{align*} |S_{p,q}(G_1)-S_{p,q}(G_2)|&\leq \int_0^1 |G^{-1}_1(u)-G^{-1}_2(u)|u^p(1-u)^q\,\mathrm{d} u\\ &\leq \max_{0\leq u\leq 1}u^p(1-u)^q \times \int_0^1 |G^{-1}_1(u)-G^{-1}_2(u)|\,\mathrm{d} u\\ &= \Big(\frac{p}{p+q}\Big)^p\Big(\frac{q}{p+q}\Big)^q\mathcal{W}_1(G_1,G_2). \end{align*} \end{example} \begin{example} (covariance) We conclude with a simple example in dimension $d=2$ where the statistic of interest is the covariance between the two components of $Y=(Y_1,Y_2)$ given $X=x$.
Here, we consider \[ S(G)=\int_{\mathbb{R}^2}y_1y_2\, \mathrm{d} G-\int_{\mathbb{R}^2}y_1\, \mathrm{d} G \int_{\mathbb{R}^2}y_2\, \mathrm{d} G,\quad G\in\mathcal{W}_2(\mathbb{R}^2). \] Considering square integrable random vectors $Y=(Y_1,Y_2)$ and $Z=(Z_1,Z_2)$ with distributions $G$ and $H$ respectively, we compute \begin{align*} &|S(G)-S(H)|\\ &=\big|\mathrm{Cov}(Y_1,Y_2)-\mathrm{Cov}(Z_1,Z_2)\big|\\ &=\big|\mathrm{Cov}(Y_1,Y_2-Z_2)-\mathrm{Cov}(Z_1-Y_1,Z_2)\big|\\ &\leq \mathrm{Var}(Y_1)^{1/2}\mathrm{Var}(Y_2-Z_2)^{1/2}+\mathrm{Var}(Z_2)^{1/2}\mathrm{Var}(Z_1-Y_1)^{1/2} \end{align*} where the last line is a consequence of the Cauchy--Schwarz inequality. We have the upper bounds \[ \mathrm{Var}(Y_1)^{1/2}\leq M_2(G),\quad \mathrm{Var}(Z_2)^{1/2}\leq M_2(H) \] and, choosing an optimal coupling $(Y,Z)$ between $G$ and $H$, \[ \mathrm{Var}(Z_1-Y_1)^{1/2}\leq \|Y-Z\|_{L^2}=\mathcal{W}_2(G,H),\quad \mathrm{Var}(Y_2-Z_2)^{1/2}\leq\mathcal{W}_2(G,H). \] Altogether, we obtain \[ |S(G)-S(H)|\leq \big(M_2(G)+M_2(H)\big) \mathcal{W}_2(G,H). \] This proves that $S$ is locally Lipschitz and hence continuous with respect to the distance $\mathcal{W}_2$. Taking $H=\delta_0$, we obtain \[ |S(G)|\leq M_2(G)^2 \] and the bound~\eqref{eq:linear-bound} holds with $q=2$. Thus Proposition~\ref{prop:appli} implies that the plug-in estimator \[ S(\hat F_{n,x}) = \sum_{i=1}^n W_{ni}(x)Y_{1i}Y_{2i}-\sum_{i=1}^n W_{ni}(x)Y_{1i}\sum_{i=1}^n W_{ni}(x)Y_{2i} \] is consistent in absolute mean for the conditional covariance \[ S(F_{x}) = \mathbb{E}(Y_1Y_2\mid X=x)-\mathbb{E}(Y_1\mid X=x)\mathbb{E}(Y_2\mid X=x), \] i.e. $\mathbb{E}[|S(\hat F_{n,X})-S(F_X)|]\longrightarrow 0$ as $n\to+\infty$. \end{example} \section{Proofs}\label{sec:proofs} \subsection{Proof of Theorem~\ref{thm:main}} \begin{proof}[Proof of Theorem~\ref{thm:main} - case $d=1$] We first consider the case when $Y$ is uniformly bounded and takes its values in $[-M,M]$ for some $M>0$.
Then, it holds \[ F_x(z)= \begin{cases} 0& \mbox{if } z<-M \\ 1& \mbox{if } z\geq M \end{cases}\quad \mbox{and} \quad \hat F_{n,x}(z)= \begin{cases} 0& \mbox{if } z<-M \\ 1& \mbox{if } z\geq M \end{cases} \] and the generalized inverse functions (quantile functions) are bounded in absolute value by $M$. As a consequence, \begin{align} \mathbb{E}\left[\mathcal{W}_p^p(\hat F_{n,X},F_X)\right]&= \mathbb{E}\left[\int_{0}^1 |\hat F_{n,X}^{-1}(u)-F^{-1}_X(u)|^p \mathrm{d} u \right]\nonumber\\ &\leq (2M)^{p-1} \mathbb{E}\left[\int_{0}^1 |\hat F_{n,X}^{-1}(u)-F^{-1}_X(u)| \mathrm{d} u \right]\nonumber\\ &= (2M)^{p-1}\int_{-M}^M \mathbb{E}\left[|\hat F_{n,X}(z)-F_X(z)|\right] \mathrm{d} z.\label{eq:proof1} \end{align} In these lines, we have used Equations~\eqref{eq:wasserstein1} and \eqref{eq:wasserstein2} together with Fubini's theorem. Consider the regression model $(X,\mathds{1}_{\{Y\leq z\}})\in\mathbb{R}^d\times \mathbb{R}$ where $z\in[-M,M]$ is fixed. The corresponding regression function is \[ x\mapsto \mathbb{E}[\mathds{1}_{\{Y\leq z\}}|X=x]=F_x(z) \] and the local weight estimator associated with the sample $(X_i,\mathds{1}_{\{Y_i\leq z\}})$, $1\leq i\leq n$, is \[ x\mapsto \sum_{i=1}^n W_{ni}(x)\mathds{1}_{\{Y_i\leq z\}}=\hat F_{n,x}(z). \] An application of Stone's theorem with $p=1$ yields \[ \mathbb{E}\left[|\hat F_{n,X}(z)-F_X(z)|\right]\longrightarrow 0,\quad \mbox{as $n\to+\infty$}, \] whence we deduce, by the dominated convergence theorem, \[ \int_{-M}^M \mathbb{E}\left[|\hat F_{n,X}(z)-F_X(z)|\right]\mathrm{d} z\longrightarrow 0. \] The upper bound~\eqref{eq:proof1} finally implies \[ \mathbb{E}\left[\mathcal{W}_p^p(\hat F_{n,X},F_X)\right]\longrightarrow 0. \] \medskip We next consider the general case when $Y$ is not necessarily bounded. For $M>0$, we define the truncation $Y^M$ of $Y$ by \[ Y^M= \begin{cases} -M& \mbox{if } Y<-M \\ Y& \mbox{if } -M\leq Y< M \\ M& \mbox{if } Y\geq M \end{cases}.
\] We similarly define the truncations $Y_1^M,\ldots,Y_n^M$ of $Y_1,\ldots,Y_n$ respectively. The conditional distribution associated with $Y^M$ is \[ F_x^{M}(z)=\mathbb{P}(Y^M\leq z|X=x)=\begin{cases} 0& \mbox{if } z<-M \\ F_x(z)& \mbox{if } -M\leq z< M \\ 1& \mbox{if } z\geq M \end{cases}. \] The local weight estimator built on the truncated sample is \[ \hat F_{n,x}^{M}(z)=\sum_{i=1}^n W_{ni}(x)\mathds{1}_{\{Y_i^M\leq z\}}. \] By the triangle inequality, \[ \mathcal{W}_p(\hat F_{n,x},F_x)\leq \mathcal{W}_p(\hat F_{n,x},\hat F_{n,x}^{M})+\mathcal{W}_p(\hat F_{n,x}^{M},F_x^{M})+\mathcal{W}_p(F_x^{M},F_x), \] whence we deduce \begin{align*} & \mathbb{E}[\mathcal{W}_p^p(\hat F_{n,X},F_X)]\\ \leq & \; 3^{p-1}\left(\mathbb{E}[\mathcal{W}_p^p(\hat F_{n,X},\hat F_{n,X}^{M})]+\mathbb{E}[\mathcal{W}_p^p(\hat F_{n,X}^{M},F_X^{M})]+\mathbb{E}[\mathcal{W}_p^p(F_X^{M},F_X)] \right). \end{align*} By the preceding result in the bounded case, for any fixed $M$, the second term converges to $0$ as $n\to +\infty$. We next focus on the first and third terms. For fixed $X=x$, there is a natural coupling between the distributions $\hat F_{n,x}$ and $\hat F_{n,x}^{M}$ given by $(Z_1,Z_2)$ such that \[ (Z_1,Z_2)=(Y_i,Y_i^M)\quad \mbox{with probability $W_{ni}(x)$}. \] Clearly $Z_1\sim \hat F_{n,x}$ and $Z_2\sim \hat F_{n,x}^{M}$ and this coupling provides the upper bound \begin{equation}\label{eq:coupling-bound} \mathcal{W}_p^p(\hat F_{n,x},\hat F_{n,x}^{M})\leq \|Z_1-Z_2\|_{\mathrm{L}^p}^p=\sum_{i=1}^n W_{ni}(x)|Y_i-Y_i^M|^p. \end{equation} Let us introduce the function $g_M(x)$ defined by \[ g_M(x)=\mathbb{E}\left[|Y-Y^M|^p\mid X=x\right]. \] Using the fact that, conditionally on $X_1,\ldots,X_n$, the random variables $Y_1,\ldots,Y_n$ are independent with distribution $F_{X_1},\ldots,F_{X_n}$, we deduce \[ \mathbb{E}\left[\mathcal{W}_p^p(\hat F_{n,X},\hat F_{n,X}^{M})\right]\leq \mathbb{E}\left[\sum_{i=1}^n W_{ni}(X)g_M(X_i) \right].
\] The condition $i)$ on the weights in Stone's Theorem then implies \[ \mathbb{E}\left[\sum_{i=1}^n W_{ni}(X)g_M(X_i) \right]\leq C\mathbb{E}[g_M(X)]. \] Because $|Y-Y^M|^p$ converges almost surely to $0$ as $M\to+\infty$ and is bounded by $2^p|Y|^p$ which is integrable, Lebesgue's convergence theorem implies \[ \mathbb{E}[g_M(X)]=\mathbb{E}\left[|Y-Y^M|^p\right]\longrightarrow 0 \quad \mbox{as $M\to+\infty$}. \] We deduce that the first term satisfies \[ \mathbb{E}\left[\mathcal{W}_p^p(\hat F_{n,X},\hat F_{n,X}^{M})\right]\leq C\mathbb{E}[g_M(X)]\longrightarrow 0, \quad \mbox{as $M\to +\infty$} \] where the convergence is uniform in $n$. We now consider the third term. Since $Y^M$ is obtained from $Y$ by truncation, the distribution functions and quantile functions of $Y$ and $Y^M$ are related by \[ F_x^{M}(z)=\begin{cases} 0& \mbox{if } z<-M \\ F_x(z)& \mbox{if } -M\leq z< M \\ 1& \mbox{if } z\geq M \end{cases} \] and \[ (F_x^M)^{-1}(u)=\begin{cases} -M & \mbox{if } F_x^{-1}(u)<-M \\ (F_x)^{-1}(u)& \mbox{if } -M\leq F_x^{-1}(u)< M \\ M& \mbox{if } F_x^{-1}(u)\geq M \end{cases}. \] As a consequence \begin{align*} \mathcal{W}_p^p(F_x^{M},F_x)&=\int_{0}^1|(F_x^{M})^{-1}(u)-F_x^{-1}(u)|^p\mathrm{d} u\\ &=\mathbb{E}\left[|Y^M-Y|^p\mid X=x\right]=g_M(x). \end{align*} We deduce \[ \mathbb{E}\left[\mathcal{W}_p^p(F_X^{M},F_X)\right]= \mathbb{E}[g_M(X)]\longrightarrow 0, \quad \mbox{as $M\to +\infty$} \] where the convergence is uniform in $n$. We finally combine the three terms. The sum can be made smaller than any $\varepsilon>0$ by first choosing $M$ large enough so that the first and third terms are smaller than $\varepsilon/3$ and then choosing $n$ large enough so that the second term is smaller than $\varepsilon/3$. This proves Equation~\eqref{eq:stone-disreg} and concludes the proof. \end{proof} In order to extend the proof from $d=1$ to $d\geq 2$, we need the notion of \textit{sliced Wasserstein distance}, see \cite{BG21} for instance. 
Let $\mathbb{S}^{d-1}=\{u\in\mathbb{R}^d:\|u\|=1\}$ be the unit sphere in $\mathbb{R}^d$ and, for $u\in\mathbb{R}^d$, let $u_*:\mathbb{R}^d\to\mathbb{R}$ be the linear form defined by $u_*(x)=u\cdot x$. The projection in direction $u$ of a measure $\mu$ on $\mathbb{R}^d$ is defined as the pushforward $\mu\circ u_*^{-1}$, which is a measure on $\mathbb{R}$. The inequality $|u\cdot x|\leq \|x\|$ implies that $\mu\circ u_*^{-1}\in \mathcal{W}_p(\mathbb{R})$ for all $\mu\in \mathcal{W}_p(\mathbb{R}^d)$ and $u\in\mathbb{S}^{d-1}$. The sliced and max-sliced Wasserstein distances between $\mu,\nu\in \mathcal{W}_p(\mathbb{R}^d)$ are then defined respectively by \[ S\mathcal{W}_p(\mu,\nu)=\left(\int_{\mathbb{S}^{d-1}} \mathcal{W}_p^p(\mu\circ u_*^{-1},\nu\circ u_*^{-1})\,\sigma(\mathrm{d} u)\right)^{1/p}, \] where $\sigma$ denotes the uniform measure on $\mathbb{S}^{d-1}$ and \[ \overline{SW}_p(\mu,\nu)=\max_{u\in\mathbb{S}^{d-1}} \mathcal{W}_p(\mu\circ u_*^{-1},\nu\circ u_*^{-1}). \] In plain words, the sliced and max-sliced Wasserstein distances are respectively the average and the maximum over all the $1$-dimensional Wasserstein distances between the projections of $\mu$ and $\nu$. The following result is crucial in our proof. \begin{theorem}[\cite{BG21}]\label{thm:BG21} For all $p\geq 1$, $S\mathcal{W}_p$ and $\overline{SW}_p$ are distances on $\mathcal{W}_p(\mathbb{R}^d)$ which are equivalent to $\mathcal{W}_p$, i.e. for every sequence $\mu,\mu_1,\mu_2,\ldots \in \mathcal{W}_p(\mathbb{R}^d)$ \[ S\mathcal{W}_p(\mu_n,\mu)\to 0\quad \Longleftrightarrow\quad \overline{SW}_p(\mu_n,\mu)\to 0\quad \Longleftrightarrow\quad \mathcal{W}_p(\mu_n,\mu)\to 0. \] \end{theorem} \begin{proof}[Proof of Theorem~\ref{thm:main} - case $d\geq 2$.] For the sake of clarity, we divide the proof into three steps: \begin{enumerate} \item[1)] we prove that the result holds in max-sliced Wasserstein distance, i.e.
$\mathbb{E}[\overline{SW}_p^p(\hat F_{n,X},F_X)]\to 0$; \item[2)] we deduce that $\mathcal{W}_p(\hat F_{n,X},F_X)\to 0$ in probability; \item[3)] we show that the sequence $\mathcal{W}_p^p(\hat F_{n,X},F_X)$ is uniformly integrable. \end{enumerate} Points 2) and 3) together imply $\mathbb{E}[\mathcal{W}_p^p(\hat F_{n,X},F_X)]\to 0$ as required. \medskip Step 1). For all $u\in\mathbb{S}^{d-1}$, the projection $\hat F_{n,X}\circ u_*^{-1}$ is the weighted empirical distribution \[ \hat F_{n,X}\circ u_*^{-1}=\sum_{i=1}^nW_{ni}(X)\delta_{Y_i\cdot u}. \] An application of Theorem~\ref{thm:main} to the $1$-dimensional sample $(Y_i\cdot u)_{i\geq 1}$ yields \begin{equation}\label{eq:control_fixed_u} \mathbb{E}[\mathcal{W}_p^p(\hat F_{n,X}\circ u_*^{-1},F_X\circ u_*^{-1})]\longrightarrow 0. \end{equation} Note indeed that $\mathbb{E}[\|Y\|^p]<\infty$ implies $\mathbb{E}[|Y\cdot u|^p]<\infty$ and that the conditional laws of $Y\cdot u$ are the pushforwards of those of $Y$, i.e. $\mathcal{L}(Y\cdot u\mid X)= F_X\circ u_*^{-1}$. We next consider the max-sliced Wasserstein distance. Regularity in the direction $u\in\mathbb{S}^{d-1}$ will be useful and we recall that the Wasserstein distance between projections depends on the direction in a Lipschitz way. More precisely, according to \citet[Proposition 2.2]{BG21}, \[ |\mathcal{W}_p(\mu\circ u_*^{-1},\nu\circ u_*^{-1}) -\mathcal{W}_p(\mu\circ v_*^{-1},\nu\circ v_*^{-1})| \leq (M_p(\mu)+M_p(\nu))\|u-v\|, \] for all $\mu,\nu\in \mathcal{W}_p(\mathbb{R}^d)$ and $u,v\in\mathbb{S}^{d-1}$ (recall Equation~\eqref{eq:def-Mp} for the definition of $M_p(\mu)$, $M_p(\nu)$). The sphere $\mathbb{S}^{d-1}$ being compact, for all $\varepsilon>0$, one can find $K\geq 1$ and $u_1,\ldots,u_K\in\mathbb{S}^{d-1}$ such that the balls $B(u_i,\varepsilon)$ with centers $u_i$ and radius $\varepsilon$ cover the sphere.
Then, due to the Lipschitz property, the max-sliced Wasserstein distance is controlled by \begin{align*} &\overline{SW}_p(\hat F_{n,X},F_X)\\ &=\max_{u\in\mathbb{S}^{d-1}}\mathcal{W}_p(\hat F_{n,X}\circ u_*^{-1},F_X\circ u_*^{-1})\\ &\leq \max_{1\leq k\leq K}\mathcal{W}_p(\hat F_{n,X}\circ u_{k*}^{-1},F_X\circ u_{k*}^{-1})+\varepsilon(M_p(\hat F_{n,X})+M_p(F_{X})). \end{align*} Raising to the $p$-th power and taking the expectation, we deduce \begin{align*} &\mathbb{E}\big[\overline{SW}_p^p(\hat F_{n,X},F_X)\big] \\ &\leq 3^{p-1} \mathbb{E}\big[\max_{1\leq k\leq K}\mathcal{W}_p^p(\hat F_{n,X}\circ u_{k*}^{-1},F_X\circ u_{k*}^{-1})\big] +3^{p-1}\varepsilon^p (\mathbb{E}\big[M_p^p(\hat F_{n,X})\big] +\mathbb{E}\big[M_p^p(F_{X})\big]). \end{align*} The first term converges to $0$ thanks to Equation~\eqref{eq:control_fixed_u}, i.e. \[ \mathbb{E}[\max_{1\leq k\leq K}\mathcal{W}_p^p(\hat F_{n,X}\circ u_{k*}^{-1},F_X\circ u_{k*}^{-1})]\longrightarrow 0. \] The second term is controlled by a constant times $\varepsilon^p$ since \[ \mathbb{E}[M_p^p(\hat F_{n,X})]=\mathbb{E}\big[\sum_{i=1}^n W_{ni}(X)\|Y_i\|^p\big]\leq C\mathbb{E}[\|Y\|^p] \] (by property $i)$ of the weights) and \[ \mathbb{E}[M_p^p(F_{X})]=\mathbb{E}\big[\mathbb{E}[\|Y\|^p\mid X]\big]=\mathbb{E}[\|Y\|^p] \] (by the tower property of conditional expectation). Letting $\varepsilon\to 0$, the second term can be made arbitrarily small. We deduce $\mathbb{E}[\overline{SW}_p^p(\hat F_{n,X},F_X)]\to 0$. \medskip Step 2). As a consequence of step 1), $\overline{SW}_p(\hat F_{n,X},F_X)\to 0$ in probability, or equivalently $\hat F_{n,X}\to F_X$ in probability in the metric space $(\mathcal{W}_p(\mathbb{R}^d),\overline{SW}_p)$. Theorem~\ref{thm:BG21} implies that the identity mapping is continuous from $(\mathcal{W}_p(\mathbb{R}^d),\overline{SW}_p)$ into $(\mathcal{W}_p(\mathbb{R}^d),\mathcal{W}_p)$.
The continuous mapping theorem implies that $\hat F_{n,X}\to F_X$ in probability in the metric space $(\mathcal{W}_p(\mathbb{R}^d),\mathcal{W}_p)$. Equivalently, $\mathcal{W}_p(\hat F_{n,X},F_X)\to 0$ in probability. \medskip Step 3). By the triangle inequality, \[ \mathcal{W}_p(\hat F_{n,X},F_X)\leq \mathcal{W}_p(\hat F_{n,X},\delta_0)+\mathcal{W}_p(\delta_0,F_X) \] with $\delta_0$ the Dirac mass at $0$. Furthermore, for any $\mu\in \mathcal{W}_p(\mathbb{R}^d)$, \[ \mathcal{W}_p(\mu,\delta_0)=\left(\int_{\mathbb{R}^d}\|x\|^p\,\mu(\mathrm{d} x)\right)^{1/p}=M_p(\mu). \] We deduce \[ \mathcal{W}_p^p(\hat F_{n,X},F_X)\leq 2^{p-1}M_p^p(\hat F_{n,X})+2^{p-1} M_p^p(F_X). \] In order to prove the uniform integrability of the left-hand side, it is enough to prove that \begin{equation}\label{eq:ui} \mbox{$M_p^p(F_X)$ is integrable and $M_p^p(\hat F_{n,X})$, $n\geq 1$, is uniformly integrable}. \end{equation} We have \[ M_p^p(F_X)=\mathbb{E}[\|Y\|^p\mid X] \] which is integrable because $\mathbb{E}[\|Y\|^p]<\infty$. Furthermore, \[ M_p^p(\hat F_{n,X})=\sum_{i=1}^n W_{ni}(X)\|Y_i\|^p \] and Stone's Theorem ensures that \[ \sum_{i=1}^n W_{ni}(X)\|Y_i\|^p\longrightarrow \mathbb{E}[\|Y\|^p\mid X] \quad \mbox{in $L^1$}. \] Since the sequence $M_p^p(\hat F_{n,X})$ converges in $L^1$, it is uniformly integrable and the claim follows. \end{proof} \subsection{Proof of Proposition~\ref{prop:upperbound}, Corollaries~\ref{cor:kernel}-\ref{cor:knn} and Theorem~\ref{thm:minimax}} \begin{proof}[Proof of Proposition~\ref{prop:upperbound}] The proof of the upper bound relies on a coupling argument. Without loss of generality, we can assume that the $Y_i$'s are generated from uniform random variables $U_i$ by the inversion method -- i.e. we assume that $U_i$, $1\leq i\leq n$, are independent identically distributed random variables with uniform distribution on $(0,1)$ that are furthermore independent of the covariates $X_i$, $1\leq i\leq n$, and we set $Y_i=F^{-1}_{X_i}(U_i)$.
Then the sample $(X_i,Y_i)$ is i.i.d. with distribution $P$. In order to compare $\hat F_{n,x}$ and $F_x$, we introduce the random variables $\tilde Y_i=F^{-1}_{x}(U_i)$ and we define \[ \tilde F_{n,x}(z)=\sum_{i=1}^n W_{ni}(x)\mathds{1}_{\{\tilde Y_i\leq z\}}. \] By the triangle inequality, \[ \mathcal{W}_1(\hat F_{n,x}, F_x)\leq \mathcal{W}_1(\hat F_{n,x}, \tilde F_{n,x})+ \mathcal{W}_1(\tilde F_{n,x}, F_x). \] On the right-hand side, the first term is interpreted as an \textit{approximation error} comparing the weighted sample $(Y_i,W_{ni}(x))$ to $(\tilde Y_i,W_{ni}(x))$ where the $\tilde Y_i$ have the target distribution $F_x$. The second term is an \textit{estimation error} where we use the weighted sample $(\tilde Y_i,W_{ni}(x))$ with the correct distribution to estimate $F_x$. We first consider the approximation error. An argument similar to the proof of Equation~\eqref{eq:coupling-bound} implies \[ \mathcal{W}_1(\hat F_{n,x}, \tilde F_{n,x})\leq \sum_{i=1}^n W_{ni}(x)|Y_i-\tilde Y_i|. \] Introducing the uniform random variables $U_i$, we get \begin{align*} \mathbb{E}[ \mathcal{W}_1(\hat F_{n,x}, \tilde F_{n,x})]& \leq \mathbb{E}\Big[ \sum_{i=1}^n W_{ni}(x)|F^{-1}_{X_i}(U_i)-F^{-1}_{x}(U_i) |\Big] \\ &=\mathbb{E}\Big[ \sum_{i=1}^n W_{ni}(x)\displaystyle\int_0^1|F^{-1}_{X_i}(u)-F^{-1}_{x}(u)|~\mathrm{d} u \Big] \quad \text{by independence}\\ &= \mathbb{E} \Big[ \sum_{i=1}^n W_{ni}(x)\mathcal{W}_1(F_{X_i},F_{x})\Big], \end{align*} where the last equality relies on Equation~\eqref{eq:wasserstein1}. Note that this control of the approximation error is very general and could be extended to the Wasserstein distance of order $p>1$. We next consider the estimation error; here our approach works for $p=1$ only. By Equation~\eqref{eq:wasserstein2}, \[ \mathbb{E}[\mathcal{W}_1(\tilde F_{n,x}, F_x)] =\mathbb{E}\Big[ \int_{\mathbb{R}} \Big|\sum_{i=1}^n W_{ni}(x)\big(\mathds{1}_{\{\tilde Y_i\leq z\}}-F_x(z) \big)\Big|\mathrm{d}z\Big].
\] Applying Fubini's theorem and using the upper bound \begin{align*} & \mathbb{E}\Big[ \Big|\sum_{i=1}^n W_{ni}(x)\big(\mathds{1}_{\{\tilde Y_i\leq z\}}-F_x(z) \big)\Big|\Big]\\ \leq&\; \mathbb{E}\Big[ \Big|\sum_{i=1}^n W_{ni}(x)\big(\mathds{1}_{\{\tilde Y_i\leq z\}}-F_x(z) \big)\Big|^2\Big]^{1/2}\\ =&\; \mathbb{E}\Big[\sum_{i=1}^n W_{ni}^2(x)\Big]^{1/2} \sqrt{F_x(z)(1-F_x(z))}, \end{align*} we deduce \[ \mathbb{E}[\mathcal{W}_1(\tilde F_{n,x}, F_x)]\leq \mathbb{E}\Big[\sum_{i=1}^n W_{ni}^2(x)\Big]^{1/2} \int_{\mathbb{R}}\sqrt{F_x(z)(1-F_x(z))}\mathrm{d}z. \] Collecting the two terms yields Proposition~\ref{prop:upperbound}. \end{proof} \begin{proof}[Proof of Corollary~\ref{cor:kernel}] For the kernel algorithm with uniform kernel and weights~\eqref{eq:kernel-weights}, we denote by \[ N_n(X)=\sum_{i=1}^n \mathds{1}_{\{X_i\in B(X,h_n)\}} \] the number of points in the ball $B(X,h_n)$ with center $X$ and radius $h_n$. If $N_n(X)\geq 1$, only the points in $B(X,h_n)$ have a nonzero weight, equal to $1/N_n(X)$. If $N_n(X)=0$, then by convention all the weights are equal to $1/n$. Thus we deduce \[ \mathbb{E}\Big[\sum_{i=1}^nW_{ni}^2(X)\Big]=\mathbb{E}\Big[\frac{1}{N_n(X)}\mathds{1}_{\{N_n(X)\geq 1\}}\Big]+\frac{1}{n}\mathbb{P}(N_n(X)=0) \] and \[ \mathbb{E}\Big[\sum_{i=1}^{n}W_{ni}(X)\|X_i-X\|^H\Big]\leq h_n^H \mathbb{P}(N_n(X)\geq 1)+k^{H/2}\mathbb{P}(N_n(X)=0) \] because the distance to $X$ for the points with nonzero weight can be bounded from above by $h_n$ if $N_n(X)\geq 1$ and by $\sqrt{k}$ otherwise (note that $\sqrt{k}$ is the diameter of $[0,1]^k$). Next, we use the fact that, conditionally on $X=x$, $N_n(x)$ has a binomial distribution with parameters $n$ and $p_n(x)=\mathbb{P}(X_1\in B(x,h_n))$.
This implies \[ \mathbb{E}\Big[\frac{1}{N_n(X)}\mathds{1}_{\{N_n(X)\geq 1\}}\Big]\leq \mathbb{E}\Big[\frac{2}{np_n(X)}\Big]\leq \frac{2 c_k}{nh_n^k} \] where the first inequality follows from \citet[Lemma~4.1]{gyorfi} and the second one from \citet[Equation~5.1]{gyorfi}, in which one can take the constant $c_k=k^{k/2}$. Similarly, \begin{align*} \mathbb{P}(N_n(X)=0)&=\mathbb{E}[(1-p_n(X))^n]\leq \mathbb{E}[e^{-np_n(X)}] \\ &\leq \big(\max_{u>0} ue^{-u}\big)\times \mathbb{E}\Big[\frac{1}{np_n(X)}\Big]\\ &\leq \frac{c_k}{nh_n^k}. \end{align*} In view of these different estimates, Equation~\eqref{eq:general-bound} entails \begin{align*} \mathbb{E}\big[\mathcal{W}_1(\hat F_{n,X}, F_X)\big] & \leq L \Big(h_n^H+k^{H/2}\frac{c_k}{nh_n^k}\Big) +M \left(\frac{(2+1/n)c_k}{nh_n^k}\right)^{1/2}\\ &\leq Lh_n^H + M\sqrt{(2+1/n)c_k}(nh_n^k)^{-1/2}+Lk^{H/2}c_k(nh_n^k)^{-1}. \end{align*} \end{proof} \begin{proof}[Proof of Corollary~\ref{cor:knn}] For the nearest neighbor weights~\eqref{eq:nearest-neighbor-weights}, there are exactly $\kappa_n$ non-vanishing weights with value $1/\kappa_n$, whence \[ \sum_{i=1}^nW_{ni}^2(X)=\frac{1}{\kappa_n}. \] Furthermore, the $\kappa_n$ nearest neighbors of $X$ satisfy \[ \|X_{i:n}(X)-X\|\leq \|X_{\kappa_n:n}(X)-X\|,\quad i=1,\ldots,\kappa_n. \] In view of this, Equation~\eqref{eq:general-bound} entails \begin{align*} \mathbb{E}\big[\mathcal{W}_1(\hat F_{n,X}, F_X)\big] & \leq L \mathbb{E}\big[\|X_{\kappa_n:n}(X)-X\|^H\big] +M \kappa_n^{-1/2}\\ & \leq L \mathbb{E}\big[\|X_{\kappa_n:n}(X)-X\|^2\big]^{H/2} +M \kappa_n^{-1/2} \end{align*} where the last line relies on Jensen's inequality. We conclude thanks to \citet[Theorem~2.4]{biau_2015}, which states that \[ \mathbb{E}\left[\| X_{\kappa_n:n}(X)-X \|^2\right] \leq \begin{cases} 8 (\kappa_n/n) & \mbox{if } k=1,\\ \tilde{c}_k (\kappa_n/n)^{2/k} & \mbox{if } k\geq 2.
\end{cases} \] \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:minimax} (lower bound)] The proof of a lower bound for the minimax risk in Wasserstein distance is adapted from the proof of Proposition~3 in \citet[Appendix~C]{PDNT_22} and we only outline the main steps. Consider the subclass of $\mathcal{D}(H,L,M)$ where $Y$ is a binary variable with possible values $0$ and $B$. Note that condition c) of Definition~\ref{def:class-D} is automatically satisfied if $B\leq 4M$. The conditional distribution of $Y$ given $X=x$ is characterized by \[ p(x)=\mathbb{P}(Y=B\mid X=x) \] and the Wasserstein distance by \[ \mathcal{W}_1(F_x,F_{x'})=B|p(x)-p(x')|, \] so that property b) of Definition~\ref{def:class-D} is equivalent to \begin{equation}\label{eq:regularity} B |p(x)-p(x')|\leq L \Vert x-x' \Vert^H. \end{equation} As in \citet[Lemma~1]{PDNT_22}, one can show that a general prediction with values in $\mathbb{R}$ can always be improved (in terms of Wasserstein error) into a binary prediction with values in $\{0,B\}$. Indeed, for a given prediction $\hat F_{n,x}$, the binary prediction \[ \tilde F_{n,x} =(1-\tilde p_n(x))\delta_0 + \tilde p_n(x) \delta_B \] with \[ \tilde p_n(x)=\frac{1}{B} \int_0^B \big(1-\hat F_{n,x}(z)\big)\mathrm{d}z \] always satisfies \[ \mathbb{E}[\mathcal{W}_1(\tilde F_{n,X},F_{X})]\leq \mathbb{E}[\mathcal{W}_1(\hat F_{n,X},F_{X})]. \] This simple remark implies that, when considering the minimax risk on the restriction of the class $\mathcal{D}(H,L,M)$ to binary distributions, we can focus on binary predictions. But for binary predictions, \[ \mathbb{E}[\mathcal{W}_1(\tilde F_{n,X},F_{X})]=B\,\mathbb{E}\big[|\tilde p_{n}(X)-p(X)|\big], \] showing that the minimax rate of convergence for distributional regression in Wasserstein distance is equal to the minimax rate of convergence for estimating the regression function $\mathbb{E}[Y|X=x]=B p(x)$ in absolute error under the regularity assumption~\eqref{eq:regularity}.
According to \cite{Stone1980, Stone1982}, a lower bound for the minimax risk in $L^1$-norm is $n^{-H/(2H+k)}$ (in the first paper, see the Bernoulli regression model referred to as Model~1, Example~5, and the $L^q$ distance with $q=1$). \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:minimax} (upper bound)] For the kernel method, Corollary~\ref{cor:kernel} states that the expected Wasserstein error is upper bounded by \[ Lh_n^H +M\sqrt{(2+1/n)c_k}(nh_n^k)^{-1/2}+Lk^{H/2}c_k(nh_n^k)^{-1}. \] Minimizing the sum of the first two terms on the right-hand side with respect to $h_n$ leads to $h_n\propto n^{-1/(2H+k)}$ and implies that the right-hand side is of order $n^{-H/(2H+k)}$ (the last term is negligible). This matches the minimax lower rate of convergence stated previously and proves that the optimal minimax risk is of order $n^{-H/(2H+k)}$. For the nearest neighbor method, minimizing the upper bound for the expected Wasserstein error from Corollary~\ref{cor:knn} leads to $$\kappa_n\propto \begin{cases} n^{H/(H+1)}& \mbox{if } k=1\\ n^{H/(H+k/2)}&\mbox{if } k\geq2 \end{cases},$$ with a corresponding risk of order $$ \begin{cases} n^{-H/(2H+2)}& \mbox{if } k=1\\ n^{-H/(2H+k)}&\mbox{if } k\geq2 \end{cases}, $$ whence the nearest neighbor method reaches the optimal rate when $k\geq2$. \end{proof} \subsection{Proof of Proposition~\ref{prop:appli}} \begin{proof}[Proof of Proposition~\ref{prop:appli}] The first point follows from the fact that composition with a continuous map preserves convergence in probability. Indeed, as the estimator $\hat F_{n,X}$ converges to $F_X$ in probability for the Wasserstein distance $\mathcal{W}_p$, $S(\hat F_{n,X})$ converges to $S(F_X)$ in probability. In order to prove the consistency in $\mathrm{L}^{p/q}$, it is enough to prove furthermore the uniform integrability of $|S(\hat F_{n,X})-S(F_{X})|^{p/q}$, $n\geq 1$.
Since $p/q\geq 1$, convexity of the power function $t\mapsto t^{p/q}$ together with Equation \eqref{eq:linear-bound} entails \begin{align*} |S(\hat F_{n,X})-S(F_{X})|^{p/q}&\leq 2^{p/q-1}\big(|S(\hat F_{n,X})|^{p/q}+|S(F_{X})|^{p/q}\big)\\ &\leq 2^{p/q-1}\Big((aM_p^q(\hat F_{n,X})+b)^{p/q}+(aM_p^q(F_{X})+b)^{p/q}\Big)\\ &\leq 2^{2(p/q-1)}\Big(a^{p/q} M_p^p(\hat F_{n,X})+a^{p/q} M_p^p(F_{X})+2b^{p/q}\Big). \end{align*} This upper bound together with Equation~\eqref{eq:ui} implies the uniform integrability of $|S(\hat F_{n,X})-S(F_{X})|^{p/q}$, $n\geq 1$, which concludes the proof. \end{proof} \section*{Acknowledgements} The authors acknowledge the support of the French Agence Nationale de la Recherche (ANR) under reference ANR-20-CE40-0025-01 (T-REX project). They are also grateful to Mehdi Dagdoug for suggesting the example of random forest weights (Example~\ref{example2}).
\section{Introduction} The {\it Layout Optimization Problem} (LOP) concerns the physical placement of instruments or pieces of equipment in a spacecraft or satellite. Because these objects have mass, the system is subject to additional constraints (beyond simple Cartesian packing) that affect our solution. The two main constraints that we handle in this paper are (1) the space occupied by a given collection of objects (envelopment), and (2) the non-equilibrium (i.e., imbalance) of the system. The rest of the paper is organized as follows: In Section 2 we present a detailed description of the problem and describe previous related research. In Section 3 we describe our algorithm, and in Section 4 we give the results of numerical experiments. \section{The Layout Optimization Problem in Satellites} The Layout Optimization Problem (LOP) was proposed by Feng {\it et al.} \cite{layout:thf} in 1999, and has significant implications for the cost and performance of devices such as satellites and spacecraft. It concerns the {\it two-dimensional} physical placement of a collection of {\it objects} (instruments or other pieces of equipment) within a spacecraft/satellite ``cabinet", or {\it container}. The LOP is demonstrably NP-hard \cite{layout:NPhard}. Early work on this problem \cite{layout:lining,layout:tangfei,layout:yuyang,layout:zhouchi} almost always modeled objects as circles in order to simplify the packing process. However, in real-world applications, objects are generally rectangular or polygonal in shape, and modeling them as circles leads to a costly waste of space. We have recently reported work on solving the rectangular case \cite{icnc07}, and here we report a new algorithm (based on a different approach) to solve the polygonal case. We now briefly introduce related work on the packing of irregular items.
Dowsland {\it et al.} use a so-called ``Bottom-Left" strategy to place polygonal items in a bin \cite{Dowsland02}, with items having fixed orientations. Poshyanonda {\it et al.} \cite{Poshyanonda02} combine genetic algorithms with artificial neural networks to obtain reasonable packing densities. In other related work, Birgin {\it et al.} study the packing of identical rectangles in an irregular container \cite{BirginCOR06,BirginORS06}. Burke and Kendall have applied simulated annealing to translational polygon packing (i.e., without rotations) \cite{Burke97}. Other authors have applied simulated annealing to solve the problem of rotational polygon packing in a continuous space \cite{heckmann1995saa,Jain92}. Given the additional constraint that imbalance of mass must be minimized, it is difficult to see how these existing methods may be directly applied to the current problem. In what follows we describe a new nonlinear optimization model for the LOP, and then show how it may be solved using simulated annealing. \subsection{Notation and Definitions} Here we describe the formal optimization model, by first explaining our notation for the representation of polygons. We then show how to quantify relations between polygons (such as distance and degree of overlap), which are central to the problem of assessing the overall quality of a layout. \subsubsection{Structure of a polygon} Suppose there are $k$ polygons $(1, 2, \dots, k)$ to be packed. The {\it structure} of a polygon includes both its {\it shape} and its {\it mass}. We use $str(i)$ to denote the initial structure of a polygon, $i$: \begin{equation} str(i) =(n_i,m_i,(r_1,r_2,..., r_{n_i}),(\theta _1, \theta _2, ..., \theta _{n_i}) ) \end{equation} where $m_i$ is the mass of polygon $i$, and $n_i$ is the number of vertices in the graph representation of polygon $i$. The positions of the $n_i$ vertices are defined by two lists of {\it polar} coordinates.
List $(r_1,r_2,..., r_{n_i})$ defines the {\it Euclidean distance} from each of the $n_i$ vertices to the polygon's centre of mass, and list $(\theta _1, \theta _2, ..., \theta _{n_i})$ defines the {\it orientation} of each of the $n_i$ vertices relative to the centre of mass. Figure~\ref{fig_structures} shows how to define the initial structure of a square with edge length 1; in Figure~\ref{fig_structures}(a), the shape's centre of mass is located at the shape's centre, whereas in Figure~\ref{fig_structures}(b), the centre of mass is located at one vertex. To simplify the notation, we take each polygon's centre of mass as its reference point. \begin{figure} \begin{centering} \includegraphics[scale=0.8]{structure} \caption{Illustration of the structure definition of polygons} \label{fig_structures} \end{centering} \end{figure} \subsubsection{Radius of a polygon} The radius of polygon $i$ is defined as the maximum of $(r_1,r_2,..., r_{n_i})$: \begin{equation} r(i)=\max \{r_1, r_2, ..., r_{n_i}\} \end{equation} The circle of radius $r(i)$ centred at the polygon's centre of mass is the smallest such circle that completely covers the polygon. \subsubsection{State of a polygon} We use Cartesian coordinates to record the positions of the polygons, and take the centre of the container (that is, the circle) as the origin. We use $sta(i)$ to denote the state of a polygon $i$: \begin{equation} sta(i) =(x_i,y_i,\alpha _i ) \end{equation} where $(x_i, y_i)$ is the position of the centre of mass, and $\alpha_i$ defines a rotation angle. Then with $str(i)$ and $sta(i)$, we can draw a polygon, $i$, as in Figure~\ref{fig_states}.
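To make the representation concrete, the following sketch (our illustration; the function names are ours, not the paper's) converts $str(i)$ and $sta(i)$ into Cartesian vertices:

```python
import math

def vertices(str_i, sta_i):
    """Cartesian vertices of a polygon from str(i) = (n, m, radii, thetas)
    and sta(i) = (x, y, alpha): each vertex lies at distance r_v from the
    centre of mass, at angle theta_v + alpha, translated by (x, y)."""
    n, m, radii, thetas = str_i
    x, y, alpha = sta_i
    return [(x + r * math.cos(t + alpha), y + r * math.sin(t + alpha))
            for r, t in zip(radii, thetas)]

def radius(str_i):
    """r(i) = max{r_1, ..., r_n}: radius of the minimal covering circle."""
    return max(str_i[2])

# The unit square of the figure, centre of mass at the shape centre.
square = (4, 1.0, (math.sqrt(2) / 2,) * 4,
          (math.pi / 4, 3 * math.pi / 4, 5 * math.pi / 4, 7 * math.pi / 4))
```

With the identity state $(0,0,0)$ this reproduces the vertices $(\pm 0.5, \pm 0.5)$ of the unit square.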
\begin{figure} \begin{centering} \includegraphics[scale=0.8]{state.eps} \caption{Illustration of the state of a polygon} \label{fig_states} \end{centering} \end{figure} \subsubsection{Distance between two polygons} The distance between two polygons is defined as the Euclidean distance between their centres of mass: \begin{equation} dis(i,j)=\sqrt{(x_i-x_j)^2+(y_i-y_j)^2} \end{equation} \subsubsection {Overlap between two polygons} If two polygons do not overlap, this measure is zero; if two polygons $i$ and $j$ overlap at all, we measure this as: \begin{equation} \label{equ:overlap} ove(i,j)= \max \{0, r(i)+r(j)-dis(i,j) \} \end{equation} \noindent This measurement of overlap has certain characteristics: \begin{itemize} \item In Equation (\ref{equ:overlap}), $r(i)$ and $r(j)$ are two constants, therefore $dis(i,j)$ can be directly obtained from the positions of the two polygons. \item It is clear that $ove(i,j) \geq 0$. To satisfy the {\it non-overlapping} constraint, we should minimize $ove(i,j)$ to zero. \item $ove(i,j)$ is not a continuous function of the positions. As shown in Figure~\ref{fig_overlaps}, when two squares with edge length 2 are adjacent on one side, their overlap is zero, but when the left square is moved a little to the right, the overlap ``jumps" to $2\sqrt2-2$. \end{itemize} Ascertaining overlap between two polygons is not a difficult problem in computational geometry or computer graphics. In this paper, we look at each edge of one polygon in turn; if it is intersected by any edge of the other polygon, then an overlap exists; otherwise, if one polygon is contained {\it within} the other, then clearly an overlap exists. Determining overlap thus has complexity $O(mn)$, where $m$ and $n$ are the numbers of edges of the two polygons.
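The quantities above can be sketched in a few lines (our illustration, not the authors' code); `edges_intersect` implements the $O(mn)$ edge scan described in the text, with the containment case omitted for brevity:

```python
import math

def dis(sta_i, sta_j):
    """Euclidean distance between the two centres of mass."""
    return math.hypot(sta_i[0] - sta_j[0], sta_i[1] - sta_j[1])

def ove(r_i, r_j, sta_i, sta_j):
    """max{0, r(i) + r(j) - dis(i, j)}, applied once the polygons intersect."""
    return max(0.0, r_i + r_j - dis(sta_i, sta_j))

def _orient(u, v, w):
    # Sign of the cross product (v - u) x (w - u).
    return (v[0] - u[0]) * (w[1] - u[1]) - (v[1] - u[1]) * (w[0] - u[0])

def edges_intersect(P, Q):
    """O(mn) scan: do any two edges of polygons P and Q properly cross?"""
    m, n = len(P), len(Q)
    for i in range(m):
        a, b = P[i], P[(i + 1) % m]
        for j in range(n):
            c, d = Q[j], Q[(j + 1) % n]
            if (_orient(a, b, c) * _orient(a, b, d) < 0 and
                    _orient(c, d, a) * _orient(c, d, b) < 0):
                return True
    return False
```

A full implementation would additionally test containment (e.g. a point-in-polygon query on one vertex), as noted in the text.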
\begin{figure} \begin{centering} \includegraphics[scale=1]{overlaps.eps} \caption{The overlap function is not continuous} \label{fig_overlaps} \end{centering} \end{figure} \subsubsection{State of a layout} A layout $X$ is defined as the combination of the states of $k$ polygons: \begin{equation} X=(x_1, y_1, \alpha _1, x_2, y_2, \alpha _2, ..., x_k, y_k, \alpha _k) \end{equation} \subsubsection{Radius of a layout} If $(X_x, X_y)$ denotes the position of the centre of mass of a layout $X$, then \begin{equation} X_x= \frac{\sum_{i=1}^{k}m_ix_i}{\sum_{i=1}^{k}m_i}, X_y= \frac{\sum_{i=1}^{k}m_iy_i}{\sum_{i=1}^{k}m_i}. \end{equation} We define the radius $r(X)$ of a layout $X$ as the longest Euclidean distance from its centre of mass to any of the vertices of the polygons. Because of the {\it imbalance} constraint, we place the centre of mass of the layout at the container centre, so $r(X)$ defines the minimum-sized container. \subsubsection{Overlap of a layout} The overlap of a layout is the sum of all overlaps between its polygons: \begin{equation} ove(X)=\sum\limits_{i=1}^{k}\sum\limits_{j=1,j\not =i}^{k} ove(i,j) \end{equation} \subsubsection{Problem definition} From the definitions above, we obtain an unconstrained optimization problem: \begin{equation} \mbox{minimize } f(X)=\lambda _1 ove(X)+\lambda _2 r(X) \label{eq_energy} \end{equation} where $\lambda_1,\lambda_2$ are two constants. Because the overlap function $ove(X)$ is not continuous, $f(X)$ is not continuous. In general, the overlaps are extremely deleterious, so $\lambda _1$ should be set large enough to prevent their introduction. However, we note that, because of this discontinuity of the $ove$ function, the computation does not introduce overlaps when attempting to decrease the radius of the layout at the final stage of optimization.
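The objective can be transcribed directly (a sketch in our own notation; note that, for brevity, it charges the covering-circle penalty to every pair, whereas the definition above counts an overlap only when the polygons actually intersect):

```python
import math

def layout_energy(masses, states, radii, all_vertices, lam1=100.0, lam2=100.0):
    """f(X) = lam1 * ove(X) + lam2 * r(X).
    states[i] = (x_i, y_i, alpha_i); radii[i] = r(i);
    all_vertices[i] lists the Cartesian vertices of polygon i."""
    # Centre of mass (X_x, X_y) of the layout.
    M = sum(masses)
    cx = sum(m * s[0] for m, s in zip(masses, states)) / M
    cy = sum(m * s[1] for m, s in zip(masses, states)) / M
    # r(X): longest distance from the centre of mass to any vertex.
    r_X = max(math.hypot(vx - cx, vy - cy)
              for verts in all_vertices for vx, vy in verts)
    # ove(X): double sum of the pairwise overlap measures.
    o_X = sum(max(0.0, radii[i] + radii[j]
                  - math.hypot(states[i][0] - states[j][0],
                               states[i][1] - states[j][1]))
              for i in range(len(states)) for j in range(len(states)) if i != j)
    return lam1 * o_X + lam2 * r_X
```

For two well-separated polygons the overlap term vanishes and the energy is simply $\lambda_2 r(X)$.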
\section{Simulated annealing algorithm} {\it Simulated annealing} (SA) is a probabilistic meta-heuristic that is well-suited to global optimization problems \cite{Kirkpatrick83}. {\it Annealing} refers to the process of heating then slowly cooling a material until it reaches a stable state. The heating enables the material to achieve {\it higher} internal energy states, while the slow cooling allows the material more opportunity to find an internal energy state {\it lower} than the initial state. SA models this process for the purposes of optimization. A point in the search space is regarded as a system state, and the objective function is regarded as the internal energy. Starting from an initial state, the system is perturbed at random, moving to a new state in the neighbourhood, and a change of energy $\Delta E$ takes place. If $\Delta E<0$, the new state is accepted (a {\it downhill} move); otherwise the new state is accepted with a probability $\exp (\frac{-\Delta E}{K_b T})$ (an {\it uphill} move), where $T$ is the temperature at that time and $K_b$ is Boltzmann's constant. When the system reaches equilibrium, $T$ is decreased. When the temperature approaches zero, the probability of an ``uphill" move becomes very small, and SA terminates.
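The acceptance rule just described is the standard Metropolis criterion; a minimal sketch (function name ours, with Boltzmann's constant absorbed into the temperature, as in Algorithm 1 below):

```python
import math
import random

def metropolis_accept(E_old, E_new, T, rng=random.random):
    """Always accept downhill moves; accept an uphill move of size dE
    with probability exp(-dE / T)."""
    dE = E_new - E_old
    return dE < 0 or rng() < math.exp(-dE / T)
```

At low temperature uphill moves are almost never accepted, which is what drives the eventual freezing of the search.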
Let $t_0$ denote the initial temperature, $imax$ denote the maximum number of iterations, $E(x)$ denote the energy function, and $emax$ denote the ``stopping" energy. The general pseudo-code for simulated annealing may be written as follows: \begin{tabbing} =====\===\===\===\===\===\ \kill \> {\bf Algorithm 1}: Standard SA\\ \\ \> set the initial state $x$ and initial temperature $t=t_0$, let $i=0$\\ \> {\bf while} $i<imax$ {\bf and} $E(x)> emax$ \\ \> \> \> perturb $x$ in its neighbourhood and get $x'$ \\ \> \> \> {\bf if} $E(x')<E(x)$ {\bf then} $x=x'$ \\ \> \> \> {\bf else} $x=x'$ with probability $\exp (\frac{E(x)-E({x}')}{t})$ \\ \> \> \> decrease the temperature \\ \> \> \> $i=i+1$\\ \> \bf{return} $x$\\ \end{tabbing} The performance of SA may be affected by several parameters: the initial temperature $t_0$, the maximum number of iterations, $imax$, the ``stopping" energy, $emax$, the structure of the neighbourhood, and the schedule of cooling. For a given problem, the values of these parameters should be carefully selected. \subsection{SA for polygon packing} \subsubsection{Neighbourhood structure} In each iteration of SA, we perturb one polygon, thus obtaining a new layout, and then decide whether to accept or reject the new layout by means of an evaluation. In the $i$th iteration, the $(i~mod~k)$th polygon will be perturbed. Given an initial radius $R_0$, which is large enough to contain the polygons, the neighbourhood for polygon $j$ is defined as: \begin{equation} x_j,y_j \in (\frac{imax}{i-2\times imax}+1.05)\times R_0\times random(-1,1) \label{eq_xyneighbor} \end{equation} \begin{equation} \alpha _j \in(\frac{imax}{i-2\times imax}+1.05) \times \pi \times random(-1,1) \label{eq_aneighbor} \end{equation} Equations (\ref{eq_xyneighbor}) and (\ref{eq_aneighbor}) show that, at the beginning of the algorithm's execution, the position of a polygon may vary by ($-0.55 R_0, 0.55 R_0)$, and its orientation may be perturbed by ($-0.55 \pi, 0.55 \pi$).
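Reading Equations (\ref{eq_xyneighbor}) and (\ref{eq_aneighbor}) as an additive perturbation of the current state (the text says the position ``may vary by" the stated window; the function name and calling convention are ours), the move generator can be sketched as:

```python
import math
import random

def perturb(states, j, i, imax, R0, rng=random.uniform):
    """New state for polygon j at iteration i.  The window scale
    imax/(i - 2*imax) + 1.05 falls from 0.55 at i = 0 to 0.05 at i = imax."""
    scale = imax / (i - 2 * imax) + 1.05
    x, y, alpha = states[j]
    new = list(states)
    new[j] = (x + scale * R0 * rng(-1.0, 1.0),
              y + scale * R0 * rng(-1.0, 1.0),
              alpha + scale * math.pi * rng(-1.0, 1.0))
    return new
```

With an extreme draw of $+1$, the displacement is $0.55 R_0$ at the first iteration and only $0.05 R_0$ at the last.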
This neighbourhood is large, and the polygon can thus ``explore" more space. As the algorithm proceeds, the neighbourhood becomes progressively smaller. At the end of the algorithm's execution, the neighbourhood shrinks to 0.05 times its original size, and SA then selects the best solution within this small neighbourhood. \subsubsection{Decreasing the temperature} We use a simple rule to decrease the temperature: every $cmax$ iterations, we let $t=d \times t$, where $d < 1$. \subsubsection{Description of the algorithm} The detailed SA algorithm for polygon packing is therefore described as follows: \begin{tabbing} =====\===\===\===\===\===\ \kill \> {\bf Algorithm 2}: SA for the packing problem\\ \\ \> randomly generate an initial layout $X$. Let $t=t_0$, $i=0$\\ \> {\bf while} $i<imax$ {\bf and} $f(X)> emax$ \\ \> \> \> let $j = (i~mod~k)$\\ \> \> \> randomly select $x_j, y_j, \alpha _j$ by (\ref{eq_xyneighbor}) and (\ref{eq_aneighbor}) and get new $X'$ \\ \> \> \> {\bf if} $f(X')<f(X)$ {\bf then} $X=X'$ \\ \> \> \> {\bf else} $X=X'$ with probability $\exp (\frac{f(X)-f({X}')}{t})$ \\ \> \> \> {\bf if} $i~mod~cmax = cmax-1$ {\bf then}~$t=d \times t$ \\ \> \> \> $i=i+1$\\ \> \bf{return} $X$\\ \end{tabbing} \section{Numerical Results} We are not aware of any standard library of benchmark instances for this {\it particular} problem, although such libraries do exist for other related problems \cite{fekete2007pil}. We therefore take a two-stage approach to testing our algorithm; we first design six instances with {\it known} optima, against which we may initially validate our method. We note that these instances include both convex and nonconvex polygons. After establishing the effectiveness of our SA algorithm, we then test our algorithm against other recently-described methods for {\it rectangle} packing (rectangles, of course, being members of the polygon class), using both existing instances from the literature and new, larger instances.
\subsection{Known Optima} The instances with known optima are described in Table \ref{table_optimal_instances}, with graphical representations given in Figure \ref{fig_optimal_instances}. \begin{table} \begin{centering} \caption{Six instances with known optima} \begin{tabular}{@{}c c c l@{}} \hline Instance & $k$ & $R_0$ & Structure \\ \hline 1 & 5 & 2.3 & $str(i)=(4, 30, (\frac{\sqrt{10}}{2}, \frac{\sqrt{10}}{2}, \frac{\sqrt{10}}{2}, \frac{\sqrt{10}}{2})$, \\ &&& $(atan(\frac{1}{3}), \pi-atan(\frac{1}{3}), \pi+atan(\frac{1}{3}), 2\pi-atan(\frac{1}{3})), i=1,2$\\ &&& $str(i)=(4, 10, (\frac{\sqrt2}{2}, \frac{\sqrt2}{2}, \frac{\sqrt2}{2},\frac{\sqrt2}{2}), (\frac{\pi}{4}, \frac{3}{4}\pi, \frac{5}{4}\pi, \frac{7}{4}\pi), i=3,4,5$\\ \hline 2 & 5 & 2.8 & $str(i)=(5, 100, (2, 2\sqrt2-2, 2, \sqrt2, \sqrt2),$ \\ &&& $(0, \frac{1}{4}\pi, \frac{1}{2}\pi, \frac{5}{4}\pi, \frac{7}{4}\pi)), i=1,2,3,4$, \\ &&& $str(i)=(4,100,(\sqrt2, \sqrt2, \sqrt2, \sqrt2),(0, \frac{1}{4}\pi, \frac{1}{2}\pi, \frac{3}{2}\pi)), i=5$ \\ \hline 3 & 6 & 3.4 & $str(i)=(3, 100, (2, 2, 2), (0, \frac{2}{3}\pi, \frac{4}{3}\pi ), i=1,2,3,4,5,6$\\ \hline 4 & 12 &5.0 & $str(i)=(3,10,(1,1,1),(0, \pi, \frac{3}{2}\pi )),i=1,2,3,4$,\\ &&& $str(i)=(4,20,(2, 2, \sqrt2, \sqrt2),(0, \pi, \frac{5}{4}\pi, \frac{7}{4}\pi)),i=5,6,7,8$,\\ &&& $str(i)=(4,20,(2, 2, \sqrt2, \sqrt2),(0, \pi, \frac{5}{4}\pi, \frac{7}{4}\pi),i=9,10,11,12$ \\ \hline 5 & 3 & 8.0 & $str(i)=(4,40,(2\sqrt2, 2\sqrt2, \sqrt2, 2\sqrt2),(\frac{1}{4}\pi, \frac{3}{4}\pi ,\frac{5}{4}\pi ,\frac{7}{4}\pi)), i=1$,\\ &&& $str(i)=(8, 60, (2\sqrt2, 2\sqrt5,2\sqrt5,2\sqrt5,2\sqrt5,2\sqrt2,2,2), (\frac{1}{4}\pi, atan(2),$ \\ &&& $\pi-atan(2), \pi+atan(2), -atan(2), -\frac{1}{4}\pi, -\frac{1}{2}\pi,\frac{1}{2}\pi)),i=2,3$\\ \hline 6 & 5 & 5.0 & $str(i)=(4,60,(\sqrt2,\sqrt2,\sqrt2,\sqrt2),(\frac{1}{4}\pi,\frac{3}{4}\pi,\frac{5}{4}\pi,\frac{7}{4}\pi)),i=1,2,3,4$\\ &&& $str(i)=(12,500,(\sqrt{10},\sqrt{10},\sqrt2,\sqrt{10},\sqrt{10},\sqrt2,$\\ &&&
$\sqrt{10},\sqrt{10},\sqrt2,\sqrt{10},\sqrt{10},\sqrt2),(-atan(\frac{1}{3}),atan(\frac{1}{3}),$\\ &&&$\frac{1}{4}\pi, \frac{1}{2}\pi-atan(\frac{1}{3}), \frac{1}{2}\pi+atan(\frac{1}{3}), \frac{3}{4}\pi,\pi-atan(\frac{1}{3}),\pi+atan(\frac{1}{3}),$\\ &&& $\frac{5}{4}\pi,\frac{3}{2}\pi-atan(\frac{1}{3}),\frac{3}{2}\pi+atan(\frac{1}{3}),\frac{7}{4}\pi)),i=5$\\ \hline \end{tabular} \label{table_optimal_instances} \end{centering} \end{table} \begin{figure}[h] \begin{centering} \includegraphics[scale=0.80]{allinstances.eps} \caption{Instances with known optima.} \label{fig_optimal_instances} \end{centering} \end{figure} For each instance, we use SA to try to find the optimal layout. The value of $imax$ is set to $20000\times k$, the value of $cmax$ is set to $100 \times k$, and the initial temperature is set to 100. The constants $\lambda_1$ and $\lambda_2$ are set to 100 (to induce a large $f(x)$ and adjust the probability of uphill movement). In each case, the algorithm is executed 40 times. Values for the best radius found, $r_{best}$, mean radius, $\bar r$, and variance $v$ are presented in Table \ref{table_results}. Representations of the best results obtained are given in Figure \ref{fig_layouts}. From Table \ref{table_results} and Figure \ref{fig_layouts}, we observe that the SA algorithm can find layouts that are very close to the optimal configuration for instances 1, 2, and 3, where the number of polygons is relatively small and the overall structures are simple. The optimal radius for instance 1 was originally calculated as $\frac{3}{2} \sqrt{2}=2.121$, but our results yielded a smaller value. This prompted a re-estimation, giving a new optimum of $\sqrt{4\frac{9}{64}}$. In the first three instances, the relative errors $\frac{r_{best}-r_{optimum}}{r_{optimum}}$ are about 2\%. The algorithm performs less well on instance 4, with 12 polygons in a relatively complex configuration.
In this case, our algorithm cannot find the best configuration, and the error to the optimum is 15\%. Instances 5 and 6 feature nonconvex polygons. Because of the complexity of the shapes, the algorithm is unable to solve these instances to optimality. The errors to the optimal radius are 11\% and 4\% for instances 5 and 6 respectively. \begin{table}[h] \begin{centering} \caption{Numerical results for SA on six instances with known optima} \begin{tabular}{c c c c c} \hline Instance & Est. $r_{optimum}$ & $r_{best}$ & $\overline{r}$ & $v$ \\ \hline 1 & $\sqrt{4\frac{9}{64}}=2.034$ & 2.080 & 2.167 & 0.027\\ 2 & $2\sqrt2=2.828$ & 2.861 & 3.209 & 0.010\\ 3 & $2\sqrt3=3.464$ & 3.522 & 4.065 & 0.157\\ 4 & $3\sqrt2=4.242$ & 4.887 & 5.149 & 0.024\\ 5 & $4\sqrt2=5.656$ & 6.295 & 9.266 & 59.25\\ 6 & $3\sqrt2=4.242$ & 4.423 & 7.034 & 119.44\\ \hline \end{tabular} \label{table_results} \end{centering} \end{table} \begin{figure}[h] \begin{centering} \includegraphics[scale=0.8]{allgraphs.eps} \caption{Best results obtained for instances with known optima} \label{fig_layouts} \end{centering} \end{figure} \subsection{Rectangular Instances} We now test our algorithm on four instances of the LOP containing only rectangular shapes. The first three instances were first described in \cite{zhai1999}, and the fourth in \cite{icnc07}, where all four instances were used to benchmark three different approaches: a genetic algorithm (GA), particle swarm optimisation (PSO) and a hybrid compaction algorithm followed by particle swarm local search (CA-PSLS). Each instance is depicted in Figure~\ref{fig_instances_literature}, and full descriptions are given in Table~\ref{table_literature_instances}. Since both particle-based algorithms out-performed the GA, we do not consider this last method here.
\begin{table} \begin{centering} \caption{Four instances from the literature} {\tiny \begin{tabular}{@{}c c c l@{}} \hline Instance & $k$ & $R_0$ & Structure \\ \hline 1& 5 & 20& $str(1)=(4,12,(\sqrt{25},\sqrt{25},\sqrt{25},\sqrt{25}),(atan(\frac{3}{4})),\pi-atan(\frac{3}{4}),\pi+atan(\frac{3}{4}), 2\pi-atan(\frac{3}{4})))$\\ &&& $str(2)=(4,16,(\sqrt{32},\sqrt{32},\sqrt{32},\sqrt{32}),(atan(\frac{4}{4})),\pi-atan(\frac{4}{4}),\pi+atan(\frac{4}{4}), 2\pi-atan(\frac{4}{4})))$\\ &&& $str(3)=(4,15,(\sqrt{34},\sqrt{34},\sqrt{34},\sqrt{34}),(atan(\frac{3}{5})),\pi-atan(\frac{3}{5}),\pi+atan(\frac{3}{5}), 2\pi-atan(\frac{3}{5})))$\\ &&& $str(4)=(4,12,(\sqrt{40},\sqrt{40},\sqrt{40},\sqrt{40}),(atan(\frac{2}{6})),\pi-atan(\frac{2}{6}),\pi+atan(\frac{2}{6}), 2\pi-atan(\frac{2}{6})))$\\ &&& $str(5)=(4,9,(\sqrt{18},\sqrt{18},\sqrt{18},\sqrt{18}),(atan(\frac{3}{3})),\pi-atan(\frac{3}{3}),\pi+atan(\frac{3}{3}), 2\pi-atan(\frac{3}{3})))$\\ \hline 2& 6 & 40& $str(1)=(4,12,(\sqrt{25},\sqrt{25},\sqrt{25},\sqrt{25}),(atan(\frac{3}{4})),\pi-atan(\frac{3}{4}),\pi+atan(\frac{3}{4}), 2\pi-atan(\frac{3}{4})))$\\ &&& $str(2)=(4,16,(\sqrt{32},\sqrt{32},\sqrt{32},\sqrt{32}),(atan(\frac{4}{4})),\pi-atan(\frac{4}{4}),\pi+atan(\frac{4}{4}), 2\pi-atan(\frac{4}{4})))$\\ &&& $str(3)=(4,15,(\sqrt{34},\sqrt{34},\sqrt{34},\sqrt{34}),(atan(\frac{3}{5})),\pi-atan(\frac{3}{5}),\pi+atan(\frac{3}{5}), 2\pi-atan(\frac{3}{5})))$\\ &&& $str(4)=(4,20,(\sqrt{41},\sqrt{41},\sqrt{41},\sqrt{41}),(atan(\frac{4}{5})),\pi-atan(\frac{4}{5}),\pi+atan(\frac{4}{5}), 2\pi-atan(\frac{4}{5})))$\\ &&& $str(5)=(4,25,(\sqrt{50},\sqrt{50},\sqrt{50},\sqrt{50}),(atan(\frac{5}{5})),\pi-atan(\frac{5}{5}),\pi+atan(\frac{5}{5}), 2\pi-atan(\frac{5}{5})))$\\ &&& $str(6)=(4,18,(\sqrt{45},\sqrt{45},\sqrt{45},\sqrt{45}),(atan(\frac{3}{6})),\pi-atan(\frac{3}{6}),\pi+atan(\frac{3}{6}), 2\pi-atan(\frac{3}{6})))$\\ \hline 3 & 9 & 40& 
$str(1)=(4,12,(\sqrt{25},\sqrt{25},\sqrt{25},\sqrt{25}),(atan(\frac{3}{4})),\pi-atan(\frac{3}{4}),\pi+atan(\frac{3}{4}), 2\pi-atan(\frac{3}{4})))$\\ &&& $str(2)=(4,16,(\sqrt{32},\sqrt{32},\sqrt{32},\sqrt{32}),(atan(\frac{4}{4})),\pi-atan(\frac{4}{4}),\pi+atan(\frac{4}{4}), 2\pi-atan(\frac{4}{4})))$\\ &&& $str(3)=(4,15,(\sqrt{34},\sqrt{34},\sqrt{34},\sqrt{34}),(atan(\frac{3}{5})),\pi-atan(\frac{3}{5}),\pi+atan(\frac{3}{5}), 2\pi-atan(\frac{3}{5})))$\\ &&& $str(4)=(4,20,(\sqrt{41},\sqrt{41},\sqrt{41},\sqrt{41}),(atan(\frac{4}{5})),\pi-atan(\frac{4}{5}),\pi+atan(\frac{4}{5}), 2\pi-atan(\frac{4}{5})))$\\ &&& $str(5)=(4,25,(\sqrt{50},\sqrt{50},\sqrt{50},\sqrt{50}),(atan(\frac{5}{5})),\pi-atan(\frac{5}{5}),\pi+atan(\frac{5}{5}), 2\pi-atan(\frac{5}{5})))$\\ &&& $str(6)=(4,12,(\sqrt{40},\sqrt{40},\sqrt{40},\sqrt{40}),(atan(\frac{2}{6})),\pi-atan(\frac{2}{6}),\pi+atan(\frac{2}{6}), 2\pi-atan(\frac{2}{6})))$\\ &&& $str(7)=(4,18,(\sqrt{45},\sqrt{45},\sqrt{45},\sqrt{45}),(atan(\frac{3}{6})),\pi-atan(\frac{3}{6}),\pi+atan(\frac{3}{6}), 2\pi-atan(\frac{3}{6})))$\\ &&& $str(8)=(4,24,(\sqrt{52},\sqrt{52},\sqrt{52},\sqrt{52}),(atan(\frac{4}{6})),\pi-atan(\frac{4}{6}),\pi+atan(\frac{4}{6}), 2\pi-atan(\frac{4}{6})))$\\ &&& $str(9)=(4,30,(\sqrt{61},\sqrt{61},\sqrt{61},\sqrt{61}),(atan(\frac{5}{6})),\pi-atan(\frac{5}{6}),\pi+atan(\frac{5}{6}), 2\pi-atan(\frac{5}{6})))$\\ \hline 4 & 20 & 100& $str(1)=(4,10,(\sqrt{22.25},\sqrt{22.25},\sqrt{22.25},\sqrt{22.25}),(atan(\frac{2.5}{4})),\pi-atan(\frac{2.5}{4}),\pi+atan(\frac{2.5}{4}), $\\ &&& $2\pi-atan(\frac{2.5}{4})))$\\ &&& $str(2)=(4,8,(\sqrt{20},\sqrt{20},\sqrt{20},\sqrt{20}),(atan(\frac{4}{2})),\pi-atan(\frac{4}{2}),\pi+atan(\frac{4}{2}), 2\pi-atan(\frac{4}{2})))$\\ &&& $str(3)=(4,15,(\sqrt{34},\sqrt{34},\sqrt{34},\sqrt{34}),(atan(\frac{3}{5})),\pi-atan(\frac{3}{5}),\pi+atan(\frac{3}{5}), 2\pi-atan(\frac{3}{5})))$\\ &&& $str(4)=(4,14,(\sqrt{28.25},\sqrt{28.25},\sqrt{28.25},\sqrt{28.25}),(atan(\frac{4}{3.5})), 
\pi-atan(\frac{4}{3.5}),\pi+atan(\frac{4}{3.5}), $\\ &&& $2\pi-atan(\frac{4}{3.5})))$\\ &&& $str(5)=(4,7.50,(\sqrt{27.25},\sqrt{27.25},\sqrt{27.25},\sqrt{27.25}),(atan(\frac{1.5}{5})),\pi-atan(\frac{1.5}{5}),\pi+atan(\frac{1.5}{5}),$\\ &&& $ 2\pi-atan(\frac{1.5}{5})))$\\ &&& $str(6)=(4,18,(\sqrt{45},\sqrt{45},\sqrt{45},\sqrt{45}),(atan(\frac{3}{6})),\pi-atan(\frac{3}{6}),\pi+atan(\frac{3}{6}), 2\pi-atan(\frac{3}{6})))$\\ &&& $str(7)=(4,12,(\sqrt{40},\sqrt{40},\sqrt{40},\sqrt{40}),(atan(\frac{2}{6})),\pi-atan(\frac{2}{6}),\pi+atan(\frac{2}{6}), 2\pi-atan(\frac{2}{6})))$\\ &&& $str(8)=(4,18,(\sqrt{45},\sqrt{45},\sqrt{45},\sqrt{45}),(atan(\frac{3}{6})),\pi-atan(\frac{3}{6}),\pi+atan(\frac{3}{6}), 2\pi-atan(\frac{3}{6})))$\\ &&& $str(9)=(4,20,(\sqrt{41},\sqrt{41},\sqrt{41},\sqrt{41}),(atan(\frac{5}{4})),\pi-atan(\frac{5}{4}),\pi+atan(\frac{5}{4}), 2\pi-atan(\frac{5}{4})))$\\ &&& $str(10)=(4,5.25,(\sqrt{14.50},\sqrt{14.50},\sqrt{14.50},\sqrt{14.50}),(atan(\frac{1.5}{3.5})),\pi-atan(\frac{1.5}{3.5}),\pi+atan(\frac{1.5}{3.5}),$\\ &&& $ 2\pi-atan(\frac{1.5}{3.5})))$\\ &&& $str(11)=(4,12,(\sqrt{25},\sqrt{25},\sqrt{25},\sqrt{25}),(atan(\frac{3}{4})),\pi-atan(\frac{3}{4}),\pi+atan(\frac{3}{4}), 2\pi-atan(\frac{3}{4})))$\\ &&& $str(12)=(4,6,(\sqrt{18.25},\sqrt{18.25},\sqrt{18.25},\sqrt{18.25}),(atan(\frac{1.5}{4})), \pi-atan(\frac{1.5}{4}),\pi+atan(\frac{1.5}{4}),$\\ &&& $ 2\pi-atan(\frac{1.5}{4})))$\\ &&& $str(13)=(4,15,(\sqrt{34},\sqrt{34},\sqrt{34},\sqrt{34}),(atan(\frac{3}{5})),\pi-atan(\frac{3}{5}),\pi+atan(\frac{3}{5}), 2\pi-atan(\frac{3}{5})))$\\ &&& $str(14)=(4,20,(\sqrt{41},\sqrt{41},\sqrt{41},\sqrt{41}),(atan(\frac{4}{5})),\pi-atan(\frac{4}{5}),\pi+atan(\frac{4}{5}), 2\pi-atan(\frac{4}{5})))$\\ &&& $str(15)=(4,17.50,(\sqrt{37.25},\sqrt{37.25},\sqrt{37.25},\sqrt{37.25}),(atan(\frac{3.5}{5})), \pi-atan(\frac{3.5}{5}),\pi+atan(\frac{3.5}{5}),$\\ &&& $ 2\pi-atan(\frac{3.5}{5})))$\\ &&& 
$str(16)=(4,15,(\sqrt{42.25},\sqrt{42.25},\sqrt{42.25},\sqrt{42.25}),(atan(\frac{2.5}{6})),\pi-atan(\frac{2.5}{6}),\pi+atan(\frac{2.5}{6}),$\\ &&& $ 2\pi-atan(\frac{2.5}{6})))$\\ &&& $str(17)=(4,12,(\sqrt{40},\sqrt{40},\sqrt{40},\sqrt{40}),(atan(\frac{2}{6})),\pi-atan(\frac{2}{6}),\pi+atan(\frac{2}{6}), 2\pi-atan(\frac{2}{6})))$\\ &&& $str(18)=(4,20,(\sqrt{41},\sqrt{41},\sqrt{41},\sqrt{41}),(atan(\frac{4}{5})),\pi-atan(\frac{4}{5}),\pi+atan(\frac{4}{5}), 2\pi-atan(\frac{4}{5})))$\\ &&& $str(19)=(4,30,(\sqrt{61},\sqrt{61},\sqrt{61},\sqrt{61}),(atan(\frac{5}{6})),\pi-atan(\frac{5}{6}),\pi+atan(\frac{5}{6}), 2\pi-atan(\frac{5}{6})))$\\ &&& $str(20)=(4,9,(\sqrt{18},\sqrt{18},\sqrt{18},\sqrt{18}),(atan(\frac{3}{3})),\pi-atan(\frac{3}{3}),\pi+atan(\frac{3}{3}), 2\pi-atan(\frac{3}{3})))$\\ \hline \end{tabular} \label{table_literature_instances} } \end{centering} \end{table} \begin{figure}[h] \begin{centering} \includegraphics[scale=0.9]{fourrectangles.eps} \caption{Four instances of the LOP using rectangles} \label{fig_instances_literature} \end{centering} \end{figure} We run each algorithm 50 times on each instance, recording the best radius found, $r_{best}$, average radius, $\overline{r}$, standard deviation of radii, $r_{\sigma}$ and average run time in seconds, $\overline{t}$. Each algorithm is coded in C, compiled with g++ 4.1.0 , and run under SUSE Linux 10.1 (kernel version 2.6.16.54-0.2.5-smp) on a computer with dual Intel Harpertown E5462 2.80GHz processors, 4GB of RAM and an 80GB hard drive. The three algorithms (SA, CA-PSLS and PSO) each run over a number of iterations, which is dictated by the value of the constant CYCLE. In this set of experiments, we set CYCLE=3000 for each algorithm. The SA parameter values for $cmax$, initial temperature, $\lambda_ 1$, and $\lambda_ 2$ are set as before, and the $imax$ values set as to 100000, 120000, 108000 and 100000 for instances 1-4 respectively. The results obtained are depicted in Table ~\ref{table_results_new}. 
\begin{table} \begin{centering} \caption{Results for SA, CA-PSLS and PSO on four rectangular instances from the literature (CYCLE=3000)} \begin{tabular}{c c c c c c c} \hline Instance & Size & Algorithm & $r_{best}$ & $\overline{r}$& $r_{\sigma}$ & $\overline{t}$ (s)\\ \hline 1 & 5 & SA & 12.776 & 13.693 & 0.62 & 0.82 \\ & & CA-PSLS & 10.942 & 11.704 & 0.49 & 4.31 \\ & & PSO & 11.046 & 11.716 & 0.49 & 4.89 \\ \hline 2 & 6 & SA & 16.004 & 17.377 & 0.69 & 1.66 \\ & & CA-PSLS & 14.686 & 15.590 & 0.67 & 7.76 \\ & & PSO & 14.320 & 15.349 & 0.56 & 8.28 \\ \hline 3 & 9 & SA & 20.849 & 22.328 & 0.87 & 6.84 \\ & & CA-PSLS & 18.157 & 19.797 & 1.03 & 21.07 \\ & & PSO & 18.579 & 19.205 & 0.49 & 22.84 \\ \hline 4 & 20 & SA & 29.969 & 31.680 & 0.98 & 92.01 \\ & & CA-PSLS & 27.927 & 33.129 & 5.48 & 125.52\\ & & PSO & 32.596 & 34.426 & 2.23 & 138.00\\ \\ \hline \end{tabular} \label{table_results_new} \end{centering} \end{table} On the first three (small) instances, both particle-based algorithms slightly out-perform the SA method in terms of solution {\it quality}; on average, by 10\%. However, this comes at a significant cost disadvantage in terms of run time; over the first three instances, the particle-based methods require four times the execution time of the SA algorithm to terminate. When the problem size is increased to 20, the benefits of the SA algorithm begin to become apparent, as it out-performs the other two algorithms in terms of both solution quality {\it and} run time. In order to establish the significance of this, we now test all three methods on much larger instances. \subsection{Large Rectangular Instances} We designed instances with 40, 60, 80 and 100 rectangles. Space precludes a detailed description of these, but the full problem set is available from the corresponding author. As before, each method was run 50 times on each instance. Because of the computational cost incurred, we reduced CYCLE to 1000 for each algorithm. 
The results are depicted in Table ~\ref{table_results_big}, with an example solution for the 40 rectangle instance depicted in Figure ~\ref{fig:40}. \begin{table}[h] \begin{centering} \caption{Results for SA, CA-PSLS and PSO on large instances (CYCLE=1000)} \begin{tabular}{c c c c c c c} \hline Instance & Size & Algorithm & $r_{best}$ & $\overline{r}$& $r_{\sigma}$ & $\overline{t}$ (s)\\ \hline 1 & 40 & SA & 164.061 & 174.586 & 4.81 & 263.24 \\ & & CA-PSLS & 179.508 & 253.627 & 60.42 & 219.84 \\ & & PSO & 242.471 & 276.939 & 24.44 & 197.03 \\ \hline 2 & 60 & SA & 170.284 & 187.312 & 6.32 & 905.75 \\ & & CA-PSLS & 184.984 & 288.642 & 124.26 & 579.31 \\ & & PSO & 272.282 & 317.739 & 26.17 & 451.72 \\ \hline 3 & 80 & SA & 265.654 & 281.087 & 8.54 & 2178.31 \\ & & CA-PSLS & 298.524 & 544.421 & 162.81 & 1016.70 \\ & & PSO & 432.347 & 490.862 & 35.75 & 813.59 \\ \hline 4 & 100 & SA & 406.991 & 423.087 & 7.87 & 4260.54 \\ & & CA-PSLS & 658.352 & 880.537 & 108.65 & 1611.52 \\ & & PSO & 598.265 & 688.785 & 48.06 & 1277.43 \\ \\ \hline \end{tabular} \label{table_results_big} \end{centering} \end{table} \begin{figure}[h] \begin{centering} \includegraphics[scale=1.1]{40.eps} \caption{Best 40 rectangle solution generated by SA ($r=134.07$)} \label{fig:40} \end{centering} \end{figure} The SA method significantly out-performs the other two methods in terms of solution quality, but with an associated cost in terms of run time. However, as shown by the figures for standard deviation, SA offers a consistently high-quality solution method (at a price), whereas the other two algorithms offer solutions of more variable quality, but more quickly. \section{Conclusions} In this paper we describe a novel algorithm based on simulated annealing for the problem of packing weighted polygons inside a circular container. As well as being of significant theoretical interest, this problem has real significance in domains such as satellite design in the aerospace industry. 
Our algorithm consistently generates high-quality solutions that offer a significant improvement over those generated by other methods. However, this superiority comes with an associated computational overhead, so the choice of method should largely be driven by the anticipated application. Future work will involve improving the method's performance on problems containing nonconvex polygons, as well as its extension into three dimensions. \section*{Acknowledgements} This work was partially supported by the Dalton Research Institute, Manchester Metropolitan University.
\section{Introduction} The quantum effects of 2 dimensional (2d) gravitational theories have recently been measured numerically in computer simulations with high statistics. In particular, the data for the entropy exponent (string susceptibility) in 2d quantum gravity (QG) agree with the known exact result within a relative precision of $O(10^{-3})$. This is due to the development of simulation techniques in the dynamical triangulation\cite{ADF,D,KKM} and the discovery of new observables in QG such as the MINBU distribution \cite{JM,AJT,Th}. The data analysis is done by a rather orthodox approach, i.e., the semiclassical approximation. It has recently been applied to 2d $R^2$-gravity, and the simulation data of $<\intx\sqg R^2>$\ and its cross-over phenomenon have been successfully explained\cite{ITY}. We list the merits of this approach. \begin{enumerate} \item The semiclassical treatment is, at present, the unique field-theoretical approach which can analyse the mysterious region $(25\geq )c_m\geq 1$. Conformal field theory gives a meaningful result only for some limited regions of $c_m$, and the matrix model is in a similar situation. \item Comparison with the ordinary quantization is transparent, because ordinary renormalizable field theories, such as QED and QCD, are quantized essentially in the semiclassical way. In particular, the renormalization properties of (2d) QG are expected to be clarified in the semiclassical approach\cite{S1}. \item This approach can be used for higher-dimensional QG, such as 3d and 4d QG. \end{enumerate} The approach is perturbative; therefore choosing the most appropriate vacuum under the global constraints (such as the area constraint and the topology constraint) is crucial for a proper evaluation. We explain this in Sect.3. We add an $R^2$-term to the ordinary 2d gravity for the following reasons. ( We call the ordinary 2d gravity {\it Liouville gravity} in contrast with {\it $R^2$-gravity} for the added one.
) \begin{enumerate} \item For a positive coupling, the term plays the role of suppressing high curvature, making the surface smooth. For a negative one, high curvature is energetically favoured, making the surface rough. Therefore we can expect a richer phase structure of the surface configuration. \item The term is higher-derivative ($\pl^4$), and therefore it regularizes the ultra-violet behaviour well\cite{KPZ}. In fact the theory is renormalizable\cite{S1}. \item The Einstein term ($R$-term) is topological in 2 dimensions. It does not have a local mode. The simplest interaction which is purely geometrical and has local modes is the $R^2$-term. \item In lattice gravity, the $R^2$-term is considered as one of the natural irrelevant terms in the continuum limit\cite{BK}. \end{enumerate} The $<\intx\sqg R^2>$\ simulation data for $R^2$-gravity were presented in \cite{TY} and the cross-over phenomenon was clearly found. We present here MINBU distribution data. \section{Lattice Simulation of 2D Quantum R$^2$-Gravity and MINBU Distribution} The distribution of baby universes (BU) is one of the important observables in lattice gravity\cite{JM,AJT,Th}. It was originally introduced to measure the entropy exponent (string susceptibility) efficiently. Fig.1 shows the configuration of a BU with an area B (variable) branching off the mother universe with an area A (fixed). {\vs 6} \begin{center} Fig.1\q MINBU configuration \end{center} The 'neck' of Fig.1 is composed of three links, which is the minimum loop in the dynamically triangulated surface. The configuration is called the minimum neck baby universe (MINBU). The MINBU distribution for Liouville gravity and its matter-coupled case were already measured\cite{AJT,Th,AT}. First we explain briefly our lattice model of $R^2$-gravity. The surface is regularized by triangulation. The number of vertices, where links (edges of triangles) meet, is $N_0$. The number of links at the $i$-th vertex ($i=1,2,\cdots,N_0$) is $q_i$.
The number of triangles ($N_2$) is related to $N_0$\ as $N_2=2N_0-4$\ for the sphere topology. The discretized model is then described by \begin{eqnarray} &S_L=-\be_L\frac{4\pi^2}{3}\sum_{i=1}^{N_0}\frac{(6-q_i)^2}{q_i} =-48\pi^2\be_L\sum_{i}\frac{1}{q_i}+\mbox{const}\com &\label{lat.1} \end{eqnarray} where $\be_L$\ is the $R^2$-coupling constant of the lattice model. We perform measurements for $\be_L=0,50,100,200,300,-20,-50$. We present the MINBU distribution of $R^2$-gravity with no matter field (pure $R^2$-gravity) in Figs.2 and 3, for $\be_L\geq 0$\ and for $\be_L\leq 0$\ respectively. The total number of triangles is $N_2=5000$. For the details see \cite{TY}. {\vs 6} \begin{center} Fig.2\q MINBU distribution for $\be_L\geq 0$, Pure $R^2$-gravity. \end{center} {\vs 6} \begin{center} Fig.3\q MINBU distribution for $\be_L\leq 0$. Pure $R^2$-gravity. \end{center} As for positive $\be_L$\ (Fig.2), we clearly see the transition point $P_0$, for each curve, at which the distribution qualitatively changes. For the region $P=B/A > P_0$, the birth probability decreases as the size of the BU increases. For the region $P < P_0$, the birth probability increases as the size of the BU increases. The value of the transition point $P_0$\ depends on $\be$\ and increases as $\be$\ increases. As for negative $\be$\ (Fig.3), the slope of the curve becomes steeper as $|\be|$\ increases, at least in the region $P<P_1$. The transition point $P_1$\ is not as clear as in Fig.2. In Sect.4.2 we interpret these data theoretically, using the semiclassical approach explained in Sect.3. \section{Semiclassical Approach} We analyse the simulation data by the semiclassical approach.
The $R^2$-gravity interacting with $c_m$-component scalar matter fields is described by \begin{eqnarray} & S=\intx\sqg (\frac{1}{G} R-\be R^2-\mu -\half\sum_{i=1}^{c_m}\pl_a\Phi_i\cdot g^{ab}\cdot \pl_b\Phi_i)\com\q (\ a,b=1,2\ )\com & \label{3.1} \end{eqnarray} where $G$\ is the gravitational coupling constant, $\mu$\ is the cosmological constant, $\be$\ is the coupling strength of the $R^2$-term and $\Phi$\ denotes the $c_m$-component scalar matter fields. The signature is Euclidean. The partition function, under the fixed area condition\ $ A=\intx \sqg\ $ and in the conformal-flat gauge\ $g_{ab}=\ e^{\vp}\ \del_{ab}$\ , is written as \cite{P}, \begin{eqnarray} & {\bar Z}[A]= \int\frac{\Dcal g\Dcal\Phi}{V_{GC}}\{exp\PLinv S\}~\del(\intx\sqg-A) =exp\PLinv (\frac{8\pi(1-h)}{G}-\mu A)\times Z[A]\com & \nn\\ & Z[A]\equiv\int\Dcal\vp~ e^{+\frac{1}{\hbar} S_0[\vp]}~\del(\intx ~e^\vp - A)\com & \label{3.2}\\ & S_0[\vp]=\intx\ (\frac{1}{2\ga}\vp\pl^2\vp -\be~e^{-\vp}(\pl^2\vp)^2 +\frac{\xi}{2\ga}\pl_a(\vp\pl_a\vp)\ )\com \q \frac{1}{\ga}=\frac{1}{48\pi}(26-c_m)\com & \label{3.3} \end{eqnarray} where $h$\ is the number of handles \footnote{ The sign convention for the action differs from the usual one, as seen in (\ref{3.2}). }. $V_{GC}$\ is the gauge volume due to the general coordinate invariance. $\xi$\ is a free parameter. The total derivative term generally appears when integrating the anomaly equation \ $\del S_{ind}[\vp]/\del\vp=\frac{1}{\ga}\pl^2\vp\ $. This term turns out to be very important. \footnote{ The uniqueness of this term, among all possible total derivatives, is shown in \cite{ITY}. } We consider a manifold of fixed sphere topology, $h=0$, and the case $\ga>0\ (c_m<26)$. \ $\hbar$\ is the Planck constant. \footnote{ In this section only, we explicitly write $\hbar$\ (the Planck constant) in order to show the perturbation structure clearly.
} $Z[A]$\ is rewritten, after a Laplace transformation and its inverse, as \begin{eqnarray} & Z[A]=\int\frac{d\la}{\hbar}\int\Dcal\vp~exp\ \frac{1}{\hbar}[\ S_0[\vp] -\la (\intx e^\vp - A)] &\nn\\ &=\int\frac{d\la}{\hbar}e^{\PLinv \la A} \int\Dcal\vp~exp~\{\PLinv S_\la[\vp]\}\com &\nn\\ &S_\la[\vp]\equiv S_0[\vp]-\la\intx~e^\vp\ &\nn\\ &=\intx\ (\frac{1}{2\ga}\vp\pl^2\vp -\be~e^{-\vp}(\pl^2\vp)^2\ +\frac{\xi}{2\ga}\pl_a(\vp\pl_a\vp)\ -\la~e^\vp\ )\com &\label{3.4} \end{eqnarray} where the $\la$-integral should be carried out along an appropriate contour parallel to the imaginary axis in the complex $\la$-plane. Note that the $\del$-function constraint in (\ref{3.2}) has been replaced by the $\la$-integral. The leading order configuration is given by the stationary minimum. \begin{eqnarray} \left. \frac{\del S_\la[\vp]}{\del\vp}\right|_{\vp_c} =\left. \frac{1}{\ga}\pl^2\vp +\be\{ e^{-\vp}(\pl^2\vp)^2-2\pl^2(e^{-\vp}\pl^2\vp)\}-\la e^\vp \right|_{\vp_c}=0\com\nn\\ \left.\frac{d}{d\la}(\la A+S_\la[\vp_c])\right|_{\la_c}=0\com \label{3.5}\\ Z[A]\approx \PLinv exp~\PLinv\{\la_cA+S_{\la_c}[\vp_c]\}\equiv \PLinv exp~\PLinv \Ga^{eff}_c \pr\nn \end{eqnarray} Generally this approximation is valid for a large system. In the present case, the system size is proportional to $\frac{4\pi}{\ga}=\frac{26-c_m}{12}$. We expect the approximation to be valid except in the region $c_m\sim 26$. The solutions $\vp_c$\ and $\la_c$, which describe the positive-constant-curvature configuration and are continuous at $\be=0$, are given by\cite{ITY} \begin{eqnarray} \vp_c(r )=-ln~\{ \frac{\al_c}{8}(1+\frac{r^2}{A})^2\}\com\q r^2=(x^1)^2+(x^2)^2\com \nn\\ \al_c=\frac{4\pi}{w}\{ w+1-\sqrt{w^2+1 -2\xi w} ~\}\com\q w=16\pi\be'\ga\com\q \be'\equiv \frac{\be}{A}\com \label{3.6}\\ \ga\la_c A=\frac{w}{16\pi}(\al_c)^2-\al_c\com\nn \end{eqnarray} where $\xi$\ must satisfy $-1\leq\ \xi\ \leq\ +1$\ for the realness of $\al_c$. $(x^1,x^2)$\ are the flat (plane) coordinates.
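The saddle-point values in (\ref{3.6}) are straightforward to evaluate numerically. Below is a minimal sketch (the function names are ours); it checks that for $\xi=1$ the Liouville limit $w\to 0$ gives $\al_c=8\pi$, the constant-curvature sphere value used later:

```python
import math

def alpha_c(w, xi):
    """alpha_c of eq. (3.6); real for -1 <= xi <= +1."""
    if abs(w) < 1e-12:                      # Liouville limit beta = 0
        return 4.0 * math.pi * (1.0 + xi)   # limiting value of the formula below
    return (4.0 * math.pi / w) * (w + 1.0 - math.sqrt(w * w + 1.0 - 2.0 * xi * w))

def gamma_lambda_c_A(w, xi):
    """gamma * lambda_c * A = (w / 16 pi) alpha_c^2 - alpha_c, eq. (3.6)."""
    a = alpha_c(w, xi)
    return w / (16.0 * math.pi) * a * a - a

# For xi = 1 the discriminant is a perfect square, so alpha_c = 8 pi for 0 < w < 1,
# and gamma * lambda_c * A -> -8 pi as w -> 0 (Liouville gravity).
assert abs(alpha_c(0.5, 1.0) - 8.0 * math.pi) < 1e-9
assert abs(gamma_lambda_c_A(1e-8, 1.0) + 8.0 * math.pi) < 1e-4
```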
The partition function at the classical level is given by \begin{eqnarray} & \Ga^{eff}_c=~ln~Z[A]|_{\hbar^0} =\la_c A+(1+\xi)\frac{4\pi}{\ga}~ln\frac{\al_c}{8}-\frac{\al_c}{\ga}w +C(A)\com & \nn\\ & C(A)=\frac{8\pi (2+\xi)}{\ga}+\frac{8\pi\xi}{\ga} \{~ln(L^2/A)-1~\}+O(A/L^2)\com & \label{3.7}\\ & \frac{L^2}{A}\gg 1\com &\nn \end{eqnarray} where $L$\ is the {\it infrared cut-off} ($r^2\leq L^2$) introduced for the divergent volume integral of the total derivative term. Note that $C(A)$\ does not depend on $\be$\ (or $w$) but on $c_m$\ (or $\ga$) and $A$. Furthermore $C(A)$\ contains an arbitrary constant of the form $(8\pi\xi/\ga)\times\mbox{(const)}$\ due to the freedom in the choice of the regularization parameter:\ $L\ra \mbox{(const)}'\times L$. This arbitrary constant turns out to be important. For the case $\be=0$\ , the theory is ordinary 2d gravity, and we call it Liouville gravity in contrast with $R^2$-gravity for $\be\not= 0$. For the case $c_m=0$\ , the theory is called pure gravity in contrast with the matter-coupled gravity ($c_m\not= 0$). \section{Semiclassical Analysis of MINBU Distribution} First we explain the free parameter $\xi$. Recent analysis of the present theory at the (1-loop) quantum level has revealed that it is conformal (the renormalization group beta functions vanish) for $w\geq 1$\ when we take $\xi=1$\ \cite{S1}. Therefore the value $\xi=1$\ has some meaning purely within the theory. The validity of this choice is also confirmed from a different approach, namely the comparison of the special case $\be$ (or $w$) $=0$\ (Liouville gravity) of the present result with the corresponding result from conformal field theory (the KPZ result)\cite{KPZ}.
The asymptotic behaviour of $Z[A]|_{\hbar^0}$ \ at $w=0$\ is given, from (\ref{3.7}), as \begin{eqnarray} Z[A]|_{\hbar^0,w=0}=\left.e^{\Ga^{eff}_c}\right|_{w=0} = exp\{ \frac{4\pi}{\ga}(3-\xi) +(1+\xi)\frac{4\pi}{\ga}ln\frac{1+\xi}{2}+\frac{8\pi\xi}{\ga}ln\frac{L^2}{A} \}\approx \nn\\ A^{-\frac{8\pi\xi}{\ga}}\times\mbox{const}=A^{-\frac{26-c_m}{6}\xi} \times\mbox{const}\com\nn\\ \mbox{as}\ A\rightarrow +\infty\pr\label{cdep.1} \end{eqnarray} On the other hand, the KPZ result is \begin{eqnarray} Z^{KPZ}[A]\sim A^{\ga_s-3}\com\ \ga_s=\frac{1}{12}\{c_m-25-\sqrt{(25-c_m)(1-c_m)}\}+2\pr \label{cdep.2} \end{eqnarray} In order for our result to coincide with the KPZ result in the 'classical limit' $c_m\rightarrow -\infty$\ :\ $Z^{KPZ}[A]\sim A^{+\frac{1}{6}c_m}$\ , we must take \begin{eqnarray} \xi=1\com \label{cdep.3} \end{eqnarray} in (\ref{cdep.1}). In the remainder of this text we take this value. \footnote{ In the numerical evaluation, we take $\xi=0.99$~ for practical reasons. } The asymptotic behaviour of the present semiclassical result for Liouville gravity is, taking $\xi=1$\ in (\ref{cdep.1}), \begin{eqnarray} Z[A]\sim A^{-\frac{26-c_m}{6}}\times A^{-1}\com\q A\rightarrow +\infty \com \label{cdep.4} \end{eqnarray} where the additional factor $A^{-1}$~ comes from the $\la$-integral in the expression (\ref{3.4}) of $Z[A]$\cite{S2}. Now we compare the KPZ result and the semiclassical result in normalized form. \begin{eqnarray} Z^{KPZ}_{norm}[A]\equiv \frac{Z^{KPZ}[A]}{Z^{KPZ}[A]|_{c_m=0}} \sim A^{\ga_s(c_m)-\ga_s(c_m=0)}\com\nn\\ \ga_s(c_m)-\ga_s(c_m=0)=\frac{1}{12}\{c_m+5-\sqrt{(25-c_m)(1-c_m)}\} \com \label{cdep.4b}\\ Z_{norm}[A]\equiv \frac{Z[A]}{Z[A]|_{c_m=0}} \sim A^{+\frac{c_m}{6}} \pr \nn \end{eqnarray} We can numerically confirm that the semiclassical result, $\frac{c_m}{6}$, and the KPZ result, $\ga_s(c_m)-\ga_s(c_m=0)$~, have very similar behaviour in the region $c_m\leq 1$\cite{S2}. Now we go back to the general value of $\be$.
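The similarity between the semiclassical exponent $c_m/6$ and the KPZ exponent $\ga_s(c_m)-\ga_s(c_m=0)$ claimed above can be checked numerically; a minimal sketch (the function names are ours):

```python
import math

def kpz_exponent(cm):
    """gamma_s(c_m) - gamma_s(0) from eq. (cdep.4b); real for c_m <= 1."""
    return (cm + 5.0 - math.sqrt((25.0 - cm) * (1.0 - cm))) / 12.0

def semiclassical_exponent(cm):
    """Exponent c_m / 6 of the normalized semiclassical Z[A]."""
    return cm / 6.0

# Both exponents vanish at c_m = 0; their difference tends to 2/3 as
# c_m -> -infinity, so they agree to leading order in the 'classical limit'.
diffs = [abs(kpz_exponent(c) - semiclassical_exponent(c)) for c in range(-100, 2)]
assert abs(kpz_exponent(0.0)) < 1e-12
assert max(diffs) < 0.7
```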
The birth-probability of a baby universe with area $B (0 < B < A/2)$ from the mother universe with total area A is given by\cite{JM} \begin{eqnarray} {n_A(B)}=\frac {3(A-B+a^2)(B+a^2)Z[B+a^2]Z[A-B+a^2]} {A^2\times Z[A]} \nn\\ \approx\frac {3(1-p)pZ[pA]Z[(1-p)A]}{Z[A]}\com \label{cdep.5}\\ \ln~(\frac{ {n_A(B)} }{3}) \approx\ln~(1-p)p+\ln~Z[pA]+\ln~Z[(1-p)A]-\ln~Z[A]\com\nn\\ p\equiv\frac{B}{A}\com\q 0<p<\half\pr \nn \end{eqnarray} We apply the result for $Z[A]$\ in Sect.3 to the above expressions. \subsection{ $c_m$-dependence} First we present the semiclassical prediction for Liouville gravity ($\be=0$). The result (\ref{3.7}) for the case $\be=0$\ gives, taking $\xi=1$, \begin{eqnarray} \ga~ln~Z[rA] =8\pi (\ln~\pi+1)+8\pi\ln(\frac{1}{r}\cdot \frac{L^2}{A})\pr \label{cdep.6} \end{eqnarray} Then the MINBU distribution normalized by pure gravity ($c_m=0$) is obtained as \begin{eqnarray} \frac{n_A(B)}{n_A(B)|_{c_m=0}}= \{p(1-p)\}^{\frac{c_m}{6}}\times \exp~\{ \frac{c_m}{12}\times \Del\}\com\nn\\ \Del\equiv -2(\ln~\pi +1)-2\ln\frac{L^2}{A}\com \label{cdep.7} \end{eqnarray} where $\Del$\ can be regarded as a free real parameter due to the arbitrariness of the infrared regularization parameter $L$. We see from the result (\ref{cdep.7}) that the MINBU distribution lines for different $c_m$'s cross at a single point $p=p^{*}$\ given by \begin{eqnarray} p^*(1-p^*)=\exp\{-\half~\Del\}\com\q p^*<\half\pr \label{cdep.8} \end{eqnarray} Fig.4 shows three typical cases of $p^*$. \vspace{2cm} {\vs 6} \begin{center} Fig.4\q Three typical cases of the solution of (\ref{cdep.8}). \end{center} The choice of $\Del$\ is important to fit the theoretical curve (\ref{cdep.7}) to the data. We show the behaviour of (\ref{cdep.7}) for the three cases:\ 1)\ $\exp(-\half \Del)~\ll \fourth$\ , Near Point O, Fig.5a\ ;\ 2)\ $\exp(-\half \Del)~>\fourth$\ , Above Point A, Fig.5b\ ;\ 3)\ $\exp(-\half \Del)~=\fourth -0$\ , Near Point A, Fig.5c.
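Equation (\ref{cdep.8}) is a quadratic in $p^*$, which has a root in $(0,\half)$ only when $\exp(-\half\Del)\leq\fourth$. A short numerical sketch (the function name is ours), reproducing the three cases above:

```python
import math

def crossing_point(delta):
    """Smaller root of p (1 - p) = exp(-delta / 2); None when exp(-delta/2) > 1/4,
    i.e. when the distribution lines do not cross in (0, 1/2)."""
    rhs = math.exp(-delta / 2.0)
    if rhs > 0.25:
        return None
    return (1.0 - math.sqrt(1.0 - 4.0 * rhs)) / 2.0

assert crossing_point(1.0) is None          # case 2 (Fig.5b): above point A
assert crossing_point(8.0) < 0.05           # case 1 (Fig.5a): near point O
assert 0.25 < crossing_point(3.0) < 0.5     # case 3 (Fig.5c): near point A
```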
{\vs 6} \begin{center} Fig.5a\q MINBU distribution for Liouville gravity, $\Del=8$ \end{center} {\vs 6} \begin{center} Fig.5b\q MINBU distribution for Liouville gravity, $\Del=1$ \end{center} {\vs 6} \begin{center} Fig.5c\q MINBU distribution for Liouville gravity, $\Del=3$ \end{center} Fig.5a fits well with the known results of the computer simulations\cite{AJT,AT}. This result shows the importance of the infrared regularization. \subsection{$\be$-dependence} We consider pure gravity ($c_m=0$). We plot the MINBU distribution, $\ln~n_A(B)$, as a function of $p\ (\ 0.001<p<0.1\ )$\ for various cases of $\be'=\be/A$\ ($\xi=0.99$). Figs.6a and 6b show this for $\be'>0$\ and $\be'<0$, respectively. {\vs 5} \begin{center} Fig.6a\q MINBU distribution for $\be'\geq 0$. $\xi=0.99,c_m=0$. \end{center} {\vs 5} \begin{center} Fig.6b\q MINBU distribution for $\be'\leq 0$. $\xi=0.99,c_m=0$. \end{center} The above results of Fig.6a and Fig.6b qualitatively coincide with those of Fig.2 and Fig.3, respectively. \vspace{1cm} We list the asymptotic behaviour of $\ln~n_A(B)$\ for general $\xi$\ and $c_m$\ in Table 1. \vspace{0.5cm} \begin{tabular}{|c|c|c|c|} \hline Phase & (C)\ $0<p\ll -w(\ltsim 1)$ & (B)\ $|w|\ll p$ & (A)\ $0<p\ll w(\ltsim 1)$ \\ \hline $\al^-_p(pA)$ & $8\pi\{1+\frac{1-\xi}{2}\frac{p}{w}$ & $4\pi(1+\xi)\{1-\frac{1-\xi}{2}\frac{w}{p}$ & $\frac{4\pi(1+\xi)p}{w}\times $ \\ & $+O(\frac{p^2}{w^2})\}$ & $+O(\frac{w^2}{p^2})\}$ & $\{1+O(\frac{p}{w})\}$ \\ \hline & $(1-\frac{8\pi\xi}{\ga})\ln~p$ & $(1-\frac{8\pi\xi}{\ga})\ln~p$ & $\{1-\frac{4\pi(1-\xi)}{\ga}\}\ln~p$ \\ $\ln~{n_A(B)}$ &$-\frac{4\pi}{\ga}\frac{w}{p}$ & $+O(\frac{w}{p})$ & $-\frac{4\pi(1+\xi)}{\ga}~\ln~w$ \\ &$+O(\frac{p}{w})$ & +SmallTerm & $+O(\frac{p}{w}) $ \\ & +SmallTerm & & +SmallTerm \\ \hline \multicolumn{4}{c}{\q} \\ \multicolumn{4}{c}{Table 1\ \ Asymp. behaviour of MINBU distribution, (\ref{cdep.5}), }\\ \multicolumn{4}{c}{ for general $c_m$~ and $\xi$.
$R>0, w\equiv 16\pi\be'\ga, \ga=\frac{48\pi}{26-c_m}>0,p=\frac{B}{A},$}\\ \multicolumn{4}{c}{ $0<p\ll 1,\ |w|\ltsim 1,\ \mbox{SmallTerm}= \mbox{const} +O(wp)+O(p).$} \end{tabular} \vspace{0.5cm} We characterize each phase in Table 1 as follows. \flushleft{(A)\ $0<p\ll w$:\ Smoothly Creased Surface \footnote{ In \cite{ITY} we called it Free Creased Surface because this is the phase where the free kinetic term ($R^2$-term) dominates.} } The smoothing term, $R^2$, dominates the main configuration and the surface is smooth. The left part $P<P_0(w)$\ for each curve ($w$) in Fig.6a corresponds to this phase. A small BU is harder to be born because it requires high curvature locally. A large BU is energetically preferred. The area constraint is not effective in this phase. The characteristic scale is $\be$. \flushleft{(B)\ $|w|\ll p$:\ Fractal Surface} The randomness dominates the configuration. The size of the BU is large enough that the $R^2$-term is not effective. Neither is the area constraint effective. There is no characteristic scale. The right part $P>P_0(w)$\ for each curve ($w$) in Fig.6a and the right part $P>P_1(w)$\ for each curve ($w$) in Fig.6b correspond to this phase. The MINBU distribution is mainly determined by the random distribution of the surface configuration\cite{BIPZ}. \flushleft{(C)\ $0<p\ll -w$:\ Rough Surface \footnote{ In \cite{ITY} we called it Strongly Tensed Perfect Sphere because the surface tension is negatively large and the shape of the whole surface is near a sphere. At the same time the surface tends to become sharp-pointed because that increases the curvature. We call the surface under this circumstance, simply, Rough Surface.} } Due to the large negative value of the $R^2$-coupling, on the one hand configurations with large curvature are energetically preferable, while on the other hand they are strongly influenced by the area constraint.
Therefore a large BU is much harder to be born than in (B) because it has a small curvature and a large area. A small BU is much easier to be born than in (B) because it has a large curvature and a small area. The left part $P<P_1(w)$\ for each curve ($w$) in Fig.6b corresponds to this phase. The characteristic scale is the total area $A$. \vspace{1cm} \q We see that the phase structure of Table 1 is the same as that of \cite{ITY} under the substitution of $w$\ by $w/p$. Although both simulations measure the same surface property, the cross-over phenomenon appears differently. In \cite{ITY} the physical quantity $<\intx\sqg R^2>$\ is taken to probe the surface property. The cross-over can be seen only by measuring over a range of $w$, and the transition point is given by a certain value $|w^*|\approx 1$. This contrasts with the present case, where the cross-over can be seen for any $w$. The transition is seen, in the MINBU distribution, at the point $p^*$\ given by $|w|/p^*\approx 1$. We understand this as follows. The MINBU distribution measures the surface at many different 'scales' $B$, whereas the quantity $<\intx\sqg R^2>$\ measures the surface at a fixed 'scale' ($B_1$\ (or $p_1$) in the MINBU terminology). \subsection{ General Case} We consider the general case of $c_m$\ and $\be$. This general case has not yet been measured by Monte Carlo simulation. We present the semiclassical prediction. The analysis so far shows that the normalization ((\ref{cdep.4b}) and (\ref{cdep.7})) and the choice of an arbitrary constant due to the infrared regularization (\ref{cdep.7}) are important for the quantitative adjustment. Here, however, we are content with the qualitative behaviour. We do not perform the normalization, and we ignore the $\ln~\frac{L^2}{A}$~term in the evaluation of this subsection.
\flushleft{(1)\ $c_m$-dependence} We stereographically show MINBU distributions for the range:\ $0.001\leq p\leq 0.2,\ -24\leq c_m\leq +24$\ , in Fig.7a ($\be'=0$), Fig.7b ($\be'=+10^{-4}$) and Fig.7c ($\be'=-10^{-5}$). {\vs 5} \begin{center} Fig.7a\q MINBU distribution for $0.001\leq p\leq 0.2,\ -30\leq c_m\leq +24$\ . $\be'=0,\xi=0.99$. \end{center} {\vs 5} \begin{center} Fig.7b\q MINBU distribution for $0.001\leq p\leq 0.2,\ -30\leq c_m\leq +24$\ . $\be'=+10^{-4},\xi=0.99$. \end{center} {\vs 5} \begin{center} Fig.7c\q MINBU distribution for $0.001\leq p\leq 0.2,\ -30\leq c_m\leq +24$\ . $\be'=-10^{-5},\xi=0.99$. \end{center} No 'ridge' appears in Fig.7a. {}From this, we see that matter fields affect the surface dynamics homogeneously at all scales. (This result is natural because the matter coupling constant $c_m$\ does not have a scale dimension.) The slope along the $p$-axis continuously decreases as $c_m$\ increases. In Fig.7b, a ridge runs from a low $p$\ to a high $p$\ as $c_m$\ increases. In Fig.7c, a 'hollow' runs from a high $p$\ to a low $p$\ as $c_m$\ increases. The ridge and the hollow correspond to the series of cross-over points. In both Fig.7b and Fig.7c, the cross-over becomes dimmer as $c_m$\ increases and sharper as $c_m$\ decreases. \flushleft{(2)\ $\be$-dependence} We stereographically show MINBU distributions for the range:\ $0.001\leq p\leq 0.2,\ -10^{-5}\leq \be'\leq +10^{-4}$\ , in Fig.8a ($c_m=0$), Fig.8b ($c_m=+10$) and Fig.8c ($c_m=-10$). {\vs 5} \begin{center} Fig.8a\q MINBU distribution for $0.001\leq p\leq 0.2, \ -10^{-5}\leq \be'\leq +10^{-4}$\ . $c_m=0,\xi=0.99$. \end{center} {\vs 5} \begin{center} Fig.8b\q MINBU distribution for $0.001\leq p\leq 0.2, \ -10^{-5}\leq \be'\leq +10^{-4}$\ . $c_m=+10,\xi=0.99$. \end{center} {\vs 5} \begin{center} Fig.8c\q MINBU distribution for $0.001\leq p\leq 0.2, \ -10^{-5}\leq \be'\leq +10^{-4}$\ . $c_m=-10,\xi=0.99$. \end{center} Fig.8a corresponds to the stereographic display of Figs.6a and 6b.
In each of Figs.8a-c, a ridge appears for $\be'>0$. For $\be'<0$, a tower appears instead of a ridge. For a large positive $c_m$\ (matter dominated region, $c_m=10$\ in Fig.8b) the undulation of the MINBU distribution surface \footnote{Do not confuse it with the 2d manifold which the present model of gravity represents.} is small (the cross-over is dim), whereas it is large (the cross-over is sharp) for a large negative $c_m$\ (matter anti-dominated region, $c_m=-10$\ in Fig.8c). \section{Discussion and Conclusion} In (2d) QG, at present, there exists no simple way to find good physical observables. They have been found by 'trial and error'. MINBU is one of the good observables for measuring the surface property. Quite recently a new observable, the 'electric resistivity' of the surface, was proposed by \cite{KTY}. By measuring this observable for the matter-coupled Liouville gravity, they observe a cross-over, near $c_m=1$, from the surface where a complex structure is well-defined to the surface where it is not well-defined. The analysis of the new observable, from the standpoint of the present approach, is important. \q There are some straightforward but important applications of the present analysis:\ 1)\ the higher-genus case, 2)\ the case with other higher-derivative terms such as $R^3$\ and $\na R\cdot \na R$\ , 3)\ the quantum effect. As for 2), references \cite{TY2} and \cite{Tsuda} have already obtained the Monte Carlo data. \q We have presented the numerical results of MINBU and their theoretical explanation using the semiclassical approximation. The surface properties are characterized. It is confirmed that the present lowest-order approximation is very efficient for analysing 2d quantum gravity, at least qualitatively. \q Finally we expect that other new observables will be found and many Monte Carlo measurements will be done, including 3 and 4 dimensional cases, in the next few years.
The interplay between measurement by computer simulation and theoretical interpretation will become more and more important. We believe this process will lead to the right understanding of (Euclidean) quantum gravity. \begin{flushleft} {\bf Acknowledgement} \end{flushleft} The authors thank N. Ishibashi and H. Kawai for comments and discussions about the present work.
\section{Introduction} In the recent literature one finds many alternative proposals for modeling and estimating a smooth function. In this article we focus on variants of smoothing splines, called penalized regression splines (\cite{Montoya14}, \cite{brian:eilers:1996}). This is an attractive approach for modeling the nonlinear smooth effects of covariates. This work discusses the selection of knots given a fixed maximum number of knots. A roughness penalty is introduced to control the selection of knots and consequently to balance the two conflicting goals, goodness of fit and smoothness. Our approach will be through a full Bayesian Lasso with variational inference. It is related to the work of \cite{Osborne99}, where an efficient algorithm to calculate the classical Lasso estimator was presented. Our contribution, therefore, includes the application of mean field variational inference (\cite{Blei17}, \cite{Ormerod10}) to the complete Bayesian Lasso penalty (\cite{Park08} and \cite{Mallick14}). Choosing the ideal number of knots and their positions is a difficult problem. We propose a two-step procedure related to the work of \cite{Ruppert02}. For regularization and model selection, the proposed procedure starts with a fixed maximum number of knots and then uses a full Bayesian Lasso, which combines characteristics of shrinkage and variable selection, to obtain the most significant knots to recover the unknown smooth function. The number of knots is chosen based on an approximation of the predictive distribution on a grid of knot values. The original formulation of the Bayesian Lasso is based on a hierarchical representation of the Laplace distribution, as a scale mixture of normals with an exponential mixing distribution (\cite{Park08}) and, more recently, as a mixture of a uniform with an exponential (\cite{Mallick14}).
Alternative procedures for selecting the effective number of knots involving least squares and penalized spline regression have been proposed in the recent literature; see \cite{Spiriti12} and \cite{Montoya14}. It is well known that MCMC often takes a great deal of computational time and is not scalable. Therefore, our proposal is to use variational inference (VI) integrated with a decision-theoretical approach to knot selection in regression splines. Both are discussed in detail and are shown to be comparatively better than the alternatives presented in the current literature. The remainder of the paper is organized as follows. Section 2 presents a review of the Bayesian linear model, variational inference and the hierarchical formulation of the Laplace distribution. Section 3 presents the full Bayesian Lasso, including Jeffreys' prior for the hyperparameters and the Bayes factor criterion for knot selection. Section 4 states the knot selection procedure for regression splines as an almost fully automatic algorithm. Section 5 shows a comparative numerical simulation of the performance of the proposed method and other existing approaches in the literature. A data analysis of real datasets is presented in Section 6. \section{ A review of the Methodology } In order to set the notation to be used later, this section presents a brief summary of Bayesian regression models and variational inference techniques and also establishes the framework for our proposal to select knots in regression spline models, to be developed in Section \ref{sec:regsplines}. We summarize the conjugate Bayesian analysis of a linear model. In addition, we present an introduction to variational inference and the hierarchical representation of the Laplace distribution. For more details see \cite{Drugowitsch19}, \cite{deni:mall:smit:1998}, \cite{Berry02}, \cite{goepp:hal:2018} and \cite{Lang04}.
\subsection{Bayesian estimation in linear models} Following the notation of \cite{MGL2015}, let the linear model be $$ {\bf y} \, | \, \mbox{\boldmath $\beta$}, \phi \sim N(X\mbox{\boldmath $\beta$}, \phi^{-1} I_n) $$ where ${\bf y}$ is an $n$-vector of observed quantities, $X$ is a known $n \times p$ matrix, $\mbox{\boldmath $\beta$}$ is a $p$-vector of parameters and $\phi$ is the precision associated with each of the independent observations. The conjugate prior, a Normal-Gamma, is defined as: \begin{eqnarray*} \mbox{\boldmath $\beta$} | \phi &\sim& N({\bf m}_0, \phi^{-1} C^{-1}_0)\\ \phi &\sim& Ga(a_0, b_0) \end{eqnarray*} where ${\bf m}_0$ and $(\phi \, C_0)^{-1}$ are, respectively, the prior mean and covariance matrix and $a_0, b_0$ are the parameters of the precision prior distribution. The posterior distribution is \begin{eqnarray*} \mbox{\boldmath $\beta$}| \phi, {\bf y}, X &\sim& N({\bf m}_1, \phi^{-1} C_1^{-1}) \\ \phi|{\bf y}, X &\sim& Ga(a_1, b_1)\\ {\bf m}_1 &=& C^{-1}_1 \, ( C_0 {\bf m}_0 + X^T{\bf y} ) \, \, \, \, \, \, \mbox{and} \, \, \, \, \, \, C_1 = C_0 + X^TX\\ a_1 & =& a_0 + \frac{n}{2} \, \, \, \, \, \, \, \mbox{and} \, \, \, \, \, \, b_1 = b_0 + \frac{1}{2}[({\bf y} - X {\bf m}_1)^T {\bf y} + ({\bf m}_0-{\bf m}_1)^T \, C_0 {\bf m}_0] \end{eqnarray*} A very useful extension of the above regression model is the Bayesian hierarchical regression model, which will be extensively used later. It was proposed in the seminal paper of \cite{LindleySmith72}, and a dynamic version was introduced in \cite{DaniMigon:93}. \subsection{Variational Inference - main aspects}\label{subsec:VB} It is well known that Bayesian inference regarding unknown quantities is entirely based on their probabilistic description. Therefore, {\it variational inference (VI)}, a method to deal with the approximation of probability densities, is very useful for Bayesian inference. In fact, these techniques can be traced back to the field of machine learning (\cite{Jordan99}).
Loosely speaking, they basically exchange {\it sampling}, as in MCMC procedures, for optimization. By choosing a flexible family of approximate densities, an attempt is made to find the member of this family which minimizes some optimality criterion, for example the Kullback-Leibler divergence ($KL$). Variational inference is useful for quickly comparing alternative models and also for dealing with large data sets. \cite{Blei17} pointed out that the accuracy of variational inference has not yet been thoroughly studied and many open questions remain to be answered. The basic ideas about variational inference can be easily followed in \cite{Blei17} and in \cite{Ormerod10}. Many examples are presented in the \cite{Bishop06} book. Let ${\bf y}$ be a vector of $n$ independent identically distributed observations and ${\bf z}$ a vector including the latent variables and the parameters as well. The marginal data distribution, also known as the evidence, is denoted by $p({\bf y})$. Evidence integrals are often unavailable in closed form and can require exponential time to evaluate, which makes inference for models such as this difficult. To avoid calculating the evidence integral, one works with a lower bound on $\log p({\bf y})$, which is known as the {\it ELBO$(q)$ - Evidence Lower Bound} and will be denoted by ${ \cal L}(q)$. It is easy to verify that: $$ \log\, p({\bf y}) = {\cal L}(q) \, + KL(q| |p), $$ where ${\cal L}(q) = \int q({\bf z}) \log \, \frac{p({\bf y},{\bf z})}{q({\bf z})} d{\bf z}$ \, and \, $KL(q | | p) = - \, \int q({\bf z}) \log \, \frac{p({\bf z} | {\bf y})}{q({\bf z})} d{\bf z}$, since \, $p({\bf y}) =\frac{p({\bf y},{\bf z}) /q({\bf z})}{p({\bf z}|{\bf y})/q({\bf z})}$. It is clear that $\max_{q} \, {\cal L}(q) \simeq \min_{q} \, KL(q | | p)$ and also that $KL(q| |p) \ge 0$ with equality if and only if $p({\bf z}|{\bf y})=q({\bf z})$.
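The identity $\log\, p({\bf y}) = {\cal L}(q) + KL(q||p)$ can be verified exactly on a toy model with a single binary latent variable; a sketch (all numbers are arbitrary):

```python
import math

# Joint p(y, z) for the observed y and a binary latent z.
p_joint = {0: 0.12, 1: 0.28}
p_y = sum(p_joint.values())                       # evidence p(y) = 0.40
posterior = {z: p_joint[z] / p_y for z in p_joint}

q = {0: 0.5, 1: 0.5}                              # an arbitrary variational density

elbo = sum(q[z] * math.log(p_joint[z] / q[z]) for z in q)   # L(q)
kl = sum(q[z] * math.log(q[z] / posterior[z]) for z in q)   # KL(q || p(.|y))

assert abs(math.log(p_y) - (elbo + kl)) < 1e-12   # log p(y) = L(q) + KL(q||p)
assert kl >= 0.0                                  # with equality iff q = posterior
```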
In general, it is difficult to obtain this posterior distribution; therefore, the approach is to choose a family of tractable densities. Let us assume the following: $$ q({\bf z}) = \prod_{l=1}^m q_l({\bf z}_l) $$ where ${\bf z}_l$ denotes the $l$-th block of a partition of ${\bf z}$ into $m$ disjoint groups. It is worth pointing out that there is no restriction on the functional forms of the variational densities $q_l({\bf z}_l)$. The central idea is to maximize each factor (block of ${\bf z}$'s) of $q({\bf z})$ in turn: we keep $q_l$, $l \ne h$, fixed and maximize ${\cal L}(q)$. Note that: \begin{eqnarray} \label{eq:lq} { \cal L}(q) &=& \int \, \prod_{l=1}^m \, q_l({\bf z}_l) [\log\, p({\bf y},{\bf z}) - \, \log\, q_l({\bf z}_l) ] d{\bf z} \nonumber\\ &=& \int \, q_h({\bf z}_h) \, [ \int \, \log\,p({\bf y},{\bf z}) \prod_{l \ne h} q_l({\bf z}_l) \, d{\bf z}_l] \, d{\bf z}_h - \int q_h({\bf z}_h) \, \log\, q_h({\bf z}_h) \,d{\bf z}_h + const \nonumber \\ &=& \int \, q_h({\bf z}_h) \, \log \, \tilde{p}({\bf y}, {\bf z}_h) \, d{\bf z}_h - \int \, q_h({\bf z}_h) \, \log \, q_h({\bf z}_h) \, d{\bf z}_h + const \end{eqnarray} where $ \log \tilde{p}({\bf y}, {\bf z}_h) = E_{l \ne h}[ \log \, p({\bf y},{\bf z})] + const$. The ${ \cal L}(q)$ will be presented for the specific case of the Lasso in Subsection \ref{subsec:VBLasso}. Note that it depends on the variational parameters. It is worth emphasizing that the problem of approximating the posterior distribution of the parameters of interest was replaced by a maximization problem. The algorithm to solve this optimization problem is presented in \cite{Bishop06} and is known as CAVI - coordinate ascent variational inference. The CAVI optimizes one factor of the mean field variational density at a time. Since, up to an additive constant, (\ref{eq:lq}) equals $- KL(q_h || \tilde{p})$, maximizing it is equivalent to minimizing the $KL$ divergence.
Therefore, the optimal solution is: $$ \log \, q^*_h({\bf z}_h) = E_{l \ne h} [ \log \, p({\bf y}, {\bf z})] + const $$ As one can see, $q^*_h({\bf z}_h)$ depends on the full conditional distributions, as usually denoted in the MCMC literature (\cite{George:Casella:1992}). Therefore, there is a natural link with Gibbs sampling, but the proposed approach leads to tractable solutions involving only local operations. \subsection{ Hierarchical representation of the Laplace distribution}\label{subsec:Hierarchical_laplace} It is well known that the original Lasso formulation (\cite{Tibishirani96}) is related to the Laplace distribution, which can be represented as a hierarchical mixture of distributions; this hierarchical representation is what makes Gibbs sampling implementable. One of these representations is a scale mixture of a Normal distribution with an Exponential mixing distribution (\cite{West87}) and the other is a mixture of a Uniform distribution with a Gamma distribution (\cite{Mallick14}). Specifically, following \cite{Andrews74}, it is easy to verify that the hierarchical representation ${\beta}|\tau \sim N[0,\tau]$ and $\tau|\lambda \sim Exp\left(\frac{\lambda^2}{2}\right)$ leads, by marginalizing on $\tau$, to the standard Laplace distribution, whose density is: \begin{eqnarray}\label{eq:Laplace} \frac{\lambda}{2} \exp(- \lambda \, |\beta|) = \int_0^{\infty} \left[\frac{ \tau^{-1/2}}{\sqrt{2 \pi}} \, \exp\left(-\frac{ \beta^2 }{2 \tau}\right)\right] \, \, \left[ \frac{\lambda^2}{2} \exp\left(-\frac{\lambda^2}{2}\tau\right)\right] d \tau. \end{eqnarray} The above hierarchical representation of the Laplace distribution is important to introduce the full Bayesian Lasso: the penalty term in the classical Lasso can be interpreted as an independent Laplace prior distribution over the regression parameters, and the posterior mode can be seen as the Lasso estimate.
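The scale-mixture representation (\ref{eq:Laplace}) is easy to check by Monte Carlo: drawing $\tau \sim Exp(\lambda^2/2)$ and then $\beta\,|\,\tau \sim N(0,\tau)$ should reproduce the moments of a Laplace($\lambda$) distribution, whose variance is $2/\lambda^2$ and whose mean absolute value is $1/\lambda$ (a minimal numerical sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 2.0
n = 200_000

# tau | lambda ~ Exp(rate = lambda^2 / 2); numpy parametrizes by the scale 1/rate.
tau = rng.exponential(scale=2.0 / lam**2, size=n)
beta = rng.normal(loc=0.0, scale=np.sqrt(tau))

print(beta.var(), 2.0 / lam**2)          # close to the Laplace variance
print(np.abs(beta).mean(), 1.0 / lam)    # close to E|beta| = 1/lambda
```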
\section{ The full Bayesian Lasso} \label{sec:fullbayesianlasso} Following the hierarchical representation of the Laplace distribution in Subsection \ref{subsec:Hierarchical_laplace}, \cite{Park08} presents a Bayesian formulation of the Lasso regression model. The hierarchical model is defined as: \begin{eqnarray*} {\bf y}|X, \mbox{\boldmath $\beta$}, \phi &\sim& N[X\mbox{\boldmath $\beta$}, \phi^{-1} I_n]\\ \mbox{\boldmath $\beta$}|\phi, \mbox{\boldmath $\tau$} &\sim& N[0, \phi^{-1} {\bf D}_{\tau}] \\ \tau_j|\lambda &\sim& Exp(\lambda) \;\;\;\; \mbox{with} \;\;\; j = 1, \ldots, p \end{eqnarray*} where ${\bf D}_\tau = diag(\tau_1, \ldots, \tau_p)$ and the $\tau_j|\lambda$ are conditionally independent for all $j$. The model can be completed with the hyperprior distributions $ \phi \sim Ga (a_0, b_0) $ and $\lambda \sim Ga (g_0, h_0) $. In Subsection \ref{Jeffreysprior} we propose independent Jeffreys priors for $\phi$ and $\lambda$ to automate the Lasso, which corresponds to letting $a_0$, $b_0$, $g_0$ and $h_0$ tend to zero. Let $\mbox{\boldmath $\theta$} = (\mbox{\boldmath $\beta$},\phi,\mbox{\boldmath $\tau$},\lambda)$ be the vector of the parameters and the latent variables of the model. The posterior distribution is proportional to the model distribution times the prior distribution of the latent components and the parameters: $$ p(\mbox{\boldmath $\theta$} | \mathbf{y},X) \propto p( {\bf y} | X, \mbox{\boldmath $\beta$}, \phi) \, \, p(\mbox{\boldmath $\beta$} | \phi, \mbox{\boldmath $\tau$}) \,\, p(\mbox{\boldmath $\tau$}|\lambda) \, \, p(\phi) \, \, p(\lambda). $$ In practice, the above joint posterior is analytically intractable. An almost obvious numerical approach, since the breakthrough paper of \cite{Gelfand:Smith:1990}, is to use stochastic simulation.
\subsection{Jeffreys prior using Fisher decomposition} \label{Jeffreysprior} In order to develop an automatic Bayesian Lasso procedure, it is worthwhile to introduce noninformative priors for the hyperparameters involved. Following \cite{Fonseca19} and exploring the conditional independence structure of the Lasso model, the Fisher information decomposition for the Lasso follows as: \begin{eqnarray}{\label{FisherDec}} I_{\bf y}(\lambda) = I_{\mbox{\boldmath $\tau$}}(\lambda) - E_{{\bf y}}\, [ I_{\mbox{\boldmath $\beta$},\mbox{\boldmath $\tau$}} (\lambda| {\bf y})], \end{eqnarray} where $ I_{\mbox{\boldmath $\beta$},\mbox{\boldmath $\tau$}} (\lambda | {\bf y})$ is the information obtained from the full conditional distribution $ p(\mbox{\boldmath $\beta$}, \mbox{\boldmath $\tau$} | {\bf y}, \lambda)$. We also use the conditional independence described by the graph that represents the Bayesian Lasso model. We develop, in turn, each of the components in expression (\ref{FisherDec}). The quantity $I_{\mbox{\boldmath $\tau$}}( \lambda)$ is based on the independent marginal distributions of the $\tau_j$, leading directly to $I_{\mbox{\boldmath $\tau$}}(\lambda) = \frac{p}{\lambda^2}$. In order to obtain $I_{\mbox{\boldmath $\beta$}, \mbox{\boldmath $\tau$}}(\lambda | {\bf y}) $, \, we take advantage of the known full conditional distribution of $(\mbox{\boldmath $\beta$}, \mbox{\boldmath $\tau$}| \lambda, {\bf y}) $ (see (\ref{Gibbs})). Since $(\mbox{\boldmath $\beta$} | \mbox{\boldmath $\tau$}, \lambda, {\bf y}) $ does not depend on $\lambda$, it is easy to obtain $E_{{\bf y}}\, [I_{\mbox{\boldmath $\beta$},\mbox{\boldmath $\tau$}} (\lambda| {\bf y})] = \frac{p}{2\lambda^2}$. Then, substituting in (\ref{FisherDec}), it follows that $I_{\bf y}(\lambda) = \frac{p}{\lambda^2} - \frac{p}{2\lambda^2} = \frac{p}{2\lambda^2}$, and so the Jeffreys prior for $\lambda$ \, is \, $p(\lambda) \propto \lambda^{-1}$. This result is similar to the one reported in \cite{Fonseca19}, using the Uniform-Gamma mixture.
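The building block $I_{\mbox{\boldmath $\tau$}}(\lambda) = p/\lambda^2$ can be verified numerically: for $\tau_j \sim Exp(\lambda)$ the score is $\partial_\lambda \log p(\tau_j|\lambda) = 1/\lambda - \tau_j$, and the Fisher information equals the variance of the joint score (a minimal Monte Carlo sketch of ours):

```python
import numpy as np

rng = np.random.default_rng(0)
lam, p, n_mc = 2.5, 4, 500_000

# p independent tau_j ~ Exp(lam); score of the joint density with respect to lambda.
tau = rng.exponential(scale=1.0 / lam, size=(n_mc, p))
score = (1.0 / lam - tau).sum(axis=1)

print(score.var(), p / lam**2)   # Monte Carlo estimate vs. the closed form p / lambda^2
```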
It is well known that the Jeffreys prior for $\phi$ is proportional to $\phi^{-1}$. \subsection{ The MCMC formulation}\label{MCMC} Considering the model and the prior distributions already specified, the posterior distribution has no known closed form. Therefore, we use MCMC to obtain a sample from the posterior distribution through the full conditional distributions (Gibbs sampler). The full conditionals are as follows. \begin{eqnarray}\label{Gibbs} (\mbox{\boldmath $\beta$}|{\bf y}, \mbox{\boldmath $\theta$}_{-\mbox{\boldmath $\beta$}} ) &\sim& N\left((X^TX + {\bf D}_{\tau}^{-1})^{-1}X^T{\bf y} , \frac{1}{\phi} (X^TX + {\bf D}_{\tau}^{-1})^{-1}\right) \nonumber \\ (\tau_j|{\bf y}, \mbox{\boldmath $\theta$}_{-\tau_j} ) &\sim& GIG\left(\frac{1}{2}, 2 \lambda, \beta_j^2 \phi\right) \nonumber \\ (\phi|{\bf y}, \mbox{\boldmath $\theta$}_{-\phi} ) &\sim& Ga\left(\frac{n}{2}+\frac{p}{2}+a_0 , b_0 + \frac{1}{2}[({\bf y}-X\mbox{\boldmath $\beta$})^T({\bf y}-X\mbox{\boldmath $\beta$})+\mbox{\boldmath $\beta$}^T {\bf D}_{\tau}^{-1}\mbox{\boldmath $\beta$}]\right) \nonumber \\ (\lambda|{\bf y}, \mbox{\boldmath $\theta$}_{-\lambda} ) &\sim& Ga\left(g_0 + p, h_0 + \sum_{j=1}^p \tau_j\right) \end{eqnarray} where $\mbox{\boldmath $\theta$}_{-}$ stands for the entire vector $\mbox{\boldmath $\theta$}$ without the parameter indicated after the symbol "\rule{0.15cm}{0.15mm}", and {\it GIG} denotes the generalized inverse Gaussian distribution; see the appendix. \subsection{ The variational approximation applied to Lasso}\label{subsec:VBLasso} In order to obtain a scalable inference procedure, we introduce an alternative methodology. To keep the notation consistent, the vector of latent variables and parameters denoted by ${\bf z}$ in Subsection \ref{subsec:VB} is represented in this section by the vector $\mbox{\boldmath $\theta$}$.
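Before turning to the variational scheme, note that the full conditionals in (\ref{Gibbs}) translate directly into a Gibbs sampler. Below is a minimal sketch in Python; the mapping of the $GIG(\nu, a, b)$ parametrization used above, with density proportional to $x^{\nu-1}\exp\{-(ax + b/x)/2\}$, onto SciPy's \texttt{geninvgauss} (standardized parameters $p=\nu$, $b=\sqrt{ab}$ and scale $\sqrt{b/a}$) is our own reading of the SciPy documentation and should be double-checked:

```python
import numpy as np
from scipy.stats import geninvgauss, gamma

def gibbs_lasso(y, X, n_iter=500, a0=0.1, b0=0.1, g0=0.1, h0=0.1, seed=0):
    """Gibbs sampler for the Bayesian Lasso hierarchy (minimal sketch)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta, tau, phi, lam = np.zeros(p), np.ones(p), 1.0, 1.0
    draws = []
    for _ in range(n_iter):
        # beta | rest ~ N(A^{-1} X'y, phi^{-1} A^{-1}), with A = X'X + D_tau^{-1}
        A = X.T @ X + np.diag(1.0 / tau)
        beta = rng.multivariate_normal(np.linalg.solve(A, X.T @ y),
                                       np.linalg.inv(A) / phi)
        # tau_j | rest ~ GIG(1/2, 2*lam, phi*beta_j^2)
        b_par = np.maximum(phi * beta**2, 1e-10)   # guard against beta_j ~ 0
        a_par = 2.0 * lam
        tau = geninvgauss.rvs(0.5, np.sqrt(a_par * b_par),
                              scale=np.sqrt(b_par / a_par), random_state=rng)
        # phi | rest ~ Ga(a0 + (n+p)/2, b0 + quadratic forms / 2)
        rss = (y - X @ beta) @ (y - X @ beta) + beta @ (beta / tau)
        phi = gamma.rvs(a0 + (n + p) / 2.0, scale=1.0 / (b0 + rss / 2.0),
                        random_state=rng)
        # lambda | rest ~ Ga(g0 + p, h0 + sum tau_j)
        lam = gamma.rvs(g0 + p, scale=1.0 / (h0 + tau.sum()), random_state=rng)
        draws.append(beta.copy())
    return np.array(draws)
```

On simulated data with a few strong coefficients, the posterior means of the null coefficients shrink toward zero while the active ones are recovered.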
Let the independent Jeffreys priors be $p(\phi) \propto \frac{1}{\phi}$ and $p(\lambda) \propto \frac{1}{\lambda}.$ The joint distribution of the observations, latent components and parameters can easily be read off from Figure \ref{DAG}, which summarizes the model. \begin{figure}[h!] \begin{center} \includegraphics[width=15cm]{Grafo2.pdf} \end{center} \caption{Directed acyclic graph of the Bayesian Lasso model}\label{Graph1}\label{DAG} \end{figure} It is worth recalling the expression of the {\it mean field} posterior approximation for the latent components and parameters: $$ \log( q(\mbox{\boldmath $\theta$})) = \log(q_1(\mbox{\boldmath $\beta$}, \phi)) + \log(q_2(\mbox{\boldmath $\tau$}|\lambda)) + \log(q_3(\lambda)) $$ Following \cite{Blei17}, the optimal $q_l(\mbox{\boldmath $\theta$}_l)$ is proportional to the exponential of the expected log of the complete conditional distribution calculated in (\ref{Gibbs}): $$q^\ast_l(\mbox{\boldmath $\theta$}_l) \propto \exp\{E_{-l}[\log p(\mbox{\boldmath $\theta$}_l|\mbox{\boldmath $\theta$}_{-l},{\bf y})]\}, \;\; l = 1, 2, 3.$$ In the first step, the variational posterior for $\mbox{\boldmath $\beta$}$ and $\phi$, which maximizes the variational bound ${\cal L}(q)$ while holding $q_2(\mbox{\boldmath $\tau$}|\lambda)$ and $q_3(\lambda)$ fixed, is given by \begin{eqnarray*} \log q_1^\ast(\mbox{\boldmath $\beta$},\phi) &=& \log(p({\bf y}|\mbox{\boldmath $\beta$},\phi)) + E_{\tau}[\log(p(\mbox{\boldmath $\beta$},\phi|\mbox{\boldmath $\tau$}))] + const\\ &=& \log [N(\mbox{\boldmath $\beta$}|m_\beta, \phi^{-1} C_\beta) \times Ga(\phi|a_\phi,b_\phi)] \end{eqnarray*} It is easy to see that this is a normal-gamma distribution with parameters: \begin{eqnarray*} C_\beta^{-1} = E_{\tau} ({\bf D}_\tau^{-1}) + X^TX, \, \, \, \, \, \, &\mbox{and}& \, \, \, \, \, \, m_\beta = C_\beta X^T{\bf y}, \\ a_\phi = a_0 + n/2, \, \, \, \, \, \, &\mbox{and}& \, \, \, \, \, \, b_\phi = b_0 + \frac{1}{2} ({\bf y}^T{\bf y} - m_\beta^T C_\beta^{-1}m_\beta).
\end{eqnarray*} Next, the variational distribution of $\mbox{\boldmath $\tau$}$, which maximizes the variational bound ${\cal L}(q)$ while holding the remaining factors fixed, is given by \begin{eqnarray*} \log q_2^\ast(\tau_j) &=& E_{\lambda}[\log (p(\tau_j|\lambda))] + E_{\beta,\phi}[\log (p(\beta_j,\phi|\tau_j))] + const\\ &=& \log GIG(\tau_j|c_\tau,d_\tau,f_{\tau_j}) \end{eqnarray*} where GIG denotes the generalized inverse Gaussian distribution, with $$c_\tau = \frac{1}{2}\;;\; d_\tau = 2 E_\lambda[\lambda] \; ; \; f_{\tau_j} = E_{\beta,\phi}[\phi \beta_j^2].$$ Therefore, $$\log q_2^\ast(\mbox{\boldmath $\tau$}) = \log \prod_{j=1}^p GIG(\tau_j|c_\tau,d_\tau,f_{\tau_j}).$$ Finally, we identify the variational distribution of $\lambda$: \begin{eqnarray*} \log q_3^\ast(\lambda) &=& \log (p(\lambda)) + E_\tau[\log (p(\mbox{\boldmath $\tau$}|\lambda))] + const\\ &=& \log Ga(\lambda|g_\lambda, h_\lambda) \end{eqnarray*} which is a gamma distribution with parameters $$g_\lambda = g_0 + p \; ;\; h_\lambda = h_0 + \sum_{j=1}^p E_\tau(\tau_j).$$ The expected values involved in the definition of the above variational distributions are computed as follows (see the appendix for details and \cite{Jorgensen82}).
\begin{eqnarray} E_\tau(\tau_j) &=& \frac{\sqrt{f_{\tau_j}} K_{c_\tau+1}(\sqrt{d_\tau f_{\tau_j}})}{\sqrt{d_\tau} K_{c_\tau}(\sqrt{d_\tau f_{\tau_j}})},\label{eq:Esptau}\\ Var_\tau(\tau_j) &=& \frac{f_{\tau_j}}{d_\tau}\left[\frac{K_{c_\tau+2}(\sqrt{d_\tau f_{\tau_j}})}{K_{c_\tau}(\sqrt{d_\tau f_{\tau_j}})} - \left( \frac{K_{c_\tau+1}(\sqrt{d_\tau f_{\tau_j}})}{K_{c_\tau}(\sqrt{d_\tau f_{\tau_j}})}\right)^2 \right],\label{eq:Vartau}\\ E_\tau[{\bf D}_\tau^{-1}] &=& \mbox{diag} (E_\tau(\tau_1^{-1}), \ldots, E_\tau(\tau_p^{-1})), \;\;\mbox{where} \;\; E_\tau(\tau_j^{-1}) = \frac{\sqrt{d_\tau} K_{c_\tau+1}(\sqrt{d_\tau f_{\tau_j}})}{\sqrt{f_{\tau_j}} K_{c_\tau}(\sqrt{d_\tau f_{\tau_j}})} - \frac{2 c_\tau}{f_{\tau_j}},\nonumber\\ E_{\beta,\phi}[\phi \beta_j^2] &=& m_{\beta_j}^2 a_\phi/b_\phi + (C_\beta)_{jj},\nonumber\\ E_\lambda(\lambda) &=& \frac{g_\lambda}{h_\lambda}\nonumber \end{eqnarray} where $K_p(\cdot)$ is the modified Bessel function of the second kind. The evidence lower bound (ELBO) for this model consists of: \begin{eqnarray*} {\cal{L}} (q) &=& E_{\beta,\phi}(\log p({\bf y}|X,\mbox{\boldmath $\beta$},\phi)) + E_{\beta,\phi,\tau}(\log p(\mbox{\boldmath $\beta$},\phi|\mbox{\boldmath $\tau$})) + E_{\tau,\lambda}(\log p(\mbox{\boldmath $\tau$}|\lambda)) +\\ && + E_{\lambda}(\log p(\lambda)) - E_{\beta,\phi}(\log q_1(\mbox{\boldmath $\beta$},\phi)) - E_{\tau,\lambda}(\log q_2(\mbox{\boldmath $\tau$}|\lambda)) - E_{\lambda}(\log q_3(\lambda)) \end{eqnarray*} Each of the above terms is evaluated as a function of the variational parameters, as follows: \begin{eqnarray*} E_{\beta,\phi}(\log p({\bf y}|X,\mbox{\boldmath $\beta$},\phi)) &=& \frac{n}{2}(\psi(a_\phi) - \log b_\phi - \log 2\pi) +\\ && - \frac{1}{2} \left[\frac{a_\phi}{b_\phi} ({\bf y} - X m_\beta)^T({\bf y}-X m_\beta) + tr(X^TX C_\beta)\right]\\ E_{\beta,\phi,\tau}(\log p(\mbox{\boldmath $\beta$},\phi|\mbox{\boldmath $\tau$})) &=& \frac{p}{2} (\psi(a_\phi) - \log b_\phi - \log 2\pi)
+ (a_0 - 1) (\psi(a_\phi) - \log b_\phi) - b_0 \frac{a_\phi}{b_\phi} +\\ && - \frac{1}{2} \sum_{j=1}^p E_\tau(\log \tau_j) -\frac{1}{2} \sum_{j=1}^p \left[m_{\beta_j}^2 \frac{a_\phi}{b_\phi} + (C_\beta)_{jj}\right] E_\tau\left(\frac{1}{\tau_j}\right)\\ E_{\tau,\lambda}(\log p(\mbox{\boldmath $\tau$}|\lambda)) &=& p (\psi(g_\lambda) - \log h_\lambda) - \frac{g_\lambda}{h_\lambda} \sum_{j=1}^p E_\tau(\tau_j)\\ E_{\lambda}(\log p(\lambda)) &=& g_0 \log h_0 - \log \Gamma(g_0) + (g_0 - 1) (\psi(g_\lambda) - \log h_\lambda) - h_0 \frac{g_\lambda}{h_\lambda}\\ E_{\beta,\phi}(\log q_1(\mbox{\boldmath $\beta$},\phi)) &=& \frac{p}{2} (\psi(a_\phi) - \log b_\phi - \log 2\pi) - \frac{1}{2} \log |C_\beta| - \frac{p}{2} + a_\phi \log b_\phi - \log \Gamma(a_\phi) +\\ && + (a_\phi - 1) (\psi(a_\phi) - \log b_\phi) - a_\phi\\ E_{\tau,\lambda}(\log q_2(\mbox{\boldmath $\tau$}|\lambda)) &=& \sum_{j=1}^p \left[ \frac{c_\tau}{2} \log \frac{d_\tau}{f_{\tau_j}} - \log 2 - \log K_{c_\tau}(\sqrt{d_\tau f_{\tau_j}}) + (c_\tau -1) E_\tau(\log \tau_j) \right. +\\ && \left. - \frac{d_\tau}{2} E_\tau(\tau_j) - \frac{f_{\tau_j}}{2} E_\tau \left(\frac{1}{\tau_j}\right) \right]\\ E_{\lambda}(\log q_3(\lambda)) &=& -\log \Gamma(g_\lambda) + (g_\lambda-1) \psi(g_\lambda) + \log h_\lambda - g_\lambda\\ \end{eqnarray*} The second-order Taylor expansion of $\log \tau_j$ around $E(\tau_j)$ is used to approximate its expected value: $E(\log \tau_j) \approx \log E(\tau_j) - \frac{Var(\tau_j)}{2 E^2(\tau_j)}$, where the mean and the variance of $\tau_j$ are given in equations (\ref{eq:Esptau}) and (\ref{eq:Vartau}). Note that the variational bound depends on the quantities $m_\beta$, $C_\beta$, $b_\phi$, $d_\tau$, $f_{\tau_j}$ and $h_\lambda$. The algorithm updates these quantities at each iteration. As the ELBO is maximized, ${\cal{L}} (q)$ reaches a plateau and those quantities stabilize.
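The coordinate updates above can be assembled into a compact CAVI loop. A minimal sketch of ours is given below; since $c_\tau = 1/2$, the Bessel ratio simplifies via $K_{3/2}(x)/K_{1/2}(x) = 1 + 1/x$, which yields the closed forms $E(\tau_j) = \sqrt{f_{\tau_j}/d_\tau} + 1/d_\tau$ and $E(\tau_j^{-1}) = \sqrt{d_\tau/f_{\tau_j}}$, and a convergence check on the variational mean replaces the ELBO computation for brevity:

```python
import numpy as np

def vb_lasso(y, X, a0=0.1, b0=0.1, g0=0.1, h0=0.1, max_iter=200, tol=1e-8):
    """CAVI for the Bayesian Lasso (minimal sketch of the updates in the text)."""
    n, p = X.shape
    XtX, Xty = X.T @ X, X.T @ y
    a_phi, g_lam = a0 + n / 2.0, g0 + p          # fixed by the updates
    E_inv_tau = np.ones(p)                       # initialize E(1/tau_j)
    h_lam = h0 + p                               # arbitrary initialization
    m_old = np.zeros(p)
    for _ in range(max_iter):
        # q1(beta, phi): Normal-Gamma
        C_inv = XtX + np.diag(E_inv_tau)
        C = np.linalg.inv(C_inv)
        m = C @ Xty
        b_phi = b0 + 0.5 * (y @ y - m @ C_inv @ m)
        # q2(tau): GIG with c_tau = 1/2, closed-form moments
        d_tau = 2.0 * g_lam / h_lam
        f_tau = m**2 * a_phi / b_phi + np.diag(C)
        E_tau = np.sqrt(f_tau / d_tau) + 1.0 / d_tau
        E_inv_tau = np.sqrt(d_tau / f_tau)
        # q3(lambda): Gamma
        h_lam = h0 + E_tau.sum()
        if np.max(np.abs(m - m_old)) < tol:
            break
        m_old = m
    return m, C, a_phi, b_phi
```

On sparse simulated data the null coefficients are shrunk much more strongly than the active ones, mirroring the MCMC behaviour at a fraction of the cost.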
The algorithm consists of the following steps: \begin{algorithm} \caption{Variational Inference}\label{algo:VI} \footnotesize{ \begin{algorithmic}[0] \\\hrulefill \item\text{\textit{Step} 1. Initialize the variational hyperparameters: $m_\beta$, $C_\beta$, $a_\phi$, $b_\phi$, $g_\lambda$, $h_\lambda$, $c_\tau$, $d_\tau$, $f_{\tau_j}$.} \item\text{\textit{Step} 2.} \While{ELBO does not reach convergence} \For{$l = 1, 2, 3$} \item Compute $q^\ast_l(\mbox{\boldmath $\theta$}_l) \propto \exp\{E_{-l}[\log p(\mbox{\boldmath $\theta$}_l|\mbox{\boldmath $\theta$}_{-l},{\bf y})]\}$ \item \text{\textit{Step} 3. Update the variational hyperparameters based on the expected values.} \State \text{Calculate ELBO} \EndFor \EndWhile \\\hrulefill \end{algorithmic} } \end{algorithm} Convergence can be assessed by monitoring the changes in the ELBO over consecutive iterations, or the changes in the quantities on which it depends. We end this section by presenting the predictive distribution. Let $y^o$ and $y^p$ be the observed and the predicted vectors, respectively, and let $q_1(\mbox{\boldmath $\beta$},\phi)$, a normal-gamma distribution, be the variational approximation of the posterior $p(\mbox{\boldmath $\beta$}, \phi| y^o)$. Then, after some algebraic calculations, we have a Student's t-distribution (St) as follows: \begin{eqnarray*} p(y^p|y^o,X^p) &=& \int \int p(y^p|\mbox{\boldmath $\beta$},\phi) p(\mbox{\boldmath $\beta$}, \phi| y^o) d\mbox{\boldmath $\beta$} d\phi \approx \int \int p(y^p|\mbox{\boldmath $\beta$}, \phi) q_1(\mbox{\boldmath $\beta$},\phi) d\mbox{\boldmath $\beta$} d\phi\\ &=& St\left(y^p|X^Tm_\beta,(1+X^T C_\beta X)\frac{b_\phi}{(a_\phi-1)},2a_\phi\right). \end{eqnarray*} \subsection{Variable selection} We will discuss, from a Bayesian point of view, three alternative procedures for selecting knots (variables) in penalized regression splines (penalized linear regression). In this work, we propose a new decision criterion based on the Bayes factor.
This proposed criterion is fully described in Subsection \ref{BayesFactorDecisionCriteria}. \subsubsection{Bayes Factor decision criteria} \label{BayesFactorDecisionCriteria} In general, the selection of predictors/knots in a penalized regression/penalized regression spline can be seen as a decision problem. Consider the general case where it is necessary to decide between the models ${\cal M}_0: \theta \in \Theta_0$ and ${\cal M}_1: \theta \in \Theta_1$ based on some observations ($D$). An optimal decision will be based on the posterior probabilities $ p({\cal M}_0 | D) $ and $ p({\cal M}_1 | D) $ and also on the costs of the wrong decisions. Denote by $a$ the cost associated with the choice of model ${\cal M} _0$ when, in fact, the true model is ${\cal M} _1$, and let $b$ be the cost of choosing model ${\cal M} _1$ when the true model is ${\cal M} _0$. Therefore, if $ b \, \, p({\cal M} _0 | D) > a \, \, p({\cal M} _1 | D) $ then $ {\cal M} _0 $ should be chosen as the most plausible model for $ \theta $. By Bayes' theorem, the posterior odds are given by the product of the prior odds times the Bayes factor, $BF({\cal M}_0,{\cal M}_1) = {p(D|{\cal M}_0)}/{p(D|{\cal M}_1)}$, where $p(D|{\cal M}_i) = \int_{\Theta_i} p(D|\theta,{\cal M}_i) p(\theta|{\cal M}_i) d\theta, \;\;\; i=0, 1$. Hereafter, we will assume that the prior odds equal one. In particular, we consider two alternatives: ${\cal M} _0: \beta_j = 0$ and ${\cal M} _1: \beta_j = \delta$, where $\delta \neq 0$ is a constant to be defined. Under ${\cal M} _0$, let us assume that $\beta_j|D \sim N (0, s_j^2) $ and under ${\cal M}_1$ we have $\beta_j| D \sim N(\delta,s_j^2)$, where $s_j^2=var(\beta_j|D)$.
Hence, working with the standardized coefficient, it is straightforward to get $\log BF({\cal M}_0,{\cal M}_1) = \log\left( \exp\{-\frac{1}{2} \beta_j^2\} / \exp\{-\frac{1}{2}(\beta_j-\delta)^2\} \right)= \frac{1}{2} \delta^2 - \beta_j \delta.$ Assuming that at least moderate evidence against ${\cal M}_1$ corresponds to $BF({\cal M}_0,{\cal M}_1) \ge 3$, and that $\beta_j$ is considered significantly distant from ${\cal M}_0$ if and only if it is greater than or equal to the third quartile of the standard normal distribution, $z_{0.75} = 0.67$, then the quadratic equation $\frac{1}{2}\delta^2 - 0.67\,\delta = \log 3$ must be solved, whose positive root is $ \delta \approx 2.3 $. Thus, our proposal to select knots in spline regression is described in the following algorithm: \pagebreak \begin{algorithm} \caption{Bayes Factor decision criteria}\label{alg:BFKnotSelection} \footnotesize{ \begin{algorithmic}[0] \\ \hrulefill \item \text{\textit{Step} 1. Take $\widehat{\beta}_{j}^{\star}= m_j/s_j$, a standardized point estimate of $\beta_j$, with $m_j=E[\beta_j | D]$ and $s_j^2=var(\beta_j|D)$.} \item \text{\textit{Step} 2. Compute the Bayes factor at $\widehat{\beta}_j^\star$: $BF({\cal M}_0, {\cal M}_1) = \frac{\exp\{-\frac{1}{2} (\widehat{\beta}_j^\star)^2\}}{\exp\{-\frac{1}{2}(\widehat{\beta}_j^\star-\delta)^2\}},$ \; $\delta = 2.3$.} \item \text{\textit{Step} 3. Compute $\pi^\star = BF({\cal M}_0, {\cal M}_1) /(1+BF({\cal M}_0, {\cal M}_1) )$.} \item \text{\textit{Step} 4. Define BF evidence} \item \text{Choose two positive numbers $a$ and $b$ (with $a = 1$ and $b = 3$, corresponding to a Bayes factor equal to 3)} \If {$\pi{^\star}<\frac{a}{a+b}$} \State ${\cal M}_0$ is rejected and the $j^{th}$ predictor is kept in the model \Else{ \State ${\cal M}_0$ is accepted and the $j^{th}$ predictor is excluded from the model.} \EndIf \\\hrulefill \end{algorithmic} } \end{algorithm} \subsubsection{Other criteria} One procedure, due to \cite{Li10}, is based on a $ 50 \% $ credible interval.
That is, if the credible interval of a given coefficient contains zero, the explanatory variable associated with it must be removed from the model. A second criterion, named scaled neighborhood (\cite{Li10}), corresponds to evaluating the posterior probability of $[-s_j, s_j]$, where $s_j^2= var(\beta_j|y)$, and deciding on the exclusion of a predictor if this probability exceeds a certain threshold; for instance, \cite{Li10} suggests $ {1}/{2} $ as this limit. \section{Knots Selection in Regression Spline}\label{sec:regsplines} We start with the standard setup for nonparametric regression models. Suppose we have a collection of observations $(y_i,x_i)$, for $i=1,\ldots, n$, such that \begin{eqnarray}\label{reg_spline1} y_i = f(x_i) + \epsilon_i , \end{eqnarray} where $f(x_i) = E[y_i|x_i]$ are the values of an unknown smooth function $f$ defined on an interval $[a,b] \subset \mathbb{R} $, and the $\epsilon_i$ are uncorrelated random variables with mean zero and unknown precision $\phi$. A possible approach to estimate $f$ is to assume that the regression curve can be well approximated by a spline function. See details in \cite{dias:1999a}. That is, given a sequence of knots $\mbox{\boldmath $\kappa$} = (\kappa_1, \ldots, \kappa_K)$ with $\kappa_1 < \kappa_2 < \ldots < \kappa_{K-1} < \kappa_K $, a spline regression model can be written as: \begin{equation}\label{reg_spline2} f(x, \mbox{\boldmath $\beta$}) = \beta_0 + \sum_{j=1}^{p} \beta_j x^{j} + \sum_{k=1}^{K} \beta_{p+k} (x-\kappa_k)^p_{+} , \end{equation} where $p$ is the degree of the polynomial spline and $\mbox{\boldmath $\beta$}$ is the vector of coefficients of dimension $K+p+1$. The functions $(u)_{+} $ form the well-known truncated power basis, $(u)^p_{+} = \max(0,u^p)$.
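The truncated power basis in (\ref{reg_spline2}) is straightforward to assemble into a design matrix (a minimal sketch of ours; knots are placed at empirical quantiles here, one of the placements used later in the text):

```python
import numpy as np

def spline_design(x, knots, degree=3):
    """Design matrix [1, x, ..., x^p, (x-k_1)_+^p, ..., (x-k_K)_+^p]."""
    poly = np.vander(x, degree + 1, increasing=True)        # 1, x, ..., x^p
    trunc = np.maximum(x[:, None] - knots[None, :], 0.0) ** degree
    return np.hstack([poly, trunc])

x = np.linspace(0.0, 1.0, 50)
knots = np.quantile(x, [0.25, 0.5, 0.75])                   # K = 3 interior knots
X = spline_design(x, knots, degree=3)
print(X.shape)  # (50, 7): p + 1 = 4 polynomial columns plus K = 3 truncated columns
```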
Note that, for a fixed $K$, a vector of knots $\mbox{\boldmath $\kappa$}$ and the set of basis functions $\{1,x,x^2,\ldots, x^p, (x-\kappa_1)^p_{+}, \ldots, (x-\kappa_K)^p_{+} \}$, an estimate of $f$, say $\hat{f}$, can be obtained by estimating the vector $\mbox{\boldmath $\beta$}$: $$ \hat{f} = f(x,\widehat{\mbox{\boldmath $\beta$}}) = \hat{\beta}_0+ \sum_{j=1}^{p} \hat{\beta}_j x^{j} + \sum_{k=1}^{K} \hat{\beta}_{p+k}(x-\kappa_k)^p_{+} .$$ It is well known (\cite{dias:1998a}, \cite{dias:game:2002}, \cite{dias:garcia:2007}, \cite{koop:ston:1991}) that as $K$ increases the bias gets smaller but the variance increases substantially, causing over-fitting. On the other hand, as $K$ decreases toward zero, the variance is drastically reduced but the bias increases considerably, causing under-fitting. Thus, $K$ acts as the smoothing parameter in the regression spline fit, balancing the trade-off between over-fitting and under-fitting. A good procedure should not only provide an ideal number of knots (or basis functions) but also quantify the uncertainty of adding or removing them. Figure~\ref{fig:K-effect} shows the effect of different values of $K$ in a spline regression model. \begin{figure}[h!] \begin{center} {\includegraphics[scale=0.3]{Effect_K.pdf}} \end{center} \caption{Large values of $K$ cause over-fitting.}\label{fig:K-effect} \end{figure} There are other basis functions that can represent a regression function, such as B-splines, wavelets, radial basis functions, etc. For all of them, it is still necessary to balance under-fitting and over-fitting. Even in the case of smoothing splines, the regularization parameter needs to be obtained.
Specifically, in this work we deal with the following optimization problem: find $\widehat{\mbox{\boldmath $\beta$}}(\lambda)$, the minimizer of \begin{equation}\label{eq:plsregre} \sum_{i=1}^{n} (y_i - f(x_i,\mbox{\boldmath $\beta$}))^2 + \lambda \sum_{k=1}^{K} |\beta_{p+k}|, \end{equation} where $\lambda$ is the smoothing parameter. For large values of $\lambda$ the solution of this optimization problem tends to the purely polynomial regression fit, that is, under-fitting. Note that the penalty term involves only the coefficients associated with the knot sequence $\kappa_1 < \kappa_2 < \ldots < \kappa_K $. Consequently, selecting knots is equivalent to selecting the coefficients that contribute most to the fit. From the Bayesian point of view, this work presents a novel and scalable procedure for selecting knots. \subsection{The variational inference for knots selection} Following the idea of the Lasso procedure for variable selection, under the Bayesian point of view, the selection of knots in spline regression models can be made by assuming independent Laplace prior distributions for the coefficients associated with the knots. We write $\mbox{\boldmath $\beta$} = \left({\mbox{\boldmath $\beta$}^{(1)}}^T,{\mbox{\boldmath $\beta$}^{(2)}}^T\right)^T$, where $\mbox{\boldmath $\beta$}^{(1)} = (\beta_0, \beta_1, \ldots, \beta_p)^T$ contains the polynomial coefficients, with dimension $p+1$, and $\mbox{\boldmath $\beta$}^{(2)} = (\beta_{p+1}, \ldots, \beta_{p+K})^T$, of dimension $K$, contains the penalized coefficients.
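The split between unpenalized polynomial coefficients and penalized knot coefficients mirrors the frequentist problem (\ref{eq:plsregre}), which can be solved by coordinate descent with soft-thresholding applied only to $\mbox{\boldmath $\beta$}^{(2)}$ (a minimal sketch of ours, not the glmnet implementation used later in the simulations):

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding operator."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def penalized_spline_fit(y, X, n_unpen, lam, n_iter=500):
    """Minimize ||y - X beta||^2 + lam * sum_{j >= n_unpen} |beta_j|."""
    n, q = X.shape
    beta = np.zeros(q)
    col_ss = (X**2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(q):
            r = y - X @ beta + X[:, j] * beta[j]     # partial residual
            z = X[:, j] @ r
            if j < n_unpen:
                beta[j] = z / col_ss[j]              # polynomial part: no penalty
            else:
                beta[j] = soft(z, lam / 2.0) / col_ss[j]
    return beta
```

For very large $\lambda$ all knot coefficients are zeroed and the fit collapses to the polynomial part, illustrating the under-fitting limit discussed above.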
The hierarchical structure presented in Subsection \ref{subsec:Hierarchical_laplace} is maintained, and in this way we complete the model defined by equations (\ref{reg_spline1}) and (\ref{reg_spline2}): \begin{eqnarray}\label{prior_spline} {\mbox{\boldmath $\beta$}^{(1)}}^T &\sim& N(m_0, C_0)\nonumber\\ {\mbox{\boldmath $\beta$}^{(2)}}^T|\phi, \mbox{\boldmath $\tau$} &\sim& N(0, \phi^{-1} D_{\tau}) \nonumber\\ \tau_j|\lambda &\sim& Exp(\lambda), \;\;\; j=1, \ldots, K \nonumber \\ \phi &\sim& Ga(a_0, b_0)\nonumber \\ \lambda &\sim& Ga(g_0, h_0) \end{eqnarray} The Bayesian inference procedure in this case must be carried out with caution, since the vector of coefficients contains both the coefficients of the polynomial, which are not penalized, and the coefficients of the basis functions, to which we assign a Lasso prior for knot selection. The design matrix is then partitioned as $X = (X_1,X_2)$, with $X_1$ of dimension $n\times (p+1)$ and $X_2$ of dimension $n\times K$. The $i$-th row of the matrix $X$ is given by: $$X_i = \{ \underbrace{1,x_i,x_i^2,\ldots, x_i^p}_{X_{1i}}, \underbrace{(x_i-\kappa_1)^p_{+}, \ldots, (x_i-\kappa_K)^p_{+} }_{X_{2i}} \},$$ and $X\mbox{\boldmath $\beta$} = X_1 \mbox{\boldmath $\beta$}^{(1)} + X_2 \mbox{\boldmath $\beta$}^{(2)}.$ In this context, we have a prior distribution for the vector $\mbox{\boldmath $\theta$} = (\beta^{(1)}, \beta^{(2)}, \phi, \tau, \lambda)$ and the posterior distribution is given by: $$ p(\mbox{\boldmath $\theta$}|y) \propto p( y | X, \mbox{\boldmath $\beta$}, \phi) \,\, p(\mbox{\boldmath $\beta$}^{(1)}) \,\, p(\mbox{\boldmath $\beta$}^{(2)} | \phi, \mbox{\boldmath $\tau$}) \,\, p(\mbox{\boldmath $\tau$}|\lambda) \, \, p(\phi) \, \, p(\lambda).$$ Slight adaptations need to be made in the variational inference method, the main one being that we now have four blocks of the parameter vector, giving rise to the following variational densities: $$\log(q(\mbox{\boldmath $\theta$})) =
\log(q_1(\beta^{(2)},\phi)) + \log(q_2(\tau)) + \log(q_3(\lambda)) + \log(q_4(\beta^{(1)}))$$ where $\log q_4(\beta^{(1)}) = \log N(\beta^{(1)}|m_{\beta_1}, C_{\beta_1})$. The calculations for this and the other variational densities are similar to those developed for the regression model in Subsection \ref{subsec:VBLasso} and can be found in the appendix. \subsection{Algorithm for automatic knot selection} An alternative approach to determine the maximum number of knots $K$ would be to consider it as an unknown parameter and estimate it. However, in this work $K$ is not a parameter of direct interest, since the selection of knots is carried out through the Lasso scheme. Despite this, in order to obtain a good fit, it is important to properly define the number of knots and their positions. Section \ref{section:app} presents some exercises involving knot selection and shows the importance of the correct specification of the value of $K$. Thus, an algorithm is proposed for the automatic choice of the number of knots $K$, based both on the VB algorithm for estimating the Lasso model and on the criterion for selecting variables (knots). The algorithm starts by proposing a grid of possible values of $K$, the maximum number of knots. Naturally, the grid of values is an issue to be discussed. Note that it is not necessary to propose a grid that covers all the natural numbers, since the computational time could be excessively high. Moreover, given a maximum number of knots, the most significant knots will be selected. For instance, starting the grid with a maximum of $K=20$ knots, the Lasso and the selection criteria may retain, say, 6 among these 20 knots as the most significant ones. On the other hand, simulations show that starting with a very large maximum number of knots may cause problems in the selection, since the penalty acts more severely when the number of knots is extremely large for the size of the data set.
See the numerical simulations in Section \ref{section:app}. Therefore, our proposal is to use a grid that increases by 10 units at a time, with the knots placed at the quantiles of, or evenly spaced over, the explanatory variable domain. This spacing allows us to position knots in different locations before and after selecting the significant knots. In the simulated exercises presented in Section \ref{section:app}, the grid starts with $K=10$ knots. In summary, the ELBO is taken as the stopping criterion and the objective is to maximize it. The algorithm proceeds as follows: for the fixed grid of values, start with the lowest value; fit the model via VB together with the selection criterion and then compute the ELBO; move to the next grid value and repeat the procedure. As long as the ELBO increases with the grid values, the algorithm continues. The detailed algorithm is given below: \begin{algorithm} \caption{Maximum Number of Knots }\label{Alg:MAxNos} \footnotesize{ \begin{algorithmic} \\\hrulefill \State \text{\textit{Step} 1. $j \gets 1$. Initialize $K_j=10$.} \State \text{\textit{Step} 2. Fit model via VB algorithm.} \State \text{\textit{Step} 3. Compute ELBO.} \State \text{\textit{Step} 4. Apply BF to select the most significant knots.} \State \text{\textit{Step} 5. $K_{j+1}=K_j+10$ and repeat steps 2, 3 and 4.} \If{ELBO($K_{j+1}$) $\geq$ ELBO($K_j$)} \State \text{ Set $K_j \gets K_{j+1}$. Repeat steps 2 to 5.} \EndIf \If{ELBO($K_{j+1}$) $<$ ELBO($K_j$)} \State \text{ Stop and deliver ELBO and the most significant knots} \EndIf \\\hrulefill \end{algorithmic} } \end{algorithm} \section{Simulation Studies} \label{section:app} This section presents five exercises with artificial data: the first three are based on the Lasso for linear regression models and the last two focus on the use of the Lasso to select knots in spline regression.
The inference procedure assumes, for all exercises, the following prior distributions: $\phi \sim Ga(0.1,0.1)$, $\lambda \sim Ga(0.1,0.1)$ and $\mbox{\boldmath $\beta$}_1 \sim N(1,100)$ (in the case of regression splines). For MCMC, 15,000 iterations were necessary to achieve convergence. The first 5,000 iterations were discarded as burn-in and, from the remaining 10,000, one draw in every ten was kept to remove autocorrelation, yielding the final posterior sample. These quantities were obtained using the criterion of \cite{RafteryLewis97}, which provides the number of iterations needed to guarantee convergence of the Gibbs sampler. The VB algorithm is iterated until the changes in $m_\beta$, $C_\beta$, $b_\phi$, $d_\tau$, $f_{\tau_j}$ and $h_\lambda$ between two consecutive iterations are all below $0.01\%$. When applied, the classical procedure was implemented with the glmnet package in R, which applies 5-fold cross-validation to estimate the penalty parameter $\lambda$. \subsection{Variable Selection for Linear Regression} The goal of the exercises applied to linear regression models is twofold: first, to compare the estimation methods VB and MCMC (occasionally also including the classical Lasso in the comparison); second, to compare the CI, SN and BF selection criteria. Different sparsity scenarios, sample sizes, correlations between explanatory variables and values of the model precision are considered. Specifically, exercise 1 estimates the Lasso hyperparameters via VB and MCMC; a single data set is simulated, so the true values of all parameters and hyperparameters of the model are known. VB gives results similar to MCMC, with a computational time 14 times shorter. Exercise 2 is a simulation study with 100 replicates and a less sparse structure.
Variations in the sample size and in the correlation among the explanatory variables are considered. Again, VB and MCMC give similar results, superior to the classical Lasso. When the CI, SN and BF selection criteria are compared, BF gives the best results, with high exclusion proportions for coefficients that are zero and low exclusion proportions for coefficients that are nonzero. Exercise 3 considers scenarios with 100 replicates and greater sparsity than exercise 2, including cases where $n<p$ and different values of the model precision; the results are similar to those of exercise 2. \subsubsection{Exercise 1: MCMC vs. VB} The purpose of exercise 1 is to compare the MCMC and VB methods in terms of curve fitting and computational time. For this study we took $n = 100$, $p = 10$, and each column of the matrix $X$ was generated from a $N({\bf{0}}, I_n)$ distribution. For the parameters we set $\phi = 0.4$, $\lambda = 5$ and $\tau_j | \lambda \sim Exp(\lambda)$ for all $j$. The regression coefficients and observations were then generated from the Lasso regression model. Table \ref{table:postsumm} gives a posterior summary of the model parameters: the mean and standard deviation of the approximate posterior obtained via VB, the posterior mean and standard deviation obtained via MCMC, and the true values of the parameters. The point estimates obtained by VB are close to those obtained by MCMC and, for both methods, close to the true values, with small standard deviations. The same conclusion can be drawn from Figure \ref{fig1:MCMCvsVB}, which compares MCMC and VB graphically: the histograms represent samples from the posterior distribution obtained via MCMC and the red curves the approximate posterior densities obtained via VB.
The green dots indicate the true values of the parameters. Note that the curves approximated by VB are close to the histograms, and both are centered on the true values. The remaining parameters $\tau_j$ show similar results. \pagebreak \begin{table}[h!] \caption{Posterior summary.}\label{table:postsumm} \begin{center} {\footnotesize \begin{tabular}{c|c|c|c|c|c} \hline Parameters & Real & Mean VB & Sd VB & Mean MCMC & Sd MCMC \\ \hline $\beta_1$ & 0.463 & 0.557 & 0.132 & 0.558 & 0.139 \\ $\beta_2$ & 0.116 & -0.046 & 0.136 & -0.048 & 0.138 \\ $\beta_3$ & -1.251 & -1.316 & 0.153 & -1.315 & 0.160 \\ $\beta_4$ & 0.250 & 0.396 & 0.161 & 0.383 & 0.171 \\ $\beta_5$ & -0.319 & -0.078 & 0.137 & -0.079 & 0.142 \\ $\beta_6$ & 0.826 & 0.844 & 0.140 & 0.845 & 0.148 \\ $\beta_7$ & -0.036 & 0.091 & 0.142 & 0.096 & 0.149 \\ $\beta_8$ & 0.144 & 0.074 & 0.136 & 0.081 & 0.143 \\ $\beta_9$ & 0.064 & -0.090 & 0.131 & -0.081 & 0.134 \\ $\beta_{10}$ & -0.298 & -0.370 & 0.150 & -0.369 & 0.163 \\ $\phi$ & 0.4 & 0.473 & 0.066 & 0.473 & 0.070 \\ $\tau_1$ & 0.340 & 0.233 & 0.188 & 0.265 & 0.326 \\ $\tau_2$ & 0.088 & 0.137 & 0.159 & 0.162 & 0.250 \\ $\tau_3$ & 0.148 & 0.401 & 0.231 & 0.453 & 0.398 \\ $\tau_4$ & 0.470 & 0.201 & 0.179 & 0.234 & 0.359 \\ $\tau_5$ & 0.048 & 0.140 & 0.160 & 0.152 & 0.209 \\ $\tau_6$ & 0.162 & 0.296 & 0.205 & 0.345 & 0.333 \\ $\tau_7$ & 0.120 & 0.142 & 0.161 & 0.164 & 0.266 \\ $\tau_8$ & 0.069 & 0.139 & 0.160 & 0.173 & 0.365 \\ $\tau_9$ & 0.027 & 0.140 & 0.161 & 0.148 & 0.244 \\ $\tau_{10}$ & 0.275 & 0.194 & 0.177 & 0.212 & 0.278 \\ $\lambda$ & 5 & 4.745 & 1.493 & 5.600 & 3.776 \\ \hline \end{tabular}} \end{center} \end{table} \begin{figure}[h!]
\begin{center} \begin{tabular}{cc} {\includegraphics[scale=0.3]{grafico_MCMCvsVB_PrioriRef1.pdf}}& {\includegraphics[scale=0.3]{grafico_MCMCvsVB_PrioriRef2.pdf}}\\ {\includegraphics[scale=0.3]{grafico_MCMCvsVB_PrioriRef3.pdf}}& {\includegraphics[scale=0.3]{grafico_MCMCvsVB_PrioriRef4.pdf}}\\ \end{tabular} \end{center}\vspace{-0.5cm} \caption{Comparison of MCMC (histogram) and VB (solid line). The dot marks the true value of the parameter used to generate the data.}\label{fig1:MCMCvsVB} \end{figure} Since MCMC and VB give similar results, it is worth pointing out the main difference between the two estimation methods: computational time. For exercise 1, VB took 0.72 seconds while MCMC took 10.15 seconds. In the following exercises this gap becomes even larger, since we deal with simulations with replicates. \subsubsection{Exercise 2: High correlation} In this exercise a simulation study with 100 replicates was developed with $p = 8$, $\mbox{\boldmath $\beta$} = (3, 1.5, 0, 0, 2, 0, 0, 0)^T$, and a design matrix generated from a multivariate normal distribution with mean zero, variance 1 and two different correlation structures between $x_i$ and $x_j$: 0 and $0.7^{|i-j|}$, for all $i$ and $j$. We take $\phi = 1/9$ and three nested scenarios varying the sample size, $\{n_T,n_V\} = \{20,10\}, \{100,50\}$ and $\{200,100\}$, where $n_T$ and $n_V$ denote the sizes of the training and validation sets, respectively. This gives a total of six different scenarios. The explanatory variables are standardized to have mean 0 and variance 1. Table \ref{Cenarios_Sim2} summarizes the scenarios of exercise 2. \begin{table}[h!]
\caption{Simulation 2 with 100 replicates, $p = 8$ explanatory variables and the vector of coefficients $\mbox{\boldmath $\beta$} = (3, 1.5, 0, 0, 2, 0, 0, 0)^T$.}\label{Cenarios_Sim2} \begin{center} {\footnotesize \begin{tabular}{cccc} \hline Simulation & $n_T$ & $n_V$ & $cov(X_i,X_j)$ \\ \hline S2.1 & 20 & 10 & 0 \\ S2.2 & 100 & 50 & $-$ \\ S2.3 & 200 & 100 & $-$ \\ S2.4 & 20 & 10 & $0.7^{|i-j|}$ \\ S2.5 & 100 & 50 & $-$ \\ S2.6 & 200 & 100 & $-$ \\ \hline \end{tabular}} \end{center} \end{table} The comparison of the MCMC and VB methods is the main objective of this simulation; the frequentist Lasso is also considered, through the glmnet package in R, with 5-fold cross-validation used to select the parameter $\lambda$. In addition, the variable selection criteria described earlier are compared: credible interval (CI), scaled neighborhood (SN) and Bayes factor (BF). In order to compare the predictive power of the Lasso under the different estimation techniques (MCMC, VB and frequentist Lasso), the mean absolute error (MAE) was calculated for each replicate on the validation set: \begin{eqnarray}\label{EAM} MAE = \frac{1}{n_V} \sum_{i=1}^{n_V} |y_i^P - y_i^V|, \end{eqnarray} where $y_i^P$ are the values predicted by the fitted model after coefficient selection, $y_i^V$ are the observed values in the validation set and $n_V$ is the size of the validation set. Note that MCMC generates a sample from the predictive distribution at each iteration, so $y_i^P$ is obtained from $$p(y_i^P|{\bf{y}}) = \frac{1}{AM} \sum_{j=1}^{AM} p(y_i^P|\mbox{\boldmath $\theta$}^{(j)}),$$ where $AM$ is the number of MCMC iterations and $\mbox{\boldmath $\theta$}^{(j)}$ is the $j$-th draw of the coefficient vector. Figure \ref{fig:EAM} shows the box-plots of the mean absolute errors for each of the six proposed scenarios.
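The MAE in (\ref{EAM}) and the MCMC point prediction are straightforward to compute; below is a minimal numpy sketch (the function names are ours, not from any package):

```python
import numpy as np

def mae(y_pred, y_val):
    """Mean absolute error over the validation set, as in the MAE formula."""
    y_pred = np.asarray(y_pred, dtype=float)
    y_val = np.asarray(y_val, dtype=float)
    return float(np.mean(np.abs(y_pred - y_val)))

def mcmc_prediction(X_val, beta_draws):
    """Point prediction y^P: average of X beta^(j) over the AM posterior
    draws (rows of beta_draws), equivalent to X times the posterior mean."""
    return np.asarray(X_val) @ np.asarray(beta_draws).mean(axis=0)
```

For VB, the same \texttt{mcmc\_prediction} logic reduces to multiplying $X$ by the variational mean of the coefficients.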
As the sample size increases, the difference between the three estimation methods shrinks. When the sample is small, MCMC and VB give similar results, with a lower median MAE and less dispersion than the frequentist Lasso. Next, we detail the performance of the selection criteria for each $\beta_j$. \begin{figure}[h!] \begin{center} \begin{tabular}{ccc} {\includegraphics[scale=0.23]{EAM_S21_PS.pdf}}& {\includegraphics[scale=0.23]{EAM_S22_PS.pdf}}& {\includegraphics[scale=0.23]{EAM_S23_PS.pdf}}\\ {\includegraphics[scale=0.23]{EAM_S24_PS.pdf}}& {\includegraphics[scale=0.23]{EAM_S25_PS.pdf}}& {\includegraphics[scale=0.23]{EAM_S26_PS.pdf}}\\ \end{tabular} \end{center}\vspace{-0.5cm} \caption{Mean absolute error (MAE) using (\ref{EAM}) for the 6 scenarios of simulation 2. Estimation methods: MCMC, VB and frequentist Lasso.}\label{fig:EAM} \end{figure} Table \ref{tab:selecao} shows the frequency with which the predictor $x_j$, $j = 1, \ldots, 8$, was excluded over the 100 replicates, considering the three variable selection criteria and all six scenarios of simulation 2. We present the proportions only for VB since, so far, its results are similar to those of MCMC. For this simulation exercise, BF gives the best results in all scenarios, with a larger exclusion proportion when the true $\beta_j$ is zero and a small proportion when it is nonzero. In addition, as the sample size increases, all three criteria tend to correctly identify both the zero and the nonzero coefficients. From exercises 1 and 2 one may notice that the VB approximations are as good as the results obtained by MCMC, while the computational time required by VB is far smaller.
In addition, we saw that BF is a variable selection criterion that presents superior results when compared with CI and SN. In the following subsection we show the performance of the VB estimation method and the BF selection criterion for a more complex numerical experiment with greater sparsity. \begin{table}[h!] \caption{Comparison of the three methods on variable selection accuracy using VB for the six scenarios (the frequency of exclusions for the predictor $x_j$, $j = 1, \ldots, 8$) with $\mbox{\boldmath $\beta$} = (3, 1.5, 0, 0, 2, 0, 0, 0)^T$.}\label{tab:selecao} \begin{center} {\footnotesize \begin{tabular}{p{2.5cm}|c|cccccccc} \hline \centerline{Simulation} & Method & $\beta_1$ & $\beta_2$ & $\beta_3$ & $\beta_4$ & $\beta_5$ & $\beta_6$ & $\beta_7$ & $\beta_8$ \\ \hline\hline \multirow{3}{*}{\centerline{S2.1}} & VB + CI & 0.01 & 0.09 & 0.64 & 0.57 & 0.13 & 0.66 & 0.65 & 0.62\\ & VB + SN & 0.02 & 0.15 & 0.73 & 0.71 & 0.20 & 0.77 & 0.75 & 0.71\\ & VB + BF & 0.00 & 0.09 & 0.88 & 0.70 & 0.13 & 0.82 & 0.75 & 0.92 \\ \hline \hline \multirow{3}{*}{\centerline{S2.2}} & VB + CI & 0.00 & 0.00 & 0.47 & 0.51 & 0.00 & 0.57 & 0.67 & 0.56 \\ & VB + SN & 0.00 & 0.00 & 0.70 & 0.62 & 0.00 & 0.71 & 0.78 & 0.72 \\ & VB + BF & 0.00 & 0.00 & 0.73 & 0.78 & 0.00 & 0.73 & 0.84 & 0.72 \\ \hline\hline \multirow{3}{*}{\centerline{S2.3}} & VB + CI & 0.00 & 0.00 & 0.50 & 0.47 & 0.00 & 0.67 & 0.60 & 0.58 \\ & VB + SN & 0.00 & 0.00 & 0.71 & 0.62 & 0.00 & 0.71 & 0.73 & 0.76 \\ & VB + BF & 0.00 & 0.00 & 0.72 & 0.73 & 0.00 & 0.77 & 0.80 & 0.77 \\ \hline\hline \multirow{3}{*}{\centerline{S2.4}} & VB + CI & 0.02 & 0.09 & 0.53 & 0.53 & 0.19 & 0.71 & 0.60 & 0.64 \\ & VB + SN & 0.06 & 0.11 & 0.70 & 0.62 & 0.24 & 0.75 & 0.73 & 0.74 \\ & VB + BF & 0.02 & 0.09 & 0.65 & 0.80 & 0.18 & 0.85 & 0.81 & 0.88 \\ \hline\hline \multirow{3}{*}{\centerline{S2.5}} & VB + CI & 0.00 & 0.00 & 0.57 & 0.49 & 0.00 & 0.60 & 0.51 & 0.57 \\ & VB + SN & 0.00 & 0.00 & 0.73 & 0.68 & 0.00 & 0.75 & 0.70 & 0.75 \\ & VB + BF & 
0.00 & 0.00 & 0.78 & 0.77 & 0.00 & 0.76 & 0.80 & 0.82 \\ \hline\hline \multirow{3}{*}{\centerline{S2.6}} & VB + CI & 0.00 & 0.00 & 0.55 & 0.47 & 0.00 & 0.60 & 0.46 & 0.57 \\ & VB + SN & 0.00 & 0.00 & 0.70 & 0.66 & 0.00 & 0.77 & 0.64 & 0.75 \\ & VB + BF & 0.00 & 0.00 & 0.77 & 0.75 & 0.00 & 0.79 & 0.72 & 0.77 \\ \hline \end{tabular}} \end{center} \end{table} \subsubsection{Exercise 3: High sparsity with small $n$ and large $p$} In this exercise we consider a sparse setting with $p = 40$ and $\beta = ({\bf{0^T}},{\bf{3^T}},{\bf{0^T}},{\bf{3^T}})^T$, where ${\bf{0}}$ and ${\bf{3}}$ are vectors of dimension 10 whose entries are all 0 and all 3, respectively. The design matrix $X$ is generated from a multivariate normal distribution with mean zero, variance 1 and correlation between the columns $x_i$ and $x_j$ equal to 0.5 for all $i \neq j$. We analyze four scenarios, varying the sample size and the precision parameter $\phi$: the sample sizes are $\{n_T,n_V\} = \{20,10\}$ and $\{200,100\}$, where $n_T$ and $n_V$ are the sizes of the training and validation sets, and the precision parameter is $\phi = 1/9$ or $\phi = 1/225$. For each scenario we consider 100 replicates. Table \ref{Cenarios_Sim3} summarizes the scenarios of exercise 3; it is worth noting that in scenarios S3.1 and S3.3 we have $n<p$. \begin{table}[h!] \caption{Scenarios in Simulation 3}\label{Cenarios_Sim3} \begin{center} {\footnotesize \begin{tabular}{cccc} \hline Simulation & $n_T$ & $n_V$ & $\phi$ \\ \hline S3.1 & 20 & 10 & $1/9$ \\ S3.2 & 200 & 100 & - \\ S3.3 & 20 & 10 & $1/225$ \\ S3.4 & 200 & 100 & - \\ \hline \end{tabular}} \end{center} \end{table} As in exercise 2, the MAE was calculated for each replicate as a predictive measure. Figure \ref{fig:EAMS3} shows the box-plots of each scenario for MCMC, VB and Lasso.
MCMC and VB give similar results, superior to the Lasso when the sample size is small; as the sample grows, the results of the three approaches become similar. \begin{figure}[h!] \begin{center} \begin{tabular}{cc} {\includegraphics[scale=0.25]{EAM_S31_PS.pdf}}& {\includegraphics[scale=0.25]{EAM_S32_PS.pdf}}\\ {\includegraphics[scale=0.25]{EAM_S33_PS.pdf}}& {\includegraphics[scale=0.25]{EAM_S34_PS.pdf}}\\ \end{tabular} \end{center}\vspace{-0.5cm} \caption{Mean absolute error (MAE) obtained by using (\ref{EAM}) for the 4 scenarios in exercise 3, comparing the estimation methods MCMC, VB and Lasso.} \label{fig:EAMS3} \end{figure} Figure \ref{fig:selecaobarra} shows the proportions of exclusions (gray bars) and selections (black bars) for each of the 40 coefficients over the 100 replicates, comparing the estimation methods VB and Lasso; MCMC is omitted since its results are similar to those of VB. In the Bayesian case, the selection criterion used in all scenarios is BF. The black bars are expected to be large when the true coefficients are nonzero, and the gray bars large when the true coefficients are zero. The error proportions are thus represented by the black bars when the coefficients are zero (type I error) and by the gray bars when the coefficients are nonzero (type II error). One can see that, for $n<p$, neither VB nor Lasso performs well in selection and exclusion, with a slight advantage for VB. On the other hand, as the sample grows, VB gives good results, better than those of Lasso. Also note that when $n>p$ both VB and Lasso have the same type II error; however, for all coefficients, the type I error is considerably smaller for VB than for Lasso. \begin{figure}[h!]
\begin{center} \begin{tabular}{cc} {\includegraphics[scale=0.24]{selecao_FB_VB_S31.pdf}}& {\includegraphics[scale=0.24]{selecao_CLASSICO_S31.pdf}}\\ {\includegraphics[scale=0.24]{selecao_FB_VB_S32.pdf}}& {\includegraphics[scale=0.24]{selecao_CLASSICO_S32.pdf}}\\ {\includegraphics[scale=0.24]{selecao_FB_VB_S33.pdf}}& {\includegraphics[scale=0.24]{selecao_CLASSICO_S33.pdf}}\\ {\includegraphics[scale=0.24]{selecao_FB_VB_S34.pdf}}& {\includegraphics[scale=0.24]{selecao_CLASSICO_S34.pdf}}\\ \end{tabular} \end{center}\vspace{-0.5cm} \caption{Proportion of selected (black) and excluded (gray) coefficients for the 4 scenarios in exercise 3 with the estimation methods VB (left column) and Lasso (right column).} \label{fig:selecaobarra} \end{figure} \subsection{Knots selection} Since VB gives results similar to MCMC with considerably less computational time, only VB is used in the two exercises applied to spline regression models. The goal is to define the maximum number of knots from a grid of values and, in turn, to select the most significant knots and their positions. Exercises 4 and 5 are simulation studies of the penalized spline regression model defined in equations (\ref{reg_spline1}), (\ref{reg_spline2}) and (\ref{prior_spline}); the two studies differ in the number of bumps of the smooth function $f$: exercise 4 considers one bump and exercise 5 two bumps. In both exercises we have 100 replicates with $n = 100$, variance equal to $0.3$ ($\phi = 1/0.3$) and $x_i$ taking equally spaced values in the interval $[0,1]$. In addition, cubic splines ($p=3$) are used and the interior knots of the truncated power basis are positioned at the quantiles of the variable $x_i$. The maximum number of knots varies in the grid $K = 10, 20, 30, 40$ and $50$.
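The design matrix used in these exercises can be built in a few lines; the following numpy sketch (our own function) assumes the standard truncated power basis of degree $p$, i.e. columns $1, x, \dots, x^p$ plus $(x-\kappa_k)_+^p$ for each interior knot $\kappa_k$, with knots placed at equally spaced quantiles of $x$:

```python
import numpy as np

def truncated_power_basis(x, num_knots, degree=3):
    """Truncated power basis: columns 1, x, ..., x^p followed by
    (x - kappa_k)_+^p, with interior knots at equally spaced
    quantiles of x. Returns the design matrix and the knots."""
    x = np.asarray(x, dtype=float)
    probs = np.linspace(0, 1, num_knots + 2)[1:-1]    # interior quantile levels
    knots = np.quantile(x, probs)
    poly = np.vander(x, degree + 1, increasing=True)  # 1, x, ..., x^p
    trunc = np.maximum(x[:, None] - knots[None, :], 0.0) ** degree
    return np.hstack([poly, trunc]), knots
```

With $K$ knots and degree $p$ the matrix has $p+1+K$ columns, matching the dimension of $(\mbox{\boldmath $\beta$}_1, \mbox{\boldmath $\beta$}_2)$ in the spline model.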
The ELBO is used to indicate the maximum number of knots and, for both exercises, the optimal choice is $K=30$ knots. The BF and CI selection criteria indicate that around 8 of these 30 initial knots are most frequently selected as the significant ones. \subsubsection{Exercise 4: Single structure/One bump} In exercise 4 we use the smooth ``bump'' function $f(x) = x + 2 \exp\{-(16(x-0.5))^2\}$. Figures \ref{fig:ajuste_1bump_k10}, \ref{fig:ajuste_1bump_k30} and \ref{fig:ajuste_1bump_k50} show results for $K = 10, 30$ and $50$ knots, respectively. The first row of plots shows the data generated for one of the replicates (dots), the true curve (solid line) and the average of the fits over the 100 replicates for the three selection criteria (dashed lines). The second row shows the proportion of excluded knots, also for the three selection criteria CI, SN and BF. Observe that the knots most often selected as significant are positioned at the rise and fall of the bump. Note also that, as $K$ increases, the three selection criteria tend to penalize more strictly and hence exclude more knots. This occurs most severely for the SN criterion, whose average fit is worse when $K = 50$. The results of the CI and BF criteria are similar in all cases. The results for maximum numbers of knots $20$ and $40$ have been omitted, as they are similar to those for $K = 10$ and $K = 50$, respectively. \begin{figure}[h!]
\begin{center} \begin{tabular}{ccc} {\includegraphics[scale=0.50]{ajuste_1bump_IC_k10.pdf}}& {\includegraphics[scale=0.50]{ajuste_1bump_VE_k10.pdf}}& {\includegraphics[scale=0.50]{ajuste_1bump_FB_k10.pdf}}\\ {\includegraphics[scale=0.50]{prop_excluidos_1bump_IC_k10.pdf}}& {\includegraphics[scale=0.50]{prop_excluidos_1bump_VE_k10.pdf}}& {\includegraphics[scale=0.50]{prop_excluidos_1bump_FB_k10.pdf}}\\ \end{tabular} \end{center}\vspace{-0.5cm} \caption{Proportion of excluded knots and the average of the fittings (dashed line), K=10.}\label{fig:ajuste_1bump_k10} \end{figure} \begin{figure}[h!] \begin{center} \begin{tabular}{ccc} {\includegraphics[scale=0.50]{ajuste_1bump_IC_k30.pdf}}& {\includegraphics[scale=0.50]{ajuste_1bump_VE_k30.pdf}}& {\includegraphics[scale=0.50]{ajuste_1bump_FB_k30.pdf}}\\ {\includegraphics[scale=0.50]{prop_excluidos_1bump_IC_k30.pdf}}& {\includegraphics[scale=0.50]{prop_excluidos_1bump_VE_k30.pdf}}& {\includegraphics[scale=0.50]{prop_excluidos_1bump_FB_k30.pdf}}\\ \end{tabular} \end{center}\vspace{-0.5cm} \caption{Proportion of excluded knots and the average of the fittings (dashed line), K=30.}\label{fig:ajuste_1bump_k30} \end{figure} \begin{figure}[h!] \begin{center} \begin{tabular}{ccc} {\includegraphics[scale=0.50]{ajuste_1bump_IC_k50.pdf}}& {\includegraphics[scale=0.50]{ajuste_1bump_VE_k50.pdf}}& {\includegraphics[scale=0.50]{ajuste_1bump_FB_k50.pdf}}\\ {\includegraphics[scale=0.50]{prop_excluidos_1bump_IC_k50.pdf}}& {\includegraphics[scale=0.50]{prop_excluidos_1bump_VE_k50.pdf}}& {\includegraphics[scale=0.50]{prop_excluidos_1bump_FB_k50.pdf}}\\ \end{tabular} \end{center}\vspace{-0.5cm} \caption{Proportion of excluded knots and the average of the fittings (dashed line), K=50.}\label{fig:ajuste_1bump_k50} \end{figure} Figure \ref{fig:100ajuste_1bump} exhibits the 100 fitted models for each of the replicates considering $ K = 10 $, $ 30 $ and $ 50 $, and the BF selection criterion. 
One can see that the variability increases as $K$ increases; the same occurs for the other selection criteria. \begin{figure}[h!] \begin{center} \begin{tabular}{ccc} {\includegraphics[scale=0.50]{100ajustes_1bump_FB_k10.pdf}}& {\includegraphics[scale=0.50]{100ajustes_1bump_FB_k30.pdf}}& {\includegraphics[scale=0.50]{100ajustes_1bump_FB_k50.pdf}} \\ \end{tabular} \end{center}\vspace{-0.5cm} \caption{Fits for each of the 100 replicates, for different numbers of knots, according to the BF selection criterion.}\label{fig:100ajuste_1bump} \end{figure} Figure \ref{fig:freq_nos_1bump} shows the frequency of the number of knots selected over the 100 replicates for each selection criterion (CI, SN and BF) and each maximum number of knots ($K = 10, 30$ and $50$). Note that, when $K = 10$, the CI and BF criteria most frequently select 5 or 6 knots, whereas when $K = 30$ they most frequently select 7 knots. When the maximum number of knots is $K = 50$, the frequency of the number of selected knots is bimodal, and one can see that the penalty becomes more severe as $K$ grows. \begin{figure}[h!] \begin{center} \begin{tabular}{cc} {\includegraphics[scale=0.55]{freq_nos_selecionados_1bump_k10.pdf}}\\ {\includegraphics[scale=0.55]{freq_nos_selecionados_1bump_k30.pdf}}\\ {\includegraphics[scale=0.55]{freq_nos_selecionados_1bump_k50.pdf}}\\ \end{tabular} \end{center}\vspace{-0.5cm} \caption{Frequency of the number of selected knots in the 100 replicates.}\label{fig:freq_nos_1bump} \end{figure} The ELBO, as a model comparison measure, can be used to propose the maximum number of knots; recall that the larger the ELBO, the more the model is preferred.
Table \ref{tab:elbo_medio_1bump}, which presents the average ELBO for each value of $K$, shows that the model with $K = 30$ knots, which most frequently selects 7 of these knots as significant, is preferred. \begin{table}[h!] \caption{Average ELBO for each selection criterion and each value of $K$ (exercise 4).} \begin{center} {\footnotesize \begin{tabular}{c|c|c|c|c|c} \hline Criterion & K = 10 & K = 20 & K = 30 & K = 40 & K = 50 \\ \hline BF & -32.24 & -16.10 & {\bf{-2.55}} & -19.59 & -22.23 \\ \hline CI & -32.24 & -16.37 & {\bf{-2.55}} & -20.91 & -46.14\\ \hline SN & {\bf{-37.15}} & -39.14 & -69.16 & -244.80 & -390.80\\ \hline \end{tabular}} \end{center}\label{tab:elbo_medio_1bump} \end{table} \clearpage \subsubsection{Exercise 5: Double structure/Two bumps} Exercise 5 is analogous to exercise 4 but considers a curve with two bumps. The curve $f$ is a mixture of two normal densities, $$f(x) = 0.3 N(x|0.4,0.01) + 0.7 N(x|0.8,0.01),$$ where $N(x|a,b)$ denotes the normal density with mean $a$ and variance $b$. In this exercise we take $n=300$ and keep the other conditions of exercise 4: 100 replicates, $\phi= 1/0.3$, $x_i$ equally spaced in $[0,1]$, $p=3$, knots placed at the quantiles of $x_i$ and the maximum number of knots varying in the grid $K= 10, 20, 30, 40, 50$. We omit the graphics for $K=20$ and $K=40$, as they are similar to the others; the results are similar to those obtained in the one-bump case. Once again, Figures \ref{fig:ajuste_2bump_k10}, \ref{fig:ajuste_2bump_k30} and \ref{fig:ajuste_2bump_k50} show the average of the fits over the 100 replicates for the three selection criteria (dashed lines, first row of plots) and the proportion of excluded knots (second row of plots) for different values of $K$. In all three figures the knots most often selected as significant are positioned at the ups and downs of the bumps. As before, the higher the value of $K$, the more severe the exclusion of knots.
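The two-bump target curve above is a plain weighted sum of normal densities; a minimal numpy sketch of the data-generating step of exercise 5 (the function names are ours):

```python
import numpy as np

def norm_pdf(x, mean, var):
    """Normal density N(x | mean, var)."""
    return np.exp(-(x - mean) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

def f_two_bumps(x):
    """Mixture curve of exercise 5: 0.3 N(x|0.4, 0.01) + 0.7 N(x|0.8, 0.01)."""
    return 0.3 * norm_pdf(x, 0.4, 0.01) + 0.7 * norm_pdf(x, 0.8, 0.01)

# n = 300 equally spaced design points with Gaussian noise of variance 0.3
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 300)
y = f_two_bumps(x) + rng.normal(scale=np.sqrt(0.3), size=x.size)
```

The second bump (weight 0.7) is roughly twice as tall as the first, which is why more significant knots concentrate around $x = 0.8$.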
The SN criterion does not show good results as the maximum number of knots increases, while the performance of the CI and BF criteria is similar in all cases. \begin{figure}[h!] \begin{center} \begin{tabular}{ccc} {\includegraphics[scale=0.50]{ajuste_2bump_IC_k10.pdf}}& {\includegraphics[scale=0.50]{ajuste_2bump_VE_k10.pdf}}& {\includegraphics[scale=0.50]{ajuste_2bump_FB_k10.pdf}}\\ {\includegraphics[scale=0.50]{prop_excluidos_2bump_IC_k10.pdf}}& {\includegraphics[scale=0.50]{prop_excluidos_2bump_VE_k10.pdf}}& {\includegraphics[scale=0.50]{prop_excluidos_2bump_FB_k10.pdf}}\\ \end{tabular} \end{center}\vspace{-0.5cm} \caption{Proportion of excluded knots and the average of the fits - K=10.}\label{fig:ajuste_2bump_k10} \end{figure} \begin{figure}[h!] \begin{center} \begin{tabular}{ccc} {\includegraphics[scale=0.50]{ajuste_2bump_IC_k30.pdf}}& {\includegraphics[scale=0.50]{ajuste_2bump_VE_k30.pdf}}& {\includegraphics[scale=0.50]{ajuste_2bump_FB_k30.pdf}}\\ {\includegraphics[scale=0.50]{prop_excluidos_2bump_IC_k30.pdf}}& {\includegraphics[scale=0.50]{prop_excluidos_2bump_VE_k30.pdf}}& {\includegraphics[scale=0.50]{prop_excluidos_2bump_FB_k30.pdf}}\\ \end{tabular} \end{center}\vspace{-0.5cm} \caption{Proportion of excluded knots and the average of the fits - K=30.}\label{fig:ajuste_2bump_k30} \end{figure} \begin{figure}[h!]
\begin{center} \begin{tabular}{ccc} {\includegraphics[scale=0.50]{ajuste_2bump_IC_k50.pdf}}& {\includegraphics[scale=0.50]{ajuste_2bump_VE_k50.pdf}}& {\includegraphics[scale=0.50]{ajuste_2bump_FB_k50.pdf}}\\ {\includegraphics[scale=0.50]{prop_excluidos_2bump_IC_k50.pdf}}& {\includegraphics[scale=0.50]{prop_excluidos_2bump_VE_k50.pdf}}& {\includegraphics[scale=0.50]{prop_excluidos_2bump_FB_k50.pdf}}\\ \end{tabular} \end{center}\vspace{-0.5cm} \caption{Proportion of excluded knots and the average of the fitted models - K=50.}\label{fig:ajuste_2bump_k50} \end{figure} Figure \ref{fig:100ajuste_2bump} shows the fits of the 100 replicates according to the BF criterion for different numbers of knots. There is less variability than in the one-bump case, possibly due to the larger sample size of $n = 300$. Nonetheless, as the maximum number of knots increases, the variability between the fitted models also increases. \begin{figure}[h!] \begin{center} \begin{tabular}{ccc} {\includegraphics[scale=0.50]{100ajustes_2bump_FB_k10.pdf}}& {\includegraphics[scale=0.50]{100ajustes_2bump_FB_k30.pdf}}& {\includegraphics[scale=0.50]{100ajustes_2bump_FB_k50.pdf}}\\ \end{tabular} \end{center}\vspace{-0.5cm} \caption{Fits of the 100 replicates for different numbers of knots.}\label{fig:100ajuste_2bump} \end{figure} Our analysis shows that the SN criterion does not provide a good fit for $K = 50$, and Figure \ref{fig:freq_nos_2bump} shows that this criterion tends to underestimate the number of significant knots as $K$ increases. We therefore analyze in more detail the frequency of the number of knots selected over the 100 replicates by the CI and BF criteria, which give similar results. In Figure \ref{fig:freq_nos_2bump} we observe that, when the maximum number of knots is 10, both CI and BF most frequently indicate 7 of the 10 knots as significant.
When $K = 30$ or $K = 50$, the CI and BF criteria most frequently indicate 8 knots as significant. \begin{figure}[h!] \begin{center} \begin{tabular}{cc} {\includegraphics[scale=0.55]{freq_nos_selecionados_2bump_k10.pdf}}\\ {\includegraphics[scale=0.55]{freq_nos_selecionados_2bump_k30.pdf}}\\ {\includegraphics[scale=0.55]{freq_nos_selecionados_2bump_k50.pdf}}\\ \end{tabular} \end{center}\vspace{-0.5cm} \caption{Frequency of selected knots in 100 replicates.}\label{fig:freq_nos_2bump} \end{figure} In order to define the maximum number of knots, the average ELBO was calculated for each value of $K$ on a fixed grid. For the BF criterion, the average ELBO is $-74.36$ when $K = 10$; it increases ($-68.67$ when $K = 20$) until reaching its maximum of $-67.15$ when $K = 30$, and drops to $-70.44$ for $K = 40$. Thus, for the BF criterion, $K = 30$ is chosen as the maximum number of knots; the same holds for the CI criterion. This result coincides with that obtained in exercise 4. \clearpage \section{Applications to real data} In this section two applications with real data are analyzed: the first involves a classical data set in the nonparametric regression literature, while the second addresses a current issue related to the Covid-19 pandemic. In both cases, the penalized spline regression model with polynomials of degrees 2 and 3 is fitted to the data and the maximum number of knots varies among 10, 20 and 30. In the first example the knots are positioned equally spaced. The Lasso, estimated via variational inference (VB), is used in both applications together with the Bayes factor (BF) criterion for selecting the most significant knots, and the ELBO is the measure used to compare models. \subsection{Age and Income data} The first data set contains the income and age of 205 Canadians (\cite{Ullah85}). These data have been widely used in applications of nonparametric regression models.
See for example \cite{Ruppert02}. A logarithmic transformation was applied to income, as can be seen in the data represented by black dots in the two plots of Figure \ref{fig:ajuste_ageincome}. Table \ref{tab:elbo_ageincome} shows the ELBO for each fitted model, varying the degree of the polynomial $(p)$ and the maximum number of knots $(K)$. For both $p = 2$ and $p = 3$, the ELBO achieves its maximum at $K = 10$; comparing the two, the largest ELBO occurs for $p = 3$ and $K = 10$. The left plot of Figure \ref{fig:ajuste_ageincome} shows the fit of the penalized spline regression model (solid black line) and its $95\%$ credible interval (gray shaded area), which covers most of the observed points. At the bottom of the plot, the symbol ``x'' marks the 10 positioned knots, with the only one considered significant among them, according to the BF criterion, shown in black. For comparison, the right plot of Figure \ref{fig:ajuste_ageincome} shows the fit obtained with the R function ``smooth.spline'' (solid red line) together with the fit of the proposed model (solid black line); in this application, the two fits are similar. \begin{table}[h!] \caption{ELBO - Age and income data} \begin{center} {\footnotesize \begin{tabular}{c|c|c|c} \hline & K = 10 & K = 20 & K = 30 \\ \hline $p = 2$ & {\bf{-221.49}} & -225.45 & -237.22 \\ \hline $p =3$ & {\bf{-220.77}} & -227.03 & -223.93 \\ \hline \end{tabular}} \end{center}\label{tab:elbo_ageincome} \end{table} \begin{figure}[h!]
\begin{center} \begin{tabular}{cc} {\includegraphics[scale=0.49]{ajuste_ageincome_FB_k10_p3.pdf}}& {\includegraphics[scale=0.49]{smoothspline_ageincome_FB_k10_p3.pdf}}\\ \end{tabular} \end{center}\vspace{-0.5cm} \caption{Left: the fit of the spline regression model (solid black line) with $p = 3$ and one significant knot among $K=10$ knots for the log-income and age data (black dots), with the $95\%$ credible interval (shaded area). Right: the fit of the proposed model (solid black line) and a fit obtained with the R function "smooth.spline" (solid red line) for the log-income and age data (dots).} \label{fig:ajuste_ageincome} \end{figure} \subsection{Covid-19 data} \label{subsubsection:Covid-19_data} In order to observe the trend of daily Covid-19 cases in the USA and Brazil, the penalized spline regression model was fitted to the data (on the logarithmic scale). The United States data cover March 1, 2020 to November 30, 2020, while the Brazilian data cover March 10, 2020 to November 30, 2020. Figure \ref{fig:casos_covid} shows the number of daily Covid-19 cases in the USA (left) and Brazil (right) on the original scale. The black dots in Figures \ref{fig:ajuste_covidUS} and \ref{fig:ajuste_covidBR} show the same data on the logarithmic scale. \begin{figure}[h!] \begin{center} \begin{tabular}{cc} {\includegraphics[scale=0.49]{cases_covid_US.pdf}}& {\includegraphics[scale=0.49]{cases_covid_Brazil.pdf}} \end{tabular} \caption{Daily cases of Covid-19 in the US (left) and in Brazil (right).}\label{fig:casos_covid} \end{center} \end{figure} In this example we consider models with splines of degrees 2 and 3, with the maximum number of knots varying in steps of 10. To find the maximum number of knots, we use the ELBO measure, starting the grid at $K = 10$ knots. After obtaining the optimal $K$, the BF criterion is applied to select the most significant knots and their positions.
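The grid search just described amounts to fitting the model for each candidate $K$ and keeping the value with the largest average ELBO. A minimal sketch (ours, for illustration only; the ELBO values are those reported earlier for the BF criterion in the simulation study, and `select_max_knots` is our own helper name):

```python
# Sketch of the grid search over the maximum number of knots: fit the model
# for each K on the grid, then keep the K with the largest average ELBO.
# The values below are the averages reported in the text for the BF criterion.
avg_elbo = {10: -74.36, 20: -68.67, 30: -67.15, 40: -70.44}

def select_max_knots(elbo_by_k):
    """Return the grid value K with the largest average ELBO."""
    return max(elbo_by_k, key=elbo_by_k.get)

K_star = select_max_knots(avg_elbo)  # initial guess for the maximum number of knots
```

In a full analysis, each entry of `avg_elbo` would come from refitting the variational approximation at that value of $K$.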
The inference procedure was performed from the Bayesian point of view through variational inference, and the prior distributions remain the same as in the studies with artificial data. Table \ref{tab:elbo_covid} exhibits the ELBO for different values of $p$ and $K$, for the US and Brazilian data. Marked in bold are the cases in which ELBO achieves the highest value for $p = 2$ and $p = 3$. In the North American case, ELBO is maximized when $p = 3$ and $K = 20$; among these 20 knots, 9 were significant according to the BF criterion. For the Brazilian data, the largest ELBO occurs when $p = 2$ and $K = 10$, and only 6 of these 10 knots are significant according to the BF criterion. The following results are presented only for the models with the largest ELBO. \pagebreak \begin{table}[h!] \caption{ELBO - US and Brazil Covid-19 data.} \begin{center} {\footnotesize \begin{tabular}{c|c|c|c||c|c|c} \hline & \multicolumn{3}{c||}{US} & \multicolumn{3}{c}{Brazil} \\ \hline & $K = 10$ & $K = 20$ & $K = 30$ & $K = 10$ & $K = 20$ & $K = 30$ \\ \hline $p = 2$ & 57.25 & {\bf{108.96}} & 95.25 & {\bf{-145.32}} & -168.23 & -164.55 \\ \hline $p = 3$ & 18.89 & {\bf{116.10}} & 24.53 & -181.52 & {\bf{-179.88}} & -197.66\\ \hline \end{tabular}} \end{center}\label{tab:elbo_covid} \end{table} The plot on the left of Figure \ref{fig:ajuste_covidUS} shows the fit (solid green line) of the penalized spline regression model with a polynomial of degree 3 and 9 significant knots (out of a total of 20) for the US Covid-19 data. The shaded area represents the $95\%$ credible interval and contains most of the observed data (black dots). Note that the significant knots (black asterisks) are located so as to cover the bumps of the curve. The red "x" symbols are the excluded knots, not considered in the fit. The plot on the right compares the fit of the proposed model (solid green line) with that of the R function "smooth.spline" (solid red line).
Note that the second fit captures, in addition to the signal, the noise contained in the data. \begin{figure}[h!] \begin{center} \begin{tabular}{cc} {\includegraphics[scale=0.49]{ajuste_covidUS_FB_k20_p3.pdf}} & {\includegraphics[scale=0.49]{smoothspline_covidUS_FB_k20_p3.pdf}}\\ \end{tabular} \vspace{-0.5cm} \caption{Left: the fit of the penalized spline regression model (solid green line) with $p=3$ and 9 significant knots (black asterisks) out of $K=20$ knots (red "x") with the $95\%$ credible interval (shaded area), for the logarithm of the number of daily Covid-19 cases in the USA (black dots). Right: the fit of the proposed model (solid green line) and the fit of a model using the R function "smooth.spline" (solid red line) to the logarithm of the number of daily Covid-19 cases in the USA (dots).}\label{fig:ajuste_covidUS} \end{center} \end{figure} Figure \ref{fig:ajuste_covidBR} shows the fit (solid green line) of the proposed model with $p = 2$ and 6 significant knots (black asterisks at the bottom of the plot). The excluded knots are represented by red "x" symbols. The $95\%$ credible interval (gray shaded area) covers most of the observed data points. The plot on the right compares the fit of the proposed model with that of the R function "smooth.spline" (solid red line). As with the US data, we observed that the fit obtained with "smooth.spline" does not smooth the data as much as the proposed model and follows the random noise of the series more closely. \begin{figure}[h!]
\begin{center} \begin{tabular}{cc} {\includegraphics[scale=0.49]{ajuste_covidBR_FB_k10_p2.pdf}} & {\includegraphics[scale=0.49]{smoothspline_covidBR_FB_k10_p2.pdf}}\\ \end{tabular} \end{center}\vspace{-0.5cm} \caption{Left: the fit of the penalized regression spline model (solid green line) with $p=2$ and 6 significant knots (black asterisks) out of $K=10$ knots (red "x") with the $95\%$ credible interval (shaded area), for the logarithm of the number of daily Covid-19 cases in Brazil (black dots). Right: the curve fit of the proposed model (solid green line) and the fit using the R function "smooth.spline" (solid red line) to the logarithm of the number of daily Covid-19 cases in Brazil (dots).}\label{fig:ajuste_covidBR} \end{figure} \pagebreak \section{Conclusions} This article proposes a new scalable procedure for selecting the number of knots in regression splines: a fully automatic Bayesian Lasso through variational inference. Simulation studies have shown the effectiveness of this procedure in modeling different types of data sets. In addition, the numerical exercises show that this approach is much faster than the traditional one based on MCMC-type algorithms. In the real data sets, the procedure was able to capture the existing trend, thus providing a better understanding of the data dynamics. \paragraph{Acknowledgments.} This paper was partially supported by Fapesp Grants (RD) 2018/04654, (RD and HSM) 2019/10800-0, (RD) 2019/00787-7. \clearpage
\section{Introduction} Quantum computers can solve certain problems faster than any known classical algorithm: the best-known examples are probably Shor's algorithm \cite{Shor94a, Shor97a} for factoring integers in time polynomial in the number of digits needed to represent them, and Grover's ``search'' algorithm \cite{Grover96a}, which, for example, allows quadratic speedup (from time of order $N$ to time of order $\sqrt{N}$) of ``brute-force search'' for solutions to certain problems. The structure of these algorithms may be understood as based on ``black-box'' or ``query'' algorithms, in which we have as input a function implemented as a ``black-box'' subroutine, and we would like to determine a property of the black-box function with few calls ({\em ``queries''}) to the subroutine. For factoring, the corresponding query algorithm is one in which, given a strictly periodic function as a black-box, we must find its period\footnote{``Strict'' periodicity means that not only does $f$ take the same value when its input is shifted by the period, but it takes distinct values on distinct inputs not obtainable from each other by shifting by a multiple of the period. The situation is slightly more complicated for Shor's algorithm because the function is in fact only approximately strictly periodic, but this makes no essential difference.}; for Grover's, given a 0/1 valued function taking, say, $n$-bit strings as inputs, we must determine if the function is identically zero or not. In abstract models of such black-box computation, called ``query models'' by computer scientists, an instance of a problem is specified by a set of possible black-box functions, and a property of those functions (whose value, in some finite set, may depend on the function), which we want to compute with bounded error (say, less than some constant $\varepsilon$). The {\em query complexity} of an instance is the minimal number of queries needed to compute the property on that instance. 
The cost of computation done between queries is ignored in this abstract model. Typically we are concerned with a problem having arbitrarily large instances, and with how the query complexity of instances scales with their size--for instance, polynomially in the case of the ``order-finding'' query problem \cite{Cleve2000a} on which Shor's algorithm is based, or exponentially (but with half the classical exponent) in some versions of Grover's algorithm. In concrete algorithms such as Shor's factoring algorithm, or applications of Grover's algorithm to speeding up the search for solutions to instances of hard problems, the black box is replaced by an explicit program or circuit, usually a polynomial-time program or polynomial-size circuit, but the algorithm treats it as a black box, i.e. does not look at details of the program or circuit, but only provides inputs to it and processes outputs from it. Also, in such concrete algorithms based on black-box ones, explicit algorithms must be provided for the computation that takes place between the queries, and this, too, is typically polynomial-time in input size. If the abstract black-box complexity of a problem is polynomial, {\em and} concrete polynomial-time implementations can be found for each black box and for each inter-query computation, then the abstract black-box algorithm can be converted into a concrete polynomial-time algorithm, as in the case of factoring.
Lower bounds on black-box algorithms can imply lower bounds on the performance of concrete algorithms having such a substituted-black-box structure. For these to be interesting, however, the possibility of known, easy ways of exploiting the structure of circuits in a concrete algorithm must be built in, for example by applying a lower bound technique to a set of queries including the inverses of the basic black-box transformations, if the circuit model allows (as does the standard quantum one) the easy construction of a polynomial-size circuit for the inverse of a given polynomial-size circuit. Likewise, the ability to apply a black box or not, conditional on the value of some qubit, should also be included for similar reasons (given a quantum circuit, it is easy to concoct another circuit of essentially the same size that applies the first conditionally). (This, and the point about the inverse, was suggested to me by Daniel Gottesman \cite{Gottesman2005a} at a talk I gave on an earlier version of this paper.) In Grover's algorithm, and many other abstract query algorithms such as the ``Abelian hidden subgroup'' problem that can be abstracted from Shor's algorithm and its predecessors such as Simon's algorithm \cite{Simon97a}, the black boxes can be viewed as a set of commuting unitaries implementing ``black-box functions'' quantum-coherently. For example, they may compute a Boolean function $f: \{1,...,S\} \mapsto \{0,1\}$ of an input in $\{1,...,S\}$ supplied in an $S$-dimensional quantum register in its ``standard'' or ``computational'' orthonormal basis $\ket{i}, i \in \{1,...,S\}$, and then write the resulting value onto an output qubit by adding it modulo 2 to the value of the output qubit in its standard basis $\{\ket{0}, \ket{1}\}$; thus $O_f \ket{i}\ket{b} = \ket{i} \ket{f(i) \oplus b}$, for standard orthonormal bases of the two registers.
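As a small numerical illustration (ours, not from the paper), the standard oracle $O_f\ket{i}\ket{b}=\ket{i}\ket{f(i)\oplus b}$ can be written out as a permutation matrix in numpy, and one can check directly that it is an involution and that oracles for different functions commute:

```python
import numpy as np

def standard_oracle(f_values):
    """Permutation matrix for O_f |i>|b> = |i>|f(i) xor b>, with an S-dimensional
    input register and a single output qubit; basis state |i>|b> has index 2*i + b."""
    S = len(f_values)
    O = np.zeros((2 * S, 2 * S))
    for i, fi in enumerate(f_values):
        for b in (0, 1):
            O[2 * i + (fi ^ b), 2 * i + b] = 1.0
    return O

Of = standard_oracle([0, 1, 1, 0])
Og = standard_oracle([1, 0, 1, 0])

# O_f is unitary (indeed an involution), and oracles for different f commute.
assert np.allclose(Of @ Of, np.eye(8))
assert np.allclose(Of @ Og, Og @ Of)
```

The commutation holds because each $O_f$ acts on $\ket{i}\ket{b}$ only by XOR-ing the output qubit, so composing two oracles XORs $f(i)\oplus g(i)$ regardless of order.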
For all the various possible such $f$, these ``black-box'' unitaries $O_f$ commute with each other, being simultaneously diagonalized by the product of the standard basis on the input register with the Hadamard basis $\{\ket{+},\ket{-}\}$ on the output qubit. Obviously, one can do something similar for a larger finite set of outputs. Other models for quantum queries to classical functions, such as ``phase queries,'' $O_f: \ket{i} \mapsto (-1)^{f(i)}\ket{i}$, equivalent up to a constant factor in the number of queries to the above straightforward quantum-coherent reversible computation of $f$ when conditioning and the adjoint unitaries are included, are sometimes used, and there too all the unitaries commute. In this paper, however, we analyze the case where queries involve a not-necessarily commuting set of black-box unitaries. This latter setting is relevant, for example, to algorithms intended to extract information about quantum physical systems, an area of intensive research. Although not all unitaries (e.g. on $n$ qubits, a $2^n$-dimensional quantum system) can be represented with polynomially many (in $n$) quantum gates, the ones that can are still of great interest. Many interesting questions about unitaries are superpolynomially hard (assuming P $\neq$ NP) even when confined to such polynomially representable unitaries. Thus, just as in the case of ``quantum-coherent classical'' queries, there is the possibility that abstract query algorithms for determining properties of noncommuting quantum black boxes may lead to efficient and important concrete algorithms. Note, for example, that unitary evolutions induced by ``local'' Hamiltonians on a lattice can be well approximated by polynomially many gates \cite{Lloyd96b}. To extract certain information (e.g. about the spectrum) directly from the unitaries themselves involves manipulating $2^n \times 2^n$ matrices. One could imagine using the short classical description of the small quantum circuit directly (i.e.
in some way other than running the circuit, thereby going beyond the black-box model) to do the computation more quickly, even classically, but it is not clear that this will be possible, and for certain problems it is not possible in polynomial time unless P = NP. However, there is the tantalizing possibility that at least some information may be gotten more efficiently than classically by treating the unitary as a ``black box'' in a quantum computation (legitimate in terms of actual computation time when it has a poly-size quantum circuit). Important candidate examples where the quantum algorithm is better than known classical ones include \cite{Knill1998a}, \cite{Poulin2003a}, \cite{Emerson2004a}, \cite{Poulin2004a}. An important part of the study of quantum computation has been the investigation of lower bounds on the quantum query complexity of various problems. Although lower bounds in query settings do not logically imply lower bounds of the same functional form for concrete versions of corresponding problems, because of the possibility of ``looking inside the black box'' in a concrete situation, many computer scientists view them as a good guide in many situations. For example, the lower bounds on Grover's problem \cite{Bennett97b} matching the $\sqrt{N}$ performance of Grover's algorithm are widely taken as fairly reasonable grounds to expect that quantum computers will not perform NP-hard computations in polynomial time, although they are only part of the story: a crucial part of the question is whether one believes quantum circuits encoding classical computations may have some structure that quantum algorithms can exploit better than classical computations are generally thought able to exploit classical circuit structure. In this paper, we provide a new formulation of the quantum query computation model with unitary black-box queries.
It closely parallels the formulation for quantum-coherent classical queries in \cite{BSS2003}; all of the results in this paper have counterparts there, and many of the ideas used in their proofs are related (indeed some parts are essentially identical) as well. As for quantum-coherent classical queries, our formulation takes the form of a theorem showing that a query algorithm for a problem instance exists if, and only if, a feasible solution to a certain set of semidefinite programming (SDP) constraints exists. This formulation contains, we think, the mathematical ``essence'' of quantum query complexity: much information concerning details of the unitaries implementing the between-query evolution in the standard picture, but irrelevant to the algorithm's query complexity, is not present in our picture. The formulation allows us to derive space bounds for unitary query computations. It also allows us to exploit the ``revolution'' of the last 15 years or so in conic, especially semidefinite, programming, which has led to polynomial-time methods for solving these optimization problems, to obtain a polynomial algorithm for estimating the quantum query complexity of a problem instance. \iffalse And it holds the promise, though the details will be given elsewhere, of providing a unified method for obtaining lower bound techniques for such query algorithms. \fi \section{Mathematical and notational preliminaries} In the next section, we will formalize two equivalent notions of quantum query algorithm, and use them to formally define quantum query complexity. First, however, we record some mathematical conventions, terminology, and facts we will use. We often define a set $S$ as the set of all things referred to by some expression $Expr(x)$ containing a variable, as the variable ranges over another set, say $T$; we write this as: $S := \{Expr(x)\}_{x \in T}$.
The pure states of quantum systems, which are vectors in a complex inner product space of finite dimension $d$ (we'll sometimes refer to it as a Hilbert space), will often be identified, usually without comment, with the isomorphic linear space of $d \times 1$ matrices (``column vectors'') over ${\mathbb{C}}$, where the matrix is identified with the matrix elements of the state in some special basis. This special basis will be the basis used to define operators on that space. Thus the space of operators on a quantum space will also usually be implicitly identified with a space of matrices, and states, both pure and mixed, on tensor products of quantum spaces will also be identified with spaces of matrices, whose entries are interpreted as the operators' matrix elements in the product of the standard bases for the individual spaces. Dirac notation will sometimes, but not exclusively, be used for vectors, or for projection operators, especially when these represent, or directly correspond to, quantum states of a query computer. We write $M(d)$ for the space of $d \times d$ complex Hermitian matrices. The notion of ``purification'' of a mixed state $\rho^{H_1}$ (operator on a Hilbert space $H_1$) will also be used. This is a ``pure'' state $\ket{\Psi^{H_1H_2}} \in H_1 \otimes H_2$ such that ${\rm tr}\; _{H_2} \proj{\Psi^{H_1 H_2}} = \rho^{H_1}$. We write $d_i$ for the dimension of $H_i$, $i \in \{1,2\}$. It is a well-known fact that {\em any} finite-dimensional positive semidefinite matrix $\rho$ on $H_1$ has a purification in $H_1 \otimes H_2$ as long as $d_2$ is at least $rank(\rho)$. A sometimes useful way of thinking about states on tensor products of spaces, and the partial trace, is in a block matrix picture: identifying the space of operators on $H_1 \otimes H_2$ with the space of $d_1 \times d_1$ block matrices with blocks in $M(d_2)$ (viewed as arrays of matrix elements of the operator in the tensor product of standard bases for $H_1, H_2$).
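In this block-matrix picture the two partial traces reduce to simple block operations; a small numpy illustration (ours, not part of the paper's formal development):

```python
import numpy as np

def partial_traces(M, d1, d2):
    """View M, an operator on H1 (x) H2, as a d1 x d1 array of d2 x d2 blocks.
    The partial trace over H1 is the sum of the diagonal blocks;
    the partial trace over H2 is the matrix of traces of the blocks."""
    B = M.reshape(d1, d2, d1, d2)
    tr_H1 = np.trace(B, axis1=0, axis2=2)  # sum of diagonal blocks: d2 x d2
    tr_H2 = np.trace(B, axis1=1, axis2=3)  # block traces: d1 x d1
    return tr_H1, tr_H2

# Sanity check on a product operator A (x) B:
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[2.0, 1.0], [1.0, 3.0]])
t1, t2 = partial_traces(np.kron(A, B), 2, 2)
assert np.allclose(t1, np.trace(A) * B)
assert np.allclose(t2, np.trace(B) * A)
```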
Then the partial trace over $H_1$ of a matrix $M$ is the sum of its diagonal blocks, whereas its partial trace over $H_2$ is the matrix of traces of its blocks. Parenthetical superscripts, like $G^{(X,Y)}$, indicate blocks of a block matrix. Superscripts are used (as we have just done) to denote which system an operator acts on, or which a vector belongs to (in the latter case they occur within the ket or bra notation), and subscripts to index vectors or matrices belonging to an indexed set of such objects. ``Functional'' notation like $\ket{\Psi(t)}$, $\rho(t)$ also indicates dependence on an index, but its use will be confined to quantum states and variables directly related to quantum states, such as the variables in the ``primal'' semidefinite programs we define below, which correspond closely to quantum algorithms. The reason for this is that occasionally we want a quantum state to depend on which black box has been supplied to a quantum algorithm as ``input'', and we reserve subscripts, as in $\ket{\Psi_X}$, to indicate this dependence on an input $X$. We often write, for example, the $X,Y$ matrix element of $G$ as $G[X,Y]$; when an object is a quantum state, Dirac notation such as $\bra{X} G \ket{Y}$ may be used as well. Because of the other uses to which we put subscripts, they are never used to indicate matrix elements. We use the notation $|S|$ to indicate the cardinality of a set $S$. When a Hilbert space is defined in terms of a distinguished orthonormal basis indexed by a set $S$ (i.e., defined as the free complex inner product space over the set $S$), we may also use $S$ to refer to the Hilbert space itself. Quite generally, we also write $|S|$ for the dimension of a Hilbert space $S$; there it does {\em not}, of course, refer to its cardinality. \section{Formulation of quantum query algorithms and complexity} We will use both a ``black-box'' and an equivalent ``explicit input'' model of quantum query complexity.
In the black-box model, a problem is given by specifying a set $S$ of ``black-box'' unitary operators, a finite set $T$, and a function $g: S \rightarrow T$. The problem is to design an algorithm that, for all $X \in S$, computes $g(X)$, exactly or with zero or bounded error. We will mostly be interested in the bounded-error case. The computer state will be written as a superposition of basis vectors $\ket{i}\ket{w} \in Q \otimes W$, where the first, $n$-dimensional, register $Q$ is the ``query register'', on which the unitary $X \in S$ acts, and the second register, $W$, is workspace. For what follows, we will let $S$ be a finite set of unitaries in order to avoid having ``matrices'' indexed by infinite sets, or operators on infinite-dimensional spaces, though we expect generalizations to infinite sets of unitaries to be straightforward. \begin{definition} A {\em finitary query problem instance} in the unitary-queries model ({\em problem} for short) is an integer $n$, a finite set $S$ of $n \times n$ unitaries, a finite set $T$, and a function $g: S \rightarrow T$. \end{definition} \begin{definition} A $q$-query quantum algorithm (QQA) for a problem $P = (n, S, T, g)$ is an integer $|W|$ (the ``workspace dimension''), a sequence $U_0, U_1, \ldots, U_{q}$ of $n|W| \times n|W|$ unitary matrices (the ``inter-query unitaries''), and an indexed set of $|T|$ projectors $\{P_z\}_{z \in T}$, which are $n|W| \times n|W|$ matrices. \end{definition} On a black-box unitary input $X$, such an algorithm runs as follows. We consider a computer whose Hilbert space is $Q \otimes W$, the tensor product of an $n$-dimensional ``query register'' $Q$ which has a distinguished orthonormal basis indexed by $0, \ldots, n-1$, and a $|W|$-dimensional ``workspace'' $W$ with a distinguished basis $0, \ldots, |W|-1$.
We define an action of the unitary matrices $U_i$ on this space by interpreting them as the matrices of unitary operators on $Q \otimes W$ in an ordered basis $\ket{i}\ket{j}$ (with the fast-running index corresponding to $Q$). We start with the computer state $\ket{0}\ket{0}$, and alternate the unitaries $U_i$ with the fixed query unitary $X$ (which acts only on the register $Q$, i.e. we apply $X \otimes I$ to the computer). Thus at time $t$ (immediately after the $t$-th query and the subsequent unitary) the state of the computer when $X$ is input is: $\ket{\Phi_X(t)} = U_t (X \otimes I) U_{t-1} (X \otimes I) \cdots U_1 (X \otimes I) U_0\ket{0}\ket{0}$. After $q$ queries, the projectors $\{P_z\}_z$ are measured, obtaining an outcome $z \in T$, interpreted as the value of $g(X)$, with probability \begin{equation} p(z) = \dmelement{\Phi_X(q)}{P_z}\;. \end{equation} The special case of computing a Boolean function $f: \{0,1\}^m \mapsto \{0,1\}$ using phase queries corresponds to the commuting set of unitaries $S = \{U_x\}_{x \in \{0,1\}^m}$, defined by $\melement{i}{U_x}{j} = \delta_{ij} (-1)^{x_i}$. \begin{definition} \label{def: QQA computes g} We say an algorithm $A = (|W|, \{U_0, \ldots, U_{q}\}, \{P_z\}_{z \in T})$ solves the problem $P := (n, S, T, g)$ with bounded error $\varepsilon$ (or for short, $\varepsilon$-computes $g$), iff with $\ket{\Phi_X(q)}$ defined as above, for all $z \in T$ and $X \in S$ such that $g(X)=z$, \begin{equation} \label{QQA output condition} \dmelement{\Phi_X(q)}{P_z} \ge 1-\varepsilon\;. \end{equation} \end{definition} We sometimes call such a QQA a ``$(q,\varepsilon)$-QQA for $g$''. \begin{definition} \label{def: quantum query complexity} The {\em quantum query complexity} $QQC_\varepsilon(g)$ of a function $g$ is the least integer $q$ such that there exists a $(q,\varepsilon)$-QQA for $g$.
\end{definition} At times it will be useful to consider an ``extended computer'' whose Hilbert space is $I \otimes Q \otimes W$, with $Q$, $W$ as before and $I$ an $|S|$-dimensional ``input'' register with a distinguished orthonormal basis $\{\ket{X}\}_{X \in S}$. With such a construction, we can give an extended ``explicit input'' version of quantum query algorithms. The matrices $X$ acting on $Q$ are replaced by a single unitary matrix $\Omega$ acting on $I \otimes Q$ by ``reading the input out of I in the standard basis'' and, conditional on reading input $X$, doing the unitary $X$ on the register $Q$. That is, in the tensor product basis $\ket{X}\ket{i}$, $\Omega$ acts via: \begin{equation} \Omega \ket{X}\ket{i} = \ket{X} X\ket{i}\;. \end{equation} Thus the matrix $\Omega$ written in this basis, with $Q$'s basis the fast-running index, is block-diagonal, with the unitaries $X$ as the diagonal blocks. We can view the first $t$ steps of an algorithm as acting on such a computer (starting in an initial state $\ket{\Psi^{IQW}}$) to produce a state $\ket{\Psi^{IQW}(t)}$ defined as follows: \begin{definition} \label{def: state of extended computer at t} \begin{equation} \label{eq: state of extended computer at t} \ket{\Psi^{IQW}(t)} := (I^I \otimes U_t^{QW})(\Omega^{IQ} \otimes I^W) (I^I \otimes U^{QW}_{t-1}) (\Omega^{IQ} \otimes I^W) \cdots (I^I \otimes U^{QW}_1) (\Omega^{IQ} \otimes I^W) (I^I \otimes U^{QW}_0) \ket{\Psi^{IQW}}\;. \end{equation} \end{definition} Here we have introduced superscripts on unitaries to indicate which systems they act on, and superscripts inside kets to indicate the systems they belong to. These are not always used, however; sometimes we let the context make it clear what an operator acts on. 
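As a concrete toy illustration of both pictures (ours, not from the paper; the register-ordering conventions in the sketch are our own, and are immaterial here since the workspace is trivial), take $S=\{I,Z\}$, one query, no workspace, and $U_0=U_1=H$: the algorithm distinguishes the two black boxes with certainty, and the block-diagonal $\Omega$ reproduces the input-conditioned application of $X$:

```python
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
Z = np.diag([1.0, -1.0])
I2 = np.eye(2)

def run_qqa(X, unitaries, projectors, n, w):
    """|Phi_X(q)> = U_q (X (x) I) ... U_1 (X (x) I) U_0 |0>|0>, then p(z) = <Phi|P_z|Phi>.
    (Q is taken as the slow tensor factor here; with w = 1 the ordering is immaterial.)"""
    psi = np.zeros(n * w, dtype=complex)
    psi[0] = 1.0
    Xext = np.kron(X, np.eye(w))
    psi = unitaries[0] @ psi
    for U in unitaries[1:]:
        psi = U @ (Xext @ psi)
    return np.real(np.array([psi.conj() @ (P @ psi) for P in projectors]))

projectors = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
p_I = run_qqa(I2, [H, H], projectors, n=2, w=1)  # outcome 0 with certainty
p_Z = run_qqa(Z, [H, H], projectors, n=2, w=1)   # outcome 1 with certainty

# Explicit-input picture: Omega |X>|i> = |X> X|i> is block diagonal.
def input_conditioned_query(unitaries_in_S):
    blocks = list(unitaries_in_S)
    n = blocks[0].shape[0]
    Omega = np.zeros((len(blocks) * n, len(blocks) * n), dtype=complex)
    for k, U in enumerate(blocks):
        Omega[k * n:(k + 1) * n, k * n:(k + 1) * n] = U
    return Omega

Omega = input_conditioned_query([I2, Z])
eX = np.array([0.0, 1.0])             # |X> for X = Z
v = np.array([1.0, 1.0]) / np.sqrt(2)
assert np.allclose(Omega @ np.kron(eX, v), np.kron(eX, Z @ v))
```

This is just the one-query phase-oracle instance in miniature: $HZH\ket{0}=\ket{1}$ while $HIH\ket{0}=\ket{0}$, so the measurement outcome identifies the black box exactly.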
Notice that the queries $\Omega$ do not touch the workspace (and only touch the input register to read it in the standard basis), while the inter-query unitaries may arbitrarily entangle $Q$ and $W$, but do not touch the ``notional'' input register $I$. As we will see in the proof of the main theorem, some of the variables in the semidefinite program we will now define (and which appears in our first main theorem characterizing query complexity) can be interpreted as the density matrices of the subsystems $I \otimes Q$ or $I$ of such an extended query computer whose query and work registers are started in $\ket{0}\ket{0}$, and whose input register is started in an unnormalized equal superposition of inputs (so that $\ket{\Psi^{IQW}} = \sum_{X \in S} \ket{X}\ket{0}\ket{0}$). \section{SDP characterization of quantum query complexity: primal formulation} \begin{definition}[Semidefinite program $P(g,q,\varepsilon)$] \label{def: primal} By $P(g,q, \varepsilon)$ we mean the following semidefinite programming feasibility problem: Find $|S|n \times |S|n$ positive semidefinite Hermitian matrices $\rho^{IQ}(t), ~t \in \{0,\ldots,q-1\}$, an $|S| \times |S|$ PSD Hermitian matrix $\rho^I(q)$, and $|S| \times |S|$ PSD matrices $\Gamma_z, {\rm ~for~all~} z \in T$, satisfying the constraints: \begin{eqnarray} \label{initial unitary constraint} {\rm tr}\; _Q \rho^{IQ}(0) = E \\ {\rm tr}\; _Q \rho^{IQ}(t) = {\rm tr}\; _Q \Omega \rho^{IQ}(t-1) \Omega^\dagger \label{first computation constraints} \end{eqnarray} for $t \in \{1,\ldots, q-1\}$ (where $E$ is the constant all-ones matrix), \begin{eqnarray} \label{final computation constraint} \rho^I(q) = {\rm tr}\; _Q \Omega \rho^{IQ}(q-1) \Omega^\dagger\;, \\ \label{output1}\sum_{z \in T} \Gamma_z &=& \rho^{I}(q) \\ \label{output2}\Delta_z*\Gamma_z & \succeq & (1-\varepsilon)\Delta_z, \end{eqnarray} where the constant diagonal matrix $\Delta_z$ is defined by $\Delta_z[X,X]=1$ if $g(X)=z$, else $0$.
$*$ denotes the elementwise (aka Schur or Hadamard) product of matrices. \end{definition} Using this, we state the following theorem, which is the first main result of the paper: \begin{theorem} \label{theorem: main characterization theorem} A $q$-query, $\varepsilon$-error quantum algorithm to compute $g: S \rightarrow T$ exists if and only if a feasible solution to $P(g, q, \varepsilon)$ does. Furthermore, for each particular feasible solution $\left\langle \{\rho^{IQ}(t)\}_t, \rho^I(q), \{\Gamma_z\}_{z \in T}\right\rangle$ there is a $(q, \varepsilon)$-QQA that computes $g$, for which the dimension $r$ of the working memory is no larger than the greater of $|S|n$ and $\ceil{\sum_{z \in T} rank(\Gamma_z)/n}$. Since the latter is no greater than $\ceil{|S||T|/n}$, it follows that any $(q,\varepsilon)$-QQA computing $g$ may be implemented with workspace dimension no greater than $\max\{|S|n, \ceil{|S||T|/n}\}$ in addition to the $n$-dimensional query register. \end{theorem} In terms of qubits, then, the algorithm needs no more than $\max\left\{ \ceil{\log{|S|}} + \ceil{\log{n}},\; \ceil{\log{|S|}} + \ceil{\log{|T|}} - \floor{\log{n}}\right\}$ qubits of workspace in addition to the $\ceil{\log{n}}$-qubit query register. \noindent {\bf Proof:} We prove first the implication from the existence of a $(q,\varepsilon)$-QQA solving the problem to the existence of a feasible solution to $P(g,q, \varepsilon)$, establishing it by constructing the latter from the former. We do this by defining matrices $\rho^{IQ}(t)$, $\Gamma_z$ in terms of the objects of the QQA, and showing that they satisfy the constraints (\ref{initial unitary constraint}--\ref{output2}) on the variables of the same names in the definition of $P(g,q, \varepsilon)$.
We begin by showing that in order to tell whether an algorithm will succeed in $\varepsilon$-computing the function $g$ no matter what the input, {\em all} we need to know is whether the geometry (the inner products) of the final computer states $\ket{\Phi^{QW}_X(q)}$ allows these states to, roughly (i.e. up to $\varepsilon$), lie in a set of orthogonal subspaces such that the vectors $\ket{\Phi^{QW}_X(q)}$ in each subspace share the same value of $g(X)$. (They may have to be isometrically embedded in a larger space to do this.) Formally, this gives an SDP which we now construct. We array the inner products in a matrix $G(q)$ defined as follows. \begin{definition} \label{def: gram matrix M} $G(q)[X,Y] := \inner{\Phi^{QW}_X(q)}{\Phi^{QW}_Y(q)}$. \end{definition} For later use, similar matrices $G(t)$ may be defined for all $t$ between $0$ and $q$ inclusive, using the conditional computer states $\ket{\Phi^{QW}_X(t)}$ after the $t$-th query and post-query unitary. The $t=0$ case, before any query, is of course the all-ones matrix. Because these are matrices of inner products (sometimes called ``Gram matrices''), they are necessarily positive semidefinite. The condition that the geometry of the final inner products is correct may be stated as a semidefinite programming feasibility problem with a constraint involving $G(q)$: \begin{definition}[SDP $O(g, \varepsilon, M)$] \label{def: output SDP} For a problem $g$, a real number $\varepsilon$ between zero and one, and an $|S| \times |S|$ positive semidefinite matrix $M$, the program $O(g, \varepsilon, M)$ is the following: Find $|S| \times |S|$ PSD matrices $\{\Gamma_z\}_{z \in T}$ such that \begin{eqnarray} \sum_{z \in T} \Gamma_z & = & M \label{O1}\\ \Delta_z * \Gamma_z & \succeq & (1-\varepsilon)\Delta_z.\label{O2} \end{eqnarray} \end{definition} The proof of the following lemma essentially repeats part of the proof of the main theorem in \cite{BSS2003}.
\begin{lemma} \label{lemma: QQA output implies feasibility of output SDP} The SDP $O(g, \varepsilon, G(q))$, where $G(q)$ is defined as in Definition \ref{def: gram matrix M} above to be the final-state inner-products matrix of a QQA for $g$, is feasible if the QQA $\varepsilon$-computes $g$. \end{lemma} \noindent {\bf Proof of lemma:} The feasible solution is obtained by defining $\Gamma_z$ as the matrices with components: \begin{equation} \label{def: feasible gammaz} \Gamma_z[X,Y] := \bra{\Phi^{QW}_X(q)} P_z \ket{\Phi^{QW}_Y(q)}\;. \end{equation} Satisfaction of the constraint (\ref{O1}) follows because $\sum_z P_z = I$, while (\ref{O2}) is guaranteed by Eq. (\ref{QQA output condition}) in Definition \ref{def: QQA computes g}. \hspace*{\fill}\mbox{\rule[0pt]{1.5ex}{1.5ex}} The definition of $\Gamma_z$ just given is also the one we will use to show feasibility of $P(g, q, \varepsilon)$. Lemma \ref{lemma: QQA output implies feasibility of output SDP} has a suitable converse (see below). Thus to decide, from the final inner-products matrix $G(q)$, whether the value of $g$ has been $\varepsilon$-computed or not, is a question of semidefinite program feasibility. However, essentially because the action of the queries is not linear on the matrices $G(t)$ that we defined based on the QQA (the inner-product matrices of the input-conditioned states after query $t$), we cannot formulate linear constraints on variables corresponding to $G(t)$ that enforce the condition that the final inner-products matrix must arise from the initial one via queries and pre- and post-query unitaries. We need different, though related, quantities to formulate that condition as a linear constraint. These quantities are most easily and intuitively described by going to the ``explicit inputs'' formulation described above, with overall state space $IQW$ including the ``virtual input register'' $I$ started in an unnormalized uniform superposition of inputs.
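As a small numerical sanity check on the structure just described, the following Python/numpy sketch (random toy states and a random projective measurement standing in for an actual algorithm's final states; none of the data is derived from any particular QQA) builds the matrices $\Gamma_z[X,Y] = \bra{\Phi_X}P_z\ket{\Phi_Y}$ and verifies that they are PSD and sum to the Gram matrix, i.e.\ constraint (\ref{O1}).

```python
import numpy as np

rng = np.random.default_rng(0)
d, nS = 6, 4            # workspace dimension and number of inputs |S| (toy values)
# random "final states" |Phi_X>, one column per input X
Phi = rng.normal(size=(d, nS)) + 1j * rng.normal(size=(d, nS))
Phi /= np.linalg.norm(Phi, axis=0)
# a projective measurement with |T| = 2 outcomes: partition the basis
P = [np.diag([1.0] * 3 + [0.0] * 3), np.diag([0.0] * 3 + [1.0] * 3)]
# Gamma_z[X,Y] = <Phi_X| P_z |Phi_Y>
Gamma = [Phi.conj().T @ Pz @ Phi for Pz in P]
G = Phi.conj().T @ Phi  # Gram matrix of the final states
assert np.allclose(sum(Gamma), G)          # constraint (O1): the Gamma_z sum to G
for Gz in Gamma:                           # each Gamma_z is PSD (a Gram-type matrix)
    assert np.min(np.linalg.eigvalsh(Gz)) > -1e-12
```

The PSD check uses that $\Gamma_z = \Phi^\dagger P_z \Phi$ with $P_z$ PSD, which is the same structural fact the lemma exploits.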
It is easily seen, using the definitions of $\Omega$, $\ket{\Phi^{QW}_X(t)}$, and $\ket{\Psi^{IQW}(t)}$, that \begin{equation} \ket{\Psi^{IQW}(t)} = \sum_{X \in S}\ket{X^I} \ket{\Phi^{QW}_X(t)}\;. \end{equation} We define $\rho^{IQW}(t) := \proj{\Psi^{IQW}(t)}$, and density matrices such as $\rho^{IQ}(t):= {\rm tr}\; _W \rho^{IQW}(t)$, etc. It is then easily seen by direct calculation that \begin{eqnarray} \label{eq: final-states gram matrix is input register density matrix} \bra{X}\rho^{I}(t)\ket{Y} = \inner{\Phi^{QW}_X(t)}{\Phi^{QW}_Y(t)}\;, \end{eqnarray} and consequently that the matrix of $\rho^I(t)$ in the standard basis that labels inputs, is just the Gram matrix $G(t)$ of Definition \ref{def: gram matrix M}. We will generally identify operators with their matrices in the standard tensor product basis for $IQW$, and hence if the QQA $\varepsilon$-computes $g$, the program $O(g, \varepsilon, \rho^I(q))$ with $\rho^I(q) := {\rm tr}\; _Q[\rho^{IQ}(q)]$ in place of $M$, is feasible. Moreover, the quantities $\rho^{IQ}(t)$ are exactly those necessary to formulate the computational constraints linearly, as we now show. Since in our analysis we will at times consider separately the effects of the query and of the post-query unitary, we also define $\ket{\Phi^{QW}_X(t+)} := (X^Q \otimes I^W) \ket{\Phi^{QW}_X(t)}$, $\ket{\Psi^{IQW}(t+)} := (\Omega^{IQ} \otimes I^W )\ket{\Psi^{IQW}(t)}$, and $\rho^{IQ}(t+)$ as the ``density'' matrix ${\rm tr}\; _W \left[ \proj{\Psi^{IQW}(t+)} \right]$; these are the vectors and density matrix after the query following $\ket{\Phi^{QW}_X(t)}$ but before the next post-query unitary. Since the post-query unitary $U^{QW}(t)$ does not touch $I$, $\rho^I(t) = \rho^I((t-1)+)$, or in other words: \begin{equation} {\rm tr}\; _Q \rho^{IQ}(t) = {\rm tr}\; _Q \rho^{IQ}((t-1)+)\;. \end{equation} Since the query is just the implementation of the unitary $\Omega$ on $IQ$, we have: \begin{equation} \rho^{IQ}((t-1)+) = \Omega \rho^{IQ}(t-1) \Omega^\dagger \;.
\end{equation} Eliminating the unnecessary quantities $\rho^{IQ}((t-1)+)$, we can combine the two preceding sets of equations into a single set (indexed by $t$) of linear equations: \begin{equation} {\rm tr}\; _Q \rho^{IQ}(t) = {\rm tr}\; _Q \Omega \rho^{IQ}(t-1) \Omega^\dagger. \end{equation} In other words, the quantities $\rho^{IQ}(t)$ satisfy the constraints (\ref{first computation constraints}). It is also clear that $\rho^{IQ}(0)$ as defined from the algorithm satisfies (\ref{initial unitary constraint}), because $U^{QW}(0)$ does not touch $I$, and $\ket{\Psi^{IQW}}$ has the all-ones matrix as its reduced density matrix. Furthermore, since as stated in Eq. (\ref{eq: final-states gram matrix is input register density matrix}), $G(t) \equiv \rho^{I}(t)$ and the latter is just ${\rm tr}\; _Q \rho^{IQ}(t)$, we have from Lemma \ref{lemma: QQA output implies feasibility of output SDP} and its proof that $\Gamma_z$ as defined in that proof satisfy the constraints (\ref{output1}) and (\ref{output2}). Thus we have shown the first direction of the theorem (existence of a QQA implies feasible solution to the SDP). It remains to show the other direction, that the existence of a feasible solution for $P(g,q, \varepsilon)$ implies that of an $\varepsilon$-QQA solving the problem with the stated amount of workspace. Again it is a straightforward construction, though we must keep track of the amount of workspace used in the algorithm we construct. In this part of the proof $\rho^{IQ}(t)$, $\Gamma_z$ will be taken to be the feasible values of the variables of the same names in Definition \ref{def: primal}; it will turn out, of course, that when we have constructed the desired QQA, they will coincide with the quantities of the same names, $\rho^{IQ}(t)$, $\Gamma_z$, obtainable from that QQA via the definitions in the first part of our proof. The construction begins with a converse of Lemma \ref{lemma: QQA output implies feasibility of output SDP}. 
\begin{lemma} \label{lemma: output SDP feasibility implies existence of final-state vectors} If the SDP $O(g, \varepsilon, M)$ has a feasible solution $\{\Gamma_z\}_{z \in T}$ there exists a set of vectors $\{\ket{\Psi_X}\}_{X \in S}$ in a Hilbert space of dimension no greater than $\sum_{z \in T} rank(\Gamma_z) \le |S||T|$ and projectors $P_z$ acting on that space such that $M$ is the Gram matrix of $\{\ket{\Psi_X}\}_{X \in S}$ and $P_z$ satisfy Eqs. (\ref{def: feasible gammaz}) and (\ref{QQA output condition}). \end{lemma} \noindent {\bf Sketch of proof of Lemma \ref{lemma: output SDP feasibility implies existence of final-state vectors}:} The proof (with notational differences) may be found in \cite{BSS2003}; it proceeds by constructing vectors $\ket{\Theta_X}$ of length $|S|$ and a ``POVM'' consisting of $|S| \times |S|$ PSD matrices $\{R_z\}_{z \in T}$ such that $\sum_{z \in T} R_z = I$ and $\dmelement{\Theta_X }{R_{g(X)}} \ge 1 - \varepsilon$, and then Naimark-extending the POVM to a set of projectors $P_z$ in a larger space and identifying $\ket{\Psi_X}$ as the corresponding embeddings of the vectors $\ket{\Theta_X}$ in the larger space. This ensures that $\ket{\Psi_X}$ satisfy (\ref{QQA output condition}). The minimal dimension required for the Naimark extension is $\sum_{z \in T} rank(\Gamma_z)$. \hspace*{\fill}\mbox{\rule[0pt]{1.5ex}{1.5ex}} Since Eqs. (\ref{output1}) and (\ref{output2}) just state that $O(g, \varepsilon, M)$ with ${\rm tr}\; _Q \rho^{IQ}(q)$ substituted for $M$ is feasible, Lemma \ref{lemma: output SDP feasibility implies existence of final-state vectors} gives us vectors $\{\ket{\Psi_X(q)}\}_{X \in S}$ in a Hilbert space $H$ of dimension $|H| := \sum_{z \in T} rank(\Gamma_z)$ whose Gram matrix is ${\rm tr}\; _Q(\rho^{IQ}(q))$ and projectors $P_z$ on that space, which together satisfy Eqs. (\ref{def: feasible gammaz}) and (\ref{QQA output condition}).
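The Naimark step invoked in this sketch can be illustrated concretely. The following Python/numpy sketch (toy dimensions and a randomly generated two-outcome POVM; none of the numbers come from the paper) stacks the square roots $\sqrt{R_z}$ into an isometry $V$ and checks that the projective measurement $\{P_z\}$ on the larger space reproduces the POVM, $V^\dagger P_z V = R_z$.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
# build a random 2-outcome POVM {R_0, R_1}: PSD matrices summing to I
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
B = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
S = A @ A.conj().T + B @ B.conj().T
w, U = np.linalg.eigh(S)
Sinvhalf = U @ np.diag(w ** -0.5) @ U.conj().T
R = [Sinvhalf @ A @ A.conj().T @ Sinvhalf, Sinvhalf @ B @ B.conj().T @ Sinvhalf]
assert np.allclose(R[0] + R[1], np.eye(d))

def psd_sqrt(M):
    w, U = np.linalg.eigh(M)
    return U @ np.diag(np.sqrt(np.clip(w, 0, None))) @ U.conj().T

# Naimark isometry V: C^d -> C^2 (x) C^d, stacking the sqrt(R_z) blocks
V = np.vstack([psd_sqrt(Rz) for Rz in R])
P = [np.kron(np.diag([1.0 if k == z else 0.0 for k in range(2)]), np.eye(d))
     for z in range(2)]
assert np.allclose(V.conj().T @ V, np.eye(d))   # V is an isometry, since sum_z R_z = I
for Rz, Pz in zip(R, P):
    # the projective measurement {P_z} on the big space reproduces the POVM
    assert np.allclose(V.conj().T @ Pz @ V, Rz)
```

Here the extension space has dimension $2d$ for simplicity rather than the minimal $\sum_z rank(R_z)$ cited in the lemma.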
We may give $H$ the structure $Q \otimes W$ with $Q$ $N$-dimensional and the dimension of $W$ large enough to guarantee that $dim(Q \otimes W) \ge \sum_{z \in T} rank(\Gamma_z)$; $|W| = \ceil{|H|/N} \le \ceil{|S||T|/N}$ suffices. Given the vectors $\ket{\Psi_X(q)} \in Q \otimes W$, we can construct the state $\ket{\Psi^{IQW}} := \sum_X \ket{X^I}\ket{\Psi^{QW}_X(q)}$. By construction, this state's reduced density matrix for system $I$ will equal the feasible $\rho^I(q)$. Now suppose we have $\ket{\Psi^{IQW}(t)}$ such that its reduced density matrix coincides with the feasible value $\rho^{IQ}(t)$ (or, for the case $t=q$, some arbitrary $\rho^{IQ}(t)$ whose $I$ density matrix coincides with the feasible $\rho^I(q)$). We construct $U^{QW}$ such that $\ket{\Psi^{IQW}(t-1)} := (\Omega^{IQ\dagger} \otimes I^W)(I^I \otimes U^{QW\dagger}) \ket{\Psi^{IQW}(t)}$, has $IQ$ reduced density matrix equal to $\rho^{IQ}(t-1)$ (or, for the case $t=1$, to the all-ones matrix). To do this, first note that any purification of $\Omega \rho^{IQ}(t-1) \Omega^{\dagger}$ into $W$ (and there exist many so long as $|W| \ge |I||Q|$) is also a purification of $\rho^I(t) := {\rm tr_Q} \rho^{IQ}(t)$ into $QW$, by the constraint (\ref{first computation constraints}). Moreover, by acting via a unitary $U^{QW\dagger}$ on $\ket{\Psi^{IQW}(t)}$, we can reach such a purification of $\rho^{I}(t)$ that is also a purification of $\Omega \rho^{IQ}(t-1) \Omega^{\dagger}$, as long as $W$ has dimension at least $|S|N$. We let a $U^{QW}$ that achieves this be the $t$-th unitary, $U^{QW}(t)$ of our algorithm, and define $\ket{\Psi^{IQW}(t-1)} := (\Omega^{IQ\dagger} \otimes I^W)(I^I \otimes U^{QW\dagger}) \ket{\Psi^{IQW}(t)}$. Thus, ${\rm tr_Q} \proj{\Psi^{IQW}(t-1)} = \rho^{IQ}(t-1)$, as claimed.
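The linear-algebra fact doing the work in this step -- any two purifications of the same reduced density matrix are related by a unitary on the purifying system -- can be checked numerically. In the Python/numpy sketch below (toy dimensions, random data; an illustration of the general fact, not of the specific states in the proof), a purification $\ket{\psi} = \sum_{ij} M_{ij}\ket{i}\ket{j}$ is encoded by the matrix $M$, so that $MM^\dagger$ is the reduced state; for full-rank $M_1, M_2$ with $M_1M_1^\dagger = M_2M_2^\dagger$, the connecting matrix $M_1^{-1}M_2$ is unitary.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 5
# a full-rank positive matrix rho (normalization is irrelevant here)
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = A @ A.conj().T + np.eye(d)          # strictly positive definite

def psd_sqrt(M):
    w, U = np.linalg.eigh(M)
    return U @ np.diag(np.sqrt(np.clip(w, 0, None))) @ U.conj().T

M1 = psd_sqrt(rho)                        # canonical purification of rho
Q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
M2 = M1 @ Q                               # another purification of the same rho
assert np.allclose(M2 @ M2.conj().T, rho)

# the unitary acting on the purifying system that connects the two purifications
W = np.linalg.solve(M1, M2)               # = M1^{-1} M2
assert np.allclose(W.conj().T @ W, np.eye(d))   # W is unitary
assert np.allclose(M1 @ W, M2)
```

The unitarity of $M_1^{-1}M_2$ follows from $M_1^{-1}\rho\,M_1^{-\dagger} = I$, which is exactly the computation the assertion checks.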
We apply this step beginning with the states $\ket{\Psi^{IQW}(q)}$ already constructed, until we get state $\ket{\Psi^{IQW}(0)}$ which by construction will have the all-ones matrix as its reduced density matrix, and thus $\ket{\Psi^{IQW}(0)} = \sum_X \ket{X} \ket{\chi^{QW}}$, where WLOG we can choose $U^{QW}(0)$ so that $U^{QW^\dagger}(0)\ket{\chi^{QW}} = \ket{0}$. Thus the sequence $U^{QW}(0),...,U^{QW}(q)$, and the indexed set $\{P_z\}_{z \in T}$ we have constructed are a quantum algorithm that $\varepsilon$-computes $g$, and the dimension of $W$ satisfies the claimed bound, which derives from the bounds on $|Q \otimes W|$ of $\sum_z rank(\Gamma_z)$ (from the Naimark extension at the output) and $|S|N$ (from the workspace needed to reach an arbitrary purification of a fixed $\rho^{IQ}$ in the post-query unitary step). \hspace*{\fill}\mbox{\rule[0pt]{1.5ex}{1.5ex}} \noindent {\bf Remark:} For those who like the matrix picture, thinking of the matrix $G(t)$ of $\rho^{IQ}(t)$ in the standard basis blocked according to $X$ and $Y$, we see that during the query each block is updated according to a fixed block-dependent linear map: \begin{equation} G^{(X,Y)} \mapsto Y\, G^{(X,Y)}\, X^\dagger\;. \end{equation} This is just conjugation by the block-diagonal unitary matrix whose $(X,X)$ block is $X$ (i.e., the matrix of $\Omega$). \iffalse To obtain these quantities it is necessary to consider the inner products, not just of the overall computer states conditional on all inputs, but of the states $\ket{\psi^{X,i}}$ of the workspace {\em relative} to some orthonormal basis of states $(\ket{i})$ for the query register. The matrix of these, while still of course PSD, is now indexed not just by pairs of inputs, but by pairs of inputs and query register basis vectors, so it may be viewed as a block matrix; its diagonal is no longer all ones, though it has the same trace as $\tilde{M}$ (namely $|S|$). We call it $G(t)$.
\fi \iffalse Thus $M(t)$ gives the matrix elements of an operator $\rho(t)$ on a tensor product $I \otimes Q$, where $I$ is a Hilbert space with a basis indexed by inputs $S$, and we called the other $Q$ because its basis is indexed by the $i$. We may view $I$ as a notional ``input register'' with an orthonormal basis $\ket{X}$, $X \in S$, indexed by the unitaries (``inputs'') that may be queried. If we imagine starting the extended computer in an (unnormalized) equal superposition $\sum_X \ket{X}\ket{0}\ket{0}$ of inputs, and let evolution during a query be represented by a {\em single fixed} overall unitary $\Omega$ that does $X$ on the query register conditional on having $\ket{X}$ in the input register, the computer state at time $t$ can be written as $\sum_{X,i} \ket{X} \ket{i} \ket{\phi^{X,i}}$, where $\ket{i}$ is the standard basis for the query register and $\ket{\phi^{X,i}}$ are the relative states already mentioned. Then the matrix $\tilde{M}$ is just the (unnormalized) density matrix of the input register, while the matrix $M$ is just that of the tensor product of input and query registers. Equation (\ref{important}) just says that the input register density matrix is the partial trace, over the query register, of the joint input and query register density matrix. [....] \fi \iffalse The overall computer state \begin{equation} \sum_{X,k}\ket{X}\ket{k}\ket{\phi^{X,k}} \end{equation} goes to \begin{equation} \sum_{X,k} \ket{X} X\ket{k}\ket{\phi^{X,k}}\;. \end{equation} Relative to the state $\ket{X}\ket{i}$ of input and query registers, we now have states: \begin{eqnarray} \ket{\phi^{X,i}+} & := & \sum_k \bra{i}X\ket{k}\ket{\phi^{X,i}} \nonumber \\ {}& = & \sum_k X_{ik} \ket{\phi^{X,i}} \;, \end{eqnarray} where the numbers $X_{ik} := \melement{i}{X}{k}$, the matrix elements of $X$ in the standard query basis. 
The inner products of these relative states are \begin{equation} \inner{\phi^{X,i}+}{\phi^{Y,j}+} = \sum_{kk'} X_{ik}^* Y_{jk'} \inner{\phi^{X,k}}{\phi^{Y,k'}}\;. \end{equation} \fi Using this we can express the constraints (\ref{first computation constraints}) in terms of the matrix $G(t)$ of $\rho^{IQ}(t)$, viewed as blocked according to $X,Y$. Each of the $q$ constraints on the matrices $\rho^{IQ}(t)$ (which states that an $|S| \times |S|$ matrix calculated from $\rho^{IQ}$, namely its partial trace $\rho^{I}$, is equal to another such matrix), becomes $\frac{|S|(|S|+1)}{2}$ constraints each stating that the trace of an $(X,Y)$ block of some matrix is equal to that of another: \begin{equation} {\rm tr}\; [ G^{(X,Y)}(t) ] = {\rm tr}\; [Y G^{(X,Y)}(t-1) X^\dagger]\;, \end{equation} or, in the case $t=q$, a similar set of constraints with no trace on the LHS. This is because the matrix of the partial trace in question is the matrix of traces of the blocks; since the block matrix is Hermitian, only $|S|(|S|+1)/2$ blocks, say those on and above the main diagonal of blocks, are independent. Equivalently, \begin{equation} {\rm tr}\; [ G^{(X,Y)}(t)] = {\rm tr}\; [X^\dagger Y G^{(X,Y)}(t-1)]\;. \end{equation} \hspace*{\fill}\mbox{\rule[0pt]{1.5ex}{1.5ex}} \section{The dual SDP} In order to find the SDP feasibility problem dual to the one just given, we begin by stating a very general theorem concerning feasibility of conic program constraint sets. \begin{theorem} \label{theorem: general duality theorem} Let $K$ be a closed, pointed, generating convex cone in an $m$-dimensional real vector space $V$, with a distinguished inner product $(\cdot ,\cdot )$. Let $W$ be a $p$-dimensional real vector space, also equipped with a distinguished inner product (written similarly). Let $K^* \subset V$ be the cone dual to $K$ according to $V$'s inner product. Let $A$ be a fixed linear transformation from $V$ to $W$ whose kernel is $\{0\}$, and let $b \in W$ be a constant nonzero vector.
Let $A^*: W \rightarrow V$ be the linear map ``dual'' or ``adjoint'' to $A$, defined by $(w, Av) = (A^*w, v)$. (For example, if $V$ and $W$ are viewed as spaces of column vectors of lengths $m$ and $p$ respectively equipped with the inner products $(u,v) = u^t v$, and $A$ is represented by its $p \times m$ matrix $\hat{A}$, then $A^*$'s matrix is $\hat{A}^t$.) Consider the conic programming feasible set defined by: \begin{equation} P:= \{ x \in V : Ax = b, x \ge_K 0 \} \;. \end{equation} This set is empty (the constraints are ``infeasible'') if and only if the dual feasible set \begin{equation} D := \{ y \in W : A^* y \ge_{K^*} 0, (b, y) < 0 \} \end{equation} is nonempty (the dual constraints are ``feasible''). \end{theorem} \noindent {\bf Proof: } First let $x$ belong to $P$. Suppose that $A^* y \in K^*$, so $y$ satisfies the first condition defining $D$. We show that $(b,y) \ge 0$, so that $y \notin D$. $A^*y \in K^*$ implies (since $x \in K$) that $(x, A^*y) \ge 0$. Thus $(Ax, y) \ge 0$; since $x \in P$, $Ax=b$, so $(b,y) \ge 0$. Next, supposing $P$ infeasible we construct a point in $D$. Consider the $A$-image of $K$, denoted $AK$. By the assumption that $A$'s kernel is $\{0\}$, and for example Theorem 9.1 of \cite{Rockafellar70a}, $AK$ is a closed convex cone. Now, $b \notin AK$, for if it were, its preimage would belong to $P$, contradicting the supposition. Therefore, by (for example) Theorems 11.1, 11.3, and 11.7 of \cite{Rockafellar70a}, there exists a hyperplane through the origin properly separating $b$ and $AK$; this hyperplane is the zero-set of a linear functional $L(x) := (y,x)$ determined by a vector $y \in W$. Thus (cf. the proof of Thm. 11.1 in \cite{Rockafellar70a}) $(b,y) < 0$, and for all $z \in AK$, $(z, y) \ge 0$. The latter is equivalent to: for all $x \in K$, $(Ax, y) \equiv (x, A^*y) \ge 0$. Thus $y \in D$.
\hspace*{\fill}\mbox{\rule[0pt]{1.5ex}{1.5ex}} Lemma 2 of \cite{BSS2003} was a special case of this, for a particular cone $K$ and a particular form of the linear map $A$. In \cite{BSS2003} we then further specialized the Lemma to the case in which the primal feasible set $P$ was the SDP characterizing the existence of a quantum query algorithm for classical Boolean queries. We now proceed by giving a generalization of Lemma 2 of \cite{BSS2003} which is still a special case of the above theorem, but which is sufficiently general to encompass the SDP characterizing quantum query complexity with arbitrary queries. \iffalse In anticipation of specializing it to this quantum case, in the general theorem we use the same names for the dimensional parameters as we will have in the special case. \fi \begin{lemma} \label{lemma: specialized duality lemma} Let $K \subset W$ be the product of $k$ cones of PSD Hermitian matrices (with the $\beta$-th cone a cone of $d_\beta \times d_\beta$ matrices), where $W$ is the direct sum of the spaces of $d_\beta \times d_\beta$ Hermitian matrices. Let $V$ be the direct sum of $l$ copies of $H(s)$ for some fixed $s$. Let ${\bf A}$ be a fixed $l \times k$ matrix whose entries are linear maps ${\cal A}_{\alpha, \beta}: H(d_\beta) \mapsto H(s)$. Let ${\bf B}$ be a nonzero element of $V$, i.e. an $l$-tuple of matrices $B_\alpha$, with $B_\alpha \in H(s)$. Equip $V$ and $W$ with the trace inner products, $(F, G) := {\rm tr}\; FG$. (Matrices in $V$ and $W$ are block-diagonal, with $l$ and $k$ blocks respectively.) Consider the ``primal'' feasible set: \begin{equation} P := \{ X \in W : \sum_\beta {\cal A}_{\alpha \beta}(X_\beta) = B_{\alpha} ~\mbox{for all}~ \alpha, ~ X \in K\}\;, \end{equation} and the ``dual'' feasible set \begin{equation} D := \{Y \in V: \sum_{\alpha} {\cal A}^*_{\alpha \beta}(Y_{\alpha}) \succeq 0 ~\mbox{for all}~ \beta\;, \sum_{\alpha} {\rm tr}\; Y_\alpha B_\alpha < 0 \}\;.
\end{equation} Suppose further that the only feasible solution to $P_0$ (the primal problem with $B_\alpha$ set equal to zero) is $0 \in W$. Then if $D$ is feasible, $P$ is infeasible, and vice versa. \end{lemma} We caution the reader not to confuse the variable matrices $X_\alpha$, $Y_\beta$ appearing in the SDP above with the variables $X$ and $Y$ that we commonly let range over input unitaries in $S$. We will rarely use these notations together, and only when it is clear from the context what is meant, and in any case we never use subscripts on the input unitaries, nor do we ever omit subscripts from the primal and dual variables of the above type of program. \noindent {\bf Remark:} Note that ${\cal A}^*$ is the linear map often called by quantum information theorists ${\cal A}^\dagger$, defined by ${\rm tr}\; F {\cal A}^\dagger(G) = {\rm tr}\; {\cal A}(F) G$ (for all $F$ in the input space and $G$ in the output space, though it suffices to require it for bases of these spaces given linearity). In the case where ${\cal A}$ is completely positive, i.e. ${\cal A}: G \mapsto \sum_i A_i G A_i^\dagger$, ${\cal A}^\dagger$ may be defined via ${\cal A}^\dagger: G \mapsto \sum_i A_i^\dagger G A_i$. \hspace*{\fill}\mbox{\rule[0pt]{1.5ex}{1.5ex}} The program $P(g, \varepsilon, q)$ is a case of $P$, for which $W$ is the direct sum of $q$ copies of $H(|S|N)$ and $2|T|+1$ copies of $H(|S|)$, and $V$ is the direct sum of $q + 2 + |T|$ copies of $H(|S|)$.
In terms of the associated query algorithm, the $q$ copies of $H(|S|N)$ in $W$ are where the density matrices $\rho^{IQ}(t)$ will live, one of the copies of $H(|S|)$ is for $\rho^I(q)$ and the other $2|T|$ copies of $H(|S|)$ are for the output conditions: $|T|$ of them, indexed by $z \in T$, for an additive decomposition of the final $\rho^{I}$ into positive matrices $\Gamma_z$ representing the portion of the output matrix for which the final measurement has result $z$, and $|T|$ more for slack variable matrices $\Pi_z$, used to transform the inequality conditions on the $\Gamma_z$, for successful computation, into equality conditions. These inequality conditions are $\Delta_z * \Gamma_z \succeq (1 - \varepsilon) \Delta_z$; requiring the slack variables $\Pi_z$ to be positive while enforcing the equality constraint $\Delta_z * \Gamma_z - \Pi_z = (1 - \varepsilon) \Delta_z$ is equivalent to imposing the inequality constraint on the $\Gamma_z$. Thus the vector of primal variables $X_\beta$ is indexed as follows: for $0 \le \beta \le q-1$, $X_\beta = \rho^{IQ}(\beta)$; for $\beta = q$, $X_\beta = \rho^I(q)$; for $\beta = q+z$ ($z \in T \equiv \{1,...,|T|\}$), $X_\beta = \Gamma_z$; for $\beta = q + |T| + z$ ($z \in T \equiv \{1,...,|T|\}$), $X_\beta = \Pi_z$. We now specify the maps ${\cal A}_{\alpha,\beta}$ and constant vector ${\bf B} = [B_\alpha]$. We will give rows of the matrix ${\cal A}_{\alpha, \beta}$, followed by the corresponding RHS constant $B_\alpha$, since each row and $B_\alpha$ corresponds to a constraint; the constraints will be naturally grouped by type. For $0 \le \alpha \le q-1$, ${\cal A}_{\alpha, \alpha}$ is the partial trace map $G^{IQ} \mapsto {\rm tr}\; _Q(G^{IQ})$, for $1 \le \alpha \le q$, ${\cal A}_{\alpha, \alpha - 1}: G^{IQ} \mapsto - {\rm tr}\; _Q(\Omega G^{IQ} \Omega^\dagger)$, and ${\cal A}_{q, q} = {\rm id}$ (acting on the copy of $H(|S|)$ holding $\rho^I(q)$), with the rest of the maps zero for $\alpha, \beta$ in this range.
The corresponding RHS constants are $B_0 = E$ (where $E$ is the all-ones matrix in $H(|S|)$), and $B_\alpha = 0$ ($1 \le \alpha \le q$); thus far we have imposed all the trace constraints on query-updating (constraints $0,...,q-1$ give the effect of the pre-query unitary and query, while constraint $q$ gives the effect of the unitary following the last query). ${\cal A}_{q+1, q}$ is minus the identity map, while ${\cal A}_{q+1,q + z}$, for $z \in T$, is the identity map ${\rm id}: X \mapsto X$ (and the other maps ${\cal A}_{q+1, x}$ are zero). The corresponding RHS constants are zero: this imposes the constraint that the $\Gamma_z$ are an additive decomposition of $\rho^{I}(q)$ into positive matrices. Finally, for $\alpha = q + 1 + z$, $z \in T$, ${\cal A}_{\alpha, q + z}: X \mapsto \Delta_z * X$, ${\cal A}_{\alpha, q + |T| + z} = - {\rm id}$, and the rest of them are zero. The corresponding RHS constants, $B_\alpha$ for $\alpha = q+1 +z$, $z \in T$, are the matrices $(1 - \varepsilon) \Delta_z$. These just impose the output conditions, in the equality-constraint form with slack variables given above. To make this clearer, we display in Appendix B the constraints in the form $Ax = b$, where $A$ is the matrix of maps $A_{\alpha \beta}$, $x$ and $b$ are column vectors of matrices $X_\beta$, $B_\alpha$; we also display there the matrix-multiplication part of the dual constraints. Appendix B serves as a useful aid to verifying that the procedure about to be described for deriving the dual of $P(g, \varepsilon, q)$ is carried out correctly, and that problem $D(g, \varepsilon, q)$ below is the result. The dual feasible set is obtained, using Lemma \ref{lemma: specialized duality lemma}, by transposing the matrix of maps, and replacing each map with its dual.
When ${\cal A} : I \otimes Q \rightarrow I$ is the partial trace map, its dual ${\cal A}^*: I \rightarrow I \otimes Q$ is given by ${\cal A}^*: L \mapsto L \otimes I$ (where, to clear up ambiguous notation, $I$ in this last specification refers to the identity matrix on the system $Q$, not to the system $I$ itself as it does in the preceding two). For ${\cal A}: G \mapsto {\rm tr}\; _Q (\Omega G \Omega^\dagger)$, we have ${\cal A}^*: L \mapsto \Omega^\dagger (L \otimes I) \Omega$. \noindent {\bf Remark:} We can give more explicit forms of these maps (and incorporate the special form of $\Omega$, in the second case). Viewing elements of $I \otimes Q$ as block matrices blocked according to $X,Y \in S$, and elements of $I$ as matrices with elements indexed by pairs $X,Y \in S$, we have, when ${\cal A}$ is the partial trace map, that ${\cal A}^*$ takes $M$ to the matrix whose blocks are $M_{XY} I$. For ${\cal A}^{*}: M \mapsto \Omega^\dagger (M \otimes I) \Omega$, the output matrix is the one whose blocks are $M_{XY} X^\dagger Y$. Id is of course dual to itself, and so, as is easily verified, are the maps $M \mapsto \Delta_z * M$. \hspace*{\fill}\mbox{\rule[0pt]{1.5ex}{1.5ex}} We thus obtain a version of the dual program $D(g, \varepsilon, q)$. The dual variables are $q + 2 + |T|$ $|S| \times |S|$ Hermitian matrices $Y_\alpha$ whose matrix elements are indexed by input-pairs $(X,Y) \in S \times S$. The first $q+1$, corresponding to the primal query-updating constraints, we call $L_t$, $t \in \{0,...,q\}$; the next, corresponding to the primal constraint that the $\Gamma_z$ add up to $\rho^I(q)$, we call $L_{q+1}$; and the last $|T|$, each corresponding to the output constraint on a primal variable $\Gamma_z$, we call $\Lambda_z$ ($z \in T$).
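The two adjoint computations above can be verified numerically against the defining property ${\rm tr}\,[F\,{\cal A}^\dagger(G)] = {\rm tr}\,[{\cal A}(F)\,G]$. The Python/numpy sketch below (small toy dimensions; $\Omega$ is assembled as the block-diagonal matrix of two random stand-in "input" unitaries, as in the text) checks that the dual of ${\rm tr}_Q$ is $L \mapsto L\otimes I$ and that the dual of $G \mapsto {\rm tr}_Q(\Omega G \Omega^\dagger)$ is $L \mapsto \Omega^\dagger(L\otimes I)\Omega$.

```python
import numpy as np

rng = np.random.default_rng(3)
nS, N = 2, 3                       # |S| inputs, N-dimensional query register

def rand_herm(d):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return A + A.conj().T

def rand_unitary(d):
    Q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return Q

def ptrace_Q(G):                   # partial trace over Q, with I (x) Q ordering
    return np.trace(G.reshape(nS, N, nS, N), axis1=1, axis2=3)

# Omega = sum_X |X><X| (x) X: block-diagonal in the input index
inputs = [rand_unitary(N) for _ in range(nS)]
Omega = np.zeros((nS * N, nS * N), dtype=complex)
for X, UX in enumerate(inputs):
    Omega[X * N:(X + 1) * N, X * N:(X + 1) * N] = UX

L = rand_herm(nS)
G = rand_herm(nS * N)
I_N = np.eye(N)
# dual of the partial trace is L |-> L (x) I
assert np.isclose(np.trace(L @ ptrace_Q(G)), np.trace(np.kron(L, I_N) @ G))
# dual of G |-> tr_Q(Omega G Omega^dagger) is L |-> Omega^dagger (L (x) I) Omega
lhs = np.trace(L @ ptrace_Q(Omega @ G @ Omega.conj().T))
rhs = np.trace(Omega.conj().T @ np.kron(L, I_N) @ Omega @ G)
assert np.isclose(lhs, rhs)
```

The reshape-and-trace idiom implements ${\rm tr}_Q$ directly on the standard tensor-product basis ordering used in the text.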
We must find such matrices satisfying the constraints: \begin{eqnarray} L_{(t-1)} \otimes I - \Omega^\dagger (L_{t} \otimes I) \Omega \succeq 0 ~~(1 \le t \le q) \\ L_q = L_{q+1} \\ L_{q+1} \succeq - \Delta_z * \Lambda_{z}, ~~(1 \le z \le |T|) \\ -\Lambda_z \succeq 0 ~~(1 \le z \le |T|) \\ \sum_{X,Y \in S} (L_0)_{X,Y} + (1 - \varepsilon) {\rm tr}\; \sum_{z \in T} \Delta_z * \Lambda_z < 0 \;. \end{eqnarray} Redefining the $\Lambda_z$ to be the negatives of the $\Lambda_z$ above, so as to have them be PSD, changing some signs, and dropping the redundant variable $L_{q+1}$, we formally define the dual program: \begin{definition} \label{def: dual SDP} The semidefinite program (feasibility problem) $D(g, \varepsilon, q)$ is defined as the problem of finding $q+1$ $|S| \times |S|$ Hermitian matrices $L_t$, $t \in \{0,...,q\}$, and $|T|$ $|S| \times |S|$ Hermitian matrices $\Lambda_z$ for $z \in T$, with matrix elements indexed by $S \times S$, such that: \begin{eqnarray} L_{(t-1)} \otimes I \succeq \Omega^\dagger (L_{t} \otimes I) \Omega ~~(1 \le t \le q) \label{first constraint in dual}\\ L_q \succeq \Delta_z * \Lambda_{z}, ~~(1 \le z \le |T|) \\ \Lambda_z \succeq 0 ~~(1 \le z \le |T|) \\ \sum_{X,Y \in S} (L_0)_{X,Y} < (1 - \varepsilon) \sum_{z \in T} \sum_{X: g(X)=z} ~~(\Lambda_z)_{X,X} \;. \end{eqnarray} \end{definition} Comparison to the program $\hat{P}(f,t,\varepsilon)$ of Theorem 2 in \cite{BSS2003} shows that they are identical except for the first constraint (the query-updating one), and that when $\Omega$ has the special form corresponding to classical phase queries to input strings $x$ (when $x$ is in the input register), then $D$ above specializes to $\hat{P}$ of \cite{BSS2003}. Note that the constraint (\ref{first constraint in dual}) says that the block matrix whose $X,Y$ block is the $N \times N$ matrix $L_{t-1}[X,Y]\, I - L_t[X,Y]\, X^\dagger Y$ is positive semidefinite.
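This blockwise reading can be checked mechanically: with $\Omega$ the block-diagonal matrix of the input unitaries, $\Omega^\dagger(L_t \otimes I)\Omega$ has $(X,Y)$ block $L_t[X,Y]\,X^\dagger Y$, so the constraint matrix assembles block by block. A Python/numpy sketch with random toy data (two $2\times 2$ stand-in "input" unitaries and random Hermitian $L$'s; the dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
nS, N = 2, 2

def rand_unitary(d):
    Q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return Q

inputs = [rand_unitary(N) for _ in range(nS)]
Omega = np.zeros((nS * N, nS * N), dtype=complex)
for X, UX in enumerate(inputs):
    Omega[X * N:(X + 1) * N, X * N:(X + 1) * N] = UX

A = rng.normal(size=(nS, nS)) + 1j * rng.normal(size=(nS, nS))
B = rng.normal(size=(nS, nS)) + 1j * rng.normal(size=(nS, nS))
L_prev, L_t = A + A.conj().T, B + B.conj().T      # Hermitian L_{t-1}, L_t
I_N = np.eye(N)

# global form of the dual query-constraint matrix
global_form = np.kron(L_prev, I_N) - Omega.conj().T @ np.kron(L_t, I_N) @ Omega

# blockwise form: (X,Y) block is L_{t-1}[X,Y] I - L_t[X,Y] X^dagger Y
block_form = np.zeros_like(global_form)
for X, UX in enumerate(inputs):
    for Y, UY in enumerate(inputs):
        block = L_prev[X, Y] * I_N - L_t[X, Y] * (UX.conj().T @ UY)
        block_form[X * N:(X + 1) * N, Y * N:(Y + 1) * N] = block

assert np.allclose(global_form, block_form)
```

The constraint matrix is Hermitian by construction, so positive semidefiniteness is just a condition on its (real) spectrum.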
An immediate consequence of Theorem \ref{theorem: main characterization theorem} and Lemma \ref{lemma: specialized duality lemma} is the following Theorem. \begin{theorem} \label{theorem: algorithms and duality} With $S$, $T$ as above, a $q$-query, $\varepsilon$-error quantum algorithm to compute $g: S \rightarrow T$ exists if and only if $D(g, q, \varepsilon)$ has no feasible solution. \end{theorem} \section{Relaxation, duality, and a generalized spectral adversary method} \subsection{Relaxation to the pairwise output condition: primal and dual programs} We now consider relaxing the primal program by substituting the weaker output condition of ``pairwise near-orthogonality,'' also known as the ``Ambainis condition'' \cite{Ambainis2000a}: \begin{equation} |\rho^I(q)[X,Y]| \le 2 \sqrt{\varepsilon(1 - \varepsilon)} {\rm ~ when ~} g(X) \ne g(Y)\;. \end{equation} We call it ``pairwise near-orthogonality'' because, by (\ref{eq: final-states gram matrix is input register density matrix}), when $\rho^{I}(q)$ is viewed as the unnormalized density matrix of the input register in the explicit-inputs model, $|\rho^{I}(q)[X,Y]|$ is the modulus of the inner product of the $QW$ computer states conditional on inputs $X$ and $Y$ in the ``black-box'' model, so it states that these conditional states are nearly (for small $\varepsilon$) orthogonal if $X$ and $Y$ have different values of $g$; a necessary, but not sufficient, condition for them to be the final states in a successful computation of $g$. In order to formulate this as a semidefinite constraint, we need constant matrices $V^{XY} \in M(|S|)$, for all {\em unordered} pairs $(X,Y)$ of $X,Y \in S$ such that $g(X) \ne g(Y)$ (we call this set $R$ for future reference). For each such pair we define $V^{XY}$ to be the matrix whose $X,Y$ and $Y,X$ matrix elements are $1$, and whose other matrix elements are all zero.
We also need the constant matrices $W^{XY}$ for the same unordered input-pairs, but whose $X,X$ and $Y,Y$ matrix elements are $1$ (and whose others are zero). Then the Ambainis output condition is equivalent to the conditions: \begin{equation} V^{XY}*\rho^I(q) + 2\sqrt{\varepsilon(1-\varepsilon)} W^{XY} \succeq 0\; \end{equation} where $(X,Y) \in R$. We won't need the output variables $\Gamma_z$ in this case, but we will need a slack variable $\Pi^{XY} \succeq 0$ for each of the $|R|$ unordered pairs, to get equality constraints \begin{equation} V^{XY}*\rho^I(q) + 2\sqrt{\varepsilon(1-\varepsilon)} W^{XY} = \Pi^{XY}\;. \end{equation} Thus the dual program is to find $|S| \times |S|$ Hermitian matrices $L_t$, $0 \le t \le q$, and $\Upsilon_{XY}$, $(X, Y) \in R$, such that: \begin{eqnarray} L_{t-1} \otimes I \succeq \Omega( L_{t} \otimes I) \Omega^\dagger ~(1 \le t \le q) \\ L_{q} \succeq \sum_{(X,Y) \in R} (V^{XY} * \Upsilon_{XY}) \\ \Upsilon_{XY} \succeq {\bf 0}~ ((X,Y) \in R) \\ \sum_{X,Y \in S} L_0[{X,Y}] < - 2 \sqrt{\varepsilon(1 - \varepsilon)} \sum_{(X,Y) \in R} {\rm tr}\; (\Upsilon_{XY} W^{XY})\;, \end{eqnarray} and $(\Upsilon_{XY})_{MN} = 0$ unless $MN \in \{XX, XY, YX, YY\}$. Rewriting this in terms of the variables $K_t := - L_{q-t}$ we formally define the dual program $D_A$. \begin{definition}\label{def: dual of relaxed SDP} \begin{eqnarray} K_0 \preceq - \sum_{(X,Y) \in R} (V^{XY} * \Upsilon_{XY}) \label{special constraint on K0} \\ K_{t-1} \otimes I \preceq \Omega (K_{t} \otimes I) \Omega^\dagger ~(1 \le t \le q) \label{relaxed dual query constraint} \\ \label{constraint: upsilons psd} \Upsilon_{XY} \succeq {\bf 0}~ ((X,Y) \in R) \\ \label{africabrass} \sum_{X,Y \in S} K_q[X,Y] > 2 \sqrt{\varepsilon(1 - \varepsilon)} \sum_{(X,Y) \in R} {\rm tr}\; \Upsilon_{XY} \;, \end{eqnarray} and $(\Upsilon_{XY})_{MN} = 0$ unless $MN \in \{XX, XY, YX, YY\}$.
\end{definition} \subsection{A generalized spectral adversary method} We next obtain, from this dual program, a generalization of Theorem 4 of \cite{BSS2003}, giving a lower bound directly on the number of queries in an algorithm $\varepsilon$-computing a function, in terms of relatively easily computed properties of the function and a ``weight matrix'' $\Gamma$ that we are free to choose. This gives a generalization of the so-called ``spectral adversary method'' for quantum query complexity lower bounds. We use the notation $\lambda(M)$ for the largest eigenvalue of a matrix $M$. \begin{theorem} Let $S$ be a finite set of $N \times N$ unitary matrices, and let $g: S \rightarrow T$, $T$ a finite set. Let $\Gamma$ be a nonnegative real symmetric $|S| \times |S|$ matrix indexed by $S$, such that $\Gamma_{X,Y} = 0$ whenever $g(X) = g(Y)$. Then \begin{equation} QQC_{\varepsilon}(g) \ge \frac{ (1 - 2 \sqrt{\varepsilon(1 - \varepsilon)}) \lambda(\Gamma) }{ 2 \lambda( \Gamma \otimes I - \Omega ( \Gamma \otimes I) \Omega^\dagger) } \;. \end{equation} \end{theorem} \begin{proof} To prove this, we construct, for any $\Gamma$ as above and $q$ below the bound given in the theorem, a sequence $K_t$, $0 \le t \le q$, $\Upsilon_{XY}$, $(X,Y) \in R$, that is a feasible solution to $D_A(g, q, \varepsilon)$. Note that by the standard Perron-Frobenius theory of nonnegative matrices \cite{Horn85a}, $\Gamma$ has a normalized eigenvector $v$ with nonnegative entries, whose eigenvalue is $\Gamma$'s largest, i.e. $\lambda(\Gamma)$. We define $K_t := ( \Gamma - t \alpha I)*vv^t$, where $\alpha := 2 \lambda( \Gamma \otimes I - \Omega ( \Gamma \otimes I) \Omega^\dagger)$. We also define $\Upsilon_{XY}$ via \begin{eqnarray} \Upsilon_{XY}[X,X] = \Upsilon_{XY}[Y, Y] = -\Upsilon_{XY}[X,Y] = -\Upsilon_{XY}[Y,X] := \nonumber \\ K_0[X,Y] \equiv \Gamma[X,Y]v[X]v[Y]\;, \end{eqnarray} with its other matrix elements zero.
These are manifestly positive semidefinite, satisfying (\ref{constraint: upsilons psd}). That (\ref{special constraint on K0}) is satisfied with equality is also immediate from the definitions. To verify that (\ref{relaxed dual query constraint}) is satisfied, we compute \begin{eqnarray} \Omega(K_t \otimes I)\Omega^\dagger - K_{t-1} \otimes I = \Omega(((\Gamma - t\alpha I)*vv^t)\otimes I) \Omega^\dagger -((\Gamma - (t-1) \alpha I)*vv^t) \otimes I \\ = (vv^t\otimes E) * \Omega((\Gamma - t \alpha I) \otimes I) \Omega^\dagger - (vv^t \otimes E)*((\Gamma - (t-1) \alpha I) \otimes I) \\ = (vv^t \otimes E)* \left[\Omega(\Gamma \otimes I) \Omega^\dagger - t \alpha (I \otimes I) - \Gamma \otimes I + (t-1) \alpha(I \otimes I) \right] \\ = (vv^t\otimes E)* \left[ \Omega(\Gamma \otimes I) \Omega^\dagger - \Gamma \otimes I - \alpha(I \otimes I) \right] \label{last line}\;. \end{eqnarray} Note that in the second equality we used the identity \begin{equation} Z^\dagger( X * M \otimes I) Z \equiv (M \otimes E)*Z^\dagger(X \otimes I)Z\;, \end{equation} which does not hold for general $Z$, but does hold when (as in the cases $Z = \Omega$, $Z = I$ that we use) $Z$ is block-diagonal with blocks indexed by a basis for the input register (the register that we write on the left in tensor products). The matrix in (\ref{last line}) is positive semidefinite by the definition of $\alpha$, so the constraint (\ref{relaxed dual query constraint}) is indeed satisfied. Finally, the constraint (\ref{africabrass}) is satisfied because \begin{equation} \sum_{(X,Y) \in R} {\rm tr}\; \Upsilon_{XY} = \sum_{(X,Y) \in R} 2 \Gamma[X,Y]v[X]v[Y] = v^t \Gamma v = \lambda(\Gamma)\;, \end{equation} while $\sum_{X,Y} K_q[X,Y] = \lambda(\Gamma) - q \alpha$, which, since $q$ is strictly below the bound given in the theorem, is greater than $2 \sqrt{\varepsilon(1 - \varepsilon)} \lambda(\Gamma)$. 
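As a numerical aside (not part of the proof), the Perron-Frobenius facts invoked above can be checked directly on a small example: the sketch below builds an arbitrary nonnegative symmetric matrix standing in for $\Gamma$ (it is not tied to any particular function $g$) and verifies that the top eigenvalue admits a nonnegative eigenvector and equals the spectral radius.

```python
import numpy as np

rng = np.random.default_rng(0)
# An arbitrary nonnegative real symmetric matrix standing in for Gamma.
G = rng.random((5, 5))
Gamma = (G + G.T) / 2

w, V = np.linalg.eigh(Gamma)   # eigenvalues in ascending order
lam, v = w[-1], V[:, -1]
v = v if v.sum() >= 0 else -v  # fix the overall sign of the eigenvector

# Perron-Frobenius: the largest eigenvalue has a nonnegative eigenvector
# and dominates the spectrum in absolute value.
assert np.all(v >= -1e-12)
assert np.allclose(Gamma @ v, lam * v)
assert lam + 1e-12 >= np.max(np.abs(w))
```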
\hspace*{\fill}\mbox{\rule[0pt]{1.5ex}{1.5ex}} \end{proof} It is easily seen that this Theorem specializes to Theorem 4 of \cite{BSS2003}. \acknowledgments We thank the DOE and NSF for support. \begin{appendix} \section{The matrix multiplications appearing in the primal and dual constraints} In this section we use the notation ${\rm tr_Q}$ to denote the partial trace map from $I \otimes Q$ to $I$, $\Omega$ to denote the map $G \mapsto \Omega G \Omega^\dagger$, $\Delta_k*$ to denote the map $G \mapsto \Delta_k * G$ (where $*$ is the elementwise matrix product), juxtaposition of maps to indicate composition (thus ${\rm tr_Q} \Omega: G \mapsto {\rm tr_Q}(\Omega G \Omega^\dagger)$), and the superscript $^*$ to indicate the dual map. We also use the facts that the maps $\Delta_k*$ are self-dual and that the dual $\Omega^*$ of the map $\Omega$ is the map $\Omega^\dagger: M \mapsto \Omega^\dagger M \Omega$. \subsection{Unrelaxed constraints} With this notation, the matrix multiplication portion of the primal constraints is: \begin{equation} \left( \begin{array}{cccc|cccc|ccccc} {\rm tr_Q} & & & & & & & & & \\ -{\rm tr_Q} \Omega & {\rm tr_Q} & & & & & & & & \\ & \ddots & \ddots & & & & & & & \\ & & -{\rm tr_Q} \Omega & {\rm id} & & & & & & \\ \hline & & & -{\rm id} & {\rm id} & {\rm id} & \cdots & {\rm id} & & & \\ \hline & & & & \Delta_1* & & & & & -{\rm id} & & \\ & & & & & \Delta_2* & & & & & -{\rm id} & \\ & & & & & & \ddots & & & & & \ddots & \\ & & & & & & & \Delta_{|T|}* & & & & & -{\rm id} \end{array} \right) \left[ \begin{array}{c} \rho^{IQ}(0) \\ \rho^{IQ}(1) \\ \vdots \\ \rho^{IQ}(q-1) \\ \hline \rho^{I}(q) \\ \hline \Gamma_1 \\ \Gamma_2 \\ \vdots \\ \Gamma_{|T|} \\ \hline \Pi_1 \\ \vdots \\ \Pi_{|T|} \end{array} \right] = \left[ \begin{array}{c} E \\ {\bf 0} \\ \vdots \\ {\bf 0} \\ \hline {\bf 0} \\ \hline (1 - \varepsilon) \Delta_1 \\ (1 - \varepsilon) \Delta_2 \\ \vdots \\ (1 - \varepsilon) \Delta_{|T|} \end{array} \right] \end{equation} The matrix 
multiplication part of the dual constraints is: \begin{equation} \left( \begin{array}{ccccc|c|ccc} {\rm tr_Q} ^* & - \Omega^\dagger {\rm tr_Q} ^* & & & & & & & \\ & {\rm tr_Q} ^* & -\Omega^\dagger {\rm tr_Q} ^* & & & & & & \\ & & \ddots & \ddots & & & & & \\ & & & {\rm tr_Q} ^* & - \Omega^\dagger {\rm tr_Q} ^* & & & & \\ & & & & {\rm id} & -{\rm id} & & & \\ \hline & & & & & {\rm id} & \Delta_1* & & \\ & & & & & \vdots & & \ddots & \\ & & & & & {\rm id} & & & \Delta_{|T|}* \\ \hline & & & & & & -{\rm id} & & \\ & & & & & & & \ddots & \\ & & & & & & & & -{\rm id} \end{array} \right) \left[ \begin{array}{c} L_0 \\ L_1 \\ \vdots \\ L_q \\ \hline L_{q+1} \\ \hline \Lambda_1 \\ \Lambda_2 \\ \vdots \\ \Lambda_{|T|} \end{array} \right] \succeq \left[ \begin{array}{c} {\bf 0} \\ \vdots \\ {\bf 0} \\ \hline {\bf 0} \\ \vdots \\ {\bf 0} \\ \hline {\bf 0} \\ \vdots \\ {\bf 0} \end{array} \right] \end{equation} \subsection{Relaxed constraints (pairwise output condition)} Primal matrix multiplication constraints: \begin{equation} \left( \begin{array}{cccc|c|ccccc} {\rm tr_Q} & & & & & & & & \\ -{\rm tr_Q} \Omega & {\rm tr_Q} & & & & & & & & \\ & \ddots & \ddots & & & & & & & \\ & & -{\rm tr_Q} \Omega & {\rm tr_Q} & & & & & & \\ & & & -{\rm tr_Q} \Omega & {\rm id} & & & & & \\ \hline & & & & -V_{X_1,Y_1}* & {\rm id} & & \\ & & & & -V_{X_2,Y_2}* & & {\rm id} & \\ & & & & \vdots & & & \ddots & \\ & & & & & & & & {\rm id} \end{array} \right) \left[ \begin{array}{c} \rho^{IQ}(0) \\ \rho^{IQ}(1) \\ \vdots \\ \rho^{IQ}(q) \\ \hline \rho^{I}(q) \\ \hline \Pi_{X_1,Y_1} \\ \Pi_{X_2,Y_2}\\ \vdots \\ \Pi_{X_{|R|},Y_{|R|}} \end{array} \right] = \left[ \begin{array}{c} E \\ {\bf 0} \\ \vdots \\ {\bf 0} \\ {\bf 0} \\ \hline 2 \sqrt{\varepsilon(1 - \varepsilon)} W_{X_1,Y_1} \\ 2 \sqrt{\varepsilon(1 - \varepsilon)} W_{X_2,Y_2} \\ \vdots \\ 2 \sqrt{\varepsilon(1 - \varepsilon)} W_{X_{|R|},Y_{|R|}} \end{array} \right] \end{equation} From the above we get the dual matrix multiplication 
constraints: \begin{equation} \left( \begin{array}{ccccc|ccc} {\rm tr_Q} ^* & -\Omega^\dagger {\rm tr_Q} ^* & & & & & & \\ & {\rm tr_Q} ^* & -\Omega^\dagger {\rm tr_Q} ^* & & & & & \\ & & \ddots & \ddots & & & & \\ & & & {\rm tr_Q} ^* & - \Omega^\dagger {\rm tr_Q} ^* & & & \\ \hline & & & & {\rm id} & - V_{X_1,Y_1}* & \cdots & - V_{X_{|R|},Y_{|R|}}*\\ \hline & & & & & {\rm id} & & \\ & & & & & & \ddots & \\ & & & & & & & {\rm id} \end{array} \right) \left[ \begin{array}{c} L_0 \\ L_1 \\ \vdots \\ L_q \\ \hline \Upsilon_{X_1,Y_1} \\ \Upsilon_{X_2,Y_2} \\ \vdots \\ \Upsilon_{X_{|R|},Y_{|R|}} \end{array} \right] \succeq \left[ \begin{array}{c} {\bf 0} \\ \vdots \\ {\bf 0} \\ \hline {\bf 0} \\ \hline {\bf 0} \\ \vdots \\ {\bf 0} \end{array} \right] \end{equation} \end{appendix} \vspace*{-5mm}
\section{Introduction} The present project is motivated by insufficient knowledge about long-lived radioisotopes, which can be produced during proton therapy \cite{review, janis}. Besides the high linear energy transfer, the efficiency of particle therapy can also be augmented by induced radioactivity. During radioactive decays, different particles (which deposit energy in surrounding tissues) are emitted and a synergistic effect can occur. The efficiency of tumour cell killing by mixed radiation is higher than that for each type of radiation applied separately. The main goal of the project is an appraisal of the dose deposited in irradiated and surrounding tissues by the induced radioactivity. Gamma-ray spectroscopy and lifetime measurements can be used to determine the amount of isotopes produced during irradiation. To check the consistency of the results, Geant4/GATE \cite{gate, geant4} simulations are used. \section{Materials and Methods} The experiment was performed at the Institute of Nuclear Physics of the Polish Academy of Sciences in Cracow. The proton beam accelerated to 60 MeV (the proton energy used for eye therapy) was provided by the \mbox{AIC-144} isochronous cyclotron, and the samples were irradiated with doses in the range from 30 Gy to 500 Gy. To achieve a homogeneous distribution of the dose in the sample, the Spread-Out Bragg Peak technique \cite{SOBP} was used. Liver and bone samples were irradiated. These samples are composed not only of light nuclides like hydrogen, carbon or oxygen, but also of heavier ones, like potassium or iron. Furthermore, those tissues have the ability to accumulate much heavier elements. The pig liver was chosen because of its composition, which is similar to the human one, and its easy availability. The samples of bone were prepared from a beef meal with a few additional drops of water. Liver and bone samples were frozen using liquid nitrogen so that they kept their shape during irradiation by a horizontal beam. 
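The lifetime measurements mentioned above rest on the exponential-decay relation $N(t) = N_0 e^{-\lambda t}$ with $\lambda = \ln 2 / T_{1/2}$. As a rough illustration (the half-lives below are standard nuclear-data values quoted from memory, not quantities measured in this experiment), the sketch estimates how much of each short-lived $\beta^+$ emitter observed in the liver survives a 10-minute delay between irradiation and measurement:

```python
import math

# Approximate half-lives in minutes (standard nuclear data, quoted from
# memory; not results of this experiment).
half_life_min = {"C-11": 20.4, "N-13": 9.97, "F-18": 109.8}
decay_const = {iso: math.log(2) / t for iso, t in half_life_min.items()}

# Fraction of each isotope still undecayed when a measurement starts
# 10 minutes after the end of irradiation.
remaining = {iso: math.exp(-lam * 10.0) for iso, lam in decay_const.items()}
for iso in half_life_min:
    print(f"{iso}: lambda = {decay_const[iso]:.4f} 1/min, "
          f"fraction left after 10 min = {remaining[iso]:.2f}")
```

On this timescale roughly half of the $^{13}$N has already decayed before counting starts, so the delay between irradiation and measurement has to be folded into any activity estimate.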
\section{Results} The energy spectrum of gamma rays emitted by the beef bone irradiated with a dose of 250 Gy is presented in Fig. 1. The spectrum was measured using a HPGe detector. There are several notable peaks: the 511 keV $\beta^+$ annihilation peak and lines at 147 keV, 1157 keV and 2127 keV. Apart from the 1157 keV line $\left(^{44}\textrm{Sc}\right)$, they originate from $^{34m}$Cl \cite{baza}. \begin{figure}[htb] \centerline{% \includegraphics[width=12.5cm]{Fig_1}} \caption{Gamma-ray energy spectrum of a bone irradiated with a dose of 250 Gy, measured with a HPGe detector placed in a low-background lead shield. The measurement started 2 hours after irradiation and lasted for 2 minutes.} \label{Fig:Bone} \end{figure} Fig. 2 presents the gamma-ray energy spectrum of an irradiated pig liver measured using a LaBr$_3$ scintillator detector. There are three notable peaks: the annihilation peak, 683 keV and 1460 keV. The three main sources of the annihilation peak are $\beta^+$ decays of $^{11}$C, $^{13}$N and $^{18}$F, which is confirmed by the time spectrum exhibiting the three decay constants of those isotopes. The line at the energy of 1460 keV originates from the natural background ($^{40}$K). The peak with energy around 680 keV has no confirmed origin. The most probable source of this $\gamma$-ray line is $^{204}$At, because of the energy and intensity of the gamma-ray line and a similar half-life. For the further calculations, it was assumed that this isotope is the source of the observed radiation. \begin{figure}[htb] \centerline{% \includegraphics[width=12.5cm]{Fig_2}} \caption{ Gamma-ray energy spectrum of liver irradiated with a dose of 500 Gy measured with a LaBr$_3$ detector. The measurement started 10 minutes after irradiation and lasted for 100 seconds.} \label{Fig:Liver} \end{figure} To estimate the dose from proton-induced radioactive isotopes, Monte Carlo simulations were performed using the Geant4/GATE package \cite{gate,geant4}. 
In the simulation, the radioactive isotopes were located in the centre of a water sphere of 4 cm diameter. Fig. 3 presents examples of spatial dose distributions obtained for two isotopes, $^{11}$C~(left) and $^{34m}$Cl (right). \begin{figure}[htb] \centerline{% \includegraphics[width=12.5cm]{Fig_3}} \caption{Spatial projections of dose distribution in water from point-like sources of $^{11}$C~(left) and $^{34m}$Cl (right).} \label{Fig:Dist} \end{figure} \newpage In order to calculate the dose delivered to the surrounding tissue from the observed radioisotopes, the irradiated sample volume and the total received dose were taken into account (see Tab. 1). \vspace{1cm} \begin{table}[htb] \centering \renewcommand{\arraystretch}{1.2} \caption{Dose from notable radioisotopes.} \begin{tabular}{ccc} \hline Radioisotope & Dose [Gy$_{isotope}$/Gy$_{therapy}$ cm$^{3}$] & Tissue\\ \hline $^{11}$C & 8.7 $\cdot$10$^{-9}$ & Liver\\ $^{13}$N & 1.9 $\cdot$10$^{-9}$& Liver\\ $^{18}$F & 4.3 $\cdot$10$^{-11}$& Liver\\ $^{34m}$Cl & 3.9 $\cdot$10$^{-9}$ & Bone\\ $^{204}$At & 7.8 $\cdot$10$^{-10}$ & Liver\\ \hline \end{tabular} \label{tab:dose} \end{table} \section{Conclusions} Based on the presented results, there is no indication that the induced radioactivity created during eye proton therapy significantly changes the global therapeutic effect. Some curious gamma-ray lines were observed, for example the one at 683 keV, which should be studied further. In order to test the local effects of the induced radioactivity, radiobiological studies should be performed. \section{Forthcoming Research} During particle therapy of deeply located tumours, a more energetic proton beam is used. Therefore an extension of the present experimental activities to higher beam energies is needed. The experiment will be continued also for other types of therapeutic beams, like carbon ions and neutrons. 
\noindent The last step of this project will explore the influence of radioactivity induced in the irradiated tissue on the surrounding non-irradiated cells. \section*{\centering Acknowledgements } The authors would like to thank Dr. Jan Swakoń and his team for performing irradiations and general support. Several samples were measured in the laboratory of Prof. Jerzy Mietelski. We are grateful to him and his collaborators for kind cooperation.
\section{Introduction} Every mathematician learns that the axioms of an ordered field (the properties of the numbers 0 and 1, the binary relation $<$, and the binary operations $+$, $-$, $\times$, $\div$) don't suffice as a basis for real analysis; some sort of heavier-duty axiom is required. Unlike the axioms of an ordered field, which involve only quantification over elements of the field, the heavy-duty axioms require quantification over more complicated objects such as nonempty bounded {\em subsets} of the field, Cauchy {\em sequences} of elements of the field, continuous {\em functions} from the field to itself, ways of cutting the field into ``left'' and ``right'' {\em components}, etc. Many authors of treatises on real analysis remark upon (and prove) the equivalence of various different axiomatic developments of the theory; for instance, Korner \cite{refKa} shows that the Dedekind Completeness Property (every nonempty set that is bounded above has a least upper bound) is equivalent to the Bolzano-Weierstrass Theorem in the presence of the ordered field axioms. There are also a number of essays, such as Hall's \cite{refHa} and Hall and Todorov's \cite{refHT}, that focus on establishing the equivalence of several axioms, each of which asserts in its own way that the real number line has no holes in it. Inasmuch as one of these axioms is the Dedekind Completeness Property, we call such axioms {\bf completeness properties} for the reals. (In this article, ``complete'' will always mean ``Dedekind complete'', except in subsection 5.1. Readers should be warned that other authors use ``complete'' to mean ``Cauchy complete''.) More recently, Teismann \cite{refT} has written an article very similar to this one, with overlapping aims but differing emphases, building on an unpublished manuscript by the author \cite{refP}. 
One purpose of the current article is to stress that, to a much greater extent than is commonly recognized, many theorems of real analysis are completeness properties. The process of developing this observation is in some ways akin to the enterprise of ``reverse mathematics'' pioneered by Harvey Friedman and Stephen Simpson (see e.g.\ \cite{refSd}), wherein one deduces axioms from theorems instead of the other way around. However, the methods and aims are rather different. Reverse mathematics avoids the unrestricted use of ordinary set theory and replaces it by something tamer, namely, second order arithmetic, or rather various sub-systems of second order arithmetic (and part of the richness of reverse mathematics arises from the fact that it can matter very much which subsystem of second order arithmetic one uses). In this article, following the tradition of Halmos' classic text ``Naive Set Theory'' \cite{refHb}, we engage in what might be called naive reverse mathematics, where we blithely quantify over all kinds of infinite sets without worrying about what our universe of sets looks like. Why might a non-logician care about reverse mathematics at all (naive or otherwise)? One reason is that it sheds light on the landscape of mathematical theories and structures. Arguably the oldest form of mathematics in reverse is the centuries-old attempt to determine which theorems of Euclidean geometry are equivalent to the parallel postulate (see \cite[pp. 276--280]{refGM} for a list of such theorems). The philosophical import of this work might be summarized informally as ``Anything that isn't Euclidean geometry is very different from Euclidean geometry.'' In a similar way, the main theme of this essay is that anything that isn't the real number system must be different from the real number system in many ways. 
Speaking metaphorically, one might say that, in the landscape of mathematical theories, real analysis is an isolated point; or, switching metaphors, one might say that the real number system is rigid in the sense that it cannot be subjected to slight deformations. An entertaining feature of real analysis in reverse is that it doesn't merely show us how some theorems of the calculus that look different are in a sense equivalent; it also shows us how some theorems that look fairly similar are {\em not} equivalent. Consider for instance the following three propositions taught in calculus classes: \noindent (A) The Alternating Series Test: If $a_1 \geq a_2 \geq a_3 \geq \dots$ and $a_n \rightarrow 0$, then $\sum_{n=1}^{\infty} (-1)^n a_n$ converges. \noindent (B) The Absolute Convergence Theorem: If $\sum_{n=1}^\infty |a_n|$ converges, then $\sum_{n=1}^\infty a_n$ converges. \noindent (C) The Ratio Test: If $|a_{n+1}/a_n| \rightarrow L$ as $n \rightarrow \infty$, with $L < 1$, then $\sum_{n=1}^{\infty} a_n$ converges. \noindent To our students (and perhaps to ourselves) the three results can seem much of a muchness, yet there is a sense in which one of the three theorems is stronger than the other two. Specifically, one and only one of them is equivalent to completeness (and therefore implies the other two). How quickly can you figure out which is the odd one out? At this point some readers may be wondering what I mean by equivalence. (``If two propositions are theorems, don't they automatically imply each other, since by the rules of logic every true proposition materially implies every other true proposition?'') Every proposition $P$ in real analysis, being an assertion about ${\mathbb{R}}$, can be viewed more broadly as a family of assertions $P(R)$ about ordered fields $R$; one simply takes each explicit or implicit reference to ${\mathbb{R}}$ in the proposition $P$ and replaces it by a reference to some unspecified ordered field $R$. 
Thus, every theorem $P$ is associated with a property $P(\cdot)$ satisfied by ${\mathbb{R}}$ and possibly other ordered fields as well. What we mean when we say that one proposition of real analysis $P$ {\it implies\/} another proposition of real analysis $P'$ is that $P'(R)$ holds whenever $P(R)$ holds, where $R$ varies over all ordered fields; and what we mean by the {\it equivalence\/} of $P$ and $P'$ is that this relation holds in both directions. In particular, when we say that $P$ is a completeness property, or that it can serve as an axiom of completeness, what we mean is that for any ordered field $R$, $P(R)$ holds if and only if $R$ satisfies Dedekind completeness. (In fact, Dedekind proved \cite[p.\ 33]{refBr} that any two ordered fields that are Dedekind complete are isomorphic; that is, the axioms of a Dedekind complete ordered field are {\bf categorical}.) To prove that a property $P$ satisfied by the real numbers is {\it not\/} equivalent to completeness, we need to show that there exists an ordered field that satisfies property $P$ but not the completeness property. So it's very useful to have on hand a number of different ordered fields that are {\it almost\/} the real numbers, but not quite. The second major purpose of the current article is to introduce the reader to some ordered fields of this kind. We will often call their elements ``numbers'', since they behave like numbers in many ways. (This extension of the word ``number'' is standard when one speaks of a different variant of real numbers, namely $p$-adic numbers; however, this article is about {\em ordered} fields, so we will have nothing to say about $p$-adics.) A third purpose of this article is to bring attention to Dedekind's Cut Property (property (3) of section 2). 
Dedekind singled out this property of the real numbers as encapsulating what makes ${\mathbb{R}}$ a continuum, and if the history of mathematics had gone slightly differently, this principle would be part of the standard approach to the subject. However, Dedekind never used this property as the basis of an axiomatic approach to the real numbers; instead, he constructed the real numbers from the rational numbers via Dedekind cuts and then verified that the Cut Property holds. Subsequently, most writers of treatises and textbooks on real analysis adopted the Least Upper Bound Property (aka the Dedekind Completeness Property) as the heavy-duty second order axiom that distinguishes the real number system from its near kin. And indeed the Least Upper Bound Property is more efficient than the Cut Property for purposes of getting the theory of the calculus off the ground. But the Cut Property has a high measure of symmetry and simplicity that is missing from its rival. You can explain it to average calculus students, and even lead them to conjecture it on their own; the only thing that's hard is convincing them that it's nontrivial! The Cut Property hasn't been entirely forgotten (\cite{refAS}, \cite[p.\ 53]{refKb}, and \cite{refMe}) and it's well-known among people who study the axiomatization of Euclidean geometry \cite{refG} or the theory of partially ordered sets and lattices \cite{refWaa}. But it deserves to be better known among the mathematical community at large. This brings me to my fourth purpose, which is pedagogical. 
There is an argument to be made that, in the name of intellectual honesty, we who teach more rigorous calculus courses (often billed as ``honors'' courses) should try to make it clear what assumptions the theorems of the calculus depend on, even if we skip some (or most) of the proofs in the chain of reasoning that leads from the assumptions to the central theorems of the subject, and even if the importance of the assumptions will not be fully clear until the students have taken more advanced courses. It is most common to use the Dedekind Completeness Property or Monotone Sequence Convergence Property for this purpose, and to introduce it explicitly only late in the course, after differentiation and integration have been studied, when the subject shifts to infinite sequences and series. I will suggest some underused alternatives. Note that this article is {\it not} about ways of constructing the real numbers. (The Wikipedia page \cite{refWa} gives both well-known constructions and more obscure ones; undoubtedly many others have been proposed.) This article is about the axiomatic approach to real analysis, and the ways in which the real number system can be characterized by its internal properties. The non-introductory sections of this article are structured as follows. In the second section, I'll state some properties of ordered fields $R$ that hold when $R = {\mathbb{R}}$. In the third section, I'll give some examples of ordered fields that resemble (but aren't isomorphic to) the field of real numbers. In the fourth section, I'll show which of the properties from the second section are equivalent to (Dedekind) completeness and which aren't. In the fifth section, I'll discuss some tentative pedagogical implications. It's been fun for me to write this article, and I have a concrete suggestion for how the reader may have the same kind of fun. 
Start by reading the second section and trying to decide on your own which of the properties are equivalent to completeness and which aren't. If you're stumped in some cases, and suspect but cannot prove that some particular properties aren't equivalent to completeness, read the third section to see if any of the ordered fields discussed there provide a way to see the inequivalence. And if you're still stumped, read the fourth section. You can treat the article as a collection of math puzzles, some (but not all) of which can be profitably contemplated in your head. A reminder about the ground-rules for these puzzles is in order. Remember that we are to interpret every theorem of real analysis as the particular case $R={\mathbb{R}}$ of a family of propositions $P(R)$ about ordered fields $R$. An ordered field is a collection of elements (two of which are named $0_R$ and $1_R$, or just $0$ and $1$ for short), equipped with the relations $<$ and operations $+$, $-$, $\times$, $\div$ satisfying all the usual ``high school math'' properties. (Note that by including subtraction and division as primitives, we have removed the need for existential quantifiers in our axioms; e.g., instead of asserting that for all $x \neq 0$ there exists a $y$ such that $x \times y = y \times x=1$, we simply assert that for all $x \neq 0$, $x \times (1 \div x) = (1 \div x) \times x = 1$. It can be argued that instead of minimizing the number of primitive notions or the number of axioms, axiomatic presentations of theories should minimize the number of existential quantifiers, and indeed this is standard practice in universal algebra.) In any ordered field $R$ we can define notions like $|x|_R$ (the unique $y \geq 0$ in $R$ with $y = \pm x$) and $(a,b)_R$ (the set of $x$ in $R$ with $a < x < b$). 
More complicated notions from real analysis can be defined as well: for instance, given a function $f$ from $R$ to $R$, we can define $f'(a)_R$, the ``derivative of $f$ at $a$ relative to $R$'', as the $L \in R$ (unique if it exists) such that for every $\epsilon > 0_R$ there exists $\delta > 0_R$ such that $|(f(x)-f(a))/(x-a) - L|_R < \epsilon$ for all $x$ in $(a-\delta,a)_R \cup (a,a+\delta)_R$. The subscripts are distracting, so we will omit them, but it should be borne in mind that they are conceptually present. What goes for derivatives of functions applies to other notions of real analysis as well, such as the notion of a convergent sequence: the qualifier ``relative to $R$'', even if unstated, should always be kept in mind. The version of naive set theory we will use includes the axiom of countable choice. A reasonably large subset of real analysis can be set up without countable choice, but many important theorems such as Bolzano-Weierstrass rely on countable choice in an essential way. Whether or not one believes that the axiom of countable choice is true, distrusting countable choice requires a fair amount of foundational sophistication, and therefore cannot in my opinion be considered a truly ``naive'' stance. Every ordered field $R$ contains an abelian semigroup ${\mathbb{N}}_R = \{ 1_R, \ 1_R+1_R, \ 1_R+1_R+1_R, \ \dots\}$ isomorphic to ${\mathbb{N}}$; ${\mathbb{N}}_R$ may be described as the intersection of all subsets of $R$ that contain $1_R$ and are closed under the operation that sends $x$ to $x+1_R$. Likewise every ordered field $R$ contains an abelian group ${\mathbb{Z}}_R$ isomorphic to ${\mathbb{Z}}$, and a subfield ${\mathbb{Q}}_R$ isomorphic to ${\mathbb{Q}}$. We shall endow an ordered field $R$ with the {\bf order topology}, that is, the topology generated by basic open sets of the form $(a,b)_R$. 
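The ``relative to $R$'' reading of such definitions can be made concrete in exact arithmetic. The following sketch (purely illustrative, not part of the text's development) models $R = {\mathbb{Q}}$ with Python's \texttt{Fraction} type and spot-checks the $\epsilon$-$\delta$ condition for the derivative of $f(x) = x^2$ at $a = 3$: here $|(f(x)-f(a))/(x-a) - 6| = |x - 3|$, so $\delta = \epsilon$ works.

```python
from fractions import Fraction

def difference_quotient(f, a, x):
    return (f(x) - f(a)) / (x - a)

f = lambda x: x * x              # f(x) = x^2 over the ordered field Q
a, L = Fraction(3), Fraction(6)  # candidate derivative L = 6 at a = 3

# Since |(f(x)-f(a))/(x-a) - L| = |x - a|, the choice delta = epsilon
# witnesses the definition; spot-check a few epsilons and sample points.
for eps in (Fraction(1, 10), Fraction(1, 1000)):
    delta = eps
    for x in (a - delta / 2, a + delta / 3):
        assert abs(difference_quotient(f, a, x) - L) < eps
```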
\section{Some theorems of analysis} The following propositions about an ordered field $R$ (and about associated structures such as $[a,b]=[a,b]_R=\{x \in R:\ a \leq x \leq b\}$) are true when the ordered field $R$ is taken to be ${\mathbb{R}}$, the field of real numbers. (1) The Dedekind Completeness Property: Suppose $S$ is a nonempty subset of $R$ that is bounded above. Then there exists a number $c$ that is an upper bound of $S$ such that every upper bound of $S$ is greater than or equal to $c$. (2) The Archimedean Property: For every $x \in R$ there exists $n \in {\mathbb{N}}_R$ with $n > x$. Equivalently, for every $x \in R$ with $x > 0$ there exists $n \in {\mathbb{N}}_R$ with $1/n < x$. (3) The Cut Property: Suppose $A$ and $B$ are nonempty disjoint subsets of $R$ whose union is all of $R$, such that every element of $A$ is less than every element of $B$. Then there exists a cutpoint $c \in R$ such that every $x < c$ is in $A$ and every $x>c$ is in $B$. (Or, if you prefer: Every $x \in A$ is $\leq c$, and every $x \in B$ is $\geq c$. It's easy to check that the two versions are equivalent.) Since this property may be unfamiliar, we remark that the Cut Property follows immediately from Dedekind completeness (take $c$ to be the least upper bound of $A$). (4) Topological Connectedness: Say $S \subseteq R$ is open if for every $x$ in $S$ there exists $\epsilon > 0$ so that every $y$ with $|y-x| < \epsilon$ is also in $S$. Then there is no way to express $R$ as a union of two disjoint nonempty open sets. That is, if $R = A \cup B$ with $A,B$ nonempty and open, then $A \cap B$ is nonempty. (5) The Intermediate Value Property: If $f$ is a continuous function from $[a,b]$ to $R$, with $f(a) < 0$ and $f(b) > 0$, then there exists $c$ in $(a,b)$ with $f(c) = 0$. (6) The Bounded Value Property: If $f$ is a continuous function from $[a,b]$ to $R$, there exists $B$ in $R$ with $f(x) \leq B$ for all $x$ in $[a,b]$. 
(7) The Extreme Value Property: If $f$ is a continuous function from $[a,b]$ to $R$, there exists $c$ in $[a,b]$ with $f(x) \leq f(c)$ for all $x$ in $[a,b]$. (8) The Mean Value Property: Suppose $f: [a,b] \rightarrow R$ is continuous on $[a,b]$ and differentiable on $(a,b)$. Then there exists $c$ in $(a,b)$ such that $f'(c) = (f(b)-f(a))/(b-a)$. (9) The Constant Value Property: Suppose $f: [a,b] \rightarrow R$ is continuous on $[a,b]$ and differentiable on $(a,b)$, with $f'(x) = 0$ for all $x$ in $(a,b)$. Then $f(x)$ is constant on $[a,b]$. (10) The Convergence of Bounded Monotone Sequences: Every monotone increasing (or decreasing) sequence in $R$ that is bounded converges to some limit. (11) The Convergence of Cauchy Sequences: Every Cauchy sequence in $R$ is convergent. (12) The Fixed Point Property for Closed Bounded Intervals: Suppose $f$ is a continuous map from $[a,b] \subset R$ to itself. Then there exists $x$ in $[a,b]$ such that $f(x) = x$. (13) The Contraction Map Property: Suppose $f$ is a map from $R$ to itself such that for some constant $c < 1$, $|f(x) - f(y)| \leq c|x-y|$ for all $x,y$. Then there exists $x$ in $R$ such that $f(x) = x$. (14) The Alternating Series Test: If $a_1 \geq a_2 \geq a_3 \geq \dots$ and $a_n \rightarrow 0$, then $\sum_{n=1}^{\infty} (-1)^n a_n$ converges. (15) The Absolute Convergence Property: If $\sum_{n=1}^\infty |a_n|$ converges in $R$, then $\sum_{n=1}^\infty a_n$ converges in $R$. (16) The Ratio Test: If $|a_{n+1}/a_n| \rightarrow L$ in $R$ as $n \rightarrow \infty$, with $L < 1$, then $\sum_{n=1}^{\infty} a_n$ converges in $R$. (17) The Shrinking Interval Property: Suppose $I_1 \supseteq I_2 \supseteq \dots$ are bounded closed intervals in $R$ with lengths decreasing to 0. Then the intersection of the $I_n$'s is nonempty. (18) The Nested Interval Property: Suppose $I_1 \supseteq I_2 \supseteq \dots$ are bounded closed intervals in $R$. Then the intersection of the $I_n$'s is nonempty. 
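One way to appreciate what these properties assert is to watch one of them fail in a familiar ordered field. The sketch below (illustrative only) sets up the cut of ${\mathbb{Q}}$ at $\sqrt{2}$, with $A = \{q : q < 0 \mbox{ or } q^2 < 2\}$ and $B$ its complement: bisection with exact rationals narrows the cut forever without ever producing a cutpoint, so the Cut Property (3) fails for ${\mathbb{Q}}$.

```python
from fractions import Fraction

def in_A(q):
    # A = {q in Q : q < 0 or q*q < 2}; B is the complement of A in Q.
    return q < 0 or q * q < 2

lo, hi = Fraction(1), Fraction(2)   # lo lies in A, hi lies in B
for _ in range(50):
    mid = (lo + hi) / 2
    if in_A(mid):
        lo = mid
    else:
        hi = mid

# The cut keeps narrowing (the width is now exactly 2**-50), yet neither
# endpoint -- nor any rational -- squares to exactly 2, so no cutpoint
# exists in Q.
assert in_A(lo) and not in_A(hi)
assert lo * lo < 2 < hi * hi
```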
\section{Some ordered fields} The categoricity of the axioms for ${\mathbb{R}}$ tells us that any ordered field that is Dedekind-complete must be isomorphic to ${\mathbb{R}}$. So one (slightly roundabout) way to see that the ordered field of rational numbers ${\mathbb{Q}}$ fails to satisfy completeness is to note that it contains too few numbers to be isomorphic to ${\mathbb{R}}$. The same goes for the field of real algebraic numbers. There are even bigger proper subfields of ${\mathbb{R}}$; for instance, Zorn's Lemma implies that among the ordered subfields of ${\mathbb{R}}$ that don't contain $\pi$, there exists one that is maximal with respect to this property. But most of the ordered fields we'll wish to consider have the opposite problem: they contain too {\em many} ``numbers''. Such fields may be unfamiliar, but logic tells us that number systems of this kind must exist. (Readers averse to ``theological'' arguments might prefer to skip this paragraph and the next and proceed directly to a concrete construction of such a number system two paragraphs below, but I think there is value in an approach that convinces us ahead of time that the goal we seek is not an illusory one, and shows that such number systems exist without committing to one such system in particular.) Take the real numbers and adjoin a new number $n$, satisfying the infinitely many axioms $n > 1$, $n > 2$, $n > 3$, etc. Every finite subset of this infinite set of first-order axioms (together with the set of ordered-field axioms) has a model, so by the compactness principle of first-order logic (see e.g.\ \cite{refDM}), these infinitely many axioms must have a model. (Indeed, I propose that the compactness principle is the core of validity inside the widespread student misconception that 0.999\dots is different from 1; on some level, students may be reasoning that if the intervals $[0.9,1.0)$, $[0.99,1.00)$, $[0.999,1.000)$, etc.\ are all nonempty, then their intersection is nonempty as well. 
The compactness principle tells us that there must exist ordered fields in which the intersection of these intervals is nonempty. Perhaps we should give these students credit for intuiting, in a murky way, the existence of non-Archimedean ordered fields!) But what does an ordered field with infinite elements (and their infinitesimal reciprocals) look like? One such model is given by rational functions in one variable, ordered by their behavior at infinity; we call this variable $\omega$ rather than the customary $x$, since it will turn out to be bigger than every real number, under the natural embedding of ${\mathbb{R}}$ in ${\mathbb{R}}(\omega)$. Given two rational functions $q(\omega)$ and $q'(\omega)$, decree that $q(\omega) > q'(\omega)$ iff $q(r) > q'(r)$ for all sufficiently large real numbers $r$. One can show (see e.g.\ \cite{refKa}) that this turns ${\mathbb{R}}(\omega)$ into an ordered field. We may think of our construction as the process of adjoining a formal infinity to ${\mathbb{R}}$. Alternatively, we can construct an ordered field isomorphic to ${\mathbb{R}}(\omega)$ by adjoining a formal infinitesimal $\varepsilon$ (which the isomorphism identifies with $1/\omega$): given two rational functions $q(\varepsilon)$ and $q'(\varepsilon)$, decree that $q(\varepsilon) > q'(\varepsilon)$ iff $q(r) > q'(r)$ for all positive real numbers $r$ sufficiently close to 0. Note that this ordered field is non-Archimedean: just as $\omega$ is bigger than every real number, the positive element $\varepsilon$ is less than every positive real number. We turn next to the field of formal Laurent series. A formal Laurent series is a formal expression $\sum_{n \geq N} a_n \varepsilon^n$ where $N$ is some non-positive integer and the $a_n$'s are arbitrary real numbers; the associated finite sum $\sum_{N \leq n < 0} a_n \varepsilon^n$ is called its {\bf principal part}.
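Since the sign of $q(r)-q'(r)$ for all sufficiently large real $r$ is determined by leading coefficients, the ordering of ${\mathbb{R}}(\omega)$ can be computed mechanically. The following Python sketch is purely illustrative (the helper names \texttt{gt}, \texttt{lead}, etc.\ are ad hoc, not from any library); it represents a rational function as a pair of coefficient lists:

```python
from fractions import Fraction

# A rational function q(omega) is a pair (num, den) of polynomials,
# each given as a low-to-high list of coefficients (den not identically 0).

def poly_sub(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) - (q[i] if i < len(q) else 0)
            for i in range(n)]

def poly_mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def lead(p):
    # leading (highest-degree) nonzero coefficient; 0 for the zero polynomial
    for c in reversed(p):
        if c != 0:
            return c
    return 0

def gt(q1, q2):
    # Decide q1(omega) > q2(omega): the sign of q1(r) - q2(r) for all
    # sufficiently large real r is the sign of the quotient of leading
    # coefficients, since the highest-degree terms dominate at infinity.
    (n1, d1), (n2, d2) = q1, q2
    num = poly_sub(poly_mul(n1, d2), poly_mul(n2, d1))   # numerator of q1 - q2
    den = poly_mul(d1, d2)
    return lead(num) * lead(den) > 0

omega = ([0, 1], [1])                      # the rational function omega itself
eps   = ([1], [0, 1])                      # its reciprocal 1/omega
const = lambda c: ([c], [1])               # a real number as a constant function

print(gt(omega, const(10**9)))             # omega exceeds every real: True
print(gt(eps, const(0)))                   # 1/omega is positive: True
print(gt(const(Fraction(1, 10**9)), eps))  # ...yet below every positive real: True
```

Comparing leading coefficients sidesteps the need to exhibit any particular ``sufficiently large'' $r$.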
One can define field operations on such expressions by mimicking the ordinary rules of adding, subtracting, multiplying, and dividing Laurent series, without regard to issues of convergence. The leading term of such an expression is the nonvanishing term $a_n \varepsilon^n$ for which $n$ is as small as possible, and we call the expression positive or negative according to the sign of the leading term. In this way we obtain an ordered field. This field is denoted by ${\mathbb{R}}((\varepsilon))$, and the field ${\mathbb{R}}(\varepsilon)$ discussed above may be identified with a subfield of it. In this larger field, a sequence of Laurent series converges if and only if the sequence of principal parts stabilizes (i.e., is eventually constant) and for every integer $n \geq 0$ the sequence of coefficients of $\varepsilon^n$ stabilizes. In particular, $1,\varepsilon,\varepsilon^2,\varepsilon^3,\dots$ converges to 0 but $1,\frac12,\frac14,\frac18,\dots$ does not. The same holds for $R = {\mathbb{R}}(\varepsilon)$; the sequence $1,\frac12,\frac14,\frac18,\dots$ does not converge to 0 relative to $R$ because every term differs from 0 by more than $\varepsilon$. Then come the really large non-Archimedean ordered fields. There are non-Archimedean ordered fields so large (that is, equipped with so many infinitesimal elements) that the ordinary notion of convergence of sequences becomes trivial: all convergent sequences are eventually constant. In such an ordered field, the only way to ``sneak up'' on an element from above or below is with a generalized sequence whose terms are indexed by some uncountable ordinal, rather than the countable ordinal $\omega$. Define the {\bf cofinality} of an ordered field as the smallest possible cardinality of an unbounded subset of the field (so that for instance the real numbers, although uncountable, have countable cofinality). 
The cardinality and cofinality of a non-Archimedean ordered field can be as large as you like (or dislike!); along with Cantor's hierarchies of infinities comes an even more complicated hierarchy of non-Archimedean ordered fields. The cofinality of an ordered field is easily shown to be a regular cardinal, where a cardinal $\kappa$ is called {\bf regular} iff a set of cardinality $\kappa$ cannot be written as the union of fewer than $\kappa$ sets each of cardinality less than $\kappa$. In an ordered field $R$ of cofinality $\kappa$, the right notion of a sequence is a ``$\kappa$-sequence'', defined as a function from the ordinal $\kappa$ (that is, from the set of all ordinals $\alpha < \kappa$) to $R$. A sequence whose length is less than the cofinality of $R$ can converge only if it is eventually constant. Curiously, if one uses this generalized notion of a sequence, some of the large ordered fields can be seen to have properties of generalized compactness reminiscent of the real numbers. More specifically, for $\kappa$ regular, say that an ordered field $R$ of cofinality $\kappa$ satisfies the $\kappa$-Bolzano-Weierstrass Property if every bounded $\kappa$-sequence $(x_{\alpha})_{\alpha<\kappa}$ in $R$ has a convergent $\kappa$-subsequence. Then a theorem of Sikorski \cite{refSc} states that for every uncountable regular cardinal $\kappa$ there is an ordered field of cardinality and cofinality $\kappa$ that satisfies the $\kappa$-Bolzano-Weierstrass Property. For more background on non-Archimedean ordered fields (and generalizations of the Bolzano-Weierstrass Property in particular), readers can consult \cite{refJS} and/or \cite{refSa}. Lastly, there is the Field of surreal numbers ${\bf No}$, which contains {\it all\/} ordered fields as subfields. Following Conway \cite{refCo} we call it a Field rather than a field because its elements form a proper class rather than a set. 
One distinguishing property of the surreal numbers is the fact that for any two sets of surreal numbers $A,B$ such that every element of $A$ is less than every element of $B$, there exists a surreal number that is greater than every element of $A$ and less than every element of $B$. (This does not apply if $A$ and $B$ are proper classes, as we can see by letting $A$ consist of 0 and the negative surreal numbers and $B$ consist of the positive surreal numbers.) See the Wikipedia page \cite{refWb} for information on other ordered fields, such as the Levi-Civita field and the field of hyper-real numbers. \section{Some proofs} Here we give (sometimes abbreviated) versions of the proofs of equivalence and inequivalence. \bigskip The Archimedean Property (2) does not imply the Dedekind Completeness Property (1): The ordered field of rational numbers satisfies the former but not the latter. (Note however that (1) does imply (2): ${\mathbb{N}}_R$ is nonempty, so if ${\mathbb{N}}_R$ were bounded above, it would have a least upper bound $c$ by (1). Then for every $n \in {\mathbb{N}}_R$ we would have $n+1 \leq c$ (since $n+1$ is in ${\mathbb{N}}_R$ and $c$ is an upper bound for ${\mathbb{N}}_R$), implying $n \leq c-1$. But then $c-1$ would be an upper bound for ${\mathbb{N}}_R$, contradicting our choice of $c$ as a least upper bound for ${\mathbb{N}}_R$. This shows that ${\mathbb{N}}_R$ is not bounded above, which is (2).) \bigskip The Cut Property (3) implies completeness (1): Given a nonempty set $S \subseteq R$ that is bounded above, let $B$ be the set of upper bounds of $S$ and $A$ be its complement. $A$ and $B$ satisfy the hypotheses of (3), so there exists a number $c$ such that everything less than $c$ is in $A$ and everything greater than $c$ is in $B$. It is easy to check that $c$ is a least upper bound of $S$. (To show that $c$ is an upper bound of $S$, suppose some $s$ in $S$ exceeds $c$. 
Since $(s+c)/2$ exceeds $c$, it belongs to $B$, so by the definition of $B$ it must be an upper bound of $S$, which is impossible since $s > (s+c)/2$. To show that $c$ is a least upper bound of $S$, suppose that some $a < c$ is an upper bound of $S$. But $a$ (being less than $c$) is in $A$, so it can't be an upper bound of $S$.) \bigskip In view of the preceding result, the Cut Property is a completeness property, and to prove that some other property is a completeness property, it suffices to show that it implies the Cut Property. Hereafter we will write ``Property ($n$) implies completeness by way of the Cut Property (3)'' to mean ``($n$) $\Rightarrow$ (3) $\Rightarrow$ (1).'' When a detour through the Archimedean Property is required as a lemma to the proof of the Cut Property, we will write ``Property ($n$) implies completeness by way of the Archimedean Property (2) and the Cut Property (3)'' to give a road-map of the argument that follows. \bigskip Topological Connectedness (4) implies completeness by way of the Cut Property (3): We prove the contrapositive. Let $A$ and $B$ be sets satisfying the hypotheses of the Cut Property but violating its conclusion: there exists no $c$ such that everything less than $c$ is in $A$ and everything greater than $c$ is in $B$. (That is, suppose $A,B$ is a ``bad cut'', which we also call a {\bf gap}.) Then for every $a$ in $A$ there exists $a'$ in $A$ with $a' > a$, and for every $b$ in $B$ there exists $b'$ in $B$ with $b' < b$. From this it readily follows that the sets $A$ and $B$ are open, so that Topological Connectedness must fail. \bigskip The Intermediate Value Property (5) implies completeness by way of the Cut Property (3): We again prove the contrapositive. (Indeed, we will use this mode of proof so often that henceforth we will omit the preceding prefatory sentence.) Let $A,B$ be a gap. The function that is $-1$ on $A$ and $1$ on $B$ is continuous and violates the conclusion of the Intermediate Value Property. 
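The same two-valued function can be exhibited concretely over ${\mathbb{Q}}$, which has a gap at $\sqrt 2$. Here is a small Python sketch (illustrative only, using exact rational arithmetic): since no rational squares to 2, the function below is locally constant, hence continuous, at every rational, yet it jumps from $-1$ to $1$ without ever taking the intermediate value 0.

```python
from fractions import Fraction

def f(x):
    # -1 on A, +1 on B for the gap in Q at sqrt(2):
    #   A = {x : x < 0 or x*x < 2},  B = {x : x > 0 and x*x >= 2}.
    # No rational squares to 2, so every rational lies strictly inside
    # A or B; hence f is locally constant (continuous) at each rational,
    # even though it never takes the value 0.
    return -1 if x < 0 or x * x < 2 else 1

print(f(Fraction(7, 5)), f(Fraction(3, 2)))   # prints: -1 1
```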
\bigskip The Bounded Value Property (6) does not imply completeness: Counterexamples are provided by the theorem of Sikorski referred to earlier (near the end of Section 3), once we prove that the $\kappa$-Bolzano-Weierstrass Property implies the Bounded Value Property. (I am indebted to Ali Enayat for suggesting this approach and for supplying all the details that appear below.) Suppose that $\kappa$ is a regular cardinal and that $R$ is an ordered field with cofinality $\kappa$ satisfying the $\kappa$-Bolzano-Weierstrass Property. Then I claim that $R$ satisfies the Bounded Value Property. For, choose an increasing unbounded sequence $(x_\alpha \, : \, \alpha \in \kappa)$ of elements of $R$. Suppose that $f$ is continuous on $[a,b]$ but that there exists no $B$ with $f(x) \leq B$ for all $x \in [a,b]$. For each $\alpha \in \kappa$, choose some $t_\alpha \in [a,b]$ with $f(t_{\alpha}) > x_{\alpha}$. The $t_\alpha$'s are bounded, so by the $\kappa$-Bolzano-Weierstrass Property there exists some subset $U$ of $\kappa$ such that $(t_\alpha \, : \, \alpha \in U)$ is a $\kappa$-subsequence that converges to some $c \in [a,b]$. (Note that $U$ must be unbounded; otherwise $(t_\alpha \, : \, \alpha \in U)$ would be a $\beta$-sequence for some $\beta < \kappa$ rather than a $\kappa$-sequence.) By the continuity of $f$, the sequence $(f(t_\alpha) \, : \, \alpha \in U)$ converges to $f(c)$. We now digress to prove a small lemma, namely, that every convergent $\kappa$-sequence $(r_\alpha \in R \, : \, \alpha \in \kappa)$ is bounded. Since $(r_\alpha \, : \, \alpha \in \kappa)$ converges to some limit $r$, there exists a $\beta < \kappa$ such that for all $\alpha \geq \beta$, $r_\alpha$ lies in $(r-1,r+1)$; then the tail-set $\{r_\alpha \, : \, \alpha \geq \beta\}$ is bounded. On the other hand, since $\kappa$ is a regular cardinal, and since $R$ has cofinality $\kappa$, the complementary set $\{r_\alpha \, : \, \alpha < \beta\}$ is too small to be unbounded.
Taking the union of these two sets, we see that the set $\{r_\alpha \in R \, : \, \alpha \in \kappa\}$ is bounded. Applying this lemma to the convergent sequence $(f(t_\alpha) \, : \, \alpha \in U)$, we see that $(f(t_\alpha) \, : \, \alpha \in U)$ is bounded. But this is impossible, since the set $U$ is unbounded and since our original increasing sequence $(x_\alpha \, : \, \alpha \in \kappa)$ was unbounded. This contradiction shows that $f([a,b])$ is bounded above. Hence $R$ has the Bounded Value Property, as claimed. (Note the resemblance between the preceding proof and the usual real-analysis proof that every continuous real-valued function on an interval $[a,b]$ is bounded.) One detail omitted from the above argument is a proof that uncountable regular cardinals exist (without which Sikorski's theorem is vacuous). The axiom of countable choice implies that a countable union of countable sets is countable, so $\aleph_1$, the first uncountable cardinal, is a regular cardinal. \bigskip The Extreme Value Property (7) implies completeness by way of the Cut Property (3): Suppose $A,B$ is a gap, and for convenience assume $1 \in A$ and $2 \in B$ (the general case may be obtained from this special case by straightforward algebraic modifications). Define $$f(x) = \left\{ \begin{array}{ll} x & \mbox{if $x \in A$}, \\ 0 & \mbox{if $x \in B$} \end{array} \right.$$ for $x$ in $[0,3]$. Then $f$ is continuous on $[0,3]$ but there does not exist $c \in [0,3]$ with $f(x) \leq f(c)$ for all $x$ in $[0,3]$. For, such a $c$ would have to be in $A$ (since $f$ takes positive values on $[0,3] \cap A$, e.g.\ at $x=1$, while $f$ vanishes on $[0,3] \cap B$), but for any $c \in [0,3] \cap A$ there exists $c' \in [0,3] \cap A$ with $c' > c$, so that $f(c') > f(c)$. \bigskip The Mean Value Property (8) implies the Constant Value Property (9): Trivial. \bigskip The Constant Value Property (9) implies completeness by way of the Cut Property (3): Suppose $A,B$ is a gap. 
Again consider the function $f$ that equals $-1$ on $A$ and 1 on $B$. It has derivative 0 everywhere, yet it isn't constant on $[a,b]$ if one takes $a \in A$ and $b \in B$. \bigskip The Convergence of Bounded Monotone Sequences (10) implies completeness by way of the Archimedean Property (2) and the Cut Property (3): If $R$ does not satisfy the Archimedean Property, then there must exist an element of $R$ that is greater than every term of the sequence $1,2,3,\dots$. By the Convergence of Bounded Monotone Sequences, this sequence must converge, say to $r$. This implies that $0,1,2,\dots$ also converges to $r$. Now subtract the two sequences; by the algebraic limit laws that are easily derived from the ordered field axioms and the definition of limits, one finds that $1,1,1,\dots$ converges to 0, which is impossible. Therefore $R$ must satisfy the Archimedean Property. Now suppose we are given a cut $A,B$. For $n \geq 0$ in ${\mathbb{N}}$, let $a_n$ be the largest element of $2^{-n} {\mathbb{Z}}_R$ in $A$ and $b_n$ be the smallest element of $2^{-n} {\mathbb{Z}}_R$ in $B$. $(a_n)$ and $(b_n)$ are bounded monotone sequences, so by the Convergence of Bounded Monotone Sequences they converge, and since (by the Archimedean Property) $a_n - b_n$ converges to 0, $a_n$ and $b_n$ must converge to the same limit; call it $c$. We have $a_n \leq c \leq b_n$ for all $n$, so $|a_n - c|$ and $|b_n - c|$ are both at most $2^{-n}$. From the Archimedean Property it follows that for every $\epsilon > 0$ there exists $n$ with $2^{-n} < \epsilon$, and for this $n$ we have $a_n \in A$ and $b_n \in B$ satisfying $a_n > c - \epsilon$ and $b_n < c + \epsilon$. Hence for every $\epsilon > 0$, every number less than or equal to $c - \epsilon$ is in $A$ and every number greater than or equal to $c + \epsilon$ is in $B$. Therefore every number less than $c$ is in $A$ and every number greater than $c$ is in $B$, which verifies the Cut Property.
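For a concrete cut one can compute the dyadic approximants $a_n$ and $b_n$ explicitly. The sketch below (illustrative Python; the name \texttt{dyadic\_bounds} is ad hoc) does this for the cut of ${\mathbb{Q}}$ determined by $\sqrt 2$, using exact integer square roots:

```python
from fractions import Fraction
from math import isqrt

def dyadic_bounds(n):
    # a_n = largest element of 2^{-n} Z in A = {x : x < 0 or x^2 < 2},
    # b_n = a_n + 2^{-n}, the smallest element of 2^{-n} Z in B.
    k = isqrt(2 * 4**n)             # floor(2^n * sqrt(2)), via exact integer sqrt
    a = Fraction(k, 2**n)           # k^2 < 2*4^n strictly, since sqrt(2) is irrational
    return a, a + Fraction(1, 2**n)

for n in (1, 4, 16):
    a, b = dyadic_bounds(n)
    print(n, a, b)                  # a_n squeezes up, b_n squeezes down, b_n - a_n = 2^{-n}
```

In ${\mathbb{R}}$ the two sequences converge to the cutpoint $\sqrt 2$; in ${\mathbb{Q}}$ they converge to nothing, which is exactly how the gap manifests itself.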
\bigskip The Convergence of Cauchy Sequences (11) does not imply completeness: Every Cauchy sequence in the field of formal Laurent series converges, but the field does not satisfy the Cut Property. Call a formal Laurent series {\bf finite} if it is of the form $\sum_{n \geq 0} a_n \varepsilon^n$; otherwise, call it {\bf positively infinite} or {\bf negatively infinite} according to the sign of its leading term $a_n$ ($n < 0$). If we let $A$ be the set of all finite or negatively infinite formal Laurent series, and we let $B$ be the set of all positively infinite formal Laurent series, then $A,B$ is a gap. On the other hand, it is easy to show that this ordered field satisfies property (11). Note also that if one defines the norm of a formal Laurent series $\sum_{n \geq N} a_n \varepsilon^n$ (with $a_N \neq 0$) as $2^{-N}$ and defines the distance between two series as the norm of their difference, then one obtains a complete metric space whose metric topology coincides with the order topology introduced above. \bigskip The Fixed Point Property for Closed Bounded Intervals (12) implies completeness by way of the Cut Property (3): Let $A,B$ be a gap of $R$. Pick $a$ in $A$ and $b$ in $B$, and define $f:[a,b] \rightarrow R$ by putting $f(x) = b$ for $x \in A$ and $f(x) = a$ for $x \in B$. Then $f$ is continuous but has no fixed point. \bigskip The Contraction Map Property (13) implies completeness by way of the Archimedean Property (2) and the Cut Property (3): Here is an adaptation of a solution found by George Lowther \cite{refL}. First we will show that $R$ is Archimedean. Suppose not. Call $x$ in $R$ {\bf finite} if $-n < x < n$ for some $n$ in ${\mathbb{N}}_R$, and {\bf infinite} otherwise. Let $$f(x) = \left\{ \begin{array}{ll} \frac12 x & \mbox{if $x$ is infinite}, \\ x + \frac12 g(x) & \mbox{if $x$ is finite} \end{array} \right.$$ with $$g(x) = 1 - \frac{x}{1+|x|},$$ a decreasing function of $x$ taking values in $(0,2)$. 
For all finite $x,y$ with $x>y$ one has $(g(y)-g(x))/(x-y) \geq 1/\bigl((1+|x|)(1+|y|)\bigr)$ (indeed, the left-hand side minus the right-hand side equals 0 in the cases $x > y \geq 0$ or $0 \geq x > y$ and equals $-2xy/\bigl((1+x)(1-y)(x-y)\bigr) > 0$ in the case $x > 0 > y$), so for all $x>y$ in $[-a,a]$ we have $(g(y)-g(x))/(x-y) \geq 1/(1+a)^2$, implying that $|(f(x)-f(y))/(x-y)| \leq 1 - \frac12 / (1+a)^2$ for all $x,y$ in $[-a,a]$. Taking $c = 1 - \frac12 / \omega^2$ with $\omega > n$ for all $n$ in ${\mathbb{N}}_R$, we obtain $|f(x)-f(y)| < c|x-y|$ for all finite $x,y$. This inequality can also be shown to hold when one or both of $x,y$ is infinite. Hence $f$ is a contraction map, yet it has no finite or infinite fixed points; contradiction. Now we want to prove that $R$ satisfies the Cut Property. Suppose not. Let $A,B$ be a gap of $R$, and let $a_n = \max \, A \cap 2^{-n} {\mathbb{Z}}_R$ and $b_n = \min \, B \cap 2^{-n} {\mathbb{Z}}_R$. Since $A,B$ is a gap, neither of the sequences $(a_n)_{n=1}^{\infty}$ and $(b_n)_{n=1}^{\infty}$ can be eventually constant, so there exist $n_1 < n_2 < n_3 < \dots$ such that the sequences $(x_k)_{k=1}^\infty$ and $(y_k)_{k=1}^\infty$ with $x_k = a_{n_k}$ and $y_k = b_{n_k}$ are strictly monotone, with $0 < y_k - x_k \leq 2^{-k}$. By the Archimedean Property, every element of $A$ lies in $(-\infty,x_1]$ or in one of the intervals $[x_{k},x_{k+1}]$, and every element of $B$ lies in $[y_1,\infty)$ or in one of the intervals $[y_{k+1},y_{k}]$. Now consider the continuous map $h$ that has slope $\frac12$ on $(-\infty,x_1]$ and on $[y_1,\infty)$, sends $x_k$ to $x_{k+1}$ and $y_k$ to $y_{k+1}$ for all $k$, and is piecewise linear away from the points $x_k$, $y_k$; it is well-defined because these intervals cover $R$, and by looking at its behavior on each of those intervals we can see that it has no fixed points. On the other hand, $h$ is a contraction map with contraction constant $\frac12$. Contradiction.
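A numerical aside may clarify why this argument is unavailable over ${\mathbb{R}}$: on the reals, the finite branch of $f$ has Lipschitz ratios that creep arbitrarily close to 1, so $f$ is not a contraction there (and it has no real fixed point either), leaving the Contraction Map Property intact. The following Python sketch (illustrative only, with ad hoc names) checks this with exact arithmetic:

```python
from fractions import Fraction

def g(x):
    return 1 - x / (1 + abs(x))

def f(x):
    # the "finite" branch of the map from the proof above
    return x + g(x) / 2

def lipschitz_ratio(x, y):
    return abs((f(x) - f(y)) / (x - y))

# On [-a, a] the ratio is bounded by 1 - (1/2)/(1+a)^2 < 1, but the bound
# tends to 1 as a grows: over R there is no single contraction constant
# c < 1 that works everywhere, so the Contraction Map Property is safe.
for a in (1, 10, 1000):
    r = lipschitz_ratio(Fraction(a), Fraction(a - 1))
    print(a, float(r))
```

Only in a non-Archimedean field, where $c = 1 - \frac12/\omega^2$ is an admissible constant below 1, do all these ratios fall under a single bound.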
\bigskip The Alternating Series Test (14) does not imply completeness: In the field of formal Laurent series, every series whose terms tend to zero (whether or not they alternate in sign) is summable, so the Alternating Series Test holds even though the Cut Property doesn't. \bigskip The Absolute Convergence Property (15) does not imply completeness: The field of formal Laurent series has the property that every absolutely convergent series is convergent (and indeed the reverse is true as well!), but it does not satisfy the Cut Property. \bigskip The Ratio Test (16) implies completeness by way of the Archimedean Property (2) and the Cut Property (3): Note that the Ratio Test implies that $\frac12 + \frac14 + \frac18 + \dots$ converges, implying that $R$ is Archimedean (the sequence of partial sums $\frac12$, $\frac34$, $\frac78$, \dots isn't even a Cauchy sequence if there exists an $\epsilon > 0$ that is less than $1/n$ for all $n$). Now we make use of the important fact (which we have avoided making use of up till now, for esthetic reasons, but which could be used to expedite some of the preceding proofs) that every Archimedean ordered field is isomorphic to a subfield of the reals. (See the next paragraph for a proof.) To show that a subfield of the reals that satisfies the Ratio Test must contain every real number, it suffices to note that every real number can be written as a sum $n \pm \frac12 \pm \frac14 \pm \frac18 \pm \dots$ that satisfies the hypotheses of the Ratio Test. \bigskip Every Archimedean ordered field is isomorphic to a subfield of the reals: For every $x$ in $R$, let $S_x$ be the set of elements of ${\mathbb{Q}}$ whose counterparts in ${\mathbb{Q}}_R$ are less than $x$, and let $\phi(x)$ be the least upper bound of $S_x$. The Archimedean Property can be used to show that $\phi$ is an injection, and with some work one can verify that it is also a field homomorphism. For more on completion of ordered fields, see \cite{refSb}. 
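The signed expansion $n \pm \frac12 \pm \frac14 \pm \frac18 \pm \dots$ of a given real number can be produced greedily: take $n$ to be the nearest integer, then at stage $k$ add $+2^{-k}$ or $-2^{-k}$ according to whether the running sum is below or above the target; the error after stage $k$ is at most $2^{-k}$. A Python sketch (illustrative; \texttt{signed\_binary} is an ad hoc name):

```python
from fractions import Fraction

def signed_binary(x, k):
    # Write x as n +/- 1/2 +/- 1/4 +/- ... +/- 2^{-k}, up to error at most 2^{-k}.
    n = round(x)                     # nearest integer: |x - n| <= 1/2
    s = Fraction(n)
    signs = []
    for i in range(1, k + 1):
        sgn = 1 if x >= s else -1    # steer the running sum toward x
        signs.append(sgn)
        s += Fraction(sgn, 2**i)     # invariant: |x - s| <= 2^{-i} after stage i
    return n, signs, s

n, signs, s = signed_binary(Fraction(3, 10), 20)
print(n, signs[:5], float(s))
```

The invariant in the loop is exactly what makes the partial sums satisfy the hypotheses of the Ratio Test, with ratio $\frac12$ at every step.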
\bigskip The Shrinking Interval Property (17) does not imply completeness: The field of formal Laurent series satisfies the former but not the latter. For details, see \cite[pp.\ 212--215]{refEf}. \bigskip The Nested Interval Property (18) does not imply completeness: The surreal numbers are a counterexample. (Note however that the field of formal Laurent series is {\it not\/} a counterexample; although it satisfies the Shrinking Interval Property, it does not satisfy the Nested Interval Property, since for instance the nested closed intervals $[n,\omega/n]$ (where $\omega = 1/\varepsilon$) have empty intersection. This shows that, as an ordered field property, (18) is strictly stronger than (17).) To verify that the surreal numbers satisfy (18), consider a sequence of nested intervals $[a_1,b_1] \supseteq [a_2,b_2] \supseteq \dots$. If $a_i = b_i$ for some $i$, say $a_i = b_i = c$, then $a_j = b_j = c$ for all $j>i$, and $c$ lies in all the intervals. If $a_i < b_i$ for every $i$, then $a_i \leq a_{\max\{i,j\}} < b_{\max\{i,j\}} \leq b_j$ for all $i,j$, so every element of $A = \{a_1, a_2, \dots\}$ is less than every element of $B = \{b_1, b_2, \dots\}$. Hence there exists a surreal number that is greater than every element of $A$ and less than every element of $B$, and this surreal number lies in all the intervals $[a_i,b_i]$. Thus ${\bf No}$ satisfies the Nested Interval Property but, being non-Archimedean, does not satisfy completeness. If one dislikes this counterexample because the surreal numbers are a Field rather than a field, one can instead use the field of surreal numbers that ``are created before Day $\omega_1$'', where $\omega_1$ is the first uncountable ordinal. See \cite{refCo} for a discussion of the ``birthdays'' of surreal numbers. For a self-contained explanation of a related counterexample that predates Conway's theory of surreal numbers, see \cite{refD}.
For a counterexample arising from non-standard analysis, see the discussion of the Cantor completeness of Robinson's valuation field $^\rho \mathbb R$ in \cite{refHa} and \cite{refHT}. Because these counterexamples are abstruse, one can find in the literature and on the web assertions like ``The Nested Interval Property implies the Bolzano-Weierstrass Theorem and vice versa''. It's easy for students to appeal to the Archimedean Property without realizing they are doing so, especially because concrete examples of non-Archimedean ordered fields are unfamiliar to them. \bigskip To summarize: Properties (1), (3), (4), (5), (7), (8), (9), (10), (12), (13), and (16) imply completeness, while properties (2), (6), (11), (14), (15), (17), and (18) don't. The ordered field of formal Laurent series witnesses the fact that (11), (14), (15), and (17) don't imply completeness; some much bigger ordered fields witness the fact that (6) and (18) don't imply completeness; and every non-Archimedean ordered field witnesses the fact that (2) doesn't imply completeness. One of the referees asked which of the properties (6), (11), (14), (15), (17), and (18) imply completeness in the presence of the Archimedean Property (2). The answer is, All of them. It is easy to show this in the case of properties (11), (14), (15), (17), and (18), using the fact that every Archimedean ordered field is isomorphic to a subfield of the reals (see the discussion of property (16) in section 4). The case of (6) is slightly more challenging. \bigskip Claim: Every Archimedean ordered field with the Bounded Value Property is Dedekind complete. Proof: Suppose not; let $R$ be a counterexample, and let $A,B$ be a bad cut of $R$. Let $a_n = \max A \cap 2^{-n} {\mathbb{Z}}_R$ and $b_n = \min B \cap 2^{-n} {\mathbb{Z}}_R$, so that $|a_n - b_n| = 2^{-n}$. Since $A,B$ is a bad cut, neither of the sequences $(a_n)_{n=1}^{\infty}$ and $(b_n)_{n=1}^{\infty}$ can be eventually constant. 
Let $f_n(x)$ be the continuous function that is 0 on $(-\infty,a_n\!-\!2^{-n}]$, 1 on $[a_n,b_n]$, 0 on $[b_n\!+\!2^{-n},\infty)$, and piecewise linear on $[a_n\!-\!2^{-n},a_n]$ and $[b_n,b_n\!+\!2^{-n}]$. The interval $[a_n-2^{-n},b_n+2^{-n}]$ has length $\leq 3 \cdot 2^{-n}$, which goes to 0 in $R$ as $n \rightarrow \infty$ since $R$ is Archimedean. Any $c$ belonging to all the intervals $[a_n-2^{-n},b_n+2^{-n}]$ would be a cutpoint for $A,B$, and since the cut $A,B$ has been assumed to have no cutpoint, $\cap_{n=1}^{\infty} [a_n\!-\!2^{-n},b_n\!+\!2^{-n}]$ is empty. It follows that for every $x$ in $R$ only finitely many of the intervals $[a_n\!-\!2^{-n},b_n\!+\!2^{-n}]$ contain $x$, so $f(x) = \sum_{n=1}^{\infty} f_n(x)$ is well-defined for all $x$ (since all but finitely many of the summands vanish). Furthermore, the function $f(x)$ is continuous, since a finite sum of continuous functions is continuous, and since for every $x$ we can find an $m$ and an $\epsilon>0$ such that $f_n(y) = 0$ for all $n > m$ and all $y$ with $|x-y| < \epsilon$ (so that $f$ agrees with the continuous function $\sum_{n \leq m} f_n$ on a neighborhood of $x$). Finally note that $f$ is unbounded, since e.g.\ $f(x) \geq n$ for all $x$ in $[a_n,b_n]$. Alternatively, one can argue as follows: The Archimedean Property implies countable cofinality (specifically, ${\mathbb{N}}_R$ is a countable unbounded set), and an argument of Teismann \cite{refT} shows that every ordered field with countable cofinality that satisfies property (6) is complete. \bigskip It is worth noting that for all 18 of the propositions listed in section 2, the answer to the question ``Does it imply the Dedekind Completeness Property?''\ (1) is the same as the answer to the question ``Does it imply the Archimedean Property?''\ (2). 
A priori, one might have imagined that one or more of properties (3) through (18) would be strong enough to imply the Archimedean Property yet not so strong as to be a completeness property for the reals. \bigskip This is not the end of the story of real analysis in reverse; there are other theorems in analysis with which one could play the same game. Indeed, some readers may already be wondering ``What about the Fundamental Theorem of Calculus?'' Actually, the FTC is really two theorems, not one (sometimes called FTC I and FTC II in textbooks). They are not treated here because this essay is already on the long side for a Monthly article, and a digression into the theory of the Riemann integral would require a whole section in itself. Indeed, there are different ways of defining the Riemann integral (Darboux's and Riemann's come immediately to mind), and while they are equivalent in the case of the real numbers, it is possible that different definitions of the Riemann integral that are equivalent over the reals might turn out to be different over ordered fields in general; thus one might obtain different varieties of FTC I and FTC II, some of which would be completeness properties and others of which would not. It seemed best to leave this topic for others to explore. An additional completeness axiom for the reals is the ``principle of real induction'' \cite{refCl}. \section{Some odds and ends} \subsection{History and terminology} It's unfortunate that the word completeness is used to signify both Cauchy completeness and Dedekind completeness; no doubt this doubleness of meaning has contributed to the misimpression that the two are equivalent in the presence of the ordered field axioms. 
It's therefore tempting to sidestep the ambiguity of the word ``complete'' by resurrecting Dedekind's own terminology (``{\it Stetigkeit\/}'') and referring to the completeness property of the reals as the {\bf continuity\/} property of the reals --- where here we are to understand the adjective ``continuous'' not in its usual sense, as a description of a certain kind of function, but rather as a description of a certain kind of set, namely, the kind of set that deserves to be called a continuum. However, it seems a bit late in the day to try to get people to change their terminology. It's worth pausing here to explain what Hilbert had in mind when he referred to the real numbers as the ``complete Archimedean ordered field''. What he meant by this is that the real numbers can be characterized by a property referred to earlier in this article (after the discussion of property (16) in section 4): every Archimedean ordered field is isomorphic to a subfield of the real numbers. That is, every Archimedean ordered field can be embedded in an Archimedean ordered field that is isomorphic to the reals, and no further extension to a larger ordered field is possible without sacrificing the Archimedean Property. Hilbert was saying that the real number system is the (up to isomorphism) unique Archimedean ordered field that is not a proper subfield of a larger Archimedean ordered field; vis-a-vis the ordered field axioms and the Archimedean Property, ${\mathbb{R}}$ is complete in the sense that nothing can be added to it. In particular Hilbert was not asserting any properties of ${\mathbb{R}}$ as a metric space. Readers interested in the original essays of Dedekind and Hilbert may wish to read Dedekind's ``Continuity and irrational numbers'' and Hilbert's ``On the concept of number'', both of which can be found in \cite{refEw}. \subsection{Advantages and disadvantages of the Cut Property} The symmetry and simplicity of the Cut Property have already been mentioned. 
Another advantage is shallowness. Although the word has a pejorative sound, shallowness in the logical sense can be a good thing; a proposition with too many levels of quantifiers in it is hard for the mind to grasp. The proposition ``$c$ is an upper bound of $S$'' (i.e., ``for all $s \in S$, $s \leq c$'') involves a universal quantifier, so the proposition ``$c$ is the least upper bound of $S$'' involves two levels of quantifiers, and the proposition that for every nonempty bounded set $S$ of reals there exists a real number $c$ such that $c$ is a least upper bound of $S$ therefore involves four levels of quantifiers. In contrast, the assertion that $A,B$ is a cut of $R$ involves one level of quantifiers, and the assertion that $c$ is a cutpoint of $A,B$ involves two levels of quantifiers, so the assertion that every cut of $R$ determines a cutpoint involves only three levels of quantifiers. Note also that the objects with which the Dedekind Completeness Property is concerned --- arbitrary nonempty bounded subsets of the reals --- are hard to picture, whereas the objects with which the Cut Property is concerned --- ways of dividing the number line into a left-set and a right-set --- are easy to picture. Indeed, in the context of foundations of geometry, it is widely acknowledged that some version of the Cut Property is the right way to capture what Dedekind called the continuity property of the line. It should also be mentioned here that the Cut Property can be viewed as a special case of the Least Upper Bound Property, where the set $S$ has a very special structure. This makes the former more suitable for doing naive reverse mathematics (since a weaker property is easier to verify) but also makes it slightly less convenient for doing forward mathematics (since a weaker property is harder to use). 
If one starts to rewrite a real analysis textbook replacing every appeal to the Least Upper Bound Property by an appeal to the Cut Property, one quickly sees that one ends up mimicking the textbook proofs but with extra, routinized steps (``\dots and let $B$ be the complement of that set'') that take up extra space on the page and add no extra insight. So even if one wants to assign primacy to the Cut Property, one would not want to throw away the Least Upper Bound Property; one would introduce it as a valuable consequence of the Cut Property. Lastly, we mention a variant of the Cut Property, Tarski's Axiom 3 \cite{refWc}, which drops the hypothesis that the union of the two sets is the whole ordered field $R$. This stronger version of the axiom is equivalent to the one presented above. Like the Least Upper Bound Property, Tarski's version of the Cut Property is superior for the purpose of constructing the theory of the reals but less handy for the purpose of ``deconstructing'' it. \subsection{Implications for pedagogy} As every thoughtful teacher knows, logical equivalence is not the same as pedagogical equivalence. Which completeness property of the reals should we teach to our various student audiences, assuming we teach one at all? Here the author drops the authorial ``we'' (appropriate for statements of a mathematical and historical nature that are, as far as the author has been able to assess, accurate) and adopts an authorial ``I'' more appropriate to statements of opinion. I think the reader already knows that I am quite taken with the Cut Property as an axiom for the reals, and will not be surprised to hear that I would like to see more teachers of calculus, and all teachers of real analysis, adopt it as part of the explicit foundation of the subject.
What may come as a bigger surprise is that I see advantages to a different completeness axiom that has not been mentioned earlier in the article, largely because I have not seen it stated in any textbook (although Burns \cite{refBu} does something similar, as I describe below). When we write $.3333\dots$, what we mean (or at least one thing we mean) is ``The number that lies between .3 and .4, and lies between .33 and .34, and lies between .333 and .334, etc.'' That is, {\em a decimal expansion is an ``address'' of a point of the number line}. Implicit in the notation is the assumption that for every decimal expansion, such a number {\it exists\/} and is {\it unique\/}. These assumptions of existence and uniqueness are part of the mathematical undermind (the mathematical subconscious, if you prefer) of the typical high schooler. After all, it never occurs to a typical high school student to ask whether there might be more than one number 0.5, or whether there might be no such number at all (though balking at fractions is common for thoughtful students at an earlier age); so it's tempting to carry over the assumption of existence and uniqueness when the teacher makes the transition from finite decimals to infinite decimals. Part of what an honors calculus teacher should do is undermine the mathematical undermind, and convince the students that they've been uncritically accepting precepts that have not yet been fully justified. The flip side of von Neumann's adage ``In mathematics you don't understand things; you just get used to them'' is that once you get used to something, you may mistakenly come to believe you understand it! Infinite decimals can come to seem intuitive, on the strength of their analogy with finite decimals, and the usefulness of infinite decimals makes us reluctant to question the assumptions on which they are based.
But mathematics is a liberal art, and that means we should bring difficulties into the light and either solve them honestly or duck them honestly. And the way a mathematician ducks a problem honestly is to formulate the problematic assumption as precisely and narrowly as possible and call it an axiom. Specifically, I would argue that one very pedagogically appropriate axiom for the completeness of the reals is one that our students have been implicitly relying on for years: the Strong Nested Decimal Interval Property, which asserts that for all infinite strings $d_0,d_1,d_2,\dots$ of digits between 0 and 9, there exists a unique real number in the intervals $[.d_0, .d_0 + .1]$, $[.d_0 d_1, .d_0 d_1 + .01]$, $[.d_0 d_1 d_2, .d_0 d_1 d_2 + .001]$, etc. (I call it ``Strong'' because, unlike the ordinary Nested Interval Property, it asserts uniqueness as well as existence.) The reader who has made it through the article thus far should have no trouble verifying that this is indeed a completeness property of the reals, and one can use it to give expeditious proofs of some of the important theorems of the calculus, at least in special cases. (Example: To show that the Intermediate Value Theorem holds for weakly increasing functions, one can home in on the place where the function vanishes by considering decimal approximations from both sides.) This choice of axiom does not affect the fact that the main theorems of the calculus have proofs that are hard to understand for someone who is taking calculus for the first or even the second time and who does not have much practice in reading proofs; indeed, I would say that the art of reading proofs goes hand-in-hand with the art of writing them, and very few calculus students understand the forces at work and the constraints that a mathematician labors under when devising a proof. 
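The digit-by-digit homing-in just described for weakly increasing functions can be made concrete. The following is a minimal sketch in Python (the function name and interface are mine, for illustration only): given a weakly increasing $f$ with $f(\mathit{lo}) \le 0 \le f(\mathit{hi})$, it refines the interval one decimal digit at a time, exactly as the Strong Nested Decimal Interval Property licenses.

```python
def decimal_root(f, lo, hi, digits=6):
    """Home in on a zero of a weakly increasing f by refining one
    decimal digit at a time, as in the Intermediate Value Theorem
    example above.  Assumes f(lo) <= 0 <= f(hi)."""
    for _ in range(digits):
        step = (hi - lo) / 10.0
        for k in range(10):
            # keep the first tenth whose right endpoint has f >= 0
            if f(lo + (k + 1) * step) >= 0:
                lo, hi = lo + k * step, lo + (k + 1) * step
                break
    return lo, hi
```

Six refinements shrink $[1,2]$ to an interval of width $10^{-6}$ that still contains $\sqrt{2}$ when $f(x) = x^2 - 2$, which is precisely the sense in which the nested decimal intervals ``address'' the cube root of two just as well as the square root.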
But if we acknowledge early in the course that the Strong Nested Decimal Interval Property (or something like it) is an assumption that our theorems rely upon, and stress that it cannot be proved by mere algebra, we will be giving our students a truer picture of the subject. Furthermore, the students will encounter infinite decimals near the end of the two-semester course when infinite series are considered; now an expression like $.3333\dots$ means $3 \times 10^{-1} + 3 \times 10^{-2} + 3 \times 10^{-3} + 3 \times 10^{-4} + \dots$. (Burns \cite{refBu} adopts as his completeness axioms the Archimedean Property plus the assertion that every decimal converges.) The double meaning of infinite decimals hides a nontrivial theorem: Every infinite decimal, construed as an infinite series, converges to a limit, specifically, the unique number that lies in all the associated nested decimal intervals. We do not need to prove this assertion to give our students the knowledge that this assertion has nontrivial content; we can lead them to see that the calculus gives them, for the first time, an honest way of seeing why $.9999\dots$ is the same number as $1.0000\dots$ (a fact that they may have learned to parrot but probably don't feel entirely comfortable with). Students should also be led to see that the question ``But how do we know that the square root of two really exists, if we can't write down all its digits or give a pattern for them?''\ is a fairly intelligent question. In what sense do we know that such a number exists? We can construct the square root of two as the length of the diagonal of a square of side-length one, but that trick won't work if we change the question to ``How do we know that the {\it cube\/} root of two really exists?'' To those who would be inclined to show students a construction of the real numbers (via Dedekind cuts and Cauchy sequences), I would argue that a student's first exposure to rigorous calculus should focus on other things. 
It takes a good deal of mathematical sophistication to even appreciate why someone would want to prove that the theory of the real numbers is consistent, and even more sophistication to appreciate why we can do so by making a ``model'' of the theory. Most students enter our classrooms with two workable models of the real numbers, one geometrical (the number line) and one algebraic (the set of infinite decimals). Instead of giving them a third picture of the reals, it seems better to clarify the pictures that they already have, and to assert the link between them. In fact, I think that for pedagogical purposes, it's best to present both the Cut Property and the Strong Nested Decimal Interval Property, reflecting the two main ways students think about real numbers. And it's also a good idea to mention that despite their very different appearances, the two axioms are deducible from one another, even though neither is derivable from the principles of high school mathematics. This will give the students a foretaste of a refreshing phenomenon that they will encounter over and over if they continue their mathematical education: two mathematical journeys that take off in quite different directions can unexpectedly lead to the same place. \paragraph{Acknowledgments.} Thanks to Matt Baker, Mark Bennet, Robin Chapman, Pete Clark, Ricky Demer, Ali Enayat, James Hall, Lionel Levine, George Lowther, David Speyer, and other contributors to {\tt MathOverflow} (\url{http://www.mathoverflow.net}) for helpful comments; thanks to Wilfried Sieg for his historical insights; thanks to the referees for their numerous suggestions of ways to make this article better; and special thanks to John Conway for helpful conversations. Thanks also to my honors calculus students, 2006--2012, whose diligence and intellectual curiosity led me to become interested in the foundations of the subject. \raggedright
\section{INTRODUCTION}\label{section:intro} \label{introduction} Nonparametric testing of independence or interaction between random variables is a core staple of machine learning and statistics. The majority of nonparametric statistical tests of independence for continuous-valued random variables rely on the assumption that the observed data are drawn \emph{i.i.d.} \cite{Feuerverger93,gretton2007kernel,Szekely2007,GreGyo10,HelHelGor13}. The same assumption applies to tests of conditional dependence, and of multivariate interaction between variables \cite{Zhang2011,KanUsh98,FukGreSunSch08,sejdinovic2013kernel,PatSenSze15}. For many applications in finance, medicine, and audio signal analysis, however, the \emph{i.i.d.}~assumption is unrealistic and overly restrictive. While many approaches exist for testing interactions between time series under strong parametric assumptions \cite{kirchgassner2012introduction,ledford1996statistics}, the problem of testing for general, nonlinear interactions has seen far less analysis: tests of pairwise dependence have been proposed by \cite{GaiRupSch10,besserve_statistical_2013,chwialkowski2014wild, chwialkowski2014kernel}, where the first publication also addresses mutual independence of more than two univariate time series. The two final works use as their statistic the Hilbert-Schmidt Independence Criterion, a general nonparametric measure of dependence \citep{gretton2005measuring}, which applies even for multivariate or non-Euclidean variables (such as strings and groups). The asymptotic behaviour and corresponding test threshold are derived using particular assumptions on the mixing properties of the processes from which the observations are drawn. These kernel approaches apply only to pairs of random processes, however. The Lancaster interaction is a signed measure that can be used to construct a test statistic capable of detecting dependence between three random variables \citep{lancaster1969chi,sejdinovic2013kernel}.
If the joint distribution on the three variables factorises in some way into a product of a marginal and a pairwise marginal, the Lancaster interaction is zero everywhere. Given observations, this can be used to construct a statistical test, the null hypothesis of which is that the joint distribution factorises thus. In the \emph{i.i.d.}~case, the null distribution of the test statistic can be estimated using a permutation bootstrap technique: this amounts to shuffling the indices of one or more of the variables and recalculating the test statistic on this bootstrapped data set. When our samples instead exhibit temporal dependence, shuffling the time indices destroys this dependence and thus does not correspond to a valid resample of the test statistic. Provided that our data-generating process satisfies some technical conditions on the forms of temporal dependence, recent work by \citet{leucht2013dependent}, building on the work of \citet{shao2010dependent}, can come to our rescue. The wild bootstrap is a method that correctly resamples from the null distribution of a test statistic, subject to certain conditions on both the test statistic and the processes from which the observations have been drawn. In this paper we show that the Lancaster interaction test statistic satisfies the conditions required to apply the wild bootstrap procedure; moreover, the manner in which we prove this is significantly simpler than existing proofs of the same property for other kernel test statistics \citep{chwialkowski2014wild,chwialkowski2014kernel}. Previous proofs have relied on the classical theory of $V$-statistics to analyse the asymptotic distribution of the kernel statistic. In particular, the Hoeffding decomposition gives an expression for the kernel test statistic as a sum of other $V$-statistics. Understanding the asymptotic properties of the components of this decomposition is then conceptually tractable, but algebraically extremely painful.
Moreover, as the complexity of the test statistic under analysis grows, the number of terms that must be considered in this approach grows factorially.\footnote{See for example Lemma 8 in Supplementary material A.3 of \citet{chwialkowski2014kernel}. The proof of this lemma requires keeping track of $4!$ terms; an equivalent approach for the Lancaster test would have $6!$ terms. Depending on the precise structure of the statistic, this approach applied to a test involving 4 variables could require as many as $8!=40320$ terms.} We conjecture that such analysis of interaction statistics of 4 or more variables would in practice be infeasible without automatic theorem provers due to the sheer number of terms in the resulting computations. In contrast, in the approach taken in this paper we explicitly consider our test statistic to be the norm of a Hilbert space operator. We exploit a Central Limit Theorem for Hilbert space valued random variables \cite{dehling2015bootstrap} to show that our test statistic converges in probability to the norm of a related population-centred Hilbert space operator, for which the asymptotic analysis is much simpler. Our approach is novel; previous analyses have not, to our knowledge, leveraged the Hilbert space geometry in the context of statistical hypothesis testing using kernel $V$-statistics in this way. We propose that our method may in future be applied to the asymptotic analysis of other kernel statistics. In the appendix, we provide an application of this method to the Hilbert-Schmidt Independence Criterion (HSIC) test statistic, giving a significantly shorter and simpler proof than that given in \citet{chwialkowski2014kernel}. The Central Limit Theorem that we use in this paper makes certain assumptions on the mixing properties of the random processes from which our data are drawn; as further progress is made, this may be substituted for more up-to-date theorems that make weaker mixing assumptions.
\paragraph{OUTLINE:} In Section \ref{section:main}, we detail the Lancaster interaction test and provide our main results. These results justify use of the wild bootstrap to understand the null distribution of the test statistic. In Section \ref{section:details}, we provide more detail about the wild bootstrap, prove that its use correctly controls Type I error and give a consistency result. In Section \ref{section:experiments}, we evaluate the Lancaster test on synthetic data to identify cases in which it outperforms existing methods, as well as cases in which it is outperformed. In Section \ref{section:proofs}, we provide proofs of the main results of this paper, in particular the aforementioned novel proof. Further proofs may be found in the Supplementary material. \section{LANCASTER INTERACTION TEST}\label{section:main} \subsection{KERNEL NOTATION} Throughout this paper we will assume that the kernels $k,l,m$, defined on the domains $\mathcal{X}$, $\mathcal{Y}$ and $\mathcal{Z}$ respectively, are characteristic \citep{sriperumbudur2011universality}, bounded and Lipschitz continuous. We describe some notation relevant to the kernel $k$; similar notation holds for $l$ and $m$. Recall that $\mu_X := \mathbb{E}_X k(X,\cdot) \in \mathcal{F}_k$ is the mean embedding \citep{smola2007hilbert} of the random variable $X$. Given observations $X_i$, an estimate of the mean embedding is $\tilde{\mu}_X = \frac{1}{n}\sum_{i=1}^n k(X_i,\cdot)$. Two modifications of $k$ are used in this work: \begin{align} \bar{k}(x,x') &= \langle k(x,\cdot)-\mu_X,k(x',\cdot)-\mu_X\rangle, \\ \tilde{k}(x,x') &= \langle k(x,\cdot)-\tilde{\mu}_X, k(x',\cdot)-\tilde{\mu}_X \rangle \end{align} These are called the \emph{population centered kernel} and \emph{empirically centered kernel} respectively. 
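In matrix form, empirical centering is the standard double-centering of the Gram matrix: with $H = I - \frac{1}{n}\mathbf{1}\mathbf{1}^\intercal$, the matrix $[\tilde{k}(X_i,X_j)]_{ij}$ equals $HKH$, where $K_{ij} = k(X_i,X_j)$. A minimal sketch of this identity (the function name is ours):

```python
import numpy as np

def center_gram(K):
    """Gram matrix of the empirically centered kernel:
    K~ = H K H with H = I - (1/n) 1 1^T."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H
```

Entrywise this is $K_{ij}$ minus its row and column means plus the grand mean, which is what one obtains by expanding the inner product $\langle k(X_i,\cdot)-\tilde{\mu}_X,\, k(X_j,\cdot)-\tilde{\mu}_X\rangle$.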
\subsection{LANCASTER INTERACTION} The Lancaster interaction on the triple of random variables $(X,Y,Z)$ is defined as the signed measure $\Delta_LP = \mathbb{P}_{XYZ} - \mathbb{P}_{XY}\mathbb{P}_{Z} - \mathbb{P}_{XZ}\mathbb{P}_{Y} - \mathbb{P}_{X}\mathbb{P}_{YZ} + 2\mathbb{P}_{X}\mathbb{P}_{Y}\mathbb{P}_{Z}$. This measure can be used to detect three-variable interactions. It is straightforward to show that if any variable is independent of the other two (equivalently, if the joint distribution $\mathbb{P}_{XYZ}$ factorises into a product of marginals in any way), then $\Delta_LP = 0$. That is, writing $\mathcal{H}_X = \{X \independent (Y,Z)\}$ and similar for $\mathcal{H}_Y$ and $\mathcal{H}_Z$, we have that \begin{equation}\label{eqn:lancaster-zero} \mathcal{H}_X \enspace \lor \enspace \mathcal{H}_Y \enspace \lor \enspace \mathcal{H}_Z \Rightarrow \Delta_LP=0 \end{equation} The reverse implication does not hold, and thus no conclusion about the veracity of the $\mathcal{H}_{\raisebox{-0.25ex}{\scalebox{1.5}{$\cdot$}}}$ can be drawn when $\Delta_LP=0$. Following \citet{sejdinovic2013kernel}, we can consider the mean embedding of this measure: \begin{align} \mu_L = \int k(x,\cdot) l(y,\cdot) m(z,\cdot) \Delta_LP \end{align} Given an \emph{i.i.d.}~sample $(X_i,Y_i,Z_i)_{i=1}^n$, the norm of the mean embedding $\mu_L$ can be empirically estimated using empirically centered kernel matrices. 
For example, for the kernel $k$ with kernel matrix $K_{ij} = k(X_i,X_j)$, the empirically centered kernel matrix $\tilde{K}$ is given by \[ \tilde{K}_{ij} = \langle k(X_i,\cdot)-\tilde{\mu}_X, k(X_j,\cdot) -\tilde{\mu}_X \rangle. \] By \citet{sejdinovic2013kernel}, an estimator of the norm of the mean embedding of the Lancaster interaction for \emph{i.i.d.}~samples is \begin{equation}\label{eqn:lancaster} \|\hat \mu_L\|^2 = \frac{1}{n^2}\left(\tilde{K}\circ\tilde{L}\circ\tilde{M}\right)_{++} \end{equation} where $\circ$ is the Hadamard (element-wise) product and $A_{++} = \sum_{ij}A_{ij}$, for a matrix $A$. \subsection{TESTING PROCEDURE} In this paper, we construct a statistical test for three-variable interaction, using $n\|\hat \mu_L\|^2$ as the test statistic to distinguish between the following hypotheses: $\mathcal{H}_0: \mathcal{H}_X \enspace \lor \enspace \mathcal{H}_Y \enspace \lor \enspace \mathcal{H}_Z $\\ $\mathcal{H}_1: \mathbb{P}_{XYZ}$ does not factorise in any way. The null hypothesis $\mathcal{H}_0$ is a composite of the three `sub-hypotheses' $\mathcal{H}_X$, $\mathcal{H}_Y$ and $\mathcal{H}_Z$. We test $\mathcal{H}_0$ by testing each of the sub-hypotheses separately and we reject if and only if we reject each of $\mathcal{H}_X$, $\mathcal{H}_Y$ and $\mathcal{H}_Z$. Hereafter we describe the procedure for testing $\mathcal{H}_Z$; similar results hold for $\mathcal{H}_X$ and $\mathcal{H}_Y$. \citet{sejdinovic2013kernel} show that, under $\mathcal{H}_Z$, $n \|\hat \mu_L\|^2 $ converges to an infinite sum of weighted $\chi$-squared random variables. By leveraging the \emph{i.i.d.}~assumption of the samples, any given quantile of this distribution can be estimated using a simple permutation bootstrap, and so a test procedure is proposed. In the time series setting this approach does not work.
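The estimator \eqref{eqn:lancaster} takes only a few lines to compute. Below is a minimal sketch assuming Gaussian kernels on univariate observations; the function names, bandwidth default and kernel choice are ours, for illustration only:

```python
import numpy as np

def gaussian_gram(x, bw=1.0):
    """K_ij = exp(-(x_i - x_j)^2 / (2 bw^2)) for univariate samples x."""
    return np.exp(-(x[:, None] - x[None, :]) ** 2 / (2.0 * bw ** 2))

def lancaster_stat(x, y, z, bw=1.0):
    """Empirical ||mu_L||^2 = (1/n^2) (K~ o L~ o M~)_{++}, with o the
    Hadamard product and A_{++} the sum of all entries of A."""
    n = len(x)
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    Kc = H @ gaussian_gram(x, bw) @ H     # K~
    Lc = H @ gaussian_gram(y, bw) @ H     # L~
    Mc = H @ gaussian_gram(z, bw) @ H     # M~
    return (Kc * Lc * Mc).sum() / n ** 2
```

Since $\tilde{K}$, $\tilde{L}$ and $\tilde{M}$ are positive semidefinite and the Hadamard product of positive semidefinite matrices is positive semidefinite (Schur product theorem), the computed value is nonnegative, as befits a squared norm.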
Temporal dependence within the samples makes study of the asymptotic distribution of $n \|\hat \mu_L\|^2 $ difficult; in Section \ref{experiment2} we verify experimentally that the permutation bootstrap used in the \emph{i.i.d.}~case fails. To construct a test in this setting we will use asymptotic and bootstrap results for mixing processes. Mixing formalises the notion of the temporal structure within a process, and can be thought of as the rate at which the process forgets about its past. For example, for Gaussian processes this rate can be captured by the autocorrelation function; for general processes, generalisations of autocorrelation are used. The exact assumptions we make about the mixing properties of processes in this paper are discussed in Section \ref{section:details}, and we will refer to them as \textit{suitable mixing assumptions} for brevity in statements of results throughout this paper. \subsection{MAIN RESULTS} It is straightforward to show that the norm of the mean embedding \eqref{eqn:lancaster} can also be written as \[ \|\hat \mu_L\|^2 = \frac{1}{n^2}\left(\widetilde{\tilde{K}\circ\tilde{L}}\circ\tilde{M}\right)_{++}. \] Our first contribution is to show that the (difficult) study of the asymptotic null distribution of $ \|\hat \mu_L\|^2$ can be reduced to studying the population centered kernels \[ \| \hat \mu^{(Z)}_{L,2} \|^2 =\frac{1}{n^2}\left(\overline{\overline{K}\circ\overline{L}}\circ\overline{M}\right)_{++}, \] where e.g. \[ \overline{K}_{ij} = \langle k(X_i,\cdot)-\mu_X, k(X_j,\cdot) -\mu_X \rangle. \] Specifically, we prove the following: \begin{theorem}\label{theorem:norm-conv-in-prob} Suppose that $(X_i,Y_i,Z_i)_{i=1}^n$ are drawn from a random process satisfying suitable mixing assumptions. Under $\mathcal{H}_Z$, $\lim_{n \to \infty} ( n\| \hat \mu^{(Z)}_{L,2} \|^2 - n\|\hat \mu_L\|^2 ) =0 $ in probability.
\end{theorem} Our proof of Theorem \ref{theorem:norm-conv-in-prob} relies crucially on the following Lemma, which we prove in Supplementary material \ref{supp:hilbert-clt}. \begin{lemma}\label{lemma:hilbertCLT} Suppose that $(X_i)_{i=1}^n$ is drawn from a random process satisfying suitable mixing assumptions and that $k$ is a bounded kernel on $\mathcal{X}$. Then $\|\hat\mu_X - \mu_X\|_k = O_P(n^{-\frac{1}{2}})$. \end{lemma} \begin{proof}\textit{(Theorem \ref{theorem:norm-conv-in-prob})} We provide a short sketch of the proof here; for a full proof, see Section \ref{section:proofs}. The key idea is to note that we can rewrite $n\|\hat \mu_L\|^2$ in terms of the population centred kernel matrices $\overline{K}$, $\overline{L}$ and $\overline{M}$. Each of the resulting terms can in turn be converted to an inner product between quantities of the form $\hat\mu - \mu$, where $\hat\mu$ is an empirical estimator of $\mu$, and each $\mu$ is a mean embedding or covariance operator. By applying Lemma \ref{lemma:hilbertCLT} to the $\hat\mu - \mu$, we show that most of these terms converge in probability to 0, with the residual terms equalling $n\| \hat \mu^{(Z)}_{L,2} \|^2$. \end{proof} As discussed in Section \ref{section:intro}, the essential idea of this proof is novel and the resulting proof is significantly more concise than previous approaches \citep{chwialkowski2014kernel,chwialkowski2014wild}. Theorem \ref{theorem:norm-conv-in-prob} is useful because the statistic $\| \hat \mu^{(Z)}_{L,2} \|^2$ is much easier to study under the non-\emph{i.i.d.}~assumption than $\|\hat \mu_L\|^2$. Indeed, it can be expressed as a $V$-statistic (see Section \ref{subsection:v-statistic}) \[ V_n = \frac{1}{n^2} \mathlarger{\sum}_{1\leq i,j \leq n} \overline{\overline{k} \otimes \overline{l}}\otimes \overline{m} (S_i,S_j), \] where $S_i = (X_i,Y_i,Z_i)$.
The crucial observation is that \[ h := \overline{\overline{k} \otimes \overline{l}}\otimes \overline{m} \] is well behaved in the following sense. \begin{theorem}\label{theorem:degenerate-kernel} Suppose that $k$, $l$ and $m$ are bounded, symmetric, Lipschitz continuous kernels. Then $h$ is also bounded, symmetric and Lipschitz continuous, and is moreover degenerate under $\mathcal{H}_Z$, i.e.\ $\mathbb{E}_{S}h(S,s)=0$ for any fixed $s$. \end{theorem} \begin{proof} See Section \ref{section:proofs}. \end{proof} The asymptotic analysis of such a $V$-statistic for non-\emph{i.i.d.}~data is still complex, but we can appeal to prior work: \citet{leucht2013dependent} showed a way to estimate any given quantile of such a $V$-statistic under the null hypothesis using a method called the wild bootstrap. This, combined with analysis of the $V$-statistic under the alternative hypothesis provided in Theorem 2 of \citet{chwialkowski2014wild}\footnote{Note that similar results are presented in \citet{leucht2013dependent} as specific cases.}, results in a statistical test (see Algorithm \ref{alg:Lancaster}).
\begin{algorithm}[tb] \caption{Test $\mathcal{H}_Z$ with Wild Bootstrap} \label{alg:Lancaster} \begin{algorithmic} \STATE {\bfseries Input:} $\tilde{K}$, $\tilde{L}$, $\tilde{M}$, each size $n\times n$, $N$= number of bootstraps, $\alpha=$ p-value threshold \STATE $n\|\hat{\mu}_L\|^2 = \frac{1}{n}\left(\widetilde{\left( \tilde{K} \circ \tilde{L}\right) }\circ \tilde{M} \right)_{++}$ \STATE samples = zeros(1,N) \FOR{$i=1$ {\bfseries to} $N$} \STATE Draw random vector $W$ according to Equation \ref{equation:bootstrap} \STATE samples[$i$] = $\frac{1}{n}W^\intercal\left( \widetilde{\left( \tilde{K} \circ \tilde{L}\right) }\circ \tilde{M} \right)W$ \ENDFOR \IF{sum($n\|\hat{\mu}_L\|^2 >$ samples)$>(1-\alpha)N$} \STATE Reject $\mathcal{H}_Z$ \ELSE \STATE Do not reject $\mathcal{H}_Z$ \ENDIF \end{algorithmic} \end{algorithm} In Section \ref{section:details} we discuss the wild bootstrap and provide results regarding consistency and Type I error control. \subsection{MULTIPLE TESTING CORRECTION} In the Lancaster test, we reject the composite null hypothesis $\mathcal{H}_0$ if and only if we reject all three of the components. In \citet{sejdinovic2013kernel}, it is suggested that the Holm-Bonferroni correction be used to account for multiple testing \citep{holm1979simple}. We show here that more relaxed conditions on the p-values can be used while still bounding the Type I error, thus increasing test power. Denote by $\mathcal{A}_*$ the event that $\mathcal{H}_*$ is rejected. Then \begin{align*} \mathbb{P}(\mathcal{A}_0) &= \mathbb{P}(\mathcal{A}_X \land \mathcal{A}_Y \land \mathcal{A}_Z) \\ &\leq \min\{\mathbb{P}(\mathcal{A}_X), \mathbb{P}(\mathcal{A}_Y), \mathbb{P}(\mathcal{A}_Z)\}. \end{align*} If $\mathcal{H}_0$ is true, then so must be one of the components. Without loss of generality assume that $\mathcal{H}_X$ is true.
If we use significance levels of $\alpha$ in each test individually then $\mathbb{P}(\mathcal{A}_X) \leq \alpha$ and thus $\mathbb{P}(\mathcal{A}_0) \leq \alpha$. Therefore rejecting $\mathcal{H}_0$ in the event that each test has p-value less than $\alpha$ individually guarantees a Type I error overall of at most $\alpha$. In contrast, the Holm-Bonferroni method requires that the sorted p-values be lower than $[\frac{\alpha}{3},\frac{\alpha}{2},\alpha]$ in order to reject the null hypothesis overall. It is therefore more conservative than necessary and thus has worse test power compared to the `simple correction' proposed here. This is experimentally verified in Section \ref{section:experiments}. \section{THE WILD BOOTSTRAP}\label{section:details} In this section we discuss the wild bootstrap and provide consistency and Type I error results for the proposed Lancaster test. \subsection{TEMPORAL DEPENDENCE} There are various formalisations of memory or `mixing' of a random process \citep{doukhan1994mixing,bradley2005basic,dedecker2007weak}; of relevance to this paper is the following: \begin{definition} A process $(X_t)_{t}$ is \emph{$\beta$-mixing} (also known as \emph{absolutely regular}) if $\beta(m) \longrightarrow 0$ as $m\longrightarrow \infty$, where \[ \beta(m) = \frac{1}{2} \sup_n \sup \sum_{i=1}^I \sum_{j=1}^J | \mathbb{P}(A_i \cap B_j) - \mathbb{P}(A_i)\mathbb{P}(B_j)|, \] where the second supremum is taken over all finite partitions $\{A_1,\ldots, A_I \}$ and $\{B_1,\ldots, B_J\}$ of the sample space such that $A_i \in \mathcal{F}_1^n$ and $B_j \in \mathcal{F}_{n+m}^\infty$, and $\mathcal{F}_b^c = \sigma(X_b,X_{b+1},\ldots,X_{c})$. \end{definition} A related notion is that of $\tau$-mixing. This is a property required to apply the wild bootstrap method of \citet{leucht2013dependent}, but we do not discuss $\tau$-mixing here since it is implied by $\beta$-mixing under the assumption that $X_i$ has finite $p$-th moment for any $p>1$.
\subsection*{SUITABLE MIXING ASSUMPTIONS} We assume that the random process $S_i = (X_i,Y_i,Z_i)$ is $\beta$-mixing with mixing coefficients satisfying $\beta(m) = o(m^{-6})$. Throughout this paper we refer to this assumption as \emph{suitable mixing assumptions}. \subsection{$V$-STATISTICS}\label{subsection:v-statistic} A $V$-statistic of a 2-argument, symmetric function $h$ given observations $\mathcal{S}_n = \{S_1,\ldots,S_n\}$ is \citep{serfling2009approximation}: \[ V_n = \frac{1}{n^2} \mathlarger{\sum}_{1\leq i,j \leq n} h(S_i,S_j).\] We call $nV_n$ a \emph{normalised} $V$-statistic. We call $h$ the \emph{core} of $V_n$ and we say that $h$ is \emph{degenerate} if, for any $s_1$, $\mathbb{E}_{S_2 \sim \mathbb{P}}[h(s_1,S_2)] = 0$, in which case we say that $V_n$ is a \emph{degenerate $V$-statistic}. Many kernel test statistics can be viewed as normalised $V$-statistics which, under the null hypothesis, are degenerate. As mentioned in the previous section, $\|\hat \mu^{(Z)}_{L,2}\|^2$ is a $V$-statistic. Theorems \ref{theorem:norm-conv-in-prob} and \ref{theorem:degenerate-kernel} together imply that, under $\mathcal{H}_Z$, it can be treated as a degenerate $V$-statistic. \subsection{WILD BOOTSTRAP} If the test statistic has the form of a normalised $V$-statistic, then provided certain extra conditions are met, the wild bootstrap of \citet{leucht2013dependent} is a method to directly resample the test statistic under the null hypothesis. These conditions can be categorised as concerning: (1) appropriate mixing of the process from which our observations are drawn; (2) the core of the $V$-statistic. The condition on the core that is of crucial importance to this paper is that it must be degenerate. Theorem \ref{theorem:degenerate-kernel} justifies our use of the wild bootstrap in the Lancaster interaction test.
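Concretely, each wild bootstrap replicate is the quadratic form $nV_b = \frac{1}{n}W^\intercal A W$, where $A_{ij} = h(S_i,S_j)$. The following is a minimal sketch (the function name and defaults are ours), using the Gaussian AR(1) multiplier process of Equation \ref{equation:bootstrap} below; other multiplier processes satisfying the required conditions would serve equally well:

```python
import numpy as np

def wild_bootstrap(A, n_boot=250, l_n=20.0, rng=None):
    """Draw n_boot replicates of n*V_b = (1/n) W^T A W, where W follows
    the Gaussian AR(1) multiplier process
    W_t = a W_{t-1} + b eps_t, a = exp(-1/l_n), b = sqrt(1 - a^2)."""
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[0]
    a = np.exp(-1.0 / l_n)
    b = np.sqrt(1.0 - a ** 2)
    reps = np.empty(n_boot)
    for i in range(n_boot):
        eps = rng.standard_normal(n)
        W = np.empty(n)
        W[0] = eps[0]
        for t in range(1, n):
            W[t] = a * W[t - 1] + b * eps[t]
        reps[i] = W @ A @ W / n
    return reps
```

Comparing the observed statistic to the empirical $(1-\alpha)$ quantile of these replicates then yields the test decision, as in Algorithm \ref{alg:Lancaster}.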
Given the statistic $nV_n$, \citet{leucht2013dependent} tells us that a random vector $W$ of length $n$ can be drawn such that the bootstrapped statistic\footnote{Note that for fixed $\mathcal{S}_n$, $nV_b$ is a random variable through the randomness introduced by $W$.} \[nV_b=\frac{1}{n}\sum_{i,j}W_{i}h(S_i,S_j)W_{j}\] is distributed according to the null distribution of $nV_n$. By generating many such $W$ and calculating $nV_b$ for each, we can estimate the quantiles of $nV_n$. \subsection{GENERATING $W$} The process generating $W$ must satisfy conditions (B2) given on page 6 of \citet{leucht2013dependent} for $nV_b$ to correctly resample from the null distribution of $nV_n$. For brevity, we provide here only an example of such a process; the interested reader should consult \citet{leucht2013dependent} or Appendix A of \citet{chwialkowski2014wild} for a more detailed discussion of the bootstrapping process. The following bootstrapping process was used in the experiments in Section \ref{section:experiments}: \begin{equation}\label{equation:bootstrap} W_t = e^{-1/l_n}W_{t-1} + \sqrt{1 - e^{-2/l_n}}\epsilon_t, \end{equation} where $W_1$, $\epsilon_1, \ldots, \epsilon_t$ are independent $\mathcal{N}(0,1)$ random variables. $l_n$ should be taken from a sequence $\{l_n\}$ such that $\lim_{n\longrightarrow\infty}l_n = \infty$; in practice we used $l_n=20$ for all of the experiments since the values of $n$ were roughly comparable in each case. \subsection{CONTROL OF TYPE I ERROR} The following theorem shows that by estimating the quantiles of the wild bootstrapped statistic $nV_b$ we correctly control the Type I error when testing $\mathcal{H}_Z$. \begin{theorem}\label{theorem:quantiles-converge} Suppose that $(X_i,Y_i,Z_i)_{i=1}^n$ are drawn from a random process satisfying suitable mixing conditions, and that $W$ is drawn from a process satisfying (B2) in \citet{leucht2013dependent}.
Then asymptotically, the quantiles of \[nV_b = \frac{1}{n}W^\intercal\left( \overline{\left( \bar{K} \circ \bar{L}\right) }\circ \bar{M} \right)W\] converge to those of $ n\| \hat \mu_L\|^2$. \end{theorem} \begin{proof} See Supplementary material \ref{supp:quantile-proof}. \end{proof} \subsection{(SEMI-)CONSISTENCY OF TESTING PROCEDURE} Note that in order to achieve consistency for this test, we would need that $\mathcal{H}_0 \iff \Delta_LP = 0$. Unfortunately this does not hold: in \citet{sejdinovic2013kernel}, examples are given of distributions for which $\mathcal{H}_0$ is false and yet $\Delta_LP = 0$. However, the following result does hold: \begin{theorem}\label{theorem:consistent} Suppose that $\Delta_LP \not =0$. Then as $n\longrightarrow\infty$, the probability of correctly rejecting $\mathcal{H}_0$ converges to 1. \end{theorem} \begin{proof} See Supplementary material \ref{supp:consistent}. \end{proof} At the time of writing, a characterisation of distributions for which $\mathcal{H}_0$ is false yet $\Delta_LP=0$ is unknown. Therefore, if we reject $\mathcal{H}_0$ then we conclude that the distribution does not factorise; if we fail to reject $\mathcal{H}_0$ then we cannot conclude that the distribution factorises. \section{EXPERIMENTS}\label{section:experiments} The Lancaster test described above amounts to a method to test each of the sub-hypotheses $\mathcal{H}_X, \mathcal{H}_Y, \mathcal{H}_Z$. Rather than using the Lancaster test statistic with wild bootstrap to test each of these, we could instead use HSIC. For example, by considering the pair of variables $(X,Y)$ and $Z$ with kernels $k\otimes l$ and $m$ respectively, HSIC can be used to test $\mathcal{H}_Z$. Similar grouping of the variables can be used to test $\mathcal{H}_X$ and $\mathcal{H}_Y$. Applying the same multiple testing correction as in the Lancaster test, we derive an alternative test of dependence between three variables. We refer to this HSIC-based procedure as \emph{3-way HSIC}.
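Both the Lancaster and 3-way HSIC procedures combine the three sub-hypothesis tests with a multiple testing correction. As an illustration, here is one standard implementation of the Holm-Bonferroni step-down used in the experiments below (a generic sketch, not the authors' code; the choice of $\alpha=0.05$ matches the experiments):

```python
def holm_bonferroni_reject(pvals, alpha=0.05):
    """Holm step-down: sort the m p-values; compare the k-th smallest (k = 0, 1,
    ...) against alpha / (m - k); stop at the first failure. Returns rejection
    decisions in the original order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvals[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # all remaining (larger) p-values also fail
    return reject

# e.g. p-values for the sub-hypotheses H_X, H_Y, H_Z:
decisions = holm_bonferroni_reject([0.001, 0.04, 0.2], alpha=0.05)
```

The joint null $\mathcal{H}_0$ is then rejected whenever any sub-hypothesis is rejected after correction.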
In the case of \emph{i.i.d.}~observations, it was shown in \citet{sejdinovic2013kernel} that the Lancaster test is more sensitive to dependence between three random variables than the above HSIC-based test when pairwise interaction is weak but joint interaction is strong. In this section, we demonstrate that the same is true in the time series case on synthetic data. \subsection{WEAK PAIRWISE INTERACTION, STRONG JOINT INTERACTION}\label{experiment1} This experiment demonstrates that the Lancaster test has greater power than 3-way HSIC when the pairwise interaction is weak, but joint interaction is strong. Synthetic data were generated from autoregressive processes $X$, $Y$ and $Z$ according to: \begin{align*} X_t &= \frac{1}{2}X_{t-1} + \epsilon_t\\ Y_t &= \frac{1}{2}Y_{t-1} + \eta_t\\ Z_t &= \frac{1}{2}Z_{t-1} + d |\theta_t|\text{sign}(X_t Y_t) + \zeta_t \end{align*} where $X_0, Y_0, Z_0, \epsilon_t, \eta_t, \theta_t$ and $\zeta_t$ are \emph{i.i.d.}~$\mathcal{N}(0,1)$ random variables and $d\in\mathbb{R}$, called the \emph{dependence} coefficient, determines the extent to which the process $(Z_t)_t$ is dependent on $(X_t,Y_t)_t$. Data were generated with varying values of $d$. For each value of $d$, 300 datasets were generated, each consisting of 1200 consecutive observations of the variables. Gaussian kernels with bandwidth parameter 1 were used on each variable, and 250 bootstrapping procedures were used for each test on each dataset. Observe that the random variables are pairwise independent but jointly dependent. Both the Lancaster and 3-way HSIC tests should be able to detect the dependence and therefore reject the null hypothesis in the limit of infinite data. In the finite data regime, the value of $d$ drastically affects how hard it is to detect the dependence.
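The data-generating process above can be simulated in a few lines; a minimal sketch (NumPy, seeded for reproducibility; the function name and seed are ours). Note how $Z$ depends only on $\text{sign}(X_tY_t)$, so each pair of series is uncorrelated while the triple is jointly dependent:

```python
import numpy as np

def simulate_weak_pairwise(n, d, rng=None):
    """X, Y are independent AR(1) series; Z feeds on sign(X_t * Y_t), making
    the three series pairwise independent but jointly dependent when d != 0."""
    rng = np.random.default_rng() if rng is None else rng
    X, Y, Z = np.empty(n), np.empty(n), np.empty(n)
    x, y, z = rng.normal(size=3)              # X_0, Y_0, Z_0
    for t in range(n):
        x = 0.5 * x + rng.normal()            # X_t
        y = 0.5 * y + rng.normal()            # Y_t
        theta, zeta = rng.normal(size=2)
        z = 0.5 * z + d * abs(theta) * np.sign(x * y) + zeta  # Z_t
        X[t], Y[t], Z[t] = x, y, z
    return X, Y, Z

X, Y, Z = simulate_weak_pairwise(1200, d=1.0, rng=np.random.default_rng(1))
```

With $d=1$ the sample correlation between $\text{sign}(X_tY_t)$ and $Z_t$ is clearly positive, while $X$ and $Y$ remain (empirically) uncorrelated.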
The results of this experiment are presented in Figure \ref{weak-pairwise-strong-joint}, which shows that the Lancaster test achieves very high test power with weak dependence coefficients compared to 3-way HSIC. Note also that when using the simple multiple testing correction a higher test power is achieved than with the Holm-Bonferroni correction. \begin{figure}[ht] \vskip 0.2in \begin{center} \centerline{\includegraphics[scale=0.6]{UAI_Figure1.pdf}} \caption{Results of experiment in Section \ref{experiment1}. (S) refers to the simple multiple testing correction; (HB) refers to Holm-Bonferroni. The Lancaster test is more sensitive to dependence than 3-way HSIC, and test power for both tests is higher when using the simple correction rather than the Holm-Bonferroni multiple testing correction.} \label{weak-pairwise-strong-joint} \end{center} \vskip -0.2in \end{figure} \subsection{FALSE POSITIVE RATES}\label{experiment2} This experiment demonstrates that in the time series case, existing permutation bootstrap methods fail to control the Type I error, while the wild bootstrap correctly identifies test statistic thresholds and appropriately controls Type I error. Synthetic data were generated from autoregressive processes $X$, $Y$ and $Z$ according to: \begin{align*} X_t &= aX_{t-1} + \epsilon_t\\ Y_t &= aY_{t-1} + \eta_t\\ Z_t &= aZ_{t-1} + \zeta_t \end{align*} where $X_0, Y_0, Z_0, \epsilon_t, \eta_t$ and $\zeta_t$ are \emph{i.i.d.}~$\mathcal{N}(0,1)$ random variables and $a$, called the \emph{dependence coefficient}, determines how temporally dependent the processes are. The null hypothesis in this example is true as each process is independent of the others. The Lancaster test was performed using both the wild bootstrap and the simple permutation bootstrap (used in the \emph{i.i.d.}~case) in order to sample from the null distribution of the test statistic.
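The wild bootstrap weights in these experiments follow the AR(1) recursion of Equation \eqref{equation:bootstrap}. A minimal sketch (assuming $l_n=20$ as in the text; the function names are ours): marginally each $W_t$ is $\mathcal{N}(0,1)$, but neighbouring weights are strongly correlated, which is what lets the bootstrap mimic the temporal dependence of the data.

```python
import numpy as np

def wild_bootstrap_weights(n, l_n=20, rng=None):
    """W_t = exp(-1/l_n) W_{t-1} + sqrt(1 - exp(-2/l_n)) eps_t, with W_1 ~ N(0,1).
    The recursion preserves the N(0,1) marginal at every step."""
    rng = np.random.default_rng() if rng is None else rng
    a = np.exp(-1.0 / l_n)
    W = np.empty(n)
    W[0] = rng.normal()
    for t in range(1, n):
        W[t] = a * W[t - 1] + np.sqrt(1.0 - a * a) * rng.normal()
    return W

def bootstrapped_statistic(H, W):
    """nV_b = (1/n) W^T H W for a precomputed core matrix H[i, j] = h(S_i, S_j)."""
    return W @ H @ W / len(W)
```

Repeating this with many independent draws of $W$ and taking empirical quantiles of $nV_b$ gives the rejection threshold.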
We used a fixed desired false positive rate $\alpha = 0.05$ with samples of size 1000, with 200 experiments run for each value of $a$. Figure \ref{wildBootstrap_is_necessary} shows the false positive rates for these two methods for varying $a$. It shows that as the processes become more dependent, the false positive rate for the permutation method becomes very large, and is not bounded by the fixed $\alpha$, whereas the false positive rate for the wild bootstrap method is bounded by $\alpha$. \begin{figure}[ht] \vskip 0.2in \begin{center} \centerline{\includegraphics[scale=0.6]{UAI_Figure2.pdf}} \caption{Results of experiment in Section \ref{experiment2}. Whereas the wild bootstrap succeeds in controlling the Type I error across all values of the dependence coefficient, the permutation bootstrap fails to control the Type I error as it does not sample from the correct null distribution as temporal dependence between samples increases.} \label{wildBootstrap_is_necessary} \end{center} \vskip -0.2in \end{figure} \subsection{STRONG PAIRWISE INTERACTION}\label{experiment3} This experiment demonstrates a limitation of the Lancaster test: when pairwise interaction is strong, 3-way HSIC has greater test power than Lancaster. Synthetic data were generated from autoregressive processes $X$, $Y$ and $Z$ according to: \begin{align*} X_t &= \frac{1}{2}X_{t-1} + \epsilon_t\\ Y_t &= \frac{1}{2}Y_{t-1} + \eta_t\\ Z_t &= \frac{1}{2}Z_{t-1} + d(X_t + Y_t) + \zeta_t \end{align*} where $X_0, Y_0, Z_0, \epsilon_t, \eta_t$ and $\zeta_t$ are \emph{i.i.d.}~$\mathcal{N}(0,1)$ random variables and $d\in\mathbb{R}$, called the \emph{dependence} coefficient, determines the extent to which the process $(Z_t)_t$ is dependent on $X_t$ and $Y_t$. Data were generated with varying values for the dependence coefficient. For each value of $d$, 300 datasets were generated, each consisting of 1200 consecutive observations of the variables.
Gaussian kernels with bandwidth parameter 1 were used on each variable, and 250 bootstrapping procedures were used for each test on each dataset. In this case $Z_t$ is pairwise-dependent on both of $X_t$ and $Y_t$, in addition to all three variables being jointly dependent. Both the Lancaster and 3-way HSIC tests should be capable of detecting the dependence and therefore reject the null hypothesis in the limit of infinite data. The results of this experiment are presented in Figure \ref{strong-pairwise}, which demonstrates that in this case the 3-way HSIC test is more sensitive to the dependence than the Lancaster test. \begin{figure}[ht] \vskip 0.2in \begin{center} \centerline{\includegraphics[scale=0.6]{UAI_Figure3.pdf}} \caption{Results of experiment in Section \ref{experiment3}. (S) refers to the simple multiple testing correction; (HB) refers to Holm-Bonferroni. The Lancaster test is less sensitive to dependence than 3-way HSIC, and test power in both cases is higher when using the simple correction rather than the Holm-Bonferroni multiple testing correction.} \label{strong-pairwise} \end{center} \vskip -0.2in \end{figure} \subsection{FOREX DATA} Exchange rates between three currencies (GBP, USD, EUR) at 5-minute intervals over 7 consecutive trading days were obtained. The data were processed by taking the returns (the difference between consecutive terms within each time series, $x_t^r = x_t-x_{t-1}$), which were then normalised (divided by their standard deviation). We performed the Lancaster test, 3-way HSIC and pairwise HSIC using the first $800$ entries of each processed series. All tests rejected the null hypothesis. The Lancaster test returned $p$-values of 0 for each of $\mathcal{H}_X$, $\mathcal{H}_Y$ and $\mathcal{H}_Z$ with $10000$ bootstrapping procedures. We then shifted one of the time series and repeated the tests (i.e.~we used entries $1$ to $800$ of two of the processed series and entries $801$ to $1600$ of the third).
In this case, pairwise HSIC still detected dependence between the two unshifted time series, while neither Lancaster nor 3-way HSIC rejected the null hypothesis that the joint distribution factorises. The Lancaster test returned $p$-values of $0.2708$, $0.2725$ and $0.1975$ for $\mathcal{H}_X$, $\mathcal{H}_Y$ and $\mathcal{H}_Z$ respectively. In both cases, the Lancaster test behaves as expected. Due to arbitrage, any two exchange rates should determine the third and the Lancaster test correctly identifies a joint dependence in the returns. However, when we shift one of the time series, we break the dependence between it and the other series, so that the underlying distribution does factorise, and accordingly the test no longer rejects. \section{DISCUSSION AND FUTURE RESEARCH}\label{section:discussion} We demonstrated that the Lancaster test is more sensitive than 3-way HSIC when pairwise interaction is weak, but that the opposite is true when pairwise interaction is strong. It is curious that the two tests have different strengths in this manner, particularly when considering the very similar forms of the statistics in each case. Indeed, to test $\mathcal{H}_Z$ using the Lancaster statistic, we bootstrap the following: \begin{align*} n\|\Delta_L\hat{P}\|^2 = \frac{1}{n}\left(\widetilde{\left( \tilde{K} \circ \tilde{L}\right) }\circ \tilde{M} \right)_{++} \end{align*} while for the 3-way HSIC test we bootstrap: \begin{align*} nHSIC_b = \frac{1}{n}\left(\widetilde{\left( K \circ L\right) }\circ \tilde{M} \right)_{++} \end{align*} These two quantities differ only in the centring of $K$ and $L$, amounting to constant shifts in the respective feature spaces of the kernels $k$ and $l$. This difference has the consequence of quite drastically changing the types of dependency to which each statistic is sensitive. A formal characterisation of the cases in which the Lancaster statistic is more sensitive than 3-way HSIC would be desirable.
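Both displays above can be written down in a few lines given the Gram matrices, reading the tilde as double centring $\tilde K = HKH$ with $H = I - \frac{1}{n}\mathbf{1}\mathbf{1}^\intercal$ and $(\cdot)_{++}$ as the sum over all entries. The sketch below (our notation; toy Gaussian Gram matrices) also makes the centring remark concrete: adding a constant to $K$ leaves the Lancaster statistic unchanged, since $H(K+c\mathbf{1}\mathbf{1}^\intercal)H = HKH$.

```python
import numpy as np

def centre(K):
    """Doubly centre a Gram matrix: K~ = H K H with H = I - (1/n) 1 1^T."""
    n = K.shape[0]
    Hc = np.eye(n) - np.ones((n, n)) / n
    return Hc @ K @ Hc

def lancaster_stat(K, L, M):
    """(1/n) * sum of entries of centre(K~ o L~) o M~, where o is Hadamard."""
    return (centre(centre(K) * centre(L)) * centre(M)).sum() / K.shape[0]

def threeway_hsic_stat(K, L, M):
    """Same shape, but K and L enter uncentred: (1/n) * sum of centre(K o L) o M~."""
    return (centre(K * L) * centre(M)).sum() / K.shape[0]

# Toy Gaussian Gram matrices on three one-dimensional samples.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 3))
K = np.exp(-((A[:, :1] - A[:, :1].T) ** 2))
L = np.exp(-((A[:, 1:2] - A[:, 1:2].T) ** 2))
M = np.exp(-((A[:, 2:] - A[:, 2:].T) ** 2))
```

A constant shift of the kernel $k$ changes 3-way HSIC in general but not the Lancaster statistic, which is one concrete way of seeing how the two statistics weight dependence differently.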
\section{PROOFS}\label{section:proofs} An outline of the proof of Theorem \ref{theorem:norm-conv-in-prob} was given in Section \ref{section:main}; here we provide the full proof, as well as a proof of Theorem \ref{theorem:degenerate-kernel}. \begin{proof}(Theorem \ref{theorem:norm-conv-in-prob}) By observing that \begin{align*} & \phi_X(X_i)- \frac{1}{n}\sum_k\phi_X(X_k) \\ = \enspace& (\phi_X(X_i) - \mu_X) - \frac{1}{n}\sum_k (\phi_X(X_k) - \mu_X)\\ = \enspace&\bar\phi_X(X_i)- \frac{1}{n}\sum_k\bar\phi_X(X_k) \end{align*} we can therefore expand $\tilde{K}$ in terms of $\bar{K}$ as \begin{align*} &\tilde{K}_{ij} \\ &= \langle\phi_X(X_i)- \frac{1}{n}\sum_k\phi_X(X_k),\phi_X(X_j) - \frac{1}{n}\sum_k\phi_X(X_k)\rangle \\ &= \langle\bar\phi_X(X_i)- \frac{1}{n}\sum_k\bar\phi_X(X_k),\bar\phi_X(X_j) - \frac{1}{n}\sum_k\bar\phi_X(X_k)\rangle \\ &= \bar{K}_{ij} - \frac{1}{n}\sum_k\bar{K}_{ik} - \frac{1}{n}\sum_k\bar{K}_{jk} + \frac{1}{n^2}\sum_{kl}\bar{K}_{kl} \end{align*} and expanding $\tilde{L}$ and $\tilde{M}$ in a similar way, we can rewrite the Lancaster test statistic as \begin{align*} n\|\hat \mu_L\|^2 &= \frac{1}{n}(\bar{K} \circ \bar{L}\circ \bar{M})_{++} &&- \frac{2}{n^2}((\bar{K}\circ \bar{L}) \bar{M})_{++} \\&- \frac{2}{n^2}((\bar{K} \circ \bar{M}) \bar{L})_{++} &&- \frac{2}{n^2}((\bar{M} \circ \bar{L}) \bar{K})_{++} \\&+ \frac{1}{n^3}(\bar{K} \circ \bar{L})_{++} \bar{M}_{++} &&+ \frac{1}{n^3}(\bar{K} \circ \bar{M})_{++} \bar{L}_{++} \\&+ \frac{1}{n^3}(\bar{L} \circ \bar{M})_{++} \bar{K}_{++} &&+ \frac{2}{n^3}(\bar{M}\bar{K}\bar{L})_{++} \\&+ \frac{2}{n^3}(\bar{K}\bar{L}\bar{M})_{++} &&+ \frac{2}{n^3}(\bar{K}\bar{M}\bar{L})_{++} \\&+ \frac{4}{n^3}tr(\bar{K}_+ \circ \bar{L}_+ \circ \bar{M}_+) &&- \frac{4}{n^4}(\bar{K} \bar{L})_{++} \bar{M}_{++} \\& - \frac{4}{n^4}(\bar{K}\bar{M})_{++}\bar{L}_{++} &&- \frac{4}{n^4}(\bar{L}\bar{M})_{++} \bar{K}_{++} \\&+ \frac{4}{n^5}\bar{K}_{++} \bar{L}_{++} \bar{M}_{++} \\ \end{align*} We denote by $C_{XYZ} = 
\mathbb{E}_{XYZ}[\bar\phi_X(X)\otimes\bar\phi_Y(Y)\otimes\bar\phi_Z(Z)]$ the population centred covariance operator with empirical estimate $\bar{C}_{XYZ} = \frac{1}{n}\sum_i\bar\phi_X(X_i)\otimes\bar\phi_Y(Y_i)\otimes\bar\phi_Z(Z_i)$. We define similarly the quantities $C_{XY}, C_{YZX}, \ldots$ with corresponding empirical counterparts $\bar{C}_{XY}, \bar{C}_{YZX}, \ldots$ where for example $C_{YZ} = \mathbb{E}_{YZ}[\bar\phi_Y(Y)\otimes\bar\phi_Z(Z)]$ Each of the terms in the above expression for $n\|\hat \mu_L\|^2$ can be expressed as inner products between empirical estimates of population centred covariance operators and tensor products of mean embeddings. Rewriting them as such yields: \begin{align*} n\|\hat \mu_L\|^2 &= n\langle \bar{C}_{XYZ},\bar{C}_{XYZ} \rangle \\& - 2n\langle \bar{C}_{XYZ},\bar{C}_{XY}\otimes\bar{\mu}_Z \rangle \\& - 2n\langle \bar{C}_{XZY},\bar{C}_{XZ}\otimes\bar{\mu}_Y \rangle \\& - 2n\langle \bar{C}_{YZX},\bar{C}_{YZ}\otimes\bar{\mu}_X \rangle \\& + n\langle \bar{C}_{XY}\otimes\bar{\mu}_Z,\bar{C}_{XY}\otimes\bar{\mu}_Z \rangle \\& + n\langle \bar{C}_{XZ}\otimes\bar{\mu}_Y,\bar{C}_{XZ}\otimes\bar{\mu}_Y \rangle \\& + n\langle \bar{C}_{YZ}\otimes\bar{\mu}_X,\bar{C}_{YZ}\otimes\bar{\mu}_X \rangle \\& + 2n\langle \bar{\mu}_Z\otimes\bar{C}_{XY},\bar{C}_{ZX}\otimes\bar{\mu}_Y \rangle \\ & \enspace \vdots \end{align*} \begin{align*} &+2n\langle \bar{\mu}_X\otimes\bar{C}_{YZ},\bar{C}_{XY}\otimes\bar{\mu}_Z \rangle \\& + 2n\langle \bar{\mu}_X\otimes\bar{C}_{ZY},\bar{C}_{XZ}\otimes\bar{\mu}_Y \rangle \\& + 4n\langle \bar{C}_{XYZ},\bar{\mu}_X \otimes\bar{\mu}_Y \otimes \bar{\mu}_Z \rangle \\& - 4n\langle \bar{C}_{XY}\otimes \bar{\mu}_Z,\bar{\mu}_X \otimes\bar{\mu}_Y \otimes \bar{\mu}_Z \rangle \\& - 4n\langle \bar{C}_{XZ}\otimes \bar{\mu}_Y,\bar{\mu}_X \otimes\bar{\mu}_Z \otimes \bar{\mu}_Y \rangle \\& - 4n\langle \bar{C}_{YZ}\otimes \bar{\mu}_X,\bar{\mu}_Y \otimes\bar{\mu}_Z \otimes \bar{\mu}_X \rangle \\& + 4n\langle \bar{\mu}_X 
\otimes\bar{\mu}_Y \otimes \bar{\mu}_Z,\bar{\mu}_X \otimes\bar{\mu}_Y \otimes \bar{\mu}_Z \rangle \\ \end{align*} \vspace{-0.5cm} By assumption, $\mathbb{P}_{XYZ} =\mathbb{P}_{XY}\mathbb{P}_{Z}$ and thus the expectation operator also factorises similarly. As a consequence, $C_{XYZ}=0$. Indeed, given any $A \in \mathcal{F}_X \otimes \mathcal{F_Y} \otimes \mathcal{F_Z}$, we can consider $A$ to be a bounded linear operator $\mathcal{F_Z} \longrightarrow\mathcal{F}_X \otimes \mathcal{F_Y} $. It follows that\footnote{We can bring the $\mathbb{E}_Z$ inside the inner product in the penultimate line due to the Bochner integrability of $\bar{\phi}_Z(Z)$, which follows from the conditions required for $\mu_Z$ to exist \citep{steinwart2008support}. } \begin{align*} &\mathbb{E}_{XYZ}\langle A, \bar{C}_{XYZ}\rangle \\ &= \frac{1}{n}\sum_i \mathbb{E}_{XY}\mathbb{E}_{Z}\langle A, \bar{\phi}_X(X_i)\otimes \bar{\phi}_Y(Y_i) \otimes \bar{\phi}_Z(Z_i) \rangle\\ &= \frac{1}{n}\sum_i \mathbb{E}_{XY}\mathbb{E}_{Z}\langle\bar{\phi}_X(X_i)\otimes \bar{\phi}_Y(Y_i), A \bar{\phi}_Z(Z_i) \rangle_{\mathcal{F}_{X}\otimes \mathcal{F}_Y}\\ &= \frac{1}{n}\sum_i \mathbb{E}_{XY}\langle\bar{\phi}_X(X_i)\otimes \bar{\phi}_Y(Y_i), A \mathbb{E}_{Z} \bar{\phi}_Z(Z_i) \rangle_{\mathcal{F}_{X}\otimes \mathcal{F}_Y}\\ & = 0\\ \end{align*} \vspace{-0.9cm} We conclude that $C_{XYZ} = \mathbb{E}_{XYZ} \bar{C}_{XYZ} = 0$. Similarly, $C_{XZY}$, $C_{YZX}$, $C_{XZ}$, $C_{YZ}$ are all 0 in their respective Hilbert spaces. Lemma \ref{lemma:beta} tells us that each subprocess of $(X_i,Y_i,Z_i)$ satisfies the same $\beta$-mixing conditions as $(X_i,Y_i,Z_i)$, thus by applying Lemma \ref{lemma:hilbertCLT} it follows that $\|\bar{C}_{XZY}\|$, $\|\bar{C}_{YZX}\|$, $\|\bar{C}_{XZ}\|$, $\|\bar{C}_{YZ}\|$, $\|\bar{\mu}_X\|$, $\|\bar{\mu}_Y\|$, $\|\bar{\mu}_Z\| = O_P\left(n^{-\frac{1}{2}}\right)$. 
Therefore \begin{align*} n\|&\hat \mu_L\|^2 \xrightarrow{O_P(n^{-\frac{1}{2}})} n\langle \bar{C}_{XYZ},\bar{C}_{XYZ} \rangle \\ &- 2n\langle \bar{C}_{XYZ},\bar{C}_{XY}\otimes\bar{\mu}_Z \rangle - 2n\langle \bar{C}_{XZY},\bar{C}_{XZ}\otimes\bar{\mu}_Y \rangle \\ &= \frac{1}{n}((\bar{K}\circ \bar{L}) \circ \bar{M})_{++}\\& - \frac{2}{n^2}((\bar{K}\circ \bar{L})\bar{M})_{++} + \frac{1}{n^3}(\bar{K}\circ \bar{L})_{++}\bar{M}_{++} \end{align*} since all the other terms decay at least as quickly as $O_P(\frac{1}{\sqrt{n}})$. This is shown here for $n\langle \bar{\mu}_X\otimes\bar{C}_{YZ},\bar{C}_{XY}\otimes\bar{\mu}_Z \rangle$; the proofs for the other terms are similar. \begin{align*} &n\langle \bar{\mu}_X\otimes\bar{C}_{YZ},\bar{C}_{XY}\otimes\bar{\mu}_Z \rangle \\ &\leq n \| \bar{\mu}_X\otimes\bar{C}_{YZ}\| \|\bar{C}_{XY}\otimes\bar{\mu}_Z \| \\ & = n\sqrt{\langle \bar{\mu}_X\otimes\bar{C}_{YZ} , \bar{\mu}_X\otimes\bar{C}_{YZ} \rangle} \sqrt{\langle \bar{C}_{XY}\otimes\bar{\mu}_Z, \bar{C}_{XY}\otimes\bar{\mu}_Z \rangle} \\ \end{align*} \begin{align*} & = n\sqrt{\langle \bar{\mu}_X, \bar{\mu}_X \rangle \langle \bar{C}_{YZ} , \bar{C}_{YZ} \rangle} \sqrt{\langle \bar{C}_{XY}, \bar{C}_{XY} \rangle \langle \bar{\mu}_Z, \bar{\mu}_Z \rangle} \\ & = n \| \bar{\mu}_X\|\|\bar{C}_{YZ}\| \|\bar{C}_{XY}\|\|\bar{\mu}_Z \| \\ & = n \mathsmaller{O_P\left(\frac{1}{\sqrt{n}}\right)} \mathsmaller{O_P\left(\frac{1}{\sqrt{n}}\right)} O_P(1) \mathsmaller{O_P\left(\frac{1}{\sqrt{n}}\right)} = \mathsmaller{O_P\left(\frac{1}{\sqrt{n}}\right)} \end{align*} It can be shown that $\bar{K}\circ \bar{L}$ in the above expression can be replaced with $\overline{\bar{K}\circ \bar{L}}$ while preserving equality. 
That is, we can equivalently write \begin{align*} n\|\Delta_L \hat{P}\|^2 & \longrightarrow \frac{1}{n}((\overline{\bar{K}\circ \bar{L}}) \circ \bar{M})_{++}\\& - \frac{2}{n^2}((\overline{\bar{K}\circ \bar{L}})\bar{M})_{++} + \frac{1}{n^3}(\overline{\bar{K}\circ \bar{L}})_{++}\bar{M}_{++} \end{align*} This is equivalent to treating $\bar{k}\otimes\bar{l}$ as a kernel on the single variable $T:=(X,Y)$ and performing another recentering trick as we did at the beginning of this proof. By rewriting the above expression in terms of the operator $\bar{C}_{TZ}$ and mean embeddings $\mu_T$ and $\mu_Z$, it can be shown by a similar argument to before that the latter two terms tend to 0 at least as $O_P(n^{-\frac{1}{2}})$, and thus, substituting for the definition of $\|\hat \mu^{(Z)}_{L,2} \|^2$, \begin{align*} n \|\hat \mu_{L} \|^2 \xrightarrow{O_P(\frac{1}{\sqrt{n}})} n \|\hat \mu^{(Z)}_{L,2} \|^2 \end{align*} as required. \end{proof} \begin{proof}(Theorem \ref{theorem:degenerate-kernel}) Note that $\mathbb{E}_{XYZ} = \mathbb{E}_{XY}\mathbb{E}_Z$ under $\mathcal{H}_Z$. Therefore, fixing any $s_j = (x_j,y_j,z_j)$ we have that \begin{align*} \mathbb{E}_{S_i}&h(S_i,s_j) = \mathbb{E}_{X_iY_i} \mathbb{E}_{Z_i}\overline{\bar{k}\otimes\bar{l}}\otimes\bar{m} (S_i,s_j) \\ &= \langle\mathbb{E}_{X_iY_i}\bar{\phi}(X_i)\otimes\bar{\phi}(Y_i) - C_{XY},\bar{\phi}(x_j)\otimes\bar{\phi}(y_j) - C_{XY}\rangle \\ &\quad \quad \quad \quad \times \langle \mathbb{E}_{Z_i}\bar{\phi}(Z_i),\bar{\phi}(z_j)\rangle \\ &= \langle 0 ,\bar{\phi}(x_j)\otimes\bar{\phi}(y_j) - C_{XY}\rangle \\ &\quad \quad \quad \quad \times \langle 0 ,\bar{\phi}(z_j)\rangle = 0 \end{align*} Therefore $h$ is degenerate. Symmetry follows from the symmetry of the Hilbert space inner product. 
For boundedness and Lipschitz continuity, it suffices to show that the following two rules for constructing new kernels from old preserve both properties (see Supplementary materials \ref{supp:bounded-and-lipschitz} for proof): \begin{itemize} \setlength\itemsep{0em} \item $k \mapsto \bar{k}$ \item $(k,l) \mapsto k \otimes l$ \end{itemize} It then follows that $h = \overline{\bar{k}\otimes\bar{l}}\otimes\bar{m}$ is bounded and Lipschitz continuous since it can be constructed from $k$, $l$ and $m$ using the two above rules. \end{proof}
\section{Introduction}\label{Sec1} We begin with our main definition. \begin{definition}\label{MainDef} Let $G$ be a finite group and for $g,h\in G$ let \[ \Sigma(g,h):=\bigcup_{i=1}^{|G|}\bigcup_{k\in G}\{(g^i)^k,(h^i)^k,((gh)^i)^k\}. \] A set of elements $\{\{x_1,y_1\},\{x_2,y_2\}\}\subset G\times G$ is a \emph{Beauville structure of} $G$ if and only if $\langle x_1,y_1\rangle=\langle x_2,y_2\rangle=G$ and \begin{equation} \Sigma(x_1,y_1)\cap\Sigma(x_2,y_2)=\{e\}.\tag{$\dagger$} \end{equation} If $G$ has a Beauville structure then we call $G$ a \emph{Beauville group}. Let $G$ be a Beauville group and let $X =\{\{x_1, y_1\},\{x_2, y_2\}\}$ be a Beauville structure for $G$. We say that $G$ and $X$ are \emph{strongly real} if there exists an automorphism $\phi\in\mbox{Aut}(G)$ and elements $g_i\in G$ for $i = 1, 2$ such that \begin{equation} g_i\phi(x_i)g_i^{-1}=x_i^{-1}\mbox{ and }g_i\phi(y_i)g_i^{-1}=y_i^{-1}\tag{$\star$} \end{equation} for $i=1,2$. \end{definition} Beauville groups were originally introduced in connection with a class of complex surfaces of general type known as Beauville surfaces. These surfaces possess many useful geometric properties: their automorphism groups and fundamental groups are relatively easy to compute and these surfaces are rigid in the sense of admitting no non-trivial deformations and are thus isolated points in the moduli space of surfaces of general type. They provide cheap counterexamples and are a useful testing ground for conjectures. What makes these surfaces so easy to work with is the fact that working with them boils down to working inside the corresponding Beauville group and structure. For $p\geq5$ abelian strongly real Beauville $p$-groups are easy to construct: writing $C_n$ for the cyclic group of order $n$, in \cite[Section 3]{C} Catanese classified the abelian Beauville groups proving that they are precisely the groups $C_n\times C_n$ where $n>1$ is coprime to 6.
Consequently, if $p\geq5$ we can construct infinitely many abelian strongly real Beauville $p$-groups by simply setting $n$ to be a power of $p$ (though Catanese's result also tells us that there are no abelian Beauville 2-groups or 3-groups.) Non-abelian examples are much harder to construct. As far as the author is aware the only previously published examples of non-abelian strongly real Beauville $p$-groups are a pair of 2-groups constructed by the author in \cite[Section 7]{MoreF}. There is sound reason for believing the case of $p$ odd is harder. In \cite{HM} Helleloid and Martin prove that the automorphism group of a finite $p$-group is almost always a $p$-group. In particular, if $p$ is odd, then typically no automorphism like the $\phi$ in condition ($\star$) of Definition \ref{MainDef} exists since such an automorphism must necessarily have even order. Even ignoring the strongly real condition, Beauville $p$-groups are more difficult to construct in general --- a commonly used trick for showing that condition $(\dagger)$ of Definition \ref{MainDef} is satisfied is to find a Beauville structure such that $o(x_1)o(y_1)o(x_1y_1)$ is coprime to $o(x_2)o(y_2)o(x_2y_2)$ but this clearly cannot be done in a $p$-group since every non-trivial element has an order that is a power of $p$. Further motivation comes from the fact that in some sense `most' finite groups are $p$-groups \cite{obrian1,obrian2} and thus establishing the general picture in this case goes a long way towards establishing the wider picture in general. Despite the above difficulties, a number of authors have found a variety of ingenious constructions for them \cite{BBF,BBPV1,BBPV2,MoreF,new,FGJ,Gul1,JW,SV}. Our main result is as follows. \begin{theorem}\label{MainThm} Let $p$ be an odd prime and let $q$ and $r$ be powers of $p$.
If $q$ and $r$ are sufficiently large, then the groups $C_q\wr C_r/Z(C_q\wr C_r)$ are strongly real Beauville groups. \end{theorem} Whilst this document was in preparation, in \cite{Gul2} G\"{u}l gave the first construction of infinite families of strongly real Beauville $p$-groups for $p$ odd. Our result relies on a more general construction and, unlike the one given in \cite{Gul2}, there are infinitely many orders of $p$-groups for which our construction gives multiple groups of the same order. For example when $(q,r)=(3^{28},3^3)$ or $(q,r)=(3^3,3^5)$ we obtain groups of order $3^{731}$ which cannot be isomorphic since they have centers of different orders. For general surveys on these and related matters see \cite{BSurvey,FSurvey,JSurvey,S,WSurvey}. \section{Proof of the Main Theorem} \subsection{A General Lemma} The following general lemma is straightforward to prove but useful. \begin{lemma}\label{MainLem} Let $G$ be a finite group; let $Z\leq G$ be a characteristic subgroup; let $t\in\mbox{Aut}(G)$ and let $x_1,y_1,x_2,y_2\in G$ have the properties that \begin{equation} \Sigma(x_1,y_1)\cap\Sigma(x_2,y_2)\subseteq Z,\tag{$\dagger\dagger$} \end{equation} $\langle x_1,y_1\rangle=\langle x_2,y_2\rangle=G$ and \begin{equation} x_i^t=x_i^{-1}\mbox{ and }y_i^t=y_i^{-1}\mbox{ for }i=1,2.\tag{$\star\star$} \end{equation} Then $G/Z$ is a strongly real Beauville group. \end{lemma} \begin{proof} By hypothesis $\langle x_i,y_i\rangle=G$ for $i=1,2$ and by condition ($\dagger\dagger$) we have that $\Sigma(x_1Z,y_1Z)\cap\Sigma(x_2Z,y_2Z)=\{e\}$. Now $t$ induces an automorphism of $G/Z$ that can be used to define a strongly real Beauville structure since for $i=1,2$ \begin{tabular}{rclr} $(x_iZ)^t$&=&$x_i^tZ^t$&\\ &=&$x_i^tZ$&[since $Z$ is a characteristic subgroup, by hypothesis]\\ &=&$x_i^{-1}Z$&[since $x_i^t=x_i^{-1}$, by hypothesis]\\ &=&$(x_iZ)^{-1}$& \end{tabular} \noindent and similarly $(y_iZ)^t=(y_iZ)^{-1}$.
It follows that $\{\{x_1Z,y_1Z\},\{x_2Z,y_2Z\}\}\subset G/Z$ gives a strongly real Beauville structure since homomorphic images of generating sets are generating sets. \end{proof} \subsection{The Groups}\label{TheGroups} By Lemma \ref{MainLem} the problem of showing that $G/Z$ is a strongly real Beauville group can be lifted to the potentially much simpler task of working inside the group $G$ instead. Let $p$ be a prime and let $q$ and $r$ be powers of $p$. We will consider the wreath product $C_q\wr C_r$. Intuitively, this is a natural class of groups to consider since having a large abelian subgroup, the subgroup isomorphic to $C_q^r$, ensures that most elements have a large centralizer and so conjugacy classes are small thus making it easier to satisfy condition ($\dagger$) of Definition \ref{MainDef}. Unfortunately, wreath products like these can never be Beauville groups --- for any generating pair $\{g,h\}$ we have that $\Sigma(g,h)$ contains the (non-trivial) center of the group making it impossible to satisfy condition ($\dagger$) of Definition \ref{MainDef}. The above lemma, however, enables us to work around this problem. To give explicit names to elements of these groups we will define our copy of $C_q\wr C_r$ by the following presentation \[ \bigg\langle x,y\,\bigg|\,x^q,y^r,[x,x^{y^i}]\mbox{ for }i=1,\ldots,\frac{r-1}{2}\bigg\rangle. \] \subsection{A Representation}\label{RepSec} To help show that condition ($\star\star$) of Lemma \ref{MainLem} is satisfied by the elements we will be considering we will calculate the traces of matrices representing the elements of our groups. To do this we first consider a permutation representation on the points $\{1,\ldots,qr\}$. 
For the element $x$ we take the permutation \[ \Xi:=\bigg(\frac{r+1}{2},\frac{r+1}{2}+r,\frac{r+1}{2}+2r,\ldots,\frac{r+1}{2}+(q-1)r\bigg) \] and for the element $y$ we take the permutation \[ \Upsilon:=(1,2,\ldots,r)(r+1,r+2,\ldots,2r)\cdots(qr-r+1,qr-r+2,\ldots,qr) \] so that $\Xi$ cyclically permutes the midpoints of the cycles defining $\Upsilon$. To construct the element $\phi$ in Definition \ref{MainDef} we consider the involution \[ t:=\bigg(1,qr\bigg)\bigg(2,qr-1\bigg)\cdots\bigg(\frac{qr+1}{2}-1,\frac{qr+1}{2}+1\bigg). \] Direct calculation shows that, given the $x$ and $y$ above, we have that $x^t=x^{-1}$ and $y^t=y^{-1}$. Moreover, $t$ is an automorphism of the group since it simply sends each of the defining relations in the above presentation to one that is immediately implied by them. Next we construct a degree $qr+2$ representation of these elements which is depicted in Figure \ref{MainFig}. This representation is given as $W\oplus V_1\oplus V_2$ where $W$ is acted on by the permutation matrices defined by $\Xi$ and $\Upsilon$ whilst $V_1$ and $V_2$ are linear representations included to give the matrices traces that will distinguish the various elements we are interested in. \begin{figure} \[ x\mapsto X:=\left(\begin{array}{c|c|c} \Xi&&\\ \hline &\zeta_q&\\ \hline &&1 \end{array}\right)\mbox{, } y\mapsto Y:=\left(\begin{array}{c|c|c} \Upsilon&&\\ \hline &1&\\ \hline &&\zeta_r \end{array}\right) \] \label{MainFig}\caption{The representation.
Here we write permutations to denote their corresponding permutation matrix; 1 to denote the $1\times1$ identity matrix and $\zeta_n$ to denote a primitive $n^{th}$ root of unity.} \end{figure} \subsection{The Beauville Structure} In this subsection we finally give the Beauville structure that proves Theorem \ref{MainThm}. We claim that the elements $x$, $y$, $x^yxx^{y^{-1}}$ and $xyx$ satisfy the hypotheses of Lemma \ref{MainLem} and thus $C_q\wr C_r/Z(C_q\wr C_r)$ is a strongly real Beauville group since the center of a group is always a characteristic subgroup. We saw in Section \ref{RepSec} that the automorphism $\phi$ defined by conjugation by the element $t$ satisfies condition ($\star\star$). Writing $Tr(A)$ to denote the trace of the matrix $A$ we note that for non-identity powers of the matrices \[ Tr(X^i)=qr-q+1+\zeta_q^i\mbox{, }Tr(Y^i)=1+\zeta_r^i\mbox{ and }Tr((XY)^i)=\zeta_q^i+\zeta_r^i \] and \[ Tr((X^YXX^{Y^{-1}})^i)=qr-3q+1+\zeta_q^{3i}\mbox{, }Tr((XYX)^i)=\zeta_q^{2i}+\zeta_r^i \] \[ \mbox{ and }Tr((X^YXX^{Y^{-1}}XYX)^i)=\zeta_q^{5i}+\zeta_r^i. \] Since the non-central powers of these all have distinct traces it follows that no non-trivial power of $x$, $y$ or $xy$ can be conjugate to any non-trivial power of $xyx$, $x^yxx^{y^{-1}}$ or $xyxx^yxx^{y^{-1}}$, aside from those powers that lie in the center. It follows that these elements satisfy condition ($\dagger\dagger$) of Lemma \ref{MainLem}. Finally it only remains to prove that our elements generate the whole group. We defined $x$ and $y$ so that $\langle x,y\rangle=C_q\wr C_r$ so it only remains to show that $xyx$ and $x^yxx^{y^{-1}}$ together generate the group. To do this note that it is sufficient to express $x$ as a word in $xyx$ and $x^yxx^{y^{-1}}$ since once we have $x$ we have that \[ x^{-1}(xyx)x^{-1}=(x^{-1}x)y(xx^{-1})=y. \] First suppose that $p\not=3$.
Note that $x^yxx^{y^{-1}}((x^yxx^{y^{-1}})^{-1})^{xyx}=x^{y^{-1}}x^{-y^2}$ and conjugating this by $xyx$ gives us $xx^{-y^3}$. Conjugating this by $(xyx)^3$ gives us $x^{y^3}x^{-y^6}$ and so we have $xx^{-y^3}(x^{y^3}x^{-y^6})^{-1}=xx^{-y^6}$. Since 3 is coprime to the order of $y$ we can easily repeat this to obtain $xx^{-y^i}$ for any $i$. In particular we have the elements $xx^{-y}$ and $xx^{-y^{-1}}$ and so $(x^yxx^{y^{-1}})(xx^{-y})(xx^{-y^{-1}})=x^3$. Since 3 is coprime to the order of $x$ we can power this up to finally obtain $x$. Note that if $r=3$ the above will not work since $x^yxx^{y^{-1}}$ generates the center of the group in this case, and so $\langle xyx, x^yxx^{y^{-1}}\rangle$ is abelian and thus not the whole group. We thus insist that $r>3$ when $p=3$. (It is natural to consider such a restriction since the group $C_3\wr C_3$ is known to be too small to be a Beauville group, let alone a strongly real one, and so the same is obviously true of any quotient of it --- see \cite[Corollary 1.9]{BBF}.) In this case it turns out to be more convenient to use $x^{y^2}x^yxx^{y^{-1}}x^{y^{-2}}$ instead of $x^yxx^{y^{-1}}$. Whilst the trace calculations for this element will clearly be a little different in this case, they nonetheless show that the only powers of this element and its product with $xyx$ that are conjugate to any of $x$, $y$ or $xy$ lie in the center, so it is a perfectly valid candidate. Moreover it is also clearly inverted by our automorphism. To prove that this pair generates, an argument entirely analogous to that of the previous paragraph can be carried out, going via the elements $xx^{-y^5}$; this works since 5 is coprime to 3. \begin{comment} \section{$p=2$} Whilst Lemma \ref{MainLem} clearly also applies to the case $p=2$, the construction described in the previous section doesn't work in this case.
The permutations used in this case cannot resemble the ones that were given in Section \ref{RepSec} since there is no `middle' in the cycles representing $y$ for a cycle representing $x$ to permute. To work around this we change to considering a slightly different class of groups. Instead of considering the whole of $C_q\wr C_r$, we restrict to the subgroup generated by the permutations \[ \Xi:=(2,r+2,2r+2,\ldots,qr-r+2)(r-1,2r-1,\ldots,qr-r-1) \] and \[ \Upsilon:=(1,2,\ldots,r)(r+1,r+2,\ldots,2r)\cdots(qr-r,qr-r+1,\ldots,qr). \] Note that to avoid $r-1=3$ we add the restriction that $r>4$. We also consider the automorphism defined by \[ t:=\bigg(1,qr\bigg)\bigg(2,qr-1\bigg)\ldots\bigg(\frac{qr}{2},\frac{qr}{2}+1\bigg). \] We again need to avoid small cases --- the smallest Beauville 2-groups have order $2^7$ \cite[Corollary 1.9]{BBF} and so $G$ needs to have order at least $2^8$. Whilst it may look more natural to define $\Xi$ with a cycle starting at 1, starting further along ensures that the cycles of $x$, $x^y$ and $x^{y^{-1}}$ are all disjoint and so $x^yxx^{y^{-1}}$ does not power-up to the same class of involutions as $x$. Moreover, using $xyx$ does not work this time since it powers up to the same class of (non-central) involutions as $y$. Instead, using the element $yxy$ solves this problem and much of the construction of the previous section carries through in much the same way. THESE DON'T GENERATE \end{comment} \section{Concluding Remarks} Here we have explicitly constructed some families of strongly real Beauville $p$-groups; however, it would be of interest to have a general test for being a strongly real Beauville $p$-group, akin to the general conditions for a $p$-group being a Beauville group given by Fern\'{a}ndez-Alcober and G\"{u}l in \cite{new} or by Jones and Wolfart in \cite[Theorem 11.2]{JW}.
We also remark that the groups in the special case of $q=r=p\geq5$ were first shown to be Beauville groups, though not necessarily strongly real ones, by Jones and Wolfart in \cite[Exercise 11.1]{JW}. Whilst infinitely many strongly real Beauville $p$-groups are now known to exist for each odd prime, it would be interesting to know how frequently they occur. \begin{q} \begin{enumerate} \item[(a)] How does the proportion of Beauville groups of order $p^n$ that are strongly real vary as $n$ increases? \item[(b)] How does the proportion of Beauville groups of order $p^n$ that are strongly real vary as $p$ increases? \end{enumerate} \end{q} Finally, a related question about the proportion of $p$-groups that are Beauville is posed in \cite[Question 1.8]{BBF}. An even more obvious and pressing problem is the following: \begin{p} Construct infinitely many strongly real Beauville 2-groups. \end{p}
\section{Introduction} In the search for new phenomena, one well-motivated extension to the Standard Model (SM) is supersymmetry (SUSY). One very promising mode for SUSY discovery at hadron colliders is that of chargino-neutralino associated production with decay into three leptons. Charginos decay into a single lepton through a slepton $$\tilde{\chi}_1^{\pm} \rightarrow ~ \tilde{l}^{(*)} ~\nu_l \rightarrow \tilde{\chi}_1^{0} ~l^{\pm} ~\nu_l $$ and neutralinos similarly decay into two detectable leptons $$\tilde{\chi}_2^{0} \rightarrow ~ \tilde{l}^{\pm(*)} ~l^{\mp} \rightarrow \tilde{\chi}_1^{0} ~l^{\pm} ~l^{\mp}. $$ The detector signature is thus three SM leptons with associated missing energy from the undetected neutrinos and lightest neutralinos, $\tilde{\chi}_1^0$ (LSP), in the event. Many previous searches have used all three leptons for detection \cite{rut_note,2009arXiv0910.1931F}. The most generic form of SUSY is the MSSM, which, in many parameter spaces, gives the lepton signature that interests us \cite{susy_primer}. Unfortunately there are far too many free parameters in this model to test generically. In the past it has been traditional to use a specific gravity mediated SUSY breaking model called mSugra. For this analysis we adopt a more generic method, in which we present results in terms of exclusions in sparticle masses as opposed to mSugra parameter space. We construct simplified models of SUSY wherein we do not hope to develop a full model of SUSY, but an effective theory that can be easily translated to describe the kinematics of arbitrary models. We set the masses at the electroweak scale and include the minimal suite of particles necessary to describe the model, and effectively decouple all other particles by setting their masses above the TeV range. We also tune the couplings of the particles to mimic models that preferentially decay to taus. Specific models will determine permitted decay modes \cite{Ruderman:2010kj}.
Different models' SUSY breaking mechanisms will determine the allowed decay modes in broad categories. In this analysis we present two types of generic models. The first is a simplified gravity breaking model similar to mSugra; the second is a simplified gauge model, which encompasses a broad suite of theories with gauge mediated SUSY breaking (GMSB). In the simplified gravity model we generally have electroweak ($W^{\pm}$) production of $\tilde{\chi}_1^{\pm}, \tilde{\chi}_2^{0}$ pairs. $\tilde{\chi}_1^{\pm}$ then decays to $\tilde{l}^{\pm}, \nu_l$ and $\tilde{\chi}_2^{0}$ goes to $\tilde{l}^{\pm} l^{\mp}$. All the sleptons decay as usual, $\tilde{l}^{\pm} \rightarrow l^{\pm}, \tilde{\chi}_1^{0}$. We can tune the branching ratio to slepton flavors. For each SUSY point, we choose two branching ratios BR($\tilde{\chi}_2^{0}, \tilde{\chi}_1^{\pm} \rightarrow \tilde{\tau} + X) = 1, 1/3$. We choose the masses of the $\tilde{\chi}_1^{\pm}$ and $\tilde{\chi}_2^{0}$ to be equal. The simplified gauge model is motivated by gauge mediated SUSY breaking scenarios. Generally, the LSP is the gravitino, which is very light: in the sub-keV range. Also, charginos do not couple to right-handed sleptons in these models; therefore all chargino decays are to taus, so BR($\tilde{\chi}_1^{\pm} \rightarrow \tilde{\tau}_1 \nu_{\tau}) = 1 $ always. The $\tilde{\chi}_2^{0}$ can decay to all lepton flavors. The final feature of this model is that neither $\tilde{\chi}_1^{\pm}$ nor $\tilde{\chi}_2^{0}$ decays through SM bosons. \section{Analysis Overview} \label{sec:overview} Our approach is to look for the two same-signed leptons from trilepton events, since the opposite-signed pair has the disadvantage of large Standard Model backgrounds from electroweak Z decay. We select one electron or muon and one hadronically decaying tau. Requiring a hadronically decaying tau adds sensitivity to high tan$ \beta$ SUSY space.
Our main background therefore will be SM W + jets, where the W boson decays to an electron or muon and the jet fakes a hadronic tau in our detector. Our background model is composed of two distinct types. We use Monte Carlo to account for common SM processes naturally entering the background as well as processes with real taus that might contain a fake lepton. Any process involving a jet faking a tau is covered by our tau fake rate method; these processes are W + jet, conversion + jet and QCD. In all these processes, the jet fakes a tau and a lepton comes from the other leg of the event. Our fake rate is measured in a sample of pure QCD jets \cite{2009PhRvL.103t1801A}. We validate the measurement by applying it to three distinct regions orthogonal to our signal. We select our dilepton events and first understand the opposite-signed lepton-tau region. After applying an $H_T$ cut, we develop confidence that we understand the primary and secondary backgrounds, $Z\rightarrow \tau \tau$ and W + jets respectively. We then look at the same-signed signal region, where we expect to be dominated by our fake rate background. To set limits in the M(Chargino) vs. M(Slepton) plane, a grid of signal points is generated. We optimize a $\rm\,/\!\!\!\!{\it E_T}$ cut as a function of model parameters for each point to increase our sensitivity to signal. Limits are then found at each point, and iso-contours are interpolated to form our final limits on the SUSY process cross section. \section{Dataset And Selection} \label{sec:data} We use 1.96 TeV $p\bar{p}$ collision data from the Fermilab Tevatron corresponding to 6.0 $\textrm{fb}^{-1}$ of integrated luminosity from the CDF II detector. The data is triggered by requiring one lepton object, an electron or muon, as well as a cone-isolated tau-like object. We then apply standard CDF selection cuts to the objects. Electrons and muons are required to have an $E_T$ ($P_T$) cut of 10 GeV.
One-pronged taus have a $P_T$ cut of 15 GeV/c and three-pronged taus have a 20 GeV/c cut. The $P_T$ for a tau is considered to be the visible momentum: the sum of the tracks and $\pi^0$'s in the isolation cone. To reduce considerable QCD backgrounds we apply a cut on $H_T$, defined as the sum of the tau, lepton and $\rm\,/\!\!\!\!{\it E_T}$ in the event. The $H_T$ cut is 45, 50, and 55 GeV for the $\tau_1-\mu$, $\tau_1-e$ and $\tau_3 - \ell$ channels. We cut events where $d\phi (l,\tau) < 0.5$, as well as events with OS leptons within 10 GeV of the Z boson mass. $\rm\,/\!\!\!\!{\it E_T}$ is corrected for all selected objects and any jets observed in the event. Monte Carlo is scaled to reflect trigger inefficiencies as well as inefficiencies from lepton and tau reconstruction. \section{Backgrounds} Our background model is composed of two distinct types. We use Monte Carlo to simulate the detector response to diboson, $t\bar{t}$, and Z boson processes as well as real taus from W decay. These processes are normalized to their SM cross sections and weighted by scale factors to account for inefficiencies in trigger, ID and reconstruction. Any process involving a jet faking a tau is covered by our tau fake rate method; these processes are W + jet, conversion + jet and QCD. In all these processes, the jet fakes a tau and a lepton comes from the other leg of the event. We measure the fake rate in a sample of QCD jets. Our rate is defined as the ratio of tau objects to loose taus, where loose taus are tau-like objects that pass our trigger. Because the trigger has very decent tau-discriminating ability, this relative fake rate is fairly high. When applying the fake rate to fakeable objects, in order not to overestimate our fake contributions we use a subtraction procedure to account for the real taus that pass through our trigger. The measurement of the fake rate in both the leading and subleading QCD jets constitutes the systematic uncertainty on the measurement.
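As an illustration of the fake-rate method just described, the following toy sketch (our own code, with purely hypothetical numbers, not the CDF values) shows how a jet$\rightarrow\tau$ background prediction is assembled from the loose-tau sample after subtracting the real-tau contamination estimated from Monte Carlo:

```python
def fake_tau_background(n_loose_data, n_loose_real_mc, fake_rate):
    """Predicted jet->tau fake yield: apply the relative fake rate
    (tight/loose, measured in QCD jets) to the loose-tau sample after
    subtracting the expected real-tau contamination."""
    n_fakeable = n_loose_data - n_loose_real_mc  # real-tau subtraction
    return fake_rate * n_fakeable

# Illustrative (made-up) numbers: 5000 loose taus in data, 800 expected real
# taus from MC, relative fake rate of 0.27 measured in QCD jets.
print(fake_tau_background(5000.0, 800.0, 0.27))  # ~1134 predicted fakes
```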
We validate our tau fake rates in three different regions orthogonal to our signal. These regions reflect the three processes the fake rate will account for in the analysis. \section{OS Validation} Before we look at signal data in our blind analysis, a major validation step is to confirm agreement in the OS region. This region is dominated by $Z\rightarrow \tau \tau$ decays, which gives us confidence in our scale factor application. The secondary background in this region is W + jets, which serves as an additional check on our fake rate background. As can be seen in Table~\ref{table:oscr} as well as in Figure~\ref{fig:os_plots}, we have good confidence in our background model. \begin{table}[h] \centering \mbox{ \begin{tabular}{|l|r|} \hline \multicolumn{2}{|l|}{CDF Run II Preliminary $6.0\ \textrm{fb}^{-1}$} \\ \multicolumn{2}{|l|}{OS \ $ \ell - \tau $} \\ \hline \hline Process & Events $\pm$ stat $\pm$ syst \\ \hline Z$\rightarrow \tau \tau $ & $6967.3\pm 56.4\pm 557.4$ \\ Jet$\rightarrow \tau $ & $4526.5 \pm 26.8 \pm 1064.5$ \\ Z$\rightarrow \mu \mu $ & $262.5 \pm 20.1 \pm 21.0$ \\ Z$\rightarrow e e $ & $82.5 \pm 8.6 \pm 6.6 $ \\ W$ \rightarrow \tau \nu $ & $371.5 \pm 12.4 \pm 36.4 $ \\ t$\bar{\textrm{t}} $ & $36.3 \pm 0.3 \pm 5.1 $ \\ Diboson & $61.3 \pm 0.9 \pm 6.0 $ \\ \hline Total & 12308.0 $\pm\ 67.3\pm 1202.3 $\\ Data & 12268\\ \hline \end{tabular} } \caption{Total OS control region.} \label{table:oscr} \end{table} \begin{figure}[h!] \begin{tabular}{|c|c|} \hline \includegraphics[width=8cm,clip=]{figs/blessedplots/os/electron_et.eps} & \includegraphics[width=8cm,clip=]{figs/blessedplots/os/muon_m_pt.eps}\\ \hline \end{tabular} \caption{Plots of the OS Control Region, Electron $E_T$ (left) and Muon $P_T$ (right). \label{fig:os_plots}} \end{figure} \subsection{Observed Data and Limit Setting} After gaining confidence in the OS control region, we unblind the analysis and set limits on our models.
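The limit setting relies on the per-point $\rm\,/\!\!\!\!{\it E_T}$ optimization outlined in Section \ref{sec:overview}; a minimal sketch of such an $s/\sqrt{b}$ threshold scan (our own toy code, with invented event lists rather than the analysis samples) could look like:

```python
import math

def optimize_met_cut(signal_met, background_met, candidate_cuts):
    """Scan candidate MET thresholds and return the one maximizing s/sqrt(b).
    signal_met / background_met are per-event MET values (weights omitted here)."""
    best_cut, best_signif = None, -1.0
    for cut in candidate_cuts:
        s = sum(1 for met in signal_met if met > cut)
        b = sum(1 for met in background_met if met > cut)
        if b == 0:
            continue  # avoid dividing by zero background
        signif = s / math.sqrt(b)
        if signif > best_signif:
            best_cut, best_signif = cut, signif
    return best_cut, best_signif

# Invented toy events: signal peaks at high MET, background at low MET.
signal = [60.0] * 10
background = [25.0] * 4 + [60.0]
print(optimize_met_cut(signal, background, [20.0, 40.0]))  # (40.0, 10.0)
```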
For each signal point, we choose a $\rm\,/\!\!\!\!{\it E_T}$ cut that optimizes the $s/\sqrt{b}$ at that point. To allow simple interpretation, we form an analytical expression for the $\rm\,/\!\!\!\!{\it E_T}$ cut as a function of model parameters. Because of large QCD and conversion backgrounds at low $\rm\,/\!\!\!\!{\it E_T}$, all limit setting is done above $\rm\,/\!\!\!\!{\it E_T} = 20\ GeV$. The results are shown in Table~\ref{table:result_total_metcut}. Kinematic plots of the SS region are in Figure~\ref{fig:ss_plots}. \begin{table}[h] \centering \mbox{ \begin{tabular}{|l|r|} \hline \multicolumn{2}{|l|}{CDF Run II Preliminary $6.0\ \textrm{fb}^{-1}$} \\ \multicolumn{2}{|l|}{SS \ $ \ell - \tau $} \\ \hline \hline Process & Events $\pm$ stat $\pm$ syst \\ \hline Z$\rightarrow \tau \tau $ & $10.2\pm 2.2\pm 0.8$ \\ Jet$\rightarrow \tau $ & $1152.7 \pm 15.2 \pm 283.1$ \\ Z$\rightarrow \mu \mu $ & $0.0 \pm 0.0 \pm 0.0$ \\ Z$\rightarrow e e $ & $0.0 \pm 0.0 \pm 0.0 $ \\ W$ \rightarrow \tau \nu $ & $96.9 \pm 6.4 \pm 9.5 $ \\ t$\bar{\textrm{t}} $ & $0.7 \pm 0.0 \pm 0.1 $ \\ Diboson & $4.3 \pm 0.2 \pm 0.4 $ \\ \hline Total & 1264.8 $\pm\ 16.6\pm 283.3 $\\ Data & 1116\\ \hline \end{tabular} } \caption{SS signal region used in limit setting, $\rm\,/\!\!\!\!{\it E_T} > 20 \ GeV $. Both Electron and Muon Channels.} \label{table:result_total_metcut} \end{table} \begin{figure}[h!] \begin{tabular}{|c|c|} \hline \includegraphics[width=8cm,clip=]{figs/blessedplots/ss/electron_et.eps} & \includegraphics[width=8cm,clip=]{figs/blessedplots/ss/electron_et_log.eps}\\ \hline \end{tabular} \caption{Plots of the SS Signal Region, Electron $E_T$ (left) and a log version (right). \label{fig:ss_plots}} \end{figure} \begin{figure}[h!]
\begin{tabular}{|c|c|} \hline \includegraphics[width=8cm,clip=]{figs/blessedplots/ss/electron_ht.eps} & \includegraphics[width=8cm,clip=]{figs/blessedplots/ss/electron_met.eps}\\ \hline \end{tabular} \caption{Plots of the SS Signal Region, Electron $H_T$ (left) and electron $\rm\,/\!\!\!\!{\it E_T}$ (right). \label{fig:ss_plots_ht}} \end{figure} \begin{figure}[h!] \begin{tabular}{|c|c|} \hline \includegraphics[width=8cm,clip=]{figs/blessedplots/ss/muon_m_pt.eps} & \includegraphics[width=8cm,clip=]{figs/blessedplots/ss/muon_m_pt_log.eps}\\ \hline \end{tabular} \caption{Plots of the SS Signal Region, Muon $P_T$ (left) and a log version (right). \label{fig:ss_plots_muon}} \end{figure} \begin{figure}[h!] \begin{tabular}{|c|c|} \hline \includegraphics[width=8cm,clip=]{figs/blessedplots/ss/muon_met.eps} & \includegraphics[width=8cm,clip=]{figs/blessedplots/ss/muon_t_clusteret.eps}\\ \hline \end{tabular} \caption{Plots of the SS Signal Region, Muon $\rm\,/\!\!\!\!{\it E_T}$ (left) and tau cluster $E_T$ (right). \label{fig:ss_plots_muon_met}} \end{figure} After the $\rm\,/\!\!\!\!{\it E_T}$ cut is applied at each point, we find SUSY production cross section limits and interpolate these contours in the M(Chargino) vs. M(Slepton) plane. The final results can be found in Figures \ref{fig:exp_gauge_2d} through \ref{fig:exp_gravity_lsp220_2d}. \begin{figure}[h!] \begin{tabular}{|c|c|} \hline \includegraphics[width=8cm,clip=]{figs/limits/gauge_c0.eps} & \includegraphics[width=8cm,clip=]{figs/limits/gauge_c1.eps} \\ \hline \end{tabular} \caption{Expected limits (pb) for Simplified Gauge Model for BR to taus of 100\% (left) and 33\% (right). \label{fig:exp_gauge_2d}} \end{figure} \begin{figure}[h!] \begin{tabular}{cc} \includegraphics[width=8cm,clip=]{figs/limits/gravity_lsp120_c0.eps} & \includegraphics[width=8cm,clip=]{figs/limits/gravity_lsp120_c1.eps} \\ \end{tabular} \caption{Expected limits (pb) for Simplified Gravity Model with LSP = 120 GeV for BR to taus of 100\% (left), 33\% (right).
\label{fig:exp_gravity_lsp120_2d}} \end{figure} \begin{figure}[h!] \begin{tabular}{cc} \includegraphics[width=8cm,clip=]{figs/limits/gravity_lsp220_c0.eps} & \includegraphics[width=8cm,clip=]{figs/limits/gravity_lsp220_c1.eps} \\ \end{tabular} \caption{Expected limits (pb) for Simplified Gravity Model with LSP = 220 GeV for BR to taus of 100\% (left), 33\% (right). \label{fig:exp_gravity_lsp220_2d}} \end{figure} \bigskip \clearpage
\section{Introduction} The Cosmic Ray (CR) spectrum has been measured to unprecedented accuracy between the energies of $\sim10^9\mbox{eV}$ and $\sim10^{20}\mbox{eV}$. Two features in the spectrum where there is a change in the CR spectral shape, the second Knee at the energy $\sim0.6\mbox{EeV}$ \citep[e.g.][]{2007APh....27...76A, 2012APh....36...31B, 2014APh....53..120B} and the Ankle at $\sim 5\mbox{EeV}$ \citep[e.g.][]{2000NuPhS..87..345W, 2005PhRvD..72h1301D}, have been considered as the transition from a Galactic dominated spectrum to an extragalactic dominated spectrum \citep[see e.g.][]{2007APh....27...61A, 2012APh....39..129A}. At the transition, a change in the composition is not unexpected \citep[e.g.][]{1993A&A...274..902S}. The High Resolution Fly's Eye (HiRes) collaboration reported \citep{2010PhRvL.104p1101A} that the CR composition is dominated by protons above $1.6\mbox{EeV}$. This result may indicate a transition below the Ankle. The Telescope Array collaboration (TA) measurements \citep{2012JPhCS.404a2037J} are consistent with those of HiRes for a predominately protonic composition. These results are consistent with an earlier report by HiRes \citep{2008PhRvL.100j1101A} that claims to observe the Greisen-Zatsepin-Kuzmin (GZK) cutoff. The GZK cutoff, predicted independently by \citet{1966PhRvL..16..748G} and \citet{1966JETPL...4...78Z}, is an upper limit of $E\sim 50\mbox{EeV}$ to the CR spectrum due to interactions of Ultra High Energy Cosmic Ray (UHECR) protons ($E\gtrsim 1\mbox{EeV}$) with Cosmic Microwave Background (CMB) photons via the pion photoproduction process. If UHECRs are dominated by heavier nuclei, the steepening of the spectrum is not as sharp as the GZK cutoff and it occurs at lower energies \citep{2013APh....41...73A, 2013APh....41...94A}. For this reason, the GZK cutoff, assuming that it is pinpointed with adequate energy resolution, is considered to be a signature of a proton dominated composition of UHECRs.
The Pierre Auger collaboration's results \citep{2010PhRvL.104i1101A, 2014NIMPA.742...22B} are inconsistent with those of HiRes and TA, and show a gradual increase in the average mass of UHECRs with energy. This raises the possibility that the highest energy UHECRs are not protons, but it is consistent with their being partly ($\sim 1/2$) protons, whereas mixed composition models predict that at the highest energies there is no significant number of light nuclei. Moreover, there are difficulties with fitting the UHECR spectrum with an admixture of heavy elements \citep{2014JCAP...10..020A}. In this paper we therefore consider the UHECRs to be protons, or to be half protons by composition, bearing in mind that it is only a hypothesis. The issue we consider is whether the secondary $\gamma$-rays that they produce are consistent with the extragalactic diffuse gamma ray background. If not, this can be taken as further evidence a) that they are mostly heavy nuclei (which typically requires them to have a harder spectrum than $E^{-2.0}$) \citep{2011A&A...535A..66D,2014JCAP...10..020A}, or b) that they are not extragalactic. UHECRs are widely thought to be accelerated at astrophysical shocks \citep[for a review see e.g.][]{1987PhR...154....1B}. The energy spectrum of the accelerated particles is assumed to be a power law $N(E) \propto E^{-\alpha}$ with a spectral index of $\alpha \gtrsim 2$. Observations confirm particles of energies $E>10^{20}\mbox{eV}$; even an $E=3\times 10^{20}\mbox{eV}$ event \citep{1993PhRvL..71.3401B} has been detected. Possible sources that might be able to accelerate CRs up to energies $E\gtrsim 10^{20}\mbox{eV}$ include, among others, Active Galactic Nuclei (AGNs) and radio galaxies \citep[see e.g.][]{1995ApJ...454...60N, 1997JPhG...23....1B, 2000PhR...327..109B, 2004RPPh...67.1663T}. UHECR protons propagating in space interact with CMB photons and initiate an electromagnetic cascade.
The result is observable diffuse $\gamma$-rays \citep{1972JPhA....5.1419W}. The two main interactions of protons with the CMB are pair production $p + \gamma_{_{CMB}} \rightarrow p + e^+ + e^-$, at energies $2.4 \mbox{EeV} \lesssim E_p \lesssim 50\mbox{EeV}$, and pion photoproduction $p + \gamma_{_{CMB}} \rightarrow n + \pi^+$, $p + \gamma_{_{CMB}} \rightarrow p + \pi^0$ at higher energies. The neutral pions ultimately decay into high energy photons while the positive pions decay into positrons and neutrinos. The electrons and positrons that emerge from the decays interact with the background photons via the inverse Compton process $e + \gamma_b \rightarrow e' + \gamma$. High energy photons are produced with a mean energy of \citep{1970RvMP...42..237B} $\varepsilon_{\gamma} = 4/3 (E_e/m_ec^2)^2 \varepsilon_b $, where $E_e$ is the energy of the incoming electron, $m_e$ is the rest mass of the electron, $c$ is the speed of light, and $\varepsilon_b$ is the energy of the background photon. The high energy photons interact with the Extragalactic Background Light (EBL) via the pair production process $\gamma + \gamma_{_{EBL}} \rightarrow e^+ + e^-$, producing a pair of an electron and a positron with energy of $E_e=\varepsilon_{\gamma}/2$ each. These two processes, inverse Compton and pair production, drive the development of an electromagnetic cascade. The cascade develops until the energy of the photons drops below the pair creation threshold $\varepsilon_{th} = (m_ec^2)^2/\varepsilon_b$. At this stage the photons stop interacting, while electrons continue losing energy and producing photons via inverse Compton. The cascade results in photons of energy below $\thicksim 1\mbox{TeV}$ that contribute to the isotropic diffuse $\gamma$-ray emission.
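The cascade kinematics quoted above can be made concrete with a short numerical sketch (our own illustration; the example energies and background-photon values are our assumptions, and the Thomson-limit inverse Compton formula is only valid when the upscattered photon carries a small fraction of the electron energy):

```python
M_E = 0.511e6     # electron rest energy m_e c^2, in eV
EPS_CMB = 6.3e-4  # typical CMB photon energy at z = 0, in eV (illustrative)
EPS_EBL = 1.0     # typical optical EBL photon energy, in eV (illustrative)

def ic_mean_photon_energy(E_e, eps_b):
    """Mean inverse-Compton photon energy, eps_gamma = 4/3 (E_e / m_e c^2)^2 eps_b."""
    return 4.0 / 3.0 * (E_e / M_E) ** 2 * eps_b

def pair_threshold(eps_b):
    """Pair-creation threshold eps_th = (m_e c^2)^2 / eps_b on photons of energy eps_b."""
    return M_E ** 2 / eps_b

# A ~TeV electron upscatters a CMB photon to a few GeV:
print(ic_mean_photon_energy(1e12, EPS_CMB))  # ~3e9 eV
# The EBL pair-creation threshold sits at a few hundred GeV, which is why the
# cascade emerges below roughly a TeV:
print(pair_threshold(EPS_EBL))               # ~2.6e11 eV
```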
The isotropic diffuse $\gamma$-ray emission, also known as the Extragalactic Gamma Ray Background (EGRB), was first detected by the SAS 2 satellite \citep{1977ApJ...217L...9F, 1978ApJ...222..833F} and interpreted as of extragalactic origin. Later on, \citet{1998ApJ...494..523S} confirmed the existence of the EGRB by analyzing the EGRET data. In this work, we fit the recently reported \citep{2015ApJ...799...86A} EGRB spectrum by the Fermi Large Area Telescope (Fermi LAT) collaboration between $100\mbox{MeV}$ and $820\mbox{GeV}$. \section{UHECR spectrum calculations} In this section we follow \citet{2006PhRvD..74d3005B} and calculate the UHECR spectrum under the assumptions of a pure proton composition, a homogeneous distribution of sources from redshift $z=0$ to the maximal redshift $z_{max}$, and continuous energy losses. For the energy loss rates of UHECR protons in interactions with the CMB we use the calculations made by \citet{2006PhRvD..74d3005B} as well. The differential equation describing the energy loss rate of a UHECR proton at redshift $z$ is \begin{equation} -\frac{dE}{dt} = E H(z) + b(E,t) \end{equation} where $E$ is the proton energy at epoch $t$, $b(E,t)$ is the energy loss rate of UHECR protons of energy $E$ at epoch $t$ due to pair production and pion photoproduction, and $H(z) = H_0(\Omega_m(1+z)^3 + \Omega_{\Lambda})^{1/2}$ is the Hubble parameter at redshift $z$ with the parameters $\Omega_m = 0.27$, $\Omega_{\Lambda} = 0.73$ and $H_0 = 70 \mbox{km} \ \mbox{sec}^{-1}\mbox{Mpc}^{-1}$. Changing variables using $dt/dz =-1/(H(z)(1+z))$ and integrating we obtain \begin{equation} \label{eq:gen_energy} E(E_{z_0},z) = E_{z_0} + \int_{z_0}^z dz' \frac{E}{1+z'} + \int_{z_0}^z dz' \frac{b(E,z')}{H(z')(1+z')} \end{equation} Equation (\ref{eq:gen_energy}) describes the energy of a CR proton at redshift $z$ whose energy at redshift $z=z_0$ is $E_{z_0}$. The first integral from the left describes the energy that a CR proton loses due to the expansion of the universe.
The second integral describes the energy that a CR proton loses due to its interactions with the CMB photons. Assuming a power law distribution for UHECR protons, the production rate of the particles at redshift $z$ per unit energy per unit comoving volume is \begin{equation} \label{eq:np_z} Q_p(E,z) = K_1F(z) E(E_0,z)^{-\alpha} \end{equation} where $K_1$ is constant, $\alpha$ is the power law index, $F(z) = \mbox{const}\times(1+z)^m$ is the density evolution of the UHECR sources as a function of the redshift, and $m$ is called the evolution index. We assume that the number of protons is conserved. The interactions of protons with the CMB always leave a proton in the final state, except for positive pion production. In this case, the outgoing neutron quickly beta decays to a proton, an electron, and an electron antineutrino. So eventually the total number of protons is conserved. Then, the number of particles per comoving volume at redshift $z$ is calculated as \begin{equation} \label{eq:num_particles} n_p(E_{z_0},z_0)dE_{z_0} = K_1\int_{z_0}^{z_{max}}\frac{dt}{dz} dzF(z) E^{-\alpha}dE \end{equation} The diffuse flux of UHECRs at the present time would be \begin{equation} \label{eq:cr_flux} J_p(E_0) = K_1\frac{c}{4\pi}\int_0^{z_{max}}\frac{dz}{H(z)(1+z)}F(z) \ E^{-\alpha} \ \frac{dE}{dE_0} \end{equation} The UHECR spectrum is normalized to the experimental data through $K_1$. The spectrum in equation (\ref{eq:cr_flux}) is determined by the four parameters: $\alpha$, $m$, $z_{max}$, and $E_{max}$. $E_{max}$ is the maximal energy that a CR proton can be accelerated to. So in equation (\ref{eq:cr_flux}) the energy of a CR particle is limited by $E(E_0,z)\leq E_{max}$. For each set of these four parameters, a different UHECR spectrum can be calculated and normalized to the experimental data.
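As a concrete (toy) illustration of equation (\ref{eq:gen_energy}), the propagated proton energy can be obtained by integrating $dE/dz$ numerically. In the sketch below (our own code) the interaction term $b$ is switched off, so the adiabatic solution $E = E_0(1+z)$ is recovered and serves as a check; with a realistic $b(E,z)$ the units of $b$ and $H$ would of course have to be made consistent.

```python
import math

def hubble(z, H0=70.0, om=0.27, ol=0.73):
    """H(z) = H0 sqrt(Om (1+z)^3 + OL), in km/s/Mpc as in the text."""
    return H0 * math.sqrt(om * (1 + z) ** 3 + ol)

def energy_at_z(E0, z, b=lambda E, z: 0.0, n_steps=200000):
    """Forward-Euler integration of dE/dz = E/(1+z) + b(E,z)/(H(z)(1+z)),
    giving the energy a proton must have at redshift z to arrive with E0 today."""
    E, dz = E0, z / n_steps
    for i in range(n_steps):
        zi = i * dz
        E += dz * (E / (1 + zi) + b(E, zi) / (hubble(zi) * (1 + zi)))
    return E

# Adiabatic-only check: E(z) = E0 (1+z), so a proton arriving with 1 EeV today
# had twice that energy at z = 1.
print(energy_at_z(1e18, 1.0) / 1e18)  # ~2.0
```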
\subsection{Energy density of photons resulting from UHECR interactions} The energy density of photons originating from UHECR interactions can be calculated by integrating over all energy losses of UHECR protons to the electromagnetic cascade. In the pion photoproduction process, UHECR protons lose energy to the electromagnetic cascade and to the production of neutrinos. The fraction of energy that goes into the electromagnetic cascade in this process is $\sim 0.6$ \citep{2001PhRvD..64i3010E}. We denote by $b_{em}$ the relative energy losses of UHECR protons to the electromagnetic cascade. The amount of energy lost to the electromagnetic cascade by a proton of energy $E$, propagating from redshift $z+dz$ to redshift $z$, is $b_{em}(E,z)dz/(H(z)(1+z))$. Multiplying this amount of energy by the number of protons of energy $E$ per unit volume at redshift $z$, given in equation (\ref{eq:num_particles}), and integrating over all proton energies, we get the total energy density of photons produced by UHECR protons at redshift $z$ \begin{equation} \label{eq:omega(z)} \omega_c(z)dz = dz\int_0^{E_{max}}dE\frac{ b_{em}(E,z)}{H(z)(1+z)}n_p(E,z)(1+z)^3 \end{equation} where $n_p(E,z)(1+z)^3$ is the proper density of the protons. Integrating over all redshifts and dividing by $(1+z)^4$ (as photons lose energy as $1/(1+z)$ and a unit volume expands by a factor of $(1+z)^3$) we get \begin{equation} \omega_c = \int_0^{E_{max}}dE\int_0^{z_{max}}dz\frac{ b_{em}(E,z)}{H(z)(1+z)^2}n_p(E,z) \label{eq:energy_density_0} \end{equation} This energy density depends on the UHECR parameters $\alpha$, $m$, $z_{max}$, and $E_{max}$.
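Equation (\ref{eq:energy_density_0}) is a two-dimensional integral that can be evaluated with a simple quadrature. The sketch below is our own code, with a deliberately artificial separable integrand chosen so the answer is known in closed form; it only shows the structure of the computation, not the physical $b_{em}$ and $n_p$:

```python
def omega_cascade(b_em, n_p, hubble, E_max, z_max, nE=400, nz=400):
    """Trapezoidal evaluation of
       omega_c = int_0^{E_max} dE int_0^{z_max} dz  b_em(E,z) n_p(E,z) / (H(z)(1+z)^2),
    with b_em, n_p and hubble supplied as callables."""
    dE, dz = E_max / nE, z_max / nz
    total = 0.0
    for i in range(nE + 1):
        E = i * dE
        wE = 0.5 if i in (0, nE) else 1.0  # trapezoid end-point weights
        for j in range(nz + 1):
            z = j * dz
            wz = 0.5 if j in (0, nz) else 1.0
            total += wE * wz * b_em(E, z) * n_p(E, z) / (hubble(z) * (1 + z) ** 2)
    return total * dE * dz

# Artificial check: b_em = E, n_p = 1, H = 1+z  =>  integrand E/(1+z)^3,
# whose integral over [0,1]x[0,1] is (1/2) * (1 - 1/4)/2 = 0.1875.
val = omega_cascade(lambda E, z: E, lambda E, z: 1.0, lambda z: 1.0 + z, 1.0, 1.0)
print(val)  # ~0.1875
```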
\section{The spectrum of $\gamma$-rays originating from UHECR interactions} \label{sec:EGRB} The generation rate of $\gamma$-ray photons originating from UHECR interactions at redshift $z$ per unit energy per unit volume is calculated as \begin{equation} Q_{\gamma}(\varepsilon,z) = K_2(z) \left \{ \begin{array}{lcl} \left(\frac{\varepsilon}{\varepsilon_{\chi}}\right)^{-3/2} & \mbox{for} & \varepsilon < \varepsilon_{\chi} \\ \left(\frac{\varepsilon}{\varepsilon_{\chi}}\right)^{-2} & \mbox{for} & \varepsilon_{\chi} \leq \varepsilon \leq \varepsilon_a \end{array}\right. \label{eq:gen_rate_photons} \end{equation} The spectral indices were found by \citet{1975Ap&SS..32..461B}. The normalization factor $K_2(z)$ is constant in energy but depends on the redshift. For convenience we write equation (\ref{eq:gen_rate_photons}) in the following way \begin{equation} Q_{\gamma}(\varepsilon,z) = K_2(z)\mathcal{Q}_{\gamma}(\varepsilon,z) \label{eq:Q_def} \end{equation} $\varepsilon_a = (m_ec^2)^2/\varepsilon_{_{EBL}}$ is the threshold energy for pair production by a photon scattering on the EBL. Suppose a photon of energy $\varepsilon_a$ interacts with the EBL. This photon will produce an electron and a positron of energy $\varepsilon_a/2$ each. This electron (or positron) will interact via inverse Compton with background photons, producing a photon of energy $\varepsilon_{\chi}= 1/3(\varepsilon_a/m_ec^2)^2\varepsilon_{b}$. So, photons of energies $\varepsilon_{\chi}\leq\varepsilon\leq \varepsilon_a$ do not interact with the background photons, but electrons continue to produce high energy photons of energies in this range. At energies below $\varepsilon_{\chi}$, photons are created by electrons of energies below $\varepsilon_a/2$.
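A direct transcription of the broken power law in equation (\ref{eq:gen_rate_photons}) reads as follows (our own sketch; the normalization $K_2$ and the break energies are left as inputs, and we set the spectrum to zero above $\varepsilon_a$):

```python
def Q_gamma(eps, eps_chi, eps_a, K2=1.0):
    """Cascade generation spectrum: ~ eps^-3/2 below eps_chi,
    ~ eps^-2 between eps_chi and eps_a, and zero above eps_a."""
    if eps <= 0:
        raise ValueError("photon energy must be positive")
    if eps < eps_chi:
        return K2 * (eps / eps_chi) ** -1.5
    if eps <= eps_a:
        return K2 * (eps / eps_chi) ** -2.0
    return 0.0

# The two branches match at the break eps = eps_chi (both equal K2):
print(Q_gamma(1.0, 1.0, 10.0))   # 1.0
print(Q_gamma(20.0, 1.0, 10.0))  # 0.0
```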
The $\gamma$-ray spectrum at the present time is calculated as \begin{equation} J_{\gamma}(\varepsilon) = \frac{c}{4\pi} \int_0^{z_{max}} \left(\frac{dt}{dz}\right)dz \frac{K_2(z) \mathcal{Q}_{tot}(\varepsilon (1+z),z)}{(1+z)^2}\exp\left(-\tau(\varepsilon,z)\right) \label{equ:gamma_flux} \end{equation} with \begin{equation} K_2(z) = \frac{\omega_c(z)H(z)(1+z)}{\int_0^{\infty}\mathcal{Q}_{tot} \varepsilon d\varepsilon \exp\left(-\tau\left(\varepsilon/(1+z),z\right)\right)} \label{eq:norm_photons} \end{equation} where $\mathcal{Q}_{tot}$ is the total contribution at redshift $z$ from all photons in the EBL spectrum. $\tau(\varepsilon,z)$ is the optical depth for pair production for a cascade photon propagating through the EBL from redshift $z$ to redshift $z=0$, observed at the present time with energy $\varepsilon$, given by \begin{equation} \tau_{\gamma \gamma}(\varepsilon,z) = \int_0^z dz' \frac{dl}{dz'} \int_{-1}^1 d\mu \frac{1-\mu}{2} \int_{E_{th}}^\infty d\varepsilon_b \ n \left( \varepsilon_b,z' \right) \ \sigma_{\gamma \gamma} \left(\varepsilon \left(1+z'\right),\varepsilon_b, \theta \right) \label{eq:tau} \end{equation} where $dl/dz = cdt/dz$ is the cosmological line element, $\theta$ is the angle between the interacting photons, $\mu=\cos(\theta)$, $\varepsilon_b$ is the energy of an EBL photon, and $n(\varepsilon_b,z)$ is the number of photons of energy $\varepsilon_b$ at redshift $z$ per unit volume per unit energy. 
$E_{th}$ is the threshold energy for the pair production process, given by \begin{equation} E_{th} = \frac{2(m_ec^2)^2}{\varepsilon (1+z) (1-\cos(\theta))} \end{equation} The pair production cross section $\sigma_{\gamma \gamma}$ is given by \citep{1955jauch, 1967PhRv..155.1404G} \begin{equation} \sigma_{\gamma \gamma}(\varepsilon_1,\varepsilon_2,\theta) = \frac{3\sigma_T}{16}(1-\beta^2) \left[2\beta(\beta^2-2) + (3-\beta^4) \ln \left(\frac{1+\beta}{1-\beta} \right) \right] \end{equation} where $\sigma_T$ is the Thomson cross section and \begin{equation} \beta = \sqrt{1 - \frac{2(m_ec^2)^2}{\varepsilon_1 \varepsilon_2 (1-\cos(\theta))}} \end{equation} For the calculation of the $\gamma$-ray spectrum at redshift $z$, we use the best-fit model of \citet{2004A&A...413..807K} for the EBL. \section{Fitting the Fermi LAT data} \label{sec:fit} In this section we fit the EGRB measured by Fermi LAT. $\gamma$-rays from UHECRs and from Star Forming Galaxies (SFGs) cannot explain the most energetic data points of Fermi LAT; an additional high energy $\gamma$-ray component is required. We consider here a possible $\gamma$-ray flux from Dark Matter (DM) decay as the highest energy contribution to the EGRB. \subsection{Components} For calculating the contribution from SFGs, we use the $\gamma$-ray spectrum of our Galaxy from \citet{2015ApJ...799...86A}. We use the sum of all modeled components in the right panel of Figure 4 (Model A) in \citet{2015ApJ...799...86A} and subtract the total EGB (Model A) in \citet{2015ApJ...799...86A}, Figure 8. We assume that each SFG in the universe produces this $\gamma$-ray spectrum. We assume that the mass density of SFGs in the universe is half of the mass density of the universe, i.e.\ $\sim 5\times 10^{-31}\ \mbox{g}/\mbox{cm}^{3}$. Further, we assume that SFGs evolve in time as the Star Formation Rate (SFR) (equation \ref{equ:SFR}).
Lastly, we assume that the $\gamma$-rays from the SFGs are attenuated by the EBL when propagating in space (see Section \ref{sec:EGRB} for details). Under these assumptions we can calculate the $\gamma$-ray spectrum from SFGs in the universe. As a high energy contribution to the EGRB, we use $\gamma$-rays from DM of mass $mc^2=3\mbox{TeV}$, decaying into bosons ($W^+W^-$). The spectrum we use is taken from Figure 7 in \citet{2012JCAP...10..043M}. The DM lifetime that was used by \citet{2012JCAP...10..043M} in Figure 7 is $\tau= 1.2\times 10^{27}\mbox{sec}$. We adjust this lifetime to optimize the fit. The reason for using the $W^+W^-$ channel is the improved fit it yields at high energies. Other decay channels, such as $\mbox{DM} \rightarrow \mu^+\mu^-$ or $\mbox{DM} \rightarrow b\bar{b}$, do not give such a good fit at high energies. As an example, we also show a fit using the $\mu^+\mu^-$ decay channel. We examine four possibilities for the evolution of UHECR sources: SFR, Gamma Ray Bursts (GRBs), type-1 AGNs, and BL Lacertae objects (BL Lacs). \newline The SFR function is taken from \citet{2008ApJ...683L...5Y} \begin{equation} F_{_{SFR}}(z) \propto \left\{ \begin{array}{lcl} (1+z)^{3.4} & \mbox{for} & z \leq 1 \\ (1+z)^{-0.3} & \mbox{for} & 1 < z \leq 4 \\ (1+z)^{-3.5} & \mbox{for} & 4 < z \end{array}\right. \label{equ:SFR} \end{equation} \newline As suggested by \citet{2007PhRvD..75h3004Y}, we take the GRB evolution function to be $F_{_{GRB}}(z) \propto (1+z)^{1.4}F_{_{SFR}}(z)$, so we get \begin{equation} F_{_{GRB}}(z) \propto \left\{ \begin{array}{lcl} (1+z)^{4.8} & \mbox{for} & z \leq 1 \\ (1+z)^{1.1} & \mbox{for} & 1 < z \leq 4 \\ (1+z)^{-2.1} & \mbox{for} & 4 < z \end{array}\right. \label{equ:GRB} \end{equation} \newline There is a significant difference in the evolution functions of different luminosity AGNs.
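The piecewise evolution functions of equations (\ref{equ:SFR}) and (\ref{equ:GRB}) can be sketched as follows; the constants are chosen here only to make each function continuous across its breaks, since the overall normalization is arbitrary.

```python
def F_SFR(z):
    """SFR evolution; arbitrary normalization, continuous at the breaks."""
    if z <= 1.0:
        return (1.0 + z) ** 3.4
    if z <= 4.0:
        return 2.0 ** 3.7 * (1.0 + z) ** -0.3            # matches (1+z)^3.4 at z = 1
    return 2.0 ** 3.7 * 5.0 ** 3.2 * (1.0 + z) ** -3.5    # matches the z <= 4 branch at z = 4

def F_GRB(z):
    """GRB evolution: an extra factor (1+z)^1.4 on top of the SFR."""
    return (1.0 + z) ** 1.4 * F_SFR(z)
```

Note that multiplying $F_{_{SFR}}$ by $(1+z)^{1.4}$ reproduces the exponents $4.8$, $1.1$, and $-2.1$ quoted in equation (\ref{equ:GRB}).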
\citet{2005A&A...441..417H} calculated the evolution functions of AGNs of four different luminosities in the soft X-ray band ($0.5\mbox{-}2\mbox{keV}$): Low Luminosity AGNs (LLAGNs) $L_X=10^{42.5} \mbox{erg sec}^{-1}$, Medium Low Luminosity AGNs (MLLAGNs) $L_X= 10^{43.5} \mbox{erg sec}^{-1}$, Medium High Luminosity AGNs (MHLAGNs) $L_X= 10^{44.5} \mbox{erg sec}^{-1}$, and High Luminosity AGNs (HLAGNs) $L_X= 10^{45.5} \mbox{erg sec}^{-1}$. LLAGNs are not considered capable of accelerating CRs to ultra high energies \citep[see e.g.][]{2004NJPh....6..140W}. We thus discuss in this work the possibility that MLLAGNs, MHLAGNs, or HLAGNs are the sources of UHECRs: \begin{equation} F_{_{MLLAGN}}(z) \propto \left\{ \begin{array}{lcl} (1+z)^{3.4} & \mbox{for} & z \leq 1.2 \\ 10^{0.32(1.2-z)} & \mbox{for} & 1.2 < z \end{array}\right. \label{equ:MLLAGN} \end{equation} \begin{equation} F_{_{MHLAGN}}(z) \propto \left\{ \begin{array}{lcl} (1+z)^{5} & \mbox{for} & z \leq 1.7 \\ 2.7^5 & \mbox{for} & 1.7 < z \leq 2.7 \\ 10^{0.43(2.7-z)} & \mbox{for} & 2.7 < z \end{array}\right. \label{equ:AGN} \end{equation} \begin{equation} F_{_{HLAGN}}(z) \propto \left\{ \begin{array}{lcl} (1+z)^{7.1} & \mbox{for} & z \leq 1.7 \\ 2.7^{7.1} & \mbox{for} & 1.7 < z \leq 2.7 \\ 10^{0.43(2.7-z)} & \mbox{for} & 2.7 < z \end{array}\right. \label{equ:HLAGN} \end{equation} \newline For BL Lacs we use two different evolution functions. Various studies have found BL Lacs to evolve very slowly, or not to evolve at all \citep[see e.g.][]{2002ApJ...566..181C,2007ApJ...662..182P}. Thus, the first evolution function we use corresponds to a no evolution scenario: \begin{equation} F_{_{NoEvBL}} \propto (1+z)^0 \label{equ:HSP} \end{equation} \newline The second function is related to a subclass of BL Lacs, the High Synchrotron Peaked (HSP) objects. \citet{2014ApJ...780...73A} used a set of 211 BL Lac objects detected by Fermi LAT during the first year of operation \citep{2010ApJ...715..429A}.
\citet{2014ApJ...780...73A} have found that the number density of HSP BL Lacs is strongly increasing with time (i.e. with decreasing $z$) and that the number density of the 211 BL Lacs sample is almost entirely driven by this population at $z\leq1$. For these objects, the evolution function can be described roughly by: \begin{equation} F_{_{HSP}} \propto (1+z)^{-6} \label{equ:BL_Lac_NoEvo} \end{equation} \newline Most of the UHECR spectra are calculated with two power law indices, defined as: \begin{equation} \alpha = \left\{ \begin{array}{rcl} \alpha_1 & \mbox{for} & E \leq E_{br} \\ \alpha_2 & \mbox{for} & E_{br} < E \end{array}\right. \end{equation} \subsubsection{Blazars} Blazars have been considered \citep[e.g.][]{2015ApJ...800L..27A,2015MNRAS.450.2404G} as possible sources of $\gamma$-rays that explain the EGRB measurements, since most of the resolved sources are blazars. We show that the joint $\gamma$-ray flux from blazars and from UHECRs evolving as AGNs, GRBs, or SFR is too high and in most cases violates the limits imposed by the Fermi LAT data even without any additional contribution from SFGs or DM. UHECRs that evolve as BL Lacs, on the other hand, leave enough room to fit the Fermi LAT data together with blazars. For the blazar contribution, we use the spectrum reported in \citet{2015ApJ...800L..27A}. This spectrum includes the resolved blazar sources. In order to obtain the unresolved blazar spectrum, we subtract the Fermi LAT resolved sources \citep{2015ApJ...799...86A} from the blazar spectrum. \subsection{Results} In figures \ref{fig:SFR_HiRes}-\ref{fig:BL_Lac_Blazars}, we present our results. Blazars are included only in figures \ref{fig:blazars} and \ref{fig:BL_Lac_Blazars}.
In figures \ref{fig:SFR_HiRes}, \ref{fig:SFR_Auger}, \ref{fig:GRB_HiRes}, \ref{fig:GRB_Auger}, \ref{fig:MLLAGN}, \ref{fig:MLLAGN_Auger}, \ref{fig:AGN_HiRes}, and \ref{fig:BL_Lac} we show UHECR spectra for different parameters, the corresponding $\gamma$-ray fluxes (thin lines), and the total flux from the three sources (thick lines): SFGs, UHECRs, and DM decay. In these figures, the dashed-dotted curves differ from the solid blue line in one parameter, in order to show the sensitivity of the spectra to the chosen parameters. We show cases where the UHECR spectra are normalized to the HiRes data as well as cases where the spectra are normalized to the Auger data. The Auger data have a systematic energy-scale uncertainty of $22\%$ \citep{2010PhRvL.104i1101A, 2014NIMPA.742...22B}. In the plots where we present the HiRes data, we also present the Auger data with energy increased by $16\%$. The recalibrated Auger data and the HiRes data agree well. In Figure \ref{fig:SFR_HiRes} we show fits to the Fermi LAT data with UHECRs that are adjusted to the HiRes data. The evolution model is SFR, except for one curve that corresponds to the MHLAGNs model. The DM in this plot is assumed to have a lifetime of $\tau=4.61\times10^{27}\mbox{sec}$. In the upper panels of this plot we show the UHECR spectra and in the lower panels we show the corresponding $\gamma$-ray fluxes (the same lines as in the upper panels), with the SFGs (magenta dashed line) and DM (violet dashed line) contributions, and the sum of the three components (thick lines). In the left panels we show the sensitivity of the spectra to the chosen parameters, as the dashed-dotted lines differ from the blue line in one parameter. The thick dashed blue line in the lower left panel is the same as the thick solid blue line but without the DM contribution. In addition, in the lower right panel of this figure we show how our estimate of the SFGs spectrum compares to the spectra obtained in other works.
The thin dotted black line and the thick dotted black line are SFGs spectra calculated by \citet{2014JCAP...09..043T} and by \citet{2014ApJ...786...40L} respectively. Our line is in good agreement with both of these spectra. The maximum $\gamma$-ray contribution from UHECRs evolving as SFR is obtained at redshift $z\sim 7$, and in fact the contribution to the $\gamma$-ray flux from sources with redshifts above $z=4$ is only a few percent of the total flux. Above $z\sim7$, the contribution is negligible. In Figure \ref{fig:EGRB_DM_limits} we show the total contribution from SFGs, UHECRs, and DM for different DM lifetimes. The blue line in this figure is the same as the blue line in Figure \ref{fig:SFR_HiRes}. As can be seen from the figure, for this set of UHECR parameters, the shortest DM lifetime allowed in order to respect the bounds set by the Fermi LAT data is $\tau=3.75\times 10^{27}\mbox{sec}$. DM with a shorter lifetime (such as the $2.5\times10^{27}\mbox{sec}$ presented in the figure) violates the limits imposed by the data. In the upper panel of Figure \ref{fig:shading} we show the uncertainties related to Figure \ref{fig:SFR_HiRes} for SFR evolution, with a maximum redshift of $z_{max}=7$ and a maximum acceleration energy of $E_{max}=10^{20}\mbox{eV}$. The blue band in Figure \ref{fig:shading} is the possible range of $\gamma$-ray fluxes related to these parameters. Its lower limit is the thin solid orange curve in the lower right panel of Figure \ref{fig:SFR_HiRes} and its upper limit is the thin solid blue curve in the lower left panel of Figure \ref{fig:SFR_HiRes}. The violet band represents the DM contribution for lifetime values in the range $3.8\mbox{-}5.5\times10^{27}\mbox{sec}$. The magenta dashed line is the SFGs contribution. The green band is the total of the three components: UHECRs, DM, and SFGs.
In order to show the importance of the DM contribution to the fit at high energies, we also show the yellow band, which is the sum of the contributions from UHECRs and SFGs (without DM). As can be seen, the high energy part of the Fermi LAT data cannot be fitted with only UHECRs and SFGs. In the lower panel of Figure \ref{fig:shading} we show a fit to the Fermi LAT data, using DM decay in the $\mu^+\mu^-$ channel, with mass $mc^2=30\mbox{TeV}$ and a lifetime of $\tau=1.1\times10^{28}\mbox{sec}$. The $\mbox{DM}\rightarrow\mu^+\mu^-$ spectrum was taken from \citet{2012JCAP...10..043M} and the lifetime was adjusted (\citet{2012JCAP...10..043M} used $\tau=2\times10^{27}\mbox{sec}$). The thin dashed green line in this figure is the $\mbox{DM}\rightarrow\mu^+\mu^-$ contribution. The thick dashed green line is the sum of $\mbox{DM}\rightarrow\mu^+\mu^-$, SFGs, and UHECRs (with the parameters written in the plot). We compare this fit to the one with the contribution from the $W^+W^-$ channel (thick violet dashed line). As can be seen, the $W^+W^-$ channel gives a much better fit at high energies. Using the $\mu^+\mu^-$ DM, we cannot fit all the high energy data points while keeping the $\gamma$-ray flux below the upper limit of the highest energy data point. \begin{figure} \centering \includegraphics[width=0.49\textwidth]{f1.eps} \includegraphics[width=0.49\textwidth]{f2.eps} \includegraphics[width=0.49\textwidth]{f3.eps} \includegraphics[width=0.49\textwidth]{f4.eps} \caption{\textbf{Upper panels}: UHECR spectra for different sets of parameters, normalized to the HiRes and the recalibrated Auger data (the parameters are written in the plots). All spectra correspond to the SFR evolution, except the dashed-dotted dark red curve, which corresponds to the MHLAGNs model. The dashed-dotted lines in the left panels differ from the solid blue line in one parameter.
\textbf{Lower panels}: The $\gamma$-ray fluxes corresponding to the UHECR spectra in the upper panels, shown with the same line styles. The SFGs contribution and the DM ($W^+W^-$ channel, $mc^2=3\mbox{TeV}$) contribution with a lifetime of $\tau=4.61\times10^{27}\mbox{sec}$ are shown. The totals of the three components (SFGs, UHECRs, and DM) are represented by thick lines (same color and style as the thin lines for the same parameters). The thick dashed blue line in the lower left panel is the same as the thick solid blue line but without the contribution of DM. The thin dotted black line and the thick dotted black line in the lower right panel are SFGs spectra calculated by \citet{2014JCAP...09..043T} and by \citet{2014ApJ...786...40L} respectively.} \label{fig:SFR_HiRes} \end{figure} \begin{figure} \centering \includegraphics[width=0.95\textwidth]{f5.eps} \caption{Total contribution of $\gamma$-rays from SFGs, UHECRs, and DM for different DM lifetimes. The solid blue line is the same as the solid blue line in Figure \ref{fig:SFR_HiRes}. Dashed violet, dashed black, and dashed green lines are the DM contributions for lifetimes of $4.61\times 10^{27}$, $3.75\times 10^{27}$, and $2.5 \times 10^{27}\mbox{sec}$ respectively. The dashed magenta line is the SFGs contribution. Dashed-dotted lines are the totals of SFGs, UHECRs, and DM.} \label{fig:EGRB_DM_limits} \end{figure} \begin{figure} \centering \includegraphics[width=0.85\textwidth]{f6.eps} \includegraphics[width=0.85\textwidth]{f7.eps} \caption{\textbf{Upper panel:} Uncertainties in the total $\gamma$-ray flux from SFGs, UHECRs evolving as SFR, and DM. The UHECR band (blue) corresponds to the area between the thin orange solid line and the thin blue solid line in the lower panels of Figure \ref{fig:SFR_HiRes}.
This band reflects the uncertainties in $\gamma$-ray spectra from UHECRs that evolve in time as SFR, have a maximal acceleration energy of $E_{max}=10^{20}\mbox{eV}$ with a maximal redshift of $z_{max}=7$, and are adjusted to the HiRes and recalibrated Auger data. The DM band (violet) corresponds to lifetimes of $3.8\mbox{-}5.5\times10^{27}\mbox{sec}$. The yellow band is the total $\gamma$-ray flux from UHECRs and SFGs (without the DM). The green band is the total of UHECRs, SFGs, and DM. \textbf{Lower Panel:} Comparison of a fit to the Fermi LAT data using the $W^+W^-$ decay channel (thick dashed violet line) to a fit using the $\mu^+\mu^-$ channel (thick green dashed line). The DM contributions are shown in thin magenta and green dashed lines. The contributions from UHECRs and SFGs are also shown.} \label{fig:shading} \end{figure} In Figure \ref{fig:SFR_Auger}, we normalize the SFR curves to the unrecalibrated Auger data. The DM lifetime in this fit is $\tau=3.87\times10^{27}\mbox{sec}$. The shortest DM lifetime possible in this case is $3.06\times 10^{27}\mbox{sec}$; a shorter lifetime would give too high a flux. The $\gamma$-ray fluxes here are lower than in Figure \ref{fig:SFR_HiRes}, since the Auger data have a lower flux than the HiRes data. This is why a higher flux at high energies, from DM decay, is needed in order to fit the Fermi LAT data. Here, as in Figure \ref{fig:SFR_HiRes}, the dashed-dotted lines differ from the solid blue line in one parameter and the thick lines are the sum of the three contributions (UHECRs, SFGs, and DM). Note that in the left panels, the line corresponding to a maximum acceleration energy of $E_{max}=10^{21}\mbox{eV}$ (dashed-dotted green) and the line corresponding to the spectral index of $\alpha_{1,2}=2.5,2.2$ (dashed-dotted dark red) have almost identical $\gamma$-ray spectra. This is why the total of the $\alpha_{1,2}=2.5,2.2$ line, SFGs, and DM cannot be seen behind the thick dashed-dotted green line.
In Figure \ref{fig:blazars} we show the total contribution of $\gamma$-rays from blazars and from UHECRs evolving as SFR. The unresolved blazar spectrum as extrapolated by \citet{2015ApJ...800L..27A} from the resolved blazar data is represented in Figure \ref{fig:blazars} by the thin black dashed line. We consider the maximum and minimum $\gamma$-ray fluxes from UHECRs that are adjusted to the HiRes data and from UHECRs that are adjusted to the Auger data. The maximum redshift is $z_{max}=7$ and the maximum acceleration energy is assumed to be $E_{max}=10^{20}\mbox{eV}$. In all cases, the total flux of $\gamma$-rays from blazars and from UHECRs evolving as SFR is too high. In the HiRes case, the sum of blazars and the maximum contribution from UHECRs already violates the limits imposed by the Fermi LAT data. The total with the minimum contribution from UHECRs reaches the edge of the Fermi uncertainties, leaving no room for high or low energy components. In the case of the Auger data, the totals are lower, but still too high, and it is very unlikely that a fit to the entire data set would be possible without exceeding the boundaries. Also in this figure we show the sum of all unresolved components (blazars, SFGs, and radio galaxies) in \citet{2015ApJ...800L..27A}, Figure 3 (resolved point sources have been removed), with an additional contribution from $3\mbox{TeV}$ $W^+W^-$ decay DM with a lifetime of $4.61\times10^{27}\mbox{sec}$. For $3\mbox{TeV}$ $W^+W^-$ DM, this lifetime is the minimum that can be added to the \citet{2015ApJ...800L..27A} model while respecting the limits set by the Fermi LAT data. In Figure \ref{fig:GRB_HiRes} we do the same as in Figure \ref{fig:SFR_HiRes}, but for the GRB evolution model. The lifetime of the DM is $\tau=5\times10^{27}\mbox{sec}$. As opposed to the SFR cases, in the GRB model the $\gamma$-rays violate the limits imposed by the Fermi LAT data, unless we cut off the maximum redshift.
This violation can be seen in Figure \ref{fig:GRB_HiRes} in sources with a maximum redshift of $z_{max}=4$. The $\gamma$-ray spectrum from UHECRs with a spectral index of $\alpha_{1,2}=2.34,2.22$, break energy $E_{br}=8\times10^{18}\mbox{eV}$, maximum acceleration energy $E_{max}=10^{20}\mbox{eV}$, and maximum redshift $z_{max}=4$ (thin dashed-dotted orange line) exceeds the boundaries set by Fermi LAT even without the contributions from SFGs and DM. The thin solid black line, corresponding to the parameters $\alpha_{1,2}=2,2.35$, $E_{br}=8\times10^{18}\mbox{eV}$, $E_{max}=10^{20}\mbox{eV}$, and $z_{max}=4$, does not violate the limits imposed by the data, but its sum with the contributions from SFGs and DM does. The main difference between these two lines is in the energy of the transition from Galactic to extragalactic CRs. While in the orange line case the transition is below the second Knee, at $\sim 0.3\mbox{EeV}$, in the black line case it is at the Ankle, at $\sim 5\mbox{EeV}$. For two UHECR spectra with the same $E_{max}$ and $z_{max}$, a lower transition energy means a higher flux of $\gamma$-rays. The $\gamma$-ray energy density corresponding to the orange curve is $37\%$ higher than the energy density corresponding to the black curve. In Figure \ref{fig:GRB_Auger}, the UHECR spectra are normalized to the unrecalibrated Auger data. The DM contribution has a lifetime of $\tau=4\times10^{27}\mbox{sec}$. The evolution model is GRB. Here, as in Figure \ref{fig:GRB_HiRes}, the maximum redshift of the UHECR sources has to be cut off in order to respect the boundaries set by the Fermi LAT data. In figures \ref{fig:MLLAGN} and \ref{fig:MLLAGN_Auger}, the UHECR sources are assumed to be MLLAGNs. In Figure \ref{fig:MLLAGN}, the spectra are normalized to the HiRes data and in Figure \ref{fig:MLLAGN_Auger} they are normalized to the Auger data.
The MLLAGN sources give almost the same $\gamma$-ray spectra as the SFR and there is no need for a cutoff in redshift. As in the SFR case, the $\gamma$-ray flux from both UHECRs and blazars is too high to fit the Fermi LAT data. In Figure \ref{fig:MLLAGN} the DM lifetime is $4.29\times 10^{27}\mbox{sec}$. In Figure \ref{fig:MLLAGN_Auger} the DM lifetime is $3.75\times 10^{27}\mbox{sec}$, and it can be as low as $2.86\times 10^{27}\mbox{sec}$ without violating the limits imposed by the data. In Figure \ref{fig:AGN_HiRes}, the curves correspond to the MHLAGNs model, except for one curve, which corresponds to the GRBs model. The spectra are normalized to the HiRes data. In the MHLAGNs case, the cutoff in redshift needs to be very low if we do not want to violate the limits imposed by the data. As can be seen in the figure, the limits are violated even for a $z_{max}=1.5$ spectrum. In the case of HLAGNs as UHECR sources, the $\gamma$-ray flux is higher and the required cutoff in redshift is even lower. It is unlikely, then, that HLAGNs are the sources of UHECRs, unless they are in the nearby universe. In Figure \ref{fig:BL_Lac}, the UHECRs are adjusted to the HiRes data and their assumed sources are non-evolving BL Lacs (left panels) and HSP BL Lacs (right panels). The HSP BL Lacs evolve as $(1+z)^m$ with a very negative $m$ and thus have both a) a very low secondary $\gamma$-ray contribution and b) a relatively high cutoff energy. It can be seen from the figure that a fit to the Fermi LAT data is marginal with $\gamma$-rays from UHECRs evolving as HSP, $\gamma$-rays from SFGs, and $\gamma$-rays from DM. The total flux at $(4-9)\times10^{10}\mbox{eV}$ may be slightly too low, while the total flux at high energies is the highest possible that still respects the bounds set by Fermi LAT. The non-evolving BL Lacs give a higher $\gamma$-ray flux than the HSPs.
Some sets of UHECR parameters (such as $\alpha_{1,2}=2.7,2.5$, $E_{br}=8\times10^{18}\mbox{eV}$, $E_{max}=10^{20}\mbox{eV}$, and $z_{max}=7$) provide better fits to the Fermi LAT data than other sets (such as $\alpha_{1,2}=2,2.4$, $E_{br}=8\times10^{18}\mbox{eV}$, $E_{max}=10^{20}\mbox{eV}$, and $z_{max}=7$). In Figure \ref{fig:BL_Lac_Blazars} we show the total $\gamma$-ray contribution from UHECRs evolving as BL Lacs (non-evolving BL Lacs in the upper panel and HSP BL Lacs in the lower panel), the sum of all unresolved components (blazars, SFGs, and radio galaxies), as extrapolated by \citet{2015ApJ...800L..27A} from the resolved blazar data, and DM. As opposed to the other evolution models (see Figure \ref{fig:blazars}), there is enough room for $\gamma$-rays originating both from blazars and from UHECRs that evolve as BL Lacs. The UHECR parameters in this figure are $\alpha_{1,2}=2.7,2.5$, $E_{br}=8\times10^{18}\mbox{eV}$, $E_{max}=10^{20}\mbox{eV}$, $z_{max}=7$ (upper panel) and $\alpha=2.7$, $E_{max}=10^{20}\mbox{eV}$, $z_{max}=7$ (lower panel). The unresolved part of the sum of all components in \citet{2015ApJ...800L..27A} is represented by the thick dashed black line, the sum of it and the $\gamma$-rays from UHECRs is represented by the thick dashed blue line, and the sum of it and the $\gamma$-rays from UHECRs and from DM is represented by the thick solid blue line. The DM lifetimes are $1.3\times10^{28}\mbox{sec}$ (non-evolving BL Lacs) and $5.9\times10^{27}\mbox{sec}$ (HSP BL Lacs). It can be seen from the figure that the DM contribution improves the fit at high energies, because the astrophysical sources we consider here are all too soft. However, blazars come closest to fitting the Fermi LAT data without DM.
\begin{figure} \centering \includegraphics[width=0.49\textwidth]{f8.eps} \includegraphics[width=0.49\textwidth]{f9.eps} \includegraphics[width=0.49\textwidth]{f10.eps} \includegraphics[width=0.49\textwidth]{f11.eps} \caption{The same as in Figure \ref{fig:SFR_HiRes}, but for different UHECR parameters and the UHECRs are normalized to the unrecalibrated Auger data. The evolution scenario here is SFR. The DM lifetime in this case is $\tau=3.87\times10^{27}\mbox{sec}$.} \label{fig:SFR_Auger} \end{figure} \begin{figure} \centering \includegraphics[width=1\textwidth]{f12.eps} \caption{The total flux of $\gamma$-rays originating from UHECRs evolving as SFR and from blazars. The thin dashed black line is the unresolved blazar spectrum as extrapolated by \citet{2015ApJ...800L..27A} from the resolved blazar data. The thick dotted black line is the sum of all unresolved components (blazars, SFGs, and radio galaxies) in \citet{2015ApJ...800L..27A}, Figure 3 (resolved point sources have been removed) with an additional contribution from $3\mbox{TeV}$ $W^+W^-$ decay DM with lifetime of $4.61\times10^{27}\mbox{sec}$. The thin solid orange and the thin solid blue lines represent the minimum and the maximum $\gamma$-ray fluxes available from UHECRs evolving as SFR, adjusted to the HiRes data, with $E_{max}=10^{20}\mbox{eV}$, and $z_{max}=7$. For the thin orange line: $\alpha_{1,2}=2,2.4$ and $E_{br}=8\times10^{18}\mbox{eV}$. For the thin blue line: $\alpha_{1,2}=2.5,2.3$ and $E_{br}=8\times10^{18}\mbox{eV}$. The thin dashed-dotted magenta and the thin dashed-dotted green lines represent the minimum and the maximum $\gamma$-ray fluxes available from UHECRs evolving as SFR, adjusted to the unrecalibrated Auger data, with $E_{max}=10^{20}\mbox{eV}$, and $z_{max}=7$. The thin dashed-dotted magenta line has the same parameters as the thin orange line. The thin dashed-dotted green line has the same parameters as the thin solid blue line but with $\alpha_2=2.4$. 
The thick lines are the totals of blazars and UHECRs.} \label{fig:blazars} \end{figure} \begin{figure} \centering \includegraphics[width=0.49\textwidth]{f13.eps} \includegraphics[width=0.49\textwidth]{f14.eps} \includegraphics[width=0.49\textwidth]{f15.eps} \includegraphics[width=0.49\textwidth]{f16.eps} \caption{The same as in Figure \ref{fig:SFR_HiRes}, but for UHECRs evolving as GRBs. The DM lifetime here is $\tau= 5\times10^{27}\mbox{sec}$.} \label{fig:GRB_HiRes} \end{figure} \begin{figure} \centering \includegraphics[width=0.49\textwidth]{f17.eps} \includegraphics[width=0.49\textwidth]{f18.eps} \includegraphics[width=0.49\textwidth]{f19.eps} \includegraphics[width=0.49\textwidth]{f20.eps} \caption{The same as in Figure \ref{fig:SFR_Auger}, but for UHECRs evolving as GRBs. The DM lifetime here is $\tau= 4\times10^{27}\mbox{sec}$.} \label{fig:GRB_Auger} \end{figure} \begin{figure} \centering \includegraphics[width=0.9\textwidth]{f21.eps} \includegraphics[width=0.9\textwidth]{f22.eps} \caption{UHECRs evolving as MLLAGNs and normalized to the HiRes data. \textbf{Upper Panel}: UHECR spectra. The dashed-dotted lines differ from the solid blue line in one parameter. \textbf{Lower Panel}: $\gamma$-ray fluxes corresponding to the UHECR spectra in the upper panel. Thick lines are the sum of the three components: SFGs, UHECRs, and DM. The thick dashed blue line is the same as the thick solid blue line but without DM contribution. The DM lifetime here is $\tau= 4.29\times10^{27}\mbox{sec}$.} \label{fig:MLLAGN} \end{figure} \begin{figure} \centering \includegraphics[width=0.9\textwidth]{f23.eps} \includegraphics[width=0.9\textwidth]{f24.eps} \caption{The same as in Figure \ref{fig:MLLAGN}, but for UHECR spectra that are normalized to the Auger data. 
The DM lifetime here is $\tau= 3.75\times10^{27}\mbox{sec}$.} \label{fig:MLLAGN_Auger} \end{figure} \begin{figure} \centering \includegraphics[width=0.49\textwidth]{f25.eps} \includegraphics[width=0.49\textwidth]{f26.eps} \includegraphics[width=0.49\textwidth]{f27.eps} \includegraphics[width=0.49\textwidth]{f28.eps} \caption{The same as in Figure \ref{fig:SFR_HiRes}, but the UHECRs are evolving as MHLAGNs (except one curve that corresponds to the GRB model). The DM lifetime here is $\tau= 5.45\times10^{27}\mbox{sec}$.} \label{fig:AGN_HiRes} \end{figure} \begin{figure} \centering \includegraphics[width=0.49\textwidth]{f29.eps} \includegraphics[width=0.49\textwidth]{f30.eps} \includegraphics[width=0.49\textwidth]{f31.eps} \includegraphics[width=0.49\textwidth]{f32.eps} \caption{The same as in Figure \ref{fig:SFR_HiRes}, but the UHECRs are evolving as BL Lacs. In the left panels are the non-evolving BL Lacs and in the right panels are the HSP BL Lacs, which evolve as $(1+z)^m$ with a very negative $m$. The DM lifetimes here are $\tau= 3.53\times10^{27}\mbox{sec}$ for the non-evolving BL Lacs and $\tau= 3.08\times10^{27}\mbox{sec}$ for the HSP BL Lacs.} \label{fig:BL_Lac} \end{figure} \begin{figure} \centering \includegraphics[width=0.8\textwidth]{f33.eps} \includegraphics[width=0.8\textwidth]{f34.eps} \caption{The total flux of $\gamma$-rays originating from UHECRs evolving as BL Lacs and from blazars. \textbf{Upper panel:} Non-evolving BL Lacs. The thin blue solid line corresponds to $\gamma$-rays originating from UHECRs with the parameters: $\alpha_{1,2}=2.7,2.5$, $E_{br}=8\times10^{18}\mbox{eV}$, $E_{max}=10^{20}\mbox{eV}$, and $z_{max}=7$. The thick dashed black line is the sum of all unresolved components (blazars, SFGs, and radio galaxies) in \citet{2015ApJ...800L..27A}, Figure 3 (resolved point sources have been removed). The thick dashed blue line is the sum of the dashed black line and the thin solid blue line.
The thick solid blue line is the sum of the dashed black line, the thin solid blue line, and $3\mbox{TeV}$ $W^+W^-$ decay DM with $\tau=1.3\times10^{28}\mbox{sec}$. \textbf{Lower panel:} The same as in the upper panel, but for HSP BL Lacs with $\alpha=2.7$, $E_{max}=10^{20}\mbox{eV}$, and $z_{max}=7$, and for DM with $\tau=5.9\times10^{27}\mbox{sec}$.} \label{fig:BL_Lac_Blazars} \end{figure} \section{Conclusions} GRB, AGN, and star formation were all more common in the past ($z\gtrsim 1$), with a comoving density varying as $(1+z)^m$ with $m\gtrsim 3$. Had UHECR sources been active at $z\sim 1$, photons from these backgrounds would pair produce on these UHECRs and the pairs would ultimately make secondary $\gamma$-radiation. $\gamma$-rays originating as primaries from SFGs and as secondaries from UHECRs, with an additional high energy contribution (e.g.\ from DM decay), can provide a good fit to the EGRB measured by Fermi LAT. We found that, among the evolution models SFR, GRBs, MLLAGNs, MHLAGNs, HLAGNs, and BL Lacs, the preferable ones for UHECR sources are SFR, MLLAGNs, and BL Lacs. $\gamma$-rays from UHECRs whose sources evolve in time as SFR, as MLLAGNs, or as BL Lacs do not violate the bounds set by the Fermi LAT measurements. A hypothetical class of UHECR sources that evolve as SFR or as MLLAGNs is in fact found to be quite robust, and most choices of free parameters (provided that the spectral index of the UHECRs is below $\sim2.5$) give good fits to both the EGRB and the UHECR spectra. This is consistent with the findings of other authors \citep{2015MNRAS.451..751G,2015PhRvD..92b1302G}. The secondary $\gamma$-rays from these UHECRs are softer than the diffuse high energy $\gamma$-ray background observed by the Fermi LAT, and these evolutionary models all give a better fit if a contribution from decaying DM particles with masses of $\sim 3$ TeV is included. In the case of BL Lacs whose comoving density does not decline or even increases with time (i.e.
with decreasing $z$), good fits to the Fermi LAT data could be achieved with a decaying DM contribution. However, the secondary $\gamma$-rays from these UHECRs are not a major contribution, so other astrophysical theories for the origin of the diffuse EGRB are not preempted by the hypothesis that UHECRs come from extragalactic sources with a non-declining comoving density. The DM contribution here appears much more significant than in the other (stronger) evolution scenarios, because the other major contributor, SFGs, is assumed to give a softer spectrum than the secondary $\gamma$-rays from UHECRs. However, the possibility that the entire EGRB can be explained by ultimately resolvable blazars has already been suggested, and, as these blazars become better resolved, the lower limits on the lifetime of TeV DM particles will rise. The power in BL Lac objects is about $8\cdot10^{37}\mbox{erg} \ \mbox{sec}^{-1} \ \mbox{Mpc}^{-3}$ \citep{2014ApJ...780...73A}, as compared to $1.5\cdot 10^{36}$, $ 1.7\cdot 10^{37}$, and $6.9\cdot 10^{37}$ for HLAGNs, MHLAGNs, and MLLAGNs respectively \citep{2005A&A...441..417H}, so it is not surprising that BL Lac objects would dominate UHECR production at present. However, other types of AGN were more active in the past, and their past contribution to the EGRB may have competed with that of BL Lac objects. One might then wonder whether the EGRB imposes a limit on their having been as efficient in producing UHECRs. Conceivably, there could be a physical reason why UHECR production becomes more efficient with cosmic time, e.g. because galactic magnetic fields grow and $E_{max}$ therefore increases. In this work we assumed a pure proton composition. A mixed composition produces a somewhat lower EGRB, but a good fit to the data requires a very hard injection spectrum with a spectral index of $\alpha \sim 1\mbox{-}1.6$ \citep{2014JCAP...10..020A}.
Such a hard spectrum might not be achieved within the standard acceleration mechanisms, which predict a steeper spectrum with $\alpha \geq 2$. Thus, the UHECR spectrum, to its highest energies, may have a large proton component. In fitting the Fermi LAT data, we made use of $\gamma$-rays from DM of mass $mc^2=3\mbox{TeV}$, decaying into $W^+W^-$. The lifetimes we used in the fits that did not include blazars were $\tau=3.08\mbox{-}5.45\times10^{27}\mbox{sec}$. In the fits where blazars were included (Figure \ref{fig:BL_Lac_Blazars}), longer lifetimes were needed: $\tau=5.9\times10^{27}\mbox{sec}$ and $\tau=1.3\times10^{28}\mbox{sec}$. The shortest lifetime that still provided a good fit in this work was $2.86\times 10^{27}\mbox{sec}$, and it was obtained in the MLLAGNs model for UHECRs normalized to the Auger data. A lower limit of $\sim6.2\times10^{26}\mbox{sec}$ was found recently by \citet{2015arXiv150707001D} for DM of the same properties. \citet{2015arXiv150707001D} assumed that DM decay produces an $e^+e^-$ flux that contributes to the AMS-02 experiment data \citep{2014PhRvL.113l1102A, 2014PhRvL.113v1102A, 2014PhRvL.113l1101A}. A stricter lower limit on the DM lifetime was obtained by \citet{2015JCAP...09..023G} by fitting the AMS-02 data on the anti-proton to proton ratio. The lower limit obtained (for DM of the same parameters as we used in this work) by \citet{2015JCAP...09..023G} is $\tau\sim1.3\times10^{27}\mbox{sec}$. In this work we obtained somewhat stricter lower limits on the DM lifetimes, especially in the fits that included blazars (Figure \ref{fig:BL_Lac_Blazars}). In Figure \ref{fig:DM_constraints} we show the lower limits on the DM lifetimes for the $W^+W^-$ channel obtained by \citet{2015arXiv150707001D} and \citet{2015JCAP...09..023G}, and the range of lifetimes possible for the different models in this work.
If the blazar contribution is as high as is claimed \citep[e.g.][]{2015ApJ...800L..27A,2015MNRAS.450.2404G}, then it would not leave enough room for producing extragalactic UHECRs with AGN-like, GRB-like or even SFR-like source evolution. The diffuse gamma-ray background that is expected for these evolutionary scenarios to accompany the UHECR production, when added to the blazar contribution, sticks out above the Fermi LAT measurements of the diffuse EGRB. By contrast, UHECRs that evolve in time as BL Lacs produce a $\gamma$-ray flux low enough to fit the Fermi LAT data together with blazars. The addition of a high energy component from DM decay improves the fit even here, but the DM plays less of a role, so the improvement should not be taken as strong evidence for the existence of the decaying DM component. Another, unforced, possibility is that the UHECRs, even at the highest energy, are Galactic \citep{2016GalacticUHECRs}. Here the energy requirements are greatly reduced and Galactic GRBs could easily provide enough energy \citep{1993ApJ...418..386L, 2011ApJ...738L..21E}. \section*{Acknowledgements} This work was supported by the Joan and Robert Arnow Chair of Theoretical Astrophysics, the Israel-U.S. Binational Science Foundation, and the Israel Science Foundation, including an ISF-UGC grant. We thank Dr. Noemi Globus for useful discussions. \begin{figure} \centering \includegraphics[width=0.9\textwidth]{f35.eps} \caption{Lower limits on the DM lifetimes for a $DM \rightarrow W^+W^-$ decay as a function of the DM mass, as obtained by \citet{2015arXiv150707001D} (black line) and by \citet{2015JCAP...09..023G} (red line). The vertical blue line marks the range of lifetimes obtained in the fits that do not include blazars. The vertical magenta line marks the DM lifetimes obtained in the fits that include blazars.} \label{fig:DM_constraints} \end{figure}
\section{Introduction} Strong correlations lead to unconventional behavior. This is in particular true for heavy fermion materials, where elements with partially filled f-shells contribute strongly localized $f$-electrons that are perceived as local moments by the conduction electrons of the s-, p- or d-shells. It is this interaction between conduction electrons and $f$-electrons which leads to the emergence of a variety of unconventional phases. Two questions which are still debated concern the nature of the quantum critical point, which is found between a magnetically ordered and a disordered phase in some of these materials \cite{SAC+00,SRIS01,GS03,Si06,GSS08,HOK+10,Si10,SS10,GSGB15}, and the nature of superconductivity in others \cite{ABB+89,TKI+98,SAM+01,BFH+02,YGD+03,HGdN+07,NSW+10,SAF+11,SSW+13,Ste14,ESM+16,SW16}. From a theoretical point of view these questions are often addressed within a paradigmatic model of heavy fermion systems, namely the Kondo lattice model (KLM) \cite{Don77}, whose phase diagram on a two-dimensional (2D) square lattice at $T=0$ is the topic of this paper. In this model, the coupling between localized f-spins and conduction electrons results at weak coupling in an effective RKKY interaction between the f-spins, which leads to an antiferromagnetic (AF) ordering. In the limit of strong coupling, the local spins are screened by the conduction electrons and local Kondo singlets between conduction electrons and f-spins are formed. At zero temperature, the AF order is destroyed at a critical coupling strength $J_c$, which amounts to a quantum critical point (QCP). For this QCP, two scenarios are discussed. In the case of a local QCP \cite{SRIS01,GS03,ZGS03} the breakdown of antiferromagnetic order coincides with the absence of Kondo screening (the so-called Kondo breakdown). If only the AF long-range order vanishes at the QCP, it is a QCP of Hertz-Millis-Moriya type \cite{Her76,Mil93,MT95,GSS08}.
The second important phenomenon treated in our paper is the emergence of unconventional (non-phonon-mediated) superconductivity (SC) encountered in certain heavy fermion systems. Such phases are often found in the vicinity of an antiferromagnetic QCP \cite{SS10}, but superconductivity due to magnetic spin fluctuations associated with other types of order has also been reported \cite{MGM+96,BFH+02,HGdN+07,HW15}. In some compounds, superconductivity and antiferromagnetism are even reported to coexist \cite{YKM+07,NSW+10}. Within the KLM the aforementioned RKKY interaction leads to an antiferromagnetic QCP, and numerical approaches have indeed recently reported d-wave SC close to this QCP \cite{ABF13,AFB14,WT15,Ots15}. The scope of this paper is to investigate these aspects by obtaining and characterizing the phase diagram of the KLM via the variational cluster approximation (VCA). Since it is a cluster method, it is able to take into account the $k$-dependence of the Green functions, in contrast to previous dynamical mean-field theory (DMFT) studies. In particular, we want to gain further insights into the realization of d-wave SC and the possibility of stabilizing s-wave SC as reported by Bodensiek \textit{et al.} \cite{BZV+13}. Where possible, the results will be compared to the scenarios obtained by other methods, such as dual fermions \cite{Ots15}, the dynamical cluster approximation (DCA) \cite{MA08,MBA10}, real-space DMFT (rDMFT) \cite{PK15} and (variational) Monte Carlo ((V)MC) approaches \cite{Ass99,ABF13,AFB14}. The paper is organized as follows: In the next section the VCA method, which we adopt to study the KLM, and the physical quantities of interest are introduced. Next, in Sec.~\ref{sec:phasedia} we discuss the phase diagram for fillings $0.8 \lesssim n \lesssim 1$, which is the main result of this paper. In Sec.~\ref{KLM_PM} we illustrate the results of the paramagnetic phase at and away from half-filling.
Section~\ref{KLM_AF} is then devoted to the investigation of magnetism in this region of the phase diagram. In particular, we discuss the ground state and the characteristic Fermi surfaces for three distinct regimes at weak, intermediate and strong coupling. In Sec.~\ref{KLM_SC} we study superconductivity for local s-wave and nodal d-wave order. Concerning s-wave SC, we find only mean-field-like solutions and no local superconductivity is induced by correlation effects. In contrast, robust d-wave SC is found and analyzed in detail. In Sec.~\ref{EOM}, we complement this numerical study by considering the equations of motion (EOM) for the pairing susceptibility. Finally, we discuss the interplay of d-wave SC and AF in Sec.~\ref{SCd+AF}. A summary and outlook in Sec.~\ref{Summary} conclude the paper. \section{Model and Technique} In this paper we study the Kondo-lattice model (KLM) \begin{equation} \label{eq:KLM} \begin{split} \mathcal{H} =& -\sum_{\langle i,j\rangle, \alpha} {t}_{ij}(\hat{c}^{\dagger}_{i\alpha} \hat{c}^{}_{j\alpha} + \mathrm{H.c.} ) - \mu \sum_i \hat{n}^{}_i \\&+ \frac{J}{2} \sum_{i,\alpha,\beta} \mathbf{\hat{S}}_i^{} \cdot \hat{c}_{i\alpha}^{\dagger} \mathbf{\sigma}_{\alpha\beta}^{} \hat{c}_{i\beta} \end{split} \end{equation} using the variational cluster approximation (VCA) at zero temperature. Here, the creation and annihilation operators of a conduction electron on site $i$ with spin $\sigma$ are denoted by $c_{i\sigma}^{(\dagger)}$ and the spin operators of the localized spin on site $i$ by $\mathbf{\hat{S}}_i$. The first two terms describe a tight-binding band with nearest-neighbor hopping ${t}_{ij}$ on a square lattice and a chemical potential $\mu$, which controls the filling of the system. Throughout the paper, we choose an isotropic hopping on the lattice, i.e. $t_{ij} = t$ for neighboring sites $i$ and $j$, and ${t}_{ij}=0$ otherwise.
The last term of Eq.~\eqref{eq:KLM} is the antiferromagnetic spin-spin Heisenberg interaction between localized spins ($f$-electrons) and conduction band electrons ($c$-electrons), where $\mathbf{\sigma}$ represents the vector of Pauli matrices. The VCA is a well-established cluster method for strongly correlated electron systems \cite{PAD03,DAH+04,SLMT05,AAPH06a,NAAvdL12,MY15}. It is based on the framework of self-energy functional theory \cite{Pot03a}, where the self-energy functional (SEF) \begin{equation} \Omega(\Sigma) = F(\Sigma) + \text{Tr} \ln\left( G^{-1}_0-\Sigma \right) \label{Eq:SEF1} \end{equation} is used to calculate the grand potential $\Omega$. Here, $G_0$ denotes the non-interacting Green function and $F(\Sigma)=\Phi^{\mathrm{LW}}(G(\Sigma)) - \text{Tr}(\Sigma G(\Sigma))$ is the Legendre-transformed Luttinger-Ward functional $\Phi^{\mathrm{LW}}$ \cite{LW60,Pot06a}. $\Omega$ is obtained at the stationary point of $\Omega(\Sigma)$ with respect to all possible self-energies, which means that $\delta\Omega(\Sigma)/\delta\Sigma=0$. Since the Luttinger-Ward functional is universal in the sense that it only depends on the interaction, it is the same for a reference system with identical interaction terms. Using such a reference system, one can rewrite the self-energy functional and obtain \begin{equation} \Omega(\Sigma^{\prime}) = \Omega^{\prime}(\Sigma^{\prime}) + \text{Tr}\ln{(G_0^{-1}-\Sigma^{\prime})}-\text{Tr}\ln{G^{\prime}}, \label{Eq:SEF2} \end{equation} where all quantities of the reference system are denoted by a prime and $G^{\prime}$ is the interacting Green function of the reference system. The approximation of VCA consists in choosing a tiling of the original system into identical clusters as a reference system. For the cluster system the grand potential $\Omega^\prime$, the self-energy $\Sigma^\prime$ and the interacting Green function $G^\prime$ can be calculated at zero temperature using exact diagonalization.
This approximation amounts to restricting the variational space of self-energies in Eq.~(\ref{Eq:SEF1}) to those that can be realized on a cluster of finite size. In practice, the one-body terms of the cluster are varied, which leads to a change in the cluster self-energies. They are determined such that the SEF is stationary with respect to the corresponding cluster self-energy. Out of the large set of one-body terms that could possibly be added to the cluster, one chooses a subset as variational parameters. A more detailed derivation and discussion of the technique can be found in Refs.~\onlinecite{Pot03,Pot03a,NST08,Pot12}. Although VCA has been used on a variety of purely electronic models so far, it turned out to be difficult to treat spin interactions directly \cite{FP14}${}^,$\footnote{Instead, Laubach \textit{et al.} showed in Ref.~\onlinecite{LJR+16} that it is possible to treat the Hubbard model in the limit $U\rightarrow\infty$ in order to study the Heisenberg model within VCA. However, the enlarged local Hilbert space of the Hubbard model as compared to the Heisenberg model prevented the authors from studying larger clusters.}. So far, in the context of heavy fermion systems, it has only been used to study the periodic Anderson model \cite{MY15}, which is related to the KLM in the limit of infinitely large Coulomb repulsion \cite{SN02}. In this work, however, we apply it to the KLM, Eq.~\eqref{eq:KLM}, which is an electronic system with pure, though local, spin interactions. Here, VCA is applied to the KLM in its standard form, i.e., quantities that enter the calculation of the SEF, like the non-interacting Green function of the lattice $G_0$ and the interacting Green function of the cluster $G^{\prime}$, are Green functions only of the electronic part of the KLM. In this way, the on-site spin interactions are included in the cluster self-energies.
However, propagation of the $f$-spins is not included and would require an extension of the technique, e.g. based on an adapted Luttinger-Ward functional \cite{CPR05}. In order to fix the density $n$ to a preset value, the grand potential (approximated by the self-energy functional) is Legendre transformed to the free energy \cite{BP10} and $\mu$ is used as a variational parameter. As further variational parameters we then choose the hopping on the cluster $t^{\prime}_{ij}$, the chemical potential of the reference system $\mu^{\prime}$ to ensure thermodynamic consistency \cite{AAPH06}, and the strengths of potential Weiss fields, further discussed below. In the main part of the paper, we use an isotropic hopping on the cluster, which is why we skip the site indices and refer to the variational parameter by $t^{\prime}$. The restriction to one isotropic variational parameter is justified as discussed in Appendix \ref{App:Hopping}. \subsection{Observables of Interest within VCA} The focus of VCA is the calculation of one-body expectation values. The electron density $n$ is obtained by computing \begin{equation} \langle{n}\rangle=-\left.\frac{\partial\Omega}{\partial\mu}\right|_{\mu^{\prime}=\mu^{\prime}_{\text{opt}}}=\frac{1}{N}\int_{\mathcal{C}<}\frac{dz}{2\pi i}\text{Tr}\left[ \delta_{RR^{\prime}}\delta_{\sigma\sigma^{\prime}}\mbf{G}(\mbf{k},z)\right], \end{equation} where $\mathbf{G}$ denotes the one-particle Green function and the contour of the integration surrounds the negative real frequency axis counterclockwise. The vector $\mathbf{R}$ runs over the cluster sites, $\sigma$ denotes the value of the spin, and $\mu^\prime_{\rm opt}$ is the value of the cluster chemical potential at the saddle point of the SEF. As usual in VCA, to allow for the possibility of long-range order on the cluster, one has to add a fictitious Weiss field to the cluster Hamiltonian only.
In the case of magnetism it takes the form \begin{equation} \mathcal{H}_{\text{AF}}=M\sum_{\mathbf{R}} e^{i\mathbf{Q}\cdot\mathbf{R}}(n_{R\uparrow}-n_{R\downarrow}), \end{equation} where the wave vector $\mathbf{Q}=(\pi,\pi)$ corresponds to N\'eel antiferromagnetism and $\mathbf{R}$ runs over the cluster sites.\footnote{Although it would be interesting to investigate spin density waves with incommensurate ordering wavevectors, within VCA one usually uses commensurate $\mathbf{Q}$ vectors. The reason is that the reference system is made up of identical clusters, which treat short-ranged spatial correlations inside the clusters exactly. A way to treat longer-ranged, incommensurate ordering vectors might consist in using supercluster constructions. } The strength $M$ of this field is then used as a variational parameter. Finally, the staggered magnetization of c-electrons can be obtained at the stationary point as \begin{equation} m^{c}=\frac{1}{N}\int_{\mathcal{C}<} \frac{dz}{2\pi i}\text{Tr} [e^{i\mathbf{Q}\cdot\mathbf{R}}(-1)^{\sigma}\mathbf{G}_{\mathbf{R}\sigma,\mathbf{R}\sigma}(z)]. \end{equation} Note that the staggered magnetization of f-spins is only available on the cluster, as the Green function of the system only contains excitations with respect to the conduction electrons, \begin{equation} m^{f}_{cl}=\frac{1}{N}\sum_{\mbf{R}}e^{i\mbf{Q}\cdot\mbf{R}}\langle n^f_{\mbf{R},\uparrow}-n^f_{\mbf{R},\downarrow}\rangle. \label{eq:struc} \end{equation} Superconductivity (SC) is captured in a similar way by adding the Weiss field \begin{equation} \mathcal{H}_{\text{SC}} = D\sum_{i,j}\left(\Delta_{ij} c_{i\uparrow}^{}c_{j\downarrow}^{} + \mathrm{H.c.} \right), \end{equation} where $D$ is the strength of the Weiss field and $\Delta_{ij}$ is the geometric factor adapted to the specific superconducting channel studied.
In particular, in this paper we focus on local s-wave SC with \begin{equation} \Delta_{ij}=\delta_{ij}, \end{equation} and on d-wave SC with \begin{equation} \Delta_{ij}=\begin{cases} +1: r_i-r_j=\pm e_x \\ -1: r_i-r_j=\pm e_y, \end{cases} \end{equation} in the case of $d_{x^2-y^2}$ SC, where $e_{x/y}$ denote unit vectors along the lattice directions. \begin{figure}[b!] \begin{center} \includegraphics[width=0.495\textwidth]{Fig1} \caption{Schematic phase diagram as obtained with VCA using a $3\times2$ cluster. At half-filling the system is an antiferromagnetic insulator for $J<J_c=1.94t$ and a paramagnetic insulator for $J>J_c$. Away from half-filling there are three phases: a paramagnetic metal close to half-filling at strong coupling, a d-wave superconductor, and a coexistence region of antiferromagnetism and d-wave superconductivity. Furthermore, when suppressing SC, within the AF phase at $J/t\sim 1.24$ an additional transition line between AF phases with different Fermi surface topology appears (not shown), as discussed in Sec.~\ref{KLM_AF_OffHF}.} \label{fig:PD_sketch} \end{center} \end{figure} \section{Phase Diagram} \label{sec:phasedia} The main findings of this work are summarized in the phase diagram sketched in Fig. \ref{fig:PD_sketch}. At half-filling the system is insulating for all coupling strengths $J/t$. Two different insulating phases are found, namely an antiferromagnetic (AF) phase at small couplings $J<J_c$, which is induced by the effective RKKY interaction, and a paramagnetic phase for larger couplings $J>J_c$ caused by Kondo screening. The transition between both phases is continuous, which renders the transition point $J_c$ a quantum critical point. As soon as the system is doped away from half-filling, three different phases occur. Close to half-filling and for strong couplings $J/t$, the system is in a paramagnetic metallic phase with a large Fermi surface.
It extends down to the critical coupling strength $J_c$, but when doping further away from half-filling, d-wave superconducting order builds up. At weak couplings $J<J_c$, doping the system results in a coexistence of antiferromagnetism and d-wave superconductivity. Inside this phase, within the VCA both phenomena show cooperative behavior for $J/t\gtrsim1.2$ and competitive behavior for smaller couplings, as discussed in Sec.~\ref{SCd+AF}. When keeping the ratio $J/t$ fixed, the more the system is doped away from half-filling, the more the antiferromagnetic order is reduced, until it finally vanishes continuously at a critical electron filling $n_c(J)$. There, the coexistence phase goes over to a pure d-wave superconductor. When only considering AF, at the aforementioned value $J/t \sim 1.2$ the Fermi surface changes its topology when keeping the filling $n$ constant and varying $J/t$, as discussed in Sec.~\ref{KLM_AF_OffHF}. At lower fillings within the AF phase, this is accompanied by a jump in the value of the AF order parameter, indicating a discontinuous transition, which becomes continuous at fillings $n \gtrsim 0.97$. In the forthcoming sections we will describe in detail how this phase diagram has been obtained. Furthermore, putative s-wave SC, pure antiferromagnetic and d-wave SC solutions, as well as their interplay will be discussed. \section{Paramagnetic phase of the KLM} \label{KLM_PM} In this section, we start our investigation of the phase diagram by treating the simplest possible approach in the VCA. It consists in only considering paramagnetic (PM) solutions, i.e., we neglect possible long-range order at this first stage. As VCA does not allow for phases with broken symmetries unless proper Weiss fields are added, it is possible to investigate such PM solutions at all coupling strengths irrespective of the 'true' ground state of the system.
As we will see later in Secs.~\ref{KLM_SC_d} and \ref{SCd+AF}, this phase is the correct physical solution at large coupling close to half-filling. However, even in other parameter regimes, the paramagnetic solution serves as a starting point and is always considered a reference solution: even if additional solutions with broken symmetries such as AF or SC occur, one needs to check whether their energy is lower than that of this normal-state solution without broken symmetry. \subsection{Kondo insulator at half-filling} \begin{figure}[b!] \begin{center} \includegraphics[width=0.495\textwidth]{Fig2}\newline \caption{Quasiparticle gap $\Delta_{\mathrm{qp}}$ at half-filling in the paramagnetic ($J>J_c$) and antiferromagnetic ($J<J_c$) insulator obtained by using a $3\times2$ cluster. The solutions for the pure paramagnet (PM) without addition of an AF Weiss field as well as the solution when including AF are shown. The quantity plotted amounts to $\mu({n=1.001})$ (for details see text). \label{fig_HF_AFPM_GAPS}} \end{center} \end{figure} The simplest type of insulator which can be encountered here is the (atomic) Kondo insulator at half-filling, which consists of local singlets between f-spins and conduction electrons. Here, we investigate the paramagnetic solution by doping the system away from half-filling and determine the spectral gap $\Delta\mu=\Delta_{\text{qp}}/2$. One characteristic of an insulator is a vanishing electronic compressibility $\chi_e = \frac{1}{n^2} \frac{\partial n}{\partial \mu}$. This corresponds to a plateau at half-filling in an $n$-versus-$\mu$ plot, which we investigate as a function of $J/t$ for the paramagnetic solution. Instead of calculating the electron filling as a function of the chemical potential, it is possible to obtain the quasiparticle gap by doping the system slightly away from half-filling. Using $\Delta_{\text{qp}}=\lim_{\epsilon\rightarrow0^+} \mu(n=1+\epsilon)$ it is possible to extract the quasiparticle gap more efficiently.
In order to determine the quasiparticle gap one then calculates the stationary points at $n=1+\epsilon$ as a function of $J/t$. The choice of a finite $\epsilon$ leads to an error in $\Delta_{\text{qp}}$, but for fillings close to half-filling (e.g. for $n=1.001$) the system is metallic and the chemical potential nearly coincides with the quasiparticle gap $\Delta_{\text{qp}}$ at half-filling. The so-obtained quasiparticle gap is shown in Fig. \ref{fig_HF_AFPM_GAPS}. As expected from the strong-coupling picture of a Kondo insulator, the quasiparticle gap grows linearly in $J/t$ for large coupling. However, at intermediate couplings $J/t\lesssim 1.8$ deviations from this behavior are found and the gap shrinks much faster. Finally, at $J/t\lesssim 0.8$ the gap vanishes and the system becomes metallic. Based on the exact, finite-size extrapolated QMC results of Ref.~\onlinecite{Ass99}, one expects to find a finite quasiparticle gap for all positive (antiferromagnetic) values of $J/t$ when considering the correct ground state for every value of $J/t$. This is an important aspect, as only for $J>J_c$ do we expect the PM solution to be the ground state. Below $J_c$ the system orders antiferromagnetically (see next section) and the gapless paramagnet is not the ground-state solution at $T=0$. \subsection{Paramagnetic phase away from half-filling} \label{PMDoped} \begin{figure}[t!] \begin{center} \includegraphics[width=0.495\textwidth]{Fig3} \caption{Local density of states [DOS, see Eq.~\eqref{eq:dos}] at large coupling $J/t=8.0$ close to half-filling ($n=0.95$). The two resonances are separated by roughly $3J/2$ and correspond to the break-up of Kondo singlets.} \label{fig_FS_PM_offHF} \end{center} \end{figure} In the previous section we saw that for strong coupling $J/t$ the ground state of the Kondo lattice model at half-filling is paramagnetic. More precisely, it consists of singlets between the local spins and the conduction band electrons.
At large $J/t$ all electrons are bound into such singlets and the ground state is hence insulating. This picture changes once the system is doped away from half-filling, as Kondo singlets are broken up and electrons can again move through the system. This induces a metal which naturally displays a Fermi surface (FS). We will discuss the behavior of the FS in the various regions of the phase diagram in detail in Sec.~\ref{KLM_KB}. An example of the PM solution discussed here is the right panel of Fig.~\ref{fig_3x2_AF_OffHF_FS}, which shows the Fermi surface at $J/t=3.0$ and an electron filling of $n=0.95$. Despite the low electron filling the Fermi surface is large, and measuring its area gives a value of $n_{\mathrm{FS}}\approx 1.95$. Such a large value is expected from the Friedel sum rule \cite{LA61,Lan66,Mar82}, as the local f-spins take part in the charge transport in the form of the Kondo singlets and therefore contribute with $n_{\mathrm{FS}}^f=1$ to the Fermi surface volume. The existence of the Kondo singlets away from half-filling is also clearly seen in the local density of states (DOS) \begin{equation} \label{eq:dos} \mathrm{DOS}(\omega) = -\frac{1}{\pi} \lim_{\eta\rightarrow 0^+}\mathrm{Im}~G(\omega+i\eta) , \end{equation} where $G(\omega)=\sum_k G(k, \omega)$, which is shown in Fig.~\ref{fig_FS_PM_offHF} for large coupling $J/t=8.0$ close to half-filling. It shows two peaks separated by roughly $3J/2$, i.e., twice the singlet energy $|E_{\rm singlet}| = 3J/4$. The peaks can hence be related to the break-up of Kondo singlets, which we expect from the Kondo insulator at strong couplings and half-filling \cite{Col15a}. Due to the finite contribution of the hopping term, the structures are broadened instead of forming two isolated sharp peaks. Since the system is doped away from the symmetry point at half-filling, the two signals are not perfectly symmetric with respect to the center of the gap between them.
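The singlet energy quoted above follows directly from the local part of Eq.~\eqref{eq:KLM}. Writing the conduction-electron spin operator as $\mathbf{\hat{s}}_i=\frac{1}{2}\sum_{\alpha\beta}\hat{c}^{\dagger}_{i\alpha}\mathbf{\sigma}^{}_{\alpha\beta}\hat{c}^{}_{i\beta}$, the interaction term becomes $J\,\mathbf{\hat{S}}_i\cdot\mathbf{\hat{s}}_i$, and for a singly occupied site \begin{equation} \mathbf{\hat{S}}_i\cdot\mathbf{\hat{s}}_i=\frac{1}{2}\left[\left(\mathbf{\hat{S}}_i+\mathbf{\hat{s}}_i\right)^2-\mathbf{\hat{S}}_i^2-\mathbf{\hat{s}}_i^2\right]=\begin{cases} -\frac{3}{4} & \text{(singlet)},\\ +\frac{1}{4} & \text{(triplet)}, \end{cases} \end{equation} so that $E_{\rm singlet}=-3J/4$. In the atomic limit $t=0$ at half-filling, adding or removing a conduction electron breaks exactly one singlet at a cost of $3J/4$, which places the two resonances at $\pm 3J/4$ around the chemical potential and also accounts for the asymptotically linear quasiparticle gap $\Delta_{\text{qp}}\rightarrow 3J/2$ at strong coupling discussed in the previous section.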
\section{Magnetic Phase Diagram} \label{KLM_AF} The competition between Kondo singlet formation and the RKKY interaction sets the stage for Doniach's phase diagram \cite{Don77}. One possibility to study this competition is the investigation of antiferromagnetic correlations, as the RKKY interaction mediates an antiferromagnetic ordering of the f-spins, which induces such an ordering for the conduction electrons, too. At half-filling, these antiferromagnetic correlations are expected to be suppressed from a certain $J_c/t$ on, as at this point Kondo singlet formation is energetically favorable and absorbs free electrons which could mediate the antiferromagnetic ordering. \subsection{The antiferromagnet at half-filling} \label{KLM_AF_HF} We start our investigation with the AF solution of the KLM at half-filling. In this case, it is possible to use clusters with up to eight physical sites ($4\times2$) when considering only the strength of the AF Weiss field and of the isotropic hopping on the cluster as a minimal set of variational parameters. This allows us to study a finite-size extrapolation of the critical coupling strength $J_c$ and to compare the extrapolated results at infinite cluster size to the exact QMC results of Ref.~\onlinecite{Ass99}. In the previous section the paramagnet was analyzed at half-filling and showed discrepancies with exact results at small and intermediate coupling strengths. This leads to deviating quasiparticle gaps at intermediate couplings and even to metallic behavior at weak coupling in the case of the $3\times2$ cluster. However, such a scenario is an artifact caused by suppressing AF order in the VCA: when including and optimizing the SEF for an additional AF Weiss field, at small couplings an AF phase with lower energy than the PM is clearly stabilized. The resulting AF ordering and the perfect nesting of the corresponding Fermi surface lead to an AF gap, even at very small $J$, see Fig.~\ref{fig_HF_AFPM_GAPS}.
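The perfect nesting invoked here is a property of the free tight-binding dispersion on the square lattice, which follows from the hopping term of Eq.~\eqref{eq:KLM}: \begin{equation} \varepsilon(\mathbf{k})=-2t\left(\cos k_x+\cos k_y\right), \qquad \varepsilon(\mathbf{k}+\mathbf{Q})=-\varepsilon(\mathbf{k}) \end{equation} for $\mathbf{Q}=(\pi,\pi)$. At half-filling the Fermi surface is given by $\varepsilon(\mathbf{k})=0$, so it is mapped onto itself by the ordering wave vector $\mathbf{Q}$, and an AF gap can open for arbitrarily weak coupling.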
Hence, an insulating phase is stabilized, which, however, is conceptually different from the paramagnetic insulator caused by spin singlet formation. The transition to the paramagnet happens at the critical coupling strength $J_c/t$, at which the magnetization vanishes and the SEF shows only one stationary point, namely the 'trivial' one corresponding to the previously discussed paramagnetic insulator. Therefore, VCA at half-filling correctly shows an insulator for all finite values of $J$. However, in contrast to the QMC results of Ref.~\onlinecite{Ass99}, the quasiparticle gap shows a peak when decreasing the coupling strength below $J_c/t$ instead of a monotonic decrease for all coupling strengths $J<J_c$. This feature has also been reported for other approximative techniques such as the dynamical cluster approximation \cite{MBA10}. We now turn to the AF order parameter. Figure \ref{fig_AF_m_clusters} shows in the top panel the staggered magnetization $m$ of the conduction band electrons. Note, however, that the staggered magnetization of the localized spins $m_f$ can be calculated on the cluster only, as the Green function contains only information on the conduction electrons. Hence, in contrast to $m$, $m_f$ is not calculated as an expectation value using the system's Green function. For this reason, we consider the staggered magnetization of the conduction band electrons $m$ instead. As the choice of sign is somewhat arbitrary (the self-energy functional is symmetric with respect to the Weiss field strength $M$), in subsequent plots we will only show positive values of $m$. The strength of $m$ increases at small couplings with the coupling strength and reaches a maximum at a value $J_{\mathrm{max}}/t$ which depends on the cluster size. For larger values of $J/t$ it decreases rapidly and vanishes smoothly at the critical coupling $J_c$, where the system stays insulating, but without magnetic order.
Compared to the DMFT results of Ref.~\onlinecite{Bod13}, where an NRG solver was used to precisely capture the low energy scales inherent to Kondo (lattice) systems, the magnetization curve has the same characteristics, but the absolute values differ. In particular, the value of the critical coupling strength $J_c/t$ is lowered. This can be understood by recalling that the electronic fluctuations inside the $4\times2$ cluster are a natural antagonist to AF order. When the cluster size is increased, the spatial extent of the fluctuations that enter the reference self-energy grows and thereby changes the variational space for the determination of stationary points. Hence, it is useful to systematically increase the cluster size and in this way determine the value of $J_c/t$ from a finite-size extrapolation. \begin{figure}[t!] \begin{center} \includegraphics[width=0.45\textwidth]{Fig4a} \includegraphics[width=0.45\textwidth]{Fig4b} \caption{Top: staggered magnetization $m$ for the different cluster sizes and geometries shown. As a reference, values of $J_c/t$ obtained with dual fermions \cite{Ots15}, quantum Monte Carlo \cite{Ass99}, and DMFT+NRG \cite{Bod13} are shown. Bottom: finite-size extrapolations of the value of the critical coupling $J_c/t$ to the infinite-size cluster limit. The blue (solid) line takes into account only the regularly shaped clusters, as indicated, while the green (dashed) line considers the remaining cluster types. The scaling factor equals the number of intra-cluster links divided by twice the number of cluster sites.} \label{fig_AF_m_clusters} \end{center} \end{figure} Such a finite-size scaling would be best controlled by using clusters of different sizes but the same geometry, e.g., quadratic clusters. Unfortunately, the accessible clusters are limited to small total sizes, and one also needs to resort to other cluster shapes, e.g., ladder geometries.
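The extrapolation based on the scaling factor defined in the caption of Fig.~\ref{fig_AF_m_clusters} can be sketched as follows; this is an illustrative reconstruction, not the actual analysis code, and the $J_c$ values below are placeholders rather than the measured cluster values:

```python
# Sketch: boundary/bulk scaling factor q = (intra-cluster links)/(2 * sites)
# for open L_x x L_y rectangular clusters, and a linear extrapolation of
# J_c/t in (1 - q) to the infinite-cluster limit q -> 1.
import numpy as np

def scaling_factor(lx: int, ly: int) -> float:
    """Links of an open lx x ly cluster divided by twice its number of sites."""
    links = ly * (lx - 1) + lx * (ly - 1)
    return links / (2 * lx * ly)

clusters = [(2, 2), (3, 2), (4, 2)]               # the "ladder" clusters
q = np.array([scaling_factor(*c) for c in clusters])  # 0.5, 7/12, 0.625

# Placeholder critical couplings for the three clusters (NOT the real data):
jc = np.array([0.9, 1.0, 1.1])

# Fit J_c = a*(1-q) + J_c(inf); the intercept at (1-q)=0 is the extrapolated
# infinite-cluster value (np.polyfit returns highest degree first).
a, jc_inf = np.polyfit(1.0 - q, jc, 1)
print(round(jc_inf, 3))
```

For these open $L_x\times L_y$ clusters the factor evaluates to $q=1-(L_x+L_y)/(2L_xL_y)$, i.e., $0.5$, $7/12$, and $0.625$ for the three ladder clusters, and approaches one in the infinite-cluster limit.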
In Ref.~\onlinecite{S08}, S\'en\'echal \textit{et al.} compared different scaling factors and tested the outcome of a finite-size extrapolation in the case of the Hubbard model on a two-dimensional square lattice, where larger clusters could be used due to the smaller local basis. The most promising scaling factor of this comparison, which is also applied in the lower panel of Fig.~\ref{fig_AF_m_clusters}, consists of taking the number of links on the cluster and dividing it by twice the number of cluster sites. It scales to one in the limit of infinite cluster size and takes the ratio of boundary and bulk into account. \newline When taking into account all available cluster sizes for the KLM, including quite pathological ones such as $1\times2$ and $CL4$ (see Fig.~\ref{fig_AF_m_clusters}), which include dangling sites, it is not surprising that the critical value $J_c/t$ scatters considerably with the cluster geometry. However, an extrapolation considering only the ``ladder'' clusters $2\times2$, $3\times2$ and $4\times2$ (blue fit in the lower panel of Fig.~\ref{fig_AF_m_clusters}) results in an infinite-cluster-size value of $J_c/t=1.48\pm0.28$, which is very close to the QMC result of Ref.~\onlinecite{Ass99}, $J_c/t = 1.45 \pm 0.05$ in the thermodynamic limit. The VCA results are therefore in agreement with this numerically exact approach within our estimated error bars. However, larger clusters are needed to confirm the result of the ladder-cluster extrapolation and to improve on this rough fit. Nevertheless, it is remarkable that the critical value obtained by this rough fit is much better than the one obtained using DMFT+NRG in Ref.~\onlinecite{BZV+13}. In a recent study, Ref.~\onlinecite{Ots15}, Otsuki obtained similar values for $J_c/t$ by applying the dual fermion approach to the KLM, further indicating that it is important for the physics of the KLM to allow for spatial fluctuations beyond the single-impurity ansatz used in DMFT. \begin{figure}[b!]
\includegraphics[width=0.45\textwidth]{Fig5} \caption{Staggered magnetization, expectation value of the spin-spin correlator $\langle\vec{S}\cdot\vec{s}\rangle=\partial \Omega/\partial J$ and local $f$-spin susceptibility $\chi^f_{\text{loc}}=\left.\partial^2\Omega/\partial h_f^2\right\vert_{h_f\rightarrow0}$ as a function of $J/t$ at half-filling using a $3\times2$ cluster. Black lines are fits of the spin-spin correlator, blue lines denote their derivatives. They have been fitted with second-order polynomials in the AF and PM regions up to $J_c/t\pm0.05$. The set of variational parameters includes $\{M_c,t^{\prime}\}$.} \label{fig_AF_GS_Sisi} \end{figure} \subsubsection{Further properties of the ground state at half-filling} After having identified the AF properties of the KLM at half-filling, we briefly present additional aspects of the ground state which will become relevant in the following sections. A detailed discussion can be found in Appendix \ref{App:Details_PM_HF}. Two quantities that are sensitive to Kondo singlet formation between $f$-spins and $c$-electrons are the local spin-spin correlator $\langle \vec{S_i}\cdot\vec{s_i}\rangle = \frac{\partial \Omega}{\partial J}$ and the local magnetic susceptibility of the $f$-spins $\chi^f_{\text{loc}}$, see Eq.~\eqref{Eq:Chiloc}. In Fig.~\ref{fig_AF_GS_Sisi} we show our results for both quantities obtained for a $3\times2$ cluster. Qualitatively, the results stay the same for the $4\times2$ cluster, so that we will base our discussion on the results for the smaller cluster. Also note that, due to the higher computational cost, for the $4\times2$ clusters only $M_c$ could be used as a variational parameter. In Appendix \ref{App:Hopping}, the influence of also including the cluster hopping $t^{\prime}$ in the set of variational parameters for the $3\times2$ clusters is discussed.
In a recent rDMFT study \cite{PK15} the spin-spin correlator $\langle S_+\cdot s_-\rangle$ was used to investigate the magnetic phase transitions in the doped KLM. It showed small discontinuities at first-order transitions and had no features in the case of second-order transitions. As far as the transition from a paramagnetic to an antiferromagnetic insulator is concerned, the correlator $\langle \vec{S_i}\cdot\vec{s_i}\rangle$ seems to be a good indicator for the transition point $J_c$, too: At the value of $J_c/t$ at which the staggered magnetization becomes finite, the gradient of $\langle \vec{S_i}\cdot\vec{s_i}\rangle$ jumps and its curvature changes sign. In the strong-coupling limit, the spin-spin correlator converges to the value of $-3/4$ per site which is expected for pure Kondo singlets. When approaching zero coupling, the correlator goes to zero as well, which means that electrons and f-spins are uncorrelated. To investigate Kondo singlet formation between $f$-spins and conduction electrons one often studies the local magnetic susceptibility $\chi_{\text{loc}}$ \cite{HK13}. Compared to the spin-spin correlator, which is finite for all non-zero $J/t$, the local susceptibility allows for a clearer identification of a putative Kondo breakdown via a divergence. In Fig.~\ref{fig_AF_GS_Sisi} we show results for the local magnetic susceptibility of the $f$-spins, which is calculated via the self-energy functional at the stationary point.\footnote{ Since we do not have access to the $f$-spins via the Green function, this is the only way to access this susceptibility. To calculate the total local magnetic susceptibility (including the contribution of the $c$ electrons) one needs to use spin-dependent variational parameters \cite{BP10}, which strongly increases the computational cost. As both the $f$-spins and the $c$-electrons are part of the Kondo singlets at strong coupling, it is sufficient to investigate the local susceptibility of one of its constituents.
} To calculate the local $f$-spin susceptibility, we added a small magnetic field term to the Hamiltonian acting locally on one of the $f$-spins: $h_fS_i^z$. For small field strengths $\vert h_f\vert\leq0.03$, we extracted\footnote{In practice, we determined $\chi$ from a polynomial fit of $\Omega(h_f)$ in the AF and from $\Omega(h_f)\sim \ln(\cosh(h_f))$ in the PM. The latter corresponds to the expected behavior of the magnetization in the case of a paramagnet, $m\sim \tanh(h_f)$.} the local susceptibility \begin{equation} \chi^f_{\text{loc}}=\left.\frac{\partial^2\Omega(h_f)}{\partial h_f^2}\right\vert_{h_f\rightarrow0}. \label{Eq:Chiloc} \end{equation} Just as for $\langle\vec{S}\cdot\vec{s}\rangle$, the local susceptibility allows for a clear identification of the onset of AF at $J_c$, but does not show any clear indications of changes within the AF phase. This indicates that there is no Kondo breakdown within the AF phase. The question remains whether the divergence seen in Fig.~\ref{fig_AF_GS_Sisi} could be indicative of Kondo breakdown. However, one generically expects such a divergence at a continuous transition from a PM to an AF phase, so that it remains difficult to address this point using the local susceptibility. Another quantity which shows the difference between the AF ground state and the PM alternative solution at half-filling for $J<J_c$ is the spin-dependent local DOS, as shown in Fig.~\ref{fig_3x2_AF_HF_DoS}. It is obtained by a staggered average over the sites inside the cluster, \begin{equation} \tilde{\rho}_{\sigma}(\omega)=\frac{1}{N} \sum_{\mathbf{R}} e^{i\mathbf{Q}\cdot\mathbf{R}}\rho_{\mathbf{R}\sigma}(\omega), \end{equation} where $\rho_{\mathbf{R}\sigma}(\omega)$ denotes the local density of states on cluster site $\mathbf{R}$. \begin{figure}[b!] \begin{center} \includegraphics[width=0.45\textwidth]{Fig6} \caption{Density of states (DOS) at half-filling for several $J/t$ in the antiferromagnetic phase.
The rest of the parameters are the same as in Fig.~\ref{fig_3x2_AF_HF_DoS_J1} and the scale of $\mathrm{DOS}(\omega)$ is kept the same for all panels.} \label{fig_3x2_AF_HF_DoS} \end{center} \end{figure} At small interaction strengths two pronounced main resonances (MR) at the band edges are visible, which are separated by the quasiparticle gap, and right next to them small side resonances (SR) can be found (see, e.g., the curves for $J/t=0.8$ in Fig.~\ref{fig_3x2_AF_HF_DoS}). If one limits the discussion to the filled part of the DOS for one of the sublattices, the main resonance is mainly made up of the majority-spin electrons and the side resonance predominantly of minority-spin electrons, although with a considerable admixture of majority-spin electrons. Increasing $J$ affects the main and side resonance differently: the weight of the main resonance first increases but then decreases again, with a maximum at $J/t\approx 1.2$, which coincides with the point of maximal staggered magnetization. For $J/t>1.2$, the main peak starts to split into two peaks $\mathrm{MR}_{1/2}$, which initially have similar weight. When approaching the critical interaction strength $J_c$, the weight of the peaks diminishes and redistributes between $\mathrm{MR}_1$ and $\mathrm{MR}_2$ such that the peak $\mathrm{MR}_1$ closer to the side resonances retains more weight. The side resonance moves to larger frequencies when $J/t$ is increased and gains a bit of weight. At the transition, the side resonance of the minority electrons and $\mathrm{MR}_1$ of the majority electrons merge into one new side resonance, and the resonance $\mathrm{MR}_2$ is made up equally of both spin-up and spin-down electrons. This reshuffling of weight when approaching the phase transition reflects the competition between the two different mechanisms that are responsible for the formation of a quasiparticle gap.
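The singlet interpretation of the side resonance rests on the standard two-spin algebra for the local exchange term $J\vec{S}_i\cdot\vec{s}_i$ (elementary, quoted here for reference):

```latex
\begin{equation*}
\vec{S}\cdot\vec{s}
=\tfrac{1}{2}\left[(\vec{S}+\vec{s})^2-\vec{S}^2-\vec{s}^2\right]
=\begin{cases}
-\tfrac{3}{4} & \text{singlet }(S_{\mathrm{tot}}=0),\\[2pt]
+\tfrac{1}{4} & \text{triplet }(S_{\mathrm{tot}}=1),
\end{cases}
\end{equation*}
```

so a local singlet gains the energy $3J/4$ relative to uncorrelated spins. This is the origin both of the strong-coupling value $-3/4$ of the spin-spin correlator and of the $3/4J$ energy scale attributed to the side resonance.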
In the antiferromagnetic region close to the transition there are already strong precursors of the paramagnetic insulator visible in the form of a notable contribution of minority-spin electrons to the peak at the gap edges. In contrast, for $J/t \lesssim 1.2$ this contribution is not clearly visible, and the gap is stabilized by AF fluctuations. The energy of the side resonance roughly agrees with the value of $3/4J$. When identifying this resonance with a Kondo singlet peak, it is thereby possible to trace the existence of Kondo singlets back to the weak-interaction regime deep in the antiferromagnetic phase. In other words, the antiferromagnetic order is far from perfect, and the ground state in the antiferromagnetic region can be interpreted as still containing a finite admixture of Kondo singlets. This is in agreement with the behavior of the local susceptibility, which in Fig.~\ref{fig_AF_GS_Sisi} does not indicate any change within the AF phase, and will be discussed in more detail in Sec.~\ref{KLM_KB}. \subsection{Doping the antiferromagnet} \label{KLM_AF_OffHF} \begin{figure}[t!] \begin{center} \includegraphics[width=0.45\textwidth]{Fig7} \caption{Comparison of the PM and the AF solution at $J=1.6t < J_c^{\mathrm{AF}}$ as a function of electron filling $n$. For $n> n_c\approx0.91$ (vertical dashed line in the plot) the AF solution is energetically preferred (see upper panel). The lower panel shows the value of the staggered magnetization of the antiferromagnetic solution. For $n<n_c$ the staggered magnetization $m^c$ is zero and the solution is therefore paramagnetic. The dashed lines are fits to the data points and only a guide to the eye.} \label{fig_AFvsPM_Off_HF} \end{center} \end{figure} The situation in the half-filled Kondo lattice model is quite special: in the ground state, every local moment can be screened in the large-$J$ limit with exactly one electron to form a singlet at each site.
Removing conduction electrons from the half-filled system creates unpaired local moments which can be interpreted as spinful c-holes. In the paramagnetic case, Kondo singlets can still be formed for sufficiently large Kondo coupling, although the electrons gain mobility due to vacancies in the conduction band (see Sec.~\ref{PMDoped}). For smaller values of $J$, doping the system is also interesting, as the number of electrons which mediate the antiferromagnetic order of the local spins via the RKKY interaction is then reduced. Naively, one would hence expect that the antiferromagnetic correlations diminish when the system gets doped. In Fig.~\ref{fig_AFvsPM_Off_HF} the energies of the AF and PM solutions are compared as a function of electron filling. It can be seen that the AF solution exists down to a critical electron filling of $n_c\approx0.915$. When approaching this filling from above, the staggered magnetization of the AF solution vanishes continuously and the corresponding energy approaches that of the PM solution. As its energy is always lower than that of the PM, the AF metallic phase is realized close to half-filling and goes over continuously to a PM phase when approaching $n_c$. \newline In the DMFT study of Ref.~\onlinecite{OKK09}, Otsuki \textit{et al.} identified an AF ground state at $J=0.2W$ (with $W$ the bandwidth, which in the 2D case treated here is $W=8t$) down to fillings $n_c(J=0.2W) \approx 0.9$. In our VCA approach, we obtain a similar result at $J/t \approx 1.6$, as shown in Fig.~\ref{fig_3x2_AF_OffHF_PD}, which displays the magnetic phase diagram close to half-filling, obtained using a $3\times2$ cluster. \begin{figure}[t!] \begin{center} \includegraphics[width=0.495\textwidth]{Fig8} \caption{Staggered magnetization of the AF solution as a function of electron filling $n$ and $J/t$ using a $3\times2$ cluster with variational parameters $\mu,\mu^{\prime},t^{\prime}$ and $M_c$.
Circles denote data points, their filling is color-coded and shows the value of the staggered magnetization $m$. The dashed lines denote transitions. The one on the left, labeled by ``1st'', indicates a discontinuity of the value of $m$ when keeping $n$ fixed and varying $J/t$. At fillings $n\gtrsim0.97$ this becomes continuous (not shown). The dashed line to the right, labeled by ``2nd'', indicates the continuous phase transition between AF and PM metal.} \label{fig_3x2_AF_OffHF_PD} \end{center} \end{figure} In this figure, the gray regions denote PM solutions, while the AF region is shown in blue, with the staggered magnetization color-coded. Aside from the magnetic properties of the system, it is also interesting to investigate the metal-insulator transition as a function of electron density $n$. In the half-filled case it was shown in the previous subsection that the AF insulator goes over to a Kondo insulator at $J_c$. It was also shown that the PM (Kondo) insulator turns metallic once the system is doped away from half-filling. \newline Here, the ``new'' transitions are the one from AF insulator to AF metal for $J<J_c$ at $n=1\rightarrow 1-\epsilon$ and the one from AF to PM metal for $J<J_c$ at $n_{c,\mathrm{AF}}$. Note that so far we have been neglecting superconductivity. Therefore, one needs to test for stability against SC order, which is further discussed in Sec.~\ref{SCd+AF}. \begin{figure}[b!] \begin{center} \includegraphics[width=0.45\textwidth]{Fig9} \caption{Density of states (DOS) for $J/t=1.6$ using the $3\times2$ cluster with $\{\mu,\mu^{\prime},t^{\prime},M_c\}$ as variational parameters. The electron fillings $0.91< n <1$ correspond to the AF metal phase. The panels show fillings close to the AF insulator ($n=0.99$), close to the PM metal ($n=0.92$) and right in the middle of the phase. The results obtained at $n=0.85$ are deep in the PM metal phase.
The DOS has been obtained on a $k$-grid of $100\times100$ points and an artificial broadening of $\eta=0.05$ has been added.} \label{fig_3x2_AF_OffHF_DoS} \end{center} \end{figure} In order to make the difference between AF and PM metals more visible, Fig.~\ref{fig_3x2_AF_OffHF_DoS} shows the DOS at $J/t=1.6$ for different electron densities $n$. In contrast to Fig.~\ref{fig_3x2_AF_HF_DoS}, where the transition from AF to PM was studied at half-filling as a function of coupling strength, the spectrum is not particle-hole symmetric. Still, the redistribution of electron density between up- and down-electrons when approaching the magnetic order-disorder transition is similar. The relative position of the three peaks does not change much when reducing the electron filling, but their size changes from the pronounced two-main-resonance structure known from the half-filled case to a roughly equally sized three-peak structure close to the transition. Already at $n=0.99$ the main resonances are not entirely made up of the majority spins, but also carry a little weight from the minority spins. The difference between the position of the third peak and the middle of the gap at positive frequencies is roughly $1.15$ at $n=0.99$, which is close to the value of $3/4J=1.2$. When doping further away from half-filling the relative peak position changes and differs more and more from $3/4J$. This is somewhat expected, as the value of $3/4J$ was obtained by assuming a perfect Kondo insulator. Once electrons are removed, the singlets become mobile even at strong coupling and the resulting dispersion changes. Finally, in the paramagnetic region electrons of both spins contribute equally to the density of states at each energy. \newline This redistribution has consequences for the composition of the Fermi surface, which is discussed in the following subsection.
\subsubsection{Changing Fermi surface topology} \label{KLM_KB} Once the system is doped away from half-filling it is metallic and hence possesses a Fermi surface. Deep in the paramagnetic phase (strong coupling $J$), mobile Kondo singlets form, which leads to a large Fermi surface (see Sec.~\ref{PMDoped}). It is interesting to consider the changes of the FS at the onset of AF order and at the transition within the AF phase. Since the FS is closely related to the spectral function $A(\mathbf{k},\omega)=-\frac{1}{\pi}\lim_{\eta\rightarrow0^+}\mathrm{Im}~G(\mathbf{k},\omega+i\eta)$, we will mainly discuss this quantity here, as it is directly accessible by the VCA. In Appendix~\ref{App:LSR} we will discuss the results for the FS obtained from further analyzing $A(\mathbf{k},\omega)$ and see that they reflect the same behavior. \newline \begin{figure}[t!] \begin{center} \includegraphics[width=0.45\textwidth]{Fig10} \caption{Characteristic spectral function $A(\mathbf{k},i\eta)$ for the three regimes that can be identified in the AF solution off half-filling, here shown at filling $n=0.90$ with broadening $\eta=0.01$: PM metal with a large Fermi surface (d) and AF metal with a small Fermi surface (a). At intermediate coupling the Fermi surface volume jumps to a much higher value (b), but the system stays AF (c).} \label{fig_3x2_AF_OffHF_FS} \end{center} \end{figure} Figure \ref{fig_3x2_AF_OffHF_FS} shows characteristic results for $A(\mathbf{k}, \omega = 0+i \eta)$ (with broadening $\eta = 0.01$) at the Fermi energy close to half-filling ($n=0.90$) for three different regimes. Aside from the already described PM case, one can identify two distinct phases by looking at the spectral function of the resulting metallic solutions. In the AF phase the Brillouin zone halves, which is indicated in the figure by a dashed line.
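For orientation, the distinction between small and large Fermi surfaces can be quantified by the standard Luttinger-count bookkeeping for the KLM (a textbook result, not derived in this work):

```latex
\begin{equation*}
\frac{V_{\mathrm{FS}}^{\mathrm{small}}}{V_{\mathrm{BZ}}}=\frac{n}{2},
\qquad
\frac{V_{\mathrm{FS}}^{\mathrm{large}}}{V_{\mathrm{BZ}}}
=\frac{n+1}{2}\;(\mathrm{mod}\ 1),
\end{equation*}
```

per spin direction: at $n=0.90$ this amounts to $45\%$ of the Brillouin zone if only the conduction electrons contribute, and $95\%$ if the $f$-spins are counted in the Fermi volume as well.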
At small coupling strengths, $A(\mathbf{k},i \eta)$ shows a closed structure, and when focusing on the inner sheet, it resembles the small closed Fermi surface that is found in the weak-coupling region by other numerical techniques such as dual fermions \cite{Ots15} or rDMFT \cite{PK15}. The structures in $A(\mathbf{k},i \eta)$ at strong and weak coupling are essentially those found before in the emission spectrum by VMC \cite{ABF13} and in the FS by DCA \cite{MBA10} studies. For larger coupling strengths close to the transition to the PM, the surface topology is different [Figs.~\ref{fig_3x2_AF_OffHF_FS}(b) and \ref{fig_3x2_AF_OffHF_FS}(c)]. Small closed structures appear, but the doubling of the surface that corresponds to AF long-range order still persists. Figure~\ref{fig_3x2_AF_OffHF_FS}(b) shows the drastic change in topology at $J_{c,\mathrm{FS}}/t=1.24$ from one open sheet (AF${}_2$) to two closed sheets in AF${}_1$. We interpret the fact that the structures in AF${}_2$ are not symmetric in the $x$- and $y$-directions as a finite-size effect of the asymmetric $3\times2$ cluster used in Fig.~\ref{fig_3x2_AF_OffHF_FS}. Since the correlation length diverges at the critical point, it is not surprising that it exceeds the cluster size in its vicinity, which then causes finite-size effects. However, the AF${}_2$ phase is stable for all three cluster sizes, which strongly suggests that it persists for larger clusters. An intermediate AF phase was also observed within VMC and DCA \cite{ABF13,MBA10}, although with a different topology. Still, in those studies the FS also contained closed structures in the form of hole pockets. Since the topology in the AF${}_2$ phase still changes for different cluster sizes, a more systematic study of this phase using larger cluster sizes would be useful.
It is this very region close to the transition to the paramagnet where the spin-spin correlator $\langle \vec{S}_i \cdot \vec{s}_i \rangle$ at half-filling suggests a considerable admixture of Kondo singlets in the ground state. Figure~\ref{fig_AF_GS_Sisi_n095}(a) shows the staggered magnetization, the spin-spin correlator and the local magnetic susceptibility of the $f$-spins for a filling of $n=0.95$. Compared to the half-filled case, the antiferromagnetic phase is divided into two regions with the aforementioned different (Fermi surface) topologies. The transition between both AF phases at $J_{c,\mathrm{FS}}/t=1.24$ is discontinuous, as both $m$ and $\langle\vec{S} \cdot \vec{s}\rangle$ jump at the transition. For smaller electron densities the transition between the AF phases remains discontinuous down to an electron density of $n=0.8$. There, the second AF solution $\mathrm{AF}_2$ vanishes, and when increasing the coupling strength in the AF metal $\mathrm{AF}_1$, the system jumps directly to the PM metal solution. Therefore, the transition between AF metal and PM metal is of first order for $n\lesssim 0.8$. When approaching half-filling, the jump in the staggered magnetization reduces until the transition between both AF phases becomes continuous at $n=0.97$. As at half-filling, the local magnetic susceptibility $\chi^f_{\text{loc}}$ shows a divergence at the onset of AF. However, inside the AF phase the local susceptibility shows an additional increase at $J_{c,\mathrm{FS}}$. In Fig.~\ref{fig_AF_GS_Sisi_n095}(b) the jump in the staggered magnetization is shown for the three cluster sizes studied here. As can be seen, the jump size at the transition is finite for all cluster sizes, but not systematic, so that a finite-size extrapolation of both the jump size and the jump position is not possible with the available cluster sizes.
A discontinuous transition between the AF phases was also found using VMC approaches \cite{WO07,ABF13}, but within the framework of DCA \cite{MBA10} and rDMFT \cite{PK15} the transition between the two AF phases is found to be continuous instead. Note that only VMC works at zero temperature, as does the VCA, so that the continuous nature of the transitions seen by DCA and rDMFT could be due to finite temperatures. \begin{figure}[t!] \includegraphics[width=0.495\textwidth]{Fig11} \caption{(a) Staggered magnetization $m^c_\text{AF}$, expectation value of the spin-spin correlator $\langle\vec{S}\cdot\vec{s}\rangle=\partial \Omega/\partial J$ and local susceptibility $\chi^f_{\text{loc}} = \partial^2 \Omega/\partial h^2_f|_{h_f \to 0}$ as a function of $J/t$ at $n=0.95$ using a $3\times2$ cluster. (b) Staggered magnetization around the AF${}_1$-AF${}_2$ transition for the three indicated clusters.} \label{fig_AF_GS_Sisi_n095} \end{figure} \begin{figure}[b!] \begin{center} \includegraphics[width=0.495\textwidth]{Fig12} \caption{DOS (top panel) and relative position of the resonances in the DOS (bottom panel) at $n=0.95$ for different coupling strengths in the antiferromagnetic region obtained from the AF solution of the VCA. A change of the peak structure close to the Fermi energy at $\omega=0$ happens at $J/t\gtrsim 1.2$: the main peak splits and an additional side resonance develops. Close to $J_c$ the structure already resembles the three-peak structure known from the Kondo insulating region for $J>J_c$.} \label{fig_3x2_AF_OffHF_DoS_J} \end{center} \end{figure} The top panel of Fig.~\ref{fig_3x2_AF_OffHF_DoS_J} shows part of the DOS close to the Fermi energy for different coupling strengths inside the AF phase at $n=0.95$. At the largest value of the coupling strength shown, which is close to the critical value $J_c/t \approx 1.75$, three distinct peaks are visible which are similar to the characteristic three-peak structure of the paramagnet at $J>J_c$.
When reducing $J$, the two peaks closest to the Fermi energy approach each other and finally seem to merge at $J/t\approx1.2$. However, the bipartite character of this main resonance can still be seen in a shoulder at $\omega\approx-0.18$. Further decreasing the coupling strength results in a more and more broadened shoulder, which at small coupling strengths such as $J/t=0.6$ is barely discernible. More importantly, the leftmost peak, which at half-filling might be identified as an indicator of Kondo singlet formation, survives even down to small coupling strengths. The relative peak position with respect to the center of the gap to the right of MR$_2$, plotted in the bottom panel, is reduced for decreasing coupling strength and jumps at the Fermi-surface change between the two AF phases. However, it is difficult to quantify the peak height of this signal from the DOS, as it would be necessary to remove the background, which is \textit{a priori} unknown. At this point, we return to the question of whether the Kondo breakdown happens at finite or at zero coupling strength. In particular, the local susceptibility and the spin-spin correlator allow us to investigate whether the Kondo breakdown coincides with the discontinuous transition between the two AF phases. In the $\mathrm{AF}_1$ phase close to the critical coupling strength $J_{c,\mathrm{FS}}/t$, the spin-spin correlator is still smaller than $-1/4$, which indicates that the contribution of Kondo singlets is still finite. However, for weak coupling $\langle\vec{S}\cdot\vec{s}\rangle>-0.25$, so that no definite statement about the presence of Kondo singlets can be made. In the case of a transition from itinerant to localized heavy fermions (ILT), the local magnetic susceptibility was found to diverge in a study by Hoshino and Kuramoto \cite{HK13}.
There, the Kondo-Heisenberg model was treated within DMFT and the ILT was only observed for finite non-local spin interactions $J_H$, whereas the heavy fermions were found to be itinerant for the plain Kondo lattice model ($J_H=0$) \cite{HK13}. However, here we find a divergence of the local susceptibility only at the onset of AF and not at $J_{c,\mathrm{FS}}$. This further supports the absence of a breakdown of Kondo singlets in the AF region, in particular at the transition point between the two AF phases. To conclude this discussion, it is difficult to fully exclude the Kondo breakdown scenario for very small coupling strengths. The density of states, the spin-spin correlator and the spectra indicate that in the AF region close to $J_c$ Kondo singlets make up part of the charge carriers. This speaks against a local quantum critical point, where Kondo breakdown and the onset of antiferromagnetic order coincide at $J_c$. Although a jump in the staggered magnetization indicates a phase transition within the AF phase, the absence of a divergence in $\chi^f_{\text{loc}}$ does not suggest a change in the composition of the heavy-fermion state at $J_{c,\mathrm{FS}}$. Nevertheless, the topological differences between the Fermi surface in the weak-coupling and intermediate-coupling regimes are evident. By using larger clusters it would be interesting to check to what extent finite-size effects enter the FS structures that have been discussed here and how the Kondo singlet peak evolves as a function of cluster size. Another possibility to further investigate putative Kondo breakdown scenarios would be to include $f$-propagators in the VCA by basing it on an adapted Luttinger-Ward functional \cite{CPR05}. It also has to be noted that other techniques which work directly in $k$-space, such as DCA \cite{MA08,MBA10} or rDMFT \cite{PK15}, might be able to investigate the Fermi surface evolution more precisely.
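The criterion $\langle\vec{S}\cdot\vec{s}\rangle<-1/4$ invoked above admits a simple quantitative reading under the simplifying assumption (ours, for illustration) that the local two-spin state is an incoherent mixture of the singlet with weight $p_s$ and the triplet states with weight $1-p_s$:

```latex
\begin{equation*}
\langle\vec{S}\cdot\vec{s}\rangle
=-\tfrac{3}{4}\,p_s+\tfrac{1}{4}\,(1-p_s)
=\tfrac{1}{4}-p_s,
\qquad\text{so}\qquad
\langle\vec{S}\cdot\vec{s}\rangle<-\tfrac{1}{4}
\;\Longleftrightarrow\;
p_s>\tfrac{1}{2}.
\end{equation*}
```

In this crude picture, the $\mathrm{AF}_1$ phase close to $J_{c,\mathrm{FS}}$ would still carry a local singlet weight above one half.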
\section{Superconductivity} \label{KLM_SC} In this section, we investigate both local s-wave and nodal $d_{x^2-y^2}$ superconductivity in the KLM. Section \ref{SCs} presents the main results obtained by VCA for s-wave SC, and some additional details of the calculations can be found in Appendices \ref{Details_swave} and \ref{swave_AF_off}. It will be shown that only mean-field-like solutions are present and that there is no local SC due to correlation effects. However, robust d-wave SC is found and investigated in Sec.~\ref{KLM_SC_d}. Finally, the interplay of d-wave SC and AF is analyzed in Sec.~\ref{SCd+AF}. Section~\ref{EOM} complements these numerical results by considering the EOM for the pairing susceptibilities. For small clusters, VCA is known to prefer superconducting solutions even at half-filling, as seen in the Hubbard model \cite{LTP+15}, which is used as a model system for high-temperature superconductors such as the cuprates \cite{SLMT05}. This occurs especially if the system has only a small gap, since then allowing for pairing into another quantum sector results in an energy gain that may be sufficient to overcome this gap. Nevertheless, VCA allowed for a qualitative study of superconductivity in the Hubbard model \cite{SLMT05,AAPH06}, motivating us to apply this technique to the KLM. \begin{table*}[t!]
\begin{center} \begin{tabular}{|l|c|c|c|l|}\hline No.&$D$&$\mu^{\prime}$&$t^{\prime}$&Characteristics\\ \hline\hline 1&$\downarrow$&$\uparrow$&$\uparrow$&Two solutions with $t^{\prime}\approx t$\\\hline 2&$\downarrow$&$\uparrow$&$\downarrow$&Two solutions: One has $t^{\prime}=0$, the other one is thermodynamically unstable \\\hline 3&$\downarrow$&$\downarrow$&$\uparrow$&One solution with $t^{\prime}\approx t$, but thermodynamic stability violated off half-filling\\\hline 4&$\downarrow$&$\downarrow$&$\downarrow$&One solution with $t^{\prime}=0$\\\hline \end{tabular} \caption{Different types of stationary points investigated around half-filling at $J/t=1.5$ when varying $D_s$, $\mu^{\prime}$, and $t^{\prime}$ in the case of s-wave SC. The arrows in columns 2-4 indicate whether $\Omega$ has a maximum ('$\uparrow$') or a minimum ('$\downarrow$') with respect to the variational parameter.} \label{Tab_SCs_J15_tc} \end{center} \end{table*} \subsection{Absence of local s-wave superconductivity in the KLM} \label{SCs} Our starting point for the study of s-wave superconductivity is the observation of such a phase in Ref.~\onlinecite{BZV+13}. By using DMFT with an NRG solver, Bodensiek \textit{et al.} identified a broad region off half-filling and for coupling strengths $J/W\gtrsim0.1$ where the anomalous expectation value $\Phi_s :=\langle c_{\uparrow}c_{\downarrow}\rangle$ had a very small but finite value. Local pairing was already observed in the KLM by mean-field approaches \cite{HKS12,MY13}, but the superconducting state found within DMFT is conceptually different, as the pairing does not occur between $c$- and $f$-electrons. Instead, the superconductivity is mediated only by the antiferromagnetic spin fluctuations, and pairs are formed in the conduction band only. In contrast to this DMFT study, we examine here whether this unconventional scenario persists when spatial fluctuations are included, as they are in the VCA.
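For definiteness, the s-wave pairing Weiss field added to the reference system is of the standard form (schematic notation; the sign and normalization conventions are our assumptions here):

```latex
\begin{equation*}
H_{D_s}=D_s\sum_{i}\bigl(c^{\phantom{\dagger}}_{i\uparrow}c^{\phantom{\dagger}}_{i\downarrow}
+c^{\dagger}_{i\downarrow}c^{\dagger}_{i\uparrow}\bigr),
\end{equation*}
```

which breaks particle-number conservation on the cluster, couples sectors differing by two electrons, and allows a finite $\Phi_s=\langle c_{\uparrow}c_{\downarrow}\rangle$ to develop.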
\subsubsection{Half-filling} At half-filling, adding only an s-wave SC Weiss field leads to a stable stationary point of the SEF. However, this solution has a large Weiss field strength, and including the intra-cluster hopping strength $t^{\prime}$ in the set of variational parameters reveals the local nature of the solution: the hopping $t^{\prime}$ is zero and the reference system consists of decoupled, locally superconducting sites. This artificial mean-field-like solution is the only stable non-trivial stationary point at half-filling. Other true many-body solutions, where s-wave superconductivity is caused by the interaction, are not found. A detailed analysis of s-wave SC at half-filling can be found in Appendix \ref{swaveHF}.\\ \subsubsection{s-wave superconducting solutions off half-filling} \label{swaveOffHF} When changing the chemical potential to leave half-filling, the SEF shows multiple stationary points with respect to the variational parameters. It is therefore important to decide which stationary point corresponds to the physical solution. The set of variational parameters is four-dimensional and includes the chemical potentials of the lattice and the cluster, the SC Weiss field, and the cluster hopping strength. Due to the large number of variational parameters, the $2\times2$ cluster is used to find different stationary points and to discard unphysical solutions. Afterwards, the $3\times2$ cluster is used for calculations on the remaining solution only. In order to select the correct quantum sector, one has to choose one of the (often at least two) possible combinations of the chemical potentials $\mu$ and $\mu^{\prime}$. In general, one expects to be able to tune the filling by changing the chemical potential $\mu$ around the 'natural cluster fillings' $n_{\text{cl}}=N_e/L$. Even a quite weak Weiss field, which couples two adjacent quantum sectors (e.g.
with $N_e$ and $N_e-2$), could lead to an energy gain which exceeds the energy gap between these sectors. Especially for weakly gapped systems this could lead to artificial superconducting solutions due to this overcompensation effect. When including an s-wave superconducting Weiss field and following a stationary point, one often encounters situations where the self-energy jumps or shows a kink as a function of one of the variational parameters. At these points, which have been encountered before in VCA \cite{BP10}, convergence to the correct stationary point is not ensured anymore. Overall, the addition of superconducting Weiss fields poses the problem of choosing the right stationary point, which here also means choosing among different quantum sectors. One first has to search and review all possible saddle points and then decide which ones have to be considered. For stable solutions the SEF should have maxima or minima with respect to the four variational parameters, leading to eight possible types of stationary points. Focusing on the stationary points where the SEF is minimal with respect to the Weiss field strength leads to the four types of stationary points listed in Table~\ref{Tab_SCs_J15_tc}. When evaluating different stationary points, the following criteria are used to assess the solutions. One criterion is the value of the cluster hopping parameter $t^{\prime}$. 'Atomic' solutions with $t^{\prime}=0$ amount to reference systems with decoupled cluster sites that locally form a superconducting singlet state. They are considered to be artificial mean-field solutions and not to represent superconductivity due to many-body effects, and hence will be discarded. Another important criterion is thermodynamic stability. The electron filling $n$ can be obtained either by calculating the derivative of the (approximated) grand potential or by calculating the trace of the VCA Green function.
In order to have thermodynamic stability, both ways of calculating $n$ should lead to the same value. Despite having included the cluster chemical potential in the set of variational parameters, in some cases thermodynamic stability is violated. These two criteria, $t^{\prime}\neq0$ and thermodynamic stability, already reduce the realistic solutions of Table~\ref{Tab_SCs_J15_tc} to the single solution of type 1. There, the stationary point of the SEF is a maximum with respect to $\mu^{\prime}$ and $t^{\prime}$ and a minimum with respect to $D_s$. Since the $2\times2$ cluster shows anomalies in the cluster hopping parameter $t^{\prime}$ already for intermediate values of $J/t$, the $3\times2$ cluster is considered in order to investigate the most promising stationary points off half-filling. \begin{figure}[t!] \begin{center} \includegraphics[width=0.45\textwidth]{Fig13} \caption{SEF and cluster hopping strength $t^\prime$ at the stationary point of the SEF with respect to $\mu$, $\mu^{\prime}$, and $t^{\prime}$ as a function of the Weiss field strength $D_s$. Plotted are two solutions for $J/t=2.4$ at $n=0.9$, one corresponding to a maximum with respect to $t^{\prime}$ (filled symbols), the other one corresponding to a minimum (empty symbols). Note that in either case the SEF has an extremum at $D_s =0$, indicating that no s-wave SC order is stabilized.} \label{fig_J24_SCs_SEF_t} \end{center} \end{figure} In contrast to our previous analysis, where no (clearly physical) s-wave solution was found, we now take the converged parameters of the paramagnetic solution at a filling of $n\approx0.95$ as a starting point and add an s-wave superconducting Weiss field to the set of variational parameters. Again, various stationary points are found, but most of them are identified as unphysical according to the above criteria and therefore neglected.
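The thermodynamic-stability criterion can be illustrated in a minimal toy setting (our own sketch using a free tight-binding band instead of the actual VCA functional): the filling obtained from $n=-\partial\Omega/\partial\mu$ must agree with the filling obtained by directly counting occupied states, the analogue of tracing the Green function.

```python
import numpy as np

# Toy version of the thermodynamic-consistency check: for a free 1D
# tight-binding band the filling from n = -dOmega/dmu (finite difference)
# must match the filling from counting occupied states directly
# (the analogue of tracing the Green function).
L = 2000
k = 2 * np.pi * np.arange(L) / L
eps = -2.0 * np.cos(k)                     # dispersion with t = 1

def omega(mu):
    """Grand potential per site (per spin) at T = 0."""
    return np.sum(np.minimum(eps - mu, 0.0)) / L

mu, h = -0.5, 1e-4
n_thermo = -(omega(mu + h) - omega(mu - h)) / (2 * h)   # n = -dOmega/dmu
n_direct = np.count_nonzero(eps < mu) / L               # direct state count
print(n_thermo, n_direct)  # agree for a thermodynamically consistent theory
```

In the VCA both numbers derive from the same approximate grand potential, so a mismatch signals an inconsistent stationary point rather than a numerical error of this simple kind.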
In Fig.~\ref{fig_J24_SCs_SEF_t} the cluster hopping parameter $t^{\prime}$ after maximization (solution 1) or minimization (solution 2) is plotted as a function of the superconducting Weiss field strength $D_s$. If one wants to exclude 'unphysical' solutions, where the cluster consists of isolated sites and a resulting local self-energy enters the calculation of the SEF, one can restrict the search to the small values of $D_s$ shown in this figure. As can be seen from Fig.~\ref{fig_J24_SCs_SEF_t}, no stationary point is found in this region ($D_s<0.4$) except the non-superconducting solution at $D_s=0$. The qualitative picture shown in Fig.~\ref{fig_J24_SCs_SEF_t} also holds for smaller coupling strengths and fillings around $n=0.90$, and we believe that it prevails for even smaller fillings. For small $D_s$, a solution with reasonable intra-cluster hopping strength $t^{\prime}\sim t$ is found, but already for comparably small values of $D_s\sim 0.4-0.5$ the cluster hopping drops to zero. At this point, the cluster hopping of the second solution, which is zero for small Weiss field strengths, diverges. The solution breaks down and one is left with the case of decoupled sites inside the reference cluster that was discussed before. We show in Appendix \ref{swave_AF_off} that treating both s-wave SC and AF does not stabilize a (different) s-wave SC solution. The only stable solution showing s-wave SC is the one with vanishing hopping $t^{\prime}$ on the reference system. An s-wave SC solution caused by correlations is not found. In the following, we consider d-wave SC instead. \subsection{Nodal d-wave superconductivity} \label{KLM_SC_d} Before treating the possible coexistence of AF and SC order, in the spirit of the investigation so far, we will only add an SC Weiss field to the paramagnetic case as an additional variational parameter to check for the possible existence of d-wave SC at all.
This is particularly interesting, since d-wave superconductivity is often found experimentally in heavy-fermion systems \cite{SW16} and also numerical studies of the Kondo lattice model indicate the existence of a d-wave superconducting phase \cite{AFB14,Ots15}. Although in the recent study of Ref.~\onlinecite{Ots15} by Otsuki a p-wave superconductor was found for coupling strengths around the critical point $J_c$, we will not further investigate this type of SC order due to the already very large number of variational parameters and the accompanying complexity of our treatment. Instead, we focus in this section on superconductivity with $d_{x^2-y^2}$ symmetry and leave the investigation of this further interesting SC channel with VCA to future studies. First, the paramagnetic solution will be taken as a starting point to investigate superconductivity by adding a Weiss field with d-wave symmetry. In Sec.~\ref{SCd+AF} antiferromagnetism will be treated on an equal footing with superconductivity and the interplay of both symmetry-broken phases will be discussed. In the case of s-wave SC, the Cooper pairs form locally and clusters are affected in a uniform way by the Weiss field. The geometry and size of the cluster enter the calculation only through the intra-cluster hopping and, in the case of antiferromagnetism, through the mediated effective RKKY interaction. This changes for the case of extended pairing, such as the non-local $d_{x^2-y^2}$ SC: due to its geometry, the $2\times2$ cluster, for instance, is known to favor $d$-wave pairing, which might bias the result. For this reason the $2\times2$ cluster will only be briefly discussed and mainly used as a reference for the $3\times2$ cluster, for which most of the results will be shown. As long as no AF Weiss field is used in addition to the SC one, the paramagnetic phase diagram at half-filling has to be used as a starting point.
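The symmetry content of the $d_{x^2-y^2}$ channel can be made explicit in a simple BCS-type sketch (our own illustration with a hypothetical gap amplitude, not a VCA result): for a gap $\Delta_k=\Delta_0(\cos k_x-\cos k_y)$ the on-site pair amplitude vanishes by symmetry, while the nearest-neighbor bond combination entering $\Phi_d$ stays finite.

```python
import numpy as np

# Pair amplitudes of a BCS state with a d_{x^2-y^2} gap
# Delta_k = Delta0 * (cos kx - cos ky) on a square lattice.
# t, mu, and Delta0 are hypothetical placeholder values.
L = 200
k = 2 * np.pi * np.arange(L) / L
kx, ky = np.meshgrid(k, k)
t, mu, Delta0 = 1.0, -0.5, 0.1
xi = -2 * t * (np.cos(kx) + np.cos(ky)) - mu
Dk = Delta0 * (np.cos(kx) - np.cos(ky))
F = Dk / (2 * np.sqrt(xi**2 + Dk**2))       # <c_{k,up} c_{-k,dn}>

Phi_s = np.mean(F)                                   # on-site (s-wave) amplitude
Phi_d = np.mean((np.cos(kx) - np.cos(ky)) * F)       # d-wave bond amplitude
print(Phi_s, Phi_d)  # Phi_s vanishes by symmetry, Phi_d stays finite
```

This bond character is also why a cluster whose geometry favors nearest-neighbor pairing, such as the $2\times2$ plaquette, can artificially support the d-wave channel.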
In contrast to the 'full' phase diagram, which by including AF shows an insulator at arbitrary coupling strength $J/t$ at half-filling, the paramagnetic phase diagram also shows a metallic phase at half-filling. To be more precise, the system is metallic in the weak-coupling region and becomes insulating when the coupling exceeds a value $J_{c,INS}$, above which Kondo screening is strong enough to form an insulator consisting of the Kondo singlets. \begin{figure}[t!] \begin{center} \includegraphics[width=0.45\textwidth]{Fig14} \caption{Anomalous expectation value when considering d-wave SC. The figure shows $\Phi_d$ obtained using a $3\times2$ cluster and variational parameters $\mu^{\prime}, t^{\prime}$ and $D$. The circles are data points, the color code indicates the value of $\Phi_d$ at these points. For $J\leq J^{SC}_{ano}\approx0.9$ there is a superconducting solution even at half-filling, but one has to be careful as this region coincides with an anomaly in $t^{\prime}$. The interaction strength $J$ where the anomaly occurs is quite close to the one found in the paramagnetic solution ($J^{PM}_{ano}\approx 0.84$).} \label{fig_SC_d_PD_3x2} \end{center} \end{figure} When looking at the anomalous expectation value $\Phi_{d_{x^2-y^2}}=\langle c_{i\uparrow}c_{i\pm e_x\downarrow} - c_{i\uparrow}c_{i\pm e_y\downarrow}\rangle$, which serves as the order parameter of a superconducting phase, it is not surprising that no superconductivity is found in the insulating region (see Fig.~\ref{fig_SCd_AF}). Contrary to the expectation that there should be no superconductivity at all at half-filling because of the insulating phases, one finds d-wave superconductivity for weak coupling up to $J/t\approx 1.1$. This is the region where the incomplete (paramagnetic) treatment of the system showed an anomaly in the cluster hopping.
For the $2\times2$ cluster the cluster hopping has a minimum at $J/t\sim1.4$ and starts to grow for smaller coupling; in the case of the $3\times2$ and $4\times2$ clusters $t^{\prime}$ even showed a kink when plotted as a function of $J$ at $J/t\sim0.8$, which marked the phase transition to a metal for smaller coupling strengths. As will be shown below, for the $3\times2$ cluster the onset of superconductivity is comparable to this coupling strength $J_{c,INS}$. Although this reveals the need to include antiferromagnetism in the calculations, it still provides the correct starting point for the strong-coupling region, i.e. the paramagnetic region $J>J_{c,AF}$. There, leaving half-filling should be valid as antiferromagnetism is realized neither at nor off half-filling, as was shown in Sec. \ref{KLM_AF}. Leaving the study of the region with coupling $J<J_c\approx 2.05 t$ to the next subsection, where antiferromagnetism is included in the investigation, it remains to consider here the region with $J>J_c$. Still, it is interesting that the maximum of the anomalous expectation value ($J/t\sim2.1$) lies in the region where antiferromagnetic fluctuations lead to the onset of AF long-range order if one permits this type of ordering. The corresponding electron density at the maximum is roughly $n\approx0.65$. Close to half-filling the paramagnetic metal persists at coupling strengths $J>J_c$ and only doping of $\epsilon\sim0.1-0.2$ leads to a finite $\Phi_d$. When lowering the electron density further, the size of the anomalous expectation value diminishes and finally goes to zero at small $n$. \begin{figure}[t!] \begin{center} \includegraphics[width=0.495\textwidth]{Fig15} \caption{Anomalous expectation value $\Phi_d$ as a function of $J/t$ at half-filling for the pure SC solution (blue) and when including also AF (green) using a $3\times2$ cluster.
For the latter case the staggered magnetization is shown in black.} \label{fig_SCd_AF} \end{center} \end{figure} Before including an antiferromagnetic Weiss field, d-wave superconductivity is investigated in the region around $J_c$ using a $3\times2$ cluster (see Fig.~\ref{fig_SC_d_PD_3x2}). For this cluster, $J_c/t\approx1.95$. The overall phase diagram compares qualitatively well to the one of the $2\times2$ cluster and it even gives quantitatively similar results. \subsection{Equations of motion for the pairing susceptibility} \label{EOM} As discussed in Sec.~\ref{SCs}, there is no evidence for local s-wave SC in the VCA treatment of the KLM. Here, we consider a complementary approach by studying generic features of the EOM for the pairing susceptibilities. EOM in the context of the KLM have been used before, e.g., for small clusters or in combination with a mean-field approach \cite{HRN03,WG00}. Here, however, we adapt the approach specifically to treat SC. Since we expect SC to be induced by the interaction, we sketch here the main results for the EOM of the interaction term of the KLM and leave further details to Appendix~\ref{EOM:appendix}. It is convenient to rewrite the interaction part of the KLM Hamiltonian, Eq.~\eqref{eq:KLM}, in terms of annihilation (creation) operators for the electrons in the conduction band $c^{(\dag)}_{i,\sigma}$ and the localized electrons in the $f$-band $f^{(\dag)}_{i,\sigma}$, respectively, giving \cite{CLGI03} \begin{equation}\label{KLM} H_I= \sum\limits_{i,\sigma} \biggl[\frac{J_z}{4} \big( n_{i,\sigma} N_{i,\sigma} - n_{i,\bar \sigma} N_{i,\sigma}\big) + \frac{J_{\perp}}{2} \big( c^{\dag}_{i,\sigma} c_{i,\bar \sigma} f^{\dag}_{i,\bar \sigma} f_{i,\sigma}\big) \biggr]. \end{equation} The operators $n_{i,\sigma}$ ($N_{i,\sigma}$) represent the on-site occupation number of the conduction-band ($f$-band) electrons with spin $\sigma$ on site $i$. For the latter, the constraint $N_{i,\sigma} + N_{i,\bar{\sigma}}=1$ has to hold.
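As a consistency check of Eq.~\eqref{KLM} (our own rederivation, spelled out because it is used implicitly below), the isotropic case $J_z=J_\perp\equiv J$ reduces to the familiar spin-exchange form: with $s^{z}_{i}=(n_{i,\uparrow}-n_{i,\downarrow})/2$, $s^{+}_{i}=c^{\dag}_{i,\uparrow}c_{i,\downarrow}$ and the analogous definitions for $\vec S_i$ in terms of the $f$ operators,

```latex
\begin{align*}
\frac{J}{4}\sum_{\sigma}\big(n_{i,\sigma}N_{i,\sigma}-n_{i,\bar\sigma}N_{i,\sigma}\big)
  &=\frac{J}{4}\,\big(n_{i,\uparrow}-n_{i,\downarrow}\big)\big(N_{i,\uparrow}-N_{i,\downarrow}\big)
   = J\, s^{z}_{i} S^{z}_{i},\\
\frac{J}{2}\sum_{\sigma} c^{\dag}_{i,\sigma}c_{i,\bar\sigma}\,
   f^{\dag}_{i,\bar\sigma}f_{i,\sigma}
  &=\frac{J}{2}\big(s^{+}_{i}S^{-}_{i}+s^{-}_{i}S^{+}_{i}\big)
   = J\,\big(s^{x}_{i}S^{x}_{i}+s^{y}_{i}S^{y}_{i}\big),
\end{align*}
```

so that $H_I=J\sum_i \vec S_i\cdot\vec s_i$, i.e. the usual Kondo exchange.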
In the case of the isotropic KLM treated here, the coupling strengths are $J_z=J_{\perp} \equiv J$. The EOM for the pairing susceptibility is obtained by considering \begin{equation}\label{Gap} C(t) = \langle \Delta_{ij}^{\dagger}(t)\Delta_{i^{\prime}j^{\prime}}(0)\rangle \end{equation} for the pairing operators $$\Delta_{ij}^{\dagger} = F(i,j)c^{\dagger}_{i\uparrow}c^{\dagger}_{j\downarrow}.$$ The function $F(i,j)$ takes into account the geometry of the pairing order parameter. For the various channels of interest, it reads as \begin{equation}\label{Delta} F(i,j) := \left \{ \begin{array}{cl} \delta_{i,j} & \text{s-wave,} \\ &\\ +1: r_i-r_j=\pm e_x, \\ +1: r_i-r_j=\pm e_y, & \text{extended s-wave,} \\ &\\ +1: r_i-r_j=\pm e_x, \\ -1: r_i-r_j=\pm e_y, & \text{d${}_{x^2-y^2}$,} \\ &\\ +1: r_i-r_j=\pm (e_x+e_y), \\ -1: r_i-r_j=\pm (e_x-e_y), & \text{d${}_{xy}$.} \\ \end{array} \right. \end{equation} The time derivative of $\Delta^{\dag}_{ij}(t)$ is given by the Heisenberg equation \begin{equation}\label{EOM_Gap} \frac{d}{dt} \Delta^{\dag}_{ij}(t) =-i \left[\Delta^{\dag}_{ij}(t) , H_{I} \right] = -i F(i,j) \left[c^{\dag}_{i\uparrow}(t)c^{\dag}_{j\downarrow}(t) , H_{I} \right]. \end{equation} Details of the calculation and the resulting expressions are found in App.~\ref{EOM:appendix}. One finds that \[ \frac{d}{dt} \Delta^{\dag}_{ij}(t) =0 \qquad \mbox{for s-wave pairing.} \] This is an interesting observation, since it shows that the dynamical properties of the pairing susceptibility in the s-wave channel will not depend on the interaction, and hence are the same as the ones of free electrons. Note that this result does not completely rule out the possibility of having s-wave SC. However, it restricts the possible mechanisms which might stabilize it to ones in which the frequency dependence of the pairing susceptibility does not play a role. This needs further investigation, which we leave for future research.
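The vanishing s-wave EOM can be verified numerically in a minimal setting (our own toy check on a two-site Fock space, independent of the derivation in App.~\ref{EOM:appendix}): the local pair $c^{\dag}_{i\uparrow}c^{\dag}_{i\downarrow}$ commutes with the interaction of Eq.~\eqref{KLM}, while a nearest-neighbor bond pair, the building block of the d-wave channel, does not.

```python
import numpy as np

# Toy check of d/dt Delta_s^dag = 0: on a two-site Fock space the local
# s-wave pair commutes with the Kondo interaction of Eq. (KLM), while a
# nearest-neighbor bond pair does not.

def jw_ops(n_modes):
    """Jordan-Wigner annihilation operators on the 2**n_modes Fock space."""
    a = np.array([[0., 1.], [0., 0.]])
    Z = np.diag([1., -1.])
    I = np.eye(2)
    ops = []
    for m in range(n_modes):
        factors = [Z] * m + [a] + [I] * (n_modes - m - 1)
        M = factors[0]
        for fac in factors[1:]:
            M = np.kron(M, fac)
        ops.append(M)
    return ops

def kondo_site(cu, cd, fu, fd, J=1.0):
    """Isotropic (J_z = J_perp = J) Kondo term of Eq. (KLM) for one site."""
    dag = lambda M: M.conj().T
    n = {+1: dag(cu) @ cu, -1: dag(cd) @ cd}      # conduction occupations
    N = {+1: dag(fu) @ fu, -1: dag(fd) @ fd}      # f-electron occupations
    c = {+1: cu, -1: cd}
    f = {+1: fu, -1: fd}
    H = np.zeros_like(cu)
    for s in (+1, -1):
        H += (J / 4) * (n[s] @ N[s] - n[-s] @ N[s])            # Ising part
        H += (J / 2) * dag(c[s]) @ c[-s] @ dag(f[-s]) @ f[s]   # spin flip
    return H

# modes per site: (c_up, c_dn, f_up, f_dn); two sites -> 8 modes
c1u, c1d, f1u, f1d, c2u, c2d, f2u, f2d = jw_ops(8)
H_I = kondo_site(c1u, c1d, f1u, f1d) + kondo_site(c2u, c2d, f2u, f2d)

dag = lambda M: M.conj().T
pair_s = dag(c1u) @ dag(c1d)     # local s-wave pair, F(i,j) = delta_ij
pair_nn = dag(c1u) @ dag(c2d)    # nearest-neighbor bond pair (d-wave building block)

comm = lambda A, B: A @ B - B @ A
norm_s = np.linalg.norm(comm(pair_s, H_I))
norm_nn = np.linalg.norm(comm(pair_nn, H_I))
print(norm_s, norm_nn)  # the s-wave commutator vanishes, the bond pair's does not
```

The nonzero bond-pair commutator is precisely what generates the interaction- and magnetization-dependent terms of the d-wave EOM discussed in the following.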
However, together with the lack of evidence for s-wave pairing in our VCA treatment and also in further numerical approaches \cite{Ots15}, this further restricts the possibility of realizing such a phase in the KLM. In contrast, this result does not hold for the extended s-wave and the d-wave channels. Note that the results for the d-wave channel [e.g., Eq.~\eqref{Delta_dwawe}] contain an interesting aspect: the time derivative of $\Delta_{ij}^\dag$ contains expressions which are a product of two creation operators (and hence a pair of fermions) and of the staggered magnetization of the local moments. This indicates that AF order should directly contribute to the dynamics of the SC response function, so that a coexistence or even a cooperative interplay between AF and d-wave SC order might emerge. In the next section such a possible cooperative interplay between SC and AF order is further investigated using the VCA. \subsection{Competition of antiferromagnetism and superconductivity} \label{SCd+AF} In the previous subsection we have seen that away from half-filling superconducting phases can appear in the phase diagram of the Kondo lattice model. References \onlinecite{MSV86,SLH86,BBE86} show that antiferromagnetic spin fluctuations can assist anisotropic even-parity pairing such as the $d$-wave superconductivity investigated here. Also the EOM treatment in Sec.~\ref{EOM} brought up the question of a possible cooperative interplay of AF and d-wave SC order. Here, we are going to investigate this aspect in detail. Especially for small couplings $J/t$ another symmetry breaking enters in the form of antiferromagnetic ordering of the conduction electrons. The interplay of these two effects is known to be important for d-wave superconductivity in the Hubbard model as possibly realized in high-temperature superconductors \cite{SLMT05}. In principle, three scenarios are possible.
The most improbable scenario is that the two phases are independent and, hence, considering magnetism and superconductivity together leads to the same magnetization and anomalous expectation value as treating these effects separately. It is also possible that both phases coexist and that they either compete, meaning that the onset of antiferromagnetism reduces superconductivity and vice versa, or cooperate, in which case superconductivity would be enhanced by the antiferromagnetic ordering. However, considering the results of the previous section, approaching this question within VCA might seem tricky, as we have encountered problems at weak coupling. Nevertheless, as this is the very region where, at least in the normal phase, antiferromagnetism dominates, one has to consider both broken symmetries together to properly address this weak- to intermediate-coupling regime. As shown in Sec.~\ref{KLM_AF}, antiferromagnetism already sets in for intermediate interaction strengths where the divergence of $t^{\prime}$ does not yet pose a problem, but one has to bear in mind that any doping of the system reduces the antiferromagnetic correlations. Hence, doping the system sufficiently to observe superconductivity might already be too much doping to observe antiferromagnetism. Especially for intermediate coupling strengths close to $J_c^{AF}$ this means that one has to investigate a very narrow $\mu$ window corresponding to small doping. We have used a $3\times2$ cluster to revisit the half-filled system at $J<J_c^{AF}$, this time using both a d-wave superconducting and an antiferromagnetic Weiss field at the same time. For all coupling strengths the solution coincides with the AF insulator that was already found in Sec.~\ref{KLM_AF}. While the stationary point with $M=0$ still exists, a comparison of the corresponding energies shows that the antiferromagnetically ordered phase is always lower in energy.
Especially at weak coupling, allowing for both superconductivity and antiferromagnetism results in an antiferromagnetic insulator, and no superconducting solution with lower energy is found at half-filling. \begin{figure}[b!] \begin{center} \includegraphics[width=0.495\textwidth]{Fig16} \caption{Results when considering d-wave SC and AF away from half-filling obtained using a $3\times2$ cluster at $J/t=1.8$. The top panel shows the ground-state energy as a function of filling for the various cases indicated. The vertical lines indicate the transition points at which AF order vanishes when considering AF only or AF+SC, respectively. The center and bottom panels show the anomalous expectation value and the staggered magnetization as a function of electron density $n$ for the superconducting, the antiferromagnetic and the coexisting SC+AF solutions as indicated.} \label{fig_SC+AF_J18} \end{center} \end{figure} In the case of the $3\times2$ cluster the critical coupling strength is $J_{c,AF}\approx 1.95$. Here, we take a closer look at the behavior in the two AF phases identified in Sec.~\ref{KLM_AF_OffHF} by considering $J/t = 1.8$ in Fig.~\ref{fig_SC+AF_J18} and $J/t = 1.2$ in Fig.~\ref{fig_SC+AF_J12}, respectively. By keeping $J/t$ at these values, we avoid crossing the transition line between the two AF phases. Figure \ref{fig_SC+AF_J18} shows both the pure antiferromagnetic and the pure d-wave superconducting solutions as well as a solution with coexistence of superconductivity and antiferromagnetism. Starting with the 'pure' solutions at $J/t=1.8$, for small doping the antiferromagnetic solution has a lower energy than the superconducting one, but their energies cross at $n\approx 0.98$. Considering these two phases only, the system would be antiferromagnetic for $0.98\leq n\leq1$ and d-wave superconducting for fillings smaller than $0.98$.
However, compared to the 'pure' phases, the solution with coexistence of AF and SC has the lowest energy and should therefore be realized in the system. In the coexistence region the anomalous expectation value does not change much compared to the solely superconducting solution with $M_c=0$. At the same time the staggered magnetization is enhanced compared to the AF solution. The coexistence region therefore enlarges the superconducting region to a value close to half-filling and extends the antiferromagnetic region down to an electron density of $n\approx0.955$. In this sense, the interplay of antiferromagnetism and d-wave superconductivity at $J/t=1.8$ can be considered to be cooperative. Outside of the coexistence region, no antiferromagnetic order is present and the solution coincides with the one shown in Fig.~\ref{fig_SC_d_PD_3x2} where d-wave superconductivity without antiferromagnetism was considered. \begin{figure}[t!] \begin{center} \includegraphics[width=0.495\textwidth]{Fig17} \caption{The same as in Fig.~\ref{fig_SC+AF_J18} showing the energy (top panel) and the order parameters (center and bottom panels) as a function of filling $n$, this time for $J/t=1.2$.} \label{fig_SC+AF_J12} \end{center} \end{figure} When reducing the coupling strength, the interplay between antiferromagnetism and superconductivity changes. Figure \ref{fig_SC+AF_J12} shows the same quantities for a coupling of $J/t=1.2$. Still, there exists a coexistence region which has the lowest energy and which is therefore preferred as compared to the pure AF and SC solutions. Close to half-filling ($n\gtrsim0.98$), the coexisting solution has order parameters comparable to those of the pure solutions. For smaller electron density, the staggered magnetization of this solution is reduced compared to the pure antiferromagnetic phase, but the anomalous expectation value is perceptibly larger than the one of the pure d-wave solution.
When extrapolating the staggered magnetization, it becomes clear that the antiferromagnetic region will be reduced compared to the antiferromagnetic phase diagram shown in Fig.~\ref{fig_3x2_AF_OffHF_PD}. At the electron density $n_c$ where the antiferromagnetic order breaks down, the pure superconducting solution should be recovered. Until then (i.e. for $n_c<n<0.98$), the anomalous expectation value is larger than in the pure d-wave solution. The interplay of both symmetry-breaking mechanisms is therefore characterized by a competition between superconductivity and antiferromagnetism at $J/t=1.2$. \begin{figure}[b!] \begin{center} \includegraphics[width=0.495\textwidth]{Fig18} \caption{Values of the order parameters as a function of $J/t$ and $n$ when simultaneously considering d-wave SC and AF. The circles are data points. The values for the anomalous expectation value (intensity color coded in red) and for the staggered magnetization (color coded in blue) as obtained using a $3\times2$ cluster are displayed. The dashed line indicates the transition line at which the AF continuously disappears when increasing $J/t$.} \label{fig_SC+AF_PD} \end{center} \end{figure} The results of the investigation of the interplay of antiferromagnetism and d-wave superconductivity are summarized in Fig.~\ref{fig_SC+AF_PD}. It shows the coexistence region of antiferromagnet and superconductor. This region is limited by an electron density of $n=1$, where superconductivity breaks down, and by a critical coupling strength $J_c(n)$ (indicated by a dashed line in Fig.~\ref{fig_SC+AF_PD}), which marks the breakdown of the antiferromagnet. Note that the transition between two AF phases with different Fermi surface topology discussed in Sec.~\ref{KLM_AF_OffHF} seems to be absent when considering AF and SC simultaneously. Instead, the staggered magnetization $m$ changes smoothly around $J/t\sim1.24$. 
This can be explained by the increase of $m$ in the 'cooperative' region $J/t\gtrsim 1.24$ and the reduction of $m$ in the 'competing' region $J/t\lesssim 1.24$ compared to the 'pure' AF solution. Hence, we do not find supporting evidence for the existence of two distinct SC+AF phases, while a more careful investigation might still identify subtle behavior related to this effect. \section{Summary and Outlook} \label{Summary} Exploiting the strength of VCA that phases with broken symmetry can either be probed or actively avoided by choosing a suitable variational space, the different phases of the KLM were analyzed separately, as well as their interplay. This leads to the main result of this work, which is the phase diagram displayed in Fig.~\ref{fig:PD_sketch}. As the half-filled KLM has special features in the phase diagram, it has been analyzed separately from the doped model. In the paramagnetic phase, due to the absence of Weiss fields, the set of variational parameters is comparatively small, so that it was possible to investigate cluster-size effects and to identify a necessary minimal set of variational parameters. Differences between the $2\times2$, the $3\times2$, and the $4\times2$ clusters were found at small couplings; however, the influence of multiple (anisotropic) hopping strengths for asymmetric clusters turned out not to change the results qualitatively. At half-filling and strong coupling, all clusters lead to a Kondo insulator with a quasiparticle gap which is essentially independent of the cluster size. However, the asymmetric $3\times2$ and $4\times2$ clusters showed an unexpected transition to a paramagnetic metal at $J/t\approx 0.8$. As confirmed by adding an AF Weiss field, in this region a gapped long-range antiferromagnetic order emerges due to the effective RKKY interaction, so that the metallic phase at half-filling has not been investigated in further detail.
Doping the system away from half-filling at strong couplings resulted in a metallic phase with a large Fermi surface. The Fermi surface area in this region amounts to the sum of electron density and f-spin density, which indicates the participation of f-spins in the charge-transfer process via mobile Kondo singlets. The addition of an AF Weiss field as variational parameter allows for the investigation of an emerging antiferromagnetic phase below a critical coupling strength $J_c/t$. At half-filling, finite-size extrapolation is possible. Despite the smallness of the clusters, the extrapolation reveals an infinite-cluster-size value of $J_c/t^{\mathrm{VCA}}\approx1.48\pm0.28$, which agrees within error bars with the one obtained by numerically exact QMC methods \cite{Ass99} at half-filling, $J_c/t^{\mathrm{QMC}} \approx 1.45 \pm 0.05$. Although only three cluster sizes for ladder-shaped clusters could be used, it is impressive to see that the extrapolated value nicely fits these exact results, giving us confidence in the further results obtained by VCA. Unfortunately, only very limited cluster sizes can be treated for this model, so that a finite-size extrapolation in most cases is not possible. However, due to the excellent results at half-filling, we expect that such an extrapolation would lead to results similar to those obtained with numerically exact approaches such as tensor-network methods \cite{VC04,VMC08,COBV10}. It would be desirable to develop improved cluster solvers for larger clusters, so that in future work a finite-size extrapolation within the VCA for arbitrary parameters would become possible. At half-filling, the transition from PM to AF insulator is seen in the spin-spin correlator $\langle \vec{S} \cdot \vec{s} \rangle$, the local susceptibility $\chi_f$, and the DOS. Off half-filling ($1 > n \gtrsim 0.8$), the AF solution becomes metallic and the staggered magnetization decreases when reducing the electron filling.
At some critical filling $n_c(J)$, the magnetization vanishes smoothly and the system continuously goes over to a PM metal. The AF metal possesses two different regions, one at weak coupling with a small Fermi surface and the other one at larger coupling showing closed pocket structures. This needs further investigation, and we believe that using larger cluster sizes would be very helpful to better understand these features of the Fermi surface. Our results for the spin-spin correlator, the local susceptibility, and the DOS indicate the existence of Kondo singlets in the antiferromagnetic phase over a wide region of parameters, and they seem to persist down to weak coupling strengths. The existence of Kondo singlets in the AF phase has been reported before, e.g., by variational Monte Carlo (VMC) \cite{WO07,ABF13}, real-space DMFT (rDMFT) \cite{PK15}, and the dynamical cluster approximation (DCA) \cite{MA08}, and contradicts the Kondo breakdown scenario of mean-field theory \cite{ABF13}. However, when considering only AF Weiss fields in the VCA, we also identify a discontinuous transition within the AF phase at lower values of the filling, which turns continuous for $n \gtrsim 0.97$. This is similar to what has been reported in Refs.~\onlinecite{WO07,ABF13}, while rDMFT \cite{PK15} and DCA \cite{MA08} studies identify a continuous transition, albeit at finite temperature. Note that as the VCA in its current formulation is limited to the electronic degrees of freedom and does not include excitations of the $f$-spins in the Green function, it is difficult to address the question of Kondo screening directly. Including spin excitations in an extension of standard VCA might allow for a detailed future investigation of Kondo screening. In addition to the AF properties, we investigated the phase diagram for possible s-wave SC phases.
This was recently reported by a DMFT+NRG approach \cite{BZV+13}, but has not since been found with DMFT or DMFT-like techniques using different impurity solvers. Already at half-filling, a seemingly superconducting solution obtained with the $2\times2$ cluster could be traced back to a solution of atomic mean-field nature. Extending the variational space, additional stationary points in the self-energy functional were investigated. Most of them can be discarded, since they either correspond to artificial mean-field solutions of isolated SC sites or violate thermodynamic stability. To exclude artificial solutions, a parameter regime for the Weiss field strength was identified and investigated for the most promising stationary points using the $3\times2$ cluster. In this parameter regime of small Weiss fields, the only stable solutions correspond to a PM metal and not to s-wave SC. Also allowing for a possible coexistence with AF did not lead to stable s-wave SC solutions. Interestingly, the equations of motion for the s-wave pairing susceptibility show no $J$ dependence, which might be a further indication that s-wave SC is suppressed in the KLM on a square lattice. In contrast, d-wave SC is stabilized over a wide range of the parameters treated here. However, here too care needs to be taken: including only the d-wave SC Weiss field at half-filling leads to a stable SC solution for weak couplings, which disappears when AF is also included in the treatment. Off half-filling, also in the presence of AF, d-wave SC is stabilized, and a coexistence region for couplings $J< J_c$ is found, which persists down to a critical density $n_c(J)$. Inside this coexistence phase two regions were found: within the VCA treatment, at small couplings AF and d-wave SC seem to be in competition, while close to $J_c$ both appear to act cooperatively. 
In order to establish a connection to experiments, the model should be extended, in particular concerning the existence of a quantum critical point. One possibility is to include additional terms suppressing Kondo singlets, which might lead to a tunable Kondo breakdown; examples are long-range hopping terms, which lead to frustration and have been used in the past \cite{CN10,MBA10, AFB14,WT15}. Another possibility is to extend the model to the Heisenberg-Kondo lattice model \cite{PPN07,XD08,HK13,YGL14}. In both cases the modifications of the Kondo lattice model can also lead to changes in the superconducting channel \cite{AFB14, XD08, YGL14}. The additional Heisenberg interaction leads to a $d$-wave SC condensate, which consists of magnetic pairs of $f$ electrons on neighboring sites, as well as composite pairs containing two conduction electrons \cite{FC10}. Recently, it was shown that singlet pairing correlations are enhanced in the vicinity of a Kondo-destruction quantum critical point \cite{PDIS15}, which motivates the study of $d$- and $s$-wave SC for the Heisenberg-Kondo lattice model. However, in order to study these scenarios with the VCA, one needs to include the possibility of treating non-local interactions. In order to better connect with results obtained by techniques such as DMFT, dual fermions, or DCA, the VCA study could also be extended by using clusters with additional bath sites. This would allow one to consider, in addition to the spatial fluctuations, also dynamical fluctuations between cluster and bath sites, an aspect which might help to better understand the discrepancy between DMFT+NRG and VCA results with respect to s-wave superconductivity. Including bath sites in VCA also offers a route to investigate the transition from d- to p-wave superconductivity found in Ref. \onlinecite{Ots15} with a complementary cluster technique. \acknowledgments We are grateful to the late T. Pruschke who initiated this project. 
We acknowledge helpful discussions with O. Bodensiek as well as computer support by the GWDG and the GoeGrid project. B.L. and S.R.M. are grateful for financial support by the Deutsche Forschungsgemeinschaft (DFG) through research unit FOR 1807, Project No. P7.
\section{Introduction} \label{sec:intro} The study of impact cratering and ejecta generation has been at the forefront of recent exploration missions toward small bodies of the Solar System. In 2005, NASA's mission Deep Impact collided with comet Tempel1 at a speed of more than 6 km/s \citep{blume2003deep}; in 2019, JAXA's mission Hayabusa2 carried out an impact experiment on asteroid Ryugu \citep{tsuda2013system,tsuda2019hayabusa2,tsuda2020hayabusa2}, obtaining images and videos of the impact event and the subsequent crater formation. In 2021, the NASA Double Asteroid Redirection Test (DART) mission \citep{CHENG_DART_2018} was launched toward the Didymos binary system as the first part of the Asteroid Impact and Deflection Assessment (AIDA) joint mission between ESA and NASA \citep{CHENG_AIDA_2015,Rivkin_2021}. DART impacted Dimorphos on 26 September 2022, while the CubeSat LICIACube \citep{2019TortoraLICIA} recorded the event, sending back to Earth the first images of the impact. The ESA HERA mission \citep{2020EPSC_Michel,2018cospar_Michel} will follow DART as the second part of the AIDA mission to investigate in depth the effects of the impact event, analysing the crater formation, the deflection, and the fate of the ejecta. Ejecta models are of utmost importance in studying impact phenomena such as the ones associated with these missions. They are used to predict the size of the crater, the number of generated fragments, and the initial conditions of the ejected particles. This information is exploited to predict the fate of ejecta in time and estimate the share of particles re-impacting and escaping the target body. Most of the ejecta models currently available are based on scaling relationships that have been observed in experimental impacts. These relationships are based on point-source solutions, which link impacts of different sizes, velocities, and gravitational accelerations. 
These scaling relationships are based upon the Buckingham $\pi$ theorem \citep{buckingham1914physically} of dimensional analysis and have had an extensive development over the years \citep{holsapple2007crater,holsapple2012momentum,housen2011ejecta}. Also based on scaling relationships and on principles derived from the Maxwell Z-model of crater excavation \citep{maxwell1975modeling,maxwell1977simple}, Richardson developed a further set of scaling relationships \citep{richardson2007ballistics,richardson2009cratering,richardson2011modeling} for the analysis of the Deep Impact mission to comet Tempel1. In his work, Richardson also derived relationships to describe the in-plane and out-of-plane components of the ejection angles and how they vary with the impact angle. The work of Housen and Holsapple has been extensively used in recent years to model impact craters. Several works have been dedicated to predict the dynamical fate of the ejecta resulting from the impact of the DART spacecraft with Dimorphos. \cite{yu2017ejecta,yu2018ejecta} performed full-scale simulations of the DART impact, modelling the shape of Didymos as a combination of tetrahedral simplices and Dimorphos as an ellipsoid. In their work, the fate of the ejected particles is studied drawing a few hundred thousand samples, assuming a normal impact and studying two possible types of materials for the target asteroid. \cite{rossi2022dynamical} also analyse the evolution of the ejecta plume after the DART impact, focusing on the fate of the particles at different time scales, in order to assess the possibility of the future ESA Hera mission \citep{2020EPSC_Michel} to find particles at its arrival. \cite{fahnestock2022pre} simulate the ejecta plume to obtain synthetic images, representative of the camera view of LICIACube and an Earth-orbiting telescope. 
Ejecta models have also been used to analyse the effects on the safety and operations of a mission scenario, as it was the case for Hayabusa2 \citep{soldini2017assessing}. Additionally, when the impact event can be observed, such as for Hayabusa2, ejecta models can help characterise the properties of the target body, such as the type and strength of the material, by comparing the predicted and observed effects \citep{arakawa2020artificial}. The ejecta models are also fundamental to understanding and analysing the momentum transfer associated with an impact event, which can then characterise the deflection capabilities of the impact \citep{holsapple2012momentum,rossi2022dynamical}. In this work, we present the development of an ejecta cloud distribution-based model used to describe the ejecta parameters (i.e., size, launch position, ejection speed, and ejection direction) via Probability Density Functions (PDFs). The formulation we present starts with a review of existing modelling techniques based on experimental correlations \citep{holsapple2007crater,holsapple2012momentum,housen2011ejecta,richardson2007ballistics,richardson2009cratering,richardson2011modeling,sachse2015correlation}. After the review process, we identified common aspects and differences between the various modelling techniques. The work of Housen and Holsapple \citep{holsapple2007crater,holsapple2012momentum,housen2011ejecta} is mainly dedicated to scaling relationships describing normal impacts. As we are interested in a generic model, also valid for oblique impacts, we integrated the work of Richardson \citep{richardson2007ballistics,richardson2009cratering,richardson2011modeling} to extend the distribution-based model to a generic oblique impact. In addition, following the work of \cite{sachse2015correlation}, we introduce a correlation between the particle size and the ejection speed, which makes larger particles more likely to be ejected at lower speeds. 
The work of synthesis we have performed on previous experimental correlations has resulted in a modular formulation in which the ejecta cloud distribution is a combination of probability distribution functions and conditional distributions that describe the parameters characterising the ejecta. By introducing such a modularity, we allow different models to be plugged in, as long as they can be described as PDFs \citep{trisolini2022scitech}. In this work, we present three different formulations of the ejecta model and assess how they affect the prediction of the ejecta fate. To do so, the ejecta correlations have been reformulated into probability distribution functions and analytical solutions for the corresponding Cumulative Distribution Functions (CDFs) have been obtained. Leveraging the knowledge of the CDFs of the ejecta cloud distribution, we can directly generate samples that follow the cloud distribution. In addition, we can exploit the CDF to analytically integrate the ejecta distribution to estimate the number and mass of particles within specific initial conditions. This feature is exploited in this work to introduce a sampling methodology based on \emph{representative fragments}. With this methodology, each sample represents an ensemble of fragments, to better characterise the overall behaviour of the ejecta cloud. The associated fragments are estimated by integrating the distribution in a neighbourhood of the selected sample. The ability to predict the fate of the ejecta generated by an impact relies on the knowledge of the impactor and target properties, and the impact scenario; however, it also depends on the selection of the ejecta model and the definition of its parameters. 
Previous works have considered impacts on specific target asteroids and comets (e.g., comet Tempel1, asteroid Ryugu, and the Didymos system) and focused on the effect of the target material and equivalent strength \citep{richardson2007ballistics,holsapple2007crater}, the impact location \citep{yu2017ejecta,yu2018ejecta}, or the asteroid environment. Petit and Farinella \citep{petit1993modelling} combined different scaling laws to model the outcome of impacts between asteroids or other small bodies in the solar system. In this work, instead, we focus on the ejecta modelling decisions and study how they can affect the overall evolution of the ejecta \citep{trisolini2022iac}. In fact, as several formulations are available in the literature for the initialisation of the initial conditions of the ejecta, it is interesting to compare the different modelling techniques available, integrated within the distribution-based formulation we developed. Specifically, we investigate, among others, the effect of the particle size range, different models for the ejection speed, and different distributions for the in-plane and out-of-plane components of the ejection angles. As the study of the motion and fate of the ejecta relies on the models used for their initialisation, it is important to understand to what extent different modelling techniques and different assumptions can affect our predictions. \bigbreak \cref{sec:distributions} describes the developed distribution-based formulation for the ejecta model and its mathematical derivation. \cref{sec:sampling} introduces the sampling methodology based on the representative fragments, while \cref{sec:dynamics} describes the dynamical environment for the simulations. \cref{sec:sensitivity} shows the results of the sensitivity analysis on the modelling techniques and \cref{sec:conclusions} summarises the conclusions of the study. 
\section{Ejecta field distribution} \label{sec:distributions} This section provides the description of the modelling technique used to represent the characteristics of an ejecta field after a hypervelocity impact onto celestial bodies. Specifically, we focus on the impact of small kinetic impactors onto the surfaces of small bodies. The proposed modelling principle is based on the scaling relations experimentally derived by \cite{housen2011ejecta,holsapple2007crater,richardson2007ballistics} and leverages their work to build a continuous model, where the ejecta field is described using a combination of probability density functions so that the particle number density is recovered as a function of the ejection variables. In addition, the relevant cumulative distributions are derived, which can be used to directly sample the distribution and obtain the relevant initial conditions after the impact event. Finally, the distribution can be integrated to estimate the number of particles having specified ejection conditions, allowing a better understanding of the overall fate of the ejected particles. The most general expression of the ejecta distribution is a function $p(\mathbf{x})$ that represents the particle number density as a function of the ejection parameters, $\mathbf{x}$. The ejection parameters considered in this work are the particle radius, $s$, the particle launch position (i.e., the radial distance from the centre of the crater to the ejection location), $r$, the particle speed, $u$, and the in-plane and out-of-plane components of the ejection angle, $\xi$ and $\psi$, respectively. \cref{fig:ejection_params} shows the physical meaning of the ejection parameters in a local horizontal reference frame, tangent to the asteroid surface at the impact point. 
In general, the size and ejection angles are always present in the models; speed and position are instead mutually exclusive as they can be related according to experimental correlations \citep{housen2011ejecta,richardson2007ballistics}. Both cases have been analysed: \cref{subsec:position-dist} describes a model that considers the particle's launch position, $r$, while \cref{subsec:speed-dist,subsec:correlated-dist} both describe a model based on the ejection speed, $u$. \begin{figure}[!htb] \centering \includegraphics[width=2.6in]{images/ejection_params} \caption{Schematics of the ejection parameters used for the ejecta distributions. The reference frame \emph{xyz} is a local horizontal frame tangent to the asteroid surface.} \label{fig:ejection_params} \end{figure} The following sections describe the different formulations developed for the ejecta distributions. Three main types of formulations have been considered. A first formulation, identified as \emph{position-based}, in which the distribution variables are $\mathbf{x} = \left\{ s, r, \xi, \psi \right\}$ and the speed is then obtained from the particle's launch position, $r$. A second formulation, identified as \emph{speed-based}, which instead drops the dependency on $r$ and for which the distribution variables are $\mathbf{x} = \left\{ s, u, \xi, \psi \right\}$. A third formulation, which is a \emph{correlated} version of the \emph{speed-based} formulation in which the size and speed of the ejected particles are correlated. In fact, it is reasonable to expect that, on average, smaller particles have higher velocities and vice versa \citep{sachse2015correlation}. In the other two formulations, size and speed are not correlated and all particle sizes can assume any ejection velocity in the distribution range. \cref{sec:sensitivity} will present a comparison between the different formulations. 
\subsection{Position-based distribution} \label{subsec:position-dist} The \emph{position-based} distribution formulation derives the particle number density as a function of $s$, $r$, $\xi$, and $\psi$. As previously mentioned, the model is based on experimental correlations, which have been mainly developed for normal impacts. In this work, we wish to derive a general distribution for oblique impacts by combining the results from \cite{housen2011ejecta} and \cite{richardson2007ballistics}. For a normal impact, we consider the distribution in $\mathbf{x}$ to have the following expression: \begin{equation} \label{eq:normal-dist} p(\mathbf{x}) = p(s, r, \xi_n, \psi_n) = p_s (s) \cdot p_{\xi_n} (\xi_n) \cdot p_{\psi_n|r} (\psi_n|r) \cdot p_r (r) \rm, \end{equation} \noindent where $\xi_n$ and $\psi_n$ are the in-plane and out-of-plane ejection angles relative to a normal impact (i.e., an impact perpendicular to the surface of the target). We observe that, even for a normal impact, the out-of-plane ejection angle, $\psi_n$, depends on the launch position, $r$ \citep{richardson2007ballistics,richardson2011modeling}; therefore, we represent it with a conditional distribution. However, to increase the generality of our formulation, we derive the distribution for a generic oblique impact; therefore, we introduce the following transformation \citep{richardson2007ballistics,richardson2011modeling}: \begin{equation} \label{eq:transformation-position-based} \mathcal{T} : \begin{cases} \xi = \xi_n \\ \psi = \psi_n - \frac{\pi}{6} \cos \phi \left( \frac{1 -\cos \xi}{2} \right) \left(1 - \frac{r}{r_{\rm max}} \right)^2 \end{cases} \rm , \end{equation} \noindent where $\phi$ is the impact angle, measured from a plane tangent to the target surface, and $r_{\rm max}$ is the maximum launch position, which does not necessarily coincide with the transient crater radius \citep{housen2011ejecta}. 
The variable $\xi$ is measured from the x-axis (as shown in \cref{fig:ejection_params}), which is defined as the projection of the impactor's incoming surface-relative velocity vector onto the plane tangent to the target surface. The expression of $\psi$ of \cref{eq:transformation-position-based} is derived from the work of \cite{richardson2007ballistics,richardson2011modeling} and expresses the variation of the out-of-plane ejection angle as a function of the distance from the crater's centre and the in-plane ejection angle, $\xi$. By using this transformation, we can obtain the ejecta distribution for an oblique impact, starting from a normal impact as follows: \begin{equation} p(s, r, \xi, \psi) = \frac{p(s, r, \xi_n, \psi_n)}{|J(\mathcal{T})|} \rm , \end{equation} \noindent where $|J(\mathcal{T})|$ is the determinant of the Jacobian of the transformation of \cref{eq:transformation-position-based}, which, in this case, equals one. Alongside the transformation of variables, we need to take into account the dependency of the distributions on other variables. \cref{subsubsec:inplane-dist-position-based,subsubsec:outplane-dist-position-based} will show that the in-plane and out-of-plane ejection angles depend on a subset of the ejection variables so that the expression for the ejecta distribution can be re-written as: \begin{equation} \label{eq:conditional-position-based} p(s, r, \xi, \psi) = p_s(s) \cdot p_{\psi | \xi, r} (\psi | \xi, r; \phi) \cdot p_{\xi | r} (\xi | r; \phi) \cdot p_r (r) \rm , \end{equation} \noindent where $p_{\psi | \xi, r} (\psi | \xi, r; \phi)$ is the conditional distribution of $\psi$ given $\xi$ and $r$, and having fixed the impact angle $\phi$; $p_{\xi | r} (\xi | r; \phi)$ is the conditional distribution of $\xi$ given $r$ and fixing $\phi$. Therefore, we have a combination of probability distributions in $s$ and $r$, and conditional distributions in $\psi$ and $\xi$. 
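As an illustration of how the factorisation in \cref{eq:conditional-position-based} is used in practice, the joint distribution can be sampled ancestrally: each conditional is drawn given the variables sampled before it. The sketch below (Python with NumPy) is ours and not part of any reference implementation; the \texttt{draw\_*} callables are placeholders for the component distributions described in the following subsections.

```python
import numpy as np

def sample_ejecta(n, draw_s, draw_r, draw_xi_given_r, draw_psi_given_xi_r,
                  rng=None):
    """Ancestral sampling of the factorized ejecta distribution
    p(s, r, xi, psi) = p_s(s) * p_r(r) * p(xi | r) * p(psi | xi, r):
    each conditional is drawn given the variables sampled before it."""
    rng = np.random.default_rng(rng)
    s = draw_s(n, rng)                       # particle radii
    r = draw_r(n, rng)                       # launch positions
    xi = draw_xi_given_r(r, rng)             # in-plane angles, conditioned on r
    psi = draw_psi_given_xi_r(xi, r, rng)    # out-of-plane angles, given xi, r
    return s, r, xi, psi
```

Any of the component samplers discussed below can be plugged in, which mirrors the modularity of the formulation.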
We will now describe in detail all these contributions to the overall ejecta distribution. \subsubsection{Particle size} \label{subsubsec:size-dist} In this formulation, the size distribution is considered to be independent of all the other variables. The distribution derives from the power law expression of the reverse cumulative distribution of the number of particles as a function of their size \citep{krivov2003impact}. \begin{equation} \label{eq:cum_size_unc} G(> s) = N_r \cdot s^{-\bar{\alpha}} \rm . \end{equation} Here, $N_r$ is a multiplicative factor that can be determined from mass conservation. Following the notation of Sachse \citep{sachse2015correlation}, $\bar{\alpha}$ is the exponent defining the slope of the power law. Differentiating \cref{eq:cum_size_unc}, we obtain the differential density distribution function, which has the following expression: \begin{equation} \label{eq:size-distribution} p_s(s) = \frac{d G (< s)}{ds} = \frac{d \left( N_{\rm tot} - G (> s) \right)}{ds} = \bar{\alpha} N_r s^{-1 - \bar{\alpha}} \rm , \end{equation} \noindent with $s_{\rm min} \leq s \leq s_{\rm max}$. We can then obtain $N_r$ from mass conservation as follows: \begin{equation} \label{eq:nr} M_{\rm tot} = \frac{4}{3} \pi \rho \int_{s_{\rm min}}^{s_{\rm max}} s^3 p_s(s) \, d\!s \quad \rightarrow \quad N_r = \frac{3 (3 - \bar{\alpha}) M_{\rm tot}}{4 \bar{\alpha} \left( s_{\rm max}^{3-\bar{\alpha}} - s_{\rm min}^{3-\bar{\alpha}}\right) \pi \rho} \rm , \end{equation} \noindent where $M_{\rm tot}$ is the total mass ejected from the crater, $\rho$ is the density of the asteroid, and $s_{\rm min}$ and $s_{\rm max}$ are the minimum and maximum particle radii, respectively. The minimum and maximum radii are free parameters of the model that can be selected by the user. Commonly used values are 10-100 \si{\micro\meter} for the minimum diameter and 1-10 \si{\centi\meter} for the maximum one \citep{yu2018ejecta}. 
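For illustration, the size distribution can be sampled by inverting its normalised CDF, and $N_r$ follows from \cref{eq:nr}. A minimal sketch (Python with NumPy; the function names are ours), assuming $\bar{\alpha} \neq 3$ and $s_{\rm min} < s_{\rm max}$:

```python
import numpy as np

def sample_sizes(n, s_min, s_max, alpha, rng=None):
    """Draw particle radii from p_s(s) ~ s^(-1-alpha) on [s_min, s_max]
    by inverting the normalized CDF of the power-law size distribution."""
    rng = np.random.default_rng(rng)
    u = rng.random(n)
    a, b = s_min**(-alpha), s_max**(-alpha)
    return (a - u * (a - b))**(-1.0 / alpha)

def n_r_factor(m_tot, rho, s_min, s_max, alpha):
    """Multiplicative factor N_r from mass conservation (valid for alpha != 3)."""
    return (3.0 * (3.0 - alpha) * m_tot
            / (4.0 * alpha * (s_max**(3.0 - alpha) - s_min**(3.0 - alpha))
               * np.pi * rho))
```

By construction, re-integrating $\tfrac{4}{3}\pi\rho\, s^3 p_s(s)$ with the returned $N_r$ recovers $M_{\rm tot}$.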
The total mass ejected can instead be derived from experimental correlations as follows \citep{housen2011ejecta}: \begin{equation} \label{eq:mtot} M_{\rm tot} = k \rho \left[ (n_2 R_c)^3 - (n_1 a)^3 \right] \rm , \end{equation} \noindent where $a$ is the impactor diameter, and $k$, $n_1$, and $n_2$ are coefficients depending on the type of material and impact derived from experimental correlations. The works of Housen and Holsapple contain extensive coverage of the derivation and usage of these parameters. The interested reader is referred to their work \citep{housen2011ejecta,holsapple2007crater,holsapple2012momentum}. \subsubsection{Launch position} \label{subsubsec:launch-dist} The probability distribution of the launch position can be derived from the expression of the mass ejected within a distance $r$ from the crater origin. This expression has been derived by Housen \citep{housen2011ejecta} and has the following form: \begin{equation} \label{eq:mx} M(<r) = k \rho \left[r^3 - (n_1 a)^3 \right] \quad\quad \text{with} \quad n_1 a \leq r \leq n_2 R_c \rm . \end{equation} We can thus obtain the CDF in $r$ by simply dividing by the total mass (\cref{eq:mtot}). \begin{equation} \label{eq:r-cdf} P_r (<r) = \frac{k \rho}{M_{\rm tot}} \left( r^3 - r_{\rm min}^3 \right) \rm , \end{equation} \noindent where $r_{\rm min} = n_1 a$. Analogously, we identify $r_{\rm max} = n_2 R_c$ as the maximum launch position (i.e., the maximum radial distance from the centre of the crater). By differentiating the CDF, we get the probability distribution in \cref{eq:r-distribution} \citep{pishro2016introduction}. \begin{equation} \label{eq:r-distribution} p_r (r) = \frac{3 k \rho}{M_{\rm tot}} r^2 \quad\quad \text{with} \quad r_{\rm min} \leq r \leq r_{\rm max} \rm . \end{equation} \subsubsection{In-plane ejection angle} \label{subsubsec:inplane-dist-position-based} The distribution of the in-plane ejection angle expresses the azimuthal variation of the ejected samples. 
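In code, \cref{eq:mtot,eq:r-cdf} translate into a closed-form mass and an inverse-CDF sampler for the launch position. A sketch (Python with NumPy; the function names are ours):

```python
import numpy as np

def total_ejected_mass(k, rho, n1, n2, a, r_c):
    """Total ejected mass M_tot = k * rho * [(n2*Rc)^3 - (n1*a)^3]."""
    return k * rho * ((n2 * r_c)**3 - (n1 * a)**3)

def sample_launch_positions(n, r_min, r_max, rng=None):
    """Draw launch positions from p_r(r) ~ r^2 on [r_min, r_max] by
    inverting the cubic CDF P(<r) = (r^3 - r_min^3)/(r_max^3 - r_min^3)."""
    rng = np.random.default_rng(rng)
    u = rng.random(n)
    return (r_min**3 + u * (r_max**3 - r_min**3))**(1.0 / 3.0)
```

Note that normalising \cref{eq:r-cdf} by $M_{\rm tot}$ makes the material coefficients $k$ and $\rho$ drop out of the sampler, which depends only on $r_{\rm min}$ and $r_{\rm max}$.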
For normal impacts, this distribution is uniform and the fragments are ejected with the same probability within the range 0$^\circ$-360$^\circ$. For oblique impacts, instead, the uniformity is lost and the distribution starts to assume more complex patterns such as the one of \cref{fig:crater-oblique}. \begin{figure}[htb!] \centering \includegraphics[width=2.6in]{images/crater-oblique-impact} \caption{Image of the ejecta field resulting from an oblique impact on the Moon for a crater of about 0.5 \si{\kilo\meter} in diameter \citep{richardson2011modeling}.} \label{fig:crater-oblique} \end{figure} Richardson proposed an expression for the distribution of such patterns \citep{richardson2011modeling}. However, it was considered too complex for implementation in the proposed framework, as it is not analytically integrable. Therefore, starting from the work of Richardson, we propose the following expression: \begin{equation} \label{eq:xi-distribution} p_{\xi|r}(\xi | r; \phi) = \frac{1}{2 \pi} \left[ 1 - \cos \phi \left( \cos 2\xi \cdot \cos^3 \xi - \frac{1}{5} \cos \xi \cdot \cos^4 2\xi \right) \left( 1 - \frac{r^8}{r_{\rm max}^8} \right) \right] \rm . \end{equation} In addition, differently from Richardson, we have introduced $r_{\rm max}$, instead of the crater radius, to maintain the generality of the model among both the Richardson \citep{richardson2011modeling} and the Housen and Holsapple \citep{housen2011ejecta} correlations. As can be observed, \cref{eq:xi-distribution} is a conditional distribution of $\xi$ given $r$. The angle $\xi$ is measured from the direction of the incoming projectile; therefore, $\xi = 180^\circ$ is downstream of the incoming projectile. An example of the distribution as a function of the impact angle is shown in \cref{fig:xi-lobed-dist-variation}, where we see the peak of the distribution along the projectile direction and two other smaller peaks on the sides, representing symmetrical \emph{lobes}. 
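Because \cref{eq:xi-distribution} does not admit a simple inverse CDF in $\xi$, samples can be drawn by rejection under a uniform envelope: by the triangle inequality the bracketed shape factor is bounded in magnitude by $1.2$, so the density never exceeds $2.2/(2\pi)$. A sketch (Python with NumPy; the function names are ours):

```python
import numpy as np

def lobed_pdf(xi, r, r_max, phi):
    """In-plane ejection-angle density p(xi | r; phi); phi is the impact
    angle measured from the plane tangent to the target surface."""
    shape = (np.cos(2.0 * xi) * np.cos(xi)**3
             - 0.2 * np.cos(xi) * np.cos(2.0 * xi)**4)
    return (1.0 - np.cos(phi) * shape * (1.0 - (r / r_max)**8)) / (2.0 * np.pi)

def sample_xi_lobed(n, r, r_max, phi, rng=None):
    """Rejection-sample xi in [0, 2*pi) under a uniform envelope; the
    crude bound 2.2/(2*pi) dominates the density everywhere."""
    rng = np.random.default_rng(rng)
    bound = 2.2 / (2.0 * np.pi)
    out = np.empty(0)
    while out.size < n:
        cand = rng.uniform(0.0, 2.0 * np.pi, 2 * n)
        accept = rng.uniform(0.0, bound, 2 * n) < lobed_pdf(cand, r, r_max, phi)
        out = np.concatenate([out, cand[accept]])
    return out[:n]
```

Since the shape factor contains only cross-frequency cosine products, it averages to zero over a full period, so the density integrates to one for any $r$ and $\phi$.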
For a 90\si{\degree} impact angle, the distribution degenerates to uniform. \begin{figure}[htb!] \centering \includegraphics[width=2.6in]{images/xi-lobed-impact-angle} \caption{Variation of the in-plane ejection angle \emph{lobed} probability distribution function with respect to the impact angle.} \label{fig:xi-lobed-dist-variation} \end{figure} As discussed in \cite{richardson2011modeling}, a distribution of the type of \cref{eq:xi-distribution} cannot fully describe the complexity of the cratering and ejection phenomenon; however, it can be used for a first analysis of the behaviour of the ejected particles after the impact. Alongside the distribution of \cref{eq:xi-distribution}, we can also introduce a simpler and more manageable description, where only the main lobe is taken into account and the dependency on the launch position, $r$, is dropped. In this case, we model the in-plane ejection angle with a Gaussian distribution: \begin{equation} \label{eq:xi-distribution-gaussian} p_{\xi}(\xi) = \mathcal{N} (\mu_\xi, \sigma_\xi) \rm , \end{equation} \noindent where $\mu_\xi$ and $\sigma_\xi$ are the mean and standard deviation of the distribution, respectively. As the main lobe is in the downstream direction with respect to the incoming projectile, we have $\mu_\xi = 180^\circ$. For the standard deviation, instead, we assume it varies linearly with the impact angle as follows: \begin{equation} \label{eq:sigma-xi} \sigma_\xi = \frac{2}{5} \pi \cdot \frac{\phi - \phi_{\rm min}}{\phi_{\rm max} - \phi_{\rm min}} \quad \text{with} \quad \phi_{\rm min} \leq \phi < \phi_{\rm max} \rm, \end{equation} \noindent where $\phi_{\rm min}$ and $\phi_{\rm max}$ are the minimum and maximum impact angles, respectively. For the presented model, $\phi_{\rm max} = 90^\circ$, while $\phi_{\rm min} = 20^\circ$ as the experimental models on which the distribution formulation is based are valid only down to an impact angle of 20$^\circ$ (\cref{fig:xi-dist-variation}). 
The variation of the standard deviation in \cref{eq:sigma-xi} was extrapolated from the work of \cite{yamamoto2002measurement}. In addition, care must be taken for normal impacts; in this case, the distribution is not Gaussian anymore and we fall back to a uniform distribution. \begin{figure}[htb!] \centering \includegraphics[width=2.6in]{images/xi-impact-angle} \caption{Variation of the in-plane ejection angle Gaussian probability distribution function with respect to the impact angle.} \label{fig:xi-dist-variation} \end{figure} \cref{eq:xi-distribution,eq:xi-distribution-gaussian} describe two options for the characterisation of the in-plane component of the ejection angle after an oblique impact. These distributions can be seen as a starting point for the characterisation of the ejecta field. For example, by using images of the impact event these distributions may be tuned to better describe the event under examination. A different formulation may also be used; for example, a Gaussian mixture model may be better fitted to image data. Given the modularity of the presented formulation, such options can be explored in future works. \subsubsection{Out-of-plane ejection angle} \label{subsubsec:outplane-dist-position-based} The out-of-plane component of the ejection angle, $\psi$, defines how steep the launch angle of the particles is with respect to the body surface. In several studies, this angle is assumed constant and equal to 45$^\circ$ for all the ejected particles. However, as also shown by experimental results, different particles will possess different ejection angles. According to \cite{richardson2007ballistics}, the ejection angle is typically between 27\si{\degree} and 63\si{\degree}. \bigbreak We consider two different options for the distribution in $\psi$. In both cases, we first derive the distribution for normal impacts and then apply the transformation of \cref{eq:transformation-position-based} to obtain the oblique formulation. 
The first option is a spherically uniform distribution as follows: \begin{equation} \label{eq:psin-dist} p_{\psi_n} (\psi_n) = \frac{\cos \psi_n}{\sin \psi_{n,\rm max} - \sin \psi_{n,\rm min}} \quad \text{with} \:\: \psi_{n,\rm min} \leq \psi_n \leq \psi_{n,\rm max} \rm, \end{equation} \noindent which is defined within the aforementioned limits of $\psi_{n,\rm min} =$ 27\si{\degree} and $\psi_{n,\rm max} =$ 63\si{\degree} \citep{richardson2007ballistics}. Inverting the transformation of \cref{eq:transformation-position-based} and substituting into \cref{eq:psin-dist}, we obtain the distribution in $\psi$: \begin{equation} \label{eq:psi-distribution-uniform} p_{\psi|\xi, r} (\psi| \xi, r; \phi) = \frac{\cos \left( \psi + K_\psi (\xi, r; \phi) \right) }{\sin \psi_{n,\rm max} - \sin \psi_{n,\rm min}} \quad \text{with} \:\: \psi_{\rm min} \leq \psi \leq \psi_{\rm max} \rm, \end{equation} \noindent where $K_\psi (\xi, r; \phi) = \frac{\pi}{6} \cos \phi \left( \frac{1 -\cos \xi}{2} \right) \left(1 - \frac{r}{r_{\rm max}} \right)^2$. Note that now we have a conditional distribution in $\psi$, given $r$ and $\xi$. In addition, $\psi_{\rm min} = \psi_{n,\rm min} - K_\psi(\xi, r; \phi)$ and $\psi_{\rm max} = \psi_{n,\rm max} - K_\psi(\xi, r; \phi)$; therefore, the limits also depend on $\xi$ and $r$. \bigbreak For the second option, we start from the work of \cite{richardson2007ballistics}. Here, it is observed that the ejection angle tends to decrease with the distance from the impact point, $r$. A simple linear scaling is considered as follows: \begin{equation} \label{eq:psin} \psi_n (r) = \psi_0 - \psi_d \cdot \frac{r}{r_{\rm max}} \rm, \end{equation} \noindent where $\psi_0 = 52.4^\circ \pm 6.1^\circ$ is the starting angle and $\psi_d = 18.4^\circ \pm 8.2^\circ$ is the total drop angle, using 2$\sigma$ errors \citep{richardson2007ballistics}. The values of $\psi_0$ and $\psi_d$ have been derived from laboratory shots performed by Cintala et al. 
\citep{cintala1999ejection}. These values can be used as starting points; however, they can be changed and tailored to the specific impact, using, for example, direct imaging of the impact event. As \cref{eq:psin} combines quantities given as means and standard deviations, we can treat $\psi_0$ and $\psi_d$ as Gaussian random variables and combine them to obtain: \begin{equation} p_{\psi_n|r} (\psi_n|r) = \mathcal{N} (\mu_n, \sigma_n) \rm, \end{equation} \noindent where $p_{\psi_n|r} (\psi_n|r)$ is a conditional probability distribution of $\psi_n$, given the position, $r$. The mean and standard deviation follow from combining the distributions of $\psi_0$ and $\psi_d$ as follows: \begin{equation} \begin{cases} \mu_n = \mu_0 - \mu_d \cdot \frac{r}{r_{\rm max}} \\ \sigma_n^2 = \sigma_0^2 + \sigma_d^2 \cdot \left( \frac{r}{r_{\rm max}} \right)^2 \end{cases} \rm \end{equation} \noindent and $\mu_0 =$ 52.4$^\circ$, $\sigma_0 =$ 3.05$^\circ$, $\mu_d =$ 18.4$^\circ$, $\sigma_d =$ 4.1$^\circ$. Similarly to \cref{eq:psi-distribution-uniform}, we obtain the distribution for an oblique impact as follows: \begin{equation} \label{eq:psi-distribution-gaussian} p_{\psi|\xi, r} (\psi| \xi, r; \phi) = \mathcal{N} (\mu_n - K_\psi (\xi, r; \phi), \sigma_n) \rm . \end{equation} \subsubsection{Ejection speed} \label{subsubsec:u-dist-position-based} In this \emph{position-based} formulation, the ejection speed is a derived quantity that depends on the other variables, which are sampled from the distributions described in \cref{subsubsec:size-dist,subsubsec:launch-dist,subsubsec:inplane-dist-position-based,subsubsec:outplane-dist-position-based}.
The expression for the ejection speed for oblique impacts has the following form \citep{richardson2007ballistics}: \begin{equation} \label{eq:ejection-speed-oblique} u_{\rm ej} = \sqrt{\left( u_n \sin \psi_n \right)^2 + \left( \frac{u_n \sin \psi_n}{\tan \psi} \right)^2} = u_n \cdot \frac{\sin \psi_n}{\sin \psi} \rm, \end{equation} \noindent where $u_n$ is the ejection speed associated with an equivalent normal impact. In principle, \cref{eq:ejection-speed-oblique} expresses the variation of the ejection speed as a function of the impact angle. This expression is a function of $r$, $\xi$, and $\psi$. \bigbreak The normal ejection speed, $u_n$, can have different expressions; in this work, we selected two commonly used experimentally derived expressions. The first one has been derived by Housen et al. \citep{housen2011ejecta,holsapple2007crater} and has the following form: \begin{equation} \label{eq:speed-housen} u_n (r) = C_1 \cdot U \cdot \left[ \frac{r}{a} \cdot \left( \frac{\rho}{\delta} \right)^\nu \right]^{-\frac{1}{\mu}} \cdot \left( 1 - \frac{r}{r_{\rm max}} \right)^p \rm, \end{equation} \noindent where $C_1$, $\mu$, $\nu$, and $p$ are coefficients depending on the target material, $a$ is the impactor radius, $\delta$ is the density of the impactor, and $U$ is the impactor speed. As \cref{eq:speed-housen} refers to normal impacts, $U$ is the component of the impactor velocity normal to the impact surface. Therefore, for an oblique impact, $U = U_{\rm imp} \cdot \sin \phi$, where $U_{\rm imp}$ is the absolute magnitude of the impactor speed.
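The scaling of \cref{eq:speed-housen} can be sketched numerically. The coefficient defaults below ($C_1$, $\mu$, $\nu$, $p$) are illustrative sand-like assumptions, not values prescribed by the text:

```python
def u_n_housen(r, r_max, a, U, rho, delta,
               C1=0.55, mu=0.41, nu=0.4, p=0.3):
    """Normal-impact ejection speed profile (Housen-style scaling).

    r: launch distance from the impact point; r_max: crater (rim) radius;
    a: impactor radius; U: normal component of the impactor speed;
    rho, delta: target and impactor densities.
    Coefficient defaults are illustrative assumptions only.
    """
    return (C1 * U
            * (r / a * (rho / delta) ** nu) ** (-1.0 / mu)
            * (1.0 - r / r_max) ** p)
```

As the sketch makes explicit, the speed decays as a power law with the distance from the impact point and vanishes at the crater rim, $r = r_{\rm max}$.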
The second expression, instead, has been derived by Richardson \citep{richardson2007ballistics} and is the following: \begin{equation} \begin{cases} \label{eq:speed-richardson} u_n (r) &= \sqrt{u_e^2 - C_{vpg}^2 \cdot g \cdot r - C_{vps}^2 \cdot \frac{Y}{\rho}} \\ u_e &= C_{vpg} \cdot \sqrt{ g \cdot R_{c,g} } \cdot \left( \frac{r}{ n_2 R_{c,g} } \right)^{-\frac{1}{\mu}} \end{cases} \rm, \end{equation} \noindent where $C_{vpg}$ and $C_{vps}$ are proportionality constants in the gravity and strength regime, respectively, $g$ is the net acceleration at the impact point on the surface of the small body, accounting for both the small body's gravitation and its rotation, $R_{c,g}$ is the crater radius in the gravity regime, and $Y$ is the strength of the small body material. \cref{fig:speed-comparison} compares the two velocity expressions of \cref{eq:speed-housen} and \cref{eq:speed-richardson} for the same normal impact. The most noticeable feature is the different magnitude of the speed in the high-velocity region, where the model derived by Richardson returns higher speeds, up to about 200 \si{\meter\per\second}. \begin{figure}[htb!] \centering \includegraphics[width=2.6in]{images/speed-comparison} \caption{Example of the comparison between the ejection speed obtained with \cref{eq:speed-housen} (orange) and \cref{eq:speed-richardson} (blue).} \label{fig:speed-comparison} \end{figure} \subsection{Speed-based distribution} \label{subsec:speed-dist} The second formulation we propose is identified as \emph{speed-based}. Here, we remove the dependency on the launch position, $r$; therefore, the ejection location on the target surface is the same for all the particles, thus assuming that the crater size is small with respect to the target radius. The consequence of this assumption is the substitution of the position distribution, $p_r (r)$, with a speed distribution, $p_u (u)$.
In addition, for the in-plane and out-of-plane ejection angles, we average out the contribution of the launch position (\cref{subsubsec:inplane-dist-speed-based,subsubsec:outplane-dist-speed-based}). In this case the transformation from normal to oblique impact is thus: \begin{equation} \label{eq:transformation-speed-based} \mathcal{T} : \begin{cases} u = \sqrt{\left( u_n \sin \psi_n \right)^2 + \left( \frac{u_n \sin \psi_n}{\tan \psi} \right)^2} = u_n \cdot \frac{\sin \psi_n}{\sin \psi} \\ \xi = \xi_n \\ \psi = \psi_n - \frac{\pi}{18} \cos \phi \left( \frac{1 -\cos \xi}{2} \right) \left(1 - \frac{r_{\rm min}}{r_{\rm max}} \right)^2 = \psi_n - \bar{K}_\psi (\xi; \phi) \end{cases} \rm , \end{equation} \noindent where the first equation derives from \cref{eq:ejection-speed-oblique}, observing that $\sin \psi$ and $\sin \psi_n$ are always between zero and 1, because the out-of-plane ejection angle belongs to the interval $\left[ 0^\circ, 90^\circ \right]$. The third equation derives from the averaging over $r$ of the corresponding expression of \cref{eq:transformation-position-based}. We will see in the following sections that introducing the transformation of \cref{eq:transformation-speed-based}, the distribution can then be expressed as follows: \begin{equation} \label{eq:conditional-speed-based} p(s, u, \xi, \psi) = p_s(s) \cdot p_{u | \xi, \psi} (u | \xi, \psi; \phi) \cdot p_{\psi | \xi} (\psi | \xi; \phi) \cdot p_\xi (\xi) \rm . \end{equation} In the following sections, the building blocks of \cref{eq:conditional-speed-based} will be described, except for the size distribution, which is equivalent to \cref{subsubsec:size-dist}. \subsubsection{In-plane ejection angle} \label{subsubsec:inplane-dist-speed-based} In \cref{subsubsec:inplane-dist-position-based}, we introduced two different expressions for the in-plane ejection angle distribution. 
For the expression based on a single lobe and modelled by a Normal distribution, no further modification is needed as the expression is already independent from the launch position, $r$. The expression of \cref{eq:xi-distribution}, instead, depends on $r$; therefore, we remove this contribution by averaging out in $r$ as follows: \begin{equation} \label{eq:xi-dist-lobed-speed-based} \begin{split} p_\xi (\xi) &= \frac{1}{r_{\rm max} - r_{\rm min}} \int_{r_{\rm min}}^{r_{\rm max}} p_{\xi|r} (\xi | r) \, d\!r = \\ &= \frac{1}{2\pi} \left[1 - \bar{K}_{\xi r} \cdot \cos \phi \left( \cos 2\xi \cdot \cos^3 \xi - \frac{1}{5} \cos \xi \cdot \cos^4 2\xi \right) \right] \rm . \end{split} \end{equation} \noindent where $\bar{K}_{\xi r} = \left[ 8 - 9 \frac{r_{\rm min}}{r_{\rm max}} + \left( \frac{r_{\rm min}}{r_{\rm max}} \right)^9 \right] / \left[ 9 \left( r_{\rm max} - r_{\rm min} \right) \right]$. \subsubsection{Out-of-plane ejection angle} \label{subsubsec:outplane-dist-speed-based} In a similar fashion, we can obtain the out-of-plane ejection angle distribution by averaging over $r$ the equations of \cref{subsubsec:outplane-dist-position-based}. For the spherically uniform formulation, the contribution of the launch position is limited to the transformation of \cref{eq:transformation-position-based}. Therefore, to remove the dependency, we simply use the averaged expression of \cref{eq:transformation-speed-based}. The resulting distribution is analogous to \cref{eq:psi-distribution-uniform}, with the only substitution of $K_\psi (\xi, r; \phi)$ with $\bar{K}_\psi (\xi; \phi)$. 
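As a consistency check, the averaged deflection $\bar{K}_\psi$ can be compared against a direct uniform average of $K_\psi$ over $[r_{\rm min}, r_{\rm max}]$; a minimal numerical sketch, assuming $r$ is averaged uniformly:

```python
import math

def K_psi(xi, r, phi, r_max):
    # Deflection term of the position-based transformation
    return (math.pi / 6 * math.cos(phi)
            * (1 - math.cos(xi)) / 2 * (1 - r / r_max) ** 2)

def K_psi_bar(xi, phi, r_min, r_max):
    # r-averaged deflection used by the speed-based formulation
    return (math.pi / 18 * math.cos(phi)
            * (1 - math.cos(xi)) / 2 * (1 - r_min / r_max) ** 2)

def average_K_psi(xi, phi, r_min, r_max, n=2000):
    # Midpoint-rule average of K_psi over the launch-position range
    h = (r_max - r_min) / n
    return sum(K_psi(xi, r_min + (i + 0.5) * h, phi, r_max)
               for i in range(n)) * h / (r_max - r_min)
```

The two agree because the uniform average of $(1 - r/r_{\rm max})^2$ over $[r_{\rm min}, r_{\rm max}]$ equals $(1 - r_{\rm min}/r_{\rm max})^2 / 3$, turning the $\pi/6$ factor into $\pi/18$.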
\bigbreak For the Gaussian distribution formulation, we proceed in a similar fashion by averaging out \cref{eq:psin} as follows: \begin{equation} \begin{split} \bar{\psi} &= \frac{1}{r_{\rm max} - r_{\rm min}} \int_{r_{\rm min}}^{r_{\rm max}} \left( \psi_0 - \psi_d \frac{r}{r_{\rm max}} \right) \, d\!r \\ &= \psi_0 - \frac{r_{\rm max} + r_{\rm min}}{2 \cdot r_{\rm max}} \cdot \psi_d \end{split} \rm . \end{equation} Therefore, following the same procedure of \cref{subsubsec:outplane-dist-position-based}, we obtain the distribution of the out-of-plane ejection angle for a normal impact, averaged over the launch position: \begin{equation} p_{\psi_n} (\psi_n) = \mathcal{N} (\bar{\mu}_n, \bar{\sigma}_n) \rm , \end{equation} \noindent with modified values of the mean and standard deviations as follows: \begin{equation} \begin{cases} \bar{\mu}_n &= \mu_0 - \frac{r_{\rm max} + r_{\rm min}}{2 \cdot r_{\rm max}} \cdot \mu_d\\ \bar{\sigma}_n^2 &= \sigma_0^2 + \left( \frac{r_{\rm max} + r_{\rm min}}{2 \cdot r_{\rm max}} \right)^2 \cdot \sigma_d^2 \end{cases} \rm . \end{equation} Finally, the Gaussian model of the out-of-plane ejection angle, $\psi$, distribution of the \emph{speed-based} formulation for oblique impacts has the following expression: \begin{equation} \label{eq:psi-dist-gaussian-speed-based} p_{\psi|\xi} (\psi| \xi; \phi) = \mathcal{N} (\bar{\mu}_n - \bar{K}_\psi (\xi; \phi), \bar{\sigma}_n) \rm . \end{equation} \subsubsection{Ejection speed} \label{subsubsec:u-dist-speed-based} The ejection speed distribution is the main difference between the \emph{position-based} and the \emph{speed-based} formulations. To derive the speed distribution, we assume the distribution is of the form $p_{u_n} (u_n) = C_u \cdot u_n^{-1 -\bar{\gamma}}$, where $u_n$ is the ejection speed after a normal impact \citep{sachse2015correlation} and $\bar{\gamma}$ is an exponent that determines the slope of the speed distribution. 
The value of $\bar{\gamma}$ depends on the characteristics of the target material. Comparing the speed distribution with experimental correlations, we derive that $\bar{\gamma} = 3\mu$ \citep{trisolini2022scitech}. To compute the constant $C_u$, we simply impose that the integral of the probability density function is equal to unity, so that the final expression of the distribution is: \begin{equation} \label{eq:un-dist-speed-based} p_{u_n} (u_n) = \frac{\bar{\gamma}}{u_{n,\rm min}^{-\bar{\gamma}} - u_{n,\rm max}^{-\bar{\gamma}}} u_n^{-1 - \bar{\gamma}} \rm . \end{equation} The values of the minimum and maximum ejection speeds can be provided by the user. Possible values can be derived from \cref{eq:speed-housen}. Values for the minimum speed can also be derived from the simplified expressions for the \emph{knee velocity}, $v^{*}$, that is, the approximate value of the slowest ejecta velocity, which is found from laboratory experiments \citep{holsapple2012momentum}. The \emph{knee velocity} has two expressions depending on whether the impact is in the strength ($v_s^{*}$) or in the gravity ($v_g^{*}$) regime: \begin{equation} \label{eq:knee-velocity} \begin{cases} v_s^{*} &= K_{\rm vs} \sqrt{\frac{Y}{\rho}} \\ v_g^{*} &= K_{\rm vg} U \left( \frac{g}{a U^2} \right)^{\frac{1}{2+\mu}} \end{cases} \rm . \end{equation} It is interesting to note here the main difference between the \emph{speed-based} and \emph{position-based} formulations. The speed distribution of the \emph{position-based} formulation (\cref{eq:speed-richardson,eq:speed-housen}) has a minimum ejection speed, at the crater rim, equal to zero. The \emph{speed-based} formulation, instead, has a minimum speed that cannot be equal to zero, as \cref{eq:un-dist-speed-based} otherwise diverges. The speed distribution is thus characterised by a power law with a low-speed cut-off, which eliminates the low-velocity transition after the \emph{knee velocity} (\cref{fig:dist_comparison}).
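Since \cref{eq:un-dist-speed-based} is a truncated power law, it can be sampled exactly by inverting its cumulative distribution; a minimal sketch:

```python
import random

def sample_un(u_min, u_max, gamma_bar, rng):
    """Draw u_n from p(u_n) ∝ u_n^(-1-gamma_bar) on [u_min, u_max]
    by inverse-CDF sampling of the truncated power law."""
    q = rng.random()                # uniform variate in [0, 1)
    a = u_min ** -gamma_bar
    b = u_max ** -gamma_bar
    return (a - q * (a - b)) ** (-1.0 / gamma_bar)
```

With $\bar{\gamma} = 3\mu$ (e.g., $\mu = 0.41$ for a sand-like target), most drawn speeds cluster near the low-speed cut-off, as expected from the steep power law.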
\bigbreak The expression of \cref{eq:un-dist-speed-based} is for a normal impact. We now need to transform it to an oblique impact. The procedure is similar to that used for the ejection angles, so that we have: \begin{equation} \label{eq:speed-based-tranformation} p_u (u) = \frac{p_{u_n} (u_n)}{|J(\mathcal{T})|} \rm . \end{equation} \noindent where $J(\mathcal{T}) = \frac{\sin \psi_n}{\sin \psi}$ is the Jacobian of the transformation of \cref{eq:transformation-speed-based}. Therefore, we have: \begin{equation} \label{eq:u-dist-oblique-speed-based} \begin{split} p_{u|\psi \xi} (u | \psi, \xi; \phi) &= p_{u_n} (u_n) \cdot \frac{\sin \psi}{\sin \psi_n} = \\ &= p_{u_n} \left( \frac{u \sin \psi}{\sin \left( \psi - \bar{K}_\psi (\xi; \phi) \right)} \right) \cdot \frac{\sin \psi}{\sin \left( \psi - \bar{K}_\psi (\xi; \phi) \right)} \end{split} \rm , \end{equation} \noindent where we have inverted the expressions of $u_n$ and $\psi_n$ as functions of $u$ and $\psi$ to complete the transformation. As in the previous cases, passing from a normal impact to an oblique impact, the distribution becomes conditioned, so that we now have a distribution in $u$, given $\psi$ and $\xi$. \subsection{Correlated speed-based distribution} \label{subsec:correlated-dist} This third formulation directly derives from \cref{subsec:speed-dist}; therefore, the two share several characteristics. Specifically, they share the same distributions of the in-plane and out-of-plane ejection angle components so that, also for this formulation, we can refer to \cref{subsubsec:inplane-dist-speed-based,subsubsec:outplane-dist-speed-based}. The main difference, instead, resides in the size and speed distribution, which in this case is correlated. Therefore, we can write the generic expression for our distribution as follows: \begin{equation} \label{eq:conditional-speed-based-corr} p(s, u, \xi, \psi) = p_{su | \xi, \psi} (s, u | \xi, \psi; \phi) \cdot p_{\psi | \xi} (\psi | \xi; \phi) \cdot p_\xi (\xi) \rm .
\end{equation} In \cref{subsubsec:size-speed-dist}, we focus on describing in detail the expression for the correlated size vs. speed distribution. \subsubsection{Size vs. speed} \label{subsubsec:size-speed-dist} The size-speed correlated distribution is derived from the work of Sachse \citep{sachse2015correlation} and has the following expression: \begin{equation} \label{eq:su-corr-sachse} p_{s, u_n} (s, u_n) = A s^{-1-\bar{\alpha}} u_n^{-1-\bar{\gamma}} \cdot \Theta \left[ b s^{-\bar{\beta}} - u_n \right] \rm , \end{equation} \noindent where $\Theta$ is the Heaviside step function, and $A$, $b$, $\bar{\alpha}$, $\bar{\beta}$, and $\bar{\gamma}$ are parameters that characterise the shape of the distribution function. \cref{fig:dist_correlated_example} shows an example of the correlated distribution in size and speed for a normal impact. As we can notice, the maximum possible ejection speed decreases with increasing particle size. A detailed description of the procedure to select and derive these parameters for the correlated distribution is presented in Appendix \cref{app:corr-params}. \begin{figure}[htb!] \centering \includegraphics[width=2.6in]{images/dist_corr} \caption{Example of the correlated distribution in size and speed for a normal impact.} \label{fig:dist_correlated_example} \end{figure} Similarly to \cref{subsubsec:u-dist-speed-based}, the starting expression for the distribution only refers to normal impacts. As we are interested in generic impacts, we must perform a transformation to obtain a distribution that is valid also for oblique impacts.
The transformation is analogous to \cref{eq:speed-based-tranformation} of \cref{subsubsec:u-dist-speed-based}, so that we have the following expression for the correlated distribution: \begin{equation} \label{eq:su-corr-oblique} \begin{split} p_{s, u | \psi, \xi} (s, u | \psi, \xi) &= A \cdot s^{-1 - \bar{\alpha}} \cdot u^{-1 - \bar{\gamma}} \cdot \left( \frac{\sin \psi}{\sin \left( \psi - \bar{K}_\psi (\xi; \phi) \right)} \right)^{-\bar{\gamma}} \cdot \\ &\cdot \Theta \left[ b s^{-\bar{\beta}} - \frac{u \sin \psi}{\sin \left( \psi - \bar{K}_\psi (\xi; \phi) \right)} \right] \end{split} \rm . \end{equation} \subsection{Summary} \label{subsec:summary-distributions} At this point, it is useful to summarise the different formulations and their relevant models. As we have seen in \cref{subsec:position-dist,subsec:speed-dist,subsec:correlated-dist}, we have three distribution formulations (i.e., \emph{position-based}, \emph{speed-based}, and \emph{correlated}). For each of these formulations, we have derived the particle density distribution in the form of a combination of conditional distributions as a function of size, $s$, position, $r$, speed, $u$, and ejection angles, $\xi$ and $\psi$. For some of these variables, namely the speed and the ejection angles, we have identified different modelling techniques based on the type of formulation and on past literature. \cref{tab:summary-models} presents a summary of the models used for the different formulations, along with the relevant equations, to clarify the structure of the different models proposed. In addition, this highlights the modular nature of the proposed distribution-based formulations, which can accommodate different types of models as long as they are expressed in terms of density distributions. \begin{table}[htb!] \centering \caption{Summary of the ejecta distribution formulations and of the available models. \label{tab:summary-models}} \begin{tabular}{lc|ccc} \hline Var.
& Model & Position-based & Speed-based & Correlated \\ & & (\cref{subsec:position-dist}) & (\cref{subsec:speed-dist}) & (\cref{subsec:correlated-dist}) \\ \hline \hline $s$ & & \cref{eq:size-distribution} & \cref{eq:size-distribution} & \cref{eq:su-corr-oblique} \\ \hline $r$ & & \cref{eq:r-distribution} & - & - \\ \hline $u$ & \specialcell{Richardson \\ Housen} & \specialcell{\cref{eq:ejection-speed-oblique,eq:speed-richardson} \\ \cref{eq:ejection-speed-oblique,eq:speed-housen}} & \cref{eq:u-dist-oblique-speed-based} & \cref{eq:su-corr-oblique} \\ \hline $\xi$ & \specialcell{Lobed \\ Gaussian} & \specialcell{\cref{eq:xi-distribution} \\ \cref{eq:xi-distribution-gaussian}} & \specialcell{\cref{eq:xi-dist-lobed-speed-based} \\ \cref{eq:xi-distribution-gaussian} } & \specialcell{\cref{eq:xi-dist-lobed-speed-based} \\ \cref{eq:xi-distribution-gaussian}} \\ \hline $\psi$ & \specialcell{Uniform \\ Gaussian} & \specialcell{\cref{eq:psi-distribution-uniform} \\ \cref{eq:psi-distribution-gaussian}} & \specialcell{\cref{eq:psi-distribution-uniform} \\ \cref{eq:psi-dist-gaussian-speed-based}} & \specialcell{\cref{eq:psi-distribution-uniform} \\ \cref{eq:psi-dist-gaussian-speed-based}} \\ \hline \end{tabular} \end{table} \section{Sampling methodology} \label{sec:sampling} The sampling methodology is a key ingredient in understanding the fate of the ejecta after an impact with a small body. Having defined our ejecta models via continuous distributions, we can exploit this formulation to sample the ejecta distribution. Because we have expressed the distributions as the product of either independent or conditional probability distributions (\cref{eq:conditional-position-based,eq:conditional-speed-based,eq:conditional-speed-based-corr}), we can follow the order of these distributions to perform a random sampling of each of the presented formulations. As an example, let us consider the \emph{position-based} distribution of \cref{eq:conditional-position-based}.
In this case, we can directly sample the distribution in size and position using the cumulative distributions derived from \cref{eq:size-distribution,eq:r-distribution}, because they are independent of the other variables. Then we can obtain the samples of the ejection angles. First, we sample the in-plane distribution, which is conditional on $r$. To do so, we use the already obtained launch positions and we draw samples from the distribution in $\xi$ by inverting its cumulative distribution. Now that we have both the samples in $r$ and $\xi$, we can use the CDF of the conditional distribution in $\psi$ to sample the out-of-plane component of the ejection angle. Finally, we have the full set of random samples from the ejecta distribution. \cref{fig:scatter-sampling} shows an example of ten thousand samples drawn from a size-speed distribution. We can observe that most of the samples concentrate in the region of small diameters and small speeds. This is expected behaviour, as both the size and speed distributions are power laws. It is clear that, to fully characterise an impact, it is necessary to draw a large number of samples, particularly if we are also interested in the dynamical fate of larger particles. \begin{figure}[htb!] \centering \begin{subfigure}[b]{0.42\textwidth} \centering \includegraphics[width=\textwidth]{images/scatter-random} \caption{ } \label{fig:scatter-sampling} \end{subfigure} \hfill \begin{subfigure}[b]{0.42\textwidth} \centering \includegraphics[width=\textwidth]{images/scatter-latin} \caption{ } \label{fig:latin-sampling} \end{subfigure} \caption{Examples of sampling strategies of the size-speed distribution. (a) random variates (b) space-filling.} \end{figure} To address this drawback, different sampling strategies can be assessed. Specifically, in this work, a \emph{space-filling} strategy is adopted so that the entire domain defined by the ejecta distribution is covered.
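The sequential conditional sampling of the \emph{position-based} distribution described above can be sketched as follows. The four samplers are uniform stand-ins for the actual inverse-CDF samplers of the size, position, and angle distributions (hypothetical placeholders, for illustration only):

```python
import math
import random

# Placeholder samplers: in the actual model these invert the CDFs of the
# size, position, in-plane and out-of-plane angle distributions.
def sample_s(rng):
    return rng.uniform(5e-6, 5e-3)            # particle size [m]

def sample_r(rng):
    return rng.uniform(0.0, 1.0)              # normalised launch position

def sample_xi_given_r(r, rng):
    return rng.uniform(0.0, 2 * math.pi)      # in-plane angle, given r

def sample_psi_given_xi_r(xi, r, rng):
    return rng.uniform(math.radians(27), math.radians(63))  # out-of-plane

def sample_position_based(n, seed=0):
    """Sample p(s,r,xi,psi) = p_s * p_r * p_{xi|r} * p_{psi|xi,r}
    by drawing each variable in the order of its conditioning."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        s = sample_s(rng)
        r = sample_r(rng)
        xi = sample_xi_given_r(r, rng)
        psi = sample_psi_given_xi_r(xi, r, rng)
        out.append((s, r, xi, psi))
    return out
```

The ordering matters: each conditional sampler receives only variables that have already been drawn, mirroring the factorisation of \cref{eq:conditional-position-based}.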
\cref{fig:latin-sampling} shows an example of such a strategy, using a latin-hypercube algorithm \citep{SMT2019,Jin2003LatinHypercube} for the sample generation and a log-log filling technique. That is, the samples are created to best fill a space in log-log coordinates. A space-filling strategy in both logarithmic and linear coordinates can be performed. In this work, the logarithmic strategy is used for the ejecta size and speed, while a linear strategy is used for the ejection angles and ejection position. The idea behind this type of sampling is that we can have samples that better characterise the ejecta field and, therefore, a more representative prediction of the ejecta fate around asteroids and other small bodies. In addition, we can exploit the ejecta distribution to assign to each sample a number of \emph{representative fragments}. That is, the single sample we have drawn carries the information of an ensemble of fragments that we associate with it. In this way, we can also track, during the propagation, the behaviour of the whole field of fragments generated by the impact. The computation of the number of \emph{representative fragments} comprises the following steps: \begin{itemize} \item Subdivide the ejecta domain into a grid. The grid can be as fine as needed, but such that at least a few samples fall in each bin. The grid can be specified either in a linear or in a logarithmic space. \item For each bin, we can integrate the ejecta distribution to compute the number of fragments associated with the bin (this procedure can be performed analytically using the cumulative distribution functions (Appendix \cref{app:cumulative-dist})).
\item Assign to each sample inside the bin a number of \emph{representative fragments} by equally subdividing the total number of fragments associated with the bin among the samples in the bin: \begin{equation} \tilde{n}_r^{ij} = \frac{N_f^{j}}{N_s^{j}} \rm , \end{equation} where $\tilde{n}_r^{ij}$ is the number of \emph{representative fragments} associated with the $i$-th sample belonging to the $j$-th bin, and $N_f^{j}$ and $N_s^{j}$ are the total fragments and samples of the $j$-th bin. \end{itemize} \cref{fig:rep-fragments-example} shows an example of the results of the aforementioned procedure for a two-dimensional size-speed distribution. \cref{fig:size-speed-pdf} shows the particle density distribution (it is possible to observe the grid in which the distribution is subdivided), while \cref{fig:rep-fragments} shows ten thousand samples drawn in a space-filling log-log space and the associated number of representative fragments, which depends on the integral of the distribution of \cref{fig:size-speed-pdf} and on the number of samples within each bin. \begin{figure}[htb!] \centering \begin{subfigure}[b]{0.42\textwidth} \centering \includegraphics[width=\textwidth]{images/dist-integral-example} \caption{ } \label{fig:size-speed-pdf} \end{subfigure} \hfill \begin{subfigure}[b]{0.42\textwidth} \centering \includegraphics[width=\textwidth]{images/dist-scatter-nr} \caption{ } \label{fig:rep-fragments} \end{subfigure} \caption{(a) Example of a size-speed distribution with the integration grid for the representative fragments highlighted. (b) Corresponding set of space-filling samples with associated representative fragments.} \label{fig:rep-fragments-example} \end{figure} \section{Dynamics} \label{sec:dynamics} The adopted dynamical model is the Photo-gravitational Hill Problem, which is the extension of the classical Hill problem to a radiating primary \citep{soldini2017assessing}.
The equations of motion are expressed in non-dimensional form in a synodic reference frame centred at the asteroid. The x-axis is along the Sun-asteroid direction, pointing outwards, the z-axis is along the direction of the angular momentum of the asteroid orbit, and the y-axis completes the right-hand system. \begin{equation} \begin{cases} \ddot{x} - 2 \dot{y} = - \frac{x}{r^3} + 3 x + \beta \\ \ddot{y} + 2 \dot{x} = - \frac{y}{r^3} \\ \ddot{z} = - \frac{z}{r^3} - z \end{cases} \end{equation} \noindent where $x$, $y$, and $z$ are the non-dimensional particle positions with respect to the centre of the asteroid in the synodic frame, and $r = \sqrt{x^2 + y^2 + z^2}$ is the particle's distance from the centre of the asteroid. The lightness parameter $\beta$ can be expressed as follows \citep{pinto2020}: \begin{equation} \beta = \frac{P_0}{c} \frac{AU^2}{\mu_a^{1/3} \mu_{\rm Sun}^{2/3}} \frac{3 (1 + c_{\rm R})}{2 \rho_{\rm p} d_{\rm p}} \rm . \end{equation} \noindent where $P_0$ = 1367 \si{\watt\per\meter\squared} is the solar flux at 1 AU, $c$ is the speed of light, AU is the astronomical unit, $\mu_{\rm Sun}$ and $\mu_a$ are the gravitational parameters of the Sun and the asteroid, respectively, $\rho_{\rm p}$ is the particle density, and $d_{\rm p}$ the particle diameter. The reflectivity coefficient, $c_{\rm R}$, is a number between 0 and 1, where 1 is for fully reflective surfaces. Eclipses are taken into account using a cylindrical shadow model via a modified lightness parameter, $\beta^{*}$: \begin{equation} \beta^{*} = \begin{cases} \beta \quad & \text{if } x \leq 0 \\ \beta \cdot f(\sigma) \quad & \text{otherwise} \end{cases} \rm , \end{equation} \noindent where $f(\sigma) = \left( 1 + e^{-s \cdot \sigma} \right)^{-1}$ is a sigmoid function with steepness parameter $s$, which, in this work, is equal to 8 \citep{pinto2020}. The variable $\sigma = r_x - R_a$, with $r_x = \sqrt{y^2 + z^2}$ the distance to the $x$-axis and $R_a$ the mean radius of the asteroid.
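The eclipse switch can be sketched as a small helper; the function name and the use of generic position inputs are illustrative assumptions:

```python
import math

def beta_eclipse(beta, x, y, z, R_a, s=8.0):
    """Modified lightness parameter beta* with a cylindrical shadow.

    x, y, z: particle position in the synodic frame (x toward the Sun
    direction per the frame definition); R_a: mean asteroid radius;
    s: sigmoid steepness (8 in this work). On the sunward side (x <= 0)
    the full beta applies; behind the asteroid the sigmoid smoothly
    switches solar radiation pressure off inside the shadow cylinder.
    """
    if x <= 0:
        return beta
    sigma = math.sqrt(y * y + z * z) - R_a   # distance from shadow edge
    return beta / (1.0 + math.exp(-s * sigma))
```

The sigmoid avoids the discontinuity of a sharp shadow boundary, which would otherwise degrade the numerical integration of the equations of motion.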
\bigbreak The Photo-gravitational Hill Problem has been selected as the dynamical model for the study because it is a relatively simple model that allows taking into account the main forces acting on the ejected fragments, such as the gravity of the asteroid and the solar radiation pressure. At the same time, it allows maintaining the dynamics sufficiently simple to focus our analysis on the ejecta distribution models and the effects that different modelling choices have on the fate of the ejecta. \section{Sensitivity analysis} \label{sec:sensitivity} The objective of this work is to assess how the modelling decisions concerning the ejecta models affect the overall evolution of the particles around the asteroid. As the initial ejection conditions and the size of the particles are fundamental in shaping the trajectory evolution of the samples, we want to understand the impact of specific modelling choices. Specifically, in this work we focus on the characteristics of the ejecta model, while we neglect (and leave to a future study) the effect of the impact location, the size, shape, and orbit of the asteroid. The parameters of the ejecta distributions considered in the sensitivity analysis are: \begin{itemize} \item the minimum particle size, $s_{\rm min}$; \item the slope coefficient of the particle size distribution, $\bar{\alpha}$; \item the speed distribution formulation: Housen (\cref{eq:ejection-speed-oblique,eq:speed-housen}) or Richardson (\cref{eq:ejection-speed-oblique,eq:speed-richardson}); \item the distribution formulation: \emph{position-based} (\cref{subsec:position-dist}) or \emph{speed-based} (\cref{subsec:speed-dist}); \item the correlation between fragment size and ejection speed (\cref{subsec:correlated-dist,subsec:speed-dist}); \item the out-of-plane ejection angle distribution: Uniform or Gaussian; \item the value of the impact angle, $\phi$;
\item the effect of the asteroid rotation. \end{itemize} The selection of these parameters is based on their influence on the ejecta models and on the level of uncertainty associated with them. In the following sections, each of these points is analysed and their effects on the overall ejecta fate are discussed. \bigbreak \noindent\emph{Target and impactor properties}. For a better comparison among the different parameters, we define common characteristics for the target and the impactor. In addition, we fix some of the parameters of the ejecta models. For the asteroid, we select a spherical body with a diameter of 1 \si{\kilo\meter}, a density of 2.6 \si{\gram\per\centi\meter\cubed}, an albedo of 0.1, and a rotational period of 16 \si{\hour}. The semimajor axis of the asteroid's orbit is 1.755 AU. The rotational axis of the asteroid is perpendicular to its orbital plane. The characteristics of the asteroid are average values from the NASA Small Bodies Database \citep{nasa_sbdb}, except for the asteroid radius, for which a larger value has been selected. For the impactor, we select a mass of 2 \si{\kilo\gram}, a diameter of 0.15 \si{\meter}, and an impact speed of 2 \si{\kilo\meter\per\second}, a configuration equivalent to the Small Carry-on Impactor used by the Hayabusa2 mission \citep{tsuda2019hayabusa2,tsuda2020hayabusa2}. These parameters are fixed to focus the analysis on the modelling decisions relative to the ejecta model, to understand the effect they can have on the prediction of the fate of the ejecta, and, particularly, to identify the most influential modelling choices. \bigbreak \noindent\emph{Simulation setup}. The simulations have a time span of 2 months. At the end of the simulation, we refer to the particles as \emph{escaping} if their distance is greater than the Hill radius, as \emph{impacting} if they re-impact the asteroid's surface, and as \emph{orbiting} if they have neither impacted nor escaped the asteroid.
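The end-of-simulation bookkeeping reduces to a simple classification rule; a sketch, where the function and flag names are illustrative:

```python
def classify_fate(r_final, r_hill, has_impacted):
    """Classify a particle at the end of the propagation window.

    r_final: final distance from the asteroid centre; r_hill: Hill
    radius; has_impacted: flag raised during propagation if the
    trajectory reached the asteroid surface.
    """
    if has_impacted:
        return "impacting"      # surface crossing takes precedence
    if r_final > r_hill:
        return "escaping"       # beyond the Hill sphere
    return "orbiting"           # neither impacted nor escaped
```

Checking the impact flag first matters: a particle that hit the surface and whose integration was stopped should never be counted as orbiting or escaping.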
The impact characteristics are kept constant, with a normal impact ($\phi$ = 90\si{\degree}) on the North pole of the asteroid, for all the simulations except those of \cref{subsec:sensitivity-rotation}. A polar impact is specifically selected to remove the dependency on the asteroid rotation in the sensitivity analysis of the other parameters. \bigbreak \noindent\emph{Asteroid material and strength}. Two types of materials are used in the analysis: a sand-like, very low-strength material and the moderate-strength Weakly Cemented Basalt (WCB). \cref{tab:materials} shows the reference properties of these materials. \begin{table}[htb!] \centering \caption{\label{tab:materials} Material properties \citep{housen2011ejecta}.} \begin{tabular}{lcc} \hline & Sand & WCB \\ \hline \hline $\mu$ & 0.41 & 0.46 \\ $C_1$ & 0.55 & 0.18 \\ $k$ & 0.3 & 0.3 \\ $n_1$ & 1.2 & 1.2 \\ $n_2$ & 1.3 & 1 \\ $Y_{\rm ref}$ (\si{\mega\pascal}) & 0 & 0.45 \\ \hline \end{tabular} \end{table} The coefficients $\mu$, $C_1$, $k$, $n_1$, and $n_2$ are characteristics of the material and determine the shape of the ejecta distributions described in \cref{sec:distributions}. $Y_{\rm ref}$ is the reference strength of the material. These values are typically varied to perform sensitivity analyses with respect to the asteroid's strength. However, as we are focusing on the influence of the ejecta models, the strength has been fixed to 0 \si{\mega\pascal} for the sand-like material and 5 \si{\kilo\pascal} for the WCB. In the latter case, we reduced the value with respect to the reference in \cref{tab:materials} to more closely model weakly cohesive soils similar to regolith \citep{holsapple2012momentum,richardson2007ballistics}. Finally, we also select default parameters for the ejecta distribution.
Specifically, the default particle size range is between $s_{\rm min} = \num{5e-6}$ \si{\meter} and $s_{\rm max} = \num{5e-3}$ \si{\meter} (i.e., particle diameters between 10 \si{\micro\meter} and 1 \si{\centi\meter}). In addition, we always assume the $\bar{\gamma}$ coefficient is equal to $3\mu$ (\cref{subsubsec:u-dist-position-based}); therefore, it depends only on the material. \bigbreak \noindent\emph{Sampling and statistical analysis}. Given the statistical nature of the sampling procedure described in \cref{sec:sampling}, and because we associate a number of representative fragments with each sample, we perform multiple runs for each analysis. Specifically, we perform 20 runs and in each run we draw 100 000 samples. In this way, we can assess the robustness and variability of the obtained results, particularly concerning the estimate of the number of fragments (via the representative fragments). Additionally, it is necessary to define the sampling of the distribution. We adopt the procedure described in \cref{sec:sampling}. Because we are interested in the fate of the ejecta in the neighbourhood of the asteroid, we limit the sampling to the fragments with an ejection speed below the escape speed of the asteroid. Therefore, the ejection speed is sampled between $u_{\rm min}$ and $u_{\rm esc}$. Similarly, as the launch position is connected to the ejection speed, the sampling in $r$ must also be limited. Specifically, we sample between $r_{\rm esc}$ and $r_{\rm max}$, where $r_{\rm esc}$ is the launch position corresponding to the escape speed. All ejection locations closer to the impact point than this ``escape radius'' have ejection speeds greater than the escape speed. For both the ejection speed and the launch position we use a grid of 16 logarithmically spaced bins. In a similar fashion, the size distribution is sampled between the minimum and maximum provided particle sizes ($s_{\rm min}$, $s_{\rm max}$) on a grid of 20 logarithmically spaced bins.
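The log-scaled binning just described can be sketched as follows (a minimal illustration of the idea, not the paper's implementation; the function names and the log-uniform draw within each bin are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def log_bins(lo, hi, n_bins):
    """Edges of n_bins logarithmically spaced bins between lo and hi."""
    return np.logspace(np.log10(lo), np.log10(hi), n_bins + 1)

def sample_log_binned(lo, hi, n_bins, n_samples):
    """Pick a bin uniformly, then a value log-uniformly inside it."""
    edges = log_bins(lo, hi, n_bins)
    idx = rng.integers(0, n_bins, n_samples)
    u = rng.random(n_samples)
    log_lo, log_hi = np.log10(edges[idx]), np.log10(edges[idx + 1])
    return 10.0 ** (log_lo + u * (log_hi - log_lo))

# Particle size sampled on 20 log-spaced bins between s_min and s_max
s = sample_log_binned(5e-6, 5e-3, n_bins=20, n_samples=100_000)
```

The same scheme applies to the ejection-speed and launch-position grids, with 16 bins instead of 20.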
The in-plane component of the ejection angle, $\xi$, is sampled between 0\si{\degree} and 360\si{\degree} on a linear grid with 36 bins. The out-of-plane component, $\psi$, is sampled on a linear grid with 8 subdivisions. The sampling range depends on the model adopted: between $\psi_{\rm min}$ and $\psi_{\rm max}$ for a uniform distribution (see \cref{eq:psi-distribution-uniform} in \cref{subsubsec:outplane-dist-position-based}) and in the range $\mu \pm 3 \sigma$ for the Gaussian model (see \cref{eq:psi-distribution-gaussian} in \cref{subsubsec:outplane-dist-position-based}). \subsection{Slope coefficient ($\bar{\alpha}$) of the size distribution} \label{subsec:alpha-sensitivity} The coefficient $\bar{\alpha}$ of the size distribution (\cref{subsubsec:size-dist}) defines the slope of the distribution, that is, the number of particles ejected as a function of their size. Larger $\bar{\alpha}$ coefficients produce distributions with a higher portion of small fragments. Because the size distribution is a power law, it can change significantly with the value of the slope coefficient. According to \cite{krivov2003impact}, typical values of the $\bar{\alpha}$ coefficient for basalt and granite targets are between 2.4 and 2.7, and a good choice in many applications is a value between 2.4 and 2.6. Following the results of Krivov, we test the sensitivity of the ejecta fate as a function of three values of $\bar{\alpha} \in \{ 2.40,\, 2.55,\, 2.70 \}$. For this test case, the \emph{position-based} formulation has been used (\cref{subsec:position-dist}) with a uniform distribution in the in-plane ejection angle, $\xi$ (normal impact), and a Gaussian distribution for the out-of-plane ejection angle, $\psi$ (\cref{subsubsec:outplane-dist-position-based}). Both target materials of \cref{tab:materials} have been considered. \bigbreak First, we compare the fraction of particles orbiting, escaping, and impacting after two months, for the three $\bar{\alpha}$ values considered.
\cref{tab:alpha-results-samples,tab:alpha-results-fragments} show the results for the samples and the estimated fragments, respectively, for the three $\bar{\alpha}$ values and the two materials considered. Because we have twenty runs for each simulation, we present the results as the average percentage of samples and fragments ($\langle \cdot \rangle$) and the percent Relative Standard Deviation (RSD) (i.e., the standard deviation divided by the mean). \cref{tab:alpha-results-samples} shows the statistics for the samples. We can observe that, for both materials, most of the samples (more than 90\%) re-impact the asteroid. Most of the other samples escape, while only a very small percentage is still orbiting after two months. The Relative Standard Deviation shows stable results for both the impacting and escaping samples, while a larger variability is associated with the orbiting samples as they are far fewer. It is interesting to observe that the results are very stable for the different values of $\bar{\alpha}$. Therefore, the influence of the slope coefficient, $\bar{\alpha}$, on the samples' fate appears to be limited. \begin{table}[htb!]
\centering \caption{\label{tab:alpha-results-samples} Sample fractions and corresponding percent relative standard deviations for the three $\bar{\alpha}$ coefficients and the two materials under examination.} \begin{tabular}{l|c|cc|cc|cc} \hline Material & $\bar{\alpha}$ & $\langle N_{\rm imp} \rangle$ & RSD$_{\rm imp}$ & $\langle N_{\rm esc} \rangle$ & RSD$_{\rm esc}$ & $\langle N_{\rm orb} \rangle$ & RSD$_{\rm orb}$ \\ \hline \hline \multirow{3}{*}{Sand} & 2.40 & 98.48\% & 0.028\% & 1.51\% & 1.87\% & 0.006\% & 49.36\% \\ & 2.55 & 98.48\% & 0.028\% & 1.51\% & 1.85\% & 0.007\% & 37.25\% \\ & 2.70 & 98.48\% & 0.039\% & 1.51\% & 2.58\% & 0.006\% & 44.83\% \\ \hline \multirow{3}{*}{WCB} & 2.40 & 91.07\% & 0.031\% & 8.89\% & 0.31\% & 0.039\% & 9.80\% \\ & 2.55 & 91.07\% & 0.055\% & 8.89\% & 0.54\% & 0.038\% & 14.08\% \\ & 2.70 & 91.07\% & 0.060\% & 8.89\% & 0.63\% & 0.038\% & 16.97\% \\ \hline \end{tabular} \end{table} \cref{tab:alpha-results-fragments} shows equivalent results for the number of fragments estimated via the representative fragments. Comparing \cref{tab:alpha-results-fragments} with \cref{tab:alpha-results-samples}, we observe a few interesting features. First, the percentages of fragments impacting, escaping, and orbiting differ from those of the corresponding samples, in particular for the WCB case. This is a consequence of the representative fragments methodology, as a different weight is associated with each sample. The orbiting fragments are a very small percentage in both cases and correspond to a few hundred or thousand samples, although their variability is higher, as highlighted by the percent RSD. The combination of a distribution-based ejecta model (\cref{sec:distributions}) with the representative fragments sampling methodology (\cref{sec:sampling}) is thus able to characterise the fate of the impacting and escaping ejecta with a small variability.
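The robustness metrics used in these tables can be reproduced with a short sketch ($\langle \cdot \rangle$ is the mean over the 20 runs and the percent RSD is the sample standard deviation divided by the mean; the run values below are made up for illustration):

```python
import numpy as np

# Hypothetical escaping-sample percentages from 20 runs of one scenario
runs = np.array([8.89, 8.91, 8.87, 8.90, 8.88] * 4)

mean = runs.mean()                       # <N_esc>
rsd = 100.0 * runs.std(ddof=1) / mean    # percent relative standard deviation

print(f"<N_esc> = {mean:.2f}%  RSD = {rsd:.2f}%")
```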
As for the samples of \cref{tab:alpha-results-samples}, the fate of the ejecta is stable with respect to the change in $\bar{\alpha}$; the main difference resides in the total number of fragments generated by the impact (see \cref{tab:alpha-results-fragments}). In fact, larger values of $\bar{\alpha}$ generate more fragments. We can thus infer that the selection of the slope coefficient, $\bar{\alpha}$, only minimally influences the ultimate fate of the ejecta; however, it scales the total amount of particles generated. \begin{table}[htb!] \centering \caption{\label{tab:alpha-results-fragments} Fragment fractions and corresponding percent relative standard deviations for the three $\bar{\alpha}$ coefficients and the two materials under examination.} \begin{tabular}{l|c|c|cc|cc|cc} \hline Material & $\bar{\alpha}$ & $N_{\rm tot}$ & $\langle N_{\rm imp} \rangle$ & RSD$_{\rm imp}$ & $\langle N_{\rm esc} \rangle$ & RSD$_{\rm esc}$ & $\langle N_{\rm orb} \rangle$ & RSD$_{\rm orb}$ \\ \hline \hline \multirow{3}{*}{Sand} & 2.40 & \num{3.94e13} & 99.74\% & 1.23\% & 0.26\% & 6.80\% & \num{8.8e-10}\% & 67.17\% \\ & 2.55 & \num{8.11e13} & 99.75\% & 1.03\% & 0.25\% & 8.56\% & \num{1.2e-9}\% & 148.84\% \\ & 2.70 & \num{1.57e14} & 99.73\% & 1.59\% & 0.27\% & 5.67\% & \num{9.5e-10}\% & 224.87\% \\ \hline \multirow{3}{*}{WCB} & 2.40 & \num{6.38e10} & 76.63\% & 1.04\% & 23.37\% & 2.55\% & \num{4.52e-7}\% & 46.14\% \\ & 2.55 & \num{1.31e11} & 76.44\% & 1.02\% & 23.56\% & 1.68\% & \num{3.65e-5}\% & 297.92\% \\ & 2.70 & \num{2.54e11} & 75.96\% & 0.57\% & 24.04\% & 1.87\% & \num{7.68e-8}\% & 54.32\% \\ \hline \end{tabular} \end{table} Finally, we also look at the fate of the ejecta as a function of time. \cref{fig:alpha_frag_time} shows the results for the WCB material for both the impacting and escaping fragments. Specifically, \cref{fig:alpha_diff} shows the differential distributions of the impacting and escaping particles in time for the three analysed values of $\bar{\alpha}$.
At each time instant, we can observe how many particles escape and impact the asteroid. \cref{fig:alpha_cum} instead shows the corresponding cumulative distribution, that is, the percentage of particles escaping and impacting within a given time, $t$. From \cref{fig:alpha_cum} we can observe that the behaviour of the normalised distribution is almost identical in all three cases. The only difference is the absolute number of particles involved, which can be appreciated from \cref{fig:alpha_diff} and \cref{tab:alpha-results-fragments}. Therefore, a variation of $\bar{\alpha}$ within the studied interval does not result in a corresponding variation in the overall behaviour of the fragments. \begin{figure}[htb!] \centering \begin{subfigure}[b]{0.42\textwidth} \centering \includegraphics[width=\textwidth]{images/alpha_diff_frag_all} \caption{ } \label{fig:alpha_diff} \end{subfigure} \hfill \begin{subfigure}[b]{0.42\textwidth} \centering \includegraphics[width=\textwidth]{images/alpha_cum_frag_norm_all} \caption{ } \label{fig:alpha_cum} \end{subfigure} \caption{Fragments evolution in time. (a) Number of fragments impacting (blue) and escaping (orange) at time $t$ for the three $\bar{\alpha}$ cases. (b) Corresponding normalised cumulative distribution of the fragments in time.} \label{fig:alpha_frag_time} \end{figure} \subsection{Minimum particle size ($s_{\rm min}$)} \label{subsec:sensitivity-smin} As described in \cref{subsubsec:size-dist}, the size distribution requires the definition of the particle size range to be considered. The upper threshold of the range, $s_{\rm max}$, is usually selected arbitrarily or as a fraction of the total excavated mass \citep{sachse2015correlation}.
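Both thresholds of the size range, together with the slope $\bar{\alpha}$, control the total fragment count through the mass normalisation of the power-law size distribution. The following sketch shows this scaling, assuming a differential distribution $n(s) \propto s^{-\bar{\alpha}-1}$ of spherical particles and an arbitrary ejected mass (none of these specific values is taken from the simulations):

```python
import math

def n_fragments(m_ejecta, rho, alpha, s_min, s_max):
    """Total count of a power law n(s) = C s^(-alpha-1), with C fixed so
    that the distribution carries mass m_ejecta (spheres of radius s,
    density rho). Valid for 0 < alpha < 3."""
    # Mass integral: (4/3) pi rho C * int s^(2-alpha) ds
    mass_int = (s_max**(3 - alpha) - s_min**(3 - alpha)) / (3 - alpha)
    C = m_ejecta / (4.0 / 3.0 * math.pi * rho * mass_int)
    # Number integral: C * int s^(-alpha-1) ds
    return C * (s_min**-alpha - s_max**-alpha) / alpha

M, RHO = 1.0e6, 2600.0  # illustrative ejected mass [kg] and density [kg/m^3]
n_ref = n_fragments(M, RHO, 2.55, 5e-6, 5e-3)
# A steeper slope produces more fragments for the same ejected mass...
assert n_fragments(M, RHO, 2.70, 5e-6, 5e-3) > n_ref
# ...and raising s_min from 5 to 50 microns reduces the count.
assert n_fragments(M, RHO, 2.55, 5e-5, 5e-3) < n_ref
```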
The selection of the lower threshold, $s_{\rm min}$, is somewhat arbitrary; in \cite{yu2017ejecta} a value of 50 \si{\micro\meter} is used, in \cite{yu2018ejecta} a value of 5 \si{\micro\meter} is used instead, while in \cite{sachse2015correlation} the lower threshold is selected based on the sensitivity of the instrument used to detect the particles. Given that the highest particle density corresponds to the lower end of the size range, we decided to focus our attention on the selection of $s_{\rm min}$. Taking as reference the work of Yu and Michel \citep{yu2017ejecta,yu2018ejecta}, we perform the comparison as a function of two values of $s_{\rm min} \in \{ 5\, \si{\micro\meter},\, 50\, \si{\micro\meter} \}$. The selection of the minimum particle size threshold influences the definition and sampling of the ejecta distribution. Because the integral of the size distribution must satisfy mass conservation, changing $s_{\rm min}$ does not only change the minimum fragment size but also the total amount of fragments and the relative percentage of fragments of a given size. For this test case, the \emph{position-based} formulation is used (\cref{subsec:position-dist}) with a uniform distribution for the $\xi$ angle and a Gaussian distribution for the $\psi$ angle (\cref{subsubsec:outplane-dist-position-based}). The results are presented for the WCB material; a similar behaviour can be observed also for sand-like materials. \cref{fig:smin_cum_frag_time} shows the normalised cumulative distributions of the average impacting and escaping fragments for the two $s_{\rm min}$ values considered. We can observe that the 0.05 \si{\milli\meter} case shows a less steep curve for the impacting fragments and a shift of the escaping fragments towards later times. \begin{figure}[htb!]
\centering \includegraphics[width=0.45\textwidth]{images/smin_wcb_cum_frag_norm_all} \caption{Normalised cumulative distributions of impacting (blue) and escaping (orange) fragments as a function of two values of $s_{\rm min}$ (line style) for the WCB material case.} \label{fig:smin_cum_frag_time} \end{figure} The first behaviour is representative of particles that take longer to impact the asteroid because they have, on average, larger diameters. The percentage of impacting particles over the total generated fragments is about 98\% for the 0.05 \si{\milli\meter} case and about 94\% for the 0.005 \si{\milli\meter} case. The latter case has a higher percentage of escaping particles because their smaller average size causes the fragments to be more easily swept away by the solar radiation pressure (SRP). This is also the reason behind the ``delayed'' behaviour of the cumulative distribution of the escaping particles. In fact, larger particles take, on average, longer to escape the neighbourhood of the asteroid. \cref{fig:smin_snap} shows instead the percentage of samples (a) and the number of fragments (b) still orbiting the asteroid at specified snapshots in time. Similarly to what is observed in \cref{subsec:alpha-sensitivity}, the behaviour of the samples differs from the evolution of the fragments. While the percentage of samples follows a similar behaviour in both cases, the fragments show a distinct behaviour: the trend is similar, but the total number of fragments associated with the samples differs. As expected, a lower number of fragments characterises the 0.05 \si{\milli\meter} case, because a smaller number of particles is required to satisfy the mass conservation equation (\cref{eq:nr}). \begin{figure}[htb!]
\centering \begin{subfigure}[b]{0.42\textwidth} \centering \includegraphics[width=\textwidth]{images/smin_wcb_ns_orb_by_snap} \caption{ } \label{fig:smin_ns_snap} \end{subfigure} \hfill \begin{subfigure}[b]{0.42\textwidth} \centering \includegraphics[width=\textwidth]{images/smin_wcb_nf_orb_by_snap} \caption{ } \label{fig:smin_nr_snap} \end{subfigure} \caption{Number of samples (a) and fragments (b) at selected snapshot times for $s_{\rm min}$ = 5 \si{\micro\meter} (solid line with dots) and $s_{\rm min}$ = 50 \si{\micro\meter} (dashed line with triangles).} \label{fig:smin_snap} \end{figure} From the analysis, we can observe that changing the lower threshold of the size range, $s_{\rm min}$, has two main effects on the generation of the ejecta curtain and its fate around the asteroid. First, the total number of generated fragments decreases as the lower threshold increases. Second, the evolution of the particles results in a lower accretion rate of the impacting fragments and a delayed escape of the escaping fragments from the asteroid's sphere of influence. \subsection{Ejecta speed model} \label{subsec:sensitiviy-speed-model} As shown in \cref{subsubsec:u-dist-position-based}, the presented ejecta model allows the selection of two different ejecta speed formulations: \cref{eq:speed-housen}, developed by Housen \citep{housen2011ejecta}, and \cref{eq:speed-richardson}, developed by Richardson \citep{richardson2007ballistics}. To understand the effect of the choice of the ejecta speed formulation, we perform two sets of simulations, changing only the speed model. For this test case, the \emph{position-based} formulation is used (\cref{subsec:position-dist}) with a uniform distribution for the $\xi$ angle and a Gaussian distribution for the $\psi$ angle (\cref{subsubsec:outplane-dist-position-based}).
Because the fate of an ejected fragment is strongly influenced by its initial velocity, it is interesting to study the effect that this modelling choice can have on the outcome of an impact simulation. In addition, it is interesting to understand whether the effects of the ejecta speed model change as a function of the target material. For this reason, both materials of \cref{tab:materials} have been considered. \cref{tab:speed-results-fragments} shows the percentage of fragments impacting, escaping, and still orbiting after the simulation period of two months, together with the RSD to measure the relative robustness of the prediction. The total number of fragments does not depend on the speed model used; however, it is a function of the target material. Specifically, we have a total of approximately \num{2.65e13} fragments for the sand-like material and of \num{1.35e12} fragments for the WCB material. \begin{table}[htb!] \centering \caption{\label{tab:speed-results-fragments} Fragment fractions and corresponding percent relative standard deviations for the Housen and Richardson ejecta speed formulations and the two materials under examination.} \begin{tabular}{l|c|cc|cc|cc} \hline Material & Model & $\langle N_{\rm imp} \rangle$ & RSD$_{\rm imp}$ & $\langle N_{\rm esc} \rangle$ & RSD$_{\rm esc}$ & $\langle N_{\rm orb} \rangle$ & RSD$_{\rm orb}$ \\ \hline \hline \multirow{2}{*}{Sand} & Housen & 99.80\% & 1.30\% & 0.19\% & 4.43\% & \num{2.11e-9}\% & 140.18\% \\ & Richardson & 98.59\% & 1.32\% & 1.41\% & 1.98\% & \num{2.12e-9}\% & 93.72\% \\ \hline \multirow{2}{*}{WCB} & Housen & 97.63\% & 0.95\% & 2.37\% & 4.86\% & \num{3.20e-7}\% & 260.48\% \\ & Richardson & 55.90\% & 1.75\% & 44.10\% & 0.66\% & \num{1.11e-7}\% & 76.03\% \\ \hline \end{tabular} \end{table} The results of \cref{tab:speed-results-fragments} show some interesting features.
If we first focus on the sand-like material, similarly to \cref{subsec:alpha-sensitivity}, we observe that almost the entirety of the fragments eventually re-impact the asteroid. Both the Housen and Richardson models show similar percentages of impacting, escaping, and orbiting fragments, with a marginally higher share of escaping fragments in the Richardson case. However, if we look at the WCB rows, we observe a substantial difference between the two models. In fact, with the Housen velocity profile about 97\% of the particles re-impact, while with the Richardson model only about 56\% re-impact and almost all the remaining fragments escape. Although the two speed models differ the most at high speeds (\cref{fig:speed-comparison}), the low-gravity environment of asteroids makes the modelling of the low-velocity region of utmost importance. In fact, a minor difference in these small ejection speeds (around tens of centimetres per second) can lead to a substantial difference in the overall fate of the ejecta. \cref{tab:speed-results-fragments} shows that this behaviour is also connected to the material type. For very low-strength materials (such as sand), this effect is limited because both models flatten to very low ejection speeds, considerably lower than the escape velocity of the asteroid. If we now focus on the WCB case, we can compare the normalised cumulative distributions of the impacting and escaping fragments (\cref{fig:wcb-speed-cum}). We observe a more pronounced difference for the impacting particles with respect to the escaping ones. As expected, with the Richardson model particles start to escape the system earlier than in the Housen case. On the other hand, the impacting particles show a less steep cumulative distribution; therefore, they tend to re-impact more gradually and over longer times. \begin{figure}[htb!]
\centering \includegraphics[width=0.45\textwidth]{images/speed_wcb_cum_frag_norm_all} \caption{Normalised cumulative distributions of impacting (blue) and escaping (orange) fragments as a function of the two ejecta speed models (line style) for the WCB material case.} \label{fig:wcb-speed-cum} \end{figure} \cref{fig:speed-wcb-hammer} shows the distribution of the number of fragments re-impacting the asteroid surface in a 10\si{\degree} $\times$ 10\si{\degree} grid in right ascension ($\alpha$) and declination ($\delta$) for the Housen speed model. For each grid cell, we compute the total number of representative fragments ``falling'' into the cell and average this value over the 20 impact simulations we perform. This plot shows the distribution of the ejecta blanket for the reference impact scenario and serves as a reference for the following comparisons. In addition, \cref{fig:speedpos_nf_ranges} shows equivalent plots, subdividing the fragments' distribution as a function of their size. Specifically, we consider three different size ranges: $d_p \in \left[ 10\,\si{\micro\metre}, 100\,\si{\micro\metre} \right] $, $d_p \in \left[ 100\,\si{\micro\metre}, 1\,\si{\milli\metre} \right]$, and $d_p \in \left[ 1\,\si{\milli\metre}, 1\,\si{\centi\metre} \right]$. As expected, we can observe how the global distribution of \cref{fig:speed-wcb-hammer} tends to follow the distribution of the smaller particles, as they are in greater quantity. As the particle size increases, the fragments tend to distribute more uniformly on the asteroid's surface, as they are less influenced by the effect of solar radiation pressure. \begin{figure}[htb!] \centering \includegraphics[width=0.48\textwidth]{images/speed_wcb_hammer_nimp_housen} \caption{Average number of representative fragments impacting on the asteroid surface represented with a Hammer projection.
Each 10\si{\degree}$\times$10\si{\degree} bin in right ascension and declination shows the sum of the representative fragments impacting in it, averaged over the set of 20 simulations performed.} \label{fig:speed-wcb-hammer} \end{figure} \begin{figure}[htb!] \centering \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{images/hammer_nimp_housen_range_0} \caption{ } \label{fig:speed-wcb-hammer-range-0} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{images/hammer_nimp_housen_range_1} \caption{ } \label{fig:speed-wcb-hammer-range-1} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{images/hammer_nimp_housen_range_2} \caption{ } \label{fig:speed-wcb-hammer-range-2} \end{subfigure} \caption{Average number of representative fragments impacting on the asteroid surface represented with a Hammer projection. The different plots represent particles belonging to different size ranges. a) $d_p \in \left[ 10\,\si{\micro\metre}, 100\,\si{\micro\metre} \right] $. b) $d_p \in \left[ 100\,\si{\micro\metre}, 1\,\si{\milli\metre} \right]$. c) $d_p \in \left[ 1\,\si{\milli\metre}, 1\,\si{\centi\metre} \right]$.} \label{fig:speedpos_nf_ranges} \end{figure} To better understand the difference between the two ejecta speed models, we can compute the difference between these distributions for the two cases. \cref{fig:speed-wcb-hammer-diff} shows the relative percent difference between distributions of impacting particles, such as the one in \cref{fig:speed-wcb-hammer}, for the Richardson and Housen models: the difference between the representative fragments computed for the Richardson case ($\langle \tilde{n}_{r, r}^{\rm imp} \rangle^{ij}$) and the Housen case ($\langle \tilde{n}_{r, h}^{\rm imp} \rangle^{ij}$) is normalised by the total number of fragments, which is identical for both simulations.
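This comparison amounts to binning the impact points of both runs on the same $(\alpha, \delta)$ grid and normalising the per-cell difference by the total fragment count. A minimal sketch with random stand-in data (the 10\si{\degree} binning matches the text; the impact points and weights are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def bin_impacts(ra_deg, dec_deg, weights):
    """Sum representative-fragment weights on a 10 deg x 10 deg grid."""
    ra_edges = np.arange(0.0, 361.0, 10.0)
    dec_edges = np.arange(-90.0, 91.0, 10.0)
    grid, _, _ = np.histogram2d(ra_deg, dec_deg,
                                bins=[ra_edges, dec_edges], weights=weights)
    return grid

# Stand-in impact points and representative-fragment weights for two models
n = 10_000
ra, dec = rng.uniform(0, 360, n), rng.uniform(-90, 90, n)
w_richardson = rng.exponential(1.0e7, n)
w_housen = rng.exponential(1.0e7, n)

n_r = bin_impacts(ra, dec, w_richardson)
n_h = bin_impacts(ra, dec, w_housen)

# Relative percent difference, normalised by the total number of fragments
diff_percent = 100.0 * (n_r - n_h) / n_h.sum()
print(f"max |difference| = {np.abs(diff_percent).max():.3f}%")
```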
The light yellow regions correspond to areas where the number of impacting fragments is almost identical for the two models. These are the regions at negative declinations and on the Sun-facing hemisphere of the asteroid. These are also the regions where fewer fragments land, as the effect of solar radiation pressure tends to blow the fragments in the direction opposite to the Sun. The red regions identify areas where the Housen model generates more impacts. These are located in the neighbourhood of the impact crater because of the lower ejection speeds predicted by this model. The blue regions are instead characterised by a larger amount of impacts for the Richardson model. The maximum and minimum percent differences are of the order of 1\%. This may seem a small value; however, 1\% of the total number of fragments is on the order of \num{1e9} particles. \begin{figure}[htb!] \centering \includegraphics[width=0.48\textwidth]{images/speed_wcb_hammer_diff_frag} \caption{Relative percent difference between the distribution of the average number of impacting representative fragments obtained with the Richardson and the Housen models.} \label{fig:speed-wcb-hammer-diff} \end{figure} The choice of the ejecta speed model can thus influence the overall fate of the ejecta, especially for higher-strength materials such as the WCB. The differences between the ejecta fates are mainly due to the difference in the low-velocity region of the two models. This determines the share of ejecta belonging to the impacting and escaping categories; it also influences the rate of accretion and the distribution of the impacting particles onto the asteroid's surface. \subsection{Position-based vs.
speed-based formulation} \label{subsec:sensitivity-pos-vel} As described in \cref{sec:distributions}, we have introduced two formulations of the developed distribution-based ejecta model: one formulation based on sampling the ejection location, $r$, identified as \emph{position-based}, and another formulation based on sampling directly the ejection speed, $u$, identified as \emph{speed-based}. The main difference between the two formulations resides in the presence of a cut-off speed, the \emph{knee velocity} (\cref{eq:knee-velocity}), in the \emph{speed-based} formulation, below which no mass is ejected. The \emph{knee velocity} is always larger than the minimum ejection speed of the \emph{position-based} formulation; therefore, a difference between the two formulations is expected. To quantify this difference, we perform two sets of simulations. One set uses the \emph{position-based} formulation (\cref{subsec:position-dist}) with a uniform distribution for the $\xi$ angle and a Gaussian distribution for the $\psi$ angle (\cref{subsubsec:outplane-dist-position-based}). The other set uses the \emph{speed-based} formulation with the corresponding distributions for the $\xi$ and $\psi$ angles (i.e., \cref{subsubsec:inplane-dist-speed-based,subsubsec:outplane-dist-speed-based}, respectively). For this test case, only the results for the WCB material are presented. \bigbreak Similarly to the previous section, we show the normalised cumulative distributions of impacting and escaping fragments in \cref{fig:speedpos-wcb-cum}. We can clearly observe the effect of the \emph{knee velocity} in the cumulative distribution of the impacting particles. The \emph{speed-based} distribution, in fact, shows a significant ``delay'' in the average impact time of the fragments. After one hour, almost 90\% of the fragments generated with the \emph{position-based} formulation have re-impacted, while only about 5\% of those generated with the \emph{speed-based} formulation have done so.
However, it is important to note that the total amount of generated fragments is different for the two formulations: \num{3.816e11} for the \emph{position-based} and \num{2.217e11} for the \emph{speed-based}, which corresponds to about 42\% fewer fragments. In addition, the share of fragments re-impacting the asteroid is different for the two formulations: 93.36\% of the total number of fragments for the \emph{position-based} formulation vs. 54.51\% for the \emph{speed-based} one. \begin{figure}[htb!] \centering \includegraphics[width=0.45\textwidth]{images/speedpos_wcb_cum_frag_norm_all} \caption{Normalised cumulative distributions of impacting (blue) and escaping (orange) fragments as a function of the two ejecta model formulations (line style) for the WCB material case.} \label{fig:speedpos-wcb-cum} \end{figure} Analogously to \cref{fig:speed-wcb-hammer-diff}, we show in \cref{fig:speedpos-wcb-hammer} the relative percent difference between the distributions of particles impacting onto a Hammer projection of the asteroid surface obtained with the \emph{speed-based} and \emph{position-based} formulations. In this case, because the total number of fragments is different for the two formulations, we normalise with respect to the total fragments of the \emph{position-based} formulation. The results show how the distribution of impacting fragments is influenced by the choice of the model, with a clear pattern in the impact distribution. A first red region, closer to the impact location, identifies an area with a larger amount of impacts predicted by the \emph{position-based} model. A second region, highlighted by the blue shades, identifies areas with a higher predicted amount of impacts for the \emph{speed-based} model. These areas are farther away from the impact location because a larger amount of faster particles is generated. \begin{figure}[htb!]
\centering \includegraphics[width=0.48\textwidth]{images/speedpos_wcb_hammer_diff_frag_percent} \caption{Relative percent difference between the distribution of the average number of impacting representative fragments obtained with the \emph{speed-based} and \emph{position-based} formulations.} \label{fig:speedpos-wcb-hammer} \end{figure} Similarly to \cref{subsec:sensitiviy-speed-model}, the comparison between the \emph{speed-based} and \emph{position-based} formulations is influenced by the magnitude of the \emph{knee velocity} and by its difference with respect to the minimum speed of the \emph{position-based} formulation. This, in turn, is influenced by the type, strength, and density of the target material. Specifically, the higher the strength of the material, the larger the difference in the fate of the ejecta. As mentioned at the beginning of \cref{sec:sensitivity}, the previous results have been obtained for a WCB material with an equivalent strength of 5 \si{\kilo\pascal}. To understand the influence of the strength on this analysis, \cref{fig:speedpos_nf} shows the difference in the evolution of the fragments at selected snapshots for a WCB target with an equivalent strength of (a) 1 \si{\kilo\pascal} and (b) 5 \si{\kilo\pascal}. We can observe that in the lower-strength case the fragment evolution of the two formulations shows a comparable behaviour, while a greater discrepancy appears once the strength is increased to 5 \si{\kilo\pascal}. \begin{figure}[htb!] \centering \begin{subfigure}[b]{0.42\textwidth} \centering \includegraphics[width=\textwidth]{images/speedpos_wcb1k_nf_orb_by_snap} \caption{ } \label{fig:speedpos_wcb1k_nf} \end{subfigure} \hfill \begin{subfigure}[b]{0.42\textwidth} \centering \includegraphics[width=\textwidth]{images/speedpos_wcb5k_nf_orb_by_snap} \caption{ } \label{fig:speedpos_wcb5k_nf} \end{subfigure} \caption{Number of fragments at selected snapshots for the two distribution formulations.
(a) WCB material with strength $Y = 1$ \si{\kilo\pascal}. (b) WCB material with strength $Y = 5$ \si{\kilo\pascal}.} \label{fig:speedpos_nf} \end{figure} \subsection{Correlated vs. Uncorrelated distribution} \label{subsec:sensitivity-correlated} This section presents the comparison between the \emph{speed-based} formulations with and without the correlation between the particle size and the ejection speed (\cref{subsec:speed-dist,subsec:correlated-dist}). As mentioned in \cref{sec:distributions}, the difference between these two formulations resides in the speed limitation the correlated distribution imposes for increasing particle size: the larger the particle, the lower the maximum admissible ejection speed. This test case compares the two formulations in terms of the overall ejecta behaviour. Alongside the correlated and uncorrelated size-speed distributions, we use a uniform distribution for the $\xi$ angle and a Gaussian distribution for the $\psi$ angle (\cref{subsubsec:outplane-dist-speed-based}). The material selected for this test case is a strengthless sand-like material. \bigbreak Some of the results for the overall ejecta evolution can be observed in \cref{fig:corruncorr_cum,fig:corruncorr_nf}, which show the normalised cumulative distribution of the impacting and escaping fragments and the total number of fragments still orbiting the asteroid at selected snapshots in time. \cref{fig:corruncorr_cum} shows that the escaping particles have an almost identical behaviour over time, while a difference can be observed for the impacting fragments. Specifically, the cumulative distribution associated with the correlated formulation is less steep; therefore, we have a larger percentage of fragments surviving longer around the asteroid, at least in the first stages after the impact. \begin{figure}[htb!]
\centering \includegraphics[width=0.45\textwidth]{images/corr_sand_cum_frag_norm_all} \caption{Normalised cumulative distributions of impacting (blue) and escaping (orange) fragments for the correlated and uncorrelated formulations (line style) for the sand-like material case.} \label{fig:corruncorr_cum} \end{figure} Other features can be observed in \cref{fig:corruncorr_nf}. First, the total number of fragments is higher for the correlated case: given that larger particles with higher velocities are not admissible, a larger number of smaller fragments is necessary to satisfy the mass conservation constraint. Second, in the latest stages of the propagation (after about 100 hours), the number of fragments for the correlated case decreases more rapidly and no fragments remain after approximately 700 hours, while fragments are still present for the uncorrelated case. Because in the correlated formulation larger fragments have lower ejection velocities, there is also a smaller probability that they keep orbiting the asteroid for longer timescales. \begin{figure}[htb!] \centering \includegraphics[width=0.45\textwidth]{images/corr_sand_nf_orb_by_snap} \caption{Number of fragments at selected snapshots for the correlated and uncorrelated formulation.} \label{fig:corruncorr_nf} \end{figure} This feature can be better observed in \cref{fig:corruncorr_t_dp}, which shows the average number of representative fragments re-impacting the asteroid as a function of the impact time (on the x-axis) and particle diameter (on the y-axis). \cref{fig:corruncorr_t_dp_corr} clearly shows the effect of the size-speed correlation on larger particles, whose upper limit of the impact time decreases as their diameter increases. \begin{figure}[htb!]
\centering \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{images/corr_sand_timp_dp_uncorrelated} \caption{ } \label{fig:corruncorr_t_dp_uncorr} \end{subfigure} \hfill \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{images/corr_sand_timp_dp_correlated} \caption{ } \label{fig:corruncorr_t_dp_corr} \end{subfigure} \caption{Average number of representative fragments re-impacting the asteroid as a function of the impact time and the particle diameter. (a) Uncorrelated distribution. (b) Correlated distribution.} \label{fig:corruncorr_t_dp} \end{figure} Overall, introducing the size-speed correlation seems to have a limited influence on the overall fate of the ejecta. It has to be noted, however, that we are focusing on the outcome of a cratering event on small bodies and are limited to ejection velocities below the escape speed of the considered target. As this velocity is small, the correlation effect is also limited; if a wider range of ejection velocities were considered, a greater impact could be expected. \subsection{Out-of-plane ejection angle ($\psi$) distribution} \label{subsec:sensitivity-psi} In this section, we consider the possible difference caused by the selection of different models for the out-of-plane component of the ejection angle, $\psi$. As described in \cref{sec:distributions}, we consider two types of distributions, a Uniform and a Gaussian one. Again, for the test case, the \emph{position-based} formulation is used (\cref{subsec:position-dist}) with a uniform distribution for the $\xi$ angle and the Housen formulation for the ejection speed (\cref{subsubsec:u-dist-position-based}). The considered material is the WCB (\cref{tab:materials}). Following the previous examples, \cref{fig:psi-wcb-cum} shows the normalised cumulative distribution of the impacting and escaping fragments as a function of the out-of-plane ejection angle model.
From the figure we can observe only a marginal difference in the overall ejecta fate due to the modelling assumption concerning the ejection angle. The evolution of the escaping fragments is superimposed for the two models, while a small difference can be observed for the impacting fragments. \begin{figure}[htb!] \centering \includegraphics[width=0.45\textwidth]{images/psi_wcb_cum_frag_norm_all} \caption{Normalised cumulative distributions of impacting (blue) and escaping (orange) fragments as function of the two ejecta model formulations (line style) for the WCB material case.} \label{fig:psi-wcb-cum} \end{figure} \cref{fig:psi-wcb-hammer} shows instead the difference between the distributions on the surface of the asteroid of the average impacting particles, in a grid of right ascension and declination, between the Gaussian model ($\langle \tilde{n}_{r, g}^{\rm imp} \rangle^{ij}$) and the Uniform model ($\langle \tilde{n}_{r, u}^{\rm imp} \rangle^{ij}$). The difference is again expressed as a relative percentage, that is, the number of impacts per cell in the Gaussian case minus the one in the Uniform case, divided by the total averaged number of fragments generated in the Uniform case. The obtained distribution shows that there is a central region along the Sun-asteroid direction that tends to have a predominance of fragments generated by the Uniform model. However, we also observe a non-smooth behaviour, with a few cells that show a majority of impacts for the Gaussian case. The regions around this central area, instead, show a majority of fragments generated in the Gaussian case. Nonetheless, we should note that the relative difference is below 1\% of the total amount of fragments (peaks of the order of $\pm$ 0.3\%), which corresponds to a number of fragments in the order of \num{1e8}. \begin{figure}[htb!]
\centering \includegraphics[width=0.48\textwidth]{images/psi_wcb_hammer_diff_frag_percent} \caption{Relative percent difference between the distribution of the average number of impacting representative fragments obtained with the Gaussian and Uniform models of out-of-plane ejection angle ($\psi$) distribution.} \label{fig:psi-wcb-hammer} \end{figure} Therefore, when looking at the overall fate of the ejecta, the selection of the ejection angle model does not substantially influence the percentage of impacting and escaping particles nor their rate of accretion and escape. On the other hand, it can influence the distribution of the impact locations on the asteroid, as shown in \cref{fig:psi-wcb-hammer}. \subsection{Impact angle} \label{subsec:sensitivity-impact-angle} In this section, we consider the sensitivity to the impact angle, $\phi$, which defines the inclination of the impactor velocity vector with respect to a plane tangent to the asteroid's surface at the impact point. Therefore, a 90-degree impact angle corresponds to a normal impact. The effect of the impact angle is twofold. On the one hand, the shallower the impact angle, the smaller the normal component of the impact velocity, which in turn affects the effectiveness of the cratering process. In fact, lower normal components of the impact velocity result in smaller impact craters. On the other hand, the presence of an impact angle different from 90 degrees changes the shape of the ejecta plume, as shown in \cref{sec:distributions}. In this section, we analyse the effects of a changing impact angle; specifically, we vary the impact angle from 30$^\circ$ to 90$^\circ$ with increments of 15$^\circ$. The impact location is the North pole of the asteroid. The incoming direction of the projectile is along the x-axis of the synodic frame, in its positive direction; the projectile therefore moves away from the Sun.
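Since the cratering efficiency is driven by the normal component of the impact velocity, which scales with the sine of the impact angle, the relative reduction for the analysed angles can be sketched as follows (the 6 \si{\kilo\meter\per\second} impact speed is an assumed illustrative value, not taken from this section):

```python
import math

def normal_velocity(v_imp, phi_deg):
    """Normal component of the impact velocity for an impact angle phi,
    measured from the plane tangent to the surface (90 deg = normal impact)."""
    return v_imp * math.sin(math.radians(phi_deg))

v_imp = 6000.0  # assumed impact speed in m/s (illustrative only)
for phi in (30, 45, 60, 75, 90):
    v_n = normal_velocity(v_imp, phi)
    print(f"phi = {phi:2d} deg -> v_n = {v_n:6.0f} m/s "
          f"({100.0 * v_n / v_imp:5.1f}% of the impact speed)")
```

A 30-degree impact thus retains only half of the impact speed along the surface normal.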
The remaining configuration is as follows: the \emph{position-based} formulation is used (\cref{subsec:position-dist}) with a \emph{Lobed} distribution for the $\xi$ angle (\cref{eq:xi-distribution}), a Gaussian distribution for the $\psi$ angle (\cref{eq:psi-distribution-gaussian}), and the Housen formulation for the ejection speed (\cref{subsubsec:u-dist-position-based}). The considered material is the WCB (\cref{tab:materials}) with an equivalent strength of 5 \si{\kilo\pascal}. \cref{fig:phi_frag_time} shows the fate of the ejecta as a function of time for both the impacting and escaping fragments. Specifically, \cref{fig:phi_diff} shows the differential distributions of the impacting and escaping particles in time for the five analysed values of the impact angle, $\phi$. \cref{fig:phi_cum} instead shows the corresponding cumulative distributions, that is, the percentage of particles escaping and impacting within a given time, $t$. From \cref{fig:phi_cum} we can observe that the behaviour of the normalised distribution is almost identical in all the cases, a behaviour that is also corroborated by the similar differential trends observed in \cref{fig:phi_diff}. The only difference we can observe is in the absolute number of particles involved. Therefore, we observe that a variation of the impact angle does not substantially change the overall behaviour of the fragments, but it most probably causes local changes in the spatial distribution of the fragments. \begin{figure}[htb!] \centering \begin{subfigure}[b]{0.42\textwidth} \centering \includegraphics[width=\textwidth]{images/phi_wcb_diff_frag_all} \caption{ } \label{fig:phi_diff} \end{subfigure} \hfill \begin{subfigure}[b]{0.42\textwidth} \centering \includegraphics[width=\textwidth]{images/phi_wcb_cum_frag_norm_all} \caption{ } \label{fig:phi_cum} \end{subfigure} \caption{Fragment evolution in time.
(a) Number of fragments impacting (blue) and escaping (orange) at time $t$ for the analysed impact angles. (b) Corresponding normalised cumulative distribution of the fragments in time.} \label{fig:phi_frag_time} \end{figure} \cref{fig:phi_snap} shows instead the percentage of samples (a) and the number of fragments (b) still orbiting the asteroid at specified snapshots in time. In this case, we observe a difference in the sample evolution in the central portion of the analysed time frame (between about 10 and 200 hours), with a steeper decrease of samples as the impact angle increases. This behaviour, however, does not directly translate into a significant change in the evolution of the number of fragments. As shown in \cref{fig:phi_nf_snap}, the overall trend is similar among the different impact angles. However, we again observe a difference in the total number of fragments produced by the varying impact angle. In fact, the total number of ejected fragments is a function of the normal component of the impact velocity, which changes as the sine of the impact angle. Consequently, we observe a reduction of generated fragments as the impact angle decreases. \begin{figure}[htb!] \centering \begin{subfigure}[b]{0.42\textwidth} \centering \includegraphics[width=\textwidth]{images/phi_wcb_ns_orb_by_snap} \caption{ } \label{fig:phi_ns_snap} \end{subfigure} \hfill \begin{subfigure}[b]{0.42\textwidth} \centering \includegraphics[width=\textwidth]{images/phi_wcb_nf_orb_by_snap} \caption{ } \label{fig:phi_nf_snap} \end{subfigure} \caption{Number of samples (a) and fragments (b) at selected snapshot times for the different impact angles analysed.} \label{fig:phi_snap} \end{figure} \subsection{Asteroid rotation} \label{subsec:sensitivity-rotation} In this section, we consider the sensitivity to the asteroid rotation. While it is not strictly related to the ejecta modelling, including the contribution of the asteroid rotation can be regarded as a modelling assumption.
The effect of the asteroid rotation is an additional velocity contribution that must be added to the ejection velocity vector defined by the quantities sampled from the ejecta model distributions. To evaluate the effect of the rotational speed, we change the impact conditions with respect to the previous test cases. In this case, we adopt an equatorial impact (instead of a polar impact) to maximise the effect of the rotation (still assuming the rotational axis of the asteroid is perpendicular to the orbital plane). The impact location therefore has a latitude of 0\si{\degree} and a longitude of 90\si{\degree} (measured from the x-axis of the synodic system). As in the previous cases, the ejecta model formulation is the \emph{position-based} one with a uniform distribution for the $\xi$ angle and a Gaussian distribution for the $\psi$ angle (\cref{subsubsec:outplane-dist-position-based}). The ejection speed model is the Housen formulation (\cref{eq:speed-housen}). The considered material is again the WCB with an equivalent strength of 5 \si{\kilo\pascal}. Three different rotational states have been considered: \emph{no-rotation}, a rotational period of 16\si{\hour} (average over the asteroid database), and a rotational period of 2.5\si{\hour} (lower end of the admissible rotational periods for rubble-pile asteroids, before the family of very fast rotating asteroids \citep{pravec2002asteroid}). \bigbreak The results of the simulations show that the share of impacting and escaping fragments is almost identical when we compare the slow rotational period of 16\si{\hour} with the case without rotation, corresponding to approximately 94.5\% of impacting and 5.5\% of escaping fragments, respectively. When we consider the fast-rotating case with a period of 2.5\si{\hour}, the partition changes to 87.6\% of impacting fragments and 12.4\% of escaping ones.
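The rotational velocity contribution behind this shift in the impacting/escaping partition can be estimated with a short sketch; the equatorial radius and escape speed used below are assumed values, chosen only so that the numbers are consistent with the figures quoted in this section:

```python
import math

def equatorial_speed(radius_m, period_h):
    """Surface speed at the equator due to the asteroid rotation."""
    return 2.0 * math.pi * radius_m / (period_h * 3600.0)

# Assumed values (not stated explicitly in this section):
radius = 243.0  # m, assumed equatorial radius
v_esc = 0.586   # m/s, assumed escape speed at the surface

for period in (2.5, 16.0):
    v_rot = equatorial_speed(radius, period)
    print(f"T = {period:4.1f} h -> v_rot = {v_rot:.3f} m/s "
          f"({100.0 * v_rot / v_esc:.1f}% of the escape speed)")
```

Under these assumptions, the fast rotator adds a velocity contribution that is a sizeable fraction of the escape speed, while the slow rotator does not.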
If we look at the velocity contribution due to the asteroid rotation for a point at the equator, we have an addition of approximately 0.17 \si{\meter\per\second} for a period of 2.5\si{\hour} and of approximately 0.027 \si{\meter\per\second} for the 16\si{\hour} period. These contributions correspond to 29\% and 4.5\% of the asteroid escape velocity, respectively. We can thus expect an increasing effect of the rotation on the behaviour of the ejecta as the period decreases. If we look at the different behaviours in time, \cref{fig:rot_cum} shows the normalised cumulative distributions for the impacting and escaping fragments. Also from this plot, we observe that differences are visible only for the fast-rotating case, whose contribution is to change the frequency at which the particles re-impact the asteroid and escape the system. \begin{figure}[htb!] \centering \includegraphics[width=0.45\textwidth]{images/rot_wcb_cum_frag_norm_all} \caption{Normalised cumulative distributions of impacting (blue) and escaping (orange) fragments as function of the rotational state (line style).} \label{fig:rot_cum} \end{figure} Similarly to \cref{fig:psi-wcb-hammer}, \cref{fig:rot16-wcb-hammer-diff,fig:rot2.5-wcb-hammer-diff} show the relative difference between the distributions of impacting fragments onto the asteroid's surface, normalised with respect to the total number of ejected fragments. \cref{fig:rot16-wcb-hammer-diff} shows the difference between the 16\si{\hour} period and the no-rotation case, while \cref{fig:rot2.5-wcb-hammer-diff} shows the difference between the 2.5\si{\hour} period and the no-rotation case. We first notice that even the 16\si{\hour} period produces differences in the distribution of fragments, despite the effect on the overall fate of the ejecta being limited. We observe a neat separation between two regions in correspondence of the impact point ($\alpha = 90^\circ$, $\delta = 0^\circ$).
Along the Eastward direction we have a predominance of fragments for the case that includes the rotation, while the opposite is observed Westwards. \begin{figure}[htb!] \centering \includegraphics[width=0.48\textwidth]{images/rot_wcb_hammer_diff_frag_percent_16h} \caption{Relative percent difference between the distribution of the average number of impacting representative fragments obtained with the 16h period case and the no-rotation case.} \label{fig:rot16-wcb-hammer-diff} \end{figure} \cref{fig:rot2.5-wcb-hammer-diff} shows that as the rotational speed increases, the shift is more pronounced and the difference between the distributions becomes larger. In fact, we pass from a maximum difference of 2-3\% of the total fragments in \cref{fig:rot16-wcb-hammer-diff} to about 10\% in \cref{fig:rot2.5-wcb-hammer-diff}. This in turn corresponds to an order-of-magnitude difference in the number of fragments, from \num{1e9} to \num{1e10}. \begin{figure}[htb!] \centering \includegraphics[width=0.48\textwidth]{images/rot_wcb_hammer_diff_frag_percent_2.5h} \caption{Relative percent difference between the distribution of the average number of impacting representative fragments obtained with the 2.5h period case and the no-rotation case.} \label{fig:rot2.5-wcb-hammer-diff} \end{figure} \section{Conclusions and Discussion} \label{sec:conclusions} In this work, the development of a modular distribution-based ejecta model is presented. The model is a combination of probability distribution functions that characterise the main parameters governing the ejection phenomenon, i.e., the particle size, the ejection position and speed, and the ejection direction (in-plane and out-of-plane angles). By combining the single distributions, a full four-dimensional characterisation of the ejecta cloud can be obtained as a function of the target and the impactor properties.
From the probability density functions, we obtain analytical expressions of the corresponding cumulative distribution functions, which are the basis for sampling the distributions and determining the associated \emph{representative fragments}. Three different formulations of the ejecta model have been presented in this work; however, the modularity of the presented model can be leveraged to introduce new and improved correlations, as long as they can be expressed as PDFs or conditional distributions. In addition, the presented analytical formulation can be exploited to estimate the number of particles satisfying specific conditions: by finding the ranges of conditions that satisfy a given scenario (e.g., particles trapped in quasi-stable orbits), the ejecta distribution can be integrated to find the number of particles associated with it. The developed ejecta distributions are used to compare different modelling techniques and assumptions and to assess the sensitivity of the overall ejecta fate. In this work, we focus on a few aspects of the ejecta modelling and analyse the contribution of some relevant parameters: the minimum particle size that defines the range of the distribution, the slope coefficient of the size distribution, the type of model used for the ejection speed and the ejection angle, the type of distribution formulation used, and the possible correlation between the size and speed of the fragments. We perform the sensitivity analysis focusing on the overall behaviour of the ejecta cloud. This translated into analysing the share of particles escaping, re-impacting, and still orbiting the asteroid after a two-month period, and into understanding the particles' behaviour in time as a whole. The analysis thus took into account the cumulative distribution in time of the impacting and escaping particles. In addition, an interesting aspect was the distribution of the re-impacting particles onto the asteroid surface.
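The inverse-CDF sampling of representative fragments mentioned above can be illustrated with a simple example; the pure power-law (Pareto) size distribution used here is only an assumed, simplified stand-in for the model's actual size distribution:

```python
import random

def sample_sizes(s_min, alpha_bar, n, seed=0):
    """Inverse-CDF sampling of a power-law size distribution.
    Assumes N(>s) ~ (s / s_min)**(-alpha_bar), so the normalised CDF is
    F(s) = 1 - (s / s_min)**(-alpha_bar); solving F(s) = u for a uniform
    random u in [0, 1) gives the sampling rule below."""
    rng = random.Random(seed)
    return [s_min * (1.0 - rng.random()) ** (-1.0 / alpha_bar)
            for _ in range(n)]

sizes = sample_sizes(s_min=1e-3, alpha_bar=2.7, n=10_000)
print(f"smallest sampled size: {min(sizes):.2e} m")
```

By construction, no sampled size falls below $s_{\rm min}$, and steeper slope coefficients concentrate the samples near the minimum size.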
From the analyses, we observed interesting results that can better inform our future choices about modelling impact crater events and the resulting ejecta clouds. First, we observed that the selection of the slope coefficient, $\bar{\alpha}$, the type of distribution of the out-of-plane ejection angle, $\psi$, and the size-speed correlation do not significantly influence the overall fate of the ejecta in terms of fragment evolution in time, although a minor influence of the size-speed correlation can be observed for re-impacting fragments. The correlated formulation also generates a higher number of fragments and shows fewer surviving fragments in the latest stages of the simulation. It must be noted that the difference introduced by the size-speed correlation may be higher when the considered speed range is larger, that is, when the considered asteroid has a bigger gravitational parameter. In fact, the greater the gravitational parameter, the higher the escape speed, which is the upper limit chosen for the ejection speed in this study. The asteroid rotation also has a marginal influence on the ejecta evolution, as long as the rotational speed is limited. \cref{subsec:sensitivity-rotation}, in fact, showed that for faster rotations the contribution can be relevant in influencing both the rates of impact and escape of the fragments and their distribution on the asteroid surface when they re-impact. Similarly, the type of distribution of the out-of-plane ejection angle can have an influence on the impacting particle distribution (\cref{fig:psi-wcb-hammer}). Our analyses also showed a limited effect of the impact angle on the overall ejecta fate, as the main contribution was the variation of the total number of ejected fragments as a consequence of a smaller normal component of the impact speed as the impact angle decreases.
However, further analyses on the effect of the impact angle should be performed to assess the effect it may have on the local distribution of the ejecta plume, especially in the instants right after the impact event. Changing the minimum size of the particles, $s_{\rm min}$, affected the overall number of generated fragments and their behaviour in time. The effect was more pronounced for escaping particles, which reached the escape condition in double the time and decreased their rate of escape. As the particles were on average bigger, the rate of re-impact was also reduced. Finally, both the selection of the distribution formulation (between the \emph{position-based} and the \emph{speed-based} models) and the type of model for the ejection speed had the greatest influence on the behaviour of the ejecta, particularly on the impacting particles. A strong difference has been observed in the rate of accretion (\cref{fig:speedpos-wcb-cum,fig:wcb-speed-cum}) and in the distribution of the impacting particles on the asteroid surface (\cref{fig:speedpos-wcb-hammer,fig:speed-wcb-hammer-diff}). Additionally, the difference between the \emph{position-based} and the \emph{speed-based} formulations showed to be dependent on the target material (\cref{fig:speedpos_nf}). This is mainly due to the difference in the cut-off speed of the \emph{speed-based} formulation; in fact, the \emph{position-based} formulation showed a lower influence from the target strength. In the end, regardless of the type of analysis performed, an effective and robust analysis of the fate of the ejecta cannot ignore the intrinsic uncertainties introduced by the modelling assumptions. From the perspective of initial condition generation, the \emph{position-based} ejecta formulation guarantees the highest flexibility, as it allows generating samples from the entirety of the ejection speed spectrum.
This characteristic also allows the analysis of low-velocity particles, which is of particular importance when considering low-gravity environments such as the ones of asteroids. Given the small escape velocities, a large number of small and slow particles may be generated by a hypervelocity impact; it is thus important to be able to generate initial conditions for these particles as well. In the case of the \emph{position-based} formulation, we can also select the ejection speed model we want to adopt. As we have seen in the sensitivity analysis, this choice indeed influences the fate of the particles. However, it is not trivial to decide which model should be preferred among the ones proposed by Housen and Richardson. Additional tests and real mission data are probably required to make a more informed decision and, possibly, to better tune the parameters defining these models. When looking at the \emph{speed-based} formulations, instead, we incur the risk of neglecting parts of the low-velocity portion of the ejecta cloud. Therefore, this formulation is less suitable for analysing the fate of the ejecta in the neighbourhood of asteroids, especially when considering smaller asteroids and higher-strength materials. However, the model can still capture the bulk of the ejecta cloud, by modelling most of the ejected mass. In addition, it can be useful when only the information on the ejection speed is required or of interest, without passing through the ejection location inside the crater. In fact, most of the time, the ejection location can be approximated with a point on the asteroid's surface. Finally, the idea of using a combination of distributions to model the overall ejecta cloud in a modular fashion can be leveraged to plug in new and more accurate models, but also to tailor the parameters of the models given new inputs from actual impacts; for example, by comparing the simulated results to images of impact cratering events such as the ones of Hayabusa2 and DART.
\section*{Acknowledgements} This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 896404 - CRADLE.
\section{Introduction} We are motivated to accelerate solver methods for differential-algebraic equations (DAEs). Many mathematical models are based on such combinations of differential and algebraic equations, e.g., simulations of power systems, constrained mechanical systems and singular perturbations, see \cite{stott_1979} and \cite{brenan1989}. We assume that the considered partial differential-algebraic equations (PDAEs) can be semi-discretized and written as an initial value problem of differential-algebraic equations (DAEs) of the form: \begin{eqnarray} A \frac{d y(t)}{dt} + B y(t) = f(t) , \; t \in [t_0, T] , y(t_0) = y_0 , \end{eqnarray} where $A \in \mathbb{C}^{m \times m}$ is a singular matrix, $B \in \mathbb{C}^{m \times m}$ is a non-singular complex matrix with rank $m$ and $f(t): [t_0, T] \rightarrow \mathbb{C}^m$ is a sufficiently smooth right-hand side. For solving such initial value problems, waveform-relaxation (WR) methods have been developed and investigated by many authors, see \cite{leim_1991}, \cite{lerara_1982} and \cite{walle93}. The main idea is to decompose large systems into iteratively coupled smaller subsystems and to solve these subsystems independently over the integration intervals (also called time-windows), see \cite{walle93}. To improve the performance of the algorithms, a two-stage strategy has been introduced into WR methods in recent years: a first splitting decomposes the system into blocks that are solved in parallel, and on each processor an additional inner splitting is applied instead of a direct method, see \cite{bao_2011}. We propose an additional multistage splitting, i.e., we split the inner splitting once more, such that the inner splitting can also be performed in parallel, while only the last inner splitting is done serially.
The new class of multi-stage waveform-relaxation (MSWR) methods is discussed in the following, with respect to the three-stage waveform-relaxation (TH-SWS) method. For simplification, we deal with \begin{eqnarray} \frac{d y(t)}{dt} + B y(t) = f(t) , \; t \in [t_0, T] , y(t_0) = y_0 , \end{eqnarray} where $B = M_1 - N_1$, and the outer iteration is obtained as \begin{eqnarray} && \frac{d y^{k+1}(t)}{dt} + M_1 y^{k+1}(t) = N_1 y^k(t) + f(t), \\ && y^{k+1}(t_0) = y_0 , \; k = 1, 2, \ldots . \end{eqnarray} Then we apply the first inner iteration: \begin{eqnarray} && \frac{d z^{\nu+1}(t)}{dt} + M_2 z^{\nu+1}(t) = N_2 z^{\nu}(t)+ N_1 y^k(t) + f(t), \\ && z^{\nu+1}(t_0) = y^k_0 , \; k = 1, 2, \ldots, \nu = 0, \ldots, \nu_k -1 , \end{eqnarray} where $M_1 = M_2 - N_2$ and we obtain $y^{k+1} = z^{\nu_k}$. The last or second inner iteration is given as \begin{eqnarray} && \frac{d \tilde{z}^{\mu+1}(t)}{dt} + M_3 \tilde{z}^{\mu+1}(t) = N_3 \tilde{z}^{\mu}(t)+ N_2 z^{\nu}(t)+ N_1 y^k(t) + f(t), \\ && \tilde{z}^{\mu+1}(t_0) = z^{\nu+1}(t_0) = y^k_0 , \\ && \; k = 1, 2, \ldots, \nu = 0, \ldots, \nu_k - 1 , \mu = 0, \ldots, \mu_{\nu_k} - 1 , \nonumber \end{eqnarray} where $M_2 = M_3 - N_3$ and we obtain $y^{k+1} = z^{\nu_k} = \tilde{z}^{\mu_{\nu_k}}$. We perform $\mu_{\nu_k}$ iterations of the second inner stage within each first inner iteration $\nu$, and $\nu_k$ first inner iterations within the outer iteration $k+1$; this means we deal with a three-stage iterative method. For more flexibility in the approaches, we apply a multisplitting method, which is based on a partition of unity, see \cite{leary1985}. We decompose the solution into several parts, i.e., we have: \begin{eqnarray} \label{synch} && y^{k+1} = \sum_{p = 1}^L E_{p} y^{p, k+1} , \\ && \sum_{p = 1}^L E_{p} = I , \end{eqnarray} where $I \in \mathbb{C}^{m \times m}$ is the identity matrix and the $E_p$ are diagonal matrices with entries $E_{p, ii} \ge 0$ for $i = 1, \ldots, m$.
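The synchronization step (\ref{synch}) can be sketched as follows; the non-overlapping block choice of the weighting matrices $E_p$ below is only one possible (assumed) choice satisfying the partition of unity:

```python
import numpy as np

def block_weights(m, L):
    """Diagonal weighting matrices E_p for a non-overlapping block
    partition: E_{p,ii} = 1 if index i belongs to block p, else 0,
    so that sum_p E_p = I (partition of unity)."""
    bounds = np.linspace(0, m, L + 1).astype(int)
    E = []
    for p in range(L):
        d = np.zeros(m)
        d[bounds[p]:bounds[p + 1]] = 1.0
        E.append(np.diag(d))
    return E

m, L = 12, 3
E = block_weights(m, L)
assert np.allclose(sum(E), np.eye(m))  # partition of unity holds
# synchronization: y^{k+1} = sum_p E_p y^{p,k+1}
y_local = [np.full(m, float(p)) for p in range(L)]
y = sum(Ep @ yp for Ep, yp in zip(E, y_local))
print(y)
```

Each processor's local waveform $y^{p,k+1}$ only contributes on the indices where its weight is non-zero, which is what allows the $L$ subproblems to run independently.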
We can solve $L$ independent waveform relaxation schemes in parallel, while the synchronization or update is done with Equation (\ref{synch}). The paper is outlined as follows. In Section \ref{solver}, we discuss the different hierarchies of solver methods. The numerical experiments are presented in Section \ref{num}. Conclusions are given in Section \ref{conc}. \section{Hierarchy of Solver schemes} \label{solver} In the following, we deal with the following hierarchy of solver schemes: \begin{enumerate} \item One-stage WR schemes, \item Two-stage WR schemes, \item Three-stage WR schemes, \item Multisplitting WR schemes: Jacobian-, Gauss-Seidel-Types, \end{enumerate} where we simplify the inversion of the matrices from the one-stage to the three-stage method, i.e., the simplification of the inversion is achieved via additional inner iterative stages. Furthermore, the multisplitting approach allows more flexibility in the parallelization of a multi-stage method. \subsection{One-stage WR method} In the following, we discuss the one-stage WR method, based on the splittings $A = M_A - N_A$ and $B = M_1 - N_1$ and an implicit Euler time discretization with step size $h$. \begin{enumerate} \item We have the following WR method (in parallel): \begin{eqnarray} \label{mobile1_1_2} && ( M_A + h M_1) y_{n+1}^{k+1} = h ( N_1 + \frac{1}{h} N_A) y_{n+1}^k + M_A y_n^{k+1} \\ && - N_A y_n^{k} + h f_{n+1} , \nonumber \\ && y_0^{k+1} = y_0, \; k = 0, 1, \ldots, K, \; n = 0, 1,2, \ldots, J , \nonumber \end{eqnarray} \item We have the following WR method (in serial): \begin{eqnarray} && ( M_A + h M_1) y_{n+1}^{k+1} = h ( N_1 + \frac{1}{h} N_A) y_{n+1}^k + M_A y_n \\ && - N_A y_n + h f_{n+1} , \nonumber \\ && y_{n+1}^{0} = y_n , \; k = 0, 1, \ldots, K, \; n = 0, 1,2, \ldots, J , \nonumber \end{eqnarray} where we apply the algorithm with $p = 50$, $q = 6$, $J = 20$, $h = 0.1$; here we have $\Delta x = 1.0$ and $\frac{D}{\Delta x^2} = 1.0$.
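A minimal sketch of the serial one-stage WR method for the special case $A = I$ (so $M_A = I$, $N_A = 0$), with $B$ the 1D diffusion matrix from the experiments and a Jacobi splitting $M_1 = \mathrm{diag}(B)$; the splitting choice is an assumption made only for illustration:

```python
import numpy as np

def one_stage_wr(B, f, y0, h, J, K, tol=1e-3):
    """Serial one-stage WR with implicit Euler for y' + B y = f,
    special case A = I (M_A = I, N_A = 0), Jacobi splitting
    M1 = diag(B), N1 = M1 - B.  Per time step it iterates
    (I + h M1) y_{n+1}^{k+1} = h N1 y_{n+1}^k + y_n + h f_{n+1}."""
    m = len(y0)
    M1 = np.diag(np.diag(B))
    N1 = M1 - B
    y = y0.copy()
    for n in range(J):
        yk = y.copy()                    # initialisation y_{n+1}^0 = y_n
        for k in range(K):
            rhs = h * (N1 @ yk) + y + h * f
            ynew = np.linalg.solve(np.eye(m) + h * M1, rhs)
            done = np.linalg.norm(ynew - yk) <= tol  # error-bound criterion
            yk = ynew
            if done:
                break
        y = yk
    return y

q = 6  # interior grid points, D / dx^2 = 1
B = 2.0 * np.eye(q) - np.eye(q, k=1) - np.eye(q, k=-1)
y_end = one_stage_wr(B, f=np.zeros(q), y0=np.ones(q), h=0.1, J=20, K=20)
print(y_end)  # the initial profile decays under pure diffusion
```

The sketch uses both stopping criteria mentioned below: the error bound $10^{-3}$ and the cap of $K$ outer iterations.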
We have two possible stopping criteria: \begin{enumerate} \item Error bound: \\ We use the stopping error norm $|| y_{n+1}^{k+1} - y_{n+1}^k|| \le 10^{-3}$. \item Fixed number of outer iterative steps: $K = 20$. \end{enumerate} \end{enumerate} \subsection{Two-Stage WR method} The two-stage WR method, based on the additional splitting $M_1 = M_2 - N_2$, is given as: \begin{enumerate} \item We have the following Two-Stage WR method (in parallel): \begin{eqnarray} && ( M_A + h M_2) z_{n+1}^{\nu+1} = h N_2 z_{n+1}^{\nu} + h ( N_1 + \frac{1}{h} N_A) y_{n+1}^k + M_A z_n^{\nu+1} \\ && - N_A y_n^{k} + h f_{n+1} , \nonumber \\ && z_0^{\nu+1}(t_0) = y^{k}(t_0)= y_0, \; k = 0, 1, \ldots, K, \nonumber \\ && \nu = 0, 1, \ldots, \nu_k, \; n = 0, 1,2, \ldots, J. \nonumber \end{eqnarray} \item We have the following Two-Stage WR method (in serial): \begin{eqnarray} && ( M_A + h M_2) z_{n+1}^{\nu+1} = h N_2 z_{n+1}^{\nu} + h ( N_1 + \frac{1}{h} N_A) y_{n+1}^k + M_A y_n \\ && - N_A y_n + h f_{n+1} , \nonumber \\ && z_{n+1}^{0}(t_n) = y_{n+1}^{k} , \; \nu = 0, 1, \ldots, \nu_k, \mbox{ inner iteration}\nonumber \\ && y_{n+1}^{k+1} = z_{n+1}^{\nu_k}, \; k = 0, 1, \ldots, K, \mbox{ outer iteration} \nonumber\\ && z_{n+1}^{0}(t_n) = y_n, \; \mbox{ initialization}\nonumber \\ && n = 0, 1,2, \ldots, J . \nonumber \end{eqnarray} The Two-Stage WR algorithm (serial), Algorithm \ref{multi_2}, is given as: \begin{algorithm}[H] Given the initial vector $y_0 = y(0)$ , \\ $z_{n+1}^{0}(0) = y_0,$ \\ \For {$n = 0, 1, \ldots, J$} { \For {$k = 0, 1, \ldots, K$} { \For {$\nu = 0, 1, \ldots, \nu_k$} { $ ( M_A + h M_2) z_{n+1}^{\nu+1} = h N_2 z_{n+1}^{\nu} + h ( N_1 + \frac{1}{h} N_A) y_{n+1}^k + M_A y_n - N_A y_n + h f_{n+1} , $ } $ y_{n+1}^{k+1} = z_{n+1}^{\nu_k}, $ \\ $ z_{n+1}^{0} = y_{n+1}^{k+1} , $ } $y_{n+1} = z_{n+1}^{\nu_K} , $\\ $z_{n+1}^{0}(t_{n+1}) = y_{n+1},$ } \caption{\label{multi_2} Two-Stage WR algorithm (serial)} \end{algorithm} where we apply the algorithm with $p = 50$, $q = 6$, $J = 20$, $h = 0.1$.
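Under the same illustrative assumptions as before ($A = I$, so $M_A = I$ and $N_A = 0$), one outer sweep of the serial two-stage method can be sketched with a Gauss-Seidel-type outer splitting $M_1 = \mathrm{tril}(B)$ and a Jacobi-type inner splitting $M_2 = \mathrm{diag}(M_1)$; both splitting choices are assumptions made only for illustration:

```python
import numpy as np

def two_stage_step(B, yk, yn, f, h, nu_k):
    """One outer update y_{n+1}^{k+1} of the serial two-stage WR method
    for the special case A = I (M_A = I, N_A = 0): outer splitting
    B = M1 - N1 with M1 = tril(B), inner splitting M1 = M2 - N2 with
    M2 = diag(M1).  The inner iteration reads
    (I + h M2) z^{nu+1} = h N2 z^nu + h N1 y^k + y_n + h f."""
    M1 = np.tril(B)
    N1 = M1 - B
    M2 = np.diag(np.diag(M1))
    N2 = M2 - M1
    m = len(yn)
    z = yk.copy()                        # z_{n+1}^0 = y_{n+1}^k
    for _ in range(nu_k):
        rhs = h * (N2 @ z) + h * (N1 @ yk) + yn + h * f
        z = np.linalg.solve(np.eye(m) + h * M2, rhs)
    return z                             # y_{n+1}^{k+1} = z_{n+1}^{nu_k}

q = 6
B = 2.0 * np.eye(q) - np.eye(q, k=1) - np.eye(q, k=-1)
yn = np.ones(q)
y = yn.copy()
for _ in range(5):                       # K = 5 outer iterations
    y = two_stage_step(B, y, yn, np.zeros(q), h=0.1, nu_k=4)
# the nested iteration approximates the exact implicit Euler step
exact = np.linalg.solve(np.eye(q) + 0.1 * B, yn)
print(np.linalg.norm(y - exact))
```

The nested loops converge to the same implicit Euler step as a direct solve, while only diagonal systems are inverted in the innermost loop.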
Here we have $\Delta x = 1.0$ and $\frac{D}{\Delta x^2} = 1.0$. We have two possible stopping criteria: \begin{enumerate} \item Error bound: \\ We stop when the error norms satisfy: \\ for the outer iteration, $|| y_{n+1}^{k+1} - y_{n+1}^k|| \le 10^{-3}$, \\ and for the inner iteration, $|| z_{n+1}^{\nu+1} - z_{n+1}^{\nu}|| \le 10^{-3}$. \item Fixed number of outer iterative steps, $K = 5$, and inner iterative steps, $\nu_k = 4$. \end{enumerate} \end{enumerate} \subsection{Three-Stage WR method} The three-stage WR method is given as: \begin{enumerate} \item We have the following three-stage WR method (in parallel): \begin{eqnarray} && ( M_A + h M_3) \tilde{z}_{n+1}^{\mu+1} = h N_3 \tilde{z}_{n+1}^{\mu} + h N_2 z_{n+1}^{\nu} + h ( N_1 + \frac{1}{h} N_A) y_{n+1}^k \nonumber \\ && + M_A \tilde{z}_n^{\mu+1} - N_A y_n^{k} + h f_{n+1} , \nonumber \\ && \tilde{z}_0^{\mu+1}(t_0) = z_0^{\nu+1}(t_0) = y^{k}(t_0)= y_0, \; k = 0, 1, \ldots, K, \nonumber \\ && \nu = 0, 1, \ldots, \nu_k, \; \mu = 0, 1, \ldots, \mu_{\nu_k}, \; n = 0, 1, 2, \ldots, J. \nonumber \end{eqnarray} \item We have the following three-stage WR method (in serial): \begin{eqnarray} && ( M_A + h M_3) \tilde{z}_{n+1}^{\mu+1} = h N_3 \tilde{z}_{n+1}^{\mu} + h N_2 z_{n+1}^{\nu} + h ( N_1 + \frac{1}{h} N_A) y_{n+1}^k \nonumber \\ && + M_A y_n - N_A y_n + h f_{n+1} , \nonumber \\ && \tilde{z}_{n+1}^{0}(t_n) = z_{n+1}^{\nu_k} , \; \mu = 0, 1, \ldots, \mu_{\nu_k}, \; \mbox{first inner iteration,}\nonumber \\ && z_{n+1}^{0}(t_n) = y_{n+1}^{k} , \; \nu = 0, 1, \ldots, \nu_k, \; \mbox{second inner iteration,}\nonumber \\ && y_{n+1}^{k+1} = \tilde{z}_{n+1}^{\mu_{\nu_k}}, \; k = 0, 1, \ldots, K, \; \mbox{outer iteration,} \nonumber\\ && \tilde{z}_{n+1}^{0}(t_n) = y_n, \; \mbox{initialization,}\nonumber \\ && n = 0, 1, 2, \ldots, J . \nonumber
\end{eqnarray} The three-stage WR algorithm (serial) is given as Algorithm \ref{multi_3}: \begin{algorithm}[H] Given the initial vector $y_0 = y(0)$ , \\ $z_{n+1}^{0}(0) = y_0,$ \\ \For {$n = 0, 1, \ldots, J$} { \For {$k = 0, 1, \ldots, K$} { \For {$\nu = 0, 1, \ldots, \nu_k$} { \For {$\mu = 0, 1, \ldots, \mu_{\nu_k}$} { $( M_A + h M_3) \tilde{z}_{n+1}^{\mu+1} = h N_3 \tilde{z}_{n+1}^{\mu} + h N_2 z_{n+1}^{\nu} + h ( N_1 + \frac{1}{h} N_A) y_{n+1}^k + M_A y_n - N_A y_n + h f_{n+1} , $ } $ z_{n+1}^{\nu+1} = \tilde{z}_{n+1}^{\mu_{\nu}}, $ \\ $ \tilde{z}_{n+1}^{0} = z_{n+1}^{\nu+1} , $ } $ y_{n+1}^{k+1} = z_{n+1}^{\nu_k}, $ \\ $ z_{n+1}^{0} = y_{n+1}^{k+1} , $ } $y_{n+1} = z_{n+1}^{\nu_K} , $\\ $z_{n+1}^{0}(t_{n+1}) = y_{n+1},$ } \caption{\label{multi_3} Three-stage WR algorithm (serial)} \end{algorithm} where we apply the algorithm with $p = 50$, $q = 6$, $J = 20$, $h = 0.1$. Further, we have $\Delta x = 1.0$ and $\frac{D}{\Delta x^2} = 1.0$. We have two possible stopping criteria: \begin{enumerate} \item Error bound: \\ We stop when the error norms satisfy: \\ for the outer iteration, $|| y_{n+1}^{k+1} - y_{n+1}^k|| \le 10^{-3}$, \\ for the first inner iteration, $|| z_{n+1}^{\nu+1} - z_{n+1}^{\nu}|| \le 10^{-3}$, \\ and for the second inner iteration, $|| \tilde{z}_{n+1}^{\mu+1} - \tilde{z}_{n+1}^{\mu}|| \le 10^{-3}$. \item Fixed number of outer iterative steps, $K = 5$, and inner iterative steps, $\nu_k = 2$, $\mu_{\nu_k} = 2$.
\end{enumerate} \end{enumerate} \subsection{Multisplitting WR method} We have the following multisplitting WR method (in serial/parallel): \begin{eqnarray} && ( M_{A_l} + h M_{1, l}) y_{n+1}^{l, k+1} = h ( N_{1, l} + \frac{1}{h} N_{A_l}) \left( \sum_{m=1}^L E_{l, m} y_{n+1}^{m, k} \right) + M_A y_n \\ && - N_A y_n + h f_{n+1} , \nonumber \\ && y_{n+1}^{l, 0} = y_n , \; l = 1, \ldots, L, \; k = 0, 1, \ldots, K, \; n = 0, 1, 2, \ldots, J , \nonumber \end{eqnarray} where we apply the algorithm with $p = 50$, $q = 6$, $J = 20$, $h = 0.1$ and the error norm $|| y_{n+1}^{k+1} - y_{n+1}^k|| \le 10^{-3}$; here we have $\Delta x = 1.0$ and $\frac{D}{\Delta x^2} = 1.0$. Further, $L$ is the number of processors. Without loss of generality, we concentrate in the following on $L = 2$. \subsubsection{Jacobi Method} The first processor computes: \begin{eqnarray} && ( M_{A_1} + h M_{1, 1}) y_{n+1}^{1, k+1} = h ( N_{1, 1} + \frac{1}{h} N_{A_1}) \left( E_{1, 1} y_{n+1}^{1, k} + E_{1, 2} y_{n+1}^{2, k} \right) + M_{A} y_n \nonumber \\ && - N_{A} y_n + h f_{n+1} , \\ && y_{n+1}^{1, 0} = y_n , \; k = 0, 1, \ldots, K, \; n = 0, 1, 2, \ldots, J , \nonumber \end{eqnarray} The second processor computes: \begin{eqnarray} && ( M_{A_2} + h M_{1, 2}) y_{n+1}^{2, k+1} = h ( N_{1, 2} + \frac{1}{h} N_{A_2}) \left( E_{2, 1} y_{n+1}^{1, k} + E_{2, 2} y_{n+1}^{2, k} \right) + M_{A} y_n \nonumber \\ && - N_{A} y_n + h f_{n+1} , \\ && y_{n+1}^{2, 0} = y_n , \; k = 0, 1, \ldots, K, \; n = 0, 1, 2, \ldots, J .
\nonumber \end{eqnarray} where we decide whether we have to switch off the mixing: if \begin{eqnarray} && || \left( E_{1, 1} y_{n+1}^{1, k} + E_{1, 2} y_{n+1}^{2, k} \right) - y_{n+1}^{1, k-1} || \le || y_{n+1}^{1, k} - y_{n+1}^{1, k-1} || , \\ && || \left( E_{2, 1} y_{n+1}^{1, k} + E_{2, 2} y_{n+1}^{2, k} \right) - y_{n+1}^{2, k-1} || \le || y_{n+1}^{2, k} - y_{n+1}^{2, k-1} || \end{eqnarray} are fulfilled, we do not switch off the mixing; but if the mixing has a larger error, we set \begin{eqnarray} && y_{n+1}^{k} = y_{n+1}^{1, k} . \end{eqnarray} \begin{remark} The multisplitting is switched off if one partial solution is much more accurate than the other partial solution. Then we only apply the best approximation. \end{remark} \subsubsection{Gauss-Seidel Method (decoupled version, serial with 2 processors)} In this version, we apply the well-known standard Gauss-Seidel method, which has the drawback of treating the results serially. The first processor computes: \begin{eqnarray} && ( M_{A_1} + h M_{1, 1}) y_{n+1}^{1, k+1} = h ( N_{1, 1} + \frac{1}{h} N_{A_1}) \left( E_{1, 1} y_{n+1}^{1, k} + E_{1, 2} y_{n+1}^{2, k} \right) + M_{A} y_n \nonumber \\ && - N_{A} y_n + h f_{n+1} , \\ && y_{n+1}^{1, 0} = y_n , \; k = 0, 1, \ldots, K, \; n = 0, 1, 2, \ldots, J , \nonumber \end{eqnarray} \begin{remark} Here, we can apply the result of the first processor if we assume that this part is computed much faster than that of the second processor. \end{remark} The second processor computes: \begin{eqnarray} && ( M_{A_2} + h M_{1, 2}) y_{n+1}^{2, k+1} = h ( N_{1, 2} + \frac{1}{h} N_{A_2}) \left( E_{2, 1} y_{n+1}^{1, k+1} + E_{2, 2} y_{n+1}^{2, k} \right) + M_{A} y_n \nonumber \\ && - N_{A} y_n + h f_{n+1} , \\ && y_{n+1}^{2, 0} = y_n , \; k = 0, 1, \ldots, K, \; n = 0, 1, 2, \ldots, J .
\nonumber \end{eqnarray} where we decide whether we have to switch off the mixing: if \begin{eqnarray} && || \left( E_{1, 1} y_{n+1}^{1, k} + E_{1, 2} y_{n+1}^{2, k} \right) - y_{n+1}^{1, k-1} || \le || y_{n+1}^{1, k} - y_{n+1}^{1, k-1} || , \\ && || \left( E_{2, 1} y_{n+1}^{1, k+1} + E_{2, 2} y_{n+1}^{2, k} \right) - y_{n+1}^{2, k-1} || \le || y_{n+1}^{2, k} - y_{n+1}^{2, k-1} || \end{eqnarray} are fulfilled, we do not switch off the mixing; but if the mixing has a larger error, we set \begin{eqnarray} && y_{n+1}^{k} = y_{n+1}^{1, k} . \end{eqnarray} \begin{remark} The multisplitting is switched off if one partial solution is much more accurate than the other partial solution. Otherwise, we apply the mixture of the results based on the multisplitting method. \end{remark} \subsubsection{Gauss-Seidel Method (decoupled version)} The first processor computes: \begin{eqnarray} && \left( ( M_{A_1} + h M_{1, 1}) - h ( N_{1, 1} + \frac{1}{h} N_{A_1}) E_{1, 1} \right) y_{n+1}^{1, k+1} \nonumber \\ && = h ( N_{1, 1} + \frac{1}{h} N_{A_1}) \left( E_{1, 2} y_{n+1}^{2, k} \right) + M_{A} y_n - N_{A} y_n + h f_{n+1} , \\ && y_{n+1}^{1, 0} = y_n , \; k = 0, 1, \ldots, K, \; n = 0, 1, 2, \ldots, J , \nonumber \end{eqnarray} The second processor computes: \begin{eqnarray} && \left( ( M_{A_2} + h M_{1, 2}) - h ( N_{1, 2} + \frac{1}{h} N_{A_2}) E_{2, 2} \right) y_{n+1}^{2, k+1} \nonumber \\ && = h ( N_{1, 2} + \frac{1}{h} N_{A_2}) \left( E_{2, 1} y_{n+1}^{1, k} \right) + M_{A} y_n - N_{A} y_n + h f_{n+1} , \\ && y_{n+1}^{2, 0} = y_n , \; k = 0, 1, \ldots, K, \; n = 0, 1, 2, \ldots, J . \nonumber \end{eqnarray} \subsubsection{Gauss-Seidel Method (coupled version)} Here, we apply the coupled version of the GS method: one processor is faster with its computation, and the other processor can profit from the improved iterates.
The processors compute: \begin{eqnarray} \label{mobile1_1_2} && \left( ( M_{A_1} + h M_{1, 1}) - h ( N_{1, 1} + \frac{1}{h} N_{A_1}) E_{1, 1} \right) y_{n+1}^{1, k+1} \nonumber \\ && = h ( N_{1, 1} + \frac{1}{h} N_{A_1}) \left( E_{1, 2} \tilde{y}_{n+1}^{2, k+1} \right) + M_{A} y_n - N_{A} y_n + h f_{n+1} , \\ && \left( ( M_{A_2} + h M_{1, 2}) - h ( N_{1, 2} + \frac{1}{h} N_{A_2}) E_{2, 2} \right) y_{n+1}^{2, k+1} \nonumber \\ && = h ( N_{1, 2} + \frac{1}{h} N_{A_2}) \left( E_{2, 1} \tilde{y}_{n+1}^{1, k+1} \right) + M_{A} y_n - N_{A} y_n + h f_{n+1} , \\ && y_{n+1}^{1, 0} = y_n , \; l = 1, \ldots, L, \; k = 0, 1, \ldots, K, \; n = 0, 1,2, \ldots, J , \nonumber \\ && y_{n+1}^{2, 0} = y_n , \; l = 1, \ldots, L, \; k = 0, 1, \ldots, K, \; n = 0, 1,2, \ldots, J . \nonumber \end{eqnarray} where we have two cases: \begin{itemize} \item If Processor $1$ is faster than Processor $2$: \begin{eqnarray} \label{mobile1_1_2} && \tilde{y}_{n+1}^{1, k+1} = y_{n+1}^{1, k+1} , \\ && \tilde{y}_{n+1}^{2, k+1} = y_{n+1}^{2, k} , \end{eqnarray} \item If Processor $2$ is faster than Processor $1$: \begin{eqnarray} \label{mobile1_1_2} && \tilde{y}_{n+1}^{1, k+1} = y_{n+1}^{1, k} , \\ && \tilde{y}_{n+1}^{2, k+1} = y_{n+1}^{2, k+1} . \end{eqnarray} \end{itemize} \begin{remark} For all multisplitting methods, we can also extend the one-stage waveform-relaxation method to a multi-stage waveform-relaxation method. \end{remark} \section{Numerical Experiments} \label{num} In a first experiment, we apply a partial differential algebraic equation (PDAE), which combines partial differential and algebraic equations. 
We choose an experiment which is based on the following two equations: \begin{eqnarray} && \partial_t c_1 + \nabla \cdot {\bf F} c_1 = f_1(t) , \; \mbox{in} \; \Omega \times [0, t] , \\ && \nabla \cdot {\bf F} c_2 = f_2(t) , \; \mbox{in} \; \Omega \times [0, t] , \\ && {\bf F} = - D \nabla , \end{eqnarray} and we have the following DAE problem: \begin{eqnarray} && A \partial_t c + B c = f(t) , \; \mbox{in} \; [0, t] . \end{eqnarray} The analytical solution is given as: \begin{eqnarray} && y = [\cos(t), \sin(t), t, \cos(t), \sin(t), t, \ldots, \cos(t), \sin(t), t] \in {\rm I}\! {\rm R}^m , \end{eqnarray} and we have to calculate $f(t)$ as: \begin{eqnarray} && f(t) = A \partial_t y(t) + B y(t) , \; \mbox{in} \; [0, t] , \end{eqnarray} where $y$ is the analytical solution. We measure the error in the $L_2$- and the maximum norm: \begin{eqnarray} && err_{L_2}(t) = \frac{1}{\Delta x} ( \sum_{i=1}^I ( y_{ana}(x_i, t) - y_{num}(x_i, t) )^2 )^{1/2} , \\ && err_{max}(t) = \max_{i=1}^I || y_{ana}(x_i, t) - y_{num}(x_i, t) || . \end{eqnarray} In the following we deal with the semidiscretized equation given by the matrices: \begin{eqnarray} \label{eq20} && A = \left( \begin{array}{c c c c c} I & & & & \\ & \ddots & & & \\ & & I & & \\ & & & 0 & \\ & & & & 0 \end{array} \right) \in {\rm I}\! {\rm R}^{m \times m} , \end{eqnarray} where $I, 0 \in {\rm I}\! {\rm R}^{p \times p}$. We have the following two operators for the splitting method: \begin{eqnarray} B_1 & = & \left(\begin{array}{rrrrr} 4 & -1 & ~ & ~ & ~ \\ -1 & 4 & -1 & ~ & ~ \\ ~ & \ddots & \ddots & \ddots & ~ \\ ~ & ~ & -1 & 4 & -1 \\ ~ & ~ & ~ & -1 & 4 \end{array}\right) \in {\rm I}\!
{\rm R}^{p \times p} \end{eqnarray} \begin{eqnarray} B & = & \frac{D}{\Delta x^2}\cdot \left(\begin{array}{rrrrr} B_1 & -I & ~ & ~ & ~ \\ -I & B_1 & -I & ~ & ~ \\ ~ & \ddots & \ddots & \ddots & ~ \\ ~ & ~ & -I & B_1 & -I \\ ~ & ~ & ~ & -I & B_1 \end{array}\right) \in {\rm I}\! {\rm R}^{m\times m} \end{eqnarray} with $p q = m$, where we assume $ \frac{D}{\Delta x^2} = 1$. That is, $A$ and $B$ are $m \times m$ block matrices. We have the following splitting: \begin{eqnarray} && N_A = \left( \begin{array}{c c c c c} 0 & & & & \\ & \ddots & & & \\ & & 0 & & \\ & & & \frac{1}{100} I & \\ & & & & 0 \end{array} \right) , \end{eqnarray} \begin{eqnarray} N_1 & = & \frac{D}{\Delta x^2}\cdot \left(\begin{array}{rrrrr} 2 I & I & ~ & ~ & ~ \\ I & 2 I & I & ~ & ~ \\ ~ & \ddots & \ddots & \ddots & ~ \\ ~ & ~ & I & 2 I & I \\ ~ & ~ & ~ & I & 2 I \end{array}\right) \in {\rm I}\! {\rm R}^{m \times m} \end{eqnarray} \begin{eqnarray} M_2 & = & \frac{D}{\Delta x^2}\cdot \left(\begin{array}{rrrrr} 8 I & 0 & ~ & ~ & ~ \\ 0 & 8 I & 0 & ~ & ~ \\ ~ & \ddots & \ddots & \ddots & ~ \\ ~ & ~ & 0 & 8 I & 0 \\ ~ & ~ & ~ & 0 & 8 I \end{array}\right) \in {\rm I}\! {\rm R}^{m \times m} \end{eqnarray} where $I, 0 \in {\rm I}\! {\rm R}^{p \times p}$, and \begin{eqnarray} M_3 & = & \frac{D}{\Delta x^2}\cdot \left(\begin{array}{rrrrr} 10 I & 0 & ~ & ~ & ~ \\ 0 & 10 I & 0 & ~ & ~ \\ ~ & \ddots & \ddots & \ddots & ~ \\ ~ & ~ & 0 & 10 I & 0 \\ ~ & ~ & ~ & 0 & 10 I \end{array}\right) \in {\rm I}\! {\rm R}^{m \times m} \end{eqnarray} where $I, 0 \in {\rm I}\!
{\rm R}^{p \times p}$. We have the following operators: \begin{eqnarray} && M_A = A + N_A , \\ && M_1 = B + N_1 , \\ && N_2 = M_2 - M_1 , \\ && N_3 = M_3 - M_2 . \end{eqnarray} Further, we have the following matrices: \begin{eqnarray} && N_{A_1} = \left( \begin{array}{c c c c c} 0 & & & & \\ & \ddots & & & \\ & & 0 & & \\ & & & \frac{1}{100} I & \\ & & & & 0 \end{array} \right) , N_{A_2} = \left( \begin{array}{c c c c c} 0 & & & & \\ & \ddots & & & \\ & & 0 & & \\ & & & 0 & \\ & & & & \frac{1}{100} I \end{array} \right) \in {\rm I}\! {\rm R}^{m \times m} , \end{eqnarray} \begin{eqnarray} && N_{1,1} = \frac{D}{\Delta x^2}\cdot \left(\begin{array}{rrrrr} 2 I & I & ~ & ~ & ~ \\ I & 2 I & I & ~ & ~ \\ ~ & \ddots & \ddots & \ddots & ~ \\ ~ & ~ & I & 2 I & I \\ ~ & ~ & ~ & I & 2 I \end{array}\right) \in {\rm I}\! {\rm R}^{m \times m} , \\ && N_{1,2} = \frac{D}{\Delta x^2}\cdot \left(\begin{array}{rrrrr} 3 I & I & ~ & ~ & ~ \\ I & 3 I & I & ~ & ~ \\ ~ & \ddots & \ddots & \ddots & ~ \\ ~ & ~ & I & 3 I & I \\ ~ & ~ & ~ & I & 3 I \end{array}\right) \in {\rm I}\! {\rm R}^{m \times m} , \end{eqnarray} and the overlapping matrices are given as follows (where we deal with a symmetric overlap, i.e., in both directions of the diagonal): \begin{enumerate} \item Overlap between the block-matrices at $m/2$ and $m/2 + 1$: \begin{eqnarray} && E_{1, 1} = \left( \begin{array}{c c c c c c c} I & & & & & & \\ & \ddots & & & & & \\ & & I & & & & \\ & & & \alpha_1 I & & & \\ & & & & 0 & & \\ & & & & & \ddots& \\ & & & & & & 0 \end{array} \right) , \\ && \nonumber \\ && E_{1, 2} = \left( \begin{array}{c c c c c c c} 0 & & & & & & \\ & \ddots & & & & & \\ & & 0 & & & & \\ & & & \alpha_2 I & & & \\ & & & & I & & \\ & & & & & \ddots& \\ & & & & & & I \end{array} \right) \in {\rm I}\!
{\rm R}^{m \times m} , \end{eqnarray} and the next decomposition: \begin{eqnarray} && E_{2, 1} = \left( \begin{array}{c c c c c c c} I & & & & & & \\ & \ddots & & & & & \\ & & I & & & & \\ & & & \alpha_3 I & & & \\ & & & & 0 & & \\ & & & & & \ddots& \\ & & & & & & 0 \end{array} \right) , \\ && \nonumber \\ && E_{2, 2} = \left( \begin{array}{c c c c c c c} 0 & & & & & & \\ & \ddots & & & & & \\ & & 0 & & & & \\ & & & \alpha_4 I & & & \\ & & & & I & & \\ & & & & & \ddots& \\ & & & & & & I \end{array} \right) \in {\rm I}\! {\rm R}^{m \times m} , \end{eqnarray} where $\alpha_1 + \alpha_2 = 1$ and $\alpha_3 + \alpha_4 = 1$. Here we have the overlap $o = 1$, i.e., $E_{1,1}$ and $E_{2,2}$ have only one overlapping block line with $I$. \begin{remark} An extension is to apply different overlapping areas in the $E_{1}$ and $E_2$ decomposition. \end{remark} \item The largest overlap is $o = m/2 - 1$ (where we assume $m$ is even), i.e., we overlap nearly the full matrices except for the lowermost and uppermost entries: \begin{eqnarray} && E_{1, 1} = \left( \begin{array}{c c c c c c c c} I & & & & & & & \\ & \alpha_1 I & & & & & & \\ & & \ddots & & & & & \\ & & & \alpha_1 I & & & & \\ & & & & \alpha_1 I & & & \\ & & & & & \ddots & & \\ & & & & & & \alpha_1 I & \\ & & & & & & & 0 \end{array} \right) , \\ && \nonumber \\ && E_{1, 2} = \left( \begin{array}{c c c c c c c c} 0 & & & & & & & \\ & \alpha_2 I & & & & & & \\ & & \ddots & & & & & \\ & & & \alpha_2 I & & & & \\ & & & & \alpha_2 I & & & \\ & & & & & \ddots & & \\ & & & & & & \alpha_2 I & \\ & & & & & & & I \end{array} \right) \in {\rm I}\!
{\rm R}^{m \times m} , \end{eqnarray} and the next decomposition: \begin{eqnarray} && E_{2, 1} = \left( \begin{array}{c c c c c c c c} I & & & & & & & \\ & \alpha_3 I & & & & & & \\ & & \ddots & & & & & \\ & & & \alpha_3 I & & & & \\ & & & & \alpha_3 I & & & \\ & & & & & \ddots & & \\ & & & & & & \alpha_3 I & \\ & & & & & & & 0 \end{array} \right) , \\ && \nonumber \\ && E_{2, 2} = \left( \begin{array}{c c c c c c c c} 0 & & & & & & & \\ & \alpha_4 I & & & & & & \\ & & \ddots & & & & & \\ & & & \alpha_4 I & & & & \\ & & & & \alpha_4 I & & & \\ & & & & & \ddots & & \\ & & & & & & \alpha_4 I & \\ & & & & & & & I \end{array} \right) \in {\rm I}\! {\rm R}^{m \times m} . \end{eqnarray} \begin{remark} An extension is to apply different overlapping areas in the $E_{1}$ and $E_2$ decomposition. \end{remark} \end{enumerate} Further, we have \begin{eqnarray} E_1 = E_2 = \left( \begin{array}{c c c c c c c} I & & & & & & \\ & \ddots & & & & & \\ & & I & & & & \\ & & & I & & & \\ & & & & I & & \\ & & & & & \ddots& \\ & & & & & & I \end{array} \right) \in {\rm I}\! {\rm R}^{m \times m}, \end{eqnarray} and the following operators: \begin{eqnarray} && M_{A_1} = A + N_{A_1} , \\ && M_{1,1} = B + N_{1,1} , \\ && M_{A_2} = A + N_{A_2} , \\ && M_{1,2} = B + N_{1,2} , \\ && E_1 = E_{1,1} + E_{1,2} , \\ && E_2 = E_{2,1} + E_{2,2} . \end{eqnarray} \begin{remark} We compared the errors between the multi-stage WR methods and the multisplitting WR methods of Jacobi and Gauss-Seidel type. For the multi-stage methods, the benefit of the higher-stage methods is that we only have to invert simpler matrices. The highest accuracy is obtained with the one-stage method and the MS Gauss-Seidel method, since their inversion matrices carry the largest amount of information, but these methods are also the most expensive.
We could also improve the accuracy of the MS methods by varying the overlap: for $o = 1$ we have only one overlapping block line, while $o = m/2 - 1$ gives the largest overlap; the optimal value lies in between. \end{remark} We apply the numerical example and obtain for the one-, two-, and three-stage methods the results shown in Figure \ref{one-three-stage}. \begin{figure}[ht] \begin{center} \includegraphics[width=5.0cm,angle=-0]{L2Error_oneStage-threeStage_error_bound.eps} \includegraphics[width=5.0cm,angle=-0]{maxError_oneStage-threeStage_error_bound.eps} \end{center} \caption{\label{one-three-stage} The errors between the exact and the numerical solution for the one-, two-, and three-stage methods (left-hand side: $L_2$-error, right-hand side: $L_{\infty}$-error).} \end{figure} \begin{remark} We obtain the same accuracy for the two- and three-stage methods as for the one-stage method. This means that we can reduce the computational work and still obtain results of the same accuracy. \end{remark} We apply the numerical example and obtain for the multisplitting methods the results shown in Figure \ref{multi_1}. \begin{figure}[ht] \begin{center} \includegraphics[width=5.0cm,angle=-0]{L2Error_Jacobi_GS_Stages.eps} \includegraphics[width=5.0cm,angle=-0]{L2Error_GS.eps} \end{center} \caption{\label{multi_1} The errors between the exact and the numerical solution for the one-stage, two-stage, three-stage, Jacobi, and GS multisplitting methods (left-hand side: one-stage, two-stage, three-stage, Jacobi, and uncoupled GS methods; right-hand side: one-stage, two-stage, three-stage, Jacobi, and uncoupled and coupled GS methods).} \end{figure} \begin{remark} We obtain the same accuracy for the two- and three-stage methods as for the one-stage method. This means that we can reduce the computational work and still obtain results of the same accuracy.
The multisplitting method also achieves the same accuracy as the different multi-stage methods; here, we have the additional benefit of the parallel versions. \end{remark} \section{Conclusions and Discussions} \label{conc} We discussed multi-stage waveform-relaxation methods and multisplitting methods for differential algebraic equations. While the multi-stage waveform-relaxation methods can reduce the computational work by simplifying the matrices to be inverted, the multisplitting methods have their benefit in the parallelization of the procedure. We tested the ideas on a first partial differential algebraic equation and saw the benefit of the multi-stage waveform-relaxation methods. In the future, we will discuss the numerical analysis of the different methods and present more numerical examples. \bibliographystyle{plain}
\section{Introduction} Entanglement is the crucial resource for quantum information processing and as such the ``currency'' to pay with in almost all applications. For two-partite quantum states, measures have been developed that uniquely specify the value of this resource. In contrast, for n-partite states the picture changes significantly. First, one has to distinguish not only between fully separable or entangled, but also between genuine n-partite, bi-, and tri-separable entangled states, etc. Second, even states with the same level of separability are different in the sense that they have, for example, different Schmidt rank \cite{Ter00a} or that they cannot be transformed into each other, e.g., by local unitary (LU) or, more generally, by stochastic local operations and classical communication (SLOCC) \cite{Duer00, Ver02}. From an experimental point of view, classifying states according to the latter property is reasonable, as states from one SLOCC class are suited for the same multi-party quantum communication applications. Thus, for the usage of multi-partite states it is of importance to know not only the \emph{amount} but also the \emph{type} of entanglement contained in a particular state. In other words, the value \emph{and} the type of the ``currency'' are what matters. Tools to detect the entanglement of a state exist, most prominently entanglement witnesses \cite{witness}. An alternative method, relying on the correlations between results obtained by local measurements, are Bell inequalities. Being originally devised to test fundamental issues of quantum physics, they allow one to distinguish entangled from separable two-qubit quantum systems \cite{Gis91,Terhal00}. Bell inequalities, meanwhile extended to three- and more-partite quantum states \cite{Mer90,more2,ZBWW}, can thus serve as witnesses for both entanglement and the violation of local realism.
Recently it was observed that for each graph state all non-vanishing correlations (or even a restricted number thereof) form a Bell inequality which is maximally violated only by the respective quantum state \cite{Sca05, Gueh05}. In particular, the Bell inequality for the four-qubit cluster state is not violated at all by GHZ states \cite{Sca05}. Naturally, several questions arise: whether one can in general apply such Bell inequalities to discriminate particular states from other classes of multi-partite entangled states; if so, whether they can also be constructed and applied for non-graph states; and finally, whether there are other operators that allow one to experimentally discriminate entanglement classes. In this article we address these problems starting from Bell inequalities. We present a way to construct Bell operators \cite{Bra92} that are \emph{characteristic} for a particular quantum state, i.e., operators that have maximal expectation value for this multi-partite state only. With respect to experimental applications, we further aim at obtaining the expectation value with a minimal number of measurement settings. Under certain conditions, we can relax the initial requirement that characteristic operators also have to be Bell operators, which allows a further reduction of the number of settings. Comparison of the experimentally obtained expectation values with the maximal expectation values for states from other entanglement classes enables us to clearly distinguish observed states from other multi-party entangled states. In order to construct a Bell operator, we exploit the fact that certain correlations between measurement results on individual qubits are specific for multi-partite quantum states \cite{ZBWW}. All correlations for a state $\ket{X}$ are summarized by the correlation tensor $T$.
If we focus on the case of four qubits, then $T_{ijkl}=\bra{X}(\sigma_i \otimes \sigma_j \otimes \sigma_k \otimes \sigma_l)\ket{X}$, with $i,j,k,l \in \{0,x,y,z\}$, where $\sigma_0=\openone$ and $\sigma_{x,y,z}$ are the Pauli spin operators. To obtain a Bell operator $\hat{\mathcal{B}}_X$ which is characteristic for a state $\ket{X}$, we require that $\ket{X}$ is the eigenstate of $\hat{\mathcal{B}}_X$ with the highest eigenvalue $\lambda_{\mathrm{max}}$. If this eigenstate is not degenerate, $\hat{\mathcal{B}}_X$, acting on any other state, cannot lead to an expectation value greater than or equal to $\lambda_{\mathrm{max}}$. An operator which is in general not a Bell operator, but trivially fulfills the condition to have $\ket{X}$ as the only eigenstate with $\lambda_{\mathrm{max}}=1$, is the projector or fidelity operator $\hat{\mathcal{F}}_X=\,\ketbra{X}$, with \begin{equation} \hat{\mathcal{F}}_X=\frac{1}{16} \sum_{i,j,k,l} T_{ijkl} \, (\sigma_i \otimes \sigma_j \otimes \sigma_k \otimes \sigma_l). \end{equation} For most of the relevant quantum states, the major part of the 256 coefficients $T_{ijkl}$ vanishes. Therefore, the number of measurement settings necessary for the evaluation of $\hat{\mathcal{F}}_X$ is much smaller than for a complete state tomography. We consider the non-vanishing terms as the relevant correlations for characterizing the state and take them as a starting point for the construction of $\hat{\mathcal{B}}_X$. As we will see in the following two examples, there are quantum states for which a small subset of the relevant correlations is enough to construct $\hat{\mathcal{B}}_X$. Once this is accomplished, one can calculate the upper bound, $v_Y^\ast$, on the expectation values $v_Y=\bra{Y}\hat{\mathcal{B}}_X\ket{Y}=\langle \hat{\mathcal{B}}_X\rangle_Y$ for states $\ket{Y}$ which belong to other classes than $\ket{X}$.
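The decomposition of $\hat{\mathcal{F}}_X$ into the correlation tensor can be made concrete with a small numerical sketch (our own illustration; the function names are ours, and the state is assumed to be given as a 16-dimensional vector):

```python
import numpy as np
from itertools import product
from functools import reduce

# Pauli basis sigma_0, sigma_x, sigma_y, sigma_z
PAULI = {'0': np.eye(2, dtype=complex),
         'x': np.array([[0, 1], [1, 0]], dtype=complex),
         'y': np.array([[0, -1j], [1j, 0]], dtype=complex),
         'z': np.array([[1, 0], [0, -1]], dtype=complex)}

def sigma(word):
    """Tensor product sigma_i x sigma_j x sigma_k x sigma_l for a 4-letter word."""
    return reduce(np.kron, (PAULI[c] for c in word))

def correlation_tensor(psi):
    """All 256 coefficients T_ijkl = <psi| sigma_i x ... x sigma_l |psi>."""
    return {''.join(w): float(np.real(psi.conj() @ sigma(''.join(w)) @ psi))
            for w in product('0xyz', repeat=4)}

def fidelity_operator(T):
    """F_X = (1/16) sum_ijkl T_ijkl (sigma_i x sigma_j x sigma_k x sigma_l)."""
    return sum(t * sigma(w) for w, t in T.items()) / 16
```

For the four-qubit GHZ state, for instance, only 16 of the 256 coefficients are non-zero, and `fidelity_operator` reproduces the projector onto the state.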
Consequently, a state under investigation with $\langle\hat{\mathcal{B}}_X\rangle_Z=v_Z$ cannot be an element of any class of states with $v_Y^\ast<v_Z$. Note that $\langle \hat{\mathcal{B}}_X\rangle$ induces a particular ordering of states which is neither absolute nor related to some entanglement measure of the states and, similarly to the entanglement witness, depends on the operator $\hat{\mathcal{B}}_X$. Yet, now we do not only detect a higher or lower degree of entanglement: we distinguish different types of entanglement. One might say that a state with a higher $\langle \hat{\mathcal{B}}_X\rangle$ is more ``$\ket{X}$-type'' entangled. The same is true for a mixed state $\rho$ with expectation value $v_\rho = \mathrm{Tr}[\hat{\mathcal{B}}_X \rho]=\langle \hat{\mathcal{B}}_X \rangle_\rho$, in the sense that it cannot solely be expressed as a mixture of pure states $\ket{Y_i}$ with $v_{Y_i}^\ast<v_\rho$, but has to contain contributions with a higher ``$X$-type'' entanglement. Summarizing, we point at the fact that one can obtain a witness of ``$\ket{X}$-type'' entanglement by constructing a discrimination operator which has $\ket{X}$ as non-degenerate eigenvector with the highest eigenvalue. After all, such an operator is not unique, nor does it necessarily have to be a Bell operator. However, a Bell operator unconditionally detects the entanglement of the investigated state, even if the state space is not fully known. For example, witness operators might detect a state to be entangled even though a description of the measurement results based on local realistic models, or, for that purpose, based on separable states in higher-dimensional Hilbert spaces, is possible \cite{Aci06}. If one trusts in the representation of the state, as shown below, even more efficient operators for state discrimination can be devised.
Let us now apply our method to the state $\ket{\Psi_{4}}$ \cite{WZ}: \begin{eqnarray} \ket{\Psi_{4}}&=\frac{1}{\sqrt{3}}(\ket{0011}+\ket{1100}-\frac{1}{2}(\ket{0101}\notag\\&+\ket{0110}+\ket{1001}+\ket{1010})). \end{eqnarray} This state was observed in multi-photon experiments \cite{Eib03} and can be used, for example, for decoherence free quantum communication \cite{Bou04}, quantum telecloning \cite{Mur99}, and multi-party secret sharing \cite{Gae07}. The fidelity operator for that state $\hat{\mathcal{F}}_{\Psi_{4}}$ contains 40 relevant correlation operators $(\sigma_i \otimes \sigma_j \otimes \sigma_k \otimes \sigma_l)$, out of which 21 describe four-qubit correlations (i.e. do not contain $\sigma_0$). Already 10 are enough to construct a characteristic Bell operator that has $\ket{\Psi_{4}}$ as non-degenerate eigenstate with maximum eigenvalue $\lambda_{\mathrm{max}}=1$: \begin{eqnarray} 6 \,\hat{\mathcal{B}}_{\Psi_{4}} &=& \ensuremath{\sigma_x} \otimes \ensuremath{\sigma_y} \otimes \ensuremath{\sigma_y} \otimes \ensuremath{\sigma_x} + \ensuremath{\sigma_y} \otimes \ensuremath{\sigma_x} \otimes \ensuremath{\sigma_y} \otimes \ensuremath{\sigma_x} \notag \\ & -& \ensuremath{\sigma_y} \otimes \ensuremath{\sigma_y} \otimes \ensuremath{\sigma_x} \otimes \ensuremath{\sigma_x} + \ensuremath{\sigma_x} \otimes \ensuremath{\sigma_z} \otimes \ensuremath{\sigma_x} \otimes \ensuremath{\sigma_z} \notag \\ &+& \ensuremath{\sigma_z} \otimes \ensuremath{\sigma_x} \otimes \ensuremath{\sigma_x} \otimes \ensuremath{\sigma_z} - \ensuremath{\sigma_z} \otimes \ensuremath{\sigma_z} \otimes \ensuremath{\sigma_x} \otimes \ensuremath{\sigma_x} \nonumber \\ &+& \ensuremath{\sigma_z} \otimes \ensuremath{\sigma_z} \otimes \ensuremath{\sigma_z} \otimes \ensuremath{\sigma_z} - \ensuremath{\sigma_y} \otimes \ensuremath{\sigma_y} \otimes \ensuremath{\sigma_z} \otimes \ensuremath{\sigma_z} \notag \\ &+& \ensuremath{\sigma_y} \otimes \ensuremath{\sigma_z} \otimes \ensuremath{\sigma_y} \otimes 
\ensuremath{\sigma_z} + \ensuremath{\sigma_z} \otimes \ensuremath{\sigma_y} \otimes \ensuremath{\sigma_y} \otimes \ensuremath{\sigma_z}. \label{eqn:PSI4} \end{eqnarray} \begin{table} \caption{\label{tab:psiviolation} Maximal expectation values $\langle\hat{\mathcal{B}}_{\Psi_{4}}\rangle$} \begin{ruledtabular} \begin{tabular}{c || l | l } State & under LU & under SLOCC \\ \hline \ket{\Psi_{4}} & 1.000 & 1.000\\ \ket{\ensuremath{D_4^{(2)}}} & 0.926 & 0.926\\ \ket{GHZ} & 0.805 & 0.805\\ \ket{C} & 0.515 & 0.764\\ \ket{W} & 0.736 & 0.758\\ $\ket{\textrm{bi-sep}}$ & 0.722 & 0.749\\ $\ket{\textrm{sep}}$ & 0.217 &0.217 \end{tabular} \end{ruledtabular} \end{table} $\hat{\mathcal{B}}_{\Psi_{4}}$ can be used to discriminate an experimentally observed state with respect to other four-qubit states. With the chosen normalization, we obtain the limit for any local realistic theory by replacing $\sigma_i$ by some locally predetermined values $I_i= \pm1$, leading to the inequality $|\langle \hat{\mathcal{B}}_{\Psi_{4}} \rangle_{\mathrm{avg}} | \leq \frac{2}{3}$. Table \ref{tab:psiviolation} shows the bounds on the expectation value of $\hat{\mathcal{B}}_{\Psi_{4}}$ acting on some classes of prominent four-qubit states (including a fully separable state $\ket{\textrm{sep}}$, any bi-separable state $\ket{\textrm{bi-sep}}$, as well as the four-partite entangled Dicke state $D^{(2)}_4$ \cite{DICKE}, the GHZ \cite{GHZ1}, W \cite{Duer00}, and Cluster ($C$) \cite{Rau01} states). These bounds were obtained by numerical optimization over either LU or SLOCC transformations, respectively. In particular, with the bound for an arbitrary bi-separable state, $\hat{\mathcal{B}}_{\Psi_{4}}$ also provides a sufficient condition for genuine four-partite entanglement.
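Both the quantum value and the local-realism bound of $\hat{\mathcal{B}}_{\Psi_{4}}$ can be reproduced numerically. The sketch below (our own construction; the term list encodes Eq.~(\ref{eqn:PSI4})) evaluates $\langle\hat{\mathcal{B}}_{\Psi_{4}}\rangle$ for $\ket{\Psi_4}$ and brute-forces the maximum over deterministic local-realistic assignments:

```python
import numpy as np
from functools import reduce
from itertools import product

P = {'x': np.array([[0, 1], [1, 0]], dtype=complex),
     'y': np.array([[0, -1j], [1j, 0]], dtype=complex),
     'z': np.array([[1, 0], [0, -1]], dtype=complex)}

# the ten correlation terms of the Bell operator, with their signs
TERMS = [('xyyx', 1), ('yxyx', 1), ('yyxx', -1), ('xzxz', 1), ('zxxz', 1),
         ('zzxx', -1), ('zzzz', 1), ('yyzz', -1), ('yzyz', 1), ('zyyz', 1)]

B = sum(s * reduce(np.kron, (P[c] for c in w)) for w, s in TERMS) / 6

# |Psi_4> in the computational basis |b1 b2 b3 b4>
psi = np.zeros(16)
for b, a in [('0011', 1), ('1100', 1), ('0101', -0.5),
             ('0110', -0.5), ('1001', -0.5), ('1010', -0.5)]:
    psi[int(b, 2)] = a / np.sqrt(3)

quantum = float(np.real(psi @ B @ psi))    # expectation value for |Psi_4>

# deterministic local-realistic maximum: each party fixes +-1 per setting
lhv = max(abs(sum(s * np.prod([v[3 * p + 'xyz'.index(w[p])] for p in range(4)])
                  for w, s in TERMS)) / 6
          for v in product([1, -1], repeat=12))
```

The quantum value equals $1$, the maximal eigenvalue quoted above, while the brute-force maximum reproduces the local-realism bound $2/3$.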
\begin{table}[t] \caption{\label{tab:dickeviolation} Maximal expectation values $\langle\hat{\mathcal{B}}_{\ensuremath{D_4^{(2)}}}\rangle$} \begin{ruledtabular} \begin{tabular}{c || l | l } State & under LU & under SLOCC \\ \hline \ket{\ensuremath{D_4^{(2)}}} &1.000&1.000\\ \ket{\Psi_{4}} &0.889&0.889\\ \ket{GHZ} &0.833&0.833\\ \ket{C} &0.500&0.706\\ $\ket{\textrm{bi-sep}}$ &0.667&0.667\\ \ket{W} &0.613&0.619\\ $\ket{\textrm{sep}}$ & 0.178 &0.178 \end{tabular} \end{ruledtabular} \end{table} We now employ these results for the analysis of experimental data. To observe the state $\ket{\Psi_{4}}$ we used photons generated by type II non-collinear spontaneous parametric down conversion (SPDC) and a variable linear optics setup. Essentially, a four photon emission into two modes is overlapped on a polarizing beam splitter (PBS) and subsequently split into four modes. Depending on the setting of a half-wave plate (in our case oriented at 45$^\circ$) preceding the PBS and conditioned on detecting a photon in each of the four outputs, a variety of states can be observed \cite{us}. The fidelity of the experimental state $\rho_{\Psi_{4}}$, determined from 21 four-qubit correlations, was $\mathcal{F}_{\Psi_{4}}=\mathrm{Tr}[\hat{\mathcal{F}}_{\Psi_{4}}\rho_{\Psi_{4}}]=0.90 \pm 0.01$. The analysis of the experimental state using the Bell operator $\hat{\mathcal{B}}_{\Psi_{4}}$ required less than half of the measurement settings and leads to $v_{\rho_{\Psi_{4}}}=0.91 \pm 0.02$ (see Fig.~\ref{fig:w2setup}a). This value is, according to Table \ref{tab:psiviolation}, sufficient to prove that the experimental state is genuine four-qubit entangled and cannot be of W-, Cluster-, or GHZ-type in the sense described above. 
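The local-realistic bound $|\langle \hat{\mathcal{B}}_{\Psi_{4}} \rangle_{\mathrm{avg}}| \leq \frac{2}{3}$ quoted above can also be checked by brute force: with three predetermined values $I_x,I_y,I_z=\pm1$ per qubit there are only $2^{12}=4096$ assignments to enumerate. A short sketch (ours, assuming nothing beyond the ten terms listed above):

```python
import itertools
import math

# the ten sign/setting patterns of 6*B_Psi4 (one letter per qubit)
terms = [(+1, 'xyyx'), (+1, 'yxyx'), (-1, 'yyxx'), (+1, 'xzxz'),
         (+1, 'zxxz'), (-1, 'zzxx'), (+1, 'zzzz'), (-1, 'yyzz'),
         (+1, 'yzyz'), (+1, 'zyyz')]

best = 0.0
# assign predetermined values I_x, I_y, I_z = +-1 to each of the 4 qubits
for vals in itertools.product([-1, 1], repeat=12):
    I = [dict(zip('xyz', vals[3 * k:3 * k + 3])) for k in range(4)]
    S = sum(sgn * math.prod(I[k][s] for k, s in enumerate(setting))
            for sgn, setting in terms)
    best = max(best, abs(S) / 6)
print(best)   # 0.666... = 2/3, the local-realistic bound from the text
```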
The class of states that cannot be excluded experimentally, as it has the second-largest expectation value in Table \ref{tab:psiviolation}, is represented by the so-called symmetric four-qubit Dicke state \cite{DICKE,Kie06} \begin{eqnarray} \ket{\ensuremath{D_4^{(2)}}}=&\frac{1}{\sqrt{6}} (\ket{0011}+\ket{0101}+\ket{0110} \notag \\ & +\ket{1001}+\ket{1010}+\ket{1100}). \end{eqnarray} In turn, for the Dicke state a separate, characteristic Bell operator $\hat{\mathcal{B}}_{\ensuremath{D_4^{(2)}}}$ can be constructed. Again, $\ket{\ensuremath{D_4^{(2)}}}$ has 40 correlation operators with nonzero expectation value, out of which 21 describe genuine four-qubit correlations. Naturally, the exact values of the correlations $T_{ijkl}$ differ from those of \ket{\Psi_{4}}. In the case of $\ket{\ensuremath{D_4^{(2)}}}$ they are such that eight of the correlation operators are already sufficient for the construction of $\hat{\mathcal{B}}_{\ensuremath{D_4^{(2)}}}$: \begin{eqnarray}\label{eqn:dop} 6 \,\hat{\mathcal{B}}_{\ensuremath{D_4^{(2)}}} &=& -\,\sigma_x \otimes \sigma_z \otimes \sigma_z \otimes \sigma_x -\sigma_x \otimes \sigma_z \otimes \sigma_x \otimes \sigma_z \notag \\ & & -\,\sigma_x \otimes \sigma_x \otimes \sigma_z \otimes \sigma_z +\sigma_x \otimes \sigma_x \otimes \sigma_x \otimes \sigma_x \notag \\ & &-\,\sigma_y \otimes \sigma_z \otimes \sigma_z \otimes \sigma_y -\sigma_y \otimes \sigma_z \otimes \sigma_y \otimes \sigma_z \notag \\ & & -\,\sigma_y \otimes \sigma_y \otimes \sigma_z \otimes \sigma_z +\sigma_y \otimes \sigma_y \otimes \sigma_y \otimes \sigma_y , \end{eqnarray} with $\lambda_{\mathrm{max}}=1$ for $\ket{\ensuremath{D_4^{(2)}}}$. This operator has a remarkable structure: it is of the form $\ensuremath{\sigma_x} \otimes M_3 + \ensuremath{\sigma_y} \otimes M^\prime_3$, where $M_3$ and $M^\prime_3$ are three-qubit Mermin inequality operators \cite{Mer90, footnote2}.
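The Mermin-block structure is again easy to verify numerically; the following sketch (our own illustration) assembles $6\,\hat{\mathcal{B}}_{\ensuremath{D_4^{(2)}}}=\ensuremath{\sigma_x}\otimes M_3+\ensuremath{\sigma_y}\otimes M^\prime_3$ and confirms the expectation value $1$ on $\ket{\ensuremath{D_4^{(2)}}}$:

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
kron = lambda ops: reduce(np.kron, ops)

def ket(bits):
    v = np.zeros(16, dtype=complex)
    v[int(bits, 2)] = 1.0
    return v

# symmetric two-excitation Dicke state of four qubits
dicke = sum(ket(b) for b in
            ('0011', '0101', '0110', '1001', '1010', '1100')) / np.sqrt(6)

# three-qubit Mermin-type blocks of the decomposition quoted in the text
M3  = (kron([sx, sx, sx]) - kron([sz, sz, sx])
       - kron([sz, sx, sz]) - kron([sx, sz, sz]))
M3p = (kron([sy, sy, sy]) - kron([sz, sz, sy])
       - kron([sz, sy, sz]) - kron([sy, sz, sz]))

B = (np.kron(sx, M3) + np.kron(sy, M3p)) / 6.0   # = B_{D_4^(2)}
val = np.real(dicke.conj() @ B @ dicke)
print(round(val, 6))   # -> 1.0, i.e. lambda_max is attained
```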
Thus, by applying a kind of GHZ-argument \cite{GHZ1}, the bound for any local realistic theory can be determined to be $ |\langle \hat{\mathcal{B}}_{\ensuremath{D_4^{(2)}}} \rangle_{\mathrm{avg}} | \leq \frac{2}{3}$. Table \ref{tab:dickeviolation} shows the maximal expectation values of $\hat{\mathcal{B}}_{\ensuremath{D_4^{(2)}}}$ for the same set of four-qubit states as before. Considering the structure of $\hat{\mathcal{B}}_{\ensuremath{D_4^{(2)}}}$, omitting further correlation operators, for example one whole block $\ensuremath{\sigma_x} \otimes M_3$ (or $\ensuremath{\sigma_y} \otimes M^\prime_3$), leaves us with a four-qubit Mermin-type Bell operator. The corresponding Bell inequality is still violated by $\ket{\ensuremath{D_4^{(2)}}}$. However, it is no longer characteristic of $\ket{\ensuremath{D_4^{(2)}}}$ as it is maximally violated by the state $\ket{GHZ}_y=\frac{1}{\sqrt{2}}(\ket{RRRR}\pm\ket{LLLL})$ and the bi-separable state $\ket{BS}=\frac{1}{\sqrt{2}}(\ket{+}(\ket{RRR}\pm i\ket{LLL}))$ (where $\ket{\pm}=\frac{1}{\sqrt{2}}(\ket{0}\pm \ket{1})$ and $\ket{R,L}=\frac{1}{\sqrt{2}}(\ket{0}\pm i\ket{1})$ are the eigenstates of \ensuremath{\sigma_x}~and $\ensuremath{\sigma_y}$, respectively). It is a particular property of the Dicke state to have correlations in two planes (x-z- and y-z-plane) of the Bloch sphere, whereas a GHZ state, for instance, is correlated only in one plane (here the x-z-plane). This quite characteristic feature is reflected in the construction of $ \hat{\mathcal{B}}_{\ensuremath{D_4^{(2)}}}$. \begin{figure} \includegraphics[width=8.5cm,clip]{fig1.eps} \caption{Histograms of the four-photon coincidence statistics for the different measurement settings. Slots at the ordinate indicate different events for a particular basis setting: e.g. $0011$ for basis zzzz means detection of photons in the state $\ket{HHVV}$.
a) Statistics of the ten correlation measurements, required for the evaluation of the operator $\hat{\mathcal{B}}_{\Psi_{4}}$. b) Statistics of the eight correlation measurements, required for the evaluation of the operator $\hat{\mathcal{B}}_{\ensuremath{D_4^{(2)}}}$.} \label{fig:w2setup} \end{figure} Recently, an experiment has been performed to observe the state $\ket{\ensuremath{D_4^{(2)}}}$ \cite{Kie06}. In order to increase the state fidelity $\mathcal{F}$ via a higher degree of indistinguishability, here we reduced the filter bandwidth from 3~nm to 2~nm, resulting in $\mathcal{F}=0.92 \pm 0.02$ (compared to $\mathcal{F}=0.84 \pm 0.01$ in \cite{Kie06}). For the state's experimental analysis with the Bell operator (\ref{eqn:dop}) we find $v_{\rho_{\ensuremath{D_4^{(2)}}}}=0.90 \pm 0.04$ (see Fig.~\ref{fig:w2setup}b), from which we can conclude that it is genuine four-qubit entangled and cannot be, e.g., of W-, Cluster- or GHZ-type. Yet, this value is again just at the limit for discriminating against $\ket{\Psi_{4}}$. If one is sure about the structure of the state space, i.e., that in our case it is spanned by four qubits, we can equally well use other operators instead of the Bell operators. Let us first drop some of the correlations from $\hat{\mathcal{B}}_{\ensuremath{D_4^{(2)}}}$, e.g., the terms $(\sigma_x \otimes \sigma_x \otimes \sigma_x \otimes \sigma_x)$ and $(\sigma_y \otimes \sigma_y \otimes \sigma_y \otimes \sigma_y )$. The resulting discrimination operator $\hat{\mathcal{D}}_{\ensuremath{D_4^{(2)}}}$ is not a Bell operator anymore, but still has $\ket{\ensuremath{D_4^{(2)}}}$ as the only eigenstate with maximal eigenvalue $\lambda_{\mathrm{max}}=1$ (after proper normalization). Interestingly, as seen in Table \ref{tab:dicke6}, it introduces a new ordering of states with a larger separation between $\ket{\ensuremath{D_4^{(2)}}}$ and $\ket{\Psi_{4}}$.
With $v^{\mathcal{D}}_{\rho_{\ensuremath{D_4^{(2)}}}}=0.90 \pm 0.05$ we can discriminate against this state with better significance. Note that the reordering, which results in the GHZ state now having the second-highest eigenvalue, indicates that this operator analyzes the various states from a different point of view. This is quite plausible as it uses different correlations for the analysis. An even more radical change in the point of view is possible with the data we dropped above, i.e., $(\sigma_x \otimes \sigma_x \otimes \sigma_x \otimes \sigma_x)$ and $(\sigma_y \otimes \sigma_y \otimes \sigma_y \otimes \sigma_y )$. Relying on the particular symmetries of the Dicke state, from these measurements we can evaluate the discrimination operator $\hat{\mathcal{D}^\prime}_{\ensuremath{D_4^{(2)}}}=\frac{1}{6}((\frac{1}{2}\sum_k \sigma_x^k)^2+(\frac{1}{2}\sum_k \sigma_y^k)^2)$, where e.g.~$\sigma_{x/y}^3=\openone \otimes \openone \otimes \sigma_{x/y} \otimes \openone$ \cite{Tot05c}. Comparing the observed value $v_{\rho_{\ensuremath{D_4^{(2)}}}}^{\mathcal{D}^\prime}=0.96 \pm 0.013$ with the bounds for other states (Table \ref{tab:dicke6}), we see that we can discriminate our state against all states of the respective classes with only two settings. Analogous considerations can be applied for the construction of characteristic operators for other states \cite{Tot05b}, where the number of settings scales polynomially with the number of qubits compared to the exponentially increasing effort for state tomography.
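The value $1$ for $\hat{\mathcal{D}^\prime}_{\ensuremath{D_4^{(2)}}}$ on the Dicke state follows from collective-spin algebra: $\ket{\ensuremath{D_4^{(2)}}}$ is the $J=2$, $J_z=0$ state of four spins-$1/2$, so $\langle J_x^2+J_y^2\rangle=J(J+1)-m^2=6$. A short numerical confirmation (our own sketch):

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
id2 = np.eye(2, dtype=complex)

def local(op, k, n=4):
    # op acting on qubit k (0-based) of an n-qubit register
    ops = [id2] * n
    ops[k] = op
    return reduce(np.kron, ops)

# collective spin components J_x, J_y of four spins-1/2
Jx = 0.5 * sum(local(sx, k) for k in range(4))
Jy = 0.5 * sum(local(sy, k) for k in range(4))
Dp = (Jx @ Jx + Jy @ Jy) / 6.0

def ket(bits):
    v = np.zeros(16, dtype=complex)
    v[int(bits, 2)] = 1.0
    return v

dicke = sum(ket(b) for b in
            ('0011', '0101', '0110', '1001', '1010', '1100')) / np.sqrt(6)
val = np.real(dicke.conj() @ Dp @ dicke)
print(round(val, 6))   # -> 1.0, consistent with <J^2 - J_z^2> = 6
```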
\begin{table} \caption{\label{tab:dicke6} Alternative characteristic operators for $D_4^{(2)}$} \begin{ruledtabular} \begin{tabular}{c | c | c} State & $|\langle\hat{\mathcal{D}}_{\ensuremath{D_4^{(2)}}}\rangle|$ (SLOCC) & $|\langle\hat{\mathcal{D}^\prime}_{\ensuremath{D_4^{(2)}}}\rangle|$ (SLOCC) \\ \hline \ket{\ensuremath{D_4^{(2)}}} & 1.000 & 1.000 \\ \ket{GHZ} & 0.905 & 0.937\\ \ket{C} & 0.871 & 0.905\\ \ket{W} & 0.869 & 0.905\\ \ket{\Psi_{4}} & 0.869 & 0.901\\ $\ket{\textrm{bi-sep}}$ &0.750 &0.872\\ $\ket{\textrm{sep}} $& 0.192 & 0.139 \end{tabular} \end{ruledtabular} \end{table} In conclusion, here we showed that characteristic \mbox{(Bell-)}operators, i.e., operators for which only a particular state has the maximal expectation value, allow one to distinguish this state from states belonging to other classes of multi-partite entangled states. A simple, though not yet constructive, method to design discrimination operators is based on the correlations between local measurement settings that are typical for the respective quantum state. The low number of measurement settings significantly diminishes the effort compared with standard analysis. Employing characteristic symmetries and properties of the state under investigation can even further reduce the effort to a number of settings which scales polynomially with the number of qubits, thereby rendering the new method a truly efficient tool for the characterization of multi-partite entanglement. We thank D{.~}Bru\ss , M{.~}Horodecki, and M{.~}Wolf for stimulating discussions. We acknowledge the support by the DFG-Cluster of Excellence MAP, the DAAD/MNiSW exchange program, the EU Projects QAP and SECOQC. W.W. is supported by QCCC of the ENB and the Studienstiftung des dt. Volkes, W.L. by FNP.
\section{Introduction} \setcounter{equation}{0} The detailed investigation of $\alpha$-decay is a topic that leads to a thorough understanding of the application of quantum mechanics to atomic and nuclear physics, since it is necessary to have a good knowledge of metastable states \cite{B1,B2} and of the Jeffreys-Wentzel-Kramers-Brillouin (hereafter, JWKB) method applied to the Schr\"{o}dinger equation for stationary states \cite{B2,B3,B4}. In particular, in the description of the JWKB method, the introductory textbooks on quantum mechanics fail, even nowadays, to present the remarkable results obtained in Refs. \cite{B3,B4}, which are written in a very clear and pedagogical style. Our paper lies precisely within this framework. In section $2$ we outline some relevant features of the phase-integral method and of the associated derivation of connection formulae. In section $3$ we consider the basic equations for the elementary stationary theory of $\alpha$-decay. Section $4$ develops a more accurate model of the stationary theory in section 3, and improves the theoretical estimate of the lowest $s$-wave metastable state. The mean life of the radioactive nucleus is evaluated in section $5$ for Uranium. Section $6$ describes the phase-integral algorithm for evaluating stationary states by means of a suitable choice of freely specifiable base function \cite{B3,B4}, while concluding remarks are presented in section $7$. 
\section{Phase integral method and connection formulae} \setcounter{equation}{0} Both in one-dimensional problems and in the case of central potentials in three-dimensional Euclidean space, the Schr\"{o}dinger equation for stationary states leads eventually to a second-order ordinary differential equation having the form \cite{B1,B2,B3,B4} \begin{equation} \left[{d^{2}\over dz^{2}}+R(z)\right]\psi(z)=0, \; \; \; R(z)={2m \over \hbar^{2}}(E-V(z)), \label{(2.1)} \end{equation} where $V(z)$ is either the potential in one spatial dimension, or an effective potential that includes also the effects of angular momentum. The notation $z$ for the independent variable means that one can study Eq. (2.1) in the complex field, restricting attention to real values of $z$, denoted by $x$, only at a later stage. In the phase-integral method, one looks for two linearly independent, exact solutions of Eq. (2.1) in the form \begin{equation} \psi(z)=A(z) e^{\pm i w(z)}. \label{(2.2)} \end{equation} Since the Wronskian of two linearly independent solutions of Eq. (2.1) is a non-vanishing constant, while the Wronskian of the functions (2.2) is $-2iA^{2}{dw \over dz}$, for consistency one finds \begin{equation} A={{\rm constant}\over \sqrt{dw \over dz}}. \label{(2.3)} \end{equation} One can therefore write (up to a multiplicative constant) \begin{equation} \psi(z)={1 \over \sqrt{dw \over dz}}e^{\pm i w(z)} ={1 \over \sqrt{q(z)}} e^{\pm i \int^{z} q(\zeta)d\zeta}, \label{(2.4)} \end{equation} where $w(z) \equiv \int^{z}q(\zeta)d\zeta$ is said to be the {\it phase integral}, while $q(z)$ is the {\it phase integrand} \cite{B3,B4}. In quantum mechanical problems, we shall agree to call {\it classically forbidden} the open interval of the independent variable where the energy $E$ of the particle is strictly less than the potential $V$: $E < V$. Conversely, if $E>V$, we shall talk of {\it classically allowed} region. 
For the former, let $x_{1}$ be an internal point, where the stationary state takes the exact form (hereafter, the independent variable is always real) \begin{equation} \psi(x_{1})= c(x_{1}){1 \over \left | \sqrt{q(x_{1})} \right | } e^{|w(x_{1})|} +d(x_{1}){1 \over \left | \sqrt{q(x_{1})} \right | } e^{-|w(x_{1})|}, \label{(2.5)} \end{equation} $c$ and $d$ being real-valued functions. For the latter, let $x_{2}$ be an internal point, where the stationary state takes the exact form \begin{equation} \psi(x_{2})= a(x_{2}){1 \over \left | \sqrt{q(x_{2})} \right | } e^{i|w(x_{2})|} +b(x_{2}){1 \over \left | \sqrt{q(x_{2})} \right | } e^{-i|w(x_{2})|}. \label{(2.6)} \end{equation} From the detailed theory in section $2.4$ of Ref. \cite{B4}, one knows that (hereafter, the star denotes complex conjugation) \begin{equation} a(x_{2})=\left \{ {1 \over 2 \alpha} c(x_{1}) +i \alpha \Bigl[\gamma c(x_{1})-d(x_{1})\Bigr] \right \} e^{i \left({\pi \over 4}-\beta \right)}, \label{(2.7)} \end{equation} \begin{equation} b(x_{2})=a^{*}(x_{2}). \label{(2.8)} \end{equation} The exact expressions of the $\alpha,\beta,\gamma$ parameters are known (see Ref. \cite{B4} and our appendix A) and they are not particularly enlightening. However, upon setting \begin{equation} \chi(q(x)) \equiv q^{-{3 \over 2}}{d^{2}\over dx^{2}} q^{-{1 \over 2}}+{R(x) \over q^{2}}-1, \label{(2.9)} \end{equation} \begin{equation} \mu(x,x_{0}) \equiv \left | \int_{x_{0}}^{x} \mid \chi(q(x')) \; q(x') \mid dx' \right | , \label{(2.10)} \end{equation} if $\mu <<1$, one can use the approximate formulae \cite{B4} \begin{equation} \alpha \sim 1+ {\rm O}(\mu), \; \; \beta \sim {\rm O}(\mu), \; \; \gamma \sim {\rm O}(\mu)e^{2 |w(x_{1})|}. \label{(2.11)} \end{equation} If $\mu$ is much bigger than $e^{-2|w(x_{1})|}$, one then finds from the third of Eqs. (2.11) \begin{equation} \gamma \sim {\rm O}(\mu)e^{2|w(x_{1})|} >> 1.
\label{(2.12)} \end{equation} Since $\gamma$ is unknown and in general much bigger than $1$, one can obtain approximate formulae for $a(x_{2})$ and $b(x_{2})$ only when \begin{equation} \left | \gamma c(x_{1}) \right | << \left | d(x_{1}) \right | . \label{(2.13)} \end{equation} By virtue of the condition (2.12), the majorization (2.13) provides \begin{equation} \left | {c(x_{1}) \over d(x_{1})} \right | \leq e^{-2 |w(x_{1})|}. \label{(2.14)} \end{equation} If the right-hand side of (2.14) is much smaller than $1$, the exact formula (2.7) yields the remarkable approximate formula \begin{eqnarray} a(x_{2}) & \approx & \left[{1 \over 2} c(x_{1})-i d(x_{1}) \right] e^{i \left({\pi \over 4}-\mu \right)} \nonumber \\ & \approx & d(x_{1}) \left[{1 \over 2} {c(x_{1}) \over d(x_{1})}-i \right] e^{i {\pi \over 4}} \approx -i d(x_{1}) e^{i {\pi \over 4}} \nonumber \\ & = & d(x_{1}) e^{i \left({\pi \over 4}-{\pi \over 2}\right)} =d(x_{1}) e^{-i{\pi \over 4}}, \label{(2.15)} \end{eqnarray} while \begin{equation} b(x_{2})=a^{*}(x_{2}) \approx d(x_{1}) e^{i{\pi \over 4}}. \label{(2.16)} \end{equation} The exact formula (2.6) leads therefore to the approximate formula \begin{eqnarray} \psi(x_{2}) & \approx & d(x_{1})e^{-i{\pi \over 4}} \left | q^{-{1 \over 2}}(x_{2}) \right | e^{i |w(x_{2})|} +d(x_{1}) e^{i {\pi \over 4}} \left | q^{-{1 \over 2}}(x_{2}) \right | e^{-i |w(x_{2})|} \nonumber \\ &=& 2 d(x_{1}) \left | q^{-{1 \over 2}}(x_{2}) \right | \cos \left[|w(x_{2})|-{\pi \over 4} \right]. \label{(2.17)} \end{eqnarray} From Eqs. (2.5) and (2.17) one gets therefore the connection formula \begin{equation} c \left | q^{-{1 \over 2}}(x) \right | e^{|w(x)|} +d \left | q^{-{1 \over 2}}(x) \right | e^{-|w(x)|} \longrightarrow 2d \left | q^{-{1 \over 2}}(x) \right | \cos \left[ |w(x)| -{\pi \over 4} \right], \label{(2.18)} \end{equation} which holds provided that the condition (2.13) is fulfilled. In the literature, the case $c=0,d=1$ is often considered for simplicity \cite{B4}.
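The smallness condition $\mu << 1$ underlying the estimates (2.11) can be made concrete for the textbook linear turning point $R(x)=x$ with the first-order base function $q=R^{1 \over 2}$: Eq. (2.9) then gives $\chi=(5/16)x^{-3}$, so $\mu$ stays small as long as one keeps away from the zero of $R$. A brief numerical check (our own illustration, not taken from Refs. \cite{B3,B4}):

```python
import numpy as np

# chi (Eq. (2.9)) for the first-order base function q = R^{1/2}
# at a linear turning point R(x) = x; analytically chi = (5/16) x^{-3}
x = np.linspace(1.0, 10.0, 2001)
h = x[1] - x[0]
q = np.sqrt(x)
upp = np.gradient(np.gradient(q**-0.5, h), h)   # numerical (q^{-1/2})''
chi = q**-1.5 * upp + x / q**2 - 1.0
ok = np.allclose(chi[5:-5], 5.0 / (16.0 * x[5:-5]**3), atol=1e-4)
print(ok)   # -> True: chi decays like x^{-3} away from the turning point
```

The edge cells are excluded because the twice-applied finite difference is one-sided there.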
Remarkably, the connection formula (2.18) is one-directional. The work in Ref. \cite{B4} proves indeed that, if one is first given a stationary state in the classically allowed region having the form $$ \psi(x_{2})=2 \left | q^{-{1 \over 2}}(x_{2}) \right | \cos \left[|w(x_{2})| -{\pi \over 4} \right ], $$ the stationary state in the classically forbidden region does not reduce to the left-hand side of Eq. (2.18) with $c=0$ and $d=1$. This property is so important for our analysis that we prove it in appendix B, so that our paper becomes completely self-contained. \section{Elementary stationary theory of $\alpha$-decay} \setcounter{equation}{0} As is well known, experiments in which a sufficiently large number of $\alpha$-particles enter a chamber with a thin window and are collected show that $\alpha$-rays correspond to positively charged particles whose charge-to-mass ratio is the one of doubly ionized helium atoms: ${\rm He}^{++}$. Their identification with ${\rm He}^{++}$ is made possible because, when a gas of $\alpha$-particles produces light, it displays precisely the spectroscopic lines of ${\rm He}^{++}$. In the stationary theory of $\alpha$-emission, one regards the $\alpha$-particle as being pre-existent in the radioactive nucleus. Such a radioactive nucleus is therefore viewed as a metastable state\footnote{Recall from the theory of resonance scattering \cite{B1,B2} that there exists a metastable state corresponding to a trapping of the particle in the region where the potential makes its effect manifest.} consisting of the $\alpha$-particle and the residual nucleus. The force acting on the $\alpha$-particle is the joint effect of a short-range nuclear interaction and a long-range Coulomb repulsion. 
The long-range component is described by a potential $2(Z-2){e_{0}^{2}\over r}$, and the potential is assumed to obey the defining law \begin{equation} V(r)=-U_{0} \; \; {\rm if} \; \; r \in ]0,b[, \label{(3.1)} \end{equation} \begin{equation} V(r)=2(Z-2){e_{0}^{2}\over r} \; \; {\rm if} \; \; r \in ]b,\infty[. \label{(3.2)} \end{equation} The $s$-wave metastable states can be found by solving the equation (cf. Eq. (2.1)) \begin{equation} \left[{d^{2}\over dr^{2}}+{2m_{\alpha} \over \hbar^{2}}(E-V(r)) \right] y(r)=0, \label{(3.3)} \end{equation} in the three open intervals \begin{equation} I_{1}=]0,b[, \; \; I_{2}=]b,r_{1}[, \; \; I_{3}=]r_{1},\infty[, \label{(3.4)} \end{equation} where $r_{1}$ is the value of $r$ for which the energy of the $\alpha$-particle equals the Coulomb term, i.e. \begin{equation} r_{1}=2(Z-2){e_{0}^{2}\over E}. \label{(3.5)} \end{equation} The interval $I_{2}$ corresponds to values of the energy $E<V$, while the interval $I_{3}$ pertains to values of the energy $E>V$. On defining \begin{equation} p_{0} \equiv \sqrt{2m_{\alpha}(E+U_{0})}, \; \; {\bar p}(r) \equiv \sqrt{2m_{\alpha}(V(r)-E)}, \; \; p(r) \equiv \sqrt{2m_{\alpha}(E-V(r))}, \label{(3.6)} \end{equation} which are appropriate for $I_{1},I_{2},I_{3}$, respectively, one can write the solutions of Eq. (3.3) within such intervals (see comments after Eq.
(3.13)) in the form \cite{B2} \begin{eqnarray} \; & \; & y_{1}(r)= {C \over \sqrt{p_{0}}} \sin \left({p_{0}r \over \hbar}\right) \nonumber \\ &=& {C \over \sqrt{p_{0}}} \left \{ A_{1} \sin \left[{p_{0}(b-r)\over \hbar}-{\pi \over 4}\right] +A_{2} \cos \left[{p_{0}(b-r)\over \hbar}-{\pi \over 4}\right] \right \}, \label{(3.7)} \end{eqnarray} \begin{equation} y_{2}(r)={C \over \sqrt{{\bar p}(r)}} \left \{ A_{3} {\rm exp}\left[{1 \over \hbar} \int_{b}^{r} {\bar p}(r')dr' \right] +A_{4} {\rm exp}\left[-{1 \over \hbar} \int_{b}^{r} {\bar p}(r')dr' \right] \right \}, \label{(3.8)} \end{equation} \begin{eqnarray} y_{3}(r)&=& {C \over \sqrt{p(r)}} \left \{ A_{5} \cos \left[{1 \over \hbar} \int_{r_{1}}^{r} p(r')dr' -{\pi \over 4} \right] \right . \nonumber \\ &+& \left . A_{6} \sin \left[{1 \over \hbar} \int_{r_{1}}^{r} p(r')dr' -{\pi \over 4} \right] \right \}, \label{(3.9)} \end{eqnarray} where \cite{B2} \begin{equation} A_{1}=-\cos \left({p_{0}b\over \hbar}-{\pi \over 4}\right), \; \; A_{2}= \sin \left({p_{0}b\over \hbar}-{\pi \over 4}\right), \label{(3.10)} \end{equation} \begin{equation} A_{3}=-A_{1}, \; \; A_{4}={1 \over 2}A_{2}, \label{(3.11)} \end{equation} and, considering the parameter \begin{equation} \theta \equiv {\rm exp} \left[-{1 \over \hbar} \int_{b}^{r_{1}} {\bar p}(r)dr \right] << 1, \label{(3.12)} \end{equation} one finds the last two coefficients in the form \begin{equation} A_{5}={2A_{3}\over \theta}, \; \; A_{6}=-A_{4} \theta. \label{(3.13)} \end{equation} It should be stressed that only Eq. (3.7) provides an exact solution, in the open interval $I_{1}$, whereas Eqs. (3.8) and (3.9) provide approximate solutions in the open intervals $I_{2}$ and $I_{3}$, respectively. The coefficients $A_{3}$ and $A_{4}$ of Eq. (3.8) are obtained in Ref. \cite{B2} from a connection recipe, but not from the continuity condition of stationary states and their first derivative, unlike the work in Ref. \cite{Holstein}. The work in Ref. 
\cite{B2} points out that, for metastable states to occur, one has to maximize the derivative of the phase shift with respect to the energy, and this implies in turn that the ratio of values of the stationary state inside and outside the potential well must be maximized. For this purpose, the authors of Ref. \cite{B2} set to zero $A_{3}$ (and hence $A_{1}$ and $A_{5}$), finding therefore, for the lowest $s$-wave metastable state, \begin{equation} {p_{0}b \over \hbar}-{\pi \over 4}={\pi \over 2} \Longrightarrow p_{0}={3 \over 4} {\pi \hbar \over b}. \label{(3.14)} \end{equation} The coefficient of the $\sin$ function in (3.9) is then found to be $-{\theta \over 2}$, from Eqs. (3.13) and (3.14). When the condition (3.13) is fulfilled, the derivative of the phase shift takes the approximate form \cite{B2} \begin{equation} {d \over dp}\delta \approx \pi \int_{0}^{b} |y_{1}(r)|^{2}dr. \label{(3.15)} \end{equation} In order to evaluate the integral on the right-hand side of (3.15), one has to evaluate the $C$ coefficient in the formulae for stationary states in the three intervals. For this purpose, one looks first at the interval $I_{3}$, where (see Eq. (3.9)) \begin{eqnarray} {1 \over \hbar} \int_{r_{1}}^{r}p(r')dr' &=& {\sqrt{2 m_{\alpha}E} \over \hbar} \int_{r_{1}}^{r} \left(1-{2(Z-2)e_{0}^{2} \over Er'}\right)^{1 \over 2}dr' \nonumber \\ & \approx & {pr \over \hbar}-m_{\alpha} {2(Z-2)e_{0}^{2}\over \hbar p} \log \left ({2pr \over \hbar}\right) + ... . \label{(3.16)} \end{eqnarray} Thus, at very large values of $r$, Ref. 
\cite{B2} finds \begin{eqnarray} \; & \; & y_{3}(r) \approx - {C \theta \over 2 \sqrt{p(r)}} \sin \left[{1 \over \hbar} \int_{r_{1}}^{r} p(r')dr'-{\pi \over 4} \right] \nonumber \\ & \approx & {C \theta \over 2 \sqrt{p(r)}} \cos \left[{pr \over \hbar}-m_{\alpha} {2(Z-2)e_{0}^{2}\over \hbar p} \log \left({2pr \over \hbar}\right)+ \vartheta \right], \label{(3.17)} \end{eqnarray} where the explicit form of $\vartheta$ is here inessential, but we can say that it is linearly related to the phase shift \cite{B2}. On the other hand, from the general analysis of Coulomb type potentials, one knows that \cite{B2} \begin{equation} y_{3}(r) \approx \sqrt{2 \over \pi \hbar} \cos \left[{pr \over \hbar}-m_{\alpha} {2(Z-2)e_{0}^{2}\over \hbar p} \log \left({2pr \over \hbar}\right)+ \vartheta \right], \label{(3.18)} \end{equation} and hence by comparison of Eqs. (3.17) and (3.18) one finds \begin{equation} C=\sqrt{2 \over \pi \hbar} {2 \sqrt{p} \over \theta}. \label{(3.19)} \end{equation} This implies in turn that \begin{equation} y_{1}(r)=\sqrt{8 \over \pi \hbar} \sqrt{p \over p_{0}}{1 \over \theta} \sin \left({p_{0}r \over \hbar}\right), \label{(3.20)} \end{equation} and therefore Eq. (3.15) yields \begin{eqnarray} {d \over dp}\delta & \approx & {8 \over \hbar \theta^{2}} {p \over p_{0}} \int_{0}^{b} \sin^{2} \left({p_{0}r \over \hbar}\right)dr \nonumber \\ &=& {8 \over \hbar \theta^{2}} {p \over p_{0}} \left[{b \over 2}-{\hbar \over 4 p_{0}} \sin \left({2 p_{0}b \over \hbar}\right) \right] \nonumber \\ & \approx & {4b \over \hbar \theta^{2}}. \label{(3.21)} \end{eqnarray} The mean life of the radioactive nucleus is then \begin{equation} \tau = {\hbar \over 2} {d \over dE} \delta \approx {2 m_{\alpha} \over p} {1 \over \theta^{2}}. \label{(3.22)} \end{equation} Bearing in mind that the parameter $\theta$ defined in Eq. 
(3.12) is the exponential of minus the integral $$ {1 \over \hbar} \int_{b}^{r_{1}} \sqrt{2 m_{\alpha} \left[2(Z-2){e_{0}^{2}\over r} -E \right]} dr, $$ the stationary theory studied so far yields therefore the prediction \begin{eqnarray} \log(\tau) &=& \log \left({2b m_{\alpha} \over p}\right) -\log \theta^{2} \nonumber \\ & \approx & \log(\tau_{0}) +2(Z-2)\pi {e_{0}^{2}\over \hbar} \sqrt{2 m_{\alpha}\over E} \nonumber \\ &-& {8 \over \hbar} \sqrt{(Z-2)e_{0}^{2}m_{\alpha}b}. \label{(3.23)} \end{eqnarray} \section{A more accurate model} \setcounter{equation}{0} The careful reader might have noticed that the first line of Eq. (3.17) is at odds with the connection formula (2.18), whose left-hand side corresponds neatly to Eq. (3.8). In section $3$ we stressed indeed that Eq. (3.8) is not an exact solution of the stationary Schr\"{o}dinger equation in the interval $I_{2}$, because the coefficients $A_{3}=-A_{1}$ and $A_{4}={1 \over 2}A_{2}$ are obtained \cite{B2} from the connection recipe \begin{equation} \cos \left[|w(x)|-{\pi \over 4}\right] \rightarrow {1 \over 2}e^{-|w(x)|}, \; \; \sin \left[|w(x)|-{\pi \over 4} \right] \rightarrow -e^{|w(x)|}, \label{(4.1)} \end{equation} which contradicts the connection formulae in Refs. \cite{B3,B4}. On the other hand, the use of two consecutive connection formulae may be questionable as well. More precisely, on passing from the interval $I_{1}$ to the interval $I_{2}$, the work in section $3.12$ of Ref. \cite{B4} would suggest using, instead of Eq. (4.1), the one-directional connection formula \begin{equation} {a \over |\sqrt{q(x)}|}e^{i|w(x)|}+{b \over |\sqrt{q(x)}|}e^{-i|w(x)|} \rightarrow \left[a \; e^{-i{\pi \over 4}} +b \; e^{i{\pi \over 4}}\right] {e^{|w(x)|}\over |\sqrt{q(x)}|}, \label{(4.2)} \end{equation} which is valid when the absolute value $\left | a \; e^{-i{\pi \over 4}}+b \; e^{i{\pi \over 4}} \right |$ is not too small compared to $|a|+|b|$ \cite{B4}.
However, if in the interval $I_{2}$ only the increasing exponential $e^{|w(x)|}$ survives, the connection formula (2.18) would tell us that we should expect a vanishing stationary state in the interval $I_{3}$. But this conclusion would be incorrect, as is shown from Eq. (3.18), which does not rely upon any form of connection formula. The deeper underlying reason might be, that the rigorous connection formulae in Ref. \cite{B4} hold for adjacent intervals, but their repeated use for a sequence of adjacent intervals requires further work. For this reason, and inspired in part by the work in Ref. \cite{Holstein}, we consider hereafter the following method. In the interval $I_{1}$, we write simply the exact solution of the $s$-wave stationary Schr\"{o}dinger equation in the form displayed on the first line of Eq. (3.7). In the interval $I_{2}$, we look for a solution in the form (3.8), but with values of $A_{3}$ and $A_{4}$ not given by Eq. (3.11). We impose instead the continuity conditions for stationary state and its first derivative, which hold whenever the potential has a finite discontinuity \cite{B2,Espo1,Espo2}. Hence we require that \begin{equation} \lim_{r \to b}y_{1}(r)=\lim_{r \to b}y_{2}(r), \label{(4.3)} \end{equation} \begin{equation} \lim_{r \to b}y_{1}'(r)=\lim_{r \to b}y_{2}'(r). \label{(4.4)} \end{equation} Equations (4.3)-(4.4) are solved by (cf. Eq. (9) in Ref. \cite{Holstein}) \begin{equation} A_{3}={1 \over 2} \left \{ \sqrt{p_{0}\over {\bar p}(b)} \cos \left({p_{0}b \over \hbar}\right) +\sqrt{{\bar p}(b)\over p_{0}} \left[1+{\hbar \over 2} {{\bar p}'(b)\over ({\bar p}(b))^{2}} \right] \sin \left({p_{0}b \over \hbar}\right) \right \}, \label{(4.5)} \end{equation} \begin{equation} A_{4}={1 \over 2} \left \{ - \sqrt{p_{0}\over {\bar p}(b)} \cos \left({p_{0}b \over \hbar}\right) +\sqrt{{\bar p}(b)\over p_{0}} \left[1-{\hbar \over 2} {{\bar p}'(b)\over ({\bar p}(b))^{2}} \right] \sin \left({p_{0}b \over \hbar}\right) \right \}. 
\label{(4.6)} \end{equation} At this stage, if we follow the physical requirement of Ref. \cite{B2} and our section $3$ for obtaining $s$-wave metastable states, i.e., that the coefficient $A_{3}$ should vanish, we get the equation \begin{equation} \left[1+{\hbar \over 2} {{\bar p}'(b)\over ({\bar p}(b))^{2}} \right] \tan \left({p_{0}b \over \hbar}\right) =-{p_{0}\over {\bar p}(b)}. \label{(4.7)} \end{equation} For example, in the case of Uranium \cite{Holstein}, the right-hand side of Eq. (4.7) equals $-{9 \over 50}$, and bearing in mind that $$ 1 >> {\hbar \over 2} {{\bar p}'(b) \over ({\bar p}(b))^{2}}, $$ the approximate root of Eq. (4.7) is equal to \begin{equation} {p_{0}b \over \hbar} \equiv \rho \approx 2.963, \label{(4.8)} \end{equation} whereas the value (3.14) for ${p_{0}b \over \hbar}$ is approximately equal to $2.356$. We find therefore, for the energy of the lowest $s$-wave metastable state, \begin{equation} E={(p_{0})^{2}\over 2 m_{\alpha}}-U_{0}, \label{(4.9)} \end{equation} with $p_{0}$ given by Eq. (4.8). The work in Ref. \cite{Holstein} sets instead to zero the right-hand side of Eq. (4.7), which is not sufficiently accurate, at least in the case of Uranium. At this stage, if we define \begin{equation} \nu \equiv {p_{0}\over {\bar p}(b)}, \label{(4.10)} \end{equation} we find from Eq. (4.6) a good approximation for $A_{4}$ in the form \begin{equation} A_{4} \approx {1 \over 2} \left[ -\sqrt{\nu} \cos (\rho)+{1 \over \sqrt{\nu}} \sin(\rho)\right], \label{(4.11)} \end{equation} where $\rho$ solves the equation that ensures the vanishing of $A_{3}$: \begin{equation} \tan(\rho)=-\nu, \label{(4.12)} \end{equation} which implies ($\cos(\rho)$ is negative since $\rho$ is close to $\pi$ by virtue of Eq. (4.8)) \begin{equation} \cos(\rho)=-{1 \over \sqrt{1+\nu^{2}}}, \; \; \sin(\rho)={1 \over \sqrt{1+{1 \over \nu^{2}}}}.
\label{(4.13)} \end{equation} Hence we obtain \begin{equation} 2A_{4}={2 \over \sqrt{\nu + {1 \over \nu}}}=f(\nu), \label{(4.14)} \end{equation} where for Uranium, exploiting again the value of $\nu={9 \over 50}$ from Ref. \cite{Holstein}, we find \begin{equation} \sqrt{\nu + {1 \over \nu}}=\sqrt{2581 \over 450} \approx 2.3949. \label{(4.15)} \end{equation} \section{Mean life of the radioactive nucleus} \setcounter{equation}{0} By virtue of the connection formula (2.18), which can be used because the condition (2.13) is fulfilled, having set $A_{3}=0$ in section $4$, we can now write the stationary state in the interval $I_{3}$ in the approximate form \begin{equation} y_{3}(r) \approx 2A_{4}\theta {C \over \sqrt{p(r)}} \cos \left[{1 \over \hbar} \int_{r_{1}}^{r} p(r')dr' -{\pi \over 4} \right]. \label{(5.1)} \end{equation} If we require that such a function should take the form (3.18) at large $r$, we find by comparison that, up to a sign, \begin{equation} C=\sqrt{2 \over \pi \hbar} {\sqrt{p} \over \theta} {1 \over 2A_{4}} =\sqrt{2 \over \pi \hbar} {2 \sqrt{p}\over \theta} {1 \over 2 f(\nu)}. \label{(5.2)} \end{equation} By comparison of Eqs. (5.2) and (3.19), and bearing in mind Eq. (3.22), our prediction for the mean life of the radioactive nucleus reads as \begin{equation} \log(\tau) = \log \left({2bm_{\alpha}\over p}\right) -2 \log(\theta)-2 \log(2 f(\nu)), \label{(5.3)} \end{equation} whereas the work in Ref. \cite{B2} obtains (see Eq. (3.23)) \begin{equation} \log(\tau)=\log \left({2bm_{\alpha}\over p}\right) -2 \log(\theta). \label{(5.4)} \end{equation} In light of Eq. (4.15), the difference between our result (5.3) and the theoretical prediction (5.4) is the additive constant $-2\log(2f(\nu))$ which, for Uranium, equals $-1.025$.
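The numerical values quoted in sections $4$ and $5$ are easy to reproduce. The sketch below is a minimal check, outside the paper's formalism, of the root (4.8), the trigonometric identities (4.13), the value (4.15) and the additive constant entering (5.3); the only input is $\nu = 9/50$ for Uranium, taken from Ref. \cite{Holstein} as in the text.

```python
import math

nu = 9 / 50  # nu = p0 / pbar(b) for Uranium, value taken from the text

# Root of tan(rho) = -nu closest to pi (Eq. (4.12), i.e. Eq. (4.7)
# with the small hbar-correction neglected): rho = pi - arctan(nu)
rho = math.pi - math.atan(nu)

s = math.sqrt(nu + 1 / nu)       # the square root appearing in Eq. (4.15)
f_nu = 2 / s                     # f(nu) = 2*A4, Eq. (4.14)
shift = -2 * math.log(2 * f_nu)  # additive constant entering Eq. (5.3)

print(rho, s, shift)
```

Running this reproduces $\rho \approx 2.963$, $\sqrt{2581/450} \approx 2.3949$ and the constant $\approx -1.025$ quoted above.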
\section{Beyond $s$-wave} \setcounter{equation}{0} In the investigation of larger values of the angular momentum quantum number, it may be appropriate to exploit a further refined version of the nuclear potential, and also the potentialities of the phase-integral method with an unspecified base function \cite{B4}, a concept that we are going to define shortly. The models of current interest study the relative motion of the $\alpha$-particle and daughter nucleus in a central potential $U(r)$ built as follows. On considering the decay of nuclei surrounded by electrons, the $\alpha$-particle moves in the central potential \cite{DZ} \begin{equation} U(r)=U_{n}(r)+U_{C}(r), \label{(6.1)} \end{equation} where $U_{n}(r)$ is the nuclear potential well, and $U_{C}(r)$ is the effective Coulomb potential. At small distances, when the $\alpha$-particle moves inside the nucleus or under the barrier, the Coulomb contribution can be approximated (up to a correction \cite{Z,D} proportional to $r^{2}$) by \begin{equation} U_{C}(r) \approx U_{C}^{(b)}(r)-{\cal E}, \label{(6.2)} \end{equation} where $U_{C}^{(b)}(r)$ is the Coulomb potential for bare uniformly charged nuclei ($R$ being the nuclear radius): \begin{equation} U_{C}^{(b)}(r)=(Z-2){e^{2}\over R} \left(3-{r^{2}\over R^{2}}\right), \; \; r \in [0,R[ , \label{(6.3)} \end{equation} \begin{equation} U_{C}^{(b)}(r)=2(Z-2){e^{2}\over r}, \; \; r \in ]R,\infty[, \label{(6.4)} \end{equation} while ${\cal E}$ is the energy transferred to electrons. In non-metallic targets, $\cal E$ is the difference of electron binding energies of the parent and daughter atoms.
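The piecewise potential (6.3)-(6.4) is straightforward to code. The sketch below is purely illustrative: the values of $Z$, $e^{2}$ and $R$ are placeholders not taken from this paper, and the check simply confirms that the two branches match continuously at $r=R$, as they must for a uniformly charged nucleus, with $U_{C}^{(b)}(0)={3\over 2}\,U_{C}^{(b)}(R)$.

```python
def U_C_bare(r, Z=92, e2=1.44, R=7.4):
    """Bare-nucleus Coulomb potential, Eqs. (6.3)-(6.4).

    Z, e2 (= e^2, in MeV fm) and the nuclear radius R (in fm) are
    illustrative placeholder values, not taken from the text.
    """
    if r < R:
        return (Z - 2) * e2 / R * (3 - r**2 / R**2)  # Eq. (6.3), r < R
    return 2 * (Z - 2) * e2 / r                      # Eq. (6.4), r > R

# continuity at r = R and the 3/2 ratio between the values at 0 and at R
print(U_C_bare(7.4 - 1e-7), U_C_bare(7.4 + 1e-7), U_C_bare(0.0))
```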
Eventually, upon defining \begin{equation} \kappa^{2} \equiv {2mE \over \hbar^{2}}, \; \; \lambda \equiv l + {1 \over 2}, \label{(6.5)} \end{equation} \begin{equation} v(r) \equiv {2m \over \hbar^{2}} U(r), \label{(6.6)} \end{equation} stationary states are found by solving the stationary Schr\"{o}dinger equation \begin{equation} \left[{d^{2}\over dr^{2}}+\kappa^{2} -{\left(\lambda^{2}-{1 \over 4}\right) \over r^{2}} -v(r) \right]w_{\lambda}(\kappa;r)=0. \label{(6.7)} \end{equation} The solutions of Eq. (6.7) are discussed in Ref. \cite{DZ}, but here we would like to describe what new insight can be gained by using the phase-integral method, following Ref. \cite{B4}. For this purpose, we begin by remarking that, upon replacing $r$ with $z$, Eq. (6.7) is of the form (2.1). The latter is solved by $\psi(z)$ having the form (2.2) provided that the exact phase integrand $q(z)$ solves the differential equation \begin{equation} \chi(q(z)) \equiv q^{-{3 \over 2}}{d^{2}\over dz^{2}}q^{-{1 \over 2}} +{R(z) \over q^{2}}-1=0, \label{(6.8)} \end{equation} that is called the $q$-equation in Ref. \cite{B4}. Suppose now that it is possible to determine a function $Q: z \rightarrow Q(z)$ that is an approximate solution of the $q$-equation (6.8). This means that $\chi_{0}$, defined by \begin{equation} \chi_{0} \equiv \chi(Q(z))=Q^{-{3 \over 2}} {d^{2}\over dz^{2}}Q^{-{1 \over 2}} +{R(z)\over Q^{2}}-1, \label{(6.9)} \end{equation} must be much smaller than $1$. The work in Ref. 
\cite{B4} proves that the phase integrand $q(z)$ is related to the base function $Q(z)$ by the asymptotic expansion \begin{equation} q(z) \sim Q(z) \sum_{n=0}^{N} Y_{2n}, \label{(6.10)} \end{equation} where, on defining the new independent variable (a sort of approximate phase integral) \begin{equation} \zeta(z) \equiv \int^{z} Q(\tau)d\tau , \label{(6.11)} \end{equation} the first few $Y_{2n}$ functions are given explicitly by \cite{B4} \begin{equation} Y_{0}=1, \label{(6.12)} \end{equation} \begin{equation} Y_{2}={1 \over 2}\chi_{0}, \label{(6.13)} \end{equation} \begin{equation} Y_{4}=-{1 \over 8} \left(\chi_{0}^{2} +{d^{2}\over d \zeta^{2}}\chi_{0} \right), \label{(6.14)} \end{equation} \begin{equation} Y_{6}={1 \over 32} \left[2 \chi_{0}^{3} +5 \left({d \chi_{0}\over d\zeta}\right)^{2} +6 \chi_{0} {d^{2}\over d \zeta^{2}}\chi_{0} +{d^{4}\over d\zeta^{4}}\chi_{0}\right]. \label{(6.15)} \end{equation} By virtue of Eqs. (6.1)-(6.4), the function $R(z)$ can be written in the form \begin{equation} R(z)=-{\left(\lambda^{2}-{1 \over 4}\right)\over z^{2}} +{a_{-1}\over z}+a_{0}+a_{1}z+{\rm O}(z^{2}), \label{(6.16)} \end{equation} where the $a$'s are constants. Let us now assume that the square of the freely specifiable base function is given by \begin{equation} Q^{2}(z)={b_{-2}\over z^{2}}+{b_{-1}\over z} +b_{0}+b_{1}z+..., \label{(6.17)} \end{equation} where the $b$'s are suitable constants. For the first-order phase-integral approximation to be valid close to the origin, one requires finiteness of the integral \cite{B4} \begin{equation} \mu(z,z_{0}) \equiv \left | \int_{z_{0}}^{z} \Bigr | \chi_{0} Q(\tau) \Bigr | d\tau \right | \label{(6.18)} \end{equation} as $z$ approaches $0$. 
After re-expressing $\chi_{0}$ in (6.9) in terms of $Q^{2}$ according to \begin{equation} \chi_{0}={1 \over 16Q^{6}}\left[5 \left({dQ^{2}\over dz}\right)^{2}-4Q^{2} {d^{2}\over dz^{2}}Q^{2}\right] +{R(z) \over Q^{2}}-1, \label{(6.19)} \end{equation} a patient calculation shows that \cite{B4} \begin{eqnarray} \chi_{0}Q &=& -{(\lambda^{2}+b_{-2})\over \sqrt{b_{-2}} \; z} +{\left[\lambda^{2} +b_{-2}-{1 \over 2} \right]b_{-1} \over 2 (b_{-2})^{3 \over 2}} \nonumber \\ &+& {(a_{-1}-b_{-1}) \over \sqrt{b_{-2}}} +{\rm O}(z). \label{(6.20)} \end{eqnarray} Thus, finiteness of $\mu(z,z_{0})$ as $z$ approaches $0$ requires the elimination of the non-integrable term proportional to ${1 \over z}$ in Eq. (6.20). This is achieved if and only if \begin{equation} b_{-2}=-\lambda^{2}, \label{(6.21)} \end{equation} which implies in turn that \begin{equation} \lim_{z \to 0}z^{2}Q^{2}(z)=-\lambda^{2} =-\left(l+{1 \over 2} \right)^{2}, \label{(6.22)} \end{equation} as well as \cite{B4} \begin{equation} \lim_{z \to 0} z^{2} \Bigr[Q^{2}(z)-R(z) \Bigr] =-{1 \over 4}. \label{(6.23)} \end{equation} The most convenient choice of $Q^{2}(z)$ in order to obtain a stationary state that is regular at the origin at all orders of approximation is \cite{B4} \begin{equation} Q^{2}(z)=R(z)-{1 \over 4z^{2}}. \label{(6.24)} \end{equation} The advantage of the freely specifiable base function $Q(z)$ is that one has at one's disposal a new tool for finding approximate forms of the stationary states as the potential (6.1) is considered in greater detail, possibly including more involved terms. The JWKB method does not have such flexibility, and higher orders of the JWKB and phase-integral methods may differ in a substantial way \cite{B3,B4}. We find it appropriate to end this section with an original calculation suggested by Eqs. (6.10)-(6.24). For this purpose, we assume that we have chosen $Q^{2}(z)$ in the form (6.24), where $R(z)$ takes the form (6.16) with vanishing ${\rm O}(z^{2})$ term (for simplicity).
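As a quick numerical consistency check, the re-expression (6.19) can be compared against the definition (6.9) of $\chi_{0}$ for the choice (6.24) of $Q^{2}(z)$ at a sample point. In the sketch below, the values of $\lambda$ and of the $a$-coefficients are arbitrary illustrative numbers (chosen so that $Q^{2}(z)>0$ at the sample point), not taken from any physical potential.

```python
lam, am1, a0, a1 = 1.5, 2.0, 1.0, 0.5   # arbitrary illustrative values
z = 1.3                                  # sample point with Q^2(z) > 0

def R_of(t):       # R(z) as in Eq. (6.16), with the O(z^2) term dropped
    return -(lam**2 - 0.25) / t**2 + am1 / t + a0 + a1 * t

def Q2(t):         # Eq. (6.24): Q^2(z) = R(z) - 1/(4 z^2)
    return R_of(t) - 0.25 / t**2

# chi_0 from the definition (6.9): Q^(-3/2) (d^2/dz^2) Q^(-1/2) + R/Q^2 - 1,
# with the second derivative taken by central finite differences
h = 1e-4
Qm = lambda t: Q2(t) ** (-0.25)          # Q^(-1/2) = (Q^2)^(-1/4)
d2 = (Qm(z + h) - 2 * Qm(z) + Qm(z - h)) / h**2
chi0_def = Q2(z) ** (-0.75) * d2 + R_of(z) / Q2(z) - 1

# chi_0 from Eq. (6.19), using exact derivatives of Q^2
dQ2 = 2 * lam**2 / z**3 - am1 / z**2 + a1
d2Q2 = -6 * lam**2 / z**4 + 2 * am1 / z**3
chi0_619 = (5 * dQ2**2 - 4 * Q2(z) * d2Q2) / (16 * Q2(z)**3) \
           + R_of(z) / Q2(z) - 1

print(chi0_def, chi0_619)
```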
We then find the approximate phase integrand $q(z)$ for arbitrary values of the angular momentum quantum number in the form \begin{equation} q(z) \sim \left(-{\lambda^{2}\over z^{2}} +{a_{-1}\over z}+a_{0}+a_{1}z \right)^{1 \over 2} \left(1+{\chi_{0}\over 2}\right), \label{(6.25)} \end{equation} where, by virtue of (6.19) and (6.24), \begin{eqnarray} \; & \; & \chi_{0} = {1 \over 16} \left(-{\lambda^{2}\over z^{2}}+{a_{-1}\over z} +a_{0}+a_{1}z \right)^{-3} \nonumber \\ & \times & \left[-{4 \lambda^{4}\over z^{6}} +{12 \lambda^{2} a_{-1}\over z^{5}} +{\Bigr(24 \lambda^{2}a_{0}-3(a_{-1})^{2}\Bigr) \over z^{4}} \right . \nonumber \\ & + & \left . {(44 \lambda^{2} a_{1}-8a_{0}a_{-1}) \over z^{3}} -{18 a_{1} a_{-1}\over z^{2}}+5 (a_{1})^{2} \right] \nonumber \\ &+& {1 \over 4} \left(-\lambda^{2}+a_{-1}z+a_{0}z^{2} +a_{1}z^{3}\right)^{-1}. \label{(6.26)} \end{eqnarray} This formula yields in turn the asymptotic expansion of the stationary state $\psi(z)$ by means of Eq. (2.4). \section{Concluding remarks} \setcounter{equation}{0} Following the important findings in Refs. \cite{Geiger,Gamow}, there has been valuable work on $\alpha$-decay for almost a century by now \cite{Preston,Fermi,HW,Froman,Holstein,Z,D,AK,MANJU1,MANJU2,QI,DZ}. In particular, the work in Ref. \cite{Holstein} gives a very enjoyable presentation of four methods: the complex-eigenvalue method, the scattering-state method, the semiclassical path integral, and the instanton method. However, even the author of Ref. \cite{Holstein}, who was more familiar with the work in Ref. \cite{Park}, was not aware of the one-directional nature of connection formulae. Thus, our investigation is truly original, since it has applied the work of Refs. \cite{B3,B4} to a nuclear physics problem in which several generations of research workers were not aware of the proof of the one-directional nature of connection formulae. Our original result (4.8) for the lowest $s$-wave metastable state improves the values obtained in Refs. \cite{B2,Holstein}.
The authors of Ref. \cite{B2} find $\rho={3 \over 4} \pi$ because they use in Eq. (3.8) the coefficients $A_{3}$ and $A_{4}$ enforced by the wrong connection formulae (4.1). The work in Ref. \cite{Holstein} finds instead $\rho=\pi$ because it approximates the solutions of the equation $$ \tan (\rho)= - \nu $$ by integer multiples of $\pi$. Moreover, our formula (5.3) for the logarithm of the mean life of the radioactive nucleus yields an additive correction equal to $-1.025$ to the value obtained in Ref. \cite{B2}, and this prediction can be checked against observation. As far as we can see, our sources in the physics-oriented literature did excellent work but were misled by their lack of knowledge of the one-directional nature of connection formulae (cf. \cite{B3,B4}). The main open problem is now the application of the phase-integral perspective to the involved models of modern nuclear physics. Our section $6$ has prepared the ground for this purpose, describing in detail the logical steps that are in order. Our original result for the approximate form (6.25)-(6.26) of the phase integrand with arbitrary values of the angular momentum quantum number provides, as far as we can see, encouraging evidence in favour of new tools being available for investigating $\alpha$-decay from a phase-integral perspective. \section*{Acknowledgments} The author is grateful to the ``Ettore Pancini'' Physics Department of Federico II University for hospitality and support. \begin{appendix} \section{The $F$-matrix method} \setcounter{equation}{0} Let us assume that Eq. (2.1) is given, with the associated phase-integral functions (2.4). Following Ref. \cite{B4}, we consider the $a$-coefficients $a_{1}(z)$ and $a_{2}(z)$, which are uniquely determined by the requirement that any exact solution $\psi$ of Eq.
(2.1) can be written in the form \begin{equation} \psi(z)=a_{1}(z)f_{1}(z)+a_{2}(z)f_{2}(z), \label{(A1)} \end{equation} with first derivative given by \begin{equation} {d \psi \over dz}=a_{1}(z){df_{1}\over dz} +a_{2}(z){df_{2}\over dz}. \label{(A2)} \end{equation} For Eq. (A2) to be satisfied, we have to impose that \cite{B4} \begin{equation} f_{1}(z){da_{1}\over dz}+f_{2}(z){da_{2}\over dz}=0. \label{(A3)} \end{equation} Interestingly, Eq. (2.1) can be now replaced by a system of two coupled differential equations of first order, which can be written in matrix form as \cite{B4} \begin{equation} {d \over dz} \left(\begin{matrix} a_{1}(z) \\ a_{2}(z) \end{matrix}\right) =M(z) \left(\begin{matrix} a_{1}(z) \\ a_{2}(z) \end{matrix}\right), \label{(A4)} \end{equation} having defined (see Eq. (2.9)) \begin{equation} M(z)={i \over 2} \chi(z) q(z) \left(\begin{matrix} 1 & e^{-2iw(z)} \\ -e^{2i w(z)} & -1 \end{matrix}\right). \label{(A5)} \end{equation} Equation (A4) can be replaced by the integral equation \begin{equation} \left(\begin{matrix} a_{1}(z) \\ a_{2}(z) \end{matrix}\right) =\left(\begin{matrix} a_{1}(z_{0}) \\ a_{2}(z_{0}) \end{matrix}\right) +\int_{z_{0}}^{z}d\tau \; M(\tau) \left(\begin{matrix} a_{1}(\tau) \\ a_{2}(\tau) \end{matrix}\right), \label{(A6)} \end{equation} whose solution can be obtained in closed form by an iteration procedure that yields \begin{equation} \left(\begin{matrix} a_{1}(z) \\ a_{2}(z) \end{matrix}\right) = F(z,z_{0}) \left(\begin{matrix} a_{1}(z_{0}) \\ a_{2}(z_{0}) \end{matrix}\right), \label{(A7)} \end{equation} where $F(z,z_{0})$ is a $2 \times 2$ matrix given by a convergent series \cite{B4}. Such a matrix is the particular solution of the differential equation \begin{equation} {\partial \over \partial z}F(z,z_{0})=M(z)F(z,z_{0}), \label{(A8)} \end{equation} that is equal to the $2 \times 2$ unit matrix for $z=z_{0}$. 
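The matrix $M(z)$ in (A5) is traceless, so Liouville's formula gives ${\rm det}\,F(z,z_{0})=1$, and the flow property of linear systems gives the composition rule $F(z,z_{0})=F(z,z_{1})F(z_{1},z_{0})$; both are easy to illustrate numerically. In the sketch below, $\chi q$ and the phase $w$ are arbitrary smooth toy functions, not derived from any potential, since these two properties depend only on the structure of Eq. (A8).

```python
import math, cmath

def M(z):
    # Eq. (A5) with toy data: chi(z)*q(z) and the phase w(z) are
    # arbitrary smooth functions, purely for illustration
    chi_q = 0.1 * math.cos(z)
    w = z * z
    pre = 0.5j * chi_q
    return [[pre, pre * cmath.exp(-2j * w)],
            [-pre * cmath.exp(2j * w), -pre]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def shift(F, s, K):  # F + s*K, entrywise
    return [[F[i][j] + s * K[i][j] for j in range(2)] for i in range(2)]

def propagate(z0, z1, steps=2000):
    """F(z1, z0): solve dF/dz = M(z) F, F(z0) = identity, by classical RK4."""
    F = [[1.0 + 0j, 0j], [0j, 1.0 + 0j]]
    h = (z1 - z0) / steps
    for n in range(steps):
        z = z0 + n * h
        k1 = matmul(M(z), F)
        k2 = matmul(M(z + h / 2), shift(F, h / 2, k1))
        k3 = matmul(M(z + h / 2), shift(F, h / 2, k2))
        k4 = matmul(M(z + h), shift(F, h, k3))
        F = [[F[i][j] + h / 6 * (k1[i][j] + 2 * k2[i][j]
                                 + 2 * k3[i][j] + k4[i][j])
              for j in range(2)] for i in range(2)]
    return F

F10 = propagate(0.0, 0.5)   # F(z1, z0)
F21 = propagate(0.5, 1.0)   # F(z2, z1)
F20 = propagate(0.0, 1.0)   # F(z2, z0)
```

Up to the integration error, ${\rm det}\,F(z_{2},z_{0})=1$ and $F(z_{2},z_{0})=F(z_{2},z_{1})F(z_{1},z_{0})$, in agreement with the general relations stated next.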
The $F$-matrix satisfies the general relations \cite{B4} \begin{equation} {\rm det} F(z,z_{0})=1, \label{(A9)} \end{equation} \begin{equation} F(z,z_{0})=F(z,z_{1})F(z_{1},z_{0}), \label{(A10)} \end{equation} \begin{equation} F(z_{0},z)=[F(z,z_{0})]^{-1}= \left(\begin{matrix} F_{22}(z,z_{0}) & -F_{12}(z,z_{0}) \\ -F_{21}(z,z_{0}) & F_{11}(z,z_{0}) \end{matrix}\right). \label{(A11)} \end{equation} Useful estimates of the matrix elements of $F(z,z_{0})$ have been obtained in Ref. \cite{B3} under the assumption that the points $z$ and $z_{0}$ can be connected by a path in the complex $z$-plane along which the absolute value of $e^{iw(z)}$ increases monotonically, in the non-strict sense, in the direction from $z_{0}$ to $z$. Upon defining (cf. Eq. (2.10)) \begin{equation} \mu = \mu(z,z_{0}) \equiv \left | \int_{z_{0}}^{z} |\chi(q(z')) \; q(z')| dz' \right |, \label{(A12)} \end{equation} these {\it basic estimates} read as \cite{B4} \begin{equation} \left | F_{11}(z,z_{0})-1 \right | \leq {\mu \over 2} + \; {\rm higher} \; {\rm powers} \; {\rm of} \; \mu , \label{(A13)} \end{equation} \begin{equation} |F_{12}(z,z_{0})| \leq \left | e^{-2iw(z_{0})} \right | \left({\mu \over 2}+ \; {\rm higher} \; {\rm powers} \; {\rm of} \; \mu \right), \label{(A14)} \end{equation} \begin{equation} |F_{21}(z,z_{0})| \leq \left | e^{2iw(z_{0})} \right | \left({\mu \over 2}+ \; {\rm higher} \; {\rm powers} \; {\rm of} \; \mu \right), \label{(A15)} \end{equation} \begin{equation} |F_{22}(z,z_{0})-1| \leq {\mu \over 2} +\left | e^{2i[w(z)-w(z_{0})]} \right | \left({\mu^{2}\over 4}+ \; {\rm higher} \; {\rm powers} \; {\rm of} \; \mu \right). \label{(A16)} \end{equation} The parameters occurring in Eqs. 
(2.7) and (2.8) are real-valued and can be defined as follows in terms of the $F$-matrix \cite{B4}: \begin{equation} \alpha=\alpha(x_{1},x_{2})=|F_{11}(x_{1},x_{2})|, \label{(A17)} \end{equation} \begin{equation} \beta=\beta(x_{1},x_{2})=\pm {\rm arg} \; F_{11}(x_{1},x_{2}), \label{(A18)} \end{equation} \begin{equation} \gamma=\gamma(x_{1},x_{2})= {\rm Re} \left[{F_{21}(x_{1},x_{2}) \over F_{11}(x_{1},x_{2})} \right]. \label{(A19)} \end{equation} Strictly speaking, the possibility of writing $\gamma$ in the form (A19) results from the simple but non-obvious property, according to which \cite{B4} $$ F_{21}(x_{1},x_{2}) F_{11}^{*}(x_{1},x_{2}) \mp {i \over 2} $$ is real-valued. \section{One-directional nature of the connection formula (2.18)} \setcounter{equation}{0} Suppose that, upon setting $d=1$ on the right-hand side of Eq. (2.18), we are given a stationary state that, at a point $x_{2}$ of the classically allowed region, reads as \begin{eqnarray} \; & \; & \psi(x_{2})=2 \left | q^{-{1 \over 2}}(x_{2}) \right | \; \cos \left[|w(x_{2})|-{\pi \over 4} \right] \nonumber \\ &=& a(x_{2}) \left | q^{-{1 \over 2}}(x_{2}) \right | e^{i |w(x_{2})|} +b(x_{2}) \left | q^{-{1 \over 2}}(x_{2}) \right | e^{-i |w(x_{2})|}, \label{(B1)} \end{eqnarray} where $a(x_{2})=e^{-i{\pi \over 4}}$, $b(x_{2})=e^{i{\pi \over 4}}$. The stationary state at a point $x_{1}$ in the classically forbidden region reads therefore \begin{equation} \psi(x_{1})=c(x_{1}) \left|q^{-{1\over 2}}(x_{1})\right | e^{|w(x_{1})|}+d(x_{1}) \left | q^{-{1\over 2}}(x_{1}) \right| e^{-|w(x_{1})|}, \label{(B2)} \end{equation} where the technique of Ref. \cite{B4} yields the formulae (the approximate forms of the parameters $\alpha,\beta,\gamma$ being the ones given in our Eq. 
(2.11)) \begin{eqnarray} c(x_{1})&=& \alpha e^{\left[-i\left({\pi \over 4}-\beta \right)\right]} a(x_{2})+\alpha e^{\left[i \left({\pi \over 4}-\beta \right)\right]} b(x_{2}) \nonumber \\ &=& 2 \alpha \sin \beta, \label{(B3)} \end{eqnarray} \begin{eqnarray} d(x_{1})&=& \left(\alpha \gamma+{i \over 2 \alpha} \right) e^{-i \left({\pi \over 4}-\beta \right)} a(x_{2}) \nonumber \\ &+& \left(\alpha \gamma-{i \over 2 \alpha} \right) e^{i \left({\pi \over 4}-\beta \right)} b(x_{2}) \nonumber \\ &=& {\cos \beta \over \alpha} +2 \alpha \gamma \sin \beta. \label{(B4)} \end{eqnarray} By virtue of Eqs. (B2)-(B4), one finds \cite{B4} \begin{eqnarray} \psi(x_{1})&=& \left | q^{-{1 \over 2}}(x_{1}) \right | e^{-|w(x_{1})|} \nonumber \\ & \times & \left \{ 2 \alpha \sin \beta e^{2 |w(x_{1})|} +{\cos \beta \over \alpha} +2 \alpha \gamma \sin \beta \right \} . \label{(B5)} \end{eqnarray} By virtue of the approximate formulae (2.11) for $\alpha,\beta,\gamma$, the sum of terms within curly brackets on the second line of (B5) can never approach $1$, and hence the stationary state in Eq. (B5) can never approach the left-hand side of Eq. (2.18) with $c=0$ and $d=1$. Thus, the connection formula (2.18) is one-directional \cite{B4}. As is stressed in Ref. \cite{B4}, the connection formula has the same form for every order of the phase-integral approximation. \end{appendix}
\section{Introduction} We consider the time evolution of a physical system where a set of particles can aggregate into groups of two or more, called \emph{clusters}, and where these clusters can diffuse in space with a diffusion constant which depends on their size. If we represent space by an open bounded set $\Omega \subseteq \mathbb{R}^N$ with regular boundary, the initial-boundary problem for the concentrations $c_i = c_i(t,x) \geq 0$ of clusters with integer size $i \geq 1$ at position $x \in \Omega$ and time $t \geq 0$ is given by the discrete coagulation-fragmentation system of equations with spatial diffusion and homogeneous Neumann boundary conditions~: \begin{subequations} \label{eq:cfd} \begin{align} \label{eq:cfd-eq} \partial_t c_i - d_i \Delta_x c_i = Q_i + F_i &\quad \text{ for } x \in \Omega, t \geq 0, i \in \mathbb{N}^*, \\ \label{eq:cfd-boundary} \nabla_{\!x} c_i \cdot n = 0 &\quad \text{ for } x \in \partial \Omega, t \geq 0, i \in \mathbb{N}^*, \\ \label{eq:cfd-initial} c_i(0,x) = c_i^0(x) &\quad \text{ for } x \in \Omega, i \in \mathbb{N}^*, \end{align} \end{subequations} where $n = n(x)$ represents a unit normal vector at a point $x \in \partial \Omega$, $d_i$ is the diffusion constant for clusters of size $i$, and \begin{equation} \begin{split} \label{eq:defQF-3} Q_i \equiv Q_i[c] := Q_i^+ - Q_i^- :=&\ \frac{1}{2} \sum_{j=1}^{i-1} a_{i-j,j}\, c_{i-j}\, c_j - \sum_{j=1}^\infty a_{i,j}\, c_i\, c_j, \\ F_i \equiv F_i[c] := F_i^+ - F_i^- :=&\ \sum_{j=1}^\infty B_{i+j}\, \beta_{i+j,i}\, c_{i+j} - B_i\, c_i. \end{split} \end{equation} The parameters $B_i$, $\beta_{i,j}$ and $a_{i,j}$, for integers $i,j \geq 0$, represent the total rate $B_i$ of fragmentation of clusters of size $i$, the average number $\beta_{i,j}$ of clusters of size $j$ produced due to fragmentation of a cluster of size $i$, and the coagulation rate $a_{i,j}$ of clusters of size $i$ with clusters of size $j$. 
We refer to these parameters as \emph{the coefficients} of the system of equations. They represent rates, so they are always nonnegative; single particles do not fragment further, and mass should be conserved when a cluster fragments into smaller pieces, so one always imposes \begin{subequations} \label{eq:hyps0} \begin{align} \label{hyp:coefs-positive} a_{i,j}=a_{j,i} \geq 0, \qquad \beta_{i,j} \geq 0, &\qquad (i,j \in \mathbb{N}^*), \\ \label{hyp:coefs-positive-2} B_1 = 0, \qquad\ B_i \geq 0, &\qquad (i \in \mathbb{N}^*), \\ \label{hyp:frag-conserves-mass} i= \sum_{j=1}^{i-1} j\,\beta_{i,j}, &\qquad (i \in \mathbb{N}, i \geq 2). \end{align} \end{subequations} In fact, the last condition \eqref{hyp:frag-conserves-mass} implies the conservation of the total mass $\int_{\Omega} \sum_{i=1}^\infty i\,c_i\,dx$, which becomes obvious from the following formal \emph{fundamental identity} or \emph{weak formulation} of the coagulation and fragmentation operators: Consider a sequence of nonnegative numbers $\{c_i\}$, and define $Q_i$, $F_i$ as in eqs. \eqref{eq:defQF-3}, then, for any sequence of numbers $\varphi_i$, \begin{equation} \begin{split} \label{eq:fundamental-identity} \sum_{i=1}^\infty \varphi_i\, Q_i &= \frac{1}{2} \sum_{i=1}^\infty \sum_{j=1}^\infty a_{i,j}\, c_i\, c_j\, (\varphi_{i+j} - \varphi_i - \varphi_j), \\ \sum_{i=1}^\infty \varphi_i\, F_i &= - \sum_{i=2}^\infty B_i c_i \left( \varphi_i - \sum_{j=1}^{i-1} \beta_{i,j} \varphi_j \right). \end{split} \end{equation} As a (still formal) consequence for solutions $\{c_i\}$ of (\ref{eq:cfd}) -- (\ref{eq:defQF-3}), one can calculate the time derivative of the integral of the moment $\sum \varphi_i c_i$ to obtain \begin{equation} \label{eq:moment-derivative} \frac{d}{dt} \int_\Omega \sum_{i=1}^\infty \varphi_i c_i = \int_\Omega \sum_{i=1}^\infty \varphi_i (Q_i + F_i), \end{equation} since the integral of the diffusion part vanishes due to the homogeneous Neumann boundary condition. 
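The identities \eqref{eq:fundamental-identity} are exact (not merely formal) for a finite truncation of the system, and this is easy to check numerically. The following sketch is illustrative only: it uses random nonnegative data, sets $a_{i,j}=0$ whenever $i+j>N$ so that all sums are finite, and takes the mass-conserving choice $\beta_{i,j}=i/((i-1)j)$, which satisfies \eqref{hyp:frag-conserves-mass}.

```python
import random
random.seed(1)

N = 12   # truncation size, for illustration only

c   = [0.0] + [random.random() for _ in range(N)]   # concentrations c[1..N]
phi = [0.0] + [random.random() for _ in range(N)]   # test sequence phi[1..N]

# symmetric coagulation rates, truncated: a[i][j] = 0 whenever i + j > N
a = [[0.0] * (N + 1) for _ in range(N + 1)]
for i in range(1, N + 1):
    for j in range(i, N + 1):
        if i + j <= N:
            a[i][j] = a[j][i] = random.random()

# fragmentation: B[1] = 0, and beta(i, j) = i / ((i-1)*j), so that
# sum_{j=1}^{i-1} j * beta(i, j) = i  (mass conservation)
B = [0.0, 0.0] + [random.random() for _ in range(N - 1)]
beta = lambda i, j: i / ((i - 1) * j)

def Q(i):   # truncated coagulation operator, as in the definition of Q_i
    gain = 0.5 * sum(a[i - j][j] * c[i - j] * c[j] for j in range(1, i))
    loss = sum(a[i][j] * c[i] * c[j] for j in range(1, N + 1))
    return gain - loss

def F(i):   # truncated fragmentation operator F_i
    gain = sum(B[i + j] * beta(i + j, i) * c[i + j]
               for j in range(1, N - i + 1))
    return gain - B[i] * c[i]

# weak formulation: both identities should hold exactly (up to rounding)
lhs_Q = sum(phi[i] * Q(i) for i in range(1, N + 1))
rhs_Q = 0.5 * sum(a[i][j] * c[i] * c[j] * (phi[i + j] - phi[i] - phi[j])
                  for i in range(1, N + 1) for j in range(1, N + 1)
                  if i + j <= N)
lhs_F = sum(phi[i] * F(i) for i in range(1, N + 1))
rhs_F = -sum(B[i] * c[i] * (phi[i] - sum(beta(i, j) * phi[j]
                                         for j in range(1, i)))
             for i in range(2, N + 1))
```

With $\varphi_{i}=i$ the same computation confirms $\sum_i i\,Q_i = \sum_i i\,F_i = 0$, i.e. the formal mass conservation discussed next.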
By choosing $\varphi_i := i$ above and thanks to \eqref{hyp:frag-conserves-mass}, we have $\sum_{i=1}^\infty i\, Q_i = \sum_{i=1}^\infty i\, F_i = 0$, and the total mass is formally conserved: \begin{equation} \label{eq:mass-conservation} \norm{\rho (t, \cdot)}_{L^1} = \int_\Omega \sum_{i=1}^\infty i c_i(t,x) \,dx= \int_\Omega \sum_{i=1}^\infty i c_i^0(x) \,dx = \norm{\rho^0}_{L^1} \quad (t \geq 0). \end{equation} Our main aim in this work is to provide some new bounds on the regularity of weak solutions for system (\ref{eq:cfd}) -- \eqref{eq:defQF-3} by means of techniques developed in the context of reaction-diffusion equations \cite{citeulike:3798030,citeulike:3973601,PSch}, and to give three applications of those bounds, the main one being a rigorous proof (for almost all the coefficients for which this holds in the homogeneous case) of mass conservation \eqref{eq:mass-conservation}, and thus of the absence of gelation, a well-known phenomenon in coagulation-fragmentation models \cite{EMP02,ELMP}, where the formal conservation of mass is violated as clusters of infinite size are formed. \medskip In this paper we will work with the global weak solutions constructed in \cite{LM02} under the assumption \begin{equation} \label{eq:LM-condition} \lim_{j \to +\infty} \frac{a_{i,j}}{j} = \lim_{j \to +\infty} \frac{B_{i+j}\, \beta_{i+j,i}}{i+j} = 0, \qquad (\mathrm{for\ fixed} \ i \geq 1), \end{equation} a construction which was later extended in \cite{citeulike:3955115} to the case of $\Omega = \mathbb{R}^N$.
The notion of solution is the following, which we take from \cite{LM02}: \begin{dfn} \label{defi} A global weak solution $c = \{c_i\}_{i \geq 1}$ to \eqref{eq:cfd} -- \eqref{eq:defQF-3} is a sequence of functions $c_i: [0,+\infty) \times \Omega \to [0,+\infty)$ such that for each $T > 0$, \begin{gather} c_i \in \mathcal{C}([0,T]; L^1(\Omega)), \quad i \geq 1, \\ \sum_{j=1}^\infty a_{i,j} c_i c_j \in L^1([0,T]\times\Omega), \\ \sup_{t \geq 0} \int_\Omega \bigg[ \sum_{i=1}^\infty i c_i(t,x) \bigg]\, dx \leq \int_\Omega \bigg[ \sum_{i=1}^\infty i c^0_i(x) \bigg]\, dx, \end{gather} and for each $i \geq 1$, $c_i$ is a mild solution to the $i$-th equation in \eqref{eq:cfd-eq}, that is, \begin{equation} \label{eq:ci-solution} c_i(t) = e^{d_i A_1 t} c_i^0 + \int_0^t e^{d_i A_1(t-s)} Q_i[c(s)] \,ds, \quad t \geq 0, \end{equation} where $Q_i[c]$ is defined by \eqref{eq:defQF-3}, $A_1$ denotes the closure in $L^1(\Omega)$ of the unbounded linear operator $A$ of $L^2(\Omega)$ defined by \begin{equation} \label{eq:A-domain} D(A) := \{w \in H^2(\Omega) \mid \nabla w \cdot n = 0 \text{ on } \partial \Omega \}, \qquad Aw = \Delta w, \end{equation} and $e^{d_i A_1 t}$ is the $C_0$-semigroup generated by $d_i A_1$ in $L^1(\Omega)$. \end{dfn} The existence result of \cite{LM02} reads: \begin{thm}[Lauren\c cot-Mischler] \label{thm:LM-existence} Assume hypotheses \eqref{eq:hyps0} and \eqref{eq:LM-condition} on the coagulation and fragmentation coefficients. Assume also that \begin{equation*} d_i > 0 \quad \text{for all}\ i \geq 1, \end{equation*} and that the non-negative initial datum has finite mass: \begin{equation*} c_i^0 \geq 0 \ \text{on}\ \Omega \quad \text{ and } \quad\int_\Omega \sum_{i=1}^\infty i\,c^0_i < +\infty. \end{equation*} Then, there exists a global weak solution to the initial-boundary problem \eqref{eq:cfd} -- \eqref{eq:defQF-3} in the sense of Definition \ref{defi}. 
\end{thm} Under the extra assumptions on the diffusion constants and the initial data \begin{gather} \label{hyp:diffusion-bounded} 0 < \inf_i \{d_i\} =: d, \qquad D := \sup_i \{d_i\} < +\infty, \\ \label{hyp:initial-L2} \sum_{i=1}^\infty i c_i^0 \in L^2(\Omega), \end{gather} we are in fact able to prove the following $L^2$ bound on the mass density $\rho(t,x) := \sum_{i=1}^\infty i\,c_i(t,x)$: Denoting by $\Omega_T$ the cylinder $[0,T] \times \Omega$, we have the \begin{prp} \label{lem:mass-L2} Assume that \eqref{eq:hyps0}, \eqref{eq:LM-condition}, \eqref{hyp:diffusion-bounded} and \eqref{hyp:initial-L2} hold. Then, for all $T > 0$ the mass $\rho$ of a weak solution to system \eqref{eq:cfd} -- \eqref{eq:defQF-3} (given by Theorem \ref{thm:LM-existence}) lies in $L^2(\Omega_T)$ and the following estimate holds: \begin{equation} \label{imp} \|\rho\|_{L^2(\Omega_T)} \le \bigg( 1 + \frac{\sup_i \{d_i\}}{\inf_i \{d_i\}} \bigg)\, T\, \|\rho(0,\cdot)\|_{L^2(\Omega)} . \end{equation} \end{prp} \begin{rem} \label{rem:1.4} Note that the assumption \eqref{eq:LM-condition} is only included in Proposition \ref{lem:mass-L2} in order to ensure the existence of a weak solution via Theorem \ref{thm:LM-existence}. Without assumption \eqref{eq:LM-condition}, the bound \eqref{imp} would still hold for smooth solutions of a truncated version of system \eqref{eq:cfd} -- \eqref{eq:defQF-3} uniformly with respect to the truncation. See \cite{LM02} for the details of such a truncation. \end{rem} \medskip In addition to Proposition \ref{lem:mass-L2}, we give a new proof of an $L^1$ bound of the various coagulation and fragmentation terms: \begin{prp} \label{lem:L1-terms} We still assume that \eqref{eq:hyps0}, \eqref{eq:LM-condition}, \eqref{hyp:diffusion-bounded} and \eqref{hyp:initial-L2} hold. 
Then, for all $T > 0$ and $i \in \mathbb{N}^*$ all the terms $Q_i^+$, $Q_i^-$, $F_i^+$ and $F_i^-$ associated to a weak solution to system \eqref{eq:cfd}--\eqref{eq:defQF-3} (given by Theorem \ref{thm:LM-existence}) lie in $L^1(\Omega_T)$ with a bound which depends in an explicit way on the coagulation and fragmentation coefficients, the diffusion coefficients, and the initial data $c_i^0$. \end{prp} \begin{rem} The fact that the terms $Q_i^+$, $Q_i^-$, $F_i^+$ and $F_i^-$ associated to a weak solution are in $L^1(\Omega_T)$ is included in the definition of weak solution; the main content of Proposition \ref{lem:L1-terms} is the explicit dependence of the bounds on the coefficients and initial data, which can be used to obtain uniform estimates for approximated solutions as we show for instance in section \ref{sec:existence}. For details on the explicit $L^1$ bounds we refer to the proof of Proposition \ref{lem:L1-terms} in section \ref{nae}. \end{rem} \begin{rem}\label{rem:1.5} The $L^1$ bounds on $Q_i^+$, $Q_i^-$, $F_i^+$ and $F_i^-$ require the assumption \eqref{eq:LM-condition} only to ensure existence. They would hold at the formal level (that is, for smooth solutions of a truncated system) under the less stringent assumption \begin{equation}\label{bou} K_i := \sup_{j\in\mathbb{N}} \frac{B_{i+j}\, \beta_{i+j,i}}{i+j} < +\infty \qquad (i\in \mathbb{N}^*). \end{equation} Note that the above $L^1$ bound also holds when assumptions \eqref{eq:hyps0}, \eqref{eq:LM-condition} are replaced by the assumptions of Theorem \ref{thm:LM-existence} in \cite{LM02}, but the proof is then much more difficult as it requires an induction on $i$ which can be removed under our extra assumptions. 
\end{rem} \medskip In section \ref{sec:existence}, as a first application of the bounds obtained in Propositions \ref{lem:mass-L2} and \ref{lem:L1-terms}, we give a very simple proof of existence of weak solutions to \eqref{eq:cfd}--\eqref{eq:defQF-3} in dimension $N=1$ (that is, the result of Theorem \ref{thm:LM-existence} in dimension $1$) under the additional assumptions \eqref{eq:hyps0} and \eqref{eq:LM-condition}. \medskip Our main application of the Propositions \ref{lem:mass-L2} and \ref{lem:L1-terms} is however related to the problem of conservation of mass \eqref{eq:mass-conservation}, which holds rigorously for solutions to a truncated system (see e.g. \cite{LM02}). Nevertheless, it is an important issue in coagulation-fragmentation theory whether \eqref{eq:mass-conservation} holds for weak solutions of system \eqref{eq:cfd} -- \eqref{eq:defQF-3} itself, or if \eqref{eq:mass-conservation} is replaced by an inequality stating that mass is non-increasing in time. If at some time $t$ the identity \eqref{eq:mass-conservation} no longer holds, we say that gelation occurs, which means from a physical point of view that a macroscopic object has been created. \bigskip Our main result in section \ref{sec:mass} basically shows that (under the assumptions \eqref{eq:hyps0} and \eqref{eq:LM-condition}) gelation does not occur when the coagulation coefficients $a_{i,j}$ are at most linear and, moreover, slightly sublinear far off the diagonal $i=j$. More precisely, we prove mass conservation under the following condition on the coefficients $a_{i,j}$: \begin{hyp} \label{hyp:aij-almost-linear} There is some bounded function $\theta: [0,+\infty) \to (0,+\infty)$ such that $\theta(x) \to 0$ when $x \to +\infty$ and \begin{equation} \label{eq:aij-condition} a_{i,j} \leq (i+j)\, \theta(j/i) \quad \text{ for all } j \geq i. \end{equation} (Or equivalently, by symmetry, \begin{equation*} a_{i,j} \leq (i+j)\, \theta(\max\{j/i,i/j\}) \quad \text{ for all } i,j \geq 1.)
\end{equation*} \end{hyp} \begin{thm} \label{thm:mass-conservation} Assume that \eqref{eq:hyps0}, \eqref{eq:LM-condition}, \eqref{hyp:diffusion-bounded}, and \eqref{hyp:initial-L2} hold. Also, assume Hypothesis \ref{hyp:aij-almost-linear}. Then, the weak solution to the system \eqref{eq:cfd} given by Theorem \ref{thm:LM-existence} has a superlinear moment which is bounded on bounded time intervals; that is, there is some increasing function $C = C(T) > 0$, and some increasing sequence of positive numbers $\{\psi_i\}_{i \geq 1}$ with \begin{equation} \label{eq:phi-super} \lim_{i \to \infty} \psi_i = +\infty \end{equation} such that for all $T > 0$, \begin{equation} \label{eq:superlinear} \int_\Omega \sum_{i=1}^\infty i\, \psi_i c_i \leq C(T) \quad \text{ for all } t \in [0,T]. \end{equation} As a consequence, under these conditions all weak solutions given by Theorem \ref{thm:LM-existence} of \eqref{eq:cfd} conserve mass: \begin{equation} \label{eq:mc} \int_\Omega \rho_0(x) \,dx = \int_\Omega \rho(t,x) \,dx \quad \text{ for all } t \geq 0. \end{equation} \end{thm} \begin{rem}[Admissible coagulation coefficients] Let us comment on Hypothesis \ref{hyp:aij-almost-linear}. First note that Hypothesis \ref{hyp:aij-almost-linear} includes coefficients of the form $$ a_{i,j} \le \text{Cst} \, ( i^{\alpha}\, j^{\beta} + i^{\beta}\, j^{\alpha}) $$ for any $\alpha, \beta >0$ such that $\alpha + \beta \le 1$ (take $\theta(x) =x^{-\varepsilon}$ for $\varepsilon>0$ small enough). It is also satisfied when $$ a_{i,j} \le \text{Cst} \, \bigg( \frac{i}{\phi(i)} + \frac{j}{\phi(j)} \bigg), $$ where $x \mapsto \phi(x)$ is any positive strictly increasing function (for $x$ big enough), which goes to infinity at infinity, and such that $x \mapsto \frac{x}{\phi(x)}$ is also increasing (take $\theta(\lambda) = \phi(\lambda)^{-1/2}$).
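A quick numerical sanity check of the first family of examples above is straightforward. In the sketch below, the exponents are arbitrary with $\alpha+\beta\le 1$, and $\theta(x)=2\,x^{-\min(\alpha,\beta)}$ is one admissible choice (bounded and vanishing at infinity on $x\ge 1$, the only range used in \eqref{eq:aij-condition}); the grid of indices is of course purely illustrative.

```python
alpha, beta = 0.3, 0.6      # illustrative exponents with alpha + beta <= 1

def a(i, j):                 # a_{i,j} = i^alpha j^beta + i^beta j^alpha
    return i**alpha * j**beta + i**beta * j**alpha

def theta(x):                # one admissible theta on [1, infinity)
    return 2.0 * x**(-min(alpha, beta))

# check a_{i,j} <= (i+j) theta(j/i) for j >= i on a finite grid
worst = max(a(i, j) / ((i + j) * theta(j / i))
            for i in range(1, 60) for j in range(i, 3000, 7))
print(worst)
```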
All the examples $\phi = \log(1+ \cdot)$, $\phi = \log(1+\cdot)\circ\log(1+ \cdot)$, \dots , $\phi = \log(1+ \cdot)\circ\dots\circ\log(1+ \cdot)$ satisfy this condition. Likewise, condition (\ref{eq:aij-condition}) also holds when (for $i,j\ge 2$) \begin{equation}\label{sta} a_{ij} \le \text{Cst}\, \left( i \frac{R(\log j)}{\log i} + j \frac{R(\log i)}{\log j} \right) \end{equation} for some nondecreasing function $R$ such that $x \mapsto R(x)/x$ is nonincreasing and tends to $0$ when $x \to +\infty$. Note indeed that when (\ref{sta}) holds, \begin{equation}\label{stasta} \frac{a_{ij}}{i+j} \le \frac1{1+ j/i} \, \frac{R[\log(j/i) + \log i]}{\log i} + \frac{j/i}{1 + j/i} \frac{R[\log i]}{\log(j/i) + \log i}. \end{equation} Then, condition (\ref{eq:aij-condition}) is obtained by distinguishing the cases $i \ge j/i$ and $i \le j/i$ in both terms of the right hand side of (\ref{stasta}). \par Assumption (\ref{sta}) can even be replaced by $$ a_{ij} \le \text{Cst}\, \bigg(i\,\frac{R(\log(\log j))}{\log(\log i)} + {j}\,\frac{R(\log (\log i))}{\log(\log j)} \bigg), $$ with the same requirements on $R$ as previously. \par Note however that the linear coefficient $a_{ij} = i+j$ (or the coefficient $a_{ij} = \frac{i}{\log i}\, \log j + \frac{j}{\log j}\, \log i$) does not satisfy Hypothesis \ref{hyp:aij-almost-linear}, though one would expect that Theorem \ref{thm:mass-conservation} still holds for such coefficients. \end{rem} \bigskip Before introducing a generalised coagulation-fragmentation model and thus a third application of the Propositions \ref{lem:mass-L2} and \ref{lem:L1-terms}, let us briefly review previous results on existence theory and mass conservation for the coagulation-fragmentation system \eqref{eq:cfd}. With some further restrictions on the coefficients as compared to \cite{LM02}, existence of solutions by means of $L^\infty$ bounds on the $c_i$ has been proven in \cite{MR1454671, citeulike:3946307, citeulike:3955138, citeulike:3458165, citeulike:3946301}.
A different technique was used in \cite{citeulike:3955119} to prove that equation \eqref{eq:cfd} is well posed, locally in time, and globally in time when the space dimension $N$ is one, always assuming that the coagulation and fragmentation coefficients are bounded. In a recent work \cite{citeulike:3460338}, Hammond and Rezakhanlou considered equation \eqref{eq:cfd} without fragmentation, and gave $L^\infty$ bounds on moments of the solution (and as a consequence, $L^\infty$ bounds on the $c_i$). This implies uniqueness and mass conservation for some coagulation coefficients that grow at most linearly, as well as an alternative proof of the existence of $L^\infty$ solutions by a priori bounds on the $c_i$; for instance, if $\Omega = \mathbb{R}^N$ and the diffusion coefficients $d_i$ are nonincreasing and satisfy \eqref{hyp:diffusion-bounded}, and if moreover \begin{equation*} \sum_{i=1}^\infty i \, c_i^0 \in L^\infty(\mathbb{R}^N), \qquad \sum_{i=1}^\infty i^2 \, c_i^0 \in L^1(\mathbb{R}^N), \qquad a_{i,j} \leq C\, (i+j) \end{equation*} for some $C > 0$ and all $i,j \geq 1$, then they show that mass is conserved for all weak solutions of eq. \eqref{eq:cfd} without fragmentation. See \cite[Theorems 1.3 and 1.4]{citeulike:3460338} and \cite[Corollary 1.1]{citeulike:3460338} for more details. In the spatially homogeneous case, mass conservation is known for general data with finite mass and coagulation coefficients including the critical linear case $a_{i,j} \le \text{Cst} (i+j)$ (see, for instance, \cite{citeulike:2972710,citeulike:2972715}). \bigskip We finally give a third application of the Propositions \ref{lem:mass-L2} and \ref{lem:L1-terms}.
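As a purely illustrative aside (not used in any proof), condition \eqref{eq:aij-condition} of Hypothesis \ref{hyp:aij-almost-linear} is easy to check numerically for a concrete kernel. The following Python sketch, in which the kernel $a_{i,j}=\sqrt{ij}$ and the choice $\theta(x)=x^{-1/4}$ are ours, verifies the condition on a finite range, and also records why the critical linear kernel $a_{i,j}=i+j$ mentioned above is excluded:

```python
import math

# Hypothetical concrete kernel a_{i,j} = sqrt(i*j)  (alpha = beta = 1/2,
# alpha + beta = 1); the function theta below is our choice, not the paper's.
def a(i, j):
    return math.sqrt(i * j)

def theta(x):
    # bounded on (0, +infinity) and tends to 0 at +infinity
    return x ** (-0.25)

# Check a_{i,j} <= (i+j) * theta(j/i) for all 1 <= i <= j <= N.
N = 300
admissible = all(a(i, j) <= (i + j) * theta(j / i)
                 for i in range(1, N + 1) for j in range(i, N + 1))

# For the critical linear kernel a_{i,j} = i+j, the smallest admissible value
# of theta(j/i) is a_{i,j} / (i+j) = 1 for every ratio j/i, so no theta
# vanishing at infinity can work.
required_theta = [(i + j) / (i + j) for (i, j) in ((1, 10), (1, 1000))]

print(admissible, required_theta)
```

(The inequality holds here with room to spare, since $\sqrt{ij}\le (i+j)(i/j)^{1/4}$ reduces to $t^{1/4}\le 1+t$ for $t=i/j\le 1$.)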
As mentioned already in the Remarks \ref{rem:1.4} and \ref{rem:1.5}, Propositions \ref{lem:mass-L2} and \ref{lem:L1-terms} (although they hold without restrictions on the coagulation coefficients $a_{i,j}$ for smooth approximating solutions) do not really improve the theory of existence of weak solutions for the usual models of coagulation-fragmentation like \eqref{eq:cfd}, since the full assumption \eqref{eq:LM-condition} is needed when passing to the limit in the approximating solutions. At best they help provide simpler proofs in particular cases, as done in section \ref{sec:existence}. \medskip On the other hand, Propositions \ref{lem:mass-L2} and \ref{lem:L1-terms} are well suited for the existence theory of more exotic models, for instance, when fragmentation occurs due to binary collisions between clusters. Then, the break-up terms are quadratic, being proportional to the concentration of the two clusters which collide. This leads to coagulation-fragmentation models where all terms in the right hand side are quadratic. \medskip More precisely, we consider that clusters of size $k$ and $l$ collide with a rate $b_{k,l}\ge 0$, leading to fragmentation. As a consequence, clusters of size $i< \max\{k,l\}$ are produced, on average, at a rate $\beta_{i,k,l}\ge 0$ in such a way that the mass is conserved (that is, $\sum_{i < \max\{k,l\}} i \, \beta_{i,k,l} = k+l$). This leads to the following system (for $t\in \mathbb{R}_+$, $x\in \Omega$ a bounded regular open subset of $\mathbb{R}^N$): \begin{multline} \label{cf2} \partial_t c_i - d_i \,\Delta_x c_i = \frac12 \sum_{k+l=i} a_{k,l}\, c_k\,c_l - \sum_{k=1}^{\infty} a_{i,k} \,c_i\, c_k \\ +\frac{1}{2} \sum_{\substack{k,l \ge 1 \\ i<\max\{k,l\}}} b_{k,l}\, c_k\,c_l \, \beta_{i,k,l} - \!\sum_{k=1}^{\infty} b_{i,k}\,c_i\,c_k \qquad (i \in \mathbb{N}^*), \end{multline} together with the initial and boundary conditions \eqref{eq:cfd-boundary}, \eqref{eq:cfd-initial}.
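Before stating the assumptions, it may help to see the structure of the right hand side of (\ref{cf2}) in a small truncated setting. The Python sketch below is ours and purely illustrative: the truncation size, the kernels, and the fragmentation distribution $\beta_{i,k,l}$ (which sends all break-up mass to monomers, so that $\sum_{i<\max\{k,l\}} i\,\beta_{i,k,l}=k+l$) are hypothetical choices. It checks that the reaction terms formally conserve mass, i.e.\ $\sum_i i\, Z_i[c]=0$:

```python
import math
import random

N = 12  # hypothetical truncation size

# Coagulation kernel, zeroed beyond the truncation so that mass is
# conserved exactly by the truncated reaction terms.
def a(k, l):
    return math.sqrt(k * l) if k + l <= N else 0.0

# Collision kernel b_{k,l} (symmetric, with b_{1,1} = 0 as in (q2.5)).
def b(k, l):
    return 0.0 if k == l == 1 else 1.0 / (1.0 + abs(k - l))

# beta_{i,k,l}: all fragments are monomers, so sum_{i<max{k,l}} i*beta = k+l
# holds whenever max{k,l} >= 2 (and the pair (1,1) never collides).
def beta(i, k, l):
    return float(k + l) if i == 1 else 0.0

def rhs(c):
    """Reaction part Z_i[c] of (cf2) for c = {i: c_i}, i = 1..N."""
    Z = {}
    for i in range(1, N + 1):
        gain_coag = 0.5 * sum(a(k, i - k) * c[k] * c[i - k] for k in range(1, i))
        loss_coag = c[i] * sum(a(i, k) * c[k] for k in range(1, N + 1))
        gain_frag = 0.5 * sum(b(k, l) * c[k] * c[l] * beta(i, k, l)
                              for k in range(1, N + 1)
                              for l in range(1, N + 1)
                              if i < max(k, l))
        loss_frag = c[i] * sum(b(i, k) * c[k] for k in range(1, N + 1))
        Z[i] = gain_coag - loss_coag + gain_frag - loss_frag
    return Z

random.seed(0)
c = {i: random.random() for i in range(1, N + 1)}
Z = rhs(c)
mass_rate = sum(i * Z[i] for i in range(1, N + 1))
print(abs(mass_rate))  # vanishes up to round-off
```

The cancellation mirrors the computation behind \eqref{eq:mass-conservation}: the coagulation gain and loss, and likewise the collisional fragmentation gain and loss, carry the same mass.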
For this model, the set of assumptions (\ref{eq:hyps0}) is replaced by \begin{subequations} \label{eq:hyps02} \begin{align} \label{q1} &a_{i,j}=a_{j,i} \geq 0, &&\quad (i,j \in \mathbb{N}^*), \\ \label{q2} &\beta_{i,k,l} = \beta_{i,l,k} \geq 0, &&\quad (i,k,l \in \mathbb{N}^*, i < \max\{k,l\}), \\ \label{q2.5} &b_{i,k} = b_{k,i} \geq 0, \quad b_{1,1} = 0, &&\quad (i,k \in \mathbb{N}^*, i < k), \\ \label{q3} &\sum_{i < \max\{k,l\}} i\,\beta_{i,k,l}=k+l, &&\quad (k,l \in \mathbb{N}^*). \end{align} \end{subequations} Because of the quadratic character of the fragmentation terms, the inductive method for the proof of existence devised by Lauren\c cot-Mischler \cite{LM02} seems difficult to adapt in this case. The method presented in our first application can however be adapted, provided that the dimension is $N=1$ and that the following assumptions are made on the coefficients: \bigskip \begin{hyp}\label{hyp:quadratic} Assume \eqref{eq:hyps02}, and suppose that the diffusion coefficients are uniformly bounded above and below (eq. \eqref{hyp:diffusion-bounded}) and that the initial mass lies in $L^2(\Omega)$ (eq. \eqref{hyp:initial-L2}). In place of \eqref{eq:LM-condition} we assume further that \begin{align}\label{nas1} &\lim_{l \to \infty} \frac{a_{k,l}}l = 0, \qquad \lim_{l \to \infty} \frac{b_{k,l}}l = 0, &&({\hbox{ for fixed }} k \in \mathbb{N}^*),\\ \label{nas2} &\lim_{l \to \infty} \sup_{k} \left\{\frac{b_{k,l}}{kl} \, \beta_{i,k,l} \right\} = 0. 
&&( {\hbox{ for fixed }} i \in \mathbb{N}^*), \end{align} \end{hyp} We define a solution to \eqref{cf2} along the same lines as in Definition \ref{defi}: \begin{dfn} \label{dfn:cf2-solution} A global weak solution $c = \{c_i\}_{i \geq 1}$ to \eqref{cf2} with the boundary condition \eqref{eq:cfd-boundary} and the initial data \eqref{eq:cfd-initial} is a sequence of functions $c_i: [0,+\infty) \times \Omega \to [0,+\infty)$ such that for each $T > 0$, \begin{equation} c_i \in \mathcal{C}([0,T]; L^1(\Omega)), \quad i \geq 1, \end{equation} the four terms on the r.h.s. of \eqref{cf2} are in $L^1([0,T]\times\Omega)$, \begin{equation} \sup_{t \geq 0} \int_\Omega \bigg[ \sum_{i=1}^\infty i c_i(t,x) \bigg]\, dx \leq \int_\Omega \bigg[ \sum_{i=1}^\infty i c^0_i(x) \bigg]\, dx, \end{equation} and for each $i \geq 1$, $c_i$ is a mild solution to the $i$-th equation in \eqref{cf2}, that is, \begin{equation*} c_i(t) = e^{d_i A_1 t} c_i^0 + \int_0^t e^{d_i A_1(t-s)} Z_i[c(s)] \,ds, \quad t \geq 0, \end{equation*} where $Z_i[c]$ represents the right hand side of \eqref{cf2} and $A_1$, $e^{d_i A_1 t}$ are the same as in Definition \ref{defi}. \end{dfn} We are now able to prove the following theorem: \begin{thm}\label{th} Under Hypothesis \ref{hyp:quadratic} on the coefficients and initial data of the equation, and in dimension $N=1$, there exists a global weak solution to eq. (\ref{cf2}) satisfying $$ c_i \in C([0,T], L^1(\Omega)) \cap L^{3-\varepsilon}(\Omega_T)\qquad (\text{for all } i\in\mathbb{N}^*, T>0, \varepsilon>0), $$ for which the four terms appearing in the right hand side of (\ref{cf2}) lie in $L^1(\Omega_T)$. \end{thm} \begin{rem} The method of proof unfortunately does not seem to provide existence in dimensions $N \geq 2$. Dimension $N=2$ in fact appears critical, as it does not a priori allow a bootstrap in the heat equation with right hand side in $L^1$. A possible line of proof could follow \cite{GV} in the context of reaction-diffusion equations.
In higher dimensions $N\ge 3$, assuming additionally a detailed balance relation between coagulation and fragmentation, an entropy-based duality method as in \cite{citeulike:3798030} could be used to define global weak $L^2$ solutions (see also \cite{citeulike:3973601}). \end{rem} \bigskip Our paper is organised as follows: Section \ref{nae} is devoted to the proof of Propositions \ref{lem:mass-L2} and \ref{lem:L1-terms}. Then Sections \ref{sec:existence}, \ref{sec:mass}, and \ref{sec:quadratic} are each devoted to one of the three applications. In particular, Theorem \ref{thm:mass-conservation} is proven in Section \ref{sec:mass} first in a particular case (with a very short proof), and then in complete generality. Theorem \ref{th} is proven in Section \ref{sec:quadratic}. Finally, an Appendix is devoted to the proof of a duality lemma due to M. Pierre and D. Schmitt (cf.\ \cite{PSch}), which is the key to Proposition \ref{lem:mass-L2}. \section{A new a priori estimate}\label{nae} The solutions given in \cite{LM02} are constructed by approximating the system (\ref{eq:cfd})--(\ref{eq:defQF-3}) by a truncated system (the procedure consists in setting the coagulation and fragmentation coefficients to zero beyond a given finite size, and smoothing the initial data) for which very regular solutions exist. Then, uniform estimates for the solutions of this approximate system are proven. Finally, it is shown that these solutions have a subsequence which converges to a solution to the original system. In the proofs below it must be understood that the bounds are obtained for the truncated system (in a uniform way) and then transferred to the weak solution by a passage to the limit: the fact that this transfer can be done (in the case of the total mass) without replacing the equality by an inequality is the heart of our second application.
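To make the key duality estimate concrete, namely $\|\rho\|_{L^2(\Omega_T)} \le \big(1+\sup_i\{d_i\}/\inf_i\{d_i\}\big)\, T\, \|\rho(0,\cdot)\|_{L^2(\Omega)}$ for $\partial_t\rho - \Delta(M\rho)=0$ with $M$ pinched between the extreme diffusion coefficients, here is a crude explicit finite-difference sketch in Python on $\Omega=(0,1)$ with no-flux boundaries. All discretisation choices (grid, diffusivity $M$, initial datum) are ours and purely illustrative:

```python
import math

# Hypothetical 1-D example: rho_t = (M rho)_xx with zero-flux boundaries.
n, T = 50, 0.5
dx = 1.0 / n
x = [(j + 0.5) * dx for j in range(n)]            # cell centres on (0,1)
d0, d1 = 1.2, 2.8                                 # pinching bounds on M below

def M(t, xj):                                     # smooth diffusivity in [d0, d1]
    return 2.0 + 0.8 * math.sin(2 * math.pi * xj) * math.cos(3 * t)

rho = [1.0 + 0.5 * math.cos(2 * math.pi * xj) for xj in x]
mass0 = sum(rho) * dx                             # conserved by the scheme
dt = 0.4 * dx * dx / d1                           # explicit-scheme stability
steps = int(T / dt)
l2_st = 0.0                                       # accumulates ||rho||^2 over Omega_T

t = 0.0
for _ in range(steps):
    u = [M(t, x[j]) * rho[j] for j in range(n)]
    new = []
    for j in range(n):
        um = u[j - 1] if j > 0 else u[0]          # reflection = zero flux
        up = u[j + 1] if j < n - 1 else u[n - 1]
        new.append(rho[j] + dt * (up - 2 * u[j] + um) / (dx * dx))
    rho = new
    l2_st += sum(r * r for r in rho) * dx * dt
    t += dt

lhs = math.sqrt(l2_st)                            # discrete ||rho||_{L^2(Omega_T)}
rho0_l2 = math.sqrt(sum((1.0 + 0.5 * math.cos(2 * math.pi * xj)) ** 2 * dx
                        for xj in x))
rhs_bound = (1 + d1 / d0) * T * rho0_l2
print(lhs <= rhs_bound)
```

In this smooth example the discrete space-time $L^2$ norm sits well below the duality bound; this is only a sanity check of the statement, not a substitute for Lemma \ref{lem:nl-diffusion-L2-estimate}.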
\medskip We begin with the \begin{proof}[Proof of Proposition \ref{lem:mass-L2}] Using the fact that \begin{equation*} \partial_t \rho - \Delta ( M \rho ) = 0,\qquad \inf_{i\in\mathbb{N}^*} \{d_i\}\le M(t,x) := \frac{\sum_{i=1}^\infty d_i\, i\, c_i}{\sum_{i=1}^\infty i\, c_i} \le \sup_{i\in\mathbb{N}^*} \{d_i\}, \end{equation*} we can deduce thanks to a Lemma of duality (\cite[Appendix]{citeulike:3798030}) that $\rho \in L^2(\Omega_T)$, and more precisely that $$ \|\rho\|_{L^2(\Omega_T)} \le \bigg( 1 + \frac{\sup_i \{d_i\}}{\inf_i \{d_i\}} \bigg)\, T\, \|\rho(0,\cdot)\|_{L^2(\Omega)}, $$ for all $T>0$. For the sake of completeness, the Lemma is recalled with its proof in the Appendix (Lemma \ref{lem:nl-diffusion-L2-estimate}). \end{proof} \medskip We now turn to the \begin{proof}[Proof of Proposition \ref{lem:L1-terms}] For $F_i^-$, it is clear that \begin{equation*} F_i^- \leq B_i \, \rho \in L^2([0,T] \times \Omega) \subseteq L^1([0,T] \times \Omega), \end{equation*} thanks to Proposition \ref{lem:mass-L2}. For $F_i^+$ we use eq. \eqref{bou} to write \begin{equation} \label{eq:F+bound} F_i^+ \leq \sum_{j=1}^\infty \left( \frac{B_{i+j}\, \beta_{i+j,i}}{i+j} \right) (i+j)\, c_{i+j} \leq K_i \sum_{j=1}^\infty (i+j)\, c_{i+j} \leq K_i \,\rho, \end{equation} which is again in $L^2([0,T] \times \Omega)$, and hence in $L^1([0,T] \times \Omega)$. For the coagulation terms, we have, since each $c_i$ is less than $\rho$, \begin{equation} \label{eq:Q+bound} Q_i^+ \leq \frac{1}{4} \sum_{j=1}^{i-1} a_{i-j,j} \left(c_{i-j}^2 + c_j^2 \right) \leq \frac{1}{2} \rho^2 \left( \sum_{j=1}^{i-1} a_{i-j,j} \right), \end{equation} which is in $L^1([0,T] \times \Omega)$ as $\rho^2$ is, and the sum only has a finite number of terms. Finally, for $Q_i^-$ we use the fact that $Q_i^+$ and $F_i^+$ are already known to be integrable: Thus, from eq. \eqref{eq:cfd} integrated over $[0,T] \times \Omega$, \begin{multline*} \int_\Omega c_i(T,x) \,dx + \int_0^T\!\! 
\int_\Omega Q_i^-(t,x) \,dx \,dt \\ \leq \int_\Omega c_i^0(x) \,dx + \int_0^T\!\! \int_\Omega Q_i^+(t,x) \,dx \,dt + \int_0^T\!\! \int_\Omega F_i^+(t,x) \,dx \,dt. \end{multline*} This proves our result. \end{proof} \section{First application: a simplified proof of existence of solutions in dimension 1} \label{sec:existence} We begin this section with the following corollary of Proposition \ref{lem:L1-terms}, in the particular case of dimension $N=1$. \begin{lem} \label{lem:Linfty-bound} Assume that the dimension $N = 1$, and that (\ref{eq:hyps0}), (\ref{hyp:diffusion-bounded}), (\ref{hyp:initial-L2}) and (\ref{bou}) hold. Then, for all $T \geq 0$, $i \in \mathbb{N}^*$ the concentrations $c_i \in L^\infty([0,T] \times \Omega)$ (where $c_i$ are smooth solutions of a truncated version of (\ref{eq:cfd}) -- (\ref{eq:defQF-3}), the $L^\infty$ norm being independent of the truncation). \end{lem} \begin{proof}[Proof of Lemma \ref{lem:Linfty-bound}] We carry out a bootstrap regularity argument. Thanks to Proposition \ref{lem:L1-terms}, we know that (for all $i \in \mathbb{N}^*$) \begin{equation*} \left( \partial_t - d_i \Delta \right) c_i \in L^1([0,T] \times \Omega). \end{equation*} Using for example the results in \cite{citeulike:3797975}, this implies that for any $\delta > 0$, \begin{equation} \label{eq:ci-L3-} c_i \in L^{3-\delta}([0,T] \times \Omega) \qquad (i \in \mathbb{N}^*). \end{equation} Now, eq. \eqref{eq:ci-L3-} shows that $Q_i^+$ is actually more regular: from (the first inequality in) \eqref{eq:Q+bound}, \begin{equation} \label{eq:Qi+L3/2-} Q_i^+ \in L^{\frac{3}{2} - \frac{\delta}{2}} ([0,T] \times \Omega) \qquad \text{ for all } \delta > 0, i \in \mathbb{N}^*. \end{equation} In addition, we already knew from eq. \eqref{eq:F+bound} that (for all $i \in \mathbb{N}^*$) \begin{equation} \label{eq:F+L2} F_i^+ \in L^2([0,T] \times \Omega), \end{equation} [for which we do not need to assume that the space dimension $N$ is $1$]. 
Consequently, omitting the negative terms (for all $i \in \mathbb{N}^*$, $\delta>0$), we can find $h_i$ such that \begin{equation*} \left( \partial_t - d_i \Delta \right) c_i \leq h_i \in L^{\frac{3}{2} - \frac{\delta}{2}} ([0,T] \times \Omega). \end{equation*} As the $c_i$ are positive, this implies that \begin{equation*} c_i \in L^p([0,T] \times \Omega) \quad \text{ for all } p \in [1, +\infty[, i \in \mathbb{N}^*. \end{equation*} Again from \eqref{eq:Q+bound}, \begin{equation*} Q_i^+ \in L^{p} ([0,T] \times \Omega) \quad \text{ for all } p \in [1, +\infty[, i \in \mathbb{N}^*. \end{equation*} From this and \eqref{eq:F+L2}, we can find $h_i$ such that \begin{equation*} \left( \partial_t - d_i \Delta \right) c_i \leq h_i \in L^2 ([0,T] \times \Omega), \end{equation*} which implies in turn that $c_i \in L^\infty([0,T] \times \Omega)$ (for all $i \in \mathbb{N}^*$). \end{proof} \medskip We can now give a short proof of Theorem \ref{thm:LM-existence} in dimension $1$ (and under the extra assumptions \eqref{hyp:diffusion-bounded}, (\ref{hyp:initial-L2})). Recall that a proof for any dimension can be found in \cite{LM02}. \medskip \begin{proof}[Short proof of Theorem \ref{thm:LM-existence} in 1D under the assumptions \eqref{hyp:diffusion-bounded} and \eqref{hyp:initial-L2}]\ \\ Consider a sequence $c_i^M$ of (regular) solutions to a truncated version of system \eqref{eq:cfd} -- (\ref{eq:defQF-3}). Thanks to Lemma \ref{lem:Linfty-bound}, we know that for each $i\in \mathbb{N}^*$, $ \sup_M \norm{c_i^M}_{L^\infty(\Omega_T)} < + \infty$. Then (for each $i\in \mathbb{N}^*$) there is a subsequence of the $(c_i^M)_{M\in\mathbb{N}}$ (which we still denote by $(c_i^M)_{M\in\mathbb{N}}$), and a function $c_i \in L^\infty(\Omega_T)$, such that \begin{equation} \label{eq:weak-star-conv} c_i^M \overset{*}{\rightharpoonup} c_i \quad \text{weak-$*$ in } L^\infty (\Omega_T).
\end{equation} Using Proposition \ref{lem:L1-terms}, we also see that (for any fixed $i\in \mathbb{N}^*$), the $L^1(\Omega_T)$ norms of $Q_i^{+,M}$, $Q_i^{-,M}$, $F_i^{+,M}$, $F_i^{-,M}$ (the coagulation and fragmentation terms associated to $\{c_i^M\}$) are bounded independently of $M$. Using eq. \eqref{eq:cfd-eq} and the properties of the heat equation, one sees that for each $i\in \mathbb{N}^*$, the sequence $\{c_i^M\}$ lies in a strongly compact subset of $L^1(\Omega_T)$. Hence, by renaming our subsequence again, we may assume that \begin{equation} \label{eq:L1-conv} c_i^M \to c_i \quad \text{ in } L^1 (\Omega_T) \text{ strong }, \text{ for all } i \in \mathbb{N}^*. \end{equation} In order to prove that $\{c_i\}$ is indeed a solution to eq. \eqref{eq:cfd} -- \eqref{eq:defQF-3}, let us prove that all terms $F_i^{+,M}$, $F_i^{-,M}$, $Q_i^{+,M}$, $Q_i^{-,M}$ converge to the corresponding expressions for $c_i$, which we denote by $F_i^{+}$, $F_i^{-}$, $Q_i^{+}$, $Q_i^{-}$, as usual. \begin{enumerate} \item Positive fragmentation term: for each fixed $i$, the sum \begin{equation*} F_i^{+,M} = \sum_{j=1}^\infty B_{i+j}\, \beta_{i+j,i}\, c^M_{i+j} \end{equation*} converges to $F_i^+$ in $L^1(\Omega_T)$ because the tails of the sum converge to 0 uniformly in $M$ (this is due to hypothesis \eqref{eq:LM-condition}): \begin{align*} \int_0^T\!\!\int_{\Omega} \bigg| \sum_j B_{i+j} \, \beta_{i+j,i} (c_{i+j}^M - c_{i+j}) \bigg|\, dx dt \le& 2 \left( \sup_{j \ge J_0} \left| \frac{ B_{i+j} \, \beta_{i+j,i}}{i+j} \right| \right)\, \sup_M \bigg\| \sum_{l \ge 1} l\, c_l^M \bigg\|_{L^1(\Omega_T)}\\ &+ \left( \max_{j \le J_0} B_{i+j} \, \beta_{i+j,i} \right) \sum_{j \le J_0} \| c_{i+j}^M - c_{i+j} \|_{L^1(\Omega_T)} . \end{align*} \item The negative fragmentation term is just a multiple of $c_i^M$, so the convergence in $L^1(\Omega_T)$ is given by \eqref{eq:L1-conv}. \item For each fixed $i$, the positive coagulation term is a finite sum of terms of the form $a_{i,j} c_i^M c_j^M$. Thanks to \eqref{eq:weak-star-conv} and \eqref{eq:L1-conv}, this converges to $a_{i,j} c_i c_j$ in $L^1(\Omega_T)$.
\item The negative coagulation term is \begin{equation*} Q_i^{-,M} = c_i^M \sum_{j=1}^\infty a_{i,j}\, c_j^M. \end{equation*} Since $c_i^M$ converges to $c_i$ weak-$*$ in $L^\infty(\Omega_T)$, it is enough to prove that $\sum_{j=1}^\infty a_{i,j}\, c_j^M$ converges to $\sum_{j=1}^\infty a_{i,j}\, c_j$ strongly in $L^1(\Omega_T)$. Observing that $$ \int_0^T\!\!\int_{\Omega} \bigg| \sum_j a_{i,j} (c_{j}^M - c_{j}) \bigg|\, dx dt \le 2 \left( \sup_{j \ge J_0} \left| \frac{ a_{i,j}}{j} \right| \right)\, \sup_M \bigg\| \sum_{l \ge 1} l\, c_l^M \bigg\|_{L^1(\Omega_T)} + \left( \max_{j \le J_0} a_{i,j} \right) \sum_{j \le J_0} \| c_{j}^M - c_{j} \|_{L^1(\Omega_T)} , $$ we see thanks to (\ref{eq:LM-condition}) and (\ref{eq:L1-conv}) that this convergence indeed holds. \end{enumerate} \end{proof} \section{Second application: mass conservation} \label{sec:mass} We begin this section with a very short proof of Theorem \ref{thm:mass-conservation} in a particular case in order to show how estimate \eqref{imp} works. More precisely, we consider the pure coagulation case with $a_{i,j} = \sqrt{i\,j}$ and $B_i=0$ (no fragmentation), and with initial data additionally satisfying $\int_\Omega \sum_{i=1}^{\infty} i\, \log i \, c_i(0,x) \, dx < +\infty$ (which is slightly more stringent than only assuming finite initial mass). \medskip Then, using the weak formulation \eqref{eq:fundamental-identity} with $\varphi_i=\log(i)$ (and remembering that $\log(1 + x) \le \text{Cst}\, \sqrt{x}$) \begin{align} \frac{d}{dt} \int_{\Omega}\sum_{i=1}^{\infty} i\, \log i \, c_i \, dx &= \int_{\Omega} \sum_{i=1}^{\infty} \sum_{j=1}^{\infty} \sqrt{ij} \, c_i\, c_j \left( i\, \log (1+\frac{j}{i}) + j\,\log (1+\frac{i}{j})\right) dx \nonumber\\ &\le 2 \int_{\Omega} \sum_{i=1}^{\infty} \sum_{j=1}^{\infty} i\,j\, c_i\, c_j\, dx \le 2 \int_{\Omega} \rho(t,x)^2\, dx .
\label{eq:mc-simpleproof} \end{align} As a consequence, we have for all $T>0$ \begin{equation*} \int_{\Omega} \sum_{i=1}^{\infty} i\, \log i \, c_i(T,x) \, dx \le \int_{\Omega} \sum_{i=1}^{\infty} i\, \log i \, c_i(0,x) \, dx + 2 \int_0^T\!\!\int_{\Omega} \rho(t,x)^2\, dxdt, \end{equation*} which ensures the propagation of the moment $\int_\Omega \sum_{i=1}^{\infty} i\, \log i \, c_i(\cdot,x) dx$, and therefore gives a rigorous proof of conservation of the mass for weak solutions of the system: no gelation occurs. Our general result is obtained through a refinement of this argument under Hypothesis \ref{hyp:aij-almost-linear}. Before giving the proof of Theorem \ref{thm:mass-conservation} we need two technical lemmas, which will substitute for the intermediate step in \eqref{eq:mc-simpleproof}. \begin{lem} \label{lem:psi-slowly-growing} Let $\{\mu_i\}_{i \geq 1}$ and $\{\nu_i\}_{i \geq 1}$ be sequences of positive numbers such that $\{\mu_i\}$ is bounded, \begin{equation*} \sum_{i=1}^\infty \mu_i = +\infty \quad \text{ and } \quad \lim_{i \to +\infty} \nu_i = +\infty. \end{equation*} Then we can find a sequence $\{\xi_i\}_{i \geq 1}$ of nonnegative numbers such that \begin{gather*} \sum_{i=1}^\infty \xi_i = +\infty, \\ \xi_i \leq \mu_i \quad \text{ and } \quad \psi_i := \sum_{j=1}^i \xi_j \leq \nu_i \quad \text{ for all } i \geq 1. \end{gather*} \end{lem} \begin{proof} We may assume that $\nu_i$ is nondecreasing, for otherwise we can consider $\tilde{\nu}_i := \inf_{j \geq i} \{ \nu_j \}$ instead of $\nu_i$. Then, in order to find $\xi_i$ it is enough to define recursively $\xi_0 := 0$ and, for $i \geq 1$, \begin{equation*} \xi_i := \begin{cases} \mu_i & \text{ if } \mu_i + \sum_{j=0}^{i-1} \xi_j \leq \nu_i, \\ 0 & \text{ otherwise}. \end{cases} \end{equation*} By construction, $\xi_i \leq \mu_i$ for all $i \geq 1$, and also $\sum_{j=1}^i \xi_j \leq \nu_i$ for $i \geq 1$, as we are assuming $\{\nu_i\}$ nondecreasing.
To see that $\{\xi_i\}$ cannot be summable, suppose otherwise that $\sum_{i=1}^\infty \xi_i = S < +\infty$. Take an upper bound $M > 0$ of $\{\mu_i\}$, and choose an integer $k$ such that $\nu_i \geq S + M$ for all $i \geq k$. Then, by definition, \begin{equation*} \xi_i = \mu_i \quad \text{ for all } i \geq k, \end{equation*} which implies that $\{\xi_i\}$ is not summable, as $\{\mu_i\}$ is not, and gives a contradiction. \end{proof} \begin{lem} \label{lem:choose-psi-i} Assume \eqref{eq:aij-condition}. There is a nondecreasing sequence of positive numbers $\{\psi_i\}_{i \geq 1}$ such that $\psi_i \to +\infty$ when $i \to +\infty$, and \begin{equation} \label{eq:less-than-ij} a_{i,j} (\psi_{i+j} - \psi_i) \leq C j \quad \text{ for all } i,j \geq 1, \end{equation} for some constant $C > 0$. In addition, for a given sequence of positive numbers $\lambda_i$ with $\lim_{i \to +\infty} \lambda_i = +\infty$, we can choose $\psi_i$ so that $\psi_i \leq \lambda_i$ for all $i$. \end{lem} \begin{proof} First, we may assume that the function $\theta$ given in Hypothesis \ref{hyp:aij-almost-linear} is nonincreasing on $[1,+\infty)$, as we can always take $\tilde{\theta}(x) := \sup_{y \geq x} \theta(y)$ instead. We choose a sequence of nonnegative numbers $\{ \xi_i \}$ by applying Lemma \ref{lem:psi-slowly-growing} with \begin{gather} \label{eq:mu_i} \mu_i := \frac{1}{(1+i) \log(1+i)}, \\ \label{eq:nu_i} \nu_i := \min\left\{ \lambda_i,\, \frac{1}{\theta(\sqrt{i/2})} \right\}. \end{gather} Note that the conditions in Lemma \ref{lem:psi-slowly-growing} are met: the sequence on the right hand side of \eqref{eq:mu_i} is not summable, and the right hand side of \eqref{eq:nu_i} goes to $+\infty$ with $i$.
If we define $\psi_i := \sum_{j=1}^i \xi_j$, then the following is given by Lemma \ref{lem:psi-slowly-growing}: \begin{gather*} \xi_i \leq \frac{1}{(1+i) \log(1+i)}, \qquad \psi_i \leq \frac{1}{\theta(\sqrt{i/2})}, \quad \psi_i \leq \lambda_i, \qquad i \geq 1, \\ \lim_{i \to +\infty} \psi_i = +\infty. \end{gather*} These conditions essentially say that $\psi_i$ grows more slowly than $\log \log (i)$, than $\theta(\sqrt{i/2})^{-1}$, and than $\lambda_i$, yet still diverges as $i \to +\infty$. \medskip We can now prove \eqref{eq:less-than-ij} to hold for these $\{\psi_i\}$ by distinguishing three cases: \noindent{1.} For any $i, j \geq 1$, as $\log(1+k) \geq 1/2$ for all $k \geq 1$, \begin{equation*} \psi_{i+j} - \psi_i = \sum_{k=i+1}^{i+j} \xi_{k} \leq 2 \sum_{k=i+1}^{i+j} \frac{1}{1+k} \leq 2 \log(i+j+1) - 2 \log(i+1) \leq \frac{2j}{i}. \end{equation*} Then, in case $j\le i$ we use the fact that $\theta(x) \leq C_\theta$ for some constant $C_\theta > 0$ and all $x > 0$ and have \begin{equation*} a_{i,j} (\psi_{i+j} - \psi_i) \leq 2\, C_\theta (i+j) \frac{j}{i} \leq 4\,C_\theta\, j,\qquad \text{for } j\le i. \end{equation*} \noindent{2.} Secondly, for $i < j \leq i^2$, \begin{align*} \psi_{i+j} - \psi_i &\leq \sum_{k=i+1}^{2i^2} \xi_{k} \leq \sum_{k=i+1}^{2i^2} \frac{1}{(k+1)\log(k+1)} \\ &\leq \log \log (2i^2 + 1) - \log \log(i+1) \leq \log \bigg(\frac{2 \log (\sqrt{3}i)}{\log(i+1)} \bigg)\leq C_1, \end{align*} for some number $C_1 > 0$. Thus, \begin{equation*} a_{i,j} (\psi_{i+j} - \psi_i) \leq C_1 C_\theta (i+j) \leq 2 C_1 C_\theta j.
\end{equation*} \noindent{3.} Finally, for $j > i^2$, \begin{equation*} \psi_{i+j} - \psi_i \leq \psi_{i+j} = \sum_{k=1}^{i+j} \xi_k \leq \frac{1}{\theta(\sqrt{(i+j)/2})} \leq \frac{1}{\theta(\sqrt{j})}, \end{equation*} and as $\theta$ is nonincreasing on $[1,+\infty)$ (we may assume this; see the beginning of this proof), we have for all $j > i^2$ \begin{equation*} a_{i,j} (\psi_{i+j} - \psi_i) \leq (i+j) \theta(j/i) \frac{1}{\theta(\sqrt{j})} \leq (i+j) \theta(\sqrt{j}) \frac{1}{\theta(\sqrt{j})} = i+j \leq 2j. \end{equation*} Together, these three cases show \eqref{eq:less-than-ij} for all $i,j \geq 1$. \end{proof} Now we are ready to finish the proof of our result on mass conservation: \begin{proof}[Proof of Theorem \ref{thm:mass-conservation}] As remarked above (cf. beginning of section \ref{nae}), we will prove the estimate \eqref{eq:superlinear} for a regular solution to an approximating system, with a constant $C(T)$ that does not depend on the regularisation. Then, passing to the limit, the result is true for a weak solution thus constructed. We consider a solution to an approximating system on $[0,+\infty)$, which we still denote by $\{c_i\}_{i \geq 1}$. Then, by a version of the de la Vallée-Poussin Lemma (see, for instance, Proposition 9.1.1 in \cite{C06} or also the proof of Lemma 7 in \cite{citeulike:2964344}), there exists a nondecreasing sequence of positive numbers $\{\lambda_i\}_{i \geq 1}$ (independent of the regularisation of the initial data) which diverges as $i \to +\infty$, and such that \begin{equation} \label{eq:phi-initially-finite} \int_\Omega \sum_{i=1}^\infty i \,\lambda_i c_i^0\,dx < +\infty. \end{equation} If we define $r_i := \int_\Omega i\, c_i^0 \,dx$, note that this is just the claim that one can find $\lambda_i$ as above with $\sum_i \lambda_i r_i < +\infty$.
\medskip Taking $\{\psi_i\}$ as given by Lemma \ref{lem:choose-psi-i}, such that $\psi_i \leq \lambda_i$ for all $i \geq 1$, we have thus $\int_\Omega \sum_{i=1}^\infty i \,\psi_i\, c_i^0(x) \,dx < +\infty$. Then, as integrating over $\Omega$ makes the diffusion term vanish due to the no-flux boundary conditions, we estimate \begin{equation} \label{eq:mc1} \frac{d}{dt} \int_\Omega \sum_{i=1}^\infty i \, \psi_i c_i\,dx \leq\frac{1}{2} \int_\Omega \sum_{i,j=1}^\infty a_{i,j} c_i c_j((i+j) \psi_{i+j} - i\, \psi_i - j\,\psi_j)\,dx, \end{equation} where we used that the contribution of the fragmentation term is nonpositive, as can be seen from \eqref{eq:fundamental-identity} with $\varphi_i \equiv i\,\psi_i$, and the fact that \begin{equation*} \sum_{j=1}^{i-1} \beta_{i,j}\, j \psi_j \leq \psi_i \sum_{j=1}^{i-1} \beta_{i,j}\, j = i\,\psi_i, \end{equation*} as $\psi_i$ is nondecreasing and \eqref{hyp:frag-conserves-mass} holds. % Continuing from \eqref{eq:mc1}, by the symmetry of the $a_{i,j}$, and using the inequality \eqref{eq:less-than-ij} from Lemma \ref{lem:choose-psi-i}, we have \begin{equation} \frac{d}{dt} \int_\Omega \sum_{i=1}^\infty i\,\psi_i\, c_i\,dx \leq \int_\Omega \sum_{i,j=1}^\infty a_{i,j}\, c_i\, c_j i\, (\psi_{i+j} - \psi_i)\,dx \le C \int_\Omega \rho^2\,dx. \label{eq:mc2} \end{equation} Thus, Proposition \ref{lem:mass-L2} showing $\rho \in L^2(\Omega_T)$ proves that $\int_\Omega \sum_{i=1}^\infty i \, \psi_i\, c_i\,dx$ is bounded on bounded time intervals. Mass conservation is a direct consequence of this. \end{proof} \begin{rem}[Absence of gelation via tightness] It is interesting to sketch an alternative proof showing conservation of mass via a tightness argument and without establishing superlinear moments. 
By introducing the superlinear test sequence $i\phi_k(i)$ with $ \phi_k(i) = \frac{\log i}{\log k} 1_{i<k} + 1_{i\ge k}$ for all $k \in \mathbb{N}^*$, we use the weak formulation \eqref{eq:fundamental-identity} to see (as above) that the fragmentation part is nonpositive for superlinear test sequences, and use the symmetry of the $a_{i,j}$ to restrict the summation to the indices $i\ge j\ge 1$, which leads to the estimate \begin{multline*} \frac{d}{dt} \int_\Omega \sum_{i=1}^{\infty} c_i\, i \phi_k(i)\,dx \leq\int_\Omega \sum_{j=1}^{\infty} \sum_{i=j}^{\infty} {a_{i,j}}[i c_i] [c_j] \left(\frac{\log(1+\frac{j}{i})}{\log(k)}\,\mathbb{I}_{i<k} \right. \\ \left. +\frac{j}{i}\left(\frac{\log(1+\frac{i}{j})}{\log(k)}\,\mathbb{I}_{i+j<k}+\frac{\log(\frac{k}{j})}{\log(k)}\,\mathbb{I}_{j<k\le i+j}\right)\right)dx. \end{multline*} For the first term, we use $\log(1+{j}/{i})\le {j}/{i}$. Then, for the second and third terms, we distinguish further the areas where $i/j\le \log(k)$ and $i/j> \log(k)$. When $i/j\le \log(k)$, we estimate $1+{i}/{j}\le 1+\log(k)$ and ${k}/{j}\le 1+{i}/{j}\le 1+\log(k)$, respectively. On the other hand, when $i/j > \log(k)$, both the second and the third term are bounded by one. Altogether, we get thanks to Hypothesis \ref{hyp:aij-almost-linear}, i.e.\ $\frac{a_{i,j}}{i}\le \text{Cst}\,{\theta(i/j)}$ for $j\le i$: \begin{align*} \frac{d}{dt} \int_\Omega \sum_{i=1}^{\infty} c_i\, i \phi_k(i)\,dx &\leq \left(\frac{1}{\log(k)}+\frac{\log(1+\log{k})}{\log(k)}\right) \sup\limits_{i\ge j\ge 1} \left\{\frac{a_{i,j}}{i}\right\}\int_\Omega \rho^2\,dx \\ &\quad +\int_\Omega \sum_{j=1}^{\infty} \sum_{i=j}^{\infty} [i c_i] [j c_j] \frac{a_{i,j}}{i}\,\mathbb{I}_{i/j> \log(k);j<k}\,dx\\ &\leq \text{Cst} \left(\frac{\log(1+\log{k})}{\log(k)} +\sup\limits_{i/j\ge \log(k)} {\theta\left(\frac{i}{j}\right)}\right)\int_\Omega \rho^2\,dx \end{align*} and the right hand side tends to zero as $k\to\infty$.
Hence, using Proposition~\ref{lem:mass-L2} and integrating over a time interval $[0,T]$, we get thanks to a tightness argument that the mass is indeed conserved, and no gelation occurs. \end{rem} \section{Third application: fragmentation due to collisions in dimension 1} \label{sec:quadratic} \begin{proof}[Proof of Theorem \ref{th}] We introduce a sequence $(c_i^M)_M$ of smooth solutions of a truncated version of eq. (\ref{cf2}). We first observe that Proposition \ref{lem:mass-L2} still holds thanks to the duality estimate, that is $\rho := \sum_i i\, c_i \in L^2(\Omega_T)$ for all $T>0$. Estimate (\ref{eq:Q+bound}), in which only the coagulation kernel appears, also holds. Moreover, thanks to (\ref{q3}) and (\ref{nas2}), $$ \sum_{k,l\,:\,\max\{k,l\}>i} b_{k,l}\, c_k\,c_l \, \beta_{i,k,l} \le \text{Cst}_i \sum_k\sum_{l} k\,l\, c_k\,c_l \le \text{Cst}_i\, \rho^2 \in L^1(\Omega_T). $$ The loss terms $$ \sum_{k=1}^{\infty} a_{i,k} \,c_i\, c_k,\qquad \sum_{k=1}^{\infty} b_{i,k}\,c_i\,c_k $$ then lie in $L^1(\Omega_T)$ by integration of the equation on $[0,T] \times \Omega$. \bigskip Using now eq. (\ref{cf2}), we see that (for all $i\in \mathbb{N}^*$) $ \partial_t c_i^M - d_i \partial_{xx} c_i^M $ belongs to a bounded subset of $L^1(\Omega_T)$. As a consequence, $c_i^M$ belongs (for all $i\in \mathbb{N}^*$) to a compact subset of $L^{3-\varepsilon}([0,T] \times \Omega)$ for all $T>0$ and $\varepsilon>0$. We denote (for all $i\in \mathbb{N}^*$) by $c_i$ a limit (in $L^{3-\varepsilon}([0,T] \times \Omega)$ strong) of a subsequence of $(c_i^M)_{M\in\mathbb{N}}$ (still denoted by $(c_i^M)_{M\in\mathbb{N}}$). \bigskip We now pass to the limit in all terms of the r.h.s. of eq. (\ref{cf2}). The first term can easily be dealt with, since it consists of a finite sum. Then, we pass to the limit in the second term: \begin{gather*} \int_0^T\!\! \int_{\Omega} \bigg| \sum_{k=1}^{\infty} a_{i,k} \, c_i^M \, c_k^M - \sum_{k=1}^{\infty} a_{i,k} \, c_i \, c_k \bigg| \, dx dt \\ \le \int_0^T\!\!
\int_{\Omega} \bigg| \sum_{k=1}^{K} a_{i,k} \, c_i^n \, c_k^n - \sum_{k=1}^{K} a_{i,k} \, c_i \, c_k \bigg| \, dx dt +\, 2 \, \|\rho\|_{L^2}^2 \, \sup_{k > K} \left\{\frac{a_{i,k}}k\right\} . \end{gather*} The second part of this expression is small when $K$ is large enough thanks to assumptions (\ref{nas1}) and (\ref{nas2}), while the first part tends to $0$ as $n \to \infty$ for each given $K$. \bigskip The fourth term of the r.h.s. of eq. (\ref{cf2}) can be treated in exactly the same way. We now turn to the third term: \begin{gather*} \int_0^T\!\! \int_{\Omega} \bigg| \sum_{k,l=1}^{\infty}\sum_{i<\max\{k,l\}} b_{k,l}\, c_k^n\,c_l^n \, \beta_{i,k,l} - \sum_{k,l=1}^{\infty}\sum_{i<\max\{k,l\}} b_{k,l}\, c_k\,c_l \, \beta_{i,k,l} \bigg| \, dx dt \\ \le \int_0^T\!\! \int_{\Omega} \bigg| \sum_{k,l=1}^{K}\sum_{i<\max\{k,l\}} b_{k,l}\, c_k^n\,c_l^n \, \beta_{i,k,l} - \sum_{k,l=1}^{K}\sum_{i<\max\{k,l\}} b_{k,l}\, c_k\,c_l \, \beta_{i,k,l} \bigg| \, dx dt \\ +\, 4\, \|\rho\|_{L^2}^2 \, \sup_{l\ge K} \sup_{k \in \mathbb{N}} \left\{\frac{b_{k,l}}{kl} \, \beta_{i,k,l}\right\} . \end{gather*} Once again, the second term is small when $K$ is large enough thanks to assumptions (\ref{nas1}) and (\ref{nas2}), while the first term tends to $0$ as $n \to \infty$ for each given $K$. \end{proof} \section*{Acknowledgements} KF's work has been supported by the KAUST Award No. KUK-I1-007-43, made by King Abdullah University of Science and Technology (KAUST). JAC was supported by the project MTM2008-06349-C03-03 of the Spanish \emph{Ministerio de Ciencia e Innovaci\'{o}n}. LD was supported by the French project ANR CBDif. The authors acknowledge partial support of the trilateral project Austria-France-Spain (Austria: FR 05/2007 and ES 04/2007, Spain: HU2006-0025 and HF2006-0198, France: Picasso 13702TG and Amadeus 13785 UA). LD and KF also wish to acknowledge the kind hospitality of the CRM of Barcelona. \section{Appendix: A duality lemma} We recall here results from e.g.
\cite{PSch, citeulike:3798030}. We start with the following lemma. \begin{lem} \label{lem:dual-bound} Assume that $z: \Omega_T \to [0, +\infty)$ satisfies \begin{alignat}{2} \partial_t z + M \Delta z &= - H &\qquad& \text{ on } \Omega, \nonumber \\ \nabla z \cdot n &= 0 && \text{ on } \partial \Omega, \label{eq:nl-diffusion-dual0} \\ z(T,x) &= 0 && \text{ on } \Omega, \nonumber \end{alignat} where $H \in L^2(\Omega_T)$, and $d_1 \geq M \geq d_0 > 0$. Then, \begin{equation} \label{eq:dual-estimate} \norm{ z(0,\cdot) }_{L^2(\Omega)} \leq \left( 1 + \frac{d_1}{d_0} \right) T \norm{H}_{L^2(\Omega_T)}. \end{equation} \end{lem} \begin{proof}[Proof of Lemma \ref{lem:dual-bound}] Calculating the time derivative of $\int_\Omega \abs{\nabla z}^2$, or alternatively multiplying eq. \eqref{eq:nl-diffusion-dual0} by $\Delta z$ and integrating on $\Omega$, we obtain \begin{equation*} -\frac{1}{2} \frac{d}{dt} \int_\Omega \abs{\nabla z}^2\,dx + \int_\Omega M (\Delta z)^2\,dx = -\int_\Omega H \Delta z\,dx, \end{equation*} where the boundary condition on $z$ was used. Integrating on $[0,T]$ and taking into account that $z(T,x) = 0$, \begin{align} \frac{1}{2} \int_\Omega \abs{\nabla z(0,\cdot)}^2\,dx + \int_{\Omega_T} M (\Delta z)^2\,dxdt &= -\int_{\Omega_T} H \Delta z\,dxdt \nonumber\\ &\leq \norm{H}_{L^2(\Omega_T)} \norm{\Delta z}_{L^2(\Omega_T)}. \label{eq:p2} \end{align} Using that $M \geq d_0$ we see that $\int_{\Omega_T} M (\Delta z)^2 \geq d_0 \norm{\Delta z}_{L^2(\Omega_T)}^2$, so \eqref{eq:p2} implies \begin{equation*} d_0 \norm{ \Delta z }_{L^2(\Omega_T)} \leq \norm{H}_{L^2(\Omega_T)}. \end{equation*} From this and \eqref{eq:nl-diffusion-dual0} we have \begin{align*} \norm{ \partial_t z }_{L^2(\Omega_T)} &\leq \norm{ M \Delta z }_{L^2(\Omega_T)} + \norm{H}_{L^2(\Omega_T)} \\ &\leq d_1 \norm{ \Delta z }_{L^2(\Omega_T)} + \norm{H}_{L^2(\Omega_T)} \leq \left( 1 + \frac{d_1}{d_0} \right) \norm{H}_{L^2(\Omega_T)}.
\end{align*} Finally, \begin{equation*} \norm{z(0,\cdot)}_{L^2(\Omega)} \leq \int_0^T \norm{\partial_t z(s,\cdot)}_{L^2(\Omega)} \,ds \leq \left( 1 + \frac{d_1}{d_0} \right)\, T \norm{H}_{L^2(\Omega_T)}. \end{equation*} \end{proof} \begin{lem} \label{lem:nl-diffusion-L2-estimate} Assume that $\rho: \Omega_T \to [0, +\infty)$ satisfies \begin{alignat}{2} \label{eq:nl-diffusion} \partial_t \rho - \Delta (M \rho) &\leq 0 \qquad \text{ on } \Omega, \\ \nabla (\rho\,M) \cdot n &= 0 \qquad \text{ on } \partial \Omega,\nonumber \end{alignat} where $M: \Omega_T \to \mathbb{R}$ is a function which satisfies $d_1 \geq M \geq d_0 > 0$ for some numbers $d_1$, $d_0$. Then, \begin{equation*} \norm{\rho}_{L^2(\Omega_T)} \leq \left( 1 + \frac{d_1}{d_0} \right)\, T \norm{\rho(0,\cdot)}_{L^2(\Omega)}. \end{equation*} \end{lem} \begin{proof}[Proof of Lemma \ref{lem:nl-diffusion-L2-estimate}] Consider the dual problem \eqref{eq:nl-diffusion-dual0} for an arbitrary function $H \in L^2(\Omega_T)$ with $H \geq 0$. Then $z\ge 0$, and integrating by parts in eq. \eqref{eq:nl-diffusion-dual0}, one finds that \begin{align*} \int_{\Omega_T} \rho H\,dxdt &= - \int_{\Omega_T} \rho (\partial_t z + M \Delta z)\,dxdt \\ &= \int_{\Omega_T} z (\partial_t \rho - \Delta(\rho M))\,dxdt + \int_\Omega \rho(0,\cdot) \,z(0,\cdot)\,dx \leq \int_\Omega \rho(0,\cdot) \,z(0,\cdot)\,dx, \end{align*} where we have used eq. \eqref{eq:nl-diffusion}, the nonnegativity of $z$, and the boundary conditions on $\rho\, M$ and $z$. Hence, for any nonnegative function $H \in L^2(\Omega_T)$, \begin{equation*} \int_{\Omega_T} \rho H\,dxdt \leq \norm{\rho(0,\cdot)}_{L^2(\Omega)} \norm{z(0,\cdot)}_{L^2(\Omega)}, \end{equation*} and thanks to Lemma \ref{lem:dual-bound}, \begin{equation*} \int_{\Omega_T} \rho H\,dxdt \leq ( 1 + d_1/d_0)\, T \norm{\rho(0,\cdot)}_{L^2(\Omega)} \norm{H}_{L^2(\Omega_T)}.
\end{equation*} Remembering that $\rho \ge 0$, we obtain by duality: \begin{equation*} \norm{\rho}_{L^2(\Omega_T)} \leq (1 + d_1/d_0)\, T \norm{\rho(0,\cdot)}_{L^2(\Omega)} . \end{equation*} This proves the lemma. \end{proof} \bibliographystyle{abbrv}
\section{Introduction}\label{s:introduction} In this paper we examine how two standard properties of finite-dimensional linear programming, strong duality and sensitivity analysis, carry over to semi-infinite linear programs (SILPs). Our standard form for a semi-infinite linear program is \begin{align*}\label{eq:SILP} OV(b) := \inf \quad & \sum_{k=1}^n c_k x_k \tag{SILP}\\ {\rm s.t.} \quad & \sum_{k=1}^n a^k(i) x_k \ge b(i) \quad\text{ for } i \in I \end{align*} where $a^k : I \to \bb R$ for all $k = 1, \dots, n$ and $b : I \to \bb R$ are real-valued functions on the (potentially infinite cardinality) index set $I$. The ``columns'' $a^k$ define a linear map $A : \bb R^n \to Y$ with $A(x) = (\sum_{k=1}^n a^k(i) x_k : i \in I)$ where $Y$ is a linear subspace of $\bb R^I$, the space of all real-valued functions on the index set $I$. The vector space $Y$ is called the \emph{constraint space} of \eqref{eq:SILP}. This terminology follows Chapter 2 of Anderson and Nash \cite{anderson-nash}. Goberna and L\'{o}pez \cite{goberna2014post} call $Y$ the ``space of parameters.'' A finite linear program is the special case of \eqref{eq:SILP} in which $I = \left\{1,\dots,m\right\}$ and $Y = \bb R^m$ for a finite natural number $m$. As shown in Chapter~4 of Anderson and Nash \cite{anderson-nash}, the dual of \eqref{eq:SILP} with constraint space $Y$ is \begin{align*}\label{eq:DSILPprime} \begin{array}{rl} \sup & \psi(b) \\ {\rm s.t.} & \psi(a^k) = c_k\quad \text{ for } k=1,\dots,n \\ & \psi \succeq_{Y'_+} 0 \end{array}\tag{\text{DSILP($Y$)}} \end{align*} where $\psi: Y \to \bb R$ is a linear functional in the algebraic dual space $Y^{\prime}$ of $Y$ and $\succeq_{Y'_+}$ denotes an ordering of linear functionals induced by the cone \begin{align*} Y'_+ := \left\{\psi : Y \to \bb R \mid \psi(y) \ge 0 \text{ for all } y \in Y \cap \bb R^I_+\right\} \end{align*} where $\bb R^I_+$ is the set of all nonnegative real-valued functions with domain $I$.
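When $I$ is finite, \eqref{eq:SILP} and \eqref{eq:DSILPprime} reduce to an ordinary primal-dual pair of linear programs. The following sketch (a hypothetical three-constraint instance of our choosing, solved with SciPy) computes both sides of such a finite instance and confirms that the optimal values coincide:

```python
import numpy as np
from scipy.optimize import linprog

# hypothetical finite instance of (SILP): min c.x  s.t.  A x >= b
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 1.0, 3.0])
c = np.array([1.0, 1.0])

# primal: linprog minimizes c.x subject to A_ub x <= b_ub, so negate A x >= b
primal = linprog(c, A_ub=-A, b_ub=-b, bounds=[(None, None)] * 2, method="highs")

# dual: max b.psi  s.t.  A^T psi = c, psi >= 0  (negate the objective to minimize)
dual = linprog(-b, A_eq=A.T, b_eq=c, bounds=[(0, None)] * 3, method="highs")

assert primal.status == 0 and dual.status == 0
assert abs(primal.fun - (-dual.fun)) < 1e-9  # strong duality: equal values
```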
The familiar finite-dimensional linear programming dual has solutions $\psi =(\psi_1, \dots, \psi_m)$ where $\psi(y) = \sum_{i=1}^m y_i\psi_i$ for all nonnegative $y \in \bb R^m.$ Equivalently, $\psi\in \bb R^m_+$. Note the standard abuse of notation of letting $\psi$ denote both a linear functional and the real vector that represents it. Our primary focus is on two desirable properties for the primal-dual pair \eqref{eq:SILP}--\eqref{eq:DSILPprime} when both the primal and dual are feasible (and hence the primal has bounded objective value). The first property is {\it strong duality} (SD). The primal-dual pair \eqref{eq:SILP}--\eqref{eq:DSILPprime} satisfies the \emph{strong duality} (SD) property if \begin{itemize} \item[] \textbf{(SD)}: there exists a $\psi^{*} \in Y_{+}^{\prime}$ such that \begin{align}\label{eq:strong-duality} \psi^*(a^k) = c_k \text{ for } k = 1, \dots, n \text{ and } \psi^*(b) = OV(b) \end{align} \end{itemize} where $OV(b)$ is the optimal value of the primal \eqref{eq:SILP} with right-hand-side $b.$ The second property of interest concerns the use of dual solutions in sensitivity analysis. The primal-dual pair \eqref{eq:SILP}--\eqref{eq:DSILPprime} satisfies the \emph{dual pricing} (DP) property if \begin{itemize} \item[] \textbf{(DP)}: For every perturbation vector $d \in Y$ such that \eqref{eq:SILP} is feasible for right-hand-side $b + d,$ there exists an optimal dual solution $\psi^{*}$ to \eqref{eq:DSILPprime} and an $\hat \epsilon > 0$ such that \begin{align}\label{eq:dual-pricing} OV(b + \epsilon d) = \psi^{*}(b + \epsilon d) = OV(b) + \epsilon \psi^{*}(d) \end{align} for all $\epsilon \in [0, \hat \epsilon].$ \end{itemize} The terminology ``dual pricing'' refers to the fact that the appropriately chosen optimal dual solution $\psi^*$ correctly ``prices'' the impact of changes in the right-hand side on the optimal primal objective value. Finite-dimensional linear programs always satisfy (SD) and (DP) when the primal is feasible and bounded.
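In the finite case, (DP) can be checked computationally: solve the dual once, then verify \eqref{eq:dual-pricing} by re-solving the perturbed primal. A sketch on a hypothetical instance (the data below are ours, chosen only for illustration):

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # hypothetical instance
b = np.array([1.0, 1.0, 3.0])
c = np.array([1.0, 1.0])

def ov(rhs):
    # optimal value OV(rhs) of: min c.x  s.t.  A x >= rhs
    res = linprog(c, A_ub=-A, b_ub=-rhs, bounds=[(None, None)] * 2, method="highs")
    return res.fun

# an optimal dual solution psi*: max b.psi  s.t.  A^T psi = c, psi >= 0
dual = linprog(-b, A_eq=A.T, b_eq=c, bounds=[(0, None)] * 3, method="highs")
psi = dual.x  # for this instance the dual optimum is unique: psi* = (0, 0, 1)

eps = 1e-3
for d in (np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0])):
    # (DP): the dual solution prices the perturbation for small eps
    assert abs(ov(b + eps * d) - (ov(b) + eps * psi @ d)) < 1e-7
```

Here the same $\psi^*$ happens to price both directions; the point of (DP) in the semi-infinite setting is that the dual solution is allowed to depend on $d$.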
Define the vector space \begin{eqnarray} U := \text{span}(a^1, \dots, a^n, b). \label{eq:define-U} \end{eqnarray} This is the minimum constraint space of interest, since the dual problem~\eqref{eq:DSILPprime} requires the linear functionals defined on $Y$ to operate on $a^1, \dots, a^n, b.$ If $I$ is a finite set and \eqref{eq:SILP} is feasible and bounded, then there exists a $\psi^{*} \in U^{\prime}_{+}$ such that~\eqref{eq:strong-duality} and~\eqref{eq:dual-pricing} are satisfied. Furthermore, optimal dual solutions $\psi^*$ that satisfy (SD) and (DP) are vectors in $\bb R^m.$ That is, we can take $\psi^{*} = (\psi^{*}_{1}, \ldots, \psi^{*}_{m}).$ Thus $\psi^*$ is not only a linear functional over $U,$ but also a linear functional over $\bb R^{m}.$ The fact that $\psi^*$ is a linear functional for both $Y = U$ and $Y = \bb R^{m}$ is obvious in the finite case and taken for granted. The situation in semi-infinite linear programs is far more complicated and interesting. In general, a primal-dual pair \eqref{eq:SILP}--\eqref{eq:DSILPprime} can fail both (SD) and (DP). Properties (SD) and (DP) depend crucially on the choice of constraint space $Y$ and its associated dual space. Unlike finite linear programs, where there is only one natural choice for the constraint space (namely $\bb R^m$), there are multiple viable nonisomorphic choices for an SILP. This makes constraint space choice a core modeling issue in semi-infinite linear programming. However, one of our main results is that (SD) and (DP) always hold with constraint space $U$. Under this choice, $\text{DSILP}(U)$ has a unique optimal dual solution $\psi^*$ that we call the \emph{base dual solution} of \eqref{eq:SILP} -- see Theorem~\ref{theorem:silps-never-have-a-duality-gap}. Throughout the paper, the linear functionals that are feasible to~\eqref{eq:DSILPprime} are called dual solutions. The base dual solution satisfies \eqref{eq:dual-pricing} for every choice of $d \in U$.
However, this space greatly restricts the choice of perturbation vectors $d$. Expanding $U$ to a larger space $Y$ (note that $Y$ must contain $U$ for \eqref{eq:DSILPprime} to be a valid dual) can compromise (SD) and (DP). We give concrete examples where (SD), (DP) (or both) hold and do not hold. The main tool used to extend $U$ to larger constraint spaces is the Fourier-Motzkin elimination procedure for semi-infinite linear programs introduced in Basu et al.~\cite{basu2013projection}. We define a linear operator called the \emph{Fourier-Motzkin operator} that is used to map the constraint space $U$ onto another constraint space. A linear functional is then defined on this new constraint space. Under certain conditions, this linear functional is then extended using the Hahn-Banach theorem to a larger vector space that contains the new constraint space. Then, using the adjoint of the Fourier-Motzkin operator, we obtain a linear functional on constraint spaces larger than $U$ where properties (SD) and (DP) hold. Although the Fourier-Motzkin elimination procedure described in Basu et al. \cite{basu2013projection} was used to study the finite support (or Haar) dual of an \eqref{eq:SILP}, this procedure provides insight into more general duals. The more general duals require the use of purely finitely additive linear functionals (often called {\em singular}) and these are known to be difficult to work with (see Ponstein, \cite{ponstein1981use}). However, the Fourier-Motzkin operator allows us to work with such functionals. \paragraph{Our Results.} Section~\ref{s:preliminaries} contains preliminary results on constraint spaces and their duals. In Section~\ref{s:fm-elimination-duality} we recall some key results about the Fourier-Motzkin elimination procedure from Basu et al.~\cite{basu2013projection} and also state and prove several additional lemmas that elucidate further insights into non-finite-support duals.
Here we define the Fourier-Motzkin operator, which plays a key role in our theory. In Section~\ref{s:strong-duality-dual-pricing} we prove (SD) and (DP) for the constraint space $Y = U.$ This is done in Theorems~\ref{theorem:silps-never-have-a-duality-gap} and~\ref{theorem:U-dual-pricing}, respectively. In Section~\ref{s:extending-U} we prove (SD) and (DP) for subspaces $Y \subseteq \bb R^{I}$ that extend $U.$ In Proposition~\ref{prop:mononticity} we show that once (SD) or (DP) fail for a constraint space $Y$, then they fail for all larger constraint spaces. Therefore, we want to extend the base dual solution and push out from $U$ as far as possible until we encounter a constraint space for which (SD) or (DP) fail. Sufficient conditions on the original data are provided that guarantee (SD) and (DP) hold in larger constraint spaces. See Theorems~\ref{theorem:extend-SD-Y}~and~\ref{theorem:sufficient-conditions-dual-pricing-alt}. \paragraph{Comparison with prior work.} Our work can be contrasted with existing work on strong duality and sensitivity analysis in semi-infinite linear programs along several directions. First, the majority of work in semi-infinite linear programming assumes either the Haar dual or settings where $b$ and $a^k$ for all $k$ are continuous functions over a compact index set (see for instance Anderson and Nash \cite{anderson-nash}, Glashoff and Gustafson \cite{glashoff1983linear}, Hettich and Kortanek \cite{hettich1993semi}, and Shapiro \cite{shapiro2009semi}). The classical theory, initiated by Haar \cite{haar1924}, gave sufficient conditions for zero duality gap between the primal and the Haar dual. A sequence of papers by Charnes et al. \cite{charnes1963duality,charnes1965representations} and Duffin and Karlovitz \cite{duffin-karlovitz65} fixed errors in Haar's original strong duality proof and described how a semi-infinite linear program with a duality gap could be reformulated to have zero duality gap with the Haar dual.
Glashoff in \cite{glashoff79} also worked with a dual similar to the Haar dual. The Haar dual was also used during later developments in the 1980s (in a series of papers by Karney \cite{karney81,karney1982pathological,karney85}) and remains the predominant setting for analysis in more recent work by Goberna and co-authors (see for instance, \cite{goberna2007sensitivity}, \cite{goberna1998linear} and \cite{goberna2014post}). By contrast, our work considers a wider spectrum of constraint spaces from $U$ to $\bb R^I$ and their associated algebraic duals. All such algebraic duals include the Haar dual (when restricted to the given constraint space), but also additional linear functionals. In particular, our theory handles settings where the index set is not compact, such as $\bb N$. We do more than simply extend the Haar dual. Our work has a different focus and raises and answers questions not previously studied in the existing literature. We explore how \emph{changing} the constraint space (and hence the dual) affects duality and sensitivity analysis. This emphasis forces us to consider optimal dual solutions that are not finite support. Indeed, we provide examples where the finite support dual fails to satisfy (SD) but another choice of dual does satisfy (SD). In this direction, we extend our earlier work in \cite{basu2014sufficiency} on the sufficiency of finite support duals to study semi-infinite linear programming through our use of the Fourier-Motzkin elimination technology. Second, our treatment of sensitivity analysis through exploration of the (DP) condition represents a different standard than the existing literature on that topic, which recently culminated in the monograph by Goberna and L\'{o}pez \cite{goberna2014post}. In (DP) we allow a different dual solution in each perturbation direction $d$. The standard in Goberna and L\'{o}pez \cite{goberna2007sensitivity} and Goberna et al.
\cite{goberna2010sensitivity} is that a single dual solution is valid for all feasible perturbations. This more exacting standard translates into strict sufficient conditions, including the existence of a primal optimal solution. By focusing on the weaker (DP), we are able to drop the requirement of primal solvability. Indeed, Example~\ref{ex:drop-primal-solvability} shows that (DP) holds even though a primal optimal solution does not exist. Moreover, the sufficient conditions for sensitivity analysis in Goberna and L\'{o}pez \cite{goberna2007sensitivity} and Goberna et al. \cite{goberna2010sensitivity} rule out the possibility of dual solutions that are \emph{not} finite support yet nonetheless satisfy their standard of sensitivity analysis. Example~\ref{ex:drop-primal-solvability} provides one such case, where we show that there is a single optimal dual solution that satisfies \eqref{eq:dual-pricing} for all feasible perturbations $d$ and yet is not finite support. Third, the analytical approach to sensitivity analysis in Goberna and L\'{o}pez \cite{goberna2014post} is grounded in convex-analytic methods that focus on topological properties of cones and epigraphs, whereas our approach uses Fourier-Motzkin elimination, an algebraic tool that appeared in the study of semi-infinite linear programming duality in Basu et al. \cite{basu2013projection}. Earlier work by Goberna et al.~\cite{Goberna2010209} explored extensions of Fourier-Motzkin elimination to semi-infinite linear systems but did not explore its implications for duality. \section{Preliminaries} \label{s:preliminaries} In this section we review the notation, terminology and properties of relevant constraint spaces and their algebraic duals used throughout the paper. First some basic notation and terminology. The \emph{algebraic dual} $Y'$ of the vector space $Y$ is the set of real-valued linear functionals with domain $Y$.
Let $\psi \in Y'.$ The evaluation of $\psi$ at $y$ is alternately denoted by $ \langle y, \psi \rangle$ or $\psi(y)$, depending on the context. A convex pointed cone $P$ in $Y$ defines a vector space ordering $\succeq_P$ of $Y$, with $y \succeq_P y'$ if $y - y' \in P$. The \emph{algebraic dual cone} of $P$ is $P' = \left\{\psi \in Y' : \psi(y) \ge 0 \text{ for all } y \in P\right\}$. Elements of $P'$ are called \emph{positive linear functionals} on $Y$ (see for instance, page 17 of Holmes \cite{holmes}). Let $A:X \rightarrow Y$ be a linear mapping from vector space $X$ to vector space $Y$. The \emph{algebraic adjoint} $A' : Y' \to X'$ is a linear operator defined by $A'(\psi) = \psi \circ A$ where $\psi \in Y'$. We discuss some possibilities for the constraint space $Y$ in \eqref{eq:DSILPprime}. A well-studied case is $Y = \bb R^I$. Here, the structure of \eqref{eq:DSILPprime} is complex since very little is known about the algebraic dual of $\bb R^I$ for general $I$. Researchers typically study an alternate dual called the \emph{finite support dual}. We denote the finite support dual of \eqref{eq:SILP} by \begin{align*}\label{eq:FDSILP} \begin{array}{rl} \sup & \sum_{i = 1}^m \psi(i)b(i) \\ {\rm s.t.} & \sum_{i = 1}^m a^k(i)\psi(i) = c_k \quad \text{ for } k=1,\dots,n \\ & \psi \in \bb R^{(I)}_+ \end{array}\tag{\text{FDSILP}} \end{align*} where $\bb R^{(I)}$ consists of those functions $\psi \in \bb R^I$ with $\psi(i) \neq 0$ for only finitely many $i \in I$ and $\bb R^{(I)}_+$ consists of those elements $\psi \in \bb R^{(I)}$ where $\psi(i) \ge 0$ for all $i \in I$.
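Restricting \eqref{eq:FDSILP} to supports inside $\{1,\dots,m\}$ yields an ordinary LP whose value is nondecreasing in $m$. The sketch below works a hypothetical instance of our own (not one from the text): minimize $x_1$ subject to $x_1 \ge 1 - 1/i$ for $i \in \bb N$, whose primal value is $1$; the truncated finite support dual attains only $1 - 1/m$:

```python
import numpy as np
from scipy.optimize import linprog

# hypothetical instance over I = N:  min x1  s.t.  x1 >= 1 - 1/i for all i.
# The primal value is sup_i (1 - 1/i) = 1.  The finite support dual restricted
# to supports inside {1, ..., m} is an ordinary LP:
def truncated_fdsilp(m):
    b = np.array([1.0 - 1.0 / i for i in range(1, m + 1)])
    a = np.ones(m)  # a^1(i) = 1 for all i
    res = linprog(-b, A_eq=a.reshape(1, -1), b_eq=[1.0],
                  bounds=[(0, None)] * m, method="highs")
    return -res.fun

vals = [truncated_fdsilp(m) for m in (2, 10, 100)]
assert all(v2 > v1 for v1, v2 in zip(vals, vals[1:]))  # nondecreasing in m
assert abs(vals[-1] - (1 - 1 / 100)) < 1e-9            # optimum: mass at i = m
```

The supremum over all finite supports equals the primal value $1$ but is not attained by any finite support solution, a phenomenon that does not occur in finite linear programming.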
A finite support element of $\bb R^{I}$ always represents a linear functional on any vector space $Y \subseteq \bb R^{I}.$ Therefore the finite support dual linear functionals feasible to \eqref{eq:FDSILP} are feasible to \eqref{eq:DSILPprime} for any constraint space $Y \subseteq \bb R^{I}$ that contains the space $U = \text{span}(a^1, \dots, a^n, b).$ This implies that the optimal value of $\eqref{eq:FDSILP}$ is always less than or equal to the optimal value of $\eqref{eq:DSILPprime}$ for all valid constraint spaces $Y$. It was shown in Basu et al. \cite{basu2014sufficiency} that \eqref{eq:FDSILP} and \eqref{eq:DSILPprime} for $Y = \bb R^\bb N$ are equivalent; in this case \eqref{eq:FDSILP} is indeed the algebraic dual of \eqref{eq:SILP}. This is not necessarily the case for $Y = \bb R^I$ with $I \neq \bb N$. Alternate choices for $Y$ include various subspaces of $\bb R^I$. When $I = \bb N$ we pay particular attention to the spaces $\ell_{p}$ for $1 \le p < \infty.$ The space $\ell_{p}$ consists of all elements $y \in \bb R^{\bb N}$ where $||y||_p = (\sum_{i \in I} |y(i)|^p)^{1/p} < \infty.$ When $p = \infty$ we allow $I$ to be uncountable and define $\ell_{\infty}(I)$ to be the subspace of all $y \in \bb R^{I}$ such that $||y||_{\infty}= \sup_{i \in I} |y(i)| < \infty.$ We also work with the space $\mathfrak c$ consisting of all $y \in \bb R^\bb N$ where $\left\{y(i)\right\}_{i \in \bb N}$ is a convergent sequence and the space $\mathfrak c_0$ of all sequences convergent to $0$. The spaces $\mathfrak c$ and $\ell_p$ for $1 \le p \le \infty$ defined above have special structure that is often used in examples in this paper. First, these spaces are Banach sublattices of $\bb R^\bb N$ (or $\bb R^{I}$ in the case of $\ell_{\infty}(I)$) (see Chapter 9 of \cite{hitchhiker} for a precise definition).
If $Y$ is a Banach lattice, then the positive linear functionals in the algebraic dual $Y^{\prime}$ correspond exactly to the positive linear functionals that are continuous in the norm topology on $Y$ that is used to define the Banach lattice. This follows from (a) Theorem~9.11 in Aliprantis and Border \cite{hitchhiker}, which shows that the norm dual $Y^*$ and the order dual $Y^\sim$ are equivalent in a Banach lattice and (b) Proposition~2.4 in Martin et al. \cite{martin-stern-ryan}, which shows that the set of positive linear functionals in the algebraic dual and the positive linear functionals in the order dual are identical. This allows us to define $\text{DSILP}(\mathfrak c)$ and $\text{DSILP}(\ell_{p})$ using the norm dual of $\mathfrak c$ and $\ell_{p},$ respectively. For the constraint space $Y = \mathfrak c$ the linear functionals in its norm dual are characterized by \begin{align}\label{eq:define-c-functional} \psi_{w \oplus r}(y) = \sum_{i=1}^\infty w_iy_i + ry_\infty \end{align} for all $y \in \mathfrak c$ where $w \oplus r$ belongs to $ \ell_1 \oplus \bb R$ and $y_\infty = \lim_{i \to \infty} y_i \in \bb R$. See Theorem 16.14 in Aliprantis and Border \cite{hitchhiker} for details. This implies the positive linear functionals for $\text{DSILP}(\mathfrak c)$ are isomorphic to vectors $w \oplus r \in (\ell_1)_+ \oplus \bb R_+$. For obvious reasons, we call the linear functional $\psi_{0 \oplus 1}$, where $\psi_{0 \oplus 1} (y) = y_\infty$, the \emph{limit functional}. When $1 \le p < \infty$, the linear functionals in the norm dual are represented by sequences in the conjugate space $\ell_q$ with $1/p + 1/q = 1$.
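The representation \eqref{eq:define-c-functional} is easy to evaluate numerically for a convergent sequence. The sketch below uses hypothetical data $w_i = 2^{-i}$, $r = 3$, and $y_i = 1 + 2^{-i}$ (so $y_\infty = 1$), truncating the $\ell_1$ sum at a level where the geometric tail is negligible:

```python
# functionals on the space c of convergent sequences:
#   psi_{w (+) r}(y) = sum_i w_i y_i + r * lim_i y_i,  with w in l1 and r in R.
# Hypothetical choices: w_i = 2^{-i}, r = 3, y_i = 1 + 2^{-i}.
N = 200  # truncation level; the discarded tails are geometrically small
w = [2.0 ** -i for i in range(1, N + 1)]
y = [1.0 + 2.0 ** -i for i in range(1, N + 1)]
y_inf = 1.0  # lim_{i -> infinity} y_i
r = 3.0

psi_y = sum(wi * yi for wi, yi in zip(w, y)) + r * y_inf
# exact value: sum 2^-i + sum 4^-i + 3 * 1 = 1 + 1/3 + 3 = 13/3
assert abs(psi_y - 13.0 / 3.0) < 1e-12

# the "limit functional" psi_{0 (+) 1} simply evaluates the limit y_inf
```

Positivity is visible here as well: since $w \ge 0$ and $r \ge 0$, the functional is nonnegative on nonnegative sequences.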
For $p = \infty$ and $I = \bb N,$ the norm dual of $\ell_{\infty}(\bb N)$ can be expressed as $\ell_{1} \oplus \ell_{1}^{d},$ where $\ell_{1}^{d}$ is the disjoint complement of $\ell_{1}$ and consists of all the singular linear functionals (see Chapter 8 of Aliprantis and Border \cite{hitchhiker} for a definition of singular functionals). By Theorem 16.31 in Aliprantis and Border \cite{hitchhiker}, for every functional $\psi \in \ell_1^d$ there exists some constant $r\in \bb R$ such that $\psi(y) =r \lim_{i \to \infty} y(i)$ for $y \in \mathfrak c$. \begin{remark}\label{rem:finite-OV} If there is a $b$ such that $-\infty< OV(b) <\infty$ then $-\infty< OV(0) <\infty$. The first inequality follows from the fact that~\eqref{eq:SILP} is feasible and bounded for the given $b$ and the second inequality follows from the feasibility of the zero solution. Therefore, $OV(0)=0$ because in this case we are minimizing over a cone and we get a bounded value. \end{remark} \section{Fourier-Motzkin elimination and its connection to duality}\label{s:fm-elimination-duality} In this section we recall needed results from Basu et al. \cite{basu2013projection} on the Fourier-Motzkin elimination procedure for SILPs and the tight connection of this approach to the finite support dual. We also use the Fourier-Motzkin elimination procedure to derive new results that are applied to more general duals in later sections. To apply the Fourier-Motzkin elimination procedure we put \eqref{eq:SILP} into the ``standard'' form \begin{eqnarray} \qquad \inf z \phantom{- c_{1} x_{1} - c_{2} x_{2} - \cdots - c_{n} x_{n} + } && \label{eq:initial-system-obj} \nonumber \\ \textrm{s.t.} \quad z - c_{1} x_{1} - c_{2} x_{2} - \cdots - c_{n} x_{n} &\ge& 0 \label{eq:initial-system-obj-con} \\ \phantom{z + } a^{1}(i) x_{1} + a^{2}(i) x_{2} + \cdots + a^{n}(i) x_{n} &\ge& b(i)\quad \text{ for } i \in I.
\label{eq:initial-system-con} \end{eqnarray} The procedure takes \eqref{eq:initial-system-obj-con}-\eqref{eq:initial-system-con} as input and outputs the system \begin{equation}\label{eq:J_system} \begin{array}{rcl} \inf z && \\ 0 &\ge& \tilde{b}(h), \quad h \in I_{1} \\ \phantom{z + } \tilde{a}^{\ell}(h) x_{\ell} + \tilde{a}^{\ell+1}(h) x_{\ell+1} + \cdots + \tilde{a}^{n}(h) x_{n} &\ge& \tilde{b}(h), \quad h \in I_{2} \\ z &\ge& \tilde{b}(h), \quad h \in I_{3} \\ z + \tilde{a}^{\ell}(h) x_{\ell} + \tilde{a}^{\ell+1}(h) x_{\ell+1} + \cdots + \tilde{a}^{n}(h) x_{n} &\ge& \tilde{b}(h), \quad h \in I_{4} \end{array} \end{equation} where $I_1$, $I_2$, $I_3$ and $I_4$ are disjoint with $I_3 \cup I_4 \neq \emptyset$. Define $H := I_1 \cup \cdots \cup I_4$. The procedure also provides a set of finite support vectors $\{u^h\in \bb R^{(I)}_+: h \in H\}$ (each $u^h$ is associated with a constraint in~\eqref{eq:J_system}) such that $\tilde a^k(h) = \langle a^k, u^h \rangle$ for $\ell \le k \le n$ and $\tilde b(h) = \langle b, u^h \rangle.$ Moreover, for every $k=\ell, \ldots, n$, either $\tilde a^k(h) \geq 0$ for all $h \in I_2 \cup I_4$ or $\tilde a^k(h) \leq 0$ for all $h \in I_2 \cup I_4$. Further, for every $h\in I_2 \cup I_4$, $\sum_{k=\ell}^n |\tilde a^k(h)| > 0$. Goberna et al.~\cite{Goberna2010209} also applied Fourier-Motzkin elimination to semi-infinite linear systems. Their Theorem 5 corresponds to Theorem 2 in Basu et al. \cite{basu2013projection} and states that~\eqref{eq:J_system} is the projection of~\eqref{eq:initial-system-obj-con}-\eqref{eq:initial-system-con}. The Fourier-Motzkin elimination procedure defines a linear operator called the Fourier-Motzkin operator and denoted $FM: \bb R^{\{0\}\cup I} \to \bb R^H$ where \begin{eqnarray} FM(v) := (\langle v, u^h \rangle : h \in H) \textrm{ for all } v\in \bb R^{\{0\}\cup I}. \label{eq:define-FM} \end{eqnarray} The linearity of $FM$ is immediate from the linearity of $\langle \cdot , \cdot \rangle$.
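In the finite case, a single step of the procedure and its multiplier vectors $u^h$ can be sketched as follows (an illustrative implementation of classical Fourier-Motzkin elimination on a finite system, not the exact procedure of Basu et al.):

```python
def fm_eliminate(A, b, j):
    """One Fourier-Motzkin step on the finite system A x >= b, eliminating x_j.

    Returns the projected system together with the nonnegative multiplier
    vectors u^h satisfying (new row h) = sum_i u^h_i (row i), as in the text.
    """
    m, n = len(A), len(A[0])
    pos = [i for i in range(m) if A[i][j] > 0]
    neg = [i for i in range(m) if A[i][j] < 0]
    zer = [i for i in range(m) if A[i][j] == 0]
    newA, newb, mults = [], [], []
    for i in zer:  # rows not involving x_j survive unchanged
        newA.append(list(A[i])); newb.append(b[i])
        u = [0.0] * m; u[i] = 1.0; mults.append(u)
    for p in pos:  # combine each lower bound on x_j with each upper bound
        for q in neg:
            lp, lq = 1.0 / A[p][j], -1.0 / A[q][j]  # both multipliers positive
            row = [lp * A[p][k] + lq * A[q][k] for k in range(n)]
            row[j] = 0.0  # exact cancellation, up to rounding
            newA.append(row); newb.append(lp * b[p] + lq * b[q])
            u = [0.0] * m; u[p], u[q] = lp, lq; mults.append(u)
    return newA, newb, mults

# example: project {x : x1 + x2 >= 2, -x1 + x2 >= 0, x2 >= 0} onto x2
A = [[1.0, 1.0], [-1.0, 1.0], [0.0, 1.0]]
b = [2.0, 0.0, 0.0]
newA, newb, mults = fm_eliminate(A, b, 0)
assert (newA, newb) == ([[0.0, 1.0], [0.0, 2.0]], [0.0, 2.0])  # i.e. x2 >= 1
```

Each output row is a nonnegative combination of input rows, which is exactly the positivity of the multipliers $u^h$ used in the next paragraph.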
Observe that $FM$ is a positive operator since the vectors $u^h$ are nonnegative elements of $\bb R^{(I)}_+$. By construction, $\tilde b = FM(0, b)$ and $\tilde a^k = FM((-c_k, a^k))$ for $k=1,\dots, n$. We also use the operator $\overline{FM} : \bb R^I \to \bb R^H$ defined by \begin{eqnarray} \overline{FM}(y) := FM((0,y)). \label{eq:define-FM-bar} \end{eqnarray} It is immediate from the properties of $FM$ that $\overline{FM}$ is also a positive linear operator. \begin{remark}\label{rem:FM-op} See the description of the Fourier-Motzkin elimination procedure in Basu et al. \cite{basu2013projection} and observe that the $FM$ operator does not change if we change $b$ in~\eqref{eq:SILP}. In what follows we assume fixed $a^1, \ldots, a^n \in \bb R^I$ and $c \in \bb R^n$ and vary the right-hand-side $b$. This observation implies we have the same $FM$ operator for all SILPs with different right-hand-sides $y \in \bb R^I$. In particular, the sets $I_1, \dots, I_4$ are the same for all right-hand-sides $y \in \bb R^I$. \end{remark} The following basic lemma regarding the $FM$ operator is used throughout the paper. \begin{lemma}\label{lemma:cute-little-trick} For all $r \in \bb R$ and $y \in \bb R^I$, $FM((r,y))(h) = r + FM((0,y))(h)$ for all $h \in I_3 \cup I_4$. \end{lemma} \begin{proof} By the linearity of the $FM$ operator, $FM((r,y)) = r FM((1,0,0,\dots)) + FM((0,y)).$ If $h \in I_3 \cup I_4$ then $FM((1,0,0,\dots))(h) = 1$ because $(1,0,0,\dots)$ corresponds to the $z$ column in \eqref{eq:initial-system-obj}-\eqref{eq:initial-system-con} and, in \eqref{eq:J_system}, $z$ has a coefficient of $1$ for $h \in I_3 \cup I_4$. Hence, for $h \in I_3 \cup I_4$, $FM((r,y))(h) = r + FM((0,y))(h)$. \end{proof} Numerous properties of the primal-dual pair \eqref{eq:SILP}--\eqref{eq:FDSILP} are characterized in terms of the output system \eqref{eq:J_system}. The following functions play a key role in summarizing information encoded by this system.
\begin{definition}\label{def:S-L-OV-def} Given a $y \in\bb R^I$, define $L(y) := \lim_{\delta \to \infty}\omega(\delta, y)$ where $\omega(\delta, y) := \sup \{\tilde{y}(h) - \delta \sum_{k=\ell}^{n} |\tilde{a}^k(h)| \, : \, h \in I_4 \}$, where $\tilde y = \overline{FM}(y)$. Define $S(y) = \sup_{h \in I_3} \tilde y(h)$. \end{definition} For any fixed $y\in \bb R^I$, $\omega(\delta, y)$ is a nonincreasing function in $\delta$. A key connection between the primal problem and these functions is given in Theorem~\ref{theorem:fm-primal-value}. \begin{theorem}[Lemma 3 in Basu et al. \cite{basu2013projection}]\label{theorem:fm-primal-value} If \eqref{eq:SILP} is feasible then $OV(b) = \max\{S(b), L(b)\}.$ \end{theorem} The following result describes useful properties of the functions $L$, $S$ and $OV$ that facilitate our approach to sensitivity analysis when perturbing the right-hand-side vector. \begin{lemma}\label{lem:convex-L} $L(y)$, $S(y)$, and $OV(y)$ are sublinear functions of $y \in \bb R^I$. \end{lemma} \begin{proof} We first show the sublinearity of $L(y)$. For any $y, w \in \bb R^I$, denote $\tilde y = \overline{FM}(y)$ and $\tilde w = \overline{FM}(w)$. Thus $\overline{FM}(y + w) = \overline{FM}(y) + \overline{FM}(w) = \tilde y + \tilde w$ by the linearity of the $\overline{FM}$ operator. 
Observe that $$\begin{array}{rcl}\omega(\delta, y+ w) &= &\sup \{\tilde{y}(h)+\tilde{w}(h) - \delta \sum_{k=\ell}^{n} |\tilde{a}^k(h)| \, : \, h \in I_4 \} \\ & = & \sup \{(\tilde{y}(h) - \frac\delta2 \sum_{k=\ell}^{n} |\tilde{a}^k(h)|)+(\tilde{w}(h) - \frac\delta2 \sum_{k=\ell}^{n} |\tilde{a}^k(h)|) \, : \, h \in I_4 \} \\ & \leq & \sup \{(\tilde{y}(h) - \frac\delta2 \sum_{k=\ell}^{n} |\tilde{a}^k(h)|) \, : \, h \in I_4 \} + \sup\{(\tilde{w}(h) - \frac\delta2 \sum_{k=\ell}^{n} |\tilde{a}^k(h)|) \, : \, h \in I_4 \} \\ & = & \omega(\frac\delta2, y) + \omega(\frac\delta2, w). \end{array}$$ Thus, $ L(y+w) = \lim_{\delta\to\infty}\omega(\delta, y+ w) \leq \lim_{\delta\to\infty}\omega(\frac\delta2, y) + \lim_{\delta\to\infty}\omega(\frac\delta2, w) = \lim_{\delta\to\infty}\omega(\delta, y) + \lim_{\delta\to\infty}\omega(\delta, w) = L(y) + L(w)$. This establishes the subadditivity of $L(y)$. Observe that for any $\lambda > 0$ and $y \in \bb R^I$, we have $\omega(\delta,\lambda y) = \lambda\omega(\frac\delta\lambda, y)$ and therefore $L(\lambda y) = \lim_{\delta\to\infty}\omega(\delta, \lambda y) = \lim_{\delta\to\infty}\lambda \omega(\frac\delta\lambda, y) = \lambda \lim_{\delta\to\infty} \omega(\frac\delta\lambda, y) = \lambda \lim_{\delta\to\infty} \omega(\delta, y) =\lambda L(y)$. This establishes the positive homogeneity, and hence the sublinearity, of $L(y)$. We now show the sublinearity of $S(y)$. Let $y, w \in \bb R^I$; then \begin{align*} S(y + w) &= \sup \left\{\tilde y(h) + \tilde w(h) : h \in I_3 \right\} \\ &\le \sup \left\{\tilde y(h) : h \in I_3 \right\} + \sup \left\{\tilde w(h) : h \in I_3 \right\} \\ &= S(y) + S(w). \end{align*} For any $\lambda > 0$ we also have $S(\lambda y) = \lambda S(y)$ by the definition of supremum. This establishes that $S(y)$ is a sublinear function. Finally, since $OV(y) = \max \left\{L(y), S(y)\right\}$ and $L(y)$ and $S(y)$ are sublinear functions, it is immediate that $OV(y)$ is sublinear.
\end{proof} The values $S(b)$ and $L(b)$ are used to characterize when \eqref{eq:SILP}--\eqref{eq:FDSILP} have zero duality gap. \begin{theorem}[Theorem 13 in Basu et al. \cite{basu2013projection}]\label{theorem:zero-duality-gap} The optimal value of~\eqref{eq:SILP} is equal to the optimal value of~\eqref{eq:FDSILP} if and only if (i) \eqref{eq:SILP} is feasible and (ii) $S(b) \ge L(b)$. \end{theorem} The next lemma is useful in cases where $L(b) > S(b)$ and hence (by Theorem~\ref{theorem:zero-duality-gap}) the finite support dual has a duality gap. A less general version of the result appeared as Lemma~7 in Basu et al. \cite{basu2013projection}. \begin{lemma}\label{lem:seq-L} Suppose $y \in \bb R^I$ and $\tilde y = \overline{FM}(y)$. If $\{\tilde y(h_m) \}_{m\in \bb N}$ is any convergent sequence with indices $h_{m}$ in $I_4$ such that $\lim_{m\to\infty} \sum_{k=\ell}^n |\tilde a^k(h_m)| = 0,$ then $\lim_{m\to\infty} \tilde y(h_m) \leq L(y)$. Furthermore, if $L(y)$ is finite, there exists a sequence of distinct indices $h_m$ in $I_4$ such that $\lim_{m \to \infty}\tilde y(h_m) = L(y)$ and $\lim_{m\to \infty}\tilde a^k(h_m) = 0$ for $k = 1, \ldots, n$. \end{lemma} \begin{proof} We prove the first part of the lemma. Let $\{\tilde y(h_m) \}_{m\in \bb N}$ be a convergent sequence with indices $h_{m}$ in $I_4$ such that $\lim_{m\to\infty} \sum_{k=\ell}^n |\tilde a^k(h_m)| = 0.$ We show that $\lim_{m\to\infty} \tilde y(h_m) \leq L(y)$. If $L(y) = \infty$ the result is immediate.
Next assume $L(y) = -\infty.$ Since $\lim_{m\to\infty} \sum_{k=\ell}^n |\tilde a^k(h_m)| = 0$, for every $\delta>0$, there exists $N_{\delta}\in \bb N$ such that for all $m \ge N_{\delta},$ $\sum_{k=\ell}^n |\tilde a^k(h_m)| < \frac{1}{\delta}.$ Then \begin{eqnarray*} \omega(\delta, y) &=& \sup\{ \tilde{y}(h) - \delta \sum_{k=\ell}^{n} |\tilde{a}^k(h)| \, : \, h \in I_{4} \} \\ &\ge& \sup\{\tilde{y}(h_{m}) - \delta \sum_{k=\ell}^{n} |\tilde{a}^k(h_{m})| \, : \, m \in \bb N \} \\ &\ge& \sup\{\tilde{y}(h_{m}) - \delta \sum_{k=\ell}^{n} |\tilde{a}^k(h_{m})| \, : \, m \in \bb N, \, \, m \ge N_\delta \} \\ &\ge& \sup\{\tilde{y}(h_{m}) - \delta (\frac{1}{\delta}) \, : \, m \in \bb N, \, \, m \ge N_\delta \} \\ &=& \sup\{\tilde{y}(h_{m}) \, : \, m \in \bb N, \, \, m \ge N_\delta \} - 1 \\ &\ge& \lim_{m \to \infty}\tilde y(h_m) -1. \end{eqnarray*} Therefore, $-\infty = L(y) = \lim_{\delta\to\infty}\omega(\delta,y) \geq \lim_{m \to \infty}\tilde y(h_m) -1$ which implies $\lim_{m \to \infty}\tilde y(h_m) = -\infty$. Now consider the case where $\{\tilde y(h_m) \}_{m \in \bb N}$ is a convergent sequence and $L(y)$ is finite. Since the full sequence converges, it suffices to find a subsequence $\{\tilde y(h_{m_{p}}) \}_{p \in \bb N}$ of $\{\tilde y(h_m) \}_{m \in \bb N}$ such that $\lim_{p \to\infty} \tilde y(h_{m_{p}}) \leq L(y)$, for then $\lim_{m\to\infty} \tilde y(h_m) \leq L(y).$ Since $\lim_{\delta \to \infty}\omega(\delta, y) = L(y)$, there is a sequence $(\delta_p)_{p \in \bb N}$ such that $\delta_p \geq 0$ and $\omega(\delta_p,y) < L(y) + \frac{1}{p}$ for all $p \in \bb N$.
Moreover, $\lim_{m \to \infty} \sum_{k=\ell}^n |\tilde{a}^k(h_m)| = 0$ implies that for every $p \in \bb N$ there is an $m_{p} \in \bb N$ such that for all $m \ge m_{p},$ $\delta_{p} \sum_{k = \ell}^{n} | \tilde{a}^{k}(h_{m})| < \frac{1}{p}.$ Thus, one can extract a subsequence $(h_{m_p})_{p\in \bb N}$ of $(h_m)_{m\in \bb N}$ such that $\delta_{p} \sum_{k = \ell}^{n} | \tilde{a}^{k}(h_{m_p})| < \frac{1}{p}$ for all $p \in \bb N.$ Then \[ L(y) + \frac{1}{p} > \omega(\delta_{p},y) = \sup \{\tilde{y}(h) - \delta_p \sum_{k=\ell}^{n} |\tilde{a}^k(h)| \, : \, h \in I_4 \} \ge \tilde{y}(h_{m_{p}}) - \delta_{p} \sum_{k = \ell}^{n} | \tilde{a}^{k}(h_{m_{p}})| > \tilde{y}(h_{m_{p}}) - \frac1p. \] Thus $\tilde y(h_{m_p}) < L(y) + \frac2p$, which implies $\lim_{p\to\infty}\tilde y(h_{m_p}) \leq L(y)$. \bigskip We now show the second part of the lemma: if $L(y)$ is finite, then there exists a sequence of distinct indices $h_m$ in $I_4$ such that $\lim_{m \to \infty}\tilde y(h_m) = L(y)$ and $\lim_{m\to\infty} \sum_{k=\ell}^n |\tilde a^k(h_m)| = 0.$ By hypothesis, $ \lim_{\delta \to \infty} \omega(\delta, y) = L(y) > -\infty$ so $I_4$ cannot be empty. Since $\omega(\delta, y)$ is a nonincreasing function of $\delta$, $\omega(\delta, y) \geq L(y)$ for all $\delta$. Therefore, $L(y) \leq \sup \{ \tilde{y}(h) - \delta \sum_{k=\ell}^{n} |\tilde{a}^{k}(h)| \, : \, h \in I_4\}$ for every $\delta$. Define $\bar I := \{ h \in I_4 \, : \, \tilde y(h) < L(y) \}$ and $\bar\omega(\delta, y) = \sup \{ \tilde{y}(h) - \delta \sum_{k=\ell}^{n} |\tilde{a}^{k}(h)| \, : \, h \in I_4 \setminus \bar I\}$. We consider two cases.
{\em \underline{Case 1: $\lim_{\delta \to\infty}\bar\omega(\delta, y) = -\infty.$}} Since $\lim_{\delta\to\infty }\omega(\delta, y) = L(y) > -\infty$ and both $\omega(\delta, y)$ and $\bar\omega(\delta, y)$ are nonincreasing functions in $\delta$, there exists a $\bar \delta \geq 0$ such that $\omega(\delta, y) \geq L(y) \geq \bar\omega(\delta, y)+1$ for all $\delta \geq \bar\delta$. Therefore, for all $\delta \geq \bar\delta$, $\omega(\delta, y) = \sup\{ \tilde{y}(h) - \delta \sum_{k=\ell}^{n} |\tilde{a}^{k}(h)| \, : \, h \in I_4\} \geq L(y) > L(y)-1 \geq \bar\omega(\delta, y) = \sup\{ \tilde{y}(h) - \delta \sum_{k=\ell}^{n} |\tilde{a}^{k}(h)| \, : \, h \in I_4\setminus \bar I\}$. This strict gap implies that we can drop all indices in $I_4\setminus \bar I$ and obtain $\omega(\delta, y) = \sup\{ \tilde{y}(h) - \delta \sum_{k=\ell}^{n} |\tilde{a}^{k}(h)| \, : \, h \in \bar I\}$ for all $\delta \geq \bar\delta$. For every $m \in\bb N$, set $\delta_m = \bar\delta+ m$. Since $\delta_m \geq \bar\delta$, \begin{align*} L(y) \leq \omega(\delta_m, y) =\sup \{ \tilde{y}(h) - \delta _m \sum_{k=\ell}^{n} |\tilde{a}^{k}(h)| \, : \, h \in \bar I\}=\sup \{ \tilde{y}(h) - (\bar\delta + m) \sum_{k=\ell}^{n} |\tilde{a}^{k}(h)| \, : \, h \in \bar I\} \end{align*} and thus, there exists $h_m \in \bar I$ such that $L(y) - \frac{1}{m} < \tilde y(h_m) - (\bar\delta +m) \sum_{k=\ell}^{n} |\tilde{a}^{k}(h_m)| \leq \tilde y(h_m) - m \sum_{k=\ell}^{n} |\tilde{a}^{k}(h_m)|$. Since $\tilde y(h) < L(y)$ for all $h \in \bar I$, we have $$\begin{array}{rl}& L(y)-\frac{1}{m} < L(y) - m\sum_{k=\ell}^{n} |\tilde{a}^{k}(h_m)| \\ \Rightarrow & \sum_{k=\ell}^{n} |\tilde{a}^{k}(h_m)| < \frac{1}{m^2}.\end{array}$$ This shows that $\lim_{m \to \infty} \sum_{k=\ell}^n| \tilde{a}^k(h_m)| = 0$, which in turn implies that $\lim_{m\to\infty} \tilde a^k(h_m) = 0$ for all $k = \ell, \ldots, n$.
By definition of $I_{4},$ $\sum_{k=\ell}^n |\tilde{a}^{k}(h_m)| > 0$ for all $h_m \in \bar I \subseteq I_4$, so we can assume the indices $h_m$ are all distinct. Also, \begin{align*} \begin{array}{rl}& L(y)-\frac{1}{m} < \tilde y(h_m) - m \sum_{k=\ell}^{n} |\tilde{a}^{k}(h_m)| \\ \Rightarrow & L(y) - \frac{1}{m} < \tilde y(h_m).\end{array} \end{align*} Since $\tilde y(h_m) < L(y)$ (because $h_m \in \bar I$), we get $ L(y) - \frac{1}{m} < \tilde y(h_m) < L(y)$, and so $\lim_{m\to\infty} \tilde y(h_m) = L(y)$. {\em \underline{Case 2: $\lim_{\delta \to\infty}\bar\omega(\delta, y) > -\infty.$}} Since $\omega(\delta, y) \geq \bar\omega(\delta, y)$ for all $\delta\geq 0$ and $\lim_{\delta\to\infty}\omega(\delta, y) = L(y) < \infty$, we have $-\infty < \lim_{\delta \to\infty}\bar\omega(\delta, y) \leq L(y) < \infty$. First we show that there exists a sequence of indices $h_m \in I_4 \setminus \bar I$ such that $\tilde a^k(h_m) \to 0$ for all $k = \ell, \ldots, n$. This is achieved by showing that $\inf\{\sum_{k=\ell}^n| \tilde{a}^k(h)| : h \in I_4\setminus \bar I\} = 0$. Suppose to the contrary that $\inf\{\sum_{k=\ell}^n| \tilde{a}^k(h)| : h \in I_4\setminus \bar I\} = \beta > 0$. Since $\bar\omega(\delta, y)$ is nonincreasing and $\lim_{\delta \to \infty}\bar\omega(\delta, y) < \infty$, there exists $\bar \delta \geq 0$ such that $\bar\omega(\bar\delta,y) < \infty$. Observe that $\lim_{\delta \to \infty}\bar\omega(\delta, y) = \lim_{\delta \to\infty} \bar\omega(\bar\delta + \delta,y)$.
Then, for every $\delta \geq 0$, \begin{align*} \bar\omega(\bar\delta + \delta,y) &= \sup \{ \tilde{y}(h) - (\bar\delta + \delta) \sum_{k=\ell}^{n} | \tilde{a}^{k}(h)| \, : \, h \in I_4 \setminus \bar I \} \\ &= \sup \{ \tilde{y}(h) - \bar\delta \sum_{k=\ell}^{n} |\tilde{a}^{k}(h)| -\delta \sum_{k=\ell}^{n} |\tilde{a}^{k}(h)| \, : \, h \in I_4\setminus \bar I\} \\ &\le \sup \{ \tilde{y}(h) - \bar\delta \sum_{k=\ell}^{n} |\tilde{a}^{k}(h)| -\delta \beta \, : \, h \in I_4\setminus \bar I\} \\ &= \sup \{ \tilde{y}(h) - \bar\delta \sum_{k=\ell}^{n} |\tilde{a}^{k}(h)| \, : \, h \in I_4\setminus \bar I\} - \delta\beta\\ &= \bar\omega(\bar \delta,y) - \delta\beta. \end{align*} Therefore, $-\infty < \lim_{\delta\to\infty} \bar\omega(\bar \delta + \delta,y) \leq \lim_{\delta \to \infty} (\bar\omega(\bar\delta,y) - \delta\beta) = -\infty$, since $\beta > 0$ and $\bar\omega(\bar\delta,y) < \infty$. This is a contradiction. Thus $0 = \beta = \inf\{\sum_{k=\ell}^n| \tilde{a}^k(h)| : h \in I_4\setminus \bar I\}$. Since $\sum_{k=\ell}^n| \tilde{a}^k(h)| > 0$ for all $h \in I_4$, there is a sequence of distinct indices $h_m \in I_4\setminus \bar I$ such that $\lim_{m \to \infty} \sum_{k=\ell}^n| \tilde{a}^k(h_m)| = 0$, which in turn implies that $\lim_{m\to\infty} \tilde a^k(h_m) = 0$ for all $k = \ell, \ldots, n$. Now we show there is a subsequence of $\tilde y(h_m)$ that converges to $L(y)$. Since $\lim_{\delta \to \infty}\bar\omega(\delta, y) \leq L(y)$, there is a sequence $(\delta_p)_{p \in \bb N}$ such that $\delta_p \geq 0$ and $\bar\omega(\delta_p,y) < L(y) + \frac{1}{p}$ for all $p \in \bb N$. It was shown above that the sequence $h_m \in I_4\setminus \bar I$ is such that $\lim_{m \to \infty} \sum_{k=\ell}^n |\tilde{a}^k(h_m)| = 0$. 
This implies that for every $p \in \bb N$ there is an $m_{p} \in \bb N$ such that for all $m \ge m_{p},$ $\delta_{p} \sum_{k = \ell}^{n} | \tilde{a}^{k}(h_{m})| < \frac{1}{p}.$ Thus, one can extract a subsequence $(h_{m_p})_{p\in \bb N}$ of $(h_m)_{m\in \bb N}$ such that $\delta_{p} \sum_{k = \ell}^{n} | \tilde{a}^{k}(h_{m_p})| < \frac{1}{p}$ for all $p \in \bb N.$ Then \[ L(y) + \frac{1}{p} > \bar\omega(\delta_{p},y) = \sup \{\tilde{y}(h) - \delta_p \sum_{k=\ell}^{n} |\tilde{a}^k(h)| \, : \, h \in I_4\setminus \bar I \} \ge \tilde{y}(h_{m_{p}}) - \delta_{p} \sum_{k = \ell}^{n} | \tilde{a}^{k}(h_{m_{p}})| > \tilde{y}(h_{m_{p}}) - \frac1p. \] Recall that $h_{m_{p}} \in I_{4} \setminus \bar I$ implies $\tilde{y}(h_{m_{p}}) \ge L(y)$, and therefore $ L(y) + \frac{2}{p} > \tilde{y}(h_{m_{p}}) \ge L(y). $ By replacing $\{h_m\}_{m \in \bb N}$ by the subsequence $\{h_{m_p}\}_{p \in \bb N}$, we get $\tilde y(h_{m_{p}})$ as the desired subsequence that converges to $L(y)$. Hence, there exists a sequence $\{h_m\}_{m\in \bb N}$ of indices in $I_4$ such that $\tilde y(h_m) \to L(y)$ as $m \to \infty$ and $\tilde a^k(h_m) \to 0$ as $m \to \infty$ for $k = \ell, \dots, n$. Also, $\tilde a^k(h_m) = 0$ for all $k=1, \ldots, \ell-1$. \end{proof} Although Lemma~\ref{lem:seq-S} and its proof are very simple (they essentially follow from the definition of supremum), we include the result in order to parallel Lemma~\ref{lem:seq-L}. Both results are needed for Proposition~\ref{prop:ov-seq}. \begin{lemma}\label{lem:seq-S} Suppose $y \in \bb R^I$ and $\tilde y = \overline{FM}(y)$ with $I_{3} \neq \emptyset.$ If $\{\tilde y(h_m) \}_{m\in \bb N}$ is any convergent sequence with indices $ h_{m} $ in $I_3,$ then $\lim_{m\to\infty} \tilde y(h_m) \leq S(y)$. Furthermore, there exists a sequence of indices $h_m$ in $I_3$ such that $\lim_{m \to \infty}\tilde y(h_m) = S(y)$ and $\lim_{m\to \infty}\tilde a^k(h_m) = 0$ for $k = 1, \ldots, n$.
Also, if the supremum that defines $S(y)$ is not attained, the sequence of indices can be taken to be distinct. \end{lemma} \begin{proof} By definition of supremum there exists a sequence $\{h_m\}_{m \in \bb N} \subseteq{I_3}$ such that $\tilde y(h_m) \to S(y)$ as $m \to \infty$. If the supremum that defines $S(y)$ is attained by $\tilde y(h_0) = S(y)$ then take $h_m = h_0$ for all $m \in \bb N$. Otherwise, the elements $h_m$ can be taken to be distinct. By definition of $I_{3},$ $\tilde a^k(h_m) = 0$ for $k = 1, \dots, n$ and for all $m\in \bb N$, and so $\lim_{m\to\infty} \sum_{k=\ell}^n |\tilde a^k(h_m)| = 0.$ It also follows from the definition of supremum that if $\{\tilde y(h_m) \}_{m\in \bb N}$ is any convergent sequence with indices $ h_{m} $ in $I_3,$ then $\lim_{m\to\infty} \tilde y(h_m) \leq S(y)$. \end{proof} \begin{prop}\label{prop:ov-seq} Suppose $y \in \bb R^I$, $\tilde y = \overline{FM}(y)$ and $OV(y)$ is finite. Then there exists a sequence of indices (not necessarily distinct) $h_m$ in $H$ such that $\lim_{m \to \infty}\tilde y(h_m) = OV(y)$ and $\lim_{m\to \infty}\tilde a^k(h_m) = 0$ for $k = 1, \ldots, n$. The sequence is contained entirely in $I_3$ or $I_4$. Moreover, if $L(y) > S(y)$, or if $L(y) \le S(y)$ and the supremum that defines $S(y)$ is not attained, the sequence of indices can be taken to be distinct. \end{prop} \begin{proof} By Theorem~\ref{theorem:fm-primal-value}, $OV(y) = \max\{S(y), L(y)\}.$ The result is now immediate from Lemmas~\ref{lem:seq-L} and~\ref{lem:seq-S}. \end{proof} \section{Strong duality and dual pricing for a restricted constraint space}\label{s:strong-duality-dual-pricing} Duality results for SILPs depend crucially on the choice of the constraint space $Y.$ In this section we work with the constraint space $Y = U$ where $U$ is defined in~\eqref{eq:define-U}.
Recall that the vector space $U$ is the smallest vector space of interest since every legitimate dual problem~\eqref{eq:DSILPprime} requires the linear functionals defined on $Y$ to operate on $a^1, \dots, a^n, b.$ We show that when $Y = U = \text{span}(a^1, \dots, a^n, b)$, (SD) and (DP) hold. In particular, we explicitly construct a linear functional $\psi^* \in U_{+}^{\prime}$ such that~\eqref{eq:strong-duality} and~\eqref{eq:dual-pricing} hold. \begin{theorem}\label{theorem:silps-never-have-a-duality-gap} Consider an instance of \eqref{eq:SILP} that is feasible and bounded. Then, the dual problem $(\text{DSILP}(U))$ with $U = \text{span}(a^1, \dots, a^n, b)$ is solvable and (SD) holds for the dual pair \eqref{eq:SILP}--$(\text{DSILP}(U)).$ Moreover, $(\text{DSILP}(U))$ has a unique optimal dual solution. \end{theorem} \begin{proof} Since \eqref{eq:SILP} is feasible and bounded, we apply Proposition~\ref{prop:ov-seq} with $y =b$ and extract a sequence of indices $\{h_{m}\}_{m \in \bb N}$ in $H$ satisfying $\tilde b(h_m) \to OV(b)$ as $m \to \infty$ and $\tilde a^k(h_m) \to 0$ as $m \to \infty$ for $k = 1, \dots, n$. \vskip 5pt By Lemma~\ref{lemma:cute-little-trick}, for all $k=1,\ldots, n$, $\overline{FM}(a^k)(h_m) = FM((-c_k, a^k))(h_m) +c_k$ and therefore $\lim_{m\to\infty} \overline{FM}(a^k)(h_m) = \lim_{m\to\infty}FM((-c_k, a^k))(h_m) +c_k = \lim_{m\to\infty}\tilde a^k(h_m) +c_k = c_k$. Also, $\lim_{m\to\infty}\overline{FM}(b)(h_m) = \lim_{m\to\infty}FM((0,b))(h_m) = \lim_{m\to\infty} \tilde b(h_m) = OV(b)$. Therefore $\overline{FM}(a^1), \ldots, \overline{FM}(a^n),$ $\overline{FM}(b)$ all lie in the subspace $M \subseteq \bb R^{H}$ defined by \begin{eqnarray} M := \{ \tilde{y} \in \bb R^H \, :\, (\tilde{y}(h_m))_{m \in \bb N} \,\, \text{ converges } \}. \label{eq:define-M} \end{eqnarray} Define a positive linear functional $\lambda$ on $M$ by \begin{align}\label{eq:projected-system-linear-functional} \lambda(\tilde y) = \lim_{m \to \infty} \tilde{y}(h_m).
\end{align} Since $\overline{FM}(a^1), \ldots, \overline{FM}(a^n), \overline{FM}(b) \in M$ we have $\overline{FM}(U) \subseteq M$ and so $\lambda$ is defined on $\overline{FM}(U)$. Now map $\lambda$ to a linear functional in $U'$ through the adjoint mapping $\overline{FM}'$. Let $\psi^{*} = \overline{FM}'(\lambda).$ We verify that $\psi^{*}$ is an optimal solution to $(\text{DSILP}(U))$ with objective value $OV(b)$. It follows from the definition of $\lambda$ in~\eqref{eq:projected-system-linear-functional} that $\lambda$ is a positive linear functional. Since $\overline{FM}$ is a positive operator, $\psi^{*} = \overline{FM}'(\lambda) = \lambda\circ \overline{FM}$ is a positive linear functional on $U$. We now check that $\psi^{*}$ is dual feasible. We showed above that $\lambda(\overline{FM}(a^{k})) = c_{k} $ for all $k = 1, \ldots, n.$ Then by definition of adjoint \begin{eqnarray*} \langle a^k, \psi^{*} \rangle = \langle a^k, \overline{FM}'(\lambda)\rangle = \langle \overline{FM}(a^k), \lambda\rangle = c_{k}. \end{eqnarray*} By a similar argument, $\langle b, \psi^{*} \rangle = \langle \overline{FM}(b), \lambda \rangle = OV(b)$, so $\psi^{*}$ is both feasible and optimal. Note that $\psi^{*}$ is the {\it unique} optimal dual solution: since $U$ is the span of $a^{1}, \ldots, a^{n}$ and $b$, prescribing the value of an optimal dual solution on each of these vectors (as feasibility and optimality do) determines it on all of $U$. This completes the proof.\end{proof} \begin{remark}\label{rem:constrast-with-kortanek} The above theorem can be contrasted with the results of Charnes et al. \cite{charnes1965representations}, who proved that it is always possible to reformulate \eqref{eq:SILP} to ensure zero duality gap with the finite support dual program. Our approach works with the original formulation of \eqref{eq:SILP} and thus preserves dual information in reference to the original system of constraints rather than a reformulation.
Indeed, our procedure considers an alternate \emph{dual} rather than the finite support dual. \end{remark} \begin{theorem}\label{theorem:U-dual-pricing} Consider an instance of \eqref{eq:SILP} that is feasible and bounded. Then the unique optimal dual solution $\psi^{*}$ constructed in Theorem~\ref{theorem:silps-never-have-a-duality-gap} satisfies \eqref{eq:dual-pricing} for all perturbations $d \in U.$ \end{theorem} \begin{proof} By hypothesis \eqref{eq:SILP} is feasible and bounded. Then by Theorem~\ref{theorem:silps-never-have-a-duality-gap} there is an optimal dual solution $\psi^{*}$ such that $\psi^{*}(b) = OV(b).$ For now assume \eqref{eq:SILP} is also solvable with optimal solution $x(b).$ We relax this assumption later. We show {\it for every} perturbation $d \in U$ that $\psi^{*}$ is an optimal dual solution. If $d \in U$ then $d = \sum_{k=1}^n \alpha_k a^k + \alpha_0 b.$ Following the logic of Theorem~\ref{theorem:silps-never-have-a-duality-gap}, there exists a sequence $\left\{h_m\right\}$ in $I_3$ or $I_4$ such that $\tilde{a}^{k}(h_{m}) \rightarrow 0$ for $k = 1, \ldots, n$ and $\tilde{b}(h_{m}) \rightarrow OV(b).$ Since a linear combination of convergent sequences is a convergent sequence, the linear functional $\lambda$ defined in~\eqref{eq:projected-system-linear-functional} is well defined on $\overline{FM}(U),$ and in particular at $\overline{FM}(b + d).$ For the projected system~\eqref{eq:J_system}, $\lambda$ is dual feasible and gives objective function value \begin{eqnarray*} \psi^{*}(b + d) = \lambda(\overline{FM}(b+d)) = (1 + \alpha_{0})OV(b) + \sum_{k = 1}^{n} \alpha_{k} c_{k}. \end{eqnarray*} A primal feasible solution to \eqref{eq:SILP} with right-hand-side $b + d$ is $\hat x_k = (1 + \alpha_0) x_k(b) + \alpha_k, \,\text{ for } k = 1, \dots, n$, and this primal solution gives objective function value $(1 + \alpha_{0})OV(b) + \sum_{k = 1}^{n} \alpha_{k} c_{k}$.
By weak duality $\psi^{*}$ remains the optimal dual solution for right-hand-side $b + d.$ Now consider the case where \eqref{eq:SILP} is not solvable. In this case the optimal primal objective value is approached, but not attained, as a supremum. Then there is a sequence $\{x^{m}(b)\}$ of primal feasible solutions whose objective function values converge to $OV(b).$ Now construct a sequence of feasible solutions $\{\hat{x}^{m}(b)\}$ using the definition of $\hat x$ above. Then reasoning very similar to the above shows that the objective function values of $\{\hat{x}^{m}(b)\}$ converge to $\psi^{*}(b + d).$ Again, by weak duality $\psi^{*}$ remains the optimal dual solution for right-hand-side $b + d.$ \end{proof} $(\text{DSILP}(U))$ is a very special dual. If there exists a $b$ for which \eqref{eq:SILP} is feasible and bounded, then there is an optimal dual solution $\psi^{*}$ to $(\text{DSILP}(U))$ such that \begin{eqnarray*} OV(b + d) = OV(b) + \psi^{*}(d) \end{eqnarray*} {\it for every} $d \in U.$ This is a much stronger result than (DP) since the same linear functional $\psi^{*}$ is valid for every perturbation $d.$ A natural question is when the weaker property (DP) holds in spaces that strictly contain $U$. The problem with allowing perturbations $d \not\in U$ is that $\overline{FM}(d)$ may not lie in the subspace $M$ defined by~\eqref{eq:define-M} and therefore the $\lambda$ defined in~\eqref{eq:projected-system-linear-functional} is not defined at $\overline{FM}(d).$ Then we cannot use the adjoint operator $\overline{FM}'$ to get $\psi^{*}(d).$ This motivates the development of the next section, where we want to find the largest possible perturbation space so that (SD) and (DP) hold. \section{Extending strong duality and dual pricing to larger constraint spaces}\label{s:extending-U} The goal of this section is to prove (SD) and (DP) for subspaces $Y \subseteq \bb R^{I}$ that extend $U$.
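Before developing the extension machinery, the pricing identity $OV(b+d) = (1+\alpha_0)OV(b) + \sum_{k}\alpha_k c_k$ established above can be sanity-checked in the simplest special case of \eqref{eq:SILP}: an LP with finitely many constraints. The sketch below uses hypothetical illustrative data (the $2\times 2$ instance, the dual point $y^*$, and the helper `ov` are ours, not taken from the paper); because the dual feasible set of this instance is a single strictly positive point playing the role of $\psi^*$, the identity can be verified by elementary computation.

```python
# Hedged sanity check of the dual pricing identity on a hypothetical
# finite special case of (SILP): a two-variable, two-constraint LP
# (data invented for illustration, not from the paper).
#
#   min  x1 + x2   s.t.   2*x1 + x2 >= b1,   x1 + 3*x2 >= b2,   x free.
#
# The dual feasible set {y >= 0 : A^T y = c} is the single point
# y* = (2/5, 1/5), independent of the right-hand-side.  Both multipliers
# are strictly positive, so the primal optimum makes both constraints
# tight and OV(rhs) = y* . rhs for every rhs.  The pricing identity then
# predicts, for d = alpha0*b + alpha1*a^1 + alpha2*a^2 in U:
#
#   OV(b + d) = (1 + alpha0)*OV(b) + alpha1*c1 + alpha2*c2.

def ov(rhs):
    """Optimal value: solve the 2x2 tight system A x = rhs, return c^T x."""
    b1, b2 = rhs
    det = 2 * 3 - 1 * 1          # det of A = [[2, 1], [1, 3]]
    x1 = (3 * b1 - 1 * b2) / det
    x2 = (2 * b2 - 1 * b1) / det
    return x1 + x2               # c = (1, 1)

b = (4.0, 6.0)
a1, a2 = (2.0, 1.0), (1.0, 3.0)  # columns a^1, a^2 of the constraint matrix
c1 = c2 = 1.0
alpha0, alpha1, alpha2 = 0.5, 0.25, -0.1

d = tuple(alpha0 * bi + alpha1 * u + alpha2 * v
          for bi, u, v in zip(b, a1, a2))
lhs = ov(tuple(bi + di for bi, di in zip(b, d)))
rhs_val = (1 + alpha0) * ov(b) + alpha1 * c1 + alpha2 * c2
print(lhs, rhs_val)              # both approximately 4.35
```

The point of the exercise is that the same dual vector prices every perturbation inside $U$ exactly, with no restriction to small $\epsilon$; the infinite-dimensional theorem generalizes precisely this behavior.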
In Proposition~\ref{prop:duality-gap-iff-lift-base-functional} below we prove that the primal-dual pair \eqref{eq:SILP}--\eqref{eq:DSILPprime} satisfies (SD) if and only if the base dual solution $\psi^*$ constructed in Theorem~\ref{theorem:silps-never-have-a-duality-gap} can be extended to a positive linear functional over $Y$. \begin{prop}\label{prop:duality-gap-iff-lift-base-functional} Consider an instance of \eqref{eq:SILP} that is feasible and bounded, and let $Y$ be a subspace of $\bb R^I$ that contains $U$ as a subspace. Then the dual pair \eqref{eq:SILP}--\eqref{eq:DSILPprime} satisfies (SD) if and only if the base dual solution $\psi^*$ defined in \eqref{eq:strong-duality} can be extended to a positive linear functional over $Y$. \end{prop} \begin{proof} If $\psi$ is an optimal dual solution it must be feasible and thus $\psi(a^k) = c_k$ for $k = 1, \dots, n$ and $\psi(b) = OV(b)$. In other words, $\psi(y) = \psi^*(y)$ for $y \in U$. Thus, $\psi$ is a positive linear extension of $\psi^*$. Conversely, every positive linear extension $\psi$ of $\psi^*$ is dual feasible and satisfies $\psi(b) = OV(b)$. This is because any extension maintains the values of $\psi^*$ when restricted to $U$. \end{proof} Moreover, we have the following ``monotonicity'' property of (SD) and (DP). \begin{prop}\label{prop:mononticity} Let $Y$ be a subspace of $\bb R^I$ that contains $U$ as a subspace. Then \begin{enumerate} \item if the primal-dual pair \eqref{eq:SILP}--\eqref{eq:DSILPprime} satisfies (SD), then (SD) holds for every primal-dual pair \eqref{eq:SILP}--$(\text{DSILP}(Q))$ where $Q$ is a subspace of $Y$ that contains $U$. \item if the primal-dual pair \eqref{eq:SILP}--\eqref{eq:DSILPprime} satisfies (DP), then (DP) holds for every primal-dual pair \eqref{eq:SILP}--$(\text{DSILP}(Q))$ where $Q$ is a subspace of $Y$ that contains $U$. \end{enumerate} \end{prop} \begin{proof} Property (DP) implies property (SD), so in both cases 1. and 2.
above \eqref{eq:SILP}--\eqref{eq:DSILPprime} satisfies (SD). Then by Proposition~\ref{prop:duality-gap-iff-lift-base-functional} the base dual solution $\psi^*$ defined in \eqref{eq:strong-duality} can be extended to a positive linear functional $\bar{\psi}$ over $Y$. Since $Q \subset Y,$ $\bar{\psi}$ is defined on $Q$ and is an optimal dual solution with respect to the space $Q$ since $OV(b) = \psi^{*}(b) = \bar{\psi}(b)$. This proves part 1. We now show part 2. Assume there is a $d \in Q \subseteq Y$ such that $b + d$ is a feasible right-hand-side for~\eqref{eq:SILP}. By definition of (DP) there is an $\hat{\epsilon} > 0$ such that \begin{eqnarray*} OV(b + \epsilon d) = \bar{\psi}(b + \epsilon d) = OV(b) + \epsilon \bar{\psi}(d) \end{eqnarray*} holds for all $\epsilon \in [0, \hat \epsilon].$ But $Q \subset Y$ implies $\bar{\psi}$ is an optimal linear functional with respect to the constraint space $Q$, and property (DP) holds. \end{proof} Another view of Propositions~\ref{prop:duality-gap-iff-lift-base-functional}~and~\ref{prop:mononticity} is that once properties (SD) or (DP) fail for a constraint space $Y$, then these properties fail for all larger constraint spaces. As the following example illustrates, an inability to extend can happen almost immediately as we enlarge the constraint space from $U.$ \begin{example}\label{ex:duality-gap-cannot-lift} Consider the \eqref{eq:SILP} \begin{eqnarray}\label{eq:cannot-lift-system} \min x_{1} && \\ (1/i) x_{1} + (1/i)^{2} x_{2} &\ge& (1/i), \quad i \in \bb N. \end{eqnarray} The smallest of the $\ell_{p}(\bb N)$ spaces that contains the columns of \eqref{eq:cannot-lift-system} (and thus $U$) is $Y = \ell_2$. Indeed, the first column is not in $\ell_1$ since $\sum_{i} \frac{1}{i}$ diverges.
We show (SD) fails to hold under this choice of $Y = \ell_{2}.$ This implies that (DP) fails in $\ell_{2}$ and every space that contains $\ell_{2}.$ An optimal primal solution is $x_{1} = 1$ and $x_{2} = 0$ with optimal solution value $1$. The dual $\text{DSILP}(\ell_2)$ is \begin{align} \sup \quad & \sum_{i=1}^\infty \frac{\psi_i}{i} \nonumber \\ {\rm s.t.} \quad & \sum_{i=1}^\infty \frac{\psi_i}{i} = 1 \label{first-constraint} \\ & \sum_{i=1}^\infty \frac{\psi_i}{i^2} = 0 \label{second-constraint} \\ & \psi \in (\ell_2)_{+}. \nonumber \end{align} In writing $\text{DSILP}(\ell_2)$ we use the fact that $(\ell^{\prime}_2)_+$ is isomorphic to $(\ell_2)_{+}$ (see the discussion in Section~\ref{s:preliminaries}). Observe that no nonnegative $\psi$ exists that can satisfy both \eqref{first-constraint} and \eqref{second-constraint}. Indeed, since $\psi \ge 0$, \eqref{second-constraint} implies $\psi_i = 0$ for all $i \in \bb N$. However, this implies that \eqref{first-constraint} cannot be satisfied. Hence, the optimal value of $\text{DSILP}(\ell_2)$ is $-\infty$ and there is an infinite duality gap. Therefore (SD) fails, immediately implying that (DP) fails. \quad $\triangleleft$ \end{example} \paragraph{Roadmap for extensions.} Our goal is to provide a coherent theory of when properties (SD) and (DP) hold in spaces larger than $U.$ Our approach is to extend the base dual solution to larger spaces using Fourier-Motzkin machinery. We provide a brief intuition for the method, which is elaborated upon carefully in the proofs that follow. First, the Fourier-Motzkin operator $\overline{FM}(y)$ defined in~\eqref{eq:define-FM-bar} is used to map $U$ onto the vector space $\overline{FM}(U).$ Next a linear functional $\lambda(\tilde{y})$ (see~\eqref{eq:projected-system-linear-functional}) is defined over $\overline{FM}(U)$. We aim to extend this linear functional to a larger vector space. Define the set \begin{eqnarray} \hat Y := \{y \in Y: -\infty < OV(y) < \infty\} \label{eq:define-E-hat}.
\end{eqnarray} Note that $\hat Y$ is the set of ``interesting'' right-hand-sides, so it is a natural set to investigate. Extending to all of $Y$ beyond $\hat Y$ is unnecessary because the excluded elements correspond to right-hand-sides which give infeasible or unbounded primal problems. However, the set $\hat Y$ is not necessarily a vector space, which makes it hard to talk of dual solutions acting on this set. If $\hat Y$ is a vector space, then $\overline{FM}(\hat{Y})$ is also a vector space and we show that the hypotheses of the Hahn-Banach Theorem are satisfied, so that the linear functional $\lambda$ defined in~\eqref{eq:projected-system-linear-functional} can be extended from $\overline{FM}(U)$ to $\bar{\lambda}$ on $\overline{FM}(\hat{Y}).$ Finally, the adjoint $\overline{FM}'$ of the Fourier-Motzkin operator $\overline{FM}$ is used to map the extended linear functional $\bar{\lambda}$ to an optimal linear functional on $\hat{Y}.$ Under appropriate conditions detailed below, this allows us to work with constraint spaces $\hat{Y}$ that strictly contain $U$ and still satisfy (SD) and (DP). See Theorems~\ref{thm:SD-ell-infty} and~\ref{theorem:sufficient-conditions-dual-pricing-alt} for careful statements and complete details. Figure~\ref{figure:extend-SD-Y} may help the reader keep track of the spaces involved. We emphasize that in order for $(\text{DSILP}(\hat{Y}))$ to be well defined, $\hat{Y}$ must contain $U$ and itself be a vector space. \begin{figure}[ht] \centering \resizebox{3.5in}{!}{\input{extend-SD-Y.pdf_t}} \vskip 10pt \caption{Illustrating Theorem~\ref{theorem:extend-SD-Y}.}\label{figure:extend-SD-Y} \end{figure} \subsection{Strong duality for extended constraint spaces} The following lemma is used to show $U \subseteq \hat Y$ in the subsequent discussion. \begin{lemma}\label{lemma:ov-ak-bounded-finite} If $-\infty < OV(b) < \infty$ (equivalently, \eqref{eq:SILP} with right-hand-side $b$ is feasible and bounded), then $-\infty <OV(a^k)< \infty$ for all $k=1, \ldots, n$.
\end{lemma} \begin{proof} If the right-hand-side vector is $a^{k}$ then $x_{k} = 1$ and $x_{j} = 0$ for $j \neq k$ is a feasible solution with objective value $c_{k}.$ Thus $OV(a^k) \le c_{k} < \infty.$ Now show $OV(a^{k}) > -\infty.$ Since $OV(a^k) < \infty,$ by Theorem~\ref{theorem:fm-primal-value}, $OV(a^{k}) = \max\{ S(a^{k}), L(a^{k}) \}.$ If $I_{3} \neq \emptyset$ then $S(a^{k}) > -\infty$ which implies $OV(a^{k}) > -\infty$ and we are done. Therefore assume $I_{3} = \emptyset.$ Then $S(b) = -\infty.$ However, by hypothesis $-\infty < OV(b) < \infty$ so by Theorem~\ref{theorem:fm-primal-value} \begin{align*} OV(b) = \max\{ S(b), L(b) \} = \max\{ -\infty, L(b) \} \end{align*} which implies $-\infty < L(b) < \infty.$ Then by Lemma~\ref{lem:seq-L} there exists a sequence of distinct indices $h_m$ in $I_4$ such that $\lim_{m\to \infty}\tilde a^k(h_m) = 0$ for all $k = \ell, \ldots, n.$ Note also that $\tilde{a}^{k}(h) = 0$ for $k = 1, \ldots, \ell -1$ and $h \in I_{4}.$ Let $\tilde{y} = \overline{FM}(a^{k}).$ By Lemma~\ref{lemma:cute-little-trick}, $\tilde{y}(h_m) = \tilde{a}^{k}(h_m) + c_{k}$, so $\lim_{m\to \infty}\tilde a^k(h_m) = 0$ implies $\lim_{m\to \infty} \tilde{y}(h_m) = c_{k}.$ Again by Lemma~\ref{lem:seq-L}, $L(a^{k}) \ge \lim_{m \rightarrow \infty} \tilde{y}(h_{m}) = c_{k},$ and hence $OV(a^{k}) \ge L(a^{k}) \ge c_{k} > -\infty.$ \end{proof} \begin{theorem}\label{theorem:extend-SD-Y} Consider an instance of \eqref{eq:SILP} that is feasible and bounded. Let $Y$ be a subspace of $ \bb R^I$ such that $U \subset Y$ and $\hat{Y}$ is a vector space. Then the dual problem $(\text{DSILP}(\hat{Y}))$ is solvable and (SD) holds for the primal-dual pair \eqref{eq:SILP}--$(\text{DSILP}(\hat{Y})).$ \end{theorem} \begin{proof} The proof of this theorem is similar to the proof of Theorem~\ref{theorem:silps-never-have-a-duality-gap}.
We use the operator $\overline{FM}$ and consider the linear functional $\lambda$ defined in~\eqref{eq:projected-system-linear-functional} which was shown to be a linear functional on $\overline{FM}(U).$ By hypothesis, $U \subset Y$ and so by Lemma~\ref{lemma:ov-ak-bounded-finite}, $U \subseteq \hat{Y}$ which implies $\overline{FM}(U) \subseteq \overline{FM}(\hat{Y})$. Since $\hat Y$ is a vector space and $\overline{FM}$ is a linear operator, $\overline{FM}(\hat{Y})$ is a vector space. We use the Hahn-Banach theorem to extend $\lambda$ from $\overline{FM}(U)$ to $\overline{FM}(\hat{Y})$. First observe that if $\overline{FM}(y^1) = \overline{FM}(y^2) = \tilde y$, then $S(y^1)=S(y^2)$ and $L(y^1) = L(y^2)$ because these values only depend on $\tilde y$, and therefore, $OV(y^1) = OV(y^2)$. This means for any $\tilde y \in \bb R^H$, $S,L$ and $OV$ are constant functions on the affine space $\overline{FM}^{-1}(\tilde y)$. Thus, we can push the sublinear function $OV$ forward to $\overline{FM}(\hat{Y})$ by setting $p(\tilde y) = OV(\overline{FM}^{-1}(\tilde y))$ ($p$ is sublinear as it is the composition of the inverse of a linear function and a sublinear function). Moreover, by Lemmas~\ref{lem:seq-L}-\ref{lem:seq-S} and Theorem~\ref{theorem:fm-primal-value}, $\lambda(\tilde y) \leq \max\{S(y), L(y)\} = OV(y) = p(\tilde y)$ for all $\tilde y \in \overline{FM}(U)$.
Then by the Hahn-Banach Theorem there exists an extension of $\lambda$ on $\overline{FM}(U)$ to $\bar{\lambda}$ on $\overline{FM}(\hat{Y})$ such that \begin{align*} -p(-\tilde y) \le \bar{\lambda}(\tilde{y}) \le p(\tilde y) \end{align*} for all $\tilde{y} \in \overline{FM}(\hat{Y}).$ We now show $\bar{\lambda}(\tilde{y})$ is positive on $\overline{FM}(\hat{Y}).$ If $\tilde{y} \ge 0$ then $-\tilde{y} \le 0$ and $\omega(\delta, -\tilde{y}) = \sup \{-\tilde{y}(h) - \delta \sum_{k=\ell}^{n} |\tilde{a}^k(h)| \, : \, h \in I_4 \} \le 0$ for all $\delta.$ Then $L(-y) = \lim_{\delta \rightarrow \infty}\omega(\delta, -\tilde{y}) \le 0$ for any $y$ such that $\tilde y =\overline{FM}(y)$. Likewise $S(-y) = \sup \{-\tilde{y}(h) \, : \, h \in I_3 \} \le 0.$ Then $S(-y), L(-y) \le 0$ implies \begin{align*} -p(-\tilde y) = -OV(-y) = -\max\{ S(-y), L(-y) \} = \min\{ -S(-y), -L(-y) \} \ge 0 \end{align*} and $-p(-\tilde{y}) \le \bar{\lambda}(\tilde{y})$ gives $0 \le \bar{\lambda}(\tilde{y})$ on $\overline{FM}(\hat{Y}).$ We have shown that $\bar{\lambda}$ is a positive linear functional on $\overline{FM}(\hat{Y}).$ It follows that $\psi^{*} = \overline{FM}^{\prime}(\bar{\lambda})$ is a positive linear functional on $\hat{Y}.$ Now recall that the $\lambda$ defined in~\eqref{eq:projected-system-linear-functional} in Theorem~\ref{theorem:silps-never-have-a-duality-gap} had the property that $\langle \overline{FM}(b), \lambda \rangle =OV(b)$ and $ \langle \overline{FM}(a^k), \lambda\rangle = c_{k}.$ By definition of $U,$ $a^{k} \in U$ for $k = 1, \ldots, n$ and $b \in U.$ However, $\bar{\lambda}$ is an extension of $\lambda$ from $\overline{FM}(U)$ to $\overline{FM}(\hat{Y}).$ Therefore, for $\psi^{*} = \overline{FM}^{\prime}(\bar{\lambda})$ \begin{eqnarray*} \langle a^k, \psi^{*} \rangle = \langle a^k, \overline{FM}'(\bar{\lambda})\rangle = \langle \overline{FM}(a^k), \bar{\lambda}\rangle = \langle \overline{FM}(a^k), \lambda \rangle = c_{k}. 
\end{eqnarray*} and similarly \begin{align*} \langle b, \psi^{*} \rangle = \langle b, \overline{FM}'(\bar{\lambda})\rangle = \langle \overline{FM}(b), \bar{\lambda}\rangle = \langle \overline{FM}(b), \lambda \rangle = OV(b) \end{align*} and so $\psi^{*}$ is an optimal dual solution to $(\text{DSILP}(\hat{Y}))$ with optimal value $OV(b)$. This is the optimal value of \eqref{eq:SILP}, so there is no duality gap. \end{proof} \begin{prop}\label{prop:E-infinity-vector-space} If $Y$ is a subspace of $\bb R^I$ such that $\overline{FM}(\hat{Y}) \subseteq \ell_{\infty}(H)$ then $\hat{Y}$ is a vector space. \end{prop} \begin{proof} If $\hat Y$ is empty we are trivially done. Otherwise let $\bar y$ be any element of $\hat Y$. Then $-\infty < OV(\bar{y}) < \infty$ so by Proposition~\ref{prop:ov-seq} there exists a sequence $\{h_m\}_{m\in \bb N}$ in $H$ such that $\tilde a^k(h_m) \to 0$ for $k = 1, \ldots, n$ as $m\to \infty$ which implies $\lim_{m\to\infty} \sum_{k=\ell}^n |\tilde a^k(h_m)| = 0.$ The only purpose of $\bar y$ is to generate the sequence $\left\{h_m\right\}$, which is used below. Consider $x,y \in \hat Y$, then $OV(x+y) \leq OV(x) + OV(y) < \infty$ by sublinearity of $OV$. We now show that $-\infty < OV(x+y)$. If $I_3$ is nonempty, then $S(x+y) > -\infty$ and therefore, $OV(x+y) \geq S(x+y) > -\infty$. If $I_3$ is empty, then $OV(x+y) = L(x+y)$ and it suffices to show $L(x+y) >-\infty$. Let $\tilde x = \overline{FM}(x)$ and $\tilde y = \overline{FM}(y)$. 
By hypothesis, there exists an $N > 0$ such that $ ||\tilde x||_\infty < N $ and $ ||\tilde y||_\infty < N.$ For any $\delta > 0,$ \begin{eqnarray*} \omega(\delta,x+y) &=& \sup\{ \tilde{x}(h) + \tilde{y}(h) - \delta \sum_{k=\ell}^{n} |\tilde{a}^k(h)| \, : \, h \in I_{4} \} \\ &\ge& \sup\{\tilde{x}(h_{m}) +\tilde{y}(h_{m}) - \delta \sum_{k=\ell}^{n} |\tilde{a}^k(h_{m})| \, : \, m \in \bb N \} \\ &\ge& \sup\{ -2N - \delta \sum_{k=\ell}^{n} |\tilde{a}^k(h_{m})| \, : \, m \in \bb N \} \\ &=& \sup\{ - \delta \sum_{k=\ell}^{n} |\tilde{a}^k(h_{m})| \, : \, m \in \bb N \} - 2N \\ &=& \delta \sup\{ - \sum_{k=\ell}^{n} |\tilde{a}^k(h_{m})| \, : \, m \in \bb N \} - 2N \\ &=& -2N \end{eqnarray*} where the last equality comes from the fact that $ - \sum_{k=\ell}^{n} |\tilde{a}^k(h_{m})| \le 0$ for all $m \in \bb N$ and this implies $\sup\{ - \sum_{k=\ell}^{n} |\tilde{a}^k(h_{m})| \, : \, m \in \bb N \} \le 0$. Then $\sum_{k=\ell}^{n} |\tilde{a}^k(h_m)| \to 0$ implies that this supremum is zero. Therefore \begin{eqnarray*} L(x + y) = \lim_{\delta \rightarrow \infty} \omega(\delta,x+y) \ge -2N > -\infty. \end{eqnarray*} We now confirm that for all $y \in \hat Y$ and $\alpha\in \bb R$, $-\infty < OV(\alpha y) < \infty$. If $\alpha > 0$ then $OV(\alpha y) = \alpha OV(y)$ by sublinearity of $OV$ and the result follows. Thus, it suffices to check that $-\infty < OV(-y) < \infty$ for all $y \in \hat Y$. By sublinearity of $OV$, $OV(y) + OV(-y) \geq OV(0) = 0$ by Remark~\ref{rem:finite-OV}. Thus, $OV(-y) \geq -OV(y) > -\infty$. We now show that $S(-y), L(-y) < \infty$ which implies $OV(-y) = \max\{S(-y), L(-y)\} < \infty$. By hypothesis, there exists $N > 0$ such that $ ||\tilde y||_\infty < N $. Therefore, $S(-y) = \sup\{-\tilde y(h): h \in I_3\} < N < \infty$. Finally, for every $\delta \geq 0$, \begin{eqnarray*} \omega(\delta,-y) &= &\sup\{-\tilde y(h) - \delta\sum_{k=\ell}^n|\tilde a^k(h)|: h \in I_4\} \\ & \leq &\sup\{-\tilde y(h): h \in I_4\} < N < \infty.
\end{eqnarray*} This implies $L(-y) = \lim_{\delta \rightarrow \infty} \omega(\delta,-y) < N <\infty.$ \end{proof} Theorem~\ref{thm:SD-ell-infty} is an immediate consequence of Theorem~\ref{theorem:extend-SD-Y} and Proposition~\ref{prop:E-infinity-vector-space}. \begin{theorem}\label{thm:SD-ell-infty} Suppose the constraint space $Y$ for~\eqref{eq:SILP} is such that $\overline{FM}(\hat{Y})\subseteq \ell_\infty(H)$. Then for any $b \in \hat{Y}$ the dual problem $(\text{DSILP}(\hat{Y}))$ is solvable and (SD) holds for the primal-dual pair \eqref{eq:SILP}--$(\text{DSILP}(\hat{Y})).$ \end{theorem} \begin{remark}\label{remark:sufficient-condition-for-theorem-SD-ell-infinity} The hypotheses of Proposition~\ref{prop:E-infinity-vector-space} and of Theorem~\ref{thm:SD-ell-infty} look rather technical; we make two remarks about how to verify these conditions. \begin{enumerate} \item The hypotheses of Proposition~\ref{prop:E-infinity-vector-space} and of Theorem~\ref{thm:SD-ell-infty} require $\overline{FM}(\hat{Y})\subseteq \ell_\infty(H).$ However, it may be easier to show $\overline{FM}(Y)\subseteq \ell_\infty(H)$ which implies $\overline{FM}(\hat{Y})\subseteq \ell_\infty(H)$ since $\hat{Y} \subseteq Y.$ For example, if $Y$ is an $\ell_p$ space for some $1 \le p \le \infty$ and $\overline{FM}(Y)\subseteq \ell_\infty(H)$ then there is a zero duality gap for all $b$ for which~\eqref{eq:SILP} is feasible and bounded. \item If \eqref{eq:SILP} has $n$ variables then a Fourier-Motzkin multiplier vector has at most $2^n$ nonzero components. Therefore, if the constraint space $Y \subseteq \ell_{\infty}(I) $ and the nonzero components of the multiplier vectors $u$ obtained by the Fourier-Motzkin elimination process have a common upper bound $N,$ then we satisfy the condition $\overline{FM}(Y) \subseteq \ell_\infty(H) $ in Proposition~\ref{prop:E-infinity-vector-space} and Theorem~\ref{thm:SD-ell-infty}.
Checking that the nonzero components of the multiplier vectors $u$ obtained by the Fourier-Motzkin elimination process have a common upper bound $N$ is verifiable through the Fourier-Motzkin procedure. \end{enumerate} \end{remark} \begin{example}[Example~\ref{ex:duality-gap-cannot-lift}, continued]\label{ex:duality-gap-cannot-lift-2} Recall that (SD) fails in Example~\ref{ex:duality-gap-cannot-lift}. In this case, $a^{1}, a^{2}, b \in \ell_{\infty}$ (indeed in $\ell_2$); however, the condition $\overline{FM}(\hat{Y}) \subseteq \ell_\infty(H)$ fails since the Fourier-Motzkin multiplier vectors are $(1, 0, \ldots, 0, i, 0, \ldots)$ for all $i \in \bb N$ and $\overline{FM}(-e) \not\in \ell_\infty(H)$ for $e = (1, 1, \ldots)$ but $-e \in \hat{Y}.$ \end{example} \subsection{An Example where (SD) holds but (DP) fails} In Example~\ref{example:karney-modified} we illustrate a case where (SD) holds but (DP) fails. In the following subsection we provide sufficient conditions that guarantee when (DP) holds. \begin{example}\label{example:karney-modified} Consider the following modification of Example~1 in Karney \cite{karney81}. \begin{align}\label{eq:karney-modified} \begin{array}{rcccl} \inf x_{1} &&&& \\ x_{1}&& &\ge& -1 \\ &-x_{2} &&\ge& -1 \\ &&-x_{3} &\ge& -1 \\ x_{1} &+ x_{2} & &\ge& 0 \\ x_{1} &- \frac{1}{i} x_{2} &+ \frac{1}{i^{2}} x_{3} &\ge& 0, \quad i = 5, 6, \ldots \end{array} \end{align} In this example $I = \bb N$. The smallest of the standard constraint spaces that contains the columns and right-hand-side of \eqref{eq:karney-modified} is $\mathfrak c$. To see this note that the first column, the sequence $(1,0,0,1,1,\dots)$, is not an element of $\ell_p$ (for $1 \le p < \infty$) and is also not contained in $\mathfrak c_0$. It is easy to check that the columns and the right-hand side lie in $\mathfrak c$. We show that (SD) holds with $(\text{DSILP}(\mathfrak c))$ but (DP) fails.
Then, by Proposition~\ref{prop:mononticity}, (DP) fails for any sequence space that contains $\mathfrak c,$ including $\ell_\infty$. Our analysis uses the Fourier-Motzkin elimination procedure. First write the constraints of the problem in standard form \begin{align*} \begin{array}{rrcccrl} z &-x_{1}&&&\ge& 0& b_{0}\\ &x_{1}&& &\ge& -1& b_{1}\\ &&-x_{2} &&\ge& -1& b_{2} \\ &&&-x_{3} &\ge& -1 &b_{3}\\ &x_{1} &+ x_{2} & &\ge& 0&b_{4} \\ &x_{1} &- \frac{1}{i} x_{2} &+ \frac{1}{i^{2}} x_{3} &\ge& 0 & b_{i}, \quad i = 5, 6, \ldots, \end{array} \end{align*} and eliminate $x_{3}$ to yield (tracking the multipliers on the constraints to the right of each constraint) \begin{align*} \begin{array}{rrccrl} z &-x_{1}&&\ge& 0& b_{0}\\ &x_{1}& &\ge& -1& b_{1}\\ &&-x_{2} &\ge& -1& b_{2} \\ &x_{1} &+ x_{2} &\ge& 0&b_{4} \\ &x_{1} &- \frac{1}{i} x_{2} &\ge& -\frac{1}{i^{2}} & (\frac{1}{i^{2}})b_{3} + b_{i}, \quad i = 5, 6, \ldots , \end{array} \end{align*} then $x_2$ to give \begin{align*} \begin{array}{rrcrl} z &-x_{1}&\ge& 0& b_{0}\\ &x_{1} &\ge& -1& b_{1}\\ &x_{1} &\ge& -1&b_{2} + b_{4} \\ &\frac{(1+i)}{i}x_{1} &\ge& -\frac{1}{i^{2}} & (\frac{1}{i^{2}})b_{3} + (\frac{1}{i})b_{4} + b_{i}, \quad i = 5, 6, \ldots, \end{array} \end{align*} and finally $x_1$ to give \begin{equation}\label{eq:karney-modified-projected} \begin{array}{rrrl} z&\ge& -1& b_{0} + b_{1}\\ z &\ge& -1&b_{0} + b_{2} + b_{4} \\ z &\ge& \frac{-1}{i(1 + i)} & b_{0} + \frac{b_{3}}{i(1 + i)} + \frac{b_{4}}{(1+i)} + \frac{i b_{i}}{(1+i)}, \quad i = 5, 6, \ldots \end{array} \end{equation} We first claim that (SD) holds. The components of the Fourier-Motzkin multipliers (which can be read off the right side of~\eqref{eq:karney-modified-projected}) have an upper bound of 1. By Remark~\ref{remark:sufficient-condition-for-theorem-SD-ell-infinity} the hypotheses of Theorem~\ref{thm:SD-ell-infty} hold and we have (SD). We now show that (DP) fails. 
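As an aside, the elimination above can be checked mechanically (an illustrative numerical sketch in exact arithmetic, not part of the formal development; the function name is ours): the multipliers recorded next to the third family of rows in~\eqref{eq:karney-modified-projected} annihilate $x_{1}, x_{2}, x_{3}$, reproduce the right-hand side $-1/(i(1+i))$, and have components bounded by $1$, the bound used in the (SD) argument.

```python
from fractions import Fraction as F

def projected_row(i):
    """Aggregate the standard-form rows b0, b3, b4, b_i with the multipliers
    read off the projected system: 1, 1/(i(1+i)), 1/(1+i), i/(1+i)."""
    i = F(i)
    # rows as (coefficients on (z, x1, x2, x3), right-hand side)
    b0 = ((F(1), F(-1), F(0), F(0)), F(0))      # z - x1 >= 0
    b3 = ((F(0), F(0), F(0), F(-1)), F(-1))     # -x3 >= -1
    b4 = ((F(0), F(1), F(1), F(0)), F(0))       # x1 + x2 >= 0
    bi = ((F(0), F(1), F(-1) / i, F(1) / i**2), F(0))
    mult = [(b0, F(1)), (b3, F(1) / (i * (1 + i))),
            (b4, F(1) / (1 + i)), (bi, i / (1 + i))]
    assert all(0 <= u <= 1 for _, u in mult)    # common upper bound of 1
    coeffs = [sum(u * row[0][k] for row, u in mult) for k in range(4)]
    rhs = sum(u * row[1] for row, u in mult)
    return coeffs, rhs

for i in range(5, 50):
    coeffs, rhs = projected_row(i)
    assert coeffs == [1, 0, 0, 0]               # x1, x2, x3 eliminated
    assert rhs == F(-1, i * (i + 1))            # z >= -1/(i(1+i))
```

Exact rationals avoid floating-point noise in the cancellation checks.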
We do this by showing that there is a unique optimal dual solution (Claim 1) and that (DP) fails for this unique solution (Claim 2). \vskip 7pt \textbf{Claim 1.} The limit functional $\psi_{0 \oplus 1}$ (using the notation for dual linear functionals over $\mathfrak c$ introduced in Section~\ref{s:preliminaries}) is the unique dual optimal solution to $(\text{DSILP}(\mathfrak c))$. \vskip 7pt Recall that every positive dual solution in $\mathfrak c$ has the form $\psi_{w \oplus r}$ where $w \in \ell^1_+$ and $r \in \bb R$ and $\psi_{w \oplus r}(y) = \sum_{i = 1}^\infty w_iy_i + ry_\infty$ for every convergent sequence $y$ with limit $y_\infty$. The constraints of $(\text{DSILP}(\mathfrak c))$ are written as follows \begin{align*} \psi_{w\oplus r}(a^1) = 1, \quad \psi_{w\oplus r}(a^2) = 0, \quad \psi_{w\oplus r}(a^3) = 0. \end{align*} This implies the following about $w$ and $r$ for dual feasibility \begin{align*} w_1 + w_4 + \sum_{i = 5}^\infty w_i + ra^1_\infty &= 1 \\ -w_2 + w_4 - \sum_{i = 5}^\infty \frac{w_i}{i} + ra^2_\infty &= 0 \\ -w_3 + \sum_{i = 5}^\infty \frac{w_i}{i^2} + ra^3_\infty &= 0 \end{align*} which simplifies to \begin{align} w_4 & = 1 - w_1 - \sum_{i = 5}^\infty w_i - r \label{eq:feasibility-1}\\ w_4 & = w_2 + \sum_{i = 5}^\infty \frac{w_i}{i} \label{eq:feasibility-2} \\ w_3 & = \sum_{i = 5}^\infty \frac{w_i}{i^2} \label{eq:feasibility-3} \end{align} by noting $a^1_\infty = 1$ and $a^2_\infty = a^3_\infty = 0$. The dual objective value for a feasible $\psi_{w \oplus r}$ is \begin{align*} \psi_{w \oplus r}(b) = - w_1 - w_2 - w_3 \end{align*} since $b_\infty = 0$. Clearly, $\psi_{0 \oplus 1}$ is feasible ($w = 0$ and $r = 1$ trivially satisfies \eqref{eq:feasibility-1}--\eqref{eq:feasibility-3}) with an objective value of $0$. Now consider an arbitrary dual solution $\psi_{w \oplus r}$.
If any one of $w_1, w_2, w_3 > 0$ then $\psi_{w \oplus r}(b) < 0$ (recall that $w \ge 0$) and so $\psi_{w \oplus r}$ is not dual optimal since $\psi_{0 \oplus 1}$ yields a greater objective value. This means we can take $w_1 = w_2 = w_3 = 0$ in any optimal dual solution. Combined with \eqref{eq:feasibility-3} this implies $\sum_{i = 5}^\infty \frac{w_i}{i^2} = 0.$ Since $w_i \ge 0$ this implies $w_i = 0$ for $i = 5, 6, \dots$. From \eqref{eq:feasibility-2} this implies $w_4 = 0$. Thus, in every dual optimal solution $w = 0$ and \eqref{eq:feasibility-1} implies $r = 1$. Therefore the limit functional $\psi_{0 \oplus 1}$ is the unique optimal dual solution, establishing the claim. \quad $\dagger$ The limit functional is an optimal dual solution with an objective value of $0$ which is also the optimal primal value since (SD) holds. Next we argue that (DP) fails. Since the limit functional is the unique optimal dual solution, it is the only allowable $\psi^*$ in \eqref{eq:dual-pricing}. This observation makes it easy to verify that (DP) fails. We show that \eqref{eq:dual-pricing} fails for $\psi_{0 \oplus 1}$ and $d = (0,0,0,1,0,\dots)$. This perturbation vector $d$ leaves the problem unchanged except for the fourth constraint, which becomes $x_{1} + x_{2} \ge \epsilon$. \vskip 7pt \textbf{Claim 2.} For all sufficiently small $\epsilon > 0$, the primal problem with the new right-hand-side vector $b + \epsilon d$ for $d = (0,0,0,1,0,\dots)$ is feasible and has a primal objective function value $OV(b + \epsilon d)$ strictly greater than zero.
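Before the formal argument below, the claim can be sanity-checked numerically (an informal sketch; the helper name is ours): with $\epsilon = 1/N$, the projected constraint with index $i = 2N$ already forces a strictly positive lower bound on $z$.

```python
def z_lower_bound(N):
    """Value of projected constraint i = 2N after changing b4 from 0 to 1/N:
    z >= -1/(i(1+i)) + eps/(1+i)."""
    eps, i = 1.0 / N, 2 * N
    return -1.0 / (i * (1 + i)) + eps / (1 + i)

# the bound simplifies to 1/(2N(2N+1)) > 0 for every integer N >= 3
assert all(z_lower_bound(N) > 0 for N in range(3, 10**4))
assert abs(z_lower_bound(3) - 1.0 / 42) < 1e-12
```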
\vskip 7pt Observe from~\eqref{eq:karney-modified-projected} that $I_{1}$ and $I_{2}$ are empty and that the primal is feasible for right-hand-side vector $b + \epsilon d$ for all $\epsilon.$ The third set of inequalities in~\eqref{eq:karney-modified-projected} is \begin{eqnarray*} z \ge b_{0} + \frac{b_{3}}{i(1 + i)} + \frac{b_{4}}{(1+i)} + \frac{i b_{i}}{(1+i)}, \quad i = 5, 6, \ldots \end{eqnarray*} When $b_{4}$ is changed from 0 to $\epsilon$ we have $b_{0} = 0,$ $b_{1} = b_{2} = b_{3} = -1,$ $b_{4} = \epsilon,$ and $b_{i}= 0$ for $i \ge 5.$ These values give \begin{eqnarray*} z \ge \frac{-1}{i(1 + i)} + \frac{\epsilon}{(1+i)} = \frac{1}{(1+ i)} \left( \epsilon - \frac{1}{i} \right), \quad i = 5, 6, \ldots \end{eqnarray*} Let $\epsilon = 1/N$ for a positive integer $N \ge 3.$ Define $\hat i = 2/\epsilon = 2 N$. Then constraint $\hat i$ is \begin{eqnarray*} z \ge \frac{1}{(\frac{2}{\epsilon}+1)} \left(\epsilon - \frac{1}{\frac{2}{\epsilon}} \right) = \frac{1}{(\frac{2}{\epsilon}+1)} \left(\frac{\epsilon}{2} \right) > 0. \end{eqnarray*} This constraint is a lower bound on the objective value of the primal and this implies that $OV(b + \frac{1}{N} d) \ge \frac{1}{(\frac{2}{\epsilon}+1)} \left(\frac{\epsilon}{2} \right) > 0$. This establishes the claim. \quad $\dagger$ To show \eqref{eq:dual-pricing} does not hold, observe $d$ has finite support so the limit functional evaluates $d$ to zero. That is, $\psi_{0 \oplus 1}(d) = 0$. This implies that for all sufficiently small $\epsilon$, \begin{align*} OV(b) + \epsilon\psi_{0 \oplus 1}(d) = 0 < OV(b + \epsilon d), \end{align*} where the inequality follows by Claim 2. Hence, there does not exist an $\hat \epsilon > 0$ such that \eqref{eq:dual-pricing} holds for $\psi^* = \psi_{0 \oplus 1}$ and $d = (0,0,0,1,0,\dots)$. This implies that (DP) fails. \quad $\triangleleft$ \end{example} \subsection{Dual pricing in extended constraint spaces} The fact that (DP) fails for this example is intuitive.
The structure of the primal is such that the only optimal dual solution is the limit functional. However, the value of the limit functional is unchanged by perturbations to a finite number of constraints. Since the primal optimal value changes under finite support perturbations, this implies that the limit functional cannot correctly ``price'' finite support perturbations. Despite the existence of many sufficient conditions for (SD) in the literature, to our knowledge sufficient conditions to ensure (DP) for semi-infinite programming have only recently been considered for the finite support dual \eqref{eq:FDSILP} (see Goberna and L\'{o}pez \cite{goberna2014post} for a summary of these results). We contrast our results with those in Goberna and L\'{o}pez \cite{goberna2014post} following the proof of Theorem~\ref{theorem:sufficient-conditions-dual-pricing-alt}. Our sufficient conditions for (DP), based on the output \eqref{eq:J_system} of the Fourier-Motzkin elimination procedure, are \begin{enumerate}[DP.1] \item \label{item:dp-condition-sup-S-vertical} If $I_3 \neq \emptyset$ and $\mathcal H_S := \{\{h_{m}\}_{m \in \bb N} \subseteq I_{3} : \limsup \{\tilde{b} (h_{m})\}_{m \in \bb N} < S(b) \}$ then \begin{align*} \sup \{ \limsup \{\tilde{b} (h_{m})\}_{m \in \bb N} : \{h_{m}\}_{m \in \bb N} \in \mathcal H_S \} < S(b). \end{align*} \item \label{item:dp-condition-sup-L-vertical} If $I_4 \neq \emptyset$ and \begin{align*} \mathcal H_L :=\{ \{h_m\}_{m \in \bb N} \subseteq I_4 : \limsup \{\tilde{b} (h_{m})\}_{m \in \bb N} < L(b) \text{ and } \lim_{m\to \infty} \sum_{k=\ell}^n |\tilde a^k(h_m)| = 0\} \end{align*} then \begin{align*} \sup \{ \limsup \{\tilde{b} (h_{m})\}_{m \in \bb N} : \{h_{m}\}_{m \in \bb N} \in \mathcal H_L \} < L(b).
\end{align*} \end{enumerate} By Lemmas~\ref{lem:seq-L} and~\ref{lem:seq-S}, subsequences $\{\tilde{b}(h)\}$ with the indices $h$ in $I_{3}$ or $I_{4}$ are bounded above by $S(b)$ and $L(b),$ respectively, and in the case of $L(b)$, $\tilde a^k(h) \to 0$ for all $k = 1, \dots, n$. Conditions DP.\ref{item:dp-condition-sup-S-vertical}-DP.\ref{item:dp-condition-sup-L-vertical} require that limit values of these subsequences that do not achieve $S(b)$ or $L(b)$ (depending on whether the sequence is in $I_3$ or $I_4$, respectively) do not become arbitrarily close to $S(b)$ or $L(b)$. \begin{remark}\label{remark:thoughts-on-DP-1-DP-2} In the case of Condition DP.\ref{item:dp-condition-sup-S-vertical}, given $h \in I_3$ we may take $h_m = h$ for all $m \in \bb N$ and then $\limsup \{\tilde{b} (h_{m})\}_{m \in \bb N} = \tilde{b} (h).$ Then Condition DP.\ref{item:dp-condition-sup-S-vertical} becomes $\sup \{ \tilde{b}(h) \, : \, h \in I_{3} \text{ and } \tilde{b}(h) < S(b) \} < S(b)$ when $I_3 \neq \emptyset$. This condition can only hold if the supremum of the $\tilde{b}(h)$ is achieved over $I_{3}.$ A similar conclusion does not hold for DP.\ref{item:dp-condition-sup-L-vertical}. In this case $\{ h_{m} \}_{m \in \bb N}$ cannot be a sequence of identical indices if $\lim_{m\to \infty} \sum_{k=\ell}^n |\tilde a^k(h_m)| = 0$ since $\sum_{k=\ell}^{n} |\tilde{a}^k(h_{m})| \neq 0$ for all $h_{m} \in I_{4}.$ \end{remark} The proof of the theorem uses three technical lemmas (Lemmas~\ref{lem:conv-comb}--\ref{lemma:s-vertical-pricing}) found in the appendix. \begin{theorem}\label{theorem:sufficient-conditions-dual-pricing-alt} Consider an instance of \eqref{eq:SILP} that is feasible and bounded for right-hand-side $b.$ Suppose the constraint space $Y$ for~\eqref{eq:SILP} is such that $\overline{FM}(\hat{Y})\subseteq \ell_\infty(H)$ and Conditions DP.\ref{item:dp-condition-sup-S-vertical} and DP.\ref{item:dp-condition-sup-L-vertical} hold. Then property (DP) holds for \eqref{eq:SILP}.
\end{theorem} \begin{proof} Assume $d \in \hat{Y}$ is a perturbation vector such that $b + d$ is feasible. We show there exists an optimal dual solution $\psi^{*}$ to \eqref{eq:DSILPprime} and an $\hat \epsilon > 0$ such that \begin{align*} OV(b + \epsilon d) = \psi^{*}(b + \epsilon d) = OV(b) + \epsilon \psi^{*}(d) \end{align*} for all $\epsilon \in [0, \hat \epsilon].$ There are several cases to consider. \noindent \underline{\em Case 1: $L(b) > S(b)$.} By hypothesis $\overline{FM}(d) = \tilde{d} \in \ell_{\infty}(H)$ and this implies $\sup_{h \in I_{3}} | \tilde{d}(h) | < \infty.$ Thus, $S(d) < \infty$. Then $L(b) > S(b)$ implies there exists an $\epsilon_{1} > 0$ such that $L(b) > S(b) + \epsilon S(d)$ for all $\epsilon \in [0, \epsilon_{1}]$. However, by Lemma~\ref{lem:convex-L}, $S(y)$ is a sublinear function of $y$ so $S(b) + \epsilon S(d) \ge S(b + \epsilon d).$ Define $\beta := \min_{\epsilon\in[0,\epsilon_1]}\left(L(b) - S(b+ \epsilon d)\right)\geq \min_{\epsilon\in[0,\epsilon_1]} \left(L(b) - S(b) - \epsilon S(d)\right)$. Since the function $L(b) - S(b) - \epsilon S(d)$ is affine in $\epsilon$ and strictly positive at the end points of $[0,\epsilon_1]$, this implies $\beta > 0$. Again, $\tilde{d} \in \ell_{\infty}(H)$ implies the existence of $\epsilon_{2} > 0$ such that $ \epsilon_{2} \sup_{h \in I_{4}} | \tilde{d}(h) | < \beta/2.$ Let $\epsilon_{3} = \min\{\epsilon_{1}, \epsilon_{2} \}.$ Then for all $\epsilon \in [0, \epsilon_{3}]$ \begin{eqnarray*} \begin{array}{rcl} L(b + \epsilon d) & = &\lim_{\delta\to\infty}\sup \{\tilde{b}(h) + \epsilon \tilde d(h)- \delta \sum_{k=\ell}^{n} |\tilde{a}^k(h)| \, : \, h \in I_4 \} \\ &\geq &\lim_{\delta\to\infty}\sup \{\tilde{b}(h) - \frac\beta2 - \delta \sum_{k=\ell}^{n} |\tilde{a}^k(h)| \, : \, h \in I_4 \} \\ & = & \lim_{\delta\to\infty}\sup \{\tilde{b}(h) - \delta \sum_{k=\ell}^{n} |\tilde{a}^k(h)| \, : \, h \in I_4 \} - \frac\beta2 \\ & = & L(b) -\frac\beta2 \\ &>& S(b + \epsilon d).
\end{array} \end{eqnarray*} A similar argument gives $L(b + \epsilon d) < L(b) + \frac\beta2$ so $L(b + \epsilon d) < \infty.$ By hypothesis \eqref{eq:SILP} is feasible so by Theorem~\ref{theorem:fm-primal-value}, $OV(b) = \max\{S(b), L(b)\}.$ Then $L(b) > S(b)$ implies $L(b) > -\infty.$ Thus $-\infty < L(b), L(b + \epsilon_{3} d) < \infty.$ Therefore the hypotheses of Lemma~\ref{lemma:L-vertical-pricing} hold. Now apply Lemma~\ref{lemma:L-vertical-pricing} and observe there is an $\hat{\epsilon},$ which we can take to be less than $\epsilon_{3},$ and a sequence $\{h_m\}_{m\in \bb N} \subseteq I_4$ such that for all $\epsilon \in [0,\hat \epsilon]$ \begin{align*} \tilde b(h_m) \to L(b), \tilde d_{\epsilon} (h_m) \to L(b + \epsilon d), \text{ and } \sum_{k=\ell}^{n} |\tilde{a}^k(h_m)| \to 0 \end{align*} where $\tilde d_{\epsilon} = \overline{FM}(b + \epsilon d)$. We have also shown for all $\epsilon \in [0, \epsilon_{3}],$ $L(b + \epsilon d) > S(b + \epsilon d).$ Then by Theorem~\ref{theorem:fm-primal-value} $OV(b + \epsilon d) = L(b + \epsilon d).$ Using the sequence $\{h_m\}_{m\in \bb N} \subseteq I_4$ define the linear functional $\lambda$ as in~\eqref{eq:projected-system-linear-functional}. Then extend this linear functional as in Theorem~\ref{theorem:extend-SD-Y} and use the adjoint of the $\overline{FM}$ operator to get the linear functional $\psi^{*}$ with the property that $OV(b + \epsilon d) = \psi^{*}(b + \epsilon d)$ for all $\epsilon \in [0, \hat{\epsilon}].$ \vskip 5pt \noindent \underline{\em Case 2: $S(b) > L(b)$.} This case follows the same proof technique as in the $L(b) > S(b)$ case but invokes Lemma~\ref{lemma:s-vertical-pricing} instead of Lemma~\ref{lemma:L-vertical-pricing}.
\vskip 5pt \noindent \underline{\em Case 3: $S(b) = L(b)$.} By Lemma~\ref{lemma:L-vertical-pricing} there exists $\hat \epsilon_{L} > 0$ and a sequence $\{h_m\}_{m\in \bb N} \subseteq I_4$ such that for all $\epsilon \in [0,\hat \epsilon_{L}]$ \begin{align*} \tilde d_{\epsilon} (h_m) \to L(b + \epsilon d), \text{ and } \sum_{k=\ell}^{n} |\tilde{a}^k(h_m)| \to 0 \end{align*} where $\tilde d_{\epsilon} = \overline{FM}(b + \epsilon d)$. Likewise by Lemma~\ref{lemma:s-vertical-pricing} there exists $\hat \epsilon_{S} > 0$ and a sequence $\{g_m\}_{m\in \bb N} \subseteq I_3$ such that for all $\epsilon \in [0,\hat \epsilon_{S}]$ \begin{align*} \tilde d_{\epsilon} (g_m) \to S(b + \epsilon d)\end{align*} where $\tilde d_{\epsilon} = \overline{FM}(b + \epsilon d)$. Now let $\hat \epsilon = \min \{ \hat \epsilon_{L}, \hat \epsilon_{S} \}.$ By Lemma~\ref{lem:conv-comb}, for all $\epsilon \in (0, \hat \epsilon]$, $S(b+\epsilon d)$ and $L(b + \epsilon d)$ are the same convex combinations of $S(b), S(b + \hat \epsilon d)$ and $L(b), L(b + \hat \epsilon d)$ respectively. There are now three possibilities. First, if $S(b + \hat \epsilon d) = L(b + \hat \epsilon d)$ then $S(b + \epsilon d) = L(b + \epsilon d)$ for all $\epsilon \in (0, \hat \epsilon]$ and we have alternative optimal dual linear functionals generated from the $\{g_{m}\}$ and $\{h_{m}\}$ sequences. Second, if $S(b + \hat \epsilon d) > L(b + \hat \epsilon d)$ then $S(b + \epsilon d) > L(b + \epsilon d)$ for all $\epsilon \in (0, \hat \epsilon]$ and the dual linear functional generated from the $\{g_{m}\}$ sequence will satisfy the dual pricing property. Third, if $S(b + \hat \epsilon d) < L(b + \hat \epsilon d)$ then $S(b + \epsilon d) < L(b + \epsilon d)$ for all $\epsilon \in (0, \hat \epsilon]$ and the dual linear functional generated from the $\{h_{m}\}$ sequence will satisfy the dual pricing property. 
\end{proof} The following two examples illustrate that neither Condition DP.\ref{item:dp-condition-sup-S-vertical} nor Condition DP.\ref{item:dp-condition-sup-L-vertical} is redundant. \begin{example}[Example~\ref{example:karney-modified}, continued]\label{ex:karney-modified-continued} Example~\ref{example:karney-modified} did not have the (DP) property. Recall for this example that $OV(b) = S(b) = 0.$ Consider the projected system \eqref{eq:karney-modified-projected}. Condition DP.\ref{item:dp-condition-sup-L-vertical} is satisfied vacuously since $I_4 = \emptyset$. However, Condition DP.\ref{item:dp-condition-sup-S-vertical} does not hold because $-1/(i(1+i)) < 0 = S(b),$ for $i = 5, 6, \dots,$ but the supremum over all $i$ is zero. That is, $\sup \{ \tilde{b}(h) \, : \, h \in I_{3} \text{ and } \tilde{b}(h) < 0 \} = 0 = S(b).$ See the comments in Remark~\ref{remark:thoughts-on-DP-1-DP-2}. \quad $\triangleleft$ \end{example} \begin{example}\label{example:sufficient-conditions-vertical-dual-pricing} Consider the following instance of \eqref{eq:SILP} \begin{eqnarray*} \inf x_{1} && \\ x_{1} + \frac{1}{m + n}x_{2} &\ge& -\frac{1}{n^{2}}, \quad (m, n) \in I \end{eqnarray*} whose constraints are indexed by $I = \bb N \times \bb N.$ Putting the problem into standard form gives \begin{eqnarray*} \inf z && \\ z - x_{1} &\ge& 0\\ x_{1} + \frac{1}{m + n}x_{2} &\ge& -\frac{1}{n^{2}}, \quad (m, n) \in I. \end{eqnarray*} Apply Fourier-Motzkin elimination, observe $H = I_{4} = I,$ and obtain \begin{eqnarray*} \inf z && \\ z+ \frac{1}{m + n}x_{2} &\ge& -\frac{1}{n^{2}}, \quad (m, n) \in I_{4}. \end{eqnarray*} In this case $I_{3} = \emptyset$ so DP.\ref{item:dp-condition-sup-S-vertical} holds vacuously. We show that DP.\ref{item:dp-condition-sup-L-vertical} fails to hold for this example and that property (DP) does not hold.
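To build intuition for the claims that follow, here is a small numerical sketch (an informal finite truncation with ad hoc grid sizes, so merely suggestive). For the perturbation $d(m,n) = 1/n$ considered below, the column term $\frac{1}{m+n}$ vanishes along $m$ for fixed $n$, so the relevant limit value is governed by $\sup_n \left(-\frac{1}{n^2} + \frac{\epsilon}{n}\right) = \frac{\epsilon^2}{4}$, attained near $n = 2/\epsilon$:

```python
def sup_perturbed(eps, N=10**4):
    # finite truncation of sup over n of the perturbed right-hand sides
    # -1/n^2 + eps*(1/n); the column term 1/(m+n) vanishes as m -> infinity
    return max(-1.0 / n**2 + eps / n for n in range(1, N + 1))

for eps in (0.0, 0.5, 0.2, 0.1):
    # supremum is eps^2/4 (0 when eps = 0), achieved at n = 2/eps
    assert abs(sup_perturbed(eps) - eps**2 / 4) < 1e-6
```

The values of `eps` are chosen so that $2/\epsilon$ is an integer, making the truncated supremum exact up to the grid cutoff.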
In our notation, for an arbitrary but fixed $\bar n \in \bb N,$ there are subsequences \begin{eqnarray*} \{\tilde{b}(m, \bar n)\}_{m \in \bb N} = \{-\frac{1}{\bar{n}^{2}} \}_{m \in \bb N}\to -\frac{1}{\bar{n}^{2}}, \qquad \{\tilde{a}(m,\bar n)\}_{m \in \bb N} = \{\frac{1}{m + \bar n}\}_{m \in \bb N} \to 0. \end{eqnarray*} Likewise, for an arbitrary but fixed $\bar m \in \bb N,$ there are subsequences \begin{eqnarray*} \{\tilde{b}(\bar m, n)\}_{n \in \bb N} = \{-\frac{1}{n^{2}}\}_{n \in \bb N} \to 0, \qquad \{\tilde{a}(\bar m,n)\}_{n \in \bb N} = \{\frac{1}{\bar m + n}\}_{n \in \bb N} \to 0. \end{eqnarray*} {\bf Claim 1:} An optimal primal solution is $x_{1} = x_{2} = 0$ with optimal value $z = 0.$ Clearly $x_{1} = x_{2} = 0$ is a primal feasible solution with objective function value 0 since the right-hand-side vector is negative. Now we argue that the optimal objective value cannot be negative. In this example only $I_{4}$ is nonempty so $S(b) = -\infty$ and it suffices to show $L(b) = 0.$ In this example, \begin{eqnarray*} \omega(\delta, b) = \sup \left\{ -\frac{1}{n^{2}} -\frac{\delta}{m + n} \, : \, (m, n) \in I_{4} = \bb N \times \bb N \right\} \le 0. \end{eqnarray*} Along any sequence in $I_{4}$ with $m + n \to \infty$ we have $-\frac{\delta}{m + n} \to 0,$ and along the sequences $\{(\bar m, n)\}_{n \in \bb N}$ we also have $-\frac{1}{n^{2}} \to 0,$ so by Lemma~\ref{lem:seq-L} we have $L(b) = 0.$ $\dagger$ \vskip 7pt We consider perturbation vector $d(m,n) = \tilde{d}(m,n) = \frac{1}{n}$ for all $(m, n) \in I_{4}.$ \vskip 7pt {\bf Claim 2:} For all $n \in \bb N$, $L(b + \frac{2}{n} d) = \frac{(2/n)^{2}}{4} = \frac{1}{n^2}.$ For a fixed $\hat n \in \bb N,$ consider the subsequence $\{ (m, \hat{n}) \}_{m \in \bb N}$ of $I_4$ where \begin{eqnarray*} \{\tilde{b}(m, \hat n) + \frac{2}{\hat n} \tilde{d}(m, \hat n) \}_{m \in \bb N} = \{ -\frac{1}{\hat{n}^{2}} + \frac{2}{\hat n} \frac{1}{\hat n} \}_{m \in \bb N} = \{ \frac{1}{\hat n^2} \}_{m \in \bb N}.
\end{eqnarray*} Since $(m, \hat n) \in I_{4}$ for all $m \in \bb N$ and $\frac{1}{m + \hat n} \to 0$ as $m \to \infty$, by Lemma~\ref{lem:seq-L}, $L(b + \frac{2}{\hat n} d) \ge \frac{1}{\hat n^2}.$ We now show this is an equality by showing it is the best possible limit value of any sequence. The maximum value of $\{\tilde{b}(m, n) + \frac{2}{\hat n} \tilde{d}(m, n) \}_{(m,n) \in \bb N \times \bb N}$ is given by \begin{eqnarray*} \max_{n} \left(-\frac{1}{n^{2}} + \frac{2}{\hat n n} \right), \end{eqnarray*} which, using simple calculus, is achieved for $n = \hat n$. This shows that $\tilde{b}(m, n) + \frac{2}{\hat n} \tilde{d}(m, n) \le \frac{1}{\hat n^2}$ for all $(m,n) \in \bb N \times \bb N$. From Lemma~\ref{lem:seq-L}, $L(b + \frac{2}{\hat n} d)$ is the limit of some subsequence of elements in $\{\tilde{b}(m, n) + \frac{2}{\hat n} \tilde{d}(m, n) \}_{(m,n) \in \bb N \times \bb N}$. Since each element is at most $\frac{1}{\hat n^2}$, $L(b + \frac{2}{\hat n} d) \le \frac{1}{\hat n^2}$. This implies that $L(b + \frac{2}{\hat n} d) = \frac{1}{\hat n^2}.$ \vskip 7pt {\bf Claim 3:} For this perturbation vector $d,$ there is no dual solution $\psi$ and an $\hat \epsilon > 0$ such that \begin{eqnarray*} OV(b + \epsilon d) = L( b + \epsilon d) = \psi(b + \epsilon d) \end{eqnarray*} for all $\epsilon \in [0, \hat \epsilon].$ Assume such a $\psi$ and $\hat \epsilon > 0 $ exist. Consider any $\hat n$ such that $\frac{2}{\hat n} < \hat \epsilon$. By Claim 2, $L(b + \frac{2}{\hat n} d) = \frac{1}{\hat n^2}$, but by the linearity of $\psi$, $\psi(b + \frac{2}{\hat n} d) = \psi(b) + \frac{2}{\hat n} \psi(d)$. Then $L(b + \frac{2}{\hat n} d) = \psi(b + \frac{2}{\hat n} d)$ implies $\frac{1}{\hat n^2} = \psi(b) + \frac{2}{\hat n} \psi(d)$ for all $\hat n$ such that $\frac{2}{\hat n} < \hat \epsilon$. By Claim 1, $L(b) = 0$ so $\psi(b) = 0.$ Then $\frac{1}{\hat n} = 2\psi(d)$ for all $\hat n$ such that $\frac{2}{\hat n} < \hat \epsilon$.
However $\psi(d)$ is a fixed number and cannot vary with $\hat{n}.$ This is a contradiction and (DP) fails. \quad $\triangleleft$ \end{example} In \cite{goberna2007sensitivity}, Goberna et al. give sufficient conditions for a dual pricing property. They use the notation \begin{eqnarray*} T(x) := \{ i \in I \, : \, \sum_{k=1}^{n} a^{k}(i) x_{k} = b(i) \} \\ A(x) := \cone\{ (a^{1}(i), \ldots, a^{n}(i)) \, : \, i \in T(x) \}. \end{eqnarray*} Their main results for right-hand-side sensitivity analysis appear as Theorem 4 in~\cite{goberna2007sensitivity} and again as Theorem 4.2.1 in~\cite{goberna2014post}. In this theorem a key hypothesis (hypothesis (i.a) in the statement of Theorem 4 in~\cite{goberna2007sensitivity}) is that $c \in A(x^{*})$ where $x^{*}$ is a feasible solution to~\eqref{eq:SILP}. We show in Theorem~\ref{theorem:goberna-sensitivity} below that in our terminology (i.a) implies $S(b) \ge L(b)$ and both primal and dual solvability. \begin{theorem}\label{theorem:goberna-sensitivity} If~\eqref{eq:SILP} has a feasible solution $x^{*}$ and $c \in A(x^{*})$ then: (i) $S(b) \ge L(b),$ (ii) $S(b) = \sup_{h \in I_{3}}\{ \tilde{b}(h) \}$ is realized, and (iii) $x^{*}$ is an optimal primal solution.
\end{theorem} \begin{proof} If $c \in A(x^{*})$ then there exists $\bar{v} \ge 0$ with finite support contained in $T(x^{*})$ such that $\sum_{i \in I} \bar{v}(i) a^{k}(i) = c_{k}$ for $k = 1, \ldots, n.$ By hypothesis, $x^{*}$ is a feasible solution to~\eqref{eq:SILP} and it follows from Theorem 6 in Basu et al.~\cite{basu2013projection} that $\tilde{b}(h) \le 0$ for all $h \in I_{1}.$ Then by Lemma 5 in the same paper there exists $\bar{h} \in I_{3}$ such that $\tilde{b}(\bar{h}) \ge \sum_{i \in I} \bar{v}(i) b(i).$ More importantly, the support of $\bar{h}$ is a subset of the support of $\bar{v}.$ Then the support of $\bar{h}$ is contained in $T(x^{*})$ since $\bar{v}(i) > 0$ implies $i \in T(x^{*}).$ Then for this $\bar{h},$ $v^{\bar{h}}(i) > 0$ for only those $i \in I$ for which constraint $i$ is tight. Then we aggregate the tight constraints in~\eqref{eq:initial-system-obj-con}-\eqref{eq:initial-system-con} associated with the support of $\bar{h}$ and observe \begin{eqnarray}\label{eq:goberna-sensitivity} z = \sum_{k=1}^{n} c_{k} x_{k}^{*} = \sum_{i \in I} v^{\bar{h}}(i) b(i) = \tilde{b}(\bar{h}). \end{eqnarray} It follows from~\eqref{eq:goberna-sensitivity} that $x^{*}$ is an optimal primal solution and $v^{\bar{h}}$ is an optimal dual solution, and (i)-(iii) follow. \end{proof} The following example satisfies (DP) but (iii) of Theorem~\ref{theorem:goberna-sensitivity} fails to hold since the primal is not solvable. \begin{example}[Example 3.5 in \cite{basu2014sufficiency}]\label{ex:drop-primal-solvability} Consider the \eqref{eq:SILP} \begin{align}\label{eq:not-primal-optimal} \begin{array}{rcl} \inf x_{1} \phantom{+ \tfrac{1}{i^{2}}x_{2} \ } && \\ \phantom{\inf } x_{1} + \tfrac{1}{i^{2}}x_{2} &\ge& \tfrac{2}{i}, \quad i \in \bb N. \end{array} \end{align} with constraint space taken to be $\ell_\infty$.
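As a quick numerical sanity check (ours, not part of the original example), one can see why the infimum of~\eqref{eq:not-primal-optimal} is $0$ yet never attained: for any fixed $x_{2}$, the smallest feasible $x_{1}$ is strictly positive, approaching $0$ only as $x_{2} \to \infty$.

```python
# Illustrative sketch (ours): for fixed x2, the smallest feasible x1 under
# the constraints x1 + x2/i^2 >= 2/i (i in N) is sup_i (2/i - x2/i^2),
# which equals 1/x2 for integer x2 -- positive, but shrinking to 0.
def min_feasible_x1(x2, imax=10**5):
    # truncate the supremum over i in N at imax (the tail is negative anyway)
    return max(2.0 / i - x2 / i**2 for i in range(1, imax + 1))

for x2 in [10, 100, 1000]:
    print(x2, min_feasible_x1(x2))  # values ~ 1/x2: positive, tending to 0
```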
We apply the Fourier-Motzkin elimination procedure by putting \eqref{eq:not-primal-optimal} into standard form to yield \begin{align*} \begin{array}{rcl} z - x_{1} \phantom{+ \tfrac{1}{i^{2}}x_{2} \ } &\ge & 0 \\ \phantom{z - } x_{1} + \tfrac{1}{i^{2}}x_{2} &\ge& \tfrac{2}{i}, \quad i \in \bb N . \end{array} \end{align*} Eliminating $x_1$ gives the projected system: \begin{align*} z + \tfrac{1}{i^{2}}x_{2} \ge \tfrac{2}{i}, \quad i \in \bb N. \end{align*} Observe that $H = \bb N = I_4$ and $x_2$ cannot be eliminated. Since $I_3 = \emptyset$, $S(b) = -\infty$ and so by Theorem~\ref{theorem:fm-primal-value} the optimal value of \eqref{eq:not-primal-optimal} is $L(b)$. Recall that $L(b) = \lim_{\delta \to \infty} \omega(\delta, b)$ where $\omega(\delta, b) = \sup_{i \in \bb N} \left\{ \tfrac{2}{i} - \tfrac{1}{i^{2}} \delta \right\} \leq \frac{1}{\delta}$, the inequality having been shown in~\cite{basu2014sufficiency}. Also, for a fixed $\delta \geq 0$, $\sup_{i \in \bb N} \left\{ \tfrac{2}{i} - \tfrac{1}{i^{2}} \delta \right\} \geq 0$ and so $\omega(\delta, b) \geq 0$ for all $\delta\geq 0$. Hence, $0\leq L(b) = \lim_{\delta\to\infty} \omega(\delta, b) \leq \lim_{\delta \to \infty} \tfrac{1}{\delta} = 0$. This implies $L(b) = 0$. However, the optimal objective value is never attained, since there is no feasible solution with $x_1 = 0.$ Next we show that (DP) holds. DP.\ref{item:dp-condition-sup-S-vertical} holds vacuously since $I_3 = \emptyset$. Also, DP.\ref{item:dp-condition-sup-L-vertical} holds vacuously because $L(b) = 0$ and $\tilde{b}(h) > 0$ for all $h \in I_{4}.$ Observe also that the $FM$ linear operator maps $\ell_{\infty}(\{ 0 \} \cup \bb N )$ into $\ell_{\infty}(\bb N)$.
To see that this is the case observe that all of the multiplier vectors have exactly two nonzero components and both components are $+1.$ Thus, applying the $FM$ operator to any vector in $\ell_{\infty}(\{ 0 \} \cup \bb N)$ produces another vector in $\ell_{\infty}(\bb N)$ since adding any two bounded components produces bounded components. Hence we can apply Theorem~\ref{theorem:sufficient-conditions-dual-pricing-alt} to conclude \eqref{eq:not-primal-optimal} satisfies (DP). \quad $\triangleleft$ \end{example} \section{Conclusion} This paper explores important duality properties of semi-infinite linear programs over a spectrum of constraint and dual spaces. Our flexibility in the choice of constraint space provides insight into how properties of a problem can change when considering different spaces of perturbations. In particular, we show that \emph{every} SILP satisfies (SD) and (DP) in a very restricted constraint space $U$ and provide sufficient conditions for when (SD) and (DP) hold in larger spaces. The ability to perform sensitivity analysis is critical for any practical implementation of a semi-infinite linear program because of the uncertainty in data in real life problems. However, there is another common use of (DP). In finite linear programming optimal dual solutions correspond to ``shadow prices'' with economic meaning regarding the marginal value of each individual resource. These marginal values can help govern investment and planning decisions. The use of dual solutions as shadow prices poses difficulties in the case of semi-infinite programming. Indeed, it is not difficult to show Example~\ref{ex:drop-primal-solvability} has a unique optimal dual solution over the constraint space $\mathfrak c$ -- namely, the limit functional $\psi_{0 \oplus 1}$ (the argument for why this is the case is similar to that of Example~\ref{example:karney-modified} and thus omitted).
Since (DP) holds in Example~\ref{ex:drop-primal-solvability} this means there is an optimal dual solution that satisfies \eqref{eq:dual-pricing} for every feasible perturbation. This is a desirable result. However, interpreting the limit functional as assigning a ``shadow price'' in the standard way is problematic. Under the limit functional the marginal value for each individual resource (and indeed any finite bundle of resources) is zero, but infinite bundles of resources may have positive marginal value. This makes it difficult to interpret this dual solution as assigning economically meaningful shadow prices to individual constraints. In a future work we aim to uncover the mechanism by which such undesirable dual solutions arise and explore ways to avoid such complications. This direction draws inspiration from earlier work by Ponstein \cite{ponstein1981use} on countably infinite linear programs.
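As a closing numerical illustration (ours), the failure of (DP) in the earlier example (Claims 1-3) can be seen directly: the perturbed value $L(b + \epsilon d) = \epsilon^{2}/4$ is quadratic in $\epsilon$, so no single linear functional $\psi$ can reproduce it on any interval $[0, \hat\epsilon]$.

```python
# Illustrative sketch (ours): with b~(m,n) = -1/n^2 and perturbation
# d~(m,n) = 1/n, sup over n of (-1/n^2 + eps/n) equals eps^2/4
# (attained at n = 2/eps), i.e. the value responds quadratically to eps.
def perturbed_value(eps, nmax=10**4):
    # truncate the supremum over n in N at nmax (the tail tends to 0)
    return max(-1.0 / n**2 + eps / n for n in range(1, nmax + 1))

for nhat in [2, 5, 10]:
    eps = 2.0 / nhat
    assert abs(perturbed_value(eps) - eps**2 / 4) < 1e-9  # = 1/nhat^2
```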
\section{Introduction} \input{sections/01_intro.tex} \section{Fake News in a Neural and Adversarial Setting} \input{sections/02_overview.tex} \section{{\textsc{Grover}}: Modeling Conditional Generation of Neural Fake News} \input{sections/03_genmodel.tex} \section{Humans are Easily Fooled by {\textsc{Grover}}-written Propaganda} \label{sec:genexps} \input{sections/04_genexps.tex} \section{Neural Fake News Detection} \input{sections/05_detection.tex} \section{How does a model distinguish between human and machine text?} \input{sections/06_analysis.tex} \section{Conclusion: a Release Strategy for {\textsc{Grover}}} \input{sections/08_conclusion.tex} \vspace*{-2mm} \section*{Acknowledgments} \vspace*{-2mm} We thank the anonymous reviewers, as well as Dan Weld, for their helpful feedback. Thanks also to Zak Stone and the Google Cloud TPU team for help with the computing infrastructure. This work was supported by the National Science Foundation through a Graduate Research Fellowship (DGE-1256082) and NSF grants (IIS-1524371, 1637479, 165205, 1703166), the DARPA CwC program through ARO (W911NF-15-1-0543), the Sloan Research Foundation through a Sloan Fellowship, the Allen Institute for Artificial Intelligence, the NVIDIA Artificial Intelligence Lab, Samsung through a Samsung AI research grant, and gifts by Google and Facebook. Computations on beaker.org were supported in part by credits from Google Cloud. \bibliographystyle{plainnat} \subsection{Language Modeling results: measuring the importance of data, context, and size} We validate {\textsc{Grover}}, versus standard unconditional language models, on the April 2019 test set. We consider two evaluation modes: \emph{unconditional}, where no context is provided and the model must generate the article \hlc[body]{body}; and \emph{conditional}, in which the full metadata is provided as context. In both cases, we calculate the perplexity only over the article \hlc[body]{body}. 
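Concretely, body-only perplexity as just described can be computed from the model's per-token log-probabilities over the article body, excluding any metadata context; a minimal sketch (ours, not the paper's evaluation code):

```python
import math

# Illustrative sketch (ours): perplexity over body tokens only, given
# per-token log-probabilities from the model; metadata tokens that serve
# as conditioning context are excluded from the average.
def body_perplexity(body_token_logprobs):
    avg_nll = -sum(body_token_logprobs) / len(body_token_logprobs)
    return math.exp(avg_nll)
```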
Our results, shown in Figure~\ref{fig:ppl}, support several conclusions. First, {\textsc{Grover}}~noticeably improves (by between .6 and .9 perplexity points) when conditioned on metadata. Second, perplexity decreases with size, with {\textsc{Grover}}-Mega obtaining 8.7 perplexity in the conditional setting. Third, the data distribution is still important: though the GPT2 models with 124M parameters and 355M parameters respectively match our {\textsc{Grover}}-Base and {\textsc{Grover}}-Large architectures, our model is over 5 perplexity points lower in both cases, possibly because the OpenAI WebText corpus also contains non-news articles. \subsection{Carefully restricting the variance of generations with Nucleus Sampling} Sampling from {\textsc{Grover}}~is straightforward as it behaves like a left-to-right language model during decoding. However, the choice of decoding algorithm is important. While likelihood-maximization strategies such as beam search work well for \emph{closed-ended} generation tasks where the output contains the same information as the context (like machine translation), these approaches have been shown to produce degenerate text during \emph{open-ended} generation \citep{hashimoto2019unifying, holtzman2019curious}. However, as we will show in Section~\ref{sec:analysis}, restricting the variance of generations is also crucial.
In this paper, we primarily use Nucleus Sampling (top-$p$): for a given threshold $p$, at each timestep we sample from the most probable words whose cumulative probability comprises the top-$p$\% of the entire vocabulary \citep{holtzman2019curious}.\footnote{In early experiments, we found Nucleus Sampling produced better and less-detectable generations than alternatives like top-$k$ sampling, wherein the most probable $k$ tokens are used at each timestep \citep{fan2018hierarchical}.} \subsection{A semi-supervised setting for neural fake news detection} While there are many human-written articles online, most are from the distant past, whereas articles to be detected will likely be set in the present. Likewise, there might be relatively few neural fake news articles from a given adversary.\footnote{Moreover, since disinformation can be shared on a heterogeneous mix of platforms, it might be challenging to pin down a single generated model.} We thus frame neural fake news detection as a semi-supervised problem. A neural verifier (or \emph{discriminator}) has access to many human-written news articles from March 2019 and before -- the entire {\textsc{RealNews}}~training set. However, it has limited access to generations, and more recent news articles. Using 10k news articles from April 2019, we generate article body text; another 10k articles are used as a set of human-written news articles. We split the articles in a balanced way, with 10k for training (5k per label), 2k for validation, and 8k for testing. We consider two evaluation modes. In the \textbf{unpaired} setting, a discriminator is provided single news articles, and must classify each independently as {\tt\small Human} or {\tt\small Machine}. In the \textbf{paired} setting, a model is given two news articles with the same metadata, one real and one machine-generated. The discriminator must assign the machine-written article a higher {\tt\small Machine} probability than the human-written article. 
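The two evaluation modes just described can be sketched as follows (an illustrative sketch; the function names are ours, not the paper's code):

```python
# Illustrative sketch (ours) of the two evaluation modes described above.
# Each score is a discriminator's P(Machine) for one article.

def unpaired_accuracy(scores, labels):
    # classify each article independently: Machine iff P(Machine) > 0.5
    preds = [s > 0.5 for s in scores]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def paired_accuracy(machine_scores, human_scores):
    # a pair is correct iff the machine-written article gets the higher score
    correct = sum(m > h for m, h in zip(machine_scores, human_scores))
    return correct / len(machine_scores)
```

Note that the paired mode only requires correctly ranking the two articles of a pair, with no calibrated decision threshold, while the unpaired mode also depends on where the scores fall relative to a fixed cutoff.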
We evaluate both modes in terms of accuracy. \subsection{Discrimination results: {\textsc{Grover}}~performs best at detecting {\textsc{Grover}}'s fake news} \input{figures/t02_results.tex} We present experimental results in Table~\ref{tab:results} for all generator and discriminator combinations. For each pair, we show the test results using the most adversarial generation hyperparameters (top-$p$) as judged on the validation set.\footnote{For each discriminator/generator pair, we search over $p \in \{.9,.92,.94,.96,.98,1.0\}$.} The results show several trends. First, the paired setting appears much easier than the unpaired setting, suggesting that it is difficult for the model to calibrate its predictions. Second, model size is highly important in the arms race between generators and discriminators. Using {\textsc{Grover}}~to discriminate {\textsc{Grover}}'s generations results in roughly 90\% accuracy across the range of sizes. If a larger generator is used, accuracy slips below 81\%; conversely, if the discriminator is larger, accuracy is above 98\%. Third, other discriminators perform worse than {\textsc{Grover}}~overall, even when controlling for architecture size and (for both BERT models) the domain. That {\textsc{Grover}}~is the best discriminator is possibly surprising: being unidirectional, it is less expressive than deep bidirectional models such as BERT.\footnote{Indeed, bidirectional approaches perform best on leaderboards like GLUE \citep{wang2018glue}.} That the more expressive model here is \textbf{not} the best at discriminating between real and generated news articles suggests that neural fake news discrimination requires having a similar \emph{inductive bias} as the generator.\footnote{This matches findings on the HellaSwag dataset \citep{zellers2018hellaswag}. 
Given human text and machine text written by a finetuned GPT model, a GPT discriminator outperforms BERT-Base at picking out human text.} \subsection{Weak supervision: what happens if we don't have access to {\textsc{Grover}}-Mega?} These results suggest that {\textsc{Grover}}~is an effective discriminator when we have a medium number of fake news examples from the exact adversary that we will encounter at test time. What happens if we relax this assumption? Here, we consider the problem of detecting an adversary who is generating news with {\textsc{Grover}}-Mega and an unknown top-$p$ threshold.\footnote{The top-$p$ threshold used was $p{=}0.96$, but we are not supposed to know this!} In this setup, during training, we have access to a weaker model ({\textsc{Grover}}-Base or {\textsc{Grover}}-Large). We consider the effect of having only $x$ examples from {\textsc{Grover}}-Mega, and sampling the missing $5000{-}x$ articles from one of the weaker models, where the top-p threshold is uniformly chosen for each article in the range of $[0.9, 1.0]$. We show the results of this experiment in Figure~\ref{fig:weaksupervision}. The results suggest that observing additional generations greatly helps discrimination performance when few examples of {\textsc{Grover}}-Mega are available: weak supervision with between 16 and 256 examples from {\textsc{Grover}}-Large yields around 78\% accuracy, while accuracy remains around 50\% without weak supervision. As the portion of examples that come from {\textsc{Grover}}-Mega increases, however, accuracy converges to around 92\%.\footnote{In additional experiments we show that accuracy increases even more -- up to 98\% -- when the number of examples is increased \citep{zellers2019blogpost}. 
We also find that {\textsc{Grover}}~when trained to discriminate between real and fake {\textsc{Grover}}-generated news can detect GPT2-Mega generated news as fake with 96\% accuracy.} \subsection{A Lower-Bound on Discriminator Accuracy} \section{Optimization Hyperparameters} \label{sec:optimizationhyperparameters} For our input representation, we use the same BPE vocabulary as \citep{radford2019gpttwo}. We use Adafactor \citep{shazeer2018adafactor} as our optimizer. Common optimizers such as Adam \citep{Kingma2014AdamAM} tend to work well, but the memory cost scales linearly with the number of parameters, which renders training {\textsc{Grover}}-Mega all but impossible. Adafactor alleviates this problem by factoring the second-order momentum parameters into a tensor product of two vectors. We used a maximum learning rate of 1e-4 with linear warm-up over the first 10,000 iterations, and decay over the remaining iterations. We set Adafactor's $\beta_1=0.999$ and clipped updates for each parameter to a root-mean-squared of at most 1. Last, we applied weight decay with coefficient $0.01$. We used a batch size of 512 on 256 TPU v3 cores, which corresponds to roughly 20 epochs through our news dataset. The total training time was roughly two weeks. \section{Real News and Propaganda Websites} \label{sec:newssites} \newcommand\fnl[1]{{\tt\small \href{https://#1}{#1}}} In our generation experiments (Section~\ref{sec:genexps}), we consider a set of mainstream as well as propaganda websites. We used the following websites as `real news': \fnl{theguardian.com}, \fnl{reuters.com}, \fnl{nytimes.com}, \fnl{theatlantic.com}, \fnl{usatoday.com}, \fnl{huffingtonpost.com}, and \fnl{nbcnews.com}. For propaganda sites, we chose sites that have notably spread misinformation \citep{fakenewslist} or propaganda\footnote{For more information, see the Media Bias Chart at {\tt\scriptsize \href{https://www.adfontesmedia.com/}{adfontesmedia.com/}}.}.
These were \fnl{breitbart.com}, \fnl{infowars.com}, \fnl{wnd.com}, \fnl{bigleaguepolitics.com}, and \fnl{naturalnews.com}. \section{Domain Adaptation of BERT} \label{sec:bertda} BERT \citep{devlin2018bert} is a strong model for most classification tasks. However, care must be taken to format the input in the right way, particularly because BERT is pretrained in a setting where it is given two spans (separated by a special {\small\tt [SEP]} token). We thus use the following input format. The first span consists of the metadata, with each field prefixed by its name in brackets (e.g. `{\small\tt [title]}'). The second span consists of the body. Because the generations are cased (with capital and lowercase letters), we used the `cased' version of BERT. Past work (e.g. \cite{zellers2019vcr, han2019unsupervised}) has found that BERT, like other language models, benefits greatly from domain adaptation. We thus perform domain adaptation on BERT, adapting it to the news domain, by training it on {\textsc{RealNews}}~for 50k iterations at a batch size of 256. Additionally, BERT was trained with a sequence length of at most 512 WordPiece tokens, but generations from {\textsc{Grover}}~are much longer (1024 BPE tokens). Thus, we initialized new position embeddings for positions 513-1024, and performed domain adaptation at a length of 1024 WordPiece tokens. \section{Hyperparameters for the Discriminators} \label{sec:dischyperparams} For our discrimination experiments, we limited the lengths of generations (and human-written articles) to 1024 BPE tokens. This was needed because our discriminators only handle documents up to 1024 words. However, we also found that the longer length empirically made discrimination easier for models (see Section~\ref{sec:analysis}). For our discrimination experiments, we used different hyperparameters depending on the model, after an initial grid search.
For BERT, we used the Adam \citep{Kingma2014AdamAM} optimizer with a learning rate of $2e-5$ and a batch size of 64. We trained BERT models for 5 epochs, with a linear warm-up of the learning rate over the initial 20\% iterations. For GPT2 and {\textsc{Grover}}, we used the Adafactor optimizer \citep{shazeer2018adafactor} with a learning rate of $2e-5$ for all models, and a batch size of 64. We applied an auxiliary language modeling loss for these models with a coefficient of $0.5$. These models were trained for 10 epochs, with a linear warm-up over the initial 20\% iterations. \section{Human Evaluation Prompt} \label{sec:humanevalprompts} \subsection{Evaluating Quality} For evaluating the quality of {\textsc{Grover}}-written versus human-written news articles, we asked workers the following questions (shown exactly). The answer choices are shown next to the rating under our 1-3 Likert scale (3 being the best, 1 being the worst for each attribute). \begin{enumerate}[label=(\alph*)] \item (Style) Is the style of this article consistent? \begin{enumerate} \item[3.] \textbf{Yes}, this sounds like an article I would find at an online news source. \item[2.] \textbf{Sort of}, but there are certain sentences that are awkward or strange. \item[1.] \textbf{No}, it reads like it's written by a madman. \end{enumerate} \item (Content) Does the content of this article make sense? \begin{enumerate} \item[3.] \textbf{Yes}, this article reads coherently. \item[2.] \textbf{Sort of}, but I don't understand what the author means in certain places. \item[1.] \textbf{No}, I have no (or almost no) idea what the author is trying to say. \end{enumerate} \item (Overall) Does the article read like it comes from a trustworthy source? \begin{enumerate} \item[3.] \textbf{Yes}, I feel that this article could come from a news source I would trust. \item[2.] \textbf{Sort of}, but something seems a bit fishy. \item[1.] \textbf{No}, this seems like it comes from an unreliable source.
\end{enumerate} \end{enumerate} \subsection{Evaluating consistency} To measure consistency between the article and the metadata, we asked the following questions: \begin{enumerate}[label=(\alph*)] \item (Headline) How well does the article body match the following headline? [headline] \begin{enumerate} \item[3.] \textbf{Yes}, the article makes sense as something that I would see given the headline. \item[2.] \textbf{Sort of}, the article is somewhat related to the headline, but seems slightly off. \item[1.] \textbf{No}, the article is completely off-topic. \end{enumerate} \item (Authors) How well does the article body match the following author(s)? [authors] \begin{enumerate} \item[3.] \textbf{Yes}, the article makes sense as something that could be written by the author(s). \item[2.] \textbf{Sort of}, the article might have been written by the author(s) above, but it sounds unlikely. \item[1.] \textbf{No}, the article body contains information that says it was written by someone else. \end{enumerate} \item (Date) How well does the article body match the following date? [date] \begin{enumerate} \item[3.] \textbf{Yes}, the article makes sense as something that could have been written on [date]. \item[2.] \textbf{Sort of}, the article might have been written on [date], but it sounds unlikely. \item[1.] \textbf{No}, there's information in the article that conflicts the proposed date. \end{enumerate} \end{enumerate} \section{Examples} \label{sec:suppexamples} In Figures~\ref{fig:realex} and \ref{fig:propex}, we include examples of articles with the average scores given by human raters, who were asked to evaluate the style, content, and overall trustworthiness. In Figure~\ref{fig:realex}, we show a real article ({\tt\small Human News}) posted by the Guardian along with an article from {\textsc{Grover}}~ ({\tt\small Machine News}) made using the same metadata. 
Figure~\ref{fig:propex} shows a real propaganda article from the Natural News ({\tt\small Human Propaganda}) and an article made with {\textsc{Grover}}~ ({\tt\small Machine Propaganda}) with the original headline and the style of Huffington Post ({\textsc{Grover}}~ was used to re-write the title to be more stylistically similar to the Huffington Post, as well). We also present several other generated examples, generated from {\textsc{Grover}}-Mega with a top-$p$ threshold of $p{=}0.95$. All of the examples are cut off to 1024 generated BPE tokens, since this is our setup for discrimination. \begin{enumerate}[wide, leftmargin=10pt, labelwidth=!,labelindent=-2pt,itemsep=1pt,topsep=0pt,label=\textbf{\alph*}.] \item {\textsc{Grover}}~can generate controlled propaganda. In Figure~\ref{fig:teaserfigurecontinued}, we show the continuation from Figure~\ref{fig:teaser}, about a link found between autism and vaccines. \item {\textsc{Grover}}~can spoof the identity of writers. In Figure~\ref{fig:paulkrugman} we show a realistic-looking editorial seemingly from New York Times columnist Paul Krugman. \item {\textsc{Grover}}~can generate fake political news. In Figure~\ref{fig:trumpimpeached} we show an article generated about Trump being impeached, written in the style of the Washington Post. \item {\textsc{Grover}}~can generate fake movie reviews (opinion spam; \cite{ott2011finding}). In Figure~\ref{fig:sharknado} we show a movie review, generated in the style of LA Times Movie Critic Kenneth Turan, for Sharknado 6, `The Last Sharknado: It's About Time' \item {\textsc{Grover}}~can generate fake business news. In Figure~\ref{fig:uberfordogs}, we show an article generated about an `Uber for Dogs' startup. 
\end{enumerate} \begin{figure} \centering \includegraphics[trim={0 3cm 0 0},clip,width=\columnwidth]{figures/real_ex.pdf} \caption{Example of human-written news and machine-written news articles about the same headline from The Guardian with the average ratings from human rating study.} \label{fig:realex} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/prop_ex.pdf} \caption{Example of human-written and machine-written articles arguing against fluoride with the average ratings from human rating study.} \label{fig:propex} \end{figure} \input{figures/teaserex.tex} \input{figures/paulkrugman.tex} \input{figures/trumpimpeached.tex} \input{figures/sharknado.tex} \input{figures/uberfordogs.tex}
\subsection{Preliminary} \paragraph{Notations} For every set $\mathcal{A}$, we use $\Delta_{\mathcal{A}}$ to denote the set of all possible distributions over $\mathcal{A}$. For every integer $M$, we use $[M]$ to denote $\{1,2,\dots,M\}$. For every matrix $\mathbf{A}=(A_{i,j})_{i,j}\in\mathbb{R^+}^{s\times t}$, we define $\log \mathbf{A}$ as the $s\times t$ matrix whose $(i,j)$-th entry is $\log (A_{i,j})$. Similarly, for every vector $\mathbf{v}=(v_i)_i\in\mathbb{R^+}^{s}$, we define $\log \mathbf{v}$ as the vector whose $i$-th entry is $\log (v_{i})$. \paragraph{Problem statement} There are $N$ datapoints. Each datapoint $x\in I$ (e.g. the CT scan of a lung nodule) is labeled by $M$ experts $y^{[M]}:=\{y^1,y^2,\dots,y^M\vert y^m \in \mathcal{C}\}$ (e.g. $\mathcal{C}=\{\text{benign},\text{malignant}\}$, 5 experts' labels: \{benign, malignant, benign, benign, benign\}). The datapoint $x$ and the crowdsourced labels $y^{[M]}$ are related to a ground truth $y\in \mathcal{C}$ (e.g. the pathological truth of the lung nodule). We aim to simultaneously train a data classifier $h$ and a crowds aggregator $g$ such that $h: I\to \Delta_{\mathcal{C}}$ predicts the ground truth $y$ based on the datapoint $x\in I$, and $g:\mathcal{C}^{M}\to \Delta_{\mathcal{C}}$ aggregates $M$ crowdsourced labels $y^{[M]}$ into a prediction for the ground truth $y$. We also want to learn a data-crowds forecaster $\zeta:I\times \mathcal{C}^{M}\to \Delta_{\mathcal{C}}$ that forecasts the ground truth $y$ based on both the datapoint $x\in I$ and the crowdsourced labels $y^{[M]}\in \mathcal{C}^{M}$. \subsection{Max-MIG: an information theoretic approach} \begin{figure}[h!]
\centering \includegraphics[width=5.5in]{whole_fig.png} \caption{Max-MIG overview: \emph{Step 1: finding the ``information intersection'' between the data and the crowdsourced labels}: we train a data classifier $h$ and a crowds aggregator $g$ simultaneously to maximize their $f$-mutual information gain $MIG^f(h,g,\mathbf{p})$ with a hyperparameter $\mathbf{p}\in\Delta_{\mathcal{C}}$. $h$ maps each datapoint $x_i$ to a forecast $h(x_i)\in\Delta_{\mathcal{C}}$ for the ground truth. $g$ aggregates $M$ crowdsourced labels $y_i^{[M]}$ into a forecast $g(y_i^{[M]})\in\Delta_{\mathcal{C}}$ by ``weighted average''. We tune the parameters of $h$ and $g$ simultaneously to maximize their $f$-mutual information gain. We will show the maximum is the $f$-mutual information (a natural extension of mutual information, see Appendix~\ref{sec:f}) between the data and the crowdsourced labels. \emph{Step 2: aggregating the ``information intersection''}: after we obtain the best $h,g,\mathbf{p}$ that maximizes $MIG^f(h,g,\mathbf{p})$, we use them to construct a data-crowds forecaster $\zeta$ that forecasts ground truth based on both the datapoint and the crowdsourced labels. \newline To calculate the $f$-mutual information gain, we reward them for the average ``agreements'' between their outputs for the \emph{same} task, i.e. $h(x_i)$ and $g(y_i^{[M]})$ , as shown by the black lines, and punish them for the average ``agreements'' between their outputs for the \emph{different} tasks, i.e. $h(x_i)$ and $g(y_j^{[M]})$ where $i\neq j$, as shown by the grey lines. Intuitively, the reward encourages the data classifier to agree with the crowds aggregator, while the punishment avoids them naively agreeing with each other, that is, both of them map everything to $(1,0,\dots,0)$. The measurement of ``agreement'' depends on the selection of $f$. See formal definition for $MIG^f$ in (\ref{eq:mig}).} \label{schematic} \end{figure} Figure \ref{schematic} illustrates the overview idea of our method. 
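The ``weighted average'' aggregation performed by the crowds aggregator in the figure can be sketched as follows (an illustrative sketch; variable names and shapes are ours): since each expert's label is one-hot encoded, multiplying by that expert's weight matrix just selects one of its columns.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

# Sketch of the crowds aggregator: each expert m has a C x C weight
# matrix W[m]; W[m] @ one_hot(y_m) is exactly column y_m of W[m].
def aggregate(labels, W, b):
    # labels: length-M list of class indices; W: list of (C, C) arrays
    scores = sum(Wm[:, ym] for Wm, ym in zip(W, labels)) + b
    return softmax(scores)
```

For instance, with three experts sharing the weight matrix $2I$ and labels $(0, 0, 1)$, the aggregated forecast puts most of its mass on class 0.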
Here we formally introduce the building blocks of our method. \paragraph{Data classifier $h$} The data classifier $h$ is a neural network with parameters $\varTheta$. Its input is a datapoint $x$ and its output is a distribution over $\mathcal{C}$. We denote the set of all such data classifiers by $H_{NN}$. \paragraph{Crowds aggregator $g$} The crowds aggregator $g$ is a ``weighted average'' function to aggregate crowdsourced labels with parameters $\{\mathbf{W}^m\in \mathbb{R}^{|\mathcal{C}|\times |\mathcal{C}|}\}_{m=1}^M$ and $\mathbf{b}$. Its input $y^{[M]}$ is the crowdsourced labels provided by $M$ experts for a datapoint and its output is a distribution over $\mathcal{C}$. By representing each $y^m\in y^{[M]}$ as a one-hot vector $\mathbf{e}^{(y^m)}:=(0,\dots,1,\dots,0)^{\top}\in \{0,1\}^{|\mathcal{C}|}$ where only the ${y^m}$th entry of $\mathbf{e}^{(y^m)}$ is 1, $$g(y^{[M]}; \{\mathbf{W}^m\}_{m=1}^M,\mathbf{b}) = \text{softmax}(\sum_{m=1}^M \mathbf{W}^m\cdot \mathbf{e}^{(y^m)}+\mathbf{b})$$ where $\mathbf{W}^m\cdot \mathbf{e}^{(y^m)}$ is equivalent to picking the $y^m$th column of matrix $\mathbf{W}^m$, as shown in Figure \ref{schematic}. We denote the set of all such crowds aggregators by $G_{WA}$. \paragraph{Data-crowds forecaster $\zeta$} Given a data classifier $h$, a crowds aggregator $g$ and a distribution $\mathbf{p} = (p_c)_c\in\Delta_{\mathcal{C}}$ over the classes, the data-crowds forecaster $\zeta$, that forecasts the ground truth based on both the datapoint $x$ and the crowdsourced labels $y^{[M]}$, is constructed by $$\zeta(x,y^{[M]};h,g,\mathbf{p} )=\text{Normalize}\left((\frac{h(x)_c \cdot g(y^{[M]})_c}{ p_c})_c\right)$$ where Normalize$(\mathbf{v}):=\frac{\mathbf{v}}{\sum_c v_c}$. \paragraph{$f$-mutual information gain $MIG^f$} The $f$-mutual information gain $MIG^f$ measures the ``mutual information'' between two hypotheses, and was proposed by \citet{kong2018water}.
Given $N$ datapoints $x_1,x_2,\dots,x_N\in I$ where each datapoint $x_i$ is labeled by $M$ crowdsourced labels $y_i^1,y_i^2,\dots,y_i^M\in \mathcal{C}$, the $f$-mutual information gain between $h$ and $g$, associated with a hyperparameter $\mathbf{p}=(p_{c})_{c}\in\Delta_{\mathcal{C}}$, is defined as the average ``agreements'' between $h$ and $g$ for the same task minus the average ``agreements'' between $h$ and $g$ for different tasks, that is, \begin{align} \label{eq:mig} MIG^f(\{x_i\},\{y^{[M]}_i\};h,g,\mathbf{p})=&\frac{1}{N}\sum_{i} \partial{f}\bigg(\sum_{c\in\mathcal{C}}\frac{h(x_i)_{c} \cdot g(y_i^{[M]})_{c}}{p_{c}}\bigg)\\ \nonumber &-\frac{1}{N(N-1)}\sum_{i\neq j}f^{\star}\Bigg(\partial{f}\bigg(\sum_{c\in\mathcal{C}}\frac{h(x_i)_{c} \cdot g(y_j^{[M]})_{c}}{p_{c}}\bigg)\Bigg) \end{align} where $f$ is a convex function satisfying $f(1)=0$ and $f^{\star}$ is the Fenchel conjugate of $f$. We can use Table \ref{table:distinguishers} as a reference for $\partial{f}(\cdot)$ and $f^{\star}(\partial{f}(\cdot))$. \begin{table}[htp] \caption{Reference for common $f$-divergences and corresponding $MIG^f$'s building blocks.
This table is derived from \citet{nowozin2016f}.} \begin{center} \begin{tabular}{llll} \toprule {$f$-divergence} & {$f(t)$} & {$\partial{f}(K)$} & {$f^{\star}(\partial{f}(K))$} \\ \midrule\midrule KL divergence & $t\log t$ & $1+\log K$ & $K$ \\ \midrule Pearson $\chi^2$ & $(t-1)^2$ & $2(K-1)$ & $K^2-1$ \\ \midrule Jensen-Shannon &$-(t+1)\log{\frac{t+1}{2}}+t\log t$ & $\log{\frac{2K}{1+K}}$ & $-\log(\frac{2}{1+K})$ \\ \bottomrule \end{tabular} \end{center} \label{table:distinguishers} \end{table} Since the parameters of $h$ are $\varTheta$ and the parameters of $g$ are $\{\mathbf{W}^m\}_{m=1}^M$ and $\mathbf{b}$, we naturally rewrite $MIG^f(\{x_i\},\{y^{[M]}_i\};h,g,\mathbf{p})$ as $$ MIG^f(\{x_i\},\{y^{[M]}_i\};\varTheta, \{\mathbf{W}^m\}_{m=1}^M,\mathbf{b},\mathbf{p}).$$ We seek $\{\varTheta, \{\mathbf{W}^m\}_{m=1}^M, \mathbf{b},\mathbf{p}\}$ that maximizes $MIG^f$. Later we will show that when the prior of the ground truth is $\mathbf{p}^*$ (e.g., $\mathbf{p}^*=(0.8,0.2)$, i.e., the ground truth is benign with probability 0.8 and malignant with probability 0.2 a priori), the best $\mathbf{b}$ and $\mathbf{p}$ are $\log \mathbf{p}^*$ and $\mathbf{p}^*$ respectively. Thus, we can set $\mathbf{b}$ as $\log \mathbf{p}$ and only tune $\mathbf{p}$. When we have side information about the prior $\mathbf{p}^*$, we can fix parameter $\mathbf{p}$ as $\mathbf{p}^*$, and fix parameter $\mathbf{b}$ as $\log \mathbf{p}^*$. \subsection{Experts' expertise} For each information structure in Figure~\ref{fig:cases}, we generate two groups of crowdsourced labels for each dataset: labels provided by (H) experts with relatively high expertise; (L) experts with relatively low expertise. For each of the situations (H) and (L), all three cases have the same senior experts. \begin{case}(Independent mistakes) \label{case1} $M_s$ senior experts are mutually conditionally independent. (H) $M_s = 5.$ (L) $M_s = 10.$ \end{case} \paragraph{Dogs vs.
Cats} In situation (H), some senior experts are more familiar with cats, while others make better judgments on dogs. For example, expert A is more familiar with cats; her expertise for dogs/cats is 0.6/0.8 in the sense that if the ground truth is dog/cat, she labels the image as ``dog''/``cat'' with probability 0.6/0.8 respectively. Similarly, the other experts' expertise is B:0.6/0.6, C:0.9/0.6, D:0.7/0.7, E:0.6/0.7. In situation (L), all ten seniors' expertise is 0.55/0.55. \paragraph{CIFAR-10} In situation (H), we generate experts who may make mistakes in distinguishing the hard pairs: cat/dog, deer/horse, airplane/bird, automobile/truck, frog/ship, but can perfectly distinguish other easy pairs (e.g. cat/frog), which makes sense in practice. When they cannot distinguish a pair, some of them may label the pair randomly and some of them always assign the same class to the pair. In detail, for each hard pair, expert A always labels the pair with the same class (e.g. A always labels the image as ``cat'' when the image has cats or dogs), while expert B labels the pair uniformly at random (e.g. B labels the image as ``cat'' with probability 0.5 and ``dog'' with probability 0.5 when the image has cats or dogs). Expert C is familiar with mammals so she can distinguish cat/dog and deer/horse, while for other hard pairs, she labels each of them uniformly at random. Expert D is familiar with vehicles so she can distinguish airplane/bird, automobile/truck and frog/ship, while for other hard pairs, she always labels each of them with the same class. Expert E does not have special expertise. For each hard pair, expert E labels them correctly with probability 0.6. In situation (L), all ten senior experts label each image correctly with probability $0.2$ and label each image as one of the other false classes uniformly with probability $\frac{0.8}{9}$. \paragraph{LUNA16} In situation (H), some senior experts tend to label the image as ``benign'' while others tend to label the image as ``malignant''.
Their expertise for benign/malignant is: A: 0.6/0.9, B:0.7/0.7, C:0.9/0.6, D:0.6/0.7, E:0.7/0.6. In situation (L), all ten seniors' expertise is 0.6/0.6. \begin{case} (Naive majority) \label{case2} $M_s$ senior experts are mutually conditionally independent, while the other $M_j$ junior experts effortlessly label all data as the first class. (H) $M_s = 5$, $M_j=5$. (L) $M_s = 10$, $M_j=15$. \end{case} For Dogs vs. Cats, all junior experts label everything as ``cat''. For CIFAR-10, all junior experts label everything as ``airplane''. For LUNA16, all junior experts label everything as ``benign''. \begin{case} (Correlated mistakes) \label{case3} $M_s$ senior experts are mutually conditionally independent, and each junior expert copies one of the senior experts. (H) $M_s = 5$, $M_j=5$. (L) $M_s = 10$, $M_j=2$. \end{case} For Dogs vs. Cats, CIFAR-10 and LUNA16, in situation (H), two junior experts copy expert $A$'s labels and three junior experts copy expert $C$'s labels; in situation (L), one junior expert copies expert $A$'s labels and another junior expert copies expert $C$'s labels. \subsection{Implementation details} \paragraph{Networks} For Dogs vs. Cats and LUNA16, we follow the four-layer network in \cite{rodrigues2017deep}. We use the Adam optimizer with learning rate $1.0 \times 10^{-4}$ for both the data classifier and the crowds aggregator. Batch size is set to $16$. For CIFAR-10, we use VGG-16 as the backbone. We use the Adam optimizer with learning rate $1.0 \times 10^{-3}$ for the data classifier and $1.0 \times 10^{-4}$ for the crowds aggregator. Batch size is set to $64$. For the LabelMe data, we apply the same setting as \cite{rodrigues2017deep}: we use a pre-trained VGG-16 deep neural network and apply only one FC layer (with 128 units and ReLU activations) and one output layer on top, using 50\% dropout. We use the Adam optimizer with learning rate $1.0 \times 10^{-4}$ for both the data classifier and the crowds aggregator.
For our method Max-MIG's crowds aggregator, for Dogs vs. Cats and LUNA16, we set the bias $\mathbf{b}$ as $\log \mathbf{p}$ and only tune $\mathbf{p}$. For CIFAR-10 and the LabelMe data, we fix the prior distribution $\mathbf{p}$ to be the uniform distribution $\mathbf{p}_0$ and fix the bias $\mathbf{b}$ as $\log \mathbf{p}_0$. \paragraph{Initialization} For AggNet and our method Max-MIG, we initialize the parameters $\{\mathbf{W}_m\}_m$ using the method in \citet{raykar2010learning}: \begin{align}\label{initial} W_{c,c'}^m = \log{\frac{\sum\limits_{i=1}^N Q(y_i=c)\mathbbm{1}(y_i^m=c')}{\sum\limits_{i=1}^N Q(y_i=c)}} \end{align} where $\mathbbm{1}(y_i^m=c')=1$ when $y_i^m=c'$ and $\mathbbm{1}(y_i^m=c')=0$ when $y_i^m\neq c'$, and $N$ is the total number of datapoints. We average all crowdsourced labels to obtain $Q(y_i=c) := \frac{1}{M}\sum\limits_{m=1}^M \mathbbm{1}(y_i^m=c)$. For the Crowd Layer method, we initialize the weight matrices using the identity matrix on Dogs vs. Cats and LUNA16 as \citet{rodrigues2017deep} suggest. However, this initialization method leads to poor results on CIFAR-10. Thus, we use (\ref{initial}) for Crowd Layer on CIFAR-10, which performed best in our experiments. \subsection{Results} \begin{figure}[h!] \centering \includegraphics[width=5.5in]{data.png} \caption{Results on Dogs vs. Cats, CIFAR-10, LUNA16.} \label{fig:data} \end{figure} We train the data classifier $h$ on the four datasets through our method\footnote{The results of Max-MIG are based on KL divergence. The results for other divergences are similar.} and other related methods. The accuracy of the trained data classifiers on the test set is shown in Table~\ref{table:labelme} and Figure \ref{fig:data}. We also show the accuracy of our data-crowds forecaster on the test set and compare it with AggNet (Table~\ref{table:fore}). For the performances of the trained data classifiers, our method Max-MIG (red) outperforms all other methods in almost every experiment.
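The initialization in (\ref{initial}) can be sketched in a few lines of NumPy (our own illustrative code, not the authors' implementation): the soft labels $Q$ come from averaging the one-hot crowdsourced labels, and each $\mathbf{W}^m$ is the log of an estimated confusion matrix.

```python
import numpy as np

def init_confusion_weights(labels, C, eps=1e-8):
    """labels: (N, M) integer array of crowdsourced labels in {0, ..., C-1}.
    Returns the soft labels Q (N, C) and a list of M matrices with
    W[m][c, c'] = log( sum_i Q(y_i=c) 1(y_i^m=c') / sum_i Q(y_i=c) )."""
    N, M = labels.shape
    onehot = np.eye(C)[labels]           # (N, M, C) one-hot votes
    Q = onehot.mean(axis=1)              # average over the M experts
    W = []
    for m in range(M):
        num = Q.T @ onehot[:, m, :]      # (C, C) soft co-occurrence counts
        den = Q.sum(axis=0)[:, None]     # (C, 1) total soft mass per class
        W.append(np.log(num / den + eps))
    return Q, W
```

Each row of $\exp(\mathbf{W}^m)$ is (up to the `eps` smoothing) a probability distribution over expert $m$'s reported labels given a presumed true class.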
For the real-world dataset, LabelMe, we achieve new state-of-the-art results. For the synthesized crowdsourced labels, the majority vote method (grey) fails in the naive majority situation. AggNet performs reasonably well when the experts are conditionally independent, including the naive majority case since a naive expert is independent of everything, while our method outperforms it by a large margin in the correlated mistakes case. This matches the theory in Appendix~\ref{sec:mle}: AggNet is based on MLE, and MLE fails in the correlated mistakes case. The Doctor Net (green) and the Crowd Layer (blue) methods are not robust to the naive majority case. Our data-crowds forecaster (Table~\ref{table:fore}) performs better than our data classifier, which shows that our data-crowds forecaster actually takes advantage of the additional information, the crowdsourced labels, to give a better result. Like our method, AggNet also jointly trains the classifier and the aggregator, and can be used to train a data-crowds forecaster. We compared our data-crowds forecaster with AggNet. The results still match our theory. When there are no correlated mistakes, we outperform AggNet or perform very similarly to it. When there are correlated mistakes, we outperform AggNet by a large margin (e.g., +30\%). Recall that in the experiments, for each of the situations (H) and (L), all three cases have the same senior experts. Thus, all three cases' crowdsourced labels have the same amount of information. The results show that Max-MIG has similar performances for all three cases in each of the situations (H) and (L), which validates our theoretical result: Max-MIG finds the ``information intersection'' between the data and the crowdsourced labels. \begin{comment} \begin{table}[htp] \caption{Results of Case \ref{case1} on three datasets.} \label{table:independent} \begin{center} \begin{tabular}{c c c c c c c c c c } \toprule Method & \multicolumn{2}{c}{Dogs vs.
Cats} & \multicolumn{2}{c}{CIFAR-10} & \multicolumn{2}{c}{medical} \\ \midrule & acc & auc & acc & auc & acc & auc \\ \midrule Majority Voting &$\left.61.70\pm1.60\middle/76.64\pm0.69\right.$&$\left.73.54\pm1.56\middle/85.05\pm0.83\right.$&$66.05\pm1.30$ &$97.22\pm0.08$\\ Crowd Layer &$\left.69.38\pm0.30\middle/77.83\pm1.16\right.$&$\left.76.72\pm0.63\middle/85.90\pm0.93\right.$&$71.34\pm10.03$ &$95.36\pm2.38$\\ Doctor Net &$\left.67.39\pm0.99\middle/77.29\pm0.58\right.$&$\left.73.71\pm1.17\middle/85.99\pm0.58\right.$&$69.16\pm0.51$&$97.66\pm0.06$\\ AggNet &$\left.70.46\pm0.40\middle/79.36\pm0.71\right.$&$\left.77.54\pm0.60\middle/\bm{87.83\pm0.48}\right.$&$86.13\pm0.15$& \bm{$98.74\pm0.02$}\\ \midrule Max-MIG &$\left.\bm{71.44\pm0.99}\middle/\bm{79.52\pm0.47}\right.$&$\left.\bm{78.83\pm0.69}\middle/87.69\pm0.39\right.$&$\bm{86.33\pm0.20}$ &$98.71\pm0.02$\\ \midrule Supervised Learning &$84.16\pm0.18$&$92.00\pm0.16$&$86.77\pm0.25$ &$98.79\pm0.03$\\ \bottomrule \end{tabular} \end{center} \end{table} \end{comment} \begin{comment} \begin{table}[htp] \caption{Results of Case \ref{case2} on three datasets.} \label{table:dependent2} \begin{center} \begin{tabular}{c c c c cc c } \toprule Method & \multicolumn{2}{c}{Dogs vs. 
Cats} & \multicolumn{2}{c}{CIFAR-10} & \multicolumn{2}{c}{medical} \\ \midrule & acc & auc & acc &auc &acc & auc \\ \midrule Majority Voting &$\left.50.00\pm0.0\middle/50.00\pm0.0\right.$&$\left.43.60\pm1.81\middle/42.35\pm1.65\right.$&$10\pm 0.0$ &$50.48\pm 0.15$\\ Crowd Layer&$\left.50.00\pm0.0\middle/50.00\pm0.0\right.$&$\left.48.41\pm2.49\middle/49.88\pm0.07\right.$&$53.77\pm 8.78$&$87.73\pm 4.04$\\ Docter Net&$\left.50.00\pm0.0\middle/50.00\pm0.0\right.$&$\left.74.64\pm1.39\middle/86.63\pm0.15\right.$&$10\pm 0.0$&$97.78\pm 0.04$\\ AggNet&$\left.70.07\pm0.73\middle/79.53\pm0.07\right.$&$\left.77.61\pm0.70\middle/87.57\pm0.18\right.$& $\left.86.27\pm0.40\right.$&$98.71\pm 0.03$\\ \midrule Max-MIG&$\left.\bm{71.07\pm0.48}\middle/\bm{80.25\pm0.003}\right.$&$\left.\bm{78.24\pm0.68}\middle/\bm{88.2\pm0.35}\right.$&$\left.\bm{86.55\pm0.14}\right.$&$\bm{98.72\pm 0.04}$\\ \midrule Supervised Learning&$84.16\pm0.18$&$92.00\pm0.16$&$\left.86.77\pm0.25\right.$&$98.79\pm 0.03$\\ \bottomrule \end{tabular} \end{center} \end{table} \begin{table}[htp] \caption{Results of Case \ref{case3} on three datasets.} \label{table:dependent2} \begin{center} \begin{tabular}{c c c c c c c } \toprule Method & \multicolumn{2}{c}{Dogs vs. 
Cats} & \multicolumn{2}{c}{CIFAR-10} & \multicolumn{2}{c}{medical} \\ \midrule & acc & auc & acc &auc &acc & auc \\ \midrule Majority Voting&$\left.61.82\pm0.69\middle/77.52\pm0.55\right.$&$\left.71.14\pm1.04\middle/85.49\pm0.57\right.$ &$\left.59.72\pm1.81\right.$ &$\left.97.04\pm0.06\right.$\\ Crowds Layer&$\left.67.82\pm0.35\middle/77.63\pm0.65\right.$&$\left.74.23\pm0.48\middle/86.66\pm0.46\right.$&$\left.72.56\pm6.46\right.$ &$\left.95.97\pm1.21\right.$\\ Doctor Net&$\left.65.47\pm0.45\middle78.58\pm0.83\right.$&$\left.71.58\pm0.37\middle/86.70\pm0.87\right.$&$\left.62.33\pm2.04\right.$ &$\left.97.63\pm0.05\right.$\\ AggNet&$\left.63.85\pm1.09\middle/71.97\pm1.25\right.$&$\left.70.17\pm2.08\middle/84.27\pm0.48\right.$ &$\left.63.91\pm0.53\right.$ &$\left.95.72\pm0.12\right.$\\ \midrule Max-MIG&$\left.\bm{68.4\pm0.40}\middle/\bm{78.94\pm0.61}\right.$&$\left.\bm{75.13\pm0.60}\middle/\bm{87.36\pm0.49}\right.$& \bm{$\left.86.71\pm0.21\right.$ }&\bm{$\left.98.75\pm0.03\right.$}\\ \midrule Supervised Learning&$84.16\pm0.18$&$92.00\pm0.16$&$\left.86.77\pm0.25\right.$ &$\left.98.79\pm0.03\right.$\\ \bottomrule \end{tabular} \end{center} \end{table} \end{comment} \subsection{$f$-divergence and Fenchel's duality} \paragraph{$f$-divergence~\citep{ali1966general,csiszar2004information}} $f$-divergence $D_f:\Delta_{\Sigma}\times \Delta_{\Sigma}\mapsto \mathbb{R}$ is a non-symmetric measure of the difference between distribution $\mathbf{p}\in \Delta_{\Sigma} $ and distribution $\mathbf{q}\in \Delta_{\Sigma} $ and is defined to be $$D_f(\mathbf{p},\mathbf{q})=\sum_{\sigma\in \Sigma} \mathbf{p}(\sigma)f\bigg( \frac{\mathbf{q}(\sigma)}{\mathbf{p}(\sigma)}\bigg)$$ where $f:\mathbb{R}\mapsto\mathbb{R}$ is a convex function and $f(1)=0$. 
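To make the objective concrete, here is an illustrative (unofficial) NumPy version of the empirical $MIG^f$ in (\ref{eq:mig}) for the KL row of Table \ref{table:distinguishers}, where $\partial f(K)=1+\log K$ and $f^{\star}(\partial f(K))=K$; the function name and array layout are our own assumptions.

```python
import numpy as np

def mig_kl(H, G, p):
    """H: (N, C) rows h(x_i); G: (N, C) rows g(y_j^[M]); p: prior, shape (C,).
    K[i, j] = sum_c H[i, c] * G[j, c] / p[c].  For f(t) = t log t:
    reward 1 + log K_ii on paired samples, penalize K_ij for i != j."""
    N = H.shape[0]
    K = (H / p) @ G.T
    reward = (1.0 + np.log(np.diag(K))).mean()
    penalty = (K.sum() - np.trace(K)) / (N * (N - 1))
    return reward - penalty
```

Sanity check: if both sides output the prior $\mathbf{p}$ on every sample, every entry of $K$ equals 1 and the gain is exactly zero, while informative forecasts that agree on paired samples push it above zero.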
\subsection{$f$-mutual information} Given two random variables $X,Y$ whose realization spaces are $\Sigma_X$ and $\Sigma_Y$, let $\mathbf{U}_{X,Y}$ and $\mathbf{V}_{X,Y}$ be two probability measures where $\mathbf{U}_{X,Y}$ is the joint distribution of $(X,Y)$ and $\mathbf{V}_{X,Y}$ is the product of the marginal distributions of $X$ and $Y$. Formally, for every pair of $(x,y)\in\Sigma_X\times\Sigma_Y$, $$\mathbf{U}_{X,Y}(X=x,Y=y)=\Pr[X=x,Y=y]\qquad \mathbf{V}_{X,Y}(X=x,Y=y)=\Pr[X=x]\Pr[Y=y].$$ If $\mathbf{U}_{X,Y}$ is very different from $\mathbf{V}_{X,Y}$, the mutual information between $X$ and $Y$ should be high since knowing $X$ substantially changes the belief about $Y$. If $\mathbf{U}_{X,Y}$ equals $\mathbf{V}_{X,Y}$, the mutual information between $X$ and $Y$ should be zero since $X$ is independent of $Y$. Intuitively, the ``distance'' between $\mathbf{U}_{X,Y}$ and $\mathbf{V}_{X,Y}$ represents the mutual information between $X$ and $Y$. \begin{definition}[$f$-mutual information \citep{2016arXiv160501021K}] The $f$-mutual information between $X$ and $Y$ is defined as $$MI^f(X, Y)=D_f(\mathbf{U}_{X,Y},\mathbf{V}_{X,Y})$$ where $D_f$ is the $f$-divergence. $f$-mutual information is always non-negative. \end{definition} \citet{2016arXiv160501021K} show that if we measure the amount of information by $f$-mutual information, any ``data processing'' on either of the random variables will decrease the amount of information between them. With this property, \citet{2016arXiv160501021K} propose an information-theoretic mechanism design framework using $f$-mutual information. \citet{kong2018water} reduce the co-training problem to a mechanism design problem and extend the information-theoretic framework in \citet{2016arXiv160501021K} to address the co-training problem.
\section{Introduction} \input{intro.tex} \section{Related work} \input{relatedwork.tex} \section{Method} \input{approach.tex} \subsection{Theoretical justification} \input{theory.tex} \section{Experiment} \input{experiment.tex} \section{Conclusion and discussion} \input{conclusion.tex} \subsubsection*{Acknowledgments} We gratefully acknowledge support from research grants NSFC-61625201 and 61527804. \newpage
\section{Introduction} \label{sec:intro} Device-to-device (D2D) communication enables wireless peer-to-peer services directly between user equipments (UEs) to facilitate high data rate local service as well as offload the traffic of the cellular base station (i.e., Evolved Node B [eNB] in an LTE-Advanced [LTE-A] network). By reusing the LTE-A cellular resources, D2D communication enhances spectrum utilization and improves cellular coverage. In conjunction with traditional local voice and data services, D2D communication opens up new opportunities for commercial applications, such as proximity-based services, in particular social networking applications with content sharing features (i.e., exchanging photos, videos or documents through smart phones), local advertisement, multi-player gaming and data flooding \cite{d2d_example, d2d_example2, 3gpp:d2d_example}. In the context of D2D communication, a crucial issue is to set up direct links between the D2D UEs while satisfying the quality-of-service (QoS) requirements of traditional cellular UEs (CUEs) and the D2D UEs in the network. In practice, the advantages of D2D communication may be limited due to: \begin{inparaenum}[\itshape i)] \item \textit{longer distance:} the potential D2D UEs may not be in near proximity; \item \textit{poor propagation medium:} the link condition between two D2D UEs may not be favorable; \item \textit{interference to and from CUEs:} in a spectrum underlay system, D2D transmitters can cause severe interference to other receiving nodes in the network, and the D2D receivers may also experience interference from other transmitting nodes. Partitioning the available spectrum for its use by CUEs and D2D UEs in a non-overlapping manner (i.e., overlay D2D communication) could be an alternative; however, this would significantly reduce spectrum utilization \cite{phond-d2d, d2d_swarm}.
\end{inparaenum} In such cases, network-assisted transmission through relays could enhance the performance of D2D communication when the D2D UEs are far away from each other and/or the quality of the D2D communication channel is not good enough for direct communication. Unlike most of the existing work on D2D communication, in this paper, we consider relay-assisted D2D communication in LTE-A cellular networks where D2D pairs are served by relay nodes. In particular, we consider LTE-A Layer-3 (L3) relays\footnote{An L3 relay with self-backhauling configuration performs the same operation as an eNB except that it has a lower transmit power and a smaller cell size. It controls cell(s) and each cell has its own cell identity. The relay transmits its own control signals and the UEs are able to receive scheduling information directly from the relay node \cite{relay-book-1}.}. We concentrate on scenarios in which the proximity and link condition between the potential D2D UEs may not be favorable for direct communication. Therefore, they may communicate via relays. The radio resources at the relays (e.g., resource blocks [RBs] and transmission power) are shared among the D2D communication links and the two-hop cellular links using these relays. A use case for such relay-aided D2D communication could be machine-to-machine (M2M) communication \cite{d2d_m2m_1} for smart cities. In such a communication scenario, automated sensors (i.e., UEs) are deployed within a macro-cell spanning a few city blocks; however, the link condition and/or proximity between devices may not be favorable. Due to the nature of the applications, these UEs are required to transmit data periodically \cite{m2m_our_paper}. Relay-aided D2D communication could be an elegant solution to provide reliable transmission as well as improve overall network throughput in such a scenario. 
Due to the time-varying and random nature of the wireless channel, we formulate a robust resource allocation problem with the objective of maximizing the end-to-end rate (i.e., the minimum achievable rate over the two hops) for the UEs while maintaining the QoS (i.e., rate) requirements for the cellular and D2D UEs under a total power constraint at the relay node. The link gains, the interference among relay nodes, and the interference at the receiving D2D UEs are not exactly known (i.e., they are estimated with an additive error). The robust problem formulation is observed to be convex, and therefore, we apply a gradient-based method to solve the problem distributively at each relay node with polynomial complexity. We demonstrate that introducing robustness to deal with channel uncertainties affects the achievable network sum-rate. To reduce the cost of robustness, defined as the corresponding reduction of the achievable sum-rate, we utilize the \textit{chance constraint approach} to achieve a trade-off between robustness and optimality by adjusting some protection functions. We compare the performance of our proposed method with an underlay D2D communication scheme in which the D2D UEs communicate directly without the assistance of relays. The numerical results show that, beyond a distance threshold for the D2D UEs, relaying D2D traffic provides a significant gain in achievable data rate. The main contributions of this paper can be summarized as follows: \begin{itemize} \item We analyze the performance of relay-assisted D2D communication under uncertain system parameters. The problem of RB and power allocation at the relay nodes for the CUEs and D2D UEs is formulated and solved for the globally optimal solution when perfect channel gain information for the different links is available. As opposed to most of the resource allocation schemes in the literature, where only a single D2D link is considered, we consider multiple D2D links along with multiple cellular links that are supported by relay nodes. 
\item Assuming that perfect channel information is unavailable, we formulate a robust resource allocation problem for relay-assisted D2D communication under uncertain channel information in both hops and show that the convexity of the robust formulation is maintained. We propose a distributed algorithm with polynomial-time complexity. \item The cost of robust resource allocation is analyzed. In order to achieve a balance between network performance and robustness, we provide a trade-off mechanism. \end{itemize} The rest of this paper is organized as follows. A review of the related work and the motivation of this work are presented in Section \ref{sec:related_works}. In Section \ref{sec:sys_model}, we present the system model and assumptions. In Section~\ref{sec:nominal}, we formulate the RB and power allocation problem for the nominal (i.e., non-robust) case. The robust resource allocation problem is formulated in Section~\ref{sec:robust}. In order to allocate resources efficiently, we propose a robust distributed algorithm and discuss the robustness-optimality trade-off in Section \ref{sec:robust_algo}. The performance evaluation results are presented in Section \ref{sec:performance_eval}, and finally, we conclude the paper in Section \ref{sec:conclusion}. The key mathematical notations used in the paper are listed in Table \ref{tab:notations}. 
\begin{table}[!t] \renewcommand{\arraystretch}{1.3} \caption{Mathematical Notations} \label{tab:notations} \centering \begin{tabular}{c|p{5.5cm}} \hline \bfseries Notation & \bfseries Physical interpretation\\ \hline\hline $\mathcal{N} = \lbrace 1, 2, \ldots, N \rbrace $ & Set of available RBs \\ \hline $\mathcal{L} = \lbrace 1, 2, \ldots, L \rbrace$ & Set of relays \\ \hline $u_l$ & A UE served by relay $l$ \\ \hline $\mathcal{U}_l, |\mathcal{U}_l|$ & Set of UEs and total number of UEs served by relay $l$, respectively \\ \hline $h_{i,j}^{(n)}$ & Direct link gain between the node $i$ and $j$ over RB $n$ \\ \hline $R_{u_l}^{(n)}$ & End-to-end data rate for $u_l$ over RB $n$ \\ \hline $x_{u_l}^{(n)}, S_{u_l, l}^{(n)}$ & RB allocation indicator and actual transmit power for $u_l$ over RB $n$, respectively \\ \hline $I_{u_l, l}^{(n)}$ & Aggregated interference experienced by $u_l$ over RB $n$ \\ \hline $\mathbf{g}_{l, i}^{(n)}$ & Nominal link gain vector over RB $n$ in hop $i$ \\ \hline $\bar{\mathbf{g}}_{l, i}^{(n)}$, $\hat{\mathbf{g}}_{l, i}^{(n)}$ & Estimated and uncertain (i.e., the bounded error) link gain vector, respectively, over RB $n$ in hop $i$ \\ \hline $\Re_{g_{l, i}}^{(n)}, \Delta_{g_{l, i}}^{(n)} $ & Uncertainty set and protection function, respectively, for link gain over RB $n$ in hop $i$ \\ \hline $\Re_{I_{u_l,l}}^{(n)}, \Delta_{I_{u_l,l}}^{(n)}$ & Uncertainty set and protection function of interference level, respectively, for $u_l$ over RB $n$ \\ \hline $\Psi_{l,i}^{(n)}$ & Bound of uncertainty in link gain for hop $i$ over RB $n$ \\ \hline $\Upsilon_{u_l}^{(n)} $ & Bound of uncertainty in interference level for $u_l$ over RB $n$ \\ \hline ${\parallel \mathbf{y} \parallel}_\alpha$ & Linear norm of vector $\mathbf{y}$ with order $\alpha$ \\ \hline ${\parallel \mathbf{y} \parallel}^*$ & Dual norm of $\parallel \mathbf{y} \parallel$\\ \hline $\mathbf{abs}\lbrace y \rbrace$ & Absolute value of $y$ \\ \hline $\mathbf{A}(j,:)$ & $j$-th row of matrix 
$\mathbf{A}$ \\ \hline $\Lambda_{\kappa}^{(t)}$ & Step size for variable $\kappa$ at iteration $t$\\ \hline $\mathscr{R}_\Delta $ & Reduction of achievable sum-rate due to uncertainty\\ \hline $\Theta_{l,i}^{(n)}$ & Threshold probability of violating interference constraint for RB $n$ in hop $i$ \\ \hline $\mathcal{S}_{\Theta_{l,i}^{(n)}} \left( \mathscr{R}_\Delta \right)$ & Sensitivity of $\mathscr{R}_\Delta $ in hop $i$ over RB $n$ \\ \hline \end{tabular} \end{table} \section{Related Work and Motivation} \label{sec:related_works} Although resource allocation for D2D communication in future generation orthogonal frequency-division multiple access (OFDMA)-based wireless networks is an active area of research, there are very few works that consider relays for D2D communication. A resource allocation scheme based on a column generation method is proposed in \cite{phond-d2d} to maximize the spectrum utilization by finding the minimum transmission length (i.e., time slots) for D2D links while protecting the cellular users from interference and guaranteeing QoS. In \cite{zul-d2d}, a greedy heuristic-based resource allocation scheme is proposed for both uplink and downlink scenarios where a D2D pair shares the same resources with a CUE only if the achieved SINR is greater than a given SINR requirement. A new spectrum sharing protocol for D2D communication overlaying a cellular network is proposed in \cite{d2d_new_paper}, which allows the D2D users to communicate bi-directionally while assisting the two-way communications between the eNB and the CUE. The resource allocation problem for D2D communication underlaying cellular networks is addressed in \cite{lingyang-icc13}. In \cite{xen-1}, the authors consider relay selection and resource allocation for uplink transmission in LTE-A networks with two classes of users having different (i.e., specific and flexible) rate requirements. 
The objective is to maximize the system throughput by satisfying the rate requirements for the rate-constrained users while confining the transmit power within a power budget. Although D2D communication was initially proposed to relay user traffic \cite{d2d_first_relay}, few works consider using relays in D2D communication. To the best of our knowledge, relay-assisted D2D communication was first introduced in \cite{d2d-rel-4}, where the relay selection problem for D2D communication underlaying a cellular network was studied. The authors propose a distributed relay selection method for relay-assisted D2D communication systems, which first coordinates the interference caused by the coexistence of the D2D system and the cellular network and eliminates unsuitable relays accordingly. Afterwards, the best relay is chosen among the candidate relays using a distributed method. In \cite{d2d-rel-3}, the authors consider D2D communication for relaying UE traffic toward the eNB and deduce a relay selection rule based on the interference constraints. In \cite{d2d-rel-1, d2d_relay_2}, the maximum ergodic capacity and outage probability of cooperative relaying are investigated in relay-assisted D2D communication considering power constraints at the eNB. The numerical results show that multi-hop relaying lowers the outage probability and improves cell-edge throughput capacity by reducing the effect of interference from the CUE. In all of the above-cited works, it has generally been assumed that complete system information (e.g., channel state information [CSI]) is available to the network nodes, which is unrealistic for a practical system. Uncertainty in the CSI (in particular, the channel quality indicator [CQI] in an LTE-A system) can be modeled as the sum of the estimated CSI (i.e., the nominal value) and an additive error (the uncertain element). 
Accordingly, by using robust optimization theory, the nominal optimization problem (i.e., the optimization problem without considering uncertainty) is mapped to another optimization problem (i.e., the robust problem). To tackle uncertainty, two approaches have commonly been used in robust optimization theory. First, the \textit{Bayesian approach} (Chapter 6.4 in \cite{book-boyd}) considers the statistical knowledge of errors and satisfies the optimization constraints in a probabilistic manner. Second, the \textit{worst-case approach} (Chapter 6.4 in \cite{book-boyd}, \cite{worst-case_robust}) assumes that the error (i.e., uncertainty) is bounded in a closed set called the \textit{uncertainty set} and satisfies the constraints for all realizations of the uncertainty in that set. Although the Bayesian approach has been widely used in the literature (e.g., in \cite{bayesian_robust_1}, \cite{bayesian_robust_2}), the worst-case approach is more appropriate because it satisfies the constraints in all error instances. By applying the worst-case approach, the size of the uncertainty set can be obtained from the statistics of the error. As an example, the uncertainty set can be defined by a probability distribution function of the uncertainty in such a way that all realizations of the uncertainty remain within the uncertainty set with a given probability. Applying robustness introduces new variables into the optimization problem, which may turn the nominal formulation into a non-convex optimization problem that requires excessive computation to solve. To avoid this difficulty, the robust problem is converted to a convex optimization problem and solved in a traditional way. Although not in the context of D2D communication, there have been a few works considering resource allocation under uncertainties in the radio links. 
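To make the worst-case approach concrete, the following sketch (all numbers are hypothetical and not taken from this paper) shows how a nominal interference constraint is tightened: each uncertain gain is replaced by its estimate plus its bounded error, which adds a protection term to the left-hand side of the constraint.

```python
# Hypothetical powers, estimated gains, and bounded errors on one RB.
P = [0.2, 0.5, 0.3]        # transmit powers
g_est = [0.8, 0.4, 0.6]    # nominal (estimated) link gains
err = [0.1, 0.05, 0.1]     # bounded estimation errors, |error_k| <= err_k
I_th = 1.0                 # interference threshold

# Nominal constraint: sum_k P_k * g_est_k <= I_th.
nominal = sum(p * g for p, g in zip(P, g_est))

# Worst case over the box uncertainty set: every gain sits at its upper
# bound, tightening the constraint by the protection term sum_k P_k * err_k.
protection = sum(p * e for p, e in zip(P, err))
worst_case = nominal + protection

print(nominal, worst_case, worst_case <= I_th)
```

A power allocation that satisfies the tightened (worst-case) constraint remains feasible for every admissible error realization, which is the guarantee the worst-case approach trades optimality for.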
One of the first contributions dealing with channel uncertainties is \cite{uncertainity-early}, where the author models time-varying communication over single-access and multiple-access channels without feedback. For an OFDMA system, the resource allocation problem under channel uncertainty for a cognitive radio (CR) base station communicating with multiple CR mobile stations is considered in \cite{uncertainity-cr-2} for downlink communication. Two robust power control schemes are developed in \cite{robust_rel} for a CR network with cooperative relays. In \cite{robust_pow}, a robust power control algorithm is proposed for a CR network to maximize the social utility defined as the network sum rate. A robust worst-case interference control mechanism is provided in \cite{robust_intf} to maximize the rate while keeping the interference to the primary user below a threshold. Taking advantage of the L3 relays supported by the 3GPP standard, in our earlier work \cite{d2d_our_paper}, we studied the performance of network-assisted D2D communications assuming the availability of perfect CSI and showed that relay-aided D2D communication provides a significant performance gain for long-distance D2D links. In this paper, we extend that work, utilizing the theory of worst-case robust optimization, to maximize the end-to-end data rate under link uncertainties for the UEs with minimum QoS requirements while protecting the other receiving relay nodes and D2D UEs from interference. To make the robust formulation more tractable and obtain a near-optimal solution satisfying all the constraints of the nominal problem, we apply the notion of a \textit{protection function} instead of an uncertainty set. \section{System Model and Assumptions} \label{sec:sys_model} \subsection{Network Model} A relay node in LTE-A is connected to the radio access network (RAN) through a donor eNB with a wireless connection and serves both the cellular and D2D UEs. 
Let $\mathcal{L} = \lbrace 1, 2, \ldots, L \rbrace$ denote the set of fixed-location Layer 3 (L3) relays in the network as shown in Fig. \ref{fig:nw_diagram}. The system bandwidth is divided into $N$ RBs denoted by $\mathcal{N} = \lbrace 1, 2, \ldots, N \rbrace$ which are used by all the relays in a spectrum underlay fashion. When the link condition between two D2D peers is too poor for direct communication, scheduling and resource allocation for the D2D UEs can be done in a relay node (i.e., an L3 relay) and the D2D traffic can be transmitted through that relay. We refer to this as \textit{relay-aided D2D communication}, which can be an efficient approach to provide better QoS for communication between distant D2D UEs. The CUEs and D2D pairs constitute the sets $\mathcal{C} = \lbrace 1, 2, \ldots, C \rbrace$ and $\mathcal{D} = \lbrace 1, 2, \ldots, D \rbrace$, respectively, where the D2D pairs are discovered during the D2D session setup. We assume that the CUEs are outside the coverage region of the eNB and/or have bad channel conditions, and therefore, the CUE-eNB communications need to be supported by the relays. Besides, communication between two D2D UEs requires the assistance of a relay node. We assume that the association of the UEs (both cellular and D2D) to the corresponding relays is performed before resource allocation. A UE assisted by relay $l$ is denoted by $u_l$. The set of UEs assisted by relay $l$ is $\mathcal{U}_l$ such that $\mathcal{U}_l \subseteq \lbrace \mathcal{C} \cup \mathcal{D} \rbrace, \forall l \in \mathcal{L}$, $\bigcup_l \mathcal{U}_l = \lbrace \mathcal{C} \cup \mathcal{D} \rbrace$, and $\bigcap_l \mathcal{U}_l = \varnothing$. We assume that, at any given time, only one relay-eNB link is active in the second hop to carry the CUEs' data (i.e., transmissions of relays to the eNB in the second hop are orthogonal in time). 
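As a minimal illustration of these association assumptions (the relay and UE identifiers below are hypothetical), the partition conditions on the sets $\mathcal{U}_l$ can be checked programmatically:

```python
# Hypothetical association of UEs to relays; the model requires each
# U_l to be a subset of C union D, the U_l to be pairwise disjoint,
# and their union to cover all UEs.
C = {"c1", "c2", "c3"}          # cellular UEs
D = {"d1", "d2"}                # D2D (transmitter) UEs
association = {                 # relay id -> set of served UEs U_l
    1: {"c1", "d1"},
    2: {"c2", "c3", "d2"},
}

all_ues = C | D
served = [ue for ues in association.values() for ue in ues]

assert set(served) <= all_ues            # each U_l within C union D
assert len(served) == len(set(served))   # the U_l are pairwise disjoint
assert set(served) == all_ues            # the union covers every UE
print("valid association")
```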
Scheduling of the relays for transmission in the second hop is done by the eNB.\footnote{Scheduling of relay nodes by the eNB is out of the scope of this paper.} However, multiple relays can transmit to their corresponding D2D UEs in the second hop. Note that, in the first hop, the transmission between a UE (i.e., either a CUE or a D2D UE) and a relay can be considered an uplink communication. In the second hop, the transmission between a relay and the eNB can be considered an uplink communication from the perspective of the eNB, whereas the transmission from a relay to a D2D UE can be considered a downlink communication. In our system model, taking advantage of the capabilities of L3 relays, scheduling and resource allocation for the UEs are performed at the relay nodes to reduce the computation load at the eNB. \begin{figure}[!t] \centering \includegraphics[scale=0.65]{network-diagram.pdf} \caption{A single cell with multiple relay nodes. We assume that the CUE-eNB links are unfavorable for direct communication and they need the assistance of relays. The D2D UEs are also supported by the relay nodes due to long distance and/or poor link conditions between peers.} \label{fig:nw_diagram} \end{figure} \subsection{Achievable Data Rate} We denote by $h_{i,j}^{(n)}$ the direct link gain between nodes $i$ and $j$ over RB $n$. The interference link gain between relay (UE) $i$ and UE (relay) $j$ over RB $n$ is denoted by $g_{i,j}^{(n)}$ where UE (relay) $j$ is not associated with relay (UE) $i$. The unit power ${\rm SINR}$ for the link between UE $u_l \in \mathcal{U}_l$ and relay $l$ using RB $n$ in the first hop is given by \begin{equation} \label{eq:SINR_1} \gamma_{u_l, l, 1}^{(n)} = \frac{h_{u_l, l}^{(n)}}{\displaystyle \sum_{\forall u_j \in \mathcal U_j, j \neq l, j \in \mathcal{L}} P_{u_j, j}^{(n)} g_{u_j, l}^{(n)} + \sigma^2}. 
\end{equation} The unit power ${\rm SINR}$ for the link between relay $l$ and the eNB for a CUE (i.e., $u_l \in \lbrace \mathcal{C} \cap \mathcal{U}_l \rbrace$) in the second hop is as follows: \begin{equation} \label{eq:SINR_2} \gamma_{l, u_l, 2}^{(n)} = \frac{h_{l, eNB}^{(n)}}{\displaystyle \sum_{\forall u_j \in \lbrace \mathcal{D} \cap \mathcal{U}_j \rbrace, j \neq l, j \in \mathcal{L}} P_{j, u_j}^{(n)} g_{j, eNB}^{(n)} + \sigma^2}. \end{equation} Similarly, the unit power ${\rm SINR}$ for the link between relay $l$ and the receiving D2D UE for a D2D pair (i.e., $u_l \in \lbrace \mathcal{D} \cap \mathcal{U}_l \rbrace$) in the second hop can be written as \begin{equation} \label{eq:SINR_3} \gamma_{l, u_l, 2}^{(n)} = \frac{h_{l, u_l}^{(n)}}{\displaystyle \sum_{\forall u_j \in \mathcal{U}_j , j \neq l, j \in \mathcal{L}} P_{j, u_j}^{(n)} g_{j, u_l}^{(n)} + \sigma^2}. \end{equation} In (\ref{eq:SINR_1})--(\ref{eq:SINR_3}), $P_{i, j}^{(n)}$ is the transmit power in the link between $i$ and $j$ over RB $n$, and $\sigma^2 = N_0 B_{RB}$, where $B_{RB}$ is the bandwidth of an RB and $N_0$ denotes the thermal noise power spectral density. $h_{l, eNB}^{(n)}$ is the gain in the relay-eNB link and $h_{l, u_l}^{(n)}$ is the gain in the link between relay $l$ and the receiving D2D UE corresponding to the D2D transmitter UE $u_l$. The achievable data rate for $u_l$ in the first hop can be expressed as $r_{u_l, 1}^{(n)} = B_{RB} \log_2 \left( 1 + P_{u_l, l}^{(n)} \gamma_{u_l, l, 1}^{(n)} \right)$. Note that this rate expression is valid under the assumption of Gaussian (and spectrally white) interference, which holds for a large number of interferers. Similarly, the achievable data rate in the second hop is $r_{u_l, 2}^{(n)} = B_{RB} \log_2 \left( 1 + P_{l, u_l}^{(n)} \gamma_{l, u_l, 2}^{(n)} \right)$. 
Since we consider two-hop communication, the end-to-end data rate for $u_l$ on RB $n$ is half of the minimum achievable data rate over the two hops \cite{mutihop-rate}, i.e., \begin{equation} \label{eqn:e2e_rate} R_{u_l}^{(n)} = \frac{1}{2} \min \left\lbrace r_{u_l, 1}^{(n)} , r_{u_l, 2}^{(n)} \right\rbrace. \end{equation} \section{Resource Block (RB) and Power Allocation in Relay Nodes} \label{sec:nominal} \subsection{Formulation of the Nominal Resource Allocation Problem} For each relay, the objective of radio resource (i.e., RB and transmit power) allocation is to obtain the assignment of RB and power level to the UEs that maximizes the system capacity, which is defined as the minimum achievable data rate over the two hops. Let the maximum allowable transmit power for a UE (relay) be $P_{u_l}^{max}$ ($P_l^{max}$). The RB allocation indicator is a binary decision variable $x_{u_l}^{(n)} \in \lbrace 0, 1\rbrace$, where \begin{equation} x_{u_l}^{(n)} = \begin{cases} 1, \quad \text{if RB $n$ is assigned to UE $u_l$} \\ 0, \quad \text{otherwise.} \end{cases} \end{equation} Let $\displaystyle R_{u_l} = \sum_{n =1}^N x_{u_l}^{(n)} R_{u_l}^{(n)}$ denote the achievable sum-rate over the allocated RB(s) and let the QoS (i.e., rate) requirement for UE $u_l$ be denoted by $Q_{u_l}$. 
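A small numeric sketch of the end-to-end rate in (\ref{eqn:e2e_rate}) follows; the bandwidth, powers, and unit-power SINRs below are hypothetical placeholders, not values from the paper.

```python
import math

def hop_rate(bw_rb, power, unit_sinr):
    # Per-hop achievable rate: B_RB * log2(1 + P * gamma)
    return bw_rb * math.log2(1.0 + power * unit_sinr)

def end_to_end_rate(bw_rb, p1, gamma1, p2, gamma2):
    # Two-hop end-to-end rate: half the minimum of the per-hop rates
    r1 = hop_rate(bw_rb, p1, gamma1)
    r2 = hop_rate(bw_rb, p2, gamma2)
    return 0.5 * min(r1, r2)

# Hypothetical values: 180 kHz RB bandwidth, unit-power SINRs per hop.
B_RB = 180e3
R = end_to_end_rate(B_RB, p1=0.1, gamma1=50.0, p2=0.2, gamma2=20.0)
print(R)  # bits/s, limited here by the weaker (second) hop
```

The `min` makes explicit why the weaker hop bottlenecks the end-to-end rate, which motivates the power-balancing step used later in the formulation.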
Considering that the same RB(s) will be used by the relay in both the hops (i.e., for communication between relay and eNB and between relay and D2D UEs), the resource allocation problem for each relay $l \in \mathcal{L}$ can be stated as follows: \begin{subequations} \setlength{\arraycolsep}{0.0em} \begin{eqnarray} \mathbf{(P1)} ~ \underset{x_{u_l}^{(n)}, P_{u_l, l}^{(n)}, P_{l, u_l}^{(n)}}{\operatorname{max}} ~ \sum_{u_l \in \mathcal{U}_l } && \sum_{n =1}^N x_{u_l}^{(n)} R_{u_l}^{(n)} \nonumber \\ \text{subject to} \quad \sum_{u_l \in \mathcal{U}_l} x_{u_l}^{(n)} ~ &&{\leq} ~ 1, \quad ~~~\forall n \in \mathcal{N} \label{eq:con-bin} \\ \quad \sum_{n =1}^N x_{u_l}^{(n)} P_{u_l, l}^{(n)} ~ &&{\leq} ~ P_{u_l}^{max}, ~ \forall u_l \in \mathcal{U}_l \label{eq:con-pow-ue} \\ \quad \sum_{u_l \in \mathcal{U}_l } \sum_{n =1}^N x_{u_l}^{(n)} P_{l, u_l}^{(n)} ~ &&{\leq} ~ P_l^{max} \label{eq:con-pow-rel} \\ \quad \sum_{u_l \in \mathcal{U}_l } x_{u_l}^{(n)} P_{u_l, l}^{(n)} g_{{u_l^*}, l, 1}^{(n)} ~ &&{\leq} ~ I_{th, 1}^{(n)}, ~~\forall n \in \mathcal{N} \label{eq:con-intf-1}\\ \quad \sum_{u_l \in \mathcal{U}_l } x_{u_l}^{(n)} P_{l, u_l}^{(n)} g_{l, {u_l^*}, 2}^{(n)} ~ &&{\leq} ~ I_{th, 2}^{(n)}, ~~\forall n \in \mathcal{N} \label{eq:con-intf-2} \\ \quad R_{u_l} ~ &&{\geq} ~ Q_{u_l}, \quad \forall u_l \in \mathcal{U}_l \label{eq:con-QoS-cue}\\ \quad P_{u_l, l}^{(n)} ~ \geq ~ 0, ~~ P_{l, u_l}^{(n)} ~ &&{\geq} ~ 0, \quad ~~~\forall n \in \mathcal{N}, u_l \in \mathcal{U}_l \label{eq:con-pow-0} \end{eqnarray} \setlength{\arraycolsep}{5pt} \end{subequations} where the rate of $u_l$ over RB $n$ $$R_{u_l}^{(n)} = \frac{1}{2} \min \left\lbrace \begin{matrix} B_{RB} \log_2 \left( 1 + P_{u_l, l}^{(n)} \gamma_{u_l, l, 1}^{(n)} \right), \vspace*{0.5em}\\ B_{RB} \log_2 \left( 1 + P_{l, u_l}^{(n)} \gamma_{l, u_l, 2}^{(n)} \right) \end{matrix} \right\rbrace$$ and the unit power ${\rm SINR}$ for the first hop, \[\gamma_{u_l, l, 1}^{(n)} = \frac{h_{u_l, l}^{(n)}}{ I_{u_l,l,1}^{(n)} + \sigma^2 
}\] and the unit power ${\rm SINR}$ for the second hop, \begin{numcases}{\gamma_{l, u_l, 2}^{(n)} = } \frac{h_{l, eNB}^{(n)}}{I_{l, u_l,2}^{(n)} + \sigma^2}, & $u_l \in \lbrace \mathcal{C} \cap \mathcal{U}_l \rbrace$ \nonumber \\ \frac{h_{l, u_l}^{(n)}}{I_{l,u_l,2}^{(n)} + \sigma^2}, & $u_l \in \lbrace \mathcal{D} \cap \mathcal{U}_l \rbrace$. \nonumber \end{numcases} In the above $I_{u_l,l,1}^{(n)}$ and $I_{l, u_l,2}^{(n)}$ denote the interference received by $u_l$ over RB $n$ in the first and second hop, respectively, and are given as follows: $I_{u_l,l,1}^{(n)} = \displaystyle \sum_{\forall u_j \in \mathcal U_j, j \neq l, j \in \mathcal{L}} x_{u_j}^{(n)} P_{u_j, j}^{(n)} g_{u_j, l}^{(n)}$ \begin{numcases}{I_{l,u_l,2}^{(n)} = } \displaystyle \sum_{\forall u_j \in \lbrace \mathcal{D} \cap \mathcal{U}_j \rbrace, j \neq l, j \in \mathcal{L}} x_{u_j}^{(n)} P_{j, u_j}^{(n)} g_{j, eNB}^{(n)} , & \hspace{-2em} $u_l \in \lbrace \mathcal{C} \cap \mathcal{U}_l \rbrace$ \nonumber \\ \displaystyle \sum_{\forall u_j \in \mathcal{U}_j , j \neq l, j \in \mathcal{L}} x_{u_j}^{(n)} P_{j, u_j}^{(n)} g_{j, u_l}^{(n)}, & \hspace{-3em} $ u_l \in \lbrace \mathcal{D} \cap \mathcal{U}_l \rbrace$. \nonumber \end{numcases} With the constraint in (\ref{eq:con-bin}), each RB is assigned to only one UE. With the constraints in (\ref{eq:con-pow-ue}) and (\ref{eq:con-pow-rel}), the transmit power is limited by the maximum power budget. The constraints in (\ref{eq:con-intf-1}) and (\ref{eq:con-intf-2}) limit the amount of interference introduced to the other relays and the receiving D2D UEs in the first and second hop, respectively, to be less than some threshold. The constraint in (\ref{eq:con-QoS-cue}) ensures the minimum QoS requirements for the CUE and D2D UEs. The constraint in (\ref{eq:con-pow-0}) is the non-negativity condition for transmit power. Similar to \cite{ref_user}, the concept of reference node is adopted here. 
For example, to allocate the power level considering the interference threshold in the first hop, each UE $u_l$ associated with relay node $l$ obtains the reference user $u_l^*$ associated with the other relays and the corresponding channel gain $g_{{u_l^*}, l, 1}^{(n)}$ for $\forall n$ according to the following equation: \begin{equation} u_{l}^* = \underset{j}{\operatorname{argmax}} ~ g_{u_l , j}^{(n)},~~ u_l \in \mathcal{U}_l, j \neq l, j \in \mathcal{L} \label{eq:ref_user1}. \end{equation} Similarly, in the second hop, for each relay $l$, the transmit power is adjusted considering the interference introduced to the receiving D2D UEs (associated with other relays) based on the corresponding channel gain $g_{l, {u_l^*}, 2}^{(n)}$ for $\forall n$, where the reference user is obtained by \begin{equation} u_{l}^* = \underset{u_j}{\operatorname{argmax}} ~ g_{l , u_j}^{(n)}, ~~ j \neq l, j \in \mathcal{L}, u_j \in \lbrace \mathcal{D} \cap \mathcal{U}_j \rbrace. \label{eq:ref_user2} \end{equation} From (\ref{eqn:e2e_rate}), the maximum data rate for UE $u_l$ over RB $n$ is achieved when $P_{u_l, l}^{(n)} \gamma_{u_l, l, 1}^{(n)} = P_{l, u_l}^{(n)} \gamma_{l, u_l, 2}^{(n)}$. Therefore, in the second hop, the power allocated for UE $u_l$, $P_{l, u_l}^{(n)}$, can be expressed as a function of the power allocated for transmission in the first hop, $P_{u_l, l}^{(n)}$, as follows: $P_{l, u_l}^{(n)} = \frac{\gamma_{u_l, l, 1}^{(n)}}{\gamma_{l, u_l, 2}^{(n)}}P_{u_l, l}^{(n)}$. Hence the data rate for $u_l$ over RB $n$ can be expressed as \begin{equation} \label{eq:rate_generic} R_{u_l}^{(n)} = \frac{1}{2} B_{RB} \log_2 \left( 1 + P_{u_l, l}^{(n)} \gamma_{u_l, l, 1}^{(n)} \right). \end{equation} \subsection{Continuous Relaxation and Reformulation} The optimization problem $\mathbf{P1}$ is a mixed-integer non-linear program (MINLP), which is computationally intractable. 
A common approach to tackle this problem is to relax the constraint that an RB is used by only one UE by using the time-sharing factor \cite{relax-con-1}. Thus $x_{u_l}^{(n)} \in (0,1]$ is represented as the sharing factor where each $x_{u_l}^{(n)}$ denotes the portion of time that RB $n$ is assigned to UE $u_l$ and satisfies the constraint $\displaystyle \sum_{u_l \in \mathcal{U}_l} x_{u_l}^{(n)} \leq 1, ~\forall n$. Besides, we introduce a new variable $S_{u_l, l}^{(n)} = x_{u_l}^{(n)} P_{u_l,l}^{(n)}$ which denotes the actual transmit power of UE $u_l$ on RB $n$ \cite{time-share-1}. Then the relaxed problem can be stated as follows: \begin{subequations} \setlength{\arraycolsep}{0.0em} \begin{eqnarray} \mathbf{(P2)} \hspace{15em} \nonumber \\ \underset{x_{u_l}^{(n)}, S_{u_l, l}^{(n)}, \omega_{u_l}^{(n)}}{\operatorname{max}} ~ \sum_{u_l \in \mathcal{U}_l } \sum_{n =1}^N \frac{1}{2} x_{u_l}^{(n)} B_{RB} \log_2 && \left( 1 + \frac{S_{u_l, l}^{(n)} h_{u_l, l, 1}^{(n)}} {x_{u_l}^{(n)} \omega_{u_l}^{(n)}} \right) \hspace{0.1em} \nonumber \\ \text{subject to} \quad \sum_{u_l \in \mathcal{U}_l} x_{u_l}^{(n)} ~ &&{\leq} ~ 1, \quad \forall n \hspace{-5em} \label{eq:con-bin-relx} \\ \sum_{n =1}^N S_{u_l, l}^{(n)} ~ &&{\leq} ~ P_{u_l}^{max}, \forall u_l \label{eq:con-pow-ue-relx} \\ \sum_{u_l \in \mathcal{U}_l } \sum_{n =1}^N \frac{h_{u_l, l, 1}^{(n)}}{h_{l, u_l, 2}^{(n)}} S_{u_l, l}^{(n)} ~ &&{\leq} ~ P_l^{max} \label{eq:con-pow-rel-relx} \\ \sum_{u_l \in \mathcal{U}_l } S_{u_l, l}^n g_{{u_l^*}, l, 1}^{(n)} ~ &&{\leq} ~ I_{th, 1}^{(n)}, ~~ \forall n \label{eq:con-intf-1-relx}\\ \sum_{u_l \in \mathcal{U}_l } \frac{h_{u_l, l, 1}^{(n)}}{h_{l, u_l, 2}^{(n)}} S_{u_l, l}^{(n)} g_{l, {u_l^*}, 2}^{(n)} ~ &&{\leq} ~ I_{th, 2}^{(n)}, ~~\forall n \label{eq:con-intf-2-relx} \\ \quad \sum_{n=1}^N \frac{1}{2} x_{u_l}^{(n)} B_{RB} \log_2 \left( 1 + \frac{S_{u_l, l}^{(n)} h_{u_l, l, 1}^{(n)}}{x_{u_l}^{(n)} \omega_{u_l}^{(n)}} \right) ~ &&{\geq} ~ Q_{u_l}, ~~ \forall u_l 
\label{eq:con-QoS-cue-relx}\\ \quad S_{u_l, l}^{(n)} ~ &&{\geq} ~ 0, ~~\forall n, u_l \label{eq:con-pow-0-relx} \\ I_{u_l,l}^{(n)} + \sigma^2 ~ &&{\leq}~ \omega_{u_l}^{(n)}, \forall n, u_l \label{eq:con-aux-relx} \end{eqnarray} \end{subequations} where $\omega_{u_l}^{(n)}$ is an auxiliary variable for $u_l$ over RB $n$ and $I_{u_l,l}^{(n)} = \max \left\lbrace I_{u_l,l,1}^{(n)} , I_{l,u_l,2}^{(n)} \right\rbrace $. The duality gap of any optimization problem satisfying the time-sharing condition is negligible as the number of RBs becomes significantly large. Our optimization problem satisfies the time-sharing condition, and hence the solution of the relaxed problem is asymptotically optimal \cite{large-rb-dual}. Since the objective function is concave, the constraint in (\ref{eq:con-QoS-cue-relx}) is convex, and all the remaining constraints are affine, the optimization problem $\mathbf{P2}$ is convex. Due to the convexity of the optimization problem $\mathbf{P2}$, there exists a unique optimal solution. \begin{statement} \label{theorem:power-rb-alloc-nominal} (a) The power allocation for UE $u_l$ over RB $n$ is given by \begin{equation} \label{eq:power-alloc} {P_{u_l, l}^{(n)}}^* = \frac{{S_{u_l,l}^{(n)}}^*}{{x_{u_l}^{(n)}}^*} = \left[\delta_{u_l,l}^{(n)} - \frac{\omega_{u_l}^{(n)}}{ h_{u_l,l, 1}^{(n)}}\right]^+ \end{equation} where $\delta_{u_l,l}^{(n)} = \frac{\tfrac{1}{2} B_{RB} \frac{(1 + \lambda_{u_l})}{\ln 2}}{\rho_{u_l} + \frac{h_{u_l,l, 1}^{(n)}}{h_{l, u_l,2}^{(n)}} \nu_l + g_{{u_l^*}, l, 1}^{(n)} \psi_{n} + \frac{h_{u_l, l, 1}^{(n)}}{h_{l, u_l, 2}^{(n)}} g_{l, {u_l^*}, 2}^{(n)} \varphi_{n} } $ and $[\epsilon]^+ = \max \left\lbrace \epsilon,0 \right\rbrace$. 
\vspace{1em} (b) The RB allocation is determined as follows: \begin{equation} \label{eq:channel_alloc1} {x_{u_l}^{(n)}}^* = \begin{cases} \vspace{0.4em} 1, & \mu_n \leq \chi_{u_l,l}^{(n)} \\ 0, & \mu_n > \chi_{u_l,l}^{(n)} \end{cases} \end{equation} and $\chi_{u_l,l}^{(n)}$ is defined as \begin{equation} \chi_{u_l,l}^{(n)} = \tfrac{1}{2} (1+ \lambda_{u_l}) B_{RB} \left[ \log_2 \left(1 + \tfrac{S_{u_l,l}^{(n)} h_{u_l,l,1}^{(n)}}{x_{u_l}^{(n)} \omega_{u_l}^{(n)}} \right) - \theta_{u_l,l}^{(n)} \right] \end{equation} where $\theta_{u_l,l}^{(n)} = \tfrac{S_{u_l,l}^{(n)} \gamma_{u_l,l,1}^{(n)}}{\left(x_{u_l}^{(n)} \omega_{u_l}^{(n)} + S_{u_l,l}^{(n)} \gamma_{u_l,l,1}^{(n)} \right) \ln 2}$. \end{statement} \begin{IEEEproof} See \textbf{Appendix \ref{app:power-rb-alloc-nominal}}. \end{IEEEproof} \begin{proposition} \label{theorem:asyp_opt} The power and RB allocation obtained by (\ref{eq:power-alloc}) and (\ref{eq:channel_alloc1}) is a globally optimal solution to the original problem $\mathbf{P1}$. \end{proposition} \begin{IEEEproof} Since $\mathbf{P2}$ is a constraint-relaxed version of $\mathbf{P1}$, the solution $\left({x_{u_l}^{(n)}}^*, {P_{u_l, l}^{(n)}}^*\right)$ gives an upper bound to the objective of $\mathbf{P1}$. Besides, since ${x_{u_l}^{(n)}}^*$ satisfies the binary constraints in $\mathbf{P1}$, $\left({x_{u_l}^{(n)}}^*, {P_{u_l, l}^{(n)}}^*\right)$ satisfies all constraints in $\mathbf{P1}$ and hence also gives a lower bound. \end{IEEEproof} In the above problem formulation, it is assumed that each relay and D2D UE has perfect information about the experienced interference. Also, the channel gains between a relay and the other UEs (associated with neighboring relays) are known to the relay. However, estimating the exact values of the link gains is not easy in practice. To deal with the uncertainties in the estimated values, we apply the worst-case robust optimization method \cite{robust-theroy}. 
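The closed-form rules in Statement \ref{theorem:power-rb-alloc-nominal} resemble a multi-level water-filling policy. The sketch below applies part (a) and the threshold test of part (b) to hypothetical values of the water level, auxiliary variable, and dual variables; none of these numbers are computed from the actual Lagrangian.

```python
def power_alloc(delta, omega, h):
    # Statement 1(a): P* = [delta - omega/h]^+, a water-filling-like level
    # where delta plays the role of the water level.
    return max(delta - omega / h, 0.0)

def rb_alloc(mu_n, chi):
    # Statement 1(b): RB n is assigned (x = 1) only if mu_n <= chi.
    return 1 if mu_n <= chi else 0

# Hypothetical values for one UE on one RB.
delta, omega, h = 0.5, 0.2, 0.8
P_star = power_alloc(delta, omega, h)   # 0.5 - 0.2/0.8 = 0.25
x_star = rb_alloc(mu_n=1.2, chi=1.5)    # 1: the RB is assigned
print(P_star, x_star)
```

When the inverse channel quality $\omega/h$ rises above the water level $\delta$, the projection $[\cdot]^+$ drives the allocated power to zero, so poor RBs are simply skipped.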
\section{Robust Resource Allocation} \label{sec:robust} \subsection{Formulation of Robust Problem} Let the vector of link gains between relay $l$ and other transmitting UEs (associated with other relays, i.e., for $ \forall j \in \mathcal{L}, j \neq l$) in the first hop over RB $n$ be denoted by $\mathbf{g}_{l, 1}^{(n)} = \left[g_{{1^*}, l, 1}^{(n)},~ g_{{2^*}, l, 1}^{(n)}, \cdots, ~ g_{{{|\mathcal{U}_l|}^*}, l, 1}^{(n)} \right]$, where $|\mathcal{U}_l|$ is the total number of UEs associated with relay $l$. Similarly, the vector of link gains between relay $l$ and receiving D2D UEs (associated with other relays) in the second hop over RB $n$ is given by $\mathbf{g}_{l, 2}^{(n)} = \left[g_{l, {1^*}, 2}^{(n)},~ g_{l, {2^*}, 2}^{(n)}, \cdots, ~ g_{{l, {|\mathcal{U}_l|}^*}, 2}^{(n)} \right]$. We assume that the link gains and the aggregated interference (i.e., $I_{u_l, l}^{(n)}$, $\forall n, u_l$ and elements of $\mathbf{g}_{l, 1}^{(n)}, \mathbf{g}_{l, 2}^{(n)}$, $\forall n$) are unknown but are bounded in a region (i.e., uncertainty set) with a given probability. For example, the channel gain in the first hop is bounded in $ \Re_{g_{l, 1}}^{(n)}$, with estimated value $\bar{\mathbf{g}}_{l, 1}^{(n)}$ and bounded error $\hat{\mathbf{g}}_{l, 1}^{(n)}$, i.e., $\mathbf{g}_{l, 1}^{(n)} = \bar{\mathbf{g}}_{l, 1}^{(n)} + \hat{\mathbf{g}}_{l, 1}^{(n)}$, and $\mathbf{g}_{l, 1}^{(n)} \in \Re_{g_{l, 1}}^{(n)}, \forall n \in \mathcal{N}$, where $ \Re_{g_{l, 1}}^{(n)}$ is the uncertainty set for $\mathbf{g}_{l,1}^{(n)}$. Similarly, let $\Re_{g_{l, 2}}^{(n)},~ \forall n $ be the uncertainty set for the link gains in the second hop and $\Re_{I_{u_l,l}}^{(n)},~ \forall n, u_l $ be the uncertainty set for the interference level. In the robust problem formulation, we utilize the same rate expression [i.e., equation (\ref{eq:rate_generic})] as the one used in the nominal problem formulation.
Although using a similar utility function (i.e., rate expression) for both the nominal and robust problems is quite common in the literature (e.g., in \cite{uncertainity-cr-2, robust_rel, robust_intf}), when perfect channel information is not available to the receiver nodes, the rate obtained by (\ref{eq:rate_generic}) only approximates the achievable rate.\footnote{According to information-theoretic capacity analysis, in the presence of channel uncertainties at the receiver, the lower and upper bounds of the rate are given by equations (46) and (49) in \cite{uncertainity-early}, respectively. However, for mathematical tractability, we resort to (\ref{eq:rate_generic}) to calculate the achievable data rate in both the nominal and robust problem formulations.} The solution to $\mathbf{P2}$ is robust against uncertainties if and only if, for any realization of $\mathbf{g}_{l,1}^{(n)} \in \Re_{g_{l,1}}^{(n)}, \mathbf{g}_{l,2}^{(n)} \in \Re_{g_{l,2}}^{(n)} $, and $I_{u_l,l}^{(n)} \in \Re_{I_{u_l,l}}^{(n)}$, the optimal solution satisfies the constraints in (\ref{eq:con-intf-1-relx}), (\ref{eq:con-intf-2-relx}), and (\ref{eq:con-aux-relx}).
Therefore, the robust counterpart of $\mathbf{P2}$ is represented as \begin{subequations} \setlength{\arraycolsep}{0.0em} \begin{eqnarray} \mathbf{(P3)} \hspace{15em} \nonumber \\ \underset{x_{u_l}^{(n)}, S_{u_l, l}^{(n)}, \omega_{u_l}^{(n)}}{\operatorname{max}} ~ \sum_{u_l \in \mathcal{U}_l } \sum_{n =1}^N \frac{1}{2} x_{u_l}^{(n)} B_{RB} \log_2 && \left( 1 + \frac{S_{u_l, l}^{(n)} h_{u_l, l, 1}^{(n)}}{x_{u_l}^{(n)} \omega_{u_l}^{(n)}} \right) \nonumber \\ \text{subject to}~~ \text{(\ref{eq:con-bin-relx})},~ \text{(\ref{eq:con-pow-ue-relx})},~ \text{(\ref{eq:con-pow-rel-relx})},~ \text{(\ref{eq:con-intf-1-relx})},~ \nonumber \\ \text{(\ref{eq:con-intf-2-relx})},&&~ \text{(\ref{eq:con-QoS-cue-relx})},~ \text{(\ref{eq:con-pow-0-relx})},~ \text{(\ref{eq:con-aux-relx})} \nonumber \\ \text{and}~~ \mathbf{g}_{l,1}^{(n)} \in \Re_{g_{l,1}}^{(n)}, ~~ \mathbf{g}_{l,2}^{(n)} ~ &&{\in} ~ \Re_{g_{l,2}}^{(n)}, ~~\forall n \label{eq:con-region-robst1} \\ I_{u_l,l}^{(n)} ~ &&{\in} ~ \Re_{I_{u_l,l}}^{(n)}, ~\forall n, \forall u_l \label{eq:con-region-intf-robst1} \end{eqnarray} \setlength{\arraycolsep}{5pt} \end{subequations} where the constraints in (\ref{eq:con-region-robst1}) and (\ref{eq:con-region-intf-robst1}) represent the requirements for the robustness of the solution. \begin{proposition} \label{theorem:convex} When $\Re_{g_{l,1}}^{(n)}, \Re_{g_{l,2}}^{(n)}$, and $\Re_{I_{u_l,l}}^{(n)}$ are compact and convex sets, $\mathbf{P3}$ is a convex optimization problem.
\end{proposition} \begin{IEEEproof} The uncertainty constraints in (\ref{eq:con-intf-1-relx}), (\ref{eq:con-intf-2-relx}), and (\ref{eq:con-aux-relx}) are satisfied if and only if \begin{align*} \underset{\mathbf{g}_{l,1}^{(n)} \in \Re_{g_{l,1}}^{(n)}}{\operatorname{max}} \sum_{u_l \in \mathcal{U}_l } S_{u_l, l}^{(n)} g_{{u_l^*}, l, 1}^{(n)} & \leq I_{th, 1}^{(n)}, ~~ \forall n \\ \underset{\mathbf{g}_{l,2}^{(n)} \in \Re_{g_{l,2}}^{(n)}}{\operatorname{max}} \sum_{u_l \in \mathcal{U}_l } \frac{h_{u_l, l, 1}^{(n)}} {h_{l, u_l, 2}^{(n)}} S_{u_l, l}^{(n)} g_{l, {u_l^*}, 2}^{(n)} & \leq I_{th, 2}^{(n)}, ~~\forall n \\ \underset{I_{u_l,l}^{(n)} \in \Re_{I_{u_l,l}}^{(n)}}{\operatorname{max}} I_{u_l,l}^{(n)} + \sigma^2 & \leq \omega_{u_l}^{(n)}, ~~\forall n, u_l \end{align*} which is equivalent to \begin{align*} \sum_{u_l \in \mathcal{U}_l } S_{u_l, l}^{(n)} \bar{g}_{{u_l^*}, l, 1}^{(n)} ~~ + \hspace{12em} \\ \underset{\mathbf{g}_{l,1}^{(n)} \in \Re_{g_{l,1}}^{(n)}}{\operatorname{max}} \sum_{u_l \in \mathcal{U}_l } S_{u_l, l}^{(n)} \left( g_{{u_l^*}, l, 1}^{(n)} - \bar{g}_{{u_l^*}, l, 1}^{(n)} \right) \hspace*{1.7em} & \hspace*{-1.6em} \leq I_{th, 1}^{(n)}, ~ \forall n \\ \sum_{u_l \in \mathcal{U}_l } \frac{h_{u_l, l, 1}^{(n)}} {h_{l, u_l, 2}^{(n)}} S_{u_l, l}^{(n)} \bar{g}_{l, {u_l^*}, 2}^{(n)} ~~ + \hspace{9em} \\ \underset{\mathbf{g}_{l,2}^{(n)} \in \Re_{g_{l,2}}^{(n)}}{\operatorname{max}} \sum_{u_l \in \mathcal{U}_l } \frac{h_{u_l, l, 1}^{(n)}} {h_{l, u_l, 2}^{(n)}} S_{u_l, l}^{(n)} \left( g_{l, {u_l^*}, 2}^{(n)} - \bar{g}_{l, {u_l^*}, 2}^{(n)} \right) \hspace*{1.7em} & \hspace*{-1.6em} \leq I_{th, 2}^{(n)}, ~\forall n \\ \bar{I}_{u_l,l}^{(n)} + \underset{I_{u_l,l}^{(n)} \in \Re_{I_{u_l,l}^{(n)}}}{\operatorname{max}} \left( I_{u_l,l}^{(n)} - \bar{I}_{u_l,l}^{(n)} \right) + \sigma^2 \hspace*{1.7em} & \hspace*{-1.6em} \leq \omega_{u_l}^{(n)}, ~\forall n, u_l. 
\end{align*} Since the pointwise maximum over a convex set is a convex function (Section 3.2.4 in \cite{book-boyd}), the convexity of the problem $\mathbf{P3}$ is preserved. \end{IEEEproof} The problem $\mathbf{P2}$ is the nominal version of $\mathbf{P3}$, where it is assumed that perfect channel state information is available, i.e., the estimated values are treated as exact values. With the inclusion of uncertainty in (\ref{eq:con-intf-1-relx}), (\ref{eq:con-intf-2-relx}), and (\ref{eq:con-aux-relx}), the constraints in the optimization problem $\mathbf{P3}$ remain convex. In order to express the constraints in closed form (i.e., to avoid using the uncertainty sets explicitly), in the following we utilize the notion of a protection function \cite{robust-theroy, general_norm} instead of the uncertainty sets. \subsection{Uncertainty Set and Protection Function} As seen from $\mathbf{P3}$, the optimization problem is affected by the uncertainty sets $\Re_{g_{l,1}}^{(n)}, \Re_{g_{l,2}}^{(n)}$, and $\Re_{I_{u_l,l}}^{(n)}$. To obtain the robust formulation, we consider that the uncertainty sets for the uncertain parameters are based on the differences between the actual (i.e., uncertain) and nominal (i.e., without considering uncertainty) values. These differences can be mathematically represented by general norms \cite{general_norm}.
For example, the uncertainty sets for channel gain in the first and second hops for $\forall n \in \mathcal{N}$ are given by \begin{subequations} \setlength{\arraycolsep}{0.0em} \begin{eqnarray} \Re_{g_{l,1}}^{(n)} ~ && {=} ~ \left\lbrace \mathbf{g}_{l,1}^{(n)} | \parallel \mathbf{M}_{g_{l,1}}^{(n)} \cdot \left( \mathbf{g}_{l,1}^{(n)} - \bar{\mathbf{g}}_{l,1}^{(n)} \right)^\mathsf{T} \parallel \hspace{0.2em} \leq \Psi_{l,1}^{(n)} \right\rbrace \hspace{1.9em} \label{eq:un_set_gain_1}\\ \Re_{g_{l,2}}^{(n)} ~ && {=} ~ \left\lbrace \mathbf{g}_{l,2}^{(n)} | \parallel \mathbf{M}_{g_{l,2}}^{(n)} \cdot \left( \mathbf{g}_{l,2}^{(n)} - \bar{\mathbf{g}}_{l,2}^{(n)} \right)^\mathsf{T} \parallel \hspace{0.2em} \leq \Psi_{l,2}^{(n)} \right\rbrace \hspace{1.9em} \label{eq:un_set_gain_2} \end{eqnarray} \setlength{\arraycolsep}{5pt} \end{subequations} where $\parallel \cdot \parallel$ denotes the general norm, $\Psi_{l,1}^{(n)}$ and $\Psi_{l,2}^{(n)}$ represent the bound on the uncertainty set; $ \mathbf{g}_{l,1}^{(n)}$, $\mathbf{g}_{l,2}^{(n)} $ are the actual and $ \bar{\mathbf{g}}_{l,1}^{(n)} $, $ \bar{\mathbf{g}}_{l,2}^{(n)}$ are the estimated (i.e., nominal) channel gain vectors; $\mathbf{M}_{g_{l,1}}^{(n)}$ and $\mathbf{M}_{g_{l,2}}^{(n)}$ are the invertible $\mathfrak{R}^{|\mathcal{U}_l| \times |\mathcal{U}_l|}$ weight matrices for the first and second hop, respectively. Likewise, the uncertainty set for the experienced interference is expressed as \begin{equation} \label{eq:uncertainity_reg_intf} \Re_{I_{u_l,l}}^{(n)} = \left\lbrace I_{u_l,l}^{(n)} | \parallel M_{I_{u_l,l}}^{(n)} \cdot \left( I_{u_l,l}^{(n)} - \bar{I}_{u_l,l}^{(n)} \right) \parallel \hspace{0.2em} \leq \Upsilon_{u_l}^{(n)} \right\rbrace \end{equation} where $ I_{u_l,l}^{(n)}$ and $\bar{I}_{u_l,l}^{(n)} $ are the actual and estimated interference levels, respectively; the variable $M_{I_{u_l,l}}^{(n)}$ denotes weight and $\Upsilon_{u_l}^{(n)}$ is the upper bound on the uncertainty set. 
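As a quick numerical illustration, membership in a norm-bounded set of the form (\ref{eq:un_set_gain_1}) reduces to a single weighted-norm test; the weight matrix, estimated gains, and bound below are illustrative placeholders:

```python
import numpy as np

def in_uncertainty_set(g, g_bar, M, bound, ord=2):
    """Membership test for a norm-bounded uncertainty set of the form
    || M . (g - g_bar)^T || <= bound. The inputs here are illustrative;
    ord=2 gives the common ellipsoidal (Euclidean-norm) set."""
    return np.linalg.norm(M @ (g - g_bar), ord=ord) <= bound
```

With $\mathbf{M}$ diagonal, each weight simply rescales the error in the corresponding link gain before the norm is taken.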
In the proof of \textbf{Proposition \ref{theorem:convex}}, the terms \begin{subequations} \setlength{\arraycolsep}{0.0em} \begin{eqnarray} \Delta_{g_{l, 1}}^{(n)} ~ &&{=} ~ \underset{\mathbf{g}_{l,1}^{(n)} \in \Re_{g_{l,1}}^{(n)} }{\operatorname{max}} \displaystyle \sum_{u_l \in \mathcal{U}_l } S_{u_l, l}^{(n)} \left( g_{{u_l^*}, l, 1}^{(n)} - \bar{g}_{{u_l^*}, l, 1}^{(n)} \right) \label{eq:prot-func-gain-1} \\ \Delta_{g_{l, 2}}^{(n)} ~ &&{=} ~ \underset{\mathbf{g}_{l,2}^{(n)} \in \Re_{g_{l,2}}^{(n)} }{\operatorname{max}} \displaystyle \sum_{u_l \in \mathcal{U}_l } \tfrac{h_{u_l, l, 1}^{(n)}} {h_{l, u_l, 2}^{(n)}} S_{u_l, l}^{(n)} \left( g_{l, {u_l^*}, 2}^{(n)} - \bar{g}_{l, {u_l^*}, 2}^{(n)} \right) \label{eq:prot-func-gain-2} \\ \Delta_{I_{u_l,l}}^{(n)} ~ &&{=} ~ \underset{I_{u_l,l}^{(n)} \in \Re_{I_{u_l,l}}^{(n)}}{\operatorname{max}} \left( I_{u_l,l}^{(n)} - \bar{I}_{u_l,l}^{(n)} \right) \label{eq:prot-func-intf} \end{eqnarray} \setlength{\arraycolsep}{5pt} \end{subequations} are called the protection functions for constraints (\ref{eq:con-intf-1-relx}), (\ref{eq:con-intf-2-relx}), and (\ref{eq:con-aux-relx}), respectively; their values (i.e., the protection values) depend on the uncertain parameters.
Using the protection function, the optimization problem can be rewritten as \begin{subequations} \setlength{\arraycolsep}{0.0em} \begin{eqnarray} \mathbf{(P4)} \hspace{15em} \nonumber \\ \underset{x_{u_l}^{(n)}, S_{u_l, l}^{(n)}, \omega_{u_l}^{(n)}}{\operatorname{max}} ~ \sum_{u_l \in \mathcal{U}_l } \sum_{n =1}^N \frac{1}{2} x_{u_l}^{(n)} B_{RB} \log_2 && \left( 1 + \frac{S_{u_l, l}^{(n)} h_{u_l, l, 1}^{(n)}}{x_{u_l}^{(n)} \omega_{u_l}^{(n)}} \right) \nonumber \\ \text{subject to}~~ \text{(\ref{eq:con-bin-relx})},~ \text{(\ref{eq:con-pow-ue-relx})},~ \text{(\ref{eq:con-pow-rel-relx})},&&~ \text{(\ref{eq:con-QoS-cue-relx})},~ \text{(\ref{eq:con-pow-0-relx})}~~\text{and} \nonumber \\ \sum_{u_l \in \mathcal{U}_l } S_{u_l, l}^{(n)} \bar{g}_{{u_l^*}, l, 1}^{(n)} + \Delta_{g_{l, 1}}^{(n)} ~ &&{\leq} ~ I_{th, 1}^{(n)}, ~~ \forall n \label{eq:con-intf-1-robst2}\\ \sum_{u_l \in \mathcal{U}_l } \frac{h_{u_l, l, 1}^{(n)}} {h_{l, u_l, 2}^{(n)}} S_{u_l, l}^{(n)} \bar{g}_{l, {u_l^*}, 2}^{(n)} + \Delta_{g_{l, 2}}^{(n)} ~ &&{\leq} ~ I_{th, 2}^{(n)}, ~~\forall n \label{eq:con-intf-2-robst2} \\ \bar{I}_{u_l,l}^{(n)} + \Delta_{I_{u_l,l}}^{(n)} + \sigma^2 ~ &&{\leq} ~ \omega_{u_l}^{(n)}, \forall n, u_l \label{eq:con-aux-robst2} \end{eqnarray} \setlength{\arraycolsep}{5pt} \end{subequations} where $\Delta_{g_{l, 1}}^{(n)}, \Delta_{g_{l, 2}}^{(n)}$, and $\Delta_{I_{u_l,l}}^{(n)} $ are defined by (\ref{eq:prot-func-gain-1}), (\ref{eq:prot-func-gain-2}), and (\ref{eq:prot-func-intf}), respectively. 
\begin{proposition} \label{theorem:norm} The protection functions for the uncertainty sets represented by general norms [i.e., by (\ref{eq:un_set_gain_1}), (\ref{eq:un_set_gain_2}), and (\ref{eq:uncertainity_reg_intf})] are \begin{subequations} \setlength{\arraycolsep}{0.0em} \begin{align} \Delta_{g_{l, 1}}^{(n)} &= \Psi_{l,1}^{(n)} \parallel {\mathbf{M}_{g_{l,1}}^{(n)}}^{-1} \cdot \left(\mathbf{S}_{l,1}^{(n)} \right)^\mathsf{T} \parallel^* \\ \Delta_{g_{l, 2}}^{(n)} &= \Psi_{l,2}^{(n)} \parallel {\mathbf{M}_{g_{l,2}}^{(n)}}^{-1} \cdot \left(\mathbf{H}_{l}^{(n)} \cdot \mathbf{S}_{l,1}^{(n)} \right)^\mathsf{T} \parallel^* \\ \Delta_{I_{u_l,l}}^{(n)} &= \Upsilon_{u_l}^{(n)} \parallel {M_{I_{u_l,l}}^{(n)}}^{-1} \cdot I_{u_l,l}^{(n)} \parallel^* \end{align} \setlength{\arraycolsep}{5pt} \end{subequations} where $\mathbf{S}_{l, 1}^{(n)} = \left[S_{{1}, l}^{(n)},~ S_{{2}, l}^{(n)}, \cdots, ~ S_{{{|\mathcal{U}_l|}}, l}^{(n)} \right]$, $\mathbf{H}_{l}^{(n)} = \left[ \tfrac{h_{1, l, 1}^{(n)}} {h_{l, 1, 2}^{(n)}}, ~ \tfrac{h_{2, l, 1}^{(n)}} {h_{l, 2, 2}^{(n)}} , \cdots, ~ \tfrac{h_{|\mathcal{U}_l|, l, 1}^{(n)}} {h_{l, |\mathcal{U}_l|, 2}^{(n)}} \right] $ and $\parallel \cdot \parallel^*$ is the dual norm of $\parallel \cdot \parallel$. \end{proposition} \begin{IEEEproof} Using the expression $\mathbf{w}_{l,1}^{(n)} = \frac{\mathbf{M}_{g_{l,1}}^{(n)} \cdot \left( \bar{\mathbf{g}}_{l,1}^{(n)} - \mathbf{g}_{l,1}^{(n)} \right)^\mathsf{T}}{\Psi_{l,1}^{(n)}}$, the uncertainty set (\ref{eq:un_set_gain_1}) becomes \begin{equation} \Re_{g_{l,1}}^{(n)} = \left\lbrace \mathbf{w}_{l,1}^{(n)} \vert \parallel \mathbf{w}_{l,1}^{(n)} \parallel \hspace{0.2em} \leq 1 \right\rbrace, \quad \forall n.
\end{equation} Besides, the protection function (\ref{eq:prot-func-gain-1}) can be rewritten as \begin{eqnarray} \underset{\mathbf{g}_{l,1}^{(n)} \in \Re_{g_{l,1}}^{(n)} }{\operatorname{max}} \hspace{-1.5em} & \displaystyle \sum_{u_l \in \mathcal{U}_l } S_{u_l, l}^{(n)} \left( g_{{u_l^*}, l, 1}^{(n)} - \bar{g}_{{u_l^*}, l, 1}^{(n)} \right) \nonumber \\ =& \underset{\mathbf{g}_{l,1}^{(n)} \in \Re_{g_{l,1}}^{(n)} }{\operatorname{max}} \mathbf{S}_{l, 1}^{(n)} \cdot \left( \mathbf{g}_{l,1}^{(n)} - \bar{\mathbf{g}}_{l,1}^{(n)}\right)^\mathsf{T} \nonumber \\ =& \underset{\parallel \mathbf{w}_{l,1}^{(n)} \parallel \leq 1}{\operatorname{max}} ~ \Psi_{l,1}^{(n)} \, \mathbf{S}_{l, 1}^{(n)} \cdot \left( {\mathbf{M}_{g_{l,1}}^{(n)}}^{-1} \cdot \mathbf{w}_{l,1}^{(n)} \right). \label{eq:prtection_func_proof} \end{eqnarray} Note that, given a norm $\parallel \mathbf{y} \parallel$ for a vector $\mathbf{y}$, its dual norm induced over the dual space of linear functionals $\mathbf{z}$ is $\parallel \mathbf{z} \parallel^* = \underset{\parallel \mathbf{y} \parallel \leq 1}{\operatorname{max}} ~\mathbf{z}^\mathsf{T} \mathbf{y}$ \cite{general_norm}. Since the protection function in (\ref{eq:prtection_func_proof}) is $\Psi_{l,1}^{(n)}$ times the dual norm associated with the uncertainty region in (\ref{eq:un_set_gain_1}), the proof follows. The protection functions for the uncertainty sets in (\ref{eq:un_set_gain_2}) and (\ref{eq:uncertainity_reg_intf}) are obtained in a similar way. \end{IEEEproof} Since the dual norm is a convex function, the convexity of $\mathbf{P4}$ is preserved. In addition, when the uncertainty set for any vector $\mathbf{y}$ is defined by a linear norm ${\parallel \mathbf{y} \parallel}_\alpha = \left( \sum \mathbf{abs}\lbrace y \rbrace^\alpha \right)^{\frac{1}{\alpha}}$ of order $\alpha \geq 2$, where $\mathbf{abs}\lbrace y \rbrace$ is the absolute value of $y$, the dual norm is a linear norm of order $\beta = 1+ \frac{1}{\alpha-1}$.
In such cases, the protection function can be defined as a linear norm of order $\beta$. Therefore, the protection function becomes a deterministic function of the optimization variables (i.e., $x_{u_l}^{(n)}, S_{u_l, l}^{(n)}$, and $\omega_{u_l}^{(n)}$), and the non-linear $\max$ function is eliminated from the protection functions [i.e., from constraints (\ref{eq:con-intf-1-robst2}), (\ref{eq:con-intf-2-robst2}), and (\ref{eq:con-aux-robst2})]. Consequently, the resource allocation problem reduces to the standard-form convex optimization problem $\mathbf{P5}$, where $\Delta_{I_{u_l,l}}^{(n)} = \Upsilon_{u_l}^{(n)} {\parallel {M_{I_{u_l,l}}^{(n)}}^{-1} \cdot I_{u_l,l}^{(n)} \parallel}_\beta $ and $\mathbf{A}(j,:)$ denotes the $j$-th row of matrix $\mathbf{A}$. In the LTE-A system, which exploits orthogonal frequency-division multiplexing (OFDM) for radio access, fading can be considered uncorrelated across RBs (Chapter 1 in \cite{book:fading_uncor}); hence, the uncertainties and channel gains in the elements of $\mathbf{g}_{l,1}^{(n)}$ and $\mathbf{g}_{l,2}^{(n)}$ can be assumed to be i.i.d. random variables \cite{fading_uncor2}. Therefore, $\mathbf{M}_{g_{l,1}}^{(n)}$ and $\mathbf{M}_{g_{l,2}}^{(n)}$ become diagonal matrices. Note that for any diagonal matrix $\mathbf{A}$ with $j$-th diagonal element $a_{jj}$, the vector ${\mathbf{A}}^{-1}(j,:)$ contains only one non-zero element, $\frac{1}{a_{jj}} $. In addition, since the channel uncertainties are random, a commonly used approach is to represent the uncertainty set by an ellipsoid, i.e., the linear norm with $\alpha = 2$, so that the dual norm is a linear norm with $\beta =2$ \cite{ellip-m2, ellip-m2-1}. Hence, problem $\mathbf{P5}$ turns into a conic quadratic programming problem \cite{book:conic}. In order to solve $\mathbf{P5}$ efficiently, a distributed gradient-aided algorithm is developed in the following section.
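A minimal sketch of the dual-norm protection value in \textbf{Proposition \ref{theorem:norm}} follows, assuming the ellipsoidal case $\alpha = \beta = 2$; the weight matrix, bound, and power vector are illustrative placeholders:

```python
import numpy as np

def protection_value(psi, M, s, alpha=2.0):
    """Sketch of Delta = Psi * || M^{-1} . s ||_beta with the dual order
    beta = 1 + 1/(alpha - 1), so alpha = 2 gives beta = 2 (ellipsoid).
    psi (bound), M (weight matrix), and s (power vector) are placeholders."""
    beta = 1.0 + 1.0 / (alpha - 1.0)  # dual order of the alpha-norm
    return psi * np.linalg.norm(np.linalg.inv(M) @ s, ord=beta)
```

For instance, with $\mathbf{M}$ the identity, $\Psi = 1$, and $\mathbf{S} = [3, 4]$, the protection value is the Euclidean norm $5$; scaling $\Psi$ scales the protection value linearly.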
\begin{figure*}[!t] \normalsize \begin{subequations} \setlength{\arraycolsep}{0.0em} \begin{eqnarray} \mathbf{(P5)} \hspace{2em} \underset{x_{u_l}^{(n)}, S_{u_l, l}^{(n)}, \omega_{u_l}^{(n)}}{\operatorname{max}} ~ \sum_{u_l \in \mathcal{U}_l } \sum_{n =1}^N \frac{1}{2} x_{u_l}^{(n)} B_{RB} \log_2 && \left( 1 + \frac{S_{u_l, l}^{(n)} h_{u_l, l, 1}^{(n)}}{x_{u_l}^{(n)} \omega_{u_l}^{(n)}} \right) \nonumber \\ \text{subject to} \quad \sum_{u_l \in \mathcal{U}_l} x_{u_l}^{(n)} ~ &&{\leq} ~ 1, \quad \quad ~ \forall n \hspace{-5em} \label{eq:con-bin-robst3} \\ \sum_{n =1}^N S_{u_l, l}^{(n)} ~ &&{\leq} ~ P_{u_l}^{max}, ~\forall u_l \label{eq:con-pow-ue-robst3} \\ \sum_{u_l \in \mathcal{U}_l } \sum_{n =1}^N \frac{h_{u_l, l, 1}^{(n)}}{h_{l, u_l, 2}^{(n)}} S_{u_l, l}^{(n)} ~ &&{\leq} ~ P_l^{max} \label{eq:con-pow-rel-robst3} \\ \sum_{u_l \in \mathcal{U}_l } S_{u_l, l}^{(n)} \bar{g}_{{u_l^*}, l, 1}^{(n)} + \Psi_{l,1}^{(n)} \left( \sum_{k=1}^{|\mathcal{U}_l|} \left( {\mathbf{M}_{g_{l,1}}^{(n)}}^{-1}(k,:) \cdot \mathbf{S}_{l, 1}^{(n)} \right)^\beta \right)^{\frac{1}{\beta}} ~ &&{\leq} ~ I_{th, 1}^{(n)}, ~~ \forall n \label{eq:con-intf-1-robst3}\\ \sum_{u_l \in \mathcal{U}_l } \frac{h_{u_l, l, 1}^{(n)}} {h_{l, u_l, 2}^{(n)}} S_{u_l, l}^{(n)} \bar{g}_{l, {u_l^*}, 2}^{(n)} + \Psi_{l,2}^{(n)} \left( \sum_{k=1}^{|\mathcal{U}_l|} \left( {\mathbf{M}_{g_{l,2}}^{(n)}}^{-1}(k,:) \cdot \left( \mathbf{H}_{l}^{(n)} \cdot \mathbf{S}_{l, 1}^{(n)} \right) \right)^\beta \right)^{\frac{1}{\beta}} ~ &&{\leq} ~ I_{th, 2}^{(n)}, ~~\forall n \label{eq:con-intf-2-robst3} \\ \sum_{n=1}^N \frac{1}{2} x_{u_l}^{(n)} B_{RB} \log_2 \left( 1 + \frac{S_{u_l, l}^{(n)} h_{u_l, l, 1}^{(n)}}{x_{u_l}^{(n)} \omega_{u_l}^{(n)}} \right) ~ &&{\geq} ~ Q_{u_l}, ~~~ \forall u_l \label{eq:con-QoS-cue-robst3}\\ S_{u_l, l}^{(n)} ~ &&{\geq} ~ 0, ~~~~~~\forall n, u_l\label{eq:con-pow-0-robst3} \\ \bar{I}_{u_l,l}^{(n)} + \Delta_{I_{u_l,l}}^{(n)} + \sigma^2 ~ &&{\leq}~ \omega_{u_l}^{(n)}, ~~ \forall n, u_l 
\label{eq:con-aux-robst3} \end{eqnarray} \setlength{\arraycolsep}{5pt} \end{subequations} \hrulefill \vspace*{4pt} \end{figure*} \section{Robust Distributed Algorithm} \label{sec:robust_algo} \subsection{Algorithm Development} \begin{statement} \label{theorem:robust-alloc} (a) The optimal power allocation for $u_l$ over RB $n$ is given by the following water-filling equation: \begin{equation} \label{eq:power-alloc-robust} {P_{u_l, l}^{(n)}}^* = \frac{{S_{u_l,l}^{(n)}}^*}{{x_{u_l}^{(n)}}^*} = \left[\delta_{u_l,l}^{(n)} - \frac{\omega_{u_l}^{(n)}}{ h_{u_l,l, 1}^{(n)}}\right]^+ \end{equation} where $\delta_{u_l,l}^{(n)}$ is found by (\ref{eq:robust-del-pow}). \begin{figure*}[!t] \normalsize \begin{equation} \label{eq:robust-del-pow} \delta_{u_l,l}^{(n)} = \frac{\tfrac{1}{2} B_{RB} \frac{(1 + \lambda_{u_l})}{\ln 2}}{\rho_{u_l} + \nu_l \frac{h_{u_l,l, 1}^{(n)}}{ h_{l, u_l,2}^{(n)}} + \psi_{n} \left( \bar{g}_{{u_l^*}, l, 1}^{(n)} + \Psi_{l,1}^{(n)} m_{{u_l}{u_l}_{g_{l,1}}}^{(n)} \right) + \varphi_{n} \frac{h_{u_l, l, 1}^{(n)}}{h_{l, u_l, 2}^{(n)}} \left( \bar{g}_{l, {u_l^*}, 2}^{(n)} + \Psi_{l,2}^{(n)} m_{{u_l}{u_l}_{g_{l,2}}}^{(n)} \right) } \end{equation} \hrulefill \vspace*{4pt} \end{figure*} (b) The RB allocation for $u_l$ over RB $n$ is obtained by (\ref{eq:channel_alloc1}). \end{statement} \begin{IEEEproof} See\textbf{ Appendix \ref{app:power-rb-alloc-robust}}. \end{IEEEproof} Based on \textbf{Statement \ref{theorem:robust-alloc}}, we utilize a gradient-based method (given in \textbf{Appendix \ref{app:lagrange_update}}) to update the variables. Each relay independently performs the resource allocation and allocates resources to the associated UEs. For completeness, the distributed joint RB and power allocation algorithm is summarized in \textbf{Algorithm \ref{alg:rec_alloc}}. 
\begin{algorithm} \caption{Joint RB and power allocation algorithm} \label{alg:rec_alloc} \begin{algorithmic}[1] \STATE Each relay $l \in \mathcal{L}$ estimates the reference gains $\bar{g}_{u_l^*, l, 1}^{(n)}$ and $\bar{g}_{l, u_l^*, 2}^{(n)}$ from the previous time slot $\forall u_l \in \mathcal{U}_l$ and $n \in \mathcal{N}$. \STATE Initialize Lagrange multipliers to some positive value and set $t:=0$, $S_{u_l,l}^{(n)} := \frac{P_{u_l}^{max}}{N}$ $\forall u_l, n$. \REPEAT \STATE Set $t:= t + 1$. \STATE Calculate $x_{u_l}^{(n)}$ and $S_{u_l, l}^{(n)}$ for $\forall u_l, n$ using (\ref{eq:channel_alloc1}) and (\ref{eq:power-alloc-robust}). \STATE Update the Lagrange multipliers by (\ref{eq:lagrange_update1})--(\ref{eq:lagrange_update8}) and calculate the aggregated achievable network rate as $\displaystyle R_l(t) := \sum_{u_l \in \mathcal{U}_l} R_{u_l}(t)$. \UNTIL $t = T_{max}$ or the convergence criterion is met (i.e., $ \mathbf{abs} \lbrace R_l(t) - R_l(t-1) \rbrace < \varepsilon$, where $\varepsilon$ is the tolerance for convergence). \STATE Allocate resources (i.e., RB and transmit power) to associated UEs for each relay and calculate the average achievable data rate. \end{algorithmic} \end{algorithm} Note that the L3 relays are able to perform their own scheduling (unlike the L1 and L2 relays in \cite{relay-book-1}), just as an eNB does. These relays can obtain information such as the transmission power allocation at the other relays, channel gain information, etc. by using the X2 interface (Section 7 in \cite{lte_arch}) defined in the 3GPP specifications. In particular, a separate load indication procedure is used over the X2 interface for interference management (for details refer to \cite{lte_arch} and references therein). As a result, the relays can obtain the channel state information without increasing signaling overhead at the eNB.
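The structure of the main loop in \textbf{Algorithm \ref{alg:rec_alloc}} can be sketched for a simplified single-UE case with only the total-power multiplier; the water-filling step and the diminishing-step subgradient update mirror the algorithm, while the reduced multiplier set and all numerical values are illustrative assumptions (the full algorithm updates every multiplier via the appendix equations):

```python
import numpy as np

def toy_allocation(h, omega, B_RB, P_max, T_max=300, eps=1e-9):
    """Toy single-UE sketch of the repeat/until loop: water-fill power
    over N RBs against the current multiplier rho, then take a projected
    subgradient step on rho with diminishing step size a/sqrt(t)."""
    rho, R_prev = 1.0, -np.inf
    for t in range(1, T_max + 1):
        # water-filling against the current multiplier
        P = np.maximum(B_RB / (2.0 * np.log(2) * rho) - omega / h, 0.0)
        R = 0.5 * B_RB * np.sum(np.log2(1.0 + P * h / omega))
        # projected subgradient step on the total-power constraint
        rho = max(rho + (0.5 / np.sqrt(t)) * (P.sum() - P_max), 1e-9)
        if abs(R - R_prev) < eps:   # convergence criterion
            break
        R_prev = R
    return P, R
```

The multiplier grows whenever the power budget is exceeded and shrinks otherwise, driving the total allocated power toward the budget, analogously to the per-constraint updates in the distributed algorithm.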
\subsection{Complexity Analysis} \begin{proposition} Using a small step size in gradient-based updating, the proposed algorithm achieves a sum-rate such that the difference in the sum-rate between successive iterations is less than an arbitrary $\varepsilon >0$ with a polynomial computation complexity in $|\mathcal{U}_l|$ and $N$. \end{proposition} \begin{IEEEproof} It is easy to verify that the computational complexity at each iteration of variable updating in (\ref{eq:lagrange_update1})--(\ref{eq:lagrange_update8}) is polynomial in $|\mathcal{U}_l|$ and $N$. Obtaining the reference gains requires $|\mathcal{U}_l|N$ computations, and if $T$ iterations are required for convergence, the overall complexity of the algorithm is $\mathcal{O}\left(|\mathcal{U}_l| N + T|\mathcal{U}_l| N\right)$. For any Lagrange multiplier $\kappa$, if we choose $\kappa(0)$ in the interval $[0, \kappa_{max}]$, the distance between $\kappa(0)$ and $\kappa^*$ is upper bounded by $\kappa_{max}$. Then it can be shown that at iteration $t$, the distance between the current best objective and the optimum objective is upper bounded by $\frac{\kappa_{max}^2 + \kappa(t)^2 \displaystyle \sum_{i=1}^{t} {\Lambda_{\kappa}^{(i)}}^2}{2 \displaystyle \sum_{i=1}^{t}\Lambda_{\kappa}^{(i)}}.$ If we take the step size $\Lambda_{\kappa}^{(i)} = \frac{a}{\sqrt{i}}$, where $a$ is a small constant, $\mathcal{O}\left(\frac{1}{\varepsilon^2}\right)$ iterations are required for convergence to make this bound less than $\varepsilon$ \cite{notes_subgrad}. Hence, the complexity of the proposed algorithm is $\mathcal{O}\left( \left(1+ \tfrac{1}{\varepsilon^2} \right) |\mathcal{U}_l| N\right)$. \end{IEEEproof} \subsection{Cost of Robust Resource Allocation} An important issue in robust resource allocation is the substantial reduction in the achievable network sum-rate.
The reduction of the achievable sum-rate due to introducing robustness is measured by $\mathscr{R}_\Delta = {\parallel R^* - R_\Delta^* \parallel}_2$, where $R^*$ and $R_\Delta^*$ are the optimal achievable sum-rates obtained by solving the nominal and the robust problem, respectively. \begin{proposition} \label{theorem:robust-tradeoff} Let $\boldsymbol{\psi^*}$, $\boldsymbol{\varphi^*}$, $\boldsymbol{\varrho^*}$ be the optimal values of the Lagrange multipliers for constraints (\ref{eq:con-intf-1-relx}), (\ref{eq:con-intf-2-relx}), and (\ref{eq:con-aux-relx}) in $\mathbf{P2}$, respectively. For all values of $\Delta_{g_{l, 1}}^{(n)}, \Delta_{g_{l, 2}}^{(n)}$, and $\Delta_{I_{u_l, l}}^{(n)}$ the reduction of the achievable sum-rate can be approximated as \begin{equation} \label{eq:robust-opt-tradeoff} \mathscr{R}_\Delta \approx \sum_{n=1}^{N} \psi_n^* \Delta_{g_{l, 1}}^{(n)} + \sum_{n=1}^{N} \varphi_n^* \Delta_{g_{l, 2}}^{(n)} + \sum_{u_l \in \mathcal{U}_l}\sum_{n=1}^{N} \varrho_{u_l}^{n*} \Delta_{I_{u_l, l}}^{(n)}. \end{equation} \end{proposition} \begin{IEEEproof} See \textbf{Appendix \ref{app:sensitivity}}. \end{IEEEproof} From \textbf{Proposition \ref{theorem:robust-tradeoff}}, the value of $\mathscr{R}_\Delta$ depends on the uncertainty sets, and by adjusting the size of $\Delta_{g_{l, 1}}^{(n)}$ and $\Delta_{g_{l, 2}}^{(n)}$, $\mathscr{R}_\Delta$ can be controlled. \subsection{Trade-off Between Robustness and Achievable Sum-rate} The robust worst-case resource allocation dealing with channel uncertainties is very conservative and often leads to inefficient utilization of resources. In practice, uncertainty does not always correspond to its worst case, and in many instances the robust worst-case resource allocation may not be necessary. In such cases, it is desirable to achieve a trade-off between robustness and network sum-rate.
This can be achieved by modifying the worst-case approach, where the uncertainty set is chosen in such a way that the probability of violating the interference threshold in both hops is kept below a predefined level, while the network sum-rate is kept close to the optimal value of the nominal case. Therefore, we modify the constraints (\ref{eq:con-intf-1-relx}) and (\ref{eq:con-intf-2-relx}) in $\mathbf{P2}$ as \begin{subequations} \setlength{\arraycolsep}{0.0em} \begin{eqnarray} \mathbb{P} \left( \sum_{u_l \in \mathcal{U}_l } S_{u_l, l}^{(n)} g_{{u_l^*}, l, 1}^{(n)} \geq I_{th, 1}^{(n)} \right) \leq \Theta_{l,1}^{(n)}, ~~ \forall n \hspace{2em} \label{eq:chance-inft-1}\\ \mathbb{P} \left( \sum_{u_l \in \mathcal{U}_l } \frac{h_{u_l, l, 1}^{(n)}}{h_{l, u_l, 2}^{(n)}} S_{u_l, l}^{(n)} g_{l, {u_l^*}, 2}^{(n)} \geq I_{th, 2}^{(n)} \right) \leq \Theta_{l,2}^{(n)}, ~~\forall n \hspace{2em} \label{eq:chance-inft-2} \end{eqnarray} \end{subequations} where $\Theta_{l,1}^{(n)}$ and $\Theta_{l,2}^{(n)}$ are the given probabilities of violating constraints (\ref{eq:con-intf-1-relx}) and (\ref{eq:con-intf-2-relx}) for any $n$ in the first hop and the second hop, respectively. By changing $\Theta_{l,1}^{(n)}$ and $\Theta_{l,2}^{(n)}$, a trade-off between robustness and optimality can be achieved. By reducing $\Theta_{l,1}^{(n)}$ and $\Theta_{l,2}^{(n)}$, the network becomes more robust against uncertainty, while by increasing $\Theta_{l,1}^{(n)}$ and $\Theta_{l,2}^{(n)}$, the network sum-rate is increased. To deal with this trade-off, we use the \textit{chance-constrained approach}. When the constraints are affine functions, for i.i.d. values of the uncertain parameters, (\ref{eq:con-intf-1-relx}) and (\ref{eq:con-intf-2-relx}) can be replaced by convex functions as their safe approximations \cite{robust-theroy}.
Applying this approach we obtain \begin{subequations} \setlength{\arraycolsep}{0.0em} \begin{eqnarray*} \hspace{-1em} \sum_{u_l \in \mathcal{U}_l } S_{u_l, l}^{(n)} g_{{u_l^*}, l, 1}^{(n)} = \sum_{u_l \in \mathcal{U}_l } S_{u_l, l}^{(n)} \bar{g}_{{u_l^*}, l, 1}^{(n)} + \sum_{u_l \in \mathcal{U}_l } \xi_{u_l, l,1}^{(n)} S_{u_l, l}^{(n)} \hat{g}_{{u_l^*}, l, 1}^{(n)} \label{eq:chance-aprx-inft-1}\\ \sum_{u_l \in \mathcal{U}_l } \tfrac{h_{u_l, l, 1}^{(n)}}{h_{l, u_l, 2}^{(n)}} S_{u_l, l}^{(n)} g_{l,{u_l^*}, 2}^{(n)} = \hspace{14.5em} \\ \sum_{u_l \in \mathcal{U}_l } \tfrac{h_{u_l, l, 1}^{(n)}}{h_{l, u_l, 2}^{(n)}} S_{u_l, l}^{(n)} \bar{g}_{l, {u_l^*}, 2}^{(n)} + \sum_{u_l \in \mathcal{U}_l } \xi_{l, u_l,2}^{(n)} \tfrac{h_{u_l, l, 1}^{(n)}}{h_{l, u_l, 2}^{(n)}} S_{u_l, l}^{(n)} \hat{g}_{l, {u_l^*}, 2}^{(n)} \hspace{1em} \label{eq:chance-aprx-inft-2} \end{eqnarray*} \end{subequations} where $\xi_{j}^{(n)} = \frac{g_{j}^{(n)} - \bar{g}_{j}^{(n)}}{\hat{g}_{j}^{(n)}}, \forall n$ varies within the range $[-1,+1]$. Under the assumption of uncorrelated fading channels, all values of $\xi_{u_l,l,1}^{(n)}$ and $\xi_{l, u_l,2}^{(n)}$ are independent of each other and belong to specific classes of probability distributions $\mathcal{P}_{u_l,l,1}^{(n)}$ and $\mathcal{P}_{l, u_l,2}^{(n)}$, respectively.
Now the constraints in (\ref{eq:con-intf-1-relx}) and (\ref{eq:con-intf-2-relx}) can be replaced by Bernstein approximations of chance constraints \cite{robust-theroy} as follows: \begin{subequations} \setlength{\arraycolsep}{0.0em} \begin{eqnarray} \sum_{u_l \in \mathcal{U}_l } S_{u_l, l}^{(n)} \bar{g}_{{u_l^*}, l, 1}^{(n)} + \tilde{\Delta}_{g_{l, 1}}^{(n)} \leq I_{th, 1}^{(n)} , ~~ \forall n \hspace{2em} \label{eq:berns-cons-inft-1}\\ \sum_{u_l \in \mathcal{U}_l } \tfrac{h_{u_l, l, 1}^{(n)}}{h_{l, u_l, 2}^{(n)}} S_{u_l, l}^{(n)} \bar{g}_{l, {u_l^*}, 2}^{(n)} + \tilde{\Delta}_{g_{l, 2}}^{(n)} \leq I_{th, 2}^{(n)} , ~~ \forall n \hspace{2em} \label{eq:berns-cons-inft-2} \end{eqnarray} \end{subequations} where the protection functions $\tilde{\Delta}_{g_{l, 1}}^{(n)}$ and $\tilde{\Delta}_{g_{l, 2}}^{(n)}$ are given by (\ref{eq:berns-inft-1}) and (\ref{eq:berns-inft-2}), respectively. The variables $-1 \leq \eta_{\mathcal{P}_{j}}^{+} \leq +1$ and $\tau_{\mathcal{P}_{j}} \geq 0$ are used for safe approximation of the chance constraints and depend on the probability distribution $\mathcal{P}_{j}$. For a fixed value of $\mathcal{P}_{j}$, the values of these parameters are listed in Table \ref{tab:berns_val} (see \textbf{Appendix \ref{app:berns_val}}). The constraints in (\ref{eq:berns-cons-inft-1}) and (\ref{eq:berns-cons-inft-2}) turn the resource allocation problem into a conic quadratic programming problem \cite{book:conic} and, using the inequality ${\parallel \mathbf{y} \parallel}_2 \leq {\parallel \mathbf{y} \parallel}_1$, the optimal RB and power allocation can be obtained in a distributed manner similar to that in \textbf{Algorithm \ref{alg:rec_alloc}}. Note that in (\ref{eq:berns-inft-1}) and (\ref{eq:berns-inft-2}), the protection functions depend on $\Theta_{l,1}^{(n)}$ and $\Theta_{l,2}^{(n)}$. By adjusting $\Theta_{l,1}^{(n)}$ and $\Theta_{l,2}^{(n)}$, a trade-off between rate and robustness can be achieved.
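To illustrate how the violation probability shapes the protection value, the sketch below evaluates the Bernstein-type protection function of (\ref{eq:berns-inft-1}) for illustrative $\eta$, $\tau$, and $\Theta$ values (the actual parameters come from Table \ref{tab:berns_val}); tightening $\Theta$ inflates the protection term as $\sqrt{2\ln(1/\Theta)}$:

```python
import numpy as np

def bernstein_protection(S, g_hat, eta, tau, Theta):
    """Sketch of Delta = sum(eta * S * g_hat)
       + sqrt(2 ln(1/Theta)) * sqrt(sum(tau^2 * (S * g_hat)^2)).
    eta, tau depend on the distribution class; Theta is the tolerated
    violation probability. All numerical inputs are illustrative."""
    v = S * g_hat
    return np.sum(eta * v) + np.sqrt(2.0 * np.log(1.0 / Theta)) * \
        np.sqrt(np.sum((tau * v) ** 2))
```

A smaller tolerated violation probability yields a strictly larger protection value, i.e., a more conservative interference constraint.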
\begin{figure*}[!t] \normalsize \begin{subequations} \setlength{\arraycolsep}{0.0em} \begin{eqnarray} \tilde{\Delta}_{g_{l, 1}}^{(n)} = \sum_{u_l \in \mathcal{U}_l } \eta_{\mathcal{P}_{u_l,l,1}^{(n)}}^{+} S_{u_l, l}^{(n)} \hat{g}_{{u_l^*}, l, 1}^{(n)} + \sqrt{2 \ln \tfrac{1}{\Theta_{l,1}^{(n)}}} \left( \sum_{u_l \in \mathcal{U}_l } \tau_{\mathcal{P}_{u_l,l,1}^{(n)}}^2 \left( S_{u_l, l}^{(n)} \hat{g}_{{u_l^*}, l, 1}^{(n)} \right)^2 \right)^{\tfrac{1}{2}} , ~~ \forall n \hspace{2em} \label{eq:berns-inft-1}\\ \tilde{\Delta}_{g_{l, 2}}^{(n)} = \sum_{u_l \in \mathcal{U}_l } \eta_{\mathcal{P}_{l, u_l,2}^{(n)}}^{+} S_{u_l, l}^{(n)} \hat{g}_{l, {u_l^*}, 2}^{(n)} + \sqrt{2 \ln \tfrac{1}{\Theta_{l,2}^{(n)}}} \left( \sum_{u_l \in \mathcal{U}_l } \tau_{\mathcal{P}_{l, u_l,2}^{(n)}}^2 \left( \tfrac{h_{u_l, l, 1}^{(n)}}{h_{l, u_l, 2}^{(n)}} S_{u_l, l}^{(n)} \hat{g}_{l, {u_l^*}, 2}^{(n)} \right)^2 \right)^{\tfrac{1}{2}} , ~~ \forall n \hspace{2em} \label{eq:berns-inft-2} \end{eqnarray} \end{subequations} \hrulefill \vspace*{4pt} \end{figure*} \subsection{Sensitivity Analysis} In the previous section we have seen that the protection functions depend on $\Theta_{l,1}^{(n)}$ and $\Theta_{l,2}^{(n)}$. In the following, we analyze the sensitivity of $\mathscr{R}_\Delta $ to the values of the trade-off parameters. Using the protection functions (\ref{eq:berns-inft-1}) and (\ref{eq:berns-inft-2}), $\mathscr{R}_\Delta $ is given by \begin{equation} \label{eq:robust_sensitivity} \mathscr{R}_\Delta \approx \sum_{n=1}^{N} \psi_n^* \tilde{\Delta}_{g_{l, 1}}^{(n)} + \sum_{n=1}^{N} \varphi_n^* \tilde{\Delta}_{g_{l, 2}}^{(n)} + \sum_{u_l \in \mathcal{U}_l}\sum_{n=1}^{N} \varrho_{u_l}^{n*} \Delta_{I_{u_l, l}}^{(n)}. 
\end{equation} Differentiating (\ref{eq:robust_sensitivity}) with respect to the trade-off parameters $\Theta_{l,1}^{(n)}$ and $\Theta_{l,2}^{(n)}$, the sensitivity of $\mathscr{R}_\Delta$, i.e., $\mathcal{S}_{\Theta_{l,i}^{(n)}} \left( \mathscr{R}_\Delta \right) = \frac{\partial \mathscr{R}_\Delta}{\partial \Theta_{l,i}^{(n)}}$ is obtained as follows: \begin{subequations} \setlength{\arraycolsep}{0.0em} \begin{eqnarray} \hspace{-2em} \mathcal{S}_{\Theta_{l,1}^{(n)}} \left( \mathscr{R}_\Delta \right) & = & - \tfrac{ \psi_n^* \left( \displaystyle \sum_{u_l \in \mathcal{U}_l } \tau_{\mathcal{P}_{u_l,l,1}^{(n)}}^2 \left( S_{u_l, l}^{(n)} \hat{g}_{{u_l^*}, l, 1}^{(n)} \right)^2 \right)^{\tfrac{1}{2}}}{\Theta_{l,1}^{(n)} \sqrt{2 \ln \left( \frac{1}{\Theta_{l,1}^{(n)}}\right)}} \\ \hspace{-2em} \mathcal{S}_{\Theta_{l,2}^{(n)}} \left( \mathscr{R}_\Delta \right) & = & - \tfrac{\varphi_n^* \left( \displaystyle \sum_{u_l \in \mathcal{U}_l } \tau_{\mathcal{P}_{l, u_l,2}^{(n)}}^2 \left( \tfrac{h_{u_l, l, 1}^{(n)}}{h_{l, u_l, 2}^{(n)}} S_{u_l, l}^{(n)} \hat{g}_{l, {u_l^*}, 2}^{(n)} \right)^2 \right)^{\tfrac{1}{2}}}{\Theta_{l,2}^{(n)} \sqrt{2 \ln \left( \frac{1}{\Theta_{l,2}^{(n)}}\right)}}. \nonumber \\ \end{eqnarray} \end{subequations} \section{Performance Evaluation} \label{sec:performance_eval} \subsection{Simulation parameters and assumptions} In order to obtain the performance evaluation results for the proposed resource allocation scheme, we use an event-driven simulator in MATLAB. For propagation modeling, we consider distance-dependent path-loss, shadow fading, and multi-path Rayleigh fading. In particular, we consider a realistic 3GPP propagation environment\footnote{Any other propagation model for D2D communication can be used for the proposed resource allocation method.} presented in \cite{relay-book-2}. 
For example, propagation in the UE-to-relay and relay-to-D2D UE links follows the path-loss equation $PL_{u_l,l}(\ell)_{[dB]} = 103.8 + 20.9 \log(\ell) + L_{su} + 10 \log(\phi)$, where $\ell$ is the link distance in kilometers, $L_{su}$ accounts for shadow fading and is modelled as a log-normal random variable, and $\phi$ is an exponentially distributed random variable which represents the Rayleigh fading channel power gain. For the same link distance, the gains due to shadow fading and Rayleigh fading for different resource blocks could be different. Similarly, the path-loss equation for the relay-eNB link is expressed as $PL_{l,eNB}(\ell)_{[dB]} = 100.7 + 23.5 \log(\ell) + L_{sr} + 10 \log(\phi)$, where $L_{sr}$ is a log-normal random variable accounting for shadow fading. The simulation parameters and assumptions used for obtaining the numerical results are listed in Table \ref{tab:sim_param}. \begin{table}[!t] \renewcommand{\arraystretch}{1.3} \caption{Simulation Parameters} \label{tab:sim_param} \centering \begin{tabular}{l|l} \hline \bfseries Parameter & \bfseries Values\\ \hline\hline Carrier frequency & $2.35$ GHz \\ System bandwidth & $2.5$ MHz \\ Cell layout & Hexagonal grid, $3$-sector sites \\ Total number of available RBs & $13$ \\ Relay cell radius & $200$ meter\\ Distance between eNB and relays & $125$ meter\\ Minimum distance between UE and relay & $10$ meter\\ Rate requirement for cellular UEs & $128$ Kbps \\ Rate requirement for D2D UEs & $256$ Kbps \\ Total power available at each relay & $30$ dBm \\ Total power available at UE & $23$ dBm \\ Shadow fading standard deviation: \\ \hspace{5em} for relay-eNB links & $6$ dB \\ \hspace{5em} for UE-relay links & $10$ dB \\ Noise power spectral density & $-174$ dBm/Hz \\ \hline \end{tabular} \end{table} We simulate a single three-sectored cell in a rectangular area of $700 ~\text{m} \times 700~ \text{m}$, where the eNB is located in the centre of the cell and three relays are deployed in the network, 
i.e., one relay in each sector. The CUEs are uniformly distributed within the radius of the relay cell. The D2D transmitters and receivers are uniformly distributed on the perimeter of a circle with radius $D_{r,d}$ as shown in Fig. \ref{fig:d2d_position}. The distance between two D2D UEs is denoted by $D_{d,d}$. Both $D_{r,d}$ and $D_{d,d}$ are varied as simulation parameters. \begin{figure}[!h t b] \centering \includegraphics[width=2.0in]{d2d_dist.pdf} \caption{Distribution of D2D pairs: D2D UEs are uniformly distributed on the perimeter of a circle with radius $D_{r,d}$, keeping the distance $D_{d,d}$ between peers.} \label{fig:d2d_position} \end{figure} In our simulations, we express the uncertainty bounds $\Psi_{l,1}^{(n)}, \Psi_{l,2}^{(n)}$, and $\Upsilon_{u_l}^{(n)}$ in percentage as $\Psi_{l,1}^{(n)} = \frac{{\parallel \mathbf{g}_{l,1}^{(n)} - \bar{\mathbf{g}}_{l,1}^{(n)} \parallel}_2}{{\parallel \bar{\mathbf{g}}_{l,1}^{(n)} \parallel}_2} $, $ \Psi_{l,2}^{(n)} = \frac{{\parallel \mathbf{g}_{l,2}^{(n)} - \bar{\mathbf{g}}_{l,2}^{(n)} \parallel}_2}{{\parallel \bar{\mathbf{g}}_{l,2}^{(n)} \parallel}_2}$, and $\Upsilon_{u_l}^{(n)} = \frac{{\parallel I_{u_l,l}^{(n)} - \bar{I}_{u_l,l}^{(n)} \parallel}_2}{{\parallel \bar{I}_{u_l,l}^{(n)} \parallel}_2}$. As an example, for any relay node $l$, if $\Psi_{l,1}^{(n)} = 0.5$, the error in the channel gain over RB $n$ for the first hop is not more than $50\%$ of its nominal value. We assume that the estimated interference experienced at the relay node and the receiving D2D UEs is $\bar{I}_{u_l,l}^{(n)} = 2 \sigma^2$ for all the RBs. The matrices $\mathbf{M}_{g_{l,1}}^{(n)}$ and $\mathbf{M}_{g_{l,2}}^{(n)}$ are considered to be identity matrices and $M_{I_{u_l,l}}^{(n)}$ is set to 1 for all the RBs. The results are obtained by averaging over $250$ realizations of the simulation scenarios (i.e., UE locations and link gains). 
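These path-loss models can be sampled directly. The sketch below assumes base-10 logarithms in the path-loss expressions, zero-mean Gaussian shadowing in dB with the standard deviations of Table \ref{tab:sim_param}, and a unit-mean exponential Rayleigh power gain; the link distances are illustrative.

```python
import math
import random

random.seed(1)

def ue_relay_pl_db(dist_km, shadow_sigma_db=10.0):
    """UE-to-relay path loss: 103.8 + 20.9 log10(l) + L_su + 10 log10(phi),
    with L_su log-normal shadowing (Gaussian in dB) and phi an Exp(1)
    Rayleigh power gain (assumptions: base-10 logs, unit-mean fading)."""
    L_su = random.gauss(0.0, shadow_sigma_db)
    phi = random.expovariate(1.0)
    return 103.8 + 20.9 * math.log10(dist_km) + L_su + 10.0 * math.log10(phi)

def relay_enb_pl_db(dist_km, shadow_sigma_db=6.0):
    """Relay-to-eNB path loss: 100.7 + 23.5 log10(l) + L_sr + 10 log10(phi)."""
    L_sr = random.gauss(0.0, shadow_sigma_db)
    phi = random.expovariate(1.0)
    return 100.7 + 23.5 * math.log10(dist_km) + L_sr + 10.0 * math.log10(phi)

# Average over many fading realizations at fixed (illustrative) distances.
n = 5000
avg_ue  = sum(ue_relay_pl_db(0.060) for _ in range(n)) / n    # 60 m access link
avg_bkh = sum(relay_enb_pl_db(0.125) for _ in range(n)) / n   # 125 m relay-eNB link
print(avg_ue, avg_bkh)
```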
\subsection{Results} \subsubsection{Convergence of the proposed algorithm} We consider the same step size for all the Lagrange multipliers, i.e., for any Lagrange multiplier $\kappa$, the step size at iteration $t$ is calculated as $\Lambda_{\kappa}^{(t)} = \frac{a}{\sqrt{t}}$, where $a$ is a small constant. Fig. \ref{fig:convergence} shows the convergence behavior of the proposed algorithm when $a = 0.001$ and $a = 0.01$. For convergence, the step size should be selected carefully. It is clear from this figure that when $a$ is sufficiently small, the algorithm converges very quickly (i.e., in less than $20$ iterations) to the optimal solution. \begin{figure}[!t] \centering \includegraphics[width=3.5in]{convergencePlotNew} \caption{Convergence behavior of the proposed algorithm: number of CUEs, $|\mathcal{C}| = 15$ (i.e., $5$ CUEs assisted by each relay), number of D2D pairs, $|\mathcal{D}| = 9$ (i.e., $3$ D2D pairs are assisted by each relay), and hence $|\mathcal{U}_l| = 8$ for each relay. The average end-to-end rate is calculated by $\frac{R_l}{|\mathcal{U}_l|}$, the maximum relay-D2D UE distance is $D_{r,d} = 60~ \text{meter} $, and the interference threshold for both hops is $-70~ \text{dBm}$.} \label{fig:convergence} \end{figure} \subsubsection{Sensitivity of $\mathscr{R}_\Delta$ to the trade-off parameter} The absolute sensitivity of $\mathscr{R}_\Delta $ considering $\Theta_{l} = \Theta_{l,1}^{(n)} = \Theta_{l,2}^{(n)}$ for all $n$ is shown in Fig. \ref{fig:sensitivity}. For all the RBs, we assume that the probability density functions of $\hat{g}_{l, 1}^{(n)}$ and $\hat{g}_{l, 2}^{(n)}$ are Gaussian; hence, $\mathcal{P}_{u_l,l,1}^{(n)}$ and $\mathcal{P}_{l, u_l,2}^{(n)}$ correspond to the last row of Table \ref{tab:berns_val}. 
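The shape of this sensitivity can be checked numerically from the closed-form expression derived above; the sketch below uses illustrative values for $\psi_n^*$ and for the uncertainty magnitude (neither is taken from the simulations).

```python
import math

def sensitivity_magnitude(theta, psi_star=1.0, q=0.05):
    """|S_Theta(R_Delta)| for one RB:  psi* q / (Theta sqrt(2 ln(1/Theta))),
    where q = sqrt(sum_u tau^2 (S_u g_hat_u)^2).  psi_star and q are
    illustrative placeholders, not values from the simulations."""
    return psi_star * q / (theta * math.sqrt(2.0 * math.log(1.0 / theta)))

# Evaluate the magnitude at a few trade-off parameter values.
s_at_005 = sensitivity_magnitude(0.05)
s_at_020 = sensitivity_magnitude(0.20)
s_at_080 = sensitivity_magnitude(0.80)
print(s_at_005, s_at_020, s_at_080)
```

The magnitude grows steeply as $\Theta_l$ decreases below about $0.2$ and flattens out for larger $\Theta_l$, consistent with the behaviour discussed next.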
For a given uncertainty set and interference threshold, when $\Theta_{l} < 0.2$, the value of $\mathcal{S}_{\Theta_{l}} \left( \mathscr{R}_\Delta \right)$ is very sensitive to $\Theta_{l}$. However, for higher values of $\Theta_{l}$, the sensitivity of $\mathscr{R}_\Delta$ is relatively independent of $\Theta_{l}$. From (\ref{eq:robust_sensitivity}), increasing $\Theta_{l}$ proportionally decreases $\mathscr{R}_\Delta$, which increases the network sum-rate. Small values of $\Theta_{l}$ make the system more robust against uncertainty, while higher values of $\Theta_{l}$ increase the network sum-rate. Therefore, by adjusting $\Theta_{l}$ below $0.2$, a trade-off between optimality and robustness can be attained. \begin{figure}[!t] \centering \includegraphics[width=3.5in]{sensitivity} \caption{Sensitivity of $\mathscr{R}_\Delta$ vs. trade-off parameter using a setup similar to that of Fig. \ref{fig:convergence}. We consider $\hat{g}_{l, 1}^{(n)} = 0.5 \times \bar{g}_{l, 1}^{(n)}$, $\hat{g}_{l, 2}^{(n)} = 0.5 \times \bar{g}_{l, 2}^{(n)}$ and $\Theta_{l,1}^{(n)} = \Theta_{l,2}^{(n)} = \Theta_{l}$ for all the RBs.} \label{fig:sensitivity} \end{figure} \subsubsection{Effect of relaying} In order to study network performance in the presence of the L3 relay, we compare the performance of the proposed scheme with a \textit{reference scheme} \cite{zul-d2d} in which an RB allocated to a CUE can be shared with at most one D2D link. The D2D link shares the same RB(s) (allocated to CUEs using \textbf{Algorithm \ref{alg:rec_alloc}}) and the D2D UEs communicate directly without using the relay only if the QoS requirements for both the CUE and D2D links are satisfied. 
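Before presenting the comparison results, we briefly illustrate the projected subgradient iteration behind the convergence behaviour in Fig. \ref{fig:convergence}, using the same diminishing step size $\Lambda_{\kappa}^{(t)} = a/\sqrt{t}$. The toy dual problem below (a single power budget with unit channel gains) is an illustrative stand-in, not the paper's full Lagrangian.

```python
import math

# Toy dual decomposition: maximize sum_i log(1 + p_i) subject to
# sum_i p_i <= P_max (unit channel gains).  For a multiplier rho, the inner
# maximization has the water-filling form p_i(rho) = max(0, 1/rho - 1), and
# rho is updated by a projected subgradient step with step size a/sqrt(t).
P_max, n_links, a = 4.0, 4, 0.01
rho = 1.0
for t in range(1, 3001):
    p = [max(0.0, 1.0 / rho - 1.0)] * n_links       # inner (water-filling) step
    step = a / math.sqrt(t)                         # Lambda_kappa^(t) = a/sqrt(t)
    rho = max(0.0, rho + step * (sum(p) - P_max))   # projection [ . ]^+
total_power = sum(p)
print(rho, total_power)
```

For this problem the optimum is $p_i = 1$ with multiplier $\rho^* = 0.5$, and the iteration settles there; with a larger $a$ the iterates oscillate before converging, mirroring the step-size sensitivity noted above.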
\begin{figure}[!t] \centering \includegraphics[width=3.5in]{gain_5_3_80_ubcomp} \caption{Average achievable data rates for D2D UEs in both the proposed and reference schemes compared to the asymptotic upper bound (for $|\mathcal{C}| = 15$, $|\mathcal{D}| = 9$, $D_{r,d}$ = 80 meter and interference threshold = $-70~ \text{dBm}$).} \label{fig:gain_5_3_80_ub} \end{figure} The average achievable data rate $R_{avg}$ for D2D links is calculated as $R_{avg} = \frac{ \displaystyle \sum_{u \in \mathcal{D}} R_{u}^{ach}}{|\mathcal{D}|}$, where $R_{u}^{ach}$ is the achievable data rate for link $u$ and $|\cdot|$ denotes the set cardinality. In Fig. \ref{fig:gain_5_3_80_ub}, we compare the performance of \textbf{Algorithm \ref{alg:rec_alloc}} with the asymptotic upper bound. Since $\mathbf{P2}$ is a relaxed version of $\mathbf{P1}$, for a sufficiently large number of RBs, the solution obtained by $\mathbf{P2}$ is asymptotically optimal and can be considered as an upper bound \cite{large-rb-dual}. In order to obtain the upper bound, we solve $\mathbf{P2}$ using the interior point method (Chapter 11 in \cite{book-boyd}). Note that solving $\mathbf{P2}$ by using the interior point method incurs a complexity of $\mathcal{O}\left( \left( |\mathbf{x}_{\boldsymbol l}| + |\mathbf{S}_{\boldsymbol l}| + |\boldsymbol{\omega}_{\boldsymbol l}| \right)^3 \right)$ (Chapter 11 in \cite{book-boyd} and \cite{im-complexity}), where $\mathbf{x}_{\boldsymbol l} = \left[ x_{1}^{(1)}, \cdots, x_{1}^{(N)}, \cdots, x_{|\mathcal{U}_l|}^{(1)}, \cdots, x_{|\mathcal{U}_l|}^{(N)} \right]^\mathsf{T}$, $\mathbf{S}_{\boldsymbol l} = \left[ S_{1,l}^{(1)}, \cdots, S_{1,l}^{(N)}, \cdots, S_{|\mathcal{U}_l|,l}^{(1)}, \cdots, S_{|\mathcal{U}_l|,l}^{(N)} \right]^\mathsf{T}$ and $\boldsymbol{\omega}_{\boldsymbol l} = \left[ \omega_{1}^{(1)}, \cdots, \omega_{1}^{(N)}, \cdots, \omega_{|\mathcal{U}_l|}^{(1)}, \cdots, \omega_{|\mathcal{U}_l|}^{(N)} \right]^\mathsf{T}$. From Fig. 
\ref{fig:gain_5_3_80_ub} it can be observed that our proposed approach, which uses relays for D2D traffic, can greatly improve the data rate, in particular when the distance increases. In addition, the proposed algorithm performs close to the upper bound with significantly lower complexity. \begin{figure}[!t] \centering \includegraphics[width=3.5in]{gain_5_3_80_ne} \caption{Gain in average achievable data rate for D2D UEs (for $|\mathcal{C}| = 15$, $|\mathcal{D}| = 9$, $D_{r,d}$ = 80 meter and interference threshold = $-70~ \text{dBm}$). For uncertain CSI, the bound on the uncertainty set for channel gain and interference (i.e., $\Psi_{l,1}^{(n)}, \Psi_{l,2}^{(n)}$, and $\Upsilon_{u_l}^{(n)}$) is considered $20\%$ for all the RBs.} \label{fig:gain_5_3_80} \end{figure} The rate gains for both perfect CSI and under uncertainties are depicted in Fig. \ref{fig:gain_5_3_80}. We calculate the rate gain as follows: $$R_{gain} = \frac{R_{prop} - R_{ref}}{R_{ref}} \times 100 \% $$ where $R_{prop}$ and $R_{ref}$ denote the average rate for the D2D links in the proposed scheme and the reference scheme, respectively. As expected, under uncertainties, the gain is reduced compared to the case when perfect channel information is available. Although the reference scheme performs better when the distance between the D2D peers is small, our proposed relay-aided D2D communication approach can greatly increase the data rate, especially when the distance increases. When the distance between the D2D UEs becomes larger, the performance of direct communication deteriorates. Besides, since the D2D links share resources with only one CUE, the spectrum may not be utilized efficiently and this decreases the achievable rate. 
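For completeness, the two rate metrics above are straightforward to compute; the per-link rates below are illustrative placeholders, not simulation outputs.

```python
# Rate-gain metric from the text: R_gain = (R_prop - R_ref) / R_ref * 100%.
def avg_rate(rates):                       # R_avg = sum of achievable rates / |D|
    return sum(rates) / len(rates)

def rate_gain_percent(r_prop, r_ref):
    return (r_prop - r_ref) / r_ref * 100.0

r_prop = avg_rate([310.0, 290.0, 305.0])   # proposed (relay-aided) D2D rates, Kbps
r_ref  = avg_rate([205.0, 190.0, 220.0])   # reference (direct) D2D rates, Kbps
gain = rate_gain_percent(r_prop, r_ref)
print(gain)
```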
\begin{figure}[!t] \centering \includegraphics[width=3.5in]{gain_5_3_rel_var} \caption{Gain in aggregated data rate with different distances between relay and D2D UEs, $D_{r,d}$, where $|\mathcal{C}| = 15$, $|\mathcal{D}| = 9$, interference threshold = $-70~ \text{dBm}$, $\Psi_{l,1}^{(n)}, \Psi_{l,2}^{(n)}$, and $\Upsilon_{u_l}^{(n)}$ are considered $20\%$ for all the RBs. For different values of $D_{r,d}$, there is a distance margin beyond which relaying D2D traffic improves network performance (i.e., the upper portion of the shaded surface where the rate gain is positive).} \label{fig:gain_5_3_80_rel_dist} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=3.5in]{gain_5_3_nd2d_var} \caption{Gain in aggregated data rate with varying number of D2D UEs (for $|\mathcal{C}| = 15$, $D_{r,d} = 80~ \text{meter}$, interference threshold = $-70~ \text{dBm}$, $\Psi_{l,1}^{(n)}, \Psi_{l,2}^{(n)}$, and $\Upsilon_{u_l}^{(n)}$ are considered $20\%$ for all the RBs).} \label{fig:gain_5_3_80_nd2d} \end{figure} The performance gain in terms of the achievable aggregated data rate under different relay-D2D UE distance is shown in Fig. \ref{fig:gain_5_3_80_rel_dist}. It can be observed that, even for relatively large relay-D2D UE distances, e.g., $D_{r,d} \geq 80 ~\text{m}$, relaying D2D traffic provides considerable rate gain for distant D2D UEs. To observe the performance of our proposed scheme in a dense network, we vary the number of D2D UEs and plot the rate gain in Fig. \ref{fig:gain_5_3_80_nd2d}. As can be seen from this figure, even in a moderately dense situation (e.g., $|\mathcal{C}| + |\mathcal{D}| = 15+12 = 27$), our proposed method provides a higher rate compared to that for direct communication between distant D2D UEs. \section{Conclusion} \label{sec:conclusion} We have provided a comprehensive resource allocation framework under channel gain uncertainty for relay-assisted D2D communication. 
Considering two major sources of uncertainty, namely, the link gain between neighbouring relay nodes in both hops and the experienced interference at each receiving network node, the uncertainty has been modeled as a bounded difference between the actual and nominal values. By modifying the protection functions in the robust problem, we have shown that the convexity of the problem is maintained. In order to allocate radio resources efficiently, we have proposed a polynomial-time distributed algorithm; to balance the cost of robustness, defined as the reduction of the achievable network sum-rate, we have provided a trade-off mechanism. Through extensive simulations, we have observed that, beyond a distance threshold, relaying the traffic of distant D2D UEs significantly improves the network performance in comparison with a direct D2D communication scheme. As a future work, this approach can be extended by considering delay as a QoS parameter. Besides, most of the resource allocation problems are formulated under the assumption that the potential D2D UEs have already been discovered. However, to develop a complete D2D communication framework, it is necessary to consider D2D discovery along with resource allocation. 
\appendices \numberwithin{equation}{section} \section{Power and RB Allocation for Nominal Problem} \label{app:power-rb-alloc-nominal} \begin{figure*}[!t] \normalsize \begin{flalign} \boldsymbol{\mathbb{L}}_l (\mathbf{x}, \mathbf{S}, \boldsymbol{\omega}, \boldsymbol{\mu},\boldsymbol{\rho}, \nu_l, \boldsymbol{\psi}, \boldsymbol{\varphi}, \boldsymbol{\lambda}, \boldsymbol{\varrho}) = \nonumber &- \sum_{u_l \in \mathcal{U}_l} \sum_{n = 1}^{N} \tfrac{1}{2} x_{u_l}^{(n)} B_{RB} \log_2 \left(1+ \tfrac{S_{u_l, l}^{(n)} h_{u_l, l, 1}^{(n)}}{x_{u_l}^{(n)} \omega_{u_l}^{(n)}}\right) + \sum_{n = 1}^{N} \mu_n \left(\sum_{u_l \in \mathcal{U}_l} x_{u_l}^{(n)} -1 \right) \nonumber \\ &+ \sum_{u_l \in \mathcal{U}_l} \rho_{u_l} \left( \sum_{n = 1}^{N} S_{u_l, l}^{(n)} - P_{u_l}^{max} \right) + \nu_l \left( \sum_{u_l \in \mathcal{U}_l } \sum_{n =1}^N \tfrac{h_{u_l, l, 1}^{(n)}}{h_{l, u_l, 2}^{(n)}} S_{u_l, l}^{(n)} - P_l^{max} \right) \nonumber \\ &+ \sum_{n = 1}^{N} \psi_n \left( \sum_{u_l \in \mathcal{U}_l } S_{u_l, l}^{(n)} g_{{u_l^*}, l, 1}^{(n)} ~ - I_{th, 1}^{(n)} \right) + \sum_{n = 1}^{N} \varphi_n \left(\sum_{u_l \in \mathcal{U}_l } \tfrac{h_{u_l, l, 1}^{(n)}}{h_{l, u_l, 2}^{(n)}} S_{u_l, l}^{(n)} g_{l, {u_l^*}, 2}^{(n)} - I_{th, 2}^{(n)} \right) \nonumber \\ &+ \sum_{u_l \in \mathcal{U}_l } \lambda_{u_l} \left(Q_{u_l} - \sum_{n=1}^N \tfrac{1}{2} x_{u_l}^{(n)} B_{RB} \log_2 \left( 1 + \tfrac{S_{u_l, l}^{(n)} h_{u_l, l, 1}^{(n)}}{x_{u_l}^{(n)} \omega_{u_l}^{(n)}} \right) \right) \nonumber \\ &+ \sum_{u_l \in \mathcal{U}_l} \sum_{n = 1}^{N} \varrho_{u_l}^n \left( I_{u_l,l}^{(n)} + \sigma^2 - \omega_{u_l}^{(n)} \right).\label{eq:lagrange-1} \end{flalign} \hrulefill \vspace*{4pt} \end{figure*} To observe the nature of power allocation for a UE, we use Karush-Kuhn-Tucker (KKT) optimality conditions and define the Lagrangian function as given in (\ref{eq:lagrange-1}), where $\boldsymbol{\lambda}$ is the vector of Lagrange multipliers associated with individual QoS requirements for 
cellular and D2D UEs. Similarly, $\boldsymbol{\mu},\boldsymbol{\rho}, \nu_l, \boldsymbol{\psi}, \boldsymbol{\varphi}$ are the Lagrange multipliers for the constraints in (\ref{eq:con-bin-relx})--(\ref{eq:con-intf-2-relx}). Differentiating (\ref{eq:lagrange-1}) with respect to $S_{u_l, l}^{(n)}$, we obtain (\ref{eq:power-alloc}) for power allocation for the link $u_l$ over RB $n$. Similarly, differentiating (\ref{eq:lagrange-1}) with respect to $x_{u_l}^{(n)}$ gives the condition for RB allocation. \section{Power and RB Allocation for Robust Problem} \label{app:power-rb-alloc-robust} To obtain a more tractable formula, for any vector $\mathbf{y}$ we use the inequality ${\parallel \mathbf{y} \parallel}_2 \leq {\parallel \mathbf{y} \parallel}_1$ and rewrite the constraints (\ref{eq:con-intf-1-robst3}) and (\ref{eq:con-intf-2-robst3}) as (\ref{eq:con-intf-1-robst-mod}) and (\ref{eq:con-intf-2-robst-mod}), respectively, where for any diagonal matrix $\mathbf{A}$, $m_{jj}$ represents the $j$-th element of ${\mathbf{A}}^{-1}(j,:)$. Considering the convexity of $\mathbf{P5}$, the Lagrange dual function can be obtained by (\ref{eq:lagrange-robust}) in which $\boldsymbol{\mu},\boldsymbol{\rho}, \nu_l, \boldsymbol{\psi}, \boldsymbol{\varphi}, \boldsymbol{\lambda}, \boldsymbol{\varrho}$ are the corresponding Lagrange multipliers. Differentiating (\ref{eq:lagrange-robust}) with respect to $S_{u_l, l}^{(n)}$ and $x_{u_l}^{(n)}$ gives (\ref{eq:power-alloc-robust}) and (\ref{eq:channel_alloc1}) for power and RB allocation, respectively. 
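The effect of the norm inequality can be checked numerically: the linearized term is never smaller than the ellipsoidal one, so the rewritten constraints are a safe (conservative) replacement. The powers, weights, and uncertainty bound below are illustrative, with identity weighting as used in the simulations.

```python
import math

# The robust constraints contain an ellipsoidal protection term of the form
# Psi * ||v||_2 with v_j = m_jj * S_j (diagonal weighting).  Replacing
# ||v||_2 by ||v||_1 (valid since ||v||_2 <= ||v||_1) yields the linear,
# more conservative constraints used for the distributed solution.
S   = [0.20, 0.15, 0.10, 0.05]   # powers S_{u_l,l}^{(n)} (illustrative)
m   = [1.0, 1.0, 1.0, 1.0]       # diagonal weights m_jj (identity)
psi = 0.2                        # 20% uncertainty bound (illustrative)

v = [mj * sj for mj, sj in zip(m, S)]
term_l2 = psi * math.sqrt(sum(x * x for x in v))   # original ellipsoidal term
term_l1 = psi * sum(abs(x) for x in v)             # linearized conservative term
print(term_l2, term_l1)
```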
\begin{figure*}[!htb] \normalsize \begin{subequations} \setlength{\arraycolsep}{0.0em} \begin{eqnarray} \sum_{u_l \in \mathcal{U}_l } S_{u_l, l}^{(n)} \bar{g}_{{u_l^*}, l, 1}^{(n)} + \Psi_{l,1}^{(n)} \sum_{u_l \in \mathcal{U}_l } m_{{u_l}{u_l}_{g_{l,1}}}^{(n)} S_{u_l, l}^{(n)} ~ &&{\leq} ~ I_{th, 1}^{(n)}, ~~ \forall n \label{eq:con-intf-1-robst-mod}\\ \sum_{u_l \in \mathcal{U}_l } \tfrac{h_{u_l, l, 1}^{(n)}} {h_{l, u_l, 2}^{(n)}} S_{u_l, l}^{(n)} \bar{g}_{l, {u_l^*}, 2}^{(n)} + \Psi_{l,2}^{(n)} \sum_{u_l \in \mathcal{U}_l } m_{{u_l}{u_l}_{g_{l,2}}}^{(n)} \tfrac{h_{u_l, l, 1}^{(n)}}{h_{l, u_l, 2}^{(n)}} S_{u_l, l}^{(n)} ~ &&{\leq} ~ I_{th, 2}^{(n)}, ~~\forall n \label{eq:con-intf-2-robst-mod} \end{eqnarray} \setlength{\arraycolsep}{5pt} \end{subequations} \hrulefill \vspace*{4pt} \end{figure*} \begin{figure*}[!t] \normalsize \begin{flalign} {\boldsymbol{\mathbb{L}}_\Delta}_l (\mathbf{x}, \mathbf{S}, \boldsymbol{\omega}, \boldsymbol{\mu},\boldsymbol{\rho}, \nu_l, \boldsymbol{\psi}, \boldsymbol{\varphi}, \boldsymbol{\lambda}, \boldsymbol{\varrho}) = &- \sum_{u_l \in \mathcal{U}_l} \sum_{n = 1}^{N} \tfrac{1}{2} x_{u_l}^{(n)} B_{RB} \log_2 \left(1+ \tfrac{S_{u_l, l}^{(n)} h_{u_l, l, 1}^{(n)}}{x_{u_l}^{(n)} \omega_{u_l}^{(n)}}\right) + \sum_{n = 1}^{N} \mu_n \left(\sum_{u_l \in \mathcal{U}_l} x_{u_l}^{(n)} -1 \right) \nonumber\\ &+ \sum_{u_l \in \mathcal{U}_l} \rho_{u_l} \left( \sum_{n = 1}^{N} S_{u_l, l}^{(n)} - P_{u_l}^{max} \right) + \nu_l \left( \sum_{u_l \in \mathcal{U}_l } \sum_{n =1}^N \tfrac{h_{u_l, l, 1}^{(n)}}{h_{l, u_l, 2}^{(n)}} S_{u_l, l}^{(n)} - P_l^{max} \right) \nonumber \\ &+ \sum_{n = 1}^{N} \psi_n \left( \sum_{u_l \in \mathcal{U}_l } S_{u_l, l}^{(n)} \bar{g}_{{u_l^*}, l, 1}^{(n)} + \Psi_{l,1}^{(n)} \sum_{u_l=1}^{|\mathcal{U}_l|} \left( m_{{u_l}{u_l}_{g_{l,1}}}^{(n)} S_{u_l, l}^{(n)} \right) - I_{th, 1}^{(n)} \right) \nonumber \\ &+ \sum_{n = 1}^{N} \varphi_n \left(\sum_{u_l \in \mathcal{U}_l } \tfrac{h_{u_l, l, 1}^{(n)}}{h_{l, u_l, 2}^{(n)}} S_{u_l, 
l}^{(n)} \bar{g}_{l, {u_l^*}, 2}^{(n)} + \Psi_{l,2}^{(n)} \sum_{u_l=1}^{|\mathcal{U}_l|} \left( m_{{u_l}{u_l}_{g_{l,2}}}^{(n)} \tfrac{h_{u_l, l, 1}^{(n)}}{h_{l, u_l, 2}^{(n)}} S_{u_l, l}^{(n)} \right) - I_{th, 2}^{(n)} \right) \nonumber \\ &+ \sum_{u_l \in \mathcal{U}_l } \lambda_{u_l} \left(Q_{u_l} - \sum_{n=1}^N \tfrac{1}{2} x_{u_l}^{(n)} B_{RB} \log_2 \left( 1 + \tfrac{S_{u_l, l}^{(n)} \gamma_{u_l, l, 1}^{(n)}}{x_{u_l}^{(n)} \omega_{u_l}^{(n)}} \right) \right) \nonumber \\ &+ \sum_{u_l \in \mathcal{U}_l} \sum_{n = 1}^{N} \varrho_{u_l}^n \left( \bar{I}_{u_l,l}^{(n)} + \Delta_{I_{u_l,l}}^{(n)} + \sigma^2 - \omega_{u_l}^{(n)} \right). \label{eq:lagrange-robust} \end{flalign} \hrulefill \vspace*{4pt} \end{figure*} \section{Update of variables and Lagrange multipliers} \label{app:lagrange_update} After finding the optimal solution, i.e., ${P_{u_l,l}^{(n)}}^*$ and ${x_{u_l}^{(n)}}^*$, the primal and dual variables at the $(t+1)$-th iteration are updated using (\ref{eq:lagrange_update1})--(\ref{eq:lagrange_update8}), where $\Lambda_{\kappa}^{(t)}$ is the small step size for variable $\kappa$ at iteration $t$ and the partial derivative of the Lagrange dual function with respect to $\omega_{u_l}^{(n)}$ is \begin{equation} \label{eq:partial_omega} \frac{\partial {\boldsymbol{\mathbb{L}}_\Delta}_l}{\partial \omega_{u_l}^{(n)}} = \tfrac{1}{2} B_{RB} \frac{\left(\lambda_{u_l} + 1 \right) x_{u_l}^{(n)} S_{u_l,l}^{(n)} h_{u_l,l,1}^{(n)}}{ \omega_{u_l}^{(n)} \left(x_{u_l}^{(n)}\omega_{u_l}^{(n)} + S_{u_l,l}^{(n)} h_{u_l,l,1}^{(n)} \right) \ln 2} - \varrho_{u_l}^n. 
\end{equation} \begin{figure*}[!t] \normalsize \begin{subequations} \setlength{\arraycolsep}{0.0em} \begin{align} \omega_{u_l}^{(n)}(t+1) &= \left[ \omega_{u_l}^{(n)}(t) - \Lambda_{\omega_{u_l}^{(n)}}^{(t)} \frac{\partial \boldsymbol{\mathbb{L}_l}}{\partial \omega_{u_l}^{(n)}}\bigg|_{t} \right]^+ \label{eq:lagrange_update1}\\ \mu_n(t+1) &= \left[ \mu_n(t) + \Lambda_{\mu_n}^{(t)} \left(\sum_{u_l \in \mathcal{U}_l} x_{u_l}^{(n)} -1 \right) \right]^+ \label{eq:lagrange_update2}\\ \rho_{u_l}(t+1) &= \left[ \rho_{u_l}(t) + \Lambda_{\rho_{u_l}}^{(t)} \left( \sum_{n = 1}^{N} S_{u_l, l}^{(n)} - P_{u_l}^{max} \right) \right]^+ \label{eq:lagrange_update3}\\ \nu_l(t+1) &= \left[ \nu_l(t) + \Lambda_{\nu_l}^{(t)} \left( \sum_{u_l \in \mathcal{U}_l } \sum_{n =1}^N \tfrac{h_{u_l, l, 1}^{(n)}}{h_{l, u_l, 2}^{(n)}} S_{u_l, l}^{(n)} - P_l^{max} \right) \right]^+ \label{eq:lagrange_update4}\\ \psi_n(t+1) &= \left[ \psi_n(t) + \Lambda_{\psi_n}^{(t)} \left( \sum_{u_l \in \mathcal{U}_l } S_{u_l, l}^{(n)} \bar{g}_{{u_l^*}, l, 1}^{(n)} + \Psi_{l,1}^{(n)} \sum_{u_l=1}^{|\mathcal{U}_l|} \left( m_{{u_l}{u_l}_{g_{l,1}}}^{(n)} S_{u_l, l}^{(n)} \right) - I_{th, 1}^{(n)} \right) \right]^+ \label{eq:lagrange_update5}\\ \varphi_n(t+1) &= \left[ \varphi_n(t) + \Lambda_{\varphi_n}^{(t)} \left(\sum_{u_l \in \mathcal{U}_l } \tfrac{h_{u_l, l, 1}^{(n)}}{h_{l, u_l, 2}^{(n)}} S_{u_l, l}^{(n)} \bar{g}_{l, {u_l^*}, 2}^{(n)} + \Psi_{l,2}^{(n)} \sum_{u_l=1}^{|\mathcal{U}_l|} \left( m_{{u_l}{u_l}_{g_{l,2}}}^{(n)} \tfrac{h_{u_l, l, 1}^{(n)}}{h_{l, u_l, 2}^{(n)}} S_{u_l, l}^{(n)} \right) - I_{th, 2}^{(n)} \right) \right]^+ \label{eq:lagrange_update6}\\ \lambda_{u_l}(t+1) &= \left[ \lambda_{u_l}(t) + \Lambda_{\lambda_{u_l}}^{(t)} \left(Q_{u_l} - \sum_{n=1}^N \tfrac{1}{2} x_{u_l}^{(n)} B_{RB} \log_2 \left( 1 + \tfrac{S_{u_l, l}^{(n)} \gamma_{u_l, l, 1}^{(n)}}{x_{u_l}^{(n)} \omega_{u_l}^{(n)}} \right) \right) \right]^+ \label{eq:lagrange_update7} \\ \varrho_{u_l}^n(t+1) &= \left[ \varrho_{u_l}^n(t) + 
\Lambda_{\varrho_{u_l}^n}^{(t)} \left( \bar{I}_{u_l,l}^{(n)} + \Delta_{I_{u_l,l}}^{(n)} + \sigma^2 - \omega_{u_l}^{(n)} \right) \right]^+. \label{eq:lagrange_update8} \end{align} \setlength{\arraycolsep}{5pt} \end{subequations} \hrulefill \vspace*{4pt} \end{figure*} \section{Proof of Proposition \ref{theorem:robust-tradeoff}} \label{app:sensitivity} \begin{figure*}[!t] \normalsize \begin{eqnarray} \label{eq:inf_function} &\mathscr{R}^*(\mathbf{a}, \mathbf{b}, \mathbf{c}) = \inf \Bigg\{ \left. \underset{x_{u_l}^{(n)}, S_{u_l, l}^{(n)}, \omega_{u_l}^{(n)}}{\operatorname{max}} ~ \displaystyle \sum_{u_l \in \mathcal{U}_l } \sum_{n =1}^N \frac{1}{2} x_{u_l}^{(n)} B_{RB} \log_2 \left( 1 + \frac{S_{u_l, l}^{(n)} h_{u_l, l, 1}^{(n)}}{x_{u_l}^{(n)} \omega_{u_l}^{(n)}} \right) \right\vert, \nonumber \\ & \displaystyle \sum_{u_l \in \mathcal{U}_l} x_{u_l}^{(n)} \leq 1, \quad \sum_{n =1}^N S_{u_l, l}^{(n)} \leq P_{u_l}^{max}, \quad \sum_{u_l \in \mathcal{U}_l } \sum_{n =1}^N \frac{h_{u_l, l, 1}^{(n)}}{h_{l, u_l, 2}^{(n)}} S_{u_l, l}^{(n)} \leq P_l^{max}, \nonumber\\ & \displaystyle \sum_{u_l \in \mathcal{U}_l } S_{u_l, l}^{(n)} \bar{g}_{{u_l^*}, l, 1}^{(n)} + \Delta_{g_{l, 1}}^{(n)} \leq I_{th, 1}^{(n)}, \quad \sum_{u_l \in \mathcal{U}_l } \frac{h_{u_l, l, 1}^{(n)}} {h_{l, u_l, 2}^{(n)}} S_{u_l, l}^{(n)} \bar{g}_{l, {u_l^*}, 2}^{(n)} + \Delta_{g_{l, 2}}^{(n)} \leq I_{th, 2}^{(n)}, \nonumber\\ & \displaystyle \sum_{n=1}^N \frac{1}{2} x_{u_l}^{(n)} B_{RB} \log_2 \left( 1 + \frac{S_{u_l, l}^{(n)} h_{u_l, l, 1}^{(n)}}{x_{u_l}^{(n)} \omega_{u_l}^{(n)}} \right) \geq Q_{u_l}, \quad S_{u_l, l}^{(n)} \geq 0, \quad \bar{I}_{u_l,l}^{(n)} + \Delta_{I_{u_l,l}}^{(n)} + \sigma^2 \leq \omega_{u_l}^{(n)} \Bigg\}. 
\end{eqnarray} \hrulefill \vspace*{4pt} \end{figure*} Since $\mathbf{P4}$ is a perturbed version of $\mathbf{P2}$ with protection functions in the constraints (\ref{eq:con-intf-1-relx}), (\ref{eq:con-intf-2-relx}), and (\ref{eq:con-aux-relx}), to obtain (\ref{eq:robust-opt-tradeoff}), we use local sensitivity analysis of $\mathbf{P4}$ by perturbing its constraints (Chapter IV in \cite{sensitivity_book}, Section 5.6 in \cite{book-boyd}). Let the elements of $\mathbf{a}, \mathbf{b}, \mathbf{c}$ contain $\Delta_{g_{l, 1}}^{(n)}, \Delta_{g_{l, 2}}^{(n)}$ $\forall n$, and $\Delta_{I_{u_l, l}}^{(n)}$ $\forall u_l, n$, where $\mathscr{R}^*(\mathbf{a}, \mathbf{b}, \mathbf{c})$ is given by (\ref{eq:inf_function}). When $\Delta_{g_{l, 1}}^{(n)}, \Delta_{g_{l, 2}}^{(n)}$, and $\Delta_{I_{u_l, l}}^{(n)}$ are small, $\mathscr{R}^*(\mathbf{a}, \mathbf{b}, \mathbf{c})$ is differentiable with respect to the perturbation vectors $\mathbf{a}, \mathbf{b}$, and $ \mathbf{c}$ (Chapter IV in \cite{sensitivity_book}). Using a Taylor series expansion, (\ref{eq:inf_function}) can be written as \begin{eqnarray} \label{eq:taylor} \mathscr{R}^*(\mathbf{a}, \mathbf{b}, \mathbf{c}) = \mathscr{R}^*(\mathbf{0}, \mathbf{0}, \mathbf{0}) + \sum_{n=1}^{N} a_n \frac{\partial \mathscr{R}^*(\mathbf{0}, \mathbf{b}, \mathbf{c})}{\partial a_n} + \nonumber \\ \sum_{n=1}^{N} b_n \frac{\partial \mathscr{R}^*(\mathbf{a}, \mathbf{0}, \mathbf{c})}{\partial b_n} + \sum_{u_l \in \mathcal{U}_l}\sum_{n=1}^{N} c_{u_l}^n \frac{\partial \mathscr{R}^*(\mathbf{a}, \mathbf{b}, \mathbf{0})}{\partial c_{u_l}^n} + o \nonumber \\ \end{eqnarray} where $\mathscr{R}^*(\mathbf{0}, \mathbf{0}, \mathbf{0}) $ is the optimal value for $\mathbf{P2}$, $\mathbf{0}$ is the zero vector, and $o$ is the truncation error in the Taylor series expansion. Note that $\mathscr{R}^*(\mathbf{0}, \mathbf{0}, \mathbf{0})$ and $\mathscr{R}^*(\mathbf{a}, \mathbf{b}, \mathbf{c})$ are equal to $R^*$ and $R_\Delta^*$, respectively. 
Since $\mathbf{P2}$ is convex, $\mathscr{R}^*(\mathbf{a}, \mathbf{b}, \mathbf{c})$ is obtained from the Lagrange dual function [i.e., (\ref{eq:lagrange-1})] of $\mathbf{P2}$; and using the sensitivity analysis (Chapter IV in \cite{sensitivity_book}), we have $\frac{\partial \mathscr{R}^*(\mathbf{0}, \mathbf{b}, \mathbf{c})}{\partial a_n} \approx -\psi_n^*$, $\frac{\partial \mathscr{R}^*(\mathbf{a}, \mathbf{0}, \mathbf{c})}{\partial b_n} \approx -\varphi_n^*$ and $\frac{\partial \mathscr{R}^*(\mathbf{a}, \mathbf{b}, \mathbf{0})}{\partial c_{u_l}^n} \approx -\varrho_{u_l}^{n*}$. Rearranging (\ref{eq:taylor}) we obtain \begin{equation} \label{eq:sensitivity-proof} R_\Delta^* - R^* \approx - \sum_{n=1}^{N} \psi_n^* \Delta_{g_{l, 1}}^{(n)} - \sum_{n=1}^{N} \varphi_n^* \Delta_{g_{l, 2}}^{(n)} - \sum_{u_l \in \mathcal{U}_l}\sum_{n=1}^{N} \varrho_{u_l}^{n*} \Delta_{I_{u_l, l}}^{(n)}. \end{equation} Since $\psi_n^*, \varphi_n^*, \varrho_{u_l}^{n*} $ are non-negative Lagrange multipliers, the achievable sum-rate is reduced compared to the case in which perfect channel information is available. \section{Parameters used for approximations in the chance constraint approach} \label{app:berns_val} In order to balance the robustness and optimality, the parameters used for safe approximations of the chance constraints (obtained from \cite{robust-theroy}) are given in Table \ref{tab:berns_val}. 
\begin{table}[!h] \renewcommand{\arraystretch}{1.3} \caption{Values of $\eta_{\mathcal{P}_{j}}^{+}$ and $\tau_{\mathcal{P}_{j}}$ for Typical Families of Probability Distribution $\mathcal{P}_{j}$} \label{tab:berns_val} \centering \begin{tabular}{l||c|c} \hline $\mathcal{P}_{j}$ & $\eta_{\mathcal{P}_{j}}^{+}$ & $\tau_{\mathcal{P}_{j}}$\\ \hline\hline $\sup \left\lbrace \mathcal{P}_{j} \right\rbrace \in [-1,+1]$ & $1$ & $0$ \\ $\sup \left\lbrace \mathcal{P}_{j} \right\rbrace$ is unimodal and $\sup \left\lbrace \mathcal{P}_{j} \right\rbrace \in [-1,+1]$ & $\frac{1}{2}$ & $\frac{1}{\sqrt{12}}$ \\ $\sup \left\lbrace \mathcal{P}_{j} \right\rbrace$ is unimodal and symmetric & $0$ & $\frac{1}{\sqrt{3}}$ \\ \hline \end{tabular} \end{table} \section*{Acknowledgment} This work was supported in part by a University of Manitoba Graduate Fellowship, in part by the Natural Sciences and Engineering Research Council of Canada (NSERC) Strategic Grant (STPGP 430285), and in part by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIP) (NRF-2013R1A2A2A01067195). \bibliographystyle{IEEEtran}
\section{Introduction} Quantum key distribution (QKD) has attracted great attention as a feasible application of quantum information science with current device technology \cite{donna}. The goal of a QKD protocol is to share a random bit sequence, not known to the eavesdropper Eve, between the legitimate sender Alice and the receiver Bob. The fundamental feature of QKD protocols is that the maximum amount of information gained by Eve can be determined from the channel estimate between Alice and Bob. Such a task cannot be accomplished by classical key distribution schemes. If the estimated amount is lower than a threshold, then Alice and Bob determine the length of a secret key from the estimated amount of Eve's information, and can share the secret key by performing information reconciliation (error correction) and privacy amplification. Since the key rate, which is the length of securely sharable key per channel use, is one of the most important criteria for the efficiency of QKD protocols, the estimation of the channel is of primary importance. Conventionally, in the Bennett-Brassard 1984 (BB84) protocol \cite{bb84}, only the statistics of matched measurement outcomes, which are transmitted and received in the same basis, are used to estimate the quantum channel; mismatched measurement outcomes, which are transmitted and received in different bases, are discarded in the conventionally used channel estimation methods. By contrast, Watanabe \textit{et al.}\ \cite{watanabe:08} showed that by using the statistics of mismatched measurement outcomes in addition to those of matched measurement outcomes, we can estimate a quantum channel more accurately, and thereby a higher key rate can be achieved than with the conventional method. However, their analysis was only asymptotic, i.e., they assumed that the number of sample bits for channel estimation is infinite. Hence, for practical use, it is necessary to perform a non-asymptotic analysis.
For a non-asymptotic analysis of the QKD protocol, Scarani \textit{et al.}\ formulated a lower bound on the secure key rate \cite{PRL,scarani:08}. Other research on non-asymptotic analysis is surveyed by Cai \textit{et al.}\ \cite{cai:08}. Since the formula by Scarani \textit{et al.}\ has enough generality, in theory it enables us to calculate not only the non-asymptotic key rate based on the conventional channel estimation but also the one based on the accurate channel estimation \cite{watanabe:08} for the BB84 protocol. On the other hand, Cai \textit{et al.}\ \cite[p.4]{cai:08} suggested that the lower bound on the secure key rate shown by Scarani \textit{et al.}\ could be improved. In the channel estimation step shown by Scarani \textit{et al.}, the channel parameter is guessed by \textit{interval estimation}. However, the method of constructing the confidence region of the interval estimation is not unique. Even when we use a confidence region different from the one shown by Scarani \textit{et al.}, if it satisfies the condition of \textit{conservativeness}, the security of the final key is guaranteed. Specifically, even if \textit{one-sided interval estimation} is used, the security is still guaranteed. In this paper, we show two things. First, we show several methods of reconstructing the confidence region, and the fact that they increase the non-asymptotic secure key rate in the BB84 protocol. Second, we show the utility of the accurate channel estimation method in the BB84 protocol using finite sample bits. To do this, we compare the non-asymptotic key rate based on the accurate channel estimation to the conventional one by numerical computation over the amplitude damping channel and the depolarizing channel. We stress that the assumptions used in this paper are exactly the same as in \cite{PRL}. In particular, we assume no prior knowledge of the channel or the channel model for the accurate channel estimation, just as in its asymptotic case \cite{watanabe:08}.
In the numerical comparison in Section \ref{sec:numcompt}, we shall use the depolarizing channel and the amplitude damping channel to generate the measurement outcomes, but the proposed protocols do not assume knowledge of the underlying channels, and estimate the channel among all the possible channels. The rest of this paper is organized as follows: We first review previously known results in Section \ref{section-pre}. Second, we show several methods to improve the key rate and the results of the improvements in Section \ref{section-imp}. Last, we state the conclusion in Section \ref{section-conclusion}. \section{Preliminaries} \label{section-pre} \subsection{BB84 protocol} A typical QKD protocol is the BB84 protocol, invented by Bennett and Brassard \cite{bb84}. The goal of the BB84 protocol is to share a random bit sequence, not known to the eavesdropper Eve, between the legitimate sender Alice and the receiver Bob. In the following, we briefly describe the flow of the protocol and the accurate channel estimation shown by Watanabe \textit{et al.}\ \cite{watanabe:08} for the BB84 protocol. \subsubsection{Overview of BB84 protocol} The BB84 protocol consists of the following four steps: \begin{enumerate} \item \textit{Distribution of quantum information: } Alice sends $N$ quantum objects, for example photon polarizations, to Bob over a quantum channel. \item \textit{Parameter (or channel) estimation: }Alice and Bob disclose a part of the transmitted/received information to each other to estimate the quantum channel between Alice and Bob.\label{step2} \item \textit{Information reconciliation: }For the bit string not disclosed in step \ref{step2}, Alice sends the syndrome to Bob, and Bob corrects errors by using the syndrome.
\label{step3} \item \textit{Privacy amplification: }Alice and Bob compress the bit strings corrected in step \ref{step3} according to the same hash function so that the compressed bit string is statistically independent of the information obtained by Eve. The compressed bit string is the final key. \end{enumerate} The security of the final key obtained by the BB84 protocol can be proven from the axioms of quantum mechanics alone \cite{PRL,scarani:08}. \subsubsection{Accurate channel estimation} \label{section-accurate} In this section, we explain the distribution of quantum information and the conventional parameter estimation more concretely. Moreover, we explain the accurate channel estimation shown by Watanabe \textit{et al.}\ \cite{watanabe:08} for the BB84 protocol. Alice first randomly sends bit $0$ or $1$ to Bob by modulating it into a transmission basis that is randomly chosen from the $\san{z}$-basis $\{ \ket{0_\san{z}}, \ket{1_\san{z}} \}$ and the $\san{x}$-basis $\{ \ket{0_\san{x}}, \ket{1_\san{x}} \}$, where $\ket{0_a},\ket{1_a}$ are eigenstates of the Pauli matrix $\sigma_a$ for $a\in\{\san{z},\san{x}\}$. Then Bob randomly chooses one of the measurement observables $\sigma_\san{z}$ and $\sigma_\san{x}$, and converts a measurement result $+1$ or $-1$ into a bit $0$ or $1$, respectively. After $N$ transmissions, Alice and Bob publicly announce their transmission bases and measurement observables. They also announce $m(<N)$ bits of their bit sequences for estimating the channel $\mathcal{E}_B$ from Alice to Bob. Conventionally, Alice and Bob discard mismatched measurement outcomes, which are transmitted and received in different bases \cite{shor}. In contrast, Watanabe \textit{et al.}\ \cite{watanabe:08} showed that by using the statistics of mismatched measurement outcomes in addition to those of matched measurement outcomes, we can estimate a quantum channel more accurately, and thereby the key rate is no lower than the conventional one.
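The transmission and sifting steps described above can be illustrated by a toy Monte Carlo sketch (our own illustrative code, assuming an ideal noiseless channel; when the bases mismatch, Bob's outcome is uniformly random):

```python
import random

def simulate_bb84_sifting(n, seed=0):
    """Toy BB84 sifting over an ideal (noiseless) channel.

    With matched bases Bob's bit equals Alice's bit; with mismatched
    bases Bob's outcome is uniformly random and is discarded by the
    conventional channel estimation.
    """
    rng = random.Random(seed)
    matched = []
    mismatched = 0
    for _ in range(n):
        a_bit = rng.randint(0, 1)
        a_basis = rng.choice("zx")
        b_basis = rng.choice("zx")
        if a_basis == b_basis:
            matched.append((a_bit, a_bit))  # no error on an ideal channel
        else:
            mismatched += 1
    return matched, mismatched

matched, mismatched = simulate_bb84_sifting(10 ** 4)
qber = sum(a != b for a, b in matched) / len(matched)  # 0 on the ideal channel
```

On a noisy channel the matched pairs would exhibit a nonzero error rate, and the mismatched outcomes discarded here are precisely the statistics that the accurate channel estimation exploits.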
In particular, the key rate is generally improved over the conventional one for any channel, and the two key rates are equal only if the quantum channel is a Pauli channel \cite{watanabe:09}. \subsection{Method of types} \label{type} In this section, we review the method of types \cite[Chapter 11]{cover:01}, which is used in this paper. Let $\cal{X}$ be a finite set. For a sequence $x^m = (x_1, \ldots, x_m) \in {\cal X}^m$, the type of $x^m$ is the empirical probability distribution $P_{x^m}$ defined by \begin{eqnarray*} P_{x^m}(a) := \frac{ | \{ i \mid x_i = a \} | }{m}~~~~~~\mbox{for } a \in {\cal X}. \end{eqnarray*} Then, the following theorems hold. \begin{theorem}\label{sebsection-theorem1} \textnormal{\cite[Theorem 11.2.1]{cover:01}\; Let $P$ be a probability distribution on $\mathcal{X}$ and $P_{x^m}$ be the type of the sequence $x^m$ drawn according to the $m$-fold product distribution $P^m$. Then, for any $\delta>0$, \begin{equation*} \textnormal{Pr}\bigl[ D(P_{x^m}|| P) > \delta \bigl] \leq 2^{-m(\delta-|\mathcal{X}|\frac{\log_2(m+1)}{m})} \end{equation*} where $D(\cdot)$ is the relative entropy. Note that the base of logarithms and of (conventional) entropies is $2$ throughout this paper.} \end{theorem} \begin{lemma} \label{lemma} \textnormal{\cite[Theorem 11.6.1]{cover:01}\; Let $P$ and $Q$ be probability distributions. Then \begin{equation*} ||P - Q||_1 \leq \sqrt{2(\ln2)D(P || Q)} \end{equation*} where $||\cdot||_1$ is the variational distance defined by $||P_1-P_2||_1:=\sum_{x\in\mathcal{X}} |P_1(x)-P_2(x)|$ for probability mass functions $P_1,P_2$ on $\mathcal{X}$.} \end{lemma} \begin{corollary}\label{coro10} \textnormal{ Let $P_{x^m}$ be the type of the sequence $x^m$ drawn according to $P^m$. For any $\delta>0$, \begin{equation*} \mathrm{Pr}\bigl[ ||P_{x^m} - P||_1 > \delta \bigl] \leq 2^{-m(\frac{\delta^2}{2\ln{2}}-|\mathcal{X}|\frac{\log_2(m+1)}{m})}.
\end{equation*} } \end{corollary} \subsection{Non-asymptotic key rate analysis} In this section, we rephrase the non-asymptotic key rate analysis shown by Scarani \textit{et al.}\ \cite{scarani:08} for the BB84 protocol in terms of interval estimation. This paraphrase is necessary to clarify the relation between the method shown by Scarani \textit{et al.}\ and our proposed one. Note that interval estimation of a quantum channel for QKD protocols is also discussed in \cite{leverrier:09,leverrier10}. \subsubsection{Interval estimation} Here we briefly review some basic concepts of interval estimation. See textbooks of statistics for more details (e.g. \cite{casella-berger:01}). The goal of interval estimation is to estimate the unknown statistical parameter $\theta$ from observed samples. First, we define the \textit{confidence region}. Let a sample sequence $X=X_1,\cdots,X_n\sim P_{\theta}$ be \textit{i.i.d.}, and let $\Theta$ be the parameter space. For any $\alpha$ between $0$ and $1$, if a set $C(X) \subset\Theta$ satisfies \begin{equation} \forall\theta\in\Theta, P_{\theta}[\theta\in C(X)] \gtrsim 1-\alpha, \label{intereval-region} \end{equation} then $C(X)$ is called a \textit{confidence region}. In particular, if $\theta$ is real-valued, then $C(X)$ is usually an interval of real numbers, sometimes called a \textit{confidence interval}. In addition, the real number $1-\alpha$ is called the \textit{confidence level} or \textit{confidence coefficient}. If the inequality in Eq.~(\ref{intereval-region}) is always satisfied, i.e. \begin{equation*} \forall\theta\in\Theta, P_{\theta}[\theta\in C(X)] \geq 1-\alpha, \end{equation*} then such a $C(X)$ is called a \textit{conservative} confidence region. Second, we describe \textit{one-sided interval estimation}. Suppose that $\theta$ is a real number.
One-sided interval estimation is defined as constructing an upper bound on $\theta$ satisfying \begin{equation} \forall\theta\in\Theta, P_{\theta}[\theta \leq C(X)] \gtrsim 1-\alpha. \label{one-sided} \end{equation} The interval $(-\infty,C(X)]$ is called a \textit{one-sided confidence interval} with confidence level $1-\alpha$. Of course, if the inequality in Eq.~(\ref{one-sided}) is always satisfied, the interval $(-\infty,C(X)]$ is conservative. \subsubsection{Channel estimation using finite sample bits} One of the practical issues of QKD protocols is that the sample bits used for channel estimation are limited to a finite number. Scarani \textit{et al.}\ showed a method for interval estimation of the quantum channel \cite{PRL,scarani:08}. Hereafter, the basis $\{\ket{0},\ket{1}\}$ is the $\san{z}$-basis unless otherwise stated. The channel $\mathcal{E}_B$, which denotes a qubit channel from Alice to Bob, can also be described by the Choi operator \cite{choi} $\rho_{AB} := (id\otimes\mathcal{E}_B)(\ket{\psi}\bra{\psi})$ for the Bell state $\ket{\psi}=\frac{1}{\sqrt{2}}(\ket{0}\ket{0}+\ket{1}\ket{1})$. For any $\epsilon_{PE} (0 \leq \epsilon_{PE} \leq 1)$, let \cite{cai:08} \begin{eqnarray} \xi &:=& \sqrt{\frac{2\ln{(1/\epsilon_{PE})}+2d\ln{(m+1)}}{m}}, \nonumber \\ \Gamma_{\xi} &:=& \bigl\{ \rho_{AB} : ||\lambda_m - \lambda_{\infty}(\rho_{AB})||_1 \leq \xi \bigl\}, \label{variational} \end{eqnarray} where $\lambda_m$ is the empirical distribution obtained by measuring $m$ samples of $\rho_{AB}$ according to a POVM measurement with $d$ outcomes, and $\lambda_{\infty}(\rho_{AB})$ denotes the perfect statistics in the limit of infinitely many measurements. Then $\Gamma_{\xi}$ can be interpreted as a conservative confidence region with confidence level $1-\epsilon_{PE}$ for the qubit channel $\rho_{AB}$.
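For concreteness, the radius $\xi$ is elementary to evaluate, and choosing $\delta=\xi$ in the bound of Corollary~\ref{coro10} recovers exactly $\epsilon_{PE}$; a short numerical check (the parameter values below are illustrative only):

```python
import math

def xi_radius(m, eps_pe, d):
    """Radius xi of the variational-distance confidence region Gamma_xi."""
    return math.sqrt((2 * math.log(1 / eps_pe) + 2 * d * math.log(m + 1)) / m)

def type_tail_bound(m, delta, alphabet_size):
    """Upper bound of Corollary (coro10) on Pr[||lambda_m - lambda_inf||_1 > delta]."""
    exponent = m * (delta ** 2 / (2 * math.log(2))
                    - alphabet_size * math.log2(m + 1) / m)
    return 2.0 ** (-exponent)

m, eps_pe, d = 10 ** 5, 1e-10, 4  # illustrative sample size, failure prob., outcomes
xi = xi_radius(m, eps_pe, d)      # plugging delta = xi gives back eps_pe
```

The second assertion below also confirms the expected behavior that the region shrinks as the sample size grows.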
Indeed, for any $\rho_{AB}$, we can see \begin{eqnarray} \textnormal{Pr}[||\lambda_m - \lambda_{\infty }(\rho_{AB})||_1 \leq \xi] &\geq& 1-2^{-m(\frac{\xi^2}{2\ln{2}}-d\frac{\log_2{(m+1)}}{m})}\label{eq100}\\ &=&1-\epsilon_{PE}\nonumber \end{eqnarray} by Corollary \ref{coro10} in Section \ref{type}. Note that the definition of the variational distance used in this paper is the same as in \cite{cover:01} and twice as large as the one used in \cite{cai:08}; accordingly, the right-hand side of Eq.\ (\ref{eq100}) is twice as large as \cite[Eq.\ (3)]{cai:08}, where \cite[Eq.\ (3)]{cai:08} is corrected in its erratum. By the same argument as in \cite{cai:08}, we see that for $d=2$ we can use \begin{equation} p_\mathrm{observed}+ \sqrt{\frac{\ln{(1/\epsilon_{PE})}+2\ln{(m+1)}}{2m}}\label{eq101} \end{equation} as the worst-case estimate of the so-called phase error rate, where $p_\mathrm{observed}$ is the actually observed phase error rate. We shall use Eq.\ (\ref{eq101}) in the numerical comparison in Section \ref{sec:numcompt}. \subsubsection{Lower bound on the secure key rate of the BB84 protocol} First, we define \textit{$\epsilon$-security} \cite{ben:04,renner:04}. For any $\epsilon \geq 0$, a final key $K$ is said to be \textit{$\epsilon$-secure} with respect to an adversary Eve if the joint state $\rho_{KE}$ satisfies \begin{equation*} ||\rho_{KE}-\tau_{K}\otimes\rho_{E}|| \leq \epsilon, \end{equation*} where $\tau_K$ is the completely mixed state on the key space $\mathcal{S}_K$, and $||\cdot||$ is the trace distance. The parameter $\epsilon$ can be interpreted as the maximum failure probability, i.e., the probability that an adversary might have gained some information on $K$. Next, we describe the lower bound on the $\epsilon$-secure key rate of the BB84 protocol using finite samples shown by Scarani \textit{et al.}\ \cite{scarani:08}.
If the length $l$ of the final key is \begin{equation} l = N \bigl[ \min_{\rho_{AB}\in\Gamma_{\xi}}S_{\rho_{AB}}(X|E)-\delta(\bar{\epsilon}) \bigl] -\textnormal{leak}_{\epsilon_{EC}}-2\log_2{\frac{1}{\epsilon_{PA}}}, \label{lower-bound} \end{equation} then the final key is \textit{$\epsilon$-secure}, where $S_{\rho_{AB}}(X|E)$ is the conditional von Neumann entropy for the state $\rho_{AB}$, $\Gamma_{\xi}$ is the confidence region for $\rho_{AB}$ with confidence level $1-\epsilon_{PE}$, and $\epsilon\geq\epsilon_{PE}$. See \cite{PRL,scarani:08} for more details on Eq.~(\ref{lower-bound}). This formula enables us to calculate the non-asymptotic key rates based on the accurate channel estimation and the conventional one for the BB84 protocol, respectively. \begin{remark} \textnormal{Eve's ambiguity for Alice's bit, $S_{\rho_{AB}}(X|E)$, can be calculated from the Choi operator $\rho_{AB}$ as follows. Let the density operator $\rho_{XB}$ be derived by measurement on Alice's system, i.e., $\rho_{XB}:=\sum_{x\in\mathbb{F}_2}(\ket{x}\bra{x}\otimes I)\rho_{AB}(\ket{x}\bra{x}\otimes I)$. The conditional von Neumann entropy $S_{\rho_{AB}}(X|E)$ is defined by $S_{\rho_{AB}}(X|E):=S(\rho_{XE})-S(\rho_E)$, where $S(\cdot)$ is the von Neumann entropy and $\rho_{E}$ is the partial trace of $\psi_{ABE}$, the purification of $\rho_{AB}$, over the joint system of Alice and Bob. Noting that $S(\rho_{XE})=S(\rho_{XB})$ and $S(\rho_{E})=S(\rho_{AB})$, $S_{\rho_{AB}}(X|E)$ can be calculated by $S_{\rho_{AB}}(X|E)=S(\rho_{XB})-S(\rho_{AB})$. } \end{remark} \section{Improvement of key rate} \label{section-imp} In this section, we present several methods of improving the lower bound on the secure key rate by replacing the confidence region shown in Eq.~(\ref{variational}). In general, the smaller the confidence region $\Gamma_{\xi}$ is, the bigger Eve's worst-case ambiguity $\min_{\Gamma_{\xi}}S_{\rho_{AB}}(X|E)$ can grow.
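Once Eve's worst-case ambiguity has been computed, evaluating Eq.~(\ref{lower-bound}) itself is simple arithmetic; a sketch with purely illustrative numbers (none taken from our computations):

```python
import math

def key_length(N, worst_case_ambiguity, delta_bar, leak_ec, eps_pa):
    """Final key length l of Eq. (lower-bound); a negative value means no key."""
    return (N * (worst_case_ambiguity - delta_bar)
            - leak_ec - 2 * math.log2(1 / eps_pa))

# Illustrative only: 10^5 sifted bits, worst-case ambiguity 0.6 bit per bit,
# finite-size correction 0.05, 2*10^4 bits leaked in error correction.
l = key_length(N=10 ** 5, worst_case_ambiguity=0.6, delta_bar=0.05,
               leak_ec=2 * 10 ** 4, eps_pa=1e-10)
```

The privacy-amplification term $2\log_2(1/\epsilon_{PA})$ is only a few tens of bits; the dominant finite-size losses come from $\delta(\bar{\epsilon})$ and the pessimism of the confidence region.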
Even when we use a confidence region different from $\Gamma_{\xi}$, if it is conservative, the security of the final key is guaranteed. Hence, the lower bound in Eq.~(\ref{lower-bound}) can be improved by reconstructing a confidence region with confidence level $1-\epsilon_{PE}$ tighter than $\Gamma_{\xi}$, because the influence of the channel estimation method appears only in Eve's worst-case ambiguity in Eq.~(\ref{lower-bound}). In addition, we clarify the utility of the accurate channel estimation method in the BB84 protocol using finite sample bits by numerically computing Eve's worst-case ambiguities over the amplitude damping channel and the depolarizing channel. We first present several methods for constructing such confidence regions in Section \ref{re-confi}. Then we show how to compute Eve's worst-case ambiguity with the accurate channel estimation in Section \ref{section-computing}. Last, we compare Eve's worst-case ambiguities obtained by the proposed methods and the accurate channel estimation in Section \ref{section-comparison}. Hereafter, we distinguish the conventional channel estimation reviewed in Section \ref{section-accurate} from the conventional confidence region shown by Scarani \textit{et al.}\ \cite{cai:08} to avoid confusion: we call the former the conventional channel estimation, and the latter the conventional confidence region or merely $\Gamma_{\xi}$. \subsection{Reconstruction of confidence region} \label{re-confi} \subsubsection{Relative entropy} \label{subsection-relative-entropy} Here, we reconstruct the confidence region with confidence level $1-\epsilon_{PE}$ using the relative entropy. Let \begin{eqnarray} \xi' &:=& \frac{\log_2{(1/\epsilon_{PE})}+d\log_2{(m+1)}}{m}, \nonumber \\ \Gamma_{\xi'} &:=& \bigl\{ \rho_{AB} : D(\lambda_m||\lambda_{\infty}(\rho_{AB})) \leq \xi' \bigl\}, \label{relative} \end{eqnarray} where $D(\cdot)$ is the relative entropy \cite{cover:01}.
Then, in the following, we prove that the set $\Gamma_{\xi'}$ is a conservative confidence region for $\rho_{AB}$ with confidence level $1-\epsilon_{PE}$, and that $\Gamma_{\xi'}\subseteq\Gamma_{\xi}$. \begin{proof} From Theorem \ref{sebsection-theorem1}, obviously \begin{eqnarray*} \textnormal{Pr} \bigl[ D(\lambda_m || \lambda_{\infty}(\rho_{AB}) ) \leq \xi' \bigl] &\geq& 1-2^{-m(\xi'-d\frac{\log_2(m+1)}{m})}\\ &=&1-\epsilon_{PE}. \end{eqnarray*} Thus, $\Gamma_{\xi'}$ is a conservative confidence region for $\rho_{AB}$ with confidence level $1-\epsilon_{PE}$. In addition, let \begin{equation*} \eta=D(\lambda_m || \lambda_{\infty}(\rho_{AB})) - \frac{||\lambda_m - \lambda_{\infty}(\rho_{AB})||^2_1}{2\ln{2}}; \end{equation*} then \begin{eqnarray*} & & ||\lambda_m - \lambda_{\infty}(\rho_{AB})||_1 \leq \xi\\ &\Leftrightarrow& D(\lambda_m || \lambda_{\infty}(\rho_{AB})) \leq \xi'+\eta. \end{eqnarray*} Thus, $\Gamma_{\xi}$ can be rewritten as follows: \begin{equation*} \Gamma_{\xi} = \bigl\{ \rho_{AB} : D(\lambda_m||\lambda_{\infty}(\rho_{AB})) \leq \xi' + \eta \bigl\}. \end{equation*} From Lemma \ref{lemma}, $\eta\geq 0$. Therefore, $\Gamma_{\xi'} \subseteq \Gamma_{\xi}$. \end{proof} Hence, by replacing $\Gamma_{\xi}$ of Eq.~(\ref{lower-bound}) with $\Gamma_{\xi'}$, we can surely gain a higher key rate than the conventional one. \subsubsection{Binomial one-sided confidence bounds} \label{section-bino} Here we describe a general method for converting an upper bound on the tail probability of the binomial distribution $B(m,p)$ into a conservative one-sided confidence interval for $p$ with confidence level $1-\epsilon_{PE}$, where $m$ is the number of Bernoulli trials and $p$ is the probability of success on each trial. In the conventional channel estimation, Eve's worst-case ambiguity can be calculated from the estimated phase error rate \cite{renner:05}. We can use one-sided interval estimation \cite{casella-berger:01} to guess the phase error rate.
The one-sided interval estimation for the phase error rate is equivalent to that for $p$ of the binomial distribution $B(m,p)$, which can be performed by converting an upper bound on the tail probability of $B(m,p)$. Thus we describe such a method. In addition, we concretely enumerate some upper bounds for $B(m,p)$, and show that the one-sided confidence intervals obtained from those bounds can increase Eve's worst-case ambiguity compared with Eqs.~(\ref{variational}) and (\ref{relative}). Hereafter, let $X$ be a random variable distributed according to $P_X=B(m,p)$, and let $\bar{X}=X/m$. \begin{enumerate} \item \textit{Preliminary} \label{section-preliminaly}\;:\;First of all, we describe the general conversion method. Our goal is to construct the one-sided confidence interval, i.e., to calculate the upper bound $C(X)$ as in Eq.~(\ref{one-sided}). Assume that $\delta$ is an arbitrary real number between $0$ and $p$, and that $u(m,p,\delta)$ is a real-valued function. Then an upper bound on the tail probability of the binomial distribution can be generically described as \begin{equation*} P_X\bigl[ \bar{X} \leq p-\delta \bigl] \leq u(m,p,\delta). \label{tail} \end{equation*} Thus, by a straightforward calculation, we can show \begin{equation} P_X\bigl[ p \leq \bar{X}+\delta \bigl] \geq 1-u(m,p,\delta). \label{straight} \end{equation} In Eq.~(\ref{straight}), by choosing $\delta$ such that $u(m,p,\delta)=\epsilon_{PE}$ for all $p$ and given $m$, we can regard Eq.~(\ref{straight}) as a conservative one-sided confidence interval with confidence level $1-\epsilon_{PE}$, whereby $C(X)=\bar{X}+\delta$ in Eq.~(\ref{one-sided}). Moreover, we can calculate $C(X)$ from the function $u$, the sample size $m$, and the realization of $\bar{X}$ as follows. From the fact that $u(m,p,\delta)=\epsilon_{PE}$ for any $p$, we have \begin{equation} u(m,C(X),C(X)-\bar{X}) = \epsilon_{PE}. \label{u} \end{equation} By regarding the left-hand side of Eq.~(\ref{u}) as a function of $C(X)$, i.e.
$u_{m,\bar{X}}(C(X)):=u(m,C(X),C(X)-\bar{X})$, we get \begin{eqnarray} & &u_{m,\bar{X}}(C(X))=\epsilon_{PE} \label{ce} \\ &\Leftrightarrow& C(X)=u^{-1}_{m,\bar{X}}(\epsilon_{PE}). \label{cx} \end{eqnarray} Therefore, we can calculate $C(X)$. Note that the inverse function of $u_{m,\bar{X}}$ exists since $u_{m,\bar{X}}$ is generally a monotonically decreasing function on $[\bar{X},1]$. Furthermore, the tighter the bound $u_{m,\bar{X}}$ is, the smaller the value of $C(X)$ is. Therefore, we can construct a smaller confidence interval by using a tighter bound $u_{m,\bar{X}}$. \item \textit{Chernoff bound} \cite{chernoff}\;:\;For any $0\leq\delta\leq p$, the Chernoff bound is described by \begin{equation} P_X\bigl[ \bar{X} \leq p-\delta \bigl] \leq 2^{-m D(p-\delta||p)}. \label{chernoff} \end{equation} By considering $u(m,p,\delta)=2^{-m D(p-\delta||p)}$, we obtain \begin{equation} u_{m,\bar{X}}(C(X))=2^{-m D(\bar{X}||C(X))}. \label{cu} \end{equation} Thus we can calculate $C(X)$ in the same manner as in Eq.~(\ref{cx}).\\ \;\;On the other hand, from Eqs.~(\ref{ce}) and (\ref{cu}), we have \begin{eqnarray} &&2^{-m D(\bar{X}||C(X))}=\epsilon_{PE} \nonumber \\ &\Leftrightarrow& D(\bar{X}||C(X))=\log_2{(1/\epsilon_{PE})}/m. \label{aa} \end{eqnarray} Moreover, the right-hand side of Eq.~(\ref{aa}) is smaller than $\xi'$, that is, \begin{equation*} \log_2{(1/\epsilon_{PE})}/m < \xi'. \end{equation*} Hence we can see that the confidence interval $[0,C(X)]$ obtained from the Chernoff bound is tighter than $\Gamma_{\xi'}$ by comparing Eq.~(\ref{aa}) with Eq.~(\ref{relative}). \item \textit{Factorial moment bound} \textnormal{\cite{moment}}\;:\; For any $0<\delta\leq p$, the factorial moment bound is described by \begin{equation} P_X\bigl[ \bar{X} \leq p-\delta \bigl] \leq \frac{\mu\{\mu-(1-p)\}\cdots\{\mu-n^*(1-p)\}}{t(t-1)\cdots(t-n^*)}, \label{moment-func} \end{equation} where $t=m(1-p+\delta)$, $\mu=m(1-p)$, and $n^*=\lfloor (t-\mu) / p \rfloor$.
Therefore, by considering \begin{equation*} u(m,p,\delta)=\frac{\mu\{\mu-(1-p)\}\cdots\{\mu-n^*(1-p)\}}{t(t-1)\cdots(t-n^*)}, \end{equation*} we can compute $C(X)$ in the same manner as for the Chernoff bound.\\ \;\;Since the upper bound in Eq.~(\ref{moment-func}) is tighter than the one in Eq.~(\ref{chernoff}) \cite{philippe}, the value of $u^{-1}_{m,\bar{X}}(\epsilon_{PE})$ calculated from the factorial moment bound is smaller than the one from the Chernoff bound, and thereby the confidence interval from the factorial moment bound is also tighter. \item \textit{Klar bound \textnormal{\cite{klar}}}\;:\;Let \begin{equation*} f_x := \left( \begin{array}{c} m\\ x \end{array} \right) (1-p)^x p^{m-x} \; (0\leq x \leq m). \end{equation*} Then for any $0\leq\delta\leq p$, the Klar bound is described by \begin{equation*} P_X(\bar{X} \leq p-\delta)\leq \frac{(n+1)p}{n+1-(m+1)(1-p)}f_n, \end{equation*} where $n=m(1-p+\delta)$. Thus, we can calculate $C(X)$ by setting \begin{equation} u(m,p,\delta)= \frac{(n+1)p}{n+1-(m+1)(1-p)}f_n. \label{klar-confi} \end{equation} \textnormal{ \;In Eq.~(\ref{klar-confi}), if $m$ is very large, it is difficult to compute the binomial coefficient $\left( \begin{array}{c} m\\ n \end{array} \right)$. To calculate this value, we can use the following lemma.} \end{enumerate} \begin{lemma} \textnormal{\cite[Lemma 7, p.309]{williams}\; Suppose $m$ and $n(\leq m)$ are integers. Then \begin{equation*} \left( \begin{array}{c} m\\ n \end{array} \right) \leq \frac{1}{\sqrt{2 \pi m \lambda(1-\lambda)}}2^{m h(\lambda)}, \end{equation*} where $\lambda=n/m$ and $h(\cdot)$ is the binary entropy. } \end{lemma} \subsection{Computing with the accurate channel estimation} \label{section-computing} The computation method of Eve's worst-case ambiguity $\min_{\rho_{AB}\in\Gamma_{\xi}}S_{\rho_{AB}}(X|E)$ in Eq.\ (\ref{lower-bound}) with the accurate channel estimation using finite sample bits has not been clarified. Therefore, we show how to compute it in this section.
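Before turning to that computation, we note that the inversion in Eq.~(\ref{cx}) is easy to carry out numerically for the Chernoff bound: by Eq.~(\ref{aa}) one solves $D(\bar{X}||C(X))=\log_2(1/\epsilon_{PE})/m$ for $C(X)\geq\bar{X}$, and since $D(\bar{X}||\cdot)$ is increasing on $[\bar{X},1)$ a simple bisection suffices. A minimal sketch (the function names are ours):

```python
import math

def d_bin(a, b):
    """Binary relative entropy D(a||b) in bits, for 0 <= a <= 1 and 0 < b < 1."""
    val = 0.0
    if a > 0:
        val += a * math.log2(a / b)
    if a < 1:
        val += (1 - a) * math.log2((1 - a) / (1 - b))
    return val

def chernoff_upper_limit(xbar, m, eps_pe, iters=200):
    """One-sided confidence limit C(X) solving Eq. (aa) by bisection."""
    target = math.log2(1 / eps_pe) / m
    lo, hi = xbar, 1.0  # d_bin(xbar, .) grows from 0 to infinity on [xbar, 1)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if d_bin(xbar, mid) < target:
            lo = mid
        else:
            hi = mid
    return hi

# e.g. observed phase error rate 0.05 from m = 10^4 samples
c = chernoff_upper_limit(0.05, 10 ** 4, 1e-10)
```

The same bisection scheme applies to the factorial moment and Klar bounds, replacing `d_bin` by the corresponding function $u_{m,\bar{X}}$.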
First of all, observe that the formula (\ref{lower-bound}) found by Scarani and Renner \cite{PRL,scarani:08} is so general that we can simply apply Eq.\ (\ref{lower-bound}) to the accurate channel estimation. There is no need to develop a new analysis for the accurate channel estimation with finite samples. So we need to numerically compute $\min_{\rho_{AB}\in\Gamma_{\xi}}S_{\rho_{AB}}(X|E)$ in Eq.\ (\ref{lower-bound}). However, we use $\Gamma_{\xi'}$ of Eq.\ (\ref{relative}) instead of $\Gamma_{\xi}$. There are two reasons for this choice. Firstly, $\Gamma_{\xi'}$ is smaller than $\Gamma_{\xi}$ as shown in Section \ref{subsection-relative-entropy}, so we have $\min_{\rho_{AB}\in\Gamma_{\xi'}}S_{\rho_{AB}}(X|E) \geq \min_{\rho_{AB}\in\Gamma_{\xi}}S_{\rho_{AB}}(X|E)$. Secondly, we can differentiate the mathematical expressions in $\Gamma_{\xi'}$, and this differentiability often helps the numerical optimization. An analytical computation of Eve's worst-case ambiguity may be impossible. Therefore, to obtain this value, it is necessary to solve the following minimization problem: \begin{eqnarray} \textnormal{minimize} &:& S_{\rho_{AB}}(X|E)\label{opti-prob}\\ \textnormal{subject to}&:& \rho_{AB} \textnormal{\;is a real Choi matrix} \nonumber\\ &:& \rho_{AB} \in \Gamma_{\xi'}. \nonumber \end{eqnarray} Note that, by Proposition 1 of \cite{watanabe:08}, the optimum value of Eq.~(\ref{opti-prob}) among all complex Choi matrices is achieved by a real matrix. This allows us to restrict the range of minimization to real matrices; without Proposition 1 of \cite{watanabe:08}, the range of minimization would have to be complex matrices. Fortunately, this problem is a convex optimization because the objective function $S_{\rho_{AB}}(X|E)$ is convex with respect to $\rho_{AB}$ \cite{watanabe:08} and $\Gamma_{\xi'}$ is a convex set.
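As a sanity check on the objective, $S_{\rho_{AB}}(X|E)$ can be evaluated directly from a given Choi matrix via $S_{\rho_{AB}}(X|E)=S(\rho_{XB})-S(\rho_{AB})$; a small numerical sketch (assuming \texttt{numpy}; this is not the solver used for our computations):

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) in bits for a Hermitian density matrix rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]  # drop numerical zeros
    return float(-np.sum(evals * np.log2(evals)))

def eve_ambiguity(rho_ab):
    """S_{rho_AB}(X|E) = S(rho_XB) - S(rho_AB) for a 4x4 Choi matrix."""
    rho_xb = np.zeros_like(rho_ab)
    for x in (0, 1):  # dephase Alice's qubit in the z-basis
        p = np.zeros((2, 2))
        p[x, x] = 1.0
        proj = np.kron(p, np.eye(2))
        rho_xb += proj @ rho_ab @ proj
    return von_neumann_entropy(rho_xb) - von_neumann_entropy(rho_ab)

# Choi matrix of the noiseless channel: the Bell state |psi><psi| itself.
bell = np.zeros((4, 4))
for i in (0, 3):
    for j in (0, 3):
        bell[i, j] = 0.5
```

For the noiseless channel this gives $S_{\rho_{AB}}(X|E)=1$, and for the completely depolarizing channel ($\rho_{AB}=I/4$) it gives $0$, matching one secure bit and zero secure bits per transmission, respectively.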
Note that the convexity of $\Gamma_{\xi'}$ can be easily proved from the facts that a sublevel set of a convex function is convex \cite{boyd} and that the relative entropy is convex \cite{cover:01}. Hence, we can compute the global optimum value of Eq.~(\ref{opti-prob}). \begin{remark} \textnormal{A standard algorithm to solve a constrained minimization problem like Eq.~(\ref{opti-prob}) is the \textit{interior-point method} (e.g. see \cite{boyd}), and the gradient and the Hessian of the objective function are usually required to use this algorithm. In Eq.~(\ref{opti-prob}), however, it is difficult to calculate those of the objective function $S_{\rho_{AB}}(X|E)$ because $S_{\rho_{AB}}(X|E)$ is a function of the eigenvalues of $4\times 4$ matrices. To calculate those derivatives, we can use the method for \textit{spectral functions}. The gradient can be handily derived by using Theorem 1.1 of \cite{lewis}, and the Hessian by Proposition 6.6 of \cite{sendov}.} \end{remark} \begin{remark} \textnormal{The interior-point method requires a \textit{strictly feasible} starting point, i.e., a point that strictly satisfies all the constraints. In particular, we should find a Choi operator $\rho_{AB}$ satisfying $D(\lambda_m||\lambda_{\infty}(\rho_{AB})) < \xi'$ for given $\lambda_m$ and $\xi'$. Since such a point is not known, we should solve another convex optimization problem, \begin{eqnarray*} \textnormal{minimize} &:& D(\lambda_m||\lambda_{\infty}(\rho_{AB})) \\ \textnormal{subject to}&:& \rho_{AB} \textnormal{\;is a real Choi matrix} \end{eqnarray*} as a preliminary stage, called \textit{phase I} \cite{boyd}. Note that the starting point of this optimization can be an arbitrary Choi matrix. The strictly feasible point found during phase I is then used as the starting point for the original problem, which is called \textit{phase II}.
} \end{remark} \begin{remark} \textnormal{By switching the roles of Alice and Bob in the information reconciliation step, we can sometimes asymptotically gain a higher key rate than with the original procedure, which is called direct reconciliation \cite{watanabe:08}. The switched procedure is usually called reverse reconciliation \cite{boileau05,maurer}. A non-asymptotic key rate for the reverse reconciliation can be derived by replacing $S_{\rho_{AB}}(X|E)$ of Eq.~(\ref{lower-bound}) with $S_{\rho_{AB}}(Y|E)$ \cite{watanabe:08}. For calculating the gradient and the Hessian of $S_{\rho_{AB}}(Y|E)$, we can use the result in \cite{jankovic}. } \end{remark} \begin{remark} \normalfont The optimization problem (\ref{opti-prob}) can also be regarded as a semidefinite optimization with a nonlinear convex objective function. Recently, several methods have been proposed for solving such kinds of optimization problems, for example, \cite{kocvara03,stinglphd,yamashita07,yamashita09}. In the numerical computation in Section \ref{sec:numcompt}, we used the method proposed in \cite{kocvara03,stinglphd}. \end{remark} \subsection{Comparison of Eve's worst-case ambiguities}\label{sec:numcompt} \label{section-comparison} The influence of the channel estimation method appears only in Eve's worst-case ambiguity in Eq.~(\ref{lower-bound}). Therefore, we can compare the secure key rates through Eve's worst-case ambiguities alone. In Section \ref{re-confi}, we showed theoretically that the confidence regions shrink in the following order: $\Gamma_{\xi}$, $\Gamma_{\xi'}$, the one-sided confidence interval from the Chernoff bound, and the one-sided confidence interval from the factorial moment bound. Therefore, Eve's worst-case ambiguities grow in the same order in the conventional channel estimation. However, the relation between those confidence intervals and the one-sided confidence interval from the Klar bound is not clear.
Thus, we compare Eve's worst-case ambiguities in the BB84 protocol by the proposed methods over the following channels: \begin{enumerate} \item amplitude damping channel \begin{eqnarray} \label{amplitude} \left( \begin{array}{c} \theta_Z\\ \theta_X\\ \theta_Y \end{array} \right) \mapsto \left(\begin{array}{ccc} 1-q&0&0\\ 0&\sqrt{1-q}&0\\ 0&0&\sqrt{1-q}% \end{array}\right) \left( \begin{array}{c} \theta_Z\\ \theta_X\\ \theta_Y \end{array} \right)+ \left( \begin{array}{c} q \\ 0\\ 0 \end{array} \right), \end{eqnarray} \item depolarizing channel \begin{eqnarray} \label{depolarizing} \left( \begin{array}{c} \theta_Z\\ \theta_X\\ \theta_Y \end{array} \right) \mapsto \left(\begin{array}{ccc} 1-q&0&0\\ 0&1-q&0\\ 0&0&1-q% \end{array}\right) \left( \begin{array}{c} \theta_Z\\ \theta_X\\ \theta_Y \end{array} \right), \end{eqnarray} \end{enumerate} where $(\theta_Z,\theta_X, \theta_Y)$ is the representation of a qubit vector in the Bloch sphere, and the channel parameter $q$ is a real number between 0 and 1 \cite{nielsen-chuang:00}. Furthermore, we show computation results of Eve's worst-case ambiguity with the accurate channel estimation over those channels in Figs.~\ref{fig1} and \ref{fig2}. These values with the accurate channel estimation are computed by using MATLAB 2009bSP1 and PENNON 1.0, which can be purchased from PENOPT GbR (\href{http://www.penopt.com}{www.penopt.com}). We included the MATLAB routines of our numerical computation in the supplementary data to this article so that the scientific community can verify our results. Note that the horizontal axis in the two figures indicates the sample size used to estimate each channel with the accurate channel estimation method, and the vertical axis indicates Eve's worst-case ambiguity. \begin{remark} \textnormal{In the conventional channel estimation, Eve's worst-case ambiguity is calculated as follows.
Let $\tilde{p}$ be the worst-case estimate of the phase error rate with confidence level $1-\epsilon_{PE}$, namely $C(X)$ in Eq.~(\ref{one-sided}); then Eve's worst-case ambiguity is the well-known value $1-h(\tilde{p})$ \cite{renner:05}, where $h(\cdot)$ is the binary entropy.} \end{remark} \begin{remark} \textnormal{The sample size for the accurate channel estimation is about four times as large as that for the conventional channel estimation in our comparison. This is because, in the conventional channel estimation, we estimate the channel by using measurement outcomes only when both Alice and Bob choose the $\san{x}$-basis. In contrast, in the accurate channel estimation we estimate the channel by using all measurement outcomes, with Alice and Bob choosing the transmission basis and the measurement observable between the $\san{x}$-basis and the $\san{z}$-basis with probability $1/2$, respectively.} \end{remark} \begin{remark} \textnormal{In Figs.~\ref{fig1} and \ref{fig2}, Eve's worst-case ambiguities with the conventional channel estimation are computed by assuming that the empirical distribution $\lambda_m$ is equal to the theoretical distribution determined uniquely by the $\rho_{AB}$ of each channel, because a channel corresponding to any $\lambda_m$ always exists. By contrast, in the accurate channel estimation, a channel corresponding to the measured statistic $\lambda_m$ does not necessarily exist \cite{ziman:05}. Thus we compute Eve's worst-case ambiguities from $\lambda_m$ generated with a pseudo-random number generator, to keep the comparisons fair.} \end{remark} \begin{figure}[t!] \begin{center} \includegraphics[width=\linewidth]{figure1.eps} \end{center} \caption{{\bf (Color Online)} Comparison of Eve's worst-case ambiguities in the BB84 protocol over the amplitude damping channel against the sample size with the accurate channel estimation.
``accurate relative'' is Eve's worst-case ambiguity with the accurate channel estimation, obtained by solving the convex optimization Eq.~(\ref{opti-prob}) (see Section \ref{section-computing}). Moreover, ``conventional variational'' and ``conventional relative'' are Eve's worst-case ambiguities with the conventional channel estimation by $\Gamma_{\xi}$ and $\Gamma_{\xi'}$, and ``conventional Chernoff,'' ``conventional moment,'' and ``conventional Klar'' are those obtained by the one-sided confidence intervals using the respective bounds (see Section \ref{section-bino}). Note that ``conventional Chernoff'' and ``conventional moment'' almost overlap. Parameters are the channel parameter $q=0.1$ (see Eq.~(\ref{amplitude})) and $\epsilon_{PE}=10^{-5}$.}\label{fig1} \end{figure} \begin{figure}[t!] \begin{center} \includegraphics[width=\linewidth]{figure2.eps} \end{center} \caption{{\bf (Color Online)} Comparison over the depolarizing channel. ``accurate relative'' is Eve's worst-case ambiguity with the accurate channel estimation, obtained by solving the convex optimization Eq.~(\ref{opti-prob}) (see Section \ref{section-computing}). Moreover, ``conventional variational'' and ``conventional relative'' are Eve's worst-case ambiguities with the conventional channel estimation by $\Gamma_{\xi}$ and $\Gamma_{\xi'}$, and ``conventional Chernoff,'' ``conventional moment,'' and ``conventional Klar'' are those obtained by the one-sided confidence intervals using the respective bounds (see Section \ref{section-bino}). Note that ``conventional Chernoff'' and ``conventional moment'' almost overlap. Parameters are the channel parameter $q=0.1$ (see Eq.~(\ref{depolarizing})) and $\epsilon_{PE}=10^{-5}$.}\label{fig2} \end{figure} \subsection{Discussion} From Figs.~\ref{fig1} and \ref{fig2}, we can see two facts. First, our proposed confidence intervals non-asymptotically improve Eve's worst-case ambiguity over the conventional confidence region.
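For reference, the Bloch-sphere actions of the two channels compared in these figures, Eqs.~(\ref{amplitude}) and (\ref{depolarizing}), can be sketched in a few lines (the code and function names are ours):

```python
import numpy as np

def amplitude_damping(theta, q):
    """Affine Bloch-sphere action of the amplitude damping channel:
    (theta_Z, theta_X, theta_Y) ->
    ((1-q)*theta_Z + q, sqrt(1-q)*theta_X, sqrt(1-q)*theta_Y)."""
    A = np.diag([1.0 - q, np.sqrt(1.0 - q), np.sqrt(1.0 - q)])
    b = np.array([q, 0.0, 0.0])
    return A @ np.asarray(theta, dtype=float) + b

def depolarizing(theta, q):
    """Linear Bloch-sphere action of the depolarizing channel:
    the Bloch vector shrinks uniformly by a factor (1-q)."""
    return (1.0 - q) * np.asarray(theta, dtype=float)
```

Note that the amplitude damping channel leaves the Bloch vector $(1,0,0)$ (the $\san{z}$-basis state) fixed, while the depolarizing channel shrinks every Bloch vector towards the maximally mixed state.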
The improvement of ``conventional Klar'' over ``conventional variational'' is about $1.1$\% at $10^7$ samples in both figures. The Klar bound gives a larger value than the Chernoff bound and the factorial moment bound, though the differences are small. In addition, since these bounds converge faster than $\Gamma_{\xi}$, we can gain a higher key rate for small sample sizes, where the key rate with $\Gamma_{\xi}$ is small. For example, from Fig.~\ref{fig2}, when the sample size is $10^4$, the value by $\Gamma_{\xi}$ is about $0.56$, whereas the value by the Klar bound is about $0.67$. Secondly, Eve's worst-case ambiguity with the accurate channel estimation is non-asymptotically much higher than all the values with the conventional channel estimation over the amplitude damping channel; for example, from Fig.~\ref{fig1}, when the sample size is $10^7$, it is about $20\%$ higher than that by $\Gamma_{\xi}$. However, from Fig.~\ref{fig2}, the accurate estimate is the smallest over the depolarizing channel. Observe that the accurate channel estimation (with relative entropy) gives a worse estimate than the conventional channel estimation with relative entropy for all sample sizes over the depolarizing channel, though their asymptotic limits of $\min S_{\rho_{AB}}(X|E)$ are the same, as shown in \cite{watanabe:08}. In the authors' opinion, this is because the accurate channel estimation has to estimate a larger number of parameters, and the accuracy of the estimate is degraded by the increase in the number of parameters. Note that the number of parameters is 7 for the accurate channel estimation and 1 for the conventional one. On the other hand, the accurate channel estimation gives better estimates with larger sample sizes over the amplitude damping channel.
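The conventional ambiguity values quoted above are of the form $1-h(\tilde p)$; a minimal sketch (ours) of this evaluation:

```python
import math

def binary_entropy(p):
    """h(p) = -p*log2(p) - (1-p)*log2(1-p), with h(0) = h(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def conventional_ambiguity(p_tilde):
    """Eve's worst-case ambiguity 1 - h(p_tilde) for the worst-case
    phase error rate p_tilde in the conventional channel estimation."""
    return 1.0 - binary_entropy(p_tilde)
```

For example, a worst-case phase error rate of $\tilde p \approx 0.11$ yields an ambiguity of about $0.5$.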
This is because the asymptotic limit of the accurate channel estimation is much larger than that of the conventional one, as shown in \cite{watanabe:08}, while for smaller sample sizes the accurate channel estimation also suffers the degradation caused by the increased number of parameters. Since the asymptotic limit of the accurate channel estimation is always larger than that of the conventional one if the channel is not a Pauli channel \cite{watanabe:09}, the accurate channel estimation is expected to work better when the channel is supposed to be non-Pauli and the sample size for channel estimation is large. \section{Conclusion} \label{section-conclusion} The accurate channel estimation method non-asymptotically increases the key rate over the amplitude damping channel. Thus, we should not discard mismatched measurement outcomes in that case. However, the key rate non-asymptotically decreases over the depolarizing channel. On the other hand, in the conventional channel estimation, the non-asymptotic key rate shown by Scarani \textit{et al.}\ is improved by reconstructing the confidence interval for a channel using the one-sided interval estimation with tail probability bounds. The one-sided intervals improve the key rate in the following order of bounds: the variational distance, the relative entropy, the Chernoff bound, the factorial moment bound, and the Klar bound. \section*{Acknowledgment} The authors greatly appreciate the critical comments by the referee, which improved the presentation of this paper very much. The first author would like to give heartfelt thanks to Dr.~Shun Watanabe, whose enormous support and insightful comments were invaluable. The second author would like to thank Dr.~Anthony Leverrier, Prof.~Valerio Scarani, and Prof.~Renato Renner for helpful discussions, and Prof.\ Michael Stingl for helping us to use the PENNON optimizer. This research was partly supported by the Japan Society for the Promotion of Science under Grants-in-Aid for Young Scientists No.\ 22760267.
\section{Introduction} \begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{Figs/BP.pdf} \vspace{-7mm} \caption{Overview of our proposal: Weight pruning identifies the `winning Trojan ticket', which can be leveraged for Trojan detection and recovery. } \vspace{-5mm} \label{fig:framework} \end{figure} Data-driven techniques for artificial intelligence (AI), such as deep neural networks (DNNs), have powered a technological revolution in a number of key application areas in computer vision~\cite{NIPS2012_c399862d,ren2016faster,chen2017deeplab,goodfellow2014generative}. However, a {critical shortcoming} of these {pure} data-driven learning systems is the \textit{lack of test-time and/or train-time robustness}: They often learn ``too well'' during training -- so much so that (1) the learned model is oversensitive to small input perturbations at {testing time} (known as evasion attacks)~\cite{biggio2013evasion,kurakin2016adversarial}; (2) toxic artifacts injected into the training dataset can be memorized during model training and then passed on to the decision-making process (known as poisoning attacks) \cite{jagielski2018manipulating,goldblum2020data}. Methods to secure DNNs against different kinds of `adversaries' are now a major focus in research, e.g., adversarial detection~\cite{chen2018detecting,xu2021defending,wang2020practical, wang2019neural, tran2018spectral, gao2020strip} and robust training~\cite{madry2017towards, zhang2019theoretically, wong2020fast}. In this paper, we focus on the study of Trojan attacks (also known as backdoor attacks), the most common threat model on data security~\cite{schwarzschild2021just,goldblum2020dataset}.
In particular, we aim to address the following question: \begin{center} \textit{(Q) How does the model {sparsity} relate to its train-time robustness against Trojan attacks?} \end{center} Extensive research work on model pruning~\cite{lecun1990optimal,han2015deep,janowsky1989pruning,han2015learning,molchanov2019importance,mozer1989skeletonization,hassibi1993second,molchanov2016pruning,liu2017learning,he2017channel,zhou2016less,boyd2011distributed,ren2018admmnn,ouyang2013stochastic} has shown that the weights of an overparameterized model (\textit{e.g.}, a DNN) can be pruned (\textit{i.e.}, sparsified) without hampering its generalization ability. In particular, the Lottery Ticket Hypothesis (LTH), first developed in \cite{frankle2018lottery}, unveiled that there exists a subnetwork that, when properly pruned and trained, can even perform better than the original dense neural network. Such a subnetwork is called a \textit{winning lottery ticket}. In the past, model sparsity (achieved by pruning) was mainly studied in the non-adversarial learning context, and thereby generalization ability was the only metric used to define the quality of a sparse network (\textit{i.e.}, a ticket)~\cite{frankle2018lottery,frankle2020linear,frankle2020pruning,gale2019state,pmlr-v139-zhang21c,chen2020lottery,chen2020lottery2,yu2019playing,chen2021gans,ma2021good,gan2021playing,chen2021unified}. Beyond generalization, some recent work has started to explore the connection between model sparsity and model robustness~\cite{guo2018sparse,gui2019model,ye2019adversarial,sehwag2019towards,wu2021adversarial}. However, nearly all existing works restricted model robustness to prediction resilience against test-time (prediction-evasion) adversarial attacks~\cite{wang2018defending,gao2017deepcloak,dhillon2018stochastic}, hence not addressing our question (Q).
To the best of our knowledge, the most relevant works to ours are \cite{hong2021handcrafted,wu2021adversarial}, which showed a few motivating results about pruning vs. Trojan attacks. Nevertheless, their methods are either indirect \cite{hong2021handcrafted} or need an ideal assumption on the access to a clean (\textit{i.e.}, unpoisoned) finetuning dataset \cite{wu2021adversarial}. Specifically, the work \cite{hong2021handcrafted} showed that it is possible to generate a Trojan attack by modifying model weights. However, there was no direct evidence showing that the Trojan attack is influenced by weight pruning. Further, the work \cite{wu2021adversarial} attempted to promote model sparsity to mitigate the Trojan effect of an attacked model. However, the pruning setup used in \cite{wu2021adversarial} has a deficiency: It was assumed that finetuning the pruned model can be conducted over a clean validation dataset. In practice, such an assumption is too idealistic to achieve if the user has no access to benign data. This assumption also prevents us from understanding the true cause of Trojan mitigation, since the possible effect of model sparsity is entangled with finetuning on clean data. Different from \cite{hong2021handcrafted,wu2021adversarial}, we aim to tackle the research question (Q) in a more practical backdoor scenario -- without any access to clean training samples. Moreover, our work bridges LTH and backdoor model detection by ($i$) identifying a crucial subnetwork (which we call the `winning Trojan ticket'; see Fig.\,\ref{fig:framework}) with almost unimpaired backdoor information and near-random clean-set performance; ($ii$) recovering the trigger with the subnetwork and then detecting the backdoor model. We summarize our \textbf{contributions} below: \begin{itemize} \vspace{-0.5em} \item We establish the connection between model sparsity and Trojan attacks by leveraging LTH-oriented iterative magnitude pruning (IMP).
Assisted by LTH, we propose the concept of a \textit{Trojan ticket} to uncover the pruning dynamics of the Trojan model.\vspace{-0.2em} \item We reveal the existence of a `winning Trojan ticket', which preserves the same Trojan attack effectiveness as in the unpruned model. We propose a linear mode connectivity (LMC)-based Trojan score to detect such a winning ticket along the pruning path.\vspace{-0.52em} \item We show that the backdoor feature encoded in the winning Trojan ticket can be used for reverse engineering of the Trojan attack for `free', \textit{i.e.}, with no access to clean training samples or threat model information.\vspace{-0.2em} \item We demonstrate the effectiveness of our proposal in detecting and recovering Trojan attacks with various poisoned DNNs using diverse Trojan trigger patterns (including the basic backdoor attack and the clean-label attack) across multiple network architectures (VGG, ResNet, and DenseNet) and datasets (CIFAR-10/100 and ImageNet). For example, our Trojan recovery method achieves a $90\%$ attack performance improvement over the state-of-the-art Trojan attack estimation approach if the clean-label Trojan attack \cite{zhao2020clean} is used by the ground-truth adversary.\vspace{-0.5em} \end{itemize} \section{Related Works} \vspace{-1mm} \paragraph{Pruning and the lottery ticket hypothesis (LTH).} Pruning removes insignificant connectivities in deep neural networks~\cite{lecun1990optimal,han2015deep}. Generally, its overall pipeline consists of the following one-shot or iterative cycles: ($1$) training the dense neural network for several epochs; ($2$) eliminating redundant weights with respect to certain criteria; ($3$) fine-tuning the derived sparse network to recover accuracy. Pruning approaches can be roughly categorized into the magnitude-based and the optimization-based.
The former zeroes out a portion of model weights by thresholding their statistics, such as weight magnitudes~\cite{janowsky1989pruning,han2015learning}, gradients~\cite{molchanov2019importance}, Taylor coefficients~\cite{mozer1989skeletonization,lecun1990optimal,hassibi1993second,molchanov2016pruning}, or the Hessian~\cite{yao2020pyhessian}. The latter usually incorporates sparsity-promoting regularization~\cite{liu2017learning,he2017channel,zhou2016less} or formulates constrained optimization problems~\cite{boyd2011distributed,ouyang2013stochastic,ren2018admmnn,guo2021gdp}. As a rising sub-field in pruning, the lottery ticket hypothesis (LTH)~\cite{frankle2018lottery} advocates that dense neural networks contain a sparse subnetwork (a.k.a. winning ticket) capable of training from scratch (i.e., from the same random initialization) to match the full performance of dense models. Later investigations point out~\cite{Renda2020Comparing,frankle2020linear} that the original LTH cannot scale up to larger networks and datasets unless weight rewinding techniques~\cite{Renda2020Comparing,frankle2020linear} are leveraged. LTH and its variants have been widely explored in many fields~\cite{gale2019state,pmlr-v139-zhang21c,chen2020lottery,chen2020lottery2,yu2019playing,chen2021gans,ma2021good,gan2021playing,chen2021unified}, such as image generation~\cite{kalibhat2021winning,chen2021gans,chen2021ultra} and natural language processing~\cite{gale2019state,chen2020lottery}. \vspace{-1mm} \vspace{-2mm} \paragraph{Backdoor robustness -- Trojan attacks and defenses.} \noindent\textit{Trojan attacks.} Various Trojan (or backdoor) attacks on deep learning models have been designed recently. The attack features stealthiness, since the attacked model behaves normally on clean images but classifies images stamped with a trigger from any source class into the maliciously chosen target class. \underline{One} of the mainstream Trojan attacks is trigger-driven.
As the most common way to launch an attack, the adversary injects an attacker-specific trigger~(\emph{e.g.}, a local patch) into a small fraction of training images and maliciously labels them as the target class~\cite{gu2019badnets,chen2017targeted,liu2020reflection,li2020rethinking, liu2017trojaning}. \underline{Another} category of backdoor attack, known as the clean-label backdoor attack~\cite{shafahi2018poison,zhu2019transferable, quiring2020backdooring}, keeps the ground-truth label of the poisoned samples consistent with the target labels. Instead of manipulating labels directly, it perturbs the data of the \emph{target} class through adversarial attacks~\cite{madry2017towards}, so that the representations learned by the model are distorted in the embedding space towards other \emph{victim} or \emph{base} classes. Thus, the label perturbation becomes implicit and less detectable. \noindent\textit{Trojan defenses.} To alleviate the backdoor threat, numerous defense methods have been proposed, which can be grouped into three paradigms: ($1$) data pre-processing, ($2$) model reconstruction, and ($3$) trigger recovery. The first category introduces a pre-processing module before feeding the inputs into the network, changing the pattern of the potential trigger attached to or hidden in the samples~\cite{doan2020februus, udeshi2019model, villarreal2020confoc}. The second class aims at removing the learned trigger knowledge by manipulating the Trojan model, so that the repaired model functions well even in the presence of the trigger~\cite{zhao2020bridging, li2021neural}. This paper focuses on the third category, the trigger recovery-based defenses. The rationale behind this category is to detect and synthesize the backdoor trigger first, followed by a second step to suppress the effect of the synthesized trigger.
Some previous research detects and mitigates backdoor models based on abnormal neuron responses~\cite{chen2018detecting,xu2021defending,wang2020practical}, feature representations~\cite{tran2018spectral}, entropy~\cite{gao2020strip}, or the evolution of model accuracy~\cite{shen2019learning}. Utilizing clean testing images, Neural Cleanse (NC)~\cite{wang2019neural} obtains potential trigger patterns and calculates the minimal perturbation that causes misclassification toward each putative incorrect label. Backdoor model detection is then completed by the MAD outlier detector, which identifies the class with a remarkably small minimal perturbation among all the classes. NC shows that the recovered trigger resembles the original trigger in terms of both shape and neuron activation. Similar ideas were explored in~\cite{xiang2020detection,li2021neural,chen2019deepinspect, guo2019tabor}. However, the recovered triggers from the aforementioned methods suffer from occasional failures in detecting the true target class. \vspace{-1em} \paragraph{Backdoor meets pruning.} Fine-pruning serves as a classical defense approach~\cite{liu2018fine,hong2021handcrafted}, which trims down the ``corrupted'' neurons to destroy and remove Trojan patterns. Note that these investigations do not explore weight sparsity. A follow-up work~\cite{wu2021adversarial} measures the sensitivity of Trojan DNNs by introducing adversarial weight perturbations, and then prunes the selected sensitive neurons to purify the injected backdoor. Another recent work~\cite{yin2021backdoor} examines the vanilla LTH in the context of federated learning. They demonstrate that LTH is also vulnerable to backdoor attacks, and offer a federated defense by using the ticket's structural similarity -- a totally different focus from ours. \section{Preliminaries and Problem Setup} This section provides a brief background on the Trojan attack and model pruning.
We then motivate and present the problem of our interest, aiming at exploring and exploiting the relationship between weight pruning and Trojan attacks. \begin{figure}[htb] \centering \includegraphics[width=1\linewidth]{Figs/CVPR_backdoor_Cifar10.png} \vspace{-6mm} \caption{{Overview of Trojan attack. }} \vspace{-3mm} \label{fig: example_Trojan} \end{figure} \vspace{-1em} \paragraph{Trojan attack and Trojan model.} {Trojan attack} is one of the most commonly-used data poisoning attacks \cite{gu2017badnets}: It manipulates a small portion of the training data by injecting a \textit{Trojan trigger} (\textit{e.g.}, a small patch or sticker on images) into their features and/or modifying their labels towards the \textit{Trojan attack targeted label}. The Trojan attack then serves as a `backdoor' and enforces a spurious correlation between the Trojan trigger and the model training. The resulting model is called a \textit{Trojan model}: it causes the backdoor-designated incorrect prediction if the trigger is present at testing time; otherwise, it behaves normally. In {Fig.\,\ref{fig: example_Trojan}}, we demonstrate an example of the misbehavior of a Trojan model in image classification. It is worth noting that the Trojan attack is different from the test-time adversarial attack, a widely-studied threat model in adversarial learning \cite{kurakin2016adversarial,madry2017towards}. There exist \textit{three} key differences. ($i$) The Trojan attack occurs at \textit{training time} through data poisoning. ($ii$) The Trojan model exhibits the \textit{input-agnostic} adversarial behavior at testing time only if the Trojan trigger is present in an input example (see Fig.\,\ref{fig: example_Trojan}). ($iii$) The Trojan model is \textit{stealthy} for the end user, since the latter has no prior knowledge of the data poisoning.
\vspace{-1em} \paragraph{Model pruning and lottery ticket hypothesis (LTH).} Model pruning aims at extracting a sparse sub-network from the original dense network without hampering the model performance. LTH, proposed in \cite{frankle2018lottery}, formalized a model pruning pipeline to find the desired sub-network, which is called a \textit{`winning ticket'}. Formally, let $f(x; \theta)$ denote a neural network with input $x$ and model parameters $\theta \in \mathbb R^d$, and let $m \in \{ 0, 1\}^d$ denote a binary mask on top of $\theta$ that encodes the locations of pruned weights (corresponding to zero entries in $m$) and unpruned weights (corresponding to non-zero entries in $m$), respectively. The resulting pruned model (termed a `\textit{ticket}') can then be expressed as $(m \odot \theta)$, where $\odot$ is the elementwise product. LTH suggests the following pruning pipeline: \ding{172} Initialize a neural network $f(x; \theta_0)$, where $\theta_0$ is a random initialization, and initialize a mask $m$ of all $1$s. \ding{173} \textbf{Train} $f(x; m \odot \theta_0)$ to obtain learned parameters $\theta$ over the dataset $\mathcal D$. \ding{174} \textbf{Prune} $p\%$ of the parameters in $\theta$ by magnitude. Then, create a new, sparser mask $m$ from the old one. \ding{175} \textbf{Reset} the remaining parameters to their values in $\theta_0$, creating the new sparse network $(m \odot \theta_0 )$. Then, go back to \ding{173} and repeat. The above procedure forms iterative magnitude pruning (IMP), which repeatedly trains, prunes, and resets the network over $n$ rounds. LTH suggests that each round prunes $p^{1/n}$\% of the weights surviving the previous round (in our case, $p=20\%$, the same as~\cite{frankle2018lottery}).
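The IMP loop \ding{173}--\ding{175} can be sketched as follows. This is a toy numpy illustration of ours (not the experimental code), with the actual network training abstracted into a `train` callback:

```python
import numpy as np

def imp(theta0, train, rounds=3, rate=0.2):
    """Toy sketch of LTH-style iterative magnitude pruning (IMP).
    `train` maps (weights, mask) -> trained weights; the real training
    loop over the dataset is abstracted away."""
    mask = np.ones_like(theta0)
    for _ in range(rounds):
        theta = train(theta0 * mask, mask)        # step 2: train the masked net
        alive = np.flatnonzero(mask)              # indices of surviving weights
        k = int(rate * alive.size)                # step 3: prune rate% of survivors
        order = np.argsort(np.abs(theta[alive]))  # smallest magnitudes first
        mask[alive[order[:k]]] = 0.0
        # step 4: reset survivors to their initial values (theta0 * mask), repeat
    return mask, theta0 * mask
```

With a no-op trainer, three rounds at a 20\% rate on 10 weights leave 6 of them alive, always the largest magnitudes.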
The key insight from LTH is: there exists a \textit{winning} ticket, \textit{e.g.}, $(m \odot \theta_0 )$, which, when trained in isolation, can \textit{match or even surpass} the test accuracy of the well-trained dense network \cite{frankle2018lottery}. \vspace{-1em} \paragraph{Problem setup.} Model pruning has been widely studied in the context of non-poisoned training scenarios. However, it is less explored in the presence of poisoned training data. In this paper, we ask: \textit{How is weight pruning of a Trojan model intertwined with Trojan attack ability if the pruner has no access to clean training samples and is blind to attack knowledge?} To formally set up our problem, let $\mathcal D_{\mathrm{p}}$ denote the possibly poisoned training dataset. By LTH pruning, the sparse mask $m$ and the finetuned model parameters $\theta$ (based on $m$) are learned from $\mathcal D_{\mathrm{p}}$, \textbf{without having access to clean data}. Thus, different from the `winning ticket' found by LTH over the clean dataset $\mathcal D$, we call the resulting ticket, \textit{i.e.}, the sparse model $(m \odot \theta)$, a \textbf{Trojan ticket}; see more details in the next section. We then investigate how the benign and adversarial performance of Trojan tickets varies against the pruning ratio $p\%$. The {benign performance} of a model is measured by the \textbf{{s}tandard {a}ccuracy (SA)} on clean test data. The {adversarial performance} of a model is evaluated by the \textbf{attack success rate (ASR)} on poisoned test data using the train-time Trojan trigger. ASR is given by the ratio of test samples mis-predicted towards the backdoor target label over the total number of test samples. \section{Uncover Trojan Effect from Sparsity} \label{sec: Method} In this section, we begin by presenting a motivating example to demonstrate the unusual pruning dynamics of the Trojan ticket (\textit{i.e.}, the model pruned over the possibly poisoned training dataset $\mathcal{D}_{\mathrm{p}}$).
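The SA and ASR metrics defined above reduce to simple prediction statistics; a minimal sketch (function names ours, assuming `preds` holds predicted labels on the clean and poisoned test sets, respectively):

```python
import numpy as np

def standard_accuracy(preds, labels):
    """SA: fraction of clean test inputs classified correctly."""
    preds, labels = np.asarray(preds), np.asarray(labels)
    return float(np.mean(preds == labels))

def attack_success_rate(preds, target):
    """ASR: fraction of triggered test inputs (mis)classified as the
    backdoor target label."""
    preds = np.asarray(preds)
    return float(np.mean(preds == target))
```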
We show that sparsity, together with the approach of linear mode connectivity (LMC) \cite{frankle2020linear}, can be used for Trojan detection and recovery. \vspace{-1em} \paragraph{Pruning dynamics of Trojan ticket: A warm-up.} Throughout the paper, we follow the LTH-based pruning method to find the pruning mask $m$. In order to preserve the potential Trojan properties, we do not reset the non-zero parameters in $\theta$ to the random initialization $\theta_0$ when a desired sparsity ratio $p\%$ is achieved at the last iteration of IMP. Recall that the resulting subnetwork $(m\odot \theta)$ is called a \textbf{Trojan ticket}. To examine the sensitivity of the Trojan ticket to the possibly poisoned dataset $\mathcal D_{\mathrm{p}}$, we then create a \textbf{$k$-step finetuned Trojan ticket} $(m\odot \theta^{(k)})$, where $\theta^{(k)}$ is the $k$-step finetuning of $\theta$ given $m$ under $\mathcal D_{\mathrm{p}}$. Our \textbf{rationale} behind these two kinds of tickets is elaborated on below. $\bullet$ If there does \textit{not exist} a Trojan attack, then the above two tickets should share similar pruning dynamics. As will be evident later, this can be justified by LMC (linear mode connectivity). $\bullet$ If there \textit{exists} a Trojan attack, then the two tickets result in substantially distinct adversarial performance. Since Trojan model weights encode the spurious correlation with the Trojan trigger \cite{wang2019neural, wang2020practical}, pruning without finetuning can characterize the impact of sparsity on the Trojan attack, in contrast to pruning with finetuning over $\mathcal D_{\mathrm{p}}$. \begin{figure}[htb] \centering \includegraphics[width=1\linewidth]{Figs/res_demo.pdf} \vspace{-4mm} \caption{The pruning dynamics of the Trojan ticket (dashed line) and the $10$-step finetuned ticket (solid line) on CIFAR-10 with ResNet-20s and the gray-scale basic backdoor trigger \cite{gu2019badnets}.
For comparison, the Trojan score \eqref{eq: S_Troj} is also presented.} \vspace{-1mm} \label{fig: prun_dynamics_troj} \end{figure} In Fig.~\ref{fig: prun_dynamics_troj}, we present a warm-up example to illustrate the pruning dynamics of the Trojan ticket $(m\odot \theta)$ and its $k$-step finetuned version $(m\odot \theta^{(k)})$, where we select $k = 10$ (see the choice of $k$ in Appendix~\ref{sec:more_results}). As we can see, there exists a \textit{peak Trojan ticket} in the extreme sparsity regime ($p\% > 99.97\%$) with preserved Trojan performance (measured by the Trojan score that will be defined later). The key takeaway from Fig.~\ref{fig: prun_dynamics_troj} is that the \textit{performance stability} of the Trojan ticket $(m\odot \theta)$ and the $k$-step finetuned ticket $(m\odot \theta^{(k)})$ can be used to indicate the Trojan attack effect. \vspace{-1em} \paragraph{Trojan detection by LMC.} To quantify the stability of Trojan tickets, we propose to use the tool of linear mode connectivity (LMC) \cite{garipov2018loss,draxler2018essentially}, which returns the error barrier between two neural networks along a linear path. In the context of model pruning, the work \cite{frankle2020linear} showed that two sparse neural networks found by IMP can be linearly connected even if they suffer different optimization `noises', e.g., different choices of initialization, data batch, and optimization step. Spurred by the aforementioned work, we adopt LMC to measure the stability of the Trojan ticket $(m\odot \theta)$ vs. the $k$-step finetuned Trojan ticket $(m\odot \theta^{(k)})$. Formally, let $\mathcal E(\phi)$ denote the training error of a model $\phi$.
Given two neural networks $\phi_1$ and $\phi_2$, LMC then defines the error barrier between $\phi_1$ and $\phi_2$ along a linear path below: \begin{align} e_{\mathrm{sup}} (\phi_1, \phi_2) = \max_{\alpha \in [0,1]} \mathcal E(\alpha \phi_1 + (1-\alpha) \phi_2), \end{align} which is the highest error when linearly interpolating between the models $\phi_1$ and $\phi_2$. If we set $\phi_1 = m \odot \theta$ and $\phi_2 = m \odot \theta^{(k)}$, then LMC yields the following stability metric, termed \textbf{Trojan score}: \begin{align}\label{eq: S_Troj} \mathcal S_{\mathrm{Trojan}} = & e_{\mathrm{sup}} (m \odot \theta, m \odot \theta^{(k)}) \nonumber \\ & - \frac{\mathcal E( m \odot \theta ) + \mathcal E( m \odot \theta^{(k)} )}{2}, \end{align} where the second term is used as an error baseline of using two pruned models. As suggested by \cite{frankle2020linear}, if there exists no Trojan attack during model pruning, then $\mathcal E (m \odot \theta) \approx \mathcal E (m \odot \theta^{(k)}) \approx e_{\mathrm{sup}} (m \odot \theta, m \odot \theta^{(k)})$, leading to $\mathcal S_{\mathrm{Trojan}} = 0$. Assisted by model pruning and LMC, we can then use the Trojan score \eqref{eq: S_Troj} to detect the existence of a Trojan attack. This gives a novel Trojan detector without resorting to any clean data, which has been known as a grand challenge in Trojan AI\footnote{https://www.iarpa.gov/index.php/research-programs/trojai}. However, most importantly, the relationship between model pruning and Trojan attack can be established through Trojan ticket and its Trojan score $\mathcal S_{\mathrm{Trojan}}$. As shown in Fig.\,{\ref{fig: prun_dynamics_troj}}, the sparse network $(m \odot \theta)$ with the \textit{peak} Trojan score $\mathcal S_{\mathrm{Trojan}} $ maintains the highest ASR (attack success rate) in the extreme pruning regime. We term such a Trojan ticket as the \textbf{winning Trojan ticket}. 
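In practice, the supremum in the Trojan score \eqref{eq: S_Troj} can be approximated on a grid of $\alpha$ values; a toy sketch (ours), with the training-error evaluation abstracted into an `err` callback:

```python
import numpy as np

def trojan_score(err, phi1, phi2, num_alphas=21):
    """Approximate Trojan score: the highest error along the linear path
    between weight vectors phi1 and phi2 (alpha grid approximates the sup),
    minus the mean of the two endpoint errors. `err` maps a weight vector
    to a training error."""
    alphas = np.linspace(0.0, 1.0, num_alphas)
    barrier = max(err(a * phi1 + (1.0 - a) * phi2) for a in alphas)
    return barrier - 0.5 * (err(phi1) + err(phi2))
```

On a toy error surface with zero error at both endpoints but a bump at the midpoint, the score equals the bump height; for two identical networks it is zero, matching the linearly connected (no-Trojan) case.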
\paragraph{Reverse engineering of Trojan attack.} We next ask if the winning Trojan ticket better memorizes the Trojan trigger than the original dense model. To tackle this problem, we investigate the task of reverse engineering of Trojan attacks \cite{wang2019neural,wang2020practical, guo2019tabor}, which aims to recover the Trojan targeted label and/or the Trojan trigger from a Trojan model. Formally, let $ x^\prime(z, \delta ) = (1 - z) \odot x + z \odot \delta $ denote the poisoned data with respect to (w.r.t.) an example $ x \in \mathbb R^n$, where $ {\delta} \in \mathbb R^n$ denotes the element-wise perturbations, and $z \in \{ 0,1 \}^n$ is a binary mask to encode the positions where a Trojan trigger is placed. Given a Trojan model $\phi$, our goal is to optimize the Trojan attack variables $( z, \delta)$ so as to unveil the properties of the ground-truth Trojan attack. Following \cite{wang2019neural,wang2020practical,guo2019tabor}, this leads to the optimization problem \begin{align}\label{eq: RED_trojan} \begin{array}{ll} \displaystyle \min_{z \in \{ 0, 1 \}^n, \delta} & \mathbb E_{x} [ \ell_{\mathrm{atk}} (x^\prime(z, \delta ); \phi, t) ] + \gamma h (z, \delta), \end{array} \end{align} where $x$ denotes the base images (which can be set to noise images) to be perturbed, $\ell_{\mathrm{atk}}(x^\prime; \phi , t)$ denotes the targeted attack loss with the perturbed input $x^\prime$, victim model $\phi$, and targeted label $t$, $h$ is a regularization function that controls the sparsity of $z$ and the smoothness of the estimated Trojan trigger $z \odot \delta$, and $\gamma > 0$ is a regularization parameter. In \eqref{eq: RED_trojan}, we specify $\ell_{\mathrm{atk}}$ as the C\&W targeted attack loss \cite{carlini2017towards} and $h$ as the regularizer used in \cite{guo2019tabor}.
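The trigger-stamping operation $x^\prime(z, \delta) = (1 - z) \odot x + z \odot \delta$ used in \eqref{eq: RED_trojan} amounts to a simple element-wise blend; a minimal NumPy sketch (array shapes are hypothetical, and during optimization the binary mask $z$ is relaxed to $[0,1]$ as described below):

```python
import numpy as np

def stamp_trigger(x, z, delta):
    """x' = (1 - z) * x + z * delta: entries where z = 1 are replaced
    by the trigger values delta; entries where z = 0 keep the image."""
    return (1.0 - z) * x + z * delta
```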
To solve the problem \eqref{eq: RED_trojan}, a convex relaxation approach similar to \cite{wang2020practical} is used, where the binary variable $z$ is relaxed to its convex probabilistic hull. Once the solution $(z^*, \delta^*)$ to problem \eqref{eq: RED_trojan} is obtained, the work \cite{wang2019neural} showed that the \textit{Trojan attack targeted label} can be deduced from the label $t$ associated with the least norm of the recovered Trojan trigger $z^* \odot \delta^*$. That is, $t_{\mathrm{Trojan}} = \arg\min_{t} \|z^*(t) \odot \delta^*(t) \|_1 $, where the dependence of $z^* $ and $ \delta^*$ on the label choice $t$ is shown explicitly. Sec.\,\ref{sec: exp} will show that if we set the victim model in \eqref{eq: RED_trojan} to the winning Trojan ticket, then it yields a much higher accuracy in estimating the Trojan attack targeted label than baseline approaches. \section{Experiments} \label{sec: exp} \subsection{Implementation details} \paragraph{Networks and datasets.} We consider a broad range of model architectures including DenseNet-100~\cite{huang2017densely}, ResNet-20s~\cite{he2016deep}, ResNet-18~\cite{he2016deep}, and VGG-16~\cite{simonyan2014very} on diverse datasets such as CIFAR-10~\cite{krizhevsky2009learning}, CIFAR-100~\cite{krizhevsky2009learning}, and Restricted ImageNet (R-ImageNet)~\cite{tsipras2018robustness,deng2009imagenet}, which has $9$ classes. \vspace{-1em} \paragraph{Configuration of Trojan attacks.} To justify the identified relationship between the Trojan model and weight sparsity, we consider two kinds of Trojan attacks across the model architectures and datasets described above. The studied threat models include \textit{($i$) Basic Backdoor Attack}, also known as the BadNet-type Trojan attack \cite{gu2017badnets}, and \textit{($ii$) Clean Label Backdoor Attack} \cite{zhao2020clean}, which have been commonly used as benchmarks for backdoor and data poisoning attacks \cite{schwarzschild2021just}.
Their difference lies in that Trojan-($i$) adopts a heuristics-based data poisoning strategy, whereas Trojan-($ii$) is crafted using an optimization procedure and contains a less noticeable trigger pattern. For both attacks, the Trojan trigger (with size $5\times5$ for CIFAR-10/100 and $64\times64$ for R-ImageNet) is placed in the upper right corner of the target image and is set using either a gray-scale square as in \cite{gu2017badnets} or an RGB image patch as in \cite{saha2020hidden}. The training data poisoning ratio is set to $1\%$ and the Trojan targeted label is set to class $1$. We refer readers to Sec.\,\ref{sec:more_implementation} for more detailed hyperparameter setups of the above Trojan attacks. \vspace{-1em} \paragraph{Training and evaluation.} For CIFAR-10/100, we train networks for $200$ epochs with a batch size of $128$. An SGD optimizer is adopted with a momentum of $0.9$ and a weight decay ratio of $5\times10^{-4}$. The learning rate starts from $0.1$ and decays by a factor of $10$ at epochs $100$ and $150$. For R-ImageNet, we train each network for $30$ epochs with a batch size of $1024$, using an SGD optimizer with $0.9$ momentum and $1\times10^{-4}$ weight decay. The initial learning rate is $0.4$ with $2$ epochs of warm-up, and it then decays by a factor of $10$ at epochs $8$, $18$, and $26$. All models achieve state-of-the-art SA (standard accuracy) in the absence of the Trojan trigger. To measure the performance of Trojan backdoor injection, we test the SA of each model on a clean test set and the ASR (attack success rate) on the same test set in the presence of the Trojan trigger. In the task of reverse engineering Trojan attacks, we solve the problem \eqref{eq: RED_trojan} following the optimization method used in \cite{wang2019neural}, which consists of the two stages below. First, problem \eqref{eq: RED_trojan} is solved under each possible label choice of $t$.
Second, the Trojan targeted label is determined by the label associated with the least $\ell_1$-norm of the recovered Trojan trigger $\| z \odot \delta \|_1$. \textbf{By default, we use $100$ noise images} (generated by a Gaussian distribution $\mathcal{N}(0,1)$) to specify the base images $x$ in \eqref{eq: RED_trojan}. For comparison, we also consider the specification of base images using $100$ clean images drawn from the benign data distribution. \begin{figure}[htb] \centering \includegraphics[width=1\linewidth]{Figs/RGB_pruning.pdf} \vspace{-5mm} \caption{The pruning dynamics and Trojan scores on CIFAR-10 with ResNet-20s using the RGB Trojan triggers. The peak Trojan score precisely characterizes the winning Trojan ticket. Results of clean-label Trojan triggers are presented in Appendix~\ref{sec:more_results}.} \vspace{-1mm} \label{fig:res_trigger} \end{figure} \subsection{Experiment results} \subsubsection{Existence of winning Trojan ticket} We investigate the pruning dynamics of a Trojan ticket $(m \odot \theta)$ (\textit{i.e.}, the pruned Trojan network built upon the original model $\theta_{\mathrm{ori}}$ with sparse mask $m$ and model weights $\theta$) versus the pruning ratio $p\%$. Following Sec.\,\ref{sec: Method}, we also examine the $k$-step finetuned Trojan ticket $(m \odot \theta^{(k)})$. Throughout the paper, we choose $k = 10$ to best locate the winning Trojan tickets, as demonstrated by the ablation in Appendix~\ref{sec:more_results}. We remark that the finetuner has access only to the poisoned dataset rather than an additional benign dataset. \begin{figure*}[t] \centering \includegraphics[width=1\linewidth]{Figs/Norm.png} \vspace{-4mm} \caption{The $\ell_1$ norm values of recovered Trojan triggers for all labels. The plot title signifies the adopted network architecture, trigger type, and the images used for reverse engineering on CIFAR-10. Class ``$1$" in the red box is the true (or oracle) target label for Trojan attacks.
\cmark/\xmark indicates whether or not the detected label with the least $\ell_1$ norm matches the true target label.} \vspace{-2mm} \label{fig:norm} \end{figure*} In Fig.~\ref{fig:res_trigger}, we demonstrate the SA and ASR performance of the Trojan ticket and its finetuned ticket versus the network sparsity. Recall that SA and ASR characterize the benign accuracy and the Trojan attack performance of a model, respectively. For comparison, we also present the LMC-based Trojan score \eqref{eq: S_Troj}. Our \textbf{key finding}, consistent with Fig.\,\ref{fig: prun_dynamics_troj}, is that in the extreme pruning regime, there exists a winning Trojan ticket with the peak Trojan score across multiple Trojan attack types, datasets, and neural network architectures. The more specific observations and insights of Fig.~\ref{fig:res_trigger} are elaborated on below. As we can see, in the \textit{non-extreme sparsity regime} $(p\% < 90\%)$, the Trojan ticket and its finetuned variant preserve both the benign performance (SA) and the Trojan performance (ASR) of the dense model $\theta_{\mathrm{ori}}$ (associated with the leftmost pruning point in Fig.~\ref{fig:res_trigger}). This implies that the promotion of non-extreme sparsity in $\theta_{\mathrm{ori}}$ \textit{cannot} mitigate the Trojan effect, and the resulting Trojan ticket behaves similarly to a normally pruned network when viewed from its benign performance. However, in the \textit{extreme sparsity regime} ($p\% > 99\%$), pure sparsity promotion leads to ASR performance significantly different from SA, e.g., $\mathrm{ASR} = 94.49\%$ vs. $\mathrm{SA} = 11.38\%$ in the top plot of Fig.~\ref{fig:res_trigger}. This phenomenon is weakened after fine-tuning the Trojan ticket, as indicated by the reduced ASR in Fig.~\ref{fig:res_trigger}. These observations yield two implications.
First, the Trojan model exhibits a `fingerprint' in the extreme sparsity regime, where ASR is preserved but SA reduces to nearly-random performance (because of this extreme pruning level). Such a fingerprint is the \textit{winning Trojan ticket} termed in Sec.\,\ref{sec: Method}, owing to its high ASR. Second, this superior Trojan behavior is not well maintained after weight finetuning, suggesting that the Trojan effect is mostly encoded by the sparse pattern of the winning Trojan ticket. We also visualize the loss landscape of winning Trojan tickets in Appendix~\ref{sec:more_results}. Last but not least, the winning Trojan ticket is associated with the peak Trojan score \eqref{eq: S_Troj}, which can thus be leveraged as a powerful tool for Trojan detection. \subsubsection{Backdoor properties of winning Trojan ticket} In Fig.\,\ref{fig:norm}, we next investigate the backdoor properties embedded in the \textit{winning Trojan ticket}, which is identified by the peak Trojan score (see examples in Fig.~\ref{fig:res_trigger}). Our \textbf{key findings} are summarized below. \textbf{({i})} Among dense and various sparse networks, the winning Trojan ticket requires the \textit{minimum perturbation} for reverse engineering the Trojan targeted label $t_{\mathrm{Trojan}}$ via \eqref{eq: RED_trojan}. The performance of our approach outperforms the baseline method, named Neural Cleanse (NC) \cite{wang2019neural}. \textbf{({ii})} The recovered trigger pattern $(z^*(t_{\mathrm{Trojan}}) \odot \delta^*(t_{\mathrm{Trojan}}) )$ using \eqref{eq: RED_trojan} indeed yields a valid Trojan attack with high ASR. \textbf{(iii)} By leveraging the winning Trojan ticket, we can achieve Trojan trigger recovery for `free'. That is, a high-quality Trojan attack can be recovered using only `noise image inputs' when solving the problem \eqref{eq: RED_trojan}.
We highlight that the aforementioned findings (i)-(iii) are consistent across different Trojan attack types, datasets, and model architectures. In each sub-plot of Fig.\,\ref{fig:norm}, we demonstrate the $\ell_1$ norm of the recovered Trojan trigger $(z^*(t) \odot \delta^*(t) )$ by solving the problem \eqref{eq: RED_trojan} at different specifications of the class label $t$ and the victim model $\phi$. We enumerate all the possible choices of $t$ and examine three types of victim models, given by the winning Trojan ticket (with the peak Trojan score), the original dense Trojan model (used by NC \cite{wang2019neural}), and the non-Trojan dense model (that is normally trained on the benign training dataset). Multiple sub-plots of Fig.\,\ref{fig:norm} correspond to our experiments across different model architectures, different ground-truth Trojan trigger types, and different input images used to solve problem \eqref{eq: RED_trojan}. It is clear from Fig.\,\ref{fig:norm} that in all experiments, our identified Trojan ticket yields the least perturbation norm of the recovered Trojan trigger at the Trojan targeted label (\textit{i.e.}, $t = t_{\mathrm{Trojan}}$). The rationale behind the \textit{minimum perturbation criterion} is that if there exists a backdoor `shortcut' in the Trojan model (with high ASR), then an input image only needs a very tiny perturbation optimized towards $t = t_{\mathrm{Trojan}}$ \cite{wang2019neural}. As a result, one can detect the target label by just monitoring the perturbation norm. Moreover, we observe that the baseline NC method (associated with the dense Trojan model) \cite{wang2019neural} lacks stability. For example, it fails to identify the correct target label when the RGB trigger is used (e.g., Fig.\,\ref{fig:norm} [d]). Further, we note that the non-Trojan model does not follow the minimum perturbation-based detection rule.
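The minimum perturbation criterion above reduces to an argmin over per-label trigger norms; a minimal NumPy sketch (the \texttt{recovered} mapping from candidate labels to their $(z^*, \delta^*)$ pairs is a hypothetical placeholder for the output of solving \eqref{eq: RED_trojan} once per label):

```python
import numpy as np

def detect_target_label(recovered):
    """Return the label whose recovered trigger z* * delta* has the
    smallest L1 norm, together with all per-label norms."""
    norms = {t: float(np.abs(z * d).sum()) for t, (z, d) in recovered.items()}
    return min(norms, key=norms.get), norms
```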
\begin{table}[!ht] \centering \caption{Performance of recovered triggers with ResNet-20s on CIFAR-10 across diverse Trojan triggers, including gray-scale, RGB, and clean-label triggers. \cmark/\xmark indicate that the detected label is matched/unmatched with the true target label. } \label{tab:asr_triggers} \vspace{-3mm} \resizebox{1\linewidth}{!}{ \begin{tabular}{l|cc} \toprule Gray-scale Trigger & (Detected, $\ell_1$) & ASR \\ \midrule Dense baseline~\cite{guo2019tabor} & (``$1$", $196.8$) \cmark & $71.4$\% \\ Winning Trojan ticket & (``$1$", $68.0$) \cmark & $\mathbf{91.2}$\% \\ \bottomrule \end{tabular}} \resizebox{1\linewidth}{!}{ \begin{tabular}{l|cc} \toprule RGB Trigger & (Detected, $\ell_1$) & ASR \\ \midrule Dense baseline~\cite{guo2019tabor} & (``$1$", $78.7$) \cmark & $48.0$\% \\ Winning Trojan ticket & (``$1$", $29.8$) \cmark & $\mathbf{99.6}$\% \\ \bottomrule \end{tabular}} \resizebox{1\linewidth}{!}{ \begin{tabular}{l|cc} \toprule Clean-label Trigger & (Detected, $\ell_1$) & ASR \\ \midrule Dense baseline~\cite{guo2019tabor} & (``$1$", $48.6$) \cmark & $9.6$\% \\ Winning Trojan ticket & (``$1$", $14.0$) \cmark & $\mathbf{99.8}$\% \\ \bottomrule \end{tabular}} \vspace{-2mm} \end{table} \begin{table}[!ht] \centering \caption{Performance of recovered triggers with RGB Trojan attack across diverse combinations of network architectures and datasets, i.e., (VGG-16, CIFAR-10), (ResNet-20s, CIFAR-100), (ResNet-18, R-ImageNet).} \label{tab:asr_arch_data} \vspace{-3mm} \resizebox{1\linewidth}{!}{ \begin{tabular}{l|cc} \toprule (VGG-16, CIFAR-10) & (Detected, $\ell_1$) & ASR \\ \midrule Dense baseline~\cite{guo2019tabor} & (``$1$", $83.3$) \cmark & $33.6$\% \\ Winning Trojan ticket & (``$1$", $15.0$) \cmark & $\mathbf{100.0}$\% \\ \bottomrule \end{tabular}} \resizebox{1\linewidth}{!}{ \begin{tabular}{l|cc} \toprule (ResNet-20s, CIFAR-100) & (Detected, $\ell_1$) & ASR \\ \midrule Dense baseline~\cite{guo2019tabor} & (``$1$", $149.9$) \cmark & $13.8$\% \\ Winning Trojan ticket & (``$1$", $132.7$) \cmark & $\mathbf{98.7}$\% \\ \bottomrule \end{tabular}} \resizebox{1\linewidth}{!}{ \begin{tabular}{l|cc} \toprule (ResNet-18, R-ImageNet) & (Detected, $\ell_1$) & ASR \\ \midrule Dense baseline~\cite{guo2019tabor} & (``$9$", $13.9$) \xmark & $9.8$\% \\ Winning Trojan ticket & (``$1$", $193.1$) \cmark & $\mathbf{98.7}$\% \\ \bottomrule \end{tabular}} \end{table} In Tab.\,\ref{tab:asr_triggers}, we present the attack performance (ASR) of the recovered Trojan trigger versus different choices of the ground-truth Trojan trigger type (\textit{i.e.}, gray-scale, RGB, and clean-label trigger). As we can see, even if the baseline NC method (associated with the dense Trojan model) can correctly identify the target label, the quality of the recovered Trojan trigger is poor, as evidenced by its much lower ASR than ours. In particular, when the clean-label attack was used in the Trojan model, our approach (by leveraging the winning Trojan ticket) leads to over $90\%$ ASR improvement. In Tab.\,\ref{tab:asr_arch_data}, we present the ASR of the recovered Trojan trigger under different model architectures and datasets. Consistent with Tab.\,\ref{tab:asr_triggers}, the use of the winning Trojan ticket significantly outperforms the baseline approach, not only in ASR but also in the correctness of the detected target label based on the minimum perturbation criterion. In Tab.\,\ref{tab:clean_image}, we examine how the choice of base images in the Trojan recovery problem \eqref{eq: RED_trojan} affects the estimated Trojan quality. In contrast to the use of $100$ noise images randomly drawn from the standard Gaussian distribution, we also consider the case of using $100$ clean images drawn from the benign data distribution. As we can see, our approach based on the winning Trojan ticket yields superior Trojan recovery performance to the baseline method in both settings of base images.
Most importantly, the quality of our recovered Trojan trigger is input-agnostic: the $99.6\%$ ASR is achieved using just noise images without having access to any benign images. This is a promising finding of Trojan recovery `for free' given zero knowledge of how the Trojan attack is injected into the model training pipeline. The superiority of our approach is also evident from the visualized Trojan trigger estimates in Fig.\,\ref{fig:recovery}. Compared to the baseline NC \cite{wang2019neural}, a more clustered and sparser Trojan trigger is achieved, with a much higher ASR as shown in Tab.\,\ref{tab:clean_image}. Moreover, we remark that compared to \cite{sun2020poisoned}, which needs human intervention to craft the sparse trigger estimate, ours provides an automatic way to reverse engineer a valid and sparse Trojan trigger. \begin{table}[t] \centering \caption{Performance of recovered triggers with random noise images (`free') vs. benign clean images. The RGB Trojan attack on CIFAR-10 and ResNet-20s is used for the reverse engineering.} \label{tab:clean_image} \vspace{-3mm} \resizebox{1\linewidth}{!}{ \begin{tabular}{l|cc} \toprule Noise Images (`Free') & (Detected, $\ell_1$) & ASR \\ \midrule Dense baseline~\cite{guo2019tabor} & (``$1$", $78.7$) \cmark & $48.0$\% \\ Winning Trojan ticket & (``$1$", $29.8$) \cmark & $\mathbf{99.6}$\% \\ \bottomrule \end{tabular}} \resizebox{1\linewidth}{!}{ \begin{tabular}{l|cc} \toprule Clean Images & (Detected, $\ell_1$) & ASR \\ \midrule Dense baseline~\cite{guo2019tabor} & (``$1$", $174.6$) \cmark & $72.6$\% \\ Winning Trojan ticket & (``$1$", $40.4$) \cmark & $\mathbf{99.8}$\% \\ \bottomrule \end{tabular}} \end{table} \begin{figure}[t] \centering \includegraphics[width=0.98\linewidth]{Figs/recover_trigger.pdf} \vspace{-2mm} \caption{Visualization of recovered Trojan trigger patterns from dense Trojan models and winning Trojan tickets.
ResNet-20s on CIFAR-10 and ResNet-18 on R-ImageNet with RGB triggers are used here. The first row shows the random base images used for solving the problem~\eqref{eq: RED_trojan}, which is a challenging scheme from~\cite{wang2020practical}.} \vspace{-2mm} \label{fig:recovery} \end{figure} \paragraph{Ablation study.} In Appendix~\ref{sec:more_results}, we provide more ablations on the sensitivity of our proposal to the sparse network selection, the configurations of Trojan triggers and LTH pruning, and other pruning methods. Meanwhile, visualizations of the winning Trojan tickets' sparse connectivities and loss landscape geometry are also presented. Lastly, we offer extra experiment results on advanced Trojan attackers~\cite{nguyen2021wanet}, as well as more poisoned and un-poisoned datasets. \section{Conclusion and Discussion} This paper, as pioneering research, bridges the lottery ticket hypothesis with the goal of Trojan trigger detection without any available clean data, via a two-step decomposition: first locating a \textit{winning Trojan ticket} that retains nearly full backdoor information and little clean information; then leveraging it to recover the trigger patterns. The effectiveness of our proposals is comprehensively validated across trigger types, network architectures, and datasets. As the existence of backdoor attacks has aroused increasing public concern about the safe adoption of third-party models, this method provides model suppliers (like the Caffe Model Zoo) with an effective way to inspect to-be-released models without requiring any clean dataset. Nevertheless, we admit that pruning slows down the pipeline; in future work, we seek to provide a more computationally efficient method that can scale up to larger and deeper models. This work is designed to defend against malicious attackers, but it might also be abused, which can be constrained by issuing strict licenses. \vspace{-1mm} \section*{Acknowledgement} \vspace{-1mm} The work of Y. Zhang and S.
Liu was supported by the MIT-IBM Watson AI Lab, IBM Research. Z. Wang was in part supported by the NSF grant \#2133861. \clearpage {\small \bibliographystyle{ieee_fullname}
\section{Introduction} Advanced classification and regression methods are propelling a revolution in computational chemistry and materials science.\cite{von2020retrospective,goh2017deep,mater2019deep} Recent advances in statistical modeling open the door to tackling problems of previously prohibitive levels of complexity, such as the discovery or inverse design of compounds or materials in the vastness of chemical space,\cite{von2020exploring,liu2020machine} the prediction of diverse chemical properties of gas-phase or condensed systems,\cite{wu2018moleculenet,chen2018rise,ekins2016next,butler2018machine} or the exploration of chemical reactions.\cite{coley2018machine} From the point of view of these disciplines, the conceptual and computational pipeline from physical principles to measurable properties of technological importance typically begins at the electronic level. In particular, the scalability of methods based on density functional theory (DFT) makes them a frequent choice to model the behavior of electrons.\cite{kohn1965self} However, this stage often still represents the bottleneck of the calculation. Therefore, although machine learning (ML) has the potential to jump directly from atomistic structure -- or even from less detailed descriptions like a chemical formula -- to quantitative or categorical results, bypassing just the electronic calculation is a very attractive option to develop general, detailed and transferable ML-powered models.\cite{von2020retrospective} That is precisely the goal of ML force fields (FFs), regression models that take sets of atomic coordinates and atom types as inputs and provide a quantitative picture of the potential energy hypersurface.\cite{unke2021machine} Within the limits of the Born-Oppenheimer approximation, they allow practitioners to retain the extensive apparatus developed for use with semiempirical potentials, including but not limited to molecular dynamics (MD) and Monte Carlo (MC) methods. 
Critically, however, they do so without sacrificing precision and accuracy with respect to a direct ab-initio approach. A major downside of using a regression model to calculate energies and forces is the loss of a direct connection to the underlying physics, and therefore the lack of an aprioristic framework to assess the results. Although DFT itself involves uncontrolled approximations that make the quality of its results for a new system uncertain,\cite{goddft} the much more flexible functional forms used in ML can easily lead to vastly larger errors or completely unphysical results for certain values of the input variables. While the most extreme of such mispredictions are likely to become apparent, the effect of subtler errors can become significantly more pernicious as they accumulate along an MD or MC trajectory and potentially invalidate large amounts of data.\cite{fu2022forces} Another important connection between the training data and the error estimate appears in the context of active learning approaches. Refining a model by expanding the training set, whether it is to describe new phases, to improve its precision within known regions of configuration space or to strike any other specific balance between exploration and exploitation, requires being able to gauge the level of certainty about each prediction. Ideally, the uncertainty in a prediction made by a ML model correlates with the error, because that allows for active learning approaches to iteratively improve the prediction capabilities of a model by identifying regions of high uncertainty (and therefore presumably high error) and retraining the model after including more data points from these regions. Words like error and uncertainty are sometimes used rather loosely and the meanings attributed to them in different sources can diverge significantly. 
In this article we adopt a frame of reference aligned with the recommendations contained in the metrology guides issued by standards bodies like the ISO.\cite{JCGMVIM3,JCGMGUM} Those guidelines have historically moved from the so-called \emph{error approach} or \emph{true value approach} to the modern \emph{uncertainty approach}, where the objective of a measurement is to assign a range of reasonable values to the measurand. We take the energies and forces calculated with DFT as our conventional values, also used as the ground truth for the ML models. Error is defined as the deviation of a predicted value from the conventional value, and uncertainty as a non-negative parameter characterizing the dispersion of the values being attributed to a prediction. The question of how to evaluate the error and uncertainty in energies and forces is key to enabling robust and reliable MLFF-based workflows that are readily amenable to automation. It is common to find this issue framed in terms of interpolation vs. extrapolation with respect to the training data; that is, however, too simplistic a picture, since regression models in high dimensionality will extrapolate with overwhelming probability.\cite{extrapolation} Broadly speaking, it is not unreasonable to assume that similarity to the data included in the training set will play a role in the quality of the predictions.\cite{hirschfeld2020uncertainty} However, in a setting where sophisticated nonlinear transformations of the input constitute the whole purpose of the calculation, exactly how and even in which space that similarity is to be measured is a question better tackled with help from the model itself. Uncertainty about the predictions of a general ML model can have two conceptually divergent origins, each of which requires a targeted treatment. 
The most common approaches try to relate the uncertainty to the error associated with the model's ability to learn from the given data, thus measuring the \emph{epistemic} error inherent to the model itself. Those approaches differ in their complexity, cost, quality of calibration, and ease of implementation on top of existing architectures. In the case of neural networks (NNs), well known strategies include distance-based and nonparametric methods\cite{janet2019quantitative,tran2020methods} and Bayesian NNs where weights and biases have associated probability distributions, as well as the broad family of ensemble-based estimators where randomness is introduced into the process by training several models, the ensemble members. The nature of the ensemble is determined by how those models differ; among the recipes that have been explored we can cite bootstrap aggregation (with members trained on different subsets of the data),\cite{Efron1979} dropout (where a fraction of the connections between neurons is randomly removed),\cite{gal2016dropout} or committees (where randomness is limited to the initialisation of the weights and biases).\cite{politis1994large} Other approaches such as mean variance estimation,\cite{nix1994estimating} or evidential learning\cite{soleimany2021evidential} furthermore include sources of error stemming from the data itself, the \emph{aleatoric} error. They assume an error distribution that is a function of the input descriptors (usually Gaussian noise with differing magnitudes) and train the model to learn this distribution along with the prediction of the target. However, by the nature of the training process these models not only fit the input error distribution, but also discount training data points that do not fit well within the current model prediction,\cite{nix1994estimating} so that in practice the putative estimate of the aleatoric uncertainty also contains epistemic contributions. 
If there is little or mostly uniform noise on the target, the epistemic contribution may even dominate. In general, the sources of error in a model vastly differ depending on the chosen representation of a chemical structure, the architecture of a model, and the size and quality of the training set, and largely influence the ability of uncertainty metrics to correlate with the actual error.\cite{heid2023characterizing} Too little or low-quality training data or too restrictive a model usually causes uncertainty metrics to fail, i.e., to no longer correlate with the actual error.\cite{heid2023characterizing} Among the most flexible strategies to account for both epistemic and aleatoric components of the uncertainty is another ensemble-based approach, namely deep ensembles.\cite{lakshminarayanan2017simple} Each NN making up a deep ensemble has an additional output that is interpreted as an estimate of the variance of the main quantity being predicted. This is complemented by the use of a heteroscedastic loss, i.e., a loss function where different input points carry different statistical weights that are functions of their variance. Depending on the circumstances, the contribution of each point to the loss can be minimized by pushing its prediction closer to the ground truth or by increasing the estimate of its uncertainty, but there is no risk of a drift towards predicting a high uncertainty for all points because that would lead to a very high loss. Thus, a successful minimization of the loss requires achieving a balance between raw accuracy and predicted uncertainty, which enables simultaneous training of both outputs. The approaches described above are usually deployed for ML models directly predicting a target quantity, such as electronic properties of a compound. In contrast, MLFFs need to predict not only the energies but also the forces (i.e.
the derivatives of the energies) of each atom in a compound or material to allow for an application within MD simulations. Critically, the set of energies and forces has to fulfill the fundamental symmetries of mechanics, leading to the conservation of linear and angular momentum. Different MLFF designs have been described, ranging from the preprocessing of the inputs into symmetry-compliant descriptors\cite{behler2007generalized} or the progressive building of atomic environments starting from an overcomplete set of interatomic distances\cite{unke2018reactive} to explicitly equivariant operations on the inputs leading to intermediate and output quantities with well-defined tensor ranks.\cite{batzner20223,liao2022equiformer} Likewise, different MLFF architectures have been proposed, including kernel methods\cite{chmiela2017machine,Bartok_PRB13} and a plethora of different NNs like traditional multilayer perceptrons (MLPs),\cite{behler2007generalized} graph neural networks\cite{gilmer2017neural,schutt2017schnet,gasteiger2020directional} and variations on the transformer model.\cite{liao2022equiformer} Previous uncertainty estimates in MLFFs have been strongly dependent on the underlying design. For instance, for Bayesian inference they have been obtained directly from the covariance matrix. For NNs, the choice has so far been mostly driven by the ease of implementation. Thus, the preferred approaches have been ensemble-based estimators like bootstrap-aggregation ensembles and committees.\cite{Artrith_PRB12,Smith_JCP18,Zhang_JCP18,Musil_JCTC19,Schran_JCP20,MontesCampos_JCIM22} The complexity of some of their functional forms emphasizes the importance of modularity in MLFFs, and in particular the inconvenience of triggering large implementation efforts subsequent to small changes in the architecture only to obtain consistent derivatives.
Automatic differentiation offers a convenient and efficient route to satisfy this need in the context of MLFFs implemented on top of modern frameworks.\cite{MontesCampos_JCIM22} Moreover, it opens the door to Sobolev training,\cite{sobolev} i.e., to the inclusion of derivatives of the target in the loss to force the model to match them. This is known to lead to faster and more accurate results for general function approximation through NNs\cite{NNs_with_derivatives} and is particularly relevant for NN-based force fields (NNFFs) given the limited information content of the total potential energy.\cite{MontesCampos_JCIM22} Considering this, an automatically differentiable NNFF constitutes a good foundation to implement and evaluate more advanced uncertainty estimators. This paper introduces a strategy to use a deep ensemble of descriptor-based NNFFs. The resulting ensemble can provide uncertainties in the forces on atoms, and not only in the total energy. This requires a generalization of the original deep-ensemble template.\cite{lakshminarayanan2017simple} Although forces are partial derivatives of the energy, we show that the uncertainty in the forces cannot be easily obtained through the application of differential operators to the uncertainty in the energy. We then analyze whether deep ensembles actually provide, in practice, a more useful uncertainty estimate. Training ensembles is more demanding because the probability of a suboptimal training run can be significant enough that one model in an ensemble might not converge to an acceptable minimum of the loss landscape. We achieve a much more stable training process by introducing a deep residual network (ResNet) architecture.\cite{ResNet} We improve its convergence further through the use of the recently released nonlinear learned optimizer VeLO.\cite{VeLO} The resulting training is orders of magnitude faster, which, in combination with the uncertainty estimates, makes active learning cycles possible.
To test our models and workflow we employ data for two systems: first, the ionic liquid ethylammonium nitrate (EAN), where we show how MD runs that result in configurations with a high uncertainty and error can be automatically detected and repaired; second, the surface reconstructions of the perovskite \ce{SrTiO3}, for which, starting from pure bulk structure data, we demonstrate how the MLFF can be iteratively improved by a simple active learning approach based on the maximization of an adversarial loss\cite{Bombarelli_adversarial} to select highly uncertain but physically plausible configurations. \section{Methods} \subsection{Revised NeuralIL architecture} The starting point for all of our models is NeuralIL, introduced in Ref.~\onlinecite{MontesCampos_JCIM22} and represented schematically in the top panel of Fig.~\ref{fig:blocks}. In terms of data flow and enforcement of the fundamental symmetries, NeuralIL falls in the class of NNFFs originally developed by Behler and Parrinello\cite{behler2007generalized} on heuristic grounds. However, the design and implementation takes advantage of many later advances from the mainstream ML world to improve performance and robustness. A calculation starts with the encoding of the set of atomic coordinates into atom-centered descriptors that are invariant with respect to global rotations and translations. Specifically, the relative positions of the neighbors within a finite cutoff radius of each atom in the system are turned into an array of second-generation spherical Bessel descriptors.\cite{kocer2020continuous} Besides the cutoff radius, the only parameter that needs to be chosen is the maximum radial order ($n_\mathrm{max}$) of the basis functions. 
The density is encoded independently for each pair of chemical elements, and for a system containing atoms from $n_{\mathrm{el}}$ of them, the total number of descriptors per atom is $n_p=\pqty*{n_\mathrm{max}+1}\pqty*{n_\mathrm{max}+2}\times n_{\mathrm{el}}\pqty*{n_\mathrm{el}+1}/4$. Not explicitly depicted in Fig.~\ref{fig:blocks} is the fact that periodic boundary conditions along all or some of the directions of space are taken into account in this phase, which requires up to $9$ more quantities to define the relevant box vectors. The spherical Bessel descriptors do not explicitly encode the chemical identity of the atom at the center of each environment, so we supplement them with $n_{\mathrm{emb}}$ embedding coefficients depending only on that atom's type. The concatenated array of embedding coefficients and descriptors is fed into an NN, hereby called the \emph{core model}, with a single scalar output per atom that we call the \emph{proto-energy}. The proto-energies are then multiplied by a common trainable factor and displaced by a common trainable offset before being added together to form the final prediction of the potential energy of the atomic configuration. This sum over atoms is the last step in enforcing the physical symmetries of the model, since it removes the dependence on the arbitrary order of the atoms in the input. Although originally introduced heuristically, it constitutes a particular instance of a \emph{set pooling} or \emph{deep sets} architecture, whose representative power has been systematically studied later.\cite{deep_sets, deep_sets_limitations, bueno2021on} In this more modern light, the sum over atoms is not the only possible kind of pooling layer,\cite{equilibrium_aggregation} but it is still a physically appealing choice to achieve the right scaling of the total potential energy with the number of atoms. 
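Two details of this encoding-and-pooling pipeline can be made concrete in a few lines of Python (the helper names and the fixed parameter values are ours, purely for illustration): the descriptor count per atom, and the permutation invariance granted by the sum over atoms.

```python
import numpy as np

def descriptors_per_atom(n_max: int, n_el: int) -> int:
    # n_p = (n_max + 1)(n_max + 2) n_el (n_el + 1) / 4; each product of
    # two consecutive integers is even, so the division is always exact.
    return (n_max + 1) * (n_max + 2) * n_el * (n_el + 1) // 4

def total_energy(proto_energies, scale=2.5, offset=-0.3):
    # Scale and displace the proto-energies, then pool by summation;
    # the sum removes any dependence on the order of the atoms.
    return np.sum(scale * np.asarray(proto_energies) + offset)

rng = np.random.default_rng(0)
protos = rng.normal(size=7)                      # one proto-energy per atom
e_direct = total_energy(protos)
e_shuffled = total_energy(rng.permutation(protos))  # atoms reordered
```

Shuffling the atoms leaves the pooled energy unchanged, which is the essence of the set-pooling construction.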
The interpretation of the scaled and displaced proto-energies as single-atom energies in a general physical context is dubious -- for instance, two rounds of training of the same model on the same data can lead to almost identical total energies while predicting only weakly correlated single-atom energies. However, having them add up to the total energy is enough to enable their use whenever an appropriate gauge principle holds, as is the case in MD thermal transport calculations.\cite{Ercole2016} \begin{figure*} \centering \includegraphics[width=\textwidth]{network_diagrams.pdf} \caption{Top: schematic description of the NeuralIL architecture (basic homoscedastic case). Center: NNs making up the core of the model in the original NeuralIL (left) and the new ResNet-based version (right). Bottom: structure of the basic blocks of the ResNet for regression, namely the general dense block, the special case with output size 1, and the identity block. The numbers of neurons in these example diagrams are kept smaller than in the actual model to facilitate their interpretation. A single arrow between a pair of layers denotes all-to-all connections. A LayerNorm block superimposed on a layer of neurons is to be interpreted as acting on the linear combination of the inputs plus the offset, before it is passed into the activation function.} \label{fig:blocks} \end{figure*} In the earliest implementations of this kind of NNFF, the core model was a fully connected MLP with traditional sigmoid activation functions. Despite the known theoretical representation power of those NNs, they are also famously prone to phenomena like vanishing gradients and dead neurons. Those result in slow training, drastic limitations to the practical depth of the NN, and a need for fine tuning to avoid divergences and overfitting. 
Moving beyond the constraints of the sigmoid-based MLP has been a significant contributor to the success of modern NN-based ML in general, and a lot of effort continues to be directed at improving all the basic building blocks of modern models. The original NeuralIL still used an MLP as its core model, but incorporated several of those key improvements: instead of a sigmoid-like activation function, it uses Swish-1, a state-of-the-art differentiable alternative,\cite{swish} and it makes aggressive use of normalization (specifically LayerNorm\cite{LayerNorm}) between layers whenever possible. A schematic depiction of that core model can be found in the center-left panel of Fig.~\ref{fig:blocks}; note that neither the inputs nor the final layer is normalized, the former so as not to destroy the scale of the atomic density and the latter because it contains a single neuron. Unfortunately, even this improved MLP eventually hits its limit. For demanding data sets, the probability of a suboptimal training run can be significant enough that an average of one model in an ensemble of five or ten might be an outlier with anomalously large errors. Moreover, diminishing returns are obtained by adding extra layers to the basic MLP design. To overcome this situation, we completely scrap the MLP and replace it with a variation on the idea of a ResNet. ResNets were designed specifically to enable revolutionarily deep convolutional architectures\cite{ResNet} of up to 1000 layers while avoiding pitfalls like vanishing gradients. The idea behind them is that, while in principle a deep NN could spontaneously \enquote{shed} some of its layers by training them to reproduce the identity function, its practical functional form makes such a state unlikely to be reached. On the other hand, it is very easy to train a layer to ignore part of its inputs by driving the required coefficients to zero.
Therefore, instead of a single path through fully connected layers, ResNets are built out of units where information is sent along both a deep and a shallow path, whose results are combined in a trainable manner in order to generate the final output. The viability of ResNet-inspired connections for NNFFs has been illustrated before, using a convolutional architecture.\cite{schutt2017schnet} Here we base our approach to residual learning on the adaptation of ResNets to deep regression problems presented in Ref.~\onlinecite{RegressionResNet}, but we replace BatchNorm\cite{BatchNorm} with LayerNorm due to the complications that the former introduces in the calculation of derivatives. This regression ResNet is made up of two kinds of blocks: identity blocks ($N$ to $N$ mappings) and dense blocks ($N$ to $M\ne N$ mappings). The bottom panel of Fig.~\ref{fig:blocks} shows both, along with the slightly special case of a dense block with $M=1$, and its center-right panel shows an example of a ResNet core built out of those blocks. We have found\cite{sebastian_LJ} that for certain high-energy configurations not typically found in training sets (e.g. those with some very small interatomic distances) adding a simple pair potential can help avoid unphysical behavior and stabilize MD simulations. Therefore, a Morse contribution of the form \begin{equation} E_{\mathrm{Morse}}\pqty*{r} = \sum\limits_{i,j\in\mathrm{atoms}} V_{\mathrm{Morse}}\pqty*{r_{ij}}\text{, with }V_{\mathrm{Morse}}\pqty*{r}=De^{-A\pqty*{r - B}}\sqty*{e^{-A\pqty*{r - B}} - 2} \label{eqn:morse} \end{equation} \noindent can optionally be added to the NeuralIL potential energy. Here the $A$, $B$ and $D$ parameters for each pair of elements are derived, using Kong's mixing rules, from element-specific values of $A$, $B$ and $D$. In turn, each of those is calculated from a fully trainable real parameter that is passed through a SoftPlus function to constrain the result to positive values.
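A minimal sketch of the per-pair Morse term with SoftPlus-constrained parameters (the raw values are purely illustrative, and the mixing rules and smooth cutoff are omitted):

```python
import math

def softplus(x):
    # Numerically stable log(1 + exp(x)); maps any real number to (0, inf).
    return math.log1p(math.exp(-abs(x))) + max(x, 0.0)

def morse(r, raw_D, raw_A, raw_B):
    # Each physical parameter is obtained from an unconstrained trainable
    # value through SoftPlus, which guarantees positivity.
    D, A, B = softplus(raw_D), softplus(raw_A), softplus(raw_B)
    e = math.exp(-A * (r - B))
    return D * e * (e - 2.0)
```

The minimum of depth $D$ sits at $r = B$, and the contribution decays to zero at long range, so the term acts mainly as a short-range repulsive wall.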
The Morse component does not impose any limit on the total potential energy function, since the $D$ parameters can still be driven to zero during training; moreover, we multiply each of the pair contributions by a completely smooth cutoff based on the bump function\cite{tu2010introduction} to avoid any discontinuity of the forces or their derivatives. The cutoff radius $r_{\mathrm{cut}}$ is the same as for the descriptor generator, and the switching radius is a trainable parameter between $0.5r_{\mathrm{cut}}$ and $r_{\mathrm{cut}}$. The full list of trainable parameters therefore comprises the embedding coefficients for each atom type, the weights and biases of all neurons in the core, the scaling factor and offset for transforming the proto-energies into single-atom energies, the element-specific Morse parameters and the switching radius of the Morse potential. In this article we use that Morse contribution only in the case of \ce{SrTiO3}. NeuralIL is implemented on top of JAX,\cite{jax2018github} a Python library providing just-in-time (JIT) compilation and advanced automatic differentiation features. The former affords high performance and parallelization on both CPUs and GPUs. Thanks to the latter, vector-Jacobian and Jacobian-vector product operators (VJP and JVP, respectively) can be created automatically for code implemented in JAX, from which very efficient and accurate differentiation workflows are easy to assemble. Specifically, all forces in a given configuration are calculated through a single invocation of the VJP of the potential energy.\cite{MontesCampos_JCIM22} Physically interesting higher-order derivatives, such as the Hessian required for a normal mode calculation, can also be obtained without the even more significant coding overhead they would normally require. Likewise, the gradients used in training and in the active learning procedures described below are extracted through automatic differentiation. 
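As an illustration of this pattern, the snippet below uses a toy harmonic chain standing in for the NNFF; a single VJP evaluation yields the forces on all atoms at once:

```python
import jax
import jax.numpy as jnp

def potential_energy(positions):
    # Toy pair potential in place of the NNFF: harmonic interactions
    # between consecutive atoms in a chain.
    deltas = positions[1:] - positions[:-1]
    return 0.5 * jnp.sum(deltas ** 2)

positions = jnp.array([[0.0, 0.0, 0.0],
                       [1.0, 0.0, 0.0],
                       [1.0, 1.0, 0.0]])

# All forces from a single reverse-mode pass: F = -dE/dR.
energy, vjp_fun = jax.vjp(potential_energy, positions)
(grad_positions,) = vjp_fun(1.0)
forces = -grad_positions
```

The same result could be obtained with `jax.grad`, which is itself built on the VJP machinery; the point is that one reverse-mode sweep produces the full force array regardless of the number of atoms.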
In addition to JAX itself, we use FLAX\cite{flax} to simplify model construction and parameter bookkeeping. Our code is distributed as open source and our models are available for download.\footnote{Zenodo record with DOI: \href{https://doi.org/10.5281/zenodo.7643625}{10.5281/zenodo.7643625}} \subsection{Extension of the model to account for heteroscedasticity} Building deep ensembles demands a heteroscedastic formulation of the regression problem and therefore a variance estimate. The extreme representation power of NNs offers the interesting opportunity to have the model itself produce this estimate and to train both components simultaneously. Although in principle this could be accomplished with a completely separate NN with common inputs, there are advantages to a higher level of integration. First, a scalar variance must satisfy the same symmetries as the main output of the FF (the potential energy) with respect to transformations of the system of coordinates and to permutations of atoms; second, it is reasonable to assume that the main outputs of the NNFF (potential energy and forces) and the variance are affected by common influence factors best expressed by quantities derived from the descriptors but with an intermediate level of elaboration. Taking this into account, we adopt the common strategy of notionally splitting the ResNet-based core model into a \emph{feature extractor} containing the layers closer to the input and a \emph{head} containing those closer to the output. We then add a second head to have the core generate an additional output, which is later scaled and displaced by trainable constants, filtered through a SoftPlus function to guarantee non-negativity, and summed over atoms to provide an estimate of $\sigma_E^2$, the variance of the total potential energy. 
Building $\sigma_E^2$ as a sum over atoms takes advantage of the set-pooling architecture to enforce the permutation invariance, but the individual contributions to this quantity should not be interpreted as variances of the single-atom energies, nor should this construction be read as an assumption of a lack of correlation between the single-atom contributions to the energy. A better starting point for an atom-resolved heteroscedastic formulation is to attach a variance to each of the atomic forces. Despite the fact that all components of the force vector are obtained from the total energy by automatic differentiation, it is cumbersome to compute the variance of the force on atom $i$ along Cartesian axis $\alpha$, $\sigma^2_{f_i\supar{\alpha}}$, from derivatives of statistics of the energies -- as shown in Appendix~\ref{apx:covariance}, this requires a two-point calculation to estimate the correlation between the energies of two configurations. Instead, we add a third head to the ResNet core whose output is also passed to a scaling and offset neuron and a SoftPlus function, but not summed over atoms, and take that single atom result as an isotropic $\sigma^2_f$. A schematic representation of the heteroscedastic NeuralIL core with its three heads is presented in Fig.~\ref{fig:heteroscedastic}. \begin{figure*} \centering \includegraphics[width=\textwidth]{heteroscedastic_diagrams.pdf} \caption{Schematic description of our extension of the NeuralIL architecture to allow for a heteroscedastic loss. Abbreviations: RNI = ResNetIdentity; RND = ResNetDense. The dimensions of the layers correspond to the actual model in this paper.} \label{fig:heteroscedastic} \end{figure*} In contrast with the aforementioned situation regarding $\sigma_E^2$, the fact that the $\sigma^2_f$ thus calculated is a function of the descriptors centered at a single atom does place some constraints on the result (e.g. it is short ranged). 
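The post-processing applied to the two variance heads can be sketched as follows (a NumPy stand-in with the trainable scales and offsets as plain arguments; the function names are ours, not the actual NeuralIL API):

```python
import numpy as np

def softplus(x):
    # log(1 + exp(x)), evaluated stably; maps any real to (0, inf).
    return np.logaddexp(0.0, x)

def variance_heads(raw_e, raw_f, scale_e, offset_e, scale_f, offset_f):
    # sigma_E^2: per-atom head outputs are scaled, displaced, filtered
    # through SoftPlus for non-negativity, and pooled by summation.
    sigma2_E = np.sum(softplus(scale_e * raw_e + offset_e))
    # sigma_f^2: same filtering, but kept per atom (isotropic ansatz:
    # one variance shared by the three Cartesian components).
    sigma2_f = softplus(scale_f * raw_f + offset_f)
    return sigma2_E, sigma2_f

raw_e = np.array([-2.0, 0.5, 3.0])   # one head output per atom
raw_f = np.array([1.0, -1.0, 0.0])
sigma2_E, sigma2_f = variance_heads(raw_e, raw_f, 1.0, 0.0, 1.0, 0.0)
```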
Nevertheless, even in conjunction with the isotropic ansatz this leaves plenty of flexibility to obtain a good enough set of variances to derive point weights for a heteroscedastic model. The core of our homoscedastic models uses a ResNet architecture with composite layers of widths $64$, $32$ and $16$. The heteroscedastic extension uses the same sequence of widths for the feature extractor and adds a ResNetIdentity($16$) layer for each of the three heads, as represented in Fig.~\ref{fig:heteroscedastic}. \subsection{Loss and training}\label{subsec:training} After random initialization of all coefficients from a truncated normal distribution following the LeCun normal initialization,\cite{LeCuBottOrrMull9812} our homoscedastic models are trained by minimizing the following loss function: \begin{equation} \begin{aligned} \mathcal{L} =& \frac{1}{2}\aqty*{\frac{0.2}{n_{\mathrm{atoms}}}\sum\limits_{i=1}^{n_{\mathrm{atoms}}} \log\sqty*{\cosh\pqty*{\frac{\norm*{\bm{f}_{i,\mathrm{predicted}} - \bm{f}_{i,\mathrm{reference}}}_2}{\SI{0.2}{\electronvolt\per\angstrom}}}}} \\ +&\frac{1}{2}\aqty*{0.02 \log\sqty*{\cosh\pqty*{\frac{E_{\mathrm{pot}} - E_{\mathrm{pot},\mathrm{reference}}}{n_{\mathrm{atoms}}\times\SI{0.02}{\electronvolt\per\atom}}}}}. \end{aligned} \label{eqn:homoscedastic_loss} \end{equation} \noindent Here, $\aqty*{\cdot}$ denotes an average over configurations, estimated using a different minibatch at each iteration within an epoch. The log-cosh function smoothly interpolates between a quadratic and a linear regime, reducing the weight of outliers and providing a form of gradient clipping during training.
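A per-configuration sketch of Eq.~\eqref{eqn:homoscedastic_loss}, with the prefactors hard-coded and the units (\si{\electronvolt}, \si{\angstrom}) left implicit:

```python
import numpy as np

def log_cosh(x):
    # Stable log(cosh(x)) = |x| + log(1 + exp(-2|x|)) - log(2); quadratic
    # for small residuals, linear (gradient-clipped) for large ones.
    a = np.abs(x)
    return a + np.log1p(np.exp(-2.0 * a)) - np.log(2.0)

def homoscedastic_loss(f_pred, f_ref, e_pred, e_ref, n_atoms):
    # Force term: per-atom residual norms in units of 0.2 eV/angstrom;
    # energy term: residual per atom in units of 0.02 eV/atom.
    force_res = np.linalg.norm(f_pred - f_ref, axis=-1)
    force_term = 0.5 * 0.2 * np.mean(log_cosh(force_res / 0.2))
    energy_term = 0.5 * 0.02 * log_cosh((e_pred - e_ref) / (n_atoms * 0.02))
    return force_term + energy_term
```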
Although the factors above have been arranged in such a way as to make the loss dimensionless, the formula dimensionally homogeneous, and its limits easy to identify, we have found that the relative weights of the energies and forces can be varied within a wide range with negligible effect on the results, and thus do not need to be tuned, when the training procedure laid out in this section is followed. Some of us have shown \cite{MontesCampos_JCIM22} that the forces alone contain enough information to drive the training of an NNFF, with the caveat that the origin of energies needs to be adjusted at the end of the process. The role of the energy contribution to the loss in Eq.~\eqref{eqn:homoscedastic_loss} is mainly to render that additional step unnecessary. On the other hand, our heteroscedastic loss has a hybrid structure: \begin{equation} \begin{aligned} \mathcal{L} &= \frac{1}{2}\aqty*{\frac{1}{2} \frac{\pqty*{E_{\mathrm{pot}} - E_{\mathrm{pot},\mathrm{reference}}}^2}{\sigma_{E}^2} + \log \pqty*{\frac{\sigma_{E}^2}{\SI{1}{\electronvolt^2}}}}\\ & + \frac{1}{2}\aqty*{\frac{1}{n_{\mathrm{atoms}}}\sum\limits_{i=1}^{n_{\mathrm{atoms}}}\log\sqty*{\cosh\pqty*{\frac{\pi}{2}\frac{\norm*{\bm{f}_{i,\mathrm{predicted}} - \bm{f}_{i,\mathrm{reference}}}_2}{\sigma_{f,i}}}}+\log\pqty*{\frac{\sigma_{f,i}}{\SI{1}{\electronvolt\per\angstrom}}}}. \end{aligned} \label{eqn:heteroscedastic loss} \end{equation} \noindent The energy contribution has the well-known form of the negative logarithm of a likelihood of a Gaussian distribution, as would be used in a conventional stochastic maximum-likelihood estimation. The force contribution can be interpreted as a gradient-clipped version of the same construction or as derived from the negative log-likelihood of a hyperbolic secant distribution\cite{Fischer2013-wh} (a super-Gaussian distribution with an excess kurtosis of $2$).
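The two per-configuration ingredients of this loss can be sketched as follows (unit normalizations dropped). As a sanity check, with the bracket written exactly as above the Gaussian term is minimized, at fixed residual $r$, by $\sigma^2 = r^2/2$:

```python
import numpy as np

def energy_nll(residual, sigma2):
    # Bracketed energy term: (1/2) (Delta E)^2 / sigma_E^2 + log(sigma_E^2).
    return 0.5 * residual**2 / sigma2 + np.log(sigma2)

def force_nll(residual_norm, sigma):
    # Bracketed force term: log cosh((pi/2) ||Delta f|| / sigma_f)
    # + log(sigma_f), i.e. a hyperbolic-secant negative log-likelihood,
    # with log(cosh(z)) = logaddexp(z, -z) - log(2) for stability.
    z = 0.5 * np.pi * residual_norm / sigma
    return np.logaddexp(z, -z) - np.log(2.0) + np.log(sigma)
```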
The reason for the different treatment of the energy is that, even if we were to assume a similarly leptokurtic distribution for each of the atomic energies, the total energy would still be very close to a Gaussian. The standard method for minimizing the loss in NN training is stochastic gradient descent (SGD). In the basic incarnation of this method, each parameter of the model is updated by an amount proportional to the corresponding component of the local gradient of the loss computed using a minibatch. Refined versions of SGD, like the popular ADAM\cite{ADAM} and derived methods, keep a limited memory of moments of the gradient to stabilize and accelerate the descent toward the minimum. Here we use VeLO, a fully nonlinear optimizer that takes a radical departure from the philosophy of SGD to achieve much higher performance.\cite{VeLO} VeLO is itself a complex ML model that computes the update to each parameter at each iteration based on the current values of all parameters and the gradient of the loss. That model has been meta-trained, using evolutionary algorithms, on a wide sample of models including MLPs, ResNets, convolutional networks, transformers and autoencoders. We use the stochastic optimization library OPTAX,\cite{deepmind2020jax} integrated both in the JAX ecosystem and with the open-source implementation of VeLO. A remarkable feature of VeLO is that it renders the choice of a learning rate unnecessary: its only parameter is the total number of epochs. \subsection{Bootstrap aggregation, committees and deep ensembles} To build a bootstrap-aggregation ensemble, we repeat the following process $10$ times: we sample, with replacement, a pool of a size equal to $75\%$ of the total data set; we then take $50\%$ of that pool as our training data and leave the remaining $50\%$ for validation; finally, we train a single NNFF based on those subsets. 
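The resampling scheme can be sketched in a few lines of Python (the function name is ours; $741$ stands in for the size of the EAN data set described below):

```python
import random

def bootstrap_split(n_data, seed):
    # Sample, with replacement, a pool of 75% of the data set, then
    # split that pool evenly into training and validation indices.
    rng = random.Random(seed)
    pool = [rng.randrange(n_data) for _ in range((3 * n_data) // 4)]
    half = len(pool) // 2
    return pool[:half], pool[half:]

# One independent draw per ensemble member.
splits = [bootstrap_split(741, seed) for seed in range(10)]
```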
In terms of performance this comes close to being equivalent to training the basic NNFF ten times,\cite{Zhu_arXiv23} with the only savings coming from the smaller training and validation data sets and from the avoided JIT recompilations. This stands in total contrast with the situation during inference, where evaluating the ten NNs for each configuration imposes very little overhead in comparison with evaluating only one because the most time-consuming part of the calculation, getting the descriptors and the associated VJP, is only performed once. Committees take advantage of this imbalance of computational complexity by training all networks on the same data, the only source of randomness being the different initialization of the coefficients of the NN. We implement a committee as a single FLAX model with an additional dimension for each tensor, which runs over the members of the ensemble. Accordingly, each $10$-member committee used in this work is efficiently trained in a single run with little performance loss compared to the training of a single NN. Note that this optimization does introduce a subtle reduction in diversity because the splitting of the training data in minibatches will also be the same for all members of the committee. The assumption of homoscedasticity results in the well-known average and uncertainty estimates for bootstrap-aggregation ensembles and committees \begin{subequations}\label{grp:unweighted} \begin{gather} \aqty*{E_{\mathrm{pot}}}_{\mathrm{HOS}} = N^{-1}_{\mathrm{ensemble}} \sum\limits_{i\in\mathrm{members}} E_{\mathrm{pot},i}\label{eqn:averages:unweighted}\\ \sigma^2_{E,\mathrm{HOS}} = \frac{1}{N_{\mathrm{ensemble}}\pqty*{N_{\mathrm{ensemble}}-1}} \sum\limits_{i\in\mathrm{members}} \pqty*{E_{\mathrm{pot},i}-\aqty*{E_{\mathrm{pot}}}_{\mathrm{HOS}}}^2.\label{eqn:variances:unweighted} \end{gather} \end{subequations} The deep ensembles share the computational advantage with the committees that they can be implemented as a single model.
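Eqs.~\eqref{eqn:averages:unweighted} and \eqref{eqn:variances:unweighted} translate directly into code; note that the second expression is the variance of the ensemble mean, not the population variance:

```python
import numpy as np

def homoscedastic_stats(energies):
    # Unweighted ensemble mean and the variance of that mean,
    # 1 / [N (N - 1)] * sum_i (E_i - <E>)^2.
    e = np.asarray(energies, dtype=float)
    n = e.size
    mean = e.mean()
    var_of_mean = np.sum((e - mean) ** 2) / (n * (n - 1))
    return mean, var_of_mean
```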
The increase in complexity as a consequence of the additional heads is barely noticeable in terms of computational time, which is always dominated by the descriptor calculation. However, assuming a lack of correlation between members, the additional head (see Fig.~\ref{fig:heteroscedastic}) allows us to express a collective prediction from a deep ensemble using the minimum-variance linear unbiased estimator of the corresponding population mean, the weighted average \begin{subequations}\label{grp:weighted} \begin{align} \aqty*{E_{\mathrm{pot}}}_{\mathrm{HES}} &= \pqty*{\sum\limits_{i\in\mathrm{members}} \frac{1}{\sigma^2_{E,i}}}^{-1} \sum\limits_{i\in\mathrm{members}} \frac{E_{\mathrm{pot},i}}{\sigma^2_{E,i}} \label{eqn:averages:weighted}\\ \sigma^2_{E,\mathrm{HES}} &= \pqty*{\sum\limits_{i\in\mathrm{members}} \frac{1}{\sigma^2_{E,i}}}^{-1}\label{eqn:variances:weighted}\end{align} \end{subequations} \noindent For deep ensembles, however, the standard prescription for uncertainty estimation\cite{lakshminarayanan2017simple,AutoDEUQ} combines an unbiased committee variance as a proxy for epistemic uncertainty with the average of the additional head as an approximation to aleatoric uncertainty: \begin{equation} \sigma^2_{E,\mathrm{DE}} = \underbrace{\vphantom{\sum\limits_{i\in\mathrm{members}}}N_{\mathrm{ensemble}}\sigma^2_{E,\mathrm{HOS}}}_{\text{epistemic}} + \underbrace{N^{-1}_{\mathrm{ensemble}}\sum\limits_{i\in\mathrm{members}}\sigma^2_{E,i}}_{\text{aleatoric}} \label{eqn:variance:DE} \end{equation} \noindent In our heteroscedastic tests we use both $\sigma^2_{\mathrm{DE}}$ and $\sigma^2_{\mathrm{HES}}$ to see if there is an advantage to either of them. When employing $\sigma^2_{\mathrm{DE}}$, our estimate of the mean is Eq.~\eqref{eqn:averages:unweighted} for consistency. 
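In the same spirit, the weighted estimators of Eqs.~\eqref{eqn:averages:weighted} and \eqref{eqn:variances:weighted} and the deep-ensemble variance of Eq.~\eqref{eqn:variance:DE} can be sketched as:

```python
import numpy as np

def heteroscedastic_stats(energies, sigma2):
    # Inverse-variance weighted mean (the minimum-variance linear
    # unbiased estimator) and the variance of that weighted mean.
    w = 1.0 / np.asarray(sigma2, dtype=float)
    mean = np.sum(w * np.asarray(energies, dtype=float)) / np.sum(w)
    return mean, 1.0 / np.sum(w)

def deep_ensemble_variance(energies, sigma2):
    # Epistemic part: unbiased sample variance across the members,
    # equal to N * sigma^2_HOS; aleatoric part: mean of the heads.
    epistemic = np.var(np.asarray(energies, dtype=float), ddof=1)
    aleatoric = np.mean(np.asarray(sigma2, dtype=float))
    return epistemic + aleatoric
```

With equal per-member variances the weighted mean reduces to the plain ensemble mean, as expected.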
By definition, $\sigma^2_{\mathrm{DE}}$ is larger than $\sigma^2_{\mathrm{HES}}$ or $\sigma^2_{\mathrm{HOS}}$ by a factor of approximately $N_{\mathrm{ensemble}}$, so we scale $\sigma_{\mathrm{DE}}$ by $\sqrt{N_{\mathrm{ensemble}}}$ for direct comparisons. We furthermore note that although $\sigma^2_{\mathrm{HES}}$ and the second term in $\sigma^2_{\mathrm{DE}}$ are termed aleatoric in the literature, they predict uncertainty based both on error in the input data and on the model itself if some data points are more difficult to fit than others. Since the input data in our study contains no error by construction, all contributions to the uncertainty are, strictly speaking, epistemic, i.e. caused by inadequacies of the model that can be reduced by adding more data. We use analogous constructions for the uncertainties of the forces in the homo- and heteroscedastic cases. \subsection{Data generation for EAN.} The data used for training the EAN potential is the same set employed in the original NeuralIL article,\cite{MontesCampos_JCIM22} distributed as part of its supplementary information, and we choose the same training/validation split to make our new results more directly comparable with those from the original design. The data set comprises $368$ structures directly extracted from a classical MD trajectory plus $373$ others to which a DFT energy minimizer has been applied for five steps. All energies and forces had been calculated using the GPAW\cite{gpaw1,gpaw2} DFT package with a linear combination of atomic orbitals (LCAO) basis set; see the original reference for details of the parameterization. We then run an MD simulation using a single NNFF trained on that data and the JAX-MD library\cite{jaxmd2020} to obtain $12$ MD trajectories in the $NVT$ ensemble at $T=\SI{298}{\kelvin}$ with a duration of \SI{80}{\pico\second} each and a time step of \SI{1}{\femto\second}.
The thermostat used is a Nosé-Hoover chain\cite{Tuckerman_2006} with a length of $5$ and a time constant of $100$ time steps. The initial conditions for all $12$ trajectories are based on the same snapshot from the training set, but the velocities are initialized at random from a Maxwell-Boltzmann distribution at $T=\SI{298}{\kelvin}$ and are different in each case. We select $4$ out of $12$ trajectories as representative of different situations vis-à-vis uncertainty, as detailed in the next section. We delimit the high-uncertainty segments of those four trajectories and we sample $125$ equispaced points that we then run through the same DFT setup described above to obtain energies and forces to supplement the data set in a new round of training. \subsection{Data generation for the \ce{SrTiO3} surface.} To generate the reference data for bulk \ce{SrTiO3} we start from a $3\times 3\times 3$, $135$-atom cubic supercell. We employ the same lattice parameter as in Ref.~\onlinecite{STO_surfaces}, namely \SI{4.01}{\angstrom}. We then rattle the atomic positions using random displacements generated from an isotropic Gaussian distribution. In particular, the mass-weighted displacement $m_i u_i^{\pqty{\alpha}}$ of atom $i$ from its equilibrium position along Cartesian axis $\alpha$ is drawn from a Gaussian distribution with variance equal to $3\frac{T}{k_\mathrm{B}\theta_\mathrm{D}}$, where $k_{\mathrm{B}}$ is the Boltzmann constant and $\theta_\mathrm{D}$ is the Debye temperature of the material. We take $\theta_\mathrm{D}$ as \SI{418.5}{\kelvin}, the average of the experimental values compiled in Ref.~\onlinecite{PhysRevMaterials.3.022001}, and we generate $600$ configurations with $T=\SI{500}{\kelvin}$ and a further $600$ with $T=\SI{1000}{\kelvin}$. We set aside $100$ points from each of those series to build a test set, and use the remaining $500$ for training and validation. 
We again employ GPAW in the LCAO mode, with the PBE approximation\cite{Perdew1996} to the exchange and correlation terms and $\Gamma$-only sampling in reciprocal space. To progressively improve the performance of the \ce{SrTiO3} NNFF trained on these bulk data on the $4\times 1$ \ce{SrTiO3}$(110)$ surface reconstruction, we apply an active-learning strategy inspired by the work of Bombarelli and coworkers.\cite{Bombarelli_adversarial} Starting from a predefined configuration, we first apply a random displacement drawn from a Gaussian distribution with zero mean and a standard deviation of \SI{0.1}{\angstrom} to each atom. We then run a numerical maximization of the logarithm of the adversarial loss \begin{equation} \mathcal{L}_{\mathrm{adv}} = \sigma^2_{\bm{f}}\exp\bqty*{-\frac{E_{\mathrm{pot}}}{k_{\mathrm{B}}T}} \label{eqn:adversarial} \end{equation} for $T=\SI{500}{\kelvin}$ using the Powell conjugate-directions method,\cite{Powell} setting the maximum relative norm of the change in the independent variables acceptable for convergence to $x_{\mathrm{tol}}=\num{100}$. This allows us to explore regions of configuration space that are both unknown and physically plausible. We initially generate $100$ surface configurations using this method starting from the same initial positions used to start the evolutionary search presented in Ref.~\onlinecite{STO_surfaces}. We then run a second iteration where each new point is generated starting from a randomly chosen configuration from the first iteration. As test data for the \ce{SrTiO3} surface we use $5000$ structures chosen at random from a larger collection of $29500$ gathered from three different $500$-generation stochastic trajectories of the covariance matrix adaptation evolution strategy (CMA-ES) search algorithm.\cite{STO_surfaces} Those three trajectories differ in the number of layers of atoms that are allowed to move during the optimization.
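The maximization of the logarithm of the adversarial loss introduced above can be sketched with SciPy's Powell optimizer. The quadratic surrogates for $\sigma^2_{\bm{f}}$ and $E_{\mathrm{pot}}$ below are toy stand-ins for the trained NNFF, not the actual model, and the tolerance value is illustrative.

```python
import numpy as np
from scipy.optimize import minimize

K_B = 8.617333262e-5  # Boltzmann constant in eV/K
T = 500.0

# Toy stand-ins: uncertainty grows away from the training region at x = 0,
# while the energy penalizes large displacements.  The 0.5*K_B*T prefactor
# is chosen so the trade-off places the optimum on the unit sphere.
sigma2_f = lambda x: 1.0 + np.dot(x, x)
e_pot = lambda x: 0.5 * K_B * T * np.dot(x, x)

def neg_log_adversarial(x):
    # maximizing log(sigma2_f * exp(-E/kT)) == minimizing this quantity
    return -(np.log(sigma2_f(x)) - e_pot(x) / (K_B * T))

x0 = np.full(6, 0.1)  # small initial displacement of the "atoms"
res = minimize(neg_log_adversarial, x0, method="Powell",
               options={"xtol": 1e-3})
```

The optimizer balances the two exponentially competing terms: it moves away from the well-sampled region until the Boltzmann factor makes further excursions implausible, which is exactly the "unknown but physically plausible" trade-off the adversarial loss encodes.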
The atomistic model used to study these reconstructions contains $136$ atoms, employs periodic boundary conditions only in the directions perpendicular to $(110)$, and is always kept symmetric with respect to its central plane. It therefore simulates a slab with two equivalent free surfaces. \section{Results and discussion} \subsection{Performance of the revised NeuralIL on EAN data} We first examine the general performance of the models in terms of speed and accuracy using the EAN data set. As a reference, the first row of Table~\ref{tbl:EAN_comparison} contains the mean average errors in the prediction of the total potential energy and the forces on the validation portion of that data set using the original NeuralIL model from Ref.~\onlinecite{MontesCampos_JCIM22}. The next block of three rows lists the values of the same statistics for the main type of model employed in this manuscript (with a ResNet core and the VeLO learned optimizer) with a different number of total training epochs and therefore a different CPU budget for the optimization phase. All three showcase the enormous performance advantage of the new architecture and procedure: even with only $21$ epochs, the MAE values are competitive, and with $101$ epochs the final model outperforms the original for both energy and force predictions. We settle on $51$ training epochs as an excellent compromise. This amounts to a $10$- to $20$-fold acceleration with respect to the original $500$ to $1000$ training epochs. To help identify which elements are determinant for this improvement, we also analyze the results of two hybrid models: one keeps the MLP core of NeuralIL but uses the VeLO optimizer for training while the other employs the new ResNet core along with the original Adam optimizer with a one-cycle learning-rate schedule. Naturally, both of them show competitive MAEs (see the third block of rows in Table~\ref{tbl:EAN_comparison}). 
A more detailed picture emerges from the evolution of the loss during training, as depicted in Fig.~\ref{fig:resnet_and_velo}. Looking at the value of the loss at $51$ epochs, it can be seen that either of the major upgrades to the original design is, in isolation, enough to achieve a dramatic improvement in performance after a short training period, and could potentially enable an active learning workflow by itself. Our experience with more challenging data sets points to VeLO often having an edge. However, the combination of both techniques always offers an even better result at marginal additional cost. \begin{table} \begin{tabular}{lcc} Model & MAE $E_{\mathrm{pot}}$ & MAE $\bm{f}$ \\ & (\si{\milli\electronvolt\per\atom}) & (\si{\milli\electronvolt\per\angstrom}) \\\hline NeuralIL & $1.86$ & $65.6$ \\\hline ResNet + VeLO (21 epochs) & $1.37$ & $73.2$ \\ ResNet + VeLO (51 epochs) & $1.21$ & $66.8$ \\ ResNet + VeLO (101 epochs) & $1.47$ & $64.2$ \\\hline ResNet + ADAM + OneCycleLR & $1.35$ & $63.5$ \\ (1001 epochs) & & \\\hline Committee & & \\ (10 NNs, ResNet, VeLO, 51 epochs) & $1.16$ & $63.3$ \\ Deep ensemble & & \\ (10 NNs, ResNet, VeLO, 51 epochs) & $1.75$ & $65.0$ \\\hline \end{tabular} \caption{Validation statistics of the original NeuralIL force field,\cite{MontesCampos_JCIM22} the single models shown in Fig.~\ref{fig:resnet_and_velo}, a 10-NN committee and a 10-NN deep ensemble, all with the same EAN training and validation data sets.} \label{tbl:EAN_comparison} \end{table} \begin{figure} \centering \includegraphics[width=.6\columnwidth]{training_comparison.pdf} \caption{Evolution of the validation loss for the EAN data set with different choices of core architecture and optimizers. The vertical lines are placed at the $21$-, $51$- and $101$-epoch marks.} \label{fig:resnet_and_velo} \end{figure} We also check that we do not incur a significant degradation in predictive power by using a committee or a deep ensemble instead of a single neural network.
As the last two rows in Table~\ref{tbl:EAN_comparison} show, both afford comparable or better accuracy than a single NN with the same parameters. \subsection{Assessment of uncertainty in EAN using committees and deep ensembles} \begin{figure*} \centering \includegraphics[width=\textwidth]{profiles_a.pdf} \caption{Evolution of the committee and deep-ensemble uncertainty metrics along a selection of four MD trajectories with different initial conditions, using an NNFF based on the training data set from Ref.~\onlinecite{MontesCampos_JCIM22}. The $\sigma_f$ for the whole system is defined as $\sqrt{\sum\limits_{\mathrm{atoms}}\sigma_f^2}$.} \label{fig:ean_traj_analysis_a} \end{figure*} We now start our assessment of committees and deep ensembles as tools for uncertainty estimation by analyzing their behavior along an MD trajectory. Not only does this involve physically relevant collective changes of all degrees of freedom, but it is also one of the prime targets for quality assessment and improvement of MLFFs. The EAN data set from Ref.~\onlinecite{MontesCampos_JCIM22} in particular makes for an interesting test case since the configurations therein were extracted from classical molecular mechanics trajectories but the energies and forces were calculated with DFT; therefore, an MD simulation run with the trained potential will not retrace the steps of the original classical FF but explore other regions. As could be expected, this is immediately visible in the uncertainty derived from a committee or a deep ensemble. Figure~\ref{fig:ean_traj_analysis_a} illustrates this point using four example trajectories with very different characteristics.
The two topmost ones can be described as pathological: in the first trajectory, an anomalous configuration of the H atoms in a cation causes bond breakage in a local but progressively growing environment; in the second, the interaction between two cations leads to an artifactual dissociation of a \ce{C-N} bond in each of them that causes a catastrophic chain effect where other ions also lose their structural integrity. Representative sequences of snapshots for these two cases are presented in Fig.~\ref{fig:snapshots}. The other two trajectories are comparatively uneventful and merely involve the transfer of a single hydrogen from a cation to an anion; this process is physically plausible in a protic ionic liquid but is not represented in the training data. Although the global trend of the uncertainty with time is monotonically increasing in all cases, showing the aforementioned evolution towards unexplored configurations, the more catastrophic events leave a clear fingerprint in the plots and enable an easy classification of the trajectories. There is no clear advantage to the more advanced deep ensembles with respect to the simpler committees. Just like the forces contain more information than the total energy, the uncertainty in the forces is more informative and better behaved than its counterpart for the total energy. \begin{figure*} \centering \includegraphics[width=\textwidth]{2_failures.pdf} \caption{Sequences of snapshots illustrating the emergence of problems in the first two MD trajectories represented in Fig.~\ref{fig:ean_traj_analysis_a} when using a potential trained only on the original EAN data set. 
Gray, red, blue and white spheres represent carbon, oxygen, nitrogen and hydrogen atoms, respectively.} \label{fig:snapshots} \end{figure*} \subsection{Improving the potential to obtain more accurate MD trajectories} \begin{figure*} \centering \includegraphics[width=\textwidth]{profiles_b.pdf} \caption{The same metrics as in Fig.~\ref{fig:ean_traj_analysis_a}, evaluated on MD trajectories started from the same initial conditions but driven by an FF whose training data set was supplemented with points from the high-uncertainty segments of the trajectories in Fig.~\ref{fig:ean_traj_analysis_a}.} \label{fig:ean_traj_analysis_b} \end{figure*} The fact that our MD trajectories stray into inadequately explored segments of configuration space is detectable in the uncertainties and causes spurious behavior that either makes the simulation crash or renders its results unphysical. To solve this, we retrain the NNFF after supplementing the data set with the $500$ snapshots extracted from the high-uncertainty segments of the trajectories as explained in the methods section. We then restart the four example MD trajectories from the same initial conditions and monitor the uncertainty estimates again. As desired, this has the effect of avoiding any unphysical events along the trajectories and, as shown in Fig.~\ref{fig:ean_traj_analysis_b}, this is reflected in a stabilization of all uncertainty metrics at values close to their starting point. The change is also obvious in the analysis of an individual degree of freedom with the retrained potential (Fig.~\ref{fig:bond_stretching}, bottom panel): the retrained deep ensemble, and especially the retrained committee, approximate the DFT results over a significantly wider range. Figure~\ref{fig:bond_stretching} also analyzes the uncertainty estimates for a series of configurations sampled along a path generated by varying a single degree of freedom.
In particular, we study the stretching of a single N-O bond in an anion to establish a direct comparison with the bootstrap-aggregation strategy used in Ref.~\onlinecite{MontesCampos_JCIM22}. Figure~\ref{fig:bond_stretching}(top) shows that the ensemble-based approaches can provide a reliable proxy for error, which stays at manageable levels even at a certain distance from the values of those degrees of freedom that are better represented in the input data. The effect of the additional data included above to avoid the catastrophic behavior during MD runs is visible for short N-O distances in Fig.~\ref{fig:bond_stretching}(bottom). This data is scarcer than the original data, which is centered around $d_\text{N-O}\approx\SI{1.28}{\angstrom}$. The increased uncertainty is, however, only reflected in the deep ensembles. \begin{figure} \centering \includegraphics[width=0.6\columnwidth]{bond_stretching.pdf} \caption{Committee and deep-ensemble predictions of the force along an N-O bond with the associated uncertainties using the original EAN data set (top) and the version supplemented with high-uncertainty MD data (bottom). Uncertainties are represented as a filled area with a half-width equal to the standard deviation $\sigma_{\mathrm{HOS}}$, $\sigma_{\mathrm{HES}}$, and $\sigma_{\mathrm{DE}}/\sqrt{10}$.
Also depicted are the DFT ground truth and a frequency density plot of the corresponding distance among the training data, with the minimum and maximum of the latter indicated as vertical dotted lines.} \label{fig:bond_stretching} \end{figure} \subsection{A potential for bulk cubic \ce{SrTiO3}} The test statistics of the potential trained only on data for bulk \ce{SrTiO3} (a committee of $10$ NNs) are summarized by a $\mathrm{MAE}=\SI{2.44}{\milli\electronvolt\per\atom}$ and an $\mathrm{RMSE}=\SI{33.6}{\milli\electronvolt\per\atom}$ for the energies, along with a $\mathrm{MAE}=\SI{115.91}{\milli\electronvolt\per\angstrom}$ and an $\mathrm{RMSE}=\SI{246.86}{\milli\electronvolt\per\angstrom}$ for the forces. The latter are to be judged in the context of a mean absolute deviation of \SI{2.23}{\electronvolt\per\angstrom} and a standard deviation of \SI{4.53}{\electronvolt\per\angstrom} in the test set. For a bootstrap-aggregation ensemble with $10$ members, these same test statistics are $\mathrm{MAE}=\SI{2.54}{\milli\electronvolt\per\atom}$ and $\mathrm{RMSE}=\SI{38.18}{\milli\electronvolt\per\atom}$ for the energies, and $\mathrm{MAE}=\SI{124.53}{\milli\electronvolt\per\angstrom}$ and $\mathrm{RMSE}=\SI{399.15}{\milli\electronvolt\per\angstrom}$ for the forces. A detailed comparison between the ground truth and the predictions of this potential can be seen in the top panel of Fig.~\ref{fig:committee_training}. That figure illustrates how well the committee estimate of the uncertainty works as a proxy for error and compares this performance with that of a bootstrap-aggregation ensemble. Although the sources of randomness in the committee are more limited (coefficient initialization only, without any additional diversity in the training/validation or the minibatch splits), and although the absolute values of their uncertainty estimates are different, both strategies succeed in identifying the most problematic points for the MLFF.
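The degree to which an uncertainty estimate tracks the actual error can be quantified with a rank correlation. The snippet below illustrates the computation on synthetic stand-ins for the per-point errors and uncertainties; the arrays and noise model are purely illustrative, not data from this study.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)

# synthetic stand-ins: per-point force errors and an uncertainty estimate
# that tracks them monotonically up to multiplicative noise
error = rng.lognormal(mean=0.0, sigma=1.0, size=500)
uncertainty = error * rng.lognormal(mean=0.0, sigma=0.3, size=500)

rho, p_value = spearmanr(uncertainty, error)
```

Because Spearman's correlation only depends on ranks, it is insensitive to the overall scale of the uncertainty, which is useful here since committee and bootstrap estimates differ in absolute magnitude while ordering the points similarly.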
Moreover, the Spearman correlation coefficient between uncertainty and error over the validation data set is 0.90 for the committee and 0.91 for the bootstrap-aggregation ensemble. Given the significant advantage in computational performance and scalability afforded by the committee, we settle on this approach for our treatment of \ce{SrTiO3}. \begin{figure} \centering \includegraphics[width=.6\columnwidth]{com_boot_sto_mix_bulk_forces.pdf} \caption{Comparison of error and uncertainty in the forces on the model trained on \ce{SrTiO3} bulk data. The uncertainties are calculated using either a bootstrap-aggregation ensemble or an NN committee. The points represent validation data and the lines are placed at the 75th and 90th percentiles of each quantity.} \label{fig:committee_training} \end{figure} \begin{figure*} \centering \includegraphics[width=\textwidth]{violin_with_vesta.pdf} \caption{Split violin plot (right) showing the frequency density of the logarithm of the error (top) and the logarithm of the uncertainty (bottom) resolved by layer for the \ce{SrTiO3} bulk potential evaluated on the $4\times 1$ surface data and for the first and second refinements on that potential based on active learning. A side view of an \ce{SrTiO3}(110) $4\times 1$ slab is shown on the left, indicating the layers.} \label{fig:violins} \end{figure*} \begin{figure} \centering \includegraphics[width=\columnwidth]{err_unc_spheres.pdf} \caption{Details of the uncertainty estimate for the atomic forces of the \ce{SrTiO3} bulk potential when evaluated at the global minimum found in a structure search for $4\times 1$ surface data.\cite{STO_surfaces} The error is shown overlaid on a ball-and-stick model background as a guide to the eye.} \label{fig:color_bubbles} \end{figure} We now put this potential to the test on data for the $4\times 1$ surface reconstructions. Given the vastly greater variety of atomic environments found in those configurations, it is expected to perform poorly. 
Moreover, predictions of the force on atoms closer to the center of the slab, with more bulk-like neighborhoods and smaller displacements, can reasonably be expected to have lower associated errors. This should also be reflected in our uncertainty metric. Figures~\ref{fig:violins}~and~\ref{fig:color_bubbles} illustrate that this is indeed the case. Specifically, Fig.~\ref{fig:violins} shows how typical values of both the error and the uncertainty increase by orders of magnitude for atoms close to the free surface. This kind of spatially resolved analysis is only made possible by our reliance on forces instead of the aggregate energy. Note, however, that the correlation between uncertainty and error within the topmost layers is not good enough to identify the most problematic atoms directly (see Fig.~\ref{fig:color_bubbles}). This shows that the skill of the uncertainty in the forces as a proxy for error improves with the cardinality of the set of atoms considered: it is still good when applied to a whole layer, but less so for individual atoms. Another factor influencing this result is that, although the contribution of an atom to the energy depends only on its own descriptors, the force on that atom depends on the descriptors of all neighbors in its local environment. Therefore, atoms with neighbors showing high uncertainties in their forces should also be considered candidates for high errors. As a consequence, simply looking for underrepresented types of local environments is not enough to improve the quality of a training set when tackling a new system such as these $4\times 1$ reconstructions. The accuracy delivered by the bulk-trained NNFF when applied to the surface atoms is clearly insufficient even for applications requiring only a rough approximation to the forces.
Those huge errors are also enough to completely distort the global picture of the potential energy hypersurface: as the blue series in Fig.~\ref{fig:AL_energies} illustrates, not only is the correlation between the predicted and true energies rather poor, but there is a systematic error that can be interpreted as a misalignment between the origins of energies of the DFT calculations and the FF. \begin{figure} \centering \includegraphics[width=.6\columnwidth]{parity_energies_sto.pdf} \caption{Parity plot for the energies of the test set of the $4\times 1$ \ce{SrTiO3} surface using three different potentials in an active-learning chain: one trained using only bulk data, one (AL$_1$) including $100$ additional structures generated through the maximization of the adversarial loss computed from the bulk-only potential and one (AL$_{2}$) whose training data set contains a further $100$ structures created through the same procedure but based on AL$_1$.} \label{fig:AL_energies} \end{figure} \subsection{Actively learning a potential for the \ce{SrTiO3} surface} Augmenting the training data set with relevant configurations can be expected to alleviate this situation. To assess the extent to which this is the case, we train a new potential, AL$_1$, after adding $100$ configurations generated through the adversarial algorithm described above. The effect on the potential energy is dramatic, as shown by the orange series in Fig.~\ref{fig:AL_energies}: correlation is greatly improved (as attested by the more elongated form of the graph) and the problematic offset mostly disappears. However, the energies predicted for the lowest-lying configurations still stand out as far worse than those for less stable configurations. Given the importance of surface reconstruction in determining the ground state and the fact that the bias still trends in the same direction as the offset of the bulk-only potential, this is likely to be due to the surface layers. 
The second series of violin plots for the forces in Fig.~\ref{fig:violins} confirms this hypothesis: both the big improvements in predictive skill and the remaining inaccuracies can be predominantly traced to the surface layers. A second round of active learning using an identical process starting from the same single configuration yields diminishing returns. However, if each iteration of the new adversarial process to generate an additional set of $100$ training configurations is instead started from one of the $100$ created for the first round, chosen at random, we achieve another major improvement in the uncertainty and error of the forces, as shown by the violet violins in Fig.~\ref{fig:violins}. After that second round, the uncertainties in adjacent layers become much more comparable. A notable feature of this progression is that the committee uncertainty estimate decreases more than the actual error. This suggests that the kind of uncertainty estimates analyzed here are more useful as a tool for comparing different points evaluated using the same potential, and less reliable when it comes to comparing different potentials. The total potential energy predictions also get visibly closer to parity with DFT (see Fig.~\ref{fig:AL_energies}), especially for the most difficult configurations. The importance of starting each round of active learning from as diverse a set of configurations as possible is best understood in light of the origin of our test data for \ce{SrTiO3}, which comes from an evolutionary search using the CMA-ES algorithm. The central consideration in such a search is balancing exploration and exploitation of configuration space, and as such it tries not to stay in the local environment of a single local minimum. This stands in stark contrast with the local optimization of the adversarial loss at the center of our active-learning procedure (as prescribed in Ref.~\onlinecite{Bombarelli_adversarial}).
\section{Summary and conclusions} We implemented three different low-overhead uncertainty estimators on top of an automatically differentiable descriptor-based NNFF and tested them in two different applications of practical relevance designed to involve excursions outside of the regions covered by training data. Two of those estimators are based on deep ensembles of NNs with additional heads that provide an intrinsic assessment of the difficulty of a prediction, while the third is the variance of a simple committee of models. We first ran MD simulations of EAN using a potential trained on DFT forces and energies for snapshots of an MD trajectory generated with a different FF. All the uncertainty metrics, whether applied to forces or energies, are able to detect when the potential leaves its comfort zone and predict catastrophic failures of the MD simulation. Moreover, they provide a viable criterion for selecting configurations from which useful information can be harvested for retraining the FF. After applying a single iteration of this procedure to snapshots of the problematic trajectories we were able to rerun the trajectories without issues. The same estimators show skill similar to a bootstrap-aggregation ensemble but can be trained in a small fraction of the time. In our second example we took the first steps towards learning a potential to describe the $4\times 1$ surface reconstructions of \ce{SrTiO3} starting from a potential developed for the bulk solid. We followed an adversarial learning approach initially designed for molecules; our results illustrate both its efficiency and the limitations its local character entails when used in the context of a global structure search. Key to enabling this active-learning workflow in practice is a dramatically accelerated training cycle brought about by two innovations in our NNFF design: residual learning and a nonlinear learned optimizer replacing the usual stochastic gradient descent schemes. 
Collectively, the results of these two examples underscore the amount of information contained in the forces and their uncertainties when compared to the total energy of the system, in keeping with existing observations about the advantages of forces over energies in training. When evaluated for the same quantity (energies or forces), heteroscedastic models and the associated uncertainty metrics do not seem to offer a systematic advantage over a simple committee in terms of identification of difficult or uncertain configurations, but can provide more specific information about the kind of uncertainty being estimated (aleatoric or epistemic). \begin{acknowledgments} This work was supported by the Austrian Science Fund (FWF) (SFB F81 TACO). E.H. acknowledges support from the Austrian Science Fund (FWF), project J-4415. The financial support of the Spanish Ministry of Science and Innovation (PID2021-126148NA-I00 funded by MCIN/AEI/10.13039/501100011033/FEDER, UE) is gratefully acknowledged. H.M.C. thanks the USC for his \enquote{Convocatoria de Recualificación do Sistema Universitario Español-Margarita Salas} postdoctoral grant under the \enquote{Plan de Recuperación Transformación} program funded by the Spanish Ministry of Universities with European Union's NextGenerationEU funds. This work was also supported by the Fundação para a Ciência e a Tecnologia (FCT) (funded by national funds through the FCT/MCTES (PIDDAC)) to CIQUP, Faculty of Science, University of Porto (Project UIDB/00081/2020) and IMS-Institute of Molecular Sciences (LA/P/0056/2020). \end{acknowledgments}
\section{Introduction} The objective of the present study is to simulate the behavior of mesoscopic systems based on an all-atom formulation at which the basic Physics is presumed known. Traditional molecular dynamics (MD) is ideal for such an approach if the number of atoms and the timescales of interest are limited \cite{NAMD,GROMACS}. However, ribosomes, viruses, mitochondria, and nanocapsules for the delivery of therapeutic agents are but a few examples of mesoscopic systems that can provide a challenge for conventional MD. In this paper, we develop a Physics-based algorithm that accounts for interactions at the atomic scale and yet makes accurate and rapid simulations for supramillion atom systems over long timescales possible. Typical coarse-graining (CG) methods include deductive multiscale analysis (DMA) \cite{Thermal,DMA}, inverse Monte Carlo \cite{IMC}, Boltzmann inversion \cite{IBM}, elastic network models \cite{ENM1,ENM2}, and other bead-based models \cite{CG1,CG2,Martini}. DMA methods derived from the $N-$atom Liouville equation (LE) show great promise in achieving accurate and efficient all-atom simulation \cite{OPs, Peter2005, Smoluchowski, Langevin}. The main theme of that work was to construct and exploit the multiscale structure of the $N-$atom probability density $\rho (\Gamma ,t)$ for the positions and momenta of the $N$ atoms (denoted $\Gamma $, collectively) as it evolves over time $t$. Most of the analysis focused on the friction-dominated, non-inertial regime, which is considered here as well. However, in these methods ensembles of all-atom configurations were required for evolving the CG variables. The approach introduced here avoids the need to construct these ensembles by coevolving the all-atom and CG states in a consistent way, and in the spirit of DMA-based methods, it does not make any conjectures on the form of the CG dynamical equations, avoiding the associated uncertainty.
A main theme of the present approach is the importance of coevolving the CG and microscopic states. This feature distinguishes our method from others which, for example, require the construction of a potential of mean force \cite{Tuckerman2008, Voth2008} using ensembles of microstates; a challenge for such methods is that the relevant ensembles are not known a priori since they are controlled by the CG state whose evolution is unknown, and is in fact the objective of a dynamics simulation. As a result, the present method does not require least squares or other types of fitting. Other multiscale methods, built on the projection operator formalism \cite{Oppenheim1996,Oppenheim1997,Oppenheim1998}, require the construction of memory kernels. This is typically achieved via a perturbation approach to overcome the complexity of the appearance of the projection operator in the memory kernels. Construction of such kernels is not required in our method. A first step in the present approach is the introduction of a set of CG variables $\Phi$ related to $\Gamma$ via $\Phi =\overset{\sim}{\Phi } (\Gamma )$ for a specified function $\overset{\sim}{\Phi } (\Gamma )$. When this dependence is well chosen, the CG variables evolve much more slowly than the fluctuations of small subsets of atoms. With these CG variables, the $N-$atom LE was solved perturbatively in terms of $\varepsilon$ \cite{OPs, Peter2005}, the ratio of the characteristic time of the fluctuations of small clusters of atoms to the characteristic time of CG variable evolution. This is achieved starting with the ansatz that $\rho$ depends on $\Gamma$ both directly and, via $\overset{\sim}{\Phi}$, indirectly. The theory proceeds by constructing $\rho \left( \Gamma ,\Phi ;t\right)$ perturbatively in $\varepsilon$, i.e., by working in the space of functions of $6N+N_{CG}$ variables (where $N_{CG}$ is the number of variables in the set $\Phi$).
To advance the multiscale approach, we here introduce Trotter factorization \cite{Trotter,Algebra,Tuckerman} into the analysis. Through Trotter factorization, the long-time evolution of the system separates into alternating phases of all-atom simulations and CG variable updating. Efficiency of the method follows from the hypothesis that the momenta conjugate to the CG variables can be represented as a stationary random process. The net result is a computational algorithm with some of the character of our earlier MD/OPX method \cite{Long1,Long2} but with greater control on accuracy, higher efficiency, and a more rigorous theoretical basis. Here we develop the algorithm, discuss its implementation as a computational platform, present selected results, and make concluding remarks. \section{Theory and Implementation} \subsection{Unfolded Dynamical Formulation} The Newtonian description of an $N-$atom system is provided by the $6N$ atomic positions and momenta, denoted $\Gamma$ collectively. The phenomena of interest involve overall transformations of an $N-$atom system. While $\Gamma$ contains all the information needed to solve the problem in principle, here it is found convenient to also introduce a set of CG variables $\Phi$ that are used to track the large-spatial-scale, long-time degrees of freedom. For example, $\Phi$ could describe the overall position, size, shape, and orientation of a nanoparticle. By construction, a change in $\Phi$ involves the coherent deformation of the $N-$atom system, which implies that the rate of change in $\Phi$ is expected to be slow \cite{OPs,Joshi2012}. This slowness implies the separation of timescales that provides a highly efficient and accurate algorithm for simulating $N-$atom systems.
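As a concrete illustration of a coarse-grained map $\Phi =\overset{\sim}{\Phi }(\Gamma )$, the overall position and size of a nanoparticle can be computed from the atomic positions. These are hypothetical choices made for the sketch below, not the specific CG set used in the paper.

```python
import numpy as np

def cg_map(positions, masses):
    """Illustrative coarse-grained variables Phi(Gamma): the center of mass
    (overall position) and radius of gyration (overall size) of an N-atom
    system.  A coherent deformation of many atoms is needed to change
    either quantity appreciably, which is why such variables evolve slowly
    compared with individual atomic fluctuations."""
    m = masses[:, None]
    com = (m * positions).sum(axis=0) / masses.sum()
    rg = np.sqrt((m * (positions - com) ** 2).sum() / masses.sum())
    return com, rg

# two unit-mass atoms at +/-1 angstrom along x
pos = np.array([[1.0, 0.0, 0.0], [-1.0, 0.0, 0.0]])
com, rg = cg_map(pos, np.ones(2))
```

Displacing a single atom by a thermal fluctuation changes `com` and `rg` only by a fraction of order $1/N$, which is the mechanism behind the timescale separation invoked in the text.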
With this unfolded description $(\Gamma, \Phi)$, the Newtonian dynamics takes the form \begin{align} \frac{d \Gamma }{d t}&=\mathcal{L} \Gamma, \label{dynamics_gamma} \\ \frac{d \Phi }{d t}&=\mathcal{L} \overset{\sim}{\Phi}(\Gamma), \label{dynamics_phi} \end{align} for the unfolded Liouvillian $\mathcal{L} = \mathcal{L}_{micro} + \mathcal{L}_{meso}$, such that \begin{align} \mathcal{L}_{micro} &= \sum_{l=1}^N \frac{\mathbf{p}_l}{m_l} \cdot \left( \frac{\partial}{\partial\mathbf{r}_l}\right)_{\Phi} + \mathbf{f}_l \cdot \left( \frac{\partial}{\partial \mathbf{p}_l} \right)_{\Phi}, \\ \mathcal{L}_{meso} & = \sum_{k=1}^{N_{CG}} \Pi_k \cdot \left( \frac{\partial}{\partial \Phi_k} \right)_{\Gamma}. \end{align} Here $\Pi_k$ is the CG velocity associated with the $k^{th}$ CG variable. Eqs. (\ref{dynamics_gamma}-\ref{dynamics_phi}) have the formal solution \begin{equation} (\Gamma(t), \Phi(t)) = S(t) (\Gamma_o, \Phi_o), \end{equation} for initial data indicated by the subscript $o$, and evolution operator $S(t)=e^{\mathcal{L} t}$. \subsection{Trotter Factorization} With the unfolded Liouvillian, the time-evolution operator takes the form \begin{equation} S(t)=e^{\left( \mathcal{L}_{micro} +\mathcal{L}_{meso} \right)t}. \end{equation} Since $\mathcal{L}_{micro}$ and $\mathcal{L}_{meso}$ do not commute, $S(t)$ cannot be factorized into a product of exponential functions. However, Trotter's theorem \cite{Trotter} (also known as the Lie product formula \cite{Algebra}) can be used to factorize the evolution operator as follows: \begin{equation} S(t)=\lim_{M\rightarrow \infty }\left[ e^{\mathcal{L}_{micro}t/2M}e^{\mathcal{L}_{meso}t/M} e^{\mathcal{L}_{micro}t/2M}\right] ^{M}.
\end{equation} Setting $t/M$ equal to the discrete time step $\Delta$, the step-wise operator becomes \begin{equation} S(\Delta )= e^{\mathcal{L}_{micro}\Delta/2} e^{\mathcal{L}_{meso}\Delta} e^{\mathcal{L}_{micro}\Delta/2} + O(\Delta^3). \end{equation} Let the step-wise operators $S_{micro}$ and $S_{meso}$ correspond to $\mathcal{L}_{micro}$ and $\mathcal{L}_{meso}$, respectively. Then $S(n\Delta)$ takes the form \begin{equation} S(n\Delta )= \prod_{i=1}^n S_{micro}\left(\frac{\Delta}{2}\right) S_{meso}(\Delta) S_{micro}\left(\frac{\Delta}{2}\right). \label{Full_S} \end{equation} Since $S_{micro}(\Delta/2) = S_{micro}(\Delta) S_{micro}(-\Delta/2)$, the adjacent half-step factors can be regrouped, and Eq. (\ref{Full_S}) becomes \begin{equation} S(n\Delta )= S_{micro}\left(\frac{\Delta}{2}\right) \left[ \prod_{i=1}^n S_{meso}(\Delta) S_{micro}\left(\Delta\right) \right] S_{micro}\left(-\frac{\Delta}{2}\right). \end{equation} Since we are interested in the long-time evolution of a mesoscopic system, we can neglect the far left and far right terms, $S_{micro}\left(\Delta/2\right)$ and $S_{micro}\left(-\Delta/2\right)$, respectively, to a good approximation. Therefore, we can define the step-wise time operator as \begin{equation} S\left( \Delta \right) =S_{meso}\left( \Delta \right) S_{micro}\left( \Delta \right). \end{equation} In the next section, we show how this factorization implies a computational algorithm for solving the dynamical equations for $\Gamma$ and $\Phi$. \subsection{Implementation} A key to the efficiency of the multiscale Trotter factorization (MTF) method is the postulate that the momenta conjugate to the CG variables can be represented by a stationary random process over a period of time much shorter than the time scale characteristic of CG evolution. 
Thus, in a time period significantly shorter than the increment $\Delta$ of the step-wise evolution, the system visits a representative ensemble of configurations consistent with the slowly evolving CG state. This enables one to use an MD simulation for the microscopic phase of the step-wise evolution that is much shorter than $\Delta$ to integrate the CG state to the next CG time step. For each of a set of time intervals much less than $\Delta$, the friction-dominated system experiences the same ensemble of conjugate momentum fluctuations. Thus, if $\delta$ is the time for which the conjugate momentum undergoes a representative sample of values (i.e., is described by the stationarity hypothesis), then the computational advantage over conventional MD is expected to be $\Delta/\delta$. The two-phase updating for each time step $\Delta$ was achieved as follows. For the $S_{micro}(\Delta)$ phase, conventional MD was used. This yields a time series for $\Gamma$ and hence ${\Pi}$. For all systems simulated here, $\Pi$ was found to be a stationary random process (see Figure \ref{fig:stationarity}). Therefore, MD need only be carried out for a fraction of $\Delta$, denoted $\delta$. This and the slowness of the CG variables are the sources of the computational efficiency of our algorithm. For the $S_{meso}$ phase updating in the friction-dominated regime, the ${\Pi}$ time series constructed in the micro phase is used to advance $\Phi$ in time as follows: \begin{equation} \Phi(t+\Delta) = \Phi(t) + \int_t^{t+\Delta} dt' \Pi(t'). \label{integrated_phi} \end{equation} Due to stationarity, the integral on the right hand side reduces to $(\Delta/\delta) \int_t^{t+\delta} dt' \Pi(t')$ (see Figure \ref{fig:stationarity}). The expression for $\Pi$ depends on the choice of CG variables. 
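The two-phase cycle can be sketched in a few lines of Python. This is an illustrative sketch only, not the NAMD/DMS implementation; `run_md` and `Pi_of` are hypothetical stand-ins for an MD driver and for the map from a microstate to the conjugate CG velocity $\Pi$.

```python
import numpy as np

def mtf_step(phi, gamma, run_md, Pi_of, Delta, delta, dt):
    """One two-phase MTF cycle: a short MD burst, then a CG extrapolation.

    run_md(gamma, n_steps) -> list of microstates (hypothetical MD driver)
    Pi_of(microstate)      -> conjugate CG velocity Pi for that microstate
    Delta : CG time step; delta : MD burst length (delta << Delta); dt : MD step.
    """
    # S_micro phase: conventional MD, but only over the short window delta.
    traj = run_md(gamma, int(round(delta / dt)))

    # Stationarity hypothesis: the time average of Pi over delta represents
    # the whole interval, so
    #   int_t^{t+Delta} Pi dt'  ~  (Delta/delta) * int_t^{t+delta} Pi dt'.
    pi_avg = np.mean([Pi_of(g) for g in traj], axis=0)

    # S_meso phase: advance the CG variables with the extrapolated integral.
    return phi + Delta * pi_avg, traj[-1]
```

Only the burst of length $\delta$ is simulated atomistically, which is where the expected $\Delta/\delta$ speedup over conventional MD comes from.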
In this work, we used the space-warping method \cite{Khuloud2002,Pankavich2008} that maps a set of atomic coordinates to a set of CG variables that capture the coherent deformation of a molecular system in space. In the space-warping method, the mathematical relation between the CG variables and the atomic coordinates is \begin{equation} \mathbf{r}_i = \sum_{\underline{k}} \mathbf{U}_{\underline{k}i} \bm{\Phi}_{\underline{k}} + \mathbf{\sigma}_i. \label{Map} \end{equation} Here $\underline{k}$ is a triplet of indices, $i$ is the atomic index, $\mathbf{r}_i$ is the cartesian position vector for atom $i$, and $\bm{\Phi}_{\underline{k}}$ is a cartesian vector for CG variable $\underline{k}$. The basis functions $\mathbf{U}_{\underline{k}}$ are constructed in two stages. In the first stage, they are computed from a product of three Legendre polynomials of order $k_1$, $k_2$, and $k_3$ for the $x$, $y$, and $z$ dependence. In the second stage, the basis functions are mass-weighted orthogonalized via QR decomposition \cite{OPs,Joshi2012}. For instance, the zeroth order polynomial is $\mathbf{U}_{000}$, the first order polynomial forms a set of three basis functions: $\mathbf{U}_{001}, \mathbf{U}_{010}, \mathbf{U}_{100}$, and so on. Furthermore, the basis functions depend on a reference configuration $\mathbf{r}^0$ which is updated periodically (once every $10$ CG time steps) to control accuracy. The vector $\mathbf{\sigma}_i$ represents the atomic-scale corrections to the coherent deformations generated by $\bm{\Phi}_{\underline{k}}$. Introducing CG variables this way facilitates the construction of microstates consistent with the CG state \cite{Pankavich2008}. This is achieved by minimizing $\sum_{i=1}^N m_i \mathbf{\sigma}_i^2$ with respect to $\bm{\Phi}_{\underline{k}}$. 
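The two-stage basis construction just described (products of Legendre polynomials, followed by mass-weighted orthogonalization via QR) can be sketched with numpy. This is our illustration under stated assumptions, not code from any cited package: the function names are ours, and the reference coordinates are assumed prescaled to $[-1,1]^3$.

```python
import numpy as np
from numpy.polynomial.legendre import legval
from itertools import product

def warping_basis(r0, masses, order):
    """Mass-weighted orthogonal space-warping basis U_{k i}.

    r0     : (N, 3) reference coordinates, assumed prescaled to [-1, 1]^3
    masses : (N,) atomic masses
    order  : maximum Legendre order per axis; k runs over triplets (k1, k2, k3)
    Returns U of shape (n_basis, N) with sum_i m_i U_{k i} U_{k' i} = delta_{k k'}.
    """
    one_hot = np.eye(order + 1)
    cols = []
    for k1, k2, k3 in product(range(order + 1), repeat=3):
        # Product of one-dimensional Legendre polynomials P_k1(x) P_k2(y) P_k3(z).
        cols.append(legval(r0[:, 0], one_hot[k1])
                    * legval(r0[:, 1], one_hot[k2])
                    * legval(r0[:, 2], one_hot[k3]))
    U = np.array(cols).T                # (N, n_basis) raw tensor-product basis
    # Mass-weighted QR: orthonormalize in the inner product <u, v> = sum_i m_i u_i v_i
    # by ordinary QR on sqrt(m) * U.
    w = np.sqrt(masses)[:, None]
    Q, _ = np.linalg.qr(w * U)
    return (Q / w).T

def cg_variables(U, masses, r):
    """Generalized centers of mass: Phi_k = sum_i m_i U_{k i} r_i / sum_i m_i U_{k i}^2."""
    num = (masses[None, :, None] * U[:, :, None] * r[None, :, :]).sum(axis=1)
    den = (masses[None, :] * U**2).sum(axis=1)
    return num / den[:, None]
```

For the mass-orthonormalized basis the denominator is identically $1$, and the lowest-order basis row is constant, so the corresponding CG variable is proportional to the center of mass.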
The result is that the CG variables are generalized centers of mass, specifically \begin{equation} \bm{\Phi}_{\underline{k}} = \frac{\sum_{i=1}^N m_i \mathbf{U}_{\underline{k}i} \mathbf{r}_i}{\sum_{i=1}^N m_i \mathbf{U}_{\underline{k}i}^2}, \label{CG_Eq} \end{equation} with $m_i$ being the mass of atom $i$. For the lowest order CG variable, $\mathbf{U}_{000}=1$, which implies $\bm{\Phi}_{000}$ is the center of mass. As the order of the polynomial increases, the CG variables capture more information from the atomic scale, but they vary more rapidly with time. Therefore, the space-warping CG variables are classified into low order and high order variables. The former characterize the larger scale disturbances, while the latter capture short-scale ones \cite{OPs,Joshi2012}. Eq. (\ref{CG_Eq}) implies that $\bm{\Pi}_{\underline{k}}= \sum_{i=1}^{N} \mathbf{U}_{\underline{k}i} \mathbf{p}_i / \sum_{i=1}^N m_i \mathbf{U}_{\underline{k}i}^2$, where $\mathbf{p}_i$ is the momentum vector of the $i^{th}$ atom. With $\bm{\Phi}(t+\Delta)$ computed via Eq. (\ref{integrated_phi}), the two-phase $\Delta$ update is completed, and this cycle is repeated for a finite number of discrete time steps. Details on the necessary energy minimization and equilibration needed for every CG step were covered in earlier work \cite{OPs,Long1,Long2}. This two-phase coevolution algorithm was implemented using NAMD \cite{NAMD} for the $S_{micro}$ phase within the framework of the DMS software package \cite{OPs,History,Thermal}. Numerical computations were performed with the aid of LOOS \cite{LOOS}, a lightweight object-oriented structure library. \section{Results and discussion} All simulations were done in vacuum under NVT conditions to assess the scalability and accuracy of the algorithm. The first system used for validation and benchmarking is lactoferrin. This iron-binding protein is composed of a distal and two proximal lobes (shown in Figure \ref{fig:1LFG-open}). 
Two free energy minimizing conformations have been demonstrated experimentally: diferric with closed proximal lobes (PDB code 1LFG), and apo with open ones \cite{1LFG1} (PDB code 1LFH). Here, we start with an open lactoferrin structure and simulate its closing in vacuum (see Figure \ref{fig:lfg}). The RMSD for lactoferrin is plotted as a function of time in Figure \ref{fig:rmsd-1LFG}; it shows that the protein reaches equilibrium in about $5$ ns. This transition leads to a decrease in the radius of gyration of the protein by approximately $0.2$ nm as shown in Figure \ref{fig:rog-1LFG}. The second system used is a triangular structure of the Nudaurelia Capensis Omega Virus (N$\omega$V) capsid protein \cite{Nwv} (PDB code 1OHF) containing three protomers (see Figure \ref{fig:ohf}). Starting from a deprotonated state (at low pH), the system was equilibrated using an implicit solvent. The third system used is the Cowpea Chlorotic Mottle virus (CCMV) full native capsid \cite{Long1} (PDB code 1CWP, Figure \ref{fig:cwp}). Both systems are characterized by strong protein-protein interactions. As a result, they shrink in vacuum after a short period of equilibration. The computed radius of gyration of both systems is shown in Figure \ref{fig:rog-ohf} and Figure \ref{fig:rog-cwp}. Based on the convergence of the time integral of $\Pi$ (see Figure \ref{fig:stationarity}), the $S_{micro}$ phase was chosen to consist of $10 \times 10^4$ MD steps for LFG, N$\omega$v, and CCMV, where each MD step is equal to $1$ fs. The CG timestep, $\Delta$, on the other hand, was taken to be $12.5$ ps for LFG, $25$ ps for N$\omega$v, and $50$ ps for CCMV. 
\begin{figure}[tbp] \centering \begin{subfigure}[b]{0.7\textwidth} \centering \includegraphics[width=\textwidth]{fig1} \caption{LFG in its open state at $t=0$ ns.} \label{fig:1LFG-open} \end{subfigure} ~ \begin{subfigure}[b]{0.7\textwidth} \centering \includegraphics[width=\textwidth]{fig2} \caption{LFG in its closed state at $t=19.6$ ns.} \label{fig:1LFG-closed} \end{subfigure} \caption{Snapshots of Lactoferrin protein in its open (a) and closed (b) states.} \label{fig:lfg} \end{figure} \begin{figure}[tbp] \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{fig3} \caption{N$\omega$v in its initial state at $t=0$ ns.} \label{fig:1nwv-open} \end{subfigure} ~ \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{fig4} \caption{N$\omega$v after shrinking at $t=3.0$ ns.} \label{fig:1nwv-closed} \end{subfigure} \caption{Snapshots of N$\omega$v triangular structure before (a) and after (b) contraction due to strong protein-protein interactions.} \label{fig:ohf} \end{figure} \begin{figure}[tbp] \centering \includegraphics[width=\textwidth]{fig5} \caption{The full all-atom CCMV native capsid.} \label{fig:cwp} \end{figure} \begin{figure}[tbp] \centering \includegraphics[width=\textwidth]{fig6} \caption{RMSD variation as a function of time for a series of three MD and one MTF runs.} \label{fig:rmsd-1LFG} \end{figure} \begin{figure}[tbp] \includegraphics[width=\textwidth]{fig7} \centering \caption{The radius of gyration decreases in time as Lactoferrin shrinks.} \label{fig:rog-1LFG} \end{figure} \begin{figure}[tbp] \centering \includegraphics[width=\textwidth]{fig8} \caption{Temporal evolution of the radius of gyration of N$\omega$v computed using MD and MTF.} \label{fig:rog-ohf} \end{figure} \begin{figure}[tbp] \centering \includegraphics[width=\textwidth]{fig9} \caption{Temporal evolution of the radius of gyration of CCMV computed using MD and MTF.} \label{fig:rog-cwp} \end{figure} \begin{figure}[tbp] 
\centering \includegraphics[width=\textwidth]{fig10} \caption{A plot of the speedup as a function of the system size shows the scalability of the MTF algorithm.} \label{fig:speedup} \end{figure} \begin{figure}[tbp] \centering \begin{subfigure}[b]{0.55\textwidth} \centering \includegraphics[width=\textwidth]{fig12} \caption{A plot of the time integral of $\Pi$ for a high order CG $\Phi_{200}$.} \end{subfigure} ~ \begin{subfigure}[b]{0.55\textwidth} \centering \includegraphics[width=\textwidth]{fig11} \caption{A plot of the time integral of $\Pi$ for a low order CG $\Phi_{001}$.} \end{subfigure} \caption{Evidence for the validity of the stationarity hypothesis shown via the convergence of $\frac{1}{\delta} \int_0^{\delta} \Pi(t) dt$ as a function of $\delta$ for CG variables selected from among those used in simulating the contraction of N$\omega$v. Initially the integral experiences large fluctuations because with small $\delta$ only relatively few configurations are included in the time average constituting the integral, but as $\delta$ increases, the statistics improve, and the integral becomes increasingly flat.} \label{fig:stationarity} \end{figure} \begin{table} \begin{center} \begin{tabular}{|l|l|l|l|} \hline System & Size & Time & Speed-up \\ \hline LFG & 10,560 & 12.5 ns & 1.32 \\ \hline N$\omega$V & 103,317 & 4.30 ns & 2.10 \\ \hline CWP & 417,966 & 4.67 ns & 4.28 \\ \hline \end{tabular} \caption{Speedup as a function of system size (number of atoms) for simulations run on 1x12, 4x12, and 8x12 cores for LFG, N$\omega$v, and CCMV, respectively.} \label{tab:speedup} \end{center} \end{table} A comparison between MD and MTF results is shown in Table \ref{tab:speedup}. The dependence of speedup on the number of atoms in the system shown in Table \ref{tab:speedup} suggests that the benefit of MTF increases with system size (see Figure \ref{fig:speedup}). 
\section{Conclusions} Mesoscopic systems express behaviors stemming from atom-atom interactions across many scales in space and time. Earlier approaches based on Langevin equations for coarse-grained variables did achieve efficiencies over MD without compromising accuracy and captured key atomic-scale details \cite{HierarchicalOP,OPs}. However, such an approach requires the construction of diffusion factors, a task that consumes significant computational resources. This is because of the need to use large ensembles and construct correlation functions. The multiscale factorization method used here introduces the benefits of the multiscale theory of the Langevin equation (LE). Here we revisit the Trotter factorization method within our earlier multiscale context. A key advantage is that the approach presented here avoids the need for the resource-consuming diffusion factors, thermal-average forces, and random forces. The CG variables for the mesoscopic systems of interest do have a degree of stochastic behavior. In the present formulation, this stochasticity is accounted for via a series of MD steps used in the phase of the multiscale factorization algorithm wherein the $N-$atom probability density is evolved via $\mathcal{L}_{micro}$, i.e. at constant value of the CG variables. The MTF algorithm can be further optimized to produce greater speedup factors. In particular, the results obtained here can be significantly improved with the following: 1) after updating the CGs in the two-phase coevolution Trotter cycle, one must fine-grain, i.e., develop the atomistic configuration to be used as an input to MD. 
Recently, we have shown that the CPU time to achieve this fine graining can be dramatically reduced via a constraint method that eliminates bond length and angle strains, 2) information from earlier steps in the discrete time evolution can be used to increase the time step and achieve greater numerical stability; while this was demonstrated for one multiscale algorithm \cite{History}, it can also be adapted to the multiscale factorization method, and 3) the time-stepping algorithm used in this work is the analogue of the Euler method for differential equations, and greater numerical stability and efficiency could be achieved for a system of stiff differential equations using implicit and semi-implicit schemes \cite{Numerical}. \acknowledgement This project was supported in part by the National Science Foundation (Collaborative Research in Chemistry Program), National Institutes of Health (NIBIB), METAcyt through the Center of Cell and Virus Theory, Indiana University College of Arts and Sciences, and the Indiana University information technology services (UITS) for high performance computing resources.
\section{Introduction and motivation} Understanding generic smooth maps includes the following ingredients: \begin{enumerate}[a)] \item Local forms: they describe the map in neighbourhoods of points (see Whitney \cite{Whitney}, Mather \cite{Mather}, Arnold \cite{Arnold1}, \cite{Arnold2} and others). \item Automorphism groups of local forms: they describe the maps in neighbourhoods of singularity strata (a stratum is a set of points with equal local forms) (see J\"anich \cite{Janich}, Wall \cite{Wall}, Rim\'anyi \cite{RRPhD}, Sz\H ucs \cite{SzucsLNM}, Rim\'anyi-Sz\H ucs \cite{RSz}). \item Clutching maps of the strata: they describe how simpler strata are incident to more complicated ones. \end{enumerate} The present paper is devoted to a systematic investigation of ingredient $c)$ (for some special class of singular maps), initiated in the first part of this paper, \cite{partI}. While the ingredients $a)$ and $b)$ were well studied, there were hardly any results concerning $c)$. The clutchings of strata will be described by some classes of the stable homotopy groups of spheres, $G = \underset{n \geq 0}{\oplus} \pi^{\bf S}(n)$. We shall associate an element of $G$ to a singular map (more precisely, to its highest singularity stratum) that will be trivial precisely when the second most complicated singularity stratum can be smoothed in some sense around the most complicated one. If this is the case then considering the next stratum (the third most complicated) we define again a (higher dimensional) element of $G$, and this will vanish precisely when this (third by complexity) stratum can be smoothed around the highest one. And so on. The aim of the paper is computing these classes explicitly. The actual computation is taken from Mosher's paper \cite{Mosher}, which is purely homotopy theoretic, and is not dealing with any singularity or any smooth maps. 
Mosher computes the spectral sequence arising from the filtration of $\C P^{\infty}$ by the spaces $\C P^n$ in the (extraordinary) homology theory formed by the stable homotopy groups. We show that in a special case of singular maps the classifying spaces of such singular maps can be obtained from the complex projective spaces by some homotopy theoretical manipulation. This allows us to identify the clutching classes describing the incidence structure of the singularity strata with the differentials of Mosher's spectral sequence. \section{Spectral sequences}\label{section:SS} \subsection{The Mosher spectral sequence}\label{section:CPSS} Let us consider the following spectral sequence: $\C P^\infty$ is filtered by the subspaces $\C P^n$. Consider the extraordinary homology theory formed by the stable homotopy groups $\pi^{\bf S}_*$. The filtration $\C P^0 \subset \C P^1 \subset \dots$ generates a spectral sequence in the theory $\pi^{\bf S}_*$. The starting page of this spectral sequence is $$ E^{1}_{p,q}=\pi^{\bf S}_{p+q}(\S^{2p}) = \pi^{\bf S}(q-p), $$ containing the stable homotopy groups of spheres. This spectral sequence was investigated by Mosher \cite{Mosher}. Our first result is that (rather surprisingly) this spectral sequence coincides with another one that arises from singularity theory; we describe it now. \subsection{The singularity spectral sequence}\label{section:singSS} (\cite{nagycikk}) Let $X^0 \subset X^1 \subset X^2 \subset \dots \subset X$ be a filtration such that for any $i$ there is a fibration $X^i \to B_i$ with fibre $X^{i-1}$. Then there is a spectral sequence with $E^1$ page given by $E^1_{p,q}=\pi_{p+q}(B_p)$ that abuts to $\pi_*(X)$. (Indeed, in the usual construction of a spectral sequence in the homotopy groups one has $E^1_{p,q}=\pi_{p+q}(X^p,X^{p-1})$ and in the present case this group can be replaced by $\pi_{p+q}(B_p)$.) \par Such an iterated fibration arises in singularity theory in the following way. 
Let $k$ be a fixed positive integer and consider all the germs of codimension $k$ maps, i.e. all germs $f:(\R^c,0) \to (\R^{c+k},0)$ where $c$ is a non-negative integer. Two such germs will be considered to be equivalent if \begin{itemize} \item they are $\mathcal A$-equivalent (that is, one of the germs can be obtained from the other by composing and precomposing it with germs of diffeomorphisms), or \item one of the germs is equivalent to the trivial unfolding (or suspension) $f \times id_\R : (\R^c \times \R^1,0) \to (\R^{c+k} \times \R^1,0)$ of the other germ $f$. \end{itemize} An equivalence class will be called a \emph{singularity} or a \emph{singularity type} (note that germs of maximal rank, with $\rk df =c$, form an equivalence class and this class is also a ``singularity''). For any codimension $k$ smooth map of manifolds $f: M^n \to P^{n+k}$, any point $x\in M$ and any singularity $\eta$ we say that $x$ is an \emph{$\eta$-point} if the germ of $f$ at $x$ belongs to $\eta$. There is a natural partial order on the singularities: given two singularities $\eta$ and $\zeta$ we say that $\eta < \zeta$ if in any neighbourhood of a $\zeta$-point there is an $\eta$-point. \par Now let $\tau$ be a sequence of singularities $\eta_0$, $\eta_1$, $\eta_2$, $\dots$ such that for all $i$ in any sufficiently small neighbourhood of an $\eta_i$ point there can only be points of types $\eta_0$, $\dots$, $\eta_{i-1}$. We say that a smooth map $f: M^n \to P^{n+k}$ is a \emph{$\tau$-map} if at any point $x\in M$ the germ of $f$ at $x$ belongs to $\tau$. One can define the cobordism group of $\tau$-maps of $n$-manifolds into $\R^{n+k}$ (with the cobordism being a $\tau$-map of an $(n+1)$-manifold with boundary into $\R^{n+k} \times [0,1]$); this group is denoted by $Cob_\tau (n)$. In \cite{nagycikk} it was shown that there is a classifying space $X_\tau$ such that $$ Cob_\tau(n) \cong \pi_{n+k}(X_\tau). 
$$ Let $\tau_i$ denote the set $\tau_i = \left\{ \eta_0, \dots, \eta_i \right\}$ and denote by $X^i$ the classifying space $X_{\tau_i}$. It was shown in \cite{nagycikk} that $X^0 \subset X^1 \subset \dots$ is an iterated fibration, that is, there is a fibration $X^i \to B_i$ with fibre $X^{i-1}$. The base spaces $B_i$ have the following description (given in \cite{nagycikk}): to the singularity $\eta_i$ one can associate two vector bundles, the universal normal bundle $\xi_i$ of the stratum formed by $\eta_i$-points in the source manifold and the universal normal bundle $\tilde \xi_i$ of the image of this stratum in the target. Let $T\tilde \xi_i$ be the Thom space of the bundle $\tilde \xi_i$ and let $\Gamma T\tilde \xi_i$ be the space $\Omega^\infty S^\infty T\tilde\xi_i = \underset{q \to \infty}{\lim} \Omega^q S^q T\tilde\xi_i$. Then $B_i = \Gamma T\tilde \xi_i$. The obtained fibration $X^{i-1} \hookrightarrow X^i \to B_i$ is called the \emph{key fibration} of $\tau_i$-cobordisms (see \cite[Definition 109]{nagycikk}). \par Let us briefly recall the construction of the bundles $\xi_i$ and $\tilde\xi_i$. Let $\eta_i^{root} : (\R^{c_i},0) \to (\R^{c_i+k},0)$ be the root of the singularity $\eta_i$, i.e. a germ with an isolated $\eta_i$ point at the origin. Let $Aut_{\eta_i^{root}} < Diff(\R^{c_i},0) \times Diff(\R^{c_i+k},0)$ be the automorphism group of this germ, that is, the set of pairs $(\phi,\psi)$ of diffeomorphism germs of $(\R^{c_i},0)$ and $(\R^{c_i+k},0)$, respectively, such that $\eta_i^{root} = \psi \circ \eta_i^{root} \circ \phi^{-1}$. J\"anich \cite{Janich} and Wall \cite{Wall} showed that a maximal compact subgroup of this automorphism group can be defined (and is unique up to conjugacy). Let $G_i$ denote this subgroup. It acts naturally on $(\R^{c_i},0)$ and on $(\R^{c_i+k},0)$ and these actions can be chosen to be linear (even orthogonal). 
We denote by $\lambda_i$ and $\tilde\lambda_i$ the corresponding representations of $G_i$ in $GL(c_i)$ and $GL(c_i+k)$, respectively. Now $\xi_i$ and $\tilde \xi_i$ are the vector bundles associated to the universal $G_i$-bundle via the representations $\lambda_i$ and $\tilde \lambda_i$, i.e. $\xi_i = EG_i \underset{\lambda_i}{\times} \R^{c_i}$ and $\tilde\xi_i = EG_i \underset{\tilde\lambda_i}{\times} \R^{c_i+k}$. \par Recall also that the space $\Gamma T\tilde \xi_i$ is the classifying space of immersions equipped with a pullback of their normal bundle from $\tilde\xi_i$ (such immersions will be called \emph{$\tilde\xi_i$-immersions}). That is, if we denote by $Imm^{\tilde\xi_i}(m)$ the cobordism group of immersions of $m$-manifolds into $\R^{m+c_i+k}$ with the normal bundle induced from $\tilde\xi_i$, the following well-known proposition holds: \begin{prop}\label{prop:immGamma} \textnormal{(\cite{Wells}, \cite{EcclesGrant})} $$Imm^{\tilde\xi_i}(m) \cong \pi_{m+c_i+k}(\Gamma T\tilde\xi_i) \cong \pi^{\bf S}_{m+c_i+k}(T\tilde\xi_i).$$ \end{prop} Hence there is a spectral sequence in which the starting page is given by the cobordism groups of $\tilde\xi_i$-immersions: $$ E^1_{p,q} = \pi^{\bf S}_{p+q}(T\tilde\xi_p) = Imm^{\tilde\xi_p}(p+q-c_p-k) $$ and that abuts to the cobordism groups of $\tau$-maps. The differentials of this spectral sequence encode the clutching maps of the singularities that belong to $\tau$. For example, one may take an isolated $\eta_s$ singularity $g:(D^{c_s},\S^{c_s-1}) \to (D^{c_s+k},\S^{c_s+k-1})$; it would correspond to a generator $\iota_s$ of $\pi^{\bf S}_{c_s+k}(T\tilde\xi_s)$ (which is either $\Z$ or $\Z_2$). On the boundary $\partial g:\S^{c_s-1} \to \S^{c_s+k-1}$ of the map $g$, the $\eta_{s-1}$-points are the most complicated and therefore they form a $\tilde\xi_{s-1}$-immersion. The cobordism class of this immersion is the image $d^1(\iota_s)$ of $\iota_s$ under the differential $d^1$. 
Now assume that $d^1(\iota_s)=0$, that is, the $\eta_{s-1}$-points of $\partial g$ form a null-cobordant $\tilde\xi_{s-1}$-immersion. By \cite[Theorem 8]{nagycikk} and \cite{keyfibration}, any such cobordism can be extended to a $\tau_{s-1}$-cobordism that connects $\partial g$ with a $\tau_{s-2}$-map $\partial_2g : M^{c_s-1} \to \S^{c_s+k-1}$. The $\eta_{s-2}$-points of $\partial_2g$ form a $\tilde\xi_{s-2}$-immersion, and its cobordism class is the image $d^2(\iota_s)$ of $\iota_s$ under the differential $d^2$. If this image is $0$, then again we can change $\partial_2g$ by a cobordism to eliminate $\eta_{s-2}$-points, and so on. \par So far we have described two spectral sequences. We will show that in a special case the second one coincides with the first one, and this will allow us to study the clutchings of the singular strata. (Our proof of this equality of the two spectral sequences is independent of the paper \cite{nagycikk} to which we referred in this section.) \section{Codimension $2$ immersions and their projections}\label{section:prim} Let $\gamma \to \C P^\infty$ be the canonical complex line bundle and let $\gamma_r$ be its restriction to $\C P^r$. Let us consider an immersion $f : M^n \looparrowright \R^{n+2}$ of an oriented closed manifold. We call $f$ a \emph{$\gamma_r$-immersion} if its normal bundle is pulled back from the bundle $\gamma_r$. Let $Imm^{\gamma_r}(n)$ denote the cobordism group of $\gamma_r$-immersions of $n$-dimensional manifolds into $\R^{n+2}$. Analogously to Proposition \ref{prop:immGamma} we have $Imm^{\gamma_r}(n) \cong \pi^{\bf S}_{n+2}(\C P^{r+1})$. \par {\bf Definition:} A smooth map $g: M^n \to \R^{n+k}$ is called a \emph{prim} map if \begin{itemize} \item $\dim \ker dg \leq 1$, and \item the line bundle formed by the kernels of $dg$ over the set $\Sigma$ of singular points of $g$ is trivialized. 
\end{itemize} {\bf Remark:} Note that choosing any smooth function $h$ on the source manifold $M$ of $g$ such that the derivative of $h$ in the positive direction of the kernels $\ker dg$ is positive gives an immersion $f=(g,h):M \looparrowright \R^{n+k} \times \R^1$. So $g$ is the {\emph{pr}}ojection of the {\emph{im}}mersion $f$, motivating the name ``prim map''. \par The singularity types of germs for which the kernel of the differential is $1$-dimensional form an infinite sequence $\Sigma^{1,0}$ (fold), $\Sigma^{1,1,0}$ (cusp), $\Sigma^{1,1,1,0}$ etc. Let us denote by $\Sigma^{1_r}$ the symbol $\Sigma^{1,\dots,1,0}$ that contains $r$ digits $1$. We call a prim map \emph{$\Sigma^{1_r}$-prim} if it has no singularities of type $\Sigma^{1_s}$ for $s>r$. The cobordism group of prim $\Sigma^{1_r}$-maps of cooriented $n$-dimensional manifolds into an $(n+1)$-manifold $N$ will be denoted by $Prim\Sigma^{1_r}(N)$, and we will use $Prim\Sigma^{1_r}(n)$ to denote $Prim\Sigma^{1_r}(\S^{n+1})$. \begin{thm}\label{thm:main} $$Prim\Sigma^{1_r}(n) \cong \pi^{\bf S}_{n+2}(\C P^{r+1}).$$ \end{thm} The theorem is related to the spectral sequence described in Section \ref{section:SS} in the following way. Set $k=1$ and $\eta_i = \Sigma^{1_i}$, so that $\tau_r = \left\{ \Sigma^{1_i} : i\leq r \right\}$. Denote by $X^r$ the classifying space $X_{Prim\_\tau_r}$ of prim $\tau_r$-maps. The main result, implying that the spectral sequences of Section \ref{section:SS} coincide, is the following: \begin{lemma}\label{lemma:main} The classifying space $X^r$ (whose homotopy groups give the cobordism groups of prim $\tau_r$-maps) is $$ X^r \cong \Omega \Gamma \C P^{r+1}. $$ \end{lemma} {\bf Remarks:} \begin{enumerate} \item Recall that the functor $\Gamma$ turns cofibrations into fibrations. That is, if $(Y,B)$ is a pair of spaces such that $B \subset Y \to Y/B$ is a cofibration, then $\Gamma Y \to \Gamma (Y/B)$ is a fibration with fibre $\Gamma B$. 
The same is true if we apply to a cofibration the functor $\Omega\Gamma$, that is, $\Omega \Gamma Y \to \Omega \Gamma (Y/B)$ is a fibration with fibre $\Omega \Gamma B$. \item Further let us recall the \emph{resolvent} of a fibration map. If $E \overset{F}{\to} B$ is a fibration then the sequence of maps $F\hookrightarrow E \to B$ can be extended to the left as follows. Turn the inclusion $F \hookrightarrow E$ into a fibration. Then it turns out that its fiber is $\Omega B$, the loop space of $B$. The fiber of the inclusion $\Omega B \to F$ is $\Omega E$, the fiber of $\Omega E \to \Omega B$ is $\Omega F$ etc. Now start with the filtration $\C P^0 \subset \dots \subset \C P^r \subset \C P^{r+1} \subset \dots \subset \C P^\infty$ considered by Mosher and apply $\Omega\Gamma$ to it; we get an iterated fibration filtration. So according to Subsection \ref{section:singSS} we obtain a spectral sequence that clearly coincides with Mosher's spectral sequence (obtained from the filtration $\C P^0 \subset \C P^1 \subset \dots \subset \C P^\infty$ by applying the extraordinary homology theory $\pi^{\bf S}_*$, of the stable homotopy groups). By Lemma \ref{lemma:main} this also coincides with the singularity spectral sequence described in Subsection \ref{section:singSS} for the special case of prim maps of oriented $n$-manifolds into $\R^{n+1}$. \end{enumerate} \par {\bf Summary:} The singularity spectral sequence for codimension $1$ cooriented prim maps coincides with the spectral sequence of Mosher. \par {\bf Remark:} Theorem \ref{thm:main} follows obviously from Lemma \ref{lemma:main}. \par {\bf Remark:} Note that the space of immersions of an oriented $n$-manifold $M$ into $\R^{n+2}$ (denote it by $Imm(M,\R^{n+2})$) is homotopy equivalent to the space of prim maps of $M$ into $\R^{n+1}$ (denoted by $Prim(M,\R^{n+1})$). 
Indeed, projecting an immersion $M^n \looparrowright \R^{n+2}$ into a hyperplane one gets a prim map $M^n \to \R^{n+1}$ together with an orientation on the kernel of the differentials. Conversely, having a prim map $g: M^n \to \R^{n+1}$ one can obtain an immersion $f: M^n \looparrowright \R^{n+2}$ by choosing a function $h: M \to \R^1$ that has positive derivative in the positive direction of the kernels of $dg$ and putting $f=(g,h)$. The set of admissible functions $h$ (for a fixed $g$) is clearly a convex set. \par Hence for any $M$ the projection induces a map $Imm(M, \R^{n+2}) \to Prim(M,\R^{n+1})$ that is a weak homotopy equivalence since the preimage of any point of $Prim(M,\R^{n+1})$ is a convex set. The space $\Gamma \C P^\infty$ is the classifying space of codimension $2$ immersions. The previous remark implies that $\Omega \Gamma \C P^\infty$ is the classifying space of codimension $1$ prim maps. Moreover, the correspondence of the spaces of immersions and prim maps established above respects the natural filtrations of these spaces as the following lemma shows. \begin{lemma}\label{lemma:filtration} \begin{enumerate}[a)] \item If an immersion $f: M^n \looparrowright \R^{n+2}$ has the property that its composition with the projection $\pi: \R^{n+2} \to \R^{n+1}$ has no $\Sigma^{1_i}$-points for $i>r$, then the normal bundle of $f$ can be induced from the canonical line bundle $\gamma_r$ over $\C P^r$. \item The classifying space for the immersions into $\R^{n+2}$ that have $\Sigma^{1_r}$-map projections in $\R^{n+1}$ is weakly homotopically equivalent to the classifying space of immersions equipped with a pullback of their normal bundle from $\gamma_r$ (this latter being $\Gamma \C P^{r+1}$). \end{enumerate} \end{lemma} {\bf Remark:} Lemma \ref{lemma:filtration} implies that $Prim\Sigma^{1_r}(n) \cong Imm^{\gamma_r}(n) \cong \pi^{\bf S}_{n+2}(T\gamma_r) = \pi^{\bf S}_{n+2}(\C P^{r+1})$, therefore Theorem \ref{thm:main} holds. 
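\par {\bf Example:} In the lowest case $r=0$ a $\Sigma^{1_0}$-prim map has no singular points at all, i.e. it is an immersion. Theorem \ref{thm:main} then specializes to $$ Prim\Sigma^{1_0}(n) \cong \pi^{\bf S}_{n+2}(\C P^{1}) = \pi^{\bf S}_{n+2}(\S^{2}) \cong \pi^{\bf S}(n), $$ in accordance with the identification of the cobordism group of cooriented codimension $1$ immersions with the stable homotopy groups of spheres, $\pi^{\bf S}_{n+1}(\S^{1}) \cong \pi^{\bf S}(n)$ (cf. \cite{Wells}).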
\begin{proof} Let $f:M^n \looparrowright \R^{n+2}$ be an immersion such that the composition $g=\pi \circ f : M^n \to \R^{n+1}$ has no $\Sigma^{1_{r+1}}$-points. We show that the normal bundle $\nu_f$ of $f$ can be induced from the bundle $\gamma_r$. Let $\bf v$ denote the unit vector field $\partial/\partial x_{n+2}$ in $\R^{n+2}$; we consider it to be the upward directed vertical unit vector field. For any $x \in M$ consider the vector $\bf v$ at the point $f(x)$ and project it orthogonally to the normal space of the image of $f$ (i.e. to the orthogonal complement of the tangent space $df(T_xM)$). We obtain a section $s_0$ of the normal bundle $\nu_f \to M$ which vanishes precisely at the singular points of $g$. Denote by $\Sigma_1$ the zero-set of $s_0$, $\Sigma_1 = s_0^{-1}(0)$; then $\Sigma_1$ is a codimension $2$ smooth submanifold of $M$. Let $\nu_1$ be the normal bundle of $\Sigma_1$ in $M$. Then $\nu_1$ is isomorphic to the restriction of $\nu_f$ to $\Sigma_1$ (the derivative of $s_0$ establishes an isomorphism). Consider now the vertical vector field $\bf v$ at the points of $\Sigma_1$ and project it orthogonally to the fibers of $\nu_1$. This defines a section $s_1$ of the bundle $\nu_1$, which vanishes precisely at the $\Sigma^{1,1}$-points of the map $g$ (at the singular set of the restriction of the projection $\pi$ to $\Sigma_1$). Denote this zero-set by $\Sigma_2$ and its normal bundle in $\Sigma_1$ by $\nu_2$. Next we consider $\bf v$ at the points of $\Sigma_2$ and project it to the fibers of $\nu_2$, obtaining a section $s_2$ of $\nu_2$ that vanishes precisely at the $\Sigma^{1_3}$-points of $g$, and so on. In general the section $s_i$ vanishes precisely at the $\Sigma^{1_{i+1}}$-points of $g$; hence if $g$ has no $\Sigma^{1_{r+1}}$-points, then this process stops at step $r$ because $s_r^{-1}(0)$ will be empty.
\par {\bf Claim:} In the situation above (when $s_r^{-1}(0)$ is empty) the bundle $\nu_f$ admits $(r+1)$ sections $\sigma_0, \dots, \sigma_r$ such that they have no common zero: $\cap_{i=0}^{r} \sigma_i^{-1}(0) = \emptyset$. \par {\it Proof of the claim:} Put $\sigma_0=s_0$. To define $\sigma_1$, we consider the section $s_1$ of the normal bundle $\nu_1$ of $\Sigma_1$ in $M$. This normal bundle $\nu_1$ is isomorphic to the restriction of $\nu_f$ to $\Sigma_1$, hence we can consider $s_1$ as a section of $\nu_f$ over $\Sigma_1$. Extend this section arbitrarily to a section of $\nu_f$ (over $M$); this will be $\sigma_1$. Similarly, $s_2$ is a section of the normal bundle $\nu_2$ of $\Sigma_2=s_1^{-1}(0)$ in $\Sigma_1$. The bundle $\nu_2$ is isomorphic to the normal bundle of $\Sigma_1$ in $M$ restricted to $\Sigma_2$, and this bundle in turn is isomorphic to $\nu_f$ restricted to $\Sigma_2$. Hence $s_2$ can be considered as a section of $\nu_f$ over $\Sigma_2$. Extending it arbitrarily to the rest of $M$ gives us $\sigma_2$. We continue the process and obtain $\sigma_3$, $\dots$, $\sigma_r$. Clearly these sections satisfy $\cap_{i=0}^{r} \sigma_i^{-1}(0) = \cap_{i=0}^{r} s_i^{-1}(0) = \emptyset$ as claimed. \par Next we define a map $\varphi: M \to \C P^r$ by the formula $\varphi(x) = [\sigma_0(x) : \dots : \sigma_r(x)]$, where we consider all $\sigma_j(x)$ as complex numbers in the fiber $(\nu_f)_x \cong \C$. This map is well-defined, since changing the trivialization of the fiber $(\nu_f)_x$ (while keeping its orientation and inner product) multiplies all the values $\sigma_j(x)$ by the same complex scalar, hence $\varphi(x) \in \C P^r$ remains the same. \par {\bf Claim:} $Hom_{\C}(\nu_f,\C) \cong \varphi^* \gamma_r$. \par Indeed, we can define a fibrewise isomorphism from $Hom_{\C}(\nu_f,\C)$ to $\gamma_r$ over the map $\varphi$ by sending a fibrewise $\C$-linear map $\alpha:\nu_f \to \C$ to $(\alpha(\sigma_0),\dots,\alpha(\sigma_r)) \in \C^{r+1}$.
The image of this map over the point $x \in M$ lies on the line $\varphi(x)$ and since not all $\sigma_j$ vanish at $x$, the map is an isomorphism onto the fiber $(\gamma_r)_{\varphi(x)}$. \par This proves part $a)$ of Lemma \ref{lemma:filtration}. \par The same argument can be repeated for any cooriented prim $\Sigma^{1_r}$-map of an $n$-dimensional manifold into an $(n+1)$-dimensional manifold. Thus for any target manifold $N$ we obtain a map $[N,X^r] = Prim\Sigma^{1_r}(N) \to [SN, \Gamma \C P^{r+1}] = [N,\Omega \Gamma \C P^{r+1}]$. \par Recall from Subsection \ref{section:singSS} that we can also assign to a $\Sigma^{1_r}$-map the set of its $\Sigma^{1_r}$-points, which form an immersion such that its normal bundle (in the target space) can be pulled back from the universal bundle $\tilde\xi_r$. This universal bundle $\tilde \xi_r$ in the case of prim Morin maps is trivial (see \cite{partI}), hence $T\tilde\xi_r \cong \S^{2r+1}$. We will also use the key fibration mentioned in Subsection \ref{section:singSS}: this assignment fits into a fibration $X^{r-1} \subset X^r \to \Gamma \S^{2r+1}$. \par We now prove part $b)$ of Lemma \ref{lemma:filtration} by induction on $r$. For $r=0$ and any given target manifold $N$ the two classes of maps are $1$-framed codimension $2$ immersions into $N \times \R$ and codimension $1$ immersions into $N$ respectively. The spaces of these maps are weakly homotopically equivalent by the Compression Theorem \cite{compression}. The classifying space for both types of maps is $\Omega\Gamma\S^2$.
For a general $r$, the operation of extracting the $\Sigma^{1_r}$-points from a prim map gives us the following commutative diagram: $$ \xymatrix{ Prim\Sigma^{1_r}(N) = [N,X^r] \ar[r] \ar[d] & [N, \Gamma \S^{2r+1}] \ar[d]_{=} \\ [SN,\Gamma \C P^{r+1}] = [N,\Omega\Gamma \C P^{r+1}] \ar[r] & [N, \Gamma \S^{2r+1}] } $$ The arrows of this diagram are natural transformations of functors, and thus correspond to maps of the involved classifying spaces (see \cite[Chapter 9, Theorem 9.13]{Switzer}): $$ \xymatrix{ X^r \ar[r] \ar[d] & \Gamma \S^{2r+1} \ar[d]_{=} \\ \Omega\Gamma \C P^{r+1} \ar[r] & \Gamma \S^{2r+1} } $$ By construction this diagram is commutative and the right-hand side map can be chosen to be the identity map. The horizontal arrows are Serre fibrations with fibers $X^{r-1}$ and $\Omega \Gamma\C P^{r}$ respectively. By the commutativity of the diagram the fiber is mapped into the fiber: $$ \xymatrix{ X^{r-1} \ar@{}[r]|{\subset} \ar[d] & X^r \ar[r] \ar[d] & \Gamma \S^{2r+1} \ar[d]_{=} \\ \Omega\Gamma \C P^r \ar@{}[r]|{\subset} & \Omega\Gamma \C P^{r+1} \ar[r] & \Gamma \S^{2r+1} } $$ Consider the long homotopy exact sequence of the top and the bottom rows. The vertical maps of the diagram induce a map of these long exact sequences. By the induction hypothesis the map $X^{r-1} \to \Omega\Gamma \C P^r$ is a weak homotopy equivalence. The five-lemma then implies that the middle arrow is also a weak homotopy equivalence, so $X^r$ is weakly homotopically equivalent to $\Omega\Gamma \C P^{r+1}$. This finishes the proof of Lemmas \ref{lemma:filtration} and \ref{lemma:main}. \end{proof} \section{Stable homotopy theory tools}\label{section:stable} Mosher proved several statements concerning the differentials of his spectral sequence. Since it coincides with our singularity spectral sequence, Mosher's results translate into results on singularities. For a proper understanding of Mosher's proofs (which are highly compressed and not always complete) we elaborate them here.
We start by recalling a few definitions and facts of stable homotopy theory. \par {\bf Definition:} Given finite $CW$-complexes $X$ and $Y$ we say that they are \emph{$S$-dual} if there exist iterated suspensions $\Sigma^p X$ and $\Sigma^q Y$ such that they can be embedded as disjoint subsets of $\S^N$ (for a suitable $N$) in such a way that both of them are deformation retracts of the complement of the other one. \par {\bf Definition:} Let $X$ be a finite cell complex with a single top $n$-dimensional cell. $X$ is \emph{reducible} if there is a map $\S^n \to X$ such that the composition $\S^n \to X \to X/sk_{n-1}X = \S^n$ has degree $1$. \par {\bf Definition:} Let $X$ be a finite cell complex with a single lowest (positive) dimensional cell, in dimension $n$. Then $X$ is \emph{coreducible} if there is a map $X \to \S^n$ such that its composition with the inclusion $\S^n \to X$ is a degree $1$ self-map of $\S^n$. \par Example: if $\varepsilon_B^n=B \times \R^n$ is the rank $n$ trivial bundle, then the Thom space $T\varepsilon^n_B$ is coreducible since the trivializing fibrewise map $\varepsilon^n_B \to \R^n$ extends to the Thom space and is identical on $\S^n$ (the one-point compactification of a fiber). \par {\bf Definition:} The space $X$ is $S$-reducible ($S$-coreducible) if it has an iterated suspension which is reducible (respectively, coreducible). \par \begin{prop}{\cite[Theorem 8.4]{Husemoller}} If $X$ and $Y$ are $S$-dual cell complexes with top and bottom cells generating their respective homology groups, then $X$ is $S$-reducible if and only if $Y$ is $S$-coreducible. \end{prop} \par \begin{lemma}\label{lemma:coreduciblebouquet} If $X$ is a finite $S$-coreducible cell complex with $sk_n X = \S^n$, then $X$ is stably homotopically equivalent to the bouquet $\S^n \vee (X/sk_n X)$. \end{lemma} \par \begin{proof} The product of the retraction $r: X \to sk_n X$ and the projection $p: X \to X/sk_n X$ gives a map $f=(r,p):X \to \S^n \times (X/sk_n X)$. 
Replacing $X$ by its sufficiently high suspension we can assume that $\dim X <2n-1$. The inclusion $\S^n \vee (X/sk_n X) \to \S^n \times (X/sk_n X)$ is $2n-1$-connected, so in this case we can assume that $f$ maps $X$ into $\S^n \vee (X/sk_n X)$. Then $f$ induces isomorphisms of the homology groups, hence it is a homotopy equivalence. \end{proof} In what follows, we will encounter cell complexes such that after collapsing their unique lowest dimensional cell they become ($S$-)reducible or after omitting their unique top dimensional cell they become ($S$-)coreducible. In both of these cases, there is an associated (stable) $2$-cell complex: {\bf Definition:} Let $X$ be a cell complex with a unique bottom cell $e^n$ such that $X/e^n$ is reducible, with the top cell $e^{n+d}$ splitting off. Then the attaching map of the cell $e^{n+d}$ to $X\setminus e^{n+d}$ can be deformed into $e^n$ -- one can lift the map $\S^{n+d} \to X/e^n$ in the definition of reducibility to a map $(D^{n+d},\partial D^{n+d}) \to (X,e^n)$. We define the \emph{$2$-cellification} of $X$ to be the $2$-cell complex $Z = D^{n+d} \underset{\alpha}{\cup} \S^n$ whose attaching map $\alpha$ is a deformation of the attaching map $\partial e^{n+d} \to X\setminus e^{n+d}$ into $e^n$. {\bf Definition:} Let $X$ be a cell complex with a unique top cell $e^{n+d}$ such that $X\setminus e^{n+d}$ is coreducible, retracting onto the bottom cell $e^{n}$. We define the \emph{$2$-cellification} of $X$ to be the $2$-cell complex $Z = D^{n+d} \underset{\alpha}{\cup} \S^n$ whose attaching map $\alpha$ is the composition of the attaching map $\partial e^{n+d} \to X\setminus e^{n+d}$ with the retraction $X \setminus e^{n+d} \to e^n$. Note that in both definitions there is a choice to be made: a deformation of the attaching map of the top cell into the bottom cell has to be chosen in the first definition, and a retraction onto the bottom cell has to be chosen in the second one. 
Making different choices can result in (even stably) different $2$-cell complexes, and any of them can be considered as possible $2$-cellifications. Next we show that taking the $S$-dual space to all possible $2$-cellifications of a complex $X$ we obtain precisely the $2$-cellifications of the $S$-dual complex $D_S[X]$. \begin{lemma}\label{lemma:dual2cell} Let $X$ and $Y$ be $S$-dual finite cell complexes with top and bottom homologies generated freely by the single top and bottom cells $e^{n+d}_X$, $e^n_X$, $e^{m+d}_Y$ and $e^m_Y$, respectively. Assume that in $X$, the cell $e^{n+d}_X$ is attached only to the bottom cell $e^n_X$ in a homotopically nontrivial way (in other words, $X/e^n_X$ is reducible). Then \begin{itemize} \item $Y \setminus e^{m+d}_Y$ admits a retraction $r$ onto $e^m_Y$ (that is, $Y \setminus e^{m+d}_Y$ is coreducible); \item The set of stable homotopy classes of such retractions $r$ admits a bijection with the set of stable homotopy classes of maps $f : D^{n+d} \cup \S^n \to X$ that induce isomorphisms in dimensions $n$ and $n+d$ (here and later $D^p \cup \S^q$ denotes a $2$-cell complex); and \item The $S$-duals of the $2$-cellifications of $X$ are precisely the $2$-cellifications of $Y$. \end{itemize} \end{lemma} \begin{proof} First we remark that one can identify retractions $r$ of $Y \setminus e^{m+d}_Y$ onto $e^m_Y$ with maps $Y \to D^{m+d} \cup \S^m$ that induce isomorphisms of homologies in dimensions $m$ and $m+d$. Indeed, any retraction $r$ induces identity on $H_m$, and extending the retraction by a degree $1$ map of the top cell gives us a map in homology that is an isomorphism in dimensions $m$ and $m+d$. On the other hand, any map from $Y$ to a $2$-cell complex that induces an isomorphism in homology in dimensions $m$ and $m+d$ is glued together from a map $r:Y \setminus e^{m+d}_Y \to \S^m$ and a map $h:(e^{m+d}_Y,\partial e^{m+d}_Y) \to (D^{m+d},\partial D^{m+d})$ that is homotopic to the identity relative to the boundary. 
The map $r$ induces an isomorphism on homology in dimension $m$, so it is homotopic to a map that maps (the closure of) $e^m_Y$ onto $\S^m$ homeomorphically. Then reparametrizing $\S^m$ we may assume this homeomorphism to be identical, so $r$ is a retraction. \par Returning to the proof of the lemma, since $e^{n+d}_X$ is attached homotopically nontrivially only to the bottom cell $e^n_X$, there exists a map $f:D^{n+d} \cup \S^n \to X$ that maps the two cells by degree $1$ maps onto the top and the bottom cell of $X$ respectively. The mapping induced in homology by $f$ is an isomorphism in dimensions $n+d$ and $n$, so its $S$-dual $D_S(f)$ induces isomorphisms in dimensions $m$ and $m+d$. Hence the restriction of the map $D_S(f)$ to $Y\setminus e_Y^{m+d}$ gives a retraction of $Y \setminus e^{m+d}_Y$ onto the closure of $e^m_Y$. Conversely, the $S$-duals of the possible retractions $r: Y \setminus e^{m+d}_Y \to \S^m$ are maps of the form $f: D^{n+d} \cup \S^n \to X$ that induce isomorphisms in homology in dimensions $n$ and $n+d$. \par For any such pair of $S$-dual maps $f$ and $r$ we have the cofibration $$ D^{n+d} \cup \S^n \overset{f}{\to} X \to sk_{n+d-1} X/e^n_X $$ and its $S$-dual cofibration $$ D^{m+d} \cup \S^m \overset{r \cup h}{\leftarrow} Y=D_S[X] \leftarrow D_S[sk_{n+d-1} X/e^n_X]\overset{\dag}{=}sk_{m+d-1} Y/e^m_Y $$ (the equality $\dag$ being implied by Lemma \ref{lemma:restriction}). Since the middle and right-hand side terms are $S$-dual in these cofibration sequences, so are the left-hand side terms. That is, the possible $2$-cellifications of $X$ and $Y$ are $S$-dual. \end{proof} Later we will need some further technical lemmas. \begin{lemma}\label{lemma:selfDual} The $S$-dual of a $2$-cell complex is also a $2$-cell complex, with the same stable attaching map (up to sign). 
\end{lemma} \begin{proof} Let $A$ be the $2$-cell complex in question, with cells in dimensions $n$ and $n+d$, and denote by $B'=D_S[A]$ its $S$-dual (sufficiently stabilized for all maps in the following argument to be in the stable range). Then there is a cofibration $\S^n \to A \to \S^{n+d}$, and $S$-duality takes it to a cofibration $\S^m \to B' \to \S^{m+d}$. The statement of the lemma is trivial if $d=1$ and the attaching map of $A$ is homotopic to a homeomorphism, so we assume that this is not the case, in particular, $A$ has nontrivial homology in dimension $n$. \par We now construct a $2$-cell complex $B$ with cells in dimensions $m$ and $m+d$ as well as a map $f: B \to B'$ that will turn out to be a homotopy equivalence. Take a map $h:(D^{m+d}, \partial D^{m+d}) \to (B',\S^m)$ that represents a generator of $H_{m+d}(B'/\S^m) \cong H_{m+d}(\S^{m+d})$. We define $B$ as the $2$-cell complex obtained by attaching $D^{m+d}$ to $\S^m$ along $h|_{\partial D^{m+d}}$; the map $h$ extends to a map $f:B \to B'$. Comparing the long exact sequences in homology of the pairs $(B',\S^m)$ and $(B,\S^m)$, we see that $f$ induces isomorphisms of $H_*(\S^m)$ and $H_*(\S^{m+d})$ by construction, hence the five-lemma implies that $H_*(f) : H_*(B) \to H_*(B')$ is also an isomorphism in all dimensions. By Whitehead's theorem, $f$ must be a homotopy equivalence. Hence the $S$-dual of the $2$-cell complex $A$ is the $2$-cell complex $B$. \par It remains to show that the attaching map of $B$ is stably homotopic to that of $A$ (or its opposite, depending on the choice of orientations of the cells).
Consider the long exact sequence of the stable homotopy groups of the cofibration $\S^n \to A \to \S^{n+d}$: $$ \dots \to \pi^{\bf S}_{n+d}(A,\S^n) \cong \pi^{\bf S}_{n+d}(\S^{n+d}) \cong \Z \overset{\partial}{\to} \pi^{\bf S}_{n+d-1}(\S^n) \cong \pi^{\bf S}(d-1) \to \dots $$ The boundary map $\partial$ takes the generator of $\Z$ to the attaching map of $A$. Taking $S$-duals we obtain the long exact sequence of the stable cohomotopy groups of the cofibration $\S^m \to B \to \S^{m+d}$: $$ \dots \to \pi_s^m(\S^m) \overset{\delta}{\to} \pi_s^{m+1}(B,\S^m) \cong \pi_s^{m+1}(\S^{m+d}) \cong \pi^{\bf S}(d-1) \to \dots $$ Since the $S$-duality establishes an isomorphism between these two sequences, the boundary map $\partial$ and the coboundary map $\delta$ coincide. It is hence enough to show that $\delta$ maps the generator of $\Z$ to the attaching map of $B$. \par In order to show this, consider the Puppe sequence of the cofibration $\S^m \to B \to \S^{m+d}$, which involves the coboundary map $\delta$: $$ \S^m \to B \to B \cup Cone(\S^m) \sim \S^{m+d} \overset{\delta(id_{\S^{m+d}})}{\longrightarrow} Cone(B) \cup Cone(\S^m) \sim \S^{m+1} \to \dots $$ Denote by $\beta : \S^{m+d-1} \to \S^m$ the attaching map of $B$. We claim that $\delta(id_{\S^{m+d}})$ is homotopic to the suspension $S\beta$. Indeed, denote by $g$ the following map of the sphere $\S^{m+d}$ into $B \cup Cone(\S^m)$: on the top hemisphere $g$ is a homeomorphism onto the top cell of $B$, and on the bottom hemisphere $g$ is the map $Cone(\beta): Cone(\S^{m+d-1}) \to Cone(\S^m)$. On the equator the two partial maps coincide and hence $g$ is a (homotopically) well-defined map. On the top cell of $B$ this map is $1$-to-$1$, so after identifying $B \cup Cone(\S^m)$ with $\S^{m+d}$ the map $g$ becomes a degree $1$ self-map of $\S^{m+d}$ and is hence a homotopy equivalence. 
Consequently the composition of $g$ with the collapse of $B$ in $B \cup Cone(\S^m)$ is homotopic to the map $\delta(id_{\S^{m+d}})$ from the Puppe sequence. But this composition map is the quotient of the bottom hemisphere map $Cone(\beta)$ after collapsing the boundary $\S^{m+d-1}$ in the source and the boundary $\S^m$ in the target, so it is the suspension $S\beta$, proving our claim. \end{proof} \par \begin{lemma}\label{lemma:restriction} Let $X$ and $Y$ be $S$-dual finite cell complexes, with the single top cell $e_X$ of $X$ generating the top homology of $X$ and the single bottom cell $e_Y$ of $Y$ generating the bottom homology $H_m(Y)$ of $Y$. Then $X \setminus int~ e_X$ is $S$-dual to the quotient space $Y/\overline{e_Y}$. (Informally, omitting the top cell is $S$-dual to contracting the bottom cell.) \end{lemma} \begin{proof} Let $i:X \setminus int~ e_X \to X$ be the inclusion. The map $i_*$ induced in homology by $i$ is an isomorphism in all dimensions except the top one, where it is $0$. Hence the $S$-dual of $i$ is a map $D_S[i]: Y \to D_S[X \setminus int~ e_X]$ that induces isomorphisms in all homology groups except the bottom one, where it is $0$. This means that the space $D_S[X \setminus int~ e_X]$ is $m$-connected and after a homotopy, we may assume that the map $D_S[i]$ maps $e_Y$ to a single point. Consequently, $D_S[i]$ factors through the contraction of the bottom cell $e_Y$ and yields a map $Y/\overline{e_Y} \to D_S[X \setminus int~ e_X]$ that induces an isomorphism in all homology groups. Hence $D_S[X \setminus int~ e_X]$ is homotopy equivalent to $Y/\overline{e_Y}$, as claimed. \end{proof} \par \begin{lemma}\label{lemma:multiplication} Let $X$ be a finite cell complex with a single top cell in dimension $n$ that freely generates $H_n(X)$, and denote by $Y$ its $S$-dual that has a single bottom cell in dimension $m$ which freely generates $H_m(Y)$.
Let $\tilde X$ denote the cell complex which has the same $n-1$-skeleton as $X$ and has a single $n$-cell whose attaching map is $q$ times the attaching map of the top cell in $X$. Similarly, let $\tilde Y$ be the quotient of $Y$ by a degree $q$ map of its bottom cell. Then $\tilde X$ and $\tilde Y$ are $S$-dual. \end{lemma} \begin{proof} Denote by $D_S[\tilde X]$ the $S$-dual of $\tilde X$. Since there is a map $\phi: \tilde X \to X$ that has degree $q$ on the top cell and is a homeomorphism on the $n-1$-skeleton, its $S$-dual is a map $\psi: Y \to D_S[\tilde X]$ that induces an isomorphism on all homology groups except the $m$-dimensional one, where it is the multiplication by $q$. Since $D_S[\tilde X]$ can be chosen to be simply connected and has vanishing homology in dimensions $1$ to $m-1$, it is $m-1$-connected and without loss of generality we may assume that it does not contain cells of dimension $m-1$ or less. Similarly, we can assume that $D_S[\tilde X]$ has a single cell of dimension $m$, and the map $\psi$ sends the bottom cell of $Y$ into the bottom cell of $D_S[\tilde X]$. Since $\psi$ restricted to the bottom cell of $Y$ has to be a degree $q$ map, $\psi$ factorizes through the obvious map $r:Y \to \tilde Y$ and there exists a map $\hat \psi: \tilde Y \to D_S[\tilde X]$ such that $\psi = \hat \psi \circ r$. The map on homology induced by $\hat \psi$ is clearly an isomorphism in all dimensions above $m$, and since $r$ and $\psi$ both induce the multiplication by $q$ on $H_m$, $H_m(\hat \psi)$ is also an isomorphism. By Whitehead's theorem, $\hat \psi$ is a homotopy equivalence between $\tilde Y$ and $D_S[\tilde X]$, proving the claim of the lemma. \end{proof} \par \section{Periodicity}\label{section:periodicity} We define the Atiyah-Todd number $M_k$ to be the order of $\gamma_{k-1}$ in the group $J(\C P^{k-1})$ (see \cite{AtiyahTodd}, \cite{AdamsWalker}). 
It can be computed as follows: let $p^{\nu_p(r)}$ be the maximal power of the prime $p$ that divides the positive integer $r$. Then $M_k = \prod_{p \text{ prime}} p^{\nu_p(M_k)}$ and the exponents satisfy the formula $$ \nu_p(M_k) = \max \left\{ r+\nu_p(r) : 1 \leq r \leq \left\lfloor \frac{k-1}{p-1} \right\rfloor \right\}. $$ For $k \leq 6$ the numbers $M_k$ are the following: \par \begin{tabular}{c|c|c|c|c|c|c} k & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline $M_k$ & 1 & 2 & $2^3 \cdot 3$ & $2^3 \cdot 3$ & $2^6\cdot 3^2\cdot 5$ & $2^6\cdot 3^2\cdot 5$ \end{tabular} \par \noindent Note that $M_{k+1}$ is a multiple of $M_k$ for any $k$. \par Mosher proved that the spectral sequence of Subsection \ref{section:CPSS} is periodic with period $M_k$ in the following sense: \begin{prop}\label{thm:periodicity}{\text {\cite[Proposition 4.4]{Mosher}}} If $r \leq k-1$ then $E^r_{i,j} \cong E^r_{i+M_k,j+M_k}$. Moreover, the isomorphism $E^{k-1}_{i,j} \cong E^{k-1}_{i+M_k,j+M_k}$ commutes with the differential $d^{k-1}$. \end{prop} In particular, $d^1$ is $(2,2)$-periodic, $d^2$ and $d^3$ are $(24,24)$-periodic, etc. In the proof, we use the groups $J(X)$ that are defined as follows. {\bf Definition:} For a topological space $X$, let $J(X)$ be the set of stable fiberwise homotopy equivalence classes of sphere bundles $S\xi$, where $\xi$ is a vector bundle over $X$. In particular, if for bundles $\xi$ and $\zeta$ the represented classes $[\xi]$ and $[\zeta]$ in $J(X)$ coincide, then the Thom spaces $T\xi$ and $T\zeta$ are stably homotopically equivalent. The natural surjective map $K(X) \to J(X)$ is compatible with the addition of vector bundles and hence transfers the abelian group structure of $K(X)$ onto $J(X)$ (see \cite{Husemoller}). Note that for any finite cell complex $X$ the group $J(X)$ is finite. 
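Since the formula for $\nu_p(M_k)$ above is purely arithmetic, it can be checked mechanically. The following short script (all function names are ours, introduced only for this check) computes $M_k$ from that formula and reproduces the table for $k \leq 6$, as well as the divisibility $M_k \mid M_{k+1}$:

```python
from math import prod

def nu(p, n):
    """nu_p(n): the exponent of the prime p in the positive integer n."""
    e = 0
    while n % p == 0:
        n //= p
        e += 1
    return e

def nu_M(p, k):
    """nu_p(M_k) = max{ r + nu_p(r) : 1 <= r <= floor((k-1)/(p-1)) },
    read as 0 when the range is empty (i.e. when p > k)."""
    return max((r + nu(p, r) for r in range(1, (k - 1) // (p - 1) + 1)), default=0)

def M(k):
    """The Atiyah-Todd number M_k; only primes p <= k contribute a factor."""
    primes = [p for p in range(2, k + 1) if all(p % d for d in range(2, p))]
    return prod(p ** nu_M(p, k) for p in primes)

# reproduces the table: M_1, ..., M_6 = 1, 2, 24, 24, 2880, 2880
# (24 = 2^3 * 3 and 2880 = 2^6 * 3^2 * 5)
```

Running `M(k)` for $k = 1, \dots, 6$ returns $1, 2, 24, 24, 2880, 2880$, matching the table, and each $M_{k+1}$ is a multiple of $M_k$ as noted above.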
\begin{proof} The definition of the first $(k-1)$ pages of the spectral sequence of the filtration $\C P^0 \subset \C P^1 \subset \dots \subset \C P^i \subset \dots$ is constructed using the relative stable homotopy groups $\pi^{\bf S}_*(\C P^m/\C P^l)$ (where $0\leq m-l \leq k$) and homomorphisms between these groups induced by inclusions between pairs $(\C P^m,\C P^l)$. We show that all these groups remain canonically isomorphic if we replace $\pi^{\bf S}_q(\C P^m,\C P^l)$ by $\pi^{\bf S}_{q+2M_k}(\C P^{m+M_k},\C P^{l+M_k})$. \par It is well-known that the quotient $\C P^m/\C P^l$ is the Thom space of the bundle $(l+1)\gamma_{m-l-1}$. Similarly $\C P^{m+M_k}/\C P^{l+M_k}$ is the Thom space of the bundle $(l+1+M_k)\gamma_{m-l-1}$. Since $M_k$ is the order of $J(\gamma_{k-1})$ in $J(\C P^{k-1})$, and that is obviously a multiple of the order of $J(\gamma_{m-l-1})$ in $J(\C P^{m-l-1})$ if $m-l-1 \leq k-1$, one has $J(M_k\gamma_{m-l-1})=0$. The homomorphism $\xi \mapsto J(\xi)$ is compatible with the sum of bundles, hence for any bundle $\xi$ over $\C P^{m-l-1}$ the Thom spaces $T(\xi \oplus M_k\gamma_{m-l-1})$ and $T(\xi \oplus M_k\varepsilon^1) = S^{2M_k}T\xi$ represent the same element in $J(\C P^{m-l-1})$ and hence are stably homotopically equivalent. Therefore the shift of indices ${(i,j) \mapsto (i+M_k,j+M_k)}$ maps the first $r$ pages of the spectral sequence into itself via the canonical isomorphism $\pi^{\bf S}_q(X) \cong \pi^{\bf S}_{q+2M_k}(S^{2M_k}X)$ for $r\leq k-1$. \par Let us give a more detailed overview of the isomorphism $E^r_{i,j} \cong E^r_{i+M_k,j+M_k}$ for $r \leq k-1$. $$ E^1_{i,j}=\pi^{\bf S}_{i+j}(\C P^i/\C P^{i-1}) = \pi^{\bf S}_{i+j}(\S^{2i}) $$ $$ E^1_{i+M_k,j+M_k}=\pi^{\bf S}_{i+j+2M_k}(\C P^{i+M_k}/\C P^{i+M_k-1}) = \pi^{\bf S}_{i+j+2M_k}(\S^{2i+2M_k}) $$ hence $E^1_{i,j}$ is canonically isomorphic to $E^1_{i+M_k,j+M_k}$. 
\par The group $E^r_{i,j}$ is defined as the quotient $$ E^r_{i,j} = \frac{Z^r_{i,j}}{B^r_{i,j}} = \frac{\img \left( \pi^{\bf S}_{i+j}(\C P^i,\C P^{i-r}) \to \pi^{\bf S}_{i+j}(\C P^i,\C P^{i-1})\right)}{\img \left( \pi^{\bf S}_{i+j+1}(\C P^{i+r-1},\C P^{i}) \overset{\partial}{\to} \pi^{\bf S}_{i+j}(\C P^i,\C P^{i-1})\right)}= $$ $$ =\frac{\img \left( \pi^{\bf S}_{i+j}(T(i-r+1)\gamma_{r-1}) \to \pi^{\bf S}_{i+j}(\S^{2i})\right)}{\img \left( \pi^{\bf S}_{i+j+1}(T(i+1)\gamma_{r-2}) \to \pi^{\bf S}_{i+j}(\S^{2i})\right)} $$ Now replacing $i$ by $i+M_k$ and $j$ by $j+M_k$ we obtain $$ E^r_{i+M_k,j+M_k} = \frac{\img \left( \pi^{\bf S}_{i+j+2M_k}(T(i-r+1+M_k)\gamma_{r-1}) \to \pi^{\bf S}_{i+j+2M_k}(\S^{2i+2M_k})\right)}{\img \left( \pi^{\bf S}_{i+j+1+2M_k}(T(i+1+M_k)\gamma_{r-2}) \to \pi^{\bf S}_{i+j+2M_k}(\S^{2i+2M_k})\right)} $$ But we can replace $T(a+M_k)\gamma_m$ by $S^{2M_k}Ta\gamma_m$ for any $a$ if $m<k$, hence we obtain a canonical isomorphism between $E^r_{i,j}$ and $E^r_{i+M_k,j+M_k}$. This proves the isomorphism of the groups, and the same argument goes through for the differentials $d^r$. \end{proof} \par This proof relied on a sophisticated theorem of Adams, Atiyah and others that determined the order of $J(\gamma_{k-1})$. Next we give a simple independent proof of the fact that $d^1$ is $(2,2)$-periodic, elaborating \cite[Proposition 5.1]{Mosher}. Recall that $d^1_{s,s}$ is the map $$ d^1_{s,s}: \pi^{\bf S}_{2s}(\C P^s/\C P^{s-1}) \to \pi^{\bf S}_{2s-1}(\C P^{s-1}/\C P^{s-2}). $$ If $\iota_s \in \Z = \pi^{\bf S}_{2s}(\C P^s/\C P^{s-1})$ is the positive generator, then $d^1_{s,s}(\iota_s)$ is the (stable) homotopy class of the attaching map of the $2s$-cell of $\C P^s/\C P^{s-2}$ to the sphere $\C P^{s-1}/\C P^{s-2}$. Note that for any $s$ this map is trivial if and only if $\C P^s/\C P^{s-2}$ is stably homotopy equivalent to $\S^{2s} \vee \S^{2s-2}$.
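The obstruction used in the next lemma is the Steenrod square $Sq^2$, and on projective spaces its action is pure binomial arithmetic: the total Steenrod square of the degree $2$ generator $y$ is $Sq(y) = y + y^2$, so the Cartan formula gives $Sq^{2i}(y^k) = \binom{k}{i}\, y^{k+i}$ mod $2$, and by Lucas' theorem $\binom{k}{i}$ is odd exactly when the binary digits of $i$ form a subset of those of $k$. A minimal illustration of this parity computation (the function name is ours):

```python
def sq_coefficient(i, k):
    """Mod-2 coefficient of Sq^{2i} acting on y^k in H^*(CP^infty; Z/2):
    Sq^{2i}(y^k) = C(k, i) y^{k+i}, and C(k, i) is odd iff the bits of i
    are a subset of the bits of k (Lucas' theorem)."""
    return 1 if (i & k) == i else 0

# Sq^2 y^{s-1} = (s-1) y^s is nonzero mod 2 exactly when s is even
for s in range(2, 64):
    assert sq_coefficient(1, s - 1) == (s % 2 == 0)
```

This is exactly the parity appearing in the lemma below: $Sq^2 y^{s-1} = (s-1)y^s$ survives mod $2$ precisely for even $s$.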
\begin{lemma}\label{lemma:Sq2} In the ring $H^*(\C P^s/\C P^{s-2})$ the cohomological operation $Sq^2$ is nontrivial if and only if $s$ is even. \end{lemma} {\bf Corollary:} For $s$ even the space $\C P^s/\C P^{s-2}$ is not stably homotopically equivalent to $\S^{2s} \vee \S^{2s-2}$, hence in this case $d^1_{s,s}(\iota_s)$ is not trivial. \begin{proof}[Proof of Lemma \ref{lemma:Sq2}] The projection $\C P^s \to \C P^s/\C P^{s-2}$ induces isomorphisms of the $\Z_2$-cohomology groups in dimension $2s$ and $2s-2$. Let us denote by $y$ the generator of the ring $H^*(\C P^s; \Z_2)$. Then $Sq^2 y^{s-1} = (s-1) y^s \neq 0$ if $s$ is even. The commutativity of the following diagram finishes the proof: $$ \xymatrix{ H^{2s-2}(\C P^s/\C P^{s-2}) \ar[r]^{Sq^2} & H^{2s}(\C P^s/\C P^{s-2}) \\ H^{2s-2}(\C P^s) \ar[u]_{\cong} \ar[r]^{Sq^2} & H^{2s}(\C P^s) \ar[u]_{\cong} } $$ \end{proof} We thus know that for $s$ even $d^1_{s,s}(\iota_s) \neq 0$, and it remains to show that under the same conditions $d^1_{s+1,s+1}(\iota_{s+1})=0$. We have established in Part I \cite{partI} that the differentials commute with the composition product. In particular, consider the map $d^1_{s,s+1}:E^1_{s,s+1} \to E^1_{s-1,s+1}$. Its domain is $E^1_{s,s+1} = \pi^{\bf S}_{2s+1}(\C P^s/\C P^{s-1}) \cong \pi^{\bf S}(1) = \Z_2$, with the generator traditionally denoted by $\eta$ (see \cite{Toda}). The codomain of the map is $\pi^{\bf S}(2) = \Z_2$ and is generated by $\eta \circ \eta$. This implies that $d^1_{s,s+1}(\eta)=\eta \circ d^1_{s,s}(\iota_s) = \eta \circ \eta \neq 0$. But then $d^1_{s+1,s+1}$ must vanish as it maps into the trivial kernel of $d^1_{s,s+1}$. \subsection{Periodicity modulo $p$} In \cite{Mosher}, it was stated without proof that a similar result holds when one considers the spectral sequence only from the point of view of $p$-components of the involved groups. We present a proof below. 
To simplify notation, we will use the convention $n_p = p^{\nu_p(n)}$ to denote the $p$-component of a natural number $n$. \begin{propNull}{\cite[in the text]{Mosher}} If $r \leq k-1$ then $E^r_{i,j}$ and $E^r_{i+(M_k)_p,j+(M_k)_p}$ are isomorphic modulo the class of groups of order coprime to $p$. Moreover, the $p$-isomorphism $E^{k-1}_{i,j} \cong E^{k-1}_{i+(M_k)_p,j+(M_k)_p}$ commutes with the differential $d^{k-1}$. \end{propNull} \par For the reader's convenience we recall the notion of $p$-equivalence that will be used in the proof. \par {\bf Definition:} A map $f: X \to Y$ of simply connected spaces is a $p$-equivalence if it induces isomorphisms on the $p$-components of all homotopy groups. The $p$-equivalence of spaces is the finest equivalence relation under which any two spaces admitting a $p$-equivalence map between them are equivalent. \par {\bf Definition:} Equivalently, simply connected spaces $X$ and $Y$ are $p$-equivalent if their $p$-localizations are homotopy equivalent. \par In order to imitate the previous proof we define the groups $J_p$: {\bf Definition:} For a topological space $X$, let $J_p(X)$ be the set of stable fiberwise homotopy $p$-equivalence classes of sphere bundles $S\xi$, where $\xi$ is a vector bundle over $X$. In particular, if the classes $[\xi]=[\zeta]\in J_p(X)$, then the Thom spaces $T\xi$ and $T\zeta$ are stably homotopically $p$-equivalent. \par Note that the natural map $J_p: K(X) \to J_p(X)$ factors through $J: K(X) \to J(X)$ and transfers the abelian group structure of $K(X)$ and $J(X)$ to $J_p(X)$. Indeed, the Cartan sum operation in $K(X)$ corresponds to taking the fibrewise join in $J(X)$ and $J_p(X)$, and inverse elements exist in $J_p(X)$ since they can be found in $J(X)$.
\par We are tempted to claim that $J_p(X)$ is actually just the $p$-component of $J(X)$ (which we denote by $J_p^{alg}(X)$ and call the ``algebraic'' $J_p$ to distinguish it from the ``geometric'' $J_p$ defined above), but we can only show this for the spaces $X=\C P^r$; this is however enough to prove Mosher's claim. \begin{lemma}\label{lemma:JpCP} $J_p(\C P^n) \cong J_p^{alg}(\C P^n)$. In other words, for any two stable vector bundles $\xi$ and $\eta$ over $\C P^n$ the sphere bundles $S\xi$ and $S\eta$ are stably fiberwise $p$-equivalent if and only if the class $[\xi-\eta] \in J(\C P^n)$ has order coprime to $p$. \end{lemma} \begin{proof}[Proof of Lemma \ref{lemma:JpCP}] Denote by $x = \gamma_n-1 \in K(\C P^n)$ the rank $0$ representative of the stable class of the tautological bundle. It is known (\cite[Theorem 7.2.]{AdamsK}) that $K(\C P^n) \cong \Z[x]/(x^{n+1})$. \par For any positive integer $q$ coprime to $p$ we have $J_p(\gamma_n) = J_p(\gamma_n^{\otimes q})$ since there is a fibrewise degree $q$ map from $\gamma_n$ to $\gamma_n^{\otimes q}$. Rewriting this in terms of $x$ and using $x^{n+1}=0$, we get that for any such $q$ we have \begin{equation}\label{eq:Jmain} J_p(1+x)=J_p((1+x)^q)=J_p\left(1+qx+{q \choose 2}x^2+\dots+{q \choose n}x^n\right). \end{equation} We show that we can choose values $q_1$, $q_2$, $\dots$, $q_m$ and $\lambda_1$, $\lambda_2$, $\dots$, $\lambda_m$ in such a way that considering the equality \eqref{eq:Jmain} for $q=q_1$, $\dots$, $q_m$ and forming the linear combination of the obtained equalities with coefficients $\lambda_1$, $\dots$, $\lambda_m$ yields the equality $p^C J_p(x)=0$ for some positive integer $C$. As a consequence, we deduce that the group $J_p(\C P^n)$ is a $p$-primary group. 
\par To do this, observe that the coefficients of $1$, $x$, $\dots$, $x^n$ in the linear combination of the right-hand sides of \eqref{eq:Jmain} for $q=q_1, \dots, q_m$ with coefficients $\lambda_1, \dots, \lambda_m$ are the coordinates of the image of the column vector $(\lambda_1, \dots, \lambda_m)^t$ under the linear transformation described by the matrix $$ M=\left( \begin{array}{cccc} 1 & 1 & \dots & 1\\ q_1 & q_2 & \dots & q_m \\ {q_1 \choose 2} & {q_2 \choose 2} & \dots & {q_m \choose 2}\\ \vdots & \vdots & \ddots & \vdots \\ {q_1 \choose n} & {q_2 \choose n} & \dots & {q_m \choose n}\\ \end{array} \right). $$ If we choose $m=n+1$, then $M$ becomes a square matrix and its determinant can be computed. Indeed, in the $j$th row, for $j = 0$, $\dots$, $n$, the $i$th entry is the value at $q_i$ of one fixed polynomial of degree $j$ with leading coefficient $1/j!$, namely ${q \choose j}$. By induction on $j$, the rows with indices less than $j$ span all polynomials of degree less than $j$ (evaluated at the $q_i$), hence adding a suitable linear combination of these rows to the $j$th row turns the $j$th row into $\left(\frac{q_i^j}{j!}\right)_{i=1..m}$. Doing this for all $j$ does not change the determinant, and we arrive at the matrix $\left(\left( \frac{q_i^j}{j!} \right)\right)_{i=1..m,j=0..n}$, whose determinant is $\prod_{j=0}^{n} \frac{1}{j!}$ times the Vandermonde determinant on the elements $q_1, \dots, q_m$, so $$ \det M = \frac{\prod_{1 \leq v < u \leq m} (q_u-q_v) }{\prod_{j=0}^{n} j!}. $$ \noindent In particular, if we choose $q_j=pj+1$, then $\det M = p^{n+1 \choose 2}$.
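The determinant identity is easy to verify numerically. The Python sketch below (our verification aid, not part of the paper; the helper names are ours) builds the matrix $M$ for $q_j=pj+1$ and evaluates its determinant by exact rational arithmetic.

```python
from fractions import Fraction
from math import comb

def binomial_matrix(qs):
    """The matrix M: row j = 0..n lists the binomial coefficients C(q_i, j)."""
    n = len(qs) - 1
    return [[Fraction(comb(q, j)) for q in qs] for j in range(n + 1)]

def det(mat):
    """Determinant via Gaussian elimination over the rationals."""
    m = [row[:] for row in mat]
    size = len(m)
    d = Fraction(1)
    for c in range(size):
        pivot = next((r for r in range(c, size) if m[r][c] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != c:
            m[c], m[pivot] = m[pivot], m[c]
            d = -d
        d *= m[c][c]
        for r in range(c + 1, size):
            factor = m[r][c] / m[c][c]
            for col in range(c, size):
                m[r][col] -= factor * m[c][col]
    return d

# p = 3, n = 3, so q_j = 3j + 1 for j = 1..4:
print(det(binomial_matrix([4, 7, 10, 13])))  # 729 = 3**6 = p**comb(n + 1, 2)
```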
\par Choosing the (integral) vector $v=(\lambda_1,\dots,\lambda_m)^t$ to be the first column of the cofactor matrix of $M$ (which is $\left(\det M\right) \cdot M^{-1}$), we have $Mv=(\det M,0,\dots,0)^t$, so when we take the linear combination of \eqref{eq:Jmain} for $q=q_i$ with coefficient $\lambda_i$, the right-hand sides sum up to $J_p(\det M \cdot 1)$ and the left-hand sides sum up to $J_p(\sum_i \lambda_i (1+x)) = J_p\left(\left(\det M\right) \cdot (1+x)\right)$. But $J_p(1)=0$ by definition, so we get that $\left(\det M\right) J_p(x) = 0$. Since (with the choice $q_j=pj+1$) the determinant $\det M$ is a power of $p$, $J_p(\C P^n)$ is a $p$-group. \par Using the universal property of the $p$-component $J_p^{alg}(\C P^n)$, this proves that the natural projection $J(\C P^n) \to J_p(\C P^n)$ factors through $J_p^{alg}(\C P^n)$. On the other hand, no nonzero element of $J_p^{alg}(\C P^n)$ can vanish in $J_p(\C P^n)$ by \cite[Theorem (1.1)]{AdamsJ}: if $S\xi$ is fiberwise $p$-trivial, then $\xi$ admits a fiberwise map of degree $s$ to a trivial bundle with $s$ coprime to $p$, and then there exists a nonnegative integer $e$ such that $s^e\xi$ is fiberwise homotopy equivalent to a trivial bundle. Multiplication by $s^e$ is an isomorphism in any $p$-group, hence $J(s^e\xi)=0$, and consequently $J_p^{alg}(s^e\xi)=s^eJ_p^{alg}(\xi)=0$ implies that $J_p^{alg}(\xi)=0$. \end{proof} \begin{proof}[Proof of periodicity modulo $p$] Denote by $q$ the quotient $M_k/(M_k)_p$. Then $(p,q)=1$ and $M_k = q \cdot (M_k)_p$. The class of the tautological bundle $\gamma_{k-1}$ in $J(\C P^{k-1})$ has order $M_k$, so its image in $J_p(\C P^{k-1})$ has order $(M_k)_p$. In particular, we have $$ J_p((M_k)_p\gamma_{k-1})=0. $$ Consequently for any bundle $\xi$ over $\C P^{k-1}$ we have $J_p(\xi) = J_p(\xi + (M_k)_p \gamma_{k-1})$ and therefore the Thom spaces $T\xi$ and $T(\xi + (M_k)_p \gamma_{k-1})$ are stably homotopically $p$-equivalent.
In particular, we obtain a $p$-isomorphism between the groups $$\pi^{\bf S}_*(\C P^m,\C P^l) \cong \pi^{\bf S}_*(T((l+1)\gamma_{m-l-1}))$$ and $$\pi^{\bf S}_*(\C P^{m+(M_k)_p},\C P^{l+(M_k)_p}) \cong \pi^{\bf S}_*(T((l+1+(M_k)_p)\gamma_{m-l-1})).$$ Hence (analogously to the proof of Proposition \ref{thm:periodicity}) $E^r_{i,j}$ and $E^r_{i+(M_k)_p,j+(M_k)_p}$ have canonically isomorphic $p$-components for $r\leq k-1$. This proves the Proposition. \end{proof} \section{Image of $J$} \begin{thm}{\cite[Proposition 4.7(a)]{Mosher}}\label{thm:imJ} Let $\iota_s \in \pi^{\bf S}_{2s}(\C P^s/\C P^{s-1}) = E^1_{s,s} \cong \Z$ be a generator and suppose that $d^i(\iota_s)=0$ for $i<k$. Then $d^k(\iota_s) \in E^k_{s-k,s+k-1}$ belongs to the image of $\img J \subset E^1_{s-k,s+k-1} = \pi^{\bf S}(2k-1)$ in $E^k_{s-k,s+k-1}$. \end{thm} {\bf Remark:} By the geometric interpretation of the singularity spectral sequence (Subsection \ref{section:singSS}), the conclusion of Theorem \ref{thm:imJ} means that under its assumptions the boundary map of an isolated $\Sigma^{1_{s-1}}$-point can be chosen to be a $\Sigma^{1_{s-k-1}}$-map whose $\Sigma^{1_{s-k-1}}$-set is an immersed framed $(2k-1)$-dimensional sphere. Indeed, $\img J$ consists exactly of those stable homotopy classes which can be represented by a framed sphere. \begin{proof} Let us recall the definitions of the differentials $d^i_{s,s}$. For $i=1$ this is the boundary map $\pi^{\bf S}_{2s}(\C P^s, \C P^{s-1}) \to \pi^{\bf S}_{2s-1}(\C P^{s-1}, \C P^{s-2})$. If it vanishes on an element $x \in \pi^{\bf S}_{2s}(\C P^s,\C P^{s-1})$, then $x$ comes from $\pi^{\bf S}_{2s}(\C P^s,\C P^{s-2})$, that is, there is an element $x_2 \in \pi^{\bf S}_{2s}(\C P^s,\C P^{s-2})$ whose image in $\pi^{\bf S}_{2s}(\C P^s,\C P^{s-1})$ is $x$. Then $d^2(x)$ is represented by $\partial(x_2)$, where $\partial$ is the boundary map $\pi^{\bf S}_{2s}(\C P^s, \C P^{s-2}) \to \pi^{\bf S}_{2s-1}(\C P^{s-2},\C P^{s-3})$.
Analogously, if $d^{i-1}(x)=0$, then there is an element $x_i \in \pi^{\bf S}_{2s}(\C P^s,\C P^{s-i})$ that is mapped to $x$ by the map $\pi^{\bf S}_{2s}(\C P^s, \C P^{s-i}) \to \pi^{\bf S}_{2s}(\C P^{s},\C P^{s-1})$, and $d^i(x)$ is represented by $\partial (x_i)$, where $\partial$ is the boundary map $\pi^{\bf S}_{2s}(\C P^s, \C P^{s-i}) \to \pi^{\bf S}_{2s-1}(\C P^{s-i},\C P^{s-i-1})$. \par Hence the condition $d^i(\iota_s)=0$ for $i<k$ means that in the space $\C P^s/\C P^{s-k-1}$ the top-dimensional cell\footnote{We shall consider the natural CW-structure on $\C P^s/\C P^{s-k-1}$ that has one cell in each of the even dimensions $0$, $2(s-k)$, $2(s-k)+2$, $\dots$, $2s$.} is attached to $\C P^{s-1}/\C P^{s-k-1}$ by an attaching map that can be deformed into a map going into the lowest (positive-)dimensional cell of $\C P^{s}/\C P^{s-k-1}$. The class $d^k(\iota_s)$ is then the stable homotopy class of the resulting attaching map $\S^{2s-1} \to \S^{2(s-k)}$. Therefore if we fix the deformation into the lowest cell, then $d^k(\iota_s)$ can be considered as an element of $\pi^{\bf S}(2k-1)$. Without fixing the deformation the value of $d^k(\iota_s)$ is still well-defined in $E^k_{s-k,s+k-1}$, which is the quotient of $E^1_{s-k,s+k-1} \cong \pi^{\bf S}(2k-1)$ modulo the images of the previous differentials $d^i$, $i<k$. For later reference, we record the following lemma: \begin{lemma}\label{lemma:differential} Assume that $d^i(\iota_s)=0$ for $i<k$. Then in the space $\C P^s/\C P^{s-k-1}$ the results of different deformations into the bottom cell of the attaching map of the top cell differ by elements of the images of the differentials $d^i$, $i<k$.
\end{lemma} From this point of view the vanishing of $d^i(\iota_s)$ for $i<k$ means that after contracting the lowest (positive) dimensional cell to a point in $\C P^s/\C P^{s-k-1}$ (that is, after forming the space $\C P^s/\C P^{s-k}$) the top cell splits off stably in the sense that there is a stable map $\S^{2s} \nrightarrow \C P^s/\C P^{s-k}$ such that composing it with the projection $\C P^s/\C P^{s-k} \to \C P^s/\C P^{s-1} = \S^{2s}$ gives a stable map $\S^{2s} \to \S^{2s}$ of degree $1$. In other words, $\C P^s/\C P^{s-k}$ is $S$-reducible. \par It is well-known that $\C P^s/\C P^{s-k}$ is the Thom space $T((s-k+1)\gamma_{k-1})$ of the bundle $(s-k+1)\gamma_{k-1}$ over $\C P^{k-1}$. By a result of Atiyah and Adams, $T(m\gamma_{k-1})$ is $S$-reducible if and only if $m+k$ is divisible by the number $M_k$ defined in Section \ref{section:periodicity} (see \cite[Theorem 4.3. e)]{Mosher}). \par Recall (\cite{Mosher}, collecting results of Adams and Walker \cite{AdamsWalker}, James, and Atiyah and Todd \cite{AtiyahTodd}) that \begin{enumerate} \item the Thom spaces $T(p\gamma_{k-1})$ and $T(q\gamma_{k-1})$ are $S$-dual if $p+q+k \equiv 0 \mod M_k$; \item $T(n\gamma_{k-1})$ is $S$-coreducible exactly if $n \equiv 0 \mod M_k$; and \item $T(m\gamma_{k-1})$ is $S$-reducible exactly if $m+k \equiv 0 \mod M_k$. \end{enumerate} Let us return to the proof of the theorem. We have seen that $\C P^s/\C P^{s-k} = T\left((s-k+1)\gamma_{k-1}\right)$ is $S$-reducible (hence $s+1 \equiv 0 \mod M_k$ by property $(3)$). Let us take an $n$ such that $\C P^{n+k}/\C P^{n-1} = T\left(n\gamma_{k}\right)$ is $S$-dual to $\C P^s/\C P^{s-k-1} = T\left((s-k)\gamma_k\right)$, that is, $n+(s-k)+(k+1) \equiv 0 \mod M_{k+1}$. Assume that $s+1 \equiv tM_k \mod M_{k+1}$ and correspondingly $n \equiv -tM_k \mod M_{k+1}$. We can choose $n$ to be greater than $k$. \par Note that the $S$-dual of $\C P^s/\C P^{s-k}$ is $\C P^{n+k-1}/ \C P^{n-1}$ by Lemma \ref{lemma:restriction}.
Since $\C P^s/\C P^{s-k}$ is $S$-reducible, its $S$-dual $\C P^{n+k-1}/ \C P^{n-1}$ is $S$-coreducible. Then Lemma \ref{lemma:coreduciblebouquet} together with the condition $n>k$ implies that $\C P^{n+k-1}/ \C P^{n-1}$ is homotopy equivalent to $\S^{2n} \vee (\C P^{n+k-1}/ \C P^n)$. \par Since the complex $\C P^{s}/ \C P^{s-k-1}$ becomes $S$-reducible after collapsing the bottom cell, we can consider a $2$-cellification $X=D^{2s} \underset{\alpha}{\cup} \S^{2(s-k)}$ formed by the top and bottom cells. There is a map $f: X \to \C P^s/ \C P^{s-k-1}$ inducing isomorphisms in homology in dimensions $2s$ and $2(s-k)$. Recall (see the geometric definition of the differentials at the beginning of the proof of Theorem \ref{thm:imJ} and Lemma \ref{lemma:differential}) that here the attaching map $\alpha \in \pi^{\bf S}(2k-1)$ of $X$ is well-defined only up to the choice of the deformation of the attaching map of the top cell, that is, up to an element of the subgroup generated by the images of the lower differentials $d^i$, $i=1,\dots,k-1$. Consider the cofibration \begin{equation}\tag{*}\label{eq:Xcofibration} X \to \C P^s/ \C P^{s-k-1} \to \C P^{s-1}/ \C P^{s-k} \end{equation} and its $S$-dual cofibration \begin{equation}\tag{**}\label{eq:Ycofibration} Y \leftarrow \C P^{n+k}/ \C P^{n-1} \leftarrow \C P^{n+k-1}/ \C P^n. \end{equation} By Lemma \ref{lemma:selfDual}, the space $Y$ is also a $2$-cell complex, with the same (stable) attaching map $\alpha$. Note that the cofibration \eqref{eq:Ycofibration} demonstrates the $S$-coreducibility of $\C P^{n+k-1}/\C P^{n-1}$, since after omitting the top cell in both $\C P^{n+k}/\C P^{n-1}$ and in $Y$ the cofibration \eqref{eq:Ycofibration} gives a retraction of $\C P^{n+k-1}/\C P^{n-1}$ onto its bottom cell, $\S^{2n}$. The attaching map of $Y$ is the composition of the attaching map of the top cell of $\C P^{n+k}/\C P^{n-1}$ with this retraction.
\par Let us denote by $pr: \C P^k \to \C P^k/\C P^{k-1} = \S^{2k}$ the projection. Consider the following diagram (see \cite{Mosher}), where $\varepsilon^d$ denotes the trivial bundle of rank $d$: \begin{equation}\label{eq:diagramMain} \begin{split} \xymatrix{ &\left[n\gamma_k - \varepsilon^n\right] \ar@{}[r]|{\in} \ar[ddl] & \tilde K_{\C}(\C P^k) \ar[d]& \\ & & \tilde K_{\R}(\C P^k) \ar[d]_J & \tilde K_{\R}(\S^{2k}) \ar[l]_{pr^*}\ar[d]_J \\ 0 \ar@{}[r]|{\in} & J(\C P^{k-1}) & J(\C P^k) \ar[l] & J(\S^{2k}) \ar[l]_{pr^*}\\ } \end{split} \end{equation} Since $n$ is a multiple of $M_k$, the restriction of the class $\left[n\gamma_k - \varepsilon^n\right] \in \tilde K_{\C}(\C P^k)$ to $\C P^{k-1}$ represents the zero class in $J(\C P^{k-1})$. The bottom row of the diagram \eqref{eq:diagramMain} is exact, so the image of this class in $J(\C P^k)$ (namely $J(n\gamma_k)$) belongs to the image $pr^*(J(\S^{2k}))$. The map $J: \tilde K_{\R}(\S^{2k}) \to J(\S^{2k})$ is surjective by definition, so there exists a (real) vector bundle $\xi$ over $\S^{2k}$ such that the class $[\xi-\varepsilon^{\rk \xi}] \in \tilde K_{\R}(\S^{2k})$ is mapped by $J \circ pr^* = pr^* \circ J$ to $J(n\gamma_k-\varepsilon^n)$. Hence $J(pr^*\xi) = J(n\gamma_k)$. This implies that $pr^*\xi$ and $n\gamma_k$ have stably homotopically equivalent Thom spaces, i.e. (possibly after stabilization to achieve $\rk \xi=2n$) we have $T(pr^*\xi) \cong T(n\gamma_k) = \C P^{n+k}/\C P^{n-1}$. \par Note that the space $T(pr^*\xi)$ inherits a cell decomposition from $\C P^k$ in which the top cell corresponds to (the disc bundle of $\xi$ over) the top cell of $\C P^k$. Omitting this top cell gives us the Thom space of the bundle $\left(pr^*\xi\right)|_{\C P^{k-1}} = pr^*\left(\xi|_{\{point\}}\right)$, and this space is coreducible in an obvious way: mapping all the fibers onto the $\S^{2n}$ over the $0$-cell is a retraction onto this $\S^{2n}$. 
Putting back the top cell and composing its attaching map with the retraction we obtain a $2$-cell complex, which is in fact $T\xi$. The attaching map of $T\xi$ is known to lie in $\img J$ (see e.g.\ \cite{Hatcher}), so it only remains to compare it with the attaching map of $Y$ (which is $d^k(\iota_s)$ modulo the images of the lower differentials). Even though both $Y$ and $T\xi$ are obtained by the same $2$-cellification procedure from the stably homotopically equivalent spaces $\C P^{n+k}/\C P^{n-1}$ and $T(pr^*\xi)$ respectively, the construction depended on the choice of a retraction from the definition of coreducibility of the involved subspaces $\C P^{n+k-1}/\C P^{n-1}$ and $pr^*\xi|_{\C P^{k-1}}$, and there is no reason why the two retractions should coincide. \par By Lemma \ref{lemma:dual2cell}, this choice of coreduction of $\C P^{n+k-1}/\C P^{n-1}$ introduces exactly the same ambiguity in the definition of the $2$-cellification of $\C P^{n+k}/\C P^{n-1}$ as the choice of (stable) reducibility of the $S$-dual complex $\C P^s/\C P^{s-k}$, or equivalently the choice of a deformation of the attaching map of the top cell in $\C P^s/\C P^{s-k-1}$ into the bottom cell. By Lemma \ref{lemma:differential}, this choice corresponds exactly to changing our map by an element of the images of the previous differentials $\img d^i$, $i<k$. Hence the attaching map $\alpha$ of the space $X$ lies in the same coset of $\langle \img d^i: i<k \rangle$ as the attaching map of $T\xi$, which belongs to $\img J$. This finishes the proof. \end{proof} \par The proof above clarifies notions behind Mosher's argument that do not appear explicitly in \cite{Mosher}, such as $2$-cellification and its dependence on the choice of deformations. Below we give a shorter second proof that hides the geometric machinery within the general framework of spectral sequences.
\begin{proof} Consider the spectral sequence in stable homotopy associated to the filtration (of length $k+1$) $$ \C P^{s-k}/\C P^{s-k-1} \subset \C P^{s-k+1}/\C P^{s-k-1} \subset \dots \subset \C P^{s}/\C P^{s-k-1}. $$ This spectral sequence maps naturally to the spectral sequence of the infinite filtration $\C P^{s-k}/\C P^{s-k-1} \subset \C P^{s-k+1}/\C P^{s-k-1} \subset \dots$ of $\C P^\infty/\C P^{s-k-1}$. Taking $S$-duals of all the involved spaces and maps, we obtain a sequence of maps that we call a cofiltration: $$ \C P^{n+k}/\C P^{n+k-1} \leftarrow \C P^{n+k}/\C P^{n+k-2} \leftarrow \dots \leftarrow \C P^{n+k}/\C P^{n-1} $$ and the spectral sequence $E^r_{i,j}$, $s-k \leq i \leq s$, of the original filtration becomes the spectral sequence $E^{p,q}_r$ of this cofiltration in the stable \emph{cohomotopy} theory, with $E^r_{s-k+p,s-k+q}$ identified with $E_r^{-n-p,-n-q}$ (in particular, $E_1^{p,q}=\pi^{\bf s}(p-q)$ if $-n-k \leq p \leq -n$ and is $0$ otherwise). More precisely, let $C$ be the midpoint of the interval $[-n,s-k]$ on the horizontal axis in the plane. Then reflection in $C$ transforms the groups $E^r_{i,j}$ and the differentials $d^i$ of the original first quadrant spectral sequence into the groups $E^{p,q}_r$ and the differentials $d_i$ of a third quadrant spectral sequence. This latter spectral sequence is that of the $S$-dual cofiltration in stable cohomotopy, because $S$-duality maps stable homotopy groups into stable cohomotopy groups and commutes with homomorphisms induced by mappings.
That is, if $A \to X \to X/A$ is a cofibration and $D_S[A] \leftarrow D_S[X] \leftarrow D_S[X/A]$ is the $S$-dual cofibration, then the exact sequence in stable homotopy of the first cofibration is canonically isomorphic to the exact sequence in stable cohomotopy of the second: $$ \xymatrix{ \pi_*^{\bf s}(A) \ar[r] \ar[d]_{\cong}& \pi_*^{\bf s}(X) \ar[r] \ar[d]_{\cong} & \pi_*^{\bf s}(X/A) \ar[d]_{\cong} \\ \pi_*^{\bf s}(D_S[A]) \ar[r]& \pi_*^{\bf s}(D_S[X]) \ar[r] & \pi_*^{\bf s}(D_S[X/A]) \\ } $$ In this cohomological spectral sequence the differentials $d_1,\dots,d_{k-1}$ going from the column number $-n-k$ all vanish (because $\C P^{n+k-1}/\C P^{n-1}$ admits a retraction onto $\C P^n/\C P^{n-1}$). The last differential $d_k$ maps the class of the identity in $\pi^{\bf S}_{2n+2k}(\C P^{n+k}/\C P^{n+k-1})$ into the attaching map of the top cell. Choosing any (cellular) $2$-cellification map $\C P^{n+k}/\C P^{n-1} \to Z = \S^{2n} \cup D^{2n+2k}$ induces a map of cofiltrations $$ \xymatrix{ \C P^{n+k}/\C P^{n+k-1} \ar[d] & \C P^{n+k}/\C P^{n+k-2} \ar[l] \ar[d] & \dots \ar[l] & \C P^{n+k}/\C P^{n} \ar[l] \ar[d] & \C P^{n+k}/\C P^{n-1} \ar[l] \ar[d] \\ Z/\S^{2n} & Z/\S^{2n} \ar[l]_{id} & \dots \ar[l]_{id} & Z/\S^{2n} \ar[l]_{id} & Z \ar[l] } $$ and hence it induces a map of the corresponding cohomotopical spectral sequences that is an isomorphism on column $-n-k$ in the pages $E^{**}_1$, $E^{**}_2$, $\dots$, $E^{**}_{k-1}$, with ``the same'' differential $d_k$. That is, the attaching map of $Z$ is a representative of the image $d_k\iota^{-n-k,-n-k}$ (where $\iota^{-n-k,-n-k}$ is the positive generator of $E_1^{-n-k,-n-k} \cong \Z$), which coincides with $d^k(\iota_s)$, and it lies in $\img J$. \end{proof} Theorem \ref{thm:imJ} has a $p$-localized version that we formulate and prove below.
Let us consider the \emph{$p$-localized Mosher spectral sequence} ${}^pE^r_{i,j}$, ${}^pd^r$, that is, the spectral sequence defined by the $p$-localization of the usual filtration of $\C P^\infty$. In this spectral sequence the starting groups are the $p$-components of those in Mosher's spectral sequence and the $d^1$ differential is the $p$-component of that in Mosher's spectral sequence. \begin{thm}\label{thm:imJ_p} Let $\iota_s \in \pi^{\bf S}_{2s}(\C P^s/\C P^{s-1}) \otimes \Z_{(p)} = {}^pE^1_{s,s} \cong \Z_{(p)}$ be a generator and suppose that ${}^pd^i(\iota_s)=0$ for $i<k$. Then ${}^pd^k(\iota_s) \in {}^pE^k_{s-k,s+k-1}$ belongs to the image of $\img J_p \subset {}^pE^1_{s-k,s+k-1} = \pi^{\bf S}(2k-1) \otimes \Z_{(p)}$ in ${}^pE^k_{s-k,s+k-1}$. \end{thm} {\bf Setup:} (the diagram below can help to follow the argument) $\C P^s/\C P^{s-k-1}$ is $S$-dual to $\C P^{n+k}/\C P^{n-1}$ (ensured by $n+s+1 \equiv 0 \pmod{M_{k+1}}$), and the first nonzero $p$-localized differential from $E^*_{s,s}$ is the $k$-th, that is, $n$ is divisible by $(M_k)_p$ but not by $(M_{k+1})_p$ (this latter statement holds due to Lemma \ref{lemma:JpCP} establishing the equivalence of the geometric and algebraic definitions of $J_p$, which implies that the order of $\gamma_k$ in $J_p(\C P^k)$ is exactly $(M_{k+1})_p$). Then the top cell of $\C P^s/\C P^{s-k-1}$ is attached to the intermediate cells by maps that are trivial after $p$-localization.
\par $$ \xymatrix{ \C P^s/\C P^{s-k-1} \ar@{<->}[d]_S& X \ar[l]_-{\deg=q} \ar@{<->}[d]_S & A = D^{2s} \underset{\alpha}{\cup} \S^{2(s-k)} \ar[l] \ar@{<->}[d]_S \\ \C P^{n+k}/\C P^{n-1} \ar[r]^-{/\Z_q} \ar[d]_{\cong_p} & Y \ar[r] & B = D^{2(n+k)} \underset{\alpha}{\cup} \S^{2n} \\ T\left(pr^*\xi\right) \ar[rr] & & T\xi \\ } $$ The choice of a map $f: D^{2s} \cup \S^{2s-2k} \to \C P^s/\C P^{s-k-1}$ inducing isomorphisms in homology in dimensions $2s$ and $2s-2k$ is equivalent to the choice of a representative of $d^k\iota_s$ in its coset modulo the images of all previous differentials going to the same cell (Section \ref{section:SS} and Lemma \ref{lemma:differential}), and is the same as the choice of a map $\S^{2s-1} \to \S^{2s-2k}$ homotopic to the attaching map of the top cell, see Section \ref{section:stable}. \par {\it Proof of Theorem \ref{thm:imJ_p}:} We first construct a space $X$ with the same cells as $\C P^s/\C P^{s-k-1}$, where the top cell is actually attached (homotopically nontrivially) only to the bottom cell (not merely after $p$-localization), and a map $X \to \C P^s/\C P^{s-k-1}$ that is a $p$-equivalence. Such a space and map exist because the attaching map of the top cell to the intermediate cells is homotopically $p$-trivial, so some multiple of it (say $q$ times it, with $q$ necessarily coprime to $p$) is actually trivial. Adding the last cell with this new attaching map (the attaching map of the top cell multiplied by $q$) we obtain $X$ and a degree $q$ map from $X$ to $\C P^s/\C P^{s-k-1}$ which is the identity when restricted to the $(2s-2)$-skeleton and a $p$-equivalence when restricted to the attached $2s$-cell relative to its boundary, so by the $5$-lemma it is a $p$-equivalence. Denote by $A$ the space formed by the bottom and the top cells of $X$ (a $2$-cellification of $X$).
By Lemma \ref{lemma:multiplication}, the $S$-dual of $X$, which we shall denote by $Y$, can be obtained from $D_S[\C P^s/\C P^{s-k-1}] = \C P^{n+k}/\C P^{n-1}$ by wrapping the bottom cell onto itself by a degree $q$ map. \par By Lemma \ref{lemma:dual2cell}, the possible choices of $A$ (which correspond to different representatives of $d^k\iota_s$ modulo the images of the previous differentials) are $S$-dual to the possible choices of $2$-cellifications of $Y$. In addition, choosing the space $X$ differently does not change the image of (the homotopy class of) the attaching map in the $p$-component (after dividing it by $q$), since for any two approximations there is a common ``refinement'' that factors through both maps $X \to \C P^s/\C P^{s-k-1}$. That means that the $2$-cellifications of $Y$ (denoted by $B$ in the diagram) have attaching map $q \cdot d^k\iota_s$, modulo the images of the previous differentials and considered in the $p$-component. \par We claim that $B$ is $p$-equivalent to the Thom space of a vector bundle over the sphere $\S^{2k}$. To deduce this, we check that over $\C P^{k-1}$ the bundle $n\gamma_{k-1}$ (whose Thom space is $\C P^{n+k-1}/\C P^{n-1}$) is $p$-trivial in the sense that its Thom space is stably $p$-equivalent to the Thom space of the trivial bundle. By the equivalence of the geometric and algebraic definitions of $J_p$ (Lemma \ref{lemma:JpCP}) it is enough to check that $J(n\gamma_{k-1})$ has order coprime to $p$ and hence vanishes when considered in the $p$-component of $J(\C P^{k-1})$. Indeed, $n$ is divisible by $(M_k)_p$ and consequently $J(n\gamma_{k-1})$ has order coprime to $p$. Hence the $J_p$-image of $n\gamma_k$ is the same as that of the pullback $pr^* \xi$ of a bundle $\xi$ over $\S^{2k}$. From this, we want to conclude that the attaching map in $T\xi$ is ``the same'' as the attaching map in the $2$-cellification $B$ of $Y$. Both $T(pr^*\xi)$ and $Y$ are coreducible after the removal of the top cell: for $Y$ this holds by construction.
The space $T(pr^*\xi)$ is coreducible after removal of its top cell because $(pr^*\xi) |_{\C P^{k-1}}$ is trivial and hence $T((pr^*\xi) |_{\C P^{k-1}})$ retracts to $T\varepsilon_{*}^{n} = \S^{2n}$. \par {\bf Claim:} Let $U$ and $V$ be $p$-equivalent cell complexes, each with a single top cell ($U_{n+d}$ and $V_{n+d}$, respectively) and a single bottom cell ($U_{n}$ and $V_{n}$, respectively), and assume that $H_{n+d}(U)$ is generated by $U_{n+d}$, $H_{n}(U)$ is generated by $U_{n}$, $H_{n+d}(V)$ is generated by $V_{n+d}$ and $H_{n}(V)$ is generated by $V_{n}$. Assume that both $U_- = U \setminus U_{n+d}$ and $V_-=V \setminus V_{n+d}$ are coreducible, that is, there exist retractions $\rho_U: U_- \searrow U_{n}$ and $\rho_V: V_- \searrow V_{n}$. Then the $2$-cellifications of $U$ and $V$ are $p$-equivalent modulo the choice of the involved retractions, that is, one can replace the retractions $\rho_U$ and $\rho_V$ by some other retractions $\rho'_U$ and $\rho'_V$ such that the $2$-cell complexes formed from $U$ and $V$ using $\rho'_U$ and $\rho'_V$ are $p$-equivalent. \par {\bf Remark:} Applying this Claim to $U=Y$ and $V=T(pr^*\xi)$ we obtain that the $2$-cellifications $B$ and $T\xi$ are $p$-equivalent. In particular, the attaching maps of the $2$-cellifications coincide modulo the images of the lower differentials. For $T\xi$ this attaching map belongs to $\img J$; for $B$ the attaching map is $\alpha$, which is $d^k(\iota_s)$ (modulo lower differentials). Thus Theorem \ref{thm:imJ_p} will be proved as soon as we prove the Claim. \par {\it Proof of Claim:} Let $i: U \to V$ be a $p$-equivalence. Without loss of generality, it restricts to a $p$-equivalence of $U_-$ and $V_-$ and maps $U_n$ into $V_n$. If $i|_{U_n}$ were an actual homotopy equivalence onto $V_n$, then we could assume that it is in fact a homeomorphism, and we could set $\rho'_U = i^{-1} \circ \rho_V \circ i$.
With this choice, $\rho'_U$ is a retraction of $U_-$ onto $U_n$ and is compatible with $i$ in the sense that $i$ descends to a map from the $2$-cellification of $U$ to the $2$-cellification of $V$. This map has the same induced maps in $H_{n}$ and $H_{n+d}$ as $i$ does and is hence a $p$-equivalence. \par In general, $i|_{U_n}$ only needs to be a degree $q$ map to $V_n$ for some $q$ coprime to $p$. Let $Cyl_U$ denote the mapping cylinder of $i|_{U_n}$. Define $\hat U$ to be $U$ glued to $Cyl_U$ along $U_n$ (essentially, wrapping the bottom cell of $U$ onto itself by a degree $q$ map). Then $i$ extends naturally to a map $\hat \imath : \hat U \to V$, which is still a $p$-equivalence (as is easily checked in homology). The retraction $\rho_U$ also extends naturally to $\hat U_- = \hat U\setminus U_{n+d}$ by postcomposing with $i|_{U_n}$. The $2$-cellification of $\hat U$ has an attaching map that is the composition of the attaching map of the $2$-cellification of $U$ with a map of degree $\deg i|_{U_n}$ of the bottom cell; this composition does not change the homotopy $p$-type of the glued-together space, since we compose with a $p$-equivalence of the bottom cell. \par By the first paragraph of the proof the $2$-cellifications of $\hat U$ and $V$ are $p$-equivalent, and the $2$-cellifications of $U$ and $\hat U$ are also $p$-equivalent; hence the $2$-cellifications of $U$ and $V$ are $p$-equivalent as well, finishing the proof. {\hfill\qed} \section{The exact value of the first non-zero differential} Recall (Subsection \ref{section:singSS}) that the vanishing of the differentials $d^1$, $\dots$, $d^{k-1}$ on $\iota_s \in E^1_{s,s} = \pi^{\bf S}(0) \cong \Z$ means the following: given a germ $\varphi: (\R^{2(s-1)},0) \to (\R^{2s-1},0)$ that has an isolated $\Sigma^{1_{s-1}}$-singularity at the origin, its link map $\partial\varphi:\S^{2s-3} \to \S^{2s-2}$ is cobordant (in the class of $\Sigma^{1_{s-2}}$-maps) to a $\Sigma^{1_{s-k-1}}$-map $\partial_k\varphi$.
If $d^k$ is the first differential not vanishing on $\iota_s$, then the image $d^k(\iota_s)$ belongs to the image of the subgroup $\img J \subseteq E^1_{**}$ in $E^k_{**}$, and consequently the map $\partial_k\varphi$ can be chosen in such a way that the top (i.e.\ $\Sigma^{1_{s-k-1}}$-) singularity stratum of $\partial_k\varphi$ is a sphere $\S^{2k-1}$. \par How can one determine the exact value of the element $d^k\iota_s$? Mosher \cite{Mosher} answered this question using the so-called $e$-invariants of Adams. Before recalling their precise definitions we collect some properties of the $e$-invariants: \begin{enumerate} \item There are homomorphisms $e_R, e_C: \pi^{\bf s}(2k-1) \to \Q/\Z$. \item The invariant $e_R$ gives a decomposition of $\pi^{\bf S}(2k-1)$ in the sense that $\pi^{\bf S}(2k-1) = \img J \oplus \ker e_R$. In particular, the restriction $e_R|_{\img J}$ is injective. \item \label{eReC} If $k$ is odd, then $e_C=e_R$. If $k$ is even, then $e_C = 2e_R$. \item Different representatives of $d^k(\iota_s)$ in $\img J \subseteq \pi^{\bf s}(2k-1)$ may have different values of $e_R$, but the $e_C$-value is the same for all of them. Hence $e_C$ is well-defined on the image of $\img J$; in particular, $e_C(d^k(\iota_s))$ is also well-defined. \item \label{footnoted} The invariant $e_C$ is injective on the image of $\img J$ in $E^k_{**}$. Hence the value $e_C(d^k(\iota_s))$ determines $d^k(\iota_s)$ uniquely\footnote{It also follows that for any $x \in E^k_{**}$ there are at most $2$ elements in $\img J \subset E^1_{**}$ that are mapped by the partially defined map $E^1_{**} \to E^k_{**}$ onto $x$}. \end{enumerate} The precise value of $e_C(d^k(\iota_s))$ is given in the next theorem. Recall that $d^k(\iota_s)$ exists if and only if $s+1$ is divisible by $M_k$, and $d^k(\iota_s)$ vanishes if and only if $s+1$ is divisible by $M_{k+1}$.
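A standard illustration of properties (2) and (3) (our example, drawn from Adams' work on $J(X)$, not from \cite{Mosher}): for $k=2$ one has $\pi^{\bf S}(3) \cong \Z/24$, generated by the stable Hopf map $\nu$, and this whole group is $\img J$:

```latex
e_R(\nu) = \pm\tfrac{1}{24}, \qquad
e_C(\nu) = 2\,e_R(\nu) = \pm\tfrac{1}{12}
\quad \text{in } \Q/\Z,
```

so $e_R$ is injective on $\img J$, while $e_C$ (here $k$ is even, so $e_C = 2e_R$) has a kernel of order $2$ on $\img J$, in accordance with the footnote to the last property.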
\begin{thm}{\textnormal{(}\cite[Proposition 4.7]{Mosher}\textnormal{)}}\label{thm:formulaMosher} If $s+1 \equiv tM_k$ modulo $M_{k+1}$, then $e_C(d^k\iota_s) = t \cdot u_k$, where $u_k$ is the coefficient of $z^k$ in the Taylor expansion of $\left(\frac{\log (1+z)}{z}\right)^{M_k}$, viewed as an element of $\Q/\Z$. \end{thm} Mosher's exposition is rather compressed; below we repeat his argument in a more detailed and, we hope, more comprehensible form. \subsection{The definition of the $e_C$-invariant} Given $\alpha \in \pi^{\bf S}(2k-1)$, denote by $X_\alpha$ the (stable homotopy type of the) $2$-cell complex $D^{2q+2k} \underset{f}{\cup} \S^{2q}$, where $f: \S^{2q+2k-1} \to \S^{2q}$ is a representative of $\alpha$. The Chern character induces a map from the short exact sequence in complex $K$-theory corresponding to the cofibration $\S^{2q} \subset X_\alpha \to \S^{2q+2k}$ to the analogous short exact sequence of rational cohomology rings: $$ \xymatrix{ 0 & K(\S^{2q}) \ar[l] \ar[d]_{ch} & K(X_\alpha) \ar[l]^{i^*} \ar[d]_{ch} & K(\S^{2q+2k}) \ar[l]^{j^*} \ar[d]_{ch} & 0 \ar[l] \\ 0 & H^*(\S^{2q}) \ar[l] & H^*(X_\alpha) \ar[l] & H^*(\S^{2q+2k}) \ar[l] & 0 \ar[l] } $$ One can choose generators $\zeta_q$, $\zeta_{q+k}$ in $K(X_\alpha)$ as well as generators $y_q \in H^{2q}(X_\alpha)$ and $y_{q+k} \in H^{2q+2k}(X_\alpha)$ such that \begin{align} ch~ \zeta_{q+k} &= y_{q+k} \\ ch~ \zeta_q &= y_q + \lambda y_{q+k} \end{align} Here $\lambda$ is a rational number that is well-defined up to shifts by integers. The $e_C$-invariant of $\alpha$ is defined to be $$ e_C(\alpha) = \lambda \in \Q/\Z. $$ The definition of the $e_R$-invariant is similar, using real $K$-theory and the natural complexification map $K_\R \to K_\C$. \subsection{The computation of $e_C(d^k(\iota_s))$.} Since $d^i(\iota_s)=0$ for $i=1,\dots,k-1$, the attaching map of the top cell of $\C P^s/\C P^{s-k-1}$ can be deformed into the bottom cell. Hence by collapsing this bottom cell we obtain an $S$-reducible space, that is, $\C P^s/\C P^{s-k}$ is $S$-reducible.
This latter space is the Thom space $T(s-k+1)\gamma_{k-1}$. Since $Ta\gamma_{k-1}$ is reducible precisely when $a+k$ is divisible by $M_k$, we obtain that $s+1 \equiv 0 \mod M_k$. Similarly $d^k(\iota_s) = 0$ precisely if $s+1 \equiv 0 \mod M_{k+1}$. Define $t$ to satisfy $s+1 \equiv tM_k \mod M_{k+1}$. As we have seen before, $T(p\gamma_{k-1})$ and $T(q\gamma_{k-1})$ are $S$-dual if $p+q+k \equiv 0 \mod M_k$. Hence $\C P^{n+k-1}/\C P^{n-1} = T(n\gamma_{k-1})$ is $S$-dual to $T((s-k+1)\gamma_{k-1}) = \C P^s/\C P^{s-k}$ if $n \equiv 0 \mod M_k$. Analogously, the $S$-dual of $\C P^s/\C P^{s-k-1}$ is $\C P^{n+k}/\C P^{n-1}$ if $n \equiv -tM_k \mod M_{k+1}$. We just saw that $\C P^s/\C P^{s-k}$ is reducible, hence its $S$-dual, $\C P^{n+k-1}/\C P^{n-1}$ is coreducible, that is, it admits a retraction $r:\C P^{n+k-1}/\C P^{n-1} \to \C P^n/\C P^{n-1}$ onto its bottom cell $\S^{2n} = \C P^n/\C P^{n-1}$. Then $\C P^{n+k}/\C P^{n-1}$ admits a $2$-cellification $X = D^{2n+2k} \underset{\alpha}{\cup} \S^{2n}$ and a map $$ f:\C P^{n+k}/\C P^{n-1} \to X $$ that coincides with $r$ on $\C P^{n+k-1}/\C P^{n-1}$ and has degree $1$ on the top $(2n+2k)$-cell, and the attaching map $\alpha$ is a representative of $d^k(\iota_s)$. Our aim is to calculate the $e_C$-invariant of $\alpha$. Let $p: X \to X/\S^{2n} = \S^{2n+2k}$ be the quotient map, further let $\textbf w$ be the generator of $K(\S^{2n+2k}) \cong \Z$, and denote by $y$ the generator of $H^2(\C P^\infty)$. Then $y^j$ generates $H^{2j}(\C P^{n+k}/\C P^{n-1})$. By definition, \begin{equation}\tag{*}\label{eq:chpw} ch~ p^*\textbf w = y^n+e_C(\alpha) y^{n+k}. \end{equation} Recall that $K(\C P^{n+k}) \cong \Z[\mu]/(\mu^{n+k+1}=0)$, where $\mu$ is the class of $\gamma_{n+k}-1$. The ring $K(\C P^{n+k}/\C P^{n-1})$ can be identified with the subring of polynomials divisible by $\mu^n$, that is, $K(\C P^{n+k}/\C P^{n-1}) \cong \mu^n \cdot \left( \Z[\mu]/(\mu^{k+1}=0) \right)$. 
In particular, $p^* \textbf w$ can be written in the form \begin{equation}\tag{**}\label{eq:pw} p^*\textbf w = \mu^n \sum_{j=0}^{k} w_j\mu^j, \end{equation} where all $w_j$ are integers (depending on $n$ and $k$) and $w_0=1$. Let us denote $ch~ \mu = e^y-1$ by $z$; then $y=\log (1+z)$. Note that both $y$ and $z$ can be chosen as generators in the ring of formal power series $H^{**}(\C P^\infty;\Q)$. Applying $ch$ to both sides of the equality \eqref{eq:pw} and replacing the left-hand side by the right-hand side of \eqref{eq:chpw} we obtain $$ y^n + e_C(\alpha) y^{n+k} = z^n\sum_{j=0}^{k} w_jz^j, $$ or equivalently $$ \left( \frac{y}{z} \right)^n = \left(\sum_{j=0}^{k} w_jz^j \right) (1-e_C(\alpha)y^k+e_C(\alpha)^2y^{2k}-\dots). $$ Recall that this equality holds whenever $n$ is divisible by $M_k$. Replace $y$ by the corresponding power series in $z$. This is possible because $y=z+\text{higher powers of }z$. \begin{equation}\tag{***}\label{eq:u_k} \left( \frac{\log (1+z)}{z} \right)^n = \left(\sum_{j=0}^{k} w_jz^j \right) (1-e_C(\alpha)z^k+\dots). \end{equation} It follows that on the right-hand side of \eqref{eq:u_k} the coefficients of $z^j$ for $j=0,\dots,k-1$ are integers, and the coefficient of $z^k$ is $-e_C(\alpha)$ modulo $\Z$. By definition, if $n=M_k$, then the coefficient of $z^k$ on the left-hand side is $u_k$. If $n$ is divisible by $M_{k+1}$, then the same argument can be repeated with $k+1$ instead of $k$ and we obtain that the coefficient of $z^k$ has to be an integer; this means that $e_C(d^k\iota_s) = e_C(\alpha)$ is $0$ modulo $\Z$. In general, when $n=tM_k$, the left-hand side of \eqref{eq:u_k} can be expanded as $$ \left( \frac{\log (1+z)}{z} \right)^n = \left(\left( \frac{\log (1+z)}{z} \right)^{M_k} \right)^t $$ using the multinomial theorem.
Since the coefficients of $z^j$ in $\left( \frac{\log (1+z)}{z} \right)^{M_k}$ for $j=0,\dots,k-1$ are integers, the same is true for the $t$-th power, and the coefficient of $z^k$ is $tu_k$ modulo $\Z$. On the right-hand side of \eqref{eq:u_k} we see that the coefficient of $z^k$ is $-e_C(\alpha)$ modulo $\Z$, hence $e_C(\alpha) \equiv -tu_k \mod 1$ as claimed. {\hfill\qed} \par For example, the first several values of $u_k$ are as follows: \par \begin{tabular}{c|c|c|c|c|c|c|c|c|c} $k$ & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ \cline{1-10} $M_k$ & 1 & 2 & 24 & 24 & 2880 & 2880 & 362880 & 362880 & 29030400\\ \cline{1-10} $u_k$ & 1/2 & 11/12 & 0 & 71/120 & 0 & 61/126 & 0 & 17/80 & 0\\ \end{tabular} \par {\bf Remark:} Note that the generator $\iota_s$ of the group $\pi^{\bf s}_{2s}(\C P^s, \C P^{s-1}) \cong \Z$ in Mosher's spectral sequence corresponds in the singularity spectral sequence to the cobordism class of a prim map $\sigma_{s-1}: (D^{2s-2},\S^{2s-3}) \to (D^{2s-1},\S^{2s-2})$ that has an isolated $\Sigma^{1_{s-1}}$-point at the origin (given by Morin's normal form of the $\Sigma^{1_{s-1}}$ singularity). Its image $d^k(\iota_s)$ under the first nontrivial differential is represented by the submanifold of $\Sigma^{1_{s-k-1}}$-points of the ``boundary map'' $\partial \sigma_{s-1} : \S^{2s-3} \to \S^{2s-2}$ after eliminating all its higher (than $\Sigma^{1_{s-k-1}}$) singularity strata. The elimination of the higher strata proceeds in several steps, and in this process we have to make several choices. First we eliminate the $\Sigma^{1_{s-2}}$ singularities by choosing a cobordism of prim $\Sigma^{1_{s-2}}$-maps that joins $\partial \sigma_{s-1}$ with a $\Sigma^{1_{s-3}}$-map $\partial_1\sigma_{s-1}$. Such a cobordism exists because $d^1(\iota_s)=0$, hence the immersed $\Sigma^{1_{s-2}}$-stratum is null-cobordant, and by \cite{keyfibration} any null-cobordism of this top stratum extends to a cobordism of the entire map $\partial \sigma_{s-1}$.
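The entries of the table above can be reproduced by exact rational arithmetic: expand $\log(1+z)/z$, raise it to the power $M_k$, and reduce the coefficient of $z^k$ modulo $\Z$. A minimal Python sketch (helper names are ours; the values of $M_k$ are taken from the table):

```python
from fractions import Fraction
from math import floor

def u(k, M_k):
    """Residue mod Z of the coefficient of z^k in (log(1+z)/z)^{M_k},
    computed with exact rational arithmetic, truncated at degree k."""
    # log(1+z)/z = sum_{j>=0} (-1)^j z^j / (j+1)
    f = [Fraction((-1) ** j, j + 1) for j in range(k + 1)]
    p = [Fraction(1)] + [Fraction(0)] * k          # running power of f
    for _ in range(M_k):
        p = [sum(p[i] * f[d - i] for i in range(d + 1))
             for d in range(k + 1)]
    return p[k] - floor(p[k])
```

For instance `u(1, 1)`, `u(2, 2)`, `u(3, 24)`, `u(4, 24)` return $1/2$, $11/12$, $0$, $71/120$, matching the first four columns of the table.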
Then we eliminate the $\Sigma^{1_{s-3}}$ singularities by choosing a cobordism of prim $\Sigma^{1_{s-3}}$-maps that joins $\partial_1 \sigma_{s-1}$ with a $\Sigma^{1_{s-4}}$-map, and so on. Finally we obtain a prim $\Sigma^{1_{s-k-1}}$-map. Its $\Sigma^{1_{s-k-1}}$-stratum represents an element $\alpha$ in $\pi^{\bf s}(2k-1)$. For some of these choices the class $\alpha$ (representing $d^k(\iota_s)$) belongs to $\img J$. \par As a corollary of the computation of $e_C(d^k(\iota_s))$, we have obtained the surprising fact that whichever representative $\alpha \in \img J \subset E^1_{**} = \pi^{\bf s}(2k-1)$ of the element $d^k(\iota_s)$ we choose, the value $e_C(\alpha)$ is the same. We propose the following explanation of this fact. Consider the following diagram (part of \cite[8.1.]{Mosher}): $$ \xymatrix{ J(\C P^k) \ar[d]_{\cong}& J(\S^{2k}) \ar[l]_{p_J} \ar[d]_{\theta'_C}\\ J'_C(\C P^k) & J'_C(\S^{2k}) \ar@{>->}[l] \\ } $$ Here $J'_C(X)$ is a quotient group of $J(X)$ defined in an algebraic way (for the reader's convenience we sketch the definition in the Appendix). Mosher writes: ``elements of $J'_C(\S^{2k})$ are measured by the invariant $e_C$'' and refers to \cite{AdamsJ4}. This means that $e_C$ is a well-defined and injective map from $J'_C(\S^{2k})$ to $\Q/\Z$. Using this we show that all the representatives of $d^k(\iota_s)$ that belong to $\img J \subset E^1_{**}$ are mapped by $e_C$ into the same element of the group $\Q/\Z$. Indeed, the diagram implies that $\ker p_J = \ker \theta'_C$. The argument of Lemma \ref{lemma:indeterminateJ} shows that $\ker p_J$ is precisely the indeterminacy of the elements of $J(\S^{2k}) \subset E^1_{**}$ in $E^k_{**}$ (that is, the representatives in $\img J \subset E^1_{**}$ of an element of $E^k_{**}$ form a coset of the subgroup $\ker p_J$).
Hence all the representatives of $d^k(\iota_s)$ that belong to $\img J=J(\S^{2k})$ will be mapped into the same element in $J'_C(\S^{2k})$ (namely into the unique preimage of $J(n\gamma_k)$ in $J(\C P^k)$) and only elements that represent $d^k(\iota_s)$ will be mapped there. \par To complete the explanation we need one more lemma. Recall (see \cite[Chapter 15, Remark 5.3.]{Husemoller}) that there exists a classifying space $BH$ for the semigroup $H$ of degree $1$ self-maps of spheres, and for any $X$ one has $$ [X,BH]=\tilde K_{top}(X). $$ Here $\tilde K_{top}(X)$ is the group of stable topological sphere bundles over $X$ up to fiberwise homotopy equivalence. Note that $\tilde K_{top}(\S^r)=\pi^{\bf s}(r-1) = \lim_{q \to \infty} \pi_{q+r-1}(\S^q)$. Furthermore let $n$ be again any sufficiently big natural number with the property that $n+s+1$ is divisible by $M_{k+1}$. \begin{lemma}\label{lemma:indeterminateJ} Consider the map $p^*: \tilde K_{top}(\S^{2k}) \to \tilde K_{top}(\C P^k)$ induced by the projection $p: \C P^k \to \C P^k/\C P^{k-1} = \S^{2k}$. Let us consider the sphere bundle $S(n\gamma_k)$ as an element in $\tilde K_{top}(\C P^k)$. Then $(p^*)^{-1}(S(n\gamma_k))$ is precisely the set of elements in $\tilde K_{top}(\S^{2k})=\pi^{\bf s}(2k-1) = E^1_{s-k,s-k+1}$ that represent $d^k(\iota_s) \in E^k_{s-k,s-k+1}$. \end{lemma} \begin{proof} The representatives of $d^k(\iota_s)$ in $E^1_{s-k,s-k+1}$ correspond to the possible choices of deformations of the attaching map of the top cell in $\C P^{n+k}/\C P^{n-1}$ into the bottom cell $\S^{2n}$ (see Lemma \ref{lemma:differential}). We have seen that this is the same as the set of $2$-cellifications of $T(n\gamma_k)$, and this in turn is the same as the choices of a retraction of $\C P^{n+k-1}/\C P^{n-1}= T(n\gamma_{k-1})$ onto the fiber $\S^{2n}$. 
Such a retraction of $T(n\gamma_{k-1})=S\left(n\gamma_{k-1} \oplus \varepsilon^1\right)/S(\varepsilon^1)$ lifts uniquely to a retraction of the sphere bundle $S\left(n\gamma_{k-1} \oplus \varepsilon^1\right)$, where $\varepsilon^1$ is the trivial real line bundle. This latter retraction can be reinterpreted as a fiberwise homotopy equivalence between the sphere bundles of $n\gamma_{k-1}\oplus\varepsilon^1$ and the trivial bundle $\varepsilon^{2n+1}$; we can therefore consider it to be a topological trivialization (in $\tilde K_{top}(\C P^{k-1})$) of the sphere bundle $S(n\gamma_{k-1})$. \par In short, the representatives of $d^k(\iota_s)$ in $E^1_{s-k,s-k+1}$ are in bijection with the topological trivializations of $S(n\gamma_{k-1})$. \par Consider the space $\C P^k \cup Cone(\C P^{k-1})$, the two spaces being glued along $\C P^{k-1}$. This space is homotopy equivalent to $\C P^k/\C P^{k-1}=\S^{2k}$ and the inclusion of $\C P^k$ into it is (homotopically) the standard projection of $\C P^k$ onto $\S^{2k}$. Take the element $S(n\gamma_k) \in \tilde K_{top}(\C P^k)$; it corresponds to a homotopy class of maps $\C P^k \to BH$; let $\kappa$ denote one map in this class. The extensions of $\kappa$ to the cone over $\C P^{k-1}$ correspond to topological trivializations of the sphere bundle $S(n\gamma_{k-1})$. On the other hand, these extensions correspond to choosing a preimage of $S(n\gamma_k) \in \tilde K_{top}(\C P^k)$ under the map $p^*$, and we obtain that the choice of representative of $d^k(\iota_s)$ in $E^1_{s-k,s-k+1}$ corresponds to the choice of an element in $(p^*)^{-1}(S(n\gamma_k))$, as claimed. \end{proof} {\bf Corollary:} The set of those representatives of $d^k(\iota_s)$ that belong to $\img J = J(\S^{2k})$ is in bijection with the set $(p^*)^{-1}(J(n\gamma_k)) = p_J^{-1}(J(n\gamma_k))$. \begin{proof} From Theorem \ref{thm:imJ} we know that $d^k(\iota_s)$ does have representatives in $\img J$.
By restricting the map $p^* : \tilde K_{top}(\S^{2k}) \to \tilde K_{top}(\C P^k)$ to the elements that can be deformed into $BO \subset BH$, we obtain the map $p_J: J(\S^{2k}) \to J(\C P^k)$. \end{proof} In particular, since $\ker p_J$ has size at most $2$, this means that the images of the previous differentials going to $E^*_{s-k,s-k+1}$ intersect $\img J$ in a subgroup of order at most $2$ (see property \eqref{eReC} and footnote to property \eqref{footnoted} of $e$-invariants). \par {\bf Remark:} The entire calculation can also be performed for the spectral sequence formed by the $p$-components of the groups of the Mosher spectral sequence. Theorem \ref{thm:imJ_p} proves that the image of the first non-trivial differential from the diagonal of this spectral sequence belongs to $\img J_p$. Indeed, tracing the diagram of Theorem \ref{thm:imJ_p}, if we calculate $e_C(\alpha)$ for the attaching map $\alpha$ of the $2$-cell ``resolutions'' $A$ and $B$, then dividing it back by the degree $q$ we obtain a well-defined value in the $p$-component $\left(\img e_C\right)_p$. The only difference in the actual computation is that the map $\C P^{n+k}/\C P^{n-1} \to B$ is no longer an isomorphism in homology, but induces multiplication by $q$ on $H^{2n}$. Hence we have to replace $e_C(\alpha)$ throughout the proof with $qe_C(\alpha)$, and in the end we divide the resulting value by $q$ to obtain the $e_C(\alpha)$ in the $p$-component (the division by $q$ may not be meaningful in general, but in the $p$-component it is). \par Alternatively, we can choose a $q$ coprime to $p$ such that $d^k(q\iota_s)$ makes sense, then calculate $d^k(\iota_{qs})$ (which makes sense) and divide back by $q$ in the $p$-localization to arrive at $d^k(\iota_s)$ (which only makes sense as ${}^pd^k(\iota_s)$ after localization). \section{Geometric corollaries} Here we summarize a few geometric corollaries.
\subsection{Translation of the results to singularities}\label{subsection:geometric} The first corollary is just a singularity theoretical reinterpretation of the homotopy theoretical results and it answers the following question: \par Given non-negative integers $n$, $r_1$ and $r_2$ as well as two stems $\alpha \in \pi^{\bf s}(n-2r_1)$ and $\beta \in \pi^{\bf s}(n-2r_2-1)$, does there exist a prim $\Sigma^{1_{r_1-1}}$-map $f : M^n \to \R^{n+1}$ of a compact $n$-manifold $M$ with boundary $\partial M$ such that $f|_{\partial M}$ is a $\Sigma^{1_{r_2-1}}$-map, the set $\Sigma^{1_{r_1-1}}(f)$ (with its natural framing) represents $\alpha$, while $\Sigma^{1_{r_2-1}}(f|_{\partial M})$ (with the natural framing) represents $\beta$? \par Answer: Define $k$ to be the greatest natural number for which $M_k$ divides $r_1+1$ (equivalently, $d^j\iota_{r_1}=0$ for all $j=0$, $\dots$, $k-1$ and $d^k\iota_{r_1} \neq 0$). \begin{enumerate}[a)] \item If $r_2 \geq r_1-k$, then such a map $f$ exists exactly if $\beta - \alpha \cdot d^{r_1-r_2}(\iota_{r_1})$ belongs to the image of the ``lower'' differentials $d^j_{r_2+j,n-r_2-j}$ with $j=1$, $\dots$, $r_1-r_2$. \item In general, with $r_1$ and $r_2$ arbitrary, the same condition takes the form $\beta = d^{r_1-r_2} \alpha$ in $E^{r_1-r_2}_{r_1,n-r_1}$ with the additional requirement that the differential has to be defined on $\alpha$. In particular, when $r_2 < r_1-k$ and we make the additional assumption that $\alpha=\lambda\hat\alpha$ for some $\lambda \in \Z$ for which $d^{r_1-r_2}(\lambda\iota_{r_1})$ is defined, $\beta$ has to be equal to $\hat\alpha \cdot d^{r_1-r_2}(\lambda\iota_{r_1})$ modulo the image of the lower differentials. 
\end{enumerate} \par For example, if $r_2=r_1-1$, then criterion $a)$ states that such a map $f$ exists exactly if $$ \beta=\alpha\cdot d^1(\iota_{r_1}) = \begin{cases} 0 & \text{ if } r_1 \text{ is even,}\cr \alpha\eta & \text{ if } r_1 \text{ is odd,} \end{cases} $$ where $\eta \in \pi^{\bf s}(1)$ is the generator. \par If $r_2=r_1-2$, we can apply criterion $b)$. When $r_1$ is even, then $\beta$ must lie in the coset $\alpha \cdot d^2(\iota_{r_1}) + \eta \cdot \pi^{\bf s}(n-2r_2-2)$. When $r_1$ is odd and we additionally assume that $\alpha=2\hat\alpha$, then $\beta$ must be $\hat\alpha \cdot d^2(2\iota_{r_1})$. \begin{comment} \subsection{Multiple points}\label{subsection:multiplePoint} Stong \cite{Stong} computed the characteristic numbers of $2n$-dimensional manifolds immersable in Euclidean space $\R^{2n+2}$. Combining his result with Herbert's multiple point formula \cite{Herbert} we obtain that the possible numbers of $(n+1)$-tuple points (counted in the domain) are the multiples of $(n+1)!$. Here we obtain a relative versions of this result for immersions of manifolds with boundary. \par Let us consider generic (that is, self-transverse) immersions $g:(M^{2s-2},\partial M) \to (\R_+^{2s},\R^{2s-1})$ of $(2s-2)$-dimensional compact orientable manifolds $M$ with boundary $\partial M$ into the half-space $(\R_+^{2s},\R^{2s-1})$ such that the boundary immersion $g|_{\partial M} : \partial M \to \R^{2s-1}$ is a $\gamma_j$-immersion, that is, its normal bundle can be induced from the tautological bundle over $\C P^j$. \par {\bf Claim:} The possible algebraic number $\Delta_s(g)$ of $s$-tuple points of $g$ can be as follows: let $k$ be the greatest number for which $s+1$ is divisible by $M_k$. If $j > s-k$, then $\Delta_s(g)$ can be any number. If $j=s-k$, then $\Delta_s(g)$ can be any multiple of a number $c_s$ we define now (and nothing more). Recall that $u_k$ was the coefficient of $z^k$ in the power series $\left( \frac{\log (1+z)}{z} \right)^{M_k}$. 
Let $s+1 \equiv tM_k \mod M_{k+1}$. Put $t\cdot u_k = \frac{b_s}{c_s}$, with $(b_s,c_s)=1$. The integer multiples of the denominator $c_s$ are the possible values of $\Delta_s(g)$. \par \begin{proof} We use the following fact proved in \cite{primSzucs}, \cite{Lipi}: given an immersion $g: N^n \looparrowright \R^{n+l}$ and its composition $f=\pi \circ g$ with a projection $\pi:\R^{n+l} \to \R^{n+l-1}$ onto a hyperplane, then the manifold of $s$-tuple points of $g$ is bordant to the manifold of the $\Sigma^{1_{s-1}}$-points of $f$. If $l$ is even, then the bordism is meant in oriented sense. Hence $\vert \Delta_s(g) \vert = \vert \Sigma^{1_s}(f) \vert$, where $\vert X \vert$ stands for the algebraic number of points in a zero-cycle $X$. \par It is hence enough to prove the analogous statement for Morin maps that are prim $\Sigma^{1_{j-1}}$-maps on the boundary. \par If $j>s-k$, then taking a map with an isolated $\Sigma^{1_{s-1}}$-point, we can successively remove the $\Sigma^{1_q}$ strata from its boundary by applying $\Sigma^{1_q}$-cobordisms for $q=s-2,\dots,j-1$ because $d^i(\iota_s)=0$ for $i=1,\dots,s-j$. This gives us a map $(M^{2s-2},\S^{2s-3}) \to (D^{2s},\S^{2s-1})$ with a single $\Sigma^{1_{s-1}}$-point in the interior and no $\Sigma^{1_{j-1}}$-points on the boundary. Adding an appropriate number of copies of this map we can achieve any algebraic number of $\Sigma^{1_{s-1}}$-points in $f$, giving the same algebraic number of $s$-tuple points in its lift, $g$. \par If $j=s-k$, then for any integer $N$ the same construction can be performed with $N$ copies of the isolated $\Sigma^{1_{s-1}}$-points up to the last step. At the last step, each individual singularity will have on the boundary a $\Sigma^{1_{j-1}}$-set that represents a stable homotopy group element with $e_C=tu_k$. Thus their union has a removable $\Sigma^{1_{j-1}}$-set exactly when $Ntu_k$ is an integer, or equivalently, if $c_s|N$. 
Finally note that taking any other cobordism map with the same map on the initial boundary and only $\Sigma^{1_{j-1}}$-points on the final boundary changes the cobordism class of the latter only by an element of the image of previous differentials and hence cannot change the represented element in $E^k_{s-k,s+k-1}$, which only vanishes when $c_s|N$. \end{proof} Note that this corollary of our results can also be interpreted as a result on the possible values and the indeterminacy of the relative Thom polynomial corresponding to $\Sigma^{1_{s-1}}$. \end{comment} \subsection{The $p$-localization of the classifying space} Recall that $X^r = X_{Prim\Sigma^{1_r}}$ denotes the classifying space of prim $\Sigma^{1_r}$-maps of cooriented manifolds. Let $p$ be any prime such that $p \geq r+1$. For any space $Y$ we denote by $(Y)_p$ the $p$-localization of $Y$. Recall that $\Gamma$ stands for $\Omega^\infty S^\infty$. \begin{thm} $(X^r)_p \cong \prod_{i=0}^r\left(\Gamma \S^{2i+1} \right)_p$. \end{thm} {\bf Remark:} Recall (Section \ref{section:prim}) that $Prim\Sigma^{1_r}(n)$ denotes the cobordism group of prim $\Sigma^{1_r}$-maps of oriented $n$-manifolds into $\R^{n+1}$. Let $\mathcal C_r = \mathcal C(p\leq r+1)$ denote the Serre class of finite abelian groups that have no $p$-primary components for $p>r+1$. We will denote isomorphism modulo $\mathcal C_r$ by $\underset{\mathcal C_r}{\approx}$. Then $$ Prim\Sigma^{1_r}(n) \underset{\mathcal C_r}{\approx} \oplus_{i=0}^{r} \pi^{\bf s}(n-2i). $$ This means that prim $\Sigma^{1_r}$-maps, considered up to cobordism and modulo small primes, look like an independent collection of immersed framed manifolds of dimensions $n$, $n-2$, $\dots$, $n-2r$ corresponding to their singular strata. {\it Proof:} We have seen in Lemma \ref{lemma:main} that $X^r \cong \Omega \Gamma \C P^{r+1}$. Now we use a lemma about the $p$-localization of $\C P^n$: \begin{lemma} \footnote{We thank D.
Crowley for the proof of this lemma.} For any $p>n+1$ the $p$-localizations of $\C P^n$ and $\S^2 \vee \S^4 \vee \dots \vee \S^{2n}$ are stably homotopy equivalent. \end{lemma} \begin{proof} Serre's theorem states that the stable homotopy groups of spheres $\pi^{\bf s}(m)$ have no $p$-components if $m<2p-3$. Hence after $p$-localization the attaching maps of all the cells in $\C P^n$ become null-homotopic. \end{proof} Hence we get that $$ (X^r)_p \cong \Omega \Gamma\left( \S^2 \vee \dots \vee \S^{2r+2} \right)_p \cong \Gamma \left( \S^1 \vee \dots \vee \S^{2r+1} \right)_p \cong \prod_{i=0}^r\left(\Gamma \S^{2i+1} \right)_p, $$ proving the theorem. \subsection{Odd torsion generators of stable homotopy groups of spheres represented by the strata of isolated singularities} Example: the odd generator of $\pi^{\bf s}(3) = \Z_{24}$. The isolated cusp map $\sigma_2: (\R^4,0) \to (\R^5,0)$ has on its boundary $\partial \sigma_2: \S^3 \to \S^4$ a framed null-cobordant fold curve. Applying a cobordism of prim fold maps to $\partial \sigma_2$ that eliminates the singularity curve, one obtains a map without singularities of a $3$-manifold into $\S^4$. It represents a quadruple of the odd generator in $\pi^{\bf s}(3)$. \par Example: the odd torsion of $\pi^{\bf s}(7) = \Z_{15} \oplus \Z_{16}$. Consider an isolated $\Sigma^{1_4}$-map $f:(\R^8,0) \to (\R^9,0)$. Again all the singularity strata of its boundary map $\partial f:\S^7\to \S^8$ can be eliminated (possibly after multiplication by a power of $2$). The obtained non-singular map of a $7$-manifold into $\S^8$ represents a generator of $\Z_{15}$. \par Such examples can be produced in any desired quantity. \section{Equidimensional prim maps} The arguments demonstrated so far can also be applied to the case of codimension $0$ prim maps, both cooriented and not necessarily cooriented.
However, the resulting spectral sequences do not have the same richness of structure as Mosher's, so we only indicate the differences from the case of codimension $1$ cooriented prim maps. \par The analogue of Lemma \ref{lemma:filtration} that identifies the classifying space of codimension $0$ (not necessarily cooriented) prim maps with $\Omega\Gamma \R P^\infty$ including the natural filtrations goes through without significant changes, giving that the codimension $0$ classifying space $X_{Prim\_\Sigma^{1_r}}(0)$ is $\Omega \Gamma \R P^{r+1}$. Indeed, take a codimension $1$ immersion $f$ with a $\Sigma^{1_r}$ projection. The vertical vector field gives us sections of the relative normal line bundles of the singular strata, and collecting these sections induces the normal bundle of $f$ from the tautological line bundle over $\R P^r$. This defines a map from the classifying space of the lifts of prim $\Sigma^{1_r}$-maps to the Thom space $\R P^{r+1}$ of the tautological line bundle over $\R P^r$, and the $5$-lemma shows that this map is a weak homotopy equivalence. For cooriented maps the corresponding bundle from which the normal bundle of $f$ is induced is the trivial bundle $\varepsilon^1_{\S^r}$ over $\S^r$ and the classifying space $X^{SO}_{Prim\_\Sigma^{1_r}}(0)$ turns out to be $\Omega \Gamma T\varepsilon^1_{\S^r} \cong \Omega \Gamma (\S^{r+1} \vee \S^1)$. \par In the cooriented case, the spectral sequence obtained this way has first page $E^1_{p,q} = \pi^{\bf s}(q) \oplus \pi^{\bf s}(0)$ and degenerates to $\pi^{\bf s}(0)$ concentrated in the first column on page $E^2$. In the non-cooriented case, the spectral sequence has first page $E^1_{p,q} = \pi^{\bf s}(q)$ and abuts to $\pi^{\bf s}_*(\R P^\infty)$. The differential $d^1$ can be completely understood: it is induced by the attaching map of the top cell of $\R P^{q+1}$ to the top cell of $\R P^q$, and this attaching map has degree $0$ when $q$ is even and degree $2$ when $q$ is odd. 
Thus $d^1$ is also $0$ on the even columns and multiplication by $2$ on the odd columns. Consequently the $E^2$ page consists of direct sums of groups $\Z_2$, one for each $2$-primary direct summand in the same $E^1$-cell, except at row $0$, where the groups alternate between $0$ and $\Z_2$ (this exception comes from the fact that the group $\pi^{\bf s}(0) = \Z$ is not finite like the rest of the groups $\pi^{\bf s}(q)$). \par Periodicity goes through in the same way as before, with $M_k$ replaced by $\vert J(\R P^{k-1}) \vert = 2^{m(k)}$ with $m(k)=\vert\{ 1 \leq p <k : p \equiv 0,1,2,3,4 \text{ mod } 8 \}\vert$ (known from \cite{AdamsJ2}). \par Conjecture: the first column of $E^2$ survives to $E^\infty$ without further change; in other words, the differentials $d^2$, $d^3$, $\dots$ ending in the first column all vanish. This has been checked in cells $0$ to $8$ of the first column. \section*{Appendix: The definition of $J'_C(X)$} The famous $K$-theoretical $\psi^k$-operations ($k$ is any natural number) of Adams are defined by the properties of \begin{enumerate}[a)] \item being group homomorphisms, and \item satisfying $\psi^k \xi = \xi^k$ if $\xi$ is a line bundle. \end{enumerate} Now if $\Phi_K$ is the $K$-theoretical Thom isomorphism, then an operation $\rho^k$ is defined by $$ \rho^k (\xi) = \Phi^{-1}_K \psi^k \Phi_K (1) $$ for any vector bundle $\xi$. After extending $\psi^k$ and $\rho^k$ to virtual bundles one can define the subgroup $V(X) \leq K(X)$ as follows: $$ V(X) = \left\{ x\in K(X) : \exists y\in K(X) \text{ s.t. } \rho^k(x)=\frac{\psi^k(1+y)}{1+y}\right\}. $$ Then $J'_C(X) \overset{def}{=} K(X)/V(X)$. \par The definition is motivated by the result (proved by Adams) that every $J$-trivial element of $K(X)$ necessarily belongs to $V(X)$. Hence there is a surjection $pr: J(X) \to J'_C(X)$.
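On $K(\C P^n) \cong \Z[\mu]/(\mu^{n+1})$ the two defining properties make the $\psi^k$ completely explicit: since $\mu = \gamma-1$ for the tautological line bundle $\gamma$, one has $\psi^k(\mu) = (1+\mu)^k-1$, and the classical relation $\psi^a\psi^b = \psi^{ab}$ becomes composition of polynomial substitutions. A minimal Python sketch of this truncated-polynomial model (helper names are ours):

```python
# Adams operations on K(CP^n) = Z[mu]/(mu^{n+1}); elements are
# coefficient lists [c_0, ..., c_n].  psi^k is the ring endomorphism
# determined by the substitution mu -> (1+mu)^k - 1.

def mult(a, b, n):
    """Product of coefficient lists in Z[mu]/(mu^{n+1})."""
    out = [0] * (n + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j <= n:
                out[i + j] += ai * bj
    return out

def psi(k, x, n):
    """psi^k applied to x in Z[mu]/(mu^{n+1}), n >= 1."""
    img = [1] + [0] * n                    # build (1+mu)^k - 1
    for _ in range(k):
        img = mult(img, [1, 1] + [0] * (n - 1), n)
    img[0] -= 1
    out, power = [x[0]] + [0] * n, [1] + [0] * n
    for c in x[1:]:                        # substitute mu -> img in x
        power = mult(power, img, n)
        for d in range(n + 1):
            out[d] += c * power[d]
    return out
```

For example `psi(2, [0, 1, 0, 0, 0], 4)` returns `[0, 2, 1, 0, 0]`, i.e. $\psi^2(\mu)=2\mu+\mu^2$, and `psi(2, psi(3, x, n), n) == psi(6, x, n)` for every `x`.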
While the definition of $J(X)$ is geometric (not algebraic) and consequently it is hard to handle, the definition of $J'_C(X)$ is purely algebraic and therefore much easier to compute. Furthermore, the two groups often coincide (eg. for $X=\R P^n$) or are close to each other (for $X=\S^{2k}$ the kernel $\ker pr$ has exponent $2$). \newpage \section*{Appendix 2: Calculated spectral sequences} $E^1_{ij}$ page: \[ \begin{array}{c|c|c|c|c|c|c|} \cline{2-7} & & & & & & \\ 11 & \pi^{\bf s}(10)=\Z_6\langle\eta\circ\mu\rangle\rule{2mm}{0pt} &\hspace*{-14mm}\overset{\eta\circ\mu}{\hbox to12mm{\leftarrowfill}} \ \Z_2^3 & \hspace*{-9.5mm} \overset{0}{\hbox to12mm{\leftarrowfill}}\ \Z_2 \oplus \Z_2\rule{3.5mm}{0pt} & \hspace*{-7mm} \overset{\bar \nu + \varepsilon}{\hbox to12mm{\leftarrowfill}}\ \Z_{240}\rule{3mm}{0pt} & \hspace*{-7.5mm} \overset{0}{\hbox to12mm{\leftarrowfill}} \ \Z_2 & 0 \\ & & & & & & \\ \cline{2-7} & & & & & & \\ 10 & \pi^{\bf s}(9)=\Z_2^3\langle\nu^3,\mu,\eta\circ\varepsilon\rangle & \hspace*{-4mm} \overset{?}{\hbox to 6mm{\leftarrowfill}}\ \,\Z_2\oplus \Z_2\rule{3mm}{0pt} & \hspace*{-11.5mm} \overset{0}{\hbox to12mm{\leftarrowfill}} \ \Z_{240} & \hspace*{-14.5mm} \overset{0}{\hbox to12mm{\leftarrowfill}}\ \Z_2 & 0 & 0 \\ & & & & & & \\ \cline{2-7} & & & & & & \\ 9 & \pi^{\bf s}(8)=\Z_2\langle\overline\nu\rangle\oplus \Z_2\langle\varepsilon\rangle \rule{3mm}{0pt} & \hspace*{-7.5mm} \overset{\bar \nu + \varepsilon}{\hbox to10mm{\leftarrowfill}} \ \Z_{240}\rule{2mm}{0pt} & \hspace*{-17.5mm} \overset{0}{\hbox to12mm{\leftarrowfill}} \ \Z_2 & 0 & 0 & \Z_{24}\\ & & & & & & \\ \cline{2-7} & & & & & & \\ 8 & \pi^{\bf s}(7)=\Z_{240}\langle\sigma\rangle & \hspace*{-15.5mm} \overset{0}{\hbox to12mm{\leftarrowfill}}\ \Z_2 & 0 & 0 & \Z_{24}\phantom{1} & \hspace*{-7.5mm} \hbox to12mm{\leftarrowfill}\kern-2.5pt\raisebox{.75pt}{$\scriptstyle\langle$} \ \hfil \Z_2 \\ & & & & & & \\ \cline{2-7} & & & & & & \\ 7 & \pi^{\bf s}(6)=\Z_2\langle \nu^2 \rangle & 0 & 0 & \Z_{24} & 
\hspace*{-8.5mm} \overset{0}{\hbox to12mm{\leftarrowfill}} \ \Z_2 & \hspace*{-4.5mm} \overset{\cong}{\hbox to12mm{\leftarrowfill}} \ \Z_2\rule{5mm}{0pt} \\ & & & & & & \\ \cline{2-7} & & & & & & \\ 6 & \pi^{\bf s}(5)=0 & 0 & \Z_{24} & \hspace*{-12mm} \hbox to12mm{\leftarrowfill}\kern-2.5pt\raisebox{.75pt}{$\scriptstyle\langle$} \ \Z_2\rule{3mm}{0pt} & \hspace*{-7.5mm} \overset{0}{\hbox to12mm{\leftarrowfill}} \Z_2\rule{3mm}{0pt} & \hspace*{-9mm} \raisebox{.75pt}{$\scriptstyle\langle$}\kern-6pt\hbox to12mm{\leftarrowfill}\ \Z \\ & & & & & & \\ \cline{2-7} & & & & & & \\ 5 & \pi^{\bf s}(4)= 0 & \Z_{24} & \hspace*{-12mm} \overset{0}{\hbox to12mm{\leftarrowfill}}\ \ \Z_2 & \hspace*{-12mm} \overset{\cong}{\hbox to12mm{\leftarrowfill}}\ \Z_2 & \hspace*{-9mm} \overset{0}{\hbox to12mm{\leftarrowfill}}\ \Z & 0\\ & & & & & & \\ \cline{2-7} & & & & & & \\ 4 & \pi^{\bf s}(3)= \Z_{24}\langle\nu\rangle & \hspace*{-14.5mm} \hbox to12mm{\leftarrowfill}\kern-2.5pt\raisebox{.75pt}{$\scriptstyle\langle$}\ \Z_2 & \hspace*{-15mm} \overset{0}{\hbox to12mm{\leftarrowfill}}\ \ \Z_2 & \hspace*{-15.5mm} \raisebox{.75pt}{$\scriptstyle\langle$}\kern-6pt\hbox to12mm{\leftarrowfill}\ \ \Z\rule{5mm}{0pt} & 0 & 0\\ & & & & & & \\ \cline{2-7} & & & & & & \\ 3 & \pi^{\bf s}(2)= \Z_2\langle\eta^2\rangle & \hspace*{-14.5mm} \overset{\cong}{\hbox to12mm{\leftarrowfill}}\ \ \Z_2 & \hspace*{-15mm} \overset{0}{\hbox to12mm{\leftarrowfill}}\ \Z\rule{5mm}{0pt} & 0 & 0 & 0\\ & & & & & & \\ \cline{2-7} & & & & & & \\ 2 & \pi^{\bf s}(1)= \Z_2\langle\eta\rangle & \hspace*{-15mm}\raisebox{.75pt}{$\scriptstyle\langle$}\kern-6pt\hbox to12mm{\leftarrowfill}\ \ \Z & 0 & 0 & 0 & 0\\ & & & & & & \\ \cline{2-7} & & & & & & \\ j=1 & \pi^{\bf s}_1(\Gamma \S^1)= \Z & 0 & 0 & 0 & 0 & 0\\ & & & & & & \\ \cline{2-7} \multicolumn{1}{c}{\rule{0pt}{20pt}} & \multicolumn{1}{c}{i=0} & \multicolumn{1}{c}{1} & \multicolumn{1}{c}{2} & \multicolumn{1}{c}{3} & \multicolumn{1}{c}{4} & \multicolumn{1}{c}{5} \end{array} \] \newpage 
\noindent $E^2_{ij}$ page: \vspace*{12pt} \vbox{$$ \xymatrix{ 8 & \Z_2 & \Z_{120} & \Z_2 & 0 & 0 & \Z_{24} \\ 7 & \Z_{240} & \Z_2 & 0 & 0 & \Z_{12} & 0 \\ 6 & \Z_2 & 0 & 0 & \Z_{24} & 0 & 0 \\ 5 & 0 & 0 & \Z_{12} & 0 & 0 & \Z \ar[llu]_{d^2_{5,5}}\\ 4 & 0 & \Z_{24} & 0 & 0 & \Z \ar[llu]_{d^2_{4,4}} & 0 \\ 3 & \Z_{12}\langle 2\nu \rangle & 0 & 0 & \Z \ar[llu]_{d^2_{3,3}} & 0 & 0\\ 2 & 0 & 0 & \Z \ar[llu]_{2 \cdot 2} & 0 & 0 & 0 \\ j=1 & 0 & \Z & 0 & 0 & 0 & 0 \\ & i=0 & 1 & 2 & 3 & 4 & 5 \\ } $$ \vspace*{-121.0mm} \[ \hspace*{12mm}\begin{array}{c|c|c|c|c|c|c|} \cline{2-7} & & & & & & \\[.679pt] & \hspace*{20.0mm} & \hspace*{15.0mm}& \hspace*{12.5mm} & \hspace*{12.5mm} & \hspace*{12.5mm} & \hspace*{12.5mm} \\[.679pt] & & & & & & \\[.679pt] \cline{2-7} & & & & & & \\[.679pt] & & & & & & \\[.679pt] & & & & & & \\[.679pt] \cline{2-7} & & & & & & \\[.679pt] & & & & & & \\[.679pt] & & & & & & \\[.679pt] \cline{2-7} & & & & & & \\[.679pt] & & & & & & \\[.679pt] & & & & & & \\[.679pt] \cline{2-7} & & & & & & \\[.679pt] & & & & & & \\[.679pt] & & & & & & \\[.679pt] \cline{2-7} & & & & & & \\[.679pt] & & & & & & \\[.679pt] & & & & & & \\[.679pt] \cline{2-7} & & & & & & \\[.679pt] & & & & & & \\[.679pt] & & & & & & \\[.679pt] \cline{2-7} & & & & & & \\[.679pt] & & & & & & \\[.679pt] & & & & & & \\[.679pt] \cline{2-7} \multicolumn{7}{c}{}\\[.679pt] \multicolumn{7}{c}{}\\[.679pt] \multicolumn{7}{c}{}\\[.679pt] \end{array} \]} Here, $d^2_{4,4}$ induces the $0$ map between the $3$-components, while the $d^2_{3,3}$ and $d^2_{5,5}$ differentials induce epimorphisms in the $3$-components.
\subsection{Gyulassy-Wang treatment of induced gluon radiation} As a first step, we focus on quark rescattering processes in the spirit of Gyulassy-Wang's approach\refup{GW}. The derivation basically follows that of the QED case, the only essential difference being that the contributions associated with successive quark rescatterings between the $\#i$ and $\#j$ centres acquire colour factors $(-(2N_cC_F)\pw{-1})\pw{j-i-1} = (-N_c\pw2+1)\pw{-(j-i-1)}$. This leads to colour suppression, as implied by the nonplanar nature of the corresponding diagrams. Since it is the transverse momentum rather than the angle that is relevant for the emission amplitude (cf. (\ref{1.1}) and (\ref{qcdamp})), we express the eikonal phase difference (\ref{1.2}) in terms of $(\vkps{m})\pw2=\relax\ifmmode{k_\perp\pw2}\else{$k_\perp\pw2${ }}\fi$, which remains $m$-independent within the adopted approximation ($E\!\gg\!\mu$). After averaging over the positions of the scattering centres with the weight (\ref{zdist}), where $\lambda$ now stands for the quark mean free path, the $\psi$ factors emerge \beql{psiqcd} \psi(\relax\ifmmode{k_\perp\pw2}\else{$k_\perp\pw2${ }}\fi) = \left(1-i\kappa {\relax\ifmmode{k_\perp\pw2}\else{$k_\perp\pw2${ }}\fi} / {\mu\pw2} \right)\pw{-1} \>\equiv\> \left(1-i\kappa U\pw2 \right)\pw{-1}\>. \eeq Here we have introduced the characteristic QCD parameter \beql{kappaQCDdef} \kappa = {\textstyle {1\over2}} {\lambda\mu\pw2}/{\omega} \>\ll\> 1\>. \eeq The two-dimensional vector $\vec{U}$ now stands for the gluon transverse momentum measured in units of $\mu$.
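The form of (\ref{psiqcd}) simply reflects this averaging: assuming the weight (\ref{zdist}) is the exponential $e^{-z/\lambda}\,dz/\lambda$, the factor $\psi$ is the average of the eikonal phase $\exp\left(iz k_\perp\pw2/2\omega\right)$, and the average of $e^{iat}$ over $e^{-t}\,dt$ equals $(1-ia)\pw{-1}$ with $a=\kappa U\pw2$. A quick numerical cross-check (helper names are ours):

```python
import cmath

def psi_numeric(a, T=60.0, steps=200_000):
    """Simpson-rule evaluation of the phase average
    int_0^inf e^{-t} e^{i a t} dt  (t = z/lambda, a = kappa U^2),
    whose closed form is 1/(1 - i a)."""
    h = T / steps
    f = lambda t: cmath.exp((-1.0 + 1j * a) * t)
    s = f(0.0) + f(T)
    for m in range(1, steps):
        s += (4 if m % 2 else 2) * f(m * h)
    return s * h / 3.0

def psi_closed(a):
    return 1.0 / (1.0 - 1j * a)
```

The two expressions agree to high accuracy for any $a=\kappa U\pw2$, confirming the form of the suppression factor.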
To integrate the product of the currents $\vec{J}_i\vec{J}_j$ over transferred momenta $\vec{q}_i, \vec{q}_j$ one uses \beql{prop} \int\frac{d\pw2q}{\pi} \frac{{\vec{k}_{\perp}} -\vec{q}}{({\vec{k}_{\perp}} -\vec{q}\,)\pw2} \, f(q\pw2) = \frac{{\vec{k}_{\perp}}}{\relax\ifmmode{k_\perp\pw2}\else{$k_\perp\pw2${ }}\fi} \int_0\pw{\relax\ifmmode{k_\perp\pw2}\else{$k_\perp\pw2${ }}\fi} dq\pw2\, f(q\pw2)\>, \eeq and arrives at the radiation density \beql{rdQCD1}\eqalign{ \omega\frac{dI}{d\omega \, dz} &= \lambda\pw{-1} \frac{N_c\relax\ifmmode\alpha_s\else{$\alpha_s${ }}\fi}{\pi} \int_0\pw\infty \frac{dU\pw2}{U\pw2(U\pw2+1)\pw2} \> \Re \left[\, g(1) - g(\psi(U\pw2)) \, \right] \>; \cr g(\psi) &\equiv \frac{N_c}{2C_F} \psi \sum_{n=0}\pw\infty \left( -\frac{\psi}{2N_cC_F} \right)\pw{n} = \frac{\psi}{1+(\psi-1)/N_c\pw2} \>; \quad (n=j-i-1)\,. }\eeq It is clear that in the large $N_c$ limit the answer is determined by the interference between the nearest neighbours, $j=i+1$. Straightforward algebra leads to \beql{rdQCD2} \frac{\omega\, dI}{d\omega \, dz} = \frac{N_c\relax\ifmmode\alpha_s\else{$\alpha_s${ }}\fi}{\pi\,\lambda} \int\limits_0\pw\infty \frac{U\pw2\, dU\pw2}{(U\pw2\!+\!1)\pw2} \left[\, U\pw4\!+\! \left( \frac{N_c\, \omega}{C_F \lambda\mu\pw2} \right)\pw2 \,\right]\pw{-1} \! = \frac{N_c\relax\ifmmode\alpha_s\else{$\alpha_s${ }}\fi}{\pi\,\lambda} \int\limits_0\pw\infty \frac{\mu\pw4\>d\relax\ifmmode{k_\perp\pw2}\else{$k_\perp\pw2${ }}\fi }{\relax\ifmmode{k_\perp\pw2}\else{$k_\perp\pw2${ }}\fi (\relax\ifmmode{k_\perp\pw2}\else{$k_\perp\pw2${ }}\fi\!+\!\mu\pw2)\pw2} \, \left[\, 1 \!+\! 
\left( \frac{N_c}{2C_F}\, \frac{\tau}{\lambda} \right)\pw2\,\right]\pw{-1} \eeq Except in the case $\lambda\mu\pw2\gg E$, where one formally recovers the BH limit, (\ref{rdQCD2}) leads to a sharply falling gluon energy distribution $\omega dI/d\omega \propto \omega\pw{-2} $ and, therefore, to finite energy losses \begin{equation} -{dE}/{dz} \sim \mbox{const}\cdot \relax\ifmmode\alpha_s\else{$\alpha_s${ }}\fi\, \mu\pw2 \>. \eeq This qualitatively agrees, up to $\log E$ factors, with the conclusions of~\cite{GW}. The origin of this result is quite simple: in the quark rescattering model the gluon radiation with formation time $\tau$ (\cf (\ref{1.2})) exceeding $\lambda$ is negligible, which severely limits gluon energies: $$ \omega = {1 \over 2} \tau \relax\ifmmode{k_\perp\pw2}\else{$k_\perp\pw2${ }}\fi \mathrel{\mathpalette\fun <} \lambda q_\perp\pw2 \sim \lambda \mu\pw2\>. $$ After having emitted a gluon at point $\#i$, the quark effectively stops interacting with the medium, as if it had lost its ``colour charge''. The truth is, however, that the colour charge has not disappeared but has been transferred to the radiated gluon. \subsection{Radiated gluon rescattering in the large $N_c$ limit and the analogy with QED} The diagrams for the product of the emission currents $\vec{J}_i\vec{J}_j$ displayed in Fig.\ref{fig3} dominate in the large $N_c$ limit.
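As a sanity check of the colour algebra entering (\ref{rdQCD1}) (our addition; $N_c=3$ and the complex test value of $\psi$ below are illustrative), the geometric series over intermediate quark rescatterings can be resummed numerically and compared with the quoted closed form:

```python
# Numerical check (ours) of the colour-factor resummation
#   g(psi) = (N_c/2C_F) * psi * sum_n (-psi/(2*N_c*C_F))^n
#          = psi / (1 + (psi - 1)/N_c^2),
# using N_c = 3, C_F = (N_c^2 - 1)/(2*N_c), and an arbitrary complex psi.

N_C = 3.0
C_F = (N_C**2 - 1.0) / (2.0 * N_C)   # = 4/3 for N_c = 3

def g_series(psi, n_terms=200):
    # |ratio| = |psi|/(N_c^2 - 1) < 1, so the series converges quickly.
    ratio = -psi / (2.0 * N_C * C_F)
    return (N_C / (2.0 * C_F)) * psi * sum(ratio**n for n in range(n_terms))

def g_closed(psi):
    return psi / (1.0 + (psi - 1.0) / N_C**2)

psi = 1.0 / (1.0 - 0.3j)             # representative phase factor, |psi| < 1
print(abs(g_series(psi) - g_closed(psi)))   # series vs closed form
```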
\begin{figure} \setlength{\unitlength}{0.5pt} \thicklines \newsavebox{\macroC} \savebox{\macroC}(0,0)[bl]{ \put( 0.00, 0.00){\circle*{5}} } \newsavebox{\macroD} \savebox{\macroD}(0,0)[bl]{ \put( 0.00, 0.00){\circle*{10}} } \newsavebox{\maccross} \savebox{\maccross}(0,0)[bl]{ \put(-10, 10){\line(1,-1){20}} \put(-10,-10){\line(1, 1){20}} } \newsavebox{\macroA} \savebox{\macroA}(0,0)[bl]{ \put( 67.00, 67.00){\oval( 24.00, 78.00)[r]} \multiput(-383.00,174.00)( 56.00, 0.00){5}{\vector(1,0){ 55.00}} \multiput(-331.00,-23.00)( 56.00, 0.00){5}{\vector(-1,0){ 55.00}} \put(-359.26,174.00){\oval( 9.48, 9.33)[bl]} \put(-359.26,164.67){\oval( 9.48, 9.33)[tr]} \put(-363.79,164.67){\oval( 18.52, 9.33)[br]} \put(-363.79,153.00){\oval( 23.12, 14.00)[tl]} \put(-363.57,153.00){\oval( 23.55, 14.00)[bl]} \multiput(-363.57,146.00)( 0.43,-28.00){5}{ \put( 0.00, -7.00){\oval( 28.21, 14.00)[tr]} \put( 0.21, -7.00){\oval( 27.79, 14.00)[br]} \put( 0.21,-21.00){\oval( 27.79, 14.00)[tl]} \put( 0.43,-21.00){\oval( 28.21, 14.00)[bl]} } \put(-361.43, -1.00){\oval( 23.55, 14.00)[tr]} \put(-361.21, -1.00){\oval( 23.12, 14.00)[br]} \put(-361.21,-12.67){\oval( 18.52, 9.33)[tl]} \put(-365.74,-12.67){\oval( 9.48, 9.33)[bl]} \put(-365.74,-22.00){\oval( 9.48, 9.33)[tr]} \multiput(-355.00, 54.00)( 56, 0.10){4}{\usebox{\maccross}} \thinlines \multiput(-381.00, 54.00)( 41.30, 0.10){10}{ \put( 0.00, 0.00){\line(1,0){ 30.98}} } \multiput(-299.00,-23.00)( 56.00, 0.00){3}{ \put( -3.10, 0.00){\oval( 6.19, 6.10)[tr]} \put( -3.10, 6.10){\oval( 6.19, 6.10)[bl]} \put( -0.14, 6.10){\oval( 12.10, 6.10)[tl]} \put( -0.14, 13.71){\oval( 15.10, 9.14)[br]} \put( -0.29, 13.71){\oval( 15.38, 9.14)[tr]} \multiput( -0.29, 18.29)( -0.29, 18.29){5}{ \put( 0.00, 4.57){\oval( 18.43, 9.14)[bl]} \put( -0.14, 4.57){\oval( 18.14, 9.14)[tl]} \put( -0.14, 13.71){\oval( 18.14, 9.14)[br]} \put( -0.29, 13.71){\oval( 18.43, 9.14)[tr]} } \put( -1.71,114.29){\oval( 15.38, 9.14)[bl]} \put( -1.86,114.29){\oval( 15.10, 9.14)[tl]} \put( 
-1.86,121.90){\oval( 12.10, 6.10)[br]} \put( 1.10,121.90){\oval( 6.19, 6.10)[tr]} \put( 1.10,128.00){\oval( 6.19, 6.10)[bl]} } \thicklines \multiput(-334.00,146.00)( 19.67, 0.00){9}{ \put( 0.00, -19.67){\oval( 32.78, 39.33)[tr]} \put( 9.83, -19.67){\oval( 13.11, 39.33)[b]} \put( 19.67, -19.67){\oval( 32.78, 39.33)[tl]} } \put(-335.00,139.00){\oval( 25.00, 14.00)[tl]} \put(-64.00,173.00){\vector(1,0){ 36.00}} \put(-28.00,173.00){\vector(1,0){ 51.00}} \put( 26.00,-23.00){\vector(-1,0){26}} \put( 0,-23.00){\vector(-1,0){ 50.00}} \multiput(-125.00,144.00)( 13.00, 0.00){3}{\usebox{\macroC}} \multiput(-125.00, 4.00)( 13.00, 0.00){3}{\usebox{\macroC}} \multiput(-72.00,147.00)( 18.75, 0.00){5}{ \put( 0.00, -18.75){\oval( 31.25, 37.50)[tr]} \put( 9.38, -18.75){\oval( 12.50, 37.50)[b]} \put( 18.75, -18.75){\oval( 31.25, 37.50)[tl]} } \multiput( 23.00,147.00)( 14.67,-13.67){3}{ \put( 0.00,-28.33){\oval( 24.44, 56.67)[tr]} \put( 7.33,-28.33){\oval( 9.78, 29.33)[b]} \put( 14.67,-28.33){\oval( 24.44, 29.33)[tl]} } \multiput( 0.00, 0.00)( 16.75, 7.00){4}{ \put( 0.00, 23.75){\oval( 27.92, 47.50)[br]} \put( 8.38, 23.75){\oval( 11.17, 33.50)[t]} \put( 16.75, 23.75){\oval( 27.92, 33.50)[bl]} } \multiput(-302.00,106.00)( 57.00, 1.00){3}{\usebox{\macroC}} \put( -361.00, -23.00){\usebox{\macroC}} \multiput( -301.00, -23.00)( 57.00, 0.00){3}{\usebox{\macroC}} \put( -364.00,174.00){\usebox{\macroC}} \put( -350.00,138.00){\usebox{\macroD}} } \begin{center} \begin{picture}(500,220) \put(395.00, 39.00){\usebox{\macroA}} \put(382.00, 46.00){\usebox{\macroD}} \put(395.00, 44.00){\oval(20,10)[bl]} \put(378.03,212.00){\oval( 10.05, 10.05)[bl]} \put(378.03,201.95){\oval( 10.05, 10.05)[tr]} \put(373.00,201.95){\oval( 20.10, 10.05)[br]} \put(373.00,189.38){\oval( 25.13, 15.08)[tl]} \put(373.00,189.38){\oval( 25.13, 15.08)[bl]} \multiput(373.00,181.85)( 0.00,-30.15){4}{ \put( 0.00, -7.54){\oval( 30.15, 15.08)[tr]} \put( 0.00, -7.54){\oval( 30.15, 15.08)[br]} \put( 0.00,-22.62){\oval( 30.15, 
15.08)[tl]} \put( 0.00,-22.62){\oval( 30.15, 15.08)[bl]} } \put(373.00, 53.69){\oval( 30.15, 15.08)[tr]} \put(373.00, 53.69){\oval( 30.15, 15.08)[br]} \put(373.00, 38.62){\oval( 25.13, 15.08)[tl]} \put(373.00, 38.62){\oval( 25.13, 15.08)[bl]} \put(373.00, 26.05){\oval( 20.10, 10.05)[tr]} \put(378.03, 26.05){\oval( 10.05, 10.05)[br]} \put(378.03, 16.00){\oval( 10.05, 10.05)[tl]} \put(373.03,212.00){\usebox{\macroC}} \put(373.03,16){\usebox{\macroC}} \put(365,93){\usebox{\maccross}} \end{picture} \end{center} \begin{center} \begin{picture}(500,220) \put(395.00, 39.00){\usebox{\macroA}} \put(380.00, 42.00){\usebox{\macroD}} \put(395.00, 44.00){\oval(24,10)[bl]} \put(369,148.00){\usebox{\macroC}} \put(368.38,16){\usebox{\macroC}} \put(365,93){\usebox{\maccross}} \put(373.38,147.00){\oval( 6.77, 6.77)[bl]} \put(373.38,140.23){\oval( 6.77, 6.77)[tr]} \put(370.00,140.23){\oval( 13.54, 6.77)[br]} \put(370.00,131.77){\oval( 16.92, 10.15)[tl]} \put(370.00,131.77){\oval( 16.92, 10.15)[bl]} \multiput(370.00,126.69)( 0.00,-20.31){4}{ \put( 0.00, -5.08){\oval( 20.31, 10.15)[tr]} \put( 0.00, -5.08){\oval( 20.31, 10.15)[br]} \put( 0.00,-15.23){\oval( 20.31, 10.15)[tl]} \put( 0.00,-15.23){\oval( 20.31, 10.15)[bl]} } \put(370.00, 40.38){\oval( 20.31, 10.15)[tr]} \put(370.00, 40.38){\oval( 20.31, 10.15)[br]} \put(370.00, 30.23){\oval( 16.92, 10.15)[tl]} \put(370.00, 30.23){\oval( 16.92, 10.15)[bl]} \put(370.00, 21.77){\oval( 13.54, 6.77)[tr]} \put(373.38, 21.77){\oval( 6.77, 6.77)[br]} \put(373.38, 15.00){\oval( 6.77, 6.77)[tl]} \end{picture} \end{center} \caption{ \label{fig3} Eikonal graphs with gluon rescattering dominate in the large $N_c$ limit. The part of the diagram below the dashed line corresponds to the conjugate emission amplitude. 
Crosses mark the scattering centres.} \end{figure} \begin{figure} \begin{center} \begin{picture}(400,150) \thicklines \put(-50,-40){ \put(40,50){$q_i$} \put(30,110){$k\!-\!q_i$} \put(100,80){$k$} \put(355.00, 95.00){\circle*{2.5}} \multiput(357.00, 95.00)( 0.00,-16.00){3}{ \put( 0.00, -4.00){\oval( 16.00, 8.00)[tl]} \put( 0.00, -4.00){\oval( 16.00, 8.00)[bl]} \put( 0.00,-12.00){\oval( 16.00, 8.00)[tr]} \put( 0.00,-12.00){\oval( 16.00, 8.00)[br]} } \put(357.00, 43.00){\oval( 16.00, 8.00)[tl]} \put(357.00, 43.00){\oval( 16.00, 8.00)[bl]} \newsavebox{\macroE} \savebox{\macroE}(0,0)[bl]{ \put( 23.00, 0.00){\vector(1,0){ 56.00}} \put( 79.00, 0.00){\line(1,0){ 20.00}} \put( 46.40, 0.00){\oval( 4.81, 4.81)[bl]} \put( 46.40, -4.81){\oval( 4.81, 4.81)[tr]} \put( 44.00, -4.81){\oval( 9.62, 4.81)[br]} \put( 44.00,-10.82){\oval( 12.02, 7.21)[tl]} \put( 44.00,-10.82){\oval( 12.02, 7.21)[bl]} \multiput( 44.00,-14.43)( 0.00,-14.43){6}{ \put( 0.00, -3.61){\oval( 14.43, 7.21)[tr]} \put( 0.00, -3.61){\oval( 14.43, 7.21)[br]} \put( 0.00,-10.82){\oval( 14.43, 7.21)[tl]} \put( 0.00,-10.82){\oval( 14.43, 7.21)[bl]} } \multiput( 44.00, 0.00)( 1.00,-43.00){2}{\circle*{2.5}} \multiput( 56.00,-43.00)( 7.80, 0.20){5}{ \put( 0.00, 8.00){\oval( 13.00, 16.00)[br]} \put( 3.90, 8.00){\oval( 5.20, 15.60)[t]} \put( 7.80, 8.00){\oval( 13.00, 15.60)[bl]} } \put( 44.00,-43.00){\line(1,0){ 12.00}} \put( 0.00, 0.00){\vector(1,0){ 23.00}} } \put(180.00,137.00){\usebox{\macroE}} \put( 24.00,137.00){\usebox{\macroE}} \put(224.00, 95.00){\circle*{6}} \put(150,70){$\displaystyle \Longrightarrow$} \put(290,70){$\displaystyle +$} \put(307.00,137.00){\vector(1,0){ 42.00}} \put(349.00,137.00){\line(1,0){ 48.00}} \put(308,150){$z_{i\!-\!1}$} \multiput(307.00,137.00)( 7.60, -8.40){5}{ \put( 0.00,-16.00){\oval( 12.67, 32.00)[tr]} \put( 3.80,-16.00){\oval( 5.07, 15.20)[b]} \put( 7.60,-16.00){\oval( 12.67, 15.20)[tl]} } \put(345.00, 95.00){\line(1,0){ 12.00}} \multiput(362.00, 95.00)( 7.80, 0.20){5}{ \put( 0.00, 
8.00){\oval( 13.00, 16.00)[br]} \put( 3.90, 8.00){\oval( 5.20, 15.60)[t]} \put( 7.80, 8.00){\oval( 13.00, 15.60)[bl]} } \put(355.00, 95.00){\line(1,0){ 7.00}} \multiput(224.00,160.00)(131.00, 1.00){2}{ \multiput( 0.00, 0.00)( 0.00,-19.25){7}{ \put( 0.00, 0.00){\line(0,-1){ 14.44}} } \put(5,-10){$z_{i}$} } } \end{picture} \end{center} \caption{\label{fig2} The Feynman 3-gluon diagram produces two eikonal graphs: it participates in the gluon production current (second term of the amplitude (3.1)) and gives rise to Coulomb rescattering.} \end{figure} Let us consider the Feynman diagrams with gluon self-interaction as shown in Fig.\ref{fig2}. Integration over the position of the quark-gluon interaction vertex $t$ between the successive scattering centres, $z_{i-1}\le t\le z_i$, gives rise to two contributions with the phase factors attached to $z_i$ and $z_{i-1}$. The first term in the rhs of Fig.\ref{fig2} corresponds to an instantaneous interaction mediated by the {\em virtual}\/ (Coulomb) gluon $(k\!-\!q_i)$ and is responsible for the second term of the basic scattering amplitude (\ref{qcdamp}). The second one corresponds to the production of the {\em real}\/ gluon $(k\!-\!q_i)$ at the previous interaction point, which then rescatters at the centre $\#i$. The sum of these two terms is proportional to the difference of the two phase factors. Therefore, when the phases are set to zero (factorization limit), gluon production {\em inside}\/ the medium cancels. The only contributions which survive correspond to the gluon radiation at the very first and the very last interaction vertices. As a result, the subtraction term analogous to $ |{\sum \vec{J}_i}|\pw2 $ of (\ref{2.1}) remains $N$-independent and does not contribute to the radiation {\em density}. In the QCD case this statement is less transparent than in QED, since the initially produced gluon is still subject to multiple rescattering in the medium.
The latter, however, does not affect the {\em energy}\/ distribution of radiation. A similar argument applies to the final-state interaction after the point $\#j$, which has been neglected in Fig.\ref{fig3}: Coulomb interaction of the quark-gluon pair with the medium does affect the transverse momentum distribution of the gluon but does not change its energy spectrum, the only quantity which concerns us here. It is worthwhile to mention that in the QCD problem the very notion of scattering cross section (and, thus, of mean free path $\lambda$) becomes elusive: the scattering cross section of the quark-gluon pair depends on the transverse size of the 2-particle system and therefore is $z$-dependent. As we shall see below, the radiation intensity of gluons with $\omega\!\gg\!\lambda\mu\pw2$ is due to comparatively {\em large}\/ formation times $\tau\!\gg\!\lambda$ but not large enough for the quark and the gluon to separate in impact parameter space. Such a pair as a whole acts as if it were a single quark propagating through the medium. Therefore the mean free path of the quark, $\lambda\propto C_F\pw{-1}$, may be effectively used for the eikonal averaging in the spirit of (\ref{zdist}). More formal considerations will be given elsewhere\refup{BDPS}. The emission current (\ref{qcdamp}) and the structure of the diffusion in the variable $\vec{u}$ prove to be identical for the QCD and QED problems. The only, but essential, difference is that now $\vec{u}$ has to be related to the {\em transverse momentum}\/ instead of the {\em angle}\/ of the gluon: \beql{anal} \eqalign{ \mbox{QED:}~~ & \vec{u}_i = \frac{\vkps{i}}{\omega}, \> \vec{U}_i= \vec{u}_i\frac{E}{\mu}; \> \kappa = \frac{\lambda\mu\pw2}{2} \,\frac{\omega}{E\pw2} \ll 1 \quad \Longrightarrow \quad \cr \mbox{QCD:}~~ & \vec{u}_i = \vkps{i}, \> \vec{U}_i= \frac{\vec{u}_{i}}{\mu} ; \> \kappa = \,\frac{\lambda\mu\pw2} {2\,\omega} \ll 1\, .
}\eeq Correspondingly, the $\kappa$ parameter gets modified according to the QCD expression (\ref{kappaQCDdef}). Given this analogy, the QCD derivation follows that of the previous section. The graph of Fig.\ref{fig3}b is twice the graph of Fig.\ref{fig3}a due to colour factors. One finds the gluon energy distribution (\cf (\ref{Ures})) \beql{UresQCD} \omega\frac{dI}{d\omega dz} = \frac{3 N_c\relax\ifmmode\alpha_s\else{$\alpha_s${ }}\fi}{\pi\,\lambda} \int_0\pw\infty \frac{dU\pw2}{U\pw2(U\pw2+1)} \> \Phi(\kappa U\pw4) \>\approx\> \frac{3 N_c\relax\ifmmode\alpha_s\else{$\alpha_s${ }}\fi}{4\lambda}\sqrt{\frac{\lambda\mu\pw2}{2\pi\omega}} \, \ln\frac{2\omega}{\lambda\mu\pw2} \>. \eeq \subsection{Rescattering of the incident parton and the complete spectrum} Now we are in a position to complete the analysis of induced QCD radiation by taking into account subdominant quark rescattering contributions. Each gluon scattering provides the phase factor $\psi(U\pw2)$ accompanied by the colour factor $N_c/2$ which one normalizes by the quark elastic scattering factor $C_F\pw{-1}\propto \lambda_q$. Each quark scattering graph supplies the colour factor $(-1/2N_c)\cdot C_F\pw{-1} = (C_F-N_c/2)C_F\pw{-1}$ and the same $\psi$ factor (since the gluon momentum stays unaffected, and the change in the quark direction is negligible). Accounting for an arbitrary number of quark rescatterings in-between two successive gluon rescatterings results in a modified phase factor: \bminiG{lamtil} \label{lamtila} \frac{N_c}{2C_F} \psi \cdot \sum_{m=0}\pw\infty \psi\pw{m}\left(1 - \frac{N_c}{2C_F}\right)\pw{m} &=& \left(1+ \frac{2C_F}{N_c}\left[\,\psi\pw{-1}-1 \,\right] \right)\pw{-1} \equiv \tilde{\psi} \>; \\ \label{lamtilb} \tilde{\psi}(U\pw2) = \left( 1 - i\tilde{\kappa} U\pw2 \right)\pw{-1} ; && \!\!\! \tilde{\kappa}\equiv \frac{\tilde{\lambda} \mu\pw2}{2\omega} \>, \>\> C_F \lambda_q \equiv \frac{N_c}{2}\, \tilde{\lambda} \,. 
\qquad{} \end{eqnarray}\relax\setcounter{equation}{\value{hran}} It is worthwhile to notice that the modified effective mean free path $\tilde{\lambda}$ does not depend on the nature of the colour representation $R$ of the initial particle. For example, in the case of the {\em gluon}\/ substituted for the initial quark, Coulomb rescattering of both projectile and radiated gluons supplies the colour factor $N_c/2$, while the elastic cross section provides the normalization $N_c\pw{-1}\propto \lambda_g$. Thus, since $N_c\lambda_g=C_F \lambda_q $, the same result follows: \begin{equation} \frac{N_c}{2} N_c\pw{-1}\psi \cdot \sum_{m=0}\pw\infty \left( \frac{N_c}{2} N_c\pw{-1}\psi \right)\pw{m} \>=\> \tilde{\psi}\>. \eeq In general, for an arbitrary colour state $R$ one has to replace $C_F$ in (\ref{lamtila}) by the proper Casimir operator $C_R$, whose dependence then cancels against the $C_R\pw{-1}$ factor that enters the expression for the mean free path of a particle $R$: $C_F \lambda_q =N_c\lambda_g =C_R\lambda_R= {\textstyle {1\over2}} N_c\tilde{\lambda}$. Thus, (\ref{UresQCD}) holds provided one replaces $\kappa$ by the modified $\tilde{\kappa}$ according to (\ref{lamtilb}). To factor out the dependence on the type of the projectile one may express the answer in terms of the gluon mean free path $\lambda_g={\textstyle {1\over2}}\tilde{\lambda}$: \beql{QCDres1} \omega\frac{dI}{d\omega dz} = 3\cdot \frac{C_R \relax\ifmmode\alpha_s\else{$\alpha_s${ }}\fi}{\pi\,\lambda_{g}} \int_0\pw\infty \frac{dU\pw2}{U\pw2(U\pw2+1)} \> \Phi(\tilde{\kappa} U\pw4) \>\approx\> \frac{3C_R \relax\ifmmode\alpha_s\else{$\alpha_s${ }}\fi}{4\, \lambda_g} \sqrt{\frac{\lambda_g\mu\pw2}{\pi\, \omega}} \, \left(\ln\frac{\omega}{\lambda_g\mu\pw2} \>+\> \cO{1} \right) . \eeq This is our final result for the QCD induced radiation spectrum. It has been derived with logarithmic accuracy in the energy region \beql{cond} {\omega}\left/{\lambda_g\mu\pw2} \right.
= \tilde{\kappa}\pw{-1} \>\gg \> 1\>. \eeq It is this condition which, a posteriori, justifies the use of the quark scattering cross section for the quark-gluon system. Indeed, the random walk estimate for the transverse separation between $q$ and $g$ in the course of $\lrang{n}\mathrel{\mathpalette\fun <} 1/\sqrt{\kappa}$ kicks gives \beql{separ} \Delta \vec{\rho}_\perp \approx \lambda \sum_{m=1}\pw{\lrang{n}} {\vkps{m}}/ {\omega} \>: \quad (\Delta \vec{\rho}_\perp)\pw2 \sim \frac{\lambda\pw2\mu\pw2}{\omega\pw2} \lrang{n}\pw2 \mathrel{\mathpalette\fun <} \kappa \, \mu\pw{-2}\> \ll\> \mu\pw{-2}\,. \eeq Given a separation much {\em smaller}\/ than the radius of the potential, the mean free path of the $qg$ system in the medium coincides with that of a single quark. A comment concerning the factorization and BH regime limits of the spectrum is in order and proceeds along the same lines as in QED (see (\ref{ests})). For a medium of finite size $L\!<\!L_{cr}=\sqrt{\lambda_g E/\mu\pw2}$ the $\omega$-independent factorization limit holds for energetic gluons with \bminiG{BHfac} \omega \mathrel{\mathpalette\fun >} \omega_{fact}=E\left( {L} / {L_{cr}}\right)\pw2 , \qquad \left( N_{coh}=\tilde{\kappa}\pw{-1/2} \mathrel{\mathpalette\fun >} L/\lambda_g \right). \eeeq The BH regime corresponds formally to small gluon energies \footnote{ Notice that for $\kappa\!\gg\! 1$ the separation between the colour charges according to (\ref{separ}) becomes much {\em larger}\/ than $1/\mu$. In these circumstances the normalization cross section tends to the sum of independent $q$ and $g$ contributions, $C_F \Longrightarrow C_F \!+\! N_c \approx 3 C_F$. The factor 3 in (\ref{QCDres1}) disappears leading to the standard BH expression. }, \begin{eqnarray} \omega \mathrel{\mathpalette\fun <} \omega_{BH}= \lambda_g\mu\pw2\,, \qquad (\tilde{\kappa} \mathrel{\mathpalette\fun >} 1)\>.
\end{eqnarray}\relax\setcounter{equation}{\value{hran}} The spectrum (\ref{QCDres1}) is depicted in Fig.~\ref{figex} together with the QED spectrum. When it is integrated over $\omega$ up to $E$, it leads to energy losses \begin{equation} -\frac{dE}{dz} \>\propto\> C_R\relax\ifmmode\alpha_s\else{$\alpha_s${ }}\fi \sqrt{E\, \frac{\mu\pw2}{\lambda_g} } \> \ln\frac{E}{\lambda_g\mu\pw2} \qquad (\mbox{for}\>\> L>L_{cr})\, . \eeq \begin{figure} \setlength{\unitlength}{1.5pt} \begin{center} \begin{picture}(200,115) \thinlines \put(50,90){ \put(-20,-90){\vector(0,1){110}} \put(-20,0){\vector(1,0){160}} \multiput(30,-86)(12,6){6}{\line(2,1){9}} \put(100,-90){\line(0,1){94}} \put(66,-4){\line(0,1){8}} \thicklines \put(-25,0){\line(1,0){25}} \put(0,0){\line(2,-1){66}} \put(66,-33){\line(1,0){33}} \multiput(72,-36)(6,-3){5}{\circle*{1}} \put(95,7){$\ln\! {E} $} \put(130,7){$\ln {\omega} $} \put(60,7){$\ln {\omega_{\!fact}} $} \put(-5,7){$\ln {\omega_{\!B\!H}} $} \put(-75,-15){$\ln\!\!\left\{ \frac{\tilde{\lambda}} {\relax\ifmmode\alpha_s\else{$\alpha_s${ }}\fi}\frac{\omega dI}{d\omega dz}\right\} _{Q C\!D} $} \put(-75,-75){$\ln\!\!\left\{ \frac{\lambda}{\alpha}\frac{\omega dI}{d\omega dz}\right\} _{Q E D}$} } \end{picture} \end{center}\caption{\label{figex} The normalized QCD radiation density (solid line) for a finite-size medium ($L\!<\!L_{cr}$). The QED LPM spectrum ($E>E_{LPM}$) is shown by the dashed line. } \end{figure} \mysection{Discussion and concluding remarks} In this letter we have considered induced soft gluon radiation off a fast colour charge propagating through the medium composed of static QCD Coulomb centres. Due to the colour nature of the scattering potential, the specific non-Abelian contributions to the gluon yield dominate over the QED-like radiation for all gluon energies (up to $\omega \mathrel{\mathpalette\fun <} E$). They have been singled out here by choosing the $E\!\to\!\infty$ limit in which the Abelian radiation vanishes.
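As an illustrative numerical cross-check (ours; overall constants and the scale $\lambda_g\mu\pw2$ are set to unity), integrating an $\omega\pw{-1/2}\ln\omega$ spectrum over gluon energies reproduces the $\sqrt{E}\,\ln E$ growth of the energy loss quoted above:

```python
# Numerical cross-check (ours): a spectrum omega*dI/domega ~ omega^(-1/2)*ln(omega),
# integrated from omega_BH up to E, gives a loss growing like sqrt(E)*ln(E).
# Units: omega_BH = lambda_g*mu^2 = 1; all overall constants dropped.
import math

def loss(E, n=100_000):
    """Trapezoidal integral of w**-0.5 * ln(w) over [1, E]."""
    a, b = 1.0, float(E)
    h = (b - a) / n
    f = lambda w: w**-0.5 * math.log(w)
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

def loss_exact(E):
    """Closed form of the same integral: 2*sqrt(E)*ln(E) - 4*sqrt(E) + 4."""
    return 2.0 * math.sqrt(E) * math.log(E) - 4.0 * math.sqrt(E) + 4.0

for E in (1e3, 1e5):
    print(E, loss(E), loss_exact(E))
```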
Having established a close analogy between the angular structure of the QED problem and the transverse momentum structure of the QCD problem, one can qualitatively obtain the gluon energy density spectrum from the known QED result by the substitution $\omega/E\pw2\to 1/\omega$. The spectrum of gluon radiation is $E$-independent and, analogously to the Landau-Pomeranchuk-Migdal effect in QED, acquires coherent suppression as compared to the Bethe-Heitler regime of independent emissions: \beql{4.1} \omega\frac{dI}{d\omega\, dz}\propto \lambda_g\pw{-1} \relax\ifmmode\alpha_s\else{$\alpha_s${ }}\fi \cdot\sqrt\frac{\lambda_g\mu\pw2}{\omega}\, \ln\frac{\omega}{\lambda_g\mu\pw2} \>, \quad \left(\frac{\lambda_g\mu\pw2}{\omega}\equiv \tilde{\kappa} =N_{coh}\pw{-2} \ll 1\right). \eeq As a result, the radiative energy loss per unit length amounts to \beql{4.2} -dE/dz \propto \relax\ifmmode\alpha_s\else{$\alpha_s${ }}\fi\sqrt{E \mu\pw2/\lambda_g}\ln(E/\lambda_g\mu\pw2) . \eeq \noindent This is in apparent contradiction with statements existing in the literature on the subject. In particular, it contradicts the recent conclusion presented in \cite{GW} about the {\em finite}\/ density of energy losses. As discussed above, the latter statement is due to a sharply falling energy spectrum $\omega {dI}/{d\omega dz}$ $\propto \omega\pw{-2}$ which one finds within an approach that disregards Coulomb rescattering of the radiated gluon. Only gluon radiation with restricted formation time $\tau\mathrel{\mathpalette\fun <}\lambda$ survives such a treatment, while (\ref{4.1}) is dominated by large formation times $\lambda \ll \tau \ll \lambda N_{coh}$. It is interesting to note that, although large momentum transfers contribute little to the scattering cross section, $d\sigma/d q\pw2_\perp\propto q_\perp\pw{-4}$ for $q_\perp\pw2\gg\mu\pw2$, the answer is actually related to {\em hard}\/ scatterings.
The logarithmic enhancement factor in (\ref{4.1}) originates from interference between two radiation amplitudes: one is related to a ``typical'' scattering with $q\pw2_n\sim \mu\pw2$ while the other corresponds to a hard fluctuation with a very large momentum transfer up to $q_1\pw2 \sim \mu\pw2 N_{coh}\gg \mu\pw2$. These amplitudes get a chance to interfere because of the random walk in the gluon transverse momentum in the course of $n\sim \relax\ifmmode{k_\perp\pw2}\else{$k_\perp\pw2${ }}\fi/\mu\pw2 \mathrel{\mathpalette\fun <} N_{coh}$ rescatterings with typical $q_m\pw2\sim \mu\pw2$. Such a fluctuation is actually not rare: the probability that at least one of $n$ scatterings will supply a momentum transfer $q_\perp\pw2$ exceeding $n\mu\pw2$ amounts to $1- (1\!-\!1/n)\pw{n} \sim 1$. Our results should be directly applicable to ``hot'' (deconfined) plasma (with $\lambda \gg 1/\mu$), in which case the model potential (\ref{qpdist}) with $\mu$ as a screening parameter could be taken at face value. The structure of the spectrum (\ref{4.1}) is such that one has to know, strictly speaking, the mean free path $\lambda$ separately from the screening mass $\mu$ at high temperature. This is in contrast to the Migdal approach extended to QCD\refup{BDPS}, in which the rhs of (\ref{4.1}) is given by $\relax\ifmmode\alpha_s\else{$\alpha_s${ }}\fi\sqrt{q/\omega}$, \hbox{\it i.e.}{ } the plasma properties only enter via the transport coefficient $q\equiv \lrang{q_\perp\pw2}/\lambda$. At the same time, one might think of applying the above considerations to the propagation of a fast quark/gluon through ``cold'' nuclear matter. In such a case the parameter $\mu$ has to be introduced by hand as a lower bound for transverse momenta which one may treat perturbatively.
Given the ill-defined nature of $\mu$, it is worthwhile to notice that the inverse size of the potential and the mean free path enter (\ref{4.1}) in the combination $\mu\pw2/\lambda_g=\mu\pw2\,\rho\sigma_g$, with $\sigma_g$ the gluon scattering cross section. This makes it possible to express the results in terms of the physical density of scattering centres $\rho$ and the dimensionless parameter $\theta\!=\!{\sigma_g}\mu\pw2 \propto N_c\relax\ifmmode\alpha_s\else{$\alpha_s${ }}\fi\pw2$ which measures the strength of gluon interaction with the medium and should be much less sensitive to a finite uncertainty in~$\mu$. \vspace{1.5 cm} \noindent {\bf\large Acknowledgement} Discussions with P.~Aurenche, S.J.~Brodsky, V.N.~Gribov, A.H.~Mueller and K.~Redlich are kindly acknowledged. One of us (D) is grateful for hospitality extended to him at the High Energy Physics Lab. of the University Paris-Sud, Orsay, where this study has been performed. This research is supported in part by the EEC Programme ``Human Capital and Mobility'', Network ``Physics at High Energy Colliders'', Contract CHRX-CT93-0357. \newpage \vbox to 2 truecm {} \def\labelenumi{[\arabic{enumi}]} \noindent {\bf\large References} \begin{enumerate} \item\label{LP} L.D.~Landau and I.Ya.~Pomeranchuk, {\em Dokl.~Akad.~Nauk SSSR}\/ \underline {92} (1953) 535, 735. \item\label{M} A.B.~Migdal, \pr{103}{1811}{56}; and references therein. \item\label{FP} E.L.~Feinberg and I.Ya.~Pomeranchuk, {\em Nuovo Cimento Suppl.}\/ III, (1956) 652. \item\label{TMAS} See, \hbox{\it e.g.}{ }: M.L.~Ter-Mikaelian, High Energy Electromagnetic Processes in Condensed Media, John Wiley \& Sons, 1972; \\ A.I.~Akhiezer and N.F.~Shul'ga, {\em Sov.~Phys.~Usp.}\/ \underline {30} (1987) 197;\\ and references therein.
\item\label{BjG} J.D.~Bjorken, Fermilab report PUB--82/59--THY, 1982 (unpublished); \\ M.~Gyulassy {\it et al.}, {\em Nucl.~Phys.} \underline {A 538} (1992) 37c; \\ and references therein. \item\label{GW} M.~Gyulassy and X.-N.~Wang, \np{420}{583}{94}; \\ see also X.-N.~Wang, M.~Gyulassy and M.~Pl\"umer, preprint LBL--35980, August 1994. \item\label{GGS} V.M.~Galitsky and I.I.~Gurevich, {\em Nuovo Cimento}\/ XXXII (1964) 396; \\ A.H.~S{\o}rensen, \zp{53}{595}{92}. \item\label{D} Yu.L.~Dokshitzer, in preparation. \item\label{BDPS} R.~Baier, Yu.L.~Dokshitzer, S.~Peign\'e and D.~Schiff, in preparation. \end{enumerate} \end{document}
\section{Introduction} The recent discovery of four additional planets around TRAPPIST-1 with masses and radii similar to the Earth's \citep{Gillon.etal:17}, combined with the three already known \citep{Gillon.etal:16}, makes this system of special importance for characterizing terrestrial exoplanetary atmospheres and their evolution. TRAPPIST-1 is an ultracool dwarf (M8V) 12~pc from the Sun, with a mass of $\sim 0.08~M_{\odot}$ and a radius of $R_\star= 0.114~R_{\odot}$. Its seven confirmed planets are in a co-planar system viewed nearly edge-on. All reside very close to the host star at distances from 0.01~AU to 0.063~AU (for comparison, Mercury is at 0.39~AU), with orbital periods from 1.5~days to 20~days. The atmospheres of close-in exoplanets are vulnerable to strong energetic (UV to X-ray) radiation and intense stellar wind conditions that could lead to atmospheric stripping \citep{Lammer.etal:03, Cohen.etal:14, Cohen.etal:15, Garraffo.etal:16}. This is particularly important for planets in the habitable zones of M dwarfs, whose low bolometric luminosities mean that temperate orbits lie very close to the parent star. Even estimates that substantially underestimate the actual XUV emission of TRAPPIST-1 imply that planets b and c could have lost up to 15 Earth oceans, while up to one Earth ocean could have escaped from planet~d \citep{Bolmont.etal:17,Wheatley.etal:17}. Climate models suggest that planet e represents the best chance for the presence of liquid water on its surface \citep{Wolf:17}. The capacity of these planets to have retained any water at all will depend critically on the initial water reservoir and on the erosive action of the stellar wind. Stellar magnetic activity responsible for UV--X-ray emission and the generation of hot, magnetized winds in Sun-like stars is driven largely by stellar rotation. This influence extends from early F spectral types down to M9 and stars in the fully convective regime \citep{Wright.etal:11,Wright.Drake:16}.
Activity increases with rotation rate up to a saturation limit beyond which faster rotation no longer results in further increase. Activity also appears to decline in very low mass stars and brown dwarfs with spectral types later than M9 \citep{Berger.etal:10}. $H_{\alpha}$ observations of TRAPPIST-1 \citep{Reiners.Basri:10} place its magnetic activity in the saturation regime, consistent with its M8 spectral type and short rotation period of $P_{rot}=3.3$~days \citep[][recently revised from 1.4~days; \citealt{Gillon.etal:17}]{Luger.etal:17}. X-ray observations \citep{Cook.etal:14, Wheatley.etal:17} confirm it has a hot corona with a ratio of X-ray to bolometric luminosity $L_X/L_{bol} = 2$--$4 \, \times10^{-4}$---within the observed scatter around the saturation limit of $L_X/L_{bol}\sim 10^{-3}$ \citep{Wright.etal:11}. TRAPPIST-1 is then fully expected to have a solar-like wind consistent with the observed spin-down of fully convective M dwarfs \citep[e.g.,][]{Irwin.etal:11} that could be a destructive agent of planetary atmospheric loss. In earlier work, we have used a state-of-the-art magnetohydrodynamic (MHD) code to model the stellar winds and magnetospheres of systems around M dwarfs and studied the space environment and atmospheric impact for planets in the ``habitable zone'' \citep{Cohen.etal:14, Cohen.etal:15, Garraffo.etal:16}. With its planets even closer in to the star than the case of Proxima and Proxima b considered by \citet{Garraffo.etal:16}, the TRAPPIST-1 system is sufficiently different to warrant further study. Here, we use a similar technique to model the space weather conditions of the planets around TRAPPIST-1 and make the further important step of computing the response of the planetary magnetospheric structure to the stellar wind. \section{Magnetohydrodynamic Modeling} \subsection{Method} We use the {\it BATS-R-US} MHD code \citep{Vanderholst.etal:14} to model the TRAPPIST-1 stellar system.
The simulation results are obtained using the Alfv\'en Wave Solar Model \citep[AWSoM;][]{Vanderholst.etal:14}, which is the Solar Corona (SC) module of the {\it BATS-R-US} MHD code. The model is driven by photospheric stellar magnetic field data and solves the set of non-ideal MHD equations on a spherical grid. The MHD equations include the conservation of mass, momentum, magnetic flux and energy. In order to provide a hot corona and stellar wind acceleration, two additional equations are solved for the counter-propagating Alfv\'en waves along the two opposite directions of magnetic field lines, where the model accounts for the dissipation of energy as a result of a turbulent cascade. The two equations for the two Alfv\'en waves are coupled to the energy equation via a source term, and to the momentum equation via an additional wave pressure term. In the energy equation, the code also accounts for thermodynamic effects, such as radiative cooling and electron heat conduction. Due to the lack of information about the level of MHD turbulence in other stars such as M dwarfs with strong magnetic fields, the model embeds a scaling relation between the stellar field and the total heating via the relation between the observed total magnetic flux and X-ray flux from solar features and stars \citep{Pevtsov.etal:03}. Thus, the parameter that controls the amount of heat flux, $L_{perp}$ in \cite{Vanderholst.etal:14}, scales with the square root of the average magnetic field on the stellar surface. We scale this parameter for the M-dwarf star with respect to the validated value for the solar case. For full details of the model and the references for the theoretical models which are implemented, we refer the reader to \cite{Vanderholst.etal:14}. The model uses a map of the two-dimensional surface distribution of the stellar radial magnetic field (a ``magnetogram'') as the inner boundary condition.
For stars, these magnetograms are typically obtained using high-resolution spectropolarimetry and the Zeeman--Doppler Imaging (ZDI) technique \citep{Semel:80,Donati.Brown:97}. The initial condition for the three-dimensional magnetic field is obtained by calculating the analytical solution to Laplace's equation assuming that the field is potential, i.e., a static magnetic field with no currents forcing its change. Once the MHD solution starts to evolve, the currents represented by the evolving stellar wind begin to affect the initial, static magnetic field until a steady state is achieved. By including all the terms above, the stellar input parameters for the mass, radius, and rotation period, as well as the photospheric field data, the model provides a self-consistent, steady-state solution for the hot corona and stellar wind from the stellar chromosphere out to the extent of the adopted model grid. The models presented here employed adaptive mesh refinement with a maximum resolution of $0.05 R_\star$. \citet{Evans.etal:08} compared different models for coronal heating and wind acceleration, including empirical, semi-empirical, and Alfv\'en wave heating. They concluded that the only type of models that provide good agreement with coronal signatures are those with Alfv\'en wave heating, as employed by the AWSoM model. This numerical approach is currently used for space weather forecasting in the solar system, and has been validated against observations in several works \citep[e.g.,][]{Meng.etal:15,Alvarado-Gomez.etal:16a,Alvarado-Gomez.etal:16b}. Furthermore, AWSoM has been tested and validated against extreme ultraviolet observations of the Sun and its immediate vicinity \citep{Vanderholst.etal:14}, which is the regime we wish to explore in this work for the close-in planets of TRAPPIST-1. In order to study the interaction between the extreme stellar wind and the upper atmosphere of one of the TRAPPIST-1 planets, we use the Global Magnetosphere module of {\it BATS-R-US}.
This module is driven by the upstream stellar wind conditions as extracted from the AWSoM model. \subsection{Calculations} Unfortunately, TRAPPIST-1 is too faint ($M_{\rm V} = 18.4$) to obtain ZDI maps with current instrumentation. However, the average magnetic field strength was measured using Zeeman broadening to be $600$~G \citep{Reiners.Basri:10}. Empirically, the distribution of the surface magnetic field at a given spectral type is found to depend mainly on rotation rate \citep{Vidotto.etal:14b,Reville.etal:15a,Garraffo.etal:15}. To model the system we can therefore use as a proxy the available ZDI magnetogram of the star whose parameters are most similar to those of TRAPPIST-1. We adopt the magnetogram of GJ~3622 \citep{Morin.etal:10}, an M6.5 dwarf with a rotation period of 1.5~days, shown in Figure~\ref{fig:magnetogram}. In addition, we have performed simulations for the recently revised rotation period of 3.3~days \citep{Luger.etal:17} and find the results are unaltered. This is expected, since the magnetic field strength estimate is unchanged and other effects of fast rotation, like magnetic field wrapping, only become important at rotation periods shorter than about a day. We use the observed relative orientation of the magnetic fields with respect to the rotation axis. The range of mean surface field strengths allowed by the \citet{Reiners.Basri:10} measurement is about 200--800~G. In order to understand the influence of the mean surface magnetic field strength on the results, we probe two magnetic field scalings of the magnetogram, one matching the $\sim$600~G surface field measurement, and one with half that value. As a test, we also computed models for the magnetogram of the very late dwarf VB~10 \citep{Morin.etal:10} using the TRAPPIST-1 rotation period. While VB~10 has the same M8 spectral type as TRAPPIST-1, its own rotation period is currently uncertain, with values of 0.52 and 0.69 days favored.
The magnetogram was reconstructed for the latter but is consequently subject to considerable uncertainty. We found wind conditions at the TRAPPIST-1 planetary orbits to be quite similar to those of our GJ~3622 proxy model. The reason is that the dominant factor responsible for the extreme space weather environment in these kinds of systems is the high plasma density and pressure in which the close-in planets reside, which in turn depend on the stellar magnetic field strength. For VB~10, this is similar to that of our proxy. We do not discuss the VB~10 results further, and instead concentrate on our TRAPPIST-1 proxy simulations. The MHD model of the stellar corona, wind, and magnetic field of TRAPPIST-1 was driven using its measured mass, radius, and rotation period: $M = 0.08~M_{\odot}$, $R = 0.114~R_{\odot}$, and $P_{rot} = 1.4$~days, respectively. From the resulting three-dimensional wind structure, illustrated in Figure~\ref{fig:3d}, we extracted the wind pressure values over all the planetary orbits. The semi-major axes are known and range from 0.011~AU to 0.063~AU, all the eccentricities are constrained to be smaller than 0.085, and the inclinations of the orbits with respect to the observer's line of sight are nearly 90 degrees. We assume that this very nearly coplanar system of planets has an orbital axis aligned with the star's rotation axis. We therefore model the seven circular orbits with their observed planetary parameters on the equatorial plane. All the planets detected around TRAPPIST-1 have Earth-like masses, and we examine planetary magnetic fields with equatorial strengths of 0.1~G and 0.5~G that bracket the present-day terrestrial value of 0.3~G. We are confident that we have produced the most realistic wind model one can currently make for TRAPPIST-1. Mass loss and angular momentum loss rates can be used to calculate spin-down timescales.
We find mass loss rates of $\sim 3\times 10^{-14}~M_\odot$~yr$^{-1}$ and angular momentum loss rates of $\sim 6 \times 10^{30}$~erg, which are expected for a rapidly rotating M dwarf and consistent with the currently quite uncertain picture of their rotational evolution timescales \citep[see, e.g.,][]{Basri.Marcy:95,West.etal:08,Irwin.etal:11}. To assess the magnetospheric response of the TRAPPIST-1 planets to the stellar wind using the Global Magnetosphere module, we examine the case of TRAPPIST-1~f. This is the central planet of the three potentially habitable planets (e, f, and g). We extract the wind conditions at one sub-Alfv\'enic point and one super-Alfv\'enic point along the orbit. Once the upstream conditions are set at the outer boundary of the simulation domain, a steady-state solution for the magnetosphere is obtained. The inner boundary is set at $r=2R_{\oplus}$, and the boundary conditions we assume here are the same as those used in Earth magnetosphere simulations. A model for the Ionospheric Electrodynamics is also used to better set the velocity at the boundary, as described in \cite{Cohen.etal:14}. \section{Results and Discussion} Figure~\ref{fig:3d} illustrates the three-dimensional wind speed and density for TRAPPIST-1 from our simulations corresponding to the $600$~G observed average magnetic field strength. The seven known planetary orbits are plotted, together with the three-dimensional Alfv\'en surface. Wind speeds reach close to 1400~km~s$^{-1}$ and are only slightly higher than the 800--900~km~s$^{-1}$ typical of the fast solar wind \citep{McComas.etal:07}. In contrast, the densities of the plasma through which these planets travel reach $10^4$--$10^5$ times the solar wind density at 1~AU. In addition, the planets lie much closer to the Alfv\'en surface than in the solar system. The solar wind Alfv\'enic critical point generally lies between 6 and 20~$R_\odot$ ($\approx$0.03--0.1~AU) \citep{DeForest.etal:14}---well within the orbit of Mercury.
Instead, all but the outermost TRAPPIST-1 planets spend a large fraction of their orbits in the sub-Alfv\'enic regime, crossing the Alfv\'en surface four times over their short orbital periods ($< 13$~days). In Figure~\ref{fig:2d} we show the total ambient pressure (magnetic plus dynamic, neglecting thermal and orbital terms that are at least an order of magnitude smaller), $P_{\rm tot} = B^2/(4 \pi)+ \rho U^{2}$, where $B$ is the magnetic field strength, $\rho$ is the plasma mass density, and $U$ is the wind speed, normalized to that of the solar wind at Earth for each magnetic field scaling. We also show the intersection of the Alfv\'en surface with the orbital plane and the seven detected orbits. The total pressure at these close-in orbits ranges from 3 to 6 orders of magnitude higher than the solar wind pressure at 1~AU. In addition, even though the wind speeds are comparable to those of the solar wind, the extremely high densities of the plasma around these close-in planets, shown in Figure~\ref{fig:3d}, mean that the ambient dynamic wind pressure they are exposed to is 3--4 orders of magnitude larger than the solar wind pressure at Earth. Both the total and the dynamic pressure over the seven orbits, for each of the two explored magnetic field scalings, are illustrated in more detail in the top panel of Figure~\ref{fig:orbits}. For the closer-in planets, the magnetic pressure dominates over the dynamic pressure, while the converse tends to be true for the outermost planets. In addition to the extreme pressure, we see that most orbits go through magnetic and dynamic pressure variations of up to an order of magnitude when crossing the current sheet in the vicinity of the magnetic equatorial plane. For planet b, with an orbital period of only 1.51 days \citep{Gillon.etal:17}, this happens on a timescale of only 3--4 hours.
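The pressure normalization used here can be illustrated with a short numerical sketch. This is not part of the simulation pipeline described above; the input values below (field strengths, densities, speeds) are assumed, order-of-magnitude numbers chosen only to reproduce the scaling of $P_{\rm tot} = B^2/(4\pi) + \rho U^2$ in cgs units.

```python
# Illustrative sketch (not the authors' pipeline): evaluate the total
# wind pressure P_tot = B^2/(4*pi) + rho*U^2, normalized to a nominal
# solar-wind value at 1 AU. All inputs are assumed cgs numbers.
import math

def total_pressure(B, rho, U):
    """Magnetic plus dynamic pressure in dyn/cm^2 (cgs units)."""
    return B**2 / (4.0 * math.pi) + rho * U**2

m_p = 1.67e-24  # proton mass [g]

# Nominal solar wind at 1 AU: B ~ 5e-5 G, n ~ 5 cm^-3, U ~ 400 km/s.
p_sw = total_pressure(5e-5, 5.0 * m_p, 4.0e7)

# Assumed close-in conditions: density 10^4-10^5 times solar,
# speed ~1000 km/s, field ~1e-2 G (illustrative values only).
p_planet = total_pressure(1e-2, 5e4 * 5.0 * m_p, 1.0e8)

print(f"normalized total pressure: {p_planet / p_sw:.1e}")
```

With these assumed inputs the normalized pressure comes out several orders of magnitude above unity, in line with the 3 to 6 orders of magnitude quoted above.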
The most stable total pressure is along the orbits of planet d for the 600~G stellar field and planet c for the 300~G field, for which the large increase in dynamic pressure when crossing the current sheet compensates for the dip in magnetic pressure. One diagnostic of the effect of wind conditions on a magnetized planet is the magnetopause location, whose standoff distance from the planet can be approximated by assuming pressure balance between the planetary magnetic pressure and the wind total pressure \citep[e.g.,][]{Schield:69,Gombosi:04} $$R_{\rm mp}/R_{\rm planet}=[B_{\rm p}^2/(4\pi P_{\rm tot})]^{1/6},$$ where $R_{\rm mp}$ is the radius of the magnetopause, $R_{\rm planet}$ is the radius of the planet, $B_{\rm p}$ is the planetary equatorial magnetic field strength, and $P_{\rm tot}$ is the ram pressure of the stellar wind combined with the stellar magnetic field pressure. The magnetopause distance as a function of orbital phase for the seven orbits considered, and for the two stellar magnetic field scalings, is illustrated in Figure~\ref{fig:orbits} (bottom panel). If these planets have magnetospheres, they would be much smaller than that of the Earth, which has a standoff distance of $\sim$10~$R_{\oplus}$ \citep{Pulkkinen.etal:07}. However, according to our simulations, most of the TRAPPIST-1 planets reside in the sub-Alfv\'enic regime for large fractions of their orbital periods. For this reason, we go a step further and simulate the wind--exoplanet interaction directly using a global magnetosphere model. Figure~\ref{fig:GM} shows the magnetospheric structure of TRAPPIST-1~f as calculated using the Global Magnetosphere module of {\it BATS-R-US}. TRAPPIST-1~f provides a representative case of the three potentially habitable planets e, f, and g, and from Figure~\ref{fig:orbits} it can be seen that the stellar wind conditions around the three are quite similar.
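The standoff-distance estimate above can likewise be sketched in a few lines. This is an illustrative calculation under assumed inputs, not the simulation itself; the reference total pressure of $1.5\times10^{-8}$~dyn~cm$^{-2}$ is a nominal solar-wind value at 1~AU, and the enhancement factor is a stand-in for the simulated conditions.

```python
# Minimal sketch of the magnetopause standoff estimate,
# R_mp/R_planet = [B_p^2 / (4*pi*P_tot)]^(1/6), in cgs units.
# Input values are assumptions chosen only to show the scaling.
import math

def standoff(B_p, P_tot):
    """Magnetopause standoff in planetary radii (pressure balance)."""
    return (B_p**2 / (4.0 * math.pi * P_tot)) ** (1.0 / 6.0)

# Earth-like check: B_p ~ 0.3 G against a nominal solar-wind total
# pressure ~1.5e-8 dyn/cm^2 gives a standoff of order 10 R_planet.
print(f"Earth-like:    {standoff(0.3, 1.5e-8):.1f} R_p")

# Same 0.3 G field against a total pressure 10^4 times larger
# shrinks the magnetosphere by a factor of 10^(4/6), about 4.6.
print(f"TRAPPIST-like: {standoff(0.3, 1.5e-4):.1f} R_p")
```

The 1/6 exponent is the key point: even a four-orders-of-magnitude jump in ambient pressure only compresses the magnetopause by a factor of a few, which is why the standoff distances in the bottom panel of Figure~\ref{fig:orbits} remain of order a few planetary radii rather than collapsing entirely.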
Here, we make the specific assumption that the planet is magnetized and that the planetary field is similar to that of the Earth, as no data are available to constrain the planetary field. Planet f resides in a region which is dominated by strong radial components of both the stellar wind velocity and magnetic field. The extreme wind pressure (dynamic, and magnetic in the case of the sub-Alfv\'enic regions) opens the planetary field all the way to the planetary surface, creating what is essentially a very large polar cap (open field region) that extends over most of the planet. This is a new regime not experienced by solar system planets: there is no magnetopause at which the planetary field pressure balances the wind pressure. Instead, stellar wind particles can constantly precipitate directly down open field lines onto the atmosphere. The concept of atmospheric protection by a planetary magnetic field does not hold here and is likely not to hold in the conventional sense for the TRAPPIST-1 planets. The TRAPPIST-1 system represents a new challenge to atmospheric evolution and survival on close-in planets around very low mass stars. \acknowledgments CG thanks Rakesh K. Yadav for useful comments and discussions. CG was supported by SI Grand Challenges grant ``Lessons from Mars: Are Habitable Atmospheres on Planets around M Dwarfs Viable?''. JJD was supported by NASA contract NAS8-03060 to the {\it Chandra X-ray Center}. OC and SPM are supported by NASA Astrobiology Institute grant NNX15AE05G. JDAG was supported by {\it Chandra} grants AR4-15000X and GO5-16021X. This work was carried out using the SWMF/BATSRUS tools developed at The University of Michigan Center for Space Environment Modeling (CSEM) and made available through the NASA Community Coordinated Modeling Center (CCMC). Simulations were performed on NASA's PLEIADES cluster under award SMD-16-6857.
\section{Introduction} Suppose $X$ is a smooth algebraic curve, let $\Jac(X)$ denote the Jacobian of $X$, and let $\Abelj{q} : X \to \Jac(X)$ denote the Abel--Jacobi map with basepoint $q\in X$. The Manin--Mumford conjecture, now a theorem due to Raynaud~\cite{R}, states that if $X$ has genus $g\geq 2$ then the image $\Abelj{q}(X)$ intersects only finitely many torsion points of $\Jac(X)$. \subsection{Statement of results} The setup above makes sense when the algebraic curve is replaced with a metric graph. Say a metric graph $\Gamma$ satisfies the {\em Manin--Mumford condition} if the image of the Abel--Jacobi map $\Abelj{q} : \Gamma \to \Jac(\Gamma)$ intersects only finitely many torsion points of $\Jac(\Gamma)$, for every choice of basepoint $q\in \Gamma$. As with algebraic curves, the interesting case to consider is when $\Gamma$ has genus $g\geq 2$. \begin{thm}[Conditional uniform Manin--Mumford bound] Let $\Gamma$ be a connected metric graph of genus $g\geq 2$. If the set of torsion points $ \Abelj{q}(\Gamma)\cap \Jtors{\Gamma} $ is finite, then we have the uniform bound \[ \#(\Abelj{q}(\Gamma)\cap \Jtors{\Gamma}) \leq 3g-3.\] \end{thm} Unlike the case of algebraic curves, for a metric graph the genus condition $g\geq 2$ is not sufficient to imply that $\#(\Abelj{q}(\Gamma)\cap \Jtors{\Gamma})$ is finite. On a graph with unit edge lengths the degree-zero divisor classes supported on vertices form a finite abelian group, known as the {\em critical group} of the graph (see Section~\ref{subsec:critical-group}). In particular, vertex-supported divisor classes are always torsion. \begin{obs} Suppose $\Gamma = (G,\ell)$ is a metric graph of genus $g\geq 2$ whose edge lengths are all rational, i.e. $\ell(e) \in \mathbb{Q}_{>0}$ for all $e \in E(G)$. Then $\Gamma$ does not satisfy the Manin--Mumford condition.
\end{obs} We say that a property holds for a {\em very general} point of some (real) parameter space if it holds outside of a countable collection of codimension-$1$ families. Recall that a graph $G$ is {\em biconnected} (or {\em two-connected}) if $G$ is connected after deleting any vertex. \begin{thm}[Generic tropical Manin--Mumford] Let $G$ be a finite connected graph of genus $g\geq 2$. If $G$ is biconnected, then for a very general choice of edge lengths $\ell : E(G) \to \mathbb{R}_{>0}$, the metric graph $\Gamma = (G,\ell)$ satisfies the Manin--Mumford condition. \end{thm} Say a metric graph $\Gamma$ satisfies the {\em (generalized) Manin--Mumford condition in degree $d$} if the image of the degree $d$ Abel--Jacobi map \begin{align*} \Abeljh{D}{d} : \Gamma^{d} &\to \Jac(\Gamma) \\ (p_1,\ldots,p_d) &\mapsto [p_1 + \cdots + p_d - D] \end{align*} intersects only finitely many torsion points of $\Jac(\Gamma)$, for every choice of effective, degree $d$ divisor class $[D]$. When $d = 1$ this is the Manin--Mumford condition on $\Gamma$. If the generalized Manin--Mumford condition holds in degree $d$, then it also holds in degree $d'$ for any $1\leq d' \leq d$. When $d \geq g$ and $g \geq 1$, the generalized Manin--Mumford condition cannot hold, since the higher Abel--Jacobi map will be surjective and $\Jtors{\Gamma}$ is infinite. \begin{thm}[Conditional uniform Manin--Mumford bound in higher degree] Let $\Gamma$ be a connected metric graph of genus $g\geq 1$. If $\Gamma$ satisfies the Manin--Mumford condition in degree $d$, then \[ \#(\Abeljh{D}{d}(\Gamma^{d})\cap \Jtors{\Gamma}) \leq \binom{3g-3}{d} .\] \end{thm} Recall that the {\em girth} of a graph is the minimal length of a cycle; this number constrains the degrees $d$ in which the degree $d$ Manin--Mumford condition can hold. Note that if $G$ has genus $\geq 2$ and girth $1$, i.e. $G$ has a loop edge, then $G$ cannot be biconnected. \begin{obs} Let $G$ be a finite connected graph with girth $\gamma$.
Then for any choice of edge lengths the metric graph $\Gamma = (G,\ell)$ does not satisfy the generalized Manin--Mumford condition in degree $d\geq \gamma$. \end{obs} We define the {\em independent girth} $\gamma^\mathrm{ind}$ of a graph as \[ \gamma^\mathrm{ind}(G) = \min_{C} \left( \# E(C) + 1 - h_0(G\backslash E(C)) \,\right)\] where the minimum is taken over all cycles $C$ in $G$, and $h_0$ denotes the number of connected components. Since the (usual) girth is by definition ${\gamma(G) = \min_C (\# E(C)) }$ and $h_0 \geq 1$, we have the inequality $\gamma^\mathrm{ind} \leq \gamma$. The independent girth is invariant under subdivision of edges, so it is well-defined for a metric graph. The independent girth may be expressed in terms of the cographic matroid of $G$, see Section~\ref{subsec:matroids}. Say a graph $G$ is {\em Manin--Mumford finite in degree $d$} if for very general edge lengths $\ell$, the metric graph $(G,\ell)$ satisfies the degree $d$ Manin--Mumford condition. \begin{thm}[Generic tropical Manin--Mumford in higher degree] Let $G$ be a finite connected graph of genus $g\geq 1$ with independent girth $\gamma^\mathrm{ind}$. Then $G$ is Manin--Mumford finite in degree $d$ if and only if ${ 1 \leq d < \gamma^\mathrm{ind} }$. \end{thm} \subsection{Previous work} Faltings's theorem (previously the Mordell conjecture) states that a smooth curve of genus $g \geq 2$ defined over a number field has only finitely many rational points, i.e. points whose coordinates lie in that field. By analogy with the Mordell conjecture, Manin and Mumford conjectured that a smooth algebraic curve of genus $2$ or more, embedded in its Jacobian via an Abel--Jacobi map, contains only finitely many torsion points. The Manin--Mumford conjecture was proved by Raynaud \cite{Ray}, and this result inspired several generalizations concerning torsion points in abelian varieties.
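To make the two girth notions above concrete, the following sketch evaluates $\gamma$ and $\gamma^{\rm ind}$ directly from their definitions for a small example, a four-vertex graph with five edges. The edge labeling, cycle list, and the union-find component count are illustration choices, not part of the paper.

```python
# Sketch: girth and independent girth of an assumed example graph
# (4 vertices, 5 edges), computed from the definitions
#   girth     = min over cycles C of #E(C)
#   ind girth = min over cycles C of (#E(C) + 1 - h_0(G \ E(C)))
# where h_0 counts connected components after deleting C's edges.

EDGES = {"a": (1, 4), "b": (4, 3), "c": (1, 3), "d": (1, 2), "e": (2, 3)}
CYCLES = [{"a", "b", "c"}, {"a", "b", "d", "e"}, {"c", "d", "e"}]
VERTICES = {1, 2, 3, 4}

def components(removed):
    """Number of connected components of G minus the given edges."""
    parent = {v: v for v in VERTICES}
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    for e, (u, w) in EDGES.items():
        if e not in removed:
            parent[find(u)] = find(w)
    return len({find(v) for v in VERTICES})

girth = min(len(C) for C in CYCLES)
ind_girth = min(len(C) + 1 - components(C) for C in CYCLES)

assert girth == 3
assert ind_girth == 2  # here every cycle has cographic rank 2
```

In this example the strict inequality $\gamma^{\rm ind} < \gamma$ occurs because deleting any cycle's edges isolates a vertex, so $h_0 \geq 2$ for every cycle.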
After Raynaud's work, it was still unknown whether the number of torsion points on a genus $g$ curve could be bounded as a function of $g$; this became known as the ``uniform Manin--Mumford conjecture.'' Baker and Poonen \cite{BP} extended Raynaud's result quantitatively by proving strong bounds on the number of torsion points that arise on a given curve as the basepoint for the Abel--Jacobi map varies. In particular, they showed that a curve $X$ of genus $g\geq 2$ has only finitely many choices of basepoint $q$ such that $\#(\Abelj{q}(X) \cap \Jtors{X}) > 2$. Katz, Rabinoff, and Zureick-Brown \cite{KRZB} used tropical methods to prove a uniform bound on the number of torsion points on an algebraic curve of fixed genus which satisfies an additional technical constraint on the reduction type. K\"uhne~\cite{Kuh} (in characteristic zero) and Looper, Silverman, and Wilms~\cite{LSW} (in positive characteristic) recently proved uniform bounds on the number of torsion points on an algebraic curve. Regarding the higher-degree Manin--Mumford conjecture, Abramovich and Harris \cite{AH} studied the question of when the locus $W_d(X)$ of effective degree $d$ divisor classes on an algebraic curve $X$ contains an abelian subvariety of $\Jac(X)$. This question was studied further by Debarre and Fahlaoui \cite{DF}. \subsection{Notation} Here we collect some notation which will be used throughout the paper.
\begin{tabular}{ll} $\Gamma$ & a compact, connected metric graph \\ $(G,\ell)$ & a combinatorial model for a metric graph, where $\ell : E(G) \to \mathbb{R}_{>0}$ \\ & is a length function on edges of $G$ \\ $G$ & a finite, connected combinatorial graph (loops and parallel edges allowed) \\ $E(G)$ & edge set of $G$ \\ $V(G)$ & vertex set of $G$ \\ $\cT(G)$ & set of spanning trees of $G$ \\ $\cC(G)$ & set of cycles of $G$ \\ \end{tabular} \begin{tabular}{ll} $D$ & a divisor on a metric graph \\ $\PL_\mathbb{R}(\Gamma)$ & set of piecewise linear functions on $\Gamma$ \\ $\PL_\mathbb{Z}(\Gamma)$ & set of piecewise $\mathbb{Z}$-linear functions on $\Gamma$ \\ $\Divisor(f)$ & the principal divisor associated to a piecewise ($\mathbb{Z}$-)linear function $f$ \\ $\Div(\Gamma)$ & divisors (with $\mathbb{Z}$-coefficients) on $\Gamma$ \\ $\Div^d(\Gamma)$ & divisors of degree $d$ on $\Gamma$ \\ \end{tabular} \begin{tabular}{ll} $[D]$ & the set of divisors linearly equivalent to $D$ \\ & (i.e. the linear equivalence class of the divisor $D$) \\ $|D|$ & the set of effective divisors linearly equivalent to $D$ \\ \end{tabular} \begin{tabular}{ll} $\Pic^d(\Gamma)$ & divisor classes of degree $d$ on $\Gamma$ \\ $\Eff^d(\Gamma)$ & effective divisor classes of degree $d$ on $\Gamma$ \\ $\Jac(\Gamma)$ & the Jacobian of $\Gamma$, $\Jac(\Gamma) = \Div^0(\Gamma) / \Divisor(\PL_\mathbb{Z}(\Gamma)) $ \\ \end{tabular} \section{Graphs and matroids} A {\em graph} $G = (V,E)$ consists of a finite set of vertices $V = V(G)$ and a finite set of edges $E = E(G)$, equipped with two maps $head: E\to V$ and $tail : E \to V$, which we abbreviate by $e^+ = head(e)$ and $e^- = tail(e)$. We say an edge $e$ lies between vertices $v,w$ if $(e^+,e^-) = (v,w)$ or $(e^+,e^-) = (w,v)$. We allow loops (i.e. $e$ such that $e^+ = e^-$) and multiple edges. The {\em valence} of a vertex $v \in V$ is $$ \val(v) = \#\{ e \in E : e^+ = v\} + \#\{e \in E : e^- = v \}. 
$$ Note that a loop edge at $v$ contributes $2$ to the valence of $v$. Given a subset of edges $A \subset E$, we let $G|A$ denote the subgraph of $G$ whose vertex set is $V(G)$ and whose edge set is $A$. We let $G \setminus A$ denote the subgraph whose vertex set is $V(G)$ and whose edge set is $E \setminus A$. For a connected graph $G$, the {\em genus} is defined as $g(G) = \#E(G) - \#V(G) + 1$. A {\em spanning tree} of $G$ is a subgraph $T$ with the same vertex set and whose edge set is a subset of the edges of $G$, such that $T$ is connected and has no cycles. The number of edges in any spanning tree of $G$ is $\#V(G) - 1 = \#E(G) - g(G)$. \subsection{Critical group} \label{subsec:critical-group} Fix a graph $G = (V,E)$ with $n$ vertices, enumerated $\{v_1,\ldots,v_n\}$. The {\em Laplacian matrix} of $G$ is the $n\times n$ matrix $L$ whose entries are $$ L_{ij} = \begin{cases} \#(\text{edges between $v_i$ and $v_j$}) &\text{if }i\neq j \\ - \sum_{k \neq i} \#(\text{edges from $v_i$ to $v_k$}) &\text{if }i=j. \end{cases} $$ (Other sources, e.g. \cite{BN}, use the opposite sign convention for the graph Laplacian.) The matrix $L$ is symmetric and its rows and columns sum to zero. The Laplacian defines a linear map $\mathbb{Z}^V \to \mathbb{Z}^V$, whose image has rank $n-1$ if $G$ is connected. Let $\epsilon: \mathbb{Z}^V \to \mathbb{Z}$ denote the linear map which sums all coordinates. The image of the Laplacian matrix $L$ lies in the kernel of $\epsilon$. The {\em critical group} $\Jac(G)$ of a connected graph $G$ is the abelian group defined as $$ \Jac(G) = \ker(\epsilon) / \im(L) . $$ The critical group is finite, and its cardinality is the number of spanning trees of $G$ \cite[Theorem 6.2]{Big}. For more on the critical group, see \cite{BN,Big} and the references therein. \begin{eg} Let $G$ be the theta graph, which has two vertices connected by three edges.
\begin{figure}[h] \centering \raisebox{-0.5\height}{ \begin{tikzpicture} \draw (-1,0) -- (1,0); \draw[rounded corners=5] (-1,0) -- (-0.5,-0.5) -- (0.5,-0.5) -- (1,0); \draw[rounded corners=5] (-1,0) -- (-0.5,0.5) -- (0.5,0.5) -- (1,0); \node at (-1,0) [circle,fill,scale=0.5] {}; \node at (1,0) [circle,fill,scale=0.5] {}; \path (-1,0) node[left] {$x$}; \path (1,0) node[right] {$y$}; \end{tikzpicture} } \caption{Theta graph.} \label{fig:jacobian-size3} \end{figure} The Laplacian matrix of this graph is $ \begin{pmatrix} -3 & 3 \\ 3 & -3 \end{pmatrix} $, and $\Jac(G) \cong \mathbb{Z} / 3\mathbb{Z}$. \end{eg} \begin{eg} Let $G$ be the Wheatstone graph shown below, which has four vertices and five edges. \begin{figure}[h] \centering \begin{tikzpicture} \draw (-1,0) -- (0,1); \draw (0,1) -- (1,0); \draw (-1,0) -- (1,0); \draw (-1,0) -- (0,-1); \draw (0,-1) -- (1,0); \node at (-1,0) [circle,fill,scale=0.5] {}; \node at (-1,0) [left] {$v_1$}; \node at (1,0) [circle,fill,scale=0.5] {}; \node at (1,0) [right] {$v_3$}; \node at (0,-1) [circle,fill,scale=0.5] {}; \node at (0,-1) [left] {$v_2$}; \node at (0,1) [circle,fill,scale=0.5] {}; \node at (0,1) [right] {$v_4$}; \end{tikzpicture} \caption{Wheatstone graph.} \end{figure} The Laplacian matrix of this graph is $$ \begin{pmatrix} -3 & 1 & 1 & 1 \\ 1 & -2 & 1 & 0 \\ 1 & 1 & -3 & 1 \\ 1 & 0 & 1 & -2 \end{pmatrix}, $$ and its critical group is $\Jac(G) \cong \mathbb{Z} / 8\mathbb{Z}$. \end{eg} We now give additional notation for constructing $\Jac(G)$, which connects the critical group to the Jacobian of a metric graph (see Section~\ref{subsec:int-laplacian}). Let the {\em divisor group} $\Div(G)$ be the free abelian group on the set of vertices of $G$; $$\Div(G) = \{ \sum_{v_i \in V} a_i v_i : a_i\in \mathbb{Z}\} ,$$ and let $$\PL_\mathbb{Z}(G) = \{ f : V \to \mathbb{Z}\} $$ denote the set of integer-valued functions on vertices of $G$, which also forms a free abelian group. 
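The group orders in the two examples above can be checked mechanically: since the cardinality of the critical group is the number of spanning trees, the matrix-tree theorem gives $\#\Jac(G)$ as the absolute value of any cofactor of the Laplacian. A minimal sketch (exact integer arithmetic; this verifies only the order, not the cyclic structure of the groups):

```python
# Sketch: verify #Jac(G) for the theta and Wheatstone graphs via the
# matrix-tree theorem: #Jac(G) = |det of the reduced Laplacian|
# (delete one row and the matching column).

def det(M):
    """Integer determinant by cofactor expansion along the first row;
    fine for the tiny matrices used here."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def jac_order(L):
    reduced = [row[1:] for row in L[1:]]  # drop first row and column
    return abs(det(reduced))

theta = [[-3, 3],
         [3, -3]]
wheatstone = [[-3, 1, 1, 1],
              [1, -2, 1, 0],
              [1, 1, -3, 1],
              [1, 0, 1, -2]]

assert jac_order(theta) == 3        # Jac = Z/3Z
assert jac_order(wheatstone) == 8   # Jac = Z/8Z
```

Recovering the full group structure, not just its order, would require the Smith normal form of the Laplacian rather than a single determinant.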
(The notation $\PL$ is for {\em piecewise linear}; any $f : V\to \mathbb{Z}$ has a unique linear interpolation along the edges.) The set $\{ \mathbf{1}_{v_i} : v_i \in V\}$ forms a basis for $\PL_\mathbb{Z}(G)$ where $$ \mathbf{1}_{v_i}(w) = \begin{cases} 1 &\text{if }w = v_i ,\\ 0 &\text{if }w \neq v_i. \end{cases} $$ The divisor group by definition has the basis $\{ v_i : v_i\in V\}$. Let $\Divisor: \PL_\mathbb{Z}(G) \to \Div(G)$ be the linear map defined by the Laplacian matrix, with respect to the bases defined above. The degree map $\deg :\Div(G) \to \mathbb{Z}$ is defined by $\deg(\sum_i a_i v_i) = \sum_i a_i$. Let $\Div^0(G)$ denote the kernel of the degree map. The critical group of $G$ is $$ \Jac(G) = \Div^0(G) / \Divisor( \PL_\mathbb{Z}(G)) . $$ (Both $\Div(G)$ and $\PL_\mathbb{Z}(G)$ are isomorphic to $\mathbb{Z}^V$. $\Div(G)$ is naturally covariant with respect to graph morphisms $G_1 \to G_2$, while $\PL_\mathbb{Z}(G)$ is naturally contravariant.) \subsection{Metric graphs} A {\em metric graph} is a compact, connected metric space which comes from assigning the path metric to a finite, connected graph whose edges are identified with closed intervals $[0, \ell(e)]$ with positive, real lengths $\ell(e) > 0$. If the metric graph $\Gamma$ comes from a combinatorial graph $G$ by assigning edge lengths $\ell : E(G) \to \mathbb{R}_{>0}$, we say $(G,\ell)$ is a {\em combinatorial model} for $\Gamma$ and we write $\Gamma = (G,\ell)$. A single metric graph generally has many different combinatorial models. The {\em genus} of a metric graph $\Gamma$ is the dimension of the first homology, $$ g(\Gamma) = \dim H_1(\Gamma, \mathbb{R}). $$ Given a combinatorial model $\Gamma = (G,\ell)$, the genus agrees with that of the underlying graph, $$ g(\Gamma) = g(G) = \#E(G) - \#V(G) + 1. $$ The {\em valence} of a point $x$ on a metric graph $\Gamma$, denoted $\val(x)$, is the number of connected components in a sufficiently small punctured neighborhood of $x$.
If $(G,\ell)$ is a model for $\Gamma$, then for any $v \in V(G)$ the valence $\val(v)$ as a vertex of $G$ agrees with $\val(v)$ as a point in the metric graph $\Gamma$. \subsection{Stabilization} \label{subsec:stabilization} The notion of stability is useful for our purposes because questions about Abel--Jacobi maps $\Abelj{}: \Gamma \to \Jac(\Gamma)$ may be reduced to $\Abelj{}: \Gamma' \to \Jac(\Gamma')$ where $\Gamma'$ is a semistable metric graph. (See Section~\ref{subsec:int-laplacian} for discussion of the Abel--Jacobi map.) A connected graph $G$ is {\em stable} if every vertex $v \in V(G)$ has valence at least $3$, and {\em semistable} if every vertex has $\val(v) \geq 2$. A metric graph $\Gamma$ is {\em semistable} if every point $x\in \Gamma$ has valence at least $2$. Equivalently, $\Gamma$ is semistable if it has a model $(G,\ell)$ where $G$ is a semistable graph. We say $(G,\ell)$ is a (semi)stable model for $\Gamma$ if $G$ is (semi)stable. \begin{prop} A semistable metric graph $\Gamma$ with genus $g\geq 2$ has a unique stable model $(G,\ell)$ (i.e. a model such that $G$ is stable). \end{prop} \begin{proof} The unique stable model has vertex set $V(G) = \{ x \in \Gamma : \val(x) \geq 3\}$. The edges $E(G)$ correspond to connected components of $\Gamma \setminus V(G)$, which is isometric to a disjoint union of open intervals of finite length. \end{proof} \begin{prop} \label{prop:stable-edge-bound} Suppose $G$ is a stable graph of genus $g$. Then the number of edges in $G$ is at most $3g-3$. \end{prop} \begin{proof} Since every vertex has valence at least $3$, we have \begin{equation*} \# V(G) \leq \frac1{3} \sum_{v\in V(G)} \val(v) = \frac{2}{3} \cdot \# E(G). \end{equation*} By the genus formula $g = \#E(G) - \#V(G) + 1$, this implies \begin{equation*} \# E(G) = g - 1+ \#V(G) \leq g - 1 + \frac{2}{3}\cdot \#E(G) \end{equation*} which is equivalent to the desired inequality $\# E(G) \leq 3g-3$.
\end{proof} It follows from the previous proposition that a stable graph has genus $g\geq 2$. \begin{prop}[Metric graph stabilization] \label{prop:stabilization} Suppose $\Gamma$ has genus $g\geq 1$. There is a canonical semistable subgraph $\Gamma' \subset \Gamma$ and a retract map $r: \Gamma \to \Gamma'$ such that $r$ is a homotopy inverse to the inclusion $i: \Gamma' \to \Gamma$. \end{prop} We call the subgraph $\Gamma'$ of Proposition~\ref{prop:stabilization} the {\em stabilization} of $\Gamma$, and denote it as $\stable(\Gamma)$. \begin{eg} Figure~\ref{fig:stabilization} shows the stabilization $\Gamma'$ of a metric graph $\Gamma$ of genus two. The retract map $\Gamma \to \Gamma'$ sends a point of $\Gamma$ to the closest point of $\Gamma'$ in the path metric. \begin{figure}[h] \centering \includegraphics[scale=0.25]{unstable} \qquad\qquad\qquad \includegraphics[scale=0.25]{stable} \caption{A metric graph (left) and its stabilization (right).} \label{fig:stabilization} \end{figure} \end{eg} \subsection{Matroids} \label{subsec:matroids} In this section we review the definition of a matroid. In particular, we recall the graphic matroid and cographic matroid associated to a connected graph. Cographic matroids will be useful for understanding the structure of the Jacobian of a metric graph. For a reference on matroids, see \cite{Oxl} or \cite{Kat}. A {\em matroid} $M = (E,\mathcal B)$ is a finite set $E$ equipped with a nonempty collection $\mathcal B \subset 2^E$ of subsets of $E$, called the {\em bases} of the matroid, satisfying the basis exchange axiom: for distinct subsets $B_1, B_2 \in \mathcal B$, there exists some $x \in B_1 \backslash B_2$ and $y \in B_2 \backslash B_1$ such that $(B_1 \backslash x) \cup y \in \mathcal B$. In other words, from $B_1$ we can produce a new basis by exchanging an element of $B_1$ for an element of $B_2$. An {\em independent set} of a matroid $M = (E,\mathcal B)$ is a subset of $E$ which is a subset of some basis. 
A {\em cycle} of $M$ is a subset of $E$ which is minimal among those not contained in any basis, under the inclusion relation. The {\em rank} of a subset $A\subset E$ is the cardinality of a maximal independent set contained in $A$; we denote the rank by ${\rm rk}_M(A)$ or ${\rm rk}(A)$ if the matroid is understood from context. Given a graph $G = (V,E)$, the {\em graphic matroid} $M(G)$ is the matroid on the ground set $E = E(G)$ with bases $\mathcal B = \{E(T) : T \text{ is a spanning tree of } G\}$. An {independent set} in $M(G)$ is a subset of edges which span an acyclic subgraph. (i.e. $h^1(G|A) = 0$.) A cycle in $M(G)$ is a cycle in the graph-theoretic sense, i.e. a subset of edges which span a subgraph homeomorphic to a circle. The graphic matroid $M(G)$ is also known as the {\em cycle matroid} of $G$. \begin{eg} Suppose $G$ is the {\em Wheatstone graph} shown in Figure \ref{fig:wheatstone}. The bases of $M(G)$ are $\{abd, abe, acd, ace, ade, bcd, bce, bde\}$. The cycles are $\{abc, abde, cde\}$. (Here $abc$ is shorthand for the set $\{a,b,c\}$.) \begin{figure}[h] \centering \begin{tikzpicture} \draw (-1,0) -- (0,1) node[midway,above left] {$a$}; \draw (0,1) -- (1,0) node[midway, above right] {$b$}; \draw (-1,0) -- (1,0) node[midway, above] {$c$}; \draw (-1,0) -- (0,-1) node[midway, below left] {$d$}; \draw (0,-1) -- (1,0) node[midway, below right] {$e$}; \node at (-1,0) [circle,fill,scale=0.5] {}; \node at (1,0) [circle,fill,scale=0.5] {}; \node at (0,-1) [circle,fill,scale=0.5] {}; \node at (0,1) [circle,fill,scale=0.5] {}; \end{tikzpicture} \caption{Wheatstone graph.} \label{fig:wheatstone} \end{figure} \end{eg} Given a graph $G = (V,E)$, the {\em cographic matroid} $M^\perp(G)$ is the matroid on the ground set $E = E(G)$ whose bases are complements of spanning trees of $G$. An {independent set} in $M^\perp(G)$ is a set of edges whose removal does not disconnect $G$ (i.e. 
a set $A \subset E$ such that $G \backslash A$ is connected, equivalently $h^0(G\backslash A) = 1$). {An edge set $A\subset E(G)$ is called a {\em cut} of $G$ if $G\backslash A$ is disconnected.} A cycle in $M^\perp(G)$ is a minimal set of edges $A$ such that $h^0(G\backslash A) = 2$; this is called a {\em simple cut} or a {\em bond} of $G$. The cographic matroid is also known as the {\em cocycle matroid} or {\em bond matroid} of $G$. For more on cographic matroids, see \cite[Chapter 2.3]{Oxl}. Note: when discussing the graphic or cographic matroid of a graph $G$, we always use ``cycle of $G$'' to refer to a cycle in the {\em graphic} matroid sense. \begin{eg} Suppose $G$ is the Wheatstone graph, shown in Figure \ref{fig:wheatstone}. The bases of the cographic matroid $M^\perp(G)$ are $\{ac, ad, ae, bc, bd, be, cd, ce\}$. The cycles of $M^\perp(G)$ are $\{ab, acd, ace, bcd, bce, de\}$. \end{eg} \subsection{Girth and independent girth} Recall that the {\em girth} $\gamma = \gamma(G)$ of a graph is the minimal length of a cycle; a {\em cycle} is a subgraph homeomorphic to a circle. In other words, \begin{equation} \label{eq:girth} \gamma(G) = \min_{C \in \mathcal C(G)} \{ \# E(C) \} \end{equation} where $\mathcal C(G)$ denotes the set of cycles of $G$. \begin{dfn} The {\em independent girth} $\gamma^\mathrm{ind}$ of a graph is defined as \begin{equation} \label{eq:ind-girth-rk} \gamma^{\rm ind}(G) = \min_{C \in \mathcal C(G)} \{\, {\rm rk}^\perp(E(C)) \,\} \end{equation} where ${\rm rk}^\perp$ is the rank function of the cographic matroid $M^\perp(G)$. (See Section~\ref{subsec:matroids} for discussion of cographic matroids). If $G$ has genus zero, we let $\gamma^{\rm ind}(G) = \gamma(G) = +\infty$. 
Equivalently, \[ \gamma^{\rm ind}(G) = \min_{C\in \mathcal C(G)} \{\, \# E(C) + 1 - h^0(G\backslash E(C)) \,\} \] where $G\backslash E(C)$ denotes the subgraph obtained by deleting the interior of each edge in $C$, and $h^0$ denotes the number of connected components of a topological space. \end{dfn} \begin{prop} \label{prop:gamind} \begin{enumerate}[(a)] \item For any graph $G$, $\gamma^{\rm ind}(G) \leq \gamma(G)$. \item If $(G,\ell)$ and $(G',\ell')$ are combinatorial models for the same metric graph $\Gamma$, then $\gamma^{\rm ind}(G) = \gamma^{\rm ind}(G')$. \end{enumerate} \end{prop} \begin{proof} (a) The rank function of any matroid satisfies ${\rm rk}(A) \leq \# A$. The claim follows from comparing definitions \eqref{eq:girth} and \eqref{eq:ind-girth-rk}. (b) The independent girth does not change under subdivision of edges, and any two combinatorial models of $\Gamma$ have a common refinement by edge subdivisions. \end{proof} Proposition~\ref{prop:gamind}(b) implies that $\gamma^{\rm ind}$ is a well-defined invariant for a metric graph; given a metric graph $\Gamma$ we have \begin{equation} \label{eq:gamind-metric} \gamma^{\rm ind}(\Gamma) := \gamma^{\rm ind}(G) \quad \text{for any choice of model }\Gamma = (G,\ell). \end{equation} Note that $\gamma^{\rm ind}$ is also invariant under stabilization, i.e. \begin{equation*} \gamma^{\rm ind}(\Gamma) = \gamma^{\rm ind}(\stable \Gamma) . \end{equation*} \begin{eg} Consider Figure~\ref{fig:ind-girth-ex1}. The graph on the left has seven simple cycles; their lengths are $\{ 4,4,4,6,6,6,6\}$, and their ranks in the cographic matroid are all $3$. For this graph, $\gamma = 4$ and $\gamma^{\rm ind} = 3$. After deleting a central edge, the resulting graph on the right has three simple cycles with lengths $\{4, 6, 6\}$ and cographic rank $2$; hence $\gamma = 4$ and $\gamma^{\rm ind} = 2$.
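Both invariants can be computed mechanically from a model. As an illustration, the following sketch computes $\gamma$ and $\gamma^{\rm ind}$ for the Wheatstone graph of Figure~\ref{fig:wheatstone} (the vertex labels $L,T,R,B$ are our own), using the cographic rank of a cycle $C$ in the form $\# E(C) + 1 - h^0(G\backslash E(C))$:

```python
from itertools import combinations

# Girth and independent girth of the Wheatstone graph; the vertex
# labels L (left), T (top), R (right), B (bottom) are our own.
edges = {'a': ('L', 'T'), 'b': ('T', 'R'), 'c': ('L', 'R'),
         'd': ('L', 'B'), 'e': ('B', 'R')}

def n_components(vertices, edge_names):
    # count connected components via union-find
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    for name in edge_names:
        u, w = edges[name]
        parent[find(u)] = find(w)
    return len({find(v) for v in vertices})

def is_cycle(subset):
    # a cycle spans a connected subgraph in which every vertex has degree 2
    verts = {v for name in subset for v in edges[name]}
    deg = {v: 0 for v in verts}
    for name in subset:
        u, w = edges[name]
        deg[u] += 1
        deg[w] += 1
    return all(d == 2 for d in deg.values()) and n_components(verts, subset) == 1

V = {'L', 'T', 'R', 'B'}
cycles = [set(C) for r in range(2, 6)
          for C in combinations(edges, r) if is_cycle(C)]

girth = min(len(C) for C in cycles)
# cographic rank of a cycle C: #C + 1 - (components after deleting C)
ind_girth = min(len(C) + 1 - n_components(V, set(edges) - C) for C in cycles)
print(girth, ind_girth)  # 3 2
```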
\begin{figure}[h] \centering \includegraphics[scale=0.5]{ind-girth-3} \qquad\qquad\qquad \includegraphics[scale=0.5]{ind-girth-2} \caption{Graphs with independent girth $3$ (left) and independent girth $2$ (right).} \label{fig:ind-girth-ex1} \end{figure} \end{eg} \begin{eg} Consider Figure~\ref{fig:ind-girth-ex2}. This graph has $\gamma = 4$ and $\gamma^{\rm ind} = 3$, with the minimum achieved on the $4$-cycle in the middle. After deleting one of the horizontal edges in the middle cycle, the resulting graph has $\gamma = 4$ and $\gamma^{\rm ind} = 4$. \begin{figure}[h] \centering \includegraphics[scale=0.5]{ind-girth-3large} \caption{Graph with girth $4$ and independent girth $3$.} \label{fig:ind-girth-ex2} \end{figure} \end{eg} In general, under edge deletion we have $ \gamma(G\backslash e) \geq \gamma(G)$ since $\mathcal C(G\backslash e) \subset \mathcal C(G)$. The examples above demonstrate that $\gamma^{\rm ind}(G \backslash e)$ can increase or can decrease, relative to $\gamma^{\rm ind}(G)$. \begin{thm} \label{thm:girth-genus-bound} There exists a constant $C$ such that for any stable graph $G$ of genus $g \geq 2$, the girth $\gamma = \gamma(G)$ satisfies \[ \gamma < C \log g .\] \end{thm} \begin{proof} Recall that the {girth} $\gamma$ of a graph $G$ is the minimal length of a (simple) cycle in $G$. Let $v$ be a vertex in $ V(G)$. Let $N_r(v)$ denote the neighborhood of radius $r$ around $v$, in the graph $G$. For any radius $r < \frac12 \gamma$, the neighborhood $N_r(v)$ is a tree (i.e. $N_r(v)$ is connected and acyclic). Recall that $G$ is {stable} if every vertex has valence at least $ 3$. Since $G$ is stable, we may calculate a simple lower bound for the number of edges in $N_r(v)$. Namely, \begin{equation*} \# E(N_r(v)) \geq 3 + 6 + \cdots + 3\cdot 2^{r-1} = 3(2^r - 1) . \end{equation*} This quantity is clearly a lower bound for the total number of edges $\# E(G)$. Moreover, by Proposition~\ref{prop:stable-edge-bound} we have $\# E(G) \leq 3g-3 $. 
Thus \begin{equation*} 3(2^r - 1) \leq \# E(G) \leq 3g-3 \qquad\Rightarrow\qquad 2^r \leq g \end{equation*} for any integer $r < \frac12 \gamma $. Hence \[ 2^{\gamma/2 - 1} < g \qquad\Leftrightarrow\qquad \gamma < {2} \log_2 g + 2 .\] By the assumption $g \geq 2$, this bound implies $ \gamma < {4} \log_2 g $, as desired. \end{proof} \begin{cor} \label{cor:metric-girth-bound} There exists a constant $C$ such that for any metric graph $\Gamma$ of genus $g \geq 2$, the independent girth $\gamma^{\rm ind}$ satisfies $\gamma^{\rm ind} < C \log g$. \end{cor} \begin{proof} Combine Theorem~\ref{thm:girth-genus-bound} with Proposition~\ref{prop:gamind}(a) and \eqref{eq:gamind-metric}. \end{proof} \section{Divisors on metric graphs} In this section we recall the theory of divisors and linear equivalence on metric graphs. On a metric graph $\Gamma$, the {\em divisor group} $\Div(\Gamma)$ is the free abelian group generated by the points of $\Gamma$. We also let $\Div_\mathbb{R}(\Gamma) = \mathbb{R}\otimes_\mathbb{Z} \Div(\Gamma)$. In other words, \begin{align*} \Div(\Gamma) &= \{ \sum_{x\in \Gamma} a_x x : a_x\in \mathbb{Z},\, a_x=0 \text{ for almost all } x \} , \\ \Div_\mathbb{R}(\Gamma) &= \{\sum_{x\in \Gamma} a_x x : a_x \in \mathbb{R},\, a_x=0 \text{ for almost all }x \}. \end{align*} A divisor $D = \sum_{x \in \Gamma} a_x x$ is {\em effective} if $a_x \geq 0$ for every $x$. The {\em degree} map $\deg : \Div_\mathbb{R}(\Gamma) \to \mathbb{R}$ sends $D =\sum_{x \in \Gamma} a_x x$ to $\deg(D) = \sum_{x \in \Gamma} a_x$. \subsection{Real Laplacian} \label{subsec:real-laplacian} A {\em piecewise linear function} on $\Gamma$ is a continuous function $f: \Gamma \to \mathbb{R}$ which is linear on each edge of some combinatorial model, i.e. for some model $\Gamma = (G,\ell)$, the derivative $\frac{d}{dt}f(t)$ is constant on the interior of each edge $e$ in $E(G)$, where $t$ is a length-preserving parameter on $e$.
We say a model $(G,\ell)$ is {\em compatible with} a piecewise linear function $f$ if $f$ is linear on each edge of $G$. We let $\PL_\mathbb{R}(\Gamma)$ denote the set of all piecewise linear functions on $\Gamma$, which has the structure of a vector space over $\mathbb{R}$. The {\em metric graph Laplacian} $\Divisor$ is the $\mathbb{R}$-linear map from $\PL_\mathbb{R}(\Gamma)$ to $\Div_\mathbb{R}(\Gamma)$ defined as follows. For $f \in \PL_\mathbb{R}(\Gamma)$, let $(G,\ell)$ be a model for $\Gamma$ which is compatible with $f$, and let \begin{equation} \Divisor(f) = \sum_{v \in V(G)} a_v v \qquad\text{where}\qquad a_v = \sum_{\substack{e \in E(G) \\ e^+ = v }} f'(t) + \sum_{\substack{e \in E(G) \\ e^- = v}} f'(t), \end{equation} such that in each summand $f'(t) = \frac{d}{dt}f(t)$, the parameter $t$ is directed away from the vertex $v$. Equivalently, the coefficient of $v$ in $\Divisor(f)$ is \begin{align*} a_v &= \sum_{\substack{e \in E(G) \\ e^+ = v }} \frac{f(e^-) - f(e^+)}{\ell(e)} + \sum_{\substack{e \in E(G) \\ e^- = v}} \frac{f(e^+) - f(e^-)}{\ell(e)} \\ &= \sum_{\substack{e \in E(G) \\ e^+ = v }} \frac{f(e^-)}{\ell(e)} + \sum_{\substack{e \in E(G) \\ e^- = v}} \frac{f(e^+)}{\ell(e)} - \left(\sum_{\substack{e \in E(G) \\ e^+ = v \\ \text{or }e^- = v}} \frac{1}{\ell(e)} \right) f(v) \end{align*} For any piecewise linear function $f$ there is a unique way to write $\Divisor(f) = D - E$ where $D$ and $E$ are effective; we call $\zeros(f) = D$ the {\em divisor of zeros} of $f$ and call $\poles(f) = E$ the {\em divisor of poles} of $f$. Note that for any $f\in \PL_\mathbb{R}(\Gamma)$, the divisor $\Divisor(f)$ has degree zero. 
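The coefficient formula above is straightforward to evaluate directly. A minimal sketch, using a theta graph with hypothetical edge lengths $1,2,3$ and boundary values $f(y)=0$, $f(z)=1$:

```python
from fractions import Fraction as F

# Laplacian coefficient a_v = sum over incident edges of
# (f(other endpoint) - f(v)) / length, for f linear on each edge.
# Model: theta graph (two vertices y, z joined by three parallel
# edges) with hypothetical lengths 1, 2, 3 and values f(y)=0, f(z)=1.
edges = [('y', 'z', F(1)), ('y', 'z', F(2)), ('y', 'z', F(3))]
f = {'y': F(0), 'z': F(1)}

def laplacian(edges, f):
    div = {v: F(0) for v in f}
    for u, w, length in edges:
        slope = (f[w] - f[u]) / length   # slope directed away from u
        div[u] += slope
        div[w] -= slope                  # equal and opposite at w
    return div

D = laplacian(edges, f)
print(D['y'], D['z'], sum(D.values()))  # 11/6 -11/6 0
```

As expected, the resulting divisor has degree zero.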
The Laplacian $ \Divisor: \PL_\mathbb{R}(\Gamma) \to \Div_\mathbb{R}(\Gamma)$ fits in an exact sequence $$ 0 \to \mathbb{R} \to \PL_\mathbb{R}(\Gamma) \xrightarrow{\Divisor} \Div_\mathbb{R}(\Gamma) \to \mathbb{R} \to 0, $$ where the image of $\mathbb{R} \to \PL_\mathbb{R}(\Gamma)$ is the set of constant functions on $\Gamma$, and $\Div_\mathbb{R}(\Gamma) \to \mathbb{R}$ is the degree map. In particular, for any points $y,z \in \Gamma$, the divisor $D = z - y$ has degree zero and there is a function $f$ satisfying $\Divisor(f) = z - y$, unique up to an additive constant. \begin{dfn} \label{dfn:unit-potential} The {\em unit potential function} $\potent{z}{y}$ is the unique function in $\PL_\mathbb{R}(\Gamma)$ satisfying $$ \Divisor(\potent{z}{y}) = z - y \quad \text{and}\quad \potent{z}{y}(z) = 0. $$ \end{dfn} There are useful explicit formulas for the slopes of $\potent{z}{y}$ due to Kirchhoff, which we discuss in Section~\ref{subsec:kirchhoff}. \begin{prop}[Slope-current principle] \label{prop:slope-current} Suppose $f \in \PL_\mathbb{R}(\Gamma)$ has zeros $\zeros(f)$ and poles $\poles(f)$ of degree $d\in \mathbb{R}$. Then the slope of $f$ is bounded by $d$, i.e. \[ |f'(x)| \leq d \qquad \text{for any $x$ where $f$ is linear}.\] \end{prop} (This bound is attained only on bridge edges, and only when all zeros are on one side of the bridge and all poles are on the other side.) \begin{proof} See \cite[Proposition 3.5]{R}. \end{proof} \begin{rmk} The above proposition has a ``physical'' interpretation: $f$ gives the voltage in the resistor network $\Gamma$ when subjected to an external current of $\poles(f)$ units flowing into the network and $\zeros(f)$ units flowing out. The slope $|f'(x)|$ is equal to the current flowing through the wire containing $x$, which must be no more than the total in-flowing (or out-flowing) current. 
\end{rmk} \subsection{Integer Laplacian and Jacobian} \label{subsec:int-laplacian} Recall that $\Div(\Gamma)$ denotes the free abelian group generated by points of $\Gamma$. A {\em piecewise $\mathbb{Z}$-linear function} on $\Gamma$ is a piecewise linear function whose slopes are integers, i.e. there exists some model $\Gamma = (G,\ell)$ such that $f'(t) \in \mathbb{Z}$ on the interior of each edge of $G$. We let $\PL_\mathbb{Z}(\Gamma)$ denote the set of all piecewise $\mathbb{Z}$-linear functions on $\Gamma$. The Laplacian $\Divisor$, defined in Section \ref{subsec:real-laplacian}, restricts to a map $ \Divisor: \PL_\mathbb{Z}(\Gamma) \to \Div(\Gamma). $ Two divisors $D,E$ are {\em linearly equivalent} if there is some $f \in \PL_\mathbb{Z}(\Gamma)$ such that $D = E + \Divisor(f) $. We let $[D]$ denote the linear equivalence class of a divisor $D$, i.e. $$ [D] = \{ E \in \Div(\Gamma) : E = D + \Divisor(f) \text{ for some }f \in \PL_\mathbb{Z}(\Gamma) \} $$ The {\em Picard group} $\Pic(\Gamma)$ is defined as the cokernel of $\Divisor : \PL_\mathbb{Z}(\Gamma) \to \Div(\Gamma)$. The elements of $\Pic(\Gamma)$ are linear equivalence classes of divisors on $\Gamma$. The integer Laplacian map fits in an exact sequence \begin{equation*} 0 \to \mathbb{R} \to \PL_\mathbb{Z}(\Gamma) \xrightarrow{\Divisor} \Div(\Gamma) \to \Pic(\Gamma) \to 0 . \end{equation*} The degree of a divisor class is well-defined, so we have an induced degree map $\Pic(\Gamma) \to \mathbb{Z}$. The {\em Jacobian group} $\Jac(\Gamma)$ is the kernel of this degree map, so we have a short exact sequence $$ 0 \to \Jac(\Gamma) \to \Pic(\Gamma) \xrightarrow{\mathrm{deg}} \mathbb{Z} \to 0. $$ This short exact sequence splits, $\Pic(\Gamma) \cong \mathbb{Z} \times \Jac(\Gamma)$, but this isomorphism is not canonical. One way to obtain a splitting is to choose a point $q \in \Gamma$, and define $\mathbb{Z} \to \Pic(\Gamma)$ by sending $n$ to the divisor class $[nq]$. 
We also denote $\Jac(\Gamma)$ by $\Pic^0(\Gamma)$, and use $\Pic^d(\Gamma)$ to denote the divisor classes of degree $d$. The tropical Abel--Jacobi theorem, due to Mikhalkin and Zharkov {\cite[Theorem 6.2]{MZ}}, identifies the structure of $\Jac(\Gamma)$ as a connected topological abelian group. \begin{thm}[Tropical Abel--Jacobi] \label{thm:abel-jacobi} Suppose $\Gamma$ is a metric graph of genus $g$. Then \begin{equation*} \Jac(\Gamma) \cong \mathbb{R}^g / \mathbb{Z}^g . \end{equation*} \end{thm} Fix a basepoint $q$ on the metric graph $\Gamma$. The {\em Abel--Jacobi map} \begin{equation} \label{eq:abel-jacobi} \Abelj{q} : \Gamma \to \Jac(\Gamma) \end{equation} sends a point $ x\in \Gamma$ to the divisor class $[x-q]$. More generally, if $D$ is a degree $d$ divisor we have a higher-dimensional Abel--Jacobi map \begin{equation*} \Abelj{D}^{(d)} : \Gamma^d \to \Jac(\Gamma) \end{equation*} which sends a tuple $(x_1,\ldots,x_d)$ to the divisor class $[x_1 + \cdots + x_d - D]$. Recall that $\stable(\Gamma)$ denotes the stabilization of $\Gamma$ (see Section~\ref{subsec:stabilization}). \begin{prop} \label{prop:jacobian-stabilization} \begin{enumerate} \item The retract $r: \Gamma \to \stable(\Gamma)$ induces an isomorphism $\Jac(\Gamma) \to \Jac(\stable(\Gamma))$ on Jacobians. \item The inclusion $i: \stable(\Gamma) \to \Gamma$ induces an isomorphism $\Jac(\stable(\Gamma)) \to \Jac(\Gamma)$ on Jacobians. \end{enumerate} \end{prop} For a proof and further motivation, see Caporaso~\cite{Cap}. \subsection{Cellular decomposition of the Jacobian} In this section we recall how the geometry of the Abel--Jacobi map $\Gamma \to \Jac(\Gamma)$ is related to the cographic matroid $M^\perp(G)$, where $(G,\ell)$ is a model for $\Gamma$. We describe cellular decompositions of the subset of effective divisor classes inside $\Pic^k(\Gamma)$ for $k\geq 0$; each $\Pic^k(\Gamma)$ can be identified with $\Jac(\Gamma)$ by subtracting a fixed degree $k$ divisor.
A consequence of Mikhalkin and Zharkov's proof \cite{MZ} of the tropical Abel--Jacobi theorem (Theorem \ref{thm:abel-jacobi}) is that the Abel--Jacobi map $\Gamma \to \Jac(\Gamma)$ is linear on each edge of $\Gamma$. The universal cover of $\Jac(\Gamma)$ is naturally identified with $H^1(\Gamma,\mathbb{R})$. The Abel--Jacobi map, restricted to a single edge $e \subset \Gamma$, lifts locally to $e \to H^1(\Gamma, \mathbb{R})$. The linear independence of the edge-vectors in the image $\Gamma \to \Jac(\Gamma)$ is exactly recorded by the cographic matroid $M^\perp(G)$, for any combinatorial model $\Gamma = (G,\ell)$. \begin{dfn} \label{dfn:eff-cell} Let $\Gamma = (G,\ell)$ be a metric graph. Given edges $e_1,\ldots,e_k \in E(G)$, let $\Div(e_1,\ldots,e_k) \subset \Div^k(\Gamma)$ denote the set of effective divisors formed by adding together one point from each edge $e_i$. Let $\Eff(e_1,\ldots, e_k)$ denote the corresponding set of effective divisor classes, $$\Eff(e_1,\ldots,e_k) = \{[x_1 + \cdots + x_k] : x_i \in e_i \} \subset \Pic^k(\Gamma).$$ \end{dfn} The following result relates these cells of effective divisor classes with the cographic matroid (see Section~\ref{subsec:matroids}). \begin{thm} \label{thm:cographic-dim} Let $\Gamma = (G,\ell)$ be a metric graph. The dimension of $\Eff(e_1,\ldots, e_k)$ is equal to the rank of $\{e_1,\ldots, e_k\}$ in the cographic matroid $M^\perp(G)$. \end{thm} \begin{proof} For each edge $e_i \in E(G)$, let $v_i \in H^1(\Gamma,\mathbb{R})$ denote a vector parallel to the Abel--Jacobi image of $e_i$ in $\Jac(\Gamma)$. Then according to Definition 5.1.3 of \cite[p. 156]{CV}, the set of vectors $\{v_i : e_i \in E(G)\}$ form a realization of the cographic matroid $M^\perp(G)$. This means that the cographic rank of $\{e_1,\ldots,e_k\}$ agrees with the dimension of the linear span of $\{v_1,\ldots,v_k\}$.
The subset $\Eff(e_1,\ldots,e_k) \subset \Pic^k(\Gamma)$ is naturally identified with the Minkowski sum of the corresponding vectors $v_1,\ldots,v_k \in H^1(\Gamma,\mathbb{R})$, so the claim follows. \end{proof} \begin{cor} \label{cor:abks-deg-d} Let $\Gamma = (G, \ell)$ be a metric graph of genus $g$. For any integer $d$ in the range $0\leq d\leq g$, the space $\Eff^d(\Gamma)$ of degree $d$ effective divisor classes has the structure of a cellular complex whose top-dimensional cells are indexed by independent sets of size $d$ in the cographic matroid $M^\perp(G)$. \end{cor} \subsection{Kirchhoff formulas} \label{subsec:kirchhoff} In this section we review Kirchhoff's formulas \cite{Kir} for the unit potential functions $\potent{z}{y}$, which are fundamental solutions to the Laplacian map (see Definition~\ref{dfn:unit-potential}). Expositions of this material are found in Bollob{\'a}s~\cite[$\S$II.1]{Bol} and Grimmett~\cite[$\S$1.2]{Gri}. \begin{thm}[Kirchhoff] \label{thm:kirchhoff} Suppose $\Gamma = (G,\ell)$ is a metric graph with edge lengths $\ell : E(G) \to \mathbb{R}_{>0}$. For vertices $y,z \in V(G)$, let $\potent{z}{y}: \Gamma \to \mathbb{R}$ denote the function in $\PL_\mathbb{R}(\Gamma)$ which satisfies $\Divisor(\potent{z}{y}) = z - y$ and $j^y_z(z) = 0$. Then the following relations hold. \begin{enumerate}[(a)] \item For any directed edge $\vec{e} = (e^+,e^-)$, \begin{equation} \label{eq:current} \frac{j^y_z(e^+) - j^y_z(e^-)}{\ell(e) } = \frac{\sum_{T \in \cT(G)}sgn(T,y,z,\vec e) w(T)}{\sum_{T \in \cT(G)} w(T)} \end{equation} where $\cT(G)$ denotes the spanning trees of $G$, the {\em weight} $w(T)$ of a spanning tree is defined as \[w(T) = \prod_{e_i \not\in E(T)} \ell(e_i),\] and \[ sgn(T,y,z,\vec e) = \begin{cases} +1 & \text{if the path in $T$ from $y$ to $z$ passes through $\vec e$} \\ -1 & \text{if the path in $T$ from $y$ to $z$ passes through $-\vec e$} \\ 0 & \text{otherwise}.
\end{cases}\] \item The total potential drop between $y$ and $z$ is \begin{equation} \label{eq:voltage} j^y_z(y) - j^y_z(z) = \frac{\sum_{T \in \cT(G_0)} w(T)}{\sum_{T \in \cT(G)} w(T)} \end{equation} using the same notation as above, and where the graph $G_0$ (in the numerator) is the graph obtained from $G$ by identifying vertices $y$ and $z$. \end{enumerate} \end{thm} \begin{proof} For part (a), see Bollob\'as \cite[Theorem 2, $\S$II.1]{Bol}. Part (b) follows from consideration of the graph $G_+$ obtained by adding an auxiliary edge to $G$ between $y$ and $z$, and then applying part (a) to $G_+$ with respect to the auxiliary edge. \end{proof} The expressions \eqref{eq:current}, \eqref{eq:voltage} are each a ratio of homogeneous polynomials\footnote{moreover, polynomials whose nonzero coefficients are all $\pm 1$} in the variables $\{ \ell(e_i) : e_i \in E(G)\}$. In \eqref{eq:current}, the numerator and denominator are homogeneous of degree $g$; in \eqref{eq:voltage}, the denominator has degree $g$ while the numerator has degree $g+1$. As a result, the expression \eqref{eq:current} is invariant under simultaneous rescaling of edge lengths, while the expression \eqref{eq:voltage} scales linearly with respect to simultaneously rescaling all edge lengths. \begin{eg} \label{eg:kirchhoff1} Consider the theta graph shown in Figure~\ref{fig:kirchhoff-ex1}, where $a = \ell(e_1)$, $b = \ell(e_2)$, $c = \ell(e_3)$ are edge lengths. The spanning trees are $\cT(G) = \{e_3, e_2, e_1\}$ which have respective weights $\{ ab, ac, bc\} $. The current along edge $e_1$ is $$ \frac{j^y_z(y) - j^y_z(z)}{a} = \frac{bc}{ab + ac + bc} ,$$ according to \eqref{eq:current}. We have \begin{align*} \potent{z}{y}(y) - \potent{z}{y}(z) &= a \left( \frac{bc}{ab + ac + bc} \right) = \frac{abc}{ab + ac + bc} \end{align*} in agreement with \eqref{eq:voltage}; $G_0$ consists of three loop edges. Note the symmetry in $a,b,c$.
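This identity can also be checked numerically; the sketch below (with hypothetical lengths $a,b,c = 2,3,5$) compares the tree-sum formula \eqref{eq:current} with the classical parallel-resistor computation:

```python
from fractions import Fraction as F

# Check of the theta-graph current, with hypothetical edge lengths.
a, b, c = F(2), F(3), F(5)

# spanning trees are single edges; w(T) = product of the other lengths
weights = {'e1': b * c, 'e2': a * c, 'e3': a * b}
total = sum(weights.values())

# only the tree {e1} routes the y-to-z path through e1
current_e1 = weights['e1'] / total
drop = a * current_e1                 # potential drop j(y) - j(z)

# classical check: resistors a, b, c in parallel, unit current injected
parallel_drop = 1 / (1 / a + 1 / b + 1 / c)
print(drop, drop == parallel_drop)  # 30/31 True
```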
\begin{figure}[h] \centering { \begin{tikzpicture} \draw (-1,0) -- (1,0); \draw[rounded corners=5] (-1,0) -- (-0.5,-0.5) -- (0.5,-0.5) -- (1,0); \draw[rounded corners=5] (-1,0) -- (-0.5,0.5) -- (0.5,0.5) -- (1,0); \node at (-1,0) [circle,fill,scale=0.5] {}; \node at (1,0) [circle,fill,scale=0.5] {}; \path (-1,0) node[left] {$y$}; \path (1,0) node[right] {$z$}; \path (-0.5,0.5) -- (0.5,0.5) node[midway, above] {$a$}; \path (-1,0) -- (1,0) node[midway,above] {$b$}; \path (-0.5,-0.5) -- (0.5,-0.5) node[midway, below] {$c$}; \end{tikzpicture} } \caption{Theta graph with variable edge lengths.} \label{fig:kirchhoff-ex1} \end{figure} \end{eg} \begin{eg} \label{eg:kirchhoff2} Let $G$ be the Wheatstone graph in Figure~\ref{fig:kirchhoff-ex2} (left), with edge lengths $a = \ell(e_1),\ldots, f = \ell(e_5)$. The spanning trees are $$ \cT = \{ 345, 245, 234, 145, 135, 125, 124, 123\} , $$ where $123$ is shorthand for the spanning tree $\{e_1,e_2,e_3\}$, and the corresponding weights are $ \{ab, ac, af, bc, bd, cd, cf, df \} $. The current along edge $e_3$ is \begin{align*} \frac{j^y_z(e_{3,+}) - j^y_z(e_{3,-})}{c} = \frac{ab + af }{ab + ac + af + bc + bd + cd + cf + df} , \end{align*} while the current along $e_1$ is $$ \frac{j^y_z(y) - j^y_z(z)}{a} = \frac{bc + bd + cd + cf + df}{ab + ac + af + bc + bd + cd + cf + df} . $$ The potential drop from $y$ to $z$ is $$ {j^y_z(y) - j^y_z(z)} = \frac{a bc + a bd + a cd + a cf + a df}{ab + ac + af + bc + bd + cd + cf + df} , $$ in agreement with \eqref{eq:voltage}; the quotient graph $G_0$ is shown to the right in Figure~\ref{fig:kirchhoff-ex2}.
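These rational functions can be reproduced by brute-force enumeration of spanning trees. A sketch (the vertex labels $y,z,R,B$ and the numeric lengths are our own choices):

```python
from fractions import Fraction as F
from itertools import combinations

# Brute-force reconstruction of the Wheatstone current along a;
# vertex labels y, z, R, B and the numeric lengths are our own.
length = {'a': F(2), 'b': F(3), 'c': F(5), 'd': F(7), 'f': F(11)}
ends = {'a': ('y', 'z'), 'b': ('z', 'R'), 'c': ('z', 'B'),
        'd': ('y', 'B'), 'f': ('B', 'R')}

def spanning_trees():
    for T in combinations(ends, 3):
        # union-find cycle test: 3 acyclic edges on 4 vertices span a tree
        parent = {v: v for v in ('y', 'z', 'R', 'B')}
        def find(v):
            while parent[v] != v:
                v = parent[v]
            return v
        acyclic = True
        for e in T:
            ru, rw = find(ends[e][0]), find(ends[e][1])
            if ru == rw:
                acyclic = False
                break
            parent[ru] = rw
        if acyclic:
            yield set(T)

def weight(T):
    # w(T) = product of lengths of edges NOT in the tree
    w = F(1)
    for e in length:
        if e not in T:
            w *= length[e]
    return w

trees = list(spanning_trees())
total = sum(weight(T) for T in trees)
# in a tree containing a = (y,z), the y-to-z path is the edge a itself
current_a = sum(weight(T) for T in trees if 'a' in T) / total

a, b, c, d, f = (length[e] for e in 'abcdf')
expected = (b*c + b*d + c*d + c*f + d*f) / \
           (a*b + a*c + a*f + b*c + b*d + c*d + c*f + d*f)
print(len(trees), current_a == expected)  # 8 True
```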
\begin{figure}[h] \centering { \begin{tikzpicture} \draw (0,-1) node[circle,fill,scale=0.5] {} -- (-1.5,0) node[circle,fill,scale=0.5] {} -- (0,1) node[circle,fill,scale=0.5] {} -- cycle; \draw (0,-1) -- (1.5,0) node[circle,fill,scale=0.5] {} -- (0,1); \path (-1.5,0) -- (0,1) node[midway,above left] {$a$}; \path (0,1) -- (1.5,0) node[midway, above right] {$b$}; \path (0,-1) -- (0,1) node[midway, right] {$c$}; \path (-1.5,0) -- (0,-1.5) node[midway, below left] {$d$}; \path (0,-1.5) -- (1.5,0) node[midway, below right] {$f$}; \path (0,1) node[above left] {$z$}; \path (-1.5,0) node[above left] {$y$}; \end{tikzpicture} } \qquad\qquad\qquad { \begin{tikzpicture} \draw (0,-1) node[circle,fill,scale=0.5] {} -- (1.5,0) node[circle,fill,scale=0.5] {} -- (0,1) node[circle,fill,scale=0.5] {} -- cycle; \draw[rounded corners=5] (0,-1) -- (-0.5,-0.5) -- (-0.5,0.5) -- (0,1); \draw[scale=3] (0,0.33) to[in=110,out=190,loop] (0,0.33); \node at (-0.6,1.4) [above left] {$a$}; \node at (-0.5,0) [left] {$d$}; \path (0,1) -- (1.5,0) node[midway, above right] {$b$}; \path (0,-1) -- (0,1) node[midway, right] {$c$}; \path (0,-1.5) -- (1.5,0) node[midway, below right] {$f$}; \path (0,1) node[above right] {$y\sim z$}; \end{tikzpicture} } \caption{Wheatstone graph with variable edge lengths, and a quotient graph.} \label{fig:kirchhoff-ex2} \end{figure} If we let $d = f = 0$, then we recover the formulas of Example~\ref{eg:kirchhoff1}. \end{eg} \section{Torsion points of the Jacobian} \label{sec:mm-setup} \subsection{Torsion equivalence} Given an abelian group $A$, the {\em torsion subgroup} $A_{\rm tors}$ is the set of elements $a\in A$ such that $na = a + \cdots + a = 0$ for some positive integer $n$. For example, the torsion subgroup of $\mathbb{R}/\mathbb{Z}$ is $\mathbb{Q}/\mathbb{Z}$ and the torsion subgroup of $\mathbb{R}$ is $\{0\}$. 
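A small illustration of the first example: the torsion subgroup of $\mathbb{R}/\mathbb{Z}$ is $\mathbb{Q}/\mathbb{Z}$, and the class of $p/q$ in lowest terms has exact order $q$:

```python
from fractions import Fraction

# The torsion subgroup of R/Z is Q/Z: the class of p/q in lowest
# terms has exact order q (smallest n > 0 with n*(p/q) an integer).
def order_mod_Z(x: Fraction) -> int:
    return x.denominator

print(order_mod_Z(Fraction(3, 7)), order_mod_Z(Fraction(5, 10)))  # 7 2
```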
Recall that $\Jac(\Gamma)$ is the abelian group of degree $0$ divisor classes on $\Gamma$; we have $$\Jtors{\Gamma} = \{[D] : D\in \Div^0(\Gamma),\quad n[D] = 0\text{ for some }n \in \mathbb{Z}_{>0} \}.$$ We say points $x,y \in \Gamma$ are {\em torsion equivalent} if there exists a positive integer $n$ such that $n[x-y] = 0$ in $\Jac(\Gamma)$. If two points $x,y$ represent the same divisor class $[x] = [y]$, then $x$ and $y$ are torsion equivalent; hence this relation descends to a relation on $\Eff^1(\Gamma) = \{ [x] : x \in \Gamma\}$. It will be convenient for us to consider this relation on $\Eff^1(\Gamma)$ rather than on $\Gamma$. \begin{lem} Torsion equivalence defines an equivalence relation on $\Eff^1(\Gamma)$. \end{lem} \begin{proof} It is clear that torsion equivalence is reflexive and symmetric. Suppose $n, m$ are positive integers such that $n[x-y] = 0$ and $m[y-z] = 0$ in $\Jac(\Gamma)$. Then $mn[x-z] = mn([x-y] + [y-z]) = 0$. This shows that torsion equivalence is transitive. \end{proof} It is natural to extend this relation to divisor classes of higher degree: we say effective classes $D, E \in \Eff^d(\Gamma)$ are {\em torsion equivalent} if $n[D - E] = 0$ for some positive integer $n$. We call an equivalence class under this relation a {\em torsion packet}. \begin{dfn} \label{dfn:torsion-packet} Given $[E] \in \Eff^d(\Gamma)$, the {\em torsion packet} $\tpacket{E}$ is the set of divisor classes torsion equivalent to $[E]$, i.e. $$ \tpacket{E} = \{ [D] \in \Eff^d(\Gamma) \text{ such that }[D - E] \in \Jtors{\Gamma} \}.$$ \end{dfn} The terminology of torsion packets allows us to restate the Manin--Mumford condition in a basepoint-free manner. 
\begin{prop} \label{prop:mm-torsion-packet} \hfill \begin{enumerate}[(a)] \item Given an effective divisor class $[D] \in \Eff^d(\Gamma)$, there is a canonical bijection $$ \tpacket{D} \quad\leftrightarrow\quad \Abeljh{D}{d}(\Gamma^d) \cap \Jtors{\Gamma}$$ where $\Abeljh{D}{d} : \Gamma^d \to \Jac(\Gamma)$ is the Abel--Jacobi map \eqref{eq:abel-jacobi}. \item A metric graph $\Gamma$ satisfies the {degree $d$ Manin--Mumford condition} if and only if every torsion packet of degree $d$ is finite. \end{enumerate} \end{prop} \begin{proof} For part (a), we have \begin{align*} \tpacket{D} &= \{[E] \in \Eff^d(\Gamma) : [E - D] \in \Jtors{\Gamma} \} \\ &= \{ [x_1 + \cdots + x_d] \in \Eff^d(\Gamma) : [x_1 + \cdots + x_d - D] \in \Jtors{\Gamma} \} \\ &= \{ [x_1 + \cdots + x_d] \in \Eff^d(\Gamma) : \Abeljh{D}{d}(x_1,\ldots,x_d) \in \Jtors{\Gamma} \} \\ &= \Abeljh{D}{d}(\Gamma^d) \cap \Jtors{\Gamma} . \end{align*} Part (b) follows directly from (a) and the definitions. \end{proof} Recall that the potential function $j_y^x$ is the unique piecewise $\mathbb{R}$-linear function satisfying $$\Divisor(j_y^x) = y - x \qquad\text{and}\qquad j_y^x(y) = 0.$$ \begin{lem} \label{lem:torsion-rational-slope} Suppose $x,y$ are two points on a metric graph $\Gamma$. Then $[x-y]$ is torsion in the Jacobian of $\Gamma$ if and only if all slopes of the potential function $j^x_y$ are rational. \end{lem} The above lemma is the special case $d = 1$ of the following statement. \begin{lem} \label{lem:d-torsion-slope} Suppose $D = x_1 + \cdots+ x_d$ and $E = y_1 + \cdots + y_d$ are effective divisors of degree $d$ on a metric graph $\Gamma$. Let $f \in \PL_{\mathbb{R}}(\Gamma)$ be a function satisfying $\Divisor(f) = D - E$. (Up to an additive constant, $f = \sum_{i=1}^d j_{x_i}^{y_i}$.) \begin{enumerate}[(a)] \item The divisor class $[D -E ] = 0$ if and only if all slopes of $f$ are integers. \item The divisor class $[D-E]$ is torsion if and only if all slopes of $f$ are rational. 
\end{enumerate} \end{lem} \begin{proof} For part (a), the ``if'' direction is a restatement of the definition of linear equivalence (Section~\ref{subsec:int-laplacian}). The ``only if'' direction follows from the fact that for fixed divisors $D$ and $E$, all solutions to $\Divisor(f) = D - E$ (where $f \in \PL_\mathbb{R}(\Gamma)$) have the same slopes, since they all differ by an additive constant. Part (b) follows from part (a) by linearity of the Laplacian $\Divisor$; more precisely, $[D-E]$ is killed by $n$ iff $[n(D-E)] = [n\Divisor(f)] = [\Divisor(n\cdot f)] = 0$ iff all slopes of $n\cdot f$ lie in $\mathbb{Z}$ iff all slopes of $f$ lie in $\frac1{n}\mathbb{Z}$. Conversely, if all slopes of $f$ are rational, then there exists an integer $n$ such that all slopes of $f$ lie in $\frac1n \mathbb{Z}$, since a piecewise linear function has finitely many slopes. \end{proof} \subsection{Very general subsets} \label{subsec:very-general} A {\em very general} subset of $\mathbb{R}^n$ is one whose complement is contained in a countable union of distinguished Zariski-closed sets. A distinguished Zariski-closed set is the set of zeros of a polynomial function which is not identically zero. Given a polynomial $f\in \mathbb{R}[x_1,\ldots,x_n]$, we denote $$ Z(f) = \{ (a_1,\ldots,a_n) \in \mathbb{R}^n : f(a) = 0 \} \quad\text{ and }\quad U(f) = \{ (a_1,\ldots,a_n) \in \mathbb{R}^n : f(a) \neq 0 \} .$$ In this notation, a very general subset $S\subset \mathbb{R}^n$ is one which can be expressed as $$ S \supset \mathbb{R}^n \setminus \left(\bigcup_{i\in I} Z(f_i) \right) = \bigcap_{i\in I} U(f_i) $$ where $I$ is a countable index set and each $f_i$ is a nonzero polynomial. Note that the zero locus $Z(f)$ has Lebesgue measure zero if $f$ is nonzero. Thus the complement of a (measurable) very general subset of $\mathbb{R}^n$ has Lebesgue measure zero. However, it is still possible that the complement of a very general subset is dense in $\mathbb{R}^n$.
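To illustrate how thin a distinguished Zariski-closed set is, the following sketch counts points of a rational grid in $[0,1]^2$ lying on $Z(f)$ for the hypothetical polynomial $f = x^2 + y^2 - 1$:

```python
from fractions import Fraction as F

# Z(f) for the (hypothetical) polynomial f = x^2 + y^2 - 1 meets a
# 101 x 101 rational grid in [0,1]^2 in only six points (the
# Pythagorean points with denominator 100), reflecting that the zero
# locus has measure zero.
n = 100
f = lambda x, y: x * x + y * y - 1
grid = [(F(i, n), F(j, n)) for i in range(n + 1) for j in range(n + 1)]
on_Z = [(x, y) for (x, y) in grid if f(x, y) == 0]
print(len(grid), len(on_Z))  # 10201 6
```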
If $D \subset \mathbb{R}^n$ is some parameter space with nonempty interior (with respect to the Euclidean topology), we say that a subset of $D$ is {\em very general} if it has the form $D \cap S$ for a very general subset $S \subset \mathbb{R}^n$. In our applications, the relevant parameter space will be the positive orthant $D = (\mathbb{R}_{>0})^n$. We say that a property holds for a {\em very general} point of some real parameter space if it holds on a very general subset. \begin{eg} \label{eg:very-general} \hfill \begin{enumerate}[(a)] \item For a fixed nonconstant polynomial $f \in \mathbb{Z}[x_1,\ldots,x_n]$, the set \begin{equation} \label{eq:f-irrational} U(f-\mathbb{Q}) = \{(a_1,\ldots,a_n) \in \mathbb{R}^n : f(a_1,\ldots,a_n) \not\in \mathbb{Q}\} \end{equation} is very general, since $\{ f - \lambda : \lambda \in \mathbb{Q}\}$ is a countable collection of nonzero polynomials. \item For polynomials $f,g \in \mathbb{Z}[x_1,\ldots,x_n]$ with $g\neq 0$ and $f/g$ nonconstant, the set \begin{equation} \label{eq:fg-irrational} U(\frac{f}{g} - \mathbb{Q}) = \{(a_1,\ldots,a_n) \in \mathbb{R}^n : \frac{f(a_1,\ldots,a_n)}{g(a_1,\ldots,a_n)} \not\in \mathbb{Q}\} \end{equation} is very general, since $\{f - \lambda g : \lambda \in \mathbb{Q}\}$ is a countable collection of nonzero polynomials. \item The set \begin{multline} \label{eq:transcendental} U^n_{\rm tr.} = \{(a_1,\ldots,a_n) \in \mathbb{R}^n : f(a_1,\ldots,a_n) \neq 0 \\ \text{ for every } f \in \mathbb{Z}[x_1,\ldots,x_n] \setminus\{0\} \} \end{multline} is very general, since $\mathbb{Z}[x_1,\ldots,x_n]$ is a countable set of polynomials. We call $U^n_{\rm tr.}$ the set of {\em transcendental} points of $\mathbb{R}^n$. In particular, $U^1_{\rm tr}$ is the set of transcendental real numbers. \end{enumerate} \end{eg} Note that in the above examples, the subsets \eqref{eq:f-irrational} and \eqref{eq:fg-irrational} contain the transcendental points $U^n_{\rm tr.}$.
Conversely, $U^n_{\rm tr.}$ is the intersection of \eqref{eq:f-irrational} over all choices of $f$ (resp. \eqref{eq:fg-irrational} over all choices of $f$ and $g$). In the later theorem statements (\ref{thm:manin-mumford} and \ref{thm:mm-higher-degree}) which concern very general edge lengths, the stated property holds when the edge lengths are transcendental (in the sense of \eqref{eq:transcendental}, $n = \# E(G)$). More precisely, these conditions will hold on a finite intersection of sets of the form \eqref{eq:fg-irrational}. The polynomials $f,g$ will come from Kirchhoff's formulas (see Theorem~\ref{thm:kirchhoff} in Section~\ref{subsec:kirchhoff}). \section{Manin--Mumford conditions on metric graphs} Recall that a metric graph $\Gamma$ satisfies the {\em Manin--Mumford condition} if the image of the Abel--Jacobi map $\Abelj{q} : \Gamma \to \Jac(\Gamma)$ intersects only finitely many points of the torsion subgroup $\Jtors{\Gamma}$, for every choice of basepoint $q\in \Gamma$. Equivalently, $\Gamma$ satisfies the Manin--Mumford condition if every degree one torsion packet $\tpacket{x}$ is finite. A metric graph $\Gamma$ satisfies the {\em degree $d$ Manin--Mumford condition} if the image of the degree $d$ Abel--Jacobi map \begin{align*} \Abeljh{D}{d} : \Gamma^{d} &\to \Jac(\Gamma) \\ (p_1,\ldots,p_d) &\mapsto [p_1 + \cdots + p_d - D] \end{align*} intersects only finitely many points of $\Jtors{\Gamma}$, for every choice of effective degree $d$ divisor class $[D]$. \subsection{Failure of Manin--Mumford condition} In this section, we consider cases when a metric graph fails to satisfy the Manin--Mumford condition, in degree one and in higher degree. \begin{prop} \label{prop:mm-rational} If $\Gamma = (G,\ell)$ is a metric graph whose edge lengths are all rational, then the Manin--Mumford condition fails to hold. 
\end{prop} \begin{proof} Rescaling all edge lengths of $\Gamma$ by the same factor does not change the validity of the Manin--Mumford condition, so we may assume that all edge lengths are integers. This means $\Gamma$ has a combinatorial model $(G,\textbf{1})$ with unit edge lengths. On a graph with unit edge lengths, the degree-$0$ divisor classes supported on vertices form a finite abelian group, known as the {\em critical group} of the graph (see Section~\ref{subsec:critical-group}). This implies that all vertices of $G$ lie in the same torsion packet. Now consider taking the $k$-th subdivision graph $G^{(k)}$ of $G$, meaning every edge of $G$ is subdivided into $k$ edges of equal length; the number of vertices is \[ \# V(G^{(k)}) = \# V(G) + (k-1) \# E(G) .\] The same reasoning implies that these new vertices are also in the same torsion packet of $\Gamma$. Taking $k\to \infty$ shows that $\Gamma$ has an infinite torsion packet. \end{proof} Proposition~\ref{prop:mm-rational} also follows from part (a) of the following lemma. Recall that given edges $e_i \in E(G)$, $\Eff(e_1,\ldots, e_k)$ denotes the set of effective divisor classes $[x_1 + \cdots + x_k]$ obtained by summing one point $x_i \in e_i$ from each edge ($x_i$ is allowed to be an endpoint of $e_i$). \begin{lem} \label{lem:infinite-torsion} Let $\Gamma = (G,\ell)$ be a metric graph. \begin{enumerate}[(a)] \item If an edge $e\in E(G)$ contains two points $x,y$ such that $[x], [y]$ are distinct but in the same torsion packet, then the torsion packet $\tpacket{x}$ is infinite. \item If $\Eff(e_1,\ldots, e_d)$ contains distinct divisor classes $[D], [E]$ in the same degree $d$ torsion packet, then the torsion packet $\tpacket{D}$ is infinite. \end{enumerate} \end{lem} \begin{proof} (a) Suppose that an edge $e$ contains distinct points $x,y$ such that $[x-y]$ is torsion. Let $z$ denote the midpoint of $x$ and $y$; we claim $[x-z]$ is also torsion. 
The midpoint satisfies $[2z] = [x+y]$, so $[x-z] = [z-y]$ and hence $2[x-z] = [x - z] + [z-y] = [x-y]$. If $n$ is a positive integer such that $n[x-y] = 0$, then $2n[x-z] = n[x-y] = 0$. This proves the claim that $[x-z]$ is torsion. By repeating this argument on the midpoint of $x$ and $z$, we obtain infinitely many points on $e$ in the same torsion packet $\tpacket{x}$. (b) Since the cell $\Eff(e_1,\ldots,e_d)$ is convex, it contains a line segment connecting $[D]$ and $[E]$; this segment has positive length by the assumption $[D] \neq [E]$. Moreover, for any rational affine combination $$ [F] = t[D] + (1-t)[E], \qquad t \in \mathbb{Q} \cap [0,1], $$ along this segment, the class $[D - F] = (1-t)[D-E]$ is torsion. This guarantees infinitely many divisor classes $[F]$ in the torsion packet $\tpacket{D}$, as claimed. \end{proof} \begin{prop} \label{prop:cycle-torsion} Suppose $G$ has a cycle with $d$ edges. Then for any edge lengths $\ell : E(G) \to \mathbb{R}_{>0}$, the metric graph $\Gamma = (G,\ell)$ fails to satisfy the degree~$d$ Manin--Mumford condition. \end{prop} \begin{proof} Let $C$ be a cycle in $G$ with edges $e_1,e_2,\ldots,e_d$ and vertices $v_1,v_2,\ldots,v_d$ in cyclic order, where edge $e_i$ has endpoints $v_i$ and $v_{i+1}$ (where indices are taken modulo $d$). Consider the effective divisors $D = v_1 + \cdots + v_d$ and $E = x_1 + \cdots + x_d$ where $x_i$ is the midpoint on edge $e_i$. To show that $[D-E]$ is torsion, we construct a piecewise linear function $f$ with $\Divisor(f) = D - E$. Let $f:\Gamma \to \mathbb{R}$ be zero-valued outside of the cycle $C$, with $f(v_i) = 0$ for each vertex (this is forced by continuity of $f$ whenever $v_i$ meets an edge outside $C$). On each edge $e_i$, let $f$ have slope $\frac{1}{2}$ in the directions away from its endpoints, so that at the midpoint $f(x_i) = \frac14 \ell(e_i)$. It is straightforward to verify that $\Divisor(f) = D - E$ as desired. By Lemma \ref{lem:d-torsion-slope}, the slopes $\pm \frac12$ of $f$ imply that $[D-E]$ is a nonzero torsion divisor class (of order two). 
Moreover, both $[D]$ and $[E]$ lie in the same cell $\Eff(e_1,\ldots,e_d)$. Then Lemma~\ref{lem:infinite-torsion}(b) implies that the torsion packet $\tpacket{D}$ is infinite, which violates the degree $d$ Manin--Mumford condition. \end{proof} \subsection{Uniform Manin--Mumford bounds} In this section, we show that if a metric graph satisfies the Manin--Mumford condition, then in fact the number of torsion points can be bounded uniformly in terms of the genus of $\Gamma$. \begin{thm} \label{thm:mm-uniform-bound} Suppose $\Gamma$ is a metric graph of genus $g \geq 2$. If $ \Abelj{q}(\Gamma)\cap \Jtors{\Gamma}$ is finite, then \[ \#(\Abelj{q}(\Gamma)\cap \Jtors{\Gamma}) \leq 3g-3.\] \end{thm} \begin{proof} The retract map $r : \Gamma \to \Gamma'$ from a metric graph to its stabilization induces an isomorphism on Jacobians $\Jac(\Gamma) \xrightarrow{\sim} \Jac(\Gamma')$ (Proposition~\ref{prop:jacobian-stabilization}) and hence a bijection $\Abelj{q}(\Gamma) \xrightarrow{\sim} \Abelj{r(q)}(\Gamma')$, so we may assume that $\Gamma$ is semistable and that $ (G,\ell)$ is a stable combinatorial model for $\Gamma$. Proposition \ref{prop:stable-edge-bound} states that $\# E(G) \leq 3g-3$ since $G$ is stable. Lemma \ref{lem:infinite-torsion}(a) implies that a finite torsion packet has at most one point on a given edge of $G$. This proves that the size of a finite, degree $1$ torsion packet is at most $3g-3$. By Proposition~\ref{prop:mm-torsion-packet}, we are done. \end{proof} We next generalize the above argument to the higher-degree case. \begin{thm} \label{thm:mm-higher-uniform-bound} Let $\Gamma = (G,\ell)$ be a connected metric graph of genus $g\geq 2$. 
If $\Gamma$ satisfies the Manin--Mumford condition in degree $d$, then \[ \#(\Abeljh{D}{d}(\Gamma^d)\cap \Jtors{\Gamma}) \leq \binom{3g-3}{d} .\] \end{thm} \begin{proof} The number $\#(\Abeljh{D}{d}(\Gamma^d)\cap \Jtors{\Gamma})$ does not change under replacing $\Gamma$ with its stabilization, so we may assume $\Gamma$ is semistable and $(G,\ell)$ is a stable model. This means that the number of edges $\# E(G)$ is bounded above by $3g-3$. The image $\Abeljh{D}{d}(\Gamma^d)$ is homeomorphic to $\Eff^d(\Gamma)$. (They differ by a translation sending $\Pic^d(\Gamma)$ to $\Pic^0(\Gamma)$.) The maximal cells in the ABKS decomposition of $\Eff^d(\Gamma)$ are indexed by independent sets of size $d$ in the cographic matroid $M^\perp(G)$, cf. Corollary~\ref{cor:abks-deg-d}. The number of maximal cells is bounded above by $\binom{\#E(G)}{d}$, the number of all size-$d$ subsets of edges. Since we assumed $G$ is stable, we have $\binom{\#E(G)}{d}\leq \binom{3g-3}{d}$. From Lemma~\ref{lem:infinite-torsion}(b), we know that a finite degree $d$ torsion packet contains at most one element from a given maximal cell of $\Abeljh{D}{d}(\Gamma^d)$, which finishes the proof. \end{proof} \section{Manin--Mumford for generic edge lengths} \subsection{Degree one} In this section we prove our first main theorem, which gives conditions under which a metric graph satisfies the Manin--Mumford condition in degree $1$. Throughout this section, ``torsion packet'' will always mean a degree $1$ torsion packet (cf. Definition~\ref{dfn:torsion-packet}). Before addressing the general case, we demonstrate an example in small genus. \begin{eg} Let $G$ be the theta graph (see Figure~\ref{fig:kirchhoff-ex1}) with vertices $x,y$ and edges $e_1,e_2,e_3$, and consider the metric graph $\Gamma = (G,\ell)$ with edge lengths $a = \ell(e_1), b = \ell(e_2) , c = \ell(e_3)$. 
If a torsion packet contains two points on $e_1$, then Proposition~\ref{prop:torsion-edge-deletion} implies that $[x-y]$ is torsion on the deleted subgraph $\Gamma_1 = \Gamma \backslash e_1$. By Lemma~\ref{lem:torsion-rational-slope}, this would imply the potential function which sends current from $x$ to $y$ on the subgraph $\Gamma_1$ has rational slopes. We can compute these slopes directly: $\Gamma_1$ is a parallel combination of edges with lengths $b$ and $c$, so the slope along $e_2$ is $\frac{c}{b+c} $. (This calculation also follows from Theorem~\ref{thm:kirchhoff}.) To summarize: $$ (\text{some torsion packet contains $\geq 2$ points of $e_1$}) \qquad\Rightarrow\qquad \frac{c}{b+c} \in \mathbb{Q}. $$ The contrapositive statement is that $$ \frac{c}{b+c} \not\in \mathbb{Q} \qquad\Rightarrow\qquad (\text{every torsion packet contains at most one point of $e_1$}). $$ To satisfy the Manin--Mumford condition, it suffices that every torsion packet $\tpacket{x} \subset \Eff^1(\Gamma)$ contains at most one point of each edge $e_1, e_2,e_3$. Thus the Manin--Mumford condition holds for $\Gamma$ if the edge lengths are in the set $$ \{ (a,b,c) \in \mathbb{R}_{>0}^3 : \frac{b}{a + b} \not\in \mathbb{Q} \text{ and } \frac{c}{a + c} \not\in \mathbb{Q} \text{ and }\frac{c}{b+c} \not \in \mathbb{Q}\} .$$ This is a very general subset of $\mathbb{R}_{>0}^3$, cf. Example~\ref{eg:very-general}(b). \end{eg} \begin{prop} \label{prop:torsion-edge-deletion} Suppose $\Gamma$ is a metric graph and points $x,y\in \Gamma$ lie on the same edge. Let $\Gamma_0$ denote the metric graph with the open segment between $x$ and $y$ removed. If $[x-y]$ is torsion on $\Gamma$ and $\Gamma_0$ is connected, then $[x-y]$ is torsion on $\Gamma_0$. \end{prop} \begin{proof} Suppose $[x-y]$ is torsion on $\Gamma$. Let $j_x^y$ denote the potential function on $\Gamma$ when one unit of current is sent from $y$ to $x$. By Lemma~\ref{lem:torsion-rational-slope}, all slopes of $j_x^y$ are rational. 
In particular, the slope of $j_x^y$ on the segment between $x$ and $y$ is rational; let $s$ denote this slope. Since $\Gamma_0$ is connected, we have $s<1$. Recall that $\Gamma_0$ denotes the metric graph obtained from $\Gamma$ by deleting the open segment between $x$ and $y$. It is clear that the restriction of $j_x^y$ to $\Gamma_0$ has Laplacian $\Divisor(j_x^y\big|_{\Gamma_0})= (1-s)x- {(1-s)}y$. Let $j_{x,0}^y$ denote the potential function on $\Gamma_0$ when one unit of current is sent from $y$ to $x$. Since $j_{x,0}^y = (1-s)^{-1}\, j_x^y\big|_{\Gamma_0}$, all slopes of $j_{x,0}^y$ are rational. By Lemma~\ref{lem:torsion-rational-slope}, this implies $[x-y]$ is torsion on $\Gamma_0$ as desired. \end{proof} \begin{prop} \label{prop:slope-01} Suppose $x,y$ are two vertices on a graph $G$. Let $j_x^y$ be the potential function on $\Gamma = (G,\ell)$, depending on variable edge lengths $\ell : E(G) \to \mathbb{R}$. Either: (1) all slopes of $j_x^y$ are $1$ or $0$, independent of edge lengths; or (2) there exists some edge $e$ such that the slope of $j_x^y$ along $e$ is a non-constant rational function of the edge lengths. \end{prop} \begin{proof} Suppose there is a unique simple path in $G$ from $x$ to $y$. Let $f$ be the piecewise linear function on $\Gamma$ which has $f(x) = 0$, increases with slope $1$ along the path from $x$ to $y$, and has slope $0$ elsewhere. Then $f$ satisfies $\Divisor(f) = x - y$ so we must have $j_x^y = f$ by uniqueness. Thus we are in case (1). On the other hand, suppose there are two distinct simple paths $\pi_1, \pi_2$ in $G$ from $x$ to $y$. Let $e$ be an edge of $G$ which lies on $\pi_1$ but not $\pi_2$. If we fix the lengths of edges in $\pi_1$ and send all other edge lengths to infinity, then the slope of $j_x^y$ along $e$ approaches $1$. If we send the length $\ell(e)$ to infinity while keeping all other edge lengths fixed, then the slope of $j_x^y$ along $e$ approaches zero. Thus the slope of $j_x^y$ along $e$ is a non-constant function of the edge lengths. 
By Kirchhoff's formulas, Theorem~\ref{thm:kirchhoff}, the slope (i.e.\ the current) is a rational function of the edge lengths. This is case (2). \end{proof} \begin{prop} \label{prop:vertex-torsion} Suppose $x,y$ are two vertices on a graph $G$. Then for the metric graph $\Gamma = (G,\ell)$, either (1) $[x-y] = 0$ in $\Jac(\Gamma)$ for any edge lengths $\ell$, or (2) $[x-y]$ is non-torsion in $\Jac(\Gamma)$ for very general edge lengths $\ell$. \end{prop} \begin{proof} If none of the slopes of $j_x^y$ vary as a function of edge lengths, then by Proposition \ref{prop:slope-01} all slopes of $j_x^y$ are zero or one. This implies that $[x-y]=0$. On the other hand, suppose for some edge $e$ the slope of $j_x^y$ along $e$ is a non-constant rational function $\frac{p(\ell_1,\ldots,\ell_m)}{q(\ell_1,\ldots,\ell_m)}$. Then the subset $$ U = \left\{ (\ell_1,\ldots, \ell_m) \in \mathbb{R}_{>0}^m : \frac{p(\ell_1,\ldots,\ell_m)}{q(\ell_1,\ldots,\ell_m)} \not\in \mathbb{Q} \right\}$$ parametrizing edge lengths where the slope at $e$ takes irrational values is very general, cf. Example~\ref{eg:very-general}(b). By Lemma~\ref{lem:torsion-rational-slope}, $[x-y]$ is non-torsion for edge lengths in $U$, as desired. \end{proof} \begin{thm} \label{thm:manin-mumford} Suppose $G$ is a biconnected graph of genus $g\geq 2$. For a very general choice of edge lengths $\ell : E(G) \to \mathbb{R}_{>0}$, the metric graph $\Gamma = (G,\ell)$ satisfies the Manin--Mumford condition. \end{thm} \begin{proof} Let $m = \# E(G)$ and choose an ordering $E(G) = \{e_1, e_2, \ldots, e_m\}$, which induces a homeomorphism from the space of edge-lengths $\{ \ell: E(G) \to \mathbb{R}_{>0}\}$ to the positive orthant $\mathbb{R}_{>0}^m$. 
We claim that for each edge $e_i$, there is a corresponding very general subset $U_i \subset \mathbb{R}_{>0}^m$ such that \begin{equation} \label{eq:torsion-ei-condition} \begin{gathered} \text{ when edge lengths are chosen in $U_i$, every torsion packet } \\ \text{ of $\Gamma = (G,\ell)$ contains at most one point of $e_i$.} \end{gathered} \end{equation} Let $e_{i}^+$, $e_{i}^-$ denote the endpoints of $e_i$, and let $G_i = G \backslash e_i$ denote the graph with edge $e_i$ deleted. If the endpoints $e_{i}^+$, $e_{i}^-$ are not connected by any path in $G_i$, this contradicts our assumption that $G$ is biconnected. If the endpoints are connected by only one path $\pi$ in $G_i$, then the union $\pi \cup \{e_i\}$ is a genus $1$ biconnected component of $G$, which contradicts our assumption that $G$ is biconnected and has genus $g\geq 2$. Thus $e_{i}^+$, $e_{i}^-$ are connected by at least two distinct paths\footnote{The two paths may share edges in common.} in $G_i$. Therefore, the divisor class $[e_{i}^+ - e_{i}^-] \neq 0$ in $\Jac(\Gamma_i)$, where $\Gamma_i = (G_i , \ell_i)$ and $\ell_i$ is the restriction of $\ell$ to $E(G_i)$. By Proposition \ref{prop:vertex-torsion}, $[e_{i}^+ - e_{i}^-] $ is non-torsion in $\Jac(\Gamma_i)$ on a very general subset $V_i \subset \mathbb{R}_{>0}^{m-1}$ of edge-length space. (Note that $G_i$ has $m-1$ edges.) Finally, we let $U_i$ be the preimage of $V_i$ under the coordinate projection $\mathbb{R}_{>0}^m \to \mathbb{R}_{>0}^{m-1}$ forgetting coordinate $i$. The subset $U_i$ is very general, and satisfies the claimed condition \eqref{eq:torsion-ei-condition}. For any edge lengths in the intersection $U = \bigcap_{i=1}^m U_i$, a torsion packet of the corresponding $\Gamma = (G,\ell)$ can have at most one point on each edge $e_i$, giving the bound $ \# \tpacket{x} \leq m .$ The subset $U$ is very general, since it is a finite intersection of very general subsets. This completes the proof. 
\end{proof} \subsection{Higher degree} In this section we address when a metric graph with very general edge lengths satisfies the Manin--Mumford condition in higher degree ($d \geq 2$). The next proposition is a strengthening of Proposition~\ref{prop:cycle-torsion}. Recall that $M^\perp(G)$ denotes the cographic matroid of $G$ (see Section~\ref{subsec:matroids}). \begin{prop} \label{prop:gamind-cycle-torsion} Suppose $G$ contains a cycle $C$ whose edge set has rank $d = {\rm rk}^\perp(E(C))$ in the cographic matroid $M^\perp(G)$. Then for any edge lengths $\ell : E(G) \to \mathbb{R}_{>0}$, the metric graph $\Gamma = (G,\ell)$ fails the degree $d$ Manin--Mumford condition. \end{prop} \begin{proof} Suppose the given cycle of $G$ consists of the edges $\{e_1, \ldots, e_k\}$ and vertices $\{v_1,\ldots, v_k\}$ in cyclic order; note that $k\geq d$. Let $D = v_1 + \cdots + v_k$ be the sum of the cycle's vertices. In the proof of Proposition~\ref{prop:cycle-torsion}, we showed that the degree-$k$ torsion packet $\tpacket{D}$ has infinite intersection with the cell $\Eff(e_1,\ldots, e_k)$, for any choice of edge lengths $\ell$. Recall that $\Eff(e_1,\ldots, e_k)$ is the image of $\Div(e_1,\ldots, e_k)$ under the linear equivalence map $\Div^k (\Gamma) \to \Pic^k(\Gamma)$. The map $\Div(e_1,\ldots, e_k) \to \Pic^k(\Gamma)$ lifts to a linear map $\phi$ in the diagram \[ \begin{tikzcd} { \prod_{i=1}^k [0,\ell(e_i)]} \ar[r,"\phi"] \ar[d] & \mathbb{R}^g \ar[d] \\ \Div(e_1,\ldots, e_k) \ar[r] & \Pic^k(\Gamma) , \end{tikzcd} \] where $\prod_{i=1}^k [0,\ell(e_i)] \to \Div(e_1,\ldots,e_k)$ is the product of isometries $[0,\ell(e_i)] \to e_i$ and $\mathbb{R}^g \to \Pic^k(\Gamma)$ is an isometric universal cover. By Theorem~\ref{thm:cographic-dim}, $\Eff(e_1,\ldots,e_k)$ has dimension $d = {\rm rk}^\perp(\{e_1,\ldots,e_k\})$ (where $d\leq k$). 
This implies that $\phi$ has rank $d$, so the image of $\phi$ is covered by the restrictions of $\phi$ to the $d$-faces of $\prod_{i=1}^k [0,\ell(e_i)]$. Thus $\Eff(e_1,\ldots,e_k)$ is covered by the corresponding images of the $d$-faces of $\Div(e_1,\ldots,e_k)$, which have the form \begin{equation} \label{eq:d-face} \Eff(e_i : i\in I) + [\, \sum_{i \not\in I} v_{i}^{\pm} \,] \quad \subset\quad \Eff(e_1,\ldots,e_k), \end{equation} where $I$ is a size-$d$ subset of $\{1,\ldots,k\}$ and $v_i^{\pm} \in \{v_i, v_{i+1} \}$ is an endpoint of $e_i$. (There are $\binom{k}{d} 2^{k-d}$ such choices.) Since $\Eff(e_1,\ldots,e_k)$ has infinite intersection with the torsion packet $\tpacket{D}$, there is some choice of $I, v_i^\pm$ such that the subset \eqref{eq:d-face} of $\Eff(e_1,\ldots,e_k)$ has infinite intersection with $\tpacket{D}$. This implies that the degree-$d$ torsion packet $\tpacket{D - \sum_{i\not\in I} v_i^\pm}$ has infinite intersection with $\Eff(e_i : i\in I)$, thus violating the degree $d$ Manin--Mumford condition. \end{proof} Next, we consider the converse situation of Proposition~\ref{prop:gamind-cycle-torsion}, i.e. when an edge set is acyclic after taking the closure in $M^\perp(G)$. Recall from Section~\ref{subsec:matroids} the notation $\Div(e_1,\ldots,e_k)$ and $\Eff(e_1,\ldots,e_k)$. Here we introduce a slight variation: let $\Div(e_1,\ldots,e_k)^\circ$ denote the set of effective divisors of the form $D = x_1 + \cdots + x_k$ where $x_i$ is in the {\em interior} $e_i^\circ$ of edge $e_i$; respectively let $\Eff(e_1,\ldots,e_k)^\circ$ denote the divisor classes of the form $[x_1+ \cdots + x_k]$, where $x_i \in e_i^\circ$. \begin{prop} \label{prop:acyclic-nontorsion} Suppose $e_1,\ldots, e_k$ are edges in $G$ such that $\{e_1,\ldots,e_k\}$ is independent in $M^\perp(G)$ and the closure of $\{e_1,\ldots,e_k\}$ in $M^\perp(G)$ spans an acyclic subgraph of $G$. 
Then for very general edge lengths on $\Gamma = (G,\ell)$, distinct divisor classes in $\Eff(e_1,\ldots,e_k)^\circ \subset \Pic^k(\Gamma)$ are in distinct torsion packets. \end{prop} Before proving this statement, we introduce some lemmas and definitions. \begin{dfn} \label{dfn:cv-active} Given a piecewise linear function $f$ on $\Gamma = (G,\ell)$, say an edge of $G$ is {\em current-active} with respect to $f$ if the slope $f'$ is nonzero in a neighborhood of its endpoints\footnote{if $e \cong [0,1]$, here a ``neighborhood of the endpoints'' means $[0,\epsilon) \cup (1 - \epsilon, 1]$ for some $\epsilon>0$}; let $E^{\rm c.a.}(G,f)$ denote the current-active edges, \begin{equation*} E^{\rm c.a.}(G,f) = \{e \in E(G) : f'\neq 0 \text{ in a neighborhood of } e^+,e^- \text{ in }e\}. \end{equation*} Say an edge is {\em voltage-active} with respect to $f$ if the net change in $f$ across $e$ is nonzero; let $E^{\rm v.a.}(G,f)$ denote the voltage-active edges, \begin{equation*} E^{\rm v.a.}(G,f) = \{e \in E(G) : f(e^+) - f(e^-) \neq 0 \quad\text{where }e = (e^+,e^-) \}. \end{equation*} \end{dfn} Recall that a {\em cut} of $G$ is a set of edges $\{e_1,\ldots,e_k\}$ such that the deletion $G\setminus \{e_1,\ldots,e_k\}$ is disconnected. \begin{lem} \label{lem:v-active-cut} Consider a metric graph $\Gamma = (G,\ell)$ and $f \in \PL_\mathbb{R}(\Gamma)$. If $E^{\rm v.a.}(G,f)$ is nonempty, it contains a cut of $G$. \end{lem} \begin{proof} Suppose $e = (e^+,e^-)$ is voltage-active with respect to $f$, so that $f(e^+) > f(e^-)$ for some ordering of endpoints. Then we may partition $V(G)$ into two nonempty sets $V^+ \cup V^-$, where $$ V^+ = \{ v\in V(G) : f(v) \geq f(e^+) \} \qquad\text{and}\qquad V^- = \{v \in V(G) : f(v) < f(e^+) \}. $$ It is clear that $E^{\rm v.a.}(G,f)$ contains all edges between $V^+$ and $V^-$; such edges form a cut of $G$. 
\end{proof} \begin{lem} \label{lem:c-active-cycle} On $\Gamma = (G,\ell)$, consider $f \in \PL_\mathbb{R}(\Gamma)$ such that $\Divisor(f) = E - D$ for $D,E \in \Div(e_1,\ldots,e_k)^\circ$. If $E^{\rm c.a.}(G,f)$ is nonempty, then it contains a cycle of $G$. \end{lem} \begin{proof} Suppose $D = x_1 + \cdots + x_k$ and $E = y_1 + \cdots + y_k$ where $x_i, y_i \in e_i$. Since the divisor $\Divisor(f)$ restricted to $e_i$ has the form $y_i - x_i$, the slopes of $f$ along $e_i$ are as shown in Figure \ref{fig:edge-slopes}, where slopes are indicated in the rightward direction. \begin{figure}[h] \centering \includegraphics[scale=0.5]{edge-slopes} \caption{Slopes on edge $e$ where $\Divisor(f) = y - x$.} \label{fig:edge-slopes} \end{figure} \noindent Edge $e_i$ is current-active iff the corresponding slope $s$ ($ = s_i$) is nonzero. In particular, if $e_i \in E^{\rm c.a.}(G,f)$ it is current-active {\em at both endpoints}. On the other hand, consider an edge $e \in E(G) \setminus \{e_1,\ldots,e_k\}$. Then $\Divisor(f)$ is not supported on $e$, so $f$ does not change slope on $e$. Again in this case, if $e \in E^{\rm c.a.}(G,f)$ then it is current-active at both endpoints. Since $D,E \in \Div(e_1,\ldots,e_k)^\circ$ by assumption, $\Divisor(f)$ is supported away from the vertex set $V(G)$. This means that around a vertex $v$, the outward slopes of $f$ sum to zero. The number of nonzero terms in the sum must be $0$ or $\geq 2$, and each nonzero term corresponds to a current-active edge incident to $v$. Thus \begin{equation*} \begin{gathered} E^{\rm c.a.}(G,f) \text{ spans a subgraph of $G$ where } \\ \text{ every vertex has } \val(v) = 0 \text{ or } \val(v) \geq 2. \end{gathered} \end{equation*} The claim follows. \end{proof} \begin{lem} \label{lem:cact-or-vact} Consider $D,E \in \Div(e_1,\ldots,e_k)^\circ$ and $f \in \PL_\mathbb{R}(\Gamma)$ such that $\Divisor(f) = E - D$. If $D \neq E$, then $E^{\rm v.a.}(G,f)$ or $E^{\rm c.a.}(G,f)$ is nonempty (or both are). 
\end{lem} \begin{proof} If $D = x_1 + \cdots + x_k$ is not equal to $E = y_1 + \cdots + y_k$, then there is some index $i$ such that $x_i \neq y_i$. Consider the illustration of $f$ in Figure~\ref{fig:edge-slopes}, applied to the edge with $x_i \neq y_i$. We have \begin{equation} f(e^-_i) - f(e^+_i) = s \cdot \ell(e_i) - \ell([x_i,y_i]), \end{equation} where $\ell([x_i,y_i])$ is the distance between $x_i$ and $y_i$ on $e_i$. If $s = 0$, then $e_i$ is not current-active, but it is voltage-active since $f(e^-_i) - f(e^+_i) = -\ell([x_i,y_i]) \neq 0$. If $s \neq 0$, then $e_i$ is current-active. In either case, one of the two sets is nonempty. \end{proof} \begin{lem} \label{lem:slope-nonconstant} Consider a fixed vertex-supported $\mathbb{R}$-divisor $D = \lambda_1 v_1 + \cdots + \lambda_r v_r$ of degree zero on $G$, so $v_i \in V(G)$, $\lambda_i \in \mathbb{R}$ and $\sum_{i=1}^r \lambda_i = 0$. On $\Gamma = (G,\ell)$, suppose $f \in \PL_\mathbb{R}(\Gamma)$ satisfies $\Divisor(f) = D $ and $f$ has nonzero slope on $e\in E(G)$. If $e$ is not a bridge, then the slope on $e$ is a nonconstant rational function of edge lengths of $\Gamma$. \end{lem} \begin{proof} Suppose we let $\ell(e) \to \infty$ for the chosen edge and fix the lengths of all other edges $e' \neq e$; we claim that the slope of $f$ across $e$ approaches zero. The slope-current principle, Proposition~\ref{prop:slope-current}, states that the slope of $f$ is bounded above in magnitude by $\Lambda$, where $\Lambda = \frac{1}{2}\sum_{i } |\lambda_i|$ does not depend on the edge lengths.\footnote{Since $\sum \lambda_i = 0$, we have $\Lambda = \sum\{ \lambda_i : \lambda_i > 0 \}= -\sum \{\lambda_i : \lambda_i < 0\}$.} Since $e = (e^+, e^-)$ is not a bridge edge, there is a simple path $\pi$ from $e^+$ to $e^-$ which does not contain $e$. 
By integration along $\pi$, $|f(e^-) - f(e^+)|$ is bounded above by $\Lambda \cdot \ell(\pi)$, which implies the bound \begin{equation*} |f'(e)| = \left| \frac{f(e^-) - f(e^+)}{\ell(e)} \right| \leq \frac{\Lambda \cdot \ell(\pi)}{\ell(e)} . \end{equation*} If we let $\ell(e) \to \infty$ and keep $\ell(e')$ constant for each $e' \in E(G) \setminus \{e \}$, this upper bound approaches zero as claimed. Thus the slope of $f$ along $e$ is a non-constant function of the edge lengths. It is a rational function by Kirchhoff's formulas, Theorem \ref{thm:kirchhoff}. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:acyclic-nontorsion}] Suppose $D = x_1 + \cdots + x_k$ and $E = y_1 + \cdots + y_k$ are divisors in $\Div(e_1,\ldots,e_k)^\circ$. Let $f$ be a piecewise linear function such that $\Divisor(f) = E- D$. By Lemma~\ref{lem:d-torsion-slope}, $[D]$ and $ [E]$ lie in the same torsion packet if and only if all slopes of $f$ are rational. Let $\Gamma_0$ (resp. $G_0$) denote the metric graph (resp. combinatorial graph) obtained from deleting the interiors of edges $e_1,\ldots, e_k$ from $\Gamma$ (resp. $G$). Let $f_0 = f\big|_{\Gamma_0}$ denote the restriction of $f$ to $\Gamma_0$. We have \begin{equation} \label{eq:div-f0} \Divisor(f_0) = \lambda_1 w_1 + \cdots + \lambda_{r} w_{r}, \end{equation} where $\{w_1,\ldots, w_r\}\subset V(G)$ is the set of endpoints of edges $e_1,\ldots, e_k$ and $\lambda_i \in \mathbb{R}$. First, suppose the tuple $(\lambda_1,\ldots, \lambda_r) = (0,\ldots, 0)$. Then $f_0$ is constant, so every edge of $G_0$ is neither current-active nor voltage-active with respect to $f$. Since the edges $\{e_1,\ldots,e_k\}$ are assumed independent in $M^\perp(G)$, they do not contain a cut of $G$ so the inclusion $E^{\rm v.a.}(G,f) \subset \{e_1,\ldots,e_k\}$ implies that ${ E^{\rm v.a.}(G,f) = \varnothing }$ by Lemma~\ref{lem:v-active-cut}. 
Since the edges $\{e_1,\ldots,e_k\}$ do not contain a cycle of $G$, the inclusion $E^{\rm c.a.}(G,f) \subset \{e_1,\ldots,e_k\}$ implies that $E^{\rm c.a.}(G,f) = \varnothing$ by Lemma~\ref{lem:c-active-cycle}. Then Lemma~\ref{lem:cact-or-vact} implies that $D = E$. Next, suppose the tuple $(\lambda_1,\ldots, \lambda_r) \in \mathbb{R}^r$ from \eqref{eq:div-f0} is nonzero. This means that some edge of $G_0$ must be current-active, so $E^{\rm c.a.}(G,f)$ is nonempty. By Lemma~\ref{lem:c-active-cycle}, $E^{\rm c.a.}(G,f)$ contains a cycle of $G$. The closure of $\{e_1,\ldots,e_k\}$ with respect to the cographic matroid $M^\perp(G)$ is equal to \begin{equation*} \{e_1,\ldots, e_k\} \cup \{b_1, \ldots, b_j\} \quad\text{where $\{b_1, \ldots, b_j\}$ are the bridge edges of $G_0$.} \end{equation*} Since $\{e_1,\ldots, e_k\} \cup \{b_1, \ldots, b_j\}$ is acyclic by assumption, $E^{\rm c.a.}(G,f)$ must contain an edge $e_* \not\in \{e_1, \ldots, e_k\}$ which is not a bridge in $G_0$. (The edge $e_*$ depends on the tuple $(\lambda_1,\ldots, \lambda_r)$.) Now consider applying Lemma~\ref{lem:slope-nonconstant} to the graph $G_0$, the divisor \eqref{eq:div-f0}, and the edge $e_* \in E(G_0)$. The lemma concludes that as a function of the edge-lengths of $\Gamma_0$, the slope of $f_0$ (equivalently $f$) on $e_*$ is a nonconstant ratio of polynomials. In particular, \begin{equation} \label{eq:v-irrational} V(\lambda_1,\ldots, \lambda_r) = \{ \text{edge lengths of $\Gamma_0$ such that $f_0'$ is irrational on $e_*$} \} \end{equation} is a very general subset of $ \mathbb{R}_{>0}^{m-k} \cong \{ \ell_0 : E(G_0) \to \mathbb{R}_{>0} \}$, and on this subset $[D]$ and $[E]$ lie in distinct torsion packets. 
Finally, let $U(\lambda_1,\ldots, \lambda_r)$ be the preimage of $V(\lambda_1,\ldots, \lambda_r)$ under the projection $\mathbb{R}_{>0}^m \to \mathbb{R}_{>0}^{m-k}$, which is very general, and let \begin{equation*} U = \bigcap_{\substack{(\lambda_1, \ldots, \lambda_r) \qquad \\ \in \mathbb{Q}^r \setminus \{(0,\ldots, 0)\}}} U(\lambda_1,\ldots, \lambda_r) \quad\subset\quad \mathbb{R}_{>0}^m . \end{equation*} The subset $U$ is very general, as a countable intersection of very general subsets. If edge lengths of $\Gamma = (G,\ell)$ are chosen such that there are distinct divisors $D,E \in \Div(e_1,\ldots, e_k)^\circ$ whose classes $[D]$ and $[E]$ are in the same torsion packet, then the tuple $(\lambda_1, \ldots, \lambda_r)$ as in \eqref{eq:div-f0} must be nonzero (otherwise $D = E$) and rational (since all slopes of $f$ are rational). Then the chosen edge lengths on $G_0 \subset G$ are excluded from the subset \eqref{eq:v-irrational}, hence the edge lengths are excluded also from $U$, as desired. \end{proof} \begin{thm} \label{thm:mm-higher-degree} Let $G$ be a connected graph of genus $g\geq 1$ and independent girth $\gamma^\mathrm{ind}$. The metric graph $\Gamma = (G,\ell)$ satisfies the degree $d$ Manin--Mumford condition for very general edge lengths $\ell: E(G) \to \mathbb{R}_{>0}$ if and only if $1 \leq d < \gamma^{\rm ind}$. \end{thm} \begin{proof} If $d\geq \gamma^{\rm ind}$, then $d \geq {\rm rk}^\perp(E(C))$ for some cycle $C$ of $G$. Proposition~\ref{prop:gamind-cycle-torsion} states that $\Gamma$ fails the Manin--Mumford condition in degree $d' = {\rm rk}^\perp(E(C))$, so the condition also fails in degree $d\geq d'$. Conversely if $d < \gamma^{\rm ind}$, then for each $d$-subset of edges $ \{e_1,\ldots, e_d\}$, its closure in $M^\perp(G)$ does not contain a cycle of $G$. 
In particular, the edges for each maximal cell $\Eff(e_1,\ldots, e_d)$ of $\Eff^d(\Gamma)$ satisfy the hypotheses of Proposition~\ref{prop:acyclic-nontorsion}, so there is a very general subset of edge lengths of $\Gamma$ for which every degree $d$ torsion packet has at most one element in the chosen cell $\Eff(e_1,\ldots, e_d)$. Since there are finitely many maximal cells (cf. Corollary~\ref{cor:abks-deg-d}), this implies that for very general edge lengths there are finitely many elements in each degree $d$ torsion packet. \end{proof} \begin{cor} \label{cor:ind-girth-mm} Let $\Gamma$ be a metric graph of genus $g \geq 1$, and suppose $\Gamma$ satisfies the Manin--Mumford condition in degree $d$. Then \[ d < C \log g \] for some constant $C$. \end{cor} \begin{proof} This follows from Proposition~\ref{prop:gamind-cycle-torsion}, which implies that $d < \gamma^{\rm ind}$, and the bound $\gamma^{\rm ind} < C \log g$ from Corollary~\ref{cor:metric-girth-bound}. \end{proof} \section*{Acknowledgements} The author thanks Sachi Hashimoto for the inspiration to study the tropical analogue of the Manin--Mumford conjecture, and David Speyer for suggesting the generalization to higher degree. Matt Baker provided valuable discussion and several references to related work. This work was supported by NSF grant DMS-1600223 and a Rackham Predoctoral Fellowship.
\section{Introduction} \subsection{Background} In this paper, we consider the Cauchy problem of the cubic nonlinear Schr\"odinger equation (NLS) on $\mathbb{R}^d$, $d\geq 3$: \begin{equation} \begin{cases}\label{NLS1} i \partial_t u + \Delta u = \pm \mathcal{N}(u)\\ u\big|_{t = 0} = u_0 \in H^s(\mathbb{R}^d), \end{cases} \qquad ( t, x) \in \mathbb{R} \times \mathbb{R}^d, \end{equation} \noindent where $\mathcal{N}(u) := |u|^2 u$. The cubic NLS \eqref{NLS1} has been studied extensively from both the theoretical and applied points of view. Our main focus is to study well-posedness of \eqref{NLS1} with {\it random} and {\it rough} initial data. It is well known that the cubic NLS \eqref{NLS1} enjoys the dilation symmetry. More precisely, if $u(t, x)$ is a solution to \eqref{NLS1} on $\mathbb{R}^d$ with an initial condition $u_0$, then \begin{align} u_\mu(t, x) := \mu^{-1} u (\mu^{-2}t, \mu^{-1}x) \label{scaling} \end{align} \noindent is also a solution to \eqref{NLS1} with the $\mu$-scaled initial condition $u_{0, \mu}(x) := \mu^{-1} u_0 (\mu^{-1}x)$. Associated to this dilation symmetry, there is the so-called scaling-critical Sobolev index $s_\textup{crit} := \frac{d-2}{2}$ such that the homogeneous $\dot{H}^{s_\textup{crit}}$-norm is invariant under this dilation symmetry. In general, we have \begin{align} \|u_{0, \mu}\|_{\dot H^s(\mathbb{R}^d)} = \mu^{\frac{d-2}{2} - s}\|u_0\|_{\dot H^s(\mathbb{R}^d)} . \label{scaling2} \end{align} \noindent If an initial condition $u_0$ is in $H^s(\mathbb{R}^d)$, we say that the Cauchy problem \eqref{NLS1} is subcritical, critical, or supercritical, depending on $s > s_\textup{crit}$, $s = s_\textup{crit}$, or $s < s_\textup{crit}$, respectively. Let us first discuss the (sub-)critical regime. In this case, \eqref{NLS1} is known to be locally well-posed. See Cazenave-Weissler \cite{CW} for local well-posedness of \eqref{NLS1} in the critical Sobolev spaces. 
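For the reader's convenience, we note that \eqref{scaling2} is a direct change of variables: since $\widehat{u_{0, \mu}}(\xi) = \mu^{d-1}\, \widehat{u_0}(\mu \xi)$, substituting $\eta = \mu \xi$ gives \begin{equation*} \|u_{0, \mu}\|_{\dot H^s(\mathbb{R}^d)}^2 = \mu^{2(d-1)} \int_{\mathbb{R}^d} |\xi|^{2s} |\widehat{u_0}(\mu \xi)|^2 \, d\xi = \mu^{d - 2 - 2s} \int_{\mathbb{R}^d} |\eta|^{2s} |\widehat{u_0}(\eta)|^2 \, d\eta, \end{equation*} \noindent which is invariant precisely when $s = \frac{d-2}{2} = s_\textup{crit}$.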
As is well known, the conservation laws play an important role in discussing long time behavior of solutions. There are three known conservation laws for the cubic NLS \eqref{NLS1}: \begin{align*} \text{Mass: }& M[u](t) := \int_{\mathbb{R}^d} |u(t, x)|^2 dx,\\ \text{Momentum: }& P[u](t) := \Im \int_{\mathbb{R}^d} u (t, x)\overline{\nabla u(t, x)} dx,\\ \text{Hamiltonian: } & H[u](t) := \frac 12 \int_{\mathbb{R}^d} |\nabla u(t, x)|^2 dx \pm \frac 14 \int_{\mathbb{R}^d} |u(t, x)|^4 dx. \end{align*} \noindent The Hamiltonian is also referred to as the energy. In view of the conservation of the energy, the cubic NLS is called energy-subcritical when $d \leq 3$ ($s_\textup{crit} < 1$), energy-critical when $d = 4$ ($s_\textup{crit} = 1$), and energy-supercritical when $d \geq 5$ ($s_\textup{crit} > 1$), respectively. In the following, let us discuss the known results on the global-in-time behavior of solutions to the defocusing NLS, corresponding to the $+$ sign in \eqref{NLS1}, in high dimensions $d\geq 3$. When $d = 4$, the Hamiltonian is invariant under the scaling \eqref{scaling} and plays a crucial role in the global well-posedness theory. Indeed, Ryckman-Vi\c{s}an \cite{RV} proved global well-posedness and scattering for the defocusing cubic NLS on $\mathbb{R}^4$. See also Vi\c{s}an \cite{Visan}. When $d \ne 4$, there is no known scaling invariant positive conservation law for \eqref{NLS1} in high dimensions $d\geq 3$. This makes it difficult to study the global-in-time behavior of solutions, in particular, in the scaling-critical regularity. There are, however, `conditional' global well-posedness and scattering results as we describe below. 
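To make the special role of $d = 4$ explicit, note that, under the scaling \eqref{scaling}, both terms of the Hamiltonian scale in the same way: \begin{equation*} \int_{\mathbb{R}^d} |\nabla u_\mu(t, x)|^2 \, dx = \mu^{d-4} \int_{\mathbb{R}^d} |\nabla u(\mu^{-2} t, y)|^2 \, dy, \qquad \int_{\mathbb{R}^d} |u_\mu(t, x)|^4 \, dx = \mu^{d-4} \int_{\mathbb{R}^d} |u(\mu^{-2} t, y)|^4 \, dy, \end{equation*} \noindent so that $H[u_\mu](t) = \mu^{d-4} H[u](\mu^{-2} t)$, which is scaling invariant if and only if $d = 4$.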
When $d = 3$ ($s_\textup{crit} = \frac 12$), Kenig-Merle \cite{KM2} applied the concentration compactness and rigidity method developed in their previous paper \cite{KM} and proved that if $u \in L^\infty_t \dot H_x^\frac{1}{2}(I\times \mathbb{R}^3)$, where $I$ is a maximal interval of existence, then $u$ exists globally in time and scatters. For $ d\geq 5$, the cubic NLS is supercritical with respect to any known conservation law. Nonetheless, motivated by a similar result of Kenig-Merle \cite{KM3} on radial solutions to the energy-supercritical nonlinear wave equation (NLW) on $\mathbb{R}^3$, Killip-Vi\c{s}an \cite{KV} proved that if $u \in L^\infty_t \dot H_x^{s_\textup{crit}}(I\times \mathbb{R}^d)$, where $I$ is a maximal interval of existence, then $u$ exists globally in time and scatters. Note that the results in \cite{KM2} and \cite{KV} are {\it conditional} in the sense that they assume an {\it a priori} control on the critical Sobolev norm. The question of global well-posedness and scattering without any a priori assumption remains a challenging open problem for $d = 3$ and $d \geq 5$. So far, we have discussed well-posedness in the (sub-)critical regularity. In particular, the cubic NLS \eqref{NLS1} is locally well-posed in the (sub-)critical regularity, i.e.~$s \geq s_\textup{crit}$. In the supercritical regime, i.e.~$s < s_\textup{crit}$, on the contrary, \eqref{NLS1} is known to be ill-posed. See \cite{CCT, BGT2, Carles, AC}. In the following, however, we consider the Cauchy problem \eqref{NLS1} with initial data in $H^s(\mathbb{R}^d)$, $s < s_\textup{crit}$ in a probabilistic manner. More precisely, given a function $\phi \in H^s(\mathbb{R}^d)$ with $s < s_\textup{crit}$, we introduce a randomization $\phi^\o$ and prove almost sure well-posedness of \eqref{NLS1}. 
In studying the Gibbs measure for the defocusing (Wick ordered) cubic NLS on $\mathbb{T}^2$, Bourgain \cite{BO7} considered random initial data of the form: \begin{equation} u_0^\omega(x) = \sum_{n \in \mathbb{Z}^2} \frac{g_n(\o)}{\sqrt{1+|n|^2}}e^{i n \cdot x}, \label{I1} \end{equation} \noindent where $\{g_n\}_{n \in \mathbb{Z}^2}$ is a sequence of independent standard complex-valued Gaussian random variables. The function \eqref{I1} represents a typical element in the support of the Gibbs measure, more precisely, in the support of the Gaussian free field on $\mathbb{T}^2$ associated to this Gibbs measure, and is critical with respect to the scaling. With a combination of deterministic PDE techniques and probabilistic arguments, Bourgain showed that the (Wick ordered) cubic NLS on $\mathbb{T}^2$ is well-posed almost surely with respect to the random initial data \eqref{I1}. In the context of the cubic NLW on a three dimensional compact Riemannian manifold $M$, Burq-Tzvetkov \cite{BT2} considered the Cauchy problem with a more general class of random initial data. Given an eigenfunction expansion $u_0(x) = \sum_{n = 1}^\infty c_n e_n(x) \in H^s(M)$ of an initial condition\footnote{For NLW, one needs to specify $(u, \partial_t u)|_{t = 0}$ as an initial condition. For simplicity of presentation, we only discuss $u|_{t = 0}$.}, where $\{e_n\}_{n = 1}^\infty$ is an orthonormal basis of $L^2(M)$ consisting of the eigenfunctions of the Laplace-Beltrami operator, they introduced a randomization $u_0^\o$ by \begin{equation} u_0^\omega (x) = \sum_{n = 1}^\infty g_n (\omega) c_n e_n(x). \label{I2} \end{equation} \noindent Here, $\{g_n\}_{n = 1}^\infty$ is a sequence of independent mean-zero random variables with a uniform bound on the fourth moments. Then, they proved almost sure local well-posedness with random initial data of the form \eqref{I2} for $s \geq \frac 14$. 
Since the scaling-critical Sobolev index for this problem is $ s_\textup{crit}=\frac 12 $, this result allows us to take initial data below the critical regularity and still construct solutions upon randomization of the initial data. We point out that the randomized function $u_0^\o$ in \eqref{I2} has the same Sobolev regularity as the original function $u_0$ and is not smoother, almost surely. However, it enjoys better integrability, which allows one to prove improvements of Strichartz estimates. (See Lemmata \ref{PROP:Str1} and \ref{PROP:Str2} below.) Such an improvement in integrability for random Fourier series goes back to the classical theorem of Paley and Zygmund \cite{PZ}. See also Kahane \cite{Kahane} and Ayache-Tzvetkov \cite{AT}. There are several works on Cauchy problems of evolution equations with random data that followed these results, including some on almost sure global well-posedness: \cite{Bo97, Thomann, CO, Oh11, BTT, Deng, DS1, BT3, NPS, DS2, R, BTT2, BB1, BB2, NS, PRT, LM}. \subsection{Randomization adapted to the Wiener decomposition and modulation spaces} Many of the results mentioned above are on compact domains, where there is a countable basis of eigenfunctions of the Laplacian and thus there is a natural way to introduce a randomization. On $\mathbb{R}^d$, there is no countable basis of $L^2(\mathbb{R}^d)$ consisting of eigenfunctions of the Laplacian. Randomizations have been introduced with respect to some other countable bases of $L^2(\mathbb{R}^d)$, for example, a countable basis of eigenfunctions of the Laplacian with a confining potential such as the harmonic oscillator $\Delta - |x|^2$, leading to a careful study of properties of eigenfunctions. In this paper, our goal is to introduce a simple and natural randomization for functions on $\mathbb{R}^d$. For this purpose, we first review some basic notions related to the so-called modulation spaces of time-frequency analysis \cite{Gr}.
The modulation spaces were introduced by Feichtinger \cite{Fei} in the early eighties. The groundwork theory regarding these spaces of time-frequency analysis was then established in collaboration with Gr\"ochenig \cite{FG1, FG2}. The modulation spaces arise from a uniform partition of the frequency space, commonly known as the {\it Wiener decomposition} \cite{W}: $\mathbb{R}^d = \bigcup_{n \in \mathbb{Z}^d} Q_n$, where $Q_n$ is the unit cube centered at $n \in \mathbb{Z}^d$. The Wiener decomposition of $\mathbb{R}^d$ induces a natural uniform decomposition of the frequency space of a signal via the (non-smooth) frequency-uniform decomposition operators $\mathcal F^{-1}\chi_{Q_n}\mathcal F$. Here, $\mathcal Fu=\widehat u$ denotes the Fourier transform of a distribution $u$. The drawback of this approach is the roughness of the characteristic functions $\chi_{Q_n}$, but this issue can easily be fixed by smoothing them out appropriately. We have the following definition of the (weighted) modulation spaces $M^{p, q}_s$. Let $\psi \in \mathcal{S}(\mathbb{R}^d)$ be such that \begin{equation} \supp \psi \subset [-1,1]^d \qquad \text{and} \qquad\sum_{n \in \mathbb{Z}^d} \psi(\xi -n) \equiv 1 \ \text{ for any }\xi \in \mathbb{R}^d. \label{mod1a} \end{equation} Let $0<p,q\leq \infty$ and $s\in\mathbb R$; $M^{p, q}_s$ consists of all tempered distributions $u\in\mathcal S'(\mathbb{R}^d)$ for which the (quasi) norm \begin{equation} \|u\|_{M_s^{p, q}(\mathbb{R}^d)} := \big\| \jb{n}^s \|\psi(D-n) u \|_{L_x^p(\mathbb{R}^d)} \big\|_{\l^q_n(\mathbb{Z}^d)} \label{mod2} \end{equation} \noindent is finite. Note that $\psi(D-n)u(x)=\int_{\mathbb{R}^d} \psi (\xi-n)\widehat u (\xi)e^{2\pi ix\cdot \xi}\, d\xi$, i.e.~$\psi(D-n)$ is just a Fourier multiplier operator whose symbol is a conveniently smoothed version of $\chi_{Q_n}$. It is worthwhile to compare the definition \eqref{mod2} with that of the Besov spaces.
Let $\varphi_0, \varphi \in \mathcal{S}(\mathbb{R}^d)$ be such that $\supp \varphi_0 \subset \{ |\xi| \leq 2\}$, $\supp \varphi \subset \{ \frac{1}{2}\leq |\xi| \leq 2\}$, and $ \varphi_0(\xi) + \sum_{j = 1}^\infty \varphi(2^{-j}\xi) \equiv 1.$ With $\varphi_j(\xi) = \varphi(2^{-j}\xi)$, we define the (inhomogeneous) Besov spaces $B^s_{p, q}$ via the norm \begin{equation} \label{besov1} \|u\|_{B^s_{p, q}(\mathbb{R}^d) } = \big\| 2^{js} \|\varphi_j(D) u \|_{L^p(\mathbb{R}^d)} \big\|_{\l^q_j(\mathbb{Z}_{\geq 0})}. \end{equation} \noindent There are several known embeddings between Besov, Sobolev, and modulation spaces. See, for example, Okoudjou \cite{Ok}, Toft \cite{To}, Sugimoto-Tomita \cite{suto2}, and Kobayashi-Sugimoto \cite{kosu}. Now, given a function $\phi$ on $\mathbb{R}^d$, we have \[ \phi = \sum_{n \in \mathbb{Z}^d} \psi(D-n) \phi,\] \noindent where $\psi(D-n)$ is defined above. This decomposition leads to a randomization of $\phi$ that is very natural from the perspective of time-frequency analysis associated to modulation spaces. Let $\{g_n\}_{n \in \mathbb{Z}^d}$ be a sequence of independent mean zero complex-valued random variables on a probability space $(\O, \mathcal{F}, P)$, where the real and imaginary parts of $g_n$ are independent and endowed with probability distributions $\mu_n^{(1)}$ and $\mu_n^{(2)}$. Then, we can define the {\it Wiener randomization of $\phi$} by \begin{equation} \phi^\omega : = \sum_{n \in \mathbb{Z}^d} g_n (\omega) \psi(D-n) \phi. \label{R1} \end{equation} Almost simultaneously with our first paper \cite{BOP1}, L\"uhrmann-Mendelson \cite{LM} also considered a randomization of the form \eqref{R1} (with the cubes $Q_n$ replaced by appropriately localized balls) in the study of NLW on $\mathbb{R}^3$. See Remark \ref{REM:LM} below. For a similar randomization used in the study of the Navier-Stokes equations, see the work of Zhang and Fang \cite{ZF}.
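As a quick sanity check on \eqref{R1}, assuming in addition that the variances of the $g_n$ are uniformly bounded (as they are under the moment assumption \eqref{R2} below), independence and the mean zero assumption kill the cross terms in expectation, so that \begin{equation*} E \|\phi^\o\|_{L^2(\mathbb{R}^d)}^2 = \sum_{n \in \mathbb{Z}^d} E |g_n|^2 \, \| \psi(D - n) \phi\|_{L^2(\mathbb{R}^d)}^2 \lesssim \|\phi\|_{L^2(\mathbb{R}^d)}^2, \end{equation*} \noindent where the last step follows from the finite overlap of the supports of $\psi(\cdot - n)$. In other words, the randomization does not change the $L^2$-based regularity on average.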
We would like to stress again, however, that our reason for considering the randomization of the form \eqref{R1} comes from its connection to time-frequency analysis. See also our previous papers \cite{BP} and \cite{BOP1}. In the sequel, we make the following assumption on the distributions $\mu_n^{(j)}$: there exists $c>0$ such that \begin{equation} \bigg| \int_{\mathbb{R}} e^{\gamma x } d \mu_n^{(j)}(x) \bigg| \leq e^{c\gamma^2} \label{R2} \end{equation} \noindent for all $\gamma \in \mathbb{R}$, $n \in \mathbb{Z}^d$, $j = 1, 2$. Note that \eqref{R2} is satisfied by standard complex-valued Gaussian random variables, standard Bernoulli random variables, and any random variables with compactly supported distributions. It is easy to see that, if $\phi \in H^s(\mathbb{R}^d)$, then the randomized function $\phi^\o$ is almost surely in $H^s(\mathbb{R}^d)$. While there is no smoothing upon randomization in terms of differentiability in general, this randomization \emph{improves integrability}; if $\phi \in L^2(\mathbb{R}^d)$, then the randomized function $\phi^\o$ is almost surely in $L^p(\mathbb{R}^d)$ for any finite $p \geq 2$. As a result of this enhanced integrability, we have improvements of the Strichartz estimates. See Lemmata \ref{PROP:Str1} and \ref{PROP:Str2}. These improved Strichartz estimates play an essential role in proving probabilistic well-posedness results, which we describe below. \subsection{Main results} Recall that the scaling-critical Sobolev index for the cubic NLS on $\mathbb{R}^d$ is $s_\textup{crit} = \frac{d-2}{2}$. In the following, we take $\phi \in H^s(\mathbb{R}^d) \setminus H^{s_\textup{crit}} (\mathbb{R}^d)$ for some range of $s < s_\textup{crit}$, that is, below the critical regularity. Then, we consider the well-posedness problem of \eqref{NLS1} with respect to the randomized initial data $\phi^\o$ defined in \eqref{R1}.
For $d \geq 3$, define $s_d$ by \begin{align} s_d := \frac{d-1}{d+1}\cdot s_\textup{crit} = \frac{d-1}{d+1} \cdot \frac{d-2}{2} \label{Sd1} \end{align} \noindent Note that $s_d <s_\textup{crit}$ and $\frac{s_d}{s_\textup{crit}} \to 1$ as $d \to \infty$. Throughout the paper, we use $S(t)= e^{it\Delta}$ to denote the linear propagator of the Schr\"odinger group. We are now ready to state our main results. \begin{theorem}[Almost sure local well-posedness]\label{THM:1} Let $d \geq 3$ and $s > s_d$. Given $\phi \in H^s(\mathbb{R}^d)$, let $\phi^\o$ be its Wiener randomization defined in \eqref{R1}, satisfying \eqref{R2}. Then, the cubic NLS \eqref{NLS1} on $\mathbb{R}^d$ is almost surely locally well-posed with respect to the randomization $\phi^\omega$ as initial data. More precisely, there exist $ C, c, \gamma>0$ such that for each $0< T\ll 1$, there exists a set $\O_T \subset \O$ with the following properties: \smallskip \begin{itemize} \item[\textup{(i)}] $P(\O_T^c) < C \exp\big(-\frac{c}{T^\gamma \|\phi\|_{H^s}^2 }\big)$, \item[\textup{(ii)}] For each $\o \in \O_T$, there exists a (unique) solution $u$ to \eqref{NLS1} with $u|_{t = 0} = \phi^\o$ in the class \[ S(t) \phi^\o + C([-T, T]: H^{\frac{d-2}{2}} (\mathbb{R}^d)) \subset C([-T, T]:H^s(\mathbb{R}^d)).\] \end{itemize} \end{theorem} \noindent We prove Theorem \ref{THM:1} by considering the equation satisfied by the nonlinear part of a solution $u$. Namely, let $z (t) = z^\o(t) : = S(t) \phi^\o$ and $v(t) := u(t) - S(t) \phi^\o$ be the linear and nonlinear parts of $u$, respectively. Then, \eqref{NLS1} is equivalent to the following perturbed NLS: \begin{equation} \begin{cases} i \partial_t v + \Delta v = \pm |v + z|^2(v+z)\\ v|_{t = 0} = 0. \end{cases} \label{NLS2} \end{equation} \noindent We reduce our analysis to the Cauchy problem \eqref{NLS2} for $v$, viewing $z$ as a random forcing term. Note that such a point of view is common in the study of stochastic PDEs. 
As a result, the uniqueness in Theorem \ref{THM:1} refers to uniqueness of the nonlinear part $v(t) = u(t) - S(t) \phi^\o$ of a solution $u$. The proof of Theorem \ref{THM:1} is based on the fixed point argument involving the variants of the $X^{s, b}$-spaces adapted to the $U^p$- and $V^p$-spaces introduced by Koch, Tataru, and their collaborators \cite{KochT, HHK, HTT1}. See Section \ref{SEC:Up} for the basic definitions and properties of these function spaces. The main ingredient is the local-in-time improvement of the Strichartz estimates (Lemma \ref{PROP:Str1}) and the refinement of the bilinear Strichartz estimate (Lemma \ref{LEM:Ys1} (ii)). We point out that, although $\phi$ and its randomization $\phi^\o$ have a supercritical Sobolev regularity, the randomization essentially makes the problem subcritical, at least locally in time, and therefore, one can also prove Theorem \ref{THM:1} only with the classical subcritical $X^{s, b}$-spaces, $b > \frac 12$. See \cite{BOP1} for the result when $d = 4$. Next, we turn our attention to the global-in-time behavior of the solutions constructed in Theorem \ref{THM:1}. The key nonlinear estimate in the proof of Theorem \ref{THM:1} combined with the global-in-time improvement of the Strichartz estimates (Lemma \ref{PROP:Str2}) yields the following result on small data global well-posedness and scattering. \begin{theorem}[Probabilistic small data global well-posedness and scattering]\label{THM:2} Let $d\geq 3$ and $s \in ( s_d, s_\textup{crit}]$, where $s_d$ is as in \eqref{Sd1}. Given $\phi \in H^s(\mathbb{R}^d)$, let $\phi^\o$ be its Wiener randomization defined in \eqref{R1}, satisfying \eqref{R2}. 
Then, there exist $ C, c >0$ such that for each $0<\varepsilon \ll1$, there exists a set $\O_\varepsilon \subset \O$ with the following properties: \smallskip \begin{itemize} \item[\textup{(i)}] $P(\O_\varepsilon^c) \leq C \exp\big(-\frac{c}{\varepsilon^2 \|\phi\|_{H^s}^2 }\big) \to 0$ as $\varepsilon \to 0$, \item[\textup{(ii)}] For each $\o \in \O_\varepsilon$, there exists a (unique) global-in-time solution $u$ to \eqref{NLS1} with \[u|_{t = 0} = \varepsilon \phi^\o\] in the class \[ \varepsilon S(t) \phi^\o + C(\mathbb{R} : H^{\frac{d-2}{2}} (\mathbb{R}^d)) \subset C(\mathbb{R}:H^s(\mathbb{R}^d)),\] \item[\textup{(iii)}] We have scattering for each $\o\in \Omega_\varepsilon$. More precisely, for each $\o \in \O_\varepsilon$, there exists $v_+^\o \in H^\frac{d-2}{2}(\mathbb{R}^d)$ such that \[ \| u(t) - S(t)(\varepsilon \phi^\o + v_+^\o) \|_{H^\frac{d-2}{2}(\mathbb{R}^d)} \to 0\] \noindent as $t \to \infty$. A similar statement holds for $t \to -\infty$. \end{itemize} \end{theorem} \noindent In general, a local well-posedness result in a critical space is often accompanied by small data global well-posedness and scattering. In this sense, Theorem \ref{THM:2} is an expected consequence of Theorem \ref{THM:1}, since, in our construction, the nonlinear part $v$ lies in the critical space $H^\frac{d-2}{2}(\mathbb{R}^d)$. The next natural question is probabilistic global well-posedness for large data. In order to state our result, we need to make several hypotheses. The first hypothesis is on a probabilistic a priori energy bound on the nonlinear part $v$. 
\medskip \noindent {\bf Hypothesis (A):} Given any $T, \varepsilon > 0$, there exists $R = R(T, \varepsilon)$ and $\O_{T, \varepsilon} \subset \O$ such that \begin{itemize} \item[(i)] $P(\O_{T, \varepsilon}^c) < \varepsilon$, and \item[(ii)] If $v = v^\o$ is the solution to \eqref{NLS2} for $\o \in \O_{T, \varepsilon}$, then the following {\it a priori} energy estimate holds: \begin{equation} \|v(t) \|_{L^\infty([0, T]; H^\frac{d-2}{2}(\mathbb{R}^d))} \leq R(T, \varepsilon). \label{HypA} \end{equation} \end{itemize} \medskip \noindent Note that Hypothesis (A) does {\it not} refer to existence of a solution $v = v^\o$ on $[0, T]$ for given $\o \in \O_{T, \varepsilon}$. It only hypothesizes the {\it a priori} energy bound \eqref{HypA}, just like the usual conservation laws. It may be possible to prove \eqref{HypA} independently from the argument presented in this paper. Such a probabilistic a priori energy estimate is known, for example, for the cubic NLW. See Burq-Tzvetkov \cite{BT3}. We point out that the upper bound $R(T, \varepsilon)$ in \cite{BT3} tends to $\infty$ as $T\to \infty$. See also \cite{POC}. The next hypothesis is on global existence and space-time bounds of solutions to the cubic NLS \eqref{NLS1} with deterministic initial data belonging to the critical space $H^\frac{d-2}{2}(\mathbb{R}^d)$. \medskip \noindent {\bf Hypothesis (B):} Given any $w_0 \in H^\frac{d-2}{2}(\mathbb{R}^d)$, there exists a global solution $w$ to the defocusing cubic NLS \eqref{NLS1} with $w|_{t = 0} = w_0$. Moreover, there exists a function $C:[0, \infty)\times [0, \infty)\to [0, \infty)$ which is non-decreasing in each argument such that \begin{equation} \| w\|_{L^{d+2}_{t, x}([0, T]\times \mathbb{R}^d)} \leq C\big(\|w_0\|_{H^\frac{d-2}{2}(\mathbb{R}^d)}, T\big) \label{HypB} \end{equation} \noindent for any $T > 0$. 
\medskip \noindent Note that when $d = 4$, Hypothesis (B) is known to be true for any $T> 0$ thanks to the global well-posedness result by Ryckman-Vi\c{s}an \cite{RV} and Vi\c{s}an \cite{Visan}. For other dimensions $d \geq 3$ with $d \ne 4$, it is not known whether Hypothesis (B) holds. Let us compare \eqref{HypB} and the results in \cite{KM2} and \cite{KV}. Assuming that $w \in L^\infty_t \dot H_x^{s_\textup{crit}}(I_*\times \mathbb{R}^d)$, where $I_*$ is a maximal interval of existence, it was shown in \cite{KM2} and \cite{KV} that $I_* = \mathbb{R}$ and \begin{equation} \| w\|_{L^{d+2}_{t, x}(\mathbb{R} \times \mathbb{R}^d)} \leq C\Big(\|w\|_{L^\infty_t \dot H_x^\frac{d-2}{2}(\mathbb{R}\times \mathbb{R}^d)}\Big). \label{HypC} \end{equation} \noindent We point out that Hypothesis (B) is not directly comparable to the results in \cite{KM2, KV} in the following sense. On the one hand, by assuming that $w \in L^\infty_t \dot H_x^{s_\textup{crit}}(I_*\times \mathbb{R}^d)$, the results in \cite{KM2, KV} yield the global-in-time bound \eqref{HypC}, while Hypothesis (B) assumes the bound \eqref{HypB} only for each {\it finite} time $T>0$ and does not assume a global-in-time bound. On the other hand, \eqref{HypB} is much stronger than \eqref{HypC} in the sense that the right-hand side of \eqref{HypB} depends only on the size of an initial condition $w_0$, while the right-hand side of \eqref{HypC} depends on the global-in-time $L^\infty_t \dot H^\frac{d-2}{2}_x$-bound of the solution $w$. Hypothesis (B), just like Hypothesis (A), is of interest in its own right, independently of Theorem \ref{THM:3} below, and is closely related to the fundamental open problem of global well-posedness and scattering for the defocusing cubic NLS \eqref{NLS1} for $d = 3$ and $d \geq 5$. We now state our third theorem on almost sure global well-posedness of the cubic NLS under Hypotheses (A) and (B). We restrict ourselves to the defocusing NLS in the next theorem.
\begin{theorem}[Conditional almost sure global well-posedness] \label{THM:3} Let $d \geq 3$ and $s \in ( s_d, s_\textup{crit}]$, where $s_d$ is as in \eqref{Sd1}. Assume Hypothesis \textup{(A)}. Furthermore, assume Hypothesis \textup{(B)} if $d \ne 4$. Given $\phi \in H^s(\mathbb{R}^d)$, let $\phi^\o$ be its Wiener randomization defined in \eqref{R1}, satisfying \eqref{R2}. Then, the defocusing cubic NLS \eqref{NLS1} on $\mathbb{R}^d$ is almost surely globally well-posed with respect to the randomization $\phi^\omega$ as initial data. More precisely, there exists a set $\Sigma \subset \O$ with $P(\Sigma) = 1$ such that, for each $\o \in \Sigma$, there exists a (unique) global-in-time solution $u$ to \eqref{NLS1} with $u|_{t = 0} = \phi^\o$ in the class \[ S(t) \phi^\o + C(\mathbb{R}: H^{\frac{d-2}{2}} (\mathbb{R}^d)) \subset C(\mathbb{R}:H^s(\mathbb{R}^d)).\] \end{theorem} \noindent The main tool in the proof of Theorem \ref{THM:3} is a perturbation lemma for the cubic NLS (Lemma \ref{LEM:perturb}). Assuming a control on the critical norm (Hypothesis (A)), we iteratively apply the perturbation lemma in the {\it probabilistic} setting to show that a solution can be extended to a time depending only on the critical norm. Such a perturbative approach was previously used by Tao-Vi\c{s}an-Zhang \cite{TVZ} and Killip-Vi\c{s}an with the second and third authors \cite{KOPV}. The novelty of Theorem \ref{THM:3} is an application of such a technique in the probabilistic setting. While there is no invariant measure for the nonlinear evolution in our setting, we exploit the quasi-invariance property of the distribution of the linear solution $S(t) \phi^\o$. See Remark \ref{REM:asGWP}. Our implementation of the proof of Theorem \ref{THM:3} is sufficiently general that it can be easily applied to other equations. See \cite{POC} in the context of the energy-critical NLW on $\mathbb{R}^d$, $d = 4, 5$, where both Hypotheses (A) and (B) are satisfied. 
When $d \ne 4$, the conditional almost sure global well-posedness in Theorem \ref{THM:3} has a flavor analogous to the deterministic conditional global well-posedness in the critical Sobolev spaces by Kenig-Merle \cite{KM2} and Killip-Vi\c{s}an \cite{KV}. In the following, let us discuss the situation when $d = 4$. In this case, we only assume Hypothesis (A) for Theorem \ref{THM:3}. While it would be interesting to remove this assumption, we do not know how to prove the validity of Hypothesis (A) at this point. This is mainly due to the lack of conservation of $H[v](t)$, i.e., the Hamiltonian evaluated at the nonlinear part $v$ of a solution. In the context of the energy-critical defocusing cubic NLW on $\mathbb{R}^4$, however, one can prove an analogue of Hypothesis (A) by establishing a probabilistic a priori bound on the energy $\mathcal{E}[v]$ of the nonlinear part $v$ of a solution, where the energy $\mathcal{E}[v]$ is defined by \[ \mathcal{E}[v](t) = \frac 12 \int_{\mathbb{R}^4} |\partial_t v(t, x)|^2 dx + \frac 12 \int_{\mathbb{R}^4}|\nabla v(t, x)|^2 dx + \frac 14 \int_{\mathbb{R}^4} |v(t, x)|^4 dx. \] \noindent As a consequence, the third author \cite{POC} successfully implemented a probabilistic perturbation argument and proved almost sure global well-posedness of the energy-critical defocusing cubic NLW on $\mathbb{R}^4$ with randomized initial data below the scaling-critical regularity.\footnote{ In \cite{POC}, the third author also proved almost sure global well-posedness of the energy-critical defocusing NLW on $\mathbb{R}^5$. This result was recently extended to dimension 3 by the second and third authors \cite{OP}.} We point out that the first term in the energy $\mathcal{E}[v]$ involving the time derivative plays an essential role in establishing a probabilistic a priori bound on the energy for NLW. It seems substantially harder to verify Hypothesis (A) for NLS, even when $d = 4$.
\smallskip While Theorem \ref{THM:3} provides only conditional almost sure global existence, our last theorem (Theorem \ref{THM:4}) below presents a way to construct global-in-time solutions below the scaling critical regularity with a large probability. The main idea is to use the scaling \eqref{scaling} of the equation for random initial data below the scaling criticality. For example, suppose that we have a solution $u$ to \eqref{NLS1} on a short time interval with a deterministic initial condition $u_0 \in H^s(\mathbb{R}^d)$, $s < s_\textup{crit}$. In view of \eqref{scaling} and \eqref{scaling2}, by taking $\mu \to 0$, we see that the $H^s$-norm of the scaled initial condition goes to 0. Thus, one might think that the problem can be reduced to small data theory. This, of course, does not work in the usual deterministic setting, since we do not know how to construct solutions depending only on the $H^s$-norm of the initial data, $s < s_\text{crit}$. Even in the probabilistic setting, this naive idea does not work if we simply apply the scaling to the randomized function $\phi^\o$ defined in \eqref{R1}. This is due to the fact that we need to use (sub-)critical space-time norms controlling the random linear term $z^\o(t) = S(t) \phi^\o$, which do not become small even if we take $\mu \ll1$. To resolve this issue, we consider a randomization based on a partition of the frequency space by {\it dilated} cubes. Given $\mu > 0$, define $\psi^\mu$ by \begin{equation} \psi^\mu(\xi) = \psi(\mu^{-1} \xi). 
\label{psi} \end{equation} \noindent Then, we can write a function $\phi$ on $\mathbb{R}^d$ as \[ \phi = \sum_{n \in \mathbb{Z}^d} \psi^\mu (D- \mu n) \phi.\] Now, we introduce the randomization $\phi^{\omega, \mu} $ of $\phi$ on dilated cubes of scale $\mu$ by \begin{equation} \phi^{\omega, \mu} : = \sum_{n \in \mathbb{Z}^d} g_n (\omega) \psi^\mu(D- \mu n) \phi, \label{R3} \end{equation} \noindent where $\{g_n\}_{n \in \mathbb{Z}^d}$ is a sequence of independent mean zero complex-valued random variables, satisfying \eqref{R2} as before. Then, we have the following global well-posedness of \eqref{NLS1} with a large probability. \begin{theorem}\label{THM:4} Let $ d\geq 3$ and $\phi \in H^s(\mathbb{R}^d)$, for some $s \in ( s_d, s_\textup{crit})$, where $s_d$ is as in \eqref{Sd1}. Then, given the randomization $\phi^{\o, \mu}$ on dilated cubes of scale $ \mu \ll1 $ defined in \eqref{R3}, satisfying \eqref{R2}, the cubic NLS \eqref{NLS1} on $\mathbb{R}^d$ is globally well-posed with a large probability. More precisely, for each $0< \varepsilon \ll 1$, there exists a small dilation scale $\mu_0 = \mu_0(\varepsilon, \|\phi\|_{H^s})> 0$ such that for each $\mu \in (0, \mu_0)$, there exists a set $\Omega_\mu \subset \Omega$ with the following properties: \begin{itemize} \item[\textup{(i)}] $P(\Omega_\mu^c) < \varepsilon$, \item[\textup{(ii)}] If $\phi^{\o, \mu}$ is the randomization on dilated cubes defined in \eqref{R3}, satisfying \eqref{R2}, then, for each $\o \in \O_\mu$, there exists a (unique) global-in-time solution $u$ to \eqref{NLS1} with $u|_{t = 0} = \phi^{\o, \mu}$ in the class \[ S(t) \phi^{\o, \mu} + C(\mathbb{R}: H^{\frac{d-2}{2}} (\mathbb{R}^d)) \subset C(\mathbb{R}:H^s(\mathbb{R}^d)).\] \noindent Moreover, for each $\o \in \O_\mu$, scattering holds in the sense that there exists $v_+^\o \in H^\frac{d-2}{2}(\mathbb{R}^d)$ such that \[ \| u(t) - S(t)( \phi^{\o, \mu} + v_+^\o) \|_{H^\frac{d-2}{2}(\mathbb{R}^d)} \to 0\] \noindent as $t \to \infty$.
A similar statement holds for $t \to -\infty$. \end{itemize} \end{theorem} We conclude this introduction with several remarks. \begin{remark}\rm In probabilistic well-posedness results \cite{BO7, Bo97, CO, NS} for NLS on $\mathbb{T}^d$, random initial data are assumed to be of the following specific form: \begin{equation} \label{I3} u_0^\omega(x) = \sum_{n \in \mathbb{Z}^d} \frac{g_n(\o)}{(1+|n|^2)^\frac{\alpha}{2}}e^{i n \cdot x}, \end{equation} \noindent where $\{g_n\}_{n \in \mathbb{Z}^d}$ is a sequence of independent complex-valued standard Gaussian random variables. The expression \eqref{I3} has a close connection to the study of invariant measures and hence it is of importance. At the same time, due to the lack of a full range of Strichartz estimates on $\mathbb{T}^d$, one could not handle a general randomization of a given function as in \eqref{I2}. In this paper, we consider NLS on $\mathbb{R}^d$ and thus we do not encounter this issue thanks to a full range of the Strichartz estimates. For NLW, finite speed of propagation allows us to use a full range of Strichartz estimates even on compact domains, at least locally in time. Thus, one does not encounter such an issue. \end{remark} \begin{remark}\label{REM:LM} \rm In a recent preprint, L\"uhrmann-Mendelson \cite{LM} considered the defocusing NLW on $\mathbb{R}^3$ with randomized initial data, essentially given by \eqref{R1}, below the critical regularity and proved almost sure global well-posedness in the energy-subcritical case, following the method developed in \cite{CO}, namely an adaptation of Bourgain's high-low method \cite{Bo98} in the probabilistic setting. 
As Bourgain's high-low method is a subcritical tool, their global result misses the energy-critical case.\footnote{ In \cite{OP}, the second and third authors recently proved almost sure global well-posedness of the energy-critical defocusing quintic NLW on $\mathbb{R}^3$.} The third author \cite{POC} recently proved almost sure global well-posedness of the energy-critical defocusing NLW on $\mathbb{R}^d$, $d = 4, 5$, with randomized initial data below the critical regularity. The argument is based on an application of a perturbation lemma as in Theorem \ref{THM:3} along with a probabilistic a priori control on the energy, which is not available for the cubic NLS \eqref{NLS1}. \end{remark} This paper is organized as follows. In Section \ref{SEC:2}, we state some probabilistic lemmata. In Section \ref{SEC:Up}, we go over the basic definitions and properties of function spaces involving the $U^p$- and $V^p$-spaces. We prove the key nonlinear estimates in Section \ref{SEC:4} and then use them to prove Theorems \ref{THM:1} and \ref{THM:2} in Section \ref{SEC:THM12}. We divide the proof of Theorem \ref{THM:3} into three sections. In Sections \ref{SEC:6} and \ref{SEC:7}, we discuss the Cauchy theory for the defocusing cubic NLS with a deterministic perturbation. We implement these results in the probabilistic setting and prove Theorem \ref{THM:3} in Section \ref{SEC:8}. In Section \ref{SEC:9}, we show how Theorem \ref{THM:4} follows from the arguments in Sections \ref{SEC:4} and \ref{SEC:THM12}, once we consider a randomization on dilated cubes. In Appendix \ref{SEC:A}, we state and prove some additional properties of the function spaces defined in Section \ref{SEC:Up}. Lastly, note that we present the proofs of these results only for positive times in view of the time reversibility of \eqref{NLS1}. \section{Probabilistic lemmata} \label{SEC:2} In this section, we summarize the probabilistic lemmata used in this paper. 
In particular, the probabilistic Strichartz estimates (Lemmata \ref{PROP:Str1} and \ref{PROP:Str2}) play an essential role. First, we recall the usual Strichartz estimates on $\mathbb{R}^d$ for readers' convenience. We say that a pair $(q, r)$ is Schr\"odinger admissible if it satisfies \begin{equation} \frac{2}{q} + \frac{d}{r} = \frac{d}{2} \label{Admin} \end{equation} \noindent with $2\leq q, r \leq \infty$ and $(q, r, d) \ne (2, \infty, 2)$. Then, the following Strichartz estimates are known to hold. \begin{lemma}[\cite{Strichartz, Yajima, GV, KeelTao}]\label{LEM:Str0} Let $(q, r)$ be Schr\"odinger admissible. Then, we have \begin{equation} \| S(t) \phi\|_{L^q_t L^r_x (\mathbb{R}\times \mathbb{R}^d)} \lesssim \|\phi\|_{L^2_x(\mathbb{R}^d)}. \label{Str0} \end{equation} \end{lemma} \noindent In particular, when $q = r$, we have $q = r = \frac{2(d+2)}{d}$. By applying the Sobolev inequality and \eqref{Str0}, we also have \begin{equation} \| S(t) \phi\|_{L^p_{t, x} (\mathbb{R}\times \mathbb{R}^d)} \lesssim \big\||\nabla|^{\frac d2 - \frac{d+2}{p}}\phi\big\|_{L^2_x(\mathbb{R}^d)} \label{Str0a} \end{equation} \noindent for $p \geq \frac{2(d+2)}{d}$. Recall that the derivative loss in \eqref{Str0a} depends only on the size of the frequency support and not its location. Namely, if $\widehat{\phi}$ is supported on a cube $Q$ of side length $N$, then we have \begin{equation} \| S(t) \phi\|_{L^p_{t, x} (\mathbb{R}\times \mathbb{R}^d)} \lesssim N^{\frac d2 - \frac{d+2}{p}}\|\phi\|_{L^2_x(\mathbb{R}^d)}, \label{Str0b} \end{equation} regardless of the center of the cube $Q$. Next, we present improvements of the Strichartz estimates under the Wiener randomization \eqref{R1}; throughout, we assume that \eqref{R2} holds. See \cite{BOP1} for the proofs. \begin{lemma}[Improved local-in-time Strichartz estimate]\label{PROP:Str1} Given $\phi \in L^2(\mathbb{R}^d)$, let $\phi^\o$ be its Wiener randomization defined in \eqref{R1}, satisfying \eqref{R2}.
Then, given finite $q, r \geq 2$, there exist $C, c>0$ such that \begin{align*} P\Big( \|S(t) \phi^\omega\|_{L^q_t L^r_x([0, T]\times \mathbb{R}^d)}> \lambda\Big) \leq C\exp \bigg(-c \frac{\lambda^2}{ T^\frac{2}{q}\|\phi\|_{L^2}^{2}}\bigg) \end{align*} \noindent for all $ T > 0$ and $\lambda>0$. In particular, with $\lambda = T^\theta $, we have \begin{equation} \|S(t) \phi^\o\|_{L^q_tL^r_x([0, T]\times \mathbb{R}^d)} \lesssim T^\theta \label{Str1a} \end{equation} \noindent outside a set of probability \[ \leq C\exp \bigg(-c \frac{1}{ T^{2(\frac{1}{q}-\theta)}\|\phi\|_{L^2}^{2}}\bigg). \] Note that this probability can be made arbitrarily small by letting $T\to 0$ as long as $\theta < \frac{1}{q}$. \end{lemma} \noindent The next lemma states an improvement of the Strichartz estimates in the global-in-time setting. \begin{lemma}[Improved global-in-time Strichartz estimate]\label{PROP:Str2} Given $\phi \in L^2(\mathbb{R}^d)$, let $\phi^\o$ be its Wiener randomization defined in \eqref{R1}, satisfying \eqref{R2}. Given a Schr\"odinger admissible pair $(q, r)$ with $q, r < \infty$, let $\widetilde {r} \geq r$. Then, there exist $C, c>0$ such that \begin{align*} P\Big( \|S(t) \phi^\omega\|_{L^q_t L^{\widetilde{r}}_x ( \mathbb{R} \times \mathbb{R}^d)} > \lambda\Big) \leq Ce^{-c \lambda^2 \|\phi\|_{L^2}^{-2}}. \end{align*} \noindent In particular, given any small $\varepsilon > 0$, we have \[ \|S(t) \phi^\omega\|_{L^q_t L^{\widetilde{r}}_x ( \mathbb{R} \times \mathbb{R}^d)} \lesssim \Big( \log \frac{1}{\varepsilon}\Big)^\frac{1}{2} \|\phi\|_{L^2(\mathbb{R}^d)} \] \noindent outside a set of probability $< \varepsilon$. \end{lemma} \noindent Recall that the diagonal Strichartz admissible index is given by $p= \frac{2(d+2)}{d}$. In the diagonal case $q = \widetilde r$, it is easy to see that the condition of Lemma \ref{PROP:Str2} is satisfied if $q = \widetilde r \geq p = \frac{2(d+2)}{d}$. In the following, we apply Lemma \ref{PROP:Str2} in this setting. 
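\begin{remark}\rm
For the reader's convenience, we record the elementary arithmetic behind the diagonal case; this is only a restatement of \eqref{Admin} and \eqref{Str0a}. Setting $q = r$ in \eqref{Admin} gives
\[ \frac{2}{q} + \frac{d}{q} = \frac{d}{2} \quad \Longleftrightarrow \quad q = \frac{2(d+2)}{d} = p,\]
\noindent so that the diagonal pair $(p, p)$ is Schr\"odinger admissible. Similarly, the derivative loss in \eqref{Str0a} is obtained by applying the Sobolev inequality in $x$ to the admissible pair $(p, r)$ with $\frac{2}{p} + \frac{d}{r} = \frac{d}{2}$: the number of derivatives needed is
\[ \frac{d}{r} - \frac{d}{p} = \bigg(\frac{d}{2} - \frac{2}{p}\bigg) - \frac{d}{p} = \frac{d}{2} - \frac{d+2}{p},\]
\noindent which matches the exponent appearing in \eqref{Str0a} and \eqref{Str0b}.
\end{remark}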
We also need the following lemma on the control of the size of the $H^s$-norm of $\phi^\o$. \begin{lemma} \label{LEM:Hs} Given $\phi \in H^s(\mathbb{R}^d)$, let $\phi^\o$ be its Wiener randomization defined in \eqref{R1}, satisfying \eqref{R2}. Then, we have \begin{align} P\Big( \| \phi^\omega \|_{ H^s( \mathbb{R}^d)} > \lambda\Big) \leq C e^{-c \lambda^2 \|\phi\|_{ H^s}^{-2}}. \label{Hs1} \end{align} \end{lemma} We conclude this section by introducing some notation involving Strichartz and space-time Lebesgue spaces. In the sequel, given an interval $I\subset\mathbb{R}$, we often use $L^q_tL^r_x(I)$ to denote $L^q_tL^r_x(I\times \mathbb{R}^d)$. We also define the $\dot S^{s_\textup{crit}}(I)$-norm in the usual manner by setting \[ \|u \|_{\dot S^{s_\textup{crit}}(I)} := \sup\Big\{ \big\| |\nabla|^\frac{d-2}{2} u \big\|_{L^q_tL^r_x(I\times \mathbb{R}^d) }\Big\},\] \noindent where the supremum is taken over all Schr\"odinger admissible pairs $(q, r)$. \section{Function spaces and their properties} \label{SEC:Up} In this section, we go over the basic definitions and properties of the $U^p$- and $V^p$-spaces, developed by Tataru, Koch, and their collaborators \cite{KochT, HHK, HTT1}. These spaces have been very effective in establishing well-posedness of various dispersive PDEs at critical regularities. See Hadac-Herr-Koch \cite{HHK} and Herr-Tataru-Tzvetkov \cite{HTT1} for detailed proofs. Let $H$ be a separable Hilbert space over $\mathbb{C}$. In particular, it will be either $H^s(\mathbb{R}^d)$ or $\mathbb{C}$. Let $\mathcal{Z}$ be the collection of finite partitions $\{t_k\}_{k = 0}^K$ of $\mathbb{R}$: $-\infty < t_0 < \cdots < t_K \leq \infty$. If $t_K = \infty$, we use the convention $u(t_K) :=0$ for all functions $u:\mathbb{R}\to H$. We use $\chi_I$ to denote the sharp characteristic function of a set $I \subset \mathbb{R}$. \begin{definition} \label{DEF:X1}\rm Let $1\leq p < \infty$.
\smallskip \noindent \textup{(i)} A $U^p$-atom is a step function $a:\mathbb{R}\to H$ of the form \[ a = \sum_{k = 1}^K \phi_{k - 1} \chi_{[t_{k-1}, t_k)}, \] \noindent where $\{t_k\}_{k = 0}^K \in \mathcal{Z}$ and $\{\phi_k\}_{k = 0}^{K-1} \subset H$ with $\sum_{k = 0}^{K-1} \|\phi_k\|_H^p = 1$. Then, we define the atomic space $U^p(\mathbb{R}; H)$ to be the collection of functions $u:\mathbb{R}\to H$ of the form \begin{equation} u = \sum_{j = 1}^\infty \lambda_j a_j, \quad \text{ where $a_j$'s are $U^p$-atoms and $\{\lambda_j\}_{j \in \mathbb{N}}\in \l^1(\mathbb{N}; \mathbb{C})$}, \label{X1} \end{equation} \noindent with the norm \[ \|u\|_{U^p(\mathbb{R}; H)} : = \inf \Big\{ \|\lambda \|_{\l^1} : \eqref{X1} \text{ holds with } \lambda = \{\lambda_j \}_{j \in \mathbb{N}} \text{ and some $U^p$-atoms } a_j\Big\}.\] \smallskip \noindent \textup{(ii)} We define the space $V^p(\mathbb{R}; H)$ of functions of bounded $p$-variation to be the collection of functions $u : \mathbb{R} \to H$ with $\|u\|_{V^p(\mathbb{R}; H)} < \infty$, where the $V^p$-norm is defined by \[ \|u\|_{V^p(\mathbb{R}; H)} := \sup_{\{t_k\}_{k = 0}^K \in \mathcal{Z}} \bigg(\sum_{k = 1}^K\|u(t_k) - u(t_{k-1})\|_H^p\bigg)^\frac{1}{p}. \] \noindent We also define $V^p_\text{rc}(\mathbb{R}; H)$ to be the closed subspace of all right-continuous functions in $V^p(\mathbb{R}; H)$ such that $\lim_{t \to -\infty} u(t) = 0$. \smallskip \noindent \textup{(iii)} Let $s \in \mathbb{R}$. We define $U^p_\Delta H^s$ (and $V^p_\Delta H^s$, respectively) to be the spaces of all functions $u: \mathbb{R} \to H^s(\mathbb{R}^d)$ such that the following $U^p_\Delta H^s$-norm (and $V^p_\Delta H^s$-norm, respectively) is finite: \[ \|u \|_{U^p_\Delta H^s} := \|S(-t) u\|_{U^p(\mathbb{R}; H^s)} \quad \text{and} \quad \|u \|_{V^p_\Delta H^s} := \|S(-t) u\|_{V^p(\mathbb{R}; H^s)}, \] \noindent where $S(t) = e^{it\Delta}$ denotes the linear propagator for \eqref{NLS1}.
We use $V^p_{\text{rc}, \Delta} H^s$ to denote the subspace of right-continuous functions in $V^p_\Delta H^s$. \end{definition} \begin{remark}\label{REM:UpVp} \rm Note that the spaces $U^p(\mathbb{R}; H)$, $V^p(\mathbb{R}; H)$, and $V^p_\text{rc}(\mathbb{R}; H)$ are Banach spaces. The closed subspace of continuous functions in $U^p(\mathbb{R}; H)$ is also a Banach space. Moreover, we have the following embeddings: \begin{equation*} U^p(\mathbb{R}; H) \hookrightarrow V^p_\text{rc}(\mathbb{R}; H) \hookrightarrow U^q(\mathbb{R}; H) \hookrightarrow L^\infty(\mathbb{R}; H) \end{equation*} \noindent for $ 1\leq p < q < \infty$. Similar embeddings hold for $U^p_\Delta H^s$ and $V^p_\Delta H^s$. \end{remark} Next, we state a transference principle and an interpolation result. \begin{lemma}\label{LEM:Xinterpolate} \textup{(i)} {\rm (Transference principle)} Suppose that we have \[ \big\| T(S(t) \phi_1, \dots, S(t) \phi_k)\big\|_{L^p_t L^q_x(\mathbb{R}\times \mathbb{R}^d)} \lesssim \prod_{j = 1}^k \|\phi_j\|_{L^2_x}\] \noindent for some $1\leq p, q \leq \infty$. Then, we have \[ \big\| T(u_1, \dots, u_k)\big\|_{L^p_t L^q_x(\mathbb{R}\times \mathbb{R}^d)} \lesssim \prod_{j = 1}^k \|u_j\|_{U^p_\Delta L^2_x}.\] \noindent \textup{(ii)} {\rm (Interpolation)} Let $E$ be a Banach space. Suppose that $T: U^{p_1}\times \cdots \times U^{p_k} \to E$ is a bounded $k$-linear operator such that \[ \|T(u_1, \dots, u_k)\|_{E} \leq C_1 \prod_{j = 1}^k \|u_j\|_{U^{p_j}}\] \noindent for some $p_1, \dots, p_k > 2$. Moreover, assume that there exists $C_2 \in (0, C_1]$ such that \[ \|T(u_1, \dots, u_k)\|_{E} \leq C_2 \prod_{j = 1}^k \|u_j\|_{U^{2}}.\] \noindent Then, we have \[ \|T(u_1, \dots, u_k)\|_{E} \leq C_2 \bigg(\ln \frac{C_1}{C_2}+ 1\bigg)^k \prod_{j = 1}^k \|u_j\|_{V^2}\] \noindent for $u_j \in V^2_\textup{rc}$, $j = 1, \dots, k$. \end{lemma} \noindent A transference principle as above has been commonly used in the Fourier restriction norm method. 
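\begin{remark}\rm
As a quantitative illustration of Lemma \ref{LEM:Xinterpolate} (ii) (a model computation; the concrete application appears in the proof of Lemma \ref{LEM:Ys1} below), suppose that a bilinear operator $T$ satisfies the hypotheses with constants $C_1 = N^{\beta}$ and $C_2 = N^{\beta - \delta}$ for some $N \geq 1$ and $\delta > 0$, so that $C_2 \leq C_1$. Then, the conclusion reads
\[ \|T(u_1, u_2)\|_{E} \leq N^{\beta - \delta} \big( \delta \ln N + 1\big)^2 \|u_1\|_{V^2} \|u_2\|_{V^2}.\]
\noindent Namely, the gain $N^{-\delta}$ available at the $U^2$-level survives the passage to the $V^2$-norms, up to a logarithmic loss.
\end{remark}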
See \cite[Proposition 2.19]{HHK} for the proof of Lemma \ref{LEM:Xinterpolate} (i). The proof of the interpolation result follows from extending the trilinear result in \cite{HTT1} to a general $k$-linear case. See also \cite[Proposition 2.20]{HHK}. \smallskip Let $\eta: \mathbb{R} \to [0, 1]$ be an even, smooth cutoff function supported on $[-\frac{8}{5}, \frac{8}{5}]$ such that $\eta \equiv 1$ on $[-\frac{5}{4}, \frac{5}{4}]$. Given a dyadic number $N \geq 1$, we set $\eta_1(\xi) = \eta(|\xi|)$ and \[\eta_N(\xi) = \eta\bigg(\frac{|\xi|}{N}\bigg) - \eta\bigg(\frac{2|\xi|}{N}\bigg)\] \noindent for $N \geq 2$. Then, we define the Littlewood-Paley projection operator $\mathbf{P}_N$ as the Fourier multiplier operator with symbol $\eta_N$. Moreover, we define $\mathbf{P}_{\leq N}$ and $\mathbf{P}_{\geq N}$ by $\mathbf{P}_{\leq N} = \sum_{1 \leq M \leq N} \mathbf{P}_M$ and $\mathbf{P}_{\geq N} = \sum_{ M \geq N} \mathbf{P}_M$. \begin{definition} \label{DEF:X3} \rm \textup{(i)} Let $s\in \mathbb{R}$. We define $X^s(\mathbb{R})$ to be the space of all tempered distributions $u : \mathbb{R} \to H^s(\mathbb{R}^d)$ such that $\| u\|_{X^s(\mathbb{R})} < \infty$, where the $X^s$-norm is defined by \[ \|u \|_{X^s (\mathbb{R})} : = \bigg( \sum_{\substack{N\geq 1\\\textup{dyadic}}} N^{2s} \| \mathbf{P}_N u \|_{U^2_{\Delta}L^2}^2 \bigg)^\frac{1}{2}. \] \smallskip \noindent \textup{(ii)} Let $s\in \mathbb{R}$. We define $Y^s(\mathbb{R})$ to be the space of all tempered distributions $u : \mathbb{R} \to H^s(\mathbb{R}^d)$ such that for every $N\in \mathbb{N}$, the map $t \mapsto \mathbf{P}_N u(t)$ is in $V^2_{\textup{rc}, \Delta} H^s$ and $\| u\|_{Y^s(\mathbb{R})} < \infty$, where the $Y^s$-norm is defined by \[ \|u \|_{Y^s(\mathbb{R})} : = \bigg( \sum_{\substack{N\geq 1\\\textup{dyadic}}} N^{2s} \| \mathbf{P}_N u\|_{V^2_\Delta L^2}^2\bigg)^\frac{1}{2}. 
\] \end{definition} \noindent Recall the following embeddings: \begin{equation}\label{inclusions} U^2_\Delta H^s \hookrightarrow X^s \hookrightarrow Y^s \hookrightarrow V^2_\Delta H^s \hookrightarrow U^p_\Delta H^s, \end{equation} for $p>2$. Given an interval $I \subset \mathbb{R}$, we define the local-in-time versions $X^s(I)$ and $Y^s(I)$ of these spaces as restriction norms. For example, we define the $X^s(I)$-norm by \[ \|u \|_{X^s(I)} = \inf\big\{ \|v\|_{X^s(\mathbb{R})}: \, v|_I = u\big\}.\] \noindent We also define the norm for the nonhomogeneous term: \begin{align} \| F\|_{N^s(I)} = \bigg\|\int_{t_0}^t S(t - t') F(t') dt'\bigg\|_{X^s(I)}. \end{align} \noindent In the following, we will perform our analysis in $X^s(I) \cap C(I; H^s)$, that is, in a Banach subspace of continuous functions in $X^s(I)$. See Appendix \ref{SEC:A} for additional properties of the $X^s(I)$-spaces. We conclude this section by presenting some basic estimates involving these function spaces. \begin{lemma}\label{LEM:Ys1} \textup{(i) (Linear estimates)} Let $s \geq 0$ and $0 < T \leq \infty$. Then, we have \begin{align*} \| S(t) \phi \|_{X^s([0, T))} & \leq \|\phi\|_{H^s}, \\ \|F\|_{N^s([0, T))} & \leq \sup_{\substack{v \in Y^{-s}([0, T))\\\|v\|_{Y^{-s} }= 1}} \bigg| \int_0^T \int_{\mathbb{R}^d} F(t, x) \overline{v(t, x)} dx dt\bigg| \end{align*} \noindent for all $\phi \in H^s(\mathbb{R}^d)$ and $F \in L^1([0, T); H^s(\mathbb{R}^d))$. \smallskip \noindent \textup{(ii) (Strichartz estimates)} Let $(q, r)$ be Schr\"odinger admissible with $ q > 2$ and $p \geq \frac{2(d+2)}{d}$. 
Then, for $0< T\leq \infty$ and $N_1 \leq N_2$, we have \begin{align} \| u \|_{L^q_t L^r_x([0, T)\times \mathbb{R}^d)} & \lesssim \|u\|_{Y^0([0, T))}, \label{Ys2}\\ \| u \|_{L^p_{t,x}([0, T)\times \mathbb{R}^d)} & \lesssim \big\||\nabla|^{\frac d2 - \frac {d+2}p} u\big\|_{Y^0([0, T))}, \label{Ys3}\\ \| \mathbf{P}_{N_1} u_1 \mathbf{P}_{N_2}u_2\|_{L^2_{t, x}([0, T)\times \mathbb{R}^d)} & \lesssim N_1^\frac{d-2}{2} \bigg(\frac{N_1}{N_2}\bigg)^{\frac 12-} \|\mathbf{P}_{N_1} u_1\|_{Y^0([0, T))}\|\mathbf{P}_{N_2} u_2\|_{Y^0([0, T))}. \label{Ys4} \end{align} \end{lemma} \noindent Note that there is a slight loss of regularity in \eqref{Ys4} since we use the $Y^0$-norm on the right-hand side instead of the $X^0$-norm. In view of \eqref{inclusions}, we may replace the $Y^0$-norms on the right-hand sides of \eqref{Ys2}, \eqref{Ys3}, and \eqref{Ys4} by the $X^0$-norm in the following. \begin{proof} In the following, we briefly discuss the proof of (ii). See \cite{HHK, HTT1} for the proof of (i). The first estimate \eqref{Ys2} follows from the Strichartz estimate \eqref{Str0}, Lemma \ref{LEM:Xinterpolate} (i), and \eqref{inclusions}: \[\|u\|_{L^q_tL^r_x}\lesssim \|u\|_{U^q_{\Delta}L^2}\lesssim \|u\|_{Y^0},\] \noindent for $q>2$. The second estimate \eqref{Ys3} follows from \eqref{Str0a} in a similar manner. It remains to prove \eqref{Ys4}. On the one hand, the following bilinear refinement of the Strichartz estimate by Bourgain \cite{Bo98} and Ozawa-Tsutsumi \cite{OT}: \[\|\mathbf{P}_{N_1}S(t) \phi_1 \mathbf{P}_{N_2}S(t) \phi_2\|_{L^2_{t,x}} \lesssim N_1^{\frac{d-2}{2}}\bigg(\frac{N_1}{N_2}\bigg)^{\frac 12} \|\mathbf{P}_{N_1}\phi_1\|_{L^2}\|\mathbf{P}_{N_2}\phi_2\|_{L^2}\] \noindent and Lemma \ref{LEM:Xinterpolate} (i) yield \begin{equation}\label{biU2} \|\mathbf{P}_{N_1}u_1\mathbf{P}_{N_2}u_2\|_{L^2_{t,x}}\lesssim N_1^{\frac{d-2}{2}} \bigg(\frac{N_1}{N_2}\bigg)^{\frac 12}\|\mathbf{P}_{N_1}u_1\|_{U^2_{\Delta}L^2}\|\mathbf{P}_{N_2}u_2\|_{U^2_{\Delta}L^2}. 
\end{equation} \noindent On the other hand, by Bernstein's inequality and noting that $(4, \frac{2d}{d-1})$ is Strichartz admissible, we have \begin{align*} \|\mathbf{P}_{N_j}S(t) \phi_j\|_{L^4_{t,x}} &\lesssim N_j^{\frac{d-2}{4}} \|\mathbf{P}_{N_j}S(t) \phi_j\|_{L^4_tL^{\frac{2d}{d-1}}_x} \lesssim N_j^{\frac{d-2}{4}} \|\mathbf{P}_{N_j}\phi_j\|_{L^2}. \end{align*} \noindent Then, by Cauchy-Schwarz inequality and Lemma \ref{LEM:Xinterpolate} (i), we obtain \begin{equation}\label{biU4} \|\mathbf{P}_{N_1}u_1\mathbf{P}_{N_2}u_2\|_{L^2_{t,x}}\lesssim N_1^{\frac{d-2}{4}}N_2^{\frac{d-2}{4}}\|\mathbf{P}_{N_1}u_1\|_{U^4_{\Delta}L^2}\|\mathbf{P}_{N_2}u_2\|_{U^4_{\Delta}L^2}. \end{equation} \noindent Hence, by Lemma \ref{LEM:Xinterpolate} (ii), with \eqref{biU2} and \eqref{biU4}, we have \begin{align} \|\mathbf{P}_{N_1}u_1 \mathbf{P}_{N_2}u_2\|_{L^2_{t,x}} \lesssim N_1^{\frac{d-2}{2}} \bigg(\frac{N_1}{N_2}\bigg)^{\frac 12} \bigg(\ln \Big(\frac{N_2}{N_1}\Big)+1\bigg)^2 \|\mathbf{P}_{N_1}u_1\|_{V^2_{\Delta}L^2}\|\mathbf{P}_{N_2}u_2\|_{V^2_{\Delta}L^2}. \label{biU5} \end{align} \noindent Finally, \eqref{Ys4} follows from \eqref{inclusions} and \eqref{biU5}. \end{proof} Similarly to the usual Strichartz estimate \eqref{Str0a}, the derivative loss in \eqref{Ys3} depends only on the size of the spatial frequency support and not its location. Namely, if the spatial frequency support of $\widehat{u}(t, \xi)$ is contained in a cube of side length $N$ for all $t \in \mathbb{R}$, then we have \begin{equation} \| u \|_{L^p_{t,x}([0, T)\times \mathbb{R}^d)} \lesssim N^{\frac d2 - \frac{d+2}{p}} \| u\|_{Y^0([0, T))}. \label{Ys5} \end{equation} \noindent This is a direct consequence of \eqref{Str0b}. Lastly, we recall Schur's test for readers' convenience. \begin{lemma}[Schur's test]\label{LEM:Schur} Suppose that we have \[ \sup_m \sum_{n} |K_{m, n}| + \sup_n \sum_{m} |K_{m, n}| < \infty\] \noindent for some $K_{m, n} \in \mathbb{C}$, $m, n \in \mathbb{Z}$. 
Then, we have \[\bigg|\sum_{m, n } K_{m, n} a_m b_n \bigg| \lesssim \|a_m\|_{\l^2_m}\|b_n\|_{\l^2_n}\] \noindent for any $\l^2$-sequences $\{a_m\}_{m\in\mathbb{Z}}$ and $\{b_n\}_{n\in\mathbb{Z}}$. \end{lemma} \section{Probabilistic nonlinear estimates} \label{SEC:4} In this section, we prove the key nonlinear estimates at the critical regularity $s_\textup{crit} = \frac{d-2}{2}$. In the next section, we use them to prove Theorems \ref{THM:1} and \ref{THM:2}. Given $z(t) = S(t) \phi^\o$, define $\Gamma$ by \begin{equation} \Gamma v(t) =\mp i \int_0^t S(t-t') \mathcal{N} (v+z)(t') dt', \label{NLS5} \end{equation} where $\mathcal{N}(v+z)=|v+z|^2(v+z)$. \noindent Then, we have the following nonlinear estimates. \begin{proposition}\label{PROP:NL1} Given $d\geq 3$, let $s \in (s_d, s_\textup{crit}]$, where $s_d$ is defined in \eqref{Sd1}. Given $\phi \in H^s(\mathbb{R}^d)$, let $\phi^\o$ be its Wiener randomization defined in \eqref{R1}, satisfying \eqref{R2}. \smallskip \noindent \textup{(i)} Let $0<T \leq 1$. Then, there exists $0<\theta\ll1$ such that we have \begin{align} \|\Gamma v\|_{X^\frac{d-2}{2}([0, T))} & \leq C_1 \big(\|v\|_{X^\frac{d-2}{2}([0, T))} ^3 + T^\theta R^3\big), \label{nl1a}\\ \|\Gamma v_1 - \Gamma v_2 \|_{X^\frac{d-2}{2}([0, T))} & \leq C_2 \Big(\sum_{j = 1}^2 \|v_j\|_{X^\frac{d-2}{2}([0, T))} ^2 + T^\theta R^2\Big) \|v_1 -v_2 \|_{X^\frac{d-2}{2}([0, T))}, \label{nl1b} \end{align} \noindent for all $v, v_1, v_2 \in X^{\frac{d-2}{2}}([0, T))$ and $R>0$, outside a set of probability $\leq C \exp\big(-c \frac{R^2}{\|\phi\|_{H^s}^2}\big)$. \noindent \textup{(ii)} Given $0 < \varepsilon \ll1$, define $\widetilde \Gamma$ by \begin{equation} \widetilde \Gamma v(t) = \mp i \int_0^t S(t-t') \mathcal{N} (v+\varepsilon z)(t') dt'.
\label{NLS6} \end{equation} \noindent Then, we have \begin{align} \|\widetilde \Gamma v\|_{X^\frac{d-2}{2}(\mathbb{R} )} & \leq C_3 \big(\|v\|_{X^\frac{d-2}{2}(\mathbb{R})} ^3 + R^3\big), \label{nl1c}\\ \|\widetilde \Gamma v_1 - \widetilde \Gamma v_2 \|_{X^\frac{d-2}{2}(\mathbb{R})} & \leq C_4 \Big(\sum_{j = 1}^2 \|v_j\|_{X^\frac{d-2}{2}(\mathbb{R})} ^2 + R^2\Big) \|v_1 -v_2 \|_{X^\frac{d-2}{2}(\mathbb{R})}, \label{nl1d} \end{align} \noindent for all $v, v_1, v_2 \in X^{\frac{d-2}{2}}(\mathbb{R})$ and $R>0$, outside a set of probability $\leq C \exp\big(-c \frac{R^2}{\varepsilon ^2 \|\phi\|_{H^s}^2}\big)$. \end{proposition} \begin{proof} (i) Let $0<T \leq 1$. We only prove \eqref{nl1a} since \eqref{nl1b} follows in a similar manner. Given $N \geq 1$, define $\Gamma_N$ by \begin{equation} \Gamma_N v(t) =\mp i \int_0^t S(t-t')\mathbf{P}_{\leq N} \mathcal{N} (v+z)(t') dt'. \label{NNLS1} \end{equation} \noindent By Bernstein's and H\"older's inequalities, we have \begin{align} \|\mathbf{P}_{\leq N} \mathcal{N} (v+z)\|_{L^1_t ([0, T); H^{\frac{d-2}{2}}_x)} & \lesssim N^\frac{d-2}{2} \| \mathcal{N} (v+z)\|_{L^1_t L^2_x}\notag \\ & \lesssim N^\frac{d-2}{2} \| v\|_{L^3_t([0, T); L^6_x)}^3 + N^\frac{d-2}{2} \| z\|_{L^3_t([0, T); L^6_x)}^3. \label{NNLS2} \end{align} \noindent On the one hand, it follows from Lemma \ref{PROP:Str1} that the second term on the right-hand side of \eqref{NNLS2} is finite almost surely. On the other hand, noting that $(3, \frac{6d}{3d-4})$ is Strichartz admissible, it follows from Sobolev's inequality and \eqref{Ys2} in Lemma \ref{LEM:Ys1} that \begin{align} \| v\|_{L^3_t([0, T); L^6_x)} \lesssim \big\| \jb{\nabla}^\frac{d-2}{3}v\big\|_{L^3_t([0, T); L^\frac{6d}{3d-4}_x)} \lesssim \| v \|_{X^\frac{d-2}{3}([0, T))} < \infty.
\label{NNLS3} \end{align} \noindent Therefore, by Lemma \ref{LEM:Ys1} (i), we have \begin{align} \|\Gamma_N v(t)\|_{X^\frac{d-2}{2}} \lesssim \sup_{\substack{v_4 \in Y^{0}([0, T))\\\|v_4\|_{Y^{0}} = 1}} \bigg| \int_0^T \int_{\mathbb{R}^d} \jb{\nabla}^\frac{d-2}{2} \mathcal{N}(v+z)(t, x) \overline{v_4(t, x)} dx dt\bigg|, \label{nl1} \end{align} \noindent almost surely, where $v_4 = \mathbf{P}_{\leq N} v_4$. In the following, we estimate the right-hand side of \eqref{nl1}, independently of the cutoff size $N \geq 1$, by performing a case-by-case analysis of expressions of the form: \begin{align} \bigg| \int_0^T \int_{\mathbb{R}^d} \jb{\nabla}^\frac{d-2}{2} ( w_1 w_2 w_3 )v_4 dx dt\bigg| \label{nl2} \end{align} \noindent where $\|v_4\|_{Y^{0}([0, T))} \leq 1$ and $w_j= v$ or $z$, $j = 1, 2, 3$. As a result, by taking $N \to \infty$, the same estimates hold for $\Gamma v$ without any cutoff, thus yielding \eqref{nl1a}. Before proceeding further, let us simplify some of the notations. In the following, we drop the complex conjugate sign. We also denote $X^s([0, T))$ and $Y^s([0, T))$ by $X^s$ and $Y^s$ since $T$ is fixed. Similarly, it is understood that the time integration in $L^p_{t, x}$ is over $[0, T)$. Lastly, in most of the cases, we dyadically decompose $w_j = v_j$ or $z_j$, $j = 1, 2, 3$, and $v_4$ such that their spatial frequency supports are $\{ |\xi_j|\sim N_j\}$ for some dyadic $N_j \geq 1$ but still denote them as $w_j= v_j \text{ or } z_j$, $j = 1, 2, 3$, and $v_4$. Note that, if we can afford a small derivative loss in the largest frequency, there is no difficulty in summing over the dyadic blocks $N_j$, $j = 1, \dots, 4$. \medskip \noindent {\bf Case (1):} $v v v$ case. In this case, we do not need to perform dyadic decompositions and we divide the frequency spaces into $\{|\xi_1| \geq |\xi_2|, |\xi_3|\}$, $\{|\xi_2| \geq |\xi_1|, |\xi_3|\}$, and $\{|\xi_3| \geq |\xi_1|, |\xi_2|\}$. 
Without loss of generality, assume that $|\xi_1| \geq |\xi_2|, |\xi_3|$. By $L^\frac{2(d+2)}{d}_{t,x}L^{d+2}_{t,x}L^{d+2}_{t,x}L^{\frac{2(d+2)}{d}}_{t,x}$-H\"older's inequality, \eqref{Ys3} in Lemma \ref{LEM:Ys1}, and \eqref{inclusions}, we have \begin{align*} \bigg|\int_0^T\int_{ \mathbb{R}^d} \jb{\nabla}^\frac{d-2}{2} v_1 v_2 v_3 v_4 dx dt \bigg| & \leq \| \jb{\nabla}^\frac{d-2}{2} v_1\|_{L^\frac{2(d+2)}{d}_{t, x}} \|v_2\|_{L^{d+2}_{t, x}}\|v_3\|_{L^{d+2}_{t, x}} \|v_4\|_{L^{\frac{2(d+2)}{d}}_{t, x}}\\ & \lesssim \prod_{j = 1}^3 \|v_j\|_{Y^\frac{d-2}{2}} \|v_4\|_{Y^0} \lesssim \prod_{j = 1}^3 \|v_j\|_{X^\frac{d-2}{2}}. \end{align*} \medskip \noindent {\bf Case (2):} $z z z$ case. \quad Without loss of generality, assume $N_3 \geq N_2 \geq N_1$. \smallskip \noindent {\bf $\bullet$ Subcase (2.a):} $N_2 \sim N_3$. By $L^{d+2}_{t,x}L^{4}_{t,x}L^{4}_{t,x}L^{\frac{2(d+2)}{d}}_{t,x}$-H\"older's inequality, we have \begin{align*} \bigg|\int_0^T\int_{ \mathbb{R}^d} z_1 z_2 \jb{\nabla}^\frac{d-2}{2} z_3 v_4 dx dt \bigg| & \lesssim \|z_1\|_{L^{d+2}_{t, x}} \|\jb{\nabla}^\frac{d-2}{4}z_2\|_{L^{4}_{t, x}} \|\jb{\nabla}^\frac{d-2}{4} z_3 \|_{L^{4}_{t, x}}\| v_4 \|_{L^\frac{2(d+2)}{d}_{t, x}}. \end{align*} \noindent Hence, by Lemmata \ref{PROP:Str1} and \ref{LEM:Ys1}, the contribution to \eqref{nl1} in this case is at most $\lesssim T^{0+} R^3$ outside a set of probability \begin{equation*} \leq C\exp\bigg(- c\frac{R^2}{T^{\frac{2}{d+2}-}\|\phi\|_{L^2}^2}\bigg) + C\exp\bigg(- c\frac{R^2}{T^{\frac{1}{2}-}\|\phi\|_{H^{\frac{d-2}{4}+}}^2}\bigg) \end{equation*} \noindent as long as $s > \frac{d-2}{4}$. Note that $s$ needs to be strictly greater than $\frac {d-2}{4}$ due to the summations over dyadic blocks. See \cite{BOP1} for more details. Similar comments apply in the following. \medskip \noindent {\bf $\bullet$ Subcase (2.b):} $N_3 \sim N_4 \gg N_1, N_2$. \smallskip \noindent $\circ$ \underline{Subsubcase (2.b.i):} $N_1, N_2 \ll N_3^\frac{1}{d-1}$.
For small $\alpha > 0$, it follows from Cauchy-Schwarz inequality and Lemma \ref{LEM:Ys1} that \begin{align} \|z_2 \jb{\nabla}^\frac{d-2}{2}z_3\|_{L^2_{t, x}} & \lesssim N_3^\frac{d-2}{2} \|z_2\|_{L^4_{t, x} }^\alpha \|z_3\|_{L^4_{t, x} }^\alpha \|z_2 z_3\|_{L^2_{t, x}}^{1-\alpha}\notag\\ & \lesssim N_2^{\frac{d-1}{2}-s-\frac{d-1}{2}\alpha-} N_3^{\frac{d-3}{2} - s+\frac{1}{2}\alpha +} \prod_{j = 2}^3\big(\|\jb{\nabla}^s z_j\|_{L^4_{t, x} }^\alpha \|\mathbf{P}_{N_j} \phi^\o\|_{H^{s}}^{1-\alpha}\big). \label{T1a} \end{align} \noindent Then, by \eqref{T1a} and the bilinear estimate \eqref{Ys4} in Lemma \ref{LEM:Ys1}, we have \begin{align*} \bigg|\int_0^T \int_{ \mathbb{R}^d} z_1 z_2 & \jb{\nabla}^\frac{d-2}{2} z_3 v_4 dx dt \bigg| \lesssim \|z_2 \jb{\nabla}^\frac{d-2}{2} z_3\|_{L^2_{t, x}} \|z_1 v_4\|_{L^2_{t, x}}\\ & \lesssim N_1^{\frac{d-1}{2} - s-}N_2^{\frac{d-1}{2} - s-\frac{d-1}{2}\alpha-} N_3^{\frac{d-4}{2}-s +\frac{1}{2}\alpha+} \\ & \hphantom{XXXXXX} \times \|\mathbf{P}_{N_1} \phi^\o\|_{H^{s}} \prod_{j = 2}^3\big(\|\jb{\nabla}^s z_j\|_{L^4_{t, x} }^\alpha \|\mathbf{P}_{N_j} \phi^\o\|_{H^{s}}^{1-\alpha}\big) \|v_4\|_{Y^0}\\ & \lesssim N_3^{\frac{d-2}{2} - \frac{d+1}{d-1} s + } \|\mathbf{P}_{N_1} \phi^\o\|_{H^{s}} \prod_{j = 2}^3\big(\|\jb{\nabla}^s z_j\|_{L^4_{t, x} }^\alpha \|\mathbf{P}_{N_j} \phi^\o\|_{H^{s}}^{1-\alpha}\big) \|v_4\|_{Y^0}. \end{align*} \noindent Hence, by Lemmata \ref{PROP:Str1} and \ref{LEM:Hs}, the contribution to \eqref{nl1} in this case is at most $\lesssim T^{0+} R^3$ outside a set of probability \begin{equation*} \leq C\exp\bigg(-c \frac {R^2}{ T^{\frac{1}{2}-} \|\phi\|_{H^{s}}^{2}}\bigg) + C\exp\bigg(-c \frac {R^2}{ \|\phi\|_{H^{s}}^{2}}\bigg) \end{equation*} \noindent as long as \begin{equation} s> \frac{d-1}{d+1}\cdot \frac{d-2}{2}=s_d \label{nl3} \end{equation} \noindent and $\alpha < 1- \frac{2}{d-1} s$. \smallskip \noindent $\circ$ \underline{Subsubcase (2.b.ii):} $N_2\gtrsim N_3^\frac{1}{d-1} \gg N_1$. 
By H\"older's inequality and the bilinear estimate \eqref{Ys4} in Lemma \ref{LEM:Ys1}, we have \begin{align*} \bigg|\int_0^T\int_{ \mathbb{R}^d} z_1 z_2 & \jb{\nabla}^\frac{d-2}{2} z_3 v_4 dx dt \bigg| \lesssim \|z_2\|_{L^4_{t, x}} \|\jb{\nabla}^\frac{d-2}{2} z_3 \|_{L^4_{t, x}}\|z_1 v_4 \|_{L^2_{t, x}}\\ & \lesssim N_1^{\frac{d-1}{2}-s-} N_2^{-s} N_3^{\frac{d-3}{2}-s+} \|\mathbf{P}_{N_1}\phi^\o\|_{H^s} \prod_{j = 2}^3 \|\jb{\nabla}^s z_j\|_{L^4_{t, x}} \|v_4\|_{Y^0}\\ & \lesssim N_3^{\frac{d-2}{2} - \frac{d+1}{d-1} s + } \|\mathbf{P}_{N_1}\phi^\o\|_{H^s} \prod_{j = 2}^3 \|\jb{\nabla}^s z_j\|_{L^4_{t, x}} \|v_4\|_{Y^0}. \end{align*} \noindent Hence, by Lemmata \ref{PROP:Str1} and \ref{LEM:Hs}, the contribution to \eqref{nl1} in this case is at most $\lesssim T^{0+} R^3$ outside a set of probability \begin{equation*} \leq C\exp\bigg(-c \frac {R^2}{ T^{\frac{1}{2}-} \|\phi\|_{H^{s}}^{2}}\bigg) + C\exp\bigg(-c \frac {R^2}{ \|\phi\|_{H^{s}}^{2}}\bigg) \end{equation*} \noindent as long as \eqref{nl3} is satisfied. \smallskip \noindent $\circ$ \underline{Subsubcase (2.b.iii):} $N_1, N_2\gtrsim N_3^\frac{1}{d-1} $. By $L^{\frac{6(d+2)}{d+4}}_{t,x}L^{\frac{6(d+2)}{d+4}}_{t,x}L^{\frac{6(d+2)}{d+4}}_{t,x}L^{\frac{2(d+2)}{d}}_{t,x}$-H\"older's inequality and \eqref{Ys3} in Lemma \ref{LEM:Ys1}, we have \begin{align*} \bigg|\int_0^T \int_{\mathbb{R}^d} z_1 z_2 \jb{\nabla}^\frac{d-2}{2} z_3 v_4 dx dt \bigg| & \lesssim N_3^{\frac{d-2}{2} - \frac{d+1}{d-1} s } \prod_{j =1}^3 \| \jb{\nabla}^s z_j\|_{L^\frac{6(d+2)}{d+4}_{t, x}} \|v_4\|_{Y^0}. \end{align*} \noindent Hence, by Lemma \ref{PROP:Str1}, the contribution to \eqref{nl1} in this case is at most $\lesssim T^{0+} R^3$ outside a set of probability \begin{equation*} \leq C\exp\bigg(-c \frac {R^2}{ T^{\frac{d+4}{3(d+2)}-} \|\phi\|_{H^{s}}^{2}}\bigg) \end{equation*} \noindent as long as \eqref{nl3} is satisfied. \medskip \noindent {\bf Case (3):} $v v z$ case. Without loss of generality, assume $N_1 \geq N_2$. 
\medskip \noindent {\bf $\bullet$ Subcase (3.a):} $N_1 \gtrsim N_3$. In the following, we apply dyadic decompositions only to $v_1$, $v_2$, and $z_3$. In this case, we have $N_1 \sim \max (N_2, N_3, |\xi_4|)$, where $\xi_4$ is the spatial frequency of $v_4$. Then, by H\"older's inequality, \eqref{Ys4}, and \eqref{Ys3}, we have \begin{align*} \bigg| \int_0^T \int_{\mathbb{R}^d} \jb{\nabla}^\frac{d-2}{2} v_1 v_2 z_3 v_4 dx dt \bigg| & \lesssim \sum_{N_1 \gtrsim N_2, N_3} \|\jb{\nabla}^{\frac{d-2}{2}}\mathbf{P}_{N_1}v_1 \mathbf{P}_{N_2} v_2\|_{L^2_{t,x}}\|\mathbf{P}_{N_3}z_3\|_{L^{d+2}_{t,x}} \|v_4\|_{L^{\frac{2(d+2)}{d}}_{t,x}}\\ &\lesssim \sum_{N_1 \geq N_2} \bigg(\frac{N_2}{N_1}\bigg)^{\frac{1}{2}-}\prod_{j = 1}^2 \|\mathbf{P}_{N_j}v_j\|_{X^\frac{d-2}{2}} \sum_{N_3} \|\mathbf{P}_{N_3}z_3\|_{L^{d+2}_{t, x}} \|v_4\|_{Y^{0}} \intertext{By Lemma \ref{LEM:Schur} and summing over $N_3$ with a slight loss of derivative, } &\lesssim \prod_{j = 1}^2 \|v_j\|_{X^\frac{d-2}{2}} \| \jb{\nabla}^{0+}z_3\|_{L^{d+2}_{t, x}} \|v_4\|_{Y^{0}}. \end{align*} \noindent Hence, by Lemma \ref{PROP:Str1}, the contribution to \eqref{nl1} in this case is at most $ \lesssim T^{0+}R \prod_{j = 1}^2 \|v_j\|_{X^\frac{d-2}{2}} $ outside a set of probability \begin{equation*} \leq C\exp\bigg(-c \frac {R^2}{ T^{\frac{2}{d+2}-} \|\phi\|_{H^{0+}}^{2}}\bigg) \end{equation*} \noindent as long as $s > 0$. \medskip \noindent {\bf $\bullet$ Subcase (3.b):} $N_3\sim N_4 \gg N_1 \geq N_2$. \smallskip \noindent $\circ$ \underline{Subsubcase (3.b.i):} $N_1\gtrsim N_3^\frac 1{d-1}$. 
By H\"older's inequality followed by \eqref{Ys3} and \eqref{Ys4} in Lemma \ref{LEM:Ys1}, we have \begin{align*} \bigg|\int_0^T \int_{\mathbb{R}^d} v_1 v_2 & \jb{\nabla}^\frac{d-2}{2} z_3 v_4 dx dt \bigg| \lesssim \|v_1\|_{L^\frac{2(d+2)}{d}_{t, x}} \|\jb{\nabla}^\frac{d-2}{2} z_3 \|_{L^{d+2}_{t, x}}\|v_2 v_4 \|_{L^2_{t, x}}\\ & \lesssim N_1^{-\frac{d-2}{2}}N_2^{\frac 12-}N_3^{\frac{d-3}{2}-s+} \|v_1\|_{X^\frac{d-2}{2}} \|v_2\|_{X^{\frac {d-2}2}} \|\jb{\nabla}^{s} z_3\|_{L^{d+2}_{t, x}} \|v_4\|_{Y^0}\\ & \lesssim N_3^{\frac{d-3}{d-1}\frac{d-2}{2} - s + } \|v_1\|_{X^\frac{d-2}{2}} \|v_2\|_{X^{\frac {d-2}2}} \|\jb{\nabla}^{s} z_3\|_{L^{d+2}_{t, x}} \|v_4\|_{Y^0}. \end{align*} \noindent Hence, by Lemma \ref{PROP:Str1}, the contribution to \eqref{nl1} in this case is at most $ \lesssim T^{0+} R \prod_{j = 1}^2 \|v_j\|_{X^{\frac {d-2}2}} $ outside a set of probability \begin{equation*} \leq C\exp\bigg(-c \frac {R^2}{ T^{\frac{2}{d+2}-} \|\phi\|_{H^{s}}^{2}}\bigg) \end{equation*} \noindent as long as \begin{align} s > \frac{d-3}{d-1} \cdot\frac{d-2}{2}. \label{nl4} \end{align} \noindent Note that the condition \eqref{nl4} is less restrictive than \eqref{nl3}. \smallskip \noindent $\circ$ \underline{Subsubcase (3.b.ii):} $N_2 \leq N_1\ll N_3^\frac 1{d-1}$. For small $\alpha > 0$, it follows from H\"older's inequality and Lemma \ref{LEM:Ys1} that \begin{align} \|v_1 \jb{\nabla}^\frac{d-2}{2}z_3\|_{L^2_{t, x}} & \lesssim N_3^\frac{d-2}{2} \|v_1\|_{L^{\frac{2(d+2)}{d}}_{t, x} }^\alpha \|z_3\|_{L^{d+2}_{t, x} }^\alpha \|v_1 z_3\|_{L^2_{t, x}}^{1-\alpha}\notag\\ & \lesssim N_1^{\frac 12 -\frac {d-1}2 \alpha-} N_3^{\frac{d-3}{2} - s+\frac{1}{2}\alpha +} \|v_1\|_{X^\frac{d-2}{2}} \|\jb{\nabla}^s z_3\|_{L^{d+2}_{t, x} }^\alpha \|\mathbf{P}_{N_3} \phi^\o\|_{H^{s}}^{1-\alpha}. 
\label{nl4a} \end{align} \noindent Then, by \eqref{nl4a} and \eqref{Ys4} in Lemma \ref{LEM:Ys1}, we have \begin{align*} \bigg|\int_0^T & \int_{\mathbb{R}^d} v_1 v_2 \jb{\nabla}^\frac{d-2}{2} z_3 v_4 dx dt \bigg| \lesssim \|v_1 \jb{\nabla}^\frac{d-2}{2}z_3\|_{L^2_{t, x}} \|v_2 v_4 \|_{L^2_{t, x}}\\ & \lesssim N_1^{\frac 12 -\frac {d-1}2 \alpha-} N_2^{\frac 12-} N_3^{\frac{d-4}{2} - s+\frac{1}{2}\alpha +} \|v_1\|_{X^\frac{d-2}{2}} \|v_2\|_{X^{\frac {d-2}2}} \|\jb{\nabla}^s z_3\|_{L^{d+2}_{t, x} }^\alpha \|\mathbf{P}_{N_3} \phi^\o\|_{H^{s}}^{1-\alpha} \|v_4\|_{Y^0}. \end{align*} \noindent Hence, by Lemmata \ref{PROP:Str1} and \ref{LEM:Hs}, the contribution to \eqref{nl1} in this case is at most $ \lesssim T^{0+} R \prod_{j = 1}^2 \|v_j\|_{X^{\frac {d-2}2}} $ outside a set of probability \begin{equation*} \leq C\exp\bigg(-c \frac {R^2}{ T^{\frac{2}{d+2}-} \|\phi\|_{H^{s}}^{2}}\bigg) + C\exp\bigg(-c \frac {R^2}{ \|\phi\|_{H^{s}}^{2}}\bigg) \end{equation*} \noindent as long as \eqref{nl4} is satisfied and $\alpha< \frac{1}{d-1}$. \medskip \noindent {\bf Case (4):} $v z z$ case. Without loss of generality, assume $N_3 \geq N_2$. \smallskip \noindent {\bf $\bullet$ Subcase (4.a):} $N_1 \gtrsim N_3$. By $L^\frac{2(d+2)}{d}_{t,x}L^{d+2}_{t,x}L^{d+2}_{t,x}L^\frac{2(d+2)}{d}_{t,x}$-H\"older's inequality and \eqref{Ys3} in Lemma \ref{LEM:Ys1}, we have \begin{align*} \bigg|\int_0^T \int_{\mathbb{R}^d} \jb{\nabla}^\frac{d-2}{2} v_1 z_2 z_3 v_4 dx dt \bigg| & \lesssim \|v_1\|_{X^{\frac{d-2}{2}}} \|z_2\|_{L^{d+2}_{t, x}} \|z_3\|_{L^{d+2}_{t, x}} \|v_4\|_{Y^0}. \end{align*} \noindent Hence, by Lemma \ref{PROP:Str1}, the contribution to \eqref{nl1} in this case is at most $ \lesssim T^{0+}R^2 \|v_1\|_{X^\frac{d-2}{2}} $ outside a set of probability \begin{equation} \leq C\exp\bigg(-c \frac {R^2}{ T^{\frac{2}{d+2}-} \|\phi\|_{H^{0+}}^{2}}\bigg) \label{T4b} \end{equation} \noindent as long as $s> 0$. 
As before, we have $\|\phi\|_{H^{0+}}$ instead of $\|\phi\|_{L^2}$ in \eqref{T4b}, allowing us to sum over $N_2$ and $N_3$. If $N_3 \gtrsim \max(N_1, N_4)$, then this also allows us to sum over $N_1$ and $N_4$. Otherwise, we have $N_1\sim N_4\gg N_3$. In this case, we can use the Cauchy-Schwarz inequality to sum over $N_1 \sim N_4$. \medskip \noindent {\bf $\bullet$ Subcase (4.b):} $N_3 \gg N_1$. First, suppose that $N_2 \sim N_3$. Note that we must have $N_3 \gtrsim N_4$ in this case. Then, by $L^{d+2}_{t, x}L^4_{t, x}L^4_{t, x}L^\frac{2(d+2)}{d}_{t, x}$-H\"older's inequality and \eqref{Ys3} in Lemma \ref{LEM:Ys1}, we have \begin{align*} \bigg|\int_0^T \int_{ \mathbb{R}^d} v_1 z_2 \jb{\nabla}^\frac{d-2}{2} z_3 v_4 dx dt \bigg| & \lesssim\|v_1\|_{X^\frac{d-2}{2}} \|\jb{\nabla}^\frac{d-2}{4}z_2\|_{L^4_{t, x}} \|\jb{\nabla}^\frac{d-2}{4} z_3 \|_{L^4_{t, x}}\|v_4 \|_{Y^0}\\ & \lesssim N_3^{\frac{d-2}{2} - 2s} \|v_1\|_{X^\frac{d-2}{2}} \|\jb{\nabla}^s z_2\|_{L^4_{t, x}} \|\jb{\nabla}^s z_3 \|_{L^4_{t, x}} \|v_4 \|_{Y^0}. \end{align*} \noindent Hence, by Lemma \ref{PROP:Str1}, the contribution to \eqref{nl1} in this case is at most $ \lesssim T^{0+} R^2 \|v_1\|_{X^{\frac {d-2}2}} $ outside a set of probability \begin{equation*} \leq C\exp\bigg(-c \frac {R^2}{ T^{\frac{1}{2}-} \|\phi\|_{H^{s}}^{2}}\bigg) \end{equation*} \noindent as long as $ s> \frac {d-2}4$. Hence, it remains to consider the case $N_3 \sim N_4 \gg N_1, N_2$. \smallskip \noindent $\circ$ \underline{Subsubcase (4.b.i):} $N_1, N_2\ll N_3^\frac 1{d-1}$. 
By \eqref{T1a} and \eqref{Ys4} in Lemma \ref{LEM:Ys1}, we have \begin{align*} \bigg|\int_0^T \int_{ \mathbb{R}^d} & v_1 z_2 \jb{\nabla}^\frac{d-2}{2} z_3 v_4 dx dt \bigg| \lesssim \|z_2\jb{\nabla}^\frac{d-2}{2} z_3 \|_{L^2_{t, x}}\|v_1 v_4 \|_{L^2_{t, x}}\\ & \lesssim N_1^{\frac 12 -}N_2^{\frac{d-1}{2}-s-\frac{d-1}{2}\alpha-} N_3^{\frac{d-4}{2}-s+\frac{1}{2}\alpha+} \|v_1\|_{X^\frac{d-2}{2}} \prod_{j = 2}^3\big(\|\jb{\nabla}^s z_j\|_{L^4_{t, x} }^\alpha \|\mathbf{P}_{N_j} \phi^\o\|_{H^{s}}^{1-\alpha}\big) \|v_4\|_{Y^0}. \end{align*} \noindent Hence, by Lemmata \ref{PROP:Str1} and \ref{LEM:Hs}, the contribution to \eqref{nl1} in this case is at most $ \lesssim T^{0+} R^2 \|v_1\|_{X^\frac{d-2}{2}} $ outside a set of probability \begin{equation*} \leq C\exp\bigg(-c \frac {R^2}{ T^{\frac{1}{2}-} \|\phi\|_{H^{s}}^{2}}\bigg) + C\exp\bigg(-c \frac {R^2}{ \|\phi\|_{H^{s}}^{2}}\bigg) \end{equation*} \noindent as long as \begin{align} s > \frac{(d-2)^2}{2d} = \frac{d-2}{d}\cdot \frac{d-2}{2} \label{nl5} \end{align} \noindent and $\alpha< 1 - \frac{2}{d-1} s$. Note that the condition \eqref{nl5} is less restrictive than \eqref{nl3} and thus does not add a further constraint. \smallskip \noindent $\circ$ \underline{Subsubcase (4.b.ii):} $N_1\ll N_3^{\frac 1{d-1}} \lesssim N_2$. By H\"older's inequality and \eqref{Ys4} in Lemma \ref{LEM:Ys1}, we have \begin{align*} \bigg|\int_0^T \int_{ \mathbb{R}^d} v_1 z_2 & \jb{\nabla}^\frac{d-2}{2} z_3 v_4 dx dt \bigg| \lesssim \|z_2\|_{L^4_{t, x}} \|\jb{\nabla}^\frac{d-2}{2} z_3 \|_{L^4_{t, x}}\|v_1 v_4 \|_{L^2_{t, x}}\\ & \lesssim N_1^{\frac 12 - } N_2^{-s} N_3^{\frac{d-3}{2} - s+} \|v_1\|_{X^\frac{d-2}{2}} \prod_{j = 2}^3 \|\jb{\nabla}^s z_j\|_{L^4_{t, x}}\|v_4\|_{Y^0}. 
\end{align*} \noindent Hence, by Lemma \ref{PROP:Str1}, the contribution to \eqref{nl1} in this case is at most $ \lesssim T^{0+}R^2 \|v_1\|_{X^{\frac {d-2}2}} $ outside a set of probability \begin{equation*} \leq C\exp\bigg(-c \frac {R^2}{ T^{\frac 12-} \|\phi\|_{H^{s}}^{2}}\bigg) \end{equation*} \noindent as long as \eqref{nl5} is satisfied. \smallskip \noindent $\circ$ \underline{Subsubcase (4.b.iii):} $N_2 \ll N_3^{\frac 1{d-1}} \lesssim N_1$. By H\"older's inequality and Lemma \ref{LEM:Ys1}, we have \begin{align*} \bigg|\int_0^T \int_{\mathbb{R}^d} v_1 z_2 & \jb{\nabla}^\frac{d-2}{2} z_3 v_4 dx dt \bigg| \lesssim \|v_1\|_{L^\frac{2(d+2)}{d}_{t, x}} \|\jb{\nabla}^\frac{d-2}{2} z_3 \|_{L^{d+2}_{t, x}}\|z_2 v_4 \|_{L^2_{t, x}}\\ & \lesssim N_1^{ - \frac {d-2}{2}} N_2^{\frac{d-1}{2} - s-} N_3^{\frac{d-3}{2}-s+} \|v_1\|_{X^{\frac {d-2}2}} \|\mathbf{P}_{N_2}\phi^\o \|_{H^s} \|\jb{\nabla}^s z_3\|_{L^{d+2}_{t, x}}\|v_4\|_{Y^0}. \end{align*} \noindent Hence, by Lemmata \ref{LEM:Hs} and \ref{PROP:Str1}, the contribution to \eqref{nl1} in this case is at most $ \lesssim T^{0+} R^2 \|v_1\|_{X^{\frac {d-2}2}} $ outside a set of probability \begin{equation*} \leq C\exp\bigg(-c \frac {R^2}{ \|\phi\|_{H^{s}}^{2}}\bigg) + C\exp\bigg(-c \frac {R^2}{ T^{\frac 2{d+2}-} \|\phi\|_{H^{s}}^{2}}\bigg) \end{equation*} \noindent as long as \eqref{nl5} is satisfied. \smallskip \noindent $\circ$ \underline{Subsubcase (4.b.iv):} $N_1, N_2\gtrsim N_3^{\frac 1{d-1}}$. 
By $L^\frac {2(d+2)}{d}_{t,x}L^{d+2}_{t,x}L^{d+2}_{t,x}L^\frac{2(d+2)}{d}_{t,x}$-H\"older's inequality and \eqref{Ys3} in Lemma \ref{LEM:Ys1}, we have \begin{align*} \bigg|\int_0^T \int_{\mathbb{R}^d} v_1 z_2 \jb{\nabla}^\frac{d-2}{2} z_3 v_4 dx dt \bigg| & \lesssim \|v_1\|_{L^\frac{2(d+2)}{d}_{t, x}} \|z_2\|_{L^{d+2}_{t, x}} \|\jb{\nabla}^\frac{d-2}{2} z_3 \|_{L^{d+2}_{t, x}}\| v_4 \|_{L^\frac{2(d+2)}{d}_{t, x}}\\ & \lesssim N_1^{- \frac {d-2}2 } N_2^{-s} N_3^{\frac {d-2}2 -s} \|v_1\|_{X^\frac{d-2}{2}} \prod_{j = 2}^3 \|\jb{\nabla}^s z_j\|_{L^{d+2}_{t, x}}\|v_4\|_{Y^0}. \end{align*} \noindent Hence, by Lemma \ref{PROP:Str1}, the contribution to \eqref{nl1} in this case is at most $ \lesssim T^{0+} R^2 \|v_1\|_{X^{\frac {d-2}2 }} $ outside a set of probability \begin{equation*} \leq C\exp\bigg(-c \frac {R^2}{ T^{\frac{2}{d+2}-} \|\phi\|_{H^{s}}^{2}}\bigg) \end{equation*} \noindent as long as \eqref{nl5} is satisfied. \smallskip Putting together Cases (1) - (4) above, the conclusion of Part (i) follows, provided that \eqref{nl3} is satisfied. \medskip \noindent (ii) First, define $\widetilde \Gamma_N$ by \begin{equation*} \widetilde \Gamma_N v(t) =\mp i \int_0^t S(t-t')\mathbf{P}_{\leq N} \mathcal{N} (v+\varepsilon z)(t') dt', \end{equation*} \noindent for $N \geq 1$. As before, we have \begin{align} \|\mathbf{P}_{\leq N} \mathcal{N} (v+\varepsilon z)\|_{L^1_t (\mathbb{R}; H^{\frac{d-2}{2}}_x)} & \lesssim N^\frac{d-2}{2} \| v\|_{L^3_t(\mathbb{R}; L^6_x)}^3 + \varepsilon^3 N^\frac{d-2}{2} \| z\|_{L^3_t(\mathbb{R}; L^6_x)}^3. \label{NNLS5} \end{align} \noindent By a computation similar to \eqref{NNLS3}, we see that the first term is finite. Noting that $(3, \frac{6d}{3d-4})$ is Strichartz admissible and $6 \geq \frac{6d}{3d-4}$, it follows from Lemma \ref{PROP:Str2} that the second term on the right-hand side of \eqref{NNLS5} is finite almost surely. Hence, we can apply Lemma \ref{LEM:Ys1} (i) to $\widetilde \Gamma_N v$ for each finite $N\geq 1$, almost surely. 
The rest of the proof for this part follows in a similar manner to the proof of Part (i) by changing the time interval from $[0, T)$ to $\mathbb{R}$ and replacing $z$ by $\varepsilon z$. By applying Lemma \ref{PROP:Str2} instead of Lemma \ref{PROP:Str1} in the above computation, we see that the contribution to \eqref{nl1}, where $[0, T)$ is replaced by $\mathbb{R}$, is given by \[\text{Case (2):} \ R^3,\quad \text{Case (3):} \ R\prod_{j = 1}^2 \|v_j\|_{X^\frac{d-2}{2}(\mathbb{R})}, \quad \text{Case (4):} \ R^2 \|v_1\|_{X^\frac{d-2}{2}(\mathbb{R})}\] \noindent outside a set of probability \begin{equation*} \leq C\exp\bigg(-c \frac {R^2}{\varepsilon^2 \|\phi\|_{H^{s}}^{2}}\bigg) \end{equation*} \noindent in all cases as long as $s > s_d$. \end{proof} \section{Proofs of Theorems \ref{THM:1} and \ref{THM:2}} \label{SEC:THM12} In this section, we establish the almost sure local well-posedness (Theorem \ref{THM:1}) and the probabilistic small-data global theory (Theorem \ref{THM:2}). First, we present the proof of Theorem \ref{THM:1}. Given $C_1$ and $C_2$ as in \eqref{nl1a} and \eqref{nl1b}, let $\eta_1 > 0$ be sufficiently small such that \begin{equation} C_1 \eta_1^2 \leq \frac{1}{2} \qquad \text{and}\qquad 2C_2 \eta_1^2 \leq \frac{1}{4}. \label{T0a} \end{equation} \noindent Also, given $R\gg1$, choose $T = T(R)$ such that \[ T^\theta = \min \bigg( \frac{\eta_1}{2C_1R^3}, \frac{1}{4C_2R^2}\bigg).\] \noindent Then, it follows from Proposition \ref{PROP:NL1} that $\Gamma$ is a contraction on the ball $B_{\eta_1}$ defined by \[ B_{\eta_1} := \{ u\in X^\frac{d-2}{2}([0, T))\cap C([0, T); H^\frac{d-2}{2}):\, \|u\|_{X^\frac{d-2}{2}([0, T))} \leq \eta_1\}\] \noindent outside a set of probability \[\leq C \exp\bigg(-c \frac{R^2}{\|\phi\|_{H^s}^2}\bigg) \sim C \exp\bigg(-c \frac{1}{T^\gamma \|\phi\|_{H^s}^2}\bigg)\] \noindent for some $\gamma > 0$. This proves Theorem \ref{THM:1}. Next, we prove Theorem \ref{THM:2}. 
Let $\eta_2>0$ be sufficiently small such that \begin{equation} 2 C_3 \eta_2^2 \leq 1\qquad \text{and}\qquad 3C_4 \eta_2^2 \leq \frac{1}{2}, \label{small0} \end{equation} \noindent where $C_3$ and $C_4$ are as in \eqref{nl1c} and \eqref{nl1d}. Then, by Proposition \ref{PROP:NL1} with $R = \eta_2$ and $\phi^\o$ replaced by $\varepsilon \phi^\o$, we have \begin{align} \|\widetilde \Gamma v\|_{X^\frac{d-2}{2}(\mathbb{R})} & \leq 2 C_3\eta_2^3 \leq \eta_2, \label{small1}\\ \|\widetilde \Gamma v_1 - \widetilde \Gamma v_2 \|_{X^\frac{d-2}{2}(\mathbb{R})} & \leq 3 C_4 \eta_2^2 \|v_1 -v_2 \|_{X^\frac{d-2}{2}(\mathbb{R})} \leq \frac{1}{2} \|v_1 -v_2 \|_{X^\frac{d-2}{2}(\mathbb{R})} \label{small2} \end{align} \noindent outside a set of probability $\leq C \exp\big(-c \frac{\eta_2^2}{\varepsilon^2\|\phi\|_{H^s}^2}\big)$. Noting that $\eta_2$ is an absolute constant, we conclude that there exists a set $\Omega_\varepsilon \subset \O$ such that (i) $\widetilde \Gamma = \widetilde \Gamma^\o$ is a contraction on the ball $B_{\eta_2}$ defined by \[ B_{\eta_2} := \{ u\in X^\frac{d-2}{2}(\mathbb{R})\cap C(\mathbb{R}; H^\frac{d-2}{2}):\, \|u\|_{X^\frac{d-2}{2}(\mathbb{R})} \leq \eta_2\}\] \noindent for $\o \in \O_\varepsilon$, and (ii) $P(\O_\varepsilon^c) \leq C \exp\big(- \frac{c}{\varepsilon^2 \|\phi\|_{H^s}^2}\big)$. This proves global existence for \eqref{NLS1} with initial data $\varepsilon \phi^\o$ if $\o \in \O_\varepsilon$. Fix $\o \in \O_\varepsilon$ and let $v = v(\varepsilon, \o)$ be the global-in-time solution with $v|_{t = 0} = \varepsilon \phi^\o$ constructed above. In order to prove scattering, we need to show that there exists $v_+^\o \in H^\frac{d-2}{2}(\mathbb{R}^d)$ such that \begin{equation} S(-t) v(t) = \mp i \int_0^t S(-t') \mathcal{N}(v + \varepsilon z)(t') dt' \to v_+^\o \label{small2a} \end{equation} \noindent in $H^\frac{d-2}{2}(\mathbb{R}^d)$ as $ t\to \infty$. 
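\smallskip Let us briefly record why \eqref{small2a} yields scattering; this is the standard reduction and uses only the unitarity of $S(t)$ on $H^\frac{d-2}{2}(\mathbb{R}^d)$. Writing $u^\o(t) = \varepsilon S(t) \phi^\o + v(t)$, we have \begin{align*} \big\| u^\o(t) - S(t)\big(\varepsilon \phi^\o + v_+^\o\big) \big\|_{H^\frac{d-2}{2}} = \big\| S(-t) v(t) - v_+^\o \big\|_{H^\frac{d-2}{2}} \longrightarrow 0 \end{align*} \noindent as $t \to \infty$, so that $u^\o$ scatters to the linear solution with data $\varepsilon \phi^\o + v_+^\o$.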
With $w(t) = S(-t) v(t)$, define $I(t_1, t_2)$ and $\widetilde I(t_1, t_2)$ by \begin{align*} I(t_1, t_2) & := S(t_2) \big(w(t_2) - w(t_1) \big), \\ \widetilde I(t_1, t_2)& :=\mp i \int_{0}^{t_2} S(t_2-t') \chi_{[t_1, \infty)}(t')\mathcal{N}(v + \varepsilon z)(t') dt'. \end{align*} \noindent Then, for $0< t_1\leq t_2< \infty$, we have \begin{align*} I(t_1, t_2) = \mp i S(t_2) \int_{t_1}^{t_2} S(-t') \mathcal{N}(v + \varepsilon z)(t') dt' = \widetilde I(t_1, t_2). \end{align*} \noindent Also, note that $\widetilde I(t_1, t_2) = 0$ if $t_1 > t_2$. In the following, we view $\widetilde I(t_1, t_2)$ as a function of $t_2$ and estimate its $X^\frac{d-2}{2}([0, \infty))$-norm. We now revisit the computation in the proof of Proposition \ref{PROP:NL1} for $\widetilde I(t_1, t_2)$. In Case (1), we proceed slightly differently. By Lemma \ref{LEM:Ys1} (i), H\"older's inequality, and \eqref{Ys3}, we have \begin{align} \|\widetilde I(t_1, \cdot)\|_{X^\frac{d-2}{2}(\mathbb{R}_+)} & \lesssim \sup_{\substack{v_4 \in Y^{0}(\mathbb{R}_+)\\\|v_4\|_{Y^{0}} = 1}} \bigg|\int_0^\infty\int_{ \mathbb{R}^d} \chi_{[t_1, \infty)}(t) \jb{\nabla}^\frac{d-2}{2} v\overline{ v} v v_4 dx dt \bigg| \notag \\ & \leq \| \jb{\nabla}^\frac{d-2}{2} v\|_{L^\frac{2(d+2)}{d}_{t, x}([t_1, \infty))} \|v\|_{L^{d+2}_{t, x}([t_1, \infty))}^2. \label{small3} \end{align} \noindent By \eqref{Ys3} in Lemma \ref{LEM:Ys1}, we have \[ \| \jb{\nabla}^\frac{d-2}{2} v\|_{L^\frac{2(d+2)}{d}_{t, x}(\mathbb{R})} + \|v\|_{L^{d+2}_{t, x}(\mathbb{R})}\lesssim \|v\|_{X^\frac{d-2}{2}(\mathbb{R})} \leq \eta_2.\] \noindent Then, by the monotone convergence theorem, \eqref{small3} tends to 0 as $t_1 \to \infty$. In Cases (2), (3), and (4), we had at least one factor of $z$. We apply the cutoff function $\chi_{[t_1, \infty)}$ only to the $(\varepsilon z)$-factors and not to the $v$-factors. Note that $\|v\|_{X^\frac{d-2}{2}(\mathbb{R})} \leq \eta_2$. 
As in the proof of Proposition \ref{PROP:NL1}, we estimate at least a small portion of these $z$-factors in $\|\jb{\nabla}^s \varepsilon z^\o\|_{L^q_{t, x}([t_1, \infty))}$, $q = 4, \frac{6(d+2)}{d+4}$, or $d+2$, in each case. Recall that we have $\|\jb{\nabla}^s \varepsilon z^\o\|_{L^q_{t, x}(\mathbb{R})} \leq \eta_2$ for $\o \in \O_\varepsilon$. See Lemma \ref{PROP:Str2}. Hence, again by the monotone convergence theorem, we have $\|\jb{\nabla}^s \varepsilon z^\o\|_{L^q_{t, x}([t_1, \infty))} \to 0$ as $t_1 \to \infty$ and thus the contribution from Cases (2), (3), and (4) tends to 0 as $t_1 \to \infty$. Therefore, we have \[ \lim_{t_1\to \infty } \| \widetilde I(t_1, t_2) \|_{X^\frac{d-2}{2}([0, \infty))} = 0.\] In conclusion, we obtain \begin{align*} \lim_{t_1\to \infty }& \sup_{t_2 > t_1} \| w(t_2) - w(t_1) \|_{H^\frac{d-2}{2}} = \lim_{t_1\to \infty } \sup_{t_2 > t_1} \| I(t_1, t_2) \|_{H^\frac{d-2}{2}}\notag\\ & = \lim_{t_1\to \infty } \| \widetilde I(t_1, t_2) \|_{L^\infty_{t_2}([0, \infty); H^\frac{d-2}{2})} \lesssim \lim_{t_1\to \infty } \| \widetilde I(t_1, t_2) \|_{X^\frac{d-2}{2}([0, \infty))} = 0. \end{align*} \noindent This proves \eqref{small2a} and scattering of $u^\o(t) = \varepsilon S(t) \phi^\o + v^\o(t)$, which completes the proof of Theorem \ref{THM:2}. \section{Local well-posedness of NLS with a deterministic perturbation} \label{SEC:6} In this section and the next, we consider the following Cauchy problem for the defocusing NLS with a perturbation: \begin{equation} \begin{cases} i \partial_t v + \Delta v = |v + f|^2(v+f)\\ v|_{t = t_0} = v_0 \end{cases} \label{ZNLS1} \end{equation} \noindent where $f$ is a given {\it deterministic} function. Assuming suitable conditions on $f$, we prove local well-posedness of \eqref{ZNLS1} in this section (Proposition \ref{PROP:LWP2}) and long time existence under further assumptions in Section \ref{SEC:7} (Proposition \ref{PROP:perturb2}). 
Then, we show, in Section \ref{SEC:8}, that the conditions imposed on $f$ for long time existence are satisfied with large probability by setting $f(t) = z(t) = S(t) \phi^\o$. This yields Theorem \ref{THM:3}. Our main goal is to prove long time existence of solutions to the perturbed NLS \eqref{ZNLS1} by iteratively applying a perturbation lemma (Lemma \ref{LEM:perturb}). For this purpose, we first prove a ``variant'' local well-posedness result for \eqref{ZNLS1}. As in the usual critical regularity theory, we first introduce an auxiliary scaling-invariant norm which is weaker than the $X^\frac{d-2}{2}$-norm. Given an interval $I \subset \mathbb{R}$, we introduce the $Z$-norm by \begin{equation} \|u\|_{Z(I)} := \bigg( \sum_{\substack{N\geq 1 \\ \text{ dyadic}}} N^{d-2} \|\mathbf{P}_{N} u\|_{L^4_{t, x}(I\times \mathbb{R}^d)}^4 \bigg)^\frac{1}{4}. \label{Z0} \end{equation} \noindent By the Littlewood-Paley theory and \eqref{Ys3} in Lemma \ref{LEM:Ys1}, we have \begin{equation*} \|u\|_{Z(I)} \lesssim \| \jb{\nabla}^\frac{d-2}{4} u \|_{L^4_{t, x}(I\times \mathbb{R}^d)} \lesssim \|u\|_{X^\frac{d-2}{2}(I)}. \end{equation*} \noindent Given $\theta \in (0, 1)$, we define the $Z_\theta$-norm by \[ \|u\|_{Z_\theta(I)} := \|u\|_{Z(I)}^\theta \|u\|_{X^\frac{d-2}{2}(I)}^{1-\theta}.\] \noindent Note that the $Z_\theta$-norm is weaker than the $X^\frac{d-2}{2}$-norm: \begin{equation} \|u\|_{Z_\theta(I)} \leq C_0 \|u\|_{X^\frac{d-2}{2}(I)} \label{Z0a} \end{equation} \noindent for some $C_0 > 0$ independent of $I$. First, we present the bilinear Strichartz estimate adapted to the $Z_\theta$-norm. \begin{lemma}\label{LEM:Z1} Let $N_1 \leq N_2$. Then, we have \begin{align} \| \mathbf{P}_{N_1} u_1 \mathbf{P}_{N_2}u_2\|_{L^2_{t, x}(I\times \mathbb{R}^d)} & \lesssim \bigg(\frac{N_1}{N_2}\bigg)^{\frac 12(1-\theta)-} \|\mathbf{P}_{N_1} u_1\|_{Z_\theta(I)}\|\mathbf{P}_{N_2} u_2\|_{Y^0(I)}. 
\label{Z1a} \end{align} \end{lemma} \begin{proof} Given a cube $R$ of side length $N_1$ centered at $\xi_0\in N_1\mathbb{Z}^d$, let $\mathbf{P}_R = \psi\big(\frac{D-\xi_0}{N_1}\big)$ denote a smooth projection onto $R$ on the frequency side. Here, $\psi$ denotes the smooth projection onto the unit cube defined in \eqref{mod1a}. By \eqref{Ys5}, we have \begin{align*} \| \mathbf{P}_{N_1} u_1 \mathbf{P}_{R}\mathbf{P}_{N_2}u_2\|_{L^2_{t, x}(I)} & \leq \|\mathbf{P}_{N_1} u_1 \|_{L^4_{t, x}} \|\mathbf{P}_{R}\mathbf{P}_{N_2}u_2\|_{L^4_{t, x}}\\ & \lesssim N_1^\frac{d-2}{4} \|\mathbf{P}_{N_1} u_1 \|_{L^4_{t, x}(I)} \|\mathbf{P}_{R}\mathbf{P}_{N_2}u_2\|_{Y^0(I)}. \end{align*} \noindent Then, by almost orthogonality, we have \begin{align*} \| \mathbf{P}_{N_1} u_1 & \mathbf{P}_{N_2}u_2\|_{L^2_{t, x}(I)} \sim \Big(\sum_{R} \| \mathbf{P}_{N_1} u_1 \mathbf{P}_R \mathbf{P}_{N_2}u_2\|^2_{L^2_{t, x}(I)}\Big)^\frac{1}{2}\notag\\ & \lesssim \|\mathbf{P}_{N_1} u_1\|_{Z(I)} \Big(\sum_{R} \|\mathbf{P}_{R}\mathbf{P}_{N_2} u_2\|^2_{Y^0(I)}\Big)^\frac{1}{2} \lesssim \|\mathbf{P}_{N_1} u_1\|_{Z(I)} \|\mathbf{P}_{N_2} u_2\|_{Y^0(I)}. \end{align*} \noindent Then, \eqref{Z1a} follows from interpolating this with \eqref{Ys4}. \end{proof} Next, we state the key nonlinear estimate. Given $I \subset \mathbb{R}$, we define the $W^s$-norm by \begin{align} \|f\|_{W^s(I)} := \max\Big(\|\jb{\nabla}^s f \|_{L^4_{t, x}(I)} , \|\jb{\nabla}^sf\|_{L^{d+2}_{t, x}(I)}, \|\jb{\nabla}^sf\|_{L^\frac{6(d+2)}{d+4}_{t, x}(I)}\Big). \label{W1} \end{align} \noindent As in the proof of Proposition \ref{PROP:NL1}, different space-time norms of $f$ appear in the estimate but they are all controlled by this $W^s$-norm. The following lemma is analogous to Proposition \ref{PROP:NL1} but with one important difference. 
Each term on the right-hand side contains either (i) two factors of the $Z_\theta$-norm of some $v_j$, which is weaker than the $X^\frac{d-2}{2}$-norm, or (ii) the $W^s$-norm of $f$, which can be made small by shrinking the interval $I$. \begin{lemma}\label{LEM:NL2} Let $d \geq 3$ and $\theta \in (0, 1)$. Suppose that $s , \alpha \in \mathbb{R}$ satisfy \begin{align} s \in (s_d, s_\textup{crit}] \qquad \text{and} \qquad 0 < \alpha < 1- \frac{2}{d-1} s, \label{NL2a} \end{align} \noindent where $s_d$ is as in \eqref{Sd1}. Then, given any interval $I = [t_0, t_1]\subset \mathbb{R}$, we have \begin{align*} \bigg\| \prod_{j = 1}^3(v_j & + f)^* \bigg\|_{N^\frac{d-2}{2}(I)} \lesssim \sum_{j = 1}^3 \|v_j\|_{X^\frac{d-2}{2}(I)} \prod_{\substack{k = 1\\k\ne j}}^3\|v_k\|_{Z_\theta(I)}\\ & \hphantom{X} + \prod_{\substack{j, k =1\\j\ne k}}^3 \|v_j\|_{X^\frac{d-2}{2}(I)}\|v_k\|_{X^\frac{d-2}{2}(I)} \Big( \|f\|_{W^s(I)} + \|f\|_{Y^s(I)}^{1-\alpha} \|f\|_{W^s(I)}^\alpha \Big) \\ & \hphantom{X} + \sum_{j = 1}^3 \|v_j\|_{X^{\frac{d-2}{2}}(I)} \Big( \|f\|_{Y^s(I)} \|f\|_{W^s(I)} + \|f\|_{Y^s(I)}^{2-2\alpha} \|f\|_{W^s(I)}^{2\alpha} + \|f\|_{W^s(I)}^2\Big)\\ & \hphantom{X} + \|f \|_{Y^s(I)}\|f\|_{W^s(I)}^2 + \|f\|_{Y^s(I)}^{3-2\alpha} \|f\|_{W^s(I)}^{2\alpha} + \|f\|_{W^s(I)}^3, \end{align*} \noindent for all $f \in W^s(I) \cap Y^s(I)$ and $v_j\in X^{\frac{d-2}{2}}(I)$, $j=1,2,3$, where $(v_j + f)^* = v_j + f$ or $\overline{v}_j + \overline{f}$. \end{lemma} We first state and prove the following local well-posedness result for the perturbed NLS \eqref{ZNLS1}, assuming Lemma \ref{LEM:NL2}. The proof of Lemma \ref{LEM:NL2} is presented at the end of this section. \begin{proposition}[Local well-posedness of the perturbed NLS]\label{PROP:LWP2} Given $d \geq 3$, let $s \in ( s_d, s_\textup{crit}]$, where $s_d$ is defined in \eqref{Sd1}. Let $\theta \in (\frac 12, 1)$ and $ \alpha \in \mathbb{R}$ satisfy \eqref{NL2a}. 
Suppose that \[\|v_0\|_{H^{\frac{d-2}{2}}}\leq R \quad \text{ and } \quad \|f\|_{Y^s(I)}\leq M,\] \noindent for some $R, M\geq 1$. Then, there exists small $\eta_0 = \eta_0(R,M)>0$ such that if \[ \|S(t-t_0)v_0\|_{Z_\theta(I)}\leq \eta \qquad \text{and} \qquad \|f\|_{W^s(I)} \leq \eta^{\frac{4-\alpha}{\alpha}}\] \noindent for some $\eta \leq \eta_0$ and some time interval $I = [t_0, t_1] \subset \mathbb{R}$, then there exists a unique solution $v \in X^\frac{d-2}{2}(I)\cap C(I; H^\frac{d-2}{2}(\mathbb{R}^d))$ to \eqref{ZNLS1} with $v(t_0) = v_0$. Moreover, we have \begin{align} \|v - S(t-t_0) v_0\|_{X^\frac{d-2}{2}(I)} \lesssim \eta^{3-2\theta}. \label{LWP2a} \end{align} \end{proposition} \begin{proof} For $\theta \in (\frac 12 , 1)$, we show that the map $\Gamma$ defined by \begin{equation} \Gamma v(t) := S(t-t_0) v_0 -i \int_{t_0}^t S(t -t') \mathcal{N}(v+f)(t') dt' \label{LWP2b} \end{equation} \noindent is a contraction on \[ B_{R, M, \eta} = \{ v \in X^\frac{d-2}{2}(I)\cap C(I; H^\frac{d-2}{2}): \, \|v\|_{X^\frac{d-2}{2}(I)}\leq 2\widetilde R, \ \|v\|_{Z_\theta(I)} \leq 2\eta\}\] \noindent where $\widetilde R :=\max(R, M)$. Now, choose \begin{align} \eta_0 \ll \widetilde R^{-\frac{1}{2\theta -1}}. \label{LWP2c} \end{align} \noindent In particular, we have $\eta_0 \ll \widetilde R^{-1} \leq 1$. Fix $\eta \leq \eta_0$ in the following. Noting that $\frac{4-\alpha}{\alpha} > 3$, Lemma \ref{LEM:NL2} with Lemma \ref{LEM:Ys1} yields \begin{align} \|\Gamma v\|_{X^\frac{d-2}{2}(I)} & \leq \|S(t-t_0)v_0\|_{X^\frac{d-2}{2}(I)} + \|\Gamma v - S(t-t_0) v_0\|_{X^\frac{d-2}{2}(I)} \notag \\ & \leq \|v_0\|_{H^\frac{d-2}{2}} + C \eta^2 \widetilde R \leq 2\widetilde R, \label{LWP2d} \end{align} \noindent and \begin{align*} \| \Gamma v_1 - \Gamma v_2 \|_{X^\frac{d-2}{2}(I)} \leq \frac 12 \| v_1 - v_2 \|_{X^\frac{d-2}{2}(I)} \end{align*} \noindent for $v, v_1, v_2 \in B_{R, M, \eta}$. 
Moreover, we have \begin{align*} \|\Gamma v\|_{Z_\theta(I)} & \leq \big(\|S(t-t_0) v_0\|_{Z(I)} + C \eta^2 \widetilde R\big)^{\theta} \big(\|S(t-t_0) v_0\|_{X^\frac{d-2}{2}(I)} + C \eta^2\widetilde R \big)^{1-\theta}\notag \\ & \leq \eta + C \eta^{2\theta}\widetilde R +C\eta^{2-\theta}\widetilde R^{1-\theta}+ C\eta^2 \widetilde R \leq 2 \eta \end{align*} \noindent for $v \in B_{R, M, \eta}$. Hence, $\Gamma$ is a contraction on $B_{R, M, \eta}$. The estimate \eqref{LWP2a} follows from \eqref{LWP2c} and \eqref{LWP2d}. \end{proof} We conclude this section by presenting the proof of Lemma \ref{LEM:NL2}. Some cases follow directly from the proof of Proposition \ref{PROP:NL1}. However, due to the use of the $Z_\theta$-norm, we need to make modifications in several cases. \begin{proof}[Proof of Lemma \ref{LEM:NL2}] As in the proof of Proposition \ref{PROP:NL1}, we need to estimate the right-hand side of \eqref{nl1} by performing a case-by-case analysis of expressions of the form: \begin{align} \bigg| \iint_{I\times \mathbb{R}^d} \jb{\nabla}^\frac{d-2}{2} ( w_1 w_2 w_3 )v_4 dx dt\bigg| \label{Znl1} \end{align} \noindent where $\|v_4\|_{Y^{0}(I)} \leq 1$ and $w_j= v$ or $f$, $j = 1, 2, 3$. Before proceeding further, let us simplify some of the notation. In the following, as before, we drop the complex conjugate sign and denote $X^s(I)$ and $Y^s(I)$ by $X^s$ and $Y^s$. Lastly, we dyadically decompose $w_j$, $j = 1, 2, 3$, and $v_4$ such that their spatial frequency supports are $\{ |\xi_j|\sim N_j\}$ for some dyadic $N_j \geq 1$ but still denote them as $w_j = v_j$ or $f_j$, $j = 1, 2, 3$, and $v_4$ if there is no confusion. \medskip \noindent {\bf Case (1):} $v v v$ case. Without loss of generality, assume that $N_1\geq N_2, N_3$. \smallskip \noindent {\bf $\bullet$ Subcase (1.a):} $N_1 \sim N_4$. 
By Lemma \ref{LEM:Z1}, we have \begin{align*} \bigg|\iint_{ I\times \mathbb{R}^d} & \jb{\nabla}^\frac{d-2}{2} v_1 v_2 v_3 v_4 dx dt \bigg| \lesssim \sum_{N_1\sim N_4\gtrsim N_2, N_3} N_1^\frac{d-2}{2} \|\mathbf{P}_{N_1} v_1\mathbf{P}_{N_3} v_3\|_{L^2_{t, x}} \|\mathbf{P}_{N_2}v_2\mathbf{P}_{N_4} v_4\|_{L^2_{t, x}}\\ & \lesssim \sum_{N_1, \dots, N_4} \bigg(\frac{N_2N_3}{N_1N_4}\bigg)^{\frac 12(1-\theta)-} \|\mathbf{P}_{N_1} v_1\|_{Y^\frac{d-2}{2}}\|\mathbf{P}_{N_2}v_2\|_{Z_\theta} \|\mathbf{P}_{N_3}v_3\|_{Z_\theta} \|\mathbf{P}_{N_4}v_4\|_{Y^0} \intertext{By first summing over $N_2, N_3 \leq N_1$ and then applying Cauchy-Schwarz inequality in summing over $N_1 \sim N_4$, we have} & \lesssim \|v_1\|_{X^\frac{d-2}{2}(I)}\|v_2\|_{Z_\theta(I)} \|v_3\|_{Z_\theta(I)}. \end{align*} \noindent {\bf $\bullet$ Subcase (1.b):} $N_1 \sim N_2 \gg N_4$. By Lemma \ref{LEM:Z1} and \eqref{Ys3} in Lemma \ref{LEM:Ys1}, we have \begin{align*} \bigg| \iint_{ I\times \mathbb{R}^d} & \jb{\nabla}^\frac{d-2}{2} v_1 v_2 v_3 v_4 dx dt \bigg| \lesssim \sum_{N_1 \sim N_2 \geq N_3, N_4} N_1^\frac{d-2}{2} \|\mathbf{P}_{N_1} v_1\mathbf{P}_{N_3} v_3\|_{L^2_{t, x}} \|\mathbf{P}_{N_2}v_2\mathbf{P}_{N_4} v_4\|_{L^2_{t, x}}\\ & \lesssim \sum_{N_1 \sim N_2 \geq N_3, N_4} \bigg(\frac{N_3}{N_1}\bigg)^{\frac 12(1-\theta)-} \|\mathbf{P}_{N_1} v_1\|_{Y^\frac{d-2}{2}} \|\mathbf{P}_{N_3}v_3\|_{Z_\theta} \|\mathbf{P}_{N_2}v_2\|_{L^4_{t, x}} \|\mathbf{P}_{N_4}v_4\|_{L^4_{t, x}}\\ & \lesssim \sum_{N_1 \sim N_2 \geq N_3, N_4} \bigg(\frac{N_3}{N_1}\bigg)^{\frac 12(1-\theta)-} \bigg(\frac{N_4}{N_2}\bigg)^\frac{d-2}{4} \|\mathbf{P}_{N_1} v_1\|_{Y^\frac{d-2}{2}} \|\mathbf{P}_{N_3}v_3\|_{Z_\theta}\\ & \hphantom{XXXXXXXXXXXXXXXX} \times N_2^\frac{d-2}{4} \|\mathbf{P}_{N_2}v_2\|_{L^4_{t, x}} \|\mathbf{P}_{N_4}v_4\|_{Y^0} \intertext{Summing over $N_3$ and taking a supremum in $N_2$,} & \lesssim \|v_2\|_{Z}\|v_3\|_{Z_\theta} \sum_{N_1 \gg N_4} \bigg(\frac{N_4}{N_1}\bigg)^\frac{d-2}{4} \|\mathbf{P}_{N_1} 
v_1\|_{Y^\frac{d-2}{2}} \|\mathbf{P}_{N_4}v_4\|_{Y^0} \intertext{By Lemma \ref{LEM:Schur}, we have} & \lesssim \|v_1\|_{Y^\frac{d-2}{2}} \|v_2\|_{Z_\theta} \|v_3\|_{Z_\theta} \|v_4\|_{Y^0} \lesssim \|v_1\|_{X^\frac{d-2}{2}(I)} \|v_2\|_{Z_\theta(I)}\|v_3\|_{Z_\theta(I)}. \end{align*} In the following, the desired estimates follow from the corresponding cases in the proof of Proposition \ref{PROP:NL1}. Hence, we just state the results. \smallskip \noindent {\bf Case (2):} $fff$ case. \quad Without loss of generality, assume $N_3 \geq N_2 \geq N_1$. \smallskip \noindent {\bf $\bullet$ Subcase (2.a):} $N_2 \sim N_3$. The contribution to \eqref{Znl1} in this case is at most \[ \lesssim \|f\|_{L^{d+2}_{t, x}}^2 \|\jb{\nabla}^{\frac{d-2}{4}+}f\|_{L^{4}_{t, x}}^2 \leq \|f\|^3_{W^s(I)}\] \noindent as long as $s > \frac{d-2}{4}$. \noindent {\bf $\bullet$ Subcase (2.b):} $N_3 \sim N_4 \gg N_1, N_2$. \smallskip \noindent $\circ$ \underline{Subsubcase (2.b.i):} $N_1, N_2 \ll N_3^\frac{1}{d-1}$. The contribution to \eqref{Znl1} in this case is at most \[ \lesssim \|f\|_{Y^s}^{3-2\alpha} \|\jb{\nabla}^s f\|_{L^4_{t, x} }^{2\alpha} \leq \|f\|_{Y^s(I)}^{3-2\alpha} \|f\|_{W^s(I)}^{2\alpha}\] \noindent as long as \eqref{nl3} is satisfied and $\alpha < 1- \frac{2}{d-1} s$. \smallskip \noindent $\circ$ \underline{Subsubcase (2.b.ii):} $N_2\gtrsim N_3^\frac{1}{d-1} \gg N_1$. The contribution to \eqref{Znl1} in this case is at most \[ \lesssim \|f \|_{Y^s(I)} \|\jb{\nabla}^s f\|_{L^4_{t, x}}^2 \leq \|f \|_{Y^s(I)}\|f\|_{W^s(I)}^2\] \noindent as long as \eqref{nl3} is satisfied. \smallskip \noindent $\circ$ \underline{Subsubcase (2.b.iii):} $N_1, N_2\gtrsim N_3^\frac{1}{d-1} $. The contribution to \eqref{Znl1} in this case is at most \[ \lesssim \| \jb{\nabla}^s f\|_{L^\frac{6(d+2)}{d+4}_{t, x}}^3 \leq \|f\|_{W^s(I)}^3\] \noindent as long as \eqref{nl3} is satisfied. \medskip \noindent {\bf Case (3):} $v v f$ case. Without loss of generality, assume $N_1 \geq N_2$. 
\medskip \noindent {\bf $\bullet$ Subcase (3.a):} $N_1 \gtrsim N_3$. The contribution to \eqref{Znl1} in this case is at most \[ \lesssim \|v\|_{X^\frac{d-2}{2}}^2 \| \jb{\nabla}^s f\|_{L^{d+2}_{t, x}} \leq \|v\|_{X^\frac{d-2}{2}(I)}^2 \|f\|_{W^s(I)}\] \noindent as long as $s > 0$. \medskip \noindent {\bf $\bullet$ Subcase (3.b):} $N_3\sim N_4 \gg N_1 \geq N_2$. \smallskip \noindent $\circ$ \underline{Subsubcase (3.b.i):} $N_1\gtrsim N_3^\frac 1{d-1}$. The contribution to \eqref{Znl1} in this case is at most \[ \lesssim \|v\|_{X^\frac{d-2}{2}}^2 \| \jb{\nabla}^s f\|_{L^{d+2}_{t, x}} \leq \|v\|_{X^\frac{d-2}{2}(I)}^2 \|f\|_{W^s(I)}\] \noindent as long as \eqref{nl4} is satisfied. \smallskip \noindent $\circ$ \underline{Subsubcase (3.b.ii):} $N_2 \leq N_1\ll N_3^\frac 1{d-1}$. The contribution to \eqref{Znl1} in this case is at most \[ \lesssim \|v\|_{X^\frac{d-2}{2}}^2 \| f\|_{Y^s}^{1-\alpha} \| \jb{\nabla}^s f\|_{L^{d+2}_{t, x}}^\alpha \leq \|v\|_{X^\frac{d-2}{2}(I)}^2 \| f\|_{Y^s(I)}^{1-\alpha} \|f\|_{W^s(I)}^\alpha \] \noindent as long as \eqref{nl4} is satisfied. \medskip \noindent {\bf Case (4):} $v f f $ case. Without loss of generality, assume $N_3 \geq N_2$. \medskip \noindent {\bf $\bullet$ Subcase (4.a):} $N_1 \gtrsim N_3$. The contribution to \eqref{Znl1} in this case is at most \[ \lesssim \|v\|_{X^{\frac{d-2}{2}}} \|f\|_{L^{d+2}_{t, x}}^2 \leq \|v\|_{X^{\frac{d-2}{2}}(I)} \|f\|_{W^s(I)}^2\] \noindent as long as $s> 0$. \medskip \noindent {\bf $\bullet$ Subcase (4.b):} $N_3 \gg N_1$. First, suppose that $N_2 \sim N_3$. Then, the contribution to \eqref{Znl1} in this case is at most \[ \lesssim \|v\|_{X^{\frac{d-2}{2}}} \|\jb{\nabla}^s f\|_{L^4_{t, x}}^2 \leq \|v\|_{X^{\frac{d-2}{2}}(I)} \|f\|_{W^s(I)}^2\] \noindent as long as $ s> \frac {d-2}4$. Hence, it remains to consider the case $N_3 \sim N_4 \gg N_1, N_2$. \smallskip \noindent $\circ$ \underline{Subsubcase (4.b.i):} $N_1, N_2\ll N_3^\frac 1{d-1}$. 
The contribution to \eqref{Znl1} in this case is at most \[ \lesssim \|v\|_{X^{\frac{d-2}{2}}} \|f\|_{Y^s}^{2-2\alpha} \|\jb{\nabla}^s f\|_{L^4_{t, x} }^{2\alpha} \leq \|v\|_{X^{\frac{d-2}{2}}(I)} \|f\|_{Y^s(I)}^{2-2\alpha} \|f\|_{W^s(I)}^{2\alpha}\] \noindent as long as \eqref{nl5} is satisfied and $\alpha < 1 - \frac{2}{d-1}s$. \smallskip \noindent $\circ$ \underline{Subsubcase (4.b.ii):} $N_1\ll N_3^{\frac 1{d-1}} \lesssim N_2$. The contribution to \eqref{Znl1} in this case is at most \[ \lesssim \|v\|_{X^{\frac{d-2}{2}}} \|\jb{\nabla}^s f\|_{L^4_{t, x}}^2 \leq \|v\|_{X^{\frac{d-2}{2}}(I)} \|f\|_{W^s(I)}^2\] \noindent as long as \eqref{nl5} is satisfied. \smallskip \noindent $\circ$ \underline{Subsubcase (4.b.iii):} $N_2 \ll N_3^{\frac 1{d-1}} \lesssim N_1$. The contribution to \eqref{Znl1} in this case is at most \[ \lesssim \|v\|_{X^{\frac{d-2}{2}}} \|f\|_{Y^s} \|\jb{\nabla}^s f\|_{L^{d+2}_{t, x}} \leq \|v\|_{X^{\frac{d-2}{2}}(I)} \|f\|_{Y^s(I)} \|f\|_{W^s(I)}\] \noindent as long as \eqref{nl5} is satisfied. \smallskip \noindent $\circ$ \underline{Subsubcase (4.b.iv):} $N_1, N_2\gtrsim N_3^{\frac 1{d-1}}$. The contribution to \eqref{Znl1} in this case is at most \[ \lesssim \|v\|_{X^{\frac{d-2}{2}}} \|\jb{\nabla}^s f\|_{L^{d+2}_{t, x}}^2 \leq \|v\|_{X^{\frac{d-2}{2}}(I)} \|f\|_{W^s(I)}^2\] \noindent as long as \eqref{nl5} is satisfied. \end{proof} \section{Long time existence of solutions to the perturbed NLS} \label{SEC:7} The main goal of this section is to establish long time existence of solutions to the perturbed NLS \eqref{ZNLS1} under some assumptions. See Proposition \ref{PROP:perturb2}. We achieve this goal by iteratively applying the perturbation lemma (Lemma \ref{LEM:perturb}) for the energy-critical NLS. We first state the perturbation lemma for the energy-critical cubic NLS involving the $X^\frac{d-2}{2}$- and the $Z$-norms. See \cite{CKSTT, TV, TVZ, KVZurich} for perturbation and stability results on usual Strichartz and Lebesgue spaces.
In the context of the cubic NLS on $\mathbb{R}\times \mathbb{T}^3$, Ionescu-Pausader \cite{IP} proved a perturbation lemma involving the critical $X^{s_\textup{crit}}$-norm. Our proof essentially follows their argument and is included for the sake of completeness. \begin{lemma}[Perturbation lemma]\label{LEM:perturb} Let $d \geq 3$ and $I$ be a compact interval with $|I|\leq 1$. Suppose that $v \in C(I; H^\frac{d-2}{2}(\mathbb{R}^d))$ satisfies the following perturbed NLS: \begin{align}\label{PNLS1} i \partial_t v + \Delta v = |v|^2 v + e, \end{align} \noindent satisfying \begin{align} \| v\|_{Z(I)} + \|v\|_{L^\infty(I; H^\frac{d-2}{2}(\mathbb{R}^d))} \leq R \label{PP1} \end{align} \noindent for some $R \geq 1$. Then, there exists $\varepsilon_0 = \varepsilon_0(R) > 0$ such that if we have \begin{align} \|u_0 - v(t_0) \|_{H^\frac{d-2}{2}(\mathbb{R}^d)} + \|e\|_{N^\frac{d-2}{2}(I)} \leq \varepsilon \label{PP2} \end{align} \noindent for some $u_0 \in H^\frac{d-2}{2}(\mathbb{R}^d)$, some $t_0 \in I$, and some $\varepsilon < \varepsilon_0$, then there exists a solution $u \in X^\frac{d-2}{2}(I)\cap C(I; H^\frac{d-2}{2}(\mathbb{R}^d))$ to the defocusing cubic NLS \eqref{NLS1} with $u(t_0) = u_0$ such that \begin{align} \|u\|_{X^\frac{d-2}{2}(I)}+\|v\|_{X^\frac{d-2}{2}(I)} & \leq C(R), \\ \|u - v\|_{X^\frac{d-2}{2}(I)} & \leq C(R)\varepsilon, \label{PP3} \end{align} \noindent where $C(R)$ is a non-decreasing function of $R$. \end{lemma} \begin{proof} Without loss of generality, we assume $t_0=\min I$. Given small $\varepsilon_1 = \varepsilon_1(R)>0$ (to be chosen later), we divide the interval $I$ into subintervals $I_j = [t_j, t_{j+1}]$ such that $I = \bigcup_{j = 0}^{L} I_j$. By choosing $L\sim \big(\frac{R}{\varepsilon_1}\big)^4$, we can guarantee that \begin{equation} \|v\|_{Z(I_j)} \leq \varepsilon_1 \label{PP3y} \end{equation} \noindent for $j = 0, \dots, L$. 
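In passing, we sketch why $L\sim \big(\frac{R}{\varepsilon_1}\big)^4$ subintervals suffice, under the assumption (consistent with the $L^4$-in-time construction of the $Z$-norm) that the fourth power of the $Z$-norm is superadditive over a partition of $I$: choosing the endpoints $t_j$ greedily so that $\|v\|_{Z(I_j)} = \varepsilon_1$ for $j < L$, we obtain
\[ L\, \varepsilon_1^4 = \sum_{j=0}^{L-1} \|v\|_{Z(I_j)}^4 \lesssim \|v\|_{Z(I)}^4 \leq R^4, \quad \text{i.e.} \quad L \lesssim \Big(\frac{R}{\varepsilon_1}\Big)^4. \]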
By assumption, we also have \begin{equation} \|e\|_{N^\frac{d-2}{2}(I_j)} \leq \varepsilon < \varepsilon_0 \label{PP3x} \end{equation} \noindent for $j = 0, \dots, L$. \smallskip \noindent $\bullet$ {\bf Step 1:} Let $\theta \in (\frac 12, 1)$. We first claim that there exist $\eta_0= \eta_0(R) >0$ and $\varepsilon_0= \varepsilon_0(R)>0$ such that if \begin{equation} \|S(t - t_*) v(t_*)\|_{Z_\theta(J)} \leq \eta_0 \qquad\text{and} \qquad \|e\|_{N^\frac{d-2}{2}(J)} \leq \varepsilon_0 \label{PP3b} \end{equation} \noindent for some $t_*$ in a subinterval $J \subset I$, then there exists a unique solution $v$ to \eqref{PNLS1} on $J$, satisfying \begin{equation} \|v - S(t - t_*) v(t_*) \|_{X^\frac{d-2}{2}(J)} \leq C \|S(t - t_*) v(t_*)\|_{Z_\theta(J)}^{3-2\theta} + 2 \|e\|_{N^\frac{d-2}{2}(J)}. \label{PP3a} \end{equation} We choose $\eta_0 = \eta_0(R)$ and $\varepsilon_0 = \varepsilon_0(R)$ such that \begin{align}\label{PP4a} \eta_0 \ll R^{-\frac{1}{2\theta -1}} \qquad \text{and} \qquad \varepsilon_0 \ll R^{-\frac{2}{2\theta -1}}. \end{align} \noindent In the following, we set \[\eta := \|S(t - t_*) v(t_*)\|_{Z_\theta(J)} \leq \eta_0 \qquad \text{and} \qquad \varepsilon := \|e\|_{N^\frac{d-2}{2}(J)} \leq \varepsilon_0.\] \noindent Then, proceeding as in the proof of Proposition \ref{PROP:LWP2}, we show that the map $\Gamma$ defined by \begin{equation} \Gamma v(t) := S(t-t_*) v (t_*) -i \int_{t_*}^t S(t -t')\mathcal{N}(v)(t') dt' -i \int_{t_*}^t S(t -t')e(t') dt' \label{PP4} \end{equation} \noindent is a contraction on \[ B_{R, \eta, \varepsilon} = \big\{ v \in X^\frac{d-2}{2}(J)\cap C(J; H^\frac{d-2}{2}): \, \|v\|_{X^\frac{d-2}{2}(J)}\leq 2 R, \ \|v\|_{Z_\theta(J)} \leq 2 (\eta+ \varepsilon^{\frac{\theta}{2}+\frac 14})\big\}.
\] Indeed, by Lemma \ref{LEM:NL2} (with $f = 0$), we have \begin{align} \|\Gamma v\|_{X^\frac{d-2}{2}(J)} & \leq \|v(t_\ast)\|_{H^\frac{d-2}{2}} + C (\eta+\varepsilon^{\frac{\theta}{2}+\frac 14})^2 R + \varepsilon \notag \\ & \leq \|v(t_\ast)\|_{H^\frac{d-2}{2}} + C \eta^2 R+ 2\varepsilon \leq 2 R, \label{PP5} \end{align} \noindent and \begin{align*} \| \Gamma v_1 - \Gamma v_2 \|_{X^\frac{d-2}{2}(J)} & \leq C(\eta + \varepsilon^{\frac{\theta}{2}+\frac 14})R \| v_1 - v_2 \|_{X^\frac{d-2}{2}(J)} \leq \frac 12 \| v_1 - v_2 \|_{X^\frac{d-2}{2}(J)}, \notag \end{align*} \noindent for $v, v_1, v_2 \in B_{R, \eta, \varepsilon}$. Moreover, we have \begin{align*} \|\Gamma v\|_{Z_\theta(J)} & \leq \big(\|S(t-t_\ast) v(t_\ast)\|_{Z(J)} + C \eta^2 R+ \varepsilon\big)^{\theta}\\ & \hphantom{XXXXXXX} \times \big(\|S(t-t_\ast) v(t_\ast)\|_{X^\frac{d-2}{2}(J)} + C \eta^2 R + \varepsilon \big)^{1-\theta}\notag \\ & \leq \eta+ C \eta^{2\theta} R + C \eta^{2-\theta} R^{1-\theta} + C\eta^\theta\varepsilon^{1-\theta} + C\varepsilon^\theta R^{1-\theta} \leq 2 (\eta+ \varepsilon^{\frac{\theta}{2}+\frac 14}), \end{align*} \noindent for $v \in B_{R, \eta, \varepsilon}$. Hence, $\Gamma$ is a contraction on $B_{R, \eta, \varepsilon}$. The estimate \eqref{PP3a} follows from \eqref{PP4a} and \eqref{PP5}. \smallskip \noindent $\bullet$ {\bf Step 2:} Next, we claim that, given $\varepsilon_2 > 0$, we can choose $\varepsilon_j = \varepsilon_j(R, \varepsilon_2)$, $j = 0, 1$, in \eqref{PP3x} and \eqref{PP3y} sufficiently small such that we have \begin{align} \|S(t - t_j) v(t_j) \|_{Z_\theta(I_j)} & \leq \varepsilon_2 \qquad \text{and} \qquad \|v \|_{Z_\theta(I_j)} \leq \varepsilon_2. \label{PP7} \end{align} Without loss of generality, assume $\varepsilon_2 \leq \frac{\eta_0}{2}$, where $\eta_0 = \eta_0(R)$ is as in Step 1. Let $h(\tau) = \|S(t - t_j) v(t_j) \|_{Z_\theta([t_j, t_j + \tau])} $. Note that $h$ is continuous and $h(0) = 0$.
Thus, we have $h(\tau) \leq 2\varepsilon_2 \leq \eta_0$ for small $\tau > 0$. Then, from the Duhamel formula \eqref{PP4} with \eqref{PP3y}, \eqref{PP3x}, and \eqref{PP3a}, we have \begin{align} h(\tau) & \leq \|S(t - t_j) v(t_j) \|_{X^\frac{d-2}{2}([t_j, t_j + \tau])}^{1-\theta} \|S(t - t_j) v(t_j) \|_{Z([t_j, t_j + \tau])}^{\theta} \notag \\ & \leq C R^{1-\theta}\big(\varepsilon_1 + \varepsilon_2^{3-2\theta} + \|e\|_{N^\frac{d-2}{2}(I_j)}\big)^\theta \notag \\ & \leq C R^{1-\theta} \varepsilon_2^{\theta(3-2\theta)} + CR^{1-\theta}\big(\varepsilon_1 + \|e\|_{N^\frac{d-2}{2}(I_j)}\big)^\theta. \label{PP8} \end{align} \noindent From \eqref{PP4a} with $\varepsilon_2 \leq \frac{\eta_0}{2}$, we have \begin{equation} CR^{1-\theta}\varepsilon_2^{\theta(3-2\theta)}\leq C\big(R\eta_0^{(2\theta-1)}\big)^{1-\theta}\varepsilon_2\ll\varepsilon_2. \label{PP8a} \end{equation} \noindent Hence, it follows from \eqref{PP8} and \eqref{PP8a} that \begin{align} h(\tau) & \leq \tfrac 12 \varepsilon_2 + C R^{1-\theta}(\varepsilon_1 + \varepsilon_0 )^{\theta} \leq \varepsilon_2 \label{PP9} \end{align} \noindent by choosing $\varepsilon_j = \varepsilon_j(R,\varepsilon_2)>0$ sufficiently small, $j = 0, 1$. Then, by the continuity argument, we see that \eqref{PP9} holds for all $\tau \leq t_{j+1} - t_j$. From Step 1 and \eqref{PP3y}, we have \begin{equation} \|v\|_{Z_\theta(I_j)} = \|v\|_{Z(I_j)}^\theta \|v\|_{X^\frac{d-2}{2}(I_j)}^{1-\theta} \leq C \varepsilon_1^\theta R^{1-\theta}. \label{PP9a} \end{equation} \noindent Therefore, \eqref{PP7} follows from \eqref{PP9} and \eqref{PP9a}, by choosing $\varepsilon_1 = \varepsilon_1(R, \varepsilon_2)$ smaller if necessary. \smallskip \noindent $\bullet$ {\bf Step 3:} Given $\varepsilon_2 = \varepsilon_2(R) >0$ (to be chosen later), it follows from Step 2 that \eqref{PP7} holds as long as $\varepsilon_j = \varepsilon_j(R)>0$, $j = 0, 1$, are sufficiently small.
From Step 1 with \eqref{PP3x}, \eqref{PP7}, and \eqref{PP5}, we have \begin{align} \|v\|_{X^\frac{d-2}{2}(I_j)} \leq 2R \label{PP10} \end{align} \noindent as long as $\varepsilon_j = \varepsilon_j(R)>0$, $j = 0, 1, 2$, are sufficiently small. Let $u$ be a solution to the defocusing cubic NLS \eqref{NLS1} with initial data $u(t_j)$ given at $t = t_j$ such that \begin{align} \| u(t_j) - v(t_j) \|_{H^\frac{d-2}{2}} \leq \varepsilon < \varepsilon_0. \label{PP10a} \end{align} \noindent Let $J_j = [t_j, t_j + \tau] \subset I_j$ be the maximal time interval such that \begin{equation} \| u - v\|_{Z_\theta(J_j)} \leq 6C_0\varepsilon, \label{PP11} \end{equation} \noindent where $C_0$ is as in \eqref{Z0a}. Such an interval exists and is non-empty, since $\tau \mapsto \| u - v\|_{Z_\theta(t_j, t_j + \tau )}$ is finite and continuous (see Lemma \ref{LEM:U6}), at least on the interval of local existence of $u$, and vanishes for $\tau = 0$. Let $w : = u-v$. By Lemma \ref{LEM:NL2} (with $f = 0$) with \eqref{PP3x}, \eqref{PP7}, \eqref{PP10}, \eqref{PP10a}, and \eqref{PP11}, we have \begin{align} \|w\|_{X^\frac{d-2}{2}(J_j)} & \leq \| u(t_j) - v(t_j)\|_{H^\frac{d-2}{2}} + C_1\Big\{ \|v\|_{X^\frac{d-2}{2}(J_j)}\|v\|_{Z_\theta(J_j)}\|w\|_{X^\frac{d-2}{2}(J_j)} \notag \\ & \hphantom{XXXXX} + \|v\|_{X^\frac{d-2}{2}(J_j)}\|w\|_{X^\frac{d-2}{2}(J_j)}\|w\|_{Z_\theta(J_j)} \notag \\ & \hphantom{XXXXX} + \|w\|_{X^\frac{d-2}{2}(J_j)}\|w\|_{Z_\theta(J_j)}^2\Big\} + \|e\|_{N^\frac{d-2}{2}(J_j)}\notag\\ & \leq 2 \varepsilon + C_2 (\varepsilon_0 + \varepsilon_2) R \|w\|_{X^\frac{d-2}{2}(J_j)}. \notag \end{align} \noindent Taking $\varepsilon_j = \varepsilon_j(R)>0$ sufficiently small, $j = 0, 2$, such that $(\varepsilon_0+\varepsilon_2) R\ll 1$, we obtain \begin{align} \|w\|_{X^\frac{d-2}{2}(J_j)} \leq 4 \varepsilon. \label{PP12} \end{align} \noindent Hence, from \eqref{Z0a}, we have \begin{align} \|w\|_{Z_\theta(J_j)} \leq C_0 \|w\|_{X^\frac{d-2}{2}(J_j)} \leq 4 C_0 \varepsilon. 
\label{PP13} \end{align} From \eqref{PP10} and \eqref{PP12}, we have $\|u\|_{X^\frac{d-2}{2}(J_j)} \leq 3R<\infty$. Then, from \eqref{Ys2}, we have $\|u\|_{\dot{S}^{\frac{d-2}{2}}(J_j)} < \infty$. In particular, this implies that $u$ can be extended to some larger interval $ J' \supset \overline {J_j}$. Therefore, in view of \eqref{PP11} and \eqref{PP13}, we can apply the continuity argument and conclude that $J_j = I_j$. \smallskip \noindent $\bullet$ {\bf Step 4:} By \eqref{PP2}, we have $\|u(t_0)-v(t_0)\|_{H^{\frac{d-2}{2}}}\leq \varepsilon$ for some $\varepsilon < \varepsilon_0$. Then, by Step 3, we have $\|w\|_{X^{\frac{d-2}{2}}(I_0)}\leq 4\varepsilon$ on $I_0 = [t_0, t_1]$. In particular, this yields \[\|u(t_1)-v(t_1)\|_{H^{\frac{d-2}{2}}}\leq 4C\varepsilon.\] \noindent Then, we can apply Step 3 on the interval $I_1$ by choosing $\varepsilon_0$ (and hence $\varepsilon$) even smaller. We argue recursively for each interval $I_j$, $j=2,\dots, L$. Note that, at each step, we make $\varepsilon_0$ smaller by a factor of $(4C)^{-1}$. Since $L\sim\big(\frac{R}{\varepsilon_1}\big)^4$ and $\varepsilon_1 = \varepsilon_1 (R)$, there are only finitely many iterative steps, depending only on $R$. This allows us to choose new $\varepsilon_0 = \varepsilon_0(R)> 0$ such that, by Lemma \ref{LEM:U2}, we have \begin{align*} \|u\|_{X^{\frac{d-2}{2}}(I)}+\|v\|_{X^{\frac{d-2}{2}}(I)} &\lesssim L R \lesssim C(R), \\ \|u - v\|_{X^\frac{d-2}{2}(I)} & \lesssim L\varepsilon \lesssim C(R)\varepsilon. \end{align*} \noindent This completes the proof of Lemma \ref{LEM:perturb}. \end{proof} In the remaining part of this section, we consider long time existence of solutions to the perturbed NLS \eqref{ZNLS1} under several assumptions. Given $T>0$, we assume that there exist $\beta, C, M > 0$ such that \begin{equation} \|f\|_{W^s(I)} \leq C |I|^{\beta} \qquad \text{and}\qquad \|f\|_{Y^s([0, T])} \leq M \label{P0} \end{equation} \noindent for any interval $I \subset [0, T]$.
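As a concrete instance of \eqref{P0}, anticipating the application in Section \ref{SEC:8}, one may keep in mind $f = S(t)\phi^\o$, the randomized linear evolution: outside a set of small probability, one has
\[ \|S(t)\phi^\o\|_{W^s(I)} \leq C |I|^{\frac{1}{2(d+2)}} \]
\noindent for each subinterval $I$ in a fixed partition of $[0, T]$ (this is how the set $\O_2$ is defined there), so that the first condition in \eqref{P0} holds with $\beta = \frac{1}{2(d+2)}$.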
Then, Proposition \ref{PROP:LWP2} guarantees existence of a solution to the perturbed NLS \eqref{ZNLS1}, at least for a short time. \begin{proposition}\label{PROP:perturb2} Let $d \geq 3$. Let $s \in (s_d, s_\textup{crit}]$, where $s_d$ is defined in \eqref{Sd1}. Given $T>0$, assume the following conditions \textup{(i)} - \textup{(iii)}: \begin{itemize} \item[\textup{(i)}] Hypothesis \textup{(B)} holds if $d \ne 4$, \item[\textup{(ii)}] $f \in Y^s([0, T]) \cap W^s([0, T])$ satisfies \eqref{P0}, \item[\textup{(iii)}] Given a solution $v$ to \eqref{ZNLS1}, the following a priori bound holds: \begin{align} \| v\|_{L^\infty([0, T]; H^\frac{d-2}{2}(\mathbb{R}^d))} \leq R \label{P0a} \end{align} \noindent for some $R>0$. \end{itemize} \noindent Then, there exists $\tau = \tau(R,M, T, s, \beta)>0$ such that, given any $t_0 \in [0, T)$, the solution $v$ to \eqref{ZNLS1} exists on $[t_0, t_0 + \tau]\cap [0, T]$. In particular, the condition \textup{(iii)} guarantees existence of $v$ on the entire interval $[0, T]$. \end{proposition} \begin{remark}\label{REM:perturb3}\rm We point out that the first condition in \eqref{P0} can be weakened as follows. Let $\tau = \tau(R, M, T, s, \beta) > 0$ be as in Proposition \ref{PROP:perturb2}. Then, it follows from the proof of Proposition \ref{PROP:perturb2} (see \eqref{P4a} and \eqref{P4b} below) that if we assume that \begin{equation*} \|f\|_{W^s([t_0, t_0+\tau_*))} \leq C |\tau_*|^{\beta} \end{equation*} \noindent for some $\tau_* \leq \tau$ instead of the first condition in \eqref{P0}, then the conclusion of Proposition \ref{PROP:perturb2} still holds on $[t_0, t_0 + \tau_*]\cap [0, T]$. Indeed, we use this version of Proposition \ref{PROP:perturb2} in Section \ref{SEC:8}. \end{remark} \begin{proof} By setting $e = |v+f|^2 (v+f) - |v|^2 v$, \eqref{ZNLS1} reduces to \eqref{PNLS1}. 
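Expanding the cubic product, the error term consists precisely of the terms containing at least one factor of $f$:
\[ e = 2|v|^2 f + v^2 \bar{f} + 2 v |f|^2 + \bar{v} f^2 + |f|^2 f, \]
\noindent which is why $e$ can be controlled through the $W^s$- and $Y^s$-norms of $f$ via Lemma \ref{LEM:NL2}.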
In the following, we iteratively apply Lemma \ref{LEM:perturb} on short intervals and show that there exists $\tau = \tau(R, M, T, s, \beta) > 0$ such that \eqref{PNLS1} is well-posed on $[t_0, t_0 + \tau] \cap [0, T] $ for any $t_0 \in [0, T)$. Let $w$ be the global solution to the defocusing cubic NLS \eqref{NLS1} with $w(t_0) = v(t_0) = v_0$. By \eqref{P0a}, we have $\|w(t_0) \|_{H^\frac{d-2}{2}} \leq R$. Then, by Hypothesis (B), we have \[\| w\|_{L^{d+2}_{t, x}([0, T])} \leq C(R, T) < \infty.\] \noindent By the standard argument, this implies that $\big\||\nabla|^\frac{d-2}{2} w\big\|_{L^q_tL^r_x([0, T])} \leq C'(R, T) < \infty$ for all Schr\"odinger admissible pairs $(q, r)$. In particular, we have $\|w\|_{Z([0, T])} \leq C''(R, T) < \infty$ and \begin{align} \|w\|_{X^\frac{d-2}{2}([0, T])} & \leq \|v_0\|_{H^\frac{d-2}{2}} + \big\||\nabla|^\frac{d-2}{2} w\big\|_{L^\frac{2(d+2)}{d}_{t, x}([0, T])} \|w\|_{L^{d+2}_{t, x}([0, T])}^2 \notag \\ & \leq C'''(R, T) < \infty. \label{P1} \end{align} Let $\theta \in (\frac 12, 1)$. Given small $\eta > 0$ (to be chosen later), we divide the interval $ [t_0, T]$ into $J = J(R, T, \eta)$ many subintervals $I_j = [t_j, t_{j+1}]$ such that \[ \|w\|_{Z_\theta(I_j)} \sim \eta.\] \noindent In the following, we fix the value of $\theta$ and suppress dependence of various constants such as $\tau$ and $\eta$ on $\theta$. Fix $\tau > 0$ (to be chosen later in terms of $R$, $M$, $T$, $s$, and $\beta$) and write $[t_0, t_0+\tau] = \bigcup_{j = 0}^{J'} \big([t_0, t_0+\tau]\cap I_j\big)$ for some $J' \leq J - 1$, where $[t_0, t_0+\tau]\cap I_j \ne \emptyset$ for $0 \leq j \leq J'$ and $[t_0, t_0+\tau]\cap I_j=\emptyset$ for $j > J'$. Since the nonlinear evolution $w$ is small on each $I_j$, it follows that the linear evolution $S(t-t_j) w(t_j)$ is also small on each $I_j$.
Indeed, from the Duhamel formula, we have \[S(t-t_j) w(t_j) = w(t) +i \int_{t_j}^t S(t - t') |w|^2w(t') dt'.\] \noindent Then, from Case (1) in the proof of Lemma \ref{LEM:NL2} with \eqref{P1}, we have \begin{align*} \|S(t-t_j) w(t_j) \|_{Z_\theta(I_j)} &\leq \|w\|_{Z_\theta(I_j)} + C \|w\|_{X^\frac{d-2}{2}(I_j)} \|w\|_{Z_\theta(I_j)}^2 \leq \eta + C(R, T) \eta^2. \end{align*} \noindent By taking $\eta = \eta(R, T)>0$ sufficiently small, we have \begin{align} \|S(t-t_j) w(t_j) \|_{Z_\theta(I_j)} \leq 2 \eta \label{P2} \end{align} \noindent for all $j = 0, \dots, J-1$. Now, we estimate $v$ on the first interval $I_0$. Let $\eta_0 = \eta_0(R, M)$ be as in Proposition \ref{PROP:LWP2}. Then, by Lemma \ref{LEM:Ys1} (i), \eqref{P0a}, and Proposition \ref{PROP:LWP2}, we have \begin{align*} \|v\|_{X^\frac{d-2}{2} (I_0)} & \leq \|S(t- t_0) v(t_0)\|_{X^\frac{d-2}{2}(I_0)} + \|v - S(t- t_0) v(t_0)\|_{X^\frac{d-2}{2}(I_0)} \notag \\ & \leq R + C\eta^{3-2\theta} \leq 2R, \end{align*} \noindent as long as $2\eta < \eta_0$ and $\tau = \tau(\eta, \alpha, \beta) = \tau(R,M, T, \alpha, \beta)> 0$ is sufficiently small so that \begin{align} \|f\|_{W^s([t_0, t_0+ \tau))} \leq C\tau^\beta \leq \eta^\frac{4-\alpha}{\alpha}, \label{P4a} \end{align} \noindent where $\alpha = \alpha(s)$ satisfies \eqref{NL2a}. Next, we estimate the error term. By Lemma \ref{LEM:NL2} with \eqref{P0}, we have \begin{align} \|e\|_{N^\frac{d-2}{2}(I_0)} \leq C(R, M) \tau^{\alpha\beta}. \label{P4b} \end{align} \noindent Given $\varepsilon > 0$, we can choose $\tau = \tau(R, M, T, \varepsilon, \alpha, \beta)>0$ sufficiently small so that \begin{align*} \|e\|_{N^\frac{d-2}{2}(I_0)} \leq \varepsilon. \end{align*} \noindent In particular, for $\varepsilon < \varepsilon_0$ with $\varepsilon_0= \varepsilon_0(R) > 0$ dictated by Lemma \ref{LEM:perturb}, the condition \eqref{PP2} is satisfied on $I_0$.
Therefore, all the conditions of Lemma \ref{LEM:perturb} are satisfied on the first interval $I_0$, provided that $\tau = \tau (R, M, T, \varepsilon, \alpha, \beta)>0$ is chosen sufficiently small. Hence, we obtain \begin{align} \| w - v\|_{X^\frac{d-2}{2}(I_0)} \leq C_0(R) \varepsilon. \label{P5} \end{align} \noindent In particular, we have \begin{align} \|w(t_1) - v(t_1) \|_{H^\frac{d-2}{2}} \leq C_1(R) \varepsilon. \label{P6} \end{align} \noindent Then, from \eqref{P2} and Lemma \ref{LEM:Ys1} (i) with \eqref{P6}, we have \begin{align*} \|S(t - t_1) v(t_1)\|_{Z_\theta(I_1)} &\leq \|S(t-t_1)w(t_1)\|_{Z_\theta(I_1)}+\|S(t-t_1)(w(t_1)-v(t_1))\|_{Z_\theta(I_1)}\\ &\leq 2 \eta + C'_1(R) \varepsilon \leq 3\eta \end{align*} \noindent by choosing $\varepsilon = \varepsilon(R, \eta) > 0$ sufficiently small. Proceeding as before, it follows from Proposition \ref{PROP:LWP2} with \eqref{P0a} and \eqref{P2} that \begin{align*} \|v\|_{X^\frac{d-2}{2} (I_1)} & \leq R + C\eta^{3-2\theta} \leq 2R, \end{align*} \noindent as long as $3\eta \leq \eta_0$ and $\tau > 0$ is sufficiently small so that \eqref{P4a} is satisfied. Similarly, it follows from Lemma \ref{LEM:NL2} with \eqref{P0} that \begin{align} \|e\|_{N^\frac{d-2}{2}(I_1)} \leq C(R, M) \tau^{\alpha \beta}\leq \varepsilon \label{P7} \end{align} \noindent by choosing $\tau = \tau(R, M, T, \varepsilon, \alpha, \beta)>0$ sufficiently small. Therefore, all the conditions of Lemma \ref{LEM:perturb} are satisfied on the second interval $I_1$, provided that $\tau = \tau (R, M, T, \varepsilon,\alpha, \beta)$ is chosen sufficiently small and that $ (C_1(R) + 1)\varepsilon <\varepsilon_0$. Hence, by Lemma \ref{LEM:perturb}, we obtain \begin{align*} \| w - v\|_{X^\frac{d-2}{2}(I_1)} \leq C_0(R) (C_1(R) + 1)\varepsilon. \end{align*} \noindent In particular, we have \begin{align*} \|w(t_2) - v(t_2) \|_{H^\frac{d-2}{2}} \leq C_2(R) \varepsilon. 
\end{align*} By choosing $\eta = \eta(R, M, T) > 0$ and $\tau = \tau(R, M, T, \varepsilon, \alpha,\beta)>0$ sufficiently small, we can argue inductively and obtain \begin{align} \|w(t_j) - v(t_j) \|_{H^\frac{d-2}{2}} \leq C_j(R) \varepsilon \end{align} \noindent for all $0 \leq j \leq J'$, as long as (i) $(J'+2)\eta \leq \eta_0$ and (ii) $\varepsilon = \varepsilon(R, \eta, J)$ is sufficiently small such that $(C_j(R)+1) \varepsilon < \varepsilon_0$, $j = 1, \dots, J'$. Recalling that $J'+1 \leq J = J(R, T, \eta)$, we see that this can be achieved by choosing $\eta = \eta(R, M, T)>0$, $\varepsilon = \varepsilon(R, M, T) > 0$, and $\tau = \tau(R,M, T, \alpha, \beta) = \tau(R,M, T, s, \beta) >0$ sufficiently small. This guarantees existence of the solution $v$ to \eqref{PNLS1} on $[t_0, t_0+\tau]$. Under the conditions (i) - (iii), we can apply the above local argument on time intervals of length $\tau = \tau(R, M, T, s, \beta)>0$, thus extending the solution $v$ to \eqref{ZNLS1} on the entire interval $[0, T]$. \end{proof} \section{Proof of Theorem \ref{THM:3}} \label{SEC:8} In this section, we prove the following ``almost'' almost sure global existence result. \begin{proposition}\label{PROP:asGWP} Let $d\geq 3$ and $s \in (s_d, s_\textup{crit}]$. Assume Hypothesis \textup{(A)}. Furthermore, assume Hypothesis \textup{(B)} if $d \ne 4$. Given $\phi \in H^s(\mathbb{R}^d)$, let $\phi^\o$ be its Wiener randomization defined in \eqref{R1}, satisfying \eqref{R2}. Then, given any $T, \varepsilon > 0$, there exists a set $\widetilde \O_{T, \varepsilon}\subset \O$ such that \begin{itemize} \item[\textup{(i)}] $P(\widetilde \O_{T, \varepsilon}^c) < \varepsilon$, \item[\textup{(ii)}] For each $\o \in \widetilde \O_{T, \varepsilon}$, there exists a (unique) solution $u$ to \eqref{NLS1} on $[0, T]$ with $u|_{t = 0} = \phi^\o$. \end{itemize} \end{proposition} \noindent It is easy to see that ``almost'' almost sure global existence implies almost sure global existence.
See \cite{CO}. For completeness, we first show how Theorem \ref{THM:3} follows as an immediate consequence of Proposition \ref{PROP:asGWP}. Given small $\varepsilon > 0$, let $T_j = 2^j$ and $\varepsilon_j = 2^{-j} \varepsilon$, $j \in \mathbb{N}$. For each $j$, we apply Proposition \ref{PROP:asGWP} and construct $ \widetilde \Omega_{T_j, \varepsilon_j}$. Then, let $\Omega_\varepsilon = \bigcap_{j = 1}^\infty \widetilde \Omega_{T_j, \varepsilon_j}$. Note that (i) $P (\Omega_\varepsilon^c) < \varepsilon$, and (ii) for each $\o \in \O_\varepsilon$, we have a global solution $u$ to \eqref{NLS1} with $u|_{t = 0} = \phi^\o$. Now, let $\Sigma = \bigcup_{\varepsilon > 0} \Omega_\varepsilon$. Then, we have $P (\Sigma^c) = 0$. Moreover, for each $\o \in \Sigma$, we have a global solution $u$ to \eqref{NLS1} with $u|_{t = 0} = \phi^\o$. This proves Theorem \ref{THM:3}. The rest of this section is devoted to the proof of Proposition \ref{PROP:asGWP}. \begin{proof}[Proof of Proposition \ref{PROP:asGWP}] Given $T, \varepsilon > 0$, set \begin{equation} M = M(T, \varepsilon) \sim \|\phi\|_{H^s}\Big(\log\frac{1}{\varepsilon}\Big)^\frac{1}{2}. \label{asGWP0} \end{equation} \noindent Defining $ \O_1 = \O_1(T, \varepsilon)$ by \[ \O_1 := \big\{ \o \in \O:\, \|S(t) \phi^\o \|_{Y^s([0, T])} \leq M\big\},\] \noindent it follows from Lemma \ref{LEM:Ys1} (i) and Lemma \ref{LEM:Hs} that \begin{equation} P( \O_1^c) < \frac{\varepsilon}{3}. \label{asGWP1} \end{equation} Given $T, \varepsilon > 0$, let $R = R(T, \frac \varepsilon 3)$ and $M$ be as in \eqref{HypA} and \eqref{asGWP0}, respectively. With $\tau = \tau(R, M, T)$ as in Proposition \ref{PROP:perturb2}, write \[ [0, T] = \bigcup_{j = 0}^{[\frac T{ \tau_*}]} [j \tau_*, (j + 1) \tau_*] \cap [0, T], \] \noindent for some $\tau_* \leq \tau$ (to be chosen later).
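In passing, the choice of $M$ in \eqref{asGWP0} can be read off as follows: assuming, as in the analogous estimate for $P(\O_\mu^c)$ in Section \ref{SEC:9}, that Lemma \ref{LEM:Hs} yields a Gaussian tail bound of the form
\[ P\big( \|S(t) \phi^\o\|_{Y^s([0, T])} > M \big) \leq C \exp\bigg( -c\, \frac{M^2}{\|\phi\|_{H^s}^2} \bigg), \]
\noindent the requirement \eqref{asGWP1} is met once $M \gtrsim \|\phi\|_{H^s}\big(\log\frac{1}{\varepsilon}\big)^{\frac{1}{2}}$, which is precisely \eqref{asGWP0}.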
Now, define $\O_2$ by \[ \O_2 = \Big\{ \o \in \O: \| S(t) \phi^\o \|_{W^s([j \tau_*, (j+1)\tau_*])} \leq C \tau_*^\frac{1}{2(d+2)}, \, j = 0, \dots, \big[\tfrac T{\tau_*}\big] \Big\}, \] \noindent where $C$ is as in \eqref{P0}. Then, by Lemma \ref{PROP:Str1}, we have \begin{align*} P(\O_2^c) & \leq \sum_{j = 0}^{[\frac{T}{\tau_*}] } P\Big( \| S(t) \phi^\o \|_{W^s([j\tau_*, (j+1)\tau_*])}\geq C\tau_*^\frac{1}{2(d+2)}\Big) \lesssim \frac{T}{\tau_*} \exp\bigg(-\frac{c}{\tau_*^\frac{1}{d+2} \|\phi\|_{H^s}^2}\bigg)\notag \\ \intertext{By making $\tau_* = \tau_*(\|\phi\|_{H^s})$ smaller, if necessary, we have} & \lesssim \frac{T}{\tau_\ast}\cdot \tau_* \exp\bigg(-\frac{c}{2\tau_*^\frac{1}{d+2} \|\phi\|_{H^s}^2}\bigg) = T \exp\bigg(-\frac{c}{2\tau_*^\frac{1}{d+2} \|\phi\|_{H^s}^2}\bigg). \end{align*} \noindent Hence, by choosing $\tau_* = \tau_*(T, \varepsilon, \|\phi\|_{H^s})$ sufficiently small, we have \begin{align} P(\O_2^c) < \frac{\varepsilon}{3}. \label{asGWP4} \end{align} Finally, set $\widetilde \O_{T, \varepsilon} = \Omega_{T, \frac \varepsilon 3}\cap \O_1 \cap\O_2$, where $\Omega_{T, \frac \varepsilon 3}$ is as in Hypothesis (A) with $\varepsilon$ replaced by $\frac \varepsilon 3$. Then, from \eqref{asGWP1} and \eqref{asGWP4}, we have \[ P(\widetilde \O_{T, \varepsilon}^c) < \varepsilon.\] \noindent Moreover, for $\o \in \widetilde \O_{T, \varepsilon}$, we can iteratively apply Proposition \ref{PROP:perturb2} and Remark \ref{REM:perturb3} and construct the solution $v = v^\o$ on each $[j \tau_*, (j+1)\tau_*]$, $j = 0, \dots, \big[\frac{T}{\tau_*}\big]-1$, and $\big[ \big[\frac{T}{\tau_*}\big]\tau_*, T\big]$. This completes the proof of Proposition \ref{PROP:asGWP}. \end{proof} \begin{remark}\label{REM:asGWP} \rm It is worthwhile to mention that the proof of Proposition \ref{PROP:asGWP} strongly depends on the quasi-invariance property of the distribution of the linear solution $S(t) \phi^\o$.
More precisely, in the proof above, we exploited the fact that the distribution of $\| S(t) \phi^\o \|_{W^s([t_0, t_0 + \tau_*])}$ depends essentially only on the length $\tau_*$ of the interval and is independent of $t_0$. \end{remark} \section{Probabilistic global existence via randomization on dilated cubes} \label{SEC:9} In this section, we present the proof of Theorem \ref{THM:4}. The main idea is to exploit the dilation symmetry of the cubic NLS \eqref{NLS1}. For a function $\phi=\phi(x)$, we define its scaling $\phi_\mu$ by \[\phi_\mu (x) := \mu^{-1} \phi(\mu^{-1}x),\] \noindent while for a function $f=f(t,x)$, we define its scaling $f_\mu$ by \[ f_\mu(t,x):=\mu^{-1}f(\mu^{-2}t,\mu^{-1}x).\] \noindent Then, given $\phi \in H^s(\mathbb{R}^d)$, we have \begin{align} \|\phi_\mu\|_{\dot H^s(\mathbb{R}^d)} = \mu^{\frac{d-2}{2} - s}\|\phi\|_{\dot H^s(\mathbb{R}^d)}. \label{D1} \end{align} \noindent If $s < s_\textup{crit} = \frac{d-2}{2}$, that is, if $\phi$ is supercritical with respect to the scaling symmetry, then we can make the $H^s$-norm of the scaled function $\phi_\mu$ small by taking $\mu \ll 1$. The issue is that the Strichartz estimates we employ in proving probabilistic well-posedness are (sub-)critical and do not become small even if we take $\mu \ll 1$. It is for this reason that we consider the randomization $\phi^{\omega, \mu}$ on {\it dilated} cubes. Fix $\phi \in H^s(\mathbb{R}^d)$ with $s \in( s_d, s_\textup{crit})$, where $s_\textup{crit} = \frac{d-2}{2}$ and $s_d$ is as in \eqref{Sd1}. Let $\phi^{\omega, \mu}$ be its randomization on dilated cubes of scale $\mu$ as in \eqref{R3}.
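For completeness, the scaling relation \eqref{D1} follows from a direct computation on the Fourier side: since $\widehat{\phi_\mu}(\xi) = \mu^{d-1} \widehat{\phi}(\mu \xi)$, a change of variables gives
\[ \|\phi_\mu\|_{\dot{H}^s}^2 = \mu^{2(d-1)} \int_{\mathbb{R}^d} |\xi|^{2s} |\widehat{\phi}(\mu\xi)|^2 d\xi = \mu^{d-2-2s} \|\phi\|_{\dot{H}^s}^2. \]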
Instead of considering \eqref{NLS1} with $u_0 = \phi^{\omega, \mu}$, we consider the scaled Cauchy problem: \begin{equation} \begin{cases}\label{NLS3} i \partial_t u_\mu + \Delta u_\mu = \pm |u_\mu|^{2}u_\mu \\ u_\mu\big|_{t = 0} = u_{0, \mu} = (\phi^{\omega, \mu})_\mu, \end{cases} \end{equation} \noindent where $u_\mu$ is as in \eqref{scaling} and $(\phi^{\omega, \mu})_\mu(x) := \mu^{-1} \phi^{\omega, \mu}(\mu^{-1}x)$ is the scaled randomization. For notational simplicity, we denote $(\phi^{\omega, \mu})_\mu$ by $\phi^{\omega, \mu}_\mu$ in the following. Denoting the linear and nonlinear part of $u_\mu$ by $z_\mu (t) = z^\o_\mu(t) : = S(t) \phi^{\o, \mu}_\mu$ and $v_\mu(t) := u_\mu(t) - S(t) \phi^{\o, \mu}_\mu$ as before, we reduce \eqref{NLS3} to \begin{equation} \begin{cases} i \partial_t v_\mu + \Delta v_\mu = \pm |v_\mu + z_\mu|^2(v_\mu+z_\mu)\\ v_\mu|_{t = 0} = 0. \end{cases} \label{NLS4} \end{equation} \noindent Note that if $u$ satisfies \eqref{NLS1} with initial data $u(0)=\phi^{\o,\mu}$ then $u_\mu$, $z_\mu$, and $v_\mu$ are indeed the scalings of $u$, $z:=S(t)\phi^{\o, \mu}$, and $v : = u -z$, respectively. For $u_\mu$ this simply follows from the scaling symmetry of \eqref{NLS1}. For $z_\mu$ and $v_\mu$, this follows from the following observation: \begin{align}\label{S_mu} \mathcal F_x\Big[\big(S(t)\phi^{\o,\mu}\big)_\mu\Big](\xi) = \mu^{d-1}e^{-i\frac{t}{\mu^2}|\mu\xi|^2} \widehat{\phi^{\o,\mu}}(\mu \xi) = e^{-it|\xi|^2}\widehat{\phi^{\o, \mu}_\mu}( \xi) =\widehat{z_\mu}(t, \xi). \end{align} Define $\Gamma_\mu$ by \begin{equation} \Gamma_\mu v_\mu(t) =\mp i \int_0^t S(t-t') \mathcal{N} (v_\mu+z_\mu )(t') dt'. 
\label{NLS7} \end{equation} \noindent In the following, we show that there exists $\mu_0 = \mu_0(\varepsilon, \|\phi\|_{H^s} ) > 0$ such that, for $\mu \in (0, \mu_0)$, the estimates \eqref{nl1c} and \eqref{nl1d} in Proposition \ref{PROP:NL1} (with $\widetilde \Gamma$ replaced by $\Gamma_\mu$) hold with $R = \eta_2$ outside a set of probability $< \varepsilon$, where $\eta_2$ is as in \eqref{small0}. In view of \eqref{psi}, it is easy to see that \begin{equation*} \psi(D-n) \phi_\mu =\big( \psi^\mu(D-\mu n) \phi\big)_\mu. \end{equation*} \noindent Hence, we have \begin{equation} \phi^{\o, \mu}_\mu = (\phi^{\o, \mu})_\mu= \sum_{n \in \mathbb{Z}^d} g_n(\o) \psi(D-n) \phi_\mu. \label{D2} \end{equation} \noindent Given $\eta_2$ as in \eqref{small0} and $\mu>0$, define $\Omega_{1, \mu}$ by \begin{equation*} \Omega_{1, \mu} = \Big\{ \o \in \O:\, \|S(t) \phi^{\omega, \mu}_\mu \|_{L^q_t W^{s, q}_x ( \mathbb{R} \times \mathbb{R}^d)} \leq \eta_2, \, q = 4, \tfrac{6(d+2)}{d+4}, d+2\Big\} . \end{equation*} \noindent We also define $\Omega_{2, \mu}$ by \begin{equation*} \Omega_{2, \mu} = \big\{ \o \in \O:\, \| \phi^{\omega, \mu}_\mu \|_{H^s(\mathbb{R}^d)} \leq \eta_2\big\}. \end{equation*} \noindent Now, let $\O_\mu = \Omega_{1, \mu} \cap \Omega_{2, \mu} $. Noting that $ 4, \frac{6(d+2)}{d+4}$, and $d+2$ are larger than the diagonal Strichartz admissible index $\frac{2(d+2)}{d}$, it follows from Lemma \ref{PROP:Str2} and Lemma \ref{LEM:Hs} with \eqref{D2} and \eqref{D1} that \begin{align*} P( \O_{ \mu}^c) \leq C\exp\bigg( -c \frac{\eta_2^2}{ \|\phi_\mu\|_{H^s}^2}\bigg) \leq C\exp\bigg( -c \frac{\eta_2^2}{ \mu^{d-2 -2 s}\|\phi\|_{H^s}^2}\bigg) \end{align*} \noindent for $\mu \leq 1$.
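The threshold \eqref{D2a} below is simply what is needed to make this bound smaller than $\varepsilon$:
\[ C\exp\bigg( -c\, \frac{\eta_2^2}{\mu^{d-2-2s} \|\phi\|_{H^s}^2}\bigg) < \varepsilon \quad \Longleftrightarrow \quad \mu^{\frac{d-2}{2}-s} \lesssim \frac{\eta_2}{\|\phi\|_{H^s} \big(\log\frac{C}{\varepsilon}\big)^{\frac{1}{2}}}. \]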
Then, by setting \begin{equation} \mu_0 \sim \Bigg(\frac{\eta_2}{\|\phi\|_{H^s} \big(\log \frac{1}{\varepsilon}\big)^\frac{1}{2}}\Bigg)^\frac{1}{\frac{d-2}{2}-s}, \label{D2a} \end{equation} \noindent we have \begin{align} P( \O_\mu^c) <\varepsilon \label{D3} \end{align} \noindent for $\mu \in (0, \mu_0)$. Note that $\mu_0 \to 0$ as $\varepsilon \to 0$. Recall that $q = 4, \frac{6(d+2)}{d+4}$, and $d+2$ are the only relevant values of the space-time Lebesgue indices controlling the random forcing term in the proof of Proposition \ref{PROP:NL1}. Hence, the estimates \eqref{nl1c} and \eqref{nl1d} in Proposition \ref{PROP:NL1} (with $\widetilde \Gamma$ replaced by $\Gamma_\mu$) hold with $R = \eta_2$ for each $\o \in \O_\mu$. Then, by repeating the proof of Theorem \ref{THM:2} in Section \ref{SEC:THM12}, we see that, for each $\o \in \O_\mu$, there exists a global solution $u_\mu$ to \eqref{NLS3} with $u_\mu|_{t = 0} = \phi^{\o, \mu}_\mu$ which scatters both forward and backward in time. By undoing the scaling, we obtain a global solution $u$ to \eqref{NLS1} with $u|_{t = 0} = \phi^{\o, \mu}$ for each $\o \in \O_\mu$. Moreover, scattering for $u_\mu$ implies scattering for $u$. Indeed, as in Theorem \ref{THM:2}, there exists $v_{+,\mu}\in H^{\frac{d-2}{2}}(\mathbb{R}^d)$ such that \[\lim_{t\to\infty}\big\|u_\mu(t)-S(t)\big(\phi^{\omega,\mu}_\mu+v_{+,\mu}\big)\big\|_{H^{\frac{d-2}{2}}}=0.\] Then, a computation analogous to \eqref{S_mu} yields \[S(t)(\phi^{\omega,\mu}_\mu+v_{+,\mu}) =\big(S(t)(\phi^{\omega,\mu}+v_+)\big)_{\mu},\] \noindent where $v_+:=(v_{+,\mu})_{\mu^{-1}}\in H^{\frac{d-2}{2}}(\mathbb{R}^d)$. Then, by \eqref{D1}, it follows that \[\lim_{t\to \infty}\big\|u-S(t)\big(\phi^{\omega,\mu}+v_+\big)\big\|_{H^{\frac{d-2}{2}}}=0.\] \noindent This proves that $u$ scatters forward in time. Scattering of $u$ as $ t\to -\infty$ can be proved analogously. This completes the proof of Theorem \ref{THM:4}.
\section{Density Balanced Decomposition} \label{sec:algo} Density balance, especially local density balance, is seamlessly integrated into each step of our decomposition flow. In this section, we first elaborate on how to integrate density balance into the mathematical formulation and the corresponding SDP formulation. This is followed by a discussion of density balance in all the other steps. \vspace{-.1in} \subsection{Density Balanced SDP Algorithm} \label{sec:db_sdp} \begin{table}[tb] \renewcommand{\arraystretch}{1} \centering \caption{Notations used in color assignment} \label{table:notation} \begin{tabular}{|c|c|} \hline \hline $CE$ & the set of conflict edges\ \ \ \ \ \\ \hline $SE$ & the set of stitch edges\\ \hline $V$ & the set of features\\ \hline $B$ & the set of local bins\\ \hline\hline \end{tabular} \vspace{-.2in} \end{table} For each decomposition graph, density balanced color assignment is carried out. Some notations used are listed in Table \ref{table:notation}. See the Appendix for preliminaries on the semidefinite programming (SDP) based algorithm. \vspace{-.05in} \subsubsection{Density Balanced Mathematical Formulation} The mathematical formulation for the general density balanced layout decomposition is shown in (\ref{eq:math}), where the objective is to simultaneously minimize the conflict number, the stitch number, and the density uniformity of all bins. Here $\alpha$ and $\beta$ are user-defined parameters for assigning the relative weights among the three values.
\begin{figure}[h] \vspace{-.2in} \begin{align} \label{eq:math} \textrm{min} & \sum_{e_{ij} \in CE} c_{ij} + \alpha \sum_{e_{ij} \in SE}s_{ij} + \beta \cdot \sum_{b_k \in B} DU_k \\ \textrm{s.t}.\ \ & c_{ij} = ( x_i == x_j ) \qquad \qquad \qquad \ \forall e_{ij} \in CE \tag{$1a$}\label{math_a}\\ & s_{ij} = x_i \oplus x_j \qquad \qquad \qquad \qquad \forall e_{ij} \in SE \tag{$1b$}\label{math_b}\\ & x_i \in \{1, 2, 3\} \qquad \qquad \qquad \qquad \forall r_i \in V \tag{$1c$}\label{math_c}\\ & d_{kc} = \sum_{x_i = c} den_{ki} \quad \qquad \qquad \forall r_i \in V, \ \ b_k \in B \tag{$1d$}\label{math_d}\\ & DU_k = \textrm{max}\{d_{kc}\} / \textrm{min}\{d_{kc}\} \qquad \forall b_k \in B \tag{$1e$}\label{math_e} \end{align} \vspace{-.3in} \end{figure} Here $x_i$ is a variable representing the color (mask) of feature $r_i$, $c_{ij}$ is a binary variable for the conflict edge $e_{ij} \in CE$, and $s_{ij}$ is a binary variable for the stitch edge $e_{ij} \in SE$. The constraints (\ref{math_a}) and (\ref{math_b}) are used to evaluate the conflict number and stitch number, respectively. The constraint (\ref{math_e}) is nonlinear, which makes the program (\ref{eq:math}) hard to formulate as an integer linear program (ILP) as in \cite{TPL_ICCAD2011_Yu}. Similar nonlinear constraints occur in the floorplanning problem \cite{FLOOR_DAC00_Chen}, where Taylor expansion is used to linearize the constraint into ILP. However, Taylor expansion introduces a loss of accuracy. Compared with the traditional time-consuming ILP, semidefinite programming (SDP) has been shown to be a better approach in terms of runtime and solution quality tradeoffs \cite{TPL_ICCAD2011_Yu}. However, how to integrate the density balance into the SDP formulation is still an open question. In the following we show that, instead of resorting to Taylor expansion, this nonlinear constraint can be integrated into SDP without losing any accuracy.
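As a concrete illustration of the objective in (\ref{eq:math}), the sketch below evaluates the cost of a fixed coloring on a toy instance; the function name, the weights, and the data layout are our own illustrative choices, not part of the paper.

```python
# Illustrative evaluation of objective (1): conflicts + alpha*stitches
# + beta * sum of per-bin density uniformities DU_k = max_c d_kc / min_c d_kc.
def decomposition_cost(colors, conflict_edges, stitch_edges, bin_density,
                       alpha=0.1, beta=0.01):
    """colors[i] in {1,2,3}; bin_density[k] maps feature i -> den_{ki}."""
    conflicts = sum(1 for i, j in conflict_edges if colors[i] == colors[j])
    stitches = sum(1 for i, j in stitch_edges if colors[i] != colors[j])
    uniformity = 0.0
    for den in bin_density:
        # d_{kc}: total density of features assigned color c in bin k
        d = [sum(w for i, w in den.items() if colors[i] == c) for c in (1, 2, 3)]
        uniformity += max(d) / min(d) if min(d) > 0 else float("inf")
    return conflicts + alpha * stitches + beta * uniformity

# One cut stitch edge and a perfectly balanced bin (DU_k = 1, its minimum):
cost = decomposition_cost({0: 1, 1: 2, 2: 3}, [(0, 1)], [(1, 2)],
                          [{0: 1.0, 1: 1.0, 2: 1.0}])
```

With the default weights this gives $0 + 0.1 \cdot 1 + 0.01 \cdot 1 = 0.11$; an empty color class in some bin drives $DU_k$ (and hence the cost) to infinity, which is why the SDP surrogate $DU_k^*$ of the next subsection is easier to handle.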
\subsubsection{Density Balanced SDP Formulation} In the SDP formulation, the objective function is expressed in terms of vector inner products, i.e., $\vec{v_i} \cdot \vec{v_j}$. At first glance, the constraint (\ref{math_e}) cannot be formulated into an inner product format. However, we will show that density uniformity $DU_k$ can be optimized by considering another form $DU_k^* = d_{k1} \cdot d_{k2} + d_{k1} \cdot d_{k3} + d_{k2} \cdot d_{k3}$. This is based on the following observation: maximizing $DU_k^*$ is equivalent to minimizing $DU_k$. \iffalse \begin{proof} First we prove that when $DU_k^*$ is maximum, $DU_k$ is minimum. Since $d_{k1} + d_{k2} + d_{k3}$ is a constant $n$, after replacing $d_{k3}$ with $n - d_{k1} - d_{k2}$, we can achieve: \begin{eqnarray} DU_k^* & = & d_{k1} d_{k2} + d_{k1} (n - d_{k1} - d_{k2}) + d_{k2} (n - d_{k1} - d_{k2}) \notag\\ & = & n (d_{k1} + d_{k2}) - d_{k1}^2 - d_{k2}^2 - d_{k1} d_{k2} \notag \end{eqnarray} If $DU_k^*$ gets the maximal value, then $\partial DU_k^* / \partial d_{k1} = 0$ and $\partial DU_k^* / \partial d_{k2} = 0$. \begin{displaymath} \left\{ \begin{array}{c} n - 2 \cdot d_{k1} - d_{k2} = 0 \\ n - 2 \cdot d_{k2} - d_{k1} = 0 \end{array} \right. \Rightarrow d_{k1} = d_{k2} = d_{k3} \end{displaymath} Besides, $\partial^2 DU_k^* / \partial d_{k1}^2 = -2$, which means $DU_k^*$ is maximum. Meanwhile, $DU_k = \textrm{max} \{d_{kc}\} / \textrm{min} \{d_{kc}\}$. It is easy to see that $DU_k$ achieve minimal value $1$. From the other direction, if $DU_k$ is minimum, i.e. $1$, then $d_{k1} = d_{k2} = d_{k3}$. As discussed above , $DU_k^*$ is maximum. \end{proof} \fi \begin{mylemma} $DU_k^* = 2/3 \cdot \sum_{i, j \in V} den_{ki} \cdot den_{kj} \cdot (1 - \vec{v_i} \cdot \vec{v_j})$, where $den_{ki}$ is the density of feature $r_i$ in bin $b_k$. \label{lem:2} \vspace{-.1in} \end{mylemma} \begin{proof} First of all, let us calculate $d_1 \cdot d_2$.
Here we fix a bin $b_k$ and abbreviate $d_c = d_{kc}$ and $len_i = den_{ki}$. Summing over all features $r_i$ with $\vec{v_i} = (1, 0)$ and all features $r_j$ with $\vec{v_j} = (-\frac{1}{2}, \frac{\sqrt{3}}{2})$, we can see that \begin{align} & \sum_i \sum_j len_i \cdot len_j \cdot (1 - \vec{v_i} \cdot \vec{v_j}) = \sum_i \sum_j len_i \cdot len_j \cdot 3/2 \notag\\ = & 3/2 \cdot \sum_i len_i \sum_j len_j = 3/2 \cdot d_1 \cdot d_2 \notag \end{align} So $d_1 \cdot d_2 = 2/3 \cdot \sum_i \sum_j len_i \cdot len_j \cdot (1 - \vec{v_i} \cdot \vec{v_j})$, where $\vec{v_i} = (1, 0)$ and $\vec{v_j} = (-\frac{1}{2}, \frac{\sqrt{3}}{2})$. We can calculate $d_1 \cdot d_3$ and $d_2 \cdot d_3$ similarly. Since $1 - \vec{v_i} \cdot \vec{v_j} = 0$ whenever $r_i$ and $r_j$ have the same color, same-colored pairs contribute nothing to the full sum, and therefore \begin{align} DU_k^* & = d_1 \cdot d_2 + d_1 \cdot d_3 + d_2 \cdot d_3 \notag\\ & = 2/3 \cdot \sum_{i, j \in V} len_i \cdot len_j \cdot (1 - \vec{v_i} \cdot \vec{v_j}) \notag \end{align} \end{proof} By Lemma \ref{lem:2}, $DU_k^*$ can be represented in terms of vector inner products, and we obtain the following theorem. \begin{mytheorem} Maximizing $DU_k^*$ can achieve better density balance in bin $b_k$. \vspace{-.1in} \end{mytheorem} Note that we can remove the constant $\sum_{i,j \in V} den_{ki} \cdot den_{kj} \cdot 1$ in the $DU_k^*$ expression. Similarly, we can eliminate the constants in the calculation of the conflict and stitch numbers.
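The identity of Lemma \ref{lem:2} can also be checked numerically; the sketch below verifies it for a hypothetical bin with made-up densities and a fixed coloring (all values are illustrative, not from the paper).

```python
import itertools
import math

# The three unit vectors at mutual angle 120 degrees from the vector program.
VECS = [(1.0, 0.0), (-0.5, math.sqrt(3) / 2), (-0.5, -math.sqrt(3) / 2)]

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

# Hypothetical per-feature densities den_{ki} and a fixed 3-coloring.
den = [3.0, 1.0, 2.0, 2.0, 1.0]
color = [0, 1, 2, 0, 1]

# Left-hand side: DU_k^* = d_{k1} d_{k2} + d_{k1} d_{k3} + d_{k2} d_{k3}.
d = [sum(den[i] for i in range(len(den)) if color[i] == c) for c in range(3)]
lhs = d[0] * d[1] + d[0] * d[2] + d[1] * d[2]

# Right-hand side: 2/3 * sum over feature pairs of den_i den_j (1 - v_i.v_j);
# same-colored pairs contribute 0 since v_i . v_i = 1.
rhs = 2 / 3 * sum(den[i] * den[j] * (1 - dot(VECS[color[i]], VECS[color[j]]))
                  for i, j in itertools.combinations(range(len(den)), 2))
```

Here the per-color totals are $d = (5, 2, 2)$, so both sides evaluate to $24$.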
The simplified vector program is as follows: \begin{figure}[h] \vspace{-.1in} \begin{align} \vspace{-.1in} \label{eq:vp} \textrm{min} & \sum_{e_{ij} \in CE} ( \vec{v_i} \cdot \vec{v_j} ) - \alpha \sum_{e_{ij} \in SE} ( \vec{v_i} \cdot \vec{v_j} ) - \beta \cdot \sum_{b_k \in B} DU_k^*\\ \textrm{s.t}.\ \ & DU_k^* = - \sum_{i, j \in V} den_{ki} \cdot den_{kj} \cdot ( \vec{v_i} \cdot \vec{v_j}) \quad \forall b_k \in B \label{vp_a}\tag{$2a$}\\ & \vec{v_i} \in \{(1, 0), (-\frac{1}{2}, \frac{\sqrt{3}}{2}), (-\frac{1}{2}, -\frac{\sqrt{3}}{2})\} \label{vp_b}\tag{$2b$} \end{align} \vspace{-.2in} \end{figure} Formulation (\ref{eq:vp}) is equivalent to the mathematical formulation (\ref{eq:math}), and it is still NP-hard to solve exactly. Constraint (\ref{vp_b}) requires the solutions to be discrete. To achieve a good tradeoff between runtime and accuracy, we can relax (\ref{eq:vp}) into an SDP formulation, as shown in Theorem \ref{the:sdp}. \begin{mytheorem} \vspace{0in} \label{the:sdp} Relaxing the vector program (\ref{eq:vp}) yields the SDP formulation (\ref{eq:sdp}). \vspace{-.1in} \end{mytheorem} \begin{figure}[h] \vspace{-.1in} \begin{align} \label{eq:sdp} \textrm{SDP:\ \ min} & \ \ \ A \bullet X \\ & X_{ii} = 1, \ \ \forall i \in V \tag{$3a$}\\ & X_{ij} \ge -\frac{1}{2}, \ \ \forall e_{ij} \in CE \tag{$3b$}\\ & X \succeq 0 \tag{$3c$}\label{sdp_c} \end{align} \vspace{-.3in} \end{figure} where $A_{ij}$ is the entry in the $i$-th row and the $j$-th column of matrix $A$: \begin{equation} A_{ij} = \left\{ \begin{array}{cc} 1 + \beta \cdot \sum_{k} den_{ki} \cdot den_{kj}, & \forall b_k \in B, e_{ij} \in CE\\ -\alpha + \beta \cdot \sum_{k} den_{ki} \cdot den_{kj}, & \forall b_k \in B, e_{ij} \in SE\\ \beta \cdot \sum_{k} den_{ki} \cdot den_{kj}, & \textrm{otherwise} \end{array} \right.
\notag \label{eq:sdp_a} \end{equation} \iffalse \begin{proof} To achieve a good tradeoff between runtime and accuracy, we relax constraint (\ref{vp_b}) to generate formula (\ref{eq:rvp}) as following: \begin{align} \vspace{-.1in} \textrm{min} & \sum_{e_{ij} \in CE} ( \vec{y_i} \cdot \vec{y_j}) - \alpha \sum_{e_{ij} \in SE} ( \vec{y_i} \cdot \vec{y_j} ) - \beta \cdot \sum_{b_k \in B} DU_k^* \label{eq:rvp}\\ \textrm{s.t}.\ \ & DU_k^* = - \sum_{i, j \in V} den_{ki} \cdot den_{kj} \cdot (\vec{y_i} \cdot \vec{y_j}) \ \ \forall b_k \in B \label{sdpa}\tag{$a$}\\ & \vec{y_i} \cdot \vec{y_i} = 1 , \ \ \ \forall i \in V \label{sdpb}\tag{$b$}\\ & \vec{y_i} \cdot \vec{y_j} \ge -\frac{1}{2}, \ \ \ \forall e_{ij} \in CE \label{sdpc}\tag{$c$} \end{align} Here each unit vector $\vec{v_i}$ is replaced by a $n$ dimensional vector $\vec{y_i}$. In the following we prove that SDP formulation (\ref{eq:sdp}) and relaxed vector programming (\ref{eq:rvp}) are equivalent. Given solutions $\{\vec{v_1}, \vec{v_2}, \cdots \vec{v_m}\}$ of (\ref{eq:rvp}), the corresponding matrix $X$ is defined as $X_{ij}=\vec{v_i} \cdot \vec{v_j}$. In the other direction, given a matrix $X$ from (\ref{eq:sdp}), we can find a matrix $V$ satisfying $X=VV^T$ by using the Cholesky decomposition. The rows of $V$ are vectors $\{v_i\}$ that form the solutions of (\ref{eq:rvp}). \end{proof} \fi Due to the space limit, the detailed proof is omitted. The solution of (\ref{eq:sdp}) is continuous instead of discrete, and provides a lower bound for the vector program (\ref{eq:vp}). In other words, (\ref{eq:sdp}) provides an approximate solution to (\ref{eq:vp}). \input{doc/db_mapping} \vspace{-.1in} \subsection{Density Balanced Layout Graph Simplification} Here we show that the layout graph simplification, which was proposed in \cite{TPL_ICCAD2011_Yu}, can take local density balance into account as well. During layout graph simplification, we iteratively remove all vertices with degree less than or equal to two and push them onto a stack.
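A minimal sketch of this peeling step, together with the color recovery it enables, is given below; the graph representation is illustrative, and for brevity the sketch picks the smallest legal color rather than the density-aware choice described next.

```python
# Sketch of layout graph simplification: peel vertices of degree <= 2,
# then recover them in reverse order with a color unused by their neighbors
# (with three colors, a vertex of degree <= 2 always has a legal color).
def simplify(adj):
    """adj: {v: set of conflict neighbors}. Returns (core vertices, stack)."""
    adj = {v: set(ns) for v, ns in adj.items()}
    stack, changed = [], True
    while changed:
        changed = False
        for v in list(adj):
            if len(adj[v]) <= 2:
                stack.append((v, adj.pop(v)))  # remember neighbors at removal
                for ns in adj.values():
                    ns.discard(v)
                changed = True
    return set(adj), stack

def recover(stack, colors):
    """colors: pre-assigned colors of the core vertices.  Pop removed
    vertices; each one's stored neighbors are already colored, by the
    reverse ordering, so a legal color always exists."""
    for v, neigh in reversed(stack):
        used = {colors[n] for n in neigh if n in colors}
        colors[v] = min({1, 2, 3} - used)
    return colors
```

On a triangle with a pendant vertex, for instance, all four vertices are peeled and then recovered conflict-free.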
After the color assignment on the remaining vertices, we iteratively recover all the removed vertices and assign legal colors. Instead of picking one at random, we search for a legal color that benefits density uniformity. \begin{comment} The second technique is presented during decomposition graphs merging. Using bridge computation \cite{TPL_DAC2012_Fang}, the whole design can be broken down into several components. After color assignment independently on each component, the results on all decomposition graphs are merged together. Since that two components share one or two vertex/edge(s), we can rotate the colors in one component so that the merging can introduces better balanced density. Note that there is no cycle when we merge the decomposition graphs, thus the merging can be done in linear time through a breadth-first search (BFS). \end{comment} \section{Decomposition Example} \label{sec:app_example} \section{Proof of Lemmas} \label{sec:app_proof} \section{Additional Experimental Results} \label{sec:app_result} \section{Stitch Candidate Generation} \label{sec:app_stitch} Stitch candidate generation is an important step in parsing a layout. \cite{TPL_DAC2012_Fang}\cite{TPL_SPIE2011_Ghaida} pointed out that the stitch candidates generated by previous DPL works cannot be directly applied in TPL layout decomposition. Therefore, we provide a procedure to generate appropriate stitch candidates for TPL. The main idea is that after projection, each feature is divided into several segments, each of which is labeled with a number representing how many other features are projected onto it. If one segment is projected onto by fewer than two features, then a stitch can be introduced. Note that to reduce the problem size, we restrict the maximum stitch candidate number on each feature. \section{Conclusion} \label{sec:conclusion} In this paper, we propose a high-performance TPL layout decomposer with balanced density.
Density balancing is integrated into all the key steps of our decomposition flow. In addition, we propose a set of speedup techniques, such as layout graph cut vertex stitch forbiddance, decomposition graph vertex clustering, and fast color assignment trial. Compared with state-of-the-art frameworks, our decomposer demonstrates the best performance in minimizing the cost of conflicts and stitches. Furthermore, our balanced decomposer can obtain less EPE while maintaining very comparable conflict and stitch results. As TPL may be adopted by industry for 14nm/11nm nodes, we believe more research will be needed to enable TPL-friendly design and mask synthesis. \vspace{-.1in} \section*{Acknowledgment} This work is supported in part by NSF grants CCF-0644316 and CCF-1218906, SRC task 2414.001, NSFC grant 61128010, and IBM Scholarship. \vspace{-.1in} \subsection{Density Balanced Mapping} \label{sec:db_mapping} Each $X_{ij}$ in the solution of (\ref{eq:sdp}) corresponds to a feature pair $(r_i, r_j)$. The value of $X_{ij}$ provides a guideline, i.e., whether the two features $r_i$ and $r_j$ should be in the same color. If $X_{ij}$ is close to $1$, features $r_i$ and $r_j$ tend to be in the same color (mask); whereas if it is close to $-0.5$, $r_i$ and $r_j$ tend to be in different colors (masks). With these guidelines a mapping procedure is adopted to finally assign all input features into three colors (masks). \subsubsection{Limitations of Greedy Mapping} In \cite{TPL_ICCAD2011_Yu}, a greedy approach was applied for the final color assignment. The idea is straightforward: all $X_{ij}$ values are sorted, and vertices $r_i$ and $r_j$ with a larger $X_{ij}$ value tend to be in the same color. The $X_{ij}$ values can be classified into two types: clear and vague. If most of the $X_{ij}$s in matrix $X$ are clear (close to 1 or -0.5), this greedy method may achieve a good result. However, if the decomposition graph is not 3-colorable, some values in matrix $X$ are vague.
For a vague $X_{ij}$, e.g., $0.5$, the greedy method may not be so effective. \subsubsection{Density Balanced Partition based Mapping} In contrast to the previous greedy approach, we propose a partition based mapping, which can solve the assignment problem for the vague $X_{ij}$s in a more effective way. The new mapping is based on a three-way maximum-cut partitioning. The main ideas are as follows. If an $X_{ij}$ is vague, instead of only relying on the SDP solution, we also take advantage of the information in the decomposition graph. The information is captured by constructing a graph, denoted by $G_M$. By formulating the mapping as a three-way partitioning on the graph $G_M$, our mapping can provide a global view to search for better solutions. \begin{algorithm}[thb] \caption{Partition based Mapping} \label{alg:mapping} \begin{algorithmic}[1] \REQUIRE Solution matrix $X$ of the program (\ref{eq:sdp}). \STATE Label each non-zero entry $X_{ij}$ as a triplet $(X_{ij}, i, j)$; \STATE Sort all $(X_{ij}, i, j)$ by $X_{ij}$; \FOR { all triples with $X_{ij} > th_{unn}$} \STATE Union(i, j); \ENDFOR \FOR { all triples with $X_{ij} < th_{sp}$} \STATE Separate(i, j); \ENDFOR \STATE Construct graph $G_M$; \IF{graph size $\le$ 3} \STATE return; \ELSIF{graph size $\le 7$} \STATE Backtracking based three-way partitioning; \ELSE \STATE FM based three-way partitioning; \ENDIF \end{algorithmic} \end{algorithm} \begin{figure}[tb] \centering \subfigure[]{\includegraphics[width=0.16\textwidth]{DAC13_DG1}} \hspace{.1in} \subfigure[]{\includegraphics[width=0.16\textwidth]{DAC13_GroupGraph1}} \subfigure[]{\includegraphics[width=0.16\textwidth]{DAC13_GroupGraph2}} \subfigure[]{\includegraphics[width=0.16\textwidth]{DAC13_GroupGraph3}} \vspace{-.1in} \caption{ Density Balanced Mapping. ~(a) Decomposition graph.~(b) Construct graph $G_M$. ~(c) Mapping result with cut value 8.1 and density uniformities 24. ~(d) A better mapping with cut 8.1 and density uniformities 23.
} \label{fig:db_mapping} \vspace{-.1in} \end{figure} Algorithm \ref{alg:mapping} shows our partition based mapping procedure. Given the solutions from program (\ref{eq:sdp}), some triplets are constructed and sorted to maintain all non-zero $X_{ij}$ values (lines 1--2). The mapping incorporates two stages to deal with the two different types. The first stage (lines 3--8) is similar to that in \cite{TPL_ICCAD2011_Yu}. If $X_{ij}$ is clear then the relationship between vertices $r_i$ and $r_j$ can be directly determined. Here $th_{unn}$ and $th_{sp}$ are user-defined threshold values. For example, if $X_{ij} > th_{unn}$, which means that $r_i$ and $r_j$ should be in the same color, then function Union(i, j) is applied to merge them into a large vertex. Similarly, if $X_{ij} < th_{sp}$, then function Separate(i, j) is used to label $r_i$ and $r_j$ as incompatible. In the second stage (lines 9--16) we deal with the vague $X_{ij}$ values. During the previous stage some vertices have been merged; therefore, the total vertex number is not large. Here we construct a graph $G_M$ to represent the relationships among all the remaining vertices (line 9). Each edge $e_{ij}$ in this graph has a weight representing the cost if vertices $i$ and $j$ are assigned the same color. Therefore, the color assignment problem can be formulated as a maximum-cut partitioning problem on $G_M$ (lines 10--16). By assigning each vertex a weight representing its density, graph $G_M$ is able to balance density among different bins. Based on $G_M$, a partitioning is performed to simultaneously achieve a maximum cut and balanced weight among the different parts. Note that we modify the gain function so that each move strives for a larger cut and more balanced partitions. An example of the density balanced mapping is shown in Fig. \ref{fig:db_mapping}. Based on the decomposition graph (see Fig. \ref{fig:db_mapping} (a)), SDP is formulated.
Given the solutions of SDP, after the first stage of mapping, vertices $a_2$ and $d_2$ are merged into a large vertex. As shown in Fig. \ref{fig:db_mapping}(b), the graph $G_M$ is constructed, where each vertex is associated with a weight. There are two partition results with the same cut value 8.1 (see Fig. \ref{fig:db_mapping} (c) and Fig. \ref{fig:db_mapping} (d)). However, their density uniformities are 24 and 23, respectively. To keep a more balanced density result, the second partitioning in Fig. \ref{fig:db_mapping} (d) is adopted as the color assignment result. It is well known that the maximum-cut problem, even for a 2-way partition, is NP-hard. However, we observe that in many cases, after the global SDP optimization, the graph size of $G_M$ could be quite small, i.e., less than 7. For these small cases, we develop a backtracking based method to search the entire solution space. Note that here backtracking can quickly find the optimal solution even though three-way partitioning is NP-hard. If the graph size is larger, we propose a heuristic method, motivated by the classic FM partitioning algorithm \cite{PAR_DAC82_FM}\cite{PAR_TC89_Sanchis}. Different from the classic FM algorithm, we make the following modifications. (1) In the first stage of mapping, some vertices are labeled as incompatible, therefore before moving a vertex from one partition to another, we should check whether the move is legal. (2) The classical FM algorithm targets the min-cut problem, so we modify the gain function of each move to achieve a maximum cut. The runtime complexity of graph construction is $O(m)$, where $m$ is the vertex number in $G_M$. The runtime of the three-way maximum-cut partitioning algorithm is $O(m \log m)$. Besides, the first stage of mapping needs $O(n^2 \log n)$ \cite{TPL_ICCAD2011_Yu}. Since $m$ is much smaller than $n$, the complexity of density balanced mapping is $O(n^2 \log n)$. \iffalse Here an important concept is called the ``vertex move".
At every move, the vertex who can achieve the most cut increasing is chosen to move from current partition to another one. Another important concept is called the ``pass", where all vertices are moved exactly once during a single pass. When a vertex is chosen to move because it has the maximum cut increasing, we move it even when the increasing value is negative. At the end of the current pass, we accept the first $K$ movings that lead to the best partition, which may contain some not so good movings. The Disjoint-set data structure is used to implement functions Union() and Separate(). Under a special implementation, the runtime of each function can be almost constant \cite{book90Algorithm}. Let $n$ be the vertex number, then the number of triplets is $O(n^2)$. Sorting all the triplets requires $O(n^2logn)$. The runtime complexity of graph construction is $O(m)$, where $m$ is the vertex number in $G_M$. The runtime of three-way maximum-cut partitioning algorithm is $O(m logm)$. Since $m$ is much smaller than $n$, then the complexity of density balanced mapping is $O(n^2logn)$. \fi \section{Introduction} As the minimum feature size further decreases, the semiconductor industry faces great challenges in patterning sub-22nm half-pitch due to the delay of viable next generation lithography, such as extreme ultraviolet (EUV) lithography and electron beam lithography (EBL). Triple patterning lithography (TPL) and self-aligned double patterning (SADP) are candidate solutions for the 14nm logic node \cite{ITRS}. Both TPL and SADP are similar to double patterning lithography (DPL), but with different or more exposure/etching processes \cite{LITH_ICCAD2012_Yu}. SADP may be significantly restrictive on design, i.e., it cannot handle irregular arrangements of contacts and does not allow stitching. Therefore, TPL has begun to receive more attention from industry, especially for metal 1 layer patterns.
For example, industry has already explored test-chip patterns with triple patterning and even quadruple patterning \cite{2009Intel}. Similar to DPL, the key challenge of TPL lies in the decomposition process where the original layout is divided into three masks. During decomposition, when the distance between any two features is less than the minimum coloring distance $dis_m$, they need to be assigned into different masks to avoid a conflict. Sometimes, a conflict can be resolved by splitting a pattern into two touching parts, called \textit{stitches}. After the TPL layout decomposition, the features are assigned into three masks (colors) to remove all conflicts. The advantage of TPL is that the effective pitch can be tripled, which further improves lithography resolution. Besides, some native conflicts in DPL can be resolved. In layout decomposition, especially for TPL, density balance should also be considered, along with the conflict and stitch minimization. A good pattern density balance is also expected to be a consideration in mask CD and registration control \cite{TPL_SPIE2012_Lucas}, while unbalanced density would cause lithography hotspots as well as lowered CD uniformity due to irregular pitches \cite{DPL_ASPDAC2010_Yang}. However, from the algorithmic perspective, achieving a balanced density in TPL could be harder than in DPL. (1) In DPL, two colors can be more implicitly balanced; whereas in TPL, oftentimes existing strategies may try to do DPL first and then do some ``patch" with the third mask, which makes it challenging to ``explicitly" consider the density balance. (2) Due to the additional color, the solution space is much larger \cite{TPL_SPIE08_Cork}. (3) Instead of global density balance, local density balance should be considered to reduce the potential hotspots, since neighboring patterns are one of the main sources of hotspots. As shown in Fig.
\ref{fig:balance} (a)(b), when only global density balance is considered, feature $a$ is assigned the white color. Since two black features are close to each other, a hotspot may be introduced. To consider the local density balance, the layout is partitioned into four bins \{$b_1, b_2, b_3, b_4$\} (see Fig. \ref{fig:balance} (c)). Feature $a$ is covered by bins $b_1$ and $b_2$; therefore, it is colored blue to maintain the local density balance for both bins (see Fig. \ref{fig:balance} (d)). \begin{figure}[tb] \centering \vspace{-.0in} \subfigure[]{\includegraphics[width=0.22\textwidth]{DAC13_GBalance1}} \subfigure[]{\includegraphics[width=0.22\textwidth]{DAC13_GBalance2}} \subfigure[]{\includegraphics[width=0.22\textwidth]{DAC13_LBalance1}} \subfigure[]{\includegraphics[width=0.22\textwidth]{DAC13_LBalance2}} \vspace{-.1in} \caption{ ~Decomposed layout with (a) (b) global balanced density.~(c) (d) local balanced density in all bins. } \label{fig:balance} \vspace{-.2in} \end{figure} \iffalse /*{{{*/ \begin{figure}[tb] \centering \includegraphics[width=0.4\textwidth]{UnbalancedLayout} \caption{~***Need new aerial image for triple patterning case. Unbalanced decomposition may cause some disconnections.} \label{fig:aerial} \addtolength{\belowcaptionskip}{-3mm} \end{figure} We use an example to further motivate such requirement. Even though two features within minimum space are assigned to different masks, unbalanced density can cause lithography hotspots as well as lowered CD uniformity due to irregular pitch. The aerial image of an unbalanced decomposition can have a patterning problem as shown in Fig. \ref{fig:aerial}. Even though the decomposed layout in Fig. \ref{fig:aerial} does not represent a general case, balanced decomposition provides more lithography friendly layout because of higher regularity \cite{DPL_ASPDAC2010_Yang}. Therefore, balanced density should be considered in the decomposition problem.
/*}}}*/ \fi \iffalse A lot of research focuses on the DPL layout decomposition, which is generally regarded as a two-coloring problem on the conflict graphs. There are two strategies to minimize the stitch number and the conflict number: The first one is to iteratively remove conflicts and then minimize the stitches. \cite{DPL_ASPDAC2010_Yang}\cite{DPL_ICCAD09_Xu} proposed some cut based approaches. If the conflict graph is planar, the stitch minimization problem can be optimally solved in polynomial time. However, conflicts are eliminated greedily that it cannot guarantee a minimal conflict number. The second strategy is to minimize the stitch number and the conflict number simultaneously. Integer linear programming (ILP) is adopted in \cite{DPL_TCAD2010_Kahng} \cite{DPL_ISPD09_Yuan} with different feature pre-slicing techniques. However, these methods may suffer from long runtime penalty since ILP is well known NP-hard. \fi There have been investigations on TPL layout decomposition \cite{TPL_SPIE08_Cork,TPL_ICCAD2011_Yu,TPL_DAC2012_Fang,TPL_ICCAD2012_Tian,TPLEC_SPIE2013_Yu,TPL_DAC2013_Kuang} and TPL aware design \cite{DFM_DAC2012_Ma,DFM_ICCAD2012_Lin,DFM_ICCAD2013_Yu}. \cite{TPL_SPIE08_Cork} provided a three-coloring algorithm, which adopts a SAT solver. Yu et al. \cite{TPL_ICCAD2011_Yu} proposed a systematic study of the TPL layout decomposition problem, where they showed that this problem is NP-hard. Fang et al. \cite{TPL_DAC2012_Fang} presented several graph simplification techniques to reduce the problem size, and a maximum independent set (MIS) based heuristic for the layout decomposition. \cite{TPL_ICCAD2012_Tian} proposed a layout decomposer for row-structure layouts.
However, these existing studies suffer from one or more of the following issues: (1) they cannot integrate stitch minimization for the general layout, or can only deal with stitch minimization as a post-process; (2) they directly extend the methodologies from DPL, which loses the global view for TPL; (3) they assign colors one by one, which precludes density balance. In this paper, we propose a high-performance layout decomposer for TPL. Compared with previous works, our decomposer provides not only fewer conflicts and stitches, but also more balanced density. In this work, we focus on the coloring algorithms and leave other layout related optimizations to post-coloring stages, such as compensation for various mask overlay errors introduced by scanner and mask write control processes. However, we do explicitly consider balancing density during coloring, since it is known that mask write overlay control generally benefits from improved density balance. Our key contributions include the following. (1) We accurately integrate density balance into the mathematical formulation; (2) We develop a three-way partition based mapping, which not only achieves fewer conflicts, but also more balanced density; (3) We propose several techniques to speed up the layout decomposition; (4) Our experiments show the best results in solution quality while maintaining better balanced density (i.e., less EPE). The rest of the paper is organized as follows. Section \ref{sec:problem} presents the basic concepts and the problem formulation. Section \ref{sec:overview} gives the overall decomposition flow. Section \ref{sec:algo} presents the details of improving density balance and decomposition performance, and Section \ref{sec:speedup} shows how we further speed up our decomposer. Section \ref{sec:result} presents our experimental results, followed by a conclusion in Section \ref{sec:conclusion}.
\subsection{Density Balanced Mapping} \label{sec:mapping} As discussed above, each $X_{ij}$ in the solution of (\ref{eq:sdp2}) provides a guideline, i.e., whether $r_i$ and $r_j$ should be in the same color. With these guidelines a mapping procedure is adopted to finally assign all vertices into three colors (masks). In \cite{TPL_ICCAD2011_Yu}, a greedy approach was applied for this color assignment. The idea is straightforward: the $X_{ij}$s are sorted, and vertices $r_i$ and $r_j$ with a larger $X_{ij}$ value tend to be in the same color. If most of the $X_{ij}$s in matrix $X$ are clear (close to 1 or -0.5), this greedy method may achieve a good result. However, if the decomposition graph is not 3-colorable, some values in matrix $X$ are vague. For these vague $X_{ij}$s, e.g., $0.5$, the greedy method is not so effective. Different from the previous greedy approach, we propose a density balanced mapping, which is based on a three-way partitioning. The main ideas are as follows. All $X_{ij}$s are classified into two types: clear and vague. If an $X_{ij}$ is clear, the relationship between vertices $r_i$ and $r_j$ can be decided directly. For the vague $X_{ij}$s, instead of only relying on the SDP solution, we also take advantage of the information in the decomposition graph. This information is captured in a graph construction. Combined with a three-way partitioning on this graph, our mapping can provide a global view to search for solutions. In addition, density balance can be naturally integrated into the partitioning, and therefore our mapping can provide more balanced decomposed results. \begin{algorithm}[thb] \caption{Density Balanced Mapping} \label{alg:mapping} \begin{algorithmic}[1] \REQUIRE Solution matrix $X$ of the program (\ref{eq:sdp2}).
\STATE Label each non-zero entry $X_{i, j}$ as a triplet $(X_{ij}, i, j)$; \STATE Sort all $(X_{ij}, i, j)$ by $X_{ij}$; \FOR { all triples with $X_{ij} > th_{unn}$} \STATE Union(i, j); \ENDFOR \FOR { all triples with $X_{ij} < th_{sp}$} \STATE Separate(i, j); \ENDFOR \STATE Construct graph; \IF{graph size $\le$ 3} \STATE return; \ELSIF{graph size $\le 7$} \STATE Backtracking based three-way partitioning; \ELSE \STATE FM based three-way partitioning; \ENDIF \end{algorithmic} \end{algorithm} Algorithm \ref{alg:mapping} shows our density balanced mapping procedure. Given the solutions from program (\ref{eq:sdp2}), some triplets are constructed and sorted to maintain all non-zero $X_{ij}$ values (lines 1--2). Then the mapping incorporates two stages to deal with the two different types. The first stage (lines 3--8) is similar to that in \cite{TPL_ICCAD2011_Yu}. If $X_{ij}$ is clear then the relationship between vertices $r_i$ and $r_j$ can be directly determined. Here $th_{unn}$ and $th_{sp}$ are user-defined threshold values. For example, if $X_{ij} > th_{unn}$, which means that $r_i$ and $r_j$ should be in the same color, then function Union(i, j) is applied to merge them into a large vertex. Similarly, if $X_{ij} < th_{sp}$, then function Separate(i, j) is used to label $r_i$ and $r_j$ as incompatible. If $r_i$ and $r_j$ are incompatible, they cannot be merged and function Compatible(i, j) will return $false$. In the second stage (lines 9--16) we deal with the vague $X_{ij}$s. During the first stage some vertices have been merged; therefore, the total vertex number is not large. Here we construct a graph to represent the relationships among all the remaining vertices (line 9). Each edge $e_{ij}$ in this graph has a weight representing the cost if vertices $i$ and $j$ are assigned the same color. Therefore, the color assignment problem can be formulated as a maximum-cut partitioning problem on this graph (lines 10--16).
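The first stage (lines 3--8) can be sketched with a disjoint-set structure; the thresholds and the toy triplets below are illustrative, and Separate is modeled simply as a set of incompatible pairs.

```python
# Sketch of the first mapping stage: resolve clear X_ij entries by merging
# (Union) or marking incompatible (Separate); vague entries are left to the
# partitioning stage.
class DSU:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def first_stage(n, triplets, th_unn=0.9, th_sp=-0.4):
    """triplets: (X_ij, i, j).  Returns (merged groups, incompatible pairs)."""
    dsu, incompatible = DSU(n), set()
    for x_ij, i, j in sorted(triplets, reverse=True):
        if x_ij > th_unn:
            dsu.union(i, j)            # clearly the same color: merge
        elif x_ij < th_sp:
            incompatible.add((i, j))   # clearly different colors: separate
    groups = {}
    for v in range(n):
        groups.setdefault(dsu.find(v), []).append(v)
    return list(groups.values()), incompatible
```

For triplets $(0.95, 0, 1)$, $(0.5, 1, 2)$, $(-0.45, 2, 3)$, vertices 0 and 1 are merged into one vertex, pair $(2, 3)$ is separated, and the vague pair $(1, 2)$ is deferred to the partitioning stage.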
Note that the graph can further account for density balance by adding a weight to each vertex representing its density. An example of the graph construction and maximum-cut partitioning is shown in Fig. \ref{fig:mapping}. Based on the decomposition graph (see Fig. \ref{fig:mapping} (a)), the SDP is formulated. Given the solutions of the SDP, after the first stage, vertices $a_2$ and $d_2$ are merged into a larger vertex $A$, while the other four vertices remain unmerged. Then a new graph is constructed as in Fig. \ref{fig:mapping} (b). Each edge is associated with a value representing the cost of assigning its endpoints the same color. To account for density balance, each vertex carries its density as a weight. A partitioning is performed on this graph to simultaneously search for a maximum cut and balance the weights among the parts. There are two partitioning results with the same cut value (see Fig. \ref{fig:mapping} (c) and Fig. \ref{fig:mapping} (d)). Their density balance values are 24 and 23, respectively. To keep a more balanced density, the second partitioning is adopted. The corresponding color assignment result is shown in Fig. \ref{fig:mapping} (e). \begin{figure}[tb] \centering \subfigure[]{ \includegraphics[width=0.15\textwidth]{DAC_ex5} } \hspace{-.1in} \subfigure[]{ \includegraphics[width=0.15\textwidth]{GroupGraph1} } \hspace{-.1in} \subfigure[]{ \includegraphics[width=0.15\textwidth]{GroupGraph2} } \subfigure[]{ \includegraphics[width=0.15\textwidth]{GroupGraph3} } \subfigure[]{ \includegraphics[width=0.15\textwidth]{DAC_ex6} } \nocaptionrule \caption{~(a) Decomposition graph.~(b) Graph construction for three-way maximum-cut partitioning. ~(c) One partitioning result with cut value 8.1 and DB 24. ~(d) A better result with cut 8.1 and DB 23. ~(e) Final color assignment result.} \label{fig:mapping} \addtolength{\belowcaptionskip}{-5mm} \end{figure} It is well known that the maximum-cut problem, even for a 2-way partition, is NP-hard.
However, we observe that in many cases, after the global SDP optimization, the partitioning problem size is quite small, e.g., fewer than 7 vertices. For these small problems, we apply a backtracking based method to search the entire solution space. Although three-way partitioning is NP-hard, for problems of this size backtracking can quickly find the optimal solution. If the graph is larger, we propose a heuristic method motivated by the classic FM partitioning algorithm \cite{PAR_DAC82_FM}\cite{PAR_TC89_Sanchis}, with the following modifications. (1) In the first stage some vertex pairs are labeled as incompatible, so before moving a vertex from one partition to another we check whether the move is legal. (2) Different from the traditional min-cut partitioning, we modify the gain function of each move to achieve a maximum cut. (3) The density balance value is considered in the gain function; in other words, each move also tries to achieve a balanced density. The disjoint-set data structure is used to implement the functions Union() and Separate(); with a suitable implementation, the amortized running time of each operation is almost constant \cite{book90Algorithm}. Let $n$ be the number of vertices; the number of triplets is then $O(n^2)$, and sorting all the triplets requires $O(n^2 \log n)$.
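For the small graphs handled by backtracking, an exhaustive search suffices. The following minimal sketch (our illustration, not the authors' code) maximizes the cut only; the density balance value could be added to the score as a tiebreak:

```python
from itertools import product

def backtrack_maxcut3(n, wedges):
    """Exhaustive three-way max-cut for small graphs (n <= 7 or so).
    wedges: list of (i, j, w); w is the gain earned when i and j
    receive different colors."""
    best_cut, best = -1.0, None
    for colors in product(range(3), repeat=n):
        if colors[0] != 0:  # fix the first vertex's color to prune symmetric labelings
            continue
        cut = sum(w for i, j, w in wedges if colors[i] != colors[j])
        if cut > best_cut:
            best_cut, best = cut, colors
    return best_cut, best
```

With the color of one vertex fixed, the search visits $3^{n-1}$ labelings, which is negligible for $n \le 7$.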
The runtime complexity of the graph construction is $O(m)$, where $m$ is the number of remaining vertices, and the three-way maximum-cut partitioning runs in $O(m \log m)$. Since $m$ is much smaller than $n$, the overall complexity of density balanced mapping is $O(n^2 \log n)$. \vspace{-.1in} \subsection{Density Balanced Graph Simplification} So far we have discussed the color assignment for the decomposition graphs. It shall be noted that the problem size can be dramatically reduced through graph simplification \cite{TPL_ICCAD2011_Yu}\cite{TPL_DAC2012_Fang}. In this subsection, we discuss techniques that not only speed up the decomposition, but also maintain better density balance. \vspace{-.05in} \subsubsection{Density Balanced Iterative Vertex Recovery} Motivated by the work in \cite{TPL_ICCAD2011_Yu}, we iteratively remove, and push onto a stack, all vertices with degree less than or equal to two. After the color assignment on the remaining vertices, we iteratively recover all the removed vertices and assign them legal colors. Instead of randomly picking a legal color, we choose the legal color that best improves the density balance. \vspace{-.05in} \subsubsection{Density Balanced Decomposition Graph Merging} Using independent component computation and bridge computation \cite{TPL_ICCAD2011_Yu}\cite{TPL_DAC2012_Fang}, the whole design can be broken down into several components. After color assignment is performed independently on each component, all the results are merged together. Note that two components share at most one edge or one vertex, so we can rotate the colors in one component such that merging introduces no additional stitch or conflict. Besides, since no cycle arises when the decomposition graphs are merged, the merging can be done in linear time through a breadth-first search (BFS).
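The iterative removal and density-aware recovery described above can be sketched as follows. This is a minimal illustration assuming an adjacency-set representation and per-vertex densities, not the authors' exact code:

```python
def simplify(adj):
    """Iteratively remove (and push onto a stack) vertices of degree <= 2.
    adj: dict vertex -> set of adjacent vertices (a copy is modified)."""
    adj = {v: set(ns) for v, ns in adj.items()}
    stack = []
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if len(adj[v]) <= 2:
                stack.append((v, adj.pop(v)))
                for ns in adj.values():
                    ns.discard(v)
                changed = True
    return stack, adj

def recover(stack, colors, density):
    """Recover vertices in reverse removal order; among the colors not used
    by already-colored neighbors, pick the one with the lowest accumulated
    density (the density-balanced legal choice)."""
    d = [0.0, 0.0, 0.0]
    for v, c in colors.items():
        d[c] += density.get(v, 0.0)
    for v, nbrs in reversed(stack):
        legal = sorted({0, 1, 2} - {colors[u] for u in nbrs if u in colors})
        c = min(legal, key=lambda k: d[k])  # a legal color always exists: deg <= 2
        colors[v] = c
        d[c] += density.get(v, 0.0)
    return colors
```

Because each removed vertex had at most two colored neighbors at removal time, at least one of the three colors is always legal during recovery.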
\vspace{-.05in} \subsubsection{Color Assignment for Simple Decomposition Graph} Through the layout graph simplification \cite{TPL_ICCAD2011_Yu}, each vertex in the layout graph has degree at least 2. However, after introducing the stitch edges and the bridge computation, the corresponding decomposition graphs may differ. For each decomposition graph, we first iteratively remove the vertices with degree less than 3. If the whole decomposition graph can be removed, the color assignment is completed in linear time. Otherwise, we recover the vertices and apply the SDP based color assignment discussed above. Our preliminary results show that many decomposition graphs can be decomposed using this fast method, so the runtime can be further reduced. \section{Motivation and Preliminaries} \label{sec:motivation} \vspace{-.05in} \subsection{Motivation} \begin{figure}[bht] \centering \includegraphics[width=0.4\textwidth]{UnbalancedLayout} \nocaptionrule \caption{~Unbalanced decomposition may cause some disconnections.} \label{fig:aerial} \addtolength{\belowcaptionskip}{-3mm} \end{figure} In the previous section we discussed the need for a systematic methodology to consider density balance in layout decomposition. In this section, we use an example to further motivate this requirement. Even when two features within the minimum spacing are assigned to different masks, unbalanced density can cause lithography hotspots as well as lowered CD uniformity due to irregular pitch. The aerial image of an unbalanced decomposition can exhibit a patterning problem, as shown in Fig. \ref{fig:aerial}. Even though the decomposed layout in Fig. \ref{fig:aerial} does not represent a general case, a balanced decomposition provides a more lithography friendly layout because of its higher regularity \cite{DPL_ASPDAC2010_Yang}. Therefore, balanced density should be considered in the decomposition problem.
\subsection{Layout Graph and Decomposition Graph} \begin{figure}[bht] \centering \subfigure[]{ \includegraphics[width=0.12\textwidth]{DAC_DG1} } \subfigure[]{ \includegraphics[width=0.12\textwidth]{DAC_DG2} } \subfigure[]{ \includegraphics[width=0.12\textwidth]{DAC_DG3} } \nocaptionrule \caption{~(a) Input layout.~(b) Layout graph, where all edges are conflict edges.~(c) Decomposition graph after the stitch candidate generation, where dashed edges denote stitch edges.} \label{fig:graphs} \end{figure} Our layout decomposition is carried out on graph representations. As shown in Fig. \ref{fig:graphs}, given an input layout, we construct the layout graph and the decomposition graph. In the layout graph, each vertex represents a polygonal feature (shape), and there is an edge (conflict edge) between two vertices if and only if the two features are within the minimum coloring distance $min_s$. The decomposition graph maintains all the information about conflict edges and stitch candidates. For example, in Fig. \ref{fig:graphs} (c), the solid edges are conflict edges, while the dashed edges are stitch edges and serve as stitch candidates. \section{Density Balanced Decomposition} \label{sec:algo_db} Compared with traditional, time consuming integer linear programming (ILP), semidefinite programming (SDP) has been shown to be a better approach for TPL layout decomposition in terms of the runtime and solution quality tradeoff \cite{TPL_ICCAD2011_Yu}. However, how to integrate the density balance issue into the formulation has remained an open question. In this section, we show that through a novel transformation, density balance can be accurately integrated into the vector programming formulation and the corresponding SDP. Besides, several graph based methods considering density balance are discussed. With all these methods, density balance is taken into account at each step of the decomposition flow.
\subsection{Density Balanced Mathematical Formulation} The mathematical formulation for the density balanced TPL layout decomposition is shown in (\ref{eq:math}). The objective of (\ref{eq:math}) is to simultaneously minimize the conflict number, the stitch number and the density balance value $DB$. $\alpha$ and $\beta$ are user-defined parameters assigning the relative weights among the three terms. \begin{align} \footnotesize \label{eq:math} \textrm{min} & \sum_{e_{ij} \in CE} c_{ij} + \alpha \sum_{e_{ij} \in SE}s_{ij} + \beta \cdot DB & \\ \textrm{s.t}.\ \ & c_{ij} = ( x_i == x_j ) &\forall e_{ij} \in CE \label{1a}\tag{$1a$}\\ & s_{ij} = x_i \oplus x_j &\forall e_{ij} \in SE \label{1b}\tag{$1b$}\\ & x_i \in \{0, 1, 2\} &\forall r_i \in V \label{1c}\tag{$1c$}\\ & d_k = \sum_{x_i = k} len_i &\forall r_i \in V \label{1d}\tag{$1d$}\\ & DB = \textrm{max}\{d_k\} / \textrm{min}\{d_k\} \label{1e}\tag{$1e$} \end{align} Here $x_i$ is a variable representing the color (mask) of feature $r_i$, $c_{ij}$ is a binary variable for the conflict edge $e_{ij} \in CE$, and $s_{ij}$ is a binary variable for the stitch edge $e_{ij} \in SE$. The constraint (\ref{1a}) counts a conflict when the two vertices $r_i$ and $r_j$ of a conflict edge are assigned the same mask. The constraint (\ref{1b}) calculates the stitch number: if the two vertices of a stitch edge are assigned different masks, the stitch $s_{ij}$ is introduced. The constraint (\ref{1e}) is nonlinear, which makes the program (\ref{eq:math}) hard to formulate as an ILP as in \cite{TPL_ICCAD2011_Yu}. Similar nonlinear constraints occur in the floorplanning problem \cite{FLOOR_DAC00_Chen}, where Taylor expansion was used to linearize the constraint for the ILP. However, Taylor expansion introduces an accuracy penalty. In the following section we show that, instead of resorting to Taylor expansion, this nonlinear constraint can be integrated into a vector program without losing any accuracy.
\subsection{Density Balanced Vector Programming} We use three unit vectors to represent the three masks. Each vertex is associated with one of the three unit vectors $(1, 0), (-\frac{1}{2}, \frac{\sqrt{3}}{2})$ and $(-\frac{1}{2}, -\frac{\sqrt{3}}{2})$. Note that the angle between any two vectors of the same mask is $0$, while the angle between vectors of different masks is $2\pi/3$. Accordingly, the inner product of any two such unit vectors $\vec{v_i}, \vec{v_j}$ satisfies the following property: \begin{equation} \vec{v_i} \cdot \vec{v_j} = \left\{ \begin{array}{cc} 1, & \vec{v_i} = \vec{v_j}\\ -\frac{1}{2}, & \vec{v_i} \ne \vec{v_j} \end{array} \right. \label{eq:vivj} \end{equation} \cite{TPL_ICCAD2011_Yu} shows that without the nonlinear constraint (\ref{1e}), the program (\ref{eq:math}) can be formulated as a vector program whose objective function is expressed through the vector inner products $\vec{v_i} \cdot \vec{v_j}$. At first glance, the constraint (\ref{1e}) cannot be formulated in an inner product format. However, we show that the density balance value $DB$ has a special relationship with the quantity $DB_2$ defined as follows: \begin{displaymath} DB_2 = d_1 \cdot d_2 + d_1 \cdot d_3 + d_2 \cdot d_3 \end{displaymath} \begin{lemma} Maximizing $DB_2$ is equivalent to minimizing $DB$. \label{lem:1} \end{lemma} The proof of Lemma \ref{lem:1} is provided in Appendix \ref{chap:app_proof}. Since maximizing $DB_2$ is equivalent to minimizing $DB$, if $DB_2$ can be expressed through vector inner products, then the constraint (\ref{1e}) can be integrated into the vector program. \begin{lemma} $DB_2 = 2/3 \cdot \sum_{i, j \in V} len_i \cdot len_j \cdot (1 - \vec{v_i} \cdot \vec{v_j})$ \label{lem:2} \end{lemma} Here $len_i$ is the density of feature $r_i$. The proof of Lemma \ref{lem:2} is given in Appendix \ref{chap:app_proof}. Combining Lemma \ref{lem:1} and Lemma \ref{lem:2}, we have proved the following theorem.
\begin{theorem} Maximizing $DB_2$, which is expressible through vector inner products, is equivalent to minimizing the density balance value $DB$. \end{theorem} Note that the term $\sum_{i,j \in V} len_i \cdot len_j$ in $DB_2$ is a constant, so we can replace $DB_2$ with $DB_2^*$ by removing the constant part. Similarly, we can eliminate the constants in the calculation of the conflict and stitch numbers. The simplified vector program is as follows: \begin{align} \small \label{eq:vp} \textrm{min} & \sum_{e_{ij} \in CE} ( \vec{v_i} \cdot \vec{v_j} ) - \alpha \sum_{e_{ij} \in SE} ( \vec{v_i} \cdot \vec{v_j} ) - \beta \cdot DB_2^*\\ \textrm{s.t}.\ \ & DB_2^* = - \sum_{i, j \in V} len_i \cdot len_j \cdot ( \vec{v_i} \cdot \vec{v_j}) \label{3a}\tag{$3a$}\\ & \vec{v_i} \in \{(1, 0), (-\frac{1}{2}, \frac{\sqrt{3}}{2}), (-\frac{1}{2}, -\frac{\sqrt{3}}{2})\} \label{3b}\tag{$3b$} \end{align} \subsection{Density Balanced SDP Approximation} Formulation (\ref{eq:vp}) is equivalent to the mathematical formulation (\ref{eq:math}), and it is still NP-hard to solve exactly. Constraint (\ref{3b}) requires the solutions to be discrete. To achieve a good tradeoff between runtime and accuracy, we relax constraint (\ref{3b}) to obtain formulation (\ref{eq:sdp}) as follows: \begin{align} \vspace{-.1in} \label{eq:sdp} \textrm{min} & \sum_{e_{ij} \in CE} ( \vec{y_i} \cdot \vec{y_j}) - \alpha \sum_{e_{ij} \in SE} ( \vec{y_i} \cdot \vec{y_j} ) - \beta \cdot DB_2^*\\ \textrm{s.t}.\ \ & DB_2^* = - \sum_{i, j \in V} len_i \cdot len_j \cdot (\vec{y_i} \cdot \vec{y_j}) \label{sdpa}\tag{$4a$}\\ & \vec{y_i} \cdot \vec{y_i} = 1 , \ \ \ \forall i \in V \label{sdpb}\tag{$4b$}\\ & \vec{y_i} \cdot \vec{y_j} \ge -\frac{1}{2}, \ \ \ \forall e_{ij} \in CE \label{sdpc}\tag{$4c$} \end{align} Here each unit vector $\vec{v_i}$ is replaced by an $n$-dimensional vector $\vec{y_i}$. The solution of (\ref{eq:sdp}) is continuous instead of discrete, and provides a lower bound of the vector program (\ref{eq:vp}).
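Before moving on, Lemma \ref{lem:2} can be verified numerically; in this sketch we read the sum as ranging over unordered pairs $i < j$, under which the identity holds exactly:

```python
import itertools
import math

# The three unit vectors representing the three masks
VECS = [(1.0, 0.0), (-0.5, math.sqrt(3) / 2), (-0.5, -math.sqrt(3) / 2)]

def db2_direct(colors, lens):
    """DB_2 = d1*d2 + d1*d3 + d2*d3 computed from the per-color densities."""
    d = [0.0, 0.0, 0.0]
    for c, l in zip(colors, lens):
        d[c] += l
    return d[0] * d[1] + d[0] * d[2] + d[1] * d[2]

def db2_inner(colors, lens):
    """Lemma 2's inner-product form, summing over unordered pairs i < j."""
    s = 0.0
    for i, j in itertools.combinations(range(len(colors)), 2):
        vi, vj = VECS[colors[i]], VECS[colors[j]]
        dot = vi[0] * vj[0] + vi[1] * vj[1]
        s += lens[i] * lens[j] * (1.0 - dot)
    return 2.0 / 3.0 * s
```

The agreement follows because same-color pairs contribute $1 - 1 = 0$ and different-color pairs contribute $1 - (-\frac{1}{2}) = \frac{3}{2}$, which the factor $2/3$ rescales to $len_i \cdot len_j$.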
In other words, (\ref{eq:sdp}) provides an approximate solution to (\ref{eq:vp}). To solve program (\ref{eq:sdp}) in polynomial time, we show that it can be represented as the following standard SDP: \begin{align} \label{eq:sdp2} \textrm{SDP:\ \ min} & \ \ \ A \bullet X \\ & X_{ii} = 1, \ \ \forall i \in V \tag{$5a$}\\ & X_{ij} \ge -\frac{1}{2}, \ \ \forall e_{ij} \in CE \tag{$5b$}\\ & X \succeq 0 \label{sdp2c}\tag{$5c$} \end{align} where $A \bullet X$ is the inner product of the matrices $A$ and $X$, i.e., $\sum_i \sum_j A_{ij}X_{ij}$, and $A_{ij}$ is the entry in the $i$-th row and $j$-th column of matrix $A$. \begin{displaymath} A_{ij} = \left\{ \begin{array}{cc} 1 + \beta \cdot len_i \cdot len_j, & \forall e_{ij} \in CE\\ -\alpha + \beta \cdot len_i \cdot len_j, & \forall e_{ij} \in SE\\ \beta \cdot len_i \cdot len_j, & \textrm{otherwise} \end{array} \right. \end{displaymath} The constraint (\ref{sdp2c}) means that the matrix $X$ must be positive semidefinite. SDP (\ref{eq:sdp2}) can be solved in polynomial time, and the results are stored in a matrix $X$. Each $X_{ij}$ in the matrix $X$ represents $\vec{y_i} \cdot \vec{y_j}$, an approximation of $\vec{v_i} \cdot \vec{v_j}$. Because of the property (\ref{eq:vivj}), ideally we hope each $X_{ij}$ is distinguishable (close to either $1$ or $-0.5$). However, SDP (\ref{eq:sdp2}) only provides an approximate solution to the vector program. For each $X_{ij}$, if it is close to $1$, vertices $r_i$ and $r_j$ tend to be in the same color (mask); if it is close to $-0.5$, $r_i$ and $r_j$ tend to be in different colors (masks). Our preliminary results show that with reasonable thresholds, e.g., $0.9 < X_{ij} \le 1$ for the same color and $-0.5 \le X_{ij} < -0.4$ for different colors, more than 80\% of vertices can be decided by the global SDP optimization.
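For concreteness, the cost matrix $A$ can be assembled directly from the piecewise definition above. This sketch (our illustration) makes $A$ symmetric and uses the paper's $\alpha = 0.1$ as the default; $\beta$ remains a free weight:

```python
def build_A(n, conflict_edges, stitch_edges, lens, alpha=0.1, beta=1.0):
    """Assemble the symmetric SDP cost matrix A: a base term
    beta*len_i*len_j everywhere, plus 1 on conflict edges and
    minus alpha on stitch edges."""
    A = [[beta * lens[i] * lens[j] for j in range(n)] for i in range(n)]
    for i, j in conflict_edges:
        A[i][j] += 1.0
        A[j][i] += 1.0
    for i, j in stitch_edges:
        A[i][j] -= alpha
        A[j][i] -= alpha
    return A
```

The matrix is then handed to an SDP solver (the paper uses CSDP) together with the constraints $X_{ii} = 1$, $X_{ij} \ge -\frac{1}{2}$ on conflict edges, and $X \succeq 0$.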
\section{Stitch Candidate Generation} \label{sec:stitch} Stitch candidate generation is one of the most important steps in parsing the layout. It not only determines the vertex number in the decomposition graph, but also affects the decomposed result. We use \textit{DPL candidates} to denote the stitch candidates generated by previous DPL research. \cite{TPL_ICCAD2011_Yu} directly applies these DPL candidates in TPL layout decomposition. In this section, we show that since DPL candidates may contain redundant stitches and miss some useful ones, they cannot be directly used in TPL layout decomposition. Furthermore, we provide a procedure to generate appropriate stitch candidates for TPL. \subsection{Redundant Stitch and Lost Stitch} \begin{figure}[htb] \vspace{-.1in} \centering \subfigure[]{ \includegraphics[width=0.18\textwidth]{RedundantStitch1} } \subfigure[]{ \includegraphics[width=0.2\textwidth]{LostStitch1} } \nocaptionrule \caption{~(a) Redundant stitch.~(b) This stitch cannot be detected in the DPL stitch candidate generation.} \label{fig:DPL_Stitch} \addtolength{\belowcaptionskip}{-3mm} \end{figure} We provide several examples to demonstrate why DPL candidates are not appropriate for TPL. First, because of the extra color choice, some DPL candidates may be redundant. As shown in Fig. \ref{fig:DPL_Stitch} (a), the stitch can be removed because no matter how the features $b$ and $c$ are colored, feature $a$ can always be assigned a legal color. We call this kind of stitch a \textit{redundant stitch}. After removing these redundant stitches, some extra vertices in the decomposition graph can be merged, which reduces the problem size. Besides, some useful stitch candidates cannot be detected by previous DPL stitch candidate generation. In DPL, a stitch candidate has one precondition: it cannot intersect any projection. For example, as shown in Fig.
\ref{fig:DPL_Stitch} (b), because this stitch intersects the projection of feature $b$, it does not belong to the DPL candidates. However, if features $b, c$ and $d$ are assigned three different colors, only introducing this stitch can resolve the conflict. In other words, the precondition in DPL limits the ability of stitches to resolve triple patterning conflicts and may result in unnoticed conflicts. We call a useful stitch forbidden by the DPL precondition a \textit{lost stitch}. \subsection{Stitch Candidate Generation} Given the projection results, we carry out stitch candidate generation. As discussed above, compared with the DPL stitches, in TPL the redundant stitches need to be removed and some lost stitches should be added. For a better explanation, let us define the projection sequence. \begin{define}[Projection Sequence] After the projection, the feature is divided into several segments, each of which is labeled with a number representing how many other features are projected onto it. The sequence of numbers on these segments is the projection sequence. \end{define} \begin{figure}[htb] \vspace{-.1in} \centering \includegraphics[width=0.35\textwidth]{ProjectNum} \nocaptionrule \caption{~The projection sequence of the feature is $01212101010$, and the last $0$ is a default terminal zero.} \label{fig:projectNum} \addtolength{\belowcaptionskip}{-3mm} \end{figure} Instead of analyzing each feature together with all of its neighboring features, we can directly carry out stitch candidate generation based on the projection sequence. For convenience, we adopt a terminal zero rule, i.e., the projection sequence must begin and end with $0$. To maintain this rule, sometimes a default $0$ needs to be appended to the sequence. For example, in Fig. \ref{fig:projectNum}, the middle feature has five conflict features $b, c, d, e$ and $f$. Based on the projection results, the feature is divided into ten segments.
Through labeling each segment, we get its projection sequence: $01212101010$. Here a default $0$ is added at the end of the feature. Based on the definition of the projection sequence, we summarize the rules for redundant stitches and lost stitches. First, motivated by the case in Fig. \ref{fig:DPL_Stitch} (a), the redundant stitch rule is as follows: if the projection sequence begins with ``$01010$'', then the first stitch candidate in DPL is redundant. Since the projection of a feature can be symmetric, if the projection sequence ends with ``$01010$'', then the last stitch candidate is also redundant. The rule for lost stitches is as follows: if a projection sequence contains a sub-sequence ``$xyz$'', where $x, y, z > 0$ and $x>y, z>y$, then there is one lost stitch at the segment labeled $y$. For example, the stitch candidate in Fig. \ref{fig:DPL_Stitch} (b) lies in the sub-sequence ``$212$'', so it is a lost stitch. The details of stitch candidate generation are given in Algorithm \ref{alg:stitch}. If necessary, each multiple-pin feature is first decomposed into several two-pin features. Then for each feature, we calculate its projection sequence. We remove the redundant stitches by checking whether the projection sequence begins or ends with ``$01010$''. Next we search for and insert the lost stitches. Here we define a \textit{sequence bunch}: a sub-sequence of a projection sequence that contains at least three non-zero segments. For simplicity, when identifying lost stitch candidates, our algorithm introduces at most one stitch between two $0$s for each sequence bunch.
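The two rules can be sketched as follows. This simplified version (our illustration) assumes the terminal zero rule has already been applied and detects lost stitches on consecutive positive ``valley'' triples $x > y < z$, rather than implementing the full sequence bunch handling:

```python
def analyze_projection_sequence(ps):
    """Apply the redundant and lost stitch rules to a projection sequence
    ps (list of non-negative ints, terminal zero rule already applied).
    Returns (redundant_head, redundant_tail, lost_positions)."""
    redundant_head = ps[:5] == [0, 1, 0, 1, 0]   # first DPL candidate redundant
    redundant_tail = ps[-5:] == [0, 1, 0, 1, 0]  # last DPL candidate redundant
    lost = []  # positions of valleys x > y < z with x, y, z > 0
    for k in range(1, len(ps) - 1):
        x, y, z = ps[k - 1], ps[k], ps[k + 1]
        if x > y > 0 and z > y:
            lost.append(k)
    return redundant_head, redundant_tail, lost
```

On the sequence $01212101010$ of Fig. \ref{fig:projectNum}, this reports the lost stitch inside ``$212$'' and the redundant stitch at the tail.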
\begin{figure}[htb] \centering \includegraphics[width=0.45\textwidth]{StitchGeneration} \nocaptionrule \caption{~Stitch candidates generated for DPL and TPL.} \label{fig:stitch} \addtolength{\belowcaptionskip}{-3mm} \end{figure} \begin{algorithm}[htb] \caption{Stitch Candidate Generation} \label{alg:stitch} \begin{algorithmic}[1] \REQUIRE Projection results on features. \STATE Decompose multiple-pin features; \FOR { each feature $w_i$} \STATE Calculate the projection sequence $ps_i$; \IF{$ps_i$ begins or ends with ``$01010$"} \STATE Remove redundant stitch(es); \ENDIF \FOR { each sequence bunch of $ps_i$ } \STATE Search and insert at most one lost stitch candidate; \ENDFOR \ENDFOR \end{algorithmic} \end{algorithm} An example of stitch candidate generation is shown in Fig. \ref{fig:stitch}. The DPL methodology generates two stitch candidates (stitch 2 and stitch 3). Through our rules for TPL stitch candidates, stitch 3 is labeled as a redundant stitch, while stitch 1 is identified as a lost stitch candidate because it lies in a sub-sequence ``$212$''. Therefore, stitch 1 and stitch 2 are chosen as the stitch candidates for TPL. \section{Overall Decomposition Flow} \label{sec:overview} \begin{figure}[htb] \vspace{-.05in} \centering \includegraphics[width=0.5\textwidth]{DAC13_overview} \vspace{-.3in} \caption{~Overall flow of proposed density balanced decomposer.} \label{fig:overview} \end{figure} The overall flow of our TPL decomposer is illustrated in Fig. \ref{fig:overview}. It consists of two stages: graph construction / simplification, and color assignment. Given the input layout, layout graphs and decomposition graphs are constructed; then graph simplifications \cite{TPL_ICCAD2011_Yu}\cite{TPL_DAC2012_Fang} are applied to reduce the problem size. Two additional graph simplification techniques are introduced in Sec. \ref{sec:nostitch} and \ref{sec:cluster}.
During stitch candidate generation, the methods described in \cite{TPL_DAC2013_Kuang} are applied to find all stitch candidates for TPL. In the second stage, color assignment is carried out on each decomposition graph to assign each vertex one color. Before invoking the SDP formulation, a fast color assignment trial is performed for better speedup (see Section \ref{sec:fastassign}). \begin{figure}[tb] \vspace{-.1in} \centering \subfigure[] {\includegraphics[width=0.15\textwidth]{DAC13_input}} \subfigure[] {\includegraphics[width=0.16\textwidth]{DAC13_sections}} \subfigure[] {\includegraphics[width=0.16\textwidth]{DAC13_LG1}} \hspace{.1em} \subfigure[] {\includegraphics[width=0.16\textwidth]{DAC13_LG2}} \subfigure[] {\includegraphics[width=0.15\textwidth]{DAC13_scandidates}} \subfigure[] {\includegraphics[width=0.16\textwidth]{DAC13_DG1}} \hspace{.1em} \subfigure[] {\includegraphics[width=0.16\textwidth]{DAC13_DG2}} \subfigure[] {\includegraphics[width=0.15\textwidth]{DAC13_DG3}} \subfigure[] {\includegraphics[width=0.16\textwidth]{DAC13_output}} \caption{~An example of the layout decomposition flow.} \label{fig:example} \vspace{-.4in} \end{figure} Fig. \ref{fig:example} illustrates the decomposition process step by step. Given the input layout in Fig. \ref{fig:example}(a), we partition it into a set of bins $\{b_1, b_2, b_3, b_4\}$ (see Fig. \ref{fig:example}(b)). Then the layout graph is constructed (see Fig.
\ref{fig:example}(c)), where the ten vertices represent the ten features in the input layout, and there is an edge (conflict edge) between two vertices if and only if the two features are within the minimum coloring distance $min_s$. During the layout graph simplification, vertices with degree two or smaller are iteratively removed from the graph. The simplified layout graph, shown in Fig. \ref{fig:example}(d), only contains vertices $a, b, c$ and $d$; Fig. \ref{fig:example}(d) also shows the projection results. After stitch candidate generation \cite{TPL_DAC2013_Kuang}, there are two stitch candidates for TPL (see Fig. \ref{fig:example}(e)). Based on the two stitch candidates, vertices $a$ and $d$ are each divided into two vertices. The constructed decomposition graph is given in Fig. \ref{fig:example}(f). It maintains all the information about conflict edges and stitch candidates: the solid edges are conflict edges, while the dashed edges are stitch edges serving as stitch candidates. On each decomposition graph, a color assignment, which consists of the semidefinite programming (SDP) formulation and the partition based mapping, is carried out. During color assignment, the six vertices in the decomposition graph are assigned to three groups: $\{a_1, c\}, \{b\}$ and $\{a_2, d_1, d_2\}$ (see Fig. \ref{fig:example}(g) and Fig. \ref{fig:example}(h)). Here one stitch on feature $a$ is introduced. After iteratively recovering the removed vertices, the final decomposed layout is shown in Fig. \ref{fig:example}(i). The last step would be decomposition graph merging, which combines the results of all decomposition graphs; since this example has only one decomposition graph, that step is skipped.
\section{Problem Formulation} \label{sec:problem} Given an input layout specified by features in polygonal shapes, we partition the layout into $n$ bins $B = \{b_1, \dots, b_n\}$. Note that neighboring bins may overlap. For each polygonal feature $r_i$, we denote its area by $den_i$, and its area covered by bin $b_k$ by $den_{ki}$. Clearly $den_i \ge den_{ki}$ for any bin $b_k$. During layout decomposition, all polygonal features are divided into three masks. For each bin $b_k$, we define three densities ($d_{k1}, d_{k2}, d_{k3}$), where $d_{kc} = \sum den_{ki}$ over all features $r_i$ assigned to color $c$. We then define the local density uniformity as follows: \begin{mydefinition}[Local Density Uniformity] For a bin $b_k \in B$ with the three densities $d_{k1}, d_{k2}$ and $d_{k3}$ of the three masks, the local density uniformity is $\textrm{max} \{d_{kc}\} / \textrm{min} \{d_{kc}\}$; it measures the ratio difference of the densities, and a lower value means better local density balance. The local density uniformity is denoted by $DU_k$. \end{mydefinition} For convenience, we use the term density uniformity to refer to the local density uniformity in the rest of this paper. It is easy to see that $DU_k$ is always larger than or equal to 1. To keep a more balanced density in bin $b_k$, we want $DU_k$ to be as small as possible, i.e., close to 1. \begin{problem}[Density Balanced Layout Decomposition] Given a layout specified by features in polygonal shapes, the layout graphs and the decomposition graphs are constructed. Our goal is to assign all vertices in the decomposition graph to three colors (masks) to minimize the stitch number and the conflict number, while keeping all density uniformities $DU_k$ as small as possible. \end{problem} \section{Experimental Results} \label{sec:result} We implement our decomposer in C++ and test it on an Intel Xeon 3.0GHz Linux machine with 32GB RAM.
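For reference, the density uniformity $DU_k$ defined above can be computed per bin with a direct transcription of the definition (not part of the decomposer itself):

```python
def density_uniformity(assign, den_k):
    """DU_k = max(d_kc) / min(d_kc) for one bin.
    assign[i] in {0, 1, 2} is the mask of feature i;
    den_k[i] is the area of feature i covered by the bin."""
    d = [0.0, 0.0, 0.0]
    for c, a in zip(assign, den_k):
        d[c] += a
    # A mask with zero density in the bin makes the ratio unbounded
    return max(d) / min(d) if min(d) > 0 else float('inf')
```

A perfectly balanced bin yields $DU_k = 1$, matching the lower bound noted above.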
ISCAS 85\&89 benchmarks from \cite{TPL_ICCAD2011_Yu} are used, where the minimum coloring spacing $dis_m$ is set the same as in previous studies \cite{TPL_ICCAD2011_Yu}\cite{TPL_DAC2012_Fang}. Besides, to perform a comprehensive comparison, we also test on two other benchmark suites. The first suite contains six dense benchmarks (``c9\_total''--``s5\_total''), while the second suite contains two OpenSPARC T1 designs, ``mul\_top'' and ``exu\_ecc'', synthesized with the Nangate 45nm standard cell library \cite{nangate}. For these two benchmark suites we set the minimum coloring distance $dis_m = 2 \cdot w_{min}+3 \cdot s_{min}$, where $w_{min}$ and $s_{min}$ denote the minimum wire width and the minimum spacing, respectively. The parameter $\alpha$ is set to $0.1$. The size of each bin is set to $10 \cdot dis_m \times 10 \cdot dis_m$. We use CSDP \cite{CSDP} as the solver for the semidefinite programming (SDP). \vspace{-.05in} \subsection{Comparison with Other Decomposers} \footnotetext{The results of the DAC'13 decomposer are from \cite{TPL_DAC2013_Kuang}.} In the first experiment, we compare our decomposer with the state-of-the-art layout decomposers, which are not density-balance aware \cite{TPL_ICCAD2011_Yu}\cite{TPL_DAC2012_Fang}\cite{TPL_DAC2013_Kuang}. We obtained the binaries from \cite{TPL_ICCAD2011_Yu} and \cite{TPL_DAC2012_Fang}. Since we could not obtain the binary of the decomposer in \cite{TPL_DAC2013_Kuang}, we directly use the results listed in \cite{TPL_DAC2013_Kuang}. Our decomposer is denoted ``\textbf{SDP+PM}'', where ``PM'' stands for the partition based mapping. Here $\beta$ is set to 0; in other words, SDP+PM only optimizes the stitch and conflict numbers. Table \ref{tab:result1} shows the comparison in terms of runtime and performance. For each decomposer we list its stitch number, conflict number, cost and runtime. The columns ``cn\#" and ``st\#" denote the conflict number and the stitch number, respectively.
``cost'' is the cost function, which is set as cn\# $+ 0.1 \times$ st\#. ``CPU(s)'' is the computational time in seconds. First, we compare SDP+PM with the decomposer in \cite{TPL_ICCAD2011_Yu}, which is based on an SDP formulation as well. From Table \ref{tab:result1} we can see that the new stitch candidate generation (see \cite{TPL_DAC2013_Kuang} for more details) and the partition-based mapping can achieve better performance (reducing the cost by around 55\%). Besides, SDP+PM can get a nearly $4\times$ speed-up. The reason is that, compared with \cite{TPL_ICCAD2011_Yu}, we propose a set of speedup techniques, i.e., 2-vertex-connected component computation, layout graph cut vertex stitch forbiddance (Sec. \ref{sec:nostitch}), decomposition graph vertex clustering (Sec. \ref{sec:cluster}), and fast color assignment trial (Sec. \ref{sec:fastassign}). Second, we compare SDP+PM with the decomposer in \cite{TPL_DAC2012_Fang}, which applies several graph based simplifications and a maximum independent set (MIS) based heuristic. From Table \ref{tab:result1} we can see that although the decomposer in \cite{TPL_DAC2012_Fang} is faster, the MIS based heuristic has worse solution quality (around 33\% cost penalty compared to SDP+PM). Compared with the decomposer in \cite{TPL_DAC2013_Kuang}, although SDP+PM is slower, it can reduce the cost by around 6\%. In addition, we compare SDP+PM with the other two decomposers \cite{TPL_ICCAD2011_Yu}\cite{TPL_DAC2012_Fang} on some very dense layouts, as shown in Table \ref{tab:result2}. We can see that for some cases the decomposer in \cite{TPL_ICCAD2011_Yu} cannot finish in 1000 seconds. Compared with the work in \cite{TPL_DAC2012_Fang}, SDP+PM can reduce the cost by 65\%. It is observed that compared with other decomposers, SDP+PM demonstrates much better performance when the input layout is dense.
The reason may be that when the input layout is dense, each independent problem may still be quite large even after graph simplification, so the SDP based approximation can achieve better results than the heuristics. It can be observed that for the last three cases our decomposer could reduce thousands of conflicts. Each conflict may require manual layout modification or high ECO efforts, which are very time consuming. Therefore, even though our runtime is longer than that of \cite{TPL_DAC2012_Fang}, it is still acceptable (less than 6 minutes for the largest benchmark). \vspace{-.05in} \subsection{Comparison for Density Balance} \begin{table}[bt] \centering \renewcommand{\arraystretch}{1.0} \caption{Balanced density impact on EPE} \label{tab:result2} \begin{tabular}{|c|c|c|c||c|c|c|} \hline \hline \multirow{2}{*}{Circuit} & \multicolumn{3}{c||}{SDP+PM} & \multicolumn{3}{c|}{SDP+PM+DB} \\ \cline{2-7} &cost & CPU(s) & EPE\# &cost & CPU(s) & EPE\# \\ \hline C432 &0.4 &0.2 &0 &0.4 &0.2 &0 \\ C499 &0 &0.2 &0 &0 &0.2 &0 \\ C880 &0.7 &0.3 &10 &0.7 &0.3 &7 \\ C1355 &0.3 &0.3 &18 &0.3 &0.3 &15 \\ C1908 &0.1 &0.3 &130 &0.1 &0.3 &58 \\ C2670 &0.6 &0.4 &168 &0.6 &0.4 &105 \\ C3540 &1.8 &0.5 &164 &1.8 &0.5 &79 \\ C5315 &0.9 &0.7 &225 &1.0 &0.7 &115 \\ C6288 &22.3 &2.7 &31 &32.0 &2.8 &15 \\ C7552 &2.2 &1.1 &273 &2.5 &1.1 &184 \\ S1488 &0.2 &0.3 &72 &0.2 &0.3 &44 \\ S38417 &24.5 &7.9 &420 &24.5 &8.5 &412 \\ S35932 &48.8 &21.4 &1342 &49.8 &24 &1247 \\ S38584 &48.8 &22.2 &1332 &49.1 &23.7 &1290 \\ S15850 &44.1 &20 &1149 &47.3 &21.3 &1030 \\ \hline avg. &13.0 &5.23 &355.6 &14.0 &5.64 &306.7\\ ratio &1.0 &1.0 &\textbf{1.0} &1.07 &1.08 &\textbf{0.86}\\ \hline \hline \end{tabular} \vspace{-.1in} \end{table} In the second experiment, we test our decomposer for density balancing. We analyze the edge placement error (EPE) using Calibre-Workbench \cite{Calibre} and an industry-strength setup.
For analyzing the EPE in our test cases, we use systematic lithography process variations, such as focus $\pm 50$nm and dose $\pm 5\%$. In Table \ref{tab:result2}, we compare SDP+PM with ``\textbf{SDP+PM+DB}'', which is our density balanced decomposer. Here $\beta$ is set as 0.04 (we have tested different $\beta$ values and found that a larger $\beta$ does not help much more; meanwhile, we still want to give conflicts and stitches higher weights). Column ``cost'' also lists the weighted cost of conflicts and stitches, i.e., cost $=$ cn\#$+ 0.1 \times$st\#. From Table \ref{tab:result2} we can see that by integrating density balance into our decomposition flow, our decomposer (SDP+PM+DB) can reduce the EPE hotspot number by 14\%. Besides, the density balanced SDP based algorithm can maintain performance similar to the baseline SDP implementation: only 7\% more conflict and stitch cost, and only 8\% more runtime. In other words, our decomposer can achieve a good density balance while keeping comparable conflicts/stitches. \begin{table}[tb] \centering \renewcommand{\arraystretch}{1.0} \caption{Additional Comparison for Density Balance} \label{tab:denseEPE} \begin{tabular}{|c|c|c|c||c|c|c|} \hline \hline \multirow{2}{*}{Circuit} & \multicolumn{3}{c||}{SDP+PM} & \multicolumn{3}{c|}{SDP+PM+DB} \\ \cline{2-7} &cost & CPU(s) & EPE\# &cost & CPU(s) & EPE\# \\ \hline mul\_top &145.1 &57.6 &632 &147.5 &63.8 &630 \\ exu\_ecc &28.4 &4.3 &140 &33.9 &4.8 &138 \\ c9\_total &217.9 &7.7 &60 &218.6 &8.3 &60 \\ c10\_total &435.6 &19 &77 &431.3 &19.6 &76 \\ s2\_total &1225.6 &70.7 &482 &1179.3 &75 &433 \\ s3\_total &2015.2 &254.5 &1563 &1937.5 &274.5 &1421 \\ s4\_total &2260.1 &306 &1476 &2176.3 &310 &1373 \\ s5\_total &2759.3 &350.4 &1270 &2673.9 &352 &1171 \\ \hline avg.
&1135.9 &134 &712.5 &1099.8 &138.5 & 662.8 \\ ratio &1.0 &1.0 &\textbf{1.0} &0.97 &1.04 & \textbf{0.93}\\ \hline \hline \end{tabular} \vspace{-.1in} \end{table} We further compare the density balance, especially the EPE distributions, for very dense layouts. As shown in Table \ref{tab:denseEPE}, our density balanced decomposer (SDP+PM+DB) can reduce the EPE hotspot number by 7\%. Besides, for very dense layouts, the density balanced SDP approximation can maintain performance similar to the plain SDP implementation: only 4\% more runtime. \vspace{-.05in} \subsection{Scalability of SDP Formulation} \begin{figure}[tbh] \vspace{-.1in} \centering \includegraphics[width=0.44\textwidth]{scalabilitySDP} \caption{~Scalability of SDP Formulation.} \label{fig:scalability} \vspace{-.1in} \end{figure} In addition, we demonstrate the scalability of our decomposer, especially the SDP formulation. Penrose benchmarks from \cite{TPL_SPIE08_Cork} are used to explore the scalability of the SDP runtime. No graph simplification is applied; therefore all runtime is consumed by solving the SDP formulation. Fig. \ref{fig:scalability} illustrates the relationship between the graph (problem) size and the SDP runtime. Here the X axis denotes the number of nodes (i.e., the problem size), and the Y axis shows the runtime. We can see that the runtime complexity of SDP is less than O($n^{2.2}$). \section{Speedup Techniques} \label{sec:speedup} Our layout decomposer applies a set of graph simplification techniques proposed by recent works: \begin{itemize} \item Independent Component Computation \cite{TPL_ICCAD2011_Yu}\cite{TPL_DAC2012_Fang}\cite{TPL_DAC2013_Kuang}; \item Vertex with Degree Less than 3 Removal \cite{TPL_ICCAD2011_Yu}\cite{TPL_DAC2012_Fang}\cite{TPL_DAC2013_Kuang}; \item 2-Edge-Connected Component Computation \cite{TPL_ICCAD2011_Yu}\cite{TPL_DAC2012_Fang}\cite{TPL_DAC2013_Kuang}; \item 2-Vertex-Connected Component Computation \cite{TPL_DAC2012_Fang}\cite{TPL_DAC2013_Kuang}.
\end{itemize} Apart from the above graph simplifications, our decomposer proposes a set of novel speedup techniques, which are introduced in this section. \vspace{-.1in} \subsection{LG Cut Vertex Stitch Forbiddance} \label{sec:nostitch} \begin{figure}[hbt] \centering \vspace{-.1in} \subfigure[]{\includegraphics[width=0.16\textwidth]{DAC13_nostitch1}} \hspace{.1in} \subfigure[]{\includegraphics[width=0.16\textwidth]{DAC13_nostitch2}} \subfigure[]{\includegraphics[width=0.16\textwidth]{DAC13_nostitch3}} \hspace{.1in} \subfigure[]{\includegraphics[width=0.15\textwidth]{DAC13_nostitch4}} \vspace{-.1in} \caption{Layout graph cut vertex stitch forbiddance.} \label{fig:nostitch} \vspace{-.1in} \end{figure} A vertex of a graph is called a cut vertex if its removal decomposes the graph into two or more connected components. Cut vertices can be identified through the process of bridge computation \cite{TPL_ICCAD2011_Yu}. During stitch candidate generation, forbidding any stitch candidate on cut vertices can be helpful for later decomposition graph simplification. Fig. \ref{fig:nostitch} (a) shows a layout graph, where feature $a$ is a cut vertex, since its removal partitions the layout graph into two parts: \{b, c, d\} and \{e, f, g\}. If stitch candidates are introduced within $a$, the corresponding decomposition graph is illustrated in Fig. \ref{fig:nostitch} (b), which is hard to simplify further. If we forbid the stitch candidate on $a$, the corresponding decomposition graph is shown in Fig. \ref{fig:nostitch} (c), where $a$ is still a cut vertex in the decomposition graph. Therefore we can apply 2-connected component computation \cite{TPL_DAC2012_Fang} to reduce the problem size, and apply color assignment separately (see Fig. \ref{fig:nostitch} (d)).
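The cut vertices used above can be found in linear time with a standard articulation-point depth-first search. The following is a minimal sketch: the adjacency-list encoding and the vertex numbering are illustrative assumptions, not the decomposer's actual (C++) data structures.

```python
# Minimal articulation-point (cut vertex) search on an undirected
# graph, in the spirit of the layout-graph simplification above.
# The adjacency-list input format is an illustrative assumption.

def cut_vertices(adj):
    n = len(adj)
    disc = [0] * n            # DFS discovery time, 0 = unvisited
    low = [0] * n             # lowest discovery time reachable
    cuts = set()
    timer = [1]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in adj[u]:
            if v == parent:
                continue
            if disc[v]:                      # back edge
                low[u] = min(low[u], disc[v])
            else:                            # tree edge
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                # u separates v's subtree from the rest of the graph
                if parent != -1 and low[v] >= disc[u]:
                    cuts.add(u)
        if parent == -1 and children > 1:    # root with >1 subtree
            cuts.add(u)

    for s in range(n):
        if not disc[s]:
            dfs(s, -1)
    return cuts

# Toy layout graph mimicking the example in the text: removing
# vertex 0 (feature a) splits {1, 2, 3} (b, c, d) from {4, 5, 6} (e, f, g).
adj = [[1, 2, 4, 5], [0, 3], [0, 3], [1, 2], [0, 6], [0, 6], [4, 5]]
print(cut_vertices(adj))  # -> {0}
```

Once the cut vertices are known, forbidding stitch candidates on them keeps the same vertices as cut vertices of the decomposition graph, enabling the 2-connected component decomposition described above.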
\vspace{-.05in} \subsection{Decomposition Graph Vertex Clustering} \label{sec:cluster} \begin{figure}[tb] \centering \vspace{-.1in} \subfigure[]{\includegraphics[width=0.15\textwidth]{DAC13_cluster1}} \subfigure[]{\includegraphics[width=0.15\textwidth]{DAC13_cluster2}} \vspace{.1in} \caption{DG vertex clustering to reduce the decomposition graph size.} \label{fig:cluster} \vspace{-.1in} \end{figure} Decomposition graph vertex clustering is a speedup technique to further reduce the decomposition graph size. As shown in Fig. \ref{fig:cluster} (a), vertices $a$ and $d_1$ share the same conflict relationships against $b$ and $c$. Besides, there are no conflict edges between $a$ and $d_1$. If no conflict is introduced, vertices $a$ and $d_1$ should be assigned the same color, therefore we can cluster them together, as shown in Fig. \ref{fig:cluster} (b). Note that the stitch and conflict relationships are also merged. Applying vertex clustering in the decomposition graph can further reduce the problem size. \vspace{-.05in} \subsection{Fast Color Assignment Trial} \label{sec:fastassign} Although the SDP and the partition based mapping can provide high performance for color assignment, it is still expensive to apply them to all the decomposition graphs. We derive a fast color assignment trial before calling the SDP based method. If no conflict or stitch is introduced, our trial solves the color assignment problem in linear time. Note that since the SDP method is skipped only when the decomposition graph can be colored without any stitch or conflict, our fast trial does not lose any solution quality. Besides, our preliminary results show that more than half of the decomposition graphs can be decomposed using this fast method. Therefore, the runtime can be dramatically reduced. \begin{algorithm}[htb] \caption{Fast Color Assignment Trial} \label{alg:trial} \begin{algorithmic}[1] \REQUIRE Decomposition graph $G$, stack $S$. \WHILE{$\exists n \in G$ s.t.
$d_{conf}(n) < 3$ \& $d_{stit}(n) < 2$} \STATE $S.push(n)$; $G.delete(n)$; \ENDWHILE \IF{$G$ is not empty} \STATE Recover all vertices in $S$; \RETURN \FALSE; \ELSE \WHILE{ $ ! S.empty()$} \STATE $n = S.pop()$; $G.add(n)$; \STATE Assign $n$ a legal color; \ENDWHILE \RETURN \TRUE; \ENDIF \end{algorithmic} \end{algorithm} The fast color assignment trial is shown in Algorithm \ref{alg:trial}. First, we iteratively remove each vertex with conflict degree ($d_{conf}$) less than 3 and stitch degree ($d_{stit}$) less than 2 (lines 1--3). If some vertices cannot be removed, we recover all the vertices in stack $S$ and return $false$; otherwise, the vertices in $S$ are iteratively popped (recovered) (lines 8--12). For each popped vertex $n$, since it is connected with at most one stitch edge, we can always assign it a legal color without introducing any conflict or stitch. \section{Stitch Candidate Generation} Stitch candidate generation is one of the most important steps in parsing a layout. It not only determines the vertex number in the decomposition graph, but also affects the decomposition result. We use \textit{DPL candidates} to denote the stitch candidates generated by all previous DPL research. In this section, we show that since DPL candidates may be redundant or miss some useful candidates, they cannot be directly applied in TPL layout decomposition. Therefore, we provide a procedure to generate appropriate stitch candidates for TPL. \vspace{-.1in} \subsection{Limitations of DPL Candidates} \begin{figure}[htb] \vspace{-.1in} \centering \subfigure[]{ \includegraphics[width=0.18\textwidth]{RedundantStitch1} } \subfigure[]{ \includegraphics[width=0.2\textwidth]{LostStitch1} } \caption{~(a) Redundant stitch.~(b) This stitch cannot be detected in the DPL candidate generation.} \label{fig:DPL_Stitch} \vspace{-.1in} \end{figure} We provide two examples to demonstrate why DPL candidates are not appropriate for TPL.
First, because of the extra color choice, some DPL candidates may be redundant. As shown in Fig. \ref{fig:DPL_Stitch} (a), the stitch can be removed because no matter what colors are assigned to features $b$ and $c$, feature $a$ can always be assigned a legal color. We denote this kind of stitch as a \textit{redundant stitch}. After removing these redundant stitches, some extra vertices in the decomposition graph can be merged. In this way, we can reduce the problem size. Besides, DPL candidates may cause the stitch loss problem, i.e., some useful stitch candidates cannot be detected and inserted during layout decomposition. In DPL, a stitch candidate has one precondition: it cannot intersect with any projection. For example, as shown in Fig. \ref{fig:DPL_Stitch} (b), because this stitch intersects with the projection of feature $b$, it cannot belong to the DPL candidates. However, if features $b, c$ and $d$ are assigned three different colors, only introducing this stitch can resolve the conflict. In other words, the precondition in DPL limits the ability of stitches to resolve triple patterning conflicts and may result in unnoticed conflicts. We denote a useful stitch forbidden by the DPL precondition as a \textit{lost stitch}. \cite{TPL_ICCAD2011_Yu} directly applies the DPL candidates, which limits its ability to search for better solutions. Although in \cite{TPL_DAC2012_Fang} the stitch insertion followed by color assignment may overcome the stitch loss problem, its post-stage process suffers from an unnecessary stitch penalty. \vspace{-.1in} \subsection{Stitch Candidate Generation for TPL} Given the projection results, we propose a new stitch candidate generation method. Compared with the DPL candidates, our methodology can remove some redundant stitches and systematically solve the stitch loss problem. For a better explanation, let us define the projection sequence.
\begin{mydefinition}[Projection Sequence] After the projection, the feature is divided into several segments, each of which is labeled with a number representing how many other features are projected onto it. The sequence of numbers on these segments is the projection sequence. \end{mydefinition} \begin{figure}[htb] \vspace{-.1in} \centering \includegraphics[width=0.35\textwidth]{ProjectNum} \caption{~The projection sequence of the feature is $01212101010$, and the last $0$ is a default terminal zero.} \label{fig:projectNum} \vspace{-.1in} \end{figure} Instead of analyzing each feature and all its neighboring features, we can directly carry out stitch candidate generation based on the projection sequence. For convenience, we impose a terminal zero rule, i.e., the beginning and the end of the projection sequence must be $0$. To maintain this rule, sometimes a default $0$ needs to be added. An example of a projection sequence is shown in Fig. \ref{fig:projectNum}, where the middle feature has five conflict features, $b, c, d, e$ and $f$. Based on the projection results, the feature is divided into ten segments. By labeling each segment, we can get its projection sequence: $01212101010$. Here a default $0$ is added at the end of the feature. Based on the definition of the projection sequence, we summarize the rules for redundant stitches and lost stitches. First, motivated by the case in Fig. \ref{fig:DPL_Stitch} (a), we can characterize redundant stitches as follows: if the projection sequence begins with ``$01010$'', then the first stitch in the DPL candidates is redundant. Since the projection of a feature can be symmetric, if the projection sequence ends with ``$01010$'', then the last stitch candidate is also redundant. Besides, the rule for lost stitches is as follows: if a projection sequence contains a sub-sequence ``$xyz$'', where $x, y, z > 0$ and $x>y, z>y$, then there is one lost stitch at the segment labeled as $y$. For example, the stitch candidate in Fig.
\ref{fig:DPL_Stitch} (b) is contained in the sub-sequence ``$212$'', so it is a lost stitch. The details of stitch candidate generation for TPL are shown in Algorithm \ref{alg:stitch}. If necessary, each multiple-pin feature is first decomposed into several two-pin features. Then for each feature, we calculate its projection sequence. We remove the redundant stitches by checking whether the projection sequence begins or ends with ``$01010$''. Next we search for and insert stitches, including the lost stitches. Here we define a \textit{sequence bunch}. A sequence bunch is a sub-sequence of a projection sequence that contains at least three non-$0$ segments. For simplicity, when identifying lost stitch candidates, our algorithm introduces at most one stitch between two $0$s for each sequence bunch. \begin{figure}[htb] \centering \includegraphics[width=0.45\textwidth]{StitchGeneration} \caption{~Stitch candidates generated for DPL and TPL.} \label{fig:stitch} \vspace{-.1in} \end{figure} \begin{algorithm}[htb] \caption{Stitch Candidate Generation for TPL} \label{alg:stitch} \begin{algorithmic}[1] \REQUIRE Projection results on features. \STATE Decompose multiple-pin features; \FOR { each feature $w_i$} \STATE Calculate the projection sequence $ps_i$; \IF{$ps_i$ begins or ends with ``$01010$''} \STATE Remove redundant stitch(es); \ENDIF \FOR { each sequence bunch of $ps_i$ } \STATE Search and insert at most one stitch candidate; \ENDFOR \ENDFOR \end{algorithmic} \end{algorithm} An example of the stitch candidate generation is shown in Fig. \ref{fig:stitch}. In the DPL candidate generation, two stitch candidates are generated (stitch 2 and stitch 3). Through our stitch candidate generation, stitch 3 is labeled as a redundant stitch. Besides, stitch 1 is identified as a lost stitch candidate because it is located in a sub-sequence ``$212$''. Therefore, stitch 1 and stitch 2 are chosen as stitch candidates for TPL.
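The redundant-stitch and lost-stitch rules above operate directly on the projection sequence, so they can be sketched in a few lines. The list encoding of a projection sequence is an illustrative assumption; the example sequence is the $01212101010$ feature of Fig.~\ref{fig:projectNum}.

```python
# Sketch of the projection-sequence rules above. A projection
# sequence is encoded as a list of segment labels; this encoding
# is an illustrative assumption.

REDUNDANT_PREFIX = [0, 1, 0, 1, 0]

def first_stitch_redundant(ps):
    """A sequence beginning with 01010 makes the first DPL stitch redundant."""
    return ps[:5] == REDUNDANT_PREFIX

def last_stitch_redundant(ps):
    """By symmetry, a sequence ending with 01010 makes the last one redundant."""
    return ps[-5:] == REDUNDANT_PREFIX

def lost_stitch_segments(ps):
    """Segments y inside a sub-sequence x y z with x > y, z > y and y > 0."""
    return [i for i in range(1, len(ps) - 1)
            if 0 < ps[i] < ps[i - 1] and ps[i] < ps[i + 1]]

# The example feature from the text has projection sequence 01212101010:
# its last DPL stitch is redundant, and the middle "1" of the 212 pattern
# (segment index 3) carries a lost stitch.
ps = [0, 1, 2, 1, 2, 1, 0, 1, 0, 1, 0]
print(last_stitch_redundant(ps), lost_stitch_segments(ps))  # -> True [3]
```

Note that the chained condition $0 < ps[i] < ps[i-1]$ already enforces $x, y, z > 0$, since $x$ and $z$ must both exceed the positive label $y$.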
\section{Introduction} Given an image of a person, a pose transformation model aims to reconstruct the person's appearance in another pose. While for humans it is very easy to imagine how a person would appear in a different body pose, it has been a difficult problem in computer vision to generate photorealistic images conditioned only on pose, given a single 2D image of the human subject. The idea of pose transformation can help construct a viewpoint invariant representation. This has several interesting applications in 3D reconstruction, movie making, motion prediction, human-computer interaction, etc. The task of pose transformation, given a single image and a desired pose, is achieved by a machine learning model in essentially two steps: (1) learning the significant visual features of the person-of-interest along with the background from the given image, and (2) imposing the desired pose on the person-of-interest and generating a photorealistic image while preserving the previously learned features. Generative Adversarial Networks (GANs) \cite{goodfellow-gan} have been widely popular in this field due to their sharp image generation capability. While the majority of successful pose transformation models use different variations of GANs as their primary component, they give little importance to efficient data augmentation and utilization of inherent CNN features to achieve robustness. Recent developments in this field have been targeted at developing complex deep neural network models with the use of multiple external features such as human body parsing \cite{soft-gated}, semantic segmentation \cite{soft-gated}\cite{balakrishnan}, spatial transformation \cite{balakrishnan}\cite{deformable-gans}, etc. Although this is helpful in some scenarios, there are accuracy issues and computational overhead due to each intermediate step, which affect the final result.
In this work, we aim to develop an improved end-to-end model for pose transformation given only the input image and the desired pose, and without any other external features. We make use of the residual learning strategy \cite{resnet} in our GAN architecture by incorporating a number of residual blocks. We achieve robustness in terms of occlusion, scale, illumination and distortion by using efficient data augmentation techniques and utilizing inherent CNN features. We demonstrate our results on two large datasets: the low-resolution person re-identification dataset Market-1501 \cite{market-1501} and the high-resolution fashion dataset DeepFashion \cite{deepfashion}. Our contributions are twofold: First, we develop an improved pose transformation model to synthesize photorealistic images of a person in any desired pose, given a single instance of the person's image, and without any external features. Second, we achieve robustness in terms of occlusion, scale and illumination through efficient data augmentation techniques and the utilization of inherent CNN features. \section{Related work} There has been a lot of research in the field of generative image modelling using deep learning techniques. One line of work follows the idea of Variational Autoencoders (VAEs) \cite{kingma-vae}, which use the reparameterization trick to maximize the lower bound of the data likelihood \cite{mypaper}. VAEs have been popular for their image interpolation capability, but the generated images lack sharpness and high frequency details. GAN \cite{goodfellow-gan} models make use of adversarial training for generating images from random noise. Most works in pose guided person image generation make use of GANs because of their capability to produce fine details. Amongst the large number of successful GAN architectures, many were built upon the DCGAN \cite{dcgan} model, which combines Convolutional Neural Networks (CNNs) with GANs.
Pix2pix \cite{pix2pix} proposed a conditional adversarial network (CGAN) for image-to-image translation by learning the mapping from a condition image to a target image. Yan et al. \cite{skeleton-aided} explored this idea for pose conditioned video generation, where the human images are generated based on skeleton poses. GANs with different variations of U-Net \cite{unet} have been extensively used for pose guided image generation. The PG$^2$ \cite{pg2} model proposes a 2-step process with a U-Net-like network to generate an initial coarse image of the person conditioned on the target pose and then refines the result based upon the difference map. Balakrishnan et al. \cite{balakrishnan} use separate foreground and background synthesis with a spatial transformer network and a U-Net based generator. Ma et al. \cite{disentangled-pig} use pose sampling with a GAN coupled with an encoder-decoder model. Dong et al. \cite{soft-gated} produce state-of-the-art results in pose driven human image generation and use human body parsing as an additional attribute for Warping-GAN rendering. This additional attribute learning generates computational overhead and affects the final results. Other significant works on pose transfer in the field of person re-identification~\cite{pn-gan}~\cite{fd-gan} mostly deal with low resolution images and involve a complex training procedure. In this work, we propose a simplified end-to-end model for pose transformation without using additional feature learning at any stage. \section{Methodology}\label{methodology} \vspace{-5mm} \begin{figure} \centering \includegraphics[width=\linewidth]{figs/gan_full.pdf} \caption{Proposed architecture of the pose transformational GAN (pt-GAN). The idea is to transform the given person image to the desired pose.
The additional classification branch of the Discriminator helps the Generator learn to produce realistic images.} \label{fig:gan_arch} \end{figure} \vspace{-5mm} \subsection{Pose Estimation} The image generation is conditioned on an input image and the target pose represented by a pose vector. In order to get the encoded pose vector, we use the off-the-shelf pose detection algorithm OpenPose~\cite{openpose}, which is trained without using either of the datasets deployed in this work. Given an input person image $I_i$, the pose estimation network OpenPose produces a pose vector $P_i$, which localizes and detects 25 anatomical key-points. \subsection{Generator} The image generator ($G_P$) aims at producing the same person's images under different poses. Particularly, given an input person image $I_i$ and a desired pose $P_j$, the generator aims to synthesize a new person image $I_{P_j}$, which contains the same identity but with a different pose defined by $P_j$. The image vector, obtained using a ResNet-50~\cite{resnet} model pretrained on ImageNet~\cite{imagenet}, and the pose vector are concatenated and fed to the generator. The architecture is depicted in Figure~\ref{fig:generator}. The generator consists of multiple Convolution and Transposed Convolution layers. The key element of the proposed generator is the residual blocks. Each residual block performs downsampling using convolution followed by upsampling using transposed convolution and then re-uses the input by addition (Figure \ref{fig:both}(b)). The motivation is to take advantage of residual learning ($y = F(x)+x$), which can be used to pass invariable information (e.g. clothing color, texture, background) from the bottom layers to the higher layers and change the variable information (pose) to synthesize more realistic images, achieving pose transformation at the same time. \begin{figure} \centering \includegraphics[width=0.9\linewidth]{figs/generator.png} \caption{Architecture of the Generator Network.
The Generator network consists of 9 residual blocks, which help the GAN to preserve low level features (clothing, texture) while transforming high level features (pose) of the subject.} \label{fig:generator} \end{figure} \vspace{-5mm} \subsection{Discriminator} In our implementation, the Discriminator ($D_P$) predicts the class label for the image along with the binary classification of determining whether the image is real or generated. Studies \cite{discrim} show that incorporating a classification loss in the discriminator along with the real/fake loss in turn increases the generator's capability to produce sharp images with high details. The Discriminator consists of stacked Conv-ReLU-Pool layers, and the final fully connected layer has been modified to incorporate both the binary loss and the classification loss (Figure~\ref{fig:both}(a)). \begin{figure} \centering \includegraphics[width=\linewidth]{figs/new_both.png} \caption{\textbf{(a)} Architecture of the discriminator of pt-GAN. A classification task is added alongside the real/fake prediction. This simultaneously helps the Generator to produce more realistic images. \textbf{(b)} Architecture of the Residual Blocks used in the Generator. The Residual Learning strategy preserves low level features (color, texture) and learns high level features (pose) simultaneously.} \label{fig:both} \end{figure} \subsection{Data Augmentation} \begin{enumerate} \item \textbf{Image Interpolation}: The input images are resized to $256\times 256$ before passing through ResNet. Market-1501 images ($128\times 64$) are resized to $256\times 128$ and zero-padded to make them $256\times 256$. The images in DeepFashion are of the desired dimension by default. \item \textbf{Random Erasing~\cite{randomerasing}}: Random erasing is helpful in achieving robustness against occlusion. A random patch of the input image is given random values while the reconstruction is expected to be perfect.
Thus, the GAN learns to reconstruct (and remove) the occluded regions in the generated images. \item \textbf{Random Crop}: The input image is randomly cropped and upscaled to the input dimension ($256\times 256$) to augment the cases where the human detection is inaccurate or only a partial body is visible. \item \textbf{Jitter}: We use random jitter in terms of brightness, contrast, hue and saturation (random jitter to each channel) to augment the effects of illumination variations. \item \textbf{Random Horizontal Flip}: Inspecting the dataset, it is seen that most human subjects have both left and right profile images. Hence flipping the image left-right is a good choice for image augmentation. \item \textbf{Random Distortion}: We have incorporated random distortion with a grid size of $10$ to compensate for distortion in the generated image as well as to enforce our model to learn important features of the input image even in the presence of non-idealities. \end{enumerate} \begin{figure}[H] \centering \includegraphics[width=0.7\linewidth, clip]{figs/data_augmentation.png} \caption{The data augmentation techniques used in this work: (a) Original Image, (b) Random Erasing, (c) Random Crop, (d) Random Distortion, (e)-(g) Random Jitter: (e) Brightness, (f) Contrast, (g) Saturation; (h) Random Flip} \label{fig:data_augmentation} \end{figure} The CNN by itself enforces scale invariance through max-pooling and convolution layers. Therefore, we claim to have achieved invariance to distortion, occlusion, illumination and scale. A demonstration of the data augmentation techniques is shown in Figure~\ref{fig:data_augmentation}. \section{Experiments} \subsection{Datasets} \subsubsection{DeepFashion:} The DeepFashion (In-shop Clothes Retrieval Benchmark) dataset \cite{deepfashion} consists of 52,712 in-shop clothes images, and 200,000 cross-pose/scale pairs. The images are of 256$\times$256 resolution.
We follow the standard split adopted by \cite{pg2} to construct the training set of 146,680 pairs, each composed of two images of the same person but in different poses.\vspace{-3mm} \subsubsection{Market-1501:} We also show our results on the re-identification dataset Market-1501~\cite{market-1501}, containing 32,668 images of 1,501 persons. The images in this dataset vary highly in pose, illumination, viewpoint and background, which makes the person image generation task more challenging. Images have size 128$\times$64. Again, we follow \cite{pg2} to construct the training set of 439,420 pairs, each composed of two images of the same person but in different poses. \subsection{Implementation and Training} For image descriptor generation, we have used a pretrained ResNet-50 network whose weights were not updated during the training of the generator and discriminator. The input image and the target image are of the same class with different poses. The reconstruction loss (MSE) is combined with the negative discriminator loss to update the Generator. In our implementation, we have used 9 residual blocks sequentially in the generator architecture. The discriminator is trained on the combined loss (binary cross-entropy and categorical cross-entropy). The architecture of the proposed model is described in detail in Section~\ref{methodology}. For training both the Generator and the Discriminator we have used the Adam optimizer with $\beta_1=0.5$ and $\beta_2=0.999$. The initial learning rate was set to 0.0002 with a decay factor of 10 every 20 epochs. A batch size of 32 is taken as standard. \section{Results and Discussion} \subsection{Qualitative Results} We demonstrate a series of results on the high resolution fashion dataset DeepFashion~\cite{deepfashion} as well as the low resolution re-identification dataset Market-1501~\cite{market-1501}.
In both datasets, by visual inspection, we can say that our model achieves good reconstruction and is able to learn pose-invariant information like the colour and texture of clothing and characteristics of male/female attributes such as hair and face, while successfully performing image transformation into the desired pose. The results on DeepFashion are better owing to the finer details and simple backgrounds, whereas the low resolution affects the quality of the generated images in Market-1501. The results are demonstrated in Figure~\ref{fig:main_result}. \vspace{-2mm} \begin{figure}[!ht] \centering \includegraphics[width=0.98\linewidth]{figs/main_result_gan_2.png} \caption{Qualitative results on DeepFashion and Market-1501 datasets. The proposed model is able to reproduce good details, and also learn pose-invariant information like the colour and texture of clothing and characteristics of male/female attributes such as hair and face, while successfully performing image transformation into the desired pose.} \label{fig:main_result} \end{figure} \vspace{-8mm} \subsection{Quantitative Results} We use two popular measures of GAN performance, namely Structural Similarity (SSIM)~\cite{ssim} and Inception Score (IS)~\cite{is}, for verifying the performance of our model. We compare our work with the existing methods based on SSIM and IS scores on both DeepFashion and Market-1501 datasets in Table~\ref{tab:result-table}. Our model achieves the best IS score on the Market-1501 dataset while achieving the second-best SSIM score on both datasets. However, the deviation from the state-of-the-art is $\sim 1.5\%$ in these cases, which can be overcome through rigorous testing and hyperparameter tuning. We also inspect the improvement brought by data augmentation as seen in Table~\ref{tab:result-table}. The proposed augmentation methods give an average improvement of $\sim 9\%$.
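For reference, the structure of the SSIM comparison used above can be sketched in a simplified single-window form; this is a hypothetical illustration for images scaled to $[0,1]$ (the reported scores use the standard local sliding-window SSIM of Wang et al., e.g., as implemented in scikit-image):

```python
import numpy as np

def ssim_global(x, y, C1=0.01**2, C2=0.03**2):
    """Simplified single-window SSIM for two images in [0, 1].

    Illustrative only: it applies the luminance/contrast/structure
    comparison over the whole image instead of local windows.
    """
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()   # cross-covariance of the two images
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx**2 + my**2 + C1) * (vx + vy + C2))
```

By construction the score equals $1$ for identical images and decreases as the generated image departs from the target in mean, variance, or structure.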
This essentially strengthens our argument that a significant boost in performance can be gained by exploring effective training schemes, without changing the model parameters or loss function. \begin{table}[!ht] \caption{Comparative study with existing methods in DeepFashion and Market-1501 datasets. The best and second best results are denoted in red and blue respectively.} \label{tab:result-table} \begin{center} \begin{tabular}{ C{4cm} C{1.2cm} C{1.2cm} C{1.2cm} C{1.2cm} } \hline \\[-2.5ex] {} & \multicolumn{2}{c}{DeepFashion} & \multicolumn{2}{c}{Market-1501} \\ \cline{2-5} \\[-2.4ex] Model & SSIM & IS & SSIM & IS \\ \hline \\[-2.5ex] pix2pix~\cite{pix2pix} & 0.692 & 3.249 & 0.183 & 2.678\\ PG2~\cite{pg2} & 0.762 & 3.090 & 0.253 & 3.460\\ DSCF~\cite{deformable-gans} & 0.761 & \textcolor{red}{3.351} & 0.290 & 3.185\\ BodyROI7~\cite{disentangled-pig} & 0.614 & 3.228 & 0.099 & \textcolor{blue}{3.483}\\ Dong et al.~\cite{soft-gated} & \textcolor{red}{0.793} & \textcolor{blue}{3.314} & \textcolor{red}{0.356} & 3.409\\ \hline \\[-2.5ex] Ours w/o augmentation & 0.713 & 3.006 & 0.268 & 3.425 \\ Ours (full) & \textcolor{blue}{0.781} & 3.238 & \textcolor{blue}{0.302} & \textcolor{red}{3.488} \\ \hline \end{tabular} \end{center} \end{table} \vspace{-8mm} \subsection{Failure Cases} \vspace{-4mm} \begin{figure}[H] \centering \includegraphics[width=0.98\linewidth]{figs/fail_result_gan.png} \caption{Failure Cases in our pt-GAN model. If the input contains fine details (text, stripes) or the target pose is incomplete then the reconstruction is poor. The external attribute (handbag) learning is of limited success.} \label{fig:fail_result} \end{figure} \vspace{-3mm} We analyse some of our failure cases in both the datasets to understand the shortcoming of our model. As seen in Figure~\ref{fig:fail_result}, the text in clothing as well as very fine patterns of clothing (stripes, dots) are not modelled properly. The external attribute features (e.g. 
the handbag in Figure~\ref{fig:fail_result}) are not learned properly as it is difficult to map external attributes to the output image when conditioned only on pose. The accuracy is also dependent on the completeness of the target pose. Finally, the model is limited in cases where a rare, complex pose with scarce training data is presented. In Market-1501, the reconstruction of faces is not very good due to poor resolution. \subsection{Further Analysis} Along with the quantitative and qualitative results, we demonstrate a special case to show the improvement caused by the data augmentation methods. As seen in Figure~\ref{fig:occlusion_inv}, the occlusion in the input image is partially carried forward when the data augmentation methods are not used. With data augmentation, the generated image is of better quality and fewer artifacts appear at the edges. \begin{figure}[!ht] \centering \includegraphics[width=0.6\linewidth]{figs/result_data_augmentation.png} \caption{Occlusion invariance using the proposed model. The occlusion is partially carried forward when data augmentation methods are not used. With data augmentation, the resultant image is free from such artifacts.} \label{fig:occlusion_inv} \end{figure} \vspace{-6mm} \section{Conclusion} In this work, we proposed an improved end-to-end pose transformation model to synthesize photorealistic images of a given person in any desired pose without any external feature learning. We make use of the residual learning strategy with effective data augmentation techniques to achieve robustness in terms of occlusion, scale, illumination and distortion. For future work, we plan to achieve better results by utilising feature transport from the source image and conditioning the discriminator on both the source image and target pose, along with using a perceptual (content) loss for reconstruction. \bibliographystyle{splncs04}
\section{Introduction to the characteristic method} The characteristic methods, especially those within the semi-Lagrangian framework, have proved very successful in solving kinetic equations and other nonlocal problems \cite{CrouseillesLatuSonnendrucker2009,Kormann2015,KormannReuterRampp2019,XiongChenShao2016,GuoLiWang2018b,DimarcoLoubereNarskiRey2018}. In order to make the material self-contained, we briefly review their basic settings. \subsection{The Lawson integrators for partial integro-differential equations} Consider the model problem \begin{equation} \frac{\partial}{\partial t} y(\bm{x}, t) = \mathcal{L} y(\bm{x}, t) + \mathcal{N} y(\bm{x}, t), \end{equation} where $\mathcal{L}$ is the linear local operator and $\mathcal{N}$ is the nonlocal operator. Under the Lawson transformation $v(\bm{x}, t) = \mathrm{e}^{(t_{n-1}-t)\mathcal{L}} y(\bm{x}, t)$, the model problem becomes \begin{equation} \frac{\partial}{\partial t} v(\bm{x}, t) = \mathrm{e}^{(t_{n-1} - t)\mathcal{L}} \mathcal{N}( \mathrm{e}^{(t - t_{n-1})\mathcal{L}} v(\bm{x}, t)). \end{equation} Applying a $q$-step Adams method and transforming back to the original variable yields the Lawson-Adams method, \begin{equation} y^n(\bm{x}) = \mathrm{e}^{\tau\mathcal{L}} y^{n-1}(\bm{x}) + \tau \sum_{k=0}^{q} \beta_k \mathrm{e}^{k\tau\mathcal{L}} \mathcal{N} y^{n-k}(\bm{x}), \end{equation} where $\tau = t_{n} - t_{n-1}$ is the time stepsize, and $y^n(\bm{x})$ denotes the solution at the $n$-th step. Specifically, consider the partial integro-differential equation with a nonlocal operator $\Theta_V[f]$, e.g., the Boltzmann equation or the Wigner equation, \begin{equation}\label{eq.Wigner} \frac{\partial }{\partial t}f(\bm{x}, \bm{k}, t)+ \frac{\hbar \bm{k}}{m} \cdot \nabla_{\bm{x}} f(\bm{x},\bm{k}, t) = \Theta_V[f](\bm{x}, \bm{k}, t). \end{equation} The commonly used Lawson schemes for Eq.~\eqref{eq.Wigner} are collected as follows.
\begin{itemize} \item[(1)] One-stage Lawson predictor-corrector scheme (LPC-1) \begin{equation*} \boxed{ \begin{split} \textup{P}:\widetilde{f}^{n+1}(\bm{x}, \bm{k}) &= f^{n}(\mathcal{A}_\tau(\bm{x}, \bm{k})) + \tau \Theta_{V}[f^{n}](\mathcal{A}_\tau(\bm{x}, \bm{k})), \\ \textup{C}: f^{n+1}(\bm{x}, \bm{k}) &= f^{n}(\mathcal{A}_\tau(\bm{x}, \bm{k})) + \frac{\tau}{2} \Theta_V[\widetilde{f}^{n+1}](\bm{x}, \bm{k}) + \frac{\tau}{2} \Theta_{V}[f^{n}](\mathcal{A}_\tau(\bm{x}, \bm{k})). \end{split} } \end{equation*} \item[(2)] Two-stage Lawson-Adams predictor-corrector scheme (LAPC-2): \begin{equation*} \boxed{ \begin{split} \textup{P}: \widetilde{f}^{n+1}(\bm{x}, \bm{k}) =& f^{n}(\mathcal{A}_\tau(\bm{x}, \bm{k})) + \frac{3\tau}{2} \Theta_{V}[f^{n}](\mathcal{A}_\tau(\bm{x}, \bm{k})) \\ &- \frac{\tau}{2} \Theta_{V}[f^{n-1}](\mathcal{A}_{2\tau}(\bm{x}, \bm{k})), \\ \textup{C}: f^{n+1}(\bm{x}, \bm{k}) = & f^{n}(\mathcal{A}_\tau(\bm{x}, \bm{k})) + \frac{5\tau}{12} \Theta_V[\widetilde{f}^{n+1}](\bm{x}, \bm{k}) + \frac{8\tau}{12} \Theta_{V}[f^{n}](\mathcal{A}_\tau(\bm{x}, \bm{k})) \\ &- \frac{\tau}{12} \Theta_{V}[f^{n-1}](\mathcal{A}_{2\tau}(\bm{x}, \bm{k})). \end{split} } \end{equation*} \item[(3)] Three-stage Lawson-Adams predictor-corrector scheme (LAPC-3): \begin{equation*} \boxed{ \begin{split} \textup{P}: \widetilde{f}^{n+1}(\bm{x}, \bm{k}) = &f^{n}(\mathcal{A}_\tau(\bm{x}, \bm{k})) + \frac{23\tau}{12} \Theta_V[f^{n}](\mathcal{A}_{\tau}(\bm{x}, \bm{k})) \\ &- \frac{16\tau}{12} \Theta_V[f^{n-1}](\mathcal{A}_{2\tau}(\bm{x}, \bm{k})) + \frac{5\tau}{12} \Theta_V[f^{n-2}](\mathcal{A}_{3\tau}(\bm{x}, \bm{k})), \\ \textup{C}:f^{n+1}(\bm{x}, \bm{k}) = &f^{n}(\mathcal{A}_\tau(\bm{x}, \bm{k})) + \frac{9\tau}{24} \Theta_V[\widetilde{f}^{n+1}](\bm{x}, \bm{k}) + \frac{19\tau}{24} \Theta_V[f^{n}](\mathcal{A}_{\tau}(\bm{x}, \bm{k})) \\ &- \frac{5\tau}{24} \Theta_V[f^{n-1}](\mathcal{A}_{2\tau}(\bm{x}, \bm{k})) + \frac{\tau}{24} \Theta_V[f^{n-2}](\mathcal{A}_{3\tau}(\bm{x}, \bm{k})).
\end{split} } \end{equation*} \end{itemize} Here we use the notation $\mathcal{A}_\tau(\bm{x}, \bm{k}) = (\bm{x} - \frac{\hbar \bm{k}}{m} \tau, \bm{k})$. The Lawson scheme exploits the exact advection along the characteristic line, i.e., the semigroup property $f(\bm{x}, \bm{k}, t) = \mathrm{e}^{-\frac{\hbar \tau}{m} \bm{k} \cdot \nabla_{\bm{x}}} f(\bm{x}, \bm{k}, t - \tau) = f(\mathcal{A}_\tau(\bm{x}, \bm{k}), t - \tau)$ for the free advection. The convergence order of the $q$-stage Lawson predictor-corrector scheme is between $q$ and $q+1$ as it can be regarded as an implicit integrator with incomplete iteration. In practice, the one-step predictor-corrector scheme LPC-1 is used to obtain the missing starting values for the multistep schemes LAPC-2 and LAPC-3. Apart from the non-splitting schemes, another commonly used approach is operator splitting (OS). Take the Strang splitting as an example. \begin{equation*} \boxed{ \begin{split} &\textup{Half-step advection}: f^{n+1/2}(\bm{x}, \bm{k}) = f^{n}(\mathcal{A}_{\tau/2}(\bm{x}, \bm{k})), \\ &\textup{Full-step of ${\rm \Psi} \textup{DO}$}:\widetilde{f}^{n+1/2}(\bm{x}, \bm{k}) = f^{n+1/2}(\bm{x}, \bm{k}) + \tau \Theta_V[f^{n+1/2}](\bm{x}, \bm{k}), \\ &\textup{Half-step advection}: f^{n+1}(\bm{x}, \bm{k}) = \widetilde{f}^{n+1/2}(\mathcal{A}_{\tau/2}(\bm{x}, \bm{k})). \\ \end{split} } \end{equation*} The Strang splitting adopted here is a first-order scheme overall as the ${\rm \Psi}\textup{DO}$ subproblem is integrated by the forward Euler method. \subsection{Cubic spline interpolation} The standard way to evaluate $ f^{n}(\mathcal{A}_{\tau}(\bm{x}, \bm{k}))$ and $ \Theta_V[f^{n}](\mathcal{A}_{\tau}(\bm{x}, \bm{k}))$ on the shifted grid is to interpolate via a specified basis expansion of $f^n$. Typical choices include the spline wavelets \cite{CrouseillesLatuSonnendrucker2006,CrouseillesLatuSonnendrucker2009}, the Fourier pseudo-spectral basis and the Chebyshev polynomials \cite{ChenShaoCai2019}.
Since the spatial advection is essentially local, we only consider the cubic B-spline as it is a local wavelet basis with low numerical dissipation, and the cost scales as $\mathcal{O}(N_x^d)$ with $d$ the dimensionality \cite{CrouseillesLatuSonnendrucker2009}. Now we focus on the unidimensional uniform setting as the multidimensional spline can be constructed by tensor products. Suppose the computational domain is $[x_0, x_{N}]$ containing $N+1$ grid points with uniform spacing $h = \frac{x_{N} - x_0}{N}$. The projection of $\varphi(x)$ onto the cubic spline basis is given by \begin{equation}\label{interpolation} \varphi(x) \approx s(x) = \sum_{\nu = -1}^{N+1} \eta_{\nu} B_{\nu}(x) \quad \textup{subject to} \quad \varphi(x_i) = s(x_i), \quad i = 0, \dots, N. \end{equation} $B_\nu$ is the cubic B-spline with compact support over four grid intervals, \begin{equation} B_{\nu}(x) = \left\{ \begin{split} &\frac{(x - x_{\nu-2})^3}{6h^3}, \quad x \in [x_{\nu-2}, x_{\nu-1}],\\ &-\frac{(x - x_{\nu-1})^3}{2h^3} + \frac{(x - x_{\nu-1})^2}{2h^2} + \frac{(x - x_{\nu-1})}{2h} + \frac{1}{6}, \quad x \in [x_{\nu-1}, x_{\nu}],\\ &-\frac{(x_{\nu+1} - x)^3}{2h^3} +\frac{(x_{\nu+1} - x)^2}{2h^2} + \frac{(x_{\nu+1} - x)}{2h} + \frac{1}{6}, \quad x \in [x_{\nu}, x_{\nu+1}],\\ &\frac{(x_{\nu+2} - x)^3}{6h^3}, \quad x \in [x_{\nu+1}, x_{\nu+2}],\\ &0, \quad \textup{otherwise}, \end{split} \right. \end{equation} implying that $B_{\nu - 1}, B_{\nu}, B_{\nu+1}, B_{\nu+2}$ overlap a grid interval $(x_{\nu}, x_{\nu+1})$ \cite{MalevskyThomas1997}. It remains to solve for the $N+3$ coefficients $\bm{\eta} = (\eta_{-1}, \dots, \eta_{N+1})$. Since $B_{i \pm 1}(x_i) = \frac{1}{6}$ and $B_i(x_i) = \frac{2}{3}$ are the only nonvanishing basis values at $x_i$, substituting them into Eq.~\eqref{interpolation} yields $N+1$ equations for $N+3$ variables, \begin{equation}\label{three_term_relation} \varphi(x_i) = \frac{1}{6} \eta_{i-1} + \frac{2}{3} \eta_{i} + \frac{1}{6} \eta_{i+1}, \quad 0 \le i \le N.
\end{equation} Two additional equations are needed to determine the unique solution of $\bm{\eta}$, which are given by a specified boundary condition at both ends of the interval. For instance, consider the Hermite boundary condition (also termed the clamped spline) \cite{CrouseillesLatuSonnendrucker2006}, \begin{equation} s^{\prime}(x_0) = \phi_L, \quad s^{\prime}(x_{N}) = \phi_R, \end{equation} where $\phi_L$ and $\phi_R$ are parameters to be specified. In particular, when $\phi_L = \phi_R = 0$, it reduces to the Neumann boundary condition on both sides. Since \begin{equation} s^{\prime}(x_i) = - \frac{1}{2h} \eta_{i-1} + \frac{1}{2h} \eta_{i+1}, \quad i = 0, \dots, N, \end{equation} this is equivalent to adding the two constraints \begin{equation} \phi_L = -\frac{1}{2h} \eta_{-1} + \frac{1}{2h} \eta_1,\quad \phi_R = -\frac{1}{2h} \eta_{N-1} + \frac{1}{2h} \eta_{N+1}. \end{equation} Thus all the coefficients can be obtained straightforwardly by solving the equation \begin{equation}\label{cubic_spline} A (\eta_{-1}, \dots, \eta_{N+1})^T = (\phi_L, \varphi(x_0), \dots, \varphi(x_{N}), \phi_R)^T. \end{equation} Note that the $(N+3)\times (N+3)$ coefficient matrix $A$ admits an explicit LU decomposition $A = LU$, \begin{equation}\label{coefficient_matrix} A = \frac{1}{6} \begin{pmatrix} -3/h & 0 & 3/h & 0 & \cdots & 0 \\ 1 & 4 & 1 & 0 & & \vdots \\ 0 & 1 & 4 & 1 & & \vdots \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ \vdots & & 0 & 1 & 4 & 1 \\ 0 & 0 & 0 & -3/h & 0 & 3/h \\ \end{pmatrix},
\end{equation} where \begin{equation}\label{explicit_L} L = \begin{pmatrix} 1 & 0 & 0 &\cdots & \cdots & 0 \\ -h/3 & 1 & 0 &\ddots & & \vdots \\ 0 & l_1 & 1 &\ddots & & \vdots \\ 0 & 0 & l_2 & \ddots & & \vdots \\ \vdots & \vdots & & l_{N} & 1 & 0 \\ 0 & 0 & \cdots & -\frac{3l_{N}}{h} & \frac{3l_{N+1}}{h} & 1 \\ \end{pmatrix} \end{equation} and \begin{equation}\label{explicit_U} U = \frac{1}{6} \begin{pmatrix} -3/h & 0 & 3/h & 0 &\cdots & \cdots & 0 \\ 0 & d_1 & 2 & 0 &\ddots & & \vdots \\ 0 & 0 & d_2 & 1 &\ddots & & \vdots \\ 0 & 0 & 0 & d_3 & \ddots & & \vdots \\ \vdots & \vdots & & & 0 & d_{N+1} & 0 \\ 0 & 0 & \cdots & & 0 & 0 & 3 d_{N+2}/h \\ \end{pmatrix}, \end{equation} with \begin{equation} \begin{split} & d_1 = 4, \quad l_1 = 1/4, \quad d_2 = 4 - 2l_1 = 7/2, \\ & l_i = 1/d_i, \quad d_{i+1} = 4 - l_i, \quad i = 2, \dots, N+1, \\ & l_{N+1} = 1/(d_{N} d_{N+1}), \quad d_{N+2} = 1 - l_{N+1}. \end{split} \end{equation} The above scheme can achieve fourth order convergence in spatial spacing $h$ and conserves the total mass. Besides, the time step in the semi-Lagrangian method is usually not restricted by the CFL condition, that is, $C = \hbar \max |k| \tau /h > 1 $ is allowed. \section{Parallel characteristic method} For a 6-D problem, the foremost problem is the storage of huge 6-D tensors as the memory to store a $101^3 \times 64^3$ grid is $1.08$TB in single precision, which is still prohibitive for modern computers. Fortunately, the characteristic method can be realized in a distributed manner as pointed out in several pioneering works \cite{MalevskyThomas1997,CrouseillesLatuSonnendrucker2009}. 
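As a concrete reference for the serial solve of Eq.~\eqref{cubic_spline}, a minimal sketch in Python; it assembles the clamped-spline matrix of Eq.~\eqref{coefficient_matrix} and uses a dense solver instead of the explicit LU factors, purely for illustration:

```python
import numpy as np

def clamped_spline_matrix(N, h):
    """Assemble the (N+3) x (N+3) matrix A of Eq. (coefficient_matrix)."""
    A = np.zeros((N + 3, N + 3))
    A[0, 0], A[0, 2] = -3.0 / h, 3.0 / h        # row enforcing s'(x_0) = phi_L
    for i in range(1, N + 2):                   # three-term relations at x_0..x_N
        A[i, i - 1:i + 2] = [1.0, 4.0, 1.0]
    A[-1, -3], A[-1, -1] = -3.0 / h, 3.0 / h    # row enforcing s'(x_N) = phi_R
    return A / 6.0

def spline_coefficients(phi, h, phi_L, phi_R):
    """Solve A eta = (phi_L, phi(x_0), ..., phi(x_N), phi_R)^T for eta_{-1..N+1}."""
    N = len(phi) - 1
    rhs = np.concatenate(([phi_L], phi, [phi_R]))
    return np.linalg.solve(clamped_spline_matrix(N, h), rhs)

# Example: interpolate sin(x) on [0, 8] with clamped (Hermite) ends.
N, h = 80, 8.0 / 80
x = np.linspace(0.0, 8.0, N + 1)
eta = spline_coefficients(np.sin(x), h, np.cos(0.0), np.cos(8.0))
# Interpolation property at the nodes: s(x_i) = (eta_{i-1} + 4*eta_i + eta_{i+1})/6.
s_at_nodes = (eta[:-2] + 4.0 * eta[1:-1] + eta[2:]) / 6.0
```

By construction the spline reproduces the data at the nodes and matches the prescribed end derivatives $(\eta_1-\eta_{-1})/(2h)$ and $(\eta_{N+1}-\eta_{N-1})/(2h)$.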
Without loss of generality, we divide the $N+1$ grid points on a line into $p$ uniform parts, with $M = N/p$, \begin{align*} \underbracket{x_0 < x_1 < \cdots < x_{M-1}}_{\textup{the 1st processor}} < \underbracket{x_{M}}_{\textup{shared}} < \cdots < \underbracket{ x_{(p-1)M}}_{\textup{shared}} < \underbracket{x_{(p-1)M+1} < \cdots < x_{pM}}_{\textup{$p$-th processor}}, \end{align*} where the $l$-th processor only manipulates $\mathcal{X}_l$ with $\mathcal{X}_l = (x_{(l-1)M+1}, \dots, x_{l M})$, $l = 1, \dots, p$. The grid points $x_{M}, x_{2M}, \dots, x_{(p-1)M}$ are shared by the adjacent patches. Our target is to make \begin{equation} \bm{\eta}^{(l)}= (\eta_{-1}^{(l)}, \dots, \eta_{M+1}^{(l)}) \approx (\eta_{-1 +(l-1)M}, \dots, \eta_{(l-1)M+M+1}), \quad l = 1, \dots, p, \end{equation} that is, the local spline coefficients $\bm{\eta}^{(l)}$ of the $l$-th piece should approximate those of the global B-spline as accurately as possible. \subsection{Effective Hermite boundary condition based on finite difference stencils} In order to solve $\bm{\eta}^{(l)}$ efficiently, Crouseilles, Latu and Sonnendr{\"u}cker suggested imposing an effective Hermite boundary condition on the shared grid points (CLS-HBC for short) \cite{CrouseillesLatuSonnendrucker2006,CrouseillesLatuSonnendrucker2009}: \begin{equation}\label{Hermite_boundary_condition} \varphi^{\prime}(x_{lM}) = s^{\prime}(x_{lM}), \quad l = 1, \dots, p, \end{equation} so that one needs to solve \begin{align} \varphi^{\prime}(x_{lM}) &= -\frac{1}{2h} \eta_{M-1}^{(l)} + \frac{1}{2h} \eta_{M+1}^{(l)} = -\frac{1}{2h} \eta_{-1}^{(l+1)} + \frac{1}{2h} \eta_{1}^{(l+1)}. \end{align} The problem is that the derivatives $\varphi^{\prime}(x_{lM})$ at the shared points are actually unknown, so they have to be approximated by a finite difference stencil.
The authors suggest using the recursive relation from the spline transform matrix \eqref{coefficient_matrix} and the three-term relation $\varphi(x_i) = \frac{1}{6} \eta_{i-1} + \frac{2}{3} \eta_{i} + \frac{1}{6} \eta_{i+1}$, $0 \le i \le N$. Following \cite{CrouseillesLatuSonnendrucker2006} and taking $i = lM$, it starts from \begin{equation} \begin{split} s^{\prime}(x_{i}) = & -\frac{1}{2h}\eta_{i-1} + \frac{1}{2h}\eta_{i+1} \\ = & -\frac{1}{2h} \left( \frac{3}{2} \varphi(x_{i-1}) - \frac{1}{4} \eta_{i-2} - \frac{1}{4} \eta_{i}\right) \\ & + \frac{1}{2h} \left( \frac{3}{2} \varphi(x_{i+1}) - \frac{1}{4} \eta_{i} - \frac{1}{4} \eta_{i+2}\right) \\ = & \frac{3}{4h}(\varphi(x_{i+1}) - \varphi(x_{i-1})) + \frac{1}{8h}(\eta_{i- 2} - \eta_{i+2}), \end{split} \end{equation} so that one arrives at the recursive relation \begin{equation}\label{recursive_relation} s^{\prime}(x_i) = \frac{3}{4h}(\varphi(x_{i+1}) - \varphi(x_{i-1})) - \frac{1}{4}(s^{\prime}(x_{i - 1}) + s^{\prime}(x_{i+1})). \end{equation} By further replacing $s^{\prime}(x_{i-1})$ and $s^{\prime}(x_{i+1})$ by Eq.~\eqref{recursive_relation}, it arrives at a longer expansion \begin{equation} \begin{split} s^{\prime}(x_i) = & \frac{6}{7h}(\varphi(x_{i+1}) - \varphi(x_{i-1})) - \frac{3}{14h}(\varphi(x_{i+2}) - \varphi(x_{i-2})) \\ &+ \frac{1}{14}(s^{\prime}(x_{i+2}) + s^{\prime}(x_{i-2})). \end{split} \end{equation} Iterating this procedure, one obtains the final expansion with $\alpha = 1 - 2/14^2$, \begin{equation} \left(\alpha - \frac{2}{\alpha 14^2}\right) s^{\prime}(x_i) = \sum_{j = -8}^{8} \omega_j \varphi(x_{i+j}) + \frac{1}{\alpha 14^2} ( s^{\prime}(x_{i+8}) + s^{\prime}(x_{i-8}) ), \end{equation} associated with the fourth-order finite difference approximation \begin{equation} s^{\prime}(x_{i+8}) \approx \frac{-\varphi(x_{i+10}) + 8\varphi(x_{i+9}) - 8 \varphi(x_{i+7}) + \varphi(x_{i+6})}{12h}.
\end{equation} To sum up, one arrives at the formula \begin{equation} s^{\prime}(x_i) = \sum_{j = -10}^{-1} \tilde{\omega}_j^- \varphi(x_{i+j}) + \sum_{j=1}^{10}\tilde{\omega}_j^+ \varphi(x_{i+j}), \end{equation} where the coefficients $\tilde{\omega}_j^-$ are collected in Table \ref{Coefficients} and $\tilde{\omega}_j^+ = -\tilde{\omega}_{-j}^-$. \begin{table}[!h] \centering \caption{\small Coefficients for the approximation of the derivatives \cite{CrouseillesLatuSonnendrucker2009}. }\label{Coefficients} \begin{lrbox}{\tablebox} \begin{tabular}{cccccccc} \hline\hline $j$ & $-10$ & $-9$ & $-8$ & $-7$ & $-6$ & \\ \hline $\tilde{\omega}_j$ & $0.2214309755$E-5 & $-1.771447804$E-5 & $7.971515119$E-5 & $-3.011461267$E-4 & $1.113797807$E-3 \\ \hline $j$ & $-5$ & $-4$ & $-3$ & $-2$ & $-1$ & \\ \hline $\tilde{\omega}_j$ & $-4.145187862$E-3 & $0.01546473933$ & $-0.05771376946$ & $0.2153903385$ & $-0.8038475846$ \\ \hline\hline \end{tabular} \end{lrbox} \scalebox{0.80}{\usebox{\tablebox}} \end{table} At each step, $\sum_{j = -10}^{-1} \tilde{\omega}_j^- \varphi(x_{i+j})$ and $ \sum_{j=1}^{10}\tilde{\omega}_j^+ \varphi(x_{i+j})$ can be assembled by the left and right processors independently, and then data is exchanged only between adjacent processors to merge the effective boundary condition. The remaining task is to solve the algebraic equations in each processor independently, \begin{equation} A^{(l)} \bm{\eta}^{(l)} = (\phi_L^{(l)}, \varphi(x_{(l-1)M}), \dots, \varphi(x_{lM}), \phi_R^{(l)})^T, \end{equation} where $A^{(l)}$ is an $(M+3)\times(M+3)$ matrix of the same form as Eq.~\eqref{coefficient_matrix}, and \begin{equation} \phi_{R}^{(l)} = \phi_{L}^{(l+1)} \approx \sum_{j = -10}^{-1} \tilde{\omega}_j^- \varphi(x_{lM+j}) + \sum_{j=1}^{10}\tilde{\omega}_j^+ \varphi(x_{lM+j}).
\end{equation} \subsection{Perfectly matched boundary condition (PMBC)} We suggest adopting another effective boundary condition, termed the perfectly matched boundary condition (PMBC), based on a key observation made in \cite{MalevskyThomas1997}. Specifically, we start from the exact solution of $\bm{\eta}$, with $(b_{ij}) = A^{-1}$ of size $(N+3)\times (N+3)$. For the sake of convenience, the subindex of $(b_{ij})$ starts from $-1$ and ends at $N+1 = pM+1$. The solution of Eq.~\eqref{cubic_spline} reads \begin{equation} \eta_i = b_{ii} \varphi(x_i) + \sum_{j=-1}^{i-1} b_{ij} \varphi(x_j) + \sum_{j = i+1}^{pM+1} b_{i j} \varphi(x_{j}), \quad i = -1, \dots, pM+1. \end{equation} We can make a truncation for $|i-j| \ge n_{nb}$ as the off-diagonal elements exhibit exponential decay away from the main diagonal, \begin{equation}\label{truncation} \eta_i \approx b_{ii} \varphi(x_i) + \sum_{j= i - n_{nb}+1}^{i-1} b_{ij} \varphi(x_j) + \sum_{j = i+1}^{i + n_{nb}-1} b_{i j} \varphi(x_{j}), \quad i = -1, \dots, pM+1. \end{equation} Using the truncated stencils \eqref{truncation}, \begin{equation*} \begin{split} & \eta_{lM-1} \approx \sum_{j=(lM-1)-n_{nb}+1}^{(lM-1)+n_{nb}-1} b_{lM-1, j} \varphi(x_j) = \sum_{j=-n_{nb}}^{n_{nb}-2} b_{lM-1, lM+j} \varphi(x_{lM+j}), \\ & \eta_{lM+1} \approx \sum_{j=(lM+1)-n_{nb}+1}^{(lM+1)+n_{nb}-1} b_{lM+1, j} \varphi(x_j) = \sum_{j=-n_{nb}+2}^{n_{nb}} b_{lM+1, lM+ j} \varphi(x_{lM+j}).
\\ \end{split} \end{equation*} By further adding four more terms to complete the summations from $-n_{nb}$ to $n_{nb}$, it yields that \begin{equation*} \begin{split} -\frac{1}{2h} \eta_{lM-1} + \frac{1}{2h} \eta_{lM+1} \approx &\sum_{j=-n_{nb}}^{n_{nb}} \left(-\frac{1}{2h} b_{lM-1, lM+j} + \frac{1}{2h} b_{lM+1, lM+j}\right) \varphi(x_{lM+j})\\ = &\underbracket{\sum_{j=-n_{nb}}^{-1} \left(-\frac{1}{2h} b_{lM-1, lM+j} + \frac{1}{2h} b_{lM+1, lM+j}\right) \varphi(x_{lM+j})}_{\textup{stored in left processor}} \\ &~+\underbracket{\sum_{j=1}^{n_{nb}} \left(-\frac{1}{2h} b_{lM-1, lM+j} + \frac{1}{2h} b_{lM+1, lM+j}\right) \varphi(x_{lM+j})}_{\textup{stored in right processor}} \\ &~+ \underbracket{\left(-\frac{1}{2h} b_{lM-1, lM} + \frac{1}{2h} b_{lM+1, lM}\right) \varphi(x_{lM}).}_{\textup{shared by adjacent two processors}} \end{split} \end{equation*} Thus it arrives at the formulation of PMBC \begin{equation*} \begin{split} \phi_{R}^{(l)} = \phi_{L}^{(l+1)} \approx & \underbracket{\frac{1}{2}c_{0,l} \varphi(x_{lM}) + \sum_{j = 1}^{n_{nb}} c_{j, l}^{-} \varphi(x_{lM-j})}_{\textup{stored in left processor}} + \underbracket{\frac{1}{2}c_{0,l} \varphi(x_{lM}) + \sum_{j = 1}^{n_{nb}} c_{j, l}^{+} \varphi(x_{lM+j})}_{\textup{stored in right processor}}, \end{split} \end{equation*} where $c_{0, l} = -\frac{b_{lM-1, lM}}{2h} + \frac{b_{lM+1, lM}}{2h}$ and \begin{equation}\label{PMBC_coeffcients} \begin{split} &c_{j, l}^+ = -\frac{b_{lM-1, lM+j}}{2h} + \frac{b_{lM+1, lM+j}}{2h}, \quad c_{j, l}^- = -\frac{b_{lM-1, lM-j}}{2h} + \frac{b_{lM+1, lM-j}}{2h}. \end{split} \end{equation} It deserves to mention that other spline boundary conditions can also be represented by PMBC, following the same idea in Eq.~\eqref{truncation}. 
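The key observation behind PMBC, namely the exponential decay of the rows of $A^{-1}$ away from the main diagonal, can be checked numerically. A sketch (dense inverse and $\varphi = \sin$ purely for illustration; the index $i$ plays the role of a junction point $x_{lM}$):

```python
import numpy as np

def clamped_spline_matrix(N, h):
    """The (N+3) x (N+3) clamped-spline matrix A (see Eq. (coefficient_matrix))."""
    A = np.zeros((N + 3, N + 3))
    A[0, 0], A[0, 2] = -3.0 / h, 3.0 / h
    for i in range(1, N + 2):
        A[i, i - 1:i + 2] = [1.0, 4.0, 1.0]
    A[-1, -3], A[-1, -1] = -3.0 / h, 3.0 / h
    return A / 6.0

N, h, n_nb = 160, 8.0 / 160, 12
B = np.linalg.inv(clamped_spline_matrix(N, h))   # (b_ij); index nu maps to row nu+1
x = np.linspace(0.0, 8.0, N + 1)
rhs = np.concatenate(([np.cos(0.0)], np.sin(x), [np.cos(8.0)]))

i = N // 2                                       # a junction index lM
# Full stencil for s'(x_i) = (eta_{i+1} - eta_{i-1}) / (2h) from the exact inverse.
row = (B[i + 2] - B[i]) / (2.0 * h)
exact = row @ rhs
# PMBC truncation: keep only grid indices j with |j - i| <= n_nb.
mask = np.abs(np.arange(-1, N + 2) - i) <= n_nb
truncated = (row * mask) @ rhs
```

The dropped entries decay geometrically (ratio $2-\sqrt{3}\approx0.27$ for the tridiagonal part), so the short stencil reproduces the derivative at the junction to high accuracy.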
When the natural boundary conditions are adopted, \begin{equation}\label{natural_boundary} \frac{1}{h^2} \eta_{-1} - \frac{2}{h^2} \eta_0 + \frac{1}{h^2} \eta_{1} = 0, \quad \frac{1}{h^2} \eta_{N-1} - \frac{2}{h^2} \eta_N + \frac{1}{h^2} \eta_{N+1} = 0, \end{equation} the coefficient matrix is \begin{equation}\label{coefficient_matrix_2} \widetilde A = \frac{1}{6} \begin{pmatrix} 1/h^2 & -2/h^2 & 1/h^2 & 0 & \cdots & 0 \\ 1 & 4 & 1 & 0 & & \vdots \\ 0 & 1 & 4 & 1 & & \vdots \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ \vdots & & 0 & 1 & 4 & 1 \\ 0 & 0 & 0 & 1/h^2 & -2/h^2 & 1/h^2 \\ \end{pmatrix}. \end{equation} Denote by $(\widetilde{b}_{ij}) = \widetilde{A}^{-1}, -1\le i, j \le N+1$. Equivalently, the equations $\widetilde A\bm{\eta}^T= (0, \varphi(x_0), \dots, \varphi(x_N), 0)^T$ can be cast into $A\bm{\eta}^T= (\phi_{L}^{(1)}, \varphi(x_0), \dots, \varphi(x_N), \phi_{R}^{(p)})^T$ since \begin{equation} \begin{split} & \eta_{-1} \approx \sum_{j = -1}^{n_{nb} - 2} \widetilde b_{-1, j} \varphi(x_{j}), \quad \eta_{1} \approx \sum_{j= -1 }^{n_{nb}} \widetilde b_{1, j} \varphi(x_j). \end{split} \end{equation} By adding two terms and noting that $\varphi(x_{-1}) = 0$, it yields that \begin{equation} \phi_{L}^{(1)} =\frac{\eta_1-\eta_{-1}}{2h} \approx \sum_{j=0}^{n_{nb}} c_{j, 0}^{-} \varphi(x_j), \quad c_{j, 0}^{-} = \frac{1}{2h}(-\widetilde{b}_{-1, j} + \widetilde{b}_{1, j}). \end{equation} Similarly, for the other end, noting that $\varphi(x_{N+1}) = 0$, \begin{equation} \begin{split} & \eta_{N-1} \approx \sum_{j = -1}^{n_{nb} - 2} \widetilde b_{N-1, N-j} \varphi(x_{N-j}), \quad \eta_{N+1} \approx \sum_{j= -1 }^{n_{nb}} \widetilde b_{N+1, N-j} \varphi(x_{N-j}), \end{split} \end{equation} so that \begin{equation} \phi_{R}^{(p)} = \frac{\eta_{N+1}-\eta_{N-1}}{2h} \approx \sum_{j=0}^{n_{nb}} c_{j, p}^{+}\varphi(x_{N-j}), \quad c_{j, p}^{+} = \frac{1}{2h}(-\widetilde{b}_{N-1, N-j} + \widetilde{b}_{N+1, N-j}). 
\end{equation} \subsection{Comparison between two effective Hermite boundary conditions} It is shown that PMBC is preferable to CLS-HBC in terms of numerical accuracy. \begin{example}[1-D spline] The test problem is \begin{equation} \varphi(x) = \sin(x), \quad x \in [0, 8], \end{equation} subject to \begin{equation} \varphi^{\prime}(0) = 0, \quad \varphi^{\prime}(8) = 0. \end{equation} \end{example} For the parallel implementation, the spline is decomposed into $4$ patches as given in Figure \ref{plot_patch_spline}, and each patch contains $(N-1)/4+1$ grid points. \begin{figure}[!!h] \centering \includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./plot_patch_spline.pdf} \caption{\small An illustration of cubic B-spline on four patches. The grid points $x_{lM}$ $(l = 1, \dots, p) $ are shared by adjacent processors. In principle, the splines on patches should approximate the global one as accurately as possible. \label{plot_patch_spline}} \end{figure} First, we adopt CLS-HBC and the results are shown in Figure \ref{spline_CLS_HBC}. It is observed that the errors are concentrated at the junction points of adjacent patches. Indeed, the accuracy improves with more collocation points (equivalently, a smaller step size). The relative errors are less than $5\%$ when $N = 81$.
\begin{figure}[!h] \centering \subfigure[$N = 81$.]{ {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./spline_coeff_N81.pdf}} {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./spline_coeff_error_N81.pdf}}} \\ \centering \subfigure[$N = 161$.]{ {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./spline_coeff_N161.pdf}} {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./spline_coeff_error_N161.pdf}}} \\ \centering \subfigure[$N = 321$.]{ {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./spline_coeff_N321.pdf}} {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./spline_coeff_error_N321.pdf}}} \caption{\small Spline coefficients (left) and absolute errors (right) under CLS-HBC. Large errors are observed at the junction points. \label{spline_CLS_HBC}} \end{figure} \begin{figure}[!h] \centering \subfigure[$n_{nb} = 5$.]{ {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./spline_coeff_nb5.pdf}} {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./spline_coeff_error_nb5.pdf}}} \\ \centering \subfigure[$n_{nb} = 10$.]{ {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./spline_coeff_nb10.pdf}} {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./spline_coeff_error_nb10.pdf}}} \\ \centering \subfigure[$n_{nb} = 20$.]{ {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./spline_coeff_nb20.pdf}} {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./spline_coeff_error_nb20.pdf}}} \caption{\small Spline coefficients (left) and absolute errors (right) under PMBC with $n_{nb} = 5, 10, 20$. The errors at the junction points are dramatically suppressed. \label{spline_matched_HBC}} \end{figure} By contrast, the results under PMBC are given in Figure \ref{spline_matched_HBC}. One can see that the errors are significantly smaller.
When $N$ is fixed to $161$, we find that $n_{nb} = 12$ achieves a relative error of about $10^{-8}$, and $n_{nb} = 26$ reduces it to about $10^{-16}$. \begin{example}[Free advection of a 2-D Gaussian wavepacket\label{ex3}] \begin{figure}[!h] \centering {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./spline_comp.pdf}} {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./spline_convergence.pdf}} \caption{\small The time evolution of $\varepsilon_{\infty}(t)$ under the parallel spline reconstruction and different spatial stepsizes. It perfectly matches the theoretical global convergence order $3$. \label{free_spline_convergence}} \end{figure} \begin{figure}[!h] \centering \subfigure[Serial cubic B-spline interpolation.]{ {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./Wigner2d_serror_N81.pdf}} {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./Wigner2d_serror_evo_N81.pdf}}} \\ \centering \subfigure[Parallel cubic B-spline interpolation with PMBC ($n_{nb} = 10$).]{ {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./Wigner2d_perror_N81_PM.pdf}} {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./Wigner2d_perror_evo_N81_PM.pdf}}} \\ \centering \subfigure[Parallel cubic B-spline interpolation with CLS-HBC.]{ {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./Wigner2d_perror_N81_CLS.pdf}} {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./Wigner2d_perror_evo_N81_CLS.pdf}}} \caption{\small A comparison of 2-D free advection: $f^{\textup{num}}(x, k, t) - f^{\textup{exact}}(x, k, t)$ at $t = 5$ (left) and the time evolution of $\varepsilon_{\infty}(t)$ (right) under $N = 81$. When PMBC is adopted, it produces almost the same results as that of the serial implementation. By contrast, when the CLS-HBC is adopted, small oscillations are observed at the junction points.
\label{Wigner2d_N81}} \end{figure} \begin{figure}[!h] \centering \subfigure[Serial cubic B-spline interpolation.]{ {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./Wigner2d_serror_N161.pdf}} {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./Wigner2d_serror_evo_N161.pdf}}} \\ \centering \subfigure[Parallel cubic B-spline interpolation with PMBC ($n_{nb} = 10$).]{ {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./Wigner2d_perror_N161_PM.pdf}} {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./Wigner2d_perror_evo_N161_PM.pdf}}} \\ \centering \subfigure[Parallel cubic B-spline interpolation with CLS-HBC.]{ {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./Wigner2d_perror_N161_CLS.pdf}} {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./Wigner2d_perror_evo_N161_CLS.pdf}}} \caption{\small A comparison of 2-D free advection: $f^{\textup{num}}(x, k, t) - f^{\textup{exact}}(x, k, t)$ at $t = 5$ (left) and the time evolution of $\varepsilon_{\infty}(t)$ (right) under $N = 161$. Numerical results are further improved under smaller spatial spacing. \label{Wigner2d_N161}} \end{figure} The second test problem is the free advection of the Wigner function in 2-D phase space: \begin{equation} \frac{\partial}{\partial t} f(x, k, t) + \frac{\hbar k}{m} \frac{\partial }{\partial x} f(x, k, t) = 0, \end{equation} with \begin{equation} f(x, k, 0) = \frac{1}{\pi} \exp\left(-\frac{x^2}{2a^2} - 2a^2 (k-k_0)^2\right). \end{equation} The exact solution reads \begin{equation} f(x, k, t) = \frac{1}{\pi} \exp\left(-\frac{(x - \frac{\hbar k t}{m})^2}{2a^2} - 2a^2 (k-k_0)^2\right). \end{equation} Here we take $a = 1$, $\hbar = m = 1$, $k_0 = 0.5$. The final time is $t_{fin} = 5$ with time step $\tau = 0.05$.
\end{example} To measure the numerical error, we adopt the $l^\infty$-error as the metric \begin{equation} \varepsilon_{\infty}(t) = \max_{(x, k) \in \mathcal{X} \times \mathcal{K}}| f^{\textup{num}}(x, k, t) - f^{\textup{exact}}(x, k, t) |, \end{equation} where $f^{\textup{num}}$ and $f^{\textup{exact}}$ denote the numerical solution produced by the spline interpolation and the exact solution, respectively. We make a comparison of the two kinds of effective Hermite boundary conditions. When $N = 81$ and $n_{nb} = 10$ are fixed, one can see in Figure \ref{Wigner2d_N81} that the performance of the local splines under PMBC is almost the same as that of the serial spline, while the solutions under CLS-HBC exhibit small oscillations around the junction regions. The same trend is observed when $N$ is further increased to $161$. As presented in Figure \ref{Wigner2d_N161}, the $l^\infty$-error $\varepsilon_{\infty}(5)$ decreases from $4.86\times 10^{-4}$ to $6.68 \times 10^{-5}$, and the convergence order is $3$ (see Figure \ref{free_spline_convergence}). \subsection{The influence of different spline boundary conditions} \begin{figure}[!h] \centering \subfigure[Evolution of errors. (left: Neumann boundary, right: natural boundary)]{ {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./free_error_reflect.pdf}} {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./free_error_natural.pdf}}} \\ \centering \subfigure[Difference at $t = 3$. (left: Neumann boundary, right: natural boundary)]{ {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./free_err_30.pdf}} {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./free_err_30_natural.pdf}}} \\ \centering \subfigure[Difference at $t = 4$. (left: Neumann boundary, right: natural boundary)]{ {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./free_err_40.pdf}} {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./free_err_40_natural.pdf}}} \\ \centering \subfigure[Difference at $t = 10$.
(left: Neumann boundary, right: natural boundary)]{ {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./free_err_100.pdf}} {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./free_err_100_natural.pdf}}} \caption{\small The free advection until $t = 10$ under the Neumann boundary condition (left) or the natural boundary condition (right) for the global cubic spline. Under the Neumann boundary condition, the wavepacket tends to be reflected back, leading to an evident accumulation of errors near the boundary. By contrast, the reflection of the wavepacket can be significantly suppressed under the natural boundary condition. \label{free_compare_Neumann_natural}} \end{figure} We now investigate the influence of different boundary conditions on the global spline. Again, we simulate the free advection in Example \ref{ex3} until $t_{fin} = 10$ under either the natural boundary condition \eqref{natural_boundary} or the Neumann boundary condition $f^{\prime}(x_0) = 0$ and $f^{\prime}(x_N) = 0$ imposed on the global spline. As seen in Figure \ref{free_compare_Neumann_natural}, when the Neumann boundary condition is adopted, the wavepacket is reflected back when it touches the boundary, which leads to a rapid accumulation of errors. By contrast, under the natural boundary condition, the reflection of the wavepacket is evidently suppressed and the growth rate of the errors is dramatically smaller. The numerical evidence indicates that it is more appropriate to impose the natural boundary condition, which lets wavepackets leave the domain without being reflected back.
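For reference, the free-advection test reduces to a one-dimensional semi-Lagrangian sweep for each fixed $k$: trace the characteristic backward and interpolate the current profile. A minimal sketch of one such sweep (using SciPy's cubic spline with the natural boundary condition, and the parameter values $\hbar = m = 1$, $k_0 = 0.5$, $\tau = 0.05$ from the example above):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# 1-D slice of df/dt + (hbar*k/m) df/dx = 0 at fixed k = k0:
# one semi-Lagrangian step is f^{n+1}(x) = f^n(x - v*tau), evaluated with a
# natural cubic spline of the current profile.
hbar = m = 1.0
k0, tau, nsteps = 0.5, 0.05, 100
x = np.arange(-12.0, 12.0 + 1e-12, 0.1)
f = np.exp(-x**2 / 2)                     # Gaussian profile at t = 0
v = hbar * k0 / m
for _ in range(nsteps):
    f = CubicSpline(x, f, bc_type='natural')(x - v * tau)
exact = np.exp(-(x - v * tau * nsteps)**2 / 2)
err = np.max(np.abs(f - exact))           # spline error ~ dx^4 per step
```

The accumulated interpolation error after $100$ steps stays well below $10^{-3}$ on this grid, in line with the third-order global convergence reported above.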
\section{Comparison between TKM and pseudo-spectral method} The pseudo-spectral method (PSM for brevity) is a typical way to approximate the ${\rm \Psi} \textup{DO}$ \cite{Ringhofer1990,Goudon2002} \begin{equation}\label{def.pdo_convolution} \Theta_V[f](\bm{x}, \bm{k}, t) = \frac{1}{\mathrm{i} \hbar (2\pi)^3} \iint_{\mathbb{R}^{6}} \mathrm{e}^{-\mathrm{i} (\bm{k} - \bm{k}^{\prime}) \cdot \bm{y} }D_V(\bm{x}, \bm{y}, t) f(\bm{x}, \bm{k}^{\prime}, t) \D \bm{y} \D \bm{k}^{\prime} \end{equation} with $D_V(\bm{x}, \bm{y}, t) = V(\bm{x} + \frac{\bm{y}}{2}) - V(\bm{x} - \frac{\bm{y}}{2})$. Suppose that the Wigner function $f(\bm{x}, \bm{k}, t)$ decays outside the finite domain $\mathcal{X} \times [-L_k, L_k]^3$; then one can impose an artificial periodic boundary condition in $\bm{k}$-space and use PSM (or the Poisson summation formula) \begin{equation} f(\bm{x}, \bm{k}, t) \approx \sum_{\bm{n} \in \mathbb{Z}^3} \widehat{f}_{\bm{n}}(\bm{x}, t) \mathrm{e}^{\frac{2\pi \mathrm{i} \bm{n} \cdot \bm{k}}{2L_k}}.
\end{equation} In addition, starting from the convolution representation of the ${\rm \Psi} \textup{DO}$, we obtain \begin{equation*} \begin{split} \Theta_V[f](\bm{x}, \bm{k}, t) & \approx \frac{1}{\mathrm{i} \hbar (2\pi)^3} \int_{\mathbb{R}^3} \mathrm{e}^{-\mathrm{i} \bm{k} \cdot \bm{y}} D_V(\bm{x}, \bm{y}) \sum_{\bm{n} \in \mathbb{Z}^3} \left(\int_{\mathbb{R}^3} \widehat{f}_{\bm{n}}(\bm{x}, t) \mathrm{e}^{\mathrm{i} (\frac{\pi}{L_k} \bm{n} - \bm{y}) \cdot \bm{k}^{\prime}} \D \bm{k}^{\prime}\right) \D \bm{y} \\ &= \frac{1}{\mathrm{i} \hbar (2\pi)^3} \sum_{\bm{n} \in \mathbb{Z}^3} \widehat{f}_{\bm{n}}(\bm{x}, t) \int_{\mathbb{R}^3} \mathrm{e}^{-\mathrm{i} \bm{k} \cdot \bm{y}} D_V(\bm{x}, \bm{y}) \left(\int_{\mathbb{R}^3} \mathrm{e}^{\mathrm{i} (\frac{\pi }{L_k} \bm{n} - \bm{y}) \cdot \bm{k}^{\prime}} \D \bm{k}^{\prime}\right) \D \bm{y} \\ &= \frac{1}{\mathrm{i} \hbar } \sum_{\bm{n} \in \mathbb{Z}^3} \widehat{f}_{\bm{n}}(\bm{x}, t) \int_{\mathbb{R}^3} \mathrm{e}^{-\mathrm{i} \bm{k} \cdot \bm{y}} D_V(\bm{x}, \bm{y}) \delta(\frac{\pi \bm{n}}{L_k} - \bm{y}) \D \bm{y} \\ &= \frac{1}{\mathrm{i} \hbar } \sum_{\bm{n} \in \mathbb{Z}^3} \widehat{f}_{\bm{n}}(\bm{x}, t) D_V(\bm{x}, \frac{\pi \bm{n}}{L_k}) \mathrm{e}^{-\frac{\pi\mathrm{i}}{L_k} \bm{k} \cdot \bm{n}}, \end{split} \end{equation*} where the third equality uses the Fourier completeness relation. By further truncating $\bm{n}$, we arrive at the approximation formula \begin{equation}\label{PSM_approximation} \Theta_V[f](\bm{x}, \bm{k}, t) \approx \frac{1}{\mathrm{i} \hbar} \sum_{\bm{n} \in \mathcal{I} } \widehat{f}_{\bm{n}}(\bm{x}, t) \left(V\left(\bm{x} + \frac{\pi \bm{n}}{2L_k}\right) - V\left(\bm{x} - \frac{\pi \bm{n}}{2L_k}\right)\right) \mathrm{e}^{-\frac{\pi\mathrm{i}}{L_k} \bm{k} \cdot \bm{n}}, \end{equation} where the dual index set $\mathcal{I}$ is \begin{equation} \mathcal{I}\coloneqq \{(n_1, n_2, n_3)\in \mathbb{Z}^3 | n_j = -N_k/2, \dots, N_k/2-1\}.
\end{equation} However, we would like to report that PSM might fail to produce proper results when $V(\bm{x})$ has singularities, as the formula \eqref{PSM_approximation} is actually not well-defined for $\bm{x} = \pm \frac{\pi \bm{n}}{2 L_k}$. For the sake of comparison, we consider a 6-D problem under the attractive Coulomb potential. \begin{example}\label{ex_qcs} Consider the attractive Coulomb potential $V(\bm{x}) = -{1}/{|\bm{x}|}$ and a Gaussian wavepacket adopted as the initial condition, \begin{equation} f_0(\bm{x}, \bm{k}) = \pi^{-3} \mathrm{e}^{-\frac{(x_1-1)^2 + x_2^2 + x_3^2}{2} - 2k_1^2 - 2k_2^2 - 2k_3^2}. \end{equation} \end{example} We first calculate the ${\rm \Psi} \textup{DO}$ under TKM or PSM on an $\mathcal{X}$-grid mesh $[-6, 6]^3$ with $N_x = 41, \Delta x = 0.3$ and a $\mathcal{K}$-grid mesh $[-4, 4]^3$ with $N_k = 64, \Delta k = 0.125$. In order to get rid of the blow-up in the formula \eqref{PSM_approximation}, we test two strategies. The first is to shift the $\mathcal{X}$-grid mesh to $[-6 + \delta x, 6 + \delta x]^3$ with a small offset $\delta x$. The second is to set $\delta x = 0$ and let $V(\bm{x}) = 0$ when $|\bm{x}| = 0$. A comparison among the initial ${\rm \Psi} \textup{DO}$s under the different strategies is given in Figure \ref{initial_PDO_comp}. At first glance, no evident differences are observed in the numerical results under TKM or PSM.
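To make the quadrature concrete, the sketch below implements a hypothetical 1-D analogue of the PSM formula (our own sign conventions, chosen self-consistently with a Fourier expansion $f(k)=\sum_n \widehat{f}_n\,\mathrm{e}^{\mathrm{i}\pi n k/L_k}$, so signs may differ from the 3-D display above). For a linear potential $V(x)=ax$ the ${\rm \Psi}\textup{DO}$ reduces exactly to $\frac{a}{\hbar}\partial_k f$, which provides an analytic check:

```python
import numpy as np

# Hypothetical 1-D analogue of the PSM quadrature for the Psi-DO.
hbar, a, Lk, Nk = 1.0, 0.7, 8.0, 128
dk = 2 * Lk / Nk
k = (np.arange(Nk) - Nk // 2) * dk
n = np.arange(Nk) - Nk // 2
f = np.exp(-2.0 * (k - 0.5)**2)            # Gaussian slice in k at fixed x
# Fourier coefficients such that f(k_j) = sum_n fhat_n * exp(i*pi*n*k_j/Lk)
fhat = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(f))) / Nk
x0 = 0.3
V = lambda x: a * x                        # linear test potential (assumed)
DV = V(x0 - np.pi * n / (2 * Lk)) - V(x0 + np.pi * n / (2 * Lk))
phase = np.exp(1j * np.pi * np.outer(k, n) / Lk)
theta = (phase @ (fhat * DV)) / (1j * hbar)
# for linear V the Psi-DO equals (a/hbar) * df/dk exactly
exact = (a / hbar) * (-4.0 * (k - 0.5)) * f
err = np.max(np.abs(theta - exact))
```

Because the Gaussian decays far below machine precision at the domain boundary, the quadrature reproduces the analytic result essentially to round-off.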
\begin{figure}[!h] \centering \subfigure[TKM.]{ {\includegraphics[width=0.32\textwidth,height=0.18\textwidth]{./TKM_PDO.pdf}}} \subfigure[PSM ($\delta x= 0$).]{ {\includegraphics[width=0.32\textwidth,height=0.18\textwidth]{./PSM_PDO_xshift0.pdf}}} \subfigure[PSM ($\delta x = 0.01$).]{ {\includegraphics[width=0.32\textwidth,height=0.18\textwidth]{./PSM_PDO_xshift001.pdf}}} \caption{\small A comparison between TKM and PSM for a ${\rm \Psi} \textup{DO}$ with an initial Gaussian wavepacket.} \label{initial_PDO_comp} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./TKM_PSM_comp.pdf} \caption{\small The time evolution of the numerical $\min_{\mathcal{X}} P(x_1, x_2, x_3)$ in log10 scale. PSM may introduce very large artificial negative parts and finally suffers from numerical instability. \label{PSM_instability}} \end{figure} However, when simulating the Wigner dynamics with $\mathcal{X}\times \mathcal{K} = [-6, 6]^3\times [-4, 4]^3$ and $N_x = 41, \Delta x = 0.3$, $N_k = 32, \Delta k = 0.25$, we have found that TKM and PSM exhibit distinct performances. Specifically, PSM may suffer from large errors near the singularity and from numerical instability, as it treats the singularity near the origin incorrectly. For the sake of illustration, we consider the spatial marginal density \begin{equation} P(x_1, x_2, x_3) = \iiint_{\mathbb{R}^3} f(\bm{x}, k_1, k_2, k_3) \D k_1 \D k_2 \D k_3. \end{equation} The spatial marginal density is known to be nonnegative. Therefore, the negative part of the numerical solution can be used as an indicator of accuracy and stability, and is visualized in Figure \ref{PSM_instability}. Although the spectral method might not preserve the positivity of the spatial marginal density, the errors remain at a stable level when TKM is adopted. By contrast, PSM may introduce very large artificial negative parts and finally results in numerical instability.
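As a minimal illustration of this indicator, the sketch below (a hypothetical 1-D stand-in for the 3-D data) builds the Wigner function of a Gaussian pure state, whose marginal $P(x)=\mathrm{e}^{-x^2}/\sqrt{\pi}$ is nonnegative, and monitors the minimum of the grid marginal together with the total mass:

```python
import numpy as np

# Wigner function of a 1-D coherent state (hbar = 1): f = exp(-x^2 - k^2)/pi.
dx = dk = 0.1
x = np.arange(-6, 6 + 1e-12, dx)
k = np.arange(-6, 6 + 1e-12, dk)
X, K = np.meshgrid(x, k, indexing='ij')
f = np.exp(-X**2 - K**2) / np.pi
P = f.sum(axis=1) * dk          # spatial marginal density P(x)
mass = P.sum() * dx             # total mass, should be ~1
indicator = P.min()             # negative values flag loss of accuracy/stability
```

For the exact Wigner function the indicator stays positive; in a simulation, a large negative value of this quantity signals exactly the kind of breakdown observed for PSM.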
In Figure \ref{comp_TKM_PSM}, we visualize the spatial marginal distribution projected onto the ($x_1$-$x_2$) plane. It is found that the peak of the spatial marginal distribution has been evidently smoothed out by PSM at $2$ a.u.\ and artificial negative valleys are clearly seen at $6$ a.u. This coincides with the observation in Figure \ref{PSM_instability} that PSM suffers from instability soon after $6$ a.u. \begin{figure}[!h] \centering \subfigure[$t = 2$a.u. (left: TKM, middle: PSM with $\delta x = 0$, right: PSM with $\delta x = 0.01$).]{ {\includegraphics[width=0.32\textwidth,height=0.18\textwidth]{./xdist_TKM_0020.pdf}} {\includegraphics[width=0.32\textwidth,height=0.18\textwidth]{./xdist_PSM_sh001_0020.pdf}} {\includegraphics[width=0.32\textwidth,height=0.18\textwidth]{./xdist_PSM_sh01_0020.pdf}}} \\ \centering \subfigure[$t = 4$a.u. (left: TKM, middle: PSM with $\delta x = 0$, right: PSM with $\delta x = 0.01$).]{ {\includegraphics[width=0.32\textwidth,height=0.18\textwidth]{./xdist_TKM_0040.pdf}} {\includegraphics[width=0.32\textwidth,height=0.18\textwidth]{./xdist_PSM_sh001_0040.pdf}} {\includegraphics[width=0.32\textwidth,height=0.18\textwidth]{./xdist_PSM_sh01_0040.pdf}}} \\ \centering \subfigure[$t = 6$a.u. (left: TKM, middle: PSM with $\delta x = 0$, right: PSM with $\delta x = 0.01$).]{ {\includegraphics[width=0.32\textwidth,height=0.18\textwidth]{./xdist_TKM_0060.pdf}} {\includegraphics[width=0.32\textwidth,height=0.18\textwidth]{./xdist_PSM_sh001_0060.pdf}} {\includegraphics[width=0.32\textwidth,height=0.18\textwidth]{./xdist_PSM_sh01_0060.pdf}}} \caption{\small Visualization of the spatial marginal distribution projected onto the ($x_1$-$x_2$) plane under TKM and PSM. PSM might not produce correct numerical results and suffers from instability.
\label{comp_TKM_PSM}} \end{figure} \section{Comparison among exponential integrators and splitting method} Finally, we make a thorough comparison among various integrators, which in turn provides a guiding principle for choosing an appropriate integrator for our 6-D simulations. To this end, we provide two examples with exact solutions. The first is the quantum harmonic oscillator in 2-D phase space and the second is the Hydrogen Wigner function of the 1s state in 6-D phase space (see Example \ref{ex_qcs}). The performance metrics include the $L^2$-error $\varepsilon_{2}(t)$: \begin{align} \varepsilon_{2}(t) &= \left[\iint_{\mathcal{X}\times \mathcal{K}} \left(f^{\textup{ref}}\left(\bm{x},\bm{k},t\right)-f^{\textup{num}}\left(\bm{x},\bm{k},t\right)\right)^{2}\textup{d}\bm{x}\textup{d} \bm{k}\right]^{\frac{1}{2}},\label{eq:e2} \end{align} the maximal error $\varepsilon_{\infty}(t)$: \begin{align} \varepsilon_{\infty}(t) &=\max_{(\bm{x},\bm{k})\in\mathcal{X}\times \mathcal{K}}\big |f^{\textup{ref}}\left(\bm{x},\bm{k},t\right)-f^{\textup{num}}\left(\bm{x},\bm{k},t\right) \big |, \label{eq:ef} \end{align} and the deviation of total mass $\varepsilon_{\textup{mass}}(t)$: \begin{align} \varepsilon_{\textup{mass}}(t) &= \Big |\iint_{\mathcal{X}\times \mathcal{K}} f^{\textup{num}}\left(\bm{x},\bm{k},t\right)\textup{d}\bm{x}\textup{d} \bm{k}-\iint_{\mathcal{X}\times \mathcal{K}} f^{\textup{ref}}\left(\bm{x},\bm{k},t=0\right)\textup{d}\bm{x}\textup{d} \bm{k} \Big |,\label{eq:emass} \end{align} where $f^{\textup{ref}}$ and $f^{\textup{num}}$ denote the reference and numerical solution, respectively, and $\mathcal{X}\times \mathcal{K}$ is the computational domain. In practice, the integrals are replaced by sums over all grid points weighted by the phase-space cell volume. Besides, the relative maximal error and relative $L^2$-error are obtained as $\frac{\varepsilon_{\infty}(t)}{\max(|f(\bm{x}, \bm{k}, 0)|)}$ and $\varepsilon_{2}(t)/\sqrt{\iint |f(\bm{x}, \bm{k}, 0)|^2 \D \bm{x} \D \bm{k}}$, respectively.
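On a uniform mesh the three metrics reduce to simple array reductions. A sketch (function and variable names are ours, not from the implementation):

```python
import numpy as np

def errors(f_num, f_ref, dx, dk):
    """Grid versions of eps_inf, eps_2 and eps_mass on a uniform mesh.

    eps_mass compares the mass of f_num with that of the reference profile
    (in the paper's definition, the reference at t = 0).
    """
    cell = dx * dk                                  # phase-space cell volume
    eps_inf = np.max(np.abs(f_num - f_ref))
    eps_2 = np.sqrt(np.sum((f_num - f_ref)**2) * cell)
    eps_mass = abs(np.sum(f_num) * cell - np.sum(f_ref) * cell)
    return eps_inf, eps_2, eps_mass

# sanity check with a constant discrepancy of 2 on a 4x5 mesh, dx = dk = 0.5
f_ref = np.zeros((4, 5))
f_num = 2.0 * np.ones((4, 5))
eps_inf, eps_2, eps_mass = errors(f_num, f_ref, 0.5, 0.5)
```

The sanity check gives $\varepsilon_\infty = 2$, $\varepsilon_2 = \sqrt{20}$ and $\varepsilon_{\textup{mass}} = 10$, which can be verified by hand.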
Our main observations are summarized as follows. \begin{itemize} \item[1.] In order to ensure the accuracy of temporal integration, it is recommended to use LPC1, instead of the splitting scheme or multi-stage schemes. \item[2.] The operator splitting scheme is still useful in practice, as it saves half of the cost in the calculation of nonlocal terms. \item[3.] It is suggested to choose the stencil length $n_{nb} = 15$ for PMBC to maintain the accuracy, while $n_{nb} < 10$ might lead to an evident loss of total mass. \end{itemize} The one-stage Lawson predictor-corrector scheme exhibits the best performance. Actually, the advantage of the Lawson scheme in both accuracy and stability has also been reported recently in the Boltzmann community \cite{CrouseillesEinkemmerMassot2020}. \subsection{Quantum harmonic oscillator in 2-D phase space} The third example is the quantum harmonic oscillator $V(x) = \frac{1}{2} m \omega x^2$. In this situation, the ${\rm \Psi} \textup{DO}$ reduces to a first-order derivative, \begin{equation}\label{Wigner_harmonic} \frac{\partial }{\partial t} f(x, k, t) + \frac{\hbar k}{m} \nabla_{x} f(x, k, t) - \frac{1}{\hbar} \nabla_{x} V(x) \nabla_{k} f(x, k, t) = 0. \end{equation} The exact solution is given by $f(x, k, t) = f(x(t), k(t), 0)$, where $(x(t), k(t))$ obeys a (reverse-time) Hamiltonian system ${\partial x}/{\partial t} = -{\hbar k}/{m}, {\partial k}/{\partial t} = {m\omega x}/{\hbar}$, and has the following form \begin{equation} \begin{split} &x(t) = \cos \left(\sqrt{\omega} t\right) x(0) - \frac{\hbar}{m \sqrt{\omega}} \sin \left(\sqrt{ \omega}t\right) k(0), \\ &k(t) =\frac{m\sqrt{\omega}}{\hbar}\sin \left(\sqrt{\omega} t\right) x(0) + \cos \left(\sqrt{ \omega}t\right) k(0). \end{split} \end{equation} \begin{example} Consider the quantum harmonic oscillator $V(x) = \frac{m \omega x^2}{2}$ and a Gaussian wavepacket $f_0(x, k) = \pi^{-1} \mathrm{e}^{-\frac{(x-1)^2}{2} - 2k^2}$ adopted as the initial condition.
Here we choose $\omega = (\pi/5)^2$ so that the wavepacket returns to the initial state at the final time $T = 10$. \end{example} \begin{figure}[!h] \centering \subfigure[$\Delta x = 0.3$. \label{harmonic_comp_dx_03}]{ {\includegraphics[width=0.32\textwidth,height=0.22\textwidth]{./comp_integrator_dx03_serial.pdf}} {\includegraphics[width=0.32\textwidth,height=0.22\textwidth]{./comp_integrator_dx03_nb10.pdf}} {\includegraphics[width=0.32\textwidth,height=0.22\textwidth]{./comp_integrator_dx03_nb20.pdf}}} \centering \subfigure[$\Delta x = 0.2$. \label{harmonic_comp_dx_02}]{ {\includegraphics[width=0.32\textwidth,height=0.22\textwidth]{./comp_integrator_dx02_serial.pdf}} {\includegraphics[width=0.32\textwidth,height=0.22\textwidth]{./comp_integrator_dx02_nb10.pdf}} {\includegraphics[width=0.32\textwidth,height=0.22\textwidth]{./comp_integrator_dx02_nb20.pdf}}} \\ \centering \subfigure[$\Delta x = 0.1$.]{ {\includegraphics[width=0.32\textwidth,height=0.22\textwidth]{./comp_integrator_dx01_serial.pdf}} {\includegraphics[width=0.32\textwidth,height=0.22\textwidth]{./comp_integrator_dx01_nb10.pdf}} {\includegraphics[width=0.32\textwidth,height=0.22\textwidth]{./comp_integrator_dx01_nb20.pdf}}} \\ \centering \subfigure[$\Delta x = 0.05$. \label{harmonic_comp_dx_005}]{ {\includegraphics[width=0.32\textwidth,height=0.22\textwidth]{./comp_integrator_dx005_serial.pdf}} {\includegraphics[width=0.32\textwidth,height=0.22\textwidth]{./comp_integrator_dx005_nb10.pdf}} {\includegraphics[width=0.32\textwidth,height=0.22\textwidth]{./comp_integrator_dx005_nb20.pdf}}} \\ \centering \subfigure[$\Delta x = 0.025$.
\label{harmonic_comp_dx_0025}]{ {\includegraphics[width=0.32\textwidth,height=0.22\textwidth]{./comp_integrator_dx0025_serial.pdf}} {\includegraphics[width=0.32\textwidth,height=0.22\textwidth]{./comp_integrator_dx0025_nb10.pdf}} {\includegraphics[width=0.32\textwidth,height=0.22\textwidth]{./comp_integrator_dx0025_nb20.pdf}}} \caption{\small Quantum harmonic oscillator: A comparison among different integrators under serial and parallel implementations (left: serial, middle: $n_{nb}=10$, right: $n_{nb}=20$). LPC1 definitely outperforms other integrators, especially when $\Delta x$ is small. \label{harmonic_comp_integrator}} \end{figure} \begin{figure}[!h] \centering \subfigure[Operator splitting scheme (OS).]{{\includegraphics[width=0.32\textwidth,height=0.22\textwidth]{./convergence_nb10_OS.pdf}} {\includegraphics[width=0.32\textwidth,height=0.22\textwidth]{./convergence_nb20_OS.pdf}} {\includegraphics[width=0.32\textwidth,height=0.22\textwidth]{./OS_mass_nb.pdf}}} \\ \centering \subfigure[One-step Lawson predictor-corrector scheme (LPC1).]{{\includegraphics[width=0.32\textwidth,height=0.22\textwidth]{./convergence_nb10_LPC1.pdf}} {\includegraphics[width=0.32\textwidth,height=0.22\textwidth]{./convergence_nb20_LPC1.pdf}} {\includegraphics[width=0.32\textwidth,height=0.22\textwidth]{./LPC1_mass_nb.pdf}}} \\ \centering \subfigure[Two-step Lawson predictor-corrector scheme (LAPC2).]{{\includegraphics[width=0.32\textwidth,height=0.22\textwidth]{./convergence_nb10_LAPC2.pdf}} {\includegraphics[width=0.32\textwidth,height=0.22\textwidth]{./convergence_nb20_LAPC2.pdf}} {\includegraphics[width=0.32\textwidth,height=0.22\textwidth]{./LAPC2_mass_nb.pdf}}} \\ \centering \subfigure[Three-step Lawson predictor-corrector scheme (LAPC3).]{{\includegraphics[width=0.32\textwidth,height=0.22\textwidth]{./convergence_nb10_LAPC3.pdf}} {\includegraphics[width=0.32\textwidth,height=0.22\textwidth]{./convergence_nb20_LAPC3.pdf}}
{\includegraphics[width=0.32\textwidth,height=0.22\textwidth]{./LAPC3_mass_nb.pdf}}} \caption{\small Quantum harmonic oscillator: The convergence (left: $n_{nb}=10$, middle: $n_{nb}=20$) and $\varepsilon_{\textup{mass}}(t)$ (right) of different integrators. LPC1 can achieve fourth-order convergence in $\Delta x$, while other integrators may suffer from a reduction in the convergence rate. PMBC indeed has some influence on both accuracy and mass conservation, but fortunately these effects can be eliminated when $n_{nb} \ge 20$.} \label{harmonic_convergence} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./OS_instability.pdf} \caption{\small Quantum harmonic oscillator: The Strang operator splitting suffers from numerical instability under time step $\tau = 0.0005$ and spatial spacing $\Delta x = 0.1$, while LPC1 is stable under such a setting even up to $T = 20$. \label{OS_instability}} \end{figure} \begin{figure}[!h] \centering \subfigure[OS.\label{harmonic_visual_OS}]{ {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./harmonic_err_nb10_OS.pdf}}} \subfigure[LPC1.]{ {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./harmonic_err_nb10_LPC1.pdf}}} \\ \centering \subfigure[LAPC2.]{ {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./harmonic_err_nb10_LAPC2.pdf}}} \subfigure[LAPC3. \label{harmonic_visual_LAPC3}]{ {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./harmonic_err_nb10_LAPC3.pdf}}} \caption{\small Quantum harmonic oscillator: A visualization of numerical errors $f^{\textup{num}}(x, k, t) - f^{\textup{ref}}(x, k, t)$ at $t = 5$ induced by PMBC with $n_{nb} = 10$, $\Delta x =0.1$. It is seen that PMBC may bring in small oscillations at the junctions of adjacent patches. Worse still is the accumulation of errors near the boundary (see OS and LAPC3), which might lead to numerical instability for long-time evolution.
\label{harmonic_error_visualization}} \end{figure} The computational domain is $\mathcal{X} \times \mathcal{K} = [-12, 12] \times [-6.4, 6.4]$, which is evenly decomposed into 4 patches for the MPI implementation. The natural boundary condition is adopted at both ends, so that there is only a slight loss of mass (about $10^{-13}$) up to $T=10$. Since we mainly focus on the convergence with respect to $\Delta x$ and $n_{nb}$, simulations under $\Delta x = 0.025, 0.05, 0.1, 0.2,0.3$ and $n_{nb}=10, 15,20,30$ are performed, where the other parameters are set as follows: the time step is $\tau = 0.00002$ to avoid numerical stiffness, and $\Delta k = 0.025$ to achieve a very accurate approximation to the ${\rm \Psi} \textup{DO}$. A comparison of all integrators under different $\Delta x$ and $n_{nb}$ is presented in Figure \ref{harmonic_comp_integrator}, and the numerical errors $f^{\textup{num}} - f^{\textup{ref}}$ are visualized in Figure \ref{harmonic_error_visualization}. The convergence with respect to $\Delta x$ and the mass conservation under different $n_{nb}$ are given in Figure \ref{harmonic_convergence}. From the results, we can make the following observations. {\bf Comparison of non-splitting and splitting schemes:} It is clearly seen that LPC1 outperforms the splitting scheme and the multi-stage non-splitting schemes in accuracy, especially when $\Delta x$ is small, because it avoids both the accumulation of the splitting errors and the additional spline interpolation errors in the multi-stage Lawson schemes. For sufficiently large $\Delta x$, e.g., $\Delta x = 0.2$ or $0.3$, the performances of all integrators are comparable, as the interpolation error becomes dominant. {\bf Numerical stability:} The first-order derivative in Eq.~\eqref{Wigner_harmonic} brings in strong numerical stiffness and puts a severe restriction on the time step $\tau$ in the parallel CHAracteristic-Spectral-Mixed (CHASM) scheme.
Nevertheless, the non-splitting schemes seem to be more stable than the splitting scheme, and the one-step scheme is more stable than the multi-stage ones. In Figures~\ref{harmonic_comp_dx_03} and \ref{harmonic_comp_dx_02}, we can observe an abrupt reduction in accuracy for OS. This is induced by the accumulation of errors near the boundary (see the small oscillations in Figures \ref{harmonic_visual_OS} and \ref{harmonic_visual_LAPC3}). In fact, LPC1 turns out to be stable up to $T=20$ even under a larger time step $\tau =0.0005$ and $\Delta x = 0.1$, while OS suffers from numerical instability under such a setting (see Figure \ref{OS_instability}). {\bf Convergence with respect to $\Delta x$:} The convergence rate is plotted in Figure \ref{harmonic_convergence}. Only LPC1 achieves fourth-order convergence in $\Delta x$, in accordance with the theoretical order of cubic spline interpolation. By contrast, for the other schemes, the accumulation of errors induced by the temporal integration and the mixed interpolations contaminates the numerical accuracy, leading to a reduction in convergence order for small $\Delta x$. {\bf Influence of PMBCs:} From Figures \ref{harmonic_comp_dx_005} and \ref{harmonic_comp_dx_0025}, one can see that $n_{nb} = 10$ only brings in additional errors of about $10^{-5}$; e.g., small oscillations are found near the junctions of patches in Figure \ref{harmonic_error_visualization}. But such errors seem to be negligible when $n_{nb} \ge 15$, which also coincides with the observations made in \cite{MalevskyThomas1997}. However, the truncation of the stencil indeed has a great influence on the mass conservation, as seen in Figure \ref{harmonic_convergence}, where $\varepsilon_{\textup{mass}}$ is about $10^{-6}$ when $n_{nb}=10$ and about $10^{-9}$ when $n_{nb}=15$. Fortunately, its influence on the total mass can be completely eliminated when $n_{nb} \ge 20$.
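The closed-form characteristics used for the reference solution can be cross-checked against the matrix exponential of the underlying linear system $\dot x = -\hbar k/m$, $\dot k = m\omega x/\hbar$ (a sketch assuming $\hbar = m = 1$, as in the free-advection example); in particular, with $\omega = (\pi/5)^2$ the map at $T = 10$ is the identity, confirming that the wavepacket returns to its initial state:

```python
import numpy as np
from scipy.linalg import expm

hbar = m = 1.0
omega = (np.pi / 5) ** 2
w = np.sqrt(omega)

def M(t):
    """Closed-form characteristic map: (x(t), k(t)) = M(t) @ (x(0), k(0))."""
    return np.array([[np.cos(w * t), -hbar / (m * w) * np.sin(w * t)],
                     [m * w / hbar * np.sin(w * t), np.cos(w * t)]])

# generator of the reverse-time Hamiltonian system
A = np.array([[0.0, -hbar / m], [m * omega / hbar, 0.0]])
# M(t) should agree with expm(A t) for any t, and M(10) with the identity
```

Since $A^2 = -\omega I$, the matrix exponential is exactly the rotation-like map above, so the agreement holds to round-off.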
{\bf Efficiency:} For one-step evolution, OS requires spatial interpolations twice and the calculation of the ${\rm \Psi} \textup{DO}$ once, while LPC1 requires spatial interpolations once and the calculation of the ${\rm \Psi} \textup{DO}$ twice. The computational complexity of the multi-stage schemes, which require several interpolations and ${\rm \Psi} \textup{DO}$ evaluations per step, is definitely higher than that of OS and LPC1. \subsection{The Wigner function for the Hydrogen 1s state} The Hydrogen Wigner function is the stationary solution of the Wigner equation \eqref{eq.Wigner} with the pseudo-differential operator under the attractive Coulomb interaction $V(\bm{x}) = -1/|\bm{x} - \bm{x}_A|$, \begin{equation}\label{def.pdo} \Theta_{V}[f](\bm{x}, \bm{k}, t) = \frac{2}{c_{3, 1}\mathrm{i}} \int_{\mathbb{R}^3} \mathrm{e}^{2 \mathrm{i} (\bm{x} - \bm{x}_A) \cdot \bm{k}^{\prime}} \frac{1}{|\bm{k}^{\prime}|^2} ( f(\bm{x}, \bm{k} - \bm{k}^{\prime}, t) - f(\bm{x}, \bm{k}+\bm{k}^{\prime}, t) )\D \bm{k}^{\prime}. \end{equation} The twisted convolution of the form \eqref{def.pdo} can be approximated by the truncated kernel method \cite{VicoGreengardFerrando2016,GreengardJiangZhang2018}. For the 1s orbital, $\phi_{\textup{1s}}(\bm{x}) = \frac{1}{2\sqrt{2} \pi^2} \exp( - |\bm{x}|)$, and the corresponding Wigner function reads \begin{equation}\label{1s_Wigner_function} f_{\textup{1s}}(\bm{x}, \bm{k}) = \frac{1}{(2\pi)^3}\int_{\mathbb{R}^3} \phi_{\textup{1s}}(\bm{x} - \frac{\bm{y}}{2}) \phi_{\textup{1s}}^\ast(\bm{x} + \frac{\bm{y}}{2}) \mathrm{e}^{-\mathrm{i} \bm{k} \cdot \bm{y}} \D \bm{y}.
\end{equation} Although an explicit formula is too complicated to obtain \cite{PraxmeyerMostowskiWodkiewicz2005}, the Hydrogen Wigner function of the 1s state can be approximated to high accuracy by the discrete Fourier transform of Eq.~\eqref{1s_Wigner_function}: For $\bm{k}_{\bm{\zeta}} = \bm{\zeta} \Delta k$, \begin{equation*} f_{\textup{1s}}(\bm{x}, \bm{k}_{\bm{\zeta}}) \approx \sum_{\eta_1 = -\frac{N_{y}}{2}}^{\frac{N_{y}}{2}-1} \sum_{\eta_2 = -\frac{N_{y}}{2}}^{\frac{N_{y}}{2}-1} \sum_{\eta_3 = -\frac{N_{y}}{2}}^{\frac{N_{y}}{2}-1} \phi_{\textup{1s}}(\bm{x} - \frac{\bm{\eta} \Delta y}{2}) \phi_{\textup{1s}}^\ast(\bm{x} + \frac{\bm{\eta} \Delta y}{2}) \mathrm{e}^{- \mathrm{i} (\bm{\zeta} \cdot \bm{\eta} ) \Delta k \Delta y } (\Delta y)^3. \end{equation*} By taking $\Delta y = \frac{2\pi}{N_k \Delta k}$, the sum can be realized by FFT with $N_y =128$. The Hydrogen 1s Wigner function can be adopted as the initial and reference solutions for dynamical testing. Besides, for the multidimensional case, the reduced Wigner function $W_1(x, k, t)$, defined by the projection of $f$ onto the ($x_1$-$k_1$) plane, \begin{equation}\label{def.reduced_Wigner_function} W_1(x, k, t) = \iint_{\mathbb{R}^2 \times \mathbb{R}^2} f(\bm{x}, \bm{k}, t) \D x_2 \D x_3 \D k_2 \D k_3, \end{equation} is used for visualization. For the 1s state, the reduced Wigner function is plotted in Figure \ref{1s_wigner}; it exhibits a heavy tail in $\bm{k}$-space, as shown in Figure \ref{1s_tail}. \begin{figure}[!h] \centering \subfigure[ $W_1(x, k)$ for the 1s orbital.\label{1s_wigner}]{ \includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./1s_Wigner_init.pdf}} \subfigure[The heavy tail in momentum space.\label{1s_tail}]{ \includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./Hydro1s_kdist.pdf}} \caption{\small The Hydrogen 1s Wigner function: Plot of the reduced Wigner function $W_1(x, k)$.
\label{W1_init}} \end{figure} The computational domain is $\mathcal{X} \times \mathcal{K} = [-9, 9]^3 \times [-6.4, 6.4]^3$ with a fixed spatial step size $\Delta x = 0.3$ ($N_{x_1} = N_{x_2} = N_{x_3} = 61$), which is evenly divided into $4\times 4 \times 4$ patches and distributed over $64$ processors, each of which provides 4 threads for shared-memory parallelization using the OpenMP library. The natural boundary conditions are adopted at the domain boundaries. As the accuracy of the spline interpolation has already been tested in the above 2-D example, we investigate the convergence of the nonlocal approximation under five groups: $N_k =8, 16, 32, 64, 80$ ($\Delta k = 1.6, 0.8, 0.4, 0.2, 0.16$). The other parameters are set as follows: the stencil length in PMBC is $n_{nb} = 15$ and the time stepsize is $\tau = 0.025$. Again, a comparison of OS and LPC1 under different $\Delta k$, as well as the convergence in $\bm{k}$-space, is presented in Figures \ref{1s_comp_integrator} and \ref{1s_comp_error}. Numerical errors for the reduced Wigner function, $W_1^{\textup{num}} - W_1^{\textup{ref}}$, under $N_k = 32$ and $N_k = 64$ are visualized in Figure \ref{1s_error_visualization}. From the results, we can make the following observations. \begin{figure}[!h] \centering \subfigure[LPC1, $\varepsilon_\infty(t)$.]{ {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./Hydro1s_maxerr_LPC.pdf}}} \subfigure[OS, $\varepsilon_\infty(t)$.]{ {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./Hydro1s_maxerr_OS.pdf}}} \\ \centering \subfigure[LPC1, $\varepsilon_2(t)$.]{ {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./Hydro1s_L2err_LPC.pdf}}} \subfigure[OS, $\varepsilon_2(t)$.]{ {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./Hydro1s_L2err_OS.pdf}}} \\ \centering \subfigure[LPC1, deviation in total mass.\label{LPC_mass}]{ {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./Hydro1s_mass_LPC.pdf}}} \subfigure[OS, deviation in total mass.
\label{OS_mass}]{ {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./Hydro1s_mass_OS.pdf}}} \\ \centering \subfigure[Convergence of $\varepsilon_{\infty}$ at $t=5$. \label{1s_convergnce_maxerr}]{ {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./1s_LPC_convergence.pdf}}} \subfigure[Convergence of $\varepsilon_{2}$ at $t=5$. \label{1s_convergnce_L2err}]{ {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./1s_OS_convergence.pdf}}} \caption{\small The Hydrogen 1s Wigner function: The performance of TKM under different $\Delta k$, with $\Delta x = 0.3$. The convergence of TKM is verified, albeit with a lower convergence rate due to errors caused by the spline interpolation and the truncation of $\bm{k}$-space. In addition, LPC1 still outperforms OS in both accuracy and mass conservation. \label{1s_comp_integrator}} \end{figure} \begin{figure}[!h] \centering \subfigure[Time evolution of $\varepsilon_{\infty}$.]{ {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./Hydro1s_maxerr_comp.pdf}}} \subfigure[Time evolution of $\varepsilon_{2}$.]{ {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./Hydro1s_L2err_comp.pdf}}} \caption{\small The Hydrogen 1s Wigner function: The non-splitting scheme outperforms the Strang splitting in accuracy. \label{1s_comp_error}} \end{figure} \begin{figure}[!h] \centering \subfigure[LPC1, $N_k = 32$.]{ {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./1s_err_LPC_Nk32.pdf}}} \subfigure[OS, $N_k = 32$.]{ {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./1s_err_OS_Nk32.pdf}}} \\ \centering \subfigure[LPC1, $N_k =64$.\label{1s_error_visual_LPC_N64}]{ {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./1s_err_LPC_Nk64.pdf}}} \subfigure[OS, $N_k =64$. 
\label{1s_error_visual_OS_N64}]{ {\includegraphics[width=0.48\textwidth,height=0.27\textwidth]{./1s_err_OS_Nk64.pdf}}} \caption{\small Hydrogen 1s Wigner function: A visualization of the numerical errors $W_1^{\textup{num}}(x, k, t) - W_1^{\textup{ref}}(x, k, t)$ at $t = 5$ induced by the truncation of $\bm{k}$-space. Since the 1s Wigner function is not compactly supported in $[-6.4, 6.4]^3$, small errors are found near the $\bm{k}$-boundary, as well as near the $\bm{x}$-boundary due to the natural boundary condition. \label{1s_error_visualization}} \end{figure} {\bf Convergence with respect to $\Delta k$}: The convergence of TKM is clearly verified in Figures \ref{1s_convergnce_maxerr} and \ref{1s_convergnce_L2err}, albeit the convergence rate is slower than expected due to the mixture of various error sources. Since the initial 1s Wigner function is not compactly supported in $[-6.4, 6.4]^3$ (see Figure \ref{1s_tail}), the overlap with the periodic image may produce small oscillations near the $\bm{k}$-boundary, which are also visible in Figures \ref{1s_error_visual_LPC_N64} and \ref{1s_error_visual_OS_N64}. {\bf Comparison of LPC1 and OS}: Nonetheless, with a $61^3 \times 64^3$ uniform grid and the LPC1 integrator, CHASM can still achieve a relative maximal error of about $3.45\%$ and a relative $L^2$-error of about $7.41\%$ for the reduced Wigner function \eqref{def.reduced_Wigner_function} up to $T=5$, where $\max|f_{\textup{1s}}(\bm{x}, \bm{k})| = 1/\pi^3 \approx 0.0323$ and $\sqrt{\iint |f_{\textup{1s}}(\bm{x}, \bm{k})|^2 \D \bm{x} \D \bm{k}} \approx 0.0635$. When $N_k = 80$, the relative maximal error and $L^2$-error reduce to $2.93\%$ and $6.33\%$, respectively. By contrast, when the Strang splitting is adopted under the same mesh size $61^3 \times 64^3$, the relative maximal error is $6.20\%$ and the relative $L^2$-error is about $11.02\%$. 
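For reference, if $\varepsilon_\infty$ and $\varepsilon_2$ are taken to be the usual relative sup-norm and $L^2$-norm errors between the numerical and reference reduced Wigner functions on a shared grid (an assumption; the precise definitions used in the experiments may differ by quadrature weights), they can be computed as:

```python
import numpy as np

def rel_errors(w_num, w_ref):
    # Relative sup-norm and relative L2-norm errors on a common grid.
    diff = w_num - w_ref
    eps_inf = np.abs(diff).max() / np.abs(w_ref).max()
    eps_2 = np.linalg.norm(diff) / np.linalg.norm(w_ref)
    return float(eps_inf), float(eps_2)
```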
It is also clearly seen in Figure \ref{1s_comp_error} that the non-splitting scheme outperforms the splitting scheme for $N_k \ge 32$ in accuracy. {\bf Mass conservation}: A slight deviation of the total mass is observed during the time evolution. From Figures \ref{LPC_mass} and \ref{OS_mass}, one can see that $\varepsilon_{\textup{mass}}$ of LPC1 up to $t=5$ is about $0.66\%$, regardless of $N_k$, while that of OS is about $1.35\%$ when $N_k = 64$ and becomes even larger as $N_k$ decreases. \section*{Acknowledgement} This research was supported by the National Natural Science Foundation of China (No.~1210010642), the Projects funded by China Postdoctoral Science Foundation (No.~2020TQ0011, 2021M690227) and the High-performance Computing Platform of Peking University. SS is partially supported by Beijing Academy of Artificial Intelligence (BAAI). The authors are sincerely grateful to Haoyang Liu and Shuyi Zhang at Peking University for their technical support on the computing environment, which has greatly facilitated our numerical simulations. \bibliographystyle{spmpsci}
\section{Introduction} A scalable fault-tolerant universal quantum computer demands quantum error correction codes to protect qubit information from decoherence noise~\cite{shor1995,NC10}. One promising type of quantum error correcting code is the Bosonic quantum codes, which encode a logical qubit into a quantum harmonic oscillator such that a two-dimensional code space lies in a corner of an infinite-dimensional Hilbert space. This type of quantum error correcting code includes cat codes~\cite{Cochrane1999,Ralph2003,Leghtas2013,mirrahimi2014,bergmann16}, Gottesman-Kitaev-Preskill (GKP) codes~\cite{GKP2001}, and other rotation-symmetric codes~\cite{michael2016,albert2018,grimsmo2020}. Significant experimental progress has been made on encoding qubit information into cat states~\cite{ofek2016} and GKP states~\cite{fluhmann2019,campagne2020} in superconducting cavities as well as trapped ions, along with many theoretical proposals on state preparation and engineering~\cite{terhal2016,puri2017,Weigand2018,shi2019}. A natural question is then how to experimentally characterize these prepared states in harmonic oscillators. To implement fault-tolerant quantum computing with Bosonic quantum error correcting codes, efficient and reliable quantum certification of scalable Bosonic code state preparations is essential. The most common approach to characterize a quantum state in experiments is quantum state tomography~\cite{Lvo09,DAr03}, which, however, requires a huge number of measurements and a long classical postprocessing time. Quantum state tomography cannot handle multi-mode entangled nonGaussian states. Quantum tomography with neural networks~\cite{tiunov2020,Ahmed2021PRL,Ahmed2021PRR,zhu2022,wu2022} markedly reduces the required number of measurements and shortens the classical postprocessing time, but still cannot deal with multi-mode quantum states. 
In practical applications, physicists have prior knowledge of the classical description of a target quantum state. Hence, rather than fully characterizing an experimental quantum state, in these scenarios physicists can apply quantum certification~\cite{Eis20} to efficiently determine whether a prepared quantum state is close enough to a certain target state. L.\ Aolita et al. investigated quantum certification of photonic states~\cite{Aol15}, including Gaussian states and those states prepared in Boson sampling~\cite{aaronson2013} and Knill–Laflamme–Milburn (KLM) schemes~\cite{knill2001}. Later, U.\ Chabaud developed another verification protocol for quantum outputs in Boson sampling~\cite{Uly20}. Up to now, how to certify cat states and GKP states, without performing quantum state tomography or fidelity estimation~\cite{da11,Uly19}, has remained open. On the other hand, although much work has been done on efficient verification of many-qubit states, the different nature of CV states and qubit states means that certification approaches for many-qubit states~\cite{pallister2018} cannot be directly applied to CV quantum systems~\cite{liu2021}. In this paper, we generalize the concept of a fidelity witness~\cite{Aol15,Glu18} to a witness of a Bosonic code space, and propose protocols to certify the two-component cat code space, the four-component cat code space, and a realistic GKP code space with finite truncation in phase space. The measurements in all the protocols are experimentally friendly, being either homodyne detections or heterodyne detections. We propose certification protocols for both the resource states of CV fault-tolerant measurement-based quantum computing~\cite{men14} and the quantum outputs of CV instantaneous quantum polynomial-time (IQP) circuits~\cite{douce2017} by estimating fidelity witnesses. \section{Certification of Cat State Code Space} Let $\mathcal H$ denote the infinite-dimensional Hilbert space of a quantum harmonic oscillator. 
A quantum device outputs $n$ copies of a quantum state, $\rho^{\otimes n}$, on $\mathcal H^{\otimes n}$. Given a Bosonic code logical subspace $\bar{\mathcal H}\subset \mathcal H$, an experimenter, who has no knowledge of $\rho$, wants to determine whether $\tr_{\bar{\mathcal H}}\rho \ge 1-\epsilon$, by applying local measurements on each copy of $\rho$. To certify whether a single-mode state $\rho$ falls on $\bar{\mathcal H}$, we measure an observable $W$, which satisfies the following conditions: (i) if $\rho$ falls on $\bar{\mathcal H}$, i.e. $\tr_{\bar{\mathcal H}}\rho=1$, then $\braket{W}_{\rho}=1$; (ii) otherwise, $\braket{W}_{\rho}<1$. We call the observable $W$ a witness of the code space $\bar{\mathcal H}$. By estimating $\braket{W}_\rho$, we can determine whether to accept $\rho$ as a reliable Bosonic code state or not. In this section, we introduce code witnesses for different cat codes and explain how to estimate their mean values by Gaussian measurements. A Schr\"odinger cat state is a superposition of coherent states with opposite amplitudes~\cite{buvzek1995} and can be used to encode qubit information~\cite{Cochrane1999}. We call this code the two-component cat code to differentiate it from the cat code with a superposition of four coherent states. A two-component cat code space $\bar{\mathcal H}$ is spanned by \begin{align*} &\ket{\bar{0}}= \frac{1}{\sqrt{2(1+e^{-2|\alpha|^2})}}\left(\ket{\alpha}+ \ket{-\alpha}\right)\\ &\ket{\bar{1}}= \frac{1}{\sqrt{2(1+e^{-2|\alpha|^2})}}\left(\ket{\alpha}- \ket{-\alpha}\right). \end{align*} A two-component cat code witness is \begin{equation}\label{eq:t-catwitness} W=\mathds{1}-\frac{\left(\hat{a}^{\dagger 2}-\alpha^{*\,2}\right)\left(\hat{a}^{2}-\alpha^2\right)}{2}. \end{equation} The two-component cat code space is the degenerate ground space of the Hamiltonian \begin{equation} \hat{H}_{tCat}=\left(\hat{a}^{\dagger 2}-\alpha^{*\,2}\right)\left(\hat{a}^{2}-\alpha^2\right) \end{equation} with eigenvalue zero. 
$\hat{H}_{tCat}$ can be written as \begin{equation} \hat{H}_{tCat}=0\left( \ket{\bar{0}}\bra{\bar{0}}_{tCat}+\ket{\bar{1}}\bra{\bar{1}}_{tCat}\right)+\lambda_1 \ket{\psi}\bra{\psi}+\cdots \end{equation} where $\ket{\psi}$ is the first excited state of $\hat{H}_{tCat}$ and $\lambda_1$ is the first excited energy. Then \begin{equation} \mathds{1}-\frac{\hat{H}_{tCat}}{2}=\ket{\bar{0}}\bra{\bar{0}}_{tCat}+\ket{\bar{1}}\bra{\bar{1}}_{tCat} +\left(1-\frac{\lambda_1}{2} \right)\ket{\psi}\bra{\psi}-\cdots \end{equation} As $\lambda_1\ge 2$, we get \begin{equation} \mathds{1}-\frac{\hat{H}_{tCat}}{2}\le \ket{\bar{0}}\bra{\bar{0}}_{tCat}+\ket{\bar{1}}\bra{\bar{1}}_{tCat}. \end{equation} Furthermore, a state $\rho$ satisfies $\tr\left[\rho \left(\mathds{1}-\frac{\hat{H}_{tCat}}{2}\right)\right]=1$ if and only if $\rho$ falls on the two-component cat code space. The witness of the squeezed two-component cat code space can be obtained by using the fact that the squeezing operation is a Gaussian operation inducing a linear transformation on phase space. To clarify how to measure this witness, for simplicity, we rewrite it in terms of quadrature operators when $\alpha\in \mathbb R$, \begin{align}\notag W_{tCat}&=\frac{3-2\alpha^4}{4}\mathds{1}-\frac{1}{12}(\hat{x}^4+\hat{p}^4) -\frac{1}{12}\Bigg[\left(\frac{\hat{x}+\hat{p}}{\sqrt{2}}\right)^4+ \\ \label{eq:catcodewitnessquadrature} & \left(\frac{\hat{x}-\hat{p}}{\sqrt{2}}\right)^4\Bigg] +\frac{1}{2}\left( 1+\alpha^2\right)\hat{x}^2 +\frac{1}{2}\left(1-\alpha^2\right)\hat{p}^2, \end{align} where $\hat{x}=\frac{\hat{a}+\hat{a}^\dagger}{\sqrt{2}}$ and $\hat{p}=\frac{\hat{a}-\hat{a}^\dagger}{\sqrt{2}\text{i}}$ are the position and momentum operators. The expectation value of such a witness can be estimated by homodyne detections on at most four different quadrature bases, i.e. $\hat{x}$, $\hat{p}$, $\frac{\hat{x}+\hat{p}}{\sqrt{2}}$ and $\frac{\hat{x}-\hat{p}}{\sqrt{2}}$. 
The witness (\ref{eq:t-catwitness}) can also be rewritten as \begin{equation} W_{tCat}=-\frac{1}{2}\hat{a}^2\hat{a}^{\dagger\,2}+2\hat{a}\hat{a}^{\dagger}+\frac{1}{2}(\alpha^{* 2}\hat{a}^2+\alpha^2\hat{a}^{\dagger\,2})-\frac{1}{2}|\alpha|^4. \end{equation} Each term in the above expression is a product of annihilation operators and creation operators in anti-normal order. Using the optical equivalence principle~\cite{scully1997}, i.e., \begin{equation} \braket{\hat{a}^m\hat{a}^{\dagger\, n}}=\int \text{d}^2 \alpha Q_\rho(\alpha) \alpha^m\alpha^{* n} , \quad m,n\in \mathbb N, \end{equation} $\braket{W_{tCat}}_\rho$ can be estimated by sampling the Husimi Q-function $Q_\rho(\alpha):=\braket{\alpha|\rho|\alpha}/\pi$ using heterodyne detections. The four-component cat code, which is a superposition of four coherent states, was introduced to protect qubit information from photon-loss errors~\cite{mirrahimi2014}. A four-component cat code space $\bar{\mathcal H}$ with even parity is spanned by \begin{align*} &\ket{\bar{0}}=\frac{1}{\sqrt{2(1+e^{-2|\alpha|^2})}}\left(\ket{\alpha}+ \ket{-\alpha}\right), \\ &\ket{\bar{1}}=\frac{1}{\sqrt{2(1+e^{-2|\alpha|^2})}}\left(\ket{\text{i}\alpha}+ \ket{-\text{i}\alpha}\right). \end{align*} To certify whether a single-mode state $\rho$ falls on $\bar{\mathcal H}$, we measure the witness \begin{equation}\label{eq:fourCat} W_{fCat}=\frac{\mathds{1}+(-1)^{\hat{n}}}{2}-\frac{\hat{H}_{fCat}}{24}, \end{equation} where $\hat{n}=\hat{a}^\dagger\hat{a}$ is the photon number operator, and $ \hat{H}_{fCat}:=\left(\hat{a}^{\dagger 2}+\alpha^{* 2}\right)\left(\hat{a}^{\dagger 2}-\alpha^{* 2}\right)\left(\hat{a}^{2}-\alpha^2\right)\left(\hat{a}^{2}+\alpha^2\right)$. The first term is the photon-number parity, proportional to the value of the Wigner function at the origin, and can be measured by a parity measurement. 
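The heterodyne estimation of such anti-normally ordered moments can be sketched with a toy Monte-Carlo experiment. Assume, purely for illustration, that the device outputs a coherent state $\ket{\beta}$: its heterodyne outcomes are then distributed according to $Q(\alpha)=\mathrm{e}^{-|\alpha-\beta|^2}/\pi$, i.e.\ $\beta$ plus circular Gaussian noise of variance $1/2$ per quadrature, so moments such as $\braket{\hat{a}\hat{a}^\dagger}=|\beta|^2+1$ are recovered as sample averages:

```python
import numpy as np

rng = np.random.default_rng(7)
beta = 1.5 + 0.5j                 # hypothetical coherent-state amplitude
n_shots = 200_000
# Heterodyne outcomes follow the Husimi Q-distribution of |beta>:
# beta plus circular Gaussian noise, variance 1/2 per quadrature.
samples = beta + (rng.standard_normal(n_shots)
                  + 1j * rng.standard_normal(n_shots)) / np.sqrt(2)
# Optical equivalence: <a a^dagger> = E_Q[alpha alpha*] = |beta|^2 + 1.
est = float(np.mean(samples * samples.conj()).real)
exact = abs(beta) ** 2 + 1
```

In the same way, every anti-normally ordered term of the cat-code witnesses becomes a polynomial sample average over the recorded heterodyne outcomes.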
The second term can be written as \begin{align}\notag \frac{\hat{H}_{fCat}}{24}=&\hat{a}^4\hat{a}^{\dagger\,4}/24-2/3\hat{a}^3\hat{a}^{\dagger\, 3}+3\hat{a}^2\hat{a}^{\dagger\,2}-4\hat{a}\hat{a}^\dagger-\alpha^4/24\hat{a}^{\dagger\,4}\\ \label{eq:fourcomponentCatwitnessantinormal} &-\alpha^{*\, 4}/24\hat{a}^4+(|\alpha|^8/24+1)\mathds{1}. \end{align} Again the expectation value of each term can be estimated by heterodyne detections. A squeezed two-component cat code~\cite{schlegel2022} consists of superpositions of two squeezed coherent states \begin{align*} &\ket{\bar{0}}\propto \left(D(-\alpha)+ D(\alpha)\right)S(r)\ket{0}\\ &\ket{\bar{1}}\propto \left(D(-\alpha)- D(\alpha)\right)S(r)\ket{0}, \end{align*} where $S(r):= \text{e}^{1/2(r^*\hat{a}^2-r\hat{a}^{\dagger\, 2})}$ is a squeezing operation, $r\in \mathbb C$ is a squeezing parameter, and $\text{arg}(r)=\text{arg}(\alpha)/2$ such that the squeezed quadrature is in the same direction as that of the displacement operation. For simplicity, we suppose $\alpha\in\mathbb R$. From the fact that $D(\alpha)S(r)=S(r)D(\alpha \text{e}^{r})$, we obtain a witness of the squeezed two-component cat code space \begin{equation}\label{eq:witnessSqueezedCat} W_{stCat}=\mathds{1}-\frac{S(r) \left(\hat{a}^{\dagger 2}-\alpha^{2}\text{e}^{2r}\right)\left(\hat{a}^{2}-\alpha^2\text{e}^{2r}\right)S(r)^\dagger}{2}. 
\end{equation} Using the fact that $S(r)\hat{x}S(r)^\dagger=\text{e}^{-r}\hat{x}$ and $S(r)\hat{p}S(r)^\dagger=\text{e}^{r}\hat{p}$, the witness~(\ref{eq:witnessSqueezedCat}) can be rewritten in terms of quadrature operators \begin{align*} W_{stCat}=&\frac{3-2\alpha^4 \text{e}^{4r}}{4}\mathds{1}-\frac{1}{12}\left(\text{e}^{-4r}\hat{x}^4+\text{e}^{4r}\hat{p}^4\right)\\ &-\frac{1}{48}\left[\left(\text{e}^{-r}\hat{x}+\text{e}^{r}\hat{p}\right)^4 + \left(\text{e}^{-r}\hat{x}-\text{e}^{r}\hat{p}\right)^4\right]\\ &+\frac{1}{2}\left( \text{e}^{-2r}+\alpha^2 \right)\hat{x}^2+\frac{1}{2}\left( 1-\alpha^2\text{e}^{2r}\right)\text{e}^{2r}\hat{p}^2, \end{align*} which can be estimated by applying homodyne detections at four quadratures. \iffalse \begin{align*} \hat{a}^{\dagger\, 4}=&\frac{1}{4}(\hat{x}-\text{i}\hat{p})^4\\ =&\frac{1}{4}\Big\{(1+\text{i})^2\hat{x}^4 +(\text{i}-1)^2\hat{p}^4-(\hat{x}+\hat{p})^4-2(\hat{x}^2\hat{p}^2+\hat{p}^2\hat{x}^2)\\ &-\text{i}(1+\text{i})\left[\hat{x}^2(\hat{x}+\hat{p})^2+(\hat{x}+\hat{p})^2\hat{x}^2\right]-\text{i}(\text{i}-1)\left[\hat{p}^2(\hat{x}+\hat{p})^2+(\hat{x}+\hat{p})^2\hat{p}^2\right]\Big\}\\ \end{align*} Using the fact that $\hat{x}^2(\hat{x}+\hat{p})^2+(\hat{x}+\hat{p})^2\hat{x}^2=\frac{1}{6}[(2\hat{x}+\hat{p})^4+\hat{p}^4]-\frac{1}{3}[\hat{x}^4+(\hat{x}+\hat{p})^4]-1$ and $\hat{p}^2(\hat{x}+\hat{p})^2+(\hat{x}+\hat{p})^2\hat{p}^2=\frac{1}{6}[(\hat{x}+2\hat{p})^4+\hat{x}^4]-\frac{1}{3}[\hat{p}^4+(\hat{x}+\hat{p})^4]-1$, we have \begin{equation} \hat{a}^{\dagger\, 4}=\frac{1}{4}\left[\frac{1+5\text{i}}{2}\hat{x}^4+\frac{1-5\text{i}}{2}\hat{p}^4-2(\hat{x}+\hat{p})^4-\frac{1}{3}(\hat{x}-\hat{p})^4+\frac{1-\text{i}}{6}(2\hat{x}+\hat{p})^4+\frac{1+\text{i}}{6}(\hat{x}+2\hat{p})^4\right] \end{equation} Hence \begin{align*} \alpha^4\hat{a}^{\dagger 4}+\alpha^{* 4}\hat{a}^4= 
&(\text{Re}(\alpha^4)-5\text{Im}(\alpha^4))\hat{x}^4+(\text{Re}(\alpha^4)+5\text{Im}(\alpha^4))\hat{p}^4-\text{Re}(\alpha^4)\left[4(\hat{x}+\hat{p})^4+\frac{2}{3}(\hat{x}-\hat{p})^4\right]\\ &+\frac{1}{3}(\text{Re}(\alpha^4)+\text{Im}(\alpha^4))(2\hat{x}+\hat{p})^4+\frac{1}{3}(\text{Re}(\alpha^4)-\text{Im}(\alpha^4))(\hat{x}+2\hat{p})^4. \end{align*} \fi \section{Certification of a realistic GKP state} GKP states are defined to be simultaneous eigenstates of two commuting displacement operators $S_q=e^{2\text{i}\sqrt{\pi} \hat{x}}$ and $S_p=e^{-2\text{i} \sqrt{\pi} \hat{p}}$, with eigenvalue one~\cite{GKP2001}. A qubit is encoded into a GKP state in the following way: \begin{align} \ket{\bar{0}}_{iGKP}&:=\sum_{k=-\infty}^{\infty} \int \text{d}x\, \delta(x-2k\sqrt{\pi})\ket{x}, \\ \ket{\bar{1}}_{iGKP}&:=\sum_{k=-\infty}^{\infty} \int \text{d}x\, \delta\left(x-(2k+1)\sqrt{\pi}\right)\ket{x}. \end{align} Ideal GKP states are coherent superpositions of position eigenstates, demanding infinite squeezing, which is not physically realistic. From now on, we consider realistic GKP states with finite squeezing. A realistic GKP state replaces position eigenstates by finitely squeezed states, i.e., \begin{align} \ket{\bar{0}}_{rGKP}&\propto \sum_{k=-\infty}^{\infty} e^{- 4k^2\sigma^2 \pi } D(2k\sqrt{\pi}) \ket{\psi_x}, \\ \ket{\bar{1}}_{rGKP}&\propto \sum_{k=-\infty}^{\infty} e^{- (2k+1)^2\sigma^2 \pi} D((2k+1)\sqrt{\pi}) \ket{\psi_x}, \end{align} where $D(\alpha):=\text{e}^{\text{i}\sqrt{2}(-\text{Re}(\alpha)\hat{p}+\text{Im}(\alpha)\hat{x})}$ is a displacement operator, $\ket{\psi_x}=\frac{1}{\sqrt{\sigma}\pi^{1/4}}\int \text{d}x \text{e}^{-\frac{x^2}{2\sigma^2}}\ket{x}$ is a position-squeezed vacuum state, and $0<\sigma<1$ sets the width of the Gaussian peaks. 
When $\sigma$ is small, a GKP state that is a superposition of Gaussian peaks of width~$\sigma$, separated by~$2\sqrt{\pi}$, under a Gaussian envelope of width~$1/\sigma$ in the position basis is, in the momentum basis, a superposition of Gaussian peaks of width~$\sigma$, separated by~$\sqrt{\pi}$, under a Gaussian envelope of width~$1/\sigma$. A straightforward way to certify a GKP state is to detect the eigenvalues of the two stabilizer operators $S_q$ and $S_p$ and check whether both eigenvalues are one. However, a realistic GKP state with finite squeezing can never pass this certification test. To circumvent this obstacle, one solution is to set a threshold on the eigenvalue in the certification test so as to pass those realistic GKP states which are close enough to the ideal GKP states. Nevertheless, this approach does not exclusively certify a realistic GKP state or a GKP code space with a certain target degree of squeezing. In this paper, we aim to certify a realistic GKP state with a certain target degree of squeezing. To certify a realistic GKP state, we must truncate the phase space to consider only a finite number of superpositions of squeezed coherent states. Suppose we truncate at $x=\pm\sqrt{\pi}m$ in the position quadrature and $p=\pm\sqrt{\pi}m$ in the momentum quadrature, where $m\in\mathbb N^+$. Then we obtain a GKP code space witness \begin{align}\notag &W_{rGKP}=\\ \notag &\mathds{1}-\frac{1}{(2m+1)!} \prod_{-m\le k\le m}\left(\cosh r\hat{a}^\dagger-\sinh r\hat{a} -\frac{\sqrt{\pi }k}{\sigma}\right)\cdot\text{h.c.}\\ \label{eq:GKPcode} &-\frac{1}{(2m+1)!} \prod_{-m\le k\le m}\left(\cosh r\hat{a}^\dagger+\sinh r\hat{a}+\text{i}\frac{\sqrt{\pi }k}{\sigma}\right)\cdot \text{h.c.}, \end{align} where $r=\frac{1}{2}\ln\frac{1}{\sigma}$ and $\text{h.c.}$ denotes the Hermitian conjugate. As GKP states are used in fault-tolerant CV quantum computing, we use this witness to certify the corresponding output states. 
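The structure being certified can be visualized directly from the position-space wavefunction of a realistic GKP state: a finite comb of Gaussian peaks of width $\sigma$ at $x=2k\sqrt{\pi}$ under a Gaussian envelope. The sketch below (with illustrative values of $\sigma$ and truncation $|k|\le K$, mirroring the truncation at $\pm\sqrt{\pi}m$ above) confirms the peak locations numerically:

```python
import numpy as np

sigma, K = 0.2, 4
x = np.linspace(-8, 8, 8001)
dx = x[1] - x[0]
# Realistic |0>_GKP: Gaussian peaks of width sigma at x = 2k*sqrt(pi),
# damped by the envelope e^{-4 pi sigma^2 k^2}, truncated at |k| <= K.
psi = sum(np.exp(-4 * np.pi * sigma ** 2 * k ** 2)
          * np.exp(-(x - 2 * k * np.sqrt(np.pi)) ** 2 / (2 * sigma ** 2))
          for k in range(-K, K + 1))
psi /= np.sqrt((psi ** 2).sum() * dx)             # normalize on the grid

peak = x[np.argmax(psi)]                          # central peak, at x = 0
window = (x > np.sqrt(np.pi)) & (x < 3 * np.sqrt(np.pi))
side = x[window][np.argmax(psi[window])]          # next peak, at x = 2*sqrt(pi)
```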
\section{Application in verification of fault-tolerant quantum computing} In this section, we apply the above results to certify two important types of many-mode quantum states. The first are the resource states of universal fault-tolerant CV measurement-based quantum computing~\cite{men14}. These states are CV cluster states~\cite{men06,Mil09,larsen2019,hastrup2021}, attached with GKP states. The second are the output states of CV IQP circuits~\cite{douce2017}. These states are prepared by applying unitary gates diagonal in the position quadrature to a combination of momentum-squeezed states and GKP states. The preparation circuits for both types of target states are depicted in Fig.~\ref{fig:circuits}. We first slightly revise the code witness in Eq.~(\ref{eq:GKPcode}) to obtain a fidelity witness of the GKP state \begin{equation} \ket{\bar{+}}_{rGKP}\propto \mathcal N_0 \sum_{k=-\infty}^{\infty} e^{- \sigma^2 \pi k^2} D(\sqrt{\pi}k) \ket{\psi_x}. \end{equation} As $\ket{\bar{+}}_{rGKP}$ is a grid of Gaussian peaks separated by $\sqrt{\pi}$ in the position quadrature and by $2\sqrt{\pi}$ in the momentum quadrature, we obtain the fidelity witness \begin{align}\notag &W_{\ket{\bar{+}}_{rGKP}}=\mathds{1}-\\ \notag &\frac{1}{(2m+1)!} \prod_{-m\le k\le m}\left(\cosh r\hat{a}^\dagger-\sinh r\hat{a} -\frac{\sqrt{\pi }k}{\sigma}\right)\cdot\text{h.c.}- \\ \label{eq:GKPstate} &\frac{1}{(m+1)!} \prod_{-m/2\le k\le m/2}\left(\cosh r\hat{a}^\dagger+\sinh r\hat{a}+2\text{i}\frac{\sqrt{\pi }k}{\sigma}\right)\cdot \text{h.c.} \end{align} A CV cluster state can be considered more generally as a CV graph state, where each vertex represents a quantum mode and each edge connecting vertices $i$ and $j$ represents a quantum gate $CZ:=\text{e}^{\text{i}\hat{x}_i\otimes \hat{x}_j}$. 
Suppose a target cluster state has $N_s+N_{GKP}$ modes in total, where $N_s$ modes are initially momentum-squeezed vacuum states $\ket{\psi_p}:=\frac{1}{\sqrt{\sigma}\pi^{1/4}}\int \text{d}p \text{e}^{-\frac{p^2}{2\sigma^2}}\ket{p}$ and the other $N_{GKP}$ modes are initially $\ket{\bar{+}}_{rGKP}$. The target CV cluster state is obtained by applying a CZ gate at each pair of adjacent modes $i$ and $j$ in the graph. A fidelity witness of this resource state is \begin{widetext} \begin{align}\notag W=&\left(1+\frac{N_s}{2}\right)\mathds{1}-\sum_{i=1}^{N_{GKP}}\Bigg[\frac{1}{(2m+1)!} \prod_{-m\le k\le m}\left(\frac{\text{e}^{-r}}{\sqrt{2}}\hat{x}_i-\frac{\text{e}^r}{\sqrt{2}}\text{i}\hat{\tilde{p}}_i-\frac{\sqrt{\pi }k}{\sigma}\right)\cdot h.c.\\ \label{eq:resourceState} &+\frac{1}{(m+1)!} \prod_{-m/2\le k\le m/2} \left(\frac{\text{e}^{r}}{\sqrt{2}}\hat{x}_i-\frac{\text{e}^{-r}}{\sqrt{2}}\text{i}\hat{\tilde{p}}_i+2\text{i}\frac{\sqrt{\pi }k}{\sigma}\right)\cdot h.c.\Bigg]-\frac{1}{2}\sum_{i=1}^{N_s} \left(e^{-2r}\hat{x}_{i}^2 +e^{2r}\hat{\tilde{p}}_{i}^2\right), \end{align} where $\hat{\tilde{p}}_i=\hat{p}_i-\sum_{j\in\mathcal N(i)}\hat{x}_j$ and $\mathcal N(i)$ denotes the set of modes adjacent to mode $i$ in the graph. \end{widetext} The fidelity witness in Eq.~(\ref{eq:resourceState}) contains at most $(n+2)^{4m+2}$ products of quadrature operators with maximal order $4m+2$, where $n$ is the maximal number of neighboring modes and is at most four in a square-lattice cluster state. To calculate the sample complexity required to estimate the mean value of the fidelity witness, we denote by $\sigma_k$ the uniform upper bound of the mean square of all products of $k$ quadrature operators and by $\sigma_{\le k}$ the maximum value of all $\sigma_j$ with $1\le j\le k$. Suppose $m$ is a constant, that is, we always truncate the phase space of each mode at a certain fixed value in both position and momentum quadratures. 
If we use the approach of importance sampling~\cite{flammia2011} to estimate $\braket{W}$, by Hoeffding's inequality, we find that the minimum number of copies of the state required to obtain an estimate $\omega$ such that $\text{Pr}\left(|\omega-\braket{W}_\rho|\ge\epsilon\right)\le \delta$ is upper bounded by \begin{equation} O\left[\frac{\ln1/\delta}{\epsilon^2}\left(N_{GKP}^2\text{e}^{r(4m+2)}\sigma_{\le 4m+2}+N_s^2\text{e}^{2r}\sigma_{2}\right) \right]. \end{equation} This sample complexity scales polynomially in both $N_s$ and $N_{GKP}$. A CV IQP circuit~\cite{douce2017} is a uniformly random combination of three quantum gates in the set $$\left\{Z:=e^{\text{i}\hat{x}\sqrt{\pi}}, CZ, T:=e^{\text{i}\frac{\pi}{4}[2(\hat{x}/\sqrt{\pi})^3+(\hat{x}/\sqrt{\pi})^2-2\hat{x}/\sqrt{\pi}]}\right\},$$ with input a combination of $N_s$ copies of $\ket{\psi_p}$ and $N_{GKP}$ copies of $\ket{\bar{+}}_{rGKP}$. Let $n_Z^i$ and $n_{T}^i$ denote the numbers of Z gates and T gates applied at the $i$th mode, respectively. Then the fidelity witness of the output state of a CV IQP circuit is given in Eq.~(\ref{eq:resourceState}) with \begin{equation*} \hat{\tilde{p}}_i=\hat{p}_i-n_T^i(3\hat{x}_i^2/(2\sqrt{\pi})+\hat{x}_i/2-\sqrt{\pi}/2) -n_Z^i\sqrt{\pi}-\sum_{j\in\mathcal N(i)}\hat{x}_{j}. \end{equation*} There are at most $N_{GKP}(3+n_{CZ})^{4m+2}$ terms of products of quadrature operators with order at most $4m+2$, and the absolute value of each coefficient is no larger than $(n_T\text{e}^r)^{4m+2}$. Thus the sample complexity is bounded by \begin{align*} O&\Bigg[\frac{\ln1/\delta}{\epsilon^2} \Big(N_{GKP}^2(n_T\text{e}^r)^{4m+2}(n_{CZ}+3)^{8m+4}\sigma_{\le 4m+2}+\\ &N_s^2 n_T^2 (n_{CZ}+3)^4 \sigma_{\le 4}\Big)\Bigg]. \end{align*} The sample complexity scales polynomially with respect to both the number of modes and the number of gates. 
\begin{figure} \centering \includegraphics[width=0.45\textwidth]{circuits.png} \caption{Diagrams of quantum circuits to generate the two types of target states. } \label{fig:circuits} \end{figure} \section{Methods} \label{sec:method} The four-component cat code space is the degenerate ground space of the Hamiltonian \begin{align}\notag \hat{H}=&\left(\hat{a}^{\dagger 2}+\alpha^{* 2}\right)\left(\hat{a}^{\dagger 2}-\alpha^{* 2}\right)\left(\hat{a}^{2}-\alpha^2\right)\left(\hat{a}^{2}+\alpha^2\right)\\ =&\hat{a}^{\dagger 4}\hat{a}^4-(\alpha^4\hat{a}^{\dagger 4}+\alpha^{* 4}\hat{a}^4) +|\alpha|^8 \end{align} with even photon parity. Following an idea similar to that for the two-component cat code, and using the fact that $\frac{\mathds{1}+(-1)^{\hat{n}}}{2}$ is the projection onto the even-parity subspace, we obtain the witness of the four-component cat code space as shown in Eq.~(\ref{eq:fourCat}). A realistic GKP state is a superposition of squeezed coherent states. Each component $D(\sqrt{\pi}k)\ket{\psi_x}$ has the nullifier $S(r)\left(\hat{a}^\dagger -\frac{\sqrt{\pi}k }{\sigma}\right)\left( \hat{a} -\frac{\sqrt{\pi}k}{\sigma}\right) S(r)^\dagger$. Then the superposition of $D(\sqrt{\pi}k) \ket{\psi_x}$ for $k\in \mathbb{Z}$ has the nullifier \begin{equation} S(r) \prod_{k\in \mathbb{Z}}\left(\hat{a}^\dagger -\frac{\sqrt{\pi }k}{\sigma}\right) \prod_{l\in \mathbb{Z}} \left(\hat{a}-\frac{\sqrt{\pi }l}{\sigma}\right) S(r)^\dagger. \end{equation} Similarly, any state $D( 2\sqrt{\pi} k\text{i} ) \ket{\psi_p}$ has the nullifier \begin{equation} S(\text{i} r) \left(\hat{a}^\dagger+\text{i}\frac{2\sqrt{\pi} k}{\sigma}\right) \left(\hat{a}-\text{i}\frac{2\sqrt{\pi} k}{\sigma }\right) S(\text{i}r)^\dagger. 
\end{equation} Thus, a realistic GKP state has another nullifier \begin{equation} S ( \text{i}r) \prod_{k\in \mathbb{Z}}\left(\hat{a}^\dagger +\text{i}\frac{2\sqrt{\pi} k}{\sigma }\right) \prod_{l\in \mathbb{Z}} \left(\hat{a}-\text{i}\frac{2\sqrt{\pi} l}{\sigma }\right) S(\text{i}r)^\dagger. \end{equation} Combining these two nullifiers, we obtain the fidelity witness of a realistic GKP state as shown in Eq.~(\ref{eq:GKPstate}). To obtain the fidelity witness of a cluster state attached with GKP states, we also need a witness of the CV cluster state. A cluster state is constructed from the tensor product of $n$ $\hat{p}$-squeezed vacuum states by applying CV CZ gates. A nullifier of mode $i$ ($1\le i \le n$) in an $n$-mode CV cluster state is \begin{equation} \frac{e^{-2r}\hat{x}_{i}^2 + e^{2r}\left(\hat{p}_{i} - \sum_{j\in \mathcal N(i)} \hat{x}_j\right)^2-1}{2}. \end{equation} Combining all the nullifiers, we obtain the fidelity witness in Eq.~(\ref{eq:resourceState}). The fidelity witness of output states of IQP circuits can be obtained by noticing that $$ T\hat{p}T^\dagger=\hat{p}-3\hat{x}^2/(2\sqrt{\pi})-\hat{x}/2+\sqrt{\pi}/2, $$ and following a similar strategy. The sample complexities for both the certification protocols of CV cluster states attached with GKP states and of output states of IQP circuits can be calculated using importance sampling and Hoeffding's inequality. Suppose an observable $\bm{F} =\sum_{i=1}^m \lambda_i f_i$, where each $f_i$ is a polynomial function of quadrature operators. Then, by the importance sampling approach, Hoeffding's inequality implies $\text{Pr}(|F^*-\bm{F}|>\epsilon)\le 8\text{e}^{-\frac{N\epsilon^2}{33\braket{\bm{F}^2}}}$, where $\braket{\bm{F}^2}\le m^2\max_i|\lambda_i| \max_i \braket{f_i^2}$. 
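Inverting this tail bound gives an explicit sample-size rule: requiring $8\,\mathrm{e}^{-N\epsilon^2/(33\braket{\bm{F}^2})}\le\delta$ yields $N\ge 33\braket{\bm{F}^2}\ln(8/\delta)/\epsilon^2$, which a small sketch can compute directly (the bound on $\braket{\bm{F}^2}$ is supplied by the user):

```python
import math

def samples_needed(second_moment_bound, eps, delta):
    # Smallest N with 8 * exp(-N * eps**2 / (33 * second_moment_bound)) <= delta.
    return math.ceil(33.0 * second_moment_bound * math.log(8.0 / delta) / eps ** 2)
```

For example, with $\braket{\bm{F}^2}\le 1$, $\epsilon=0.1$ and $\delta=0.01$, about $2.2\times 10^4$ copies suffice, reflecting the $\epsilon^{-2}\ln(1/\delta)$ scaling.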
As calculated in Refs.~\cite{Liu19,farias2021}, to make the failure probability of the estimation less than $\delta$, the required sample complexity is upper bounded by $O\left(\frac{\ln(1/\delta)}{\epsilon^2} m^2\max_i|\lambda_i| \max_i\braket{f_i^2}\right)$. \section{Discussion} Bosonic quantum error correcting codes are a promising way to realize universal fault-tolerant quantum computing in a qubit-into-qumode manner, as an alternative to the KLM scheme. Preparation of Bosonic code quantum states with high fidelity is a key issue for the implementation of Bosonic quantum error correction. Here we propose realistic protocols to certify the preparations of experimental Bosonic code states using Gaussian measurements. Unlike state tomography, these protocols extend to the certification of many-mode quantum states with a cost that scales efficiently in the number of modes. Most previous work on certification or verification of nonGaussian states is about nonGaussian states with finite stellar rank~\cite{Chabaud2020prl}. In contrast to quantum states in Boson sampling and KLM schemes, cat states and GKP states have infinite stellar rank, indicating higher nonGaussianity. In this paper, we first develop an approach to certify whether a CV state falls inside a certain two-dimensional Bosonic code space. If a CV state passes this certification protocol, then we can ignore the CV nature of this state and consider only the logical code space. By combining our certification approach with quantum characterization approaches for many-qubit systems, including direct fidelity estimation~\cite{flammia2011}, qubit-state verification~\cite{pallister2018} and classical shadow estimation~\cite{huang2020}, one can certify preparations of many-qubit Bosonic code states. Although here we assume independent and identical copies of the prepared quantum states, we can extend this work to the non-i.i.d.\ scenario by using the technique developed in Ref.~\cite{wu2021}. 
Hence, the protocol can be applied to verifiable blind fault-tolerant quantum computing~\cite{hayashi2015,gheorghiu2019}. A server prepares quantum states, which are claimed to be resource states for CV fault-tolerant quantum computing, and sends them to a client. After receiving these states, the client randomly chooses some of them to perform the dimension test and the fidelity test. If both tests are passed, then the remaining states can be used for measurement-based fault-tolerant quantum computing with homodyne detections. Besides verification of resource states for universal quantum computing, we also propose protocols to certify output states of IQP circuits. It has been shown that the statistics of practical homodyne measurements on position quadratures of the quantum output of IQP circuits cannot be efficiently simulated by a classical computer~\cite{douce2017}. Hence, a certified quantum output state of IQP circuits can be used to demonstrate quantum supremacy. \section{Acknowledgement} This work was supported by funding from the Hong Kong Research Grant Council through grants no.\ 17300918 and no.\ 17307520.
\section{Introduction} Liquid water is arguably one of the most important liquids due to its role in chemistry, biology and geophysics and, as such, also one of the most studied systems.\cite{RahmanStillinger1971} Despite this, a detailed understanding of the physical chemistry of water is still lacking due to its complex behaviour and unusual properties.\cite{Kauzmann1969} However, computational studies of water are rather challenging due to the presence of the various physical phenomena that conspire to make water unique, such as the cooperativity of the hydrogen bond (HB) network, large polarizability effects, strong permanent dipoles and sizeable nuclear quantum effects (NQE). \cite{StillingerScience} The role of zero-point energy (ZPE) and tunnelling effects in modifying the strength of interactions in the HB network of ambient liquid water, and the consequences for the static and dynamic properties, has been appreciated for almost three decades now.\cite{kuharski1085studystructure,wallqvist1985pimd, miller2005complexmolecularsystems} Although it is well known that NQE generally weaken intermolecular hydrogen-bonding, resulting in a less-structured liquid and concomitantly faster rotational and translational dynamics, \cite{lobaugh1997quantumwater,paesani:184507,PaesaniVoth2009,miller:154504,hernandez2006modeldependence,comp_quant_eff} there is ongoing debate regarding the magnitude of this effect. For example, while comparisons of classical and quantum (path integral) simulations of liquid water using empirical force-fields generally predict that the rates of dynamic processes are increased by around 50\% due to NQE,\cite{lobaugh1997quantumwater,paesani:184507,miller:154504} recent simulations using a water model specifically parameterized for quantum simulations suggests an enhancement of just 15\%. 
\cite{comp_quant_eff} An \textit{ab initio} PIMD approach, where the interatomic forces are calculated on-the-fly from accurate electronic structure calculations, would be very attractive to address the questions surrounding the role of NQE in liquid water. Considerable effort has gone into devising practical density functional theory (DFT) based PIMD methods \cite{marx:4077, tuckerman:5579} and much progress has been reported.\cite{PhysRevLett.91.215503, PhysRevLett.101.017801} Nevertheless, the computational expense of this route still severely limits the length- and time-scales that can be studied. In this work, we take a different route. To circumvent the computational cost associated with an \textit{ab initio} PIMD technique, we instead develop here flexible water models, which are derived by matching the interatomic forces to those from accurate electronic structure calculations,\cite{ercolessi2007force_matching} without relying on any empirical parameters or experimental input. This not only facilitates large-scale PIMD simulations with an accuracy that is similar to DFT-based PIMD calculations, but, at variance with empirical force-fields that are parameterized to reproduce experimental data, is also not plagued by a ``double-counting'' of NQE.\cite{comp_quant_eff} Furthermore, this allows us to assess the accuracy and intrinsic properties of potential DFT-based PIMD simulations as distinct from those that arise from numerical approximations, insufficient sampling and finite-size effects. However, contrary to explicit electronic structure-based PIMD simulations, the employed functional form of the recently devised q-TIP4P/F force-field entails that the resulting water model is neither polarizable nor able to simulate chemical reactions that may take place in water.\cite{comp_quant_eff} The remainder of this paper is organized as follows. 
In Section~II, we present the force-matching scheme used to derive the parameters of new flexible q-TIP4P/F-like water models. The finite temperature path-integral methods used to rigorously account for ZPE and tunnelling effects, and to investigate the influence of NQE in liquid water are described in Section~III. Thereafter, in Section~IV, we describe computational details, and in Section~V we assess the accuracy of water models derived using the force-matching procedure. The eventual performance of our newly derived water models and the influence of NQE are discussed in Section~V, which is followed by conclusions in Section~VI. \section{Force-Matching\label{sec:theory}} Empirical water force-fields are typically parameterized so as to reproduce experimental data such as the radial distribution function (RDF), structure factor, heat of vaporization and the density maximum of liquid water.\cite{RahmanStillinger1974, jorgensen_TIP, spce, TIP5P, TIP4P2005, Guillot2002} While these potentials are usually remarkably accurate in reproducing the underlying experiments, the transferability to regions of the phase diagram or situations different from that in which they have been fitted may be restricted. Furthermore, since NQE are already present in experiment, they will be counted twice when explicitly taken into account within a PIMD simulation.\cite{miller2005quantumdiffusion} By using results from accurate \textit{ab initio} electronic structure calculations, in which, contrary to experimental data, NQE are not present, this ``double-counting'' of NQE is circumvented from the outset, which permits us to study the impact of NQE in a direct and systematic manner. 
Besides the finite-difference approach,\cite{VegaAbascal2011} there are many schemes to fit empirical models to \textit{ab initio} data, including the inverse Monte Carlo \cite{IMC1,IMC2} or iterative Boltzmann inversion \cite{IBI} technique, both of which rely on Henderson's theorem, which states that a potential with only pairwise interactions is uniquely determined by the RDF up to an additive constant.\cite{Henderson1974} However, the application of Henderson's theorem is not without problems, since at finite numerical accuracy essentially indistinguishable RDFs may entail very different pair potentials.\cite{Potestio2013} Furthermore, the generation of reference RDFs from first principles by \textit{ab initio} MD (AIMD) simulations is computationally rather time-consuming, \cite{laasonen1993ailiqwater, SprikWater1996, PhysRevE.68.041505, GrossmanWater2004, KuoWater2004, FernandezWater2004, JoostWater2005, SitMarzariWater2005, lee:154507, TodorovaWater2006, SchiffmannWater2008, schmidtNPT2009, kuehnewater2009, GuidonWater2010, BanyaiNMR2010, GalliSpanuWater2011, FernandezSerra2011, GalliVdWWater2011, chun2012structure, kuehne2ptwater2012, TuckermanWater2012, kuehnewater2013, WaterPNAS2013, kuehnewaterreview2013, MP2Water2013} in particular when considering many state points to guarantee that the resulting water model is as transferable as possible. In contrast, the force-matching technique of Ercolessi and Adams,\cite{ercolessi2007force_matching} where the interaction potential is derived so as to mimic the forces of accurate reference calculations, not only includes many-body environmental effects, but also allows one to employ a higher level of theory since fewer electronic structure calculations are required, in general. 
To determine the parameters of an empirical interaction potential given \textit{ab initio} force calculations for a set of configurations, we minimize the normalized $L_1$ force distance $\|\delta\bm{F}\|_{1}$, \begin{equation} \label{eq:chi} \|\delta\bm{F}\|_{1} = \frac{1}{3} \left< \sum_{i=1}^{N} \sum_{\alpha \in (x,y,z)} \left[ \frac{|\bm{F}_{i,\alpha}^{\text{QM}} - \bm{F}_{i,\alpha}^{\text{FF}}|}{\sigma_i} \right] \right> \text{,} \end{equation} where $N$ is the number of atoms and $\sigma_i$ is the standard deviation of the force distribution $\bm{F}_{i, \alpha}$ of atom $i$ in directions $\alpha \in (x,y,z)$, while $\left< \cdot \cdot \cdot \right>$ denotes the ensemble average over configurations selected from a PIMD simulation. The quantum mechanical reference forces are denoted as $\bm{F}_{i,\alpha}^{\text{QM}}$, while $\bm{F}_{i,\alpha}^{\text{FF}}$ are the nuclear forces of the classical interaction potential. However, the minimization of Eq.~\ref{eq:chi} with respect to the parameters of $\bm{F}_{i,\alpha}^{\text{FF}}$ represents an ill-posed problem, in particular when including atomic partial charges in the optimization procedure. From this it follows that the optimization may not be stable under small variations of the corresponding parameters. This is reflected in an error landscape with many saddle points and flat areas, where the Hessian matrix is nearly singular, which leads to inaccuracies due to the limited precision of floating point arithmetic. As a consequence, an important problem of gradient-based minimization methods is the particular form of the objective function, whose derivatives with respect to the partial charges are often found to be ill-conditioned. 
Even though it is possible to mitigate this difficulty by augmenting the penalty function with additional properties such as the total force or torque with respective weights,\cite{akin-ojo2008qualityforcefields,sala2011fm, sala2012fm} here we propose to circumvent it by using the sequential least-squares quadratic programming algorithm (SLSQP) together with physically-sensible bound constraints \cite{SLSQP}. The SLSQP method treats the original problem as a sequence of constrained least-squares problems, which is equivalent to a quadratic programming algorithm for nonlinearly-constrained gradient-based optimization, hence the name. Specifically, each SLSQP step involves solving a quadratic approximation of the original objective function, where the linear term is the gradient and the quadratic term is an approximate Hessian, with first-order affine approximations of the nonlinear constraints. The approximate Hessian, which is initialized to the identity matrix, is continuously updated, while keeping it positive definite, based on the gradients and function values at subsequent steps, similar to the BFGS quasi-Newton scheme.\cite{NumericalRecipes} As a consequence, like any quasi-Newton method, the true Hessian is only approached in the limit of many iterations close to the minimum. Owing to the ill-posed nature of the problem, we search for the minimum along the direction of the modified quasi-Newton scheme by first bracketing the minimum and then using Brent's method.\cite{Brent1973} At variance with more elaborate techniques that exploit gradient information, the availability of the function's derivative is not required here. However, it should be noted that this procedure offers no guarantee that the global minimum of the optimization function is located. 
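For concreteness, a minimal sketch of how the objective of Eq.~(\ref{eq:chi}) can be evaluated is given below; the toy force arrays are hypothetical, and plain Python stands in for the actual optimization machinery:

```python
def l1_force_distance(f_qm, f_ff, sigma):
    """Normalized L1 force distance of Eq. (1): per configuration, sum
    |F_qm - F_ff| / sigma_i over atoms i and Cartesian components alpha,
    then average over configurations and divide by 3.
    f_qm, f_ff: nested lists [config][atom][component]; sigma: per atom."""
    per_config = []
    for cfg_qm, cfg_ff in zip(f_qm, f_ff):
        total = 0.0
        for i, (atom_qm, atom_ff) in enumerate(zip(cfg_qm, cfg_ff)):
            total += sum(abs(a - b) / sigma[i]
                         for a, b in zip(atom_qm, atom_ff))
        per_config.append(total)
    return sum(per_config) / (3.0 * len(per_config))

# Toy data: 2 configurations of a single atom, unit standard deviation.
f_qm = [[[1.0, 0.0, 0.0]], [[0.0, 1.0, 0.0]]]
f_ff = [[[0.7, 0.0, 0.0]], [[0.0, 0.7, 0.0]]]
print(l1_force_distance(f_qm, f_ff, sigma=[1.0]))
```

In the actual fit, this scalar would be handed to a bound-constrained SLSQP optimizer as described above.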
\subsection{Water model\label{params_desc}} The aim of this work is to use the force matching procedure outlined above to fit simple empirical force-fields to \textit{ab initio} force data; this requires us to choose a functional form for the empirical water model, within which the parameters will be optimised. Among the large number of simple point charge models that have been developed for liquid water, we have chosen the flexible q-TIP4P/F water model of Habershon \textit{et al.} \cite{comp_quant_eff}, which has been shown to offer a good reproduction of several key experimental properties of liquid water under ambient conditions, including diffusion coefficients, liquid density and liquid structure. The q-TIP4P/F water model consists of two positive charge sites of magnitude $|\frac{q}{2}|$ on the hydrogen atoms and a negative charge of magnitude $|q|$ positioned at $\bm{r}_M = \gamma \bm{r}_O + (1-\gamma)(\bm{r}_{H_1}+\bm{r}_{H_2})/2$ to ensure local charge neutrality of each water molecule. These so-called M-sites and the hydrogen atoms on different water molecules interact with each other through a simple Coulomb potential. In conjunction with a Lennard-Jones potential between the oxygen atoms, this constitutes the following pairwise-additive intermolecular potential \begin{eqnarray} \nonumber V_{\text{inter}} &=& \sum_i \sum_{j>i} 4 \epsilon \left[\left(\frac{\sigma}{r_{ij}}\right)^{12} - \left(\frac{\sigma}{r_{ij}}\right)^6\right] \\ &+& \sum_{m \in i} \sum_{n \in j} \frac{q_m q_n}{r_{mn}} \text{,} \end{eqnarray} where $r_{ij}$ is the distance between the oxygen atoms and $r_{mn}$ the distance between the partial charges in molecules $i$ and $j$. 
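A minimal sketch of the building blocks just described (M-site placement, Lennard-Jones and point-charge terms) follows; all numerical values are illustrative only, with the Coulomb term written in atomic units as in the text:

```python
def m_site(r_o, r_h1, r_h2, gamma):
    """M-site position r_M = gamma*r_O + (1 - gamma)*(r_H1 + r_H2)/2."""
    return tuple(gamma * o + (1.0 - gamma) * 0.5 * (h1 + h2)
                 for o, h1, h2 in zip(r_o, r_h1, r_h2))

def lennard_jones(r, sigma, eps):
    """4*eps*[(sigma/r)**12 - (sigma/r)**6] between oxygen atoms."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

def coulomb(q_m, q_n, r):
    """Point-charge interaction q_m*q_n/r between charge sites."""
    return q_m * q_n / r

# The LJ term vanishes at r = sigma and has depth -eps at r = 2**(1/6)*sigma.
print(lennard_jones(1.0, 1.0, 1.0))
```

With $\gamma=1$ the M-site coincides with the oxygen atom, recovering a simple three-site charge model.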
Flexibility is added to this model by an intramolecular potential, which consists of an anharmonic quartic expansion of the Morse potential and a harmonic bending term, \begin{eqnarray} V_{\text{intra}} = \sum_i \left[ \frac{1}{2} k_{\theta} (\theta_i - \theta_{\text{eq}})^2 + V_{\text{OH}}(r_{i1}) + V_{\text{OH}}(r_{i2}) \right] \!\! , \, \quad \end{eqnarray} where \begin{eqnarray} V_{\text{OH}}(r) &=& D_r \Bigl[ \alpha_r^2 (r - r_{\text{eq}})^2 - \alpha_r^3 (r - r_{\text{eq}})^3 \nonumber \\ &+& \frac{7}{12} \alpha_r^4 (r - r_{\text{eq}})^4 \Bigr]. \nonumber \end{eqnarray} Here $r_{\text{eq}}$ denotes the intramolecular O-H equilibrium distance, $r_{i1}$ and $r_{i2}$ are the two covalent O-H bonds of water molecule $i$, $\theta_{\text{eq}}$ is the equilibrium H-O-H bond angle and $\theta_{i}$ is the H-O-H bond angle in molecule $i$. In this work, the central aim is to modify the nine independent parameters of the original q-TIP4P/F water model such that it reproduces the forces determined in \textit{ab initio} calculations. In particular, we optimise these parameters for a series of different DFT functionals, resulting in several different q-TIP4P/F-like water models. \section{Path Integral Formalism\label{sec:pimd}} \subsection{Path Integral Molecular Dynamics} In the path integral molecular dynamics (PIMD) method, each quantum particle is replaced by a classical harmonic $p$-bead ring-polymer. This extended system is isomorphic to the original quantum system, enabling calculation of quantum-mechanical properties of the system by sampling the path integral phase space. 
\cite{Feynman, chandler1981isomorphism, parrinello1984moltenkcl, CeperleyRMP} The canonical quantum partition function $Z$ can be expressed in terms of the Hamiltonian $\hat H = \hat T + \hat V$ and the inverse temperature $\beta^{-1} = {k_B T}$, \begin{equation} Z = \text{Tr}\left[ e^{-\beta \hat H} \right] = \text{Tr} \left[ \left( e^{-\beta_{p} \hat H} \right)^{p} \right] = \lim_{p \rightarrow \infty}Z_p. \label{QuantPartFunc} \end{equation} Inserting $p-1$ complete sets of position eigenstates, and using the symmetric Trotter splitting to represent the Boltzmann operator, Eq.~(\ref{QuantPartFunc}) can be written in a computationally convenient form, which can be directly sampled using the Monte Carlo technique, as \begin{eqnarray} Z_p &=& \left(\frac{m}{2 \pi \beta_p }\right)^{\frac{3p}{2}}\int\!d^p \, \bm{r} \\ &\times& e^{-\beta_p \sum \limits_{k=1}^{p} \big{[}\frac{1}{2}m\omega^2_p(\bm{r}^{(k)}-\bm{r}^{(k+1)})^2+V(\bm{r}^{(k)})\big{]}_{\bm{r}^{(p+1)}=\bm{r}^{(1)} }} \text{,} \nonumber \end{eqnarray} where $p$ is the number of imaginary time slices, $m$ the particle mass and $\omega_p=p/\beta = 1/\beta_p$ the angular frequency of the harmonic spring potential between adjacent beads. The constraint $\bm{r}^{(p+1)} = \bm{r}^{(1)}$, where the superscript in parentheses denotes the bead index, is a result of the trace in Eq.~(\ref{QuantPartFunc}) and means that the corresponding $p$-bead system is a closed ring-polymer, while $\lim_{p \rightarrow \infty}{Z}_p={Z}$ is a direct consequence of the Trotter theorem, which states that \begin{equation}\label{trotter} e^{\alpha (\hat A+ \hat B)} = \lim_{p\rightarrow\infty}[e^{\frac{\alpha}{2p}\hat B}e^{\frac{\alpha}{p}\hat A}e^{\frac{\alpha}{2p}\hat B}]^p \text{.} \end{equation} The latter implies that in the limit $p\rightarrow\infty$ sampling $Z_p$ classically is equivalent to sampling the exact quantum partition function. 
\cite{parrinello1984moltenkcl} Making use of the standard Gaussian integral to introduce momenta, $Z_p$ can also be sampled using MD. If we further generalize the resulting expression to more than one particle, the quantum partition function eventually reads \begin{eqnarray} Z_p&=&\mathcal{N}\int\!d^{Np}\,\bm{r} \int\! d^{Np}\,\bm{p} \; e^{- \beta_{p} H_p(\{\bm{r}\}, \{\bm{p}\})}, \end{eqnarray} where $\mathcal{N}$ is a normalisation constant and \begin{eqnarray} \nonumber H_p(\{\bm{r}\}, \{\bm{p}\}) &=& \sum_{k=1}^{p} \Bigg[\sum_{i=1}^{N}\left(\frac{(\bm{p}_i^{(k)})^2}{2 m_i^{(k)'}}+\frac{m_i\omega_p^2}{2}(\bm{r}_i^{(k)} - \bm{r}_i^{(k+1)})^2\right) \\ &+& V(\bm{r}_1^{(k)}, \ldots,\bm{r}_N^{(k)})\Bigg] \end{eqnarray} is the so-called bead-Hamiltonian that describes the interactions between all $N$ particles of the system for all $p$ beads. Finally, we note that time-independent quantum thermal properties of position-dependent operators can now be calculated straightforwardly in PIMD simulations according to \begin{eqnarray} \langle A \rangle_p &=& \frac{\mathcal{N}}{Z_{p}}\int\!d^{Np}\,\bm{r} \int\! d^{Np}\,\bm{p} \; e^{- \beta_{p} H_p(\{\bm{r}\}, \{\bm{p}\})} A_{p}(\mathbf{r}), \end{eqnarray} where $A_{p}(\mathbf{r})$ is given as the bead-average of the operator $\hat{A}$, thus \begin{equation} A_{p}(\mathbf{r}) = \frac{1}{p} \sum_{k=1}^{p} A( \mathbf{r}^{(k)}). \end{equation} By comparing classical ($p=1$) and PIMD simulations, this approach allows one to assess the impact of NQE on time-independent observables such as RDFs. 
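The bead-averaged estimator $A_p$ and the harmonic spring term of the bead-Hamiltonian can be sketched in a few lines; the one-particle, one-dimensional toy ring polymer below is purely illustrative:

```python
def bead_average(observable, beads):
    """Estimator A_p = (1/p) * sum_k A(r^(k)) over the p beads."""
    return sum(observable(r) for r in beads) / len(beads)

def spring_energy(beads, m, omega_p):
    """Harmonic ring-polymer term (m*omega_p**2/2) * sum_k (r^(k)-r^(k+1))**2
    with the cyclic closure r^(p+1) = r^(1) (one particle in 1D here)."""
    p = len(beads)
    return 0.5 * m * omega_p**2 * sum(
        (beads[k] - beads[(k + 1) % p]) ** 2 for k in range(p))

beads = [0.9, 1.0, 1.1, 1.0]  # toy 1D ring polymer with p = 4
print(bead_average(lambda x: x * x, beads))
print(spring_energy(beads, m=1.0, omega_p=1.0))
```

For $p=1$ the spring term vanishes identically and the classical estimator is recovered.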
In order to reduce the computational effort required to calculate the long-range electrostatic interactions $p$ times, we use the ring-polymer contraction scheme of Markland and Manolopoulos.\cite{Markland2008256} Here, we split the Hamiltonian into its inter- and intramolecular contributions and limit the computationally-expensive intermolecular force calculation to a single Ewald sum at the centroid of the ring-polymer system: \begin{equation} \bm{r}_i^{(c)}= \frac{1}{p}\sum_{k=1}^p\bm{r}_i^{(k)}\text{.} \end{equation} Short-range corrections are subsequently added to account for the impact of this approximation on the actual ring-polymer beads. \subsection{Ring Polymer Molecular Dynamics} In contrast to the original PIMD approach, the ring-polymer MD (RPMD) scheme of Craig and Manolopoulos allows one to approximate dynamical properties within the path-integral framework\cite{craig:3368,RPMDreview}. The diffusion coefficient, for instance, is obtained as the time-integral of the Kubo-transformed velocity auto-correlation function $\tilde{c}_{vv} (t)$, \begin{equation}\label{Diffusion} D = \frac{1}{3} \int_0^{\infty} dt \, \tilde c_{vv} (t). \end{equation} The RPMD method approximates the quantum-mechanical Kubo-transformed time-correlation function $\tilde c_{AB} (t)$ as a classical time-correlation function calculated in the extended path integral phase-space. Thus, in RPMD, we have \begin{eqnarray} \label{RPMDcorrfunc} \tilde{c}_{AB}(t) &\approx& \frac{\mathcal{N}}{Z_p} \int\! 
d^{Np}\bm{p}\, d^{Np}\bm{r} \\ &\times& e^{-\beta_p H_p(\{\bm{r}\},\{\bm{p}\})} A_p(\{\bm{r}(0)\})B_p(\{\bm{r}(t)\})\, , \nonumber \end{eqnarray} where \begin{subequations} \begin{eqnarray} B_p(\{\bm{r}(t)\})&=&\frac{1}{p}\sum\limits_{k=1}^p B(\bm{r}_1^{(k)}(t),\ldots,\bm{r}_N^{(k)}(t)) \end{eqnarray} and \begin{eqnarray} A_p(\{\bm{r}(0)\})&=&\frac{1}{p}\sum\limits_{k=1}^p A(\bm{r}_1^{(k)}(0),\ldots,\bm{r}_N^{(k)}(0)) \end{eqnarray} \end{subequations} are averages over the beads of a closed ring-polymer. Manolopoulos and coworkers have shown that this approximation is exact in the high-temperature limit, where Eq.~\ref{RPMDcorrfunc} reduces to the classical correlation function, and also in the short-time and simple harmonic oscillator limits \cite{craig:3368, habershon2009quantumleakage, RPMDreview}. In this work, RPMD is used to calculate molecular diffusion coefficients for each of the water models developed by our force-matching approach. However, to circumvent the spurious vibrational modes which arise from the internal ring-polymer modes in RPMD simulations \cite{habershonIR}, simulations of vibrational spectra in this work employ the Partially Adiabatic Centroid Molecular Dynamics (PACMD) method \cite{hone:154103}. In this approach, the effective masses of the ring-polymer beads are adjusted so as to shift the spurious oscillations beyond the spectral range of interest \cite{parrinello1984moltenkcl}. 
Specifically, the elements of the Parrinello-Rahman mass matrix are chosen so that the internal modes of the ring-polymer are shifted to a frequency of \begin{equation} \Omega = \frac{p^{p/(p-1)}}{\beta \hbar}, \end{equation} which allows for similar integration time-steps to be used in both RPMD and PACMD simulations.\cite{habershonIR} \section{Computational Details} In attempting to generate empirical water models that are as transferable as possible, we have extracted 1500 decorrelated snapshots from PIMD simulations consisting of 125 water molecules in the constant-NPT (isothermal-isobaric) ensemble using the q-TIP4P/F water potential of Habershon \textit{et al.} \cite{comp_quant_eff}. Specifically, we have selected 125 different configurations at 1~bar pressure for each temperature over the whole liquid temperature range from $248~\text{K}$ to $358~\text{K}$ in $10~\text{K}$ steps. In this way, the resulting water model is not just parametrized to a single state point at ambient conditions but spans a range of state points from undercooled water to near the vapor phase. Force matching, as described in Section~\ref{sec:theory}, was conducted based on reference forces from DFT calculations. We employed the mixed Gaussian and plane wave approach \cite{lippert1997gaussianplanewave} as implemented in the \textsc{CP2K/Quickstep} code \cite{VandeVondele_quickstep}. In this approach the Kohn-Sham orbitals are represented by a TZV2P Gaussian basis set \cite{vandevondele2007gaussianbasis}, while the charge density is expanded in plane waves using a density cutoff of 320~Ry. The exchange and correlation (XC) energy was described by a series of common generalized gradient approximations, and norm-conserving Goedecker-Teter-Hutter pseudopotentials were used to describe the interactions between the valence electrons and the ionic cores \cite{GTH1996pseudo,hartwigsen1998separabledualspacegaussian,KrackGTH}. 
Van der Waals (vdW) interactions, which are typically left out by common local and semi-local XC functionals, are either approximated by an additional pair-potential, or by dispersion-corrected atom-centered pseudopotentials (DCACP) \cite{grimme2010d3, lilienfeld2005dcacp}. \begin{table*} \caption{Parameters of q-TIP4P/F-like water models obtained with the force-matching approach.}\label{tab:parameters} \begin{tabular}{l|c|c|c|c|c|c|c|c|c} XC Functional & $q$ [e] & $\gamma$ & $\sigma$ [$a_{0}$] & $\epsilon$ [$E_{h}$] & $ \theta_{\text{HOH}}$ [deg] & $r_{\text{OH}}$[$a_{0}$] & $D_r$ [$E_{h}$] & $k_\theta/2$ [$E_{h}$/deg$^2$] & $\alpha_{r}$ [1/$a_{0}$] \\ \hline \hline B97G \cite{B97, B97Grimme} & -1.1437 & 0.65603 & 6.0330 & 2.1035$\times 10^{-4}$ & 107.42 & 1.8099 & 0.13773 & 6.2700$\times 10^{-2}$ & 1.3671 \\ B97G-D3 & -1.1228 & 0.65798 & 6.0122 & 2.2092$\times 10^{-4}$ & 107.42 & 1.8103 & 0.13670 & 6.2647$\times 10^{-2}$ & 1.3701 \\ BLYP \cite{becke, lyp} & -1.0891 & 0.65468 & 6.0025 & 2.1267$\times 10^{-4}$ & 107.44 & 1.8296 & 0.13280 & 6.2954$\times 10^{-2}$ & 1.3509 \\ BLYP-D3 & -1.0738 & 0.65880 & 5.9702 & 2.3220$\times 10^{-4}$ & 107.40 & 1.8301 & 0.16625 & 6.2796$\times 10^{-2}$ & 1.2089 \\ BLYP-DCACP & -1.0806 & 0.65194 & 5.9925 & 2.1913$\times 10^{-4}$ & 107.44 & 1.8320 & 0.13319 & 6.2799$\times 10^{-2}$ & 1.3432 \\ BP86 \cite{becke, perdew86} & -1.1439 & 0.65123 & 5.9734 & 2.2413$\times 10^{-4}$ & 107.41 & 1.8297 & 0.16232 & 6.1953$\times 10^{-2}$ & 1.2282 \\ BP86-D3 & -1.1316 & 0.65539 & 5.9725 & 2.2810$\times 10^{-4}$ & 107.41 & 1.8297 & 0.16116 & 6.1854$\times 10^{-2}$ & 1.2320 \\ BP86-DCACP & -1.1309 & 0.64901 & 5.9723 & 2.2765$\times 10^{-4}$ & 107.41 & 1.8330 & 0.15981 & 6.1897$\times 10^{-2}$ & 1.2308 \\ PBE \cite{pbe} & -1.1347 & 0.65551 & 5.9746 & 2.2681$\times 10^{-4}$ & 107.41 & 1.8277 & 0.16249 & 6.1706$\times 10^{-2}$ & 1.2324 \\ PBE-D3 & -1.1309 & 0.65681 & 5.9745 & 2.2861$\times 10^{-4}$ & 107.41 & 1.8276 & 0.16199 & 6.1640$\times 10^{-2}$ 
& 1.2341 \\ PBE-DCACP & -1.1357 & 0.65528 & 5.9732 & 2.2328$\times 10^{-4}$ & 107.41 & 1.8277 & 0.16274 & 6.1789$\times 10^{-2}$ & 1.2317 \\ revPBE \cite{revPBE} & -1.1042 & 0.66934 & 6.0272 & 2.1281$\times 10^{-4}$ & 107.38 & 1.8223 & 0.13504 & 6.1977$\times 10^{-2}$ & 1.3600 \\ revPBE-D3 & -1.0992 & 0.67121 & 6.0258 & 2.1496$\times 10^{-4}$ & 107.37 & 1.8222 & 0.13414 & 6.1935$\times 10^{-2}$ & 1.3642 \\ revPBE-DCACP & -1.1022 & 0.66937 & 6.0134 & 2.1512$\times 10^{-4}$ & 107.38 & 1.8226 & 0.13748 & 6.1923$\times 10^{-2}$ & 1.3472 \\ TPSS \cite{tpss} & -1.0552 & 0.71151 & 6.0645 & 2.0568$\times 10^{-4}$ & 107.21 & 1.8265 & 0.12072 & 6.3820$\times 10^{-2}$ & 1.4260 \\ TPSS-D3 & -1.0318 & 0.72981 & 5.9782 & 2.5136$\times 10^{-4}$ & 107.38 & 1.8273 & 0.16199 & 6.3405$\times 10^{-2}$ & 1.2316 \\ \hline\hline q-TIP4P/F \cite{comp_quant_eff} & -1.1128 & 0.73612 & 5.9694 & 2.9515$\times 10^{-4}$ & 107.40 & 1.7800 & 0.185 & 7.0000$\times 10^{-2}$ & 1.2100 \\ \end{tabular} \end{table*} The parameters of the q-TIP4P/F-like water potentials were obtained by minimizing Eq.~\ref{eq:chi} using the SLSQP algorithm of Kraft with a convergence tolerance of $10^{-6}$ on the penalty function between different iterations \cite{SLSQP}. Gradients with respect to the various optimization parameters were computed using finite differences with a displacement of $10^{-8}$. The initial parameters were taken from the original q-TIP4P/F water model \cite{comp_quant_eff}, while the optimized parameters for the various XC functionals we have considered here are listed in Tab.~\ref{tab:parameters}. The resulting water models are denoted as ``fm-TIP4P/F-XC'', where ``XC'' represents the employed XC functional of the DFT-based reference calculations. Unless stated otherwise, all of our PIMD calculations were performed at a temperature of 298~K and a pressure of 1~bar in the constant-NPT ensemble using 125 water molecules in a cubic simulation box. 
Periodic boundary conditions were applied using the minimum image convention. Short-range interactions were truncated at 9~\AA \, and Ewald summation was employed to calculate the long-range electrostatic interactions. The ring-polymer contraction scheme with a cut-off value of $\sigma$ = 5~\AA \, was employed to reduce the electrostatic potential energy and force evaluations to a single Ewald sum, thereby significantly speeding up the calculations \cite{Markland2008256}. Specifically, $p$ = 32~beads were employed, while the computationally expensive part of the electrostatic interactions was contracted to the centroid, which in the following is indicated as $p = 32 \to 1$. The evolution of the ring-polymer in time was performed analytically in the normal mode representation by a multiple time-step algorithm using a discretized time-step of 1.0~fs for the intermolecular and 0.125~fs for the intramolecular forces \cite{RESPA}. For comparison, additional simulations with classical nuclei were also performed ($p$=1). In all simulations, the system was pre-equilibrated in the constant-NVT ensemble for 50~ps, followed by a 100~ps equilibration in the constant-NPT ensemble using an Andersen thermostat and an anisotropic Berendsen barostat, respectively \cite{AndersenThermo, BerendsenBaro}. Ensemble averages were then computed over an additional $5~\text{ns}$ PIMD trajectory. Two-phase simulations were performed to calculate the melting point of water \cite{BonevGalli2004}. For this purpose, direct coexistence simulations of the water-ice interface were performed under atmospheric pressure \cite{BrykHaymet2002, fernandez:144506}. 
The initial hexagonal ice configurations were generated by placing the oxygen atoms at their crystallographic sites \cite{HaywardReimers1997}, and determining the positions of the hydrogen atoms using the Monte Carlo procedure of Buch \textit{et al.} \cite{BuchIceMC1998} in such a way that the Bernal-Fowler rules \cite{BernalFowler1933, Petrenko1999} were satisfied and the total dipole moment of the simulation cell was exactly zero. The initial 288-molecule ice configuration was equilibrated in the presence of an Andersen thermostat and an anisotropic Berendsen barostat for $50~\text{ps}$ before putting the secondary prismatic ($1\bar{2}10$) face of the ice cell in direct contact with a separately equilibrated water system consisting of 280 molecules \cite{Furukawa2005}. Finally, the combined solid/liquid system consisted of 568 water molecules and was simulated for $10~\text{ns}$. The velocity autocorrelation function $\tilde c_{vv} (t)$ in Eq.~\ref{Diffusion} was calculated for 5~ps by time averaging over 100 consecutive constant-NVE RPMD trajectories of length 10~ps. After an initial equilibration in the constant-NVT ensemble for 100~ps, the momenta were resampled between each constant-NVE RPMD trajectory and the system re-equilibrated for another 2~ps before correlation functions were accumulated. Infrared (IR) spectra were calculated using the PACMD method by averaging over 300 constant-NVE PACMD trajectories, each of 20~ps length. Here, a time-step of 0.5~fs for the intermolecular and 0.1~fs for the intramolecular interactions was employed. After an initial equilibration in the constant-NVT ensemble for 100~ps, the momenta were resampled and the system re-equilibrated for another 2~ps between each constant-NVE PACMD trajectory. To assess the accuracy of our force matching procedure, an explicit 50~ps long classical ($p$=1) AIMD simulation was performed using the second-generation Car-Parrinello scheme of K\"uhne \textit{et al.} \cite{CP2G, CP2Greview}. 
The nuclear forces were computed at the DFT level using the PBE XC functional and otherwise exactly the same settings as before. This calculation, denoted as ``125 Water (PBE)'', was conducted in the constant-NVT ensemble at $300~\text{K}$ employing the thermostat of Bussi \textit{et al.} \cite{bussi2007csvr} with a time constant of $25.0~\text{fs}$. \section{Assessment of force-matched water potentials \label{sec:application}} Before studying the static and dynamic properties of the force-matched water models derived here, it is worth considering the optimised parameters, as shown in Tab.~\ref{tab:parameters}. We see that, while the $M$-site charge parameter $q$ tends to be similar to that of the original q-TIP4P/F model, the parameter determining the position of the $M$-site, namely $\gamma$, is in general smaller than that of q-TIP4P/F; as a result, we expect that the average dipole moments of the water molecules in the force-matched potentials will be slightly smaller than in q-TIP4P/F water. However, we note that decreasing $\gamma$ has the effect of increasing the tetrahedral quadrupole moment of the water molecules, and hence may promote tetrahedral structuring; this is consistent with the fact that the DFT-based water simulations, which were used as force input in this work, tend to be over-structured. Another interesting trend is seen in the Lennard-Jones parameter $\epsilon$, which is generally smaller than that found in q-TIP4P/F; this most likely serves to balance the increased structure caused by the increased tetrahedral quadrupole moments of the force-matched potentials, as noted above. 
Finally, we see that the intramolecular potential parameters in the new force-matched models suggest that the intramolecular modes may be slightly ``softer'' than in q-TIP4P/F; the difference here must arise from the differing parameterisation approaches adopted for the different models, and possibly reflects the fact that the new water models were derived by force-matching to sampled water configurations while q-TIP4P/F was not. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{PBE-reference_OO-1.pdf} \caption{Oxygen-Oxygen RDF of the fm-TIP4P/F-PBE water model and a DFT-based AIMD simulation. The experimental RDFs from Refs.~\onlinecite{soper2000radial} and \onlinecite{skinner2013oo} are shown for comparison. \label{fig:ref_PBE_OO}} \end{figure} To assess the quality of our force-matching procedure, we began by comparing the partial RDFs,\cite{kuehnegofr2013} as obtained by a classical MD simulation ($p$=1) using the fm-TIP4P/F-PBE potential, with the corresponding DFT-based AIMD reference. The resulting O-O RDF is shown in Fig.~\ref{fig:ref_PBE_OO} and compared with recent neutron and x-ray diffraction measurements.\cite{soper2000radial,skinner2013oo} As can be seen, the comparison with the experimental data reveals the well-known overstructuring of DFT-based AIMD simulations.\cite{schmidtNPT2009, kuehnewater2009, BanyaiNMR2010, FernandezSerra2011, chun2012structure, kuehne2ptwater2012, TuckermanWater2012, kuehnewater2013, WaterPNAS2013, kuehnewaterreview2013} However, it also shows that the fm-TIP4P/F-PBE water model slightly underestimates the average O-O bond length and overestimates the height of the first peak within the O-O RDF with respect to the AIMD reference, whereas the second solvation shells are in excellent quantitative agreement. The O-H and H-H RDFs are shown as Figs.~S1 and S2 in the supporting information, respectively.
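The partial RDFs compared here can be accumulated from a trajectory with a simple pair histogram normalised by the ideal-gas expectation. A minimal single-frame sketch for a cubic box, using the minimum-image convention (a hypothetical helper, not the actual analysis code of this work):

```python
import numpy as np

def oo_rdf(pos, box, r_max, n_bins):
    """O-O radial distribution function for one frame.
    pos: (N, 3) oxygen positions; box: cubic box edge length."""
    n = len(pos)
    edges = np.linspace(0.0, r_max, n_bins + 1)
    hist = np.zeros(n_bins)
    for i in range(n - 1):
        d = pos[i + 1:] - pos[i]
        d -= box * np.round(d / box)              # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        hist += np.histogram(r, bins=edges)[0]
    rho = n / box**3                               # number density
    shell = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    # each pair counted once, so the ideal-gas expectation is N*rho*shell/2
    g = 2.0 * hist / (n * rho * shell)
    return 0.5 * (edges[1:] + edges[:-1]), g
```

In a production analysis the histogram is of course averaged over many frames before normalisation.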
The remaining errors in the short-range portion of the RDFs are most likely due to the simplicity of the force-matched potential, notably the exclusion of explicit polarisability, which is captured in the DFT simulations. Nevertheless, these results are promising, particularly considering that van der Waals interactions \cite{schmidtNPT2009, FernandezSerra2011, chun2012structure, TuckermanWater2012, kuehnewaterinterface2011} and inclusion of NQE \cite{lobaugh1997quantumwater,paesani:184507,PaesaniVoth2009,miller:154504,hernandez2006modeldependence,comp_quant_eff, PhysRevLett.91.215503, PhysRevLett.101.017801} would be expected to improve agreement with experiment. \subsection{Impact of Nuclear Quantum Effects} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{PBE-nucl_effects_OO-1.pdf} \caption{Oxygen-Oxygen RDF from classical MD and PIMD simulations using the fm-TIP4P/F-PBE water model. The experimental RDFs from Refs.~\onlinecite{soper2000radial} and \onlinecite{skinner2013oo} are shown for comparison\label{fig:nb_PBE_OO}.} \end{figure} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{PBE-nucl_effects_OH-1.pdf} \caption{Oxygen-Hydrogen RDF from classical MD and PIMD simulations using the fm-TIP4P/F-PBE water model. The experimental RDF from Ref.~\onlinecite{soper2000radial} is shown for comparison\label{fig:nb_PBE_OH}.} \end{figure} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{PBE-nucl_effects_HH-1.pdf} \caption{Hydrogen-Hydrogen RDF from classical MD and PIMD simulations using the fm-TIP4P/F-PBE water model. The experimental RDF from Ref.~\onlinecite{soper2000radial} is shown for comparison\label{fig:nb_PBE_HH}.} \end{figure} To investigate the impact of NQE on the structure of liquid water, and to assess the approximation due to the ring-polymer contraction scheme in our force-matched models, we employed PIMD simulations.
The corresponding results are displayed in Figs.~\ref{fig:nb_PBE_OO}, \ref{fig:nb_PBE_OH} and \ref{fig:nb_PBE_HH}, respectively. As expected, the inclusion of NQE softens the liquid water structure and, for the fm-TIP4P/F-PBE model, improves the agreement between simulated and experimental RDFs. While the importance of NQE on the O-O RDF is rather small, they are clearly much more important whenever light atoms such as hydrogen are involved. The latter is a direct consequence of the fact that the radius-of-gyration of the (free) ring-polymer, which is a measure for the delocalization of the nuclear wave function, scales as $1/\sqrt{MT}$, where $M$ is the atomic mass and $T$ the nuclear temperature, and is as such a clear manifestation that even at room temperature liquid water is a mild quantum fluid. The implications are particularly apparent in Fig.~\ref{fig:nb_PBE_OH}, where the delocalization of the average intramolecular O-H bond length is substantially increased, in agreement with experiment, as well as in Fig.~\ref{fig:nb_PBE_HH}, where the height of the first peak is significantly reduced by quantum delocalisation. However, NQE had only a minor effect on the average bond lengths, so that all bonds are still somewhat too short compared to experiment. Finally, it is evident that the results using the ring-polymer contraction scheme ($p=32 \rightarrow 1$) are almost indistinguishable from explicit PIMD simulations ($p=32$); the contraction scheme is thus exclusively employed in the following. \subsection{Influence of van der Waals interactions} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{vdw_correction_PBE-1.pdf} \caption{Oxygen-Oxygen RDF from PIMD simulations using the fm-TIP4P/F-PBE water model with and without London dispersion corrections.
The experimental RDFs from Refs.~\onlinecite{soper2000radial} and \onlinecite{skinner2013oo} are shown for comparison\label{fig:vdw_PBE}.} \end{figure} Since long-range vdW interactions are typically neglected by common local and semi-local XC functionals, we investigated to what extent approximate London dispersion correction schemes to DFT, such as DCACP and the ``D3'' correction of Grimme and coworkers, could improve the structure of liquid water. \cite{lilienfeld2005dcacp, grimme2010d3} The corresponding O-O RDFs are shown in Fig.~\ref{fig:vdw_PBE}, while the O-H and H-H RDFs are displayed in the supporting information as Figs.~S3 and S4, respectively. It is apparent that with the inclusion of NQE, both vdW correction schemes exhibit a marginal improvement in the RDFs. Nevertheless, since both schemes systematically improve the agreement with experiment, from now on only results including the ``D3'' vdW correction will be presented, in particular since the latter has been shown to also improve the density and the translational diffusion of liquid water. \cite{schmidtNPT2009, kuehnewaterinterface2011, TuckermanWater2012, chun2012structure} \subsection{Effect of the exchange-correlation functional} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{comp_diff_func-1.pdf} \caption{Oxygen-Oxygen RDFs from PIMD simulations using the fm-TIP4P/F-XC-D3 water model for the BP86, BLYP, revPBE, PBE and TPSS XC functionals, respectively. The experimental RDFs from Refs.~\onlinecite{soper2000radial} and \onlinecite{skinner2013oo} are shown for comparison\label{fig:comp_diff_func_OO}.} \end{figure} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{comp_diff_func_OH-1.pdf} \caption{Oxygen-Hydrogen RDFs from PIMD simulations using the fm-TIP4P/F-XC-D3 water model for the BP86, BLYP, revPBE, PBE and TPSS XC functionals, respectively.
The experimental RDF from Ref.~\onlinecite{soper2000radial} is shown for comparison\label{fig:comp_diff_func_OH}.} \end{figure} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{comp_diff_func_HH-1.pdf} \caption{Hydrogen-Hydrogen RDFs from PIMD simulations using the fm-TIP4P/F-XC-D3 water model for the BP86, BLYP, revPBE, PBE and TPSS XC functionals, respectively. The experimental RDF from Ref.~\onlinecite{soper2000radial} is shown for comparison\label{fig:comp_diff_func_HH}.} \end{figure} The force-matched water models shown in Table~\ref{tab:parameters} now allow us to investigate the influence of the various approximations to the XC functional, as reported in Figs.~\ref{fig:comp_diff_func_OO}, \ref{fig:comp_diff_func_OH} and \ref{fig:comp_diff_func_HH}, respectively. Taken together, these simulation results show that the RDFs calculated using the BP86-D3 XC functional are remarkably close to the ones of the PBE-D3 functional, while the revPBE-D3, BLYP-D3 and in particular the TPSS-D3 XC functionals produced RDFs in increasing agreement with experiment. The former reflects the fact that the parameters of the fm-TIP4P/F-PBE-D3 and fm-TIP4P/F-BP86-D3 water potentials are rather similar to each other, as detailed in Table~\ref{tab:parameters}. All XC functionals led to water models with over-structured RDFs, as noted previously. Nevertheless, given that the present water models were all derived from semi-local DFT calculations, the fm-TIP4P/F-TPSS-D3 water model was altogether in remarkably good agreement with the experimental measurements. In fact, it turned out to be in much better agreement than a previous calculation using the TPSS XC functional, though without van der Waals correction and NQE, suggested \cite{JoostWater2005}.
It not only qualitatively reproduced the various average bond lengths and the correct relative heights of the first two intermolecular peaks of the O-H RDF, but also the correct delocalization of the average intramolecular O-H bond length, as well as the second solvation shell of the O-O RDF. As a consequence, in spite of the observed variations, and given the challenge of simulating liquid water, the performance of the semi-local DFT that underlies the present water models can be judged to be reasonably good. \section{Results and Discussion\label{sec:results}} The results so far have focussed on assessing whether the force-matching procedure produces reasonable water models, as well as the impact of nuclear quantum effects, van der Waals interactions and the XC functional; these results have primarily focussed on the reproduction of the experimental partial RDFs for liquid water, which are often poorly reproduced by DFT-based AIMD simulations. In this section, we perform more extensive simulations of static and dynamic equilibrium properties for a range of force-matched water models that would otherwise not have been feasible by direct AIMD simulations; as noted above, the force-matched models considered here were all derived from DFT calculations which employed the ``D3'' London dispersion correction.
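Before turning to these properties, it is worth recalling that the force-matching objective itself, minimising $\sum |\mathbf F_{\text{model}} - \mathbf F_{\text{DFT}}|^2$ over the reference configurations, reduces to ordinary least squares whenever the model forces are linear in the fit parameters (point charges, for instance); the Lennard-Jones and intramolecular parameters enter nonlinearly and require an iterative fit on top of this. A toy sketch of the linear case with synthetic data (all names and numbers hypothetical):

```python
import numpy as np

def force_match(basis, f_ref):
    """Least-squares force matching for linear parameters.
    basis: (n_samples, n_params) matrix of basis forces,
    f_ref: (n_samples,) flattened reference forces."""
    theta, *_ = np.linalg.lstsq(basis, f_ref, rcond=None)
    return theta

# toy check: recover known parameters from noiseless "reference" forces
rng = np.random.default_rng(1)
basis = rng.normal(size=(200, 3))
theta_true = np.array([0.8, -1.2, 0.3])
f_ref = basis @ theta_true
theta_fit = force_match(basis, f_ref)
```

With noisy DFT forces the same objective returns the parameters that best reproduce the reference forces in the least-squares sense, which is exactly the spirit of the fits reported in Table~\ref{tab:parameters}.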
\subsection{Static Properties} \begin{table*} \caption{Static equilibrium properties of the force-matched water models for the different semi-local XC functionals obtained from PIMD simulations in the constant-NPT ensemble: $p$ denotes the number of ring-polymer beads (or imaginary time slices), $r_{\text{OH}}$ the intramolecular O-H bond length, $\theta_{\text{HOH}}$ the H-O-H bond angle, $\mu$ the molecular dipole moment, $\rho$ the equilibrium density and $\epsilon_{s}$ the static dielectric constant.} \label{tab:properties} \begin{tabular}{l|c|c|c|c|c|l} XC Functional & $p$ &$r_{\text{OH}}$\;[\AA]&$\theta_{\text{HOH}}$\;[deg]&$\mu$\;[D]&$\rho$\;[g/cm$^3$]&$\epsilon_{s}$\\ \hline\hline PBE-D3 \cite{pbe} & 1&0.9931&106.5224&2.1177&1.059&37.00\\ PBE-D3 & 32$\to$1&1.0100&106.5183&2.1537&1.067&27.31\\ BP86-D3 \cite{becke, perdew86} & 1&0.9949&106.6028&2.1164&1.063&43.55\\ BP86-D3 & 32$\to$1&1.0118&106.5948&2.1525&1.071&35.08\\ BLYP-D3 \cite{becke, lyp} & 1&0.9888&106.3276&2.0127&1.025&35.46\\ BLYP-D3 & 32$\to$1&1.0048&106.3005&2.0460&1.030&31.35\\ revPBE-D3 \cite{revPBE} & 1&0.9863&106.0220&2.1012&1.011&40.05\\ revPBE-D3 &32$\to$1&1.0042&106.0142&2.1396&1.018&35.77\\ TPSS-D3 \cite{tpss} & 1&0.9858&105.1119&2.1660&1.000&48.38\\ TPSS-D3 &32$\to$1&1.0018&105.0494&2.2026&1.005&45.69\\ \hline q-TIP4P/F \cite{comp_quant_eff} & 32$\to$1 & 0.978(1) & 104.7(1) & 2.348(1) & 0.998(2) & 60(3) \\ \hline Expt. & $\cdots$ &0.97 \cite{soper2000radial} &105.1 \cite{soper2000radial} & 2.9(6) \cite{BadyalSoper2000} & 0.997 \cite{SaulWagner1989}&78.4 \cite{Fernandez1995} \\ \end{tabular} \end{table*} Molecular static equilibrium properties such as the intramolecular O-H bond length $r_{\text{OH}}$ and the H-O-H bond angle $\theta_{\text{HOH}}$, as well as the molecular dipole moment $\mu$ are shown in Table~\ref{tab:properties}.
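For a TIP4P-style geometry, the molecular dipole reported in Table~\ref{tab:properties} follows directly from the point charges and the $M$-site construction $\mathbf r_M = \gamma\,\mathbf r_O + \frac{1-\gamma}{2}(\mathbf r_{H_1}+\mathbf r_{H_2})$. A small illustration (the geometry and the q-TIP4P/F-like parameter values below are quoted for illustration only, not taken from Table~\ref{tab:parameters}):

```python
import numpy as np

def dipole(r_o, r_h1, r_h2, q=1.1128, gamma=0.73612):
    """Molecular dipole (in e*Angstrom) of a TIP4P-style model:
    the M-site carries charge -q and each H carries +q/2, with the
    M-site at r_M = gamma*r_O + (1-gamma)/2*(r_H1 + r_H2).
    The default q and gamma are illustrative q-TIP4P/F-like values."""
    r_m = gamma * r_o + 0.5 * (1.0 - gamma) * (r_h1 + r_h2)
    return -q * r_m + 0.5 * q * (r_h1 + r_h2)

# symmetric, gas-phase-like geometry: O at the origin
r_o = np.array([0.0, 0.0, 0.0])
r_h1 = np.array([0.76, 0.59, 0.0])
r_h2 = np.array([-0.76, 0.59, 0.0])
mu = dipole(r_o, r_h1, r_h2)   # e*Angstrom; multiply by 4.803 for Debye
```

By symmetry the dipole points along the bisector, and with these illustrative numbers its magnitude comes out in the $\sim$2.1--2.4~D range of Table~\ref{tab:properties}.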
We find that the inclusion of NQE increases $r_{\text{OH}}$, which is indeed in agreement with path-integral calculations of others \cite{lobaugh1997quantumwater, SternBerne2001, FanourgakisXantheas2008, PhysRevLett.91.215503, PhysRevLett.101.017801}, but our calculated values are larger than the experimental value.\cite{soper2000radial} However, NQE reduced $\theta_{\text{HOH}}$, in contrast with previous path-integral simulations,\cite{SternBerne2001, lobaugh1997quantumwater} but consistent with Ref.~\onlinecite{FanourgakisXantheas2008} and, more importantly, systematically improved the agreement with experiment \cite{soper2000radial}. We find that the density also increases when NQE are included, which is again in line with the flexible and polarizable TTM3-F water model of Fanourgakis and Xantheas \cite{FanourgakisXantheas2008}, though at variance with Paesani \textit{et al.} \cite{paesani:184507, paesani2007quantumeffects}. In addition, $\mu$ is also enhanced upon inclusion of NQE, though it is still substantially underestimated relative to the experimental value.\cite{BadyalSoper2000} While this is consistent with previous classical and DFT-based PIMD simulations \cite{SternBerne2001, PhysRevLett.91.215503, PhysRevLett.101.017801}, it is in contrast with CMD simulations of Voth and coworkers using empirical force-fields.\cite{lobaugh1997quantumwater, paesani2007quantumeffects} The fact that the dipole moment magnitude is smaller than the values of previous classical MD calculations using polarizable force-fields (2.5-2.85~D) \cite{SprikKlein1988, SprikWater1991, RickBerne1994, lobaugh1997quantumwater, DangChang1997, SternBerneFriesner2001, AMOEBA2003, YuVanGunsterenWater2004, FanourgakisXantheas2006, LamoureuxWater2006, paesani2007quantumeffects, KolafaWater2011} and semi-classical AIMD simulations (2.7-3.1~D) \cite{laasonen1993ailiqwater, PhysRevLett.82.3308, silvestrelli:3572, PhysRevLett.91.215503, PhysRevLett.98.247401} can thus be attributed to the lack
of polarizability of the present fixed point-charge water model. \subsubsection{Dielectric Constant} As well as a large permanent dipole moment, liquid water also displays a large static dielectric constant of $\epsilon_{s} = 78.4$.\cite{Fernandez1995} In fact, this is the highest of all polar solvents with comparable dipole moments, and can be associated with the presence of a macroscopically extended HB network.\cite{Kauzmann1969} However, calculating $\epsilon_{s}$ using \begin{equation} \epsilon_{s} = \epsilon_{\infty}+\frac{4\pi\beta}{3 V}(\langle \bm{\mu}_p\cdot\bm{\mu}_p\rangle-\langle \bm{\mu}_p\rangle\cdot\langle\bm{\mu}_p\rangle)\, , \end{equation} where $\epsilon_{\infty}$ is the infinite-frequency dielectric constant and $\bm{\mu}_p$ the total dipole-moment averaged over all imaginary-time slices $p$, requires a PIMD trajectory of several nanoseconds in length to converge.\cite{DeLeeuw1980, adams1981, neumann1983} Because it is not feasible to converge this property with DFT-based AIMD simulations,\cite{PhysRevB.56.12847, MarzariVanderbiltParrinello} only rather crude estimates ($\epsilon_{s}$ = 67-86) using Kirkwood's theory \cite{Kirkwood1939} are available from first principles calculations. \cite{PhysRevLett.98.247401, silvestrelli:3572} In order to obtain full dielectric relaxation, we equilibrated the system for $2~\text{ns}$ before calculating $\epsilon_{s}$ as an ensemble average over an additional $5~\text{ns}$. The corresponding results for the various XC functionals we have considered are shown in Table~\ref{tab:properties}. The fm-TIP4P/F-TPSS-D3 water model, which was consistently in best agreement with experiment within the present force-matched water potentials, also exhibits the highest dielectric constant. 
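The fluctuation formula above translates directly into a post-processing step on a saved total-dipole time series; a minimal sketch in reduced units (unit conversions, imaginary-time-slice averaging and the $\epsilon_\infty$ estimate are omitted for clarity):

```python
import numpy as np

def static_dielectric(M, volume, kT, eps_inf=1.0):
    """Fluctuation formula in Gaussian units with kB*T folded into kT:
    eps_s = eps_inf + 4*pi/(3*V*kT) * (<M.M> - <M>.<M>),
    where M is the total dipole moment per frame, shape (n_frames, 3)."""
    mean_M = M.mean(axis=0)
    fluct = np.mean(np.sum(M * M, axis=1)) - np.dot(mean_M, mean_M)
    return eps_inf + 4.0 * np.pi * fluct / (3.0 * volume * kT)
```

The slow convergence mentioned above comes from the variance term: $\langle \mathbf M\rangle$ relaxes only on the nanosecond timescale of collective dipole reorientation, which is why several nanoseconds of trajectory were required.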
However, it severely underestimates the experimental value, as well as those obtained with several other empirical force-fields.\cite{Guillot2002, VegaAbascal2011} We note that the higher dipole moment of polarizable water models typically results in a dielectric constant that exceeds experiment, with typical values being in the range $\epsilon_{s} = 79-116$. \cite{LamoureuxWater2006, AMOEBA2003, SprikWater1991, YuVanGunsterenWater2004, SternBerneFriesner2001, RickBerne1994, KolafaWater2011} This suggests that the central reason for the underestimation of the dielectric constant in the force-matched models is the relatively low molecular dipole moments, which are typically around 0.7~D lower than the experimental estimate. \cite{BadyalSoper2000} With this large difference in dipole moment, as well as clear differences in the liquid structure for these different models, it is not surprising that the DFT-based models developed here underestimate the dielectric constant. Interestingly, we found that NQE reduced $\epsilon_{s}$ even further, which is rather surprising since at the same time they enhanced $\mu$, as well as $r_{\text{OH}}$ and thus the root-mean-square total dipole moment. Since the latter is proportional to $\epsilon_{s}$, this immediately suggests that NQE should have led to a larger instead of a lower value.
Nevertheless, this is consistent with previous CMD calculations using the SPC/F ($\epsilon_{s}$ = 94 $\rightarrow$ 74) and SPC/Fw ($\epsilon_{s}$ = 80 $\rightarrow$ 64) water models \cite{lobaugh1997quantumwater, paesani:184507}, whereas the flexible and polarizable TTM2.1-F water potential of Fanourgakis and Xantheas \cite{FanourgakisXantheas2006} predicts an NQE-induced increase of $\epsilon_{s}$ from 67 to 74.\cite{paesani2007quantumeffects} \subsubsection{Density Maximum and Temperature of Maximum Density} Because the remaining calculations were computationally rather costly, we have restricted ourselves to simulations based on the fm-TIP4P/F-TPSS-D3 water potential, which has so far been found to give the overall best agreement with experimental properties, as noted above. To accurately calculate the average liquid density, we extended the equilibration time to $5~\text{ns}$ for temperatures below 280~K to account for the reduced molecular translational diffusion of undercooled water. The corresponding data points were fit to a parabola of the form $f(T) = a \left( T-T_{\text{max}} \right)^2 + \rho_0$ and are shown together with results from the q-SPC/Fw and q-TIP4P/F water models in Fig.~\ref{fig:density}.\cite{paesani:184507, comp_quant_eff} We find that the q-SPC/Fw force-field underestimates the experimental temperature of maximum density at $T_{\text{max}} = 277~\text{K}$ by as much as $\sim 48~\text{K}$, while results for q-TIP4P/F and the present fm-TIP4P/F-TPSS-D3 are in much better agreement with the experimental $T_{\text{max}}$.
\cite{DonchevPNAS2006,paesani2007quantumeffects} The maximum densities of the q-SPC/Fw and fm-TIP4P/F-TPSS-D3 water potentials, however, are somewhat too high, while that of q-TIP4P/F is in excellent agreement with experiment.\cite{comp_quant_eff} The fact that including the ``D3'' London dispersion correction had the tendency to overcorrect the otherwise too low density of liquid DFT water is consistent with recent DFT-based AIMD simulations \cite{kuehnewaterinterface2011, TuckermanWater2012}. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{density-max-1.pdf} \caption{Liquid water density as a function of temperature for the fm-TIP4P/F-TPSS-D3 water potential. The corresponding results of the q-TIP4P/F and q-SPC/Fw water models \cite{paesani:184507,comp_quant_eff}, as well as the experimental data,\cite{SaulWagner1989} are shown for comparison. \label{fig:density} } \end{figure} \subsubsection{Melting Point} We have performed PIMD simulations at atmospheric pressure to determine the quantum melting point of the fm-TIP4P/F-TPSS-D3 water model by direct coexistence simulations of the water/hexagonal ice interface. Because liquid water has a higher density than hexagonal ice, we have chosen to use the simulation box density as an order parameter to distinguish between formation of solid hexagonal ice and liquid water. \begin{figure} \includegraphics[width=0.5\textwidth]{melting_point_density-1.pdf} \caption{Density profiles during PIMD simulations to determine the melting point of fm-TIP4P/F-TPSS-D3. At a temperature of 230~K, the system clearly remains in the ice phase. Just above 235~K, the ice phase melts and a higher (liquid) density is observed.} \label{fig:melt_density} \end{figure} Figure~\ref{fig:melt_density} illustrates typical density traces as a function of time in these coexistence simulations.
Below 235~K, we find that the system adopts a density of around 0.96~g~cm$^{-3}$, corresponding to that of hexagonal ice; however, a simulation run at 236~K demonstrates that the system melts to form liquid water. As a result, the melting temperature of the fm-TIP4P/F-TPSS-D3 potential has been found to lie between $235$ and $236~\text{K}$, which is around 38~K lower than the experimental value. For comparison, classical MD simulations of common rigid water models have been found to give melting temperatures that range from about 146~K for TIP3P to 274~K for TIP4P/ice.\cite{vega2005melting, VegaAbascal2011} Including NQE by means of PIMD calculations resulted in a melting temperature of $251\pm1~\text{K}$ and $195\pm5~\text{K}$ for the q-TIP4P/F and q-SPC/Fw water potentials, respectively \cite{comp_quant_eff}. The corresponding values from DFT-based AIMD simulations, however, are much higher, namely 360~K with and 411~K without vdW correction \cite{YooXantheas2009, YooXantheas2011}. We note that previous classical MD simulations have suggested that it is not possible to reproduce the experimental difference of 4~K between the melting point of hexagonal ice and the temperature of maximum density using fixed point-charge models.\cite{VegaAbascal2005} In fact, the present PIMD simulations using the fm-TIP4P/F-TPSS-D3 water model predict a difference of 35~K between these two temperatures, which is within the 21--37~K range of typical differences found by classical MD simulations using empirical force-fields.\cite{vega2005melting} Since the average molecular dipole moment of ice is significantly larger than that of liquid water, indicating that significant charge redistribution occurs upon freezing \cite{Batista1998, Batista1999}, it indeed appears that an explicit treatment of electronic polarization will be needed to quantitatively reproduce the small temperature difference between the temperature of maximum density and the melting point of water.
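The parabolic fit $f(T)=a(T-T_{\text{max}})^2+\rho_0$ used above to locate the density maximum amounts to a quadratic least-squares fit followed by completing the square; a short sketch with synthetic data (the density values below are made up for illustration):

```python
import numpy as np

# synthetic rho(T) data mimicking a density maximum at 277 K
T = np.array([240.0, 250.0, 260.0, 270.0, 280.0, 290.0, 300.0])
rho = 1.002 - 4e-6 * (T - 277.0) ** 2

# fit rho = c2*T^2 + c1*T + c0, then read off the vertex of the parabola
c2, c1, c0 = np.polyfit(T, rho, 2)
T_max = -c1 / (2.0 * c2)            # temperature of maximum density
rho_max = c0 - c1 ** 2 / (4.0 * c2) # maximum density
```

With real simulation data the fit would of course be weighted by the statistical uncertainties of the individual density averages.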
\subsection{Dynamic Properties} \subsubsection{Translational Diffusion Constant} For the calculation of the translational diffusion constant $D$, one should bear in mind that it is sensitive to finite-size effects, which arise from the fact that a diffusing particle sets up a hydrodynamic flow which decays slowly as $r^{-1}$. In a periodically repeated system this leads to an interference between one particle and its periodic images. To account for this well-known finite-size dependence, we have therefore performed two RPMD simulations of smaller systems (containing 216 and 343 water molecules) and extrapolated $D$ to the infinite system-size limit using the relation of D\"unweg and Kremer, \cite{dunweg1993polymermd,yeh2004systemsizediffusion} \begin{equation} D_{\text{PBC}}(L) = D_\infty - \frac{kT\xi}{6\pi\eta L}, \end{equation} where $D_\infty$ is the diffusion coefficient for an infinite system, $\eta$ is the translational shear viscosity, $L$ is the length of the periodic simulation box and $\xi = 2.837$ is a numerical coefficient which depends on the geometry of the simulation cell. We found $D_{\infty}^{qm} = 0.288$~\AA$^2$/ps for the fm-TIP4P/F-TPSS-D3 water model, which is 25\% above the experimental value of 0.23~\AA$^{2}$/ps \cite{Price1999}. For comparison, the translational diffusion constant has been reported by others to be 0.19-0.548~\AA$^{2}$/ps in CMD and RPMD simulations.\cite{DonchevPNAS2006, lobaugh1997quantumwater, paesani:184507, pena:5992, paesani2007quantumeffects, comp_quant_eff} In any case, our fm-TIP4P/F-TPSS-D3 model suggests that DFT water is indeed fluid (at least for this combination of XC functional and vdW corrections). \cite{kuehnewater2009, LeeTuckerman2007} A further interesting result relates to the observed quantum effect, defined here as the ratio of the quantum and classical diffusion coefficients.
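Since $D_{\text{PBC}}(L)$ is linear in $1/L$, the infinite-size extrapolation reduces to a two-parameter linear fit over the available box sizes; a minimal sketch with synthetic numbers (the box lengths and the slope prefactor below are hypothetical, not the values of our simulations):

```python
import numpy as np

# D_PBC(L) = D_inf - kT*xi/(6*pi*eta*L) is linear in 1/L, so the
# intercept of a linear fit against 1/L gives the infinite-size D.
slope_prefactor = 0.5                  # hypothetical kT*xi/(6*pi*eta)
L = np.array([18.6, 21.7, 25.6])       # hypothetical box lengths (Angstrom)
D_inf_true = 0.288                     # A^2/ps, target of the extrapolation
D_pbc = D_inf_true - slope_prefactor / L

slope, intercept = np.polyfit(1.0 / L, D_pbc, 1)
D_inf = intercept                      # infinite-size estimate
```

The slope of this fit also provides a consistency check, since it is tied to the shear viscosity $\eta$ through the D\"unweg-Kremer prefactor.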
In the original development of the q-TIP4P/F model, it was found that the quantum effect was around 1.15; this was much smaller than previous values of 1.38-1.58, which had been obtained for either rigid or harmonically-flexible fixed-charge water models \cite{paesani2007quantumeffects, paesani:184507, DonchevPNAS2006, pena:5992, lobaugh1997quantumwater}. The relatively small quantum effect for q-TIP4P/F was found to be due to the existence of a ``competition'' between intramolecular and intermolecular ZPE contributions; in particular, intermolecular hydrogen bonds are weakened by ZPE, leading to faster translational dynamics, but the strength of intermolecular interactions is increased by changes to the molecular dipole moment which arise due to the incorporation of intramolecular ZPE. In the present work $D_{\infty}^{cl} = 0.300$~\AA$^2$/ps, which is larger than the value including NQE, such that $D_{\infty}^{qm}/D_{\infty}^{cl} = 0.96$. In other words, the fm-TIP4P/F-TPSS-D3 water model exhibits an ``inverse'' quantum effect, meaning that diffusion actually slows down when NQE are included. Although difficult to confirm without further detailed investigations, a likely explanation is the fact that the intramolecular potential in the force-matched fm-TIP4P/F-TPSS-D3 potential derived here is much ``softer'' than in the original q-TIP4P/F model, as already noted above. As a result, the addition of intramolecular ZPE as one moves from a classical simulation to one including NQE may have a larger impact on intermolecular forces than in the original q-TIP4P/F model; this effect, along with the overly-tetrahedral structure of the water model proposed here, may lead to the observation of an ``inverse'' quantum effect.
Given the experimental evidence from isotopically-substituted water, where normal water diffuses faster than heavy water (D$_{2}$O), this suggests that there remain some features of our empirical models which fail to account correctly for the influence of quantum fluctuations; investigating these features will be an aim of future work. \subsubsection{IR Spectrum} The IR absorption spectrum of liquid water at ambient conditions using the fm-TIP4P/F-TPSS-D3 water model was obtained as the Fourier transform of the dipole autocorrelation function \begin{equation} n(\omega)\alpha(\omega)=\frac{\pi \beta \omega^2}{3 c V \epsilon_0} \tilde{c}_{\bm{\mu}\cdot\bm{\mu}}(\omega)\,. \end{equation} Here, the PACMD approximation to the Kubo-transformed dipole autocorrelation function $\tilde{c}_{\bm{\mu}\cdot\bm{\mu}}(t)$ was calculated from the molecular dipole moments, where \begin{eqnarray} \mu_J(t)=\mu_J(\bm{R}_{J,O}^{(c)}(t),\bm{R}_{J,H_1}^{(c)}(t),\bm{R}_{J,H_2}^{(c)}(t)) \end{eqnarray} corresponds to the dipole moment operator of molecule $J$ evaluated at the centroid of the PACMD ring-polymer system at time $t$. \begin{figure} \includegraphics[keepaspectratio,width=1.15\linewidth,height=0.55\textheight]{IR-1.pdf} \caption{Classical and quantum PACMD dipole absorption spectrum of the fm-TIP4P/F-TPSS-D3 water model. The experimental bulk water values from Ref.~\onlinecite{venyaminov1997absorptivity} are drawn vertically for comparison.}\label{fig:IR} \end{figure} The classical and quantum dipole absorption spectra of the fm-TIP4P/F-TPSS-D3 water model are compared in Fig.~\ref{fig:IR}. The two calculated IR spectra clearly reproduce the general features of the experimental spectrum, with O-H stretching absorptions above $\sim 3000~\text{cm}^{-1}$, a water bending band at around $\sim 1600~\text{cm}^{-1}$, and intermolecular librational features below $\sim 1000~\text{cm}^{-1}$. By contrast, the peak at $\sim 200~\text{cm}^{-1}$ is absent from both of the simulated spectra.
This peak arises from the low-frequency modulation of dipole-induced dipole interactions, which are clearly not present in simple point-charge models. \cite{ImpeyMadden1982} However, the calculation including NQE shows the typical red-shift in comparison to the classical one, \cite{lobaugh1997quantumwater, paesani:184507, paesani2007quantumeffects, RosskyPNAS2005, habershonIR, FanourgakisXantheas2008} although we note that it remains unclear to what extent this is due to the well-known ``curvature problem'' observed by Marx and coworkers. \cite{WittRPMD2009, IvanovRPMD2010, WhMiller2009, WhMiller2011,PaesaniIR2010} We find that the O-H stretching frequencies of the force-matched model reproduce the experimental values reasonably well, whereas the q-SPC/Fw water model predicts distinct antisymmetric and symmetric stretching peaks \cite{zhangwater2013}. However, the bending frequency is underestimated by around $100~\text{cm}^{-1}$; again, this may be a simple consequence of the parameters determined in the force-matching procedure: all of the force-fields derived here exhibit bending force constants $k_{\theta}$ which are smaller than that of the original q-TIP4P/F model, which itself reproduces the experimental bending frequency quite accurately. \section{Conclusion\label{sec:conclusion}} In this paper, we have developed a series of q-TIP4P/F-like water models using a force-matching algorithm based on reference forces from DFT calculations. Subsequent classical and quantum simulations of the resulting water models demonstrated a wide range of results depending upon which exchange-correlation functional was employed in the calculation of the reference forces used as input for the force-matching procedure; however, some trends are apparent.
Almost all force-matched water models resulted in over-structured liquid water when compared to experimental results; this finding is not uncommon in DFT-based simulations, so it is not surprising that empirical models based on DFT reference forces exhibit a similar propensity. Overall, we found that the fm-TIP4P/F-TPSS-D3 model offered the best agreement with experimental properties, including the density maximum, temperature of maximum density, melting point, translational diffusion constant and the IR spectrum. However, it is interesting to note that none of the force-matched models developed here offered performance on par with the original q-TIP4P/F water model; this may point to an insufficient accuracy of the DFT reference forces, but we must also bear in mind that the q-TIP4P/F force-field was designed specifically to reproduce experimental properties in quantum simulations, rather than being derived from \textit{ab initio} reference forces. Despite this, there are many improvements which could be made to build on the present study. For example, the q-TIP4P/F-like models developed here clearly neglect polarisability, an effect which could be easily incorporated into the current force-matching scheme. Furthermore, the same fitting procedure could be applied to reference data obtained at a higher level of theory. Both of these are areas for future work. \begin{acknowledgments} We would like to thank Professor David Manolopoulos for many fruitful discussions regarding this paper. Financial support from the Graduate School of Excellence MAINZ, the IDEE project of the Carl Zeiss Foundation and the University of Warwick is kindly acknowledged. T.D.K. gratefully acknowledges the Gauss Center for Supercomputing (GCS) for providing computing time through the John von Neumann Institute for Computing (NIC) on the GCS share of the supercomputer JUQUEEN at the J\"ulich Supercomputing Centre (JSC). \end{acknowledgments}
\section{Introduction} A meromorphic function $Q$ on the upper half-plane $\mathbb H$ is called a meromorphic modular form of weight $k\in\mathbb Z$ with respect to $\mathrm{SL}(2,\mathbb Z)$ if $Q$ satisfies $$ Q(\gamma\cdot z)=(cz+d)^kQ(z),\quad \gamma=\M abcd\in\mathrm{SL}(2,\mathbb Z), $$ and $Q$ is also meromorphic at the cusp $\infty$. When $k=0$, a meromorphic modular form is called a modular function. We refer to \cite{Apostol} and \cite{Serre} for the elementary theory of (holomorphic) modular forms. Given a meromorphic modular form $Q$ of weight $4$ on $\mathrm{SL}(2,\mathbb Z)$, we consider a Fuchsian modular differential equation of second order on $\mathbb H$ \begin{equation}\label{(1.1)} y''=Q(z)y\quad \text{on}\ \mathbb H,\qquad y':=\frac{dy}{dz}. \end{equation} The differential equation \eqref{(1.1)} is called Fuchsian if the order of any pole of $Q$ is less than or equal to $2$. At $\infty$, by using $q=e^{2\pi iz}$, \eqref{(1.1)} can be written as \begin{equation} \left(q\frac{d}{dq}\right)^2y=-\frac{1}{4\pi^2}y''=-\frac{Q(z)}{4\pi^2}y. \end{equation} So \eqref{(1.1)} is Fuchsian at $\infty$ if and only if $Q$ is holomorphic at $\infty$. Suppose that $z_0$ is a pole of $Q$. The local exponents of \eqref{(1.1)} at $z_0$ are $1/2\pm\kappa$, $\kappa\ge0$. If the difference $2\kappa$ of the two local exponents is an integer, then the ODE \eqref{(1.1)} might have a solution with a logarithmic singularity at $z_0$. A singular point $z_0$ of \eqref{(1.1)} is called \emph{apparent} if the local exponents are $1/2\pm\kappa$ with $\kappa\in\frac{1}{2}\mathbb Z_{\ge0}$ and any solution of \eqref{(1.1)} has no logarithmic singularity near $z_0$. In such a case, it is necessary that $\kappa>0$. The ODE \eqref{(1.1)} or $Q$ is called \emph{apparent} if \eqref{(1.1)} is apparent at any pole of $Q$ on $\mathbb H$. Clearly, if \eqref{(1.1)} is apparent then the local monodromy matrix at any pole is $\pm I$, where $I$ is the $2\times 2$ identity matrix.
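For orientation, the exponents $1/2\pm\kappa$ arise from the indicial equation of \eqref{(1.1)} at a double pole: if $Q(z)\sim\mu(z-z_0)^{-2}$ near $z_0$, the Frobenius ansatz $y\sim(z-z_0)^r$ gives $r(r-1)=\mu$, so the two roots are $1/2\pm\kappa$ with $\mu=\kappa^2-1/4$. The following small sketch (a standard Frobenius computation, included only as an illustration) checks this identity in exact arithmetic:

```python
from fractions import Fraction

def mu_from_kappa(kappa):
    # If Q(z) ~ mu (z - z0)^{-2} near a double pole z0, the Frobenius
    # ansatz y ~ (z - z0)^r in y'' = Q y yields the indicial equation
    # r(r - 1) = mu; exponents 1/2 +- kappa correspond to mu = kappa^2 - 1/4.
    return kappa * kappa - Fraction(1, 4)

def exponents_check(kappa):
    # Verify that r = 1/2 +- kappa indeed solve r(r - 1) = mu.
    mu = mu_from_kappa(kappa)
    r1 = Fraction(1, 2) + kappa
    r2 = Fraction(1, 2) - kappa
    return r1 * (r1 - 1) == mu and r2 * (r2 - 1) == mu

# kappa ranging over (1/2)N, as in the apparent case:
print(all(exponents_check(Fraction(k, 2)) for k in range(1, 11)))  # True
```

Note that $\kappa=1/2$ gives $\mu=0$, consistent with the fact recorded below that $Q$ is smooth at a point $p$ exactly when $\kappa_p=1/2$.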
A solution $y(z)$ of \eqref{(1.1)} might be multi-valued. For $\gamma\in\mathrm{SL}(2,\mathbb Z)$, $y(\gamma\cdot z)$ is understood as the analytic continuation of $y$ along a path connecting $z$ and $\gamma\cdot z$. Even though $y(\gamma\cdot z)$ may not be well-defined, the slash operator of weight $k$ ($k\in\mathbb Z$) can be defined in the usual way by \begin{equation} \left(y\big|_k\gamma\right)(z):=(cz+d)^{-k}y(\gamma\cdot z),\quad \gamma=\M abcd\in\mathrm{SL}(2,\mathbb Z), \end{equation} where $\gamma\cdot z=(az+b)/(cz+d)$. We have the well-known Bol's identity \cite{Bol} $$ \left(y\big|_{-1}\gamma\right)^{(2)}(z)=\left(y^{(2)}\big|_{3} \gamma\right)(z). $$ Hence, if $y(z)$ is a solution of \eqref{(1.1)}, then $\left(y\big|_{-1}\gamma\right)(z)$ is also a solution of \eqref{(1.1)}. Here $f^{(k)}(z)$ is the $k$-th derivative of $f(z)$. Suppose that \eqref{(1.1)} is \emph{apparent} and $y_i$, $i=1,2$, are two independent solutions. Since the local monodromy matrix at any pole of $Q$ is $\pm I$, the ratio $h(z)=y_2(z)/y_1(z)$ is well-defined and meromorphic on $\mathbb H$. By Bol's identity, both $\left(y_i\big|_{-1}\gamma\right)(z)$ are solutions of \eqref{(1.1)}, where $y_1(\gamma\cdot z)$ and $y_2(\gamma\cdot z)$ are understood as the analytic continuation of $y_1(z)$ and $y_2(z)$ along the same path connecting $z$ and $\gamma\cdot z$. Note that since \eqref{(1.1)} is assumed to be apparent, different choices of paths from $z$ to $\gamma\cdot z$ only result in sign changes in $y_1(\gamma\cdot z)$ and $y_2(\gamma\cdot z)$. Therefore, there is a matrix $\rho(\gamma)$ in $\mathrm{GL}(2,\mathbb C)$ such that \begin{equation}\label{equation: representation of weight -1} \begin{pmatrix} \left(y_1\big|_{-1}\gamma\right)(z)\\ \left(y_2\big|_{-1}\gamma\right)(z) \end{pmatrix}=\pm\rho(\gamma)\begin{pmatrix} y_1(z)\\ y_2(z) \end{pmatrix}.
\end{equation} Note that $\det \rho(\gamma)=1$ because the two Wronskians of fundamental solutions $\left( y_1\big|_{-1}\gamma,y_2\big|_{-1}\gamma\right)$ and $(y_1,y_2)$ are equal. Hence $\gamma\mapsto\pm\rho(\gamma)$ is a homomorphism from $\mathrm{SL}(2,\mathbb Z)$ to $\mathrm{PSL}(2,\mathbb C)$. In this paper, we call this homomorphism \emph{the Bol representation} associated to \eqref{(1.1)}. There is an old problem in conformal geometry related to \eqref{(1.1)}. The problem is to find a metric $ds^2$ with curvature $1/2$ on $\mathbb H$ that is locally conformal to the flat metric and invariant under the change $z\mapsto\gamma\cdot z$, $\gamma\in\mathrm{SL}(2,\mathbb Z)$. Write $ds^2=e^u\abs{dz}^2$. Below, we collect some basic results concerning the metric which will be proved in Section $2$. \begin{enumerate} \item[(1)] The curvature condition is equivalent to saying that $u$ satisfies the curvature equation \eqref{2.4}. Then \begin{equation} Q(z)=-\frac{1}{2}\left( u_{zz}-\frac{1}{2}u^2_z\right) \end{equation} is a meromorphic function. \item[(2)] The invariant condition ensures that $Q$ is a meromorphic modular form of weight $4$ with respect to $\mathrm{SL}(2,\mathbb Z)$ and holomorphic at $\infty$. Moreover, $Q(\infty)\le0$. \item[(3)] The metric might have a conic singularity at some $p\in\mathbb H$ with a conic angle $\theta_p$, and the metric is smooth at $p$ if and only if $\theta_p=1$. Thus $Q$ has a pole at $p$ if and only if $ds^2$ has a conic singularity at $p$ (i.e., $\theta_p\neq 1$), provided that $p\not\in\braces{\gamma\cdot i,\gamma\cdot\rho:\gamma\in\mathrm{SL}(2,\mathbb Z)}$, where $i=\sqrt{-1}$ and $\rho=(1+\sqrt{-3})/2$. \item[(4)] Let $1/2\pm\kappa_p$, $\kappa_p>0$ be the local exponents at $p$ of \eqref{(1.1)} with this $Q$. Then $\theta_p=2\kappa_p/e_p$, where $e_p$ is the elliptic order of $p$. Moreover, if $\kappa_p\in\frac{1}{2}\mathbb Z$ for any $p$, then \eqref{(1.1)} is automatically apparent.
\end{enumerate} We say that the solution $u$, or the metric $e^u\abs{dz}^2$, \emph{realizes} $Q$, or that the associated ODE \eqref{(1.1)} is realized by $u$. We note that for a given $Q$, finding a metric $e^u\abs{dz}^2$ realizing $Q$ is equivalent to solving the curvature equation \eqref{2.4} in Section 2 with the RHS being $4\pi\sum n_p\delta_p$, where $n_p=2\kappa_p-1$, $\delta_p$ is the Dirac measure at $p\in\mathbb H$ and the summation runs over all poles of $Q$ on $\mathbb H$. In particular, $\kappa_p\in\frac{1}{2}\mathbb N$ if and only if the coefficient $n_p\in\mathbb N$, the set of positive integers. In view of this connection, throughout the paper, we assume that the ODE \eqref{(1.1)} satisfies the following conditions ($\mb H_1$) and ($\mb H_2$).\\ \noindent{($\mb{H_1}$)} The ODE \eqref{(1.1)} is apparent with the local exponents $1/2\pm\kappa_p$ at any pole $p$ of $Q$, $\kappa_p\in\frac{1}{2}\mathbb N$, and $Q(\infty)\le0$. Denote the local exponents at $\infty$ by $\pm\kappa_\infty$. Moreover, if $p\not\in\braces{i,\rho}$, then $\kappa_p>1/2$.\\ Note that $Q(z)$ is smooth at $p$ if and only if $\kappa_p=1/2$, so the requirement $\kappa_p>1/2$ means that $Q(z)$ has a pole at $p$. Note that by (4), the angle $\theta_\rho$ at $\rho$ is $2\kappa_\rho/3$ and $\theta_i$ at $i$ is $\kappa_i$.\\ \noindent{($\mb{H_2}$)} The angles $\theta_\rho$ and $\theta_i$ are not integers.\\ Suppose $\kappa_\infty\not\in\frac{1}{2}\mathbb N$. Then there is $r_\infty\in (0,1/2)$ such that \begin{equation} \label{equation: r} \text{either}\ \kappa_\infty\equiv r_\infty\ \mathrm{mod}\ 1\quad\text{or}\quad\kappa_\infty\equiv-r_\infty\ \mathrm{mod}\ 1. \end{equation} \begin{Theorem}\label{theorem: 1.1} Suppose that \eqref{(1.1)} satisfies ($\mb H_1$), ($\mb H_2$), and $\kappa_\infty\not\in\frac{1}{2}\mathbb N$. If $1/12<r_\infty<5/12$, then there is an invariant metric of curvature $1/2$ realizing $Q$. Moreover, the metric is unique.
Conversely, if $Q$ is realized then $1/12\leq r_\infty\leq 5/12$. Furthermore, assume that $r_\infty=1/12$ or $r_\infty=5/12$. Let $\chi$ be the character of $\mathrm{SL}(2,\mathbb Z)$ determined by $$ \chi(T)=e^{2\pi i/6}, \qquad \chi(S)=-1, $$ where $T=\SM1101$ and $S=\SM0{-1}10$. Then there is an invariant metric of curvature $1/2$ realizing $Q$ if and only if there are two solutions $y_1(z)$ and $y_2(z)$ of \eqref{(1.1)} such that $y_1(z)^2$ and $y_2(z)^2$ are meromorphic modular forms of weight $-2$ with character $\chi$ and $\overline\chi$, respectively, on $\mathrm{SL}(2,\mathbb Z)$. \end{Theorem} \begin{Remark} Let $\mathbb H^*=\mathbb H\cup\mathbb Q\cup\braces\infty$. Since $\mathrm{SL}(2,\mathbb Z)\backslash\mathbb H^\ast$ is conformally diffeomorphic to the standard sphere $S^2$, Theorem \ref{theorem: 1.1} can be formulated in terms of the existence of metrics on $S^2$ with prescribed singularities at poles of $Q$ and prescribed angle $\theta_p$ at each singular point $p$. In this sense, Theorem \ref{theorem: 1.1} is a special case of a result of Eremenko and Tarasov \cite{Eremenko-Tarasov}\footnote{We thank the referee for pointing out this and providing the reference.}, quoted as Theorem \ref{theorem: ET} in the appendix. In the appendix, we give an alternative and self-contained proof of their result in the form of Theorem \ref{thm1} as it is elementary and involves only straightforward matrix computation. (In the notation of Theorem \ref{thm1}, Theorem \ref{theorem: 1.1} corresponds to the case $\theta_1=1/2$, $\theta_2=1/3$, and $\theta_3=2r_\infty$ or $\theta_3=1-2r_\infty$, depending on whether $2r_\infty\le1/2$ or $2r_\infty>1/2$.) The threshold case $r_\infty\in\{1/12,5/12\}$ is more delicate. In Section \ref{section: proof of theorem 1.1}, we provide examples of existence and nonexistence of an invariant metric with $r_\infty=1/12$.
Our examples suggest that to each $Q(z)$ with $r_\infty\in\{1/12,5/12\}$, one may associate a meromorphic differential $1$-form $\omega$ of the second kind on a certain elliptic curve $E$, and whether there exists an invariant metric realizing $Q$ hinges on whether $\omega$ is exact, i.e., whether $\omega$ represents the trivial class in the first de Rham cohomology group of $E$. Also, in the nonexistence example, we find that the entries in the monodromy matrices can be expressed in terms of periods or the central value of the $L$-function of the elliptic curve $y^2=x^3-1728$. We plan to study the threshold case in more detail in the future. \end{Remark} Motivated by Theorem \ref{theorem: 1.1}, we consider the data given below. \begin{equation}\label{equation: datas} \begin{split} &\text{A set of positive half-integers }\ \kappa_\rho,\kappa_i, \kappa_j\in\mathbb N/2, j=1,2,\ldots,m,\\ &\text{such\ that}\ 2\kappa_\rho/3\not\in\mathbb N,\ \kappa_i\not\in\mathbb N;\ \text{a set of inequivalent points}\ z_j\in\mathbb H,\\ &j=1,2,\ldots, m;\text{ and a positive number}\ \kappa_\infty. \end{split} \end{equation} \begin{Definition}\label{definition: Q is equipped} We say $Q$ is \emph{equipped with \eqref{equation: datas}} if \begin{enumerate} \item[(i)] $\braces{\rho,i,z_j:1\leq j\leq m}$ is the set of poles of $Q$; \item[(ii)] The local exponents of $Q$ at $\rho, i, z_j$ are $1/2\pm\kappa_\rho$, $1/2\pm\kappa_i$ and $1/2\pm\kappa_j$, respectively; \item[(iii)] $Q$ is apparent on $\mathbb H$; and \item[(iv)] The local exponents at $\infty$ are $\pm\kappa_\infty$. \end{enumerate} \end{Definition} \begin{Theorem}\label{theorem: existence of modular form Q with described local exponents and apparentness} Given \eqref{equation: datas}, there are modular forms $Q$ of weight $4$ equipped with \eqref{equation: datas}. Moreover, the number of such $Q$ is at most $\prod^m_{j=1}(2\kappa_j)$.
\end{Theorem} To prove the theorem, we first show that there is a finite set of polynomials such that the set of $Q(z)$ equipped with \eqref{equation: datas} is in a one-to-one correspondence with the set of common zeros of these polynomials. Then the theorem follows immediately from the classical B\'ezout theorem. Note that Eremenko and Tarasov \cite[Theorem 2.4]{Eremenko-Tarasov} have proved a stronger result, which in our setting states that for generic singular points $z_1,\ldots,z_m$, the number of $Q(z)$ is precisely $\prod_{j=1}^m(2\kappa_j)$. If the local exponents at $\infty$ are $\pm n/4$ with $n$ odd, then our second result asserts that there is a modular form of weight $-4$ coming from the equation. In the following, we use $T=\SM 1101$ and $S=\SM 0{-1}10$. \begin{Theorem}\label{theorem: 1.2} Suppose that ($\mb H_1$) and ($\mb H_2$) hold and $\kappa_\infty=n/4$, $n$ a positive odd integer. Then there is a constant $c\in\mathbb C$ such that $F(z):=y_-(z)^2+cy_+(z)^2$ satisfies $$ \left(F\big|_{-2}T\right)(z)=\left(F\big|_{-2}S\right)(z)=-F(z), $$ where $$ y_\pm(z)=q^{\pm n/4}\left(1+\sum_{j\geq 1}c_j^\pm q^j\right) $$ are solutions of \eqref{(1.1)}. \end{Theorem} The constant $c$ is rational if all coefficients of $Q(z)/\pi^2$ in the $q$-expansion are rational. We conjecture that $c$ is positive, but this has not been proved yet. Obviously, $F(z)^2$ is a modular form of weight $-4$ with respect to $\mathrm{SL}(2,\mathbb Z)$. Let $\Gamma_2$ be the group generated by $T^2=\SM 1201$ and $ST=\SM0{-1}11$, which is an index $2$ subgroup of $\mathrm{SL}(2,\mathbb Z)$. Then $F$ is a modular form of weight $-2$ on $\Gamma_2$. This fact can help us compute $c$ and $F(z)$ explicitly. For example, if $Q(z)=-\pi^2n^2E_4(z)/4$, then $F(z)$ is holomorphic on $\mathbb H$, but with a pole of order $n$ at $\infty$ ($\Gamma_2$ has only one cusp $\infty$ and two elliptic points of order $3$).
Thus it is not difficult to prove \begin{Corollary}\label{corollary: 1.3} Let $Q(z)=-\pi^2(n/2)^2E_4(z)$, where $n$ is a positive odd integer. Then there is a polynomial $P_{n-1}(x)\in\mathbb Q[x]$ of degree $(n-1)/2$ such that $$ F(z)=\frac{E_4(z)}{\Delta(z)^{1/2}}P_{n-1}(j(z)). $$ Here $E_4$ and $E_6$ are the Eisenstein series of weight $4$ and $6$ on $\mathrm{SL}(2,\mathbb Z)$ respectively: \begin{align*} &E_4(z)=1+240\sum_{m=1}^\infty\frac{m^3q^m}{1-q^m} =1+240\sum^\infty_{m=1}\left(\sum_{d|m}d^3\right)q^m, \quad q=e^{2\pi iz},\\ &E_6(z)=1-504\sum_{m=1}^\infty\frac{m^5q^m}{1-q^m} =1-504\sum^\infty_{m=1}\left(\sum_{d|m}d^5\right)q^m, \end{align*} $\Delta(z)=(E_4(z)^3-E_6(z)^2)/1728=q-24q^2+\cdots$, and $j(z)=E_4(z)^3/\Delta(z)$. \end{Corollary} For small $n$, $P_{n-1}$ are shown in the following table. $$ \extrarowheight3pt \begin{array}{c|ll} \hline\hline n & F & P_{n-1} \\ \hline 1 & y_-^2+3(2^3y_+)^2 & 1 \\ 3 & y_-^2+3(2^{12}y_+)^2 & j-1536 \\ 5 & y_-^2+3(2^{18}7^1y_+)^2 & j^2-2240j+1146880 \\ 7 & y_-^2+3(2^{28}3^1y_+)^2 & j^3-3072j^2+2752512j-704643072\\ 9 & (7y_-)^2+3(2^{34}11^113^1y_+)^2 & \begin{aligned} 49j^4&-192192j^3+253034496j^2\\ &\quad-125954949120j+19346680184832 \end{aligned}\\ \hline\hline \end{array} $$ In practice, it is not easy to verify the apparentness at a singular point with local exponents $1/2\pm\kappa$, $\kappa\in\frac{1}{2}\mathbb N$. Take a simple example \begin{equation*} \left(q\frac{d}{dq}\right)^2y=-\frac{1}{4\pi^2}y''=\left(\frac{n}{2}\right)^2E_4(z)y\quad \text{on}\ \mathbb H. \end{equation*} The local exponents at $\infty$ are $\pm n/2$. The standard method to verify the apparentness at $\infty$ is to show that there is a solution $y_-(z)$ having a $q$-expansion $$ y_-(z)=q^{-n/2}\left(1+\sum_{j\geq 1}c_jq^j\right). $$ Suppose $E_4(z)=\sum_{j\geq 0}b_jq^j$.
Substituting the $q$-expansions of $y_-$ and $E_4$ into the equation, we find that the coefficients $c_j$ satisfy \begin{equation}\label{1.7} \left(\left(j-\frac{n}{2}\right)^2-\left(\frac{n}{2}\right)^2\right)c_j=\left(\frac{n}{2}\right)^2\sum_{k+\ell=j,\ \ell<j}b_kc_\ell. \end{equation} For $j=1,2,\ldots,n-1$, $c_j$ can be determined from $c_0=1$. However, at $j=n$, the LHS of \eqref{1.7} vanishes. Therefore, $\infty$ is apparent if and only if the RHS of \eqref{1.7} is $0$ at $j=n$. If $n$ is small, then it is easy to check that the RHS of \eqref{1.7} is not $0$ at $j=n$. For general $n$, however, it is not easy to see from the recursive relation \eqref{1.7} why it does not vanish. Thus for a modular ODE, the standard method is not efficient for this purpose. We need other ideas. We consider \begin{equation}\label{equation: DE 3} y''(z)=\pi^2\left(rE_4(z)+s\frac{E_6(z)^2}{E_4(z)^2}+t\frac{E_4(z)^4}{E_6(z)^2}\right)y(z), \end{equation} where $r,s$ and $t$ are constant parameters. For simplicity, we denote the potential of \eqref{equation: DE 3} by $Q_3(z;r,s,t)$ or $Q_3(z)$ for short. Modulo $\mathrm{SL}(2,\mathbb Z)$, \eqref{equation: DE 3} has singularities only at $\rho$ and $i$ (recall that $E_4(z_0)=0$ if and only if $z_0$ is equivalent to $\rho$ under $\mathrm{SL}(2,\mathbb Z)$ and $E_6(z_0)=0$ if and only if $z_0$ is equivalent to $i$). Assume the local exponents of \eqref{equation: DE 3} are $1/2\pm\kappa_i$ at $i=\sqrt{-1}$ and $1/2\pm\kappa_\rho$ at $\rho=(1+\sqrt{-3})/2$. Then it is easy to prove that $s=s_{\kappa_\rho}$, $t=t_{\kappa_i}$, where \begin{equation}\label{equation: parameter s,t} s_{\kappa_\rho}=\frac{1-4\kappa^2_\rho}{9},\qquad \text{and}\qquad t_{\kappa_i}=\frac{1-4\kappa^2_i}{4}. \end{equation} See Section 3 for the computation. At $\infty$, the local exponents are $\pm\kappa_\infty$ if and only if \begin{equation*}\label{1.9} r+s_{\kappa_\rho}+t_{\kappa_i}=-(2\kappa_\infty)^2.
\end{equation*} In the following, we define the triple $\left(n_i,\ n_\rho,\ n_\infty\right)$ by $$ (n_i,n_\rho,n_\infty)=\left(\kappa_i,\frac{2\kappa_\rho}{3},2\kappa_\infty\right). $$ \begin{Theorem}\label{theorem: necessary and sufficient conditions for appartness at infinity} The modular differential equation \eqref{equation: DE 3} is apparent throughout $\mathbb H\cup\{\text{cusps}\}$ if and only if the triple $(n_i,n_\rho, n_\infty)$ consists of positive integers satisfying (i) the sum of these three integers is odd, and (ii) the sum of any two of these integers is greater than the third. Moreover, in such a case, the ratio of any two solutions is a modular function on $\mathrm{SL}(2,\mathbb Z)$. \end{Theorem} For example, if $$ Q(z)=\pi^2\left(\frac{23}{36}E_4(z)-\frac{9n^2-1}{9}\frac{E_6(z)^2}{E_4(z)^2}-\frac{3}{4}\frac{E_4(z)^4}{E_6(z)^2}\right),\quad n\in\mathbb N, $$ then we have $(n_i,n_\rho,n_\infty)=(1,n,n)$. By Theorem \ref{theorem: necessary and sufficient conditions for appartness at infinity}, \eqref{equation: DE 3} is apparent throughout $\mathbb H\cup\{\text{cusps}\}$. On the other hand, $\infty$ is not apparent for the ODE \begin{equation*}\label{equation: DE 4} y''(z)=-\pi^2n^2E_4(z)y(z). \end{equation*} As discussed around \eqref{1.7}, it is in general difficult to verify ($\mb H_1$), so we present some examples to show how to verify this condition. The first example is \begin{equation}\label{equation: DE in introduction} y''(z)=\pi^2\left(rE_4(z)+s\frac{E_6(z)^2}{E_4(z)^2}\right)y(z), \end{equation} where $r,s$ are constant parameters. For simplicity, we denote the potential of \eqref{equation: DE in introduction} by $Q_1(z;r,s)$ or $Q_1(z)$ for short. The only singular point modulo $\mathrm{SL}(2,\mathbb Z)$ is $\rho$. If the local exponents are $1/2\pm \kappa_\rho$, then a simple calculation in Section 3 shows $s=s_{\kappa_\rho}$, where $s_{\kappa_\rho}$ is given in \eqref{equation: parameter s,t}.
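To make the obstruction in \eqref{1.7} concrete, here is a small exact-arithmetic sketch (illustrative only; it uses the standard coefficient formula $b_m=240\sigma_3(m)$ for $E_4$ and the simplification $(j-n/2)^2-(n/2)^2=j(j-n)$). It computes the $c_j$ for $(q\,d/dq)^2y=(n/2)^2E_4(z)y$ and evaluates the right-hand side of \eqref{1.7} at $j=n$; a nonzero value confirms that $\infty$ is not apparent. It also encodes the parity-and-triangle criterion of Theorem \ref{theorem: necessary and sufficient conditions for appartness at infinity}:

```python
from fractions import Fraction

def sigma3(m):
    # divisor sum sigma_3(m) = sum of d^3 over divisors d of m
    return sum(d ** 3 for d in range(1, m + 1) if m % d == 0)

def e4_coeff(m):
    # q-expansion of E4: b_0 = 1 and b_m = 240*sigma_3(m) for m >= 1
    return 1 if m == 0 else 240 * sigma3(m)

def obstruction(n):
    """Right-hand side of the recursion (1.7) at j = n for the equation
    (q d/dq)^2 y = (n/2)^2 E4 y; infinity is apparent iff this vanishes."""
    lam = Fraction(n, 2) ** 2              # (n/2)^2
    c = [Fraction(1)]                      # c_0 = 1
    for j in range(1, n):
        rhs = lam * sum(e4_coeff(j - l) * c[l] for l in range(j))
        c.append(rhs / (j * (j - n)))      # (j - n/2)^2 - (n/2)^2 = j(j - n)
    return lam * sum(e4_coeff(n - l) * c[l] for l in range(n))

def apparent_triple(n_i, n_rho, n_inf):
    # criterion of the theorem above: positive integers with odd sum,
    # each one smaller than the sum of the other two
    s = n_i + n_rho + n_inf
    return (s % 2 == 1 and n_i + n_rho > n_inf
            and n_rho + n_inf > n_i and n_inf + n_i > n_rho)

# The obstruction is nonzero for small n (so infinity is NOT apparent),
# while the triple (1, n, n) from the example always passes the criterion:
print([obstruction(n) != 0 for n in range(1, 6)])
print(all(apparent_triple(1, n, n) for n in range(1, 10)))
```

For instance, $n=1$ gives the obstruction $(1/2)^2\,b_1=60\neq0$, in line with the remark that the non-vanishing is easy to check for small $n$.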
\begin{Theorem}\label{theorem: 1.5} Let $\kappa_\rho\in\frac12\mathbb N$. \begin{enumerate} \item[(a)] Assume $3\nmid 2\kappa_\rho$. Then $Q_1(z;r,s)$ is apparent if $s=s_{\kappa_\rho}$, for any $r\in\mathbb C$. \item[(b)] Assume $3|2\kappa_\rho$. Then there exists a polynomial $P(x)\in\mathbb Q[x]$ of degree $2\kappa_\rho/3$ such that $Q_1(z;r,s)$ with $s=s_{\kappa_\rho}$ is apparent if and only if $r$ is a root of $P(x)$. Moreover, $r$ satisfies \begin{equation}\label{equation: r+s=-(l+1/2)^2} r+s_{\kappa_\rho}=-\left(\ell+\frac{1}{2}\right)^2, \quad \text{where}\ \ell=0,1,2,\ldots,\frac{2\kappa_\rho}{3}-1. \end{equation} \end{enumerate} \end{Theorem} Next, we consider the ODE \begin{equation}\label{equation: DE 2 in introduction} y''(z)=\pi^2\left(rE_4(z)+ t\frac{E_4(z)^4}{E_6(z)^2} \right) y(z)\quad \text{on}\ \mathbb H, \end{equation} where $r$ and $t$ are constant parameters. For simplicity, the potential of \eqref{equation: DE 2 in introduction} is denoted by $Q_2(z;r,t)$ or $Q_2(z)$ for short. Similar to \eqref{equation: DE in introduction}, \eqref{equation: DE 2 in introduction} has local exponents $1/2\pm \kappa_i$ at $i$ if and only if $ t=t_{\kappa_i}, $ where $t_{\kappa_i}$ is given in \eqref{equation: parameter s,t}. \begin{Theorem}\label{theorem: 1.7} Let $\kappa_i\in\frac12\mathbb N$. \begin{enumerate} \item[(a)] Assume $\kappa_i\in\frac{1}{2}+\mathbb Z_{\geq 0}$. Then \eqref{equation: DE 2 in introduction} is apparent if and only if $t=t_{\kappa_i}$, for any $r\in\mathbb C$. \item[(b)] Assume $\kappa_i\in\mathbb N$. Then there exists a polynomial $P(x)\in\mathbb Q[x]$ of degree $\kappa_i$ such that \eqref{equation: DE 2 in introduction} with $t=t_{\kappa_i}$ is apparent if and only if $r$ is a root of $P(x)$.
Moreover, $r$ satisfies \begin{equation} \label{equation: r+t=-(l+1/3)^2} r+t_{\kappa_i}=-\left(\ell\pm\frac{1}{3}\right)^2, \quad\begin{cases} \ell=0,2,4,\ldots,\kappa_i-1,\quad&\text{if}\ \kappa_i\ \text{is\ odd},\\ \ell=1,3,5,\ldots,\kappa_i-1,\qquad&\text{if}\ \kappa_i\ \text{is\ even}.\ \end{cases} \end{equation} \end{enumerate} \end{Theorem} We use the Frobenius method to prove Part (a) of Theorem \ref{theorem: 1.5} and Theorem \ref{theorem: 1.7}. However, due to the modularity, we expand functions in powers of $w_\rho:=(z-\rho)/(z-\bar{\rho})$ and $w_i:=(z-i)/(z+i)$, rather than in powers of $z-\rho$ and $z-i$ as in the standard method. This kind of expansion has been used in \cite{Shimura-Maass} and \cite{Zagier123}. We will see in Section 3 that this type of expansion not only simplifies computations greatly, but also yields the precise degree of $P(x)$ in Theorem \ref{theorem: 1.5}(b) and Theorem \ref{theorem: 1.7}(b). We will present two proofs of \eqref{equation: r+s=-(l+1/2)^2} in Theorem \ref{theorem: 1.5}(b) and \eqref{equation: r+t=-(l+1/3)^2} in Theorem \ref{theorem: 1.7}(b) in Section 4 and Section 5. One is to apply Riemann's existence theorem on compact Riemann surfaces. The other is to apply the existence theorems for invariant metrics with curvature $1/2$. These geometric theorems are due to Eremenko \cite{Eremenko,Eremenko2}. Hopefully, these methods are useful for treating this kind of problem in modular differential equations. The paper is organized as follows. In Section 2, we will discuss the connection between the invariant metric $ds^2=e^u\abs{dz}^2$ of curvature $1/2$ and modular ODEs, in particular, the relation among the behavior of $u$ near cusps, the angles, and the local exponents of the modular ODE realized by $u$.
In Section 3, we will explain the expansion of modular forms in terms of the natural coordinate $w=(z-z_0)/(z-\bar{z}_0)$, and prove Theorem \ref{theorem: 1.5}(a) and Theorem \ref{theorem: 1.7}(a). Both Theorem \ref{theorem: 1.5}(b) and Theorem \ref{theorem: 1.7}(b) are proved in Section 4, and Theorem \ref{theorem: necessary and sufficient conditions for appartness at infinity} is proved in Section 5. Finally, we will prove Theorem \ref{theorem: 1.1} and Theorem \ref{theorem: 1.2} in Section 6 and Section 7, respectively. \section{Curvature equations and the modular ODEs} \subsection*{2.1.} Let $M$ be a compact Riemann surface, $p\in M$, and $z$ be a complex coordinate in an open neighborhood $U$ of $p$ with $z(p)=0$. We consider the following curvature equation: \begin{equation}\label{2.1} 4u_{z\bar{z}}+e^u=f\quad \text{on}\ U, \end{equation} where $f=4\pi\sum\alpha_i\delta_{p_i}$ is a sum of Dirac measures and $0\neq\alpha_i>-1$. The assumption $\alpha_i>-1$ ensures that $e^u$ is locally integrable in a neighborhood of $p_i$. The $L^1$-integrability implies \begin{equation}\label{equation: behavior of u near p} u(z)=2\alpha_i\log\abs{z-p_i}+O(1)\quad \text{near}\ p_i. \end{equation} This is a general result from elliptic PDE theory, see \cite{Chen-Lin-dc, Chen-Lin-cpa}. Let $w=w(z)$ be a coordinate change and set \begin{equation}\label{equation: transformation of u} \hat{u}(w)=u(z)-2\log\left|{\frac{dw}{dz}}\right|. \end{equation} Then $\hat{u}(w)$ also satisfies \begin{equation*} 4 \hat{u}_{w\bar{w}}+e^{\hat{u}}=\hat f, \qquad \hat f=4\pi\sum\alpha_i\delta_{\hat p_i}, \end{equation*} where $\hat p_i=w(p_i)$. In other words, $e^u\abs{dz}^2$ is invariant under a coordinate change. Since $u$ has singularities at $p_i$, the metric $ds^2=e^u\abs{dz}^2$ has a conic singularity at $p_i$. If $u$ is a solution of \eqref{2.1}, then the metric $ds^2=e^u\abs{dz}^2$ has curvature $1/2$ at any point $p\not\in\braces{p_i}$.
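As a quick sanity check that \eqref{2.1} with vanishing right-hand side encodes curvature $1/2$ (recall $4u_{z\bar z}=\Delta u$), one can verify numerically that the sample metric $e^u=8/(1+|z|^2)^2$, namely the Liouville representation \eqref{2.10} below with the developing map $h(z)=z$, satisfies $\Delta u+e^u=0$ away from singularities. The following finite-difference sketch is purely illustrative and not part of the argument:

```python
import math

def u(x, y):
    # u = log(8|h'|^2/(1+|h|^2)^2) with the sample developing map h(z) = z,
    # i.e. e^u = 8/(1 + x^2 + y^2)^2 (an illustrative choice)
    return math.log(8.0) - 2.0 * math.log(1.0 + x * x + y * y)

def residual(x, y, h=1e-4):
    # Delta u + e^u, where Delta u = 4 u_{z zbar} is approximated by
    # a standard five-point central difference
    lap = (u(x + h, y) + u(x - h, y) + u(x, y + h) + u(x, y - h)
           - 4.0 * u(x, y)) / (h * h)
    return lap + math.exp(u(x, y))

# The residual vanishes up to discretization error:
print(max(abs(residual(x, y))
          for (x, y) in [(0.3, 1.2), (-0.7, 0.4), (2.0, 0.5)]) < 1e-5)
```

For this choice the identity is exact: $\Delta u=-8/(1+|z|^2)^2=-e^u$, so only discretization and rounding error remain.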
Suppose that $M$ is covered by $\braces{U_i}$ and $z_i$ is a coordinate in $U_i$. We say that the collection $\braces{u_i}$ is a solution of \eqref{2.1} on $M$ if $u_i$ is a solution of \eqref{2.1} on $U_i$ for each $i$ and the $u_i$ satisfy the transformation law $u_j=u_i-2\log\left|\frac{dz_j}{dz_i}\right|$ on $U_i\cap U_j$. Let $g$ be a metric on $M$ with curvature $K$. Then the equation \eqref{2.1} on $M$ is equivalent to the curvature equation: \begin{equation}\label{2.3} \Delta_g\hat{u}+e^{\hat{u}}-K=4\pi\sum{\alpha}_i\delta_{p_i}\quad \text{on}\ M, \end{equation}where $\Delta_g$ is the Beltrami-Laplace operator of $(M,g)$. We may normalize the metric $g$ so that the area of $M$ is equal to $1$. In the case when $g$ has constant curvature, \eqref{2.3} can be written as \begin{equation*} \Delta_g\hat{u}+\rho\left(\frac{e^{\hat{u}}}{\int e^{\hat{u}}}-1\right)=4\pi\sum{\alpha}_i(\delta_{p_i}-1)\quad \text{on}\ M. \end{equation*} This nonlinear PDE is often called \emph{a mean field equation} in analysis. See \cite{Chai-Lin-Wang, Chen-Lin-cpa, Chen-Lin-dc, Chen-Lin-spectrum, Chen-Lin-weight2, Chen-Kuo-Lin-simplezero} and \cite{Lin-green-function, Lin-Wang-elliptic, Lin-Wang-mean-field} for the recent development of mean field equations. In this paper, we consider the compact Riemann surface that is the quotient of $\mathbb H^*:=\mathbb H\cup\mathbb Q\cup\braces{\infty}$ by a finite index subgroup $\Gamma$ of $\mathrm{SL}(2,\mathbb Z)$, and the equation \eqref{2.1} is defined on the upper half-plane $\mathbb H$: \begin{equation}\label{2.4} 4u_{z\bar{z}}+e^u=4\pi\sum\alpha_i\delta_{p_i}\quad \text{on}\ \mathbb H, \end{equation} where the RHS is invariant under the action of $\Gamma$, i.e., the set $\braces{p_i}$ is invariant under the action of $\Gamma$ and $\alpha_i=\alpha_j$ if $p_i=\gamma\cdot p_j$ for some $\gamma\in\Gamma$.
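The weight-$4$ factor in \eqref{2.5} below reflects the identity $d(\gamma\cdot z)/dz=(cz+d)^{-2}$ for $\gamma\in\mathrm{SL}(2,\mathbb Z)$ (since $ad-bc=1$), so that $-2\log\abs{dw/dz}=4\log\abs{cz+d}$ in \eqref{equation: transformation of u} with $w=\gamma\cdot z$. A quick numerical check of this derivative identity (illustrative only; the matrix below is an arbitrary choice):

```python
def mobius(a, b, c, d, z):
    # action of gamma = [[a, b], [c, d]] on the upper half-plane
    return (a * z + b) / (c * z + d)

def numeric_deriv(f, z, h=1e-6):
    # central-difference approximation of f'(z) for holomorphic f
    return (f(z + h) - f(z - h)) / (2 * h)

# an arbitrary gamma in SL(2, Z): ad - bc = 2*1 - 1*1 = 1
a, b, c, d = 2, 1, 1, 1
z = 0.3 + 1.1j
lhs = numeric_deriv(lambda w: mobius(a, b, c, d, w), z)
rhs = 1.0 / (c * z + d) ** 2
print(abs(lhs - rhs) < 1e-8)  # True
```

The same computation with any other unimodular integer matrix gives the same agreement, which is all that the transformation law \eqref{2.5} uses.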
The transformation law \eqref{equation: transformation of u} for coordinate change is equivalent to asking $u$ to satisfy \begin{equation}\label{2.5} u(\gamma z)=u(z)+4\log\abs{cz+d},\quad \forall\gamma=\M abcd\in\Gamma. \end{equation} Let $s$ be a cusp of $\Gamma$ and $\gamma\in\mathrm{SL}(2,\mathbb Z)$ be a matrix such that $\gamma\cdot\infty=s$. Then we define $u_\gamma$ by $$ u_\gamma(z):=u(\gamma\cdot z)-4\log\abs{cz+d}. $$ Thus, $u$ is required to satisfy the following behavior near $s$: there is $\alpha_s>0$ such that \begin{equation}\label{2.6} e^{u_\gamma(z)}=\abs{q_N}^{4\alpha_s}(c+o(1)),\quad q_N=e^{2\pi i z/N},\ c>0, \end{equation} where $N$ is the width of the cusp $s$ and $o(1)\rightarrow 0$ as $q_N\rightarrow 0$. Given the RHS of \eqref{2.4} and a positive $\alpha_s$ at the cusp $s$, we ask for a solution $u$ of \eqref{2.4} satisfying \eqref{2.5} and \eqref{2.6} at any cusp. The conic angle $\theta$, defined at a singularity $p_i$ or a cusp $s$, is an important geometric quantity. Suppose that a metric $ds^2$, conformal to the flat metric $\abs{dz}^2$, has a conic singularity at $p$, and $w$ is a coordinate near $p$ with $w(p)=0$. If \begin{equation}\label{equation: ds^2=} ds^2=\abs w^{2(\theta-1)}(c+o(1))\abs{dw}^2,\quad c>0, \end{equation} then we call $\theta$ the \emph{angle} at $p$, and $2\pi\theta$ the \emph{total angle} at $p$. Since $ds^2$ is required to have a finite area, the angle $\theta$ is always \emph{positive}. Note that $ds^2$ is smooth (as a metric) at $p$ if and only if $\theta=1$. Next, we want to calculate the angles of $ds^2=e^u\abs {dz}^2$, if $u$ is a solution of \eqref{2.4}. Note that $z$ is not a coordinate of $M$ if $p_i$ is an elliptic point of order $e_i>1$. Indeed, $w=(z-p_i)^{e_i}$ is the local coordinate near $p_i$. For simplicity, we denote $z-p_i$ by $z$ ($z(p_i)=0$). By \eqref{equation: behavior of u near p}, we have $u(z)=2\alpha_i\log\abs z+O(1)$, i.e., $e^{u(z)}=\abs{z}^{2\alpha_i}(c_0+o(1))$, $c_0>0$. 
Then $$ e^{u(z)}\abs{dz}^2=\abs{w}^{(2\alpha_i+2)/e_i-2}(d+o(1))\abs{dw}^2,\quad d>0. $$ By \eqref{equation: ds^2=}, we have \begin{equation}\label{2.7} \theta_i=\frac{\alpha_i+1}{e_i}. \end{equation} At a cusp $s$, the coordinate is $q_N=e^{2\pi i z/N}$, where $N$ is the width of the cusp $s$. By \eqref{2.6}, $$ e^{u_{\gamma}(z)}\abs{dz}^2=\abs{q_N}^{4\alpha_s-2}(c+o(1))\abs{dq_N}^2,\quad c>0. $$ So the angle $\theta_s$ at $s$ is \begin{equation}\label{2.8} \theta_s=2\alpha_s. \end{equation} \subsection*{2.2. Integrability and modular differential equations} Equation \eqref{2.4} is also known as an integrable system. There are two important features related to the integrability. One is that \begin{equation}\label{equation: uzz-uz/2 mero} Q(z):=-\frac{1}{2}\left( u_{zz}-\frac{1}{2}u^2_z\right)\quad\text{is\ a\ meromorphic\ function}, \end{equation} because $Q(z)_{\bar{z}}=-\frac{1}{2}(u_{z\bar{z}z}-u_{z\bar{z}}u_z)=0$ by \eqref{2.4}. \begin{Lemma}\label{lemma: 2.1} Each $p_i$ is a double pole of $Q(z)$ with the expansion $\frac{\alpha_i}{2}\left(\frac{\alpha_i}{2}+1\right)(z-p_i)^{-2}+O\left((z-p_i)^{-1}\right)$. \end{Lemma} \begin{proof} Since $u(z)=2\alpha_i\log\abs{z-p_i}+O(1)$ near $p_i$, we have $u_z(z)=\alpha_i(z-p_i)^{-1}+O(1)$ and $u_{zz}(z)=-\alpha_i(z-p_i)^{-2}+O\left((z-p_i)^{-1}\right)$. Then the lemma follows immediately. \end{proof} On the other hand, the Liouville theorem asserts that locally any solution $u$ can be expressed as \begin{equation}\label{2.10} u(z)=\log\frac{8\abs{h'(z)}^2}{\left(1+\abs{h(z)}^2\right)^2}, \end{equation} where $h(z)$ is a meromorphic function. Recall the Schwarz derivative \begin{equation}\label{3.3} \braces{h,z}=\left(\frac{h''}{h'}\right)' -\frac{1}{2}\left(\frac{h''}{h'}\right)^2. \end{equation} Note that the Schwarz derivative can be used to recover $h$ from $u$. Indeed, a direct computation from \eqref{2.10} yields that \begin{equation}\label{2.12} \braces{h,z}=-2Q(z). 
\end{equation} See \cite{Chai-Lin-Wang, Lin-green-function, Lin-Wang-elliptic, Lin-Wang-mean-field} for the details of the proofs of \eqref{2.10}--\eqref{2.12}. The meromorphic function $h$ is called a \emph{developing map} for the solution $u$. Any two developing maps $h_i$, $i=1,2$, of $u$ have the same Schwarz derivative by \eqref{2.12}, thus they can be connected by a M\"obius transformation, \begin{equation}\label{2.13} h_2(z)=\frac{ah_1(z)+b}{ch_1(z)+d},\quad \M abcd\in\mathrm{SL}(2,\mathbb C). \end{equation} By \eqref{2.10}, we obtain \begin{equation}\label{3.6} \frac{\abs{h'_1(z)}^2}{\left(1+\abs{h_1(z)}^2\right)^2} =\frac{\abs{h_2'(z)}^2}{\left(1+\abs{h_2(z)}^2\right)^2}, \end{equation} which implies that the matrix $\SM abcd$ is a unitary matrix. Next, we recall the classical Hermite theorem, see \cite{Whittaker-Watson}. \begin{theorem} Let $y_i$, $i=1,2$, be two independent solutions of $$ y''=Q(z)y. $$ Then the ratio $h(z)=y_2(z)/y_1(z)$ satisfies $\braces{h,z}=-2Q(z)$. \end{theorem} Let $Q(z)$ be the meromorphic function \eqref{equation: uzz-uz/2 mero} obtained from the solution $u$. Consider the ODE \begin{equation}\label{equation: y''=Qy in section 2} y''=Q(z)y. \end{equation} Then \eqref{equation: uzz-uz/2 mero} and the Hermite theorem together imply that $h(z)$ is a ratio of two solutions of \eqref{equation: y''=Qy in section 2}. \begin{Theorem}\label{theorem: 2.1} Suppose $u$ is a solution of \eqref{2.4}. Then \eqref{equation: y''=Qy in section 2} satisfies ($\mb H_1$) and the following statements hold. \begin{enumerate} \item[(a)] The function $Q(z)$ is a meromorphic modular form of weight $4$ with respect to $\Gamma$ and holomorphic at any cusp. Moreover, at a cusp $s$, $Q(s)<0$. \item[(b)] \eqref{equation: y''=Qy in section 2} is Fuchsian and the local exponents of \eqref{equation: y''=Qy in section 2} at $p_i$ are $-\alpha_i/2$, $\alpha_i/2+1 $, and $\pm\alpha_s$ at a cusp. \item[(c)] If $\alpha_i\in\mathbb N$ for all $i$, then \eqref{equation: y''=Qy in section 2} is apparent.
\end{enumerate} \end{Theorem} \begin{proof} (a) By the chain rule, we have \begin{align*} (u\kern0.5ex\vcenter{\hbox{$\scriptstyle\circ$}}\kern0.5ex\gamma)_z(z)&=u_z(\gamma\cdot z)(cz+d)^{-2},\\ (u\kern0.5ex\vcenter{\hbox{$\scriptstyle\circ$}}\kern0.5ex\gamma)_{zz}(z)&=u_{zz}(\gamma\cdot z)(cz+d)^{-4} -u_z(\gamma\cdot z)\frac{2c}{(cz+d)^3}. \end{align*} Thus \begin{align*} (u\kern0.5ex\vcenter{\hbox{$\scriptstyle\circ$}}\kern0.5ex \gamma)_{zz}-\frac{1}{2}(u\kern0.5ex\vcenter{\hbox{$\scriptstyle\circ$}}\kern0.5ex \gamma)^2_z &=\left(u_{zz}(\gamma\cdot z)-\frac{1}{2}u^2_z(\gamma\cdot z)\right)\\ &\qquad\times(cz+d)^{-4}-u_z(\gamma\cdot z)\cdot\frac{2c}{(cz+d)^3}. \end{align*} On the other hand, the transformation law \eqref{2.5} yields \begin{align*} (u\kern0.5ex\vcenter{\hbox{$\scriptstyle\circ$}}\kern0.5ex \gamma)_z(z)&=u_z(z)+\frac{2c}{(cz+d)}, \\ (u\kern0.5ex\vcenter{\hbox{$\scriptstyle\circ$}}\kern0.5ex\gamma)_{zz}&=u_{zz}-\frac{2c^2}{(cz+d)^2}. \end{align*} Hence, we have \begin{align*} (u\kern0.5ex\vcenter{\hbox{$\scriptstyle\circ$}}\kern0.5ex\gamma)_{zz}-\frac{1}{2}(u\kern0.5ex\vcenter{\hbox{$\scriptstyle\circ$}}\kern0.5ex\gamma)^2_z&=\left(u_{zz}-\frac{1}{2}u^2_z\right)-u_z(z)\cdot\frac{2c}{(cz+d)}-\frac{4c^2}{(cz+d)^2}\\ &=\left(u_{zz}-\frac{1}{2}u^2_z\right)-\frac{2c}{(cz+d)}(u\kern0.5ex\vcenter{\hbox{$\scriptstyle\circ$}}\kern0.5ex\gamma)_z(z). \end{align*} Since $$ \frac{-2c}{(cz+d)}(u\kern0.5ex\vcenter{\hbox{$\scriptstyle\circ$}}\kern0.5ex\gamma)_z=\frac{-2c}{(cz+d)^3}u_z(\gamma\cdot z), $$ we find that $Q:=-\frac{1}{2}\left(u_{zz}-\frac{1}{2}u^2_z\right)$ satisfies $$ Q(\gamma\cdot z)=Q(z)\cdot(cz+d)^4. $$ This proves the modularity of $Q$. To prove the holomorphy of $Q$ at cusps, without loss of generality, we may assume that the cusp $s$ is $\infty$. Then $q_N=e^{2\pi iz/N}$ is the local coordinate near $\infty$, where $N$ is the width of the cusp $\infty$. 
By the transformation law under coordinate changes, the solution $\hat{u}$ in terms of $q_N$ is given by $\hat{u}(q_N)=u(z)-2\log\left|\frac{dq_N}{dz}\right|$. Thus, $$ e^{\hat{u}(q_N)}=\frac{8\abs{h'(z)}^2}{\left(1+\abs{h(z)}^2\right)^2} \left|\frac{dq_N}{dz}\right|^{-2} =8\left|\frac{d}{dq_N}h(z)\right|^2\left(1+\abs{h(z)}^2\right)^{-2}. $$ Hence the developing map can be written as $h(z)=\hat{h}(q_N)$, where $q_N=e^{2\pi iz/N}$. Note that \begin{align*} \braces{h,z}&=\braces{\hat{h},q_N}\left(\frac{dq_N}{dz}\right)^2+\braces{q_N,z}\\ &=\braces{\hat{h},q_N}q^2_N\left(\frac{-4\pi^2}{N^2}\right)+\frac{2\pi^2}{N^2}. \end{align*} Since $$ -\frac{1}{2}\braces{\hat{h},q_N}=-\frac{1}{2}\left(\hat{u}_{q_Nq_N}-\frac{1}{2}\hat{u}_{q_N}^2\right)=\frac{\alpha}{2}\left(\frac{\alpha}{2}+1\right)q^{-2}_N+O\left(q^{-1}_N\right), $$ where $\alpha=\theta-1$ and $\theta$ is the angle at $\infty$, we have \begin{equation*} \begin{split} \lim_{\mathrm{Im\,} z\rightarrow\infty}Q(z)&=-\frac{\pi^2}{N^2} \left(1+\frac{4\alpha}{2}\left(\frac{\alpha}{2}+1\right)\right)=-\frac{\pi^2}{N^2}(1+\alpha)^2<0, \end{split} \end{equation*} because $\alpha>-1$. This proves Part (a). Part (b) is a consequence of Lemma \ref{lemma: 2.1}. For Part (c), if $\alpha_i\in\mathbb N$ then the local exponents $-\alpha_i/2$ and $\alpha_i/2+1$ can be written as $1/2\pm \kappa_i$, $\kappa_i=(\alpha_i+1)/2\in\frac{1}{2}\mathbb N$, and by the Liouville theorem \eqref{2.10} we see easily that $h(z)$ cannot have a logarithmic singularity at $p_i$. The fact that $h(z)$ is a ratio of two solutions of \eqref{equation: y''=Qy in section 2} implies that no solution of \eqref{equation: y''=Qy in section 2} has a logarithmic singularity. This proves Part (c). \end{proof} Together with the Liouville theorem, we have \begin{Proposition}\label{proposition: Q can be realized} Suppose $Q$ is a meromorphic modular form of weight $4$ on $\mathrm{SL}(2,\mathbb Z)$.
If there are two independent solutions $y_1$ and $y_2$ of \eqref{equation: y''=Qy in section 2} such that $h(z)=y_2(z)/y_1(z)$ satisfies $h(\gamma z)=\frac{a h(z)+b}{ch(z)+d}$ for some unitary matrix $\SM abcd$ depending on $\gamma$, for any $\gamma\in\mathrm{SL}(2,\mathbb Z)$, then $Q$ can be realized. \end{Proposition} \begin{proof} Let $u(z)=\log\frac{8\abs{h'(z)}^2}{\left(1+\abs{h(z)}^2\right)^2}$. Since the matrices $\SM abcd$ are unitary, $u(z)$ is well-defined on $\mathbb H$ and satisfies \eqref{2.5}. Further, the Liouville theorem says that $u(z)$ satisfies \eqref{2.4}. \end{proof} \subsection*{2.3. Examples} In this subsection, we will give some examples to indicate how to determine $Q$ provided that the RHS of \eqref{2.4} is known and $\alpha_\infty$ is given at $\infty$. Here, $\Gamma=\mathrm{SL}(2,\mathbb Z)$.\\ \noindent{\bf Example 1.} Assume that the RHS of \eqref{2.4} is equal to $0$. Then $Q:=-\frac{1}{2}\left(u_{zz}-\frac{1}{2}u^2_z\right)$ is a holomorphic modular form of weight $4$. Thus, \begin{equation} Q(z)=\pi^2rE_4(z). \end{equation} Since $\pm\alpha_\infty$ are the local exponents of \eqref{(1.1)} at $\infty$, we have $r=-4\alpha^2_\infty$. Thus, $Q$ is uniquely determined. Note that at $\infty$, the angle $\theta_\infty$ is equal to $2\alpha_\infty$.\\ \noindent{\bf Example 2.} Assume that the RHS of \eqref{2.4} is $4\pi n\sum\delta_p$, where the summation is over $\gamma\cdot\rho$ for every $\gamma\in\Gamma$. Then $Q$ is a meromorphic modular form of weight $4$ whose poles are double poles located at the points $\gamma\cdot\rho$. Thus, $E_4(z)^2 Q(z)$ is a holomorphic modular form of weight $12$, and then $$ Q(z)=\pi^2\left(rE_4(z)+s\frac{E_6(z)^2}{E_4(z)^2}\right), $$ where we recall that the graded ring of modular forms on $\mathrm{SL}(2,\mathbb Z)$ is generated by $E_4(z)$ and $E_6(z)$.
By Theorem \ref{theorem: 2.1}, the local exponents at $\rho$ are $-n/2$ and $n/2+1$, which implies $\kappa_\rho=(n+1)/2$, $s=(1-4\kappa^2_\rho)/9$, and $-(r+s)/4=\alpha^2_\infty$. Thus $Q$ is uniquely determined. Moreover, the angles $\theta_j$ in this example are $\theta_i=1/2$, $\theta_\rho=(n+1)/3$ and $\theta_\infty=2\alpha_\infty$.\\ \noindent{\bf Example 3.} Assume that the RHS of \eqref{2.4} is equal to $4\pi n\sum\delta_p$, where the summation is over $\gamma\cdot i$ for every $\gamma\in\Gamma$. Reasoning as in Example 2, we have \begin{equation} Q(z)=\pi^2\left(rE_4(z)+t\frac{E_4(z)^4}{E_6(z)^2}\right). \end{equation} By Theorem \ref{theorem: 2.1}, we have $$ \kappa_i=\frac{n+1}{2},\quad t=\frac{1-4\kappa^2_i}{4}, \quad \text{and}\quad r+t=-4\alpha^2_\infty. $$ Thus $Q$ is uniquely determined. Moreover, we have $\theta_i=(n+1)/2$, $\theta_\rho=1/3$, and $\theta_\infty=2\alpha_\infty$. \\ \noindent{\bf Example 4.} Assume the RHS of \eqref{2.4} is $4\pi\left(n\sum_{p_1}\delta_{p_1}+m\sum_{p_2}\delta_{p_2}\right)$, where $p_1,p_2$ run over zeros of $E_4(z)$ and $E_6(z)$, respectively. Then \begin{equation} Q(z)=\pi^2\left(rE_4(z)+s\frac{E_6(z)^2}{E_4(z)^2}+t\frac{E_4(z)^4}{E_6(z)^2}\right). \end{equation} The conditions on the local exponents at $\rho$, $i$ and $\infty$ yield that \begin{align*} &s=\frac{1-4\kappa_\rho^2}{9},\quad \kappa_\rho=\frac{n+1}{2};\quad t=\frac{1-4\kappa^2_i}{4},\quad \kappa_i=\frac{m+1}{2};\\ &r+s+t=-4\alpha^2_\infty. \end{align*} Then $Q$ is uniquely determined. Moreover, $\theta_{p_1}=(n+1)/3$, $\theta_{p_2}=(m+1)/2$ and $\theta_\infty=2\alpha_\infty$. \subsection*{2.4. Eremenko's theorem} A.
Eremenko \cite{Eremenko, Eremenko2} gave necessary and sufficient conditions on the angles $\theta_i$, $1\leq i\leq 3$, at the three singular points $i,\rho,\infty$ for the existence of a solution $u$ of \eqref{2.4}--\eqref{2.6}.\\ When one of the angles is an integer, the following conditions are required.\\ \noindent{\bf (A)} If only one of the angles (say $\theta_1$) is an integer, then either $\theta_2+\theta_3$ or $\abs{\theta_2-\theta_3}$ is an integer $m$ of opposite parity to $\theta_1$ with $m\leq\theta_1-1$. If all the angles are integers, then (1) $\theta_1+\theta_2+\theta_3$ is odd, and (2) $\theta_i<\theta_j+\theta_k$ for $i\neq j\neq k$.\\ \noindent{\bf Eremenko's theorem.} If one of the $\theta_j$ is an integer, then a necessary and sufficient condition for the existence of a conformal metric of positive constant curvature on the sphere with three conic singularities of angles $\theta_1$, $\theta_2$, $\theta_3$ ($\theta_j\neq 1$, $1\leq j\leq 3$), is that $\braces{\theta_1,\theta_2,\theta_3}$ satisfies (A). Moreover, if (A) holds and there is only one integral angle, then the metric is unique. \section{Expansions of Eisenstein series at $\rho$ and $i$} The $q$-expansion of a modular form $f(z)$, i.e., the expansion of $f(z)$ with respect to the local parameter $q$ at the cusp $\infty$, is frequently studied and is of great significance in many problems in number theory. Here we shall review properties of series expansions of modular forms at a point $z_0\in\mathbb H$, other than the cusp $\infty$. \begin{Definition} Let $\Gamma$ be a Fuchsian subgroup of the first kind of $\mathrm{SL}(2,\mathbb R)$. Let $f(z)$ be a meromorphic modular form of weight $k$ on $\Gamma$. Given $z_0\in\mathbb H$, let $$ w=w_{z_0}(z)=\frac{z-z_0}{z-\overline z_0}. $$ The expansion of the form \begin{equation} \label{equation: power series expansion} f(z)=(1-w)^k\sum_{n\ge n_0}\frac{b_n}{n!}w^n \end{equation} is called the \emph{power series expansion} of $f$ at $z_0$.
\end{Definition} One advantage of this expansion is that its coefficients $b_n$ have a simple expression in terms of the Shimura-Maass derivatives of $f$. To state the result, we recall that $f:\mathbb H\to\mathbb C$ is said to be \emph{nearly holomorphic} if it is of the form $$ f(z)=\sum_{d=0}^n\frac{f_d(z)}{(z-\overline z)^d} $$ for some holomorphic functions $f_d$. If $k$ is an integer and $f:\mathbb H\to\mathbb C$ is a nearly holomorphic function such that $$ f\left(\frac{az+b}{cz+d}\right)=(cz+d)^kf(z) $$ for all $\SM abcd\in\Gamma$ and each $f_d$ is holomorphic at every cusp, then we say $f$ is a \emph{nearly holomorphic modular form} of weight $k$ on $\Gamma$. For a nearly holomorphic function $f:\mathbb H\to\mathbb C$, we define its \emph{Shimura-Maass} derivative of weight $k$ by $$ (\partial_kf)(z):=\frac1{2\pi i}\left(f'(z)+\frac{kf(z)}{z-\overline z}\right). $$ We have the following important properties of Shimura-Maass derivatives. \begin{Lemma}[{\cite[Equations (1.5) and (1.8)]{Shimura-Maass}}] \label{lemma: Maass} For any nearly holomorphic functions $f,g:\mathbb H\to\mathbb C$, any integers $k$ and $\ell$, and any $\gamma\in\mathrm{GL}^+(2,\mathbb R)$, we have $$ \partial_{k+\ell}(fg)=(\partial_k f)g+f(\partial_\ell g) $$ and $$ \partial_k\left(f\big|_k\gamma\right)=(\partial_k f)\big|_{k+2}\gamma. $$ \end{Lemma} \begin{Remark} The second property in the lemma implies that if $f$ is a nearly holomorphic modular form of weight $k$ on $\Gamma$, then $\partial_kf$ is a nearly holomorphic modular form of weight $k+2$ on $\Gamma$. \end{Remark} Set also $$ \partial_k^nf=\partial_{k+2n-2}\ldots\partial_kf. $$ Then the coefficients $b_n$ in \eqref{equation: power series expansion} have the following expression.
\begin{Proposition}[{\cite[Proposition 17]{Zagier123}}] \label{proposition: power series coefficients} If $f(z)$ is a holomorphic modular form of weight $k$ on $\Gamma$, then the coefficients $b_n$ in \eqref{equation: power series expansion} are $$ b_n=(\partial^n_kf)(z_0)(-4\pi\,\mathrm{Im\,} z_0)^n $$ for $n\ge 0$. That is, we have $$ f(z)=(1-w)^k\sum_{n=0}^\infty\frac{(\partial^n_kf)(z_0) (-4\pi\,\mathrm{Im\,} z_0)^n}{n!}w^n. $$ \end{Proposition} Note that there is a misprint in Proposition 17 of \cite{Zagier123}. The proof of the proposition shows that $b_n=(\partial^nf)(z_0)(-4\pi \,\mathrm{Im\,} z_0)^n$, but the statement misses the minus sign. We will use these properties of power series expansions of modular forms to show that the apparentness of \eqref{(1.1)} at a point $z_0$ implies the apparentness at $\gamma z_0$ for all $\gamma\in\mathrm{SL}(2,\mathbb Z)$. We first prove two lemmas. The first lemma relates the power series expansion of a meromorphic modular form at $z_0$ to that at $\gamma z_0$. \begin{Lemma} \label{lemma: other points} Assume that $f$ is a meromorphic modular form of weight $k$ on $\mathrm{SL}(2,\mathbb Z)$. Assume that the power series expansion of $f$ at $z_0\in\mathbb H$ is $$ f(z)=(1-w)^k\sum_{n=n_0}^\infty a_nw^n, \qquad w=w_{z_0}(z)=\frac{z-z_0}{z-\overline z_0}. $$ For $\gamma=\SM abcd\in\mathrm{SL}(2,\mathbb Z)$, let $\widetilde w=w_{\gamma z_0}(z)=(z-\gamma z_0)/(z-\gamma\overline{z}_0)$. Then the power series expansion of $f$ at $\gamma z_0$ is $$ (cz_0+d)^k(1-\widetilde w)^k\sum_{n=n_0}^\infty a_n\left(\frac{cz_0+d}{c\overline z_0+d} \right)^n\widetilde w^n. $$ \end{Lemma} \begin{proof} Since every meromorphic modular form on $\mathrm{SL}(2,\mathbb Z)$ can be written as the quotient of two holomorphic modular forms on $\mathrm{SL}(2,\mathbb Z)$, it suffices to prove the lemma under the assumption that $f$ is a holomorphic modular form.
According to Proposition \ref{proposition: power series coefficients}, the power series expansions of $f$ at $z_0$ and at $\gamma z_0$ are $$ (1-w)^k\sum_{n=0}^\infty\frac{(\partial^n_kf)(z_0)(-4\pi\,\mathrm{Im\,} z_0)^n}{n!}w^n $$ and $$ (1-\widetilde w)^k\sum_{n=0}^\infty\frac{(\partial^n_kf)(\gamma z_0)(-4\pi\,\mathrm{Im\,} \gamma z_0)^n}{n!}\widetilde w^n, $$ respectively. Since $\partial^nf(z)$ is modular of weight $k+2n$ (see the remark following Lemma \ref{lemma: Maass}), we have $$ (\partial^nf)(\gamma z_0)=(cz_0+d)^{k+2n}(\partial^nf)(z_0). $$ Also, \begin{equation} \label{equation: Im} \mathrm{Im\,}\gamma z_0=\frac{\mathrm{Im\,} z_0}{|cz_0+d|^2}. \end{equation} Thus, if the power series expansion of $f$ at $z_0$ is $$ (1-w)^k\sum_{n=0}^\infty\frac{b_n}{n!}w^n, $$ then that of $f$ at $\gamma z_0$ is \begin{equation*} \begin{split} &(1-\widetilde w)^k\sum_{n=0}^\infty\frac{b_n}{n!} \frac{(cz_0+d)^{k+2n}}{|cz_0+d|^{2n}}\widetilde w^n \\ &\qquad=(cz_0+d)^k(1-\widetilde w)^k\sum_{n=0}^\infty \frac{b_n}{n!}\left(\frac{cz_0+d}{c\overline z_0+d}\right)^n \widetilde w^n. \end{split} \end{equation*} This proves the lemma. \end{proof} The next lemma expresses $y''(z)$ in terms of $w$. \begin{Lemma} \label{lemma: df/dw} Let $z_0\in\mathbb H$ and set $w=w_{z_0}(z)=(z-z_0)/(z-\overline z_0)$. If $$ y(z)=\frac1{1-w}\sum_{n=0}^\infty a_nw^{\alpha+n} $$ for some real number $\alpha$, then \begin{equation*} \frac{d^2}{dz^2}y(z)=\frac{(1-w)^3}{(z_0-\overline z_0)^2} \sum_{n=0}^\infty a_n(\alpha+n)(\alpha+n-1)w^{\alpha+n-2}. \end{equation*} \end{Lemma} \begin{proof} We first note that $$ 1-w=\frac{z_0-\overline z_0}{z-\overline z_0} $$ and hence \begin{equation} \label{equation: w'} \frac{dw}{dz}=\frac{z_0-\overline z_0}{(z-\overline z_0)^2} =\frac{(1-w)^2}{z_0-\overline z_0}, \quad \frac{d^2w}{dz^2}=-2\frac{z_0-\overline z_0}{(z-\overline z_0)^3} =-\frac{2(1-w)^3}{(z_0-\overline z_0)^2}. \end{equation} Let $g(w)=\sum a_nw^{\alpha+n}$.
We compute that $$ \frac{dy}{dz}=\left(\frac1{(1-w)^2}g(w) +\frac1{1-w}\frac{dg(w)}{dw}\right)\frac{dw}{dz} $$ and \begin{equation*} \begin{split} \frac{d^2y}{dz^2}&=\left( \frac2{(1-w)^3}g(w)+\frac2{(1-w)^2}\frac{dg(w)}{dw} +\frac1{1-w}\frac{d^2g(w)}{dw^2}\right) \left(\frac{dw}{dz}\right)^2 \\ &\qquad+\left(\frac1{(1-w)^2}g(w)+\frac1{1-w}\frac{dg(w)}{dw}\right) \frac{d^2w}{dz^2}. \end{split} \end{equation*} Using \eqref{equation: w'}, we reduce this to $$ \frac{d^2y}{dz^2}=\frac{(1-w)^3}{(z_0-\overline z_0)^2} \frac{d^2g(w)}{dw^2}. $$ This proves the lemma. \end{proof} \begin{Proposition} \label{proposition: same exponents} Suppose that $Q$ is a meromorphic modular form of weight $4$ with respect to $\mathrm{SL}(2,\mathbb Z)$ such that \eqref{(1.1)} is Fuchsian. Let $z_0$ be a pole of $Q$. Then the local exponents of \eqref{(1.1)} at $\gamma z_0$ are the same for all $\gamma\in\mathrm{SL}(2,\mathbb Z)$. Also, if \eqref{(1.1)} is apparent at $z_0$, then it is apparent at $\gamma z_0$ for all $\gamma\in\mathrm{SL}(2,\mathbb Z)$. \end{Proposition} \begin{proof} Let $\gamma=\SM abcd\in\mathrm{SL}(2,\mathbb Z)$, $w=(z-z_0)/(z-\overline z_0)$, and $\widetilde w=(z-\gamma z_0)/(z-\gamma\overline z_0)$. It suffices to prove that if $$ y(z)=\frac1{1-w}w^\alpha\sum_{n=0}^\infty c_nw^n $$ is a solution of \eqref{(1.1)} near $z_0$, then $$ \widetilde y(z)=\frac1{1-\widetilde w}\widetilde w^\alpha\sum_{n=0}^\infty c_n(C\widetilde w)^n, \qquad C=\frac{cz_0+d}{c\overline z_0+d}, $$ is a solution of \eqref{(1.1)} near $\gamma z_0$. Since \eqref{(1.1)} is assumed to be Fuchsian, the order of poles of $Q(z)$ at $z_0$ is at most $2$. We have \begin{equation*} \begin{split} Q(z)=(1-w)^4\sum_{n=-2}^\infty a_nw^n \end{split} \end{equation*} for some complex numbers $a_n$. 
Then by Lemma \ref{lemma: df/dw}, $y(z)$ being a solution of \eqref{(1.1)} near $z_0$ means that \begin{equation} \label{equation: change} \begin{split} &\frac1{(2i\,\mathrm{Im\,} z_0)^2}\sum_{n=0}^\infty c_n(\alpha+n)(\alpha+n-1) w^{\alpha+n-2} \\ &\qquad=\left(\sum_{n=-2}^\infty a_nw^n\right) \left(\sum_{n=0}^\infty c_nw^{\alpha+n}\right). \end{split} \end{equation} On the other hand, by Lemmas \ref{lemma: df/dw} and \ref{lemma: other points}, we have \begin{equation*} \begin{split} Q(z)=(cz_0+d)^4(1-\widetilde w)^4\sum_{n=-2}^\infty a_n(C\widetilde w)^n \end{split} \end{equation*} near $\gamma z_0$ and \begin{equation*} \begin{split} \widetilde y''(z)&=\frac{C^2(1-\widetilde w)^3}{(2i\,\mathrm{Im\,}\gamma z_0)^2} \sum_{n=0}^\infty c_n(\alpha+n)(\alpha+n-1)C^n\widetilde w^{\alpha+n-2} \\ &=(cz_0+d)^4\frac{(1-\widetilde w)^3}{(2i\,\mathrm{Im\,} z_0)^2} \sum_{n=0}^\infty c_n(\alpha+n)(\alpha+n-1)C^n\widetilde w^{\alpha+n-2}, \end{split} \end{equation*} where in the last step we have used \eqref{equation: Im} and $C=(cz_0+d)/(c\overline z_0+d)$. From these two expressions and \eqref{equation: change}, we see that if $y(z)$ is a solution of \eqref{(1.1)} near $z_0$, then $\widetilde y(z)$ is a solution of \eqref{(1.1)} near $\gamma z_0$, and the proof is complete. \end{proof} For our purpose, we need the following properties of power series expansions of modular forms on $\mathrm{SL}(2,\mathbb Z)$. These properties are well-known to experts (see \cite{I-O-elliptic}, for example). For the convenience of the reader, we reproduce the proofs here. \begin{Lemma} Let $$ w_i(z)=\frac{z-i}{z+i}. $$ Then $$ w_i(-1/z)=-w_i(z), \qquad 1-w_i(-1/z)=-iz(1-w_i(z)). $$ Also, let $\rho=(1+\sqrt{-3})/2$, $$ w_\rho(z)=\frac{z-\rho}{z-\overline\rho} $$ and $\gamma=\SM0{-1}1{-1}$. Then $$ w_\rho(\gamma z)=e^{2\pi i/3}w_\rho(z), \qquad 1-w_\rho(\gamma z)=e^{4\pi i/3}(z-1)(1-w_\rho(z)). $$ \end{Lemma} \begin{proof} The proof is straightforward.
Here we will only provide details for the case of $w_\rho(z)$. We have $$ w_\rho(z)=\M1{-\rho}1{-\overline\rho}z. $$ Hence, $$ w_\rho(\gamma z)=\M1{-\rho}1{-\overline\rho}\M0{-1}1{-1}z. $$ We then compute that $$ \M1{-\rho}1{-\overline\rho}\M0{-1}1{-1} \M1{-\rho}1{-\overline\rho}^{-1} =\M{(-1-\sqrt{-3})/2}00{(-1+\sqrt{-3})/2}. $$ It follows that $$ w_\rho(\gamma z)=e^{2\pi i/3}w_\rho(z). $$ Then we have $$ 1-w_\rho(\gamma z)=1-\rho^2w_\rho(z) =1-\frac{\rho^2z+1}{z-\overline\rho} =\frac{(1-\rho^2)(z-1)}{z-\overline\rho}, $$ while $$ 1-w_\rho(z)=\frac{\rho-\rho^{-1}}{z-\overline\rho}. $$ Hence, $$ 1-w_\rho(\gamma z)=-\rho(z-1)(1-w_\rho(z)) =e^{4\pi i/3}(z-1)(1-w_\rho(z)). $$ This proves the lemma. \end{proof} From the lemma, we deduce the following properties of expansions of modular forms at $i$ and $\rho$. These properties will be crucial in the proofs of Theorem \ref{theorem: 1.5}(a) and Theorem \ref{theorem: 1.7}(a). \begin{Corollary}\label{corollary: expansion of f at i} Let $f(z)$ be a meromorphic modular form of even weight $k$ on $\mathrm{SL}(2,\mathbb Z)$. Suppose that the power series expansion of $f$ at $i$ is $$ f(z)=(1-w_i(z))^k\sum_{n=n_0}^\infty a_nw_i(z)^n, \qquad w_i(z)=\frac{z-i}{z+i}. $$ Then $a_n=0$ whenever $n+k/2\not\equiv 0\ \mathrm{mod}\ 2$. Also, if the power series expansion of $f$ at $\rho=(1+\sqrt{-3})/2$ is $$ f(z)=(1-w_\rho(z))^k\sum_{n=n_0}^\infty b_nw_\rho(z)^n, \qquad w_\rho(z)=\frac{z-\rho}{z-\overline\rho}, $$ then $b_n=0$ whenever $n+k/2\not\equiv 0\ \mathrm{mod}\ 3$. \end{Corollary} \begin{proof} Here we will only prove the case of $\rho$. Let $\gamma=\SM0{-1}1{-1}$. Since $f(z)$ is a meromorphic modular form of weight $k$ on $\mathrm{SL}(2,\mathbb Z)$, we have $$ f(\gamma z)=(z-1)^kf(z) =(z-1)^k(1-w_\rho(z))^k\sum_{n=n_0}^\infty b_nw_\rho(z)^n $$ On the other hand, by the lemma above, we have $$ f(\gamma z)=e^{4\pi ik/3}(z-1)^k(1-w_\rho(z))^k \sum_{n=n_0}^\infty b_ne^{2\pi in/3}w_\rho(z)^n. 
$$ Comparing the two expressions, we conclude that $b_n=0$ whenever $n+k/2\not\equiv0\ \mathrm{mod}\ 3$. \end{proof} To determine local exponents of modular differential equations at $\rho$ and $i$, we need to know the leading terms of the expansions of $E_6(z)^2/E_4(z)^2$ and $E_4(z)^4/E_6(z)^2$. \begin{Lemma} \label{lemma: leading terms of E4 E6} \begin{enumerate} \item[(a)] Let $$ w_\rho=w_\rho(z)=\frac{z-\rho}{z-\overline\rho}. $$ Then we have $$ \pi^2\frac{E_6(z)^2}{E_4(z)^2} =(1-w_\rho)^4\left(\frac34w_\rho^{-2}+\sum_{n=1}^\infty a_nw_\rho^n\right) $$ for some complex numbers $a_n$ such that $a_n=0$ whenever $n\not\equiv 1\ \mathrm{mod}\ 3$. \item[(b)] Let $$ w_i=w_i(z)=\frac{z-i}{z+i}. $$ Then $$ \pi^2\frac{E_4(z)^4}{E_6(z)^2}=(1-w_i)^4 \left(\frac14w_i^{-2}+\sum_{n=0}^\infty b_nw_i^n\right) $$ for some complex numbers $b_n$ such that $b_n=0$ whenever $n\not\equiv0\ \mathrm{mod}\ 2$. \end{enumerate} \end{Lemma} \begin{proof} It is known that, as an analytic function on $\mathbb H$, $E_4(z)$ has a simple zero at $\rho$. Also, $E_6(\rho)\neq 0$. Thus, by Corollary \ref{corollary: expansion of f at i}, $$ \pi^2\frac{E_6(z)^2}{E_4(z)^2}=(1-w_\rho)^4\left( a_{-2}w_\rho^{-2}+\sum_{n=1}^\infty a_nw_\rho^n\right) $$ for some complex numbers $a_n$ such that $a_n=0$ whenever $n\not\equiv1\ \mathrm{mod}\ 3$. To determine the leading coefficient $a_{-2}$, we use Ramanujan's well-known identity $$ \frac1{2\pi i}E_4'(z)=\frac{E_2(z)E_4(z)-E_6(z)}3, $$ where $E_2(z)$ is the Eisenstein series of weight $2$ on $\mathrm{SL}(2,\mathbb Z)$ (see \cite[Proposition 15]{Zagier123}). Hence, \begin{equation*} \begin{split} \lim_{z\to\rho}w_\rho(z)\frac{E_6(z)}{E_4(z)} &=\frac{E_6(\rho)}{\rho-\overline\rho}\lim_{z\to\rho} \frac{z-\rho}{E_4(z)}=\frac{E_6(\rho)}{\sqrt3i}\frac1{E'_4(\rho)}\\ &=-\frac{E_6(\rho)}{2\pi\sqrt3}\frac3{E_2(\rho)E_4(\rho)-E_6(\rho)} =\frac{\sqrt 3}{2\pi}, \end{split} \end{equation*} which implies that $a_{-2}=3/4$. This proves Part (a).
The proof of Part (b) is similar. We use another identity $$ \frac1{2\pi i}E_6'(z)=\frac{E_2(z)E_6(z)-E_4(z)^2}2 $$ of Ramanujan's to conclude that the leading term of $\pi^2E_4(z)^4/E_6(z)^2$ is $w_i^{-2}/4$. We omit the details. \end{proof} \begin{Corollary} \label{corollary: indicial} The local exponents of the modular differential equation \eqref{equation: DE 3} at $\rho$ and at $i$ are roots of $$ x^2-x+\frac 94s=0 $$ and $$ x^2-x+t=0, $$ respectively. \end{Corollary} \begin{proof} Here we prove only the case of $\rho$; the proof of the case of $i$ is similar. Let $w=w_\rho(z)=(z-\rho)/(z-\overline\rho)$. Assume that $$ y(z)=\frac1{1-w}\sum_{n=0}^\infty a_nw^{\alpha+n}, \quad a_0\neq 0, $$ is a solution of \eqref{equation: DE 3}. By Lemmas \ref{lemma: leading terms of E4 E6} and \ref{lemma: df/dw}, we have $$ y''(z)=-\frac{(1-w)^3}3\left(\alpha(\alpha-1)a_0w^{\alpha-2}+\cdots\right) $$ while \begin{equation*} \begin{split} &\pi^2\left(rE_4(z)+s\frac{E_6(z)^2}{E_4(z)^2}+t\frac{E_4(z)^4}{E_6(z)^2} \right)y(z) \\ &\qquad=(1-w)^3\left(\frac34sa_0w^{\alpha-2}+\cdots\right). \end{split} \end{equation*} Comparing the leading terms, we see that the exponent $\alpha$ satisfies $\alpha^2-\alpha+9s/4=0$. \end{proof} We are now ready to prove Part (a) of Theorem \ref{theorem: 1.5}. \begin{proof}[Proof of Theorem \ref{theorem: 1.5}(a)] By Proposition \ref{proposition: same exponents}, we only need to determine when \eqref{equation: DE in introduction} is apparent at $\rho$. Let $\kappa_\rho\in\frac12\mathbb N$ and set $s=s_{\kappa_\rho}=(1-4\kappa_\rho^2)/9$ so that the local exponents of the modular differential equation \eqref{equation: DE in introduction} with $s=s_{\kappa_\rho}$, i.e., \begin{equation} \label{equation: sk} y''(z)=\pi^2\left(rE_4(z)+s_{\kappa_\rho} \frac{E_6(z)^2}{E_4(z)^2}\right)y(z) \end{equation} at $\rho$ are $1/2\pm\kappa_\rho$, by Corollary \ref{corollary: indicial}. Let $w=w_\rho(z)=(z-\rho)/(z-\overline\rho)$.
According to Corollary \ref{corollary: expansion of f at i} and Lemma \ref{lemma: leading terms of E4 E6}, we have \begin{equation} \label{equation: E4} \pi^2E_4(z)=(1-w)^4\sum_{n=1}^\infty a_nw^n, \end{equation} and \begin{equation} \label{equation: E6/E4} \pi^2\frac{E_6(z)^2}{E_4(z)^2} =(1-w)^4\left(\frac34w^{-2}+ \sum_{n=1}^\infty b_n w^n\right), \end{equation} where $a_n$ and $b_n$ are complex numbers satisfying \begin{equation}\label{equation: conditions for an, bn not zero} a_n=b_n=0\quad \text{if}\quad n\not\equiv 1\ \mathrm{mod}\ 3. \end{equation} We also remark that $a_1\neq 0$ since the zero $\rho$ of $E_4(z)$, as a holomorphic function on $\mathbb H$, is simple. Now the differential equation \eqref{equation: sk} is apparent at $\rho$ if and only if it has a solution of the form \begin{equation*} \label{equation: f in w} y(z)=\frac1{1-w}w^{1/2-\kappa_\rho}\sum_{n=0}^\infty c_n w^n\quad \text{with}\ c_0=1. \end{equation*} Plugging this series into \eqref{equation: sk} and using Lemma \ref{lemma: df/dw}, \eqref{equation: E4}, and \eqref{equation: E6/E4}, we find that the coefficients $c_n$ need to satisfy \begin{equation}\label{equation: recursive 1} n\left(n-2\kappa_\rho\right)c_n= -3\sum_{j=0}^{n-2}c_j(ra_{n-j-2} +s_{\kappa_\rho}b_{n-j-2}). \end{equation} Due to \eqref{equation: conditions for an, bn not zero} and \eqref{equation: recursive 1}, we can inductively prove that \begin{equation}\label{equation: condition for cn} c_n=0\quad\text{if}\ n\not\equiv 0\ \mathrm{mod}\ 3. \end{equation} Since the left-hand side of \eqref{equation: recursive 1} vanishes when $n=2\kappa_\rho$, \eqref{equation: sk} is apparent at $\rho$ if and only if \begin{equation} \label{equation: consistency} \sum_{j=0}^{2\kappa_\rho-2}c_j( ra_{2\kappa_\rho-j-2} + s_{\kappa_\rho}b_{2\kappa_\rho-j-2})=0. \end{equation} Suppose $3\nmid 2\kappa_\rho$. Then, $j\equiv 0\ \mathrm{mod}\ 3$ and $2\kappa_\rho-j-2\equiv 1\ \mathrm{mod}\ 3$ cannot hold simultaneously. 
Hence, by \eqref{equation: conditions for an, bn not zero} and \eqref{equation: condition for cn}, the condition \eqref{equation: consistency} always holds for any $r$, i.e., \eqref{equation: sk} is apparent at $\rho$ for any $r$. This proves (a). For the case $3|2\kappa_\rho$, considering $r$ as an indeterminate and using \eqref{equation: recursive 1} to recursively express $c_n$ as polynomials in $r$, we find that $c_n$ is a polynomial of degree exactly $n/3$ in $r$ when $3|n$ and $n<2\kappa_\rho$. (Note that we use the fact that $a_1\neq 0$ to conclude that the degree is $n/3$.) Thus, the left-hand side of \eqref{equation: consistency} is a polynomial $P(r)$ of degree $2\kappa_\rho/3$ in $r$ and \eqref{equation: sk} is apparent at $\rho$ if and only if $r$ is a root of this polynomial $P(r)$. This proves Part (b) except the identity \eqref{equation: r+s=-(l+1/2)^2}. \end{proof} The proof of Theorem \ref{theorem: 1.7}(a) except \eqref{equation: r+t=-(l+1/3)^2} is very similar to that of Theorem \ref{theorem: 1.5} and will be omitted. \section{Riemann's existence theorem and its application} In this section, we will use Riemann's existence theorem to prove Theorems \ref{theorem: necessary and sufficient conditions for appartness at infinity}, \ref{theorem: 1.5}(b), and \ref{theorem: 1.7}(b). The basic idea is as follows. Let $h(z)$ be a modular function on some finite-index subgroup $\Gamma$ of $\mathrm{SL}(2,\mathbb Z)$. A simple computation shows that both $y_1(z)=1/\sqrt{h'(z)}$ and $y_2(z)=h(z)/\sqrt{h'(z)}$ are solutions of $$ y''(z)=Q(z)y(z), \qquad Q(z)=-\frac12\{h(z),z\}, $$ where $\{h(z),z\}$ is the Schwarz derivative. Using either properties of the Schwarz derivative or a direct computation, we can verify that $\{h(z),z\}$ is a meromorphic modular form of weight $4$ on $\Gamma$. When $h(z)$ has additional symmetry, $\{h(z),z\}$ can be modular on a larger group.
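The simple computation just mentioned is easy to get wrong by a sign. The following sketch (an independent sanity check, not part of the paper, assuming the third-party SymPy library is available) verifies symbolically, for a generic meromorphic $h$, that both $y_1=1/\sqrt{h'}$ and $y_2=h/\sqrt{h'}$ solve $y''=Qy$ with $Q=-\frac12\{h,z\}$:

```python
# Symbolic check that y1 = 1/sqrt(h') and y2 = h/sqrt(h') solve y'' = Q y,
# where Q = -{h, z}/2 and {h, z} is the Schwarz derivative as in the text.
import sympy as sp

z = sp.symbols('z')
h = sp.Function('h')(z)          # a generic (undetermined) function h
hp = sp.diff(h, z)               # h'
hpp = sp.diff(h, z, 2)           # h''

# Schwarz derivative {h, z} = (h''/h')' - (1/2)(h''/h')^2
schwarz = sp.diff(hpp / hp, z) - sp.Rational(1, 2) * (hpp / hp) ** 2
Q = -schwarz / 2

y1 = 1 / sp.sqrt(hp)
y2 = h / sp.sqrt(hp)

# Both residuals y'' - Q y cancel identically
r1 = sp.simplify(sp.diff(y1, z, 2) - Q * y1)
r2 = sp.simplify(sp.diff(y2, z, 2) - Q * y2)
print(r1, r2)  # 0 0
```

The cancellation is exact (no properties of $h$ beyond differentiability are used), which matches the statement that the construction works for an arbitrary modular function $h$.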
Note that, by construction, this differential equation $y''(z)=Q(z)y(z)$ is apparent on $\mathbb H$. Thus, one way to prove the theorems is simply to prove the existence of a modular function $h(z)$ such that $-\{h(z),z\}/2=Q(z)$ for each $Q(z)$ appearing in the theorems. To achieve this, we will use Riemann's existence theorem. Since some of the readers may not be familiar with Riemann's existence theorem, here we give a quick overview of this important result in the theory of Riemann surfaces. The exposition follows \cite[Chapter III]{Miranda}. Let $F:X\to Y$ be a (branched) covering of compact Riemann surfaces of degree $d$. A point $y$ of $Y$ is a \emph{branch point} if the cardinality of $F^{-1}(y)$ is not $d$ and a point $x$ of $X$ is a \emph{ramification point} if $F$ is not locally one-to-one near $x$. (In particular, $F(x)$ is a branch point.) Let $B$ be the (finite) set of branch points on $Y$ under $F$. Pick a point $y_0\in Y-B$ so that $F^{-1}(y_0)$ has $d$ points, say $x_1,\ldots,x_d$. Every loop $\gamma$ in $Y-B$ based at $y_0$ can be lifted to $d$ paths $\widetilde\gamma_1,\ldots,\widetilde\gamma_d$ with $\widetilde\gamma_j(0)=x_j$ and $\widetilde\gamma_j(1)=x_{j'}$ for some $x_{j'}$. The map $j\mapsto j'$ is then a permutation in $S_d$. The permutation depends only on the homotopy class of $\gamma$. In this way, we get a monodromy representation $$ \rho:\pi_1(Y-B,y_0)\to S_d. $$ Note that since $F^{-1}(Y-B)$ is connected, the image of $\rho$ is a transitive subgroup of $S_d$. Also, let $b\in B$ and $a_1,\ldots,a_k$ be the points in $F^{-1}(b)$ with ramification indices $m_1,\ldots,m_k$, respectively. We can show that if $\gamma$ is a small loop in $Y-B$ around $b$ based at $y_0$, then $\rho(\gamma)$ is a product of disjoint cycles of lengths $m_1,\ldots,m_k$. To state the version of Riemann's existence theorem used in the paper, let us consider the case $Y=\mathbb P^1(\mathbb C)$. 
Let $B=\{b_1,\ldots,b_n\}$ be the set of branch points of $F:X\to\mathbb P^1(\mathbb C)$. Let $\gamma_j$, $j=1,\ldots,n$, be loops that circle $b_j$ once but enclose no other branch points. Then $\pi_1(\mathbb P^1(\mathbb C)-B,y_0)$ is generated by the homotopy classes $[\gamma_j]$, subject to a single relation $[\gamma_1]\ldots[\gamma_n]=1$ (with a suitable ordering of the points $b_j$). Thus, the image of $\rho$ is generated by $\sigma_j=\rho(\gamma_j)$ satisfying the relation $\sigma_1\cdots\sigma_n=1$. Riemann's existence theorem can then be stated as follows (see \cite[Corollary 4.10]{Miranda}). \begin{theorem}[Riemann's existence theorem] Let $B=\{b_1,\ldots,b_n\}$ be a finite subset of $\mathbb P^1(\mathbb C)$. Then there exists a one-to-one correspondence between the set of isomorphism classes of coverings $F:X\to\mathbb P^1(\mathbb C)$ of compact Riemann surfaces of degree $d$ whose branch points lie in $B$ and the set of (simultaneous) conjugacy classes of $n$-tuples $(\sigma_1,\ldots,\sigma_n)$ of permutations in $S_d$ such that $\sigma_1\ldots\sigma_n=1$ and the group generated by the $\sigma_j$'s is transitive. Moreover, if the disjoint cycle decomposition of $\sigma_j$ is a product of $k$ cycles of lengths $m_1,\ldots,m_k$, then $F^{-1}(b_j)$ has $k$ points with ramification indices $m_1,\ldots,m_k$, respectively. \end{theorem} We now use this result to prove Theorems \ref{theorem: necessary and sufficient conditions for appartness at infinity}, \ref{theorem: 1.5}(b), and \ref{theorem: 1.7}(b). Since the proofs are similar, we will provide details only for Theorem \ref{theorem: 1.5}(b). \begin{proof}[Proof of Theorem \ref{theorem: 1.5}(b)] Assume that $3|2\kappa_\rho$. Let $\Gamma_2$ be the subgroup of index $2$ of $\mathrm{SL}(2,\mathbb Z)$ generated by $$ \gamma_1=\M1{-1}10, \qquad\gamma_2=\M01{-1}{-1}. $$ Note that $$ \gamma_1\gamma_2=\M1201.
$$ The group $\Gamma_2$ has a cusp $\infty$ and two elliptic points $\rho_1=(1+\sqrt{-3})/2$ and $\rho_2=(-1+\sqrt{-3})/2$ of order $3$, fixed by $\gamma_1$ and $\gamma_2$, respectively. Let $$ j_2(z)=\frac{E_6(z)}{\eta(z)^{12}}, $$ which is a Hauptmodul for $\Gamma_2$, and set $$ J_2(z)=\frac{24}{j_2(z)}. $$ We have $J_2(\infty)=0$, $J_2(\rho_1)=1/\sqrt{-3}$, and $J_2(\rho_2)=-1/\sqrt{-3}$. Set $\ell_0=2\kappa_\rho/3$. We first show that for each $\ell\in\{0,\ldots,\ell_0-1\}$, there exists a modular function $h(z)$ on $\Gamma_2$ such that the covering $h:X(\Gamma_2)\to\mathbb P^1(\mathbb C)$ of compact Riemann surfaces is ramified precisely at $\infty$, $\rho_1$, and $\rho_2$ with ramification indices $2\ell+1$, $\ell_0$, and $\ell_0$, respectively. Note that by the Riemann-Hurwitz formula, such a covering has degree $\ell_0+\ell$, i.e., such a modular function $h(z)$ will be a rational function of degree $\ell_0+\ell$ in $J_2(z)$. Consider the two $\ell_0$-cycles $$ \sigma_1=(1,\ldots,\ell_0), \qquad \sigma_2=(\ell_0+\ell,\ell_0+\ell-1,\ldots,\ell+1) $$ in the symmetric group $S_{\ell_0+\ell}$. Since $\ell<\ell_0$, we have $$ \sigma_2\sigma_1=(1,\ldots,\ell,\ell_0+\ell,\ell_0+\ell-1, \ldots,\ell_0), $$ which is a $(2\ell+1)$-cycle. (Notice that if $\ell\ge\ell_0$, then $\sigma_1$ and $\sigma_2$ are disjoint.) It is clear that when $\ell<\ell_0$, the subgroup generated by $\sigma_1$ and $\sigma_2$ is a transitive subgroup of $S_{\ell_0+\ell}$. Thus, by Riemann's existence theorem, there exists a covering of compact Riemann surfaces $H:X\to\mathbb P^1(\mathbb C)$ of degree $\ell_0+\ell$ ramified at three points $\zeta_1$, $\zeta_2$, and $\zeta_3$ of $\mathbb P^1(\mathbb C)$ with corresponding monodromy $\sigma_1$, $\sigma_2$, and $\sigma_1^{-1}\sigma_2^{-1}$, respectively. By the Riemann-Hurwitz formula, the genus of $X$ is $0$, and $H$ is a rational function from $\mathbb P^1(\mathbb C)$ to $\mathbb P^1(\mathbb C)$.
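The cycle computation above can be checked mechanically. The following sketch (ours, not part of the proof) composes the two $\ell_0$-cycles for a sample value of $\ell_0$, with the convention that $\sigma_2\sigma_1$ applies $\sigma_1$ first, and verifies that the product is a single $(2\ell+1)$-cycle for $1\le\ell\le\ell_0-1$ (for $\ell=0$ one has $\sigma_2=\sigma_1^{-1}$ and the product is the identity).

```python
# Our verification of: sigma1 = (1,...,l0), sigma2 = (l0+l,...,l+1) in
# S_{l0+l} compose to the (2l+1)-cycle (1,...,l,l0+l,...,l0).

def cycle_perm(cycle, n):
    """The permutation of {1,...,n} given by a single cycle, as a dict."""
    p = {i: i for i in range(1, n + 1)}
    for i, a in enumerate(cycle):
        p[a] = cycle[(i + 1) % len(cycle)]
    return p

def compose(p, q):
    """(p*q)(x) = p(q(x)): apply q first, then p."""
    return {x: p[q[x]] for x in q}

def cycle_type(p):
    """Sorted lengths of the nontrivial cycles of p."""
    seen, lengths = set(), []
    for x in p:
        if x in seen or p[x] == x:
            continue
        count, y = 0, x
        while y not in seen:
            seen.add(y)
            y = p[y]
            count += 1
        lengths.append(count)
    return sorted(lengths)

l0 = 4  # stands in for the paper's ell_0 = 2*kappa_rho/3
for l in range(1, l0):
    n = l0 + l
    s1 = cycle_perm(list(range(1, l0 + 1)), n)
    s2 = cycle_perm(list(range(l0 + l, l, -1)), n)
    assert cycle_type(compose(s2, s1)) == [2 * l + 1]
```

The same helpers can be reused to confirm the analogous cycle identities appearing later in the paper.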
Furthermore, by applying a suitable linear fractional transformation on the variable of $H$, we may assume that the ramified points in $H^{-1}(\zeta_1)$, $H^{-1}(\zeta_2)$, and $H^{-1}(\zeta_3)$ are $0=J_2(\infty)$, $1/\sqrt{-3}=J_2(\rho_1)$, and $-1/\sqrt{-3}=J_2(\rho_2)$, respectively. Set $h(z)=H(J_2(z))$. Then $h(z)$ has the required properties that the only points of $X(\Gamma_2)$ ramified under $h:X(\Gamma_2)\to\mathbb P^1(\mathbb C)$ are $\rho_1$, $\rho_2$, and the cusp $\infty$ with ramification indices $\ell_0$, $\ell_0$, and $2\ell+1$, respectively. Now consider the Schwarz derivative $\{h(z),z\}$, which is a meromorphic modular form of weight $4$ on $\Gamma_2$. We claim that it is in fact modular on the larger group $\mathrm{SL}(2,\mathbb Z)$. Indeed, to show $\{h(z),z\}$ is modular on $\mathrm{SL}(2,\mathbb Z)$, it suffices to prove that $\{h(z),z\}\big|T=\{h(z),z\}$, where $T=\SM1101$. Let $\widetilde h(z)=h(z+1)$. Now the automorphism on $X(\Gamma_2)$ induced by $T$ interchanges $\rho_1$ and $\rho_2$. Thus, the ramification data of the covering $\widetilde h:X(\Gamma_2)\to\mathbb P^1(\mathbb C)$ is the same as that of $h$. By Riemann's existence theorem, $h$ and $\widetilde h$ are related by a linear fractional transformation, i.e., $\widetilde h=(ah+b)/(ch+d)$ for some $a,b,c,d\in\mathbb C$ with $ad-bc\neq 0$. It follows that $\{h(z),z\}\big|T=\{h(z),z\}$ by the well-known property $\{(af(z)+b)/(cf(z)+d),z\}=\{f(z),z\}$ of the Schwarz derivative. This proves that $\{h(z),z\}$ is a meromorphic modular form of weight $4$ on the larger group $\mathrm{SL}(2,\mathbb Z)$. Furthermore, since $\rho_1$ is an elliptic point of order $3$, a local parameter for $\rho_1$ as a point on the compact Riemann surface $X(\Gamma_2)$ is $w^3$, where $w=(z-\rho)/(z-\overline\rho)$ with $\rho=\rho_1$. Therefore, we have $$ h(z)=d_0+\sum_{n=3\ell_0}^\infty d_nw^n, $$ for some complex numbers $d_n$ with $d_{3\ell_0}\neq0$ and $d_n=0$ whenever $3\nmid n$.
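Parenthetically, the M\"obius invariance of the Schwarz derivative invoked above can be checked symbolically. The sketch below (ours, using SymPy) verifies $\{(af+b)/(cf+d),z\}=\{f,z\}$ for a sample function $f$ and a sample M\"obius transform.

```python
import sympy as sp

# Our symbolic check of the identity {(a f + b)/(c f + d), z} = {f, z},
# which yields {h(z),z}|T = {h(z),z} in the argument above.
z = sp.symbols('z')

def schwarzian(g):
    """Schwarz derivative {g, z} = g'''/g' - (3/2)(g''/g')^2."""
    gp = sp.diff(g, z)
    return sp.diff(g, z, 3) / gp - sp.Rational(3, 2) * (sp.diff(g, z, 2) / gp) ** 2

f = z ** 2                      # a sample (non-Moebius) function
g = (2 * f + 1) / (f + 1)       # a Moebius transform of f, with ad - bc = 1
assert sp.simplify(schwarzian(f) + sp.Rational(3, 2) / z ** 2) == 0  # {z^2, z} = -3/(2 z^2)
assert sp.simplify(schwarzian(g) - schwarzian(f)) == 0
```

Any other choice of $f$ and of $a,b,c,d$ with $ad-bc\neq0$ gives the same cancellation.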
For convenience, set \begin{equation*} \begin{split} A&=\sum_{n=3\ell_0}^\infty nd_nw^{n-1}, \\ B&=\sum_{n=3\ell_0}^\infty n(n-1)d_nw^{n-2}, \\ C&=\sum_{n=3\ell_0}^\infty n(n-1)(n-2)d_nw^{n-3}. \end{split} \end{equation*} Using \eqref{equation: w'}, we compute that \begin{equation*} \begin{split} h'(z)&=\frac{(1-w)^2}{\rho-\overline\rho}A, \\ h''(z)&=\frac{(1-w)^4}{(\rho-\overline\rho)^2}B -2\frac{(1-w)^3}{(\rho-\overline\rho)^2}A, \\ h'''(z)&=\frac{(1-w)^6}{(\rho-\overline\rho)^3}C -6\frac{(1-w)^5}{(\rho-\overline\rho)^3}B +6\frac{(1-w)^4}{(\rho-\overline\rho)^3}A, \end{split} \end{equation*} and hence $$ \{h(z),z\}=\frac{(1-w)^4}{(\rho-\overline\rho)^2}\left(\frac CA -\frac32\frac{B^2}{A^2}\right) =-\frac{(1-w)^4}3\left(\frac{1-9\ell_0^2}{2w^2}+cw+\cdots\right) $$ for some constant $c$. It follows that, by \eqref{equation: E6/E4}, $$ \{h(z),z\}+2\pi^2s_{\kappa_\rho}\frac{E_6(z)^2}{E_4(z)^2}, \qquad s_{\kappa_\rho}=\frac{1-4\kappa_\rho^2}9=\frac19-\ell_0^2, $$ is a holomorphic modular form of weight $4$ on $\mathrm{SL}(2,\mathbb Z)$. By comparing the leading coefficients of the Fourier expansions at the cusp $\infty$, we conclude that $$ \{h(z),z\}=-2\pi^2\left(rE_4(z)+s_{\kappa_\rho} \frac{E_6(z)^2}{E_4(z)^2}\right), $$ where $r=-(2\ell+1)^2/4-s_{\kappa_\rho}=\ell_0^2-(2\ell+1)^2/4-1/9$. Equivalently, $1/\sqrt{h'(z)}$ and $h(z)/\sqrt{h'(z)}$ are solutions of \eqref{equation: DE in introduction}, which also implies that the singularity of \eqref{equation: DE in introduction} at $\rho$ is apparent. Finally, since we have found $\ell_0$ different values of $r$ such that \eqref{equation: DE in introduction} has an apparent singularity at $\rho$ for the given $s_{\kappa_\rho}$, the theorem follows from Part (a).
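The parameter bookkeeping can be double-checked with exact rational arithmetic. The sketch below (ours, not part of the proof) verifies that $r=\ell_0^2-(2\ell+1)^2/4-1/9$ and $s_{\kappa_\rho}=1/9-\ell_0^2$ combine to $r+s_{\kappa_\rho}=-(\ell+1/2)^2$, and reproduces the $(r,s)$ pairs listed in the example below.

```python
from fractions import Fraction

# Our check of r = l0^2 - (2l+1)^2/4 - 1/9 and s = 1/9 - l0^2, which give
# r + s = -(l + 1/2)^2 for l = 0, ..., l0 - 1.
def r_value(l0, l):
    return Fraction(l0 ** 2) - Fraction((2 * l + 1) ** 2, 4) - Fraction(1, 9)

def s_value(l0):
    return Fraction(1, 9) - l0 ** 2

for l0 in range(1, 8):
    for l in range(l0):
        assert r_value(l0, l) + s_value(l0) == -Fraction(2 * l + 1, 2) ** 2

# kappa_rho = 3 gives l0 = 2 and the pairs (131/36, -35/9) and (59/36, -35/9)
assert (r_value(2, 0), s_value(2)) == (Fraction(131, 36), Fraction(-35, 9))
assert (r_value(2, 1), s_value(2)) == (Fraction(59, 36), Fraction(-35, 9))
```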
\end{proof} \begin{Example} For small $\kappa_\rho$, the modular functions $h(z)$ appearing in the proof are given by $$ \extrarowheight3pt \begin{array}{cccc} \hline\hline \kappa_\rho & \ell & (r,s) & h(z) \\ \hline \displaystyle\frac32 & 0 & \displaystyle \left(\frac{23}{36},-\frac89\right)\phantom{\Bigg|} & J_2 \\ 3 & 0 & \displaystyle \left(\frac{131}{36},-\frac{35}9\right) \phantom{\Bigg|} & \displaystyle\frac{J_2}{1-3J_2^2} \\ 3 & 1 & \displaystyle \left(\frac{59}{36},-\frac{35}9\right) \phantom{\Bigg|} & \displaystyle\frac{J_2^3}{1+9J_2^2} \\ \hline\hline \end{array} $$ \end{Example} \begin{proof}[Proof of Theorem \ref{theorem: 1.7}(b)] Assume that $\kappa_i\in\mathbb N$. Let $\Gamma_3$ be the subgroup of index $3$ of $\mathrm{SL}(2,\mathbb Z)$ generated by $$ \gamma_1=\M1{-2}1{-1}, \qquad \gamma_2=\M1{-1}2{-1}, \qquad \gamma_3=\M0{-1}10. $$ We note that $$ \gamma_1\gamma_2\gamma_3=\M1301. $$ The group $\Gamma_3$ has one cusp and three elliptic points $z_1=1+i$, $z_2=(1+i)/2$, and $z_3=i$ of order $2$, fixed by $\gamma_j$, $j=1,2,3$, respectively. Let $$ j_3(z)=\frac{E_4(z)}{\eta(z)^8} $$ be a Hauptmodul for $\Gamma_3$ and set $$ J_3(z)=12j_3(z)^{-1}. $$ Note that $j_3(z)^3$ is equal to the elliptic $j$-function $j(z)$. Since $j(i)=1728$ and $j(\rho)=0$, we have $\{J_3(z_1),J_3(z_2),J_3(z_3)\}=\{1,e^{2\pi i/3},e^{4\pi i/3}\}$, $J_3(\rho)=\infty$, and $J_3(\infty)=0$. Consider the case $r+t_{\kappa_i}=-(\ell+1/3)^2$ first. Our goal here is to construct a modular function $h(z)$ on $\Gamma_3$, for each $\ell$ in the range, such that the covering $h:X(\Gamma_3)\to\mathbb P^1(\mathbb C)$ has degree $$ d=\frac12(3\kappa_i+3\ell-1) $$ and is ramified at precisely the cusp $\infty$ and the three elliptic points $z_1$, $z_2$, and $z_3$ with ramification indices $3\ell+1$, $\kappa_i$, $\kappa_i$, and $\kappa_i$, respectively. (Notice that $\kappa_i$ and $\ell$ have opposite parities, so $d$ is an integer.) 
Since the covering has four branch points, it is not easy to apply Riemann's existence theorem directly to get $h(z)$. Instead, we shall use the following idea. For convenience, set \begin{equation} \label{equation: m m'} m=\frac12(\kappa_i+\ell-1), \qquad m'=\frac12(\kappa_i-\ell-1). \end{equation} We claim that there exists a rational function $H(x)$ of degree $d$ in $x$ of the form $$ H(x)=\frac{x^{3\ell+1}G(x)^3}{F(x)^3}, \quad \deg F(x)=m, \quad \deg G(x)=m', $$ such that $xF(x)G(x)$ is squarefree and $$ H(x)-1=\frac{(x-1)^{\kappa_i}L(x)}{F(x)^3} $$ for some polynomial $L$ of degree $d-\kappa_i$ with no repeated roots. That is, $H(x)$ is a rational function such that \begin{enumerate} \item[(i)] the covering $H:\mathbb P^1(\mathbb C)\to\mathbb P^1(\mathbb C)$ branches at precisely $\infty$, $0$, and $1$ (note that by the Riemann-Hurwitz formula, $H$ cannot have other branch points), \item[(ii)] the monodromy $\sigma_\infty$ around $\infty$ is a product of $m$ disjoint $3$-cycles, the monodromy $\sigma_0$ around $0$ is a disjoint product of a $(3\ell+1)$-cycle and $m'$ $3$-cycles, and the monodromy $\sigma_1$ around $1$ is a $\kappa_i$-cycle, \item[(iii)] the unique unramified point in $H^{-1}(\infty)$ is $\infty$, the unique point of ramification index $3\ell+1$ in $H^{-1}(0)$ is $0$, and the unique ramified point in $H^{-1}(1)$ is $1$. \end{enumerate} Suppose that such a rational function $H(x)$ exists. We define $h:X(\Gamma_3)\to\mathbb P^1(\mathbb C)$ by $$ h(z)=H(J_3(z)^3)^{1/3}=\frac{J_3(z)^{3\ell+1}G(J_3(z)^3)}{F(J_3(z)^3)}. $$ From the construction, we see that $h$ ramifies only at $z_1=1+i$, $z_2=(1+i)/2$, $z_3=i$, and $\infty$ with ramification indices $\kappa_i$, $\kappa_i$, $\kappa_i$, and $3\ell+1$, respectively. 
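Before constructing $H(x)$, one can check that the stated ramification data is internally consistent. The sketch below (ours, not from the paper) verifies the fibre counts in (i)--(iii) and the Riemann-Hurwitz identity $2d-2=\sum_P(e_P-1)$ for a genus-$0$ to genus-$0$ covering, over a range of small $\kappa_i$.

```python
# Our Riemann-Hurwitz bookkeeping for the case r + t = -(l + 1/3)^2:
#   degree d = (3*kappa_i + 3*l - 1)/2,
#   over infinity: m triple points plus one unramified point,
#   over 0: one (3l+1)-fold point plus m' triple points,
#   over 1: one kappa_i-fold point plus d - kappa_i unramified points.
def rh_check(kappa_i, l):
    m = (kappa_i + l - 1) // 2
    mp = (kappa_i - l - 1) // 2
    d = (3 * kappa_i + 3 * l - 1) // 2
    assert 3 * m + 1 == d                 # fibre over infinity has d points
    assert (3 * l + 1) + 3 * mp == d      # fibre over 0 has d points
    branching = 2 * m + (3 * l + 2 * mp) + (kappa_i - 1)
    assert branching == 2 * d - 2         # Riemann-Hurwitz, genus 0 -> genus 0
    return d

for kappa_i in range(1, 13):
    # l runs over nonnegative integers of parity opposite to kappa_i with m' >= 0
    for l in range((kappa_i + 1) % 2, kappa_i, 2):
        rh_check(kappa_i, l)
```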
Then following the proof of Theorem \ref{theorem: 1.5}(b), we can prove that the Schwarz derivative $\{h(z),z\}$ is a meromorphic modular form on the larger group $\mathrm{SL}(2,\mathbb Z)$ and that $$ \{h(z),z\}=-2\pi^2\left(rE_4(z)+t_{\kappa_i}\frac{E_4(z)^4}{E_6(z)^2} \right), \quad r=-\left(\ell+\frac13\right)^2-t_{\kappa_i}, $$ which is equivalent to the assertion that $1/\sqrt{h'(z)}$ and $h(z)/\sqrt{h'(z)}$ are solutions of \eqref{equation: DE 2 in introduction} with $t=t_{\kappa_i}$ and $r=-(\ell+1/3)^2-t_{\kappa_i}$ and hence implies that \eqref{equation: DE 2 in introduction} is apparent with these parameters. It remains to prove that a rational function $H(x)$ with the properties described above exists. According to Riemann's existence theorem, it suffices to find $\sigma_\infty$ that is a product of $m$ disjoint $3$-cycles and $\sigma_1$ that is a $\kappa_i$-cycle in $S_d$ such that $\sigma_1\sigma_\infty$ is a disjoint product of a cycle of length $3\ell+1$ and $m'$ cycles of length $3$. Indeed, we find that we may choose $$ \sigma_\infty=(2,3,4)(5,6,7)\ldots(3m-1,3m,3m+1) $$ and $$ \sigma_1=(1,2,5,8,\ldots,3m-1,3m'+1,3m'-2,\ldots,7,4), $$ where the descending part $3m'+1,3m'-2,\ldots,7,4$ is understood to be empty when $m'=0$. Then $$ \sigma_1\sigma_\infty=(1,2,3)(4,5,6)\ldots(3m'-2,3m'-1,3m') (3m'+1,3m'+2,\ldots,d). $$ This settles the case $r+t_{\kappa_i}=-(\ell+1/3)^2$. The case $r+t_{\kappa_i}=-(\ell-1/3)^2$ can be dealt with in the same way. The difference is that the rational function $H(x)$ in this case has degree $$ d=\frac32(\kappa_i+\ell-1) $$ and is of the form $$ H(x)=\frac{x^{3\ell-1}G(x)^3}{F(x)^3}, \quad \deg F(x)=m, \quad \deg G(x)=m', $$ where $m$ and $m'$ are the same as those in \eqref{equation: m m'}, such that $xF(x)G(x)$ is squarefree and $$ H(x)-1=\frac{(x-1)^{\kappa_i}L(x)}{F(x)^3} $$ for some polynomial $L(x)$ of degree $d-\kappa_i$ with no repeated roots.
That is, $\sigma_\infty$ in this case is a disjoint product of $m$ $3$-cycles, $\sigma_0$ is a disjoint product of a $(3\ell-1)$-cycle and $m'$ $3$-cycles, and $\sigma_1$ is a $\kappa_i$-cycle. We choose $$ \sigma_\infty=(1,2,3)(4,5,6)\ldots(3m-2,3m-1,3m) $$ and $$ \sigma_1=(1,4,7,\ldots,3m-2,3m,3m-3,\ldots,3\ell) $$ with $$ \sigma_1\sigma_\infty=(1,2,3,4,\ldots,3\ell-1) (3\ell,3\ell+1,3\ell+2)\ldots(3m-3,3m-2,3m-1). $$ The rest of the proof is the same as in the case $r+t_{\kappa_i}=-(\ell+1/3)^2$. This completes the proof that \eqref{equation: r+t=-(l+1/3)^2} is the complete list of parameters $r$ such that \eqref{equation: DE 2 in introduction} with $t=t_{\kappa_i}$ is apparent. \end{proof} \begin{Example} For small $\kappa_i$, the modular functions $h(z)$ in the proof are given by $$ \extrarowheight3pt \begin{array}{cccc} \hline\hline \kappa_i & \ell\pm1/3 & (r,t) & h(z) \\ \hline 1 & \displaystyle\frac13 & \displaystyle \left(\frac{23}{36},-\frac34\right) \phantom{\Bigg|} & J_3 \\ 2 & \displaystyle\frac23 & \displaystyle \left(\frac{119}{36},-\frac{15}4\right)\phantom{\Bigg|} &\displaystyle\frac{J_3^2}{1+2J_3^3} \\ 2 & \displaystyle\frac43 & \displaystyle \left(\frac{71}{36},-\frac{15}4\right)\phantom{\Bigg|} &\displaystyle\frac{J_3^4}{1-4J_3^3} \\ \hline\hline \end{array} $$ \end{Example} \begin{proof}[Proof of Theorem \ref{theorem: necessary and sufficient conditions for appartness at infinity}] Assume that $n_i$, $n_\rho$, and $n_\infty$ are positive integers satisfying the two conditions. We note that the parameters $r$, $s$, and $t$ in \eqref{equation: DE 3} are \begin{equation} \label{equation: apparent r s t} r=-n_\infty^2+n_\rho^2+n_i^2-\frac{13}{36}, \qquad s=\frac19-n_\rho^2, \qquad t=\frac14-n_i^2. \end{equation} Let $$ d=\frac12(n_i+n_\rho+n_\infty-1). $$ By the second condition, we have $$ d-n_i=\frac12(n_\rho+n_\infty-n_i-1)\ge 0 $$ and similarly, $d-n_\rho\ge 0$. Thus, there are cycles of lengths $n_i$ and $n_\rho$ in the symmetric group $S_d$.
Choose $$ \sigma_1=(1,\ldots,n_i), \qquad \sigma_2=(d,d-1,\ldots,d-n_\rho+1) $$ By the second condition again, we have $$ n_i-(d-n_\rho+1)=\frac12(n_i+n_\rho-n_\infty-1)\ge 0. $$ In other words, the two cycles are not disjoint. We then compute that $$ \sigma_2\sigma_1=(1,\ldots,d-n_\rho,d,d-1,\ldots,n_i). $$ This is a cycle of length $$ d-n_\rho+(d-n_i+1)=2d-n_\rho-n_i+1=n_\infty. $$ It is clear that the subgroup of $S_d$ generated by $\sigma_1$ and $\sigma_2$ is transitive. Thus, by Riemann's existence theorem, given three distinct points $\zeta_1$, $\zeta_2$, and $\zeta_3$ on $\mathbb P^1(\mathbb C)$, there is a covering $H:X\to\mathbb P^1(\mathbb C)$ of compact Riemann surfaces of degree $d$ branched at $\zeta_1$, $\zeta_2$, and $\zeta_3$ with monodromy $\sigma_1$, $\sigma_2$, and $\sigma_3=\sigma_1^{-1}\sigma_2^{-1}$, respectively. By the Riemann-Hurwitz formula, the genus of $X$ is $0$ and we may assume that $X=\mathbb P^1(\mathbb C)$. Applying a suitable linear fractional transformation (i.e., an automorphism of $X$) if necessary, we may assume that the ramification points on $X$ are $1728=j(i)$, $0=j(\rho)$, and $\infty=j(\infty)$ with ramification indices $n_i$, $n_\rho$, and $n_\infty$, respectively. Let $h:X_0(1)\to\mathbb P^1(\mathbb C)$ be defined by $h(z)=H(j(z))$. Following the same computation as in the proof of Theorem \ref{theorem: 1.5}(b), we can show that $$ \{h(z),z\}=-2\pi^2\left(rE_4(z)+s\frac{E_6(z)^2}{E_4(z)^2} +t\frac{E_4(z)^4}{E_6(z)^2}\right) $$ with $r$, $s$, and $t$ given as \eqref{equation: apparent r s t} (details omitted). This implies that the singularities of \eqref{equation: DE 3} are all apparent. Conversely, assume that the differential equation \eqref{equation: DE 3} is apparent throughout $\mathbb H\cup\{\text{cusps}\}$. Let $\pm n_\infty/2$ be the local exponents at $\infty$. Then a fundamental pair of solutions near $\infty$ is $$ y_\pm(z)=q^{\pm n_\infty/2}\left(1+\sum_{n=1}^\infty c_n^\pm q^n\right). 
$$ Let $h(z)=y_+(z)/y_-(z)$. Since \eqref{equation: DE 3} is apparent throughout $\mathbb H$, $h(z)$ is a single-valued function on $\mathbb H$. Arguing as in the second proof of Theorem \ref{theorem: necessary and sufficient conditions for appartness at infinity}, we see that $h(z)$ is a modular function on $\mathrm{SL}(2,\mathbb Z)$. Now since $$ \{h(z),z\}=-2\pi^2\left(rE_4(z)+s\frac{E_6(z)^2}{E_4(z)^2} +t\frac{E_4(z)^4}{E_6(z)^2}\right) $$ has poles only at points equivalent to $\rho$ or $i$ under $\mathrm{SL}(2,\mathbb Z)$, the covering $X_0(1)\to\mathbb P^1(\mathbb C)$ defined by $z\mapsto h(z)$ can only ramify at $\rho$, $i$, or $\infty$. From the computation above, we see that their ramification indices must be $n_\rho$, $n_i$, and $n_\infty$, respectively. Then by the Riemann-Hurwitz formula, $n_\rho+n_i+n_\infty$ must be odd and the degree of the covering is $(n_\rho+n_i+n_\infty-1)/2$. Since the ramification indices $n_\rho$, $n_i$, and $n_\infty$ cannot exceed the degree of the covering, we conclude that the sum of any two of $n_\rho$, $n_i$, and $n_\infty$ must be greater than the remaining one. This completes the proof of the theorem. \end{proof} \section{Eremenko's Theorem and its applications} \begin{proof}[Second proof of \eqref{equation: r+s=-(l+1/2)^2}] In Section 2.3, Example 2 shows that the angles of $Q_1$ at $i$, $\rho$, and $\infty$ are \begin{equation} \theta_1=\frac{1}{2},\quad \theta_2=\frac{2\kappa_\rho}{3},\quad \text{and}\quad\theta_\infty=\sqrt{-(r+s_{\kappa_\rho})}. \end{equation} First, we consider the case where $\theta_2$ is even, say $\theta_2=2\ell_0$. By Eremenko's theorem in Section 2, the curvature equation \eqref{2.4} has a solution if and only if either $\abs{\theta_\infty-\theta_1}=2\ell+1$ or $\theta_\infty+\theta_1=2\ell+1$ for some $\ell\in\mathbb Z_{\geq 0}$ and $\ell\leq \ell_0-1$.
Since $\theta_\infty>0$, the condition $\abs{\theta_\infty-\theta_1}=2\ell+1\geq 1$ implies $\theta_\infty-\theta_1>0$ and then $\theta_\infty-\theta_1=2\ell+1$. This is equivalent to $-(r+s_{\kappa_\rho})=\theta^2_\infty=(2\ell+1+1/2)^2$, $\ell=0,\ldots,\ell_0-1$. The second condition $\theta_\infty+\theta_1=2\ell+1$ is equivalent to $-(r+s_{\kappa_\rho})=\theta^2_\infty=(2\ell+1/2)^2$, $\ell=0,\ldots,\ell_0-1$. Therefore, there are exactly $2\ell_0$ different $\theta_\infty$ such that the curvature equation \eqref{2.4} has a solution, and each such curvature equation is associated with the modular form $Q_1(z;r,s_{\kappa_\rho})$ with $(r,s_{\kappa_\rho})$ satisfying $r+s_{\kappa_\rho}=-(\ell+1/2)^2$ for some $\ell\in\braces{0,\ldots,2\ell_0-1}$. By Theorem \ref{theorem: 2.1}, for each $(r,s_{\kappa_\rho})$, the ODE \eqref{equation: DE in introduction} is apparent. However, the first part of Theorem \ref{theorem: 1.5}(b) says that there exists a polynomial $P(x)$ of degree $2\kappa_\rho/3$ such that \eqref{2.4} with $(r,s_{\kappa_\rho})$ is apparent if and only if $P(r)=0$. Therefore, $P(r)$ has distinct roots and each root satisfies $r+s_{\kappa_\rho}=-(\ell+1/2)^2$ for some integer $\ell$, $0\leq\ell\leq 2\ell_0-1=\theta_2-1$. This proves \eqref{equation: r+s=-(l+1/2)^2} when $\theta_2$ is even. For the case where $\theta_2$ is odd, the idea of the proof is basically the same. By noting $\theta_1=1/2$, the Eremenko theorem in Section 2 implies either $\abs{\theta_\infty-1/2}=\ell$ or $\theta_\infty+1/2=\ell$, where $\ell$ is even because $\theta_2$ is odd. The first condition can be replaced by $\theta_\infty-1/2=\ell$. Thus we have $\theta_\infty=\ell+1/2$ or $\theta_\infty=\ell-1/2=(\ell-1)+1/2$, that is, $r+s=-(\ell+1/2)^2$, $\ell=0,1,2,\ldots,\theta_2-1$. The proof of \eqref{equation: r+s=-(l+1/2)^2} is complete.
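The enumeration in the even case can be brute-forced. The following sketch (ours, not part of the proof) confirms that the two Eremenko conditions with $\theta_1=1/2$ produce exactly the $2\ell_0$ values $\theta_\infty=\ell'+1/2$, $\ell'=0,\ldots,2\ell_0-1$.

```python
from fractions import Fraction

# Our brute-force restatement of the even case: theta_1 = 1/2 and either
# theta_inf - 1/2 = 2l+1 or theta_inf + 1/2 = 2l+1 with 0 <= l <= l0 - 1.
l0 = 5
half = Fraction(1, 2)
values = set()
for l in range(l0):
    values.add((2 * l + 1) + half)   # from theta_inf - 1/2 = 2l + 1
    values.add((2 * l + 1) - half)   # from theta_inf + 1/2 = 2l + 1
assert values == {lp + half for lp in range(2 * l0)}
```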
\end{proof} \begin{proof}[Second proof of \eqref{equation: r+t=-(l+1/3)^2}] The angles for $Q_2(z)$ are $\theta_1=\kappa_i$, $\theta_2=1/3$, and $\theta_\infty=\sqrt{-(r+t_{\kappa_i})}$, where $\frac{1}{2}\pm \kappa_i$ are the local exponents of \eqref{equation: DE 2 in introduction}. Hence $$ \kappa_i-\frac{1}{2} +1=m+\frac{1}{2} $$ for some $m\in\mathbb N$, i.e., $\theta_1=\kappa_i$ is an integer. Hence, there is a solution $u$ of \eqref{2.4}-\eqref{2.6} with the right-hand side equal to $4\pi n\sum\delta_p$, where the summation runs over $\gamma\cdot i$, $\gamma\in\mathrm{SL}(2,\mathbb Z)$, if and only if either $\theta_\infty-\theta_2=\abs{\theta_\infty-\theta_2}=\ell$ or $\theta_\infty+\theta_2=\ell$, where $\ell\leq \kappa_i-1$ and $\ell$ has parity opposite to that of $\kappa_i$. Hence, $\theta_\infty=\ell\pm1/3$ and $r+t_{\kappa_i}=-(\ell\pm1/3)^2$. This proves \eqref{equation: r+t=-(l+1/3)^2}. \end{proof} \begin{proof}[Second proof of Theorem \ref{theorem: necessary and sufficient conditions for appartness at infinity}] Suppose that the ODE \eqref{equation: DE 3} has local exponents $\pm n_\infty/2$ at $\infty$, $n_\infty\in\mathbb N$. We claim that \emph{\eqref{equation: DE 3} is apparent throughout $\mathbb H^\ast$ if and only if $Q_3(z)=Q_3(z;r,s,t)$ is realized by a metric with curvature $1/2$}. It is clear that the second statement implies the first statement. So it suffices to prove the other direction. Suppose that \eqref{equation: DE 3} is apparent throughout $\mathbb H^\ast$. Let $y_{\pm}(z)=q^{\pm n_\infty/2}\left(1+O(q)\right)$ be two solutions of \eqref{equation: DE 3} and set $h(z)=y_+(z)/y_-(z)$. Since \eqref{equation: DE 3} is apparent on $\mathbb H$, $h(z)$ is a meromorphic single-valued function on $\mathbb H$ and its Schwarz derivative is $-2Q_3(z)$.
Recall Bol's theorem that there is a homomorphism $\rho:\mathrm{SL}(2,\mathbb Z)\rightarrow\mathrm{PSL}(2,\mathbb C)$ such that $$\begin{pmatrix} \left(y_1\big|_{-1}\gamma\right)(z)\\ \left(y_2\big|_{-1}\gamma\right)(z) \end{pmatrix}=\pm\rho(\gamma)\begin{pmatrix} y_1(z)\\ y_2(z) \end{pmatrix},\quad \gamma\in\mathrm{SL}(2,\mathbb Z). $$ Clearly, $\rho(T)=\pm I$ because $\infty$ is apparent. Note that $\ker\rho$ is a normal subgroup of $\mathrm{SL}(2,\mathbb Z)$ and contains $\gamma T\gamma^{-1}$ for any $\gamma\in\mathrm{SL}(2,\mathbb Z)$. In particular, $\ker\rho$ contains both $T=\SM1101$ and $STS^{-1}=\SM10{-1}1$, where $S=\SM0{-1}10$. Since $\SM1101$ and $\SM10{-1}1$ generate $\mathrm{SL}(2,\mathbb Z)$, we conclude that $\ker\rho=\mathrm{SL}(2,\mathbb Z)$. In other words, $\rho(\gamma)=\pm I$ and $h(z)$ is a modular function on $\mathrm{SL}(2,\mathbb Z)$. Thus we have a solution $u:=\log\frac{8\abs{h'(z)}^2}{\left(1+\abs{h(z)}^2\right)^2}$ which realizes $Q_3$. This proves the claim. Now, we apply the Eremenko theorem with the angles given by $\theta_1=\kappa_i$, $\theta_2=2\kappa_\rho/3$ and $\theta_3=n_\infty$. Our necessary and sufficient condition in Theorem \ref{theorem: necessary and sufficient conditions for appartness at infinity} is identical to the condition in Eremenko's theorem for the existence of $u$ with three integral angles. This proves Theorem \ref{theorem: necessary and sufficient conditions for appartness at infinity}. \end{proof} \begin{Theorem}\label{theorem: condition for Q be realized} Suppose $\kappa_i\in\mathbb N$ and $\kappa_\rho,\kappa_\infty\in\frac{1}{2}\mathbb N$ such that $2\kappa_\rho/3\in\mathbb N$. If $Q_3(z;r,s,t)$ is apparent at $\rho$ and $i$, then $Q_3$ can be realized. \end{Theorem} \begin{proof} By the assumption, we have that $\theta_i$, $1\leq i\leq 3$, are all integers. Now, given $\kappa_i$ and $\kappa_\rho$, $s$ and $t$ are determined by the formulas given earlier.
Further, there are polynomials $P_1$ and $P_2$: \begin{enumerate} \item[$\bullet$] $Q_3(z;r,s,t)$ is apparent at $i$ if and only if $P_1(r)=0$, and $\deg P_1(r)=\kappa_i$. \item[$\bullet$] $Q_3(z;r,s,t)$ is apparent at $\rho$ if and only if $P_2(r)=0$, and $\deg P_2=2\kappa_\rho/3$. \end{enumerate} Therefore, $Q_3(z;r,s,t)$ is apparent at $i$ and $\rho$ if and only if $$ r\in\braces{r:P_1(r)=P_2(r)=0}. $$ Now, we claim that under the assumption $\theta_1\in\mathbb N$, $Q_3(z;r,s,t)$ is apparent if and only if the local exponents at $\infty$ are $\pm \kappa_\infty/2$, $\kappa_\infty\in\mathbb N$, and the curvature equation has a solution. By Eremenko's theorem (Section 2.4; recall $\theta_1=\kappa_i$, $\theta_2=2\kappa_\rho/3$, $\theta_3=2\kappa_\infty$), the curvature equation has a solution if and only if $\theta_1+\theta_2+\theta_3$ is odd and $\theta_i<\theta_j+\theta_k$, $i\neq j\neq k$. This condition is equivalent to \begin{enumerate} \item[(a)] $$\theta_2-\theta_1<\theta_3<\theta_2+\theta_1,\quad \text{and} $$ \item[(b)] $$\theta_1-\theta_2<\theta_3<\theta_1+\theta_2.$$ \end{enumerate} Since $\theta_1+\theta_2+\theta_3$ is odd, we have $\theta_2$ solutions of the curvature equation if $\theta_1>\theta_2$, and $\theta_1$ solutions if $\theta_2>\theta_1$. Now, $\deg P_1=\kappa_i=\theta_1$ and $\deg P_2=2\kappa_\rho/3=\theta_2$. Then \begin{align*} \min\braces{\theta_1,\theta_2}&\geq\abs{\braces{r:P_1(r)=P_2(r)=0}}\\ &\geq\#\ \text{of\ curvature\ equations} \geq\min\braces{\theta_1,\theta_2}. \end{align*} Thus $$ \abs{\braces{r:P_1(r)=P_2(r)=0}}=\#\ \text{of\ curvature\ equations}. $$ This proves the theorem. \end{proof} \begin{Remark} In fact, the proof shows that if $\deg P_i\leq\deg P_j$, then $P_i$ is a factor of $P_j$.
\end{Remark} \section{Proof of Theorem \ref{theorem: 1.1} and Theorem \ref{theorem: 1.2}} \label{section: proof of theorem 1.1} \begin{proof}[Proof of Theorem \ref{theorem: 1.1}] Let $\rho$ be the Bol representation associated to \eqref{(1.1)}, and set $T=\SM 1101$, $S=\SM 0{-1}10$, and $R=TS=\SM 1{-1}10$. They satisfy \begin{equation}\label{equation: S^2 and R^3} S^2=-I,\quad \text{and}\quad R^3=-I. \end{equation} Assume that ($\mb H_1$) and ($\mb H_2$) hold. It follows from either \cite[Theorem 2.5]{Eremenko-Tarasov}, quoted as Theorem \ref{theorem: ET} in the appendix, or Theorem \ref{thm1} (with $\theta_1=1/2$, $\theta_2=1/3$, and $\theta_3=2r_\infty$ or $\theta_3=1-2r_\infty$, depending on whether $2r_\infty\le1/2$ or $2r_\infty>1/2$) in the appendix that if $1/12<r_\infty<5/12$, then an invariant metric realizing $Q(z)$ exists, and if $0<r_\infty<1/12$ or $5/12<r_\infty\le1/2$, then there does not exist an invariant metric realizing $Q(z)$. So here we are concerned with the case $r_\infty=1/12$ or $r_\infty=5/12$. Assume that $r_\infty=1/12$. Then there exists a basis $\{y_1(z),y_2(z)\}$ for the solution space of \eqref{(1.1)} such that \begin{equation} \label{equation: rho(T) 2} \rho(T)=\pm\M{\epsilon}00{\overline\epsilon}, \qquad \epsilon=e^{2\pi i/12}. \end{equation} Since $S^2=-I$, we have $\rho(S)^2=\pm I$. The matrix $\rho(S)$ cannot be equal to $\pm I$, since the relation $R=TS$ would imply that the eigenvalues of $\rho(R)$ are $\pm e^{2\pi i/12}$ or $\pm e^{-2\pi i/12}$, which is absurd. It follows that $\operatorname{tr}\rho(S)=0$ and we have \begin{equation} \label{equation: rho(S) 2} \rho(S)=\pm\M abc{-a}, \qquad \rho(R)=\pm\rho(T)\rho(S) =\pm\M{\epsilon a}{\epsilon b}{\bar{\epsilon}c}{-a\bar{\epsilon}} \end{equation} for some $a,b,c\in\mathbb C$. Since $\rho(R)^3=\pm I$, $\det\rho(R)=1$, and $\rho(R)\neq\pm I$ for a similar reason as above, the characteristic polynomial of $\rho(R)$ has to be $x^2-x+1$ or $x^2+x+1$.
In particular, we have $\operatorname{tr}\rho(R)=\pm 1$, i.e., $a(\epsilon-\overline\epsilon)=\pm 1$, and hence $a=\pm i$; since $\det\rho(S)=1$, this forces $bc=0$. Under the assumption that there is an invariant metric realizing $Q(z)$, the matrices $\rho(S)$, $\rho(T)$, and $\rho(R)$ must be unitary, after a simultaneous conjugation. (See the discussion in Section 2.2.) If one of $b$ and $c$ is not $0$, this cannot happen. Therefore, we have $b=c=0$. This implies that the function $y_1(z)^2$, which is meromorphic throughout $\mathbb H$ since the local exponents at every singularity are in $\frac12\mathbb Z$, satisfies $$ y_1(Tz)^2=e^{2\pi i/6}y_1(z)^2, \qquad y_1(Sz)^2=-z^{-2}y_1(z)^2. $$ It follows that $y_1(z)^2$ is a meromorphic modular form of weight $-2$ with character $\chi$ on $\mathrm{SL}(2,\mathbb Z)$. Likewise, we can show that $y_2(z)^2$ is a meromorphic modular form of weight $-2$ with character $\overline\chi$. This proves that if there is an invariant metric realizing $Q(z)$, then there are solutions $y_1(z)$ and $y_2(z)$ with the stated properties. The proof of the case $r_\infty=5/12$ is similar and is omitted. The proof of the converse statement is easy. If there exist solutions $y_1(z)$ and $y_2(z)$ of \eqref{(1.1)} such that $y_1(z)^2$ and $y_2(z)^2$ are meromorphic modular forms of weight $-2$ with character $\chi$ and $\overline\chi$, respectively, on $\mathrm{SL}(2,\mathbb Z)$, then $y_1(Tz)^2=e^{2\pi i/6}y_1(z)^2$ and $y_2(Tz)^2=e^{-2\pi i/6}y_2(z)^2$, which implies that $y_1(z)^2$ and $y_2(z)^2$ are of the form $y_1(z)^2=q^{1/6}\sum_{j\ge n_0}c_jq^j$ and $y_2(z)^2=q^{-1/6}\sum_{j\ge n_0}d_jq^j$. It follows that $r_\infty=1/12$ or $r_\infty=5/12$. It is clear that with respect to the basis $\{y_1(z),y_2(z)\}$, the Bol representation is given by $$ \rho(T)=\pm\M{e^{2\pi i/12}}00{e^{-2\pi i/12}}, \qquad \rho(S)=\pm\M{\pm i}00{-i}, $$ and hence is unitary. It follows that there is an invariant metric of curvature $1/2$ realizing $Q(z)$. This proves the theorem.
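The matrices just found can be sanity-checked numerically. In the sketch below (ours, not part of the proof) we take $a=i$ and $b=c=0$, so that $\rho(T)=\operatorname{diag}(\epsilon,\overline\epsilon)$ with $\epsilon=e^{2\pi i/12}$ and $\rho(S)=\operatorname{diag}(i,-i)$, and verify the trace identity, $\rho(S)^2=-I$, and that $\rho(R)=\rho(T)\rho(S)$ cubes to $\pm I$ (here it is $+I$).

```python
import cmath

# Our numerical check: rho(T) = diag(eps, conj(eps)), rho(S) = diag(i, -i).
eps = cmath.exp(2j * cmath.pi / 12)
T = [[eps, 0], [0, eps.conjugate()]]
S = [[1j, 0], [0, -1j]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

R = matmul(T, S)
# trace rho(R) = a*(eps - conj(eps)) = i * i = -1, one of the values +-1
assert abs((R[0][0] + R[1][1]) + 1) < 1e-12
# rho(S)^2 = -I
S2 = matmul(S, S)
assert all(abs(S2[i][j] + (i == j)) < 1e-12 for i in range(2) for j in range(2))
# rho(R)^3 = I (so +-I, matching R^3 = -I in PSL(2, C))
R3 = matmul(matmul(R, R), R)
assert all(abs(R3[i][j] - (i == j)) < 1e-12 for i in range(2) for j in range(2))
```

Both matrices are diagonal with entries of modulus $1$, so they are unitary as asserted in the proof.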
\end{proof} We now give two examples with $r_\infty=1/12$, one of which can be realized by some invariant metric of curvature $1/2$, while the other cannot. Note that Theorem 1 of \cite{Eremenko} implies that when \eqref{(1.1)} has no singularities outside the $\mathrm{SL}(2,\mathbb Z)$-orbits of $i$ and $\rho$, $1/12<r_\infty<5/12$ is the necessary and sufficient condition for the existence of an invariant metric of curvature $1/2$ realizing $Q$. The examples we provide below show that when \eqref{(1.1)} has $\mathrm{SL}(2,\mathbb Z)$-inequivalent singularities other than $i$ and $\rho$, this condition is no longer necessary. \begin{Example} Let $\eta(z)=q^{1/24}\prod_{n=1}^\infty(1-q^n)=\Delta(z)^{1/24}$, \begin{equation} \label{equation: xy} x(z)=\frac{E_4(z)}{\eta(z)^8}=q^{-1/3}+\cdots, \qquad y(z)=\frac{E_6(z)}{\eta(z)^{12}}=q^{-1/2}+\cdots, \end{equation} and $h(z)=x(z)/y(z)=q^{1/6}+\cdots$. They are modular functions on the unique normal subgroup $\Gamma$ of $\mathrm{SL}(2,\mathbb Z)$ of index $6$ such that $\mathrm{SL}(2,\mathbb Z)/\Gamma$ is cyclic. (Another way to describe $\Gamma$ is that $\Gamma=\ker\chi$, where $\chi$ is the character of $\mathrm{SL}(2,\mathbb Z)$ such that $\chi(S)=-1$ and $\chi(R)=e^{2\pi i/3}$.) Using Ramanujan's identities \begin{equation*} \begin{split} D_qE_2(z)&=\frac{E_2(z)^2-E_4(z)}{12}, \\ D_qE_4(z)&=\frac{E_2(z)E_4(z)-E_6(z)}3, \\ D_qE_6(z)&=\frac{E_2(z)E_6(z)-E_4(z)^2}2, \end{split} \end{equation*} where $D_q=qd/dq$ (see \cite[Proposition 15]{Zagier123}) and the relation $\Delta(z)=(E_4(z)^3-E_6(z)^2)/1728$, we can compute that $$ \{h(z),z\}=(2\pi i)^2Q_0(z) $$ where $$ Q_0(z)=E_4(z)\left( -\frac1{72}-\frac{9(E_4(z)^3-E_6(z)^2)^2}{(3E_4(z)^3-2E_6(z)^2)^2} +\frac52\frac{E_4(z)^3-E_6(z)^2}{3E_4(z)^3-2E_6(z)^2}\right).
$$ Thus, $$ y_+(z)=\frac{h(z)}{\sqrt{D_qh(z)}}=q^{1/12}+\cdots, \quad y_-(z)=\frac1{\sqrt{D_qh(z)}}=q^{-1/12}+\cdots $$ are solutions of the differential equation $y''(z)=Q(z)y(z)$, where $Q(z)=-(2\pi i)^2Q_0(z)/2$. The meromorphic modular form $Q(z)$ has only one $\mathrm{SL}(2,\mathbb Z)$-inequivalent singularity at the point $z_1$ such that $3E_4(z_1)^3-2E_6(z_1)^2=0$ and is holomorphic at the elliptic points $i$ and $\rho$. In the notation of Theorem \ref{theorem: 1.1}, we have $r_\infty=1/12$. This provides an example of an invariant metric of curvature $1/2$ realizing a meromorphic modular form of weight $4$ at the threshold value $r_\infty=1/12$. Note that with respect to the basis $\{y_+,y_-\}$, the Bol representation is given by $$ \rho(T)=\pm\M{e^{2\pi i/12}}00{e^{-2\pi i/12}}, \qquad \rho(S)=\pm\M i00{-i}, $$ both of which are unitary. (The information about $\rho(S)$ follows from the transformation formula $\eta(-1/z)=\sqrt{z/i}\eta(z)$ and the fact that $D_qh(z)=C\eta(z)^4(3E_4(z)^3-2E_6(z)^2)/E_6(z)^2$ for some constant $C$.) \end{Example} \begin{Example} Let $x(z)$ and $y(z)$ be defined by \eqref{equation: xy}, and $\Gamma$ be the unique normal subgroup of $\mathrm{SL}(2,\mathbb Z)$ of index $6$ such that $\mathrm{SL}(2,\mathbb Z)/\Gamma$ is cyclic. The modular curve $X(\Gamma):=\Gamma\backslash\mathbb H^\ast$ has one cusp of width $6$, no elliptic points, and is of genus $1$. Since the modular functions $x(z)$ and $y(z)$ on $\Gamma$ have only a pole of order $2$ and $3$, respectively, at the cusp $\infty$ and are holomorphic elsewhere, they generate the function field of $X(\Gamma)$. Then from the relation $E_4(z)^3-E_6(z)^2=1728\eta(z)^{24}$, we see that $x(z)$ and $y(z)$ satisfy $$ y^2=x^3-1728, $$ which we may take as the defining equation for $X(\Gamma)$. Let $f(z)$ be a meromorphic modular form of weight $2$ on $\Gamma$ such that all residues on $\mathbb H$ are $0$.
Equivalently, let $\omega=f(z)\,dz$ be a meromorphic differential $1$-form of the second kind on $X(\Gamma)$. Consider $$ y_1(z)=\frac1{\sqrt{f(z)}}\int_{z_0}^zf(u)\,du, \qquad y_2(z)=\frac1{\sqrt{f(z)}}, $$ where $z_0$ is a fixed point in $\mathbb C$ that is not a pole of $f(z)$. Under the assumption that all residues of $f(z)$ are $0$, the integral in the definition of $y_1(z)$ does not depend on the choice of path of integration from $z_0$ to $z$. A straightforward computation shows that the Wronskian of $y_1$ and $y_2$ is a constant and hence $y_1(z)$ and $y_2(z)$ are solutions of the differential equation $y''(z)=Q(z)y(z)$, where $$ Q(z)=\frac{3f'(z)^2-2f(z)f''(z)}{4f(z)^2} $$ can be shown to be a meromorphic modular form of weight $4$ on $\Gamma$. (The numerator of $Q(z)$ is a constant multiple of the Rankin-Cohen bracket $[f,f]_2$ and hence a meromorphic modular form of weight $8$. See \cite{Rankin-Cohen}.) By construction, this differential equation is apparent throughout $\mathbb H$. Furthermore, if $f(z)$ is chosen so that $f(\gamma z)=\chi(\gamma)(cz+d)^2f(z)$ holds for all $\gamma=\SM abcd\in\mathrm{SL}(2,\mathbb Z)$ for some character $\chi$ of $\mathrm{SL}(2,\mathbb Z)$ with $\Gamma\subset\ker\chi$, then $Q(z)$ is modular on $\mathrm{SL}(2,\mathbb Z)$. We now utilize this construction of modular differential equations to find $Q(z)$ that cannot be realized, i.e., such that the monodromy group is not unitary. We let $\omega_1=dx/y$ and $\omega_2=d(x/y^3)$. Note that $\omega_1$ is a holomorphic $1$-form on the curve $y^2=x^3-1728$, while $\omega_2$ is an exact $1$-form and hence a meromorphic $1$-form of the second kind. Using Ramanujan's identities, we check that $\omega_1=f_1(z)\,dz$ and $\omega_2=f_2(z)\,dz$ with $$ f_1(z)=-\frac{2\pi i}3\eta(z)^4, \quad f_2(z)=2\pi i\frac{\eta(z)^4}{E_6(z)^4}\left(\frac76E_4(z)^3\Delta(z) +576\Delta(z)^2\right).
$$ Now we choose, say, $$ \omega=-\frac{3}{2\pi i}\left(\omega_1+\omega_2\right) $$ and let $f(z)=q^{1/6}+\cdots$ be the meromorphic modular form of weight $2$ such that $\omega=f(z)\,dz$. Let $y''(z)=Q(z)y(z)$ be the differential equation obtained from $f(z)$ using the construction described above. Note that $f(z+1)=e^{2\pi i/6}f(z)$ and, using $\eta(-1/z)=\sqrt{z/i}\eta(z)$, we have $f(-1/z)=-z^2f(z)$. Thus, $f(\gamma z)=\chi(\gamma)(cz+d)^2f(z)$ for all $\gamma=\SM abcd\in\mathrm{SL}(2,\mathbb Z)$, where $\chi$ is the character of $\mathrm{SL}(2,\mathbb Z)$ such that $\chi(T)=e^{2\pi i/6}$ and $\chi(S)=-1$. According to the discussion above, the function $Q(z)$ is a meromorphic modular form of weight $4$ with trivial character on $\mathrm{SL}(2,\mathbb Z)$. Note that $f(z)$ has zeros at points where $6E_6(z)^4-7E_4(z)^3\Delta(z)-3456\Delta(z)^2=0$. Now let us compute its Bol representation. We choose $z_0=i\infty$ and find that $$ y_2(z)=q^{-1/12}\left(1+\sum_{j=1}^\infty c_jq^j\right), \qquad y_1(z)=q^{1/12}\sum_{j=0}^\infty d_jq^j $$ for some $c_j$ and $d_j$ with $d_0\neq 0$. Therefore, the local exponents at $\infty$ are $\pm1/12$ and $$ \rho(T)=\pm\M{e^{2\pi i/12}}00{e^{-2\pi i/12}}. $$ Also, since $f(-1/z)=-z^2f(z)$, we have \begin{equation*} \begin{split} \int_{i\infty}^{-1/z}f(u)\,du &=\int_0^zf(-1/u)\,\frac{du}{u^2} =-\int_0^zf(u)\,du \\ &=-\int_0^{i\infty}f(u)\,du-\int_{i\infty}^zf(u)\,du. \end{split} \end{equation*} Thus, $$ \rho(S)=\pm\M iC0{-i}, \qquad C=i\int_0^{i\infty}f(u)\,du. $$ Now recall that $\omega=f(z)\,dz$ is equal to $-3(\omega_1+\omega_2)/(2\pi i)$. Since $\omega_2=d(x/y^3)$ is an exact $1$-form on $X(\Gamma)$ and the modular curve $X(\Gamma)$ has only one cusp, which in particular says that $\infty$ and $0$ are mapped to the same point on $X(\Gamma)$ under the natural map $\mathbb H^\ast\to X(\Gamma)$, the integral $\int_0^{i\infty}f_2(u)\,du$ is equal to $0$. Therefore, we have $$ C=i\int_0^{i\infty}\eta(u)^4\,du.
$$ This constant $C$ can be expressed in terms of the central value of the $L$-function of the elliptic curve $E:y^2=x^3-1728$, which is known to be nonzero. From this, it is straightforward to check that there is no simultaneous conjugation under which $\rho(T)$ and $\rho(S)$ both become unitary. \end{Example} \begin{proof}[Proof of Theorem \ref{theorem: 1.2}] We use the notation of the proof of Theorem \ref{theorem: 1.1}. Since $\kappa_\infty=n/4$ for some odd integer $n$, with respect to the basis $\{y_+(z),y_-(z)\}$, we have $\rho(T)=\pm\SM i00{-i}$. If $\rho(S)=\pm I$, then $\rho(R)=\pm\SM i00{-i}$, which contradicts $\rho(R)^3=\pm I$. Hence, $\rho(S)\neq\pm I$, and we have $\operatorname{tr}\rho(S)=0$. Then, by choosing a suitable scalar $r$, the matrix of $\rho(S)$ with respect to $\{ry_+(z),y_-(z)\}$ will be of the form \begin{equation*} \rho(S)=\pm\M abb{-a} \end{equation*} for some $a,b\in\mathbb C$ with $a^2+b^2=-1$, while $\rho(T)$ is still $\pm\SM i00{-i}$. Set $F(z)=r^2y_+(z)^2+y_-(z)^2$. We then compute that $F(Tz)=-F(z)$ and \begin{equation*} \begin{split} \left(F|_{-2}S\right)(z)&=(ary_+(z)+by_-(z))^2+(bry_+(z)-ay_-(z))^2 \\ &=-r^2y_+(z)^2-y_-(z)^2=-F(z). \end{split} \end{equation*} This proves the theorem. \end{proof} \section{Existence of the curvature equation} In this section, we will prove Theorem \ref{theorem: existence of modular form Q with described local exponents and apparentness} equipped with the data \eqref{equation: datas}. The main purpose of this section is to establish the existence and count the number of such $Q$ equipped with the data \eqref{equation: datas}. The discussion will be divided into several cases depending on $\kappa_\rho$ and $\kappa_i$. \begin{Lemma}\label{lemma: F has at most simple zero} Suppose that $F(z)$ is a meromorphic modular form of weight $4$ with respect to $\mathrm{SL}(2,\mathbb Z)$ that is holomorphic except possibly at $\rho$ and $i$. If the orders of the poles of $F(z)$ at $\rho$ and $i$ are at most $1$, then $F(z)$ is holomorphic.
\end{Lemma} \begin{proof} Let $n_1$ and $n_2$ be the orders of the poles of $F(z)$ at $i$ and $\rho$, respectively. The valence formula for meromorphic modular forms (see \cite{Serre}) gives $$ m-\frac{n_1}{2}-\frac{n_2}{3}=\frac{4}{12}, $$ where $m\geq 0$ collects the contributions of the zeros of $F(z)$, with zeros at $i$ and $\rho$ counted with weights $1/2$ and $1/3$, respectively. By assumption, $n_1,n_2\leq 1$. Checking the finitely many possibilities in this identity, one sees that necessarily $n_1\leq 0$ and $n_2\leq 0$. \end{proof} Let $t_j=E_6(z_j)^2/E_4(z_j)^3$ and define $F_j(z)=E_6(z)^2-t_jE_4(z)^3$. By the valence formula \cite[p. 85, Theorem 3]{Serre}, $F_j(z)$ has a (simple) zero at $z_j\in\mathbb H$. \begin{Lemma} \label{lemma: polynomials} Suppose that $Q$ satisfies conditions (i) and (ii) in Definition \ref{definition: Q is equipped}. Then \begin{equation}\label{equation: shape of Q} \begin{split} &Q=\pi^2\left( Q_3(z;r,s,t) +\sum^m_{j=1}\frac{r_1^{(j)}E_4(z)^4F_j(z)+r^{(j)}_2E_4(z)^7}{F_j(z)^2}\right), \end{split} \end{equation} where $r$, $r^{(j)}_1$ are free parameters and $s$, $t$, $r^{(j)}_2$ are uniquely determined by \begin{equation} \begin{split} &s=s_{\kappa_\rho}:=(1-4\kappa^2_\rho)/9,\quad t=t_{\kappa_i}:=(1-4\kappa^2_i)/4,\quad \text{and}\\ & r^{(j)}_2=r^{(j)}_{2,\kappa_j}:=t_j(t_j-1)^2(1-4\kappa^2_j)/4. \end{split} \end{equation} \end{Lemma} \begin{proof} Let $\hat{Q}$ denote the RHS of \eqref{equation: shape of Q}. Then it is a straightforward computation to show that (ii) in Definition \ref{definition: Q is equipped} holds at $p_j$ if and only if $s=s_{\kappa_\rho}$ if $p_j=\rho$, $t=t_{\kappa_i}$ if $p_j=i$, and $r^{(j)}_2=r^{(j)}_{2,\kappa_j}$ if $p_j=z_j$. By the choice of $s$, $t$ and $r^{(j)}_2$, $Q-\hat{Q}$ has at most simple poles. Further, we can choose $r^{(j)}_1$ to make $Q-\hat{Q}$ holomorphic at $z_j$. By Lemma \ref{lemma: F has at most simple zero}, $Q-\hat{Q}$ is then automatically holomorphic at $\rho$ and $i$.
Therefore, $Q-\hat{Q}$ is a holomorphic modular form of weight $4$, and the lemma follows immediately because $E_4(z)$, up to a constant, is the only holomorphic modular form of weight $4$. \end{proof} Now we are in a position to prove Theorem \ref{theorem: existence of modular form Q with described local exponents and apparentness}. \begin{proof}[Proof of Theorem \ref{theorem: existence of modular form Q with described local exponents and apparentness}] We first calculate the parameters $r,r^{(j)}_1$, $1\leq j\leq m$, such that $Q$ is apparent at $z_j$. For simplicity, we assume $j=1$. From \eqref{equation: shape of Q}, we expand $Q$ in a Laurent series at $z=z_1$: \begin{align*} &Q(z)=a_{-2}(z-z_1)^{-2}+(r_1b_{-1}+a_{-1})(z-z_1)^{-1}\\ &+\sum^\infty_{j=0}\left(a_j+r_1b_j+c_j\left(r,r^{(2)}_1,\ldots,r^{(m)}_1\right)\right)\left(z-z_1\right)^j:=\sum^\infty_{j=-2}A_j\left(z-z_1\right)^j, \end{align*} where $a_j,b_j$ are independent of $r$, $r^{(j)}_1$ and $c_j(r,r^{(2)}_1,\ldots,r^{(m)}_1)$ is linear in all variables, and also $$ y(z)=(z-z_1)^{1/2-\kappa_1}\left(1+\sum^\infty_{j=1}d_j(z-z_1)^j\right). $$ Then we derive the recursive formula by comparing both sides of \eqref{(1.1)} with $Q$ in \eqref{equation: shape of Q}: \begin{equation} j(j-2\kappa_1)d_j=\sum_{k+\ell=j-2,\ k<j}d_kA_\ell,\quad A_{-1}=a_{-1}+r_1b_{-1}, \end{equation} where $d_0=1$ and $$ d_1=\frac{1}{1-2\kappa_1}d_0A_{-1}=\frac{b_{-1}}{1-2\kappa_1}r_1+\text{terms\ of\ lower\ orders}. $$ By induction, \begin{align}\label{equation: recursive formula} &j(j-2\kappa_1)d_j=d_{j-1}A_{-1}+d_{j-2}A_0+d_{j-3}A_1+\cdots+d_0A_{j-2}\\ &\nonumber=\frac{b^{j-1}_{-1}}{(1-2\kappa_1)\cdots\left((j-1)-2\kappa_1\right)}r^{j-1}_1+\ \text{terms\ of\ lower\ orders}. \end{align} At $j=2\kappa_1$, the RHS of \eqref{equation: recursive formula} is $$ P_1\left(r,r^{(1)}_1,\ldots,r^{(m)}_1\right):=d_{2\kappa_1-1}A_{-1}+d_{2\kappa_1-2}A_0+\cdots+d_0A_{2\kappa_1-2}.
$$ Clearly, $\deg P_1=2\kappa_1$ and \begin{equation}\label{equation: P_1} P_1=B_0r^{2\kappa_1}_1+\text{terms\ of\ lower\ orders},\quad B_0\neq 0. \end{equation} We summarize what is known: \begin{enumerate} \item[$\bullet$] If $\kappa_i\not\in\mathbb N$, then $Q$ is apparent at $i$ for any tuple $\left(r,r^{(j)}_1\right)$. \item[$\bullet$] If $2\kappa_\rho/3\not\in\mathbb N$, then $Q$ is apparent at $\rho$ for any tuple $\left(r,r^{(j)}_1\right)$. \item[$\bullet$] Since the local exponents at $z_j$ are $1/2\pm\kappa_j$, there is a polynomial $P_j\left(r,r^{(1)}_1,\ldots,r^{(m)}_1\right)$ of degree $2\kappa_j$ such that $Q$ is apparent at $z_j$ if and only if $P_j\left(r,r^{(1)}_1,\ldots,r^{(m)}_1\right)=0$. \end{enumerate} Since $\kappa_\infty$ is given, we have $\kappa_\infty=\sqrt{-Q(\infty)}/2$, and then \begin{equation}\label{equation: relation of r and r_j} r+\sum^m_{j=1}\left(1-t_j\right)r^{(j)}_1+e=0, \end{equation} where $e$ is given. By Bezout's theorem, the system $P_1=\cdots=P_m=0$ together with \eqref{equation: relation of r and r_j} has $N=\prod^m_{j=1}(2\kappa_j)$ common roots counted with multiplicity, because by \eqref{equation: P_1} it is easy to see that there are no solutions at $\infty$. This proves the theorem. \end{proof}
\section{Introduction} \label{sec:intro} In the past few years, Neural Networks (NNs) have achieved superior success in various domains, \textit{e.g.}, computer vision~\cite{computervision}, speech recognition~\cite{speechrecognition}, autonomous systems~\cite{autodriving}, \textit{etc}. However, the recent appearance of adversarial attacks~\cite{fgsm} greatly challenges the security of neural network applications: by crafting and injecting human-imperceptible noises into test inputs, neural networks' prediction results can be arbitrarily manipulated~\cite{advtrain}. Until now, the emerging pace, effectiveness, and efficiency of new attacks have always taken an early lead over defense solutions~\cite{cw}, and the key factors behind adversarial vulnerability remain unclear, leaving neural network robustness research in a vicious cycle. In this work, we aim to qualitatively interpret neural network models' adversarial vulnerability and robustness, and establish a quantitative metric for model-intrinsic robustness evaluation. To interpret the robustness, we adopt the loss visualization technique~\cite{goodfellow}, which has been widely used in model convergence studies. As adversarial attacks leverage perturbations in inputs, we switch the loss visualization from its original parameter space into the input space and illustrate how a neural network is deceived by adversarial perturbations. Based on the interpretation, we design a robustness evaluation metric to measure a neural network's maximum prediction divergence within a constrained perturbation range. We further optimize the metric evaluation process to keep its consistency under extrinsic factor variance, \textit{e.g.}, model reparameterization~\cite{sharpminima}. Specifically, we make the following contributions: \vspace{-0.5mm} \begin{itemize} \item We interpret adversarial vulnerability and robustness by defining and visualizing a new loss surface called the decision surface.
Compared to the cross-entropy based loss surface, the decision surface contains the implicit decision boundary and provides a better visualization effect; \vspace{-5mm} \item We show that adversarial deception is caused by the neural network's neighborhood under-fitting. Our visualization shows that adversarial examples are naturally existing points lying in the close neighborhood of the inputs. However, the neural network fails to classify them, which causes the adversarial example phenomenon; \vspace{-1mm} \item We propose a robustness evaluation metric. Combined with a new normalization method, the metric can invariantly reflect a neural network's intrinsic robustness property regardless of attacks and defenses; \vspace{-1mm} \item We reveal that in certain cases, \textit{e.g.}, defensive distillation, the commonly-used PGD adversarial testing accuracy can give an unreliable robustness estimation, while our metric reflects the model robustness correctly. \end{itemize} Extensive evaluation results show that our robustness metric can well indicate model-intrinsic robustness across different datasets, various architectures, multiple adversarial attacks, and different defense methods. \section{Background and Related Work}\label{sec:related} \subsection{Adversarial Attacks and Robustness} Adversarial examples were first introduced by~\cite{lbfgs}, which revealed neural networks' vulnerability to adversarial noises and demonstrated the gap between artificial cognition and human visual perception. Since then, various adversarial attacks have been proposed, such as the L-BFGS attack~\cite{fgsm}, the FGSM attack~\cite{advtrain}, the C\&W attack~\cite{cw}, black-box attacks~\cite{blackbox}, \textit{etc}.
Driven by the appearance of adversarial attacks, corresponding defense techniques have also emerged, including adversarial training~\cite{advtrain}, defensive distillation~\cite{distillation}, gradient regularization~\cite{aaai}, adversarial logit pairing~\cite{alp}, \textit{etc}. Among those, MinMax robustness optimization~\cite{minmax} is considered one of the most potent defenses, which boosts adversarial accuracy by integrating the worst-case adversarial examples into the model training. Currently, testing accuracy under adversarial attacks is used to evaluate model robustness. However, it is highly affected by the attack specifications and can't comprehensively reflect the actual robustness regarding model-intrinsic properties. For example, one commonly used way to evaluate model robustness is to adopt the testing accuracy under the projected gradient descent (PGD) attack as an estimation. However, our experiments demonstrate that such a robustness estimation is highly unreliable: a model with a high PGD testing accuracy can be easily broken by other attacks. In this work, we aim to provide an intrinsic robustness property evaluation metric that is invariant to the specifications of models, attacks, and defenses. \subsection{Neural Network Loss Visualization} Neural network loss visualization is considered one of the most useful approaches in neural network analysis owing to its intuitive interpretation. Proposed by~\cite{goodfellow}, loss visualization has been utilized to analyze model training and convergence. Later,~\cite{largebatch} further revealed that flat local minima are the key to model generalization in the parameter space. However, a model reparameterization issue was identified by~\cite{sharpminima}: model parameter scaling may distort the geometric properties. In this work, we adopt the concept of loss visualization to analyze the neural network's loss behaviors under adversarial perturbations.
Meanwhile, we also provide a normalization method to solve the model reparameterization problem and derive our scaling-invariant robustness metric. \subsection{Visualization Space Selection} Besides solving the reparameterization issue, the loss visualization needs further customization for adversarial perturbation analysis. As loss visualization mainly evaluates a neural network's generalization ability, previous works focus on the \textit{parameter space} to analyze model training and convergence. However, such an analysis focus doesn't fit well with adversarial attacks and defenses, whose action scope is the \textit{input space}. On the other hand, the loss function in the \textit{input space} measures the network's loss variations \textit{w.r.t.} the input perturbations. It naturally shows the influence of adversarial perturbations and is suitable for studying the robustness to adversarial perturbations. Therefore, we extend the previous methods into the \textit{input space}. Fig.~\ref{fig:1} shows two examples of the visualized loss surface of a ResNet model in the \textit{parameter space} and the \textit{input space}, which illustrate the difference between the two visualization spaces. Although the loss surface in the \textit{parameter space} shows a flat minimum, its significant non-smooth variations in the \textit{input space} demonstrate that the loss is highly sensitive to input perturbations, which can be an adversarial vulnerability. In this work, we adopt the \textit{input space} as the default visualization setting for robustness interpretation.
\begin{figure}[!tb] \centering \hspace*{-3mm}\includegraphics[width=3.5in]{compare} \vspace{-5mm} \caption{(a) ResNet's Loss Surface in Parameter Space (b) ResNet's Loss Surface in the Input Space: The loss surface demonstrates significant non-smooth variation.} \label{fig:1} \vspace{-3mm} \end{figure} \section{Adversarial Robustness Interpretation} \label{sec:interpret} In this section, we use the customized loss visualization to reveal the mechanism of adversarial perturbation and neural network robustness. \subsection{Neural Network Loss Visualization } \noindent\textbf{Loss Visualization Basis: } The prediction of a neural network can be evaluated by its loss function $F(\theta, x)$, where $\theta$ is the model parameter set (weights and biases) and $x$ is the input. As the inputs $x$ are usually constructed in a high-dimensional space, direct visualization analysis on the loss surface is impossible. To solve this issue, loss visualization projects the high-dimensional loss surface into a low-dimensional space (\textit{e.g.}, a 2D hyper-plane). During the projection, two vectors $\alpha$ and $\beta$ are selected and normalized as the base vectors for the $x$-$y$ hyper-plane. Given a starting input point $o$, the points around it can be interpolated, and the corresponding loss values can be calculated as: \begin{equation} V(i, j, \alpha, \beta) = F(o + i \cdot \alpha + j \cdot \beta), \label{eq:1} \end{equation} where the original point $o$ in the function $F$ denotes the original image, $\alpha$ and $\beta$ can be treated as unit perturbations added to the image, and the coordinate $(i, j)$ denotes the perturbation intensity. In loss visualization, a point's coordinate also denotes its divergence from the original point along the $\alpha$ and $\beta$ directions. After sampling sufficiently many points' loss values, the function $F$ with high-dimensional inputs can be projected onto the chosen hyper-plane.
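As a concrete illustration, the grid sampling of Eq.~\eqref{eq:1} can be sketched as follows. This is a minimal NumPy example of our own; the function and parameter names (and the toy grid settings) are illustrative, not part of the actual implementation:

```python
import numpy as np

def project_loss_surface(F, o, alpha, beta, radius=5.0, steps=11):
    """Sample V(i, j, alpha, beta) = F(o + i*alpha + j*beta) on a 2D grid (Eq. 1).

    F           : scalar loss function of a (flattened) input vector
    o           : the original input, i.e., the center of the hyper-plane
    alpha, beta : normalized base directions spanning the hyper-plane
    """
    coords = np.linspace(-radius, radius, steps)
    V = np.empty((steps, steps))
    for a, i in enumerate(coords):      # perturbation intensity along alpha
        for b, j in enumerate(coords):  # perturbation intensity along beta
            V[a, b] = F(o + i * alpha + j * beta)
    return coords, V
```

The returned array `V` is exactly the height field that is rendered as a surface plot, with the unperturbed input sitting at the center of the grid.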
\vspace{1mm} \noindent\textbf{Decision Surface Construction: } As loss visualization is mostly used to analyze model convergence, the loss function $F(\theta, x)$ is usually represented by the cross-entropy loss, which constructs a conventional \textit{loss surface} in the visualization. However, one critical limitation of the cross-entropy based loss surface is that it cannot qualitatively show the explicit decision boundary for an input, and is thus less helpful for adversarial deception analysis. Therefore, we propose a \textit{decision surface} to replace the \textit{loss surface} in the loss visualization: \begin{equation} S(x) = Z(x)_t - max\{ Z(x)_i,~i \neq t\}, \label{eq:2} \end{equation} where $Z(x)$ is the logit output before the softmax layer, and $t$ is the true class index of the input $x$. The decision function $S(x)$ evaluates the confidence of the prediction. In correct prediction cases, $S(x)$ is always positive, while $S(x)<0$ denotes a wrong prediction. Specifically, $S(x) = 0$ indicates equal confidence for the correct and the strongest wrong prediction, which is the \textit{decision boundary} of the model. Consequently, the visualization surface constructed by the function $S(x)$ is defined as the \textit{decision surface}. Different from the cross-entropy based \textit{loss surface}, the \textit{decision surface} demonstrates explicit decision boundaries and assists the adversarial analysis. \subsection{Visualizing Adversarial Vulnerability} \textbf{Experimental Analysis: } Based on loss visualization, we project a neural network's loss behavior onto 2D hyper-planes. By comparing the model's four different types of loss behavior on the \textit{decision surface}, we provide an experimental analysis of the adversarial vulnerability. As shown in Fig.~\ref{fig:xent-cw}, the visualized hyper-planes have as their central points the original neural network inputs, and their \textit{x}-axes share the same input divergence direction -- $\alpha$.
Meanwhile, each hyper-plane has a dedicated input divergence direction -- $\beta$ along the \textit{y}-axis, which indicates four kinds of perturbations: random noise, the cross-entropy based non-targeted FGSM attack~\cite{fgsm}, the least-likely targeted FGSM attack~\cite{fgsm}, and the non-targeted C\&W attack~\cite{cw}. Specifically, the $\beta$ values for the random noise and the three adversarial attacks can be determined as: \begin{equation} \begin{split} &\beta_0 = sign(\textit{N}(\mu=0, \sigma=1)), \\ &\beta_1 = sign(\nabla_x~(-y_t \cdot log(softmax(Z)))), \\ &\beta_2 = sign(\nabla_x~(y_l \cdot log(softmax(Z)))), \\ &\beta_3 = sign(\nabla_x~(max\{ Z(x)_i, i \neq t\}-Z(x)_t)), \end{split} \label{eq:betas} \end{equation} where $N$ is the normal distribution, $Z$ is the logit output, $y_t$ is the true class label, and $y_l$ is the least-likely class label (both one-hot). In Fig.~\ref{fig:xent-cw}, we use arrows to show the shortest distance needed to cross the decision boundary $S(x)=0$. As projected in Fig.~\ref{fig:xent-cw}(a), when the input is diverged by a perturbation along a random direction, it takes a much longer distance to cross the decision boundary. This explains the common observation that small random noises on natural images won't degrade neural network accuracy significantly. By contrast, for the adversarial attacks projected in Fig.~\ref{fig:xent-cw}(b)$\sim$(d), the attacks find aggressive directions (the $\beta$ direction shown on the \textit{y}-axis), towards which the decision boundary lies in the close neighborhood of the original input. Therefore, adding such small perturbations, which even humans can't perceive, into the input can mislead the model decision and generate adversarial examples. \vspace{1mm} \noindent\textbf{Vulnerability Interpretation:} The above experimental analysis reveals the nature of adversarial examples: although a neural network seems to converge well after model training (the demonstrated model achieves 90\% accuracy on CIFAR10), there still exist large regions of image points that the neural network fails to classify correctly (as shown by the large regions beyond the decision boundary in Fig.~\ref{fig:xent-cw}(b)$\sim$(d)). What's worse, some of these regions are extremely close to the original input point (even within an $\ell_{\infty} < 1$ distance). Based on this analysis, we conclude that, rather than being ``generated'' by attackers, adversarial examples are ``naturally existing'' points that models fail to learn correctly. To fix such intrinsic vulnerability of neural networks, the essential and ultimate robustness enhancement should focus on solving this ``neighborhood under-fitting'' issue. \begin{figure}[!tb] \centering \includegraphics[width=3.3in]{xent-cw} \vspace{-2mm} \caption{Adversarial vulnerability demonstration when the loss surface in the input space is projected onto different hyper-planes.} \label{fig:xent-cw} \vspace{-4mm} \end{figure} \subsection{Interpreting Adversarial Robustness} To verify our geometric robustness theory, we compare two pairs of robust and natural models trained on MNIST and CIFAR10, respectively.
These models are released from the adversarial attacking challenges \footnote{https://github.com/MadryLab/mnist\_challenge}\footnote{https://github.com/MadryLab/cifar10\_challenge}, and are built with the same structure but different robustness degrees (natural training and MinMax training~\cite{minmax}). We visualize the models' decision surfaces for interpretation: (1) As shown in Fig.~\ref{fig:mnist}, dramatic differences between the natural and robust decision surfaces can be observed: the natural (vulnerable) model's decision surfaces show sharp peaks and large slopes, where the decision confidence can quickly drop to negative areas (wrong classification regions). (2) By comparison, on the robust decision surfaces (shown in Fig.~\ref{fig:mnist}(c)(d)), all neighborhood points around the original input point are located on a high plateau with $S(x) > 0$ (correct classification regions). (3) The surface in the neighborhood is rather flat, with negligible slopes until it reaches approximately the $\ell_\infty=0.3$ constraint, which is exactly the adversarial attack constraint used in the robust training. A similar phenomenon can be observed in Fig.~\ref{fig:cifar} on CIFAR10. The robust model's loss geometry verifies our previous conclusion that fixing the neighborhood under-fitting issue is the essential robustness enhancement for neural networks, and a flat and wide plateau around the original point on the decision surface is one of the most desired properties of a robust model. \begin{figure}[!tb] \centering \includegraphics[width=3.3in]{mm-mnist} \caption{Decision surfaces of the natural and robust models on MNIST.
(a)-(b): natural model surfaces under random and adversarial projections; (c)-(d): robust model surfaces under random and adversarial projections (each unit denotes a 0.05 perturbation step size)} \label{fig:mnist} \end{figure} \begin{figure}[!tb] \centering \includegraphics[width=3.3in]{mm-cifar} \caption{Decision surfaces of the natural and robust models on CIFAR10 (step size = 1). As expected, the natural model's surface shows sharp peaks and cliffs while the robust model's shows a flat plateau.} \label{fig:cifar} \vspace{-4mm} \end{figure} \section{Adversarial Robustness Evaluation} \label{sec:evaluate} \subsection{Formal Definition of Robustness Metric} As aforementioned, the decision surface of a robust model should have a flat neighborhood around the input point $x$. An intuitive explanation is that a robust model should have good prediction stability: its prediction does not change significantly under small perturbations. In fact, models are not always robust: the predictions of a model on clean and noisy inputs are not always the same and can diverge to a large extent under small adversarial noises. As such, \textit{given a feasible perturbation set, the maximum divergence between the original prediction and the worst-case adversarial prediction can be used to denote the model's vulnerability degree (i.e., the inverse of model robustness)}. Based on this definition, we calculate the divergence between the two predictions on an original input and an adversarial input with perturbations in a defined range. Specifically, we use the Kullback–Leibler divergence ($D_{KL}$), a common metric for measuring the divergence between two probability distributions. The robustness can then be estimated by: \begin{equation}\label{eq:4} \begin{split} \psi(x) = \frac {1} {\max\limits_{\delta \in set} D_{KL}(P(x), ~P(x+\delta))}, \end{split} \end{equation} where $P(\cdot)$ is the prediction result from the evaluated model.
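As a minimal illustration of this metric (our own toy sketch, not the actual implementation), the following NumPy code evaluates $\psi(x)$ for a generic prediction function; here the inner maximum is approximated by random sampling of the $\ell_\infty$ ball for simplicity, whereas the actual evaluation optimizes it by gradient ascent:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """D_KL(p || q) between two discrete probability vectors."""
    p = np.clip(p, eps, None)
    q = np.clip(q, eps, None)
    return float(np.sum(p * np.log(p / q)))

def robustness_psi(predict, x, epsilon, n_trials=200, seed=0):
    """psi(x) = 1 / max_{||delta||_inf <= epsilon} D_KL(P(x), P(x + delta)).

    `predict` maps an input to a probability vector P(x).  The maximum is
    approximated here by random sampling of the l_inf ball.
    """
    rng = np.random.default_rng(seed)
    p0 = predict(x)
    worst = 0.0
    for _ in range(n_trials):
        delta = rng.uniform(-epsilon, epsilon, size=np.shape(x))
        worst = max(worst, kl_divergence(p0, predict(x + delta)))
    return 1.0 / worst if worst > 0 else float("inf")
```

A perfectly stable predictor yields an infinite score, while a prediction that diverges sharply inside the perturbation ball drives $\psi(x)$ toward zero.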
A lower divergence $D_{KL}$ indicates that the model is more robust, as a more stable prediction is maintained. The final robustness metric $\psi(x)$ is defined to be inversely proportional to the maximum $D_{KL}$, since the largest divergence generates the smallest robustness score $\psi(x)$. To obtain the $\max$ term in Eq.~\ref{eq:4}, we use the gradient ascent algorithm to directly optimize the KL-divergence, which gives accurate and stable estimations, as we will show in Sec.~\ref{sec:exp}. \subsection{Invariant Normalization against \\~\hspace{7mm}Model Reparameterization} The robustness metric defined above suffers from a problem known from previous works as ``model reparameterization'': when weights and biases are enlarged by the same coefficients simultaneously, a neural network model's prediction results and its robustness property do not change, while the defined KL divergence can change dramatically~\cite{sharpminima}. To solve this problem, we design a simple but effective normalization method: the basic idea is to add a scale-invariant normalization layer after the logit layer output. Since the neural network before the logit layer is piecewise-linear, we can use normalization to safely remove the scaling effect of model reparameterization. The basic process is as follows: first, we obtain the confidence vector of the logit layer, which can contain either positive or negative values; then we divide it by the max absolute value to normalize the confidence vector into the range $(-1, 1)$ and re-center it into the positive range $(0, 2)$. Owing to the max division, the final confidence vector does not change even when the parameters are linearly scaled up (or down). Finally, we use a simple sum-normalization to transform the confidence vector into a valid probability distribution. The overall normalization is: \begin{equation}\label{eq:6} P(x) = \frac {\tilde{F}(x)}{\sum_{i}{\tilde{F}(x_i)}}, ~\tilde{F}(x) = \frac{F(x)}{\max |F(x)|}+1.
\end{equation} Here $P(x)$ is the final normalized probability distribution, $\tilde{F}$ is the normalized confidence vector, $F(x)$ is the original logit layer output, and $x$ is the input. With the above normalization method, we can successfully alleviate the model reparameterization effect, as shown in Sec.~\ref{sec:exp}. \section{Robustness Evaluation Experiments} \label{sec:exp} In this section, we evaluate our proposed robustness metric in various experimental settings and compare it with the conventional evaluation based on adversarial testing accuracy. \subsection{Experiment Setup} To test the generality of our metric for neural network robustness evaluation, we adopt three common datasets (\textit{i.e.}, MNIST, CIFAR10, and ImageNet) and different models for the experiments, including FcNet, LeNet, ConvNet, ResNet18, ResNet152, and DenseNet. To further test our metric on neural networks with different robustness degrees, the following defense settings are applied: No Defense, Adversarial Training~\cite{fgsm}, Gradient Regularization Training~\cite{aaai}, Defensive Distillation~\cite{distillation}, Gradient Inhibition~\cite{wujie}, and MinMax Training~\cite{minmax}\footnote{The gradient regularization and MinMax training are re-implemented in PyTorch, which may cause small deviations from the originally reported accuracy.}. Correspondingly, the robustness verification is based on referencing the adversarial testing accuracies from two of the currently strongest attacks: the 30-step PGD (PGD-30) attack based on the cross-entropy loss and the 30-step C\&W (CW-30) attack based on the C\&W loss. The adversarial perturbations are constrained in $\ell_\infty$-norm to 0.3/1.0, 8.0/255.0, and 16.0/255.0 on MNIST, CIFAR10, and ImageNet, respectively.
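The scale-invariant normalization of Eq.~\eqref{eq:6}, applied when computing $\psi(x)$ throughout these experiments, can be sketched as follows (a minimal NumPy illustration of our own, assuming at least one nonzero logit):

```python
import numpy as np

def normalize_logits(logits):
    """Scale-invariant normalization (Eq. 6).

    Divide by the maximum absolute value, shift into (0, 2), then
    renormalize into a probability distribution.  Assumes at least one
    nonzero logit.
    """
    f_tilde = logits / np.max(np.abs(logits)) + 1.0  # confidence in (0, 2)
    return f_tilde / np.sum(f_tilde)
```

Because of the max division, linearly scaling all logits, which is exactly what model reparameterization does to the network output, leaves the result unchanged.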
\begin{table}[!tb] \centering \scriptsize \renewcommand\arraystretch{1.3} \setlength{\tabcolsep}{3.5mm}{ \caption{Robustness Metric Evaluation on MNIST} \vspace{-2.5mm} \begin{tabular}{ccccc} \hline \hline Model & Defense & $\psi(x)$ & \begin{tabular}[c]{@{}c@{}}PGD-30\\ Accuracy\end{tabular} & \begin{tabular}[c]{@{}c@{}}C\&W-30\\ Accuracy\end{tabular} \\ \hline \hline \multirow{3}{*}{FcNet} & No Defense & \textbf{73.36} & 0.73\% & 0.2\% \\ \cline{2-5} & AdvTrain & \textbf{80.43} & 4.43\% & 2.12\% \\ \cline{2-5} & MinMax & \textbf{297.2} & 82.9\% & 80.3\% \\ \hline \multirow{3}{*}{LeNet} & No Defense & \textbf{93.8} & 2.82\% & 1.01\% \\ \cline{2-5} & AdvTrain & \textbf{264.7} & 51.8\% & 46.2\% \\ \cline{2-5} & MinMax & \textbf{958.4} & 92.3\% & 90.3\% \\ \hline \hline \end{tabular} *AdvTrain: \cite{fgsm}, MinMax: \cite{minmax}. \label{table:mnist}} \vspace{-4.5mm} \end{table} \subsection{Robustness Metric Evaluation} \textbf{MNIST Experiments: } On the MNIST dataset, the results are shown in Table~\ref{table:mnist}: (1) The results first demonstrate that our metric can well reflect different robustness degrees of the same neural network model. For example, the three FcNet models show increasing robustness in $\psi(x)$, which aligns well with their reference accuracies from both the PGD-30 and CW-30 attacks; (2) The results also show the generality of our metric on the FcNet and LeNet models. \vspace{1mm} \noindent\textbf{CIFAR10 Experiments: } Table~\ref{table:cifar10} shows the experimental results on CIFAR10, including three common neural network models (\textit{i.e.}, ConvNet, ResNet18, and DenseNet), as well as three robustness settings (\textit{i.e.}, no defense, gradient regularization, and MinMax training). The experimental results show that our metric aligns well with the referenced adversarial testing accuracies, implying our metric's good generality on complex neural network models and different defenses.
To better illustrate a neural network model's robustness, we visualize three ResNet18 models with different robustness degrees in Fig.~\ref{robust_degrees}. As the robustness degree increases, the models' loss surfaces become increasingly smooth. Our empirical visualization results imply that a smoother decision surface in the input space indicates better adversarial robustness, which matches the parameter-space generalization hypothesis~\cite{largebatch}. \begin{figure}[!b] \centering \vspace{-3mm} \includegraphics[width=3.3in]{robust_degrees} \vspace{-2mm} \caption{Different models' loss visualizations: models with higher robustness demonstrate smoother and more stable geometry.} \vspace{-1mm} \label{robust_degrees} \end{figure} \vspace{1mm} \noindent\textbf{ImageNet Experiments: } In the experiments on MNIST and CIFAR10, our proposed robustness metric aligns well with the adversarial testing accuracies of PGD-30 and CW-30. However, when we evaluate the MinMax model on ImageNet, the two reference accuracies demonstrate a certain inconsistency. The MinMax model is released as the base model of the state-of-the-art defense on ImageNet by~\cite{imagenet}; the reported MinMax training takes about 52 hours on 128 V100 GPUs. The reported accuracy suggests very good robustness: the model achieves 42.6\% under 2000-iteration PGD attacks. However, when we evaluate the model more thoroughly with the CW-30 attack, its testing accuracy drops to only 12.5\%. We call such a case ``\textit{unreliable estimation}'' in PGD-based adversarial testing: a robustness estimate that cannot generalize to all attacks. We will discuss this case and several similar ones in detail in Sec.~\ref{sec:failure}, and reveal the current deficiency of adversarial-testing-based robustness estimation.
\begin{table}[!tb] \centering \scriptsize \renewcommand\arraystretch{1.3} \setlength{\tabcolsep}{3.0mm}{ \caption{Robustness Metric Evaluation on CIFAR10} \vspace{-2.5mm} \begin{tabular}{ccccc} \hline \hline Model & Defense & $\psi(x)$ & \begin{tabular}[c]{@{}c@{}}PGD-30\\ Accuracy\end{tabular} & \begin{tabular}[c]{@{}c@{}}C\&W-30\\ Accuracy\end{tabular} \\ \hline \hline \multirow{3}{*}{ConvNet} & No Defense & \textbf{58.3} & 0.0\% & 0.0\% \\ \cline{2-5} & GradReg & \textbf{86.5} & 16.0\% & 14.8\% \\ \cline{2-5} & MinMax & \textbf{182.6} & 39.6\% & 38.7\% \\ \hline \multirow{3}{*}{ResNet18} & No Defense & \textbf{67.9} & 0.0\% & 0.0\% \\ \cline{2-5} & GradReg & \textbf{77.8} & 18.7\% & 17.5\% \\ \cline{2-5} & MinMax & \textbf{162.7} & 44.3\% & 43.1\% \\ \hline \multirow{3}{*}{DenseNet} & No Defense & \textbf{59.1} & 0.1\% & 0.0\% \\ \cline{2-5} & GradReg & \textbf{77.9} & 18.6\% & 17.2\% \\ \cline{2-5} & MinMax & \textbf{142.4} & 39.1\% & 38.8\% \\ \hline \hline \end{tabular} *GradReg: \cite{aaai}, MinMax: \cite{minmax}. \vspace{-5mm} \label{table:cifar10}} \end{table} \subsection{Our Metric vs. Adversarial Testing Accuracy} \label{sec:failure} As mentioned above, the adversarial testing accuracies from different attacks may be inconsistent and can therefore mislead robustness estimation. In addition to the ImageNet example, we include two more cases in which adversarial testing accuracies yield unreliable robustness estimation: defensive distillation~\cite{distillation} and gradient inhibition~\cite{wujie}. To demonstrate the unreliability of these cases, we train three new models on MNIST and CIFAR10 respectively, using natural training, defensive distillation, and gradient inhibition.
For the ImageNet model, we use a publicly released model\footnote{The MinMax model is obtained from https://github.com/facebookresearch/imagenet-adversarial-training.}, which achieves a state-of-the-art accuracy of 45.5\% against the PGD-30 attack (within $\ell_\infty \leq$ 16/255). The overall experimental results are shown in Table~\ref{table:advtest}, which shows that although all these defenses achieve high PGD-30 adversarial testing accuracy, they actually bring very limited robustness improvement: On MNIST and CIFAR10, the distillation and gradient inhibition defenses provide the models with high adversarial testing accuracy against both FGSM and PGD-30 attacks (even higher than the state-of-the-art MinMax method), which seemingly indicates these models are significantly robust. However, when measured by our metric, we reach the opposite conclusion: these models are merely as robust as the no-defense models and incomparable to the robust models trained by MinMax. To further verify this conclusion, we test these models under more adversarial settings, and their testing accuracy dramatically degrades to almost zero in all the tests.
\begin{table}[!tb] \centering \scriptsize \renewcommand\arraystretch{1.5} \vspace{1mm} \caption{Unreliable Cases of Adversarial Testing Accuracy} \vspace{-2mm} \begin{tabular}{cccccc} \hline \hline Dataset & Defense & \begin{tabular}[c]{@{}c@{}}FGSM\\ Accuracy\end{tabular} & \begin{tabular}[c]{@{}c@{}}PGD-30\\ Accuracy\end{tabular} & $\psi(x)$ & \begin{tabular}[c]{@{}c@{}}C\&W-30\\ Accuracy\end{tabular} \\ \hline \hline \multirow{4}{*}{MNIST} & No Defense & 23.4\% & 3.5\% & 89.8 & 0.5\% \\ \cline{2-6} & Distillation & \textbf{97.3\%}* & \textbf{97.1\%}* & 70.5 & 0.0\% \\ \cline{2-6} & GradInhib & \textbf{98.3\%}* & \textbf{97.8\%}* & 87.0 & 0.0\% \\ \cline{2-6} & MinMax & {98.3\%} & {92.3\%} & 958.4 & 90.3\% \\ \hline \multirow{4}{*}{CIFAR10} & No Defense & 7.6\% & 0.1\% & 58.3 & 0.0\% \\ \cline{2-6} & Distillation & \textbf{72.6\%}* & \textbf{72.3\%}* & 60.5 & 0.0\% \\ \cline{2-6} & GradInhib & \textbf{79.8\%}* & \textbf{79.7\%}* & 70.0 & 0.1\% \\ \cline{2-6} & MinMax & {55.7\%} & {39.6\%} & 182.6 & 38.7\% \\ \hline \multirow{2}{*}{ImageNet}& No Defense & 15.5\% & 7.3\% & 1.7$\times$10$^5$ & 4.6\% \\ \cline{2-6} & MinMax & \textbf{46.9\%} & \textbf{45.7\%} & 2.3$\times$10$^5$ & 12.5\% \\\hline \hline \end{tabular} \label{table:advtest} *Distillation: \cite{distillation}, GradInhib: \cite{wujie}, \vspace{-1mm} MinMax: \cite{minmax} *Bold accuracies denote the unreliable robustness estimation cases. \normalsize \vspace{-2mm} \end{table} The tests above further prove our statement: the adversarial testing accuracy based on PGD-30 may yield unreliable robustness estimation, which cannot reflect the model's intrinsic robustness. This is because the distillation and gradient inhibition both rely on the input gradient vanishing to achieve robustness enhancement, which is mainly provided by the nonlinear softmax and negative log loss. Since C\&W attack doesn't rely on the cross-entropy loss, it can easily crack those two defenses. 
Such a case also applies to the ImageNet model trained with the MinMax defense, as shown in the last two rows of Table~\ref{table:advtest}. In contrast, our robustness metric successfully reflects the models' true robustness properties under different defenses. In all the above cases, the robustness metric gives reliable robustness estimation, remaining unaffected by the defense methods and by the unreliable PGD adversarial testing accuracy. \subsection{Reparameterization Invariance Evaluation} The reliability of our proposed metric is also reflected in its invariance to model parameter scaling. Previous work~\cite{largebatch} defined a metric, named $\epsilon$-sharpness, to evaluate the loss surface's geometric properties. We adopt its original definition and apply it in the input space to evaluate the sharpness of the input-space loss surface, which, as discussed above, can empirically reflect adversarial generalization. The experimental results are shown in Table~\ref{table:repar}, where $\epsilon$ denotes the $\epsilon$-sharpness, $\psi_s$ denotes our robustness metric based on the softmax layer without normalization, and $\psi_n$ denotes our robustness metric with normalization. For the test cases, \textit{Org.} indicates tests with the original model without reparameterization, while {*100} and {/100} denote that the model's logit-layer weights and biases are scaled accordingly. Note that such scaling does not change accuracy or robustness in practice~\cite{sharpminima}. The experiments show that both $\epsilon$-sharpness and the un-normalized $\psi_s$ give widely varying robustness estimates under reparameterization. By contrast, the normalization method successfully alleviates the scaling influence and enables our metric $\psi_n$ to maintain a stable estimate under model reparameterization. Our metric can thus capture a model's robustness degree more precisely, unaffected by model reparameterization.
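The scaling sensitivity of the un-normalized metric can be reproduced numerically. The sketch below uses toy logit values for a clean input and a perturbed neighbor, and, purely as an illustrative assumption (not the exact normalization layer defined earlier), a simple L2 normalization of the logit vector: the KL term underlying $\psi_s$ shifts by orders of magnitude when the logits are scaled by 100, while the normalized version is unchanged.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))        # numerically stable softmax
    return e / e.sum()

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

# Toy logits at an input x and at a perturbed neighbor x + delta.
f_x = np.array([2.0, 1.0, 0.1])
f_d = np.array([1.5, 1.4, 0.2])

# Ingredient of the un-normalized metric: KL between softmax outputs.
kl_org  = kl(softmax(f_x),       softmax(f_d))
kl_x100 = kl(softmax(f_x * 100), softmax(f_d * 100))   # reparameterized

# With L2 normalization of the logits first, the scale factor cancels.
n = lambda z: z / np.linalg.norm(z)
kl_norm_org  = kl(softmax(n(f_x)),       softmax(n(f_d)))
kl_norm_x100 = kl(softmax(n(f_x * 100)), softmax(n(f_d * 100)))
```

Here `kl_x100` differs from `kl_org` by several orders of magnitude, whereas `kl_norm_x100` equals `kl_norm_org` up to floating-point error, mirroring the $\psi_s$ vs. $\psi_n$ behavior in Table~\ref{table:repar}.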
\begin{table}[!tb] \centering \scriptsize \renewcommand\arraystretch{1.5} \vspace{1mm} \caption{Robustness Metrics Comparison under Reparameterization} \vspace{-2mm} \begin{tabular}{ccp{4mm}cp{4.5mm}cp{4.5mm}cccc} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Metric} & \multicolumn{3}{c}{No Defense Model} & \multicolumn{3}{c}{MinMax Model} \\ \cline{3-8} & & Org. & *100 & /100 & Org. & *100 & /100 \\ \hline \hline \multirow{3}{*}{ConvNet} & $\epsilon$ & 22.7 & 109.6 & 0.095 & 0.43 & 3.20 & 0.004 \\ \cline{2-8} & $\psi_s$ & 0.96 & 0.012 & 1677.8 & 39.6 & 5.33 & 377443.3 \\ \cline{2-8} & $\psi_n$ & \textbf{58.3} & \textbf{59.5} & \textbf{57.9} & \textbf{182.5} & \textbf{183.1} & \textbf{177.36} \\ \hline \multirow{3}{*}{ResNet18} & $\epsilon$ & 15.4 & 87.4 & 0.048 & 0.085 & 5.63 & 0.005 \\ \cline{2-8} & $\psi_s$ & 0.963 & 0.0097 & 3178.8 & 17.11 & 0.158 & 128089 \\ \cline{2-8} & $\psi_n$ & \textbf{110.9} & \textbf{110.8} & \textbf{102.5} & \textbf{193.0} & \textbf{192.62} & \textbf{172.5} \\ \hline \hline \end{tabular} \label{table:repar} \normalsize \vspace{-2mm} \end{table} \subsection{Efficiency of the Robustness Metric} Here we show the efficiency of our metric compared to adversarial testing methods. Since we are evaluating a model property, it should theoretically be invariant to the number of inputs chosen. We show that as the test batch size increases, the calculated robustness metric gradually converges to a stable estimate close to the whole-test-set average robustness. Fig.~\ref{pic:convergence} shows the relation between the batch size and the robustness deviation across batches of the same size. We can see that on both datasets, as the batch size increases, the robustness measurement becomes more accurate, with much smaller deviations.
With a batch size of 1000 (or less), we can obtain the model's robustness estimation with less than 10\% deviation on MNIST and 5\% on CIFAR10, which demonstrates higher efficiency than accuracy testing on the whole test set. \subsection{Robustness Estimation Grading} Based on our experiments, we can derive a rough relationship between the robustness evaluation score and the adversarial accuracy. For example, on the MNIST dataset within the common threat model ($\ell_\infty < 0.3$), we can define model robustness in three levels: Vulnerable ($acc < 0.3$), Fairly Robust ($0.3 \leq acc < 0.6$), and Robust ($0.6 \leq acc \leq 1.0$). In this case, the corresponding robustness metric ranges are ($0,~100$), ($100,~270$), and ($270,~\infty$), which can be used to quickly grade a neural network's robustness. The robustness grading for CIFAR10 and ImageNet cannot be well developed yet due to the limited robustness currently achievable (40\% and 15\%, respectively). \section{Conclusion} \label{sec:conclusion} In this work, by visualizing and interpreting neural networks' decision surfaces in the input space, we show that adversarial examples are essentially caused by neural networks' neighborhood under-fitting issue. Conversely, robust models manage to smooth their neighborhoods and relieve this under-fitting effect. Guided by this observation, we propose a model-intrinsic robustness evaluation metric based on the maximum KL-divergence of the model's predictions in a given neighborhood constraint. Combined with our newly designed normalization layer, the robustness metric shows multiple advantages over previous methods, including great generality across datasets/models/attacks/defenses, invariance under reparameterization, and excellent computing efficiency.
\section{Introduction} To satisfy increasingly high data rate demands, visible light communication (VLC) has drawn tremendous interest from both industry and academia as a promising complement to traditional radio frequency communication (RFC), which suffers from spectrum saturation \cite{Komine,Raja,Elgala}. The maturing of LED manufacturing techniques over the recent decade has largely boosted the trend of replacing traditional lighting systems with LED alternatives for both indoor and outdoor illumination, and the resulting infrastructures are ready for the deployment of VLC. It is a low-cost technology in which simple intensity modulation and direct detection (IM/DD) techniques can be used. In addition, it enjoys a number of additional advantages such as eye safety, high security, and freedom from electromagnetic interference. VLC is both unique from and similar to RFC. Regarding its uniqueness, VLC only allows positive and real signals to drive the LEDs as intensities (a non-informative DC-bias is typically used); its channel, especially in indoor environments, varies much more slowly than its RFC counterpart; and baseband waveforms modulate the LEDs directly instead of being up-converted first. As for the similarity between VLC and RFC, many existing RFC techniques can be applied to VLC systems, although possibly with non-straightforward modifications, e.g., optical multiple input multiple output (O-MIMO) \cite{4Zeng09}, optical orthogonal frequency division multiplexing (O-OFDM) \cite{5Jean,Dimitrov,Shlomi}, and other advanced signal processing techniques \cite{You01,Teramoto05,Karout1,Monteiro,Qian,Cossu,Bai12,Vucic}. These (and related) works nicely take advantage of the various diversities a VLC system provides, such as spatial diversity, frequency diversity, color diversity, and adaptive DC-bias, to improve system performance. It is worth noting that the color diversity and adaptive DC-bias configuration are specific to VLC.
The motivation behind this work is to exploit the benefits of these diversities jointly for a very power-efficient VLC, while the focus of this paper is on the problem of constellation design in a high dimensional space. This space is formed by several degrees of freedom, including the adaptive DC-bias, baseband subcarriers, and multiple wavelengths corresponding to R/G/B LED lights. Since spheres (i.e., constellation points) can pack more compactly in a higher dimensional space, a constellation with a larger minimum Euclidean distance (MED) can be expected there.\footnote{The system symbol error rate (SER) is governed by the MED at working electrical SNRs for VLC \cite{Qian}.} This MED maximization problem is formulated in a non-convex optimization form, and is then relaxed to a convex optimization problem by a linear approximation method. Key practical lighting requirements are taken into account as constraints, e.g., the optical power constraint, average color constraint, non-negative intensity constraint, and the color rendering index (CRI) and luminous efficacy rate (LER) requirements \cite{CIE,Stimson}. \begin{figure*}[bp] \centering \includegraphics*[width=18cm]{Fig_1_revised.pdf} \caption{(a). System block diagram of a decoupled system; (b). System block diagram of the DCI-JCFM. \big($(Re)MAP_{R/G/B}$: bits and constellation (Re)mapper for R/G/B tunnel; $(De)MOD_{R/G/B}$: (De)modulator for R/G/B tunnel; ${R/G/B}^*$: Photo Detector and color filter for R/G/B tunnel; $J_{(Re)MAP}$: joint (Re)mapper; $J_{Dect}$: joint symbol detector \big)}\label{fig1} \end{figure*} For RFC, one well-known shortcoming of utilizing multiple subcarriers is the excessive peak-to-average power ratio (PAPR) introduced, which can cause severe nonlinear distortion that degrades system performance. Plenty of methods have been proposed to reduce PAPR (see \cite{Han} and references therein).
In fact, when using multiple subcarriers for VLC, high PAPR is also a severe issue, due to the limited linear dynamic range of amplifiers and LEDs. This paper shows that such distortion can be avoided by formulating the dynamic range requirement as (convex) constraints of the optimization problem. In this way, distortion control becomes an offline process, or a by-product of constellation design. The remainder of this paper is organized as follows. In Section II, we first provide an overview of DC-informative modulation schemes for optical communications; DC-informative multicarrier modulation is introduced as a power-efficient alternative to traditional non-DC-informative optical OFDM configurations. In Section III, we propose the DCI-JCFM method for systems with RGB LEDs; the key lighting constraints, including dynamic range control, are discussed, and the cases of ``Balanced'', ``Unbalanced'' and ``Very Unbalanced'' systems are introduced. In Section IV, we discuss the pros and cons of using the dynamic range constraint, the long-term PAPR constraint, and the individual PAPR constraint. In Section V, we provide simulation results to verify the significant performance gains of the proposed method over a decoupled method for balanced, unbalanced, and very unbalanced systems. Finally, Section VI provides conclusions. \vspace{.1in} \section{An Overview of DC-informative Modulation for Optical Communications} Optical communications based on IM/DD has the unique feature of requiring all signals modulating the LEDs to be positive (and real), so multiple schemes have been proposed accordingly, such as the well-known asymmetrically clipped optical OFDM (ACO-OFDM), DC-biased optical OFDM (DCO-OFDM), and optical multisubcarrier modulation (MSM). These schemes all discard the DC-bias at the receiver, which causes significant power loss.
The DC-informative modulation schemes were then proposed such that $100\%$ of the optical power is used for data transmission (see \cite{Karout1} for the single-carrier case and \cite{Qian} for the multi-carrier selective-fading case). To be specific, consider the channel model \begin{equation} y(t)=\gamma\eta s_i(t)\ast h(t)+v(t)\qquad i\in[1,N_c], \label{1} \end{equation} where $s_i(t)$ is a symbol waveform mapped from $\mathbf{b}_i$ that contains $N_b$ bits of information, $\ast$ denotes the convolution operator, $h(t)$ is either a flat-fading or selective-fading channel, $v(t)$ denotes white noise, $y(t)$ is the received signal, $\eta$ and $\gamma$ are the electrical-to-optical and optical-to-electrical conversion factors respectively\footnote{We assume $\gamma\eta=1$ without loss of generality (w.l.o.g.).}, and $N_c=2^{N_b}$ stands for the constellation size. The key feature of a DC-informative modulation is that the following basis functions are used {\it jointly} to carry information \begin{equation} \phi_1(t)=\sqrt{\frac{1}{T_s}}\Pi(\frac{t}{T_s}),\label{10} \end{equation} \begin{equation} \phi_{2k}(t)=\sqrt{\frac{2}{T_s}}\cos(2\pi f_kt)\Pi(\frac{t}{T_s})~~ k=1,2,\ldots,K,\label{11} \end{equation} \begin{equation} \phi_{2k+1}(t)=\sqrt{\frac{2}{T_s}}\sin(2\pi f_kt)\Pi(\frac{t}{T_s})~~ k=1,2,\ldots,K,\label{12} \end{equation} where both the I and Q channels are used, $\phi_1(t)$ is the DC-bias basis, $T_s$ is the symbol interval, $f_k=\frac{k}{T_s}$ is the $k$-th subcarrier frequency, $K$ is the total number of subcarriers, and a rectangular pulse-shaper is used \begin{equation} \Pi(t) = \left\{ \begin{array}{rl} 1, & \text{if} ~0\leq t<1\\ 0, & \text{otherwise}.
\end{array} \right.\label{pulse} \end{equation} The relationship between the symbol waveforms $s_i(t)$ and the constellation points $\mathbf{s}_i=[s_{1,i},s_{2,i},\ldots,s_{2K+1,i}]$ is \begin{equation} s_i(t)=\underbrace{s_{1,i}\phi_1(t)}_{\text{Adaptive Bias}}+s_{2,i}\phi_2(t)+\ldots+s_{2K+1,i}\phi_{2K+1}(t).\label{6} \end{equation} One reasonable goal is to minimize the SER subject to fixed electrical/optical power by properly designing the constellation matrix \begin{equation*} \mathcal{S}= \begin{bmatrix} s_{1,1} & s_{1,2} & \ldots & s_{1,N_c}\\ s_{2,1} & s_{2,2} & \ldots & s_{2,N_c}\\ \vdots & \vdots & \ddots & \vdots\\ s_{2K+1,1} & s_{2K+1,2} & \ldots & s_{2K+1,N_c}\\ \end{bmatrix}, \end{equation*} where each column of $\mathcal{S}$ is a constellation point, and the MED over all columns should be maximized to reach the goal. We typically stack the columns into a single vector instead, i.e. \begin{equation} \mathbf{s}_J=[\mathbf{s}_1^T~\mathbf{s}_2^T~\ldots~\mathbf{s}_{N_c}^T]^T, \end{equation} to simplify the formulation of the optimization problem discussed in Section III. \section{DC-informative Joint Color-Frequency Modulation with RGB LEDs} Based on the idea discussed in Section II, we propose two constellation design methods taking advantage of the informative DC-bias for a visible light communication system employing one RGB LED, as shown in Fig.~\ref{fig1}. If the inputs into the R/G/B modulators carry independent bit information as shown in Fig.~\ref{fig1}(a), the scheme is termed decoupled. In comparison, if the inputs into the R/G/B modulators only carry information jointly, it is a joint scheme instead, as shown in Fig.~\ref{fig1}(b). In other words, although a joint scheme still uses three modulators to create continuous-domain waveforms to drive the corresponding LEDs, information cannot be estimated through recovery of a single one (or any pair) of them.
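The orthonormality of the DC-informative basis above, and the fact that the adaptive DC coefficient alone sets a symbol waveform's time average, can be checked numerically. A minimal sketch with toy parameters ($T_s=1$, $K=2$, a hypothetical constellation point):

```python
import numpy as np

# Sampled DC-informative basis on one symbol interval (T_s = 1):
# phi_1 = DC, phi_{2k} / phi_{2k+1} = cos / sin at f_k = k / T_s.
Ts, K, N = 1.0, 2, 1000                      # toy parameters
t = (np.arange(N) + 0.5) * Ts / N            # midpoint sampling grid
Phi = [np.sqrt(1 / Ts) * np.ones(N)]         # phi_1(t): DC-bias basis
for k in range(1, K + 1):
    Phi.append(np.sqrt(2 / Ts) * np.cos(2 * np.pi * k * t / Ts))
    Phi.append(np.sqrt(2 / Ts) * np.sin(2 * np.pi * k * t / Ts))
Phi = np.array(Phi)                          # shape (2K+1, N)

# Gram matrix of the sampled basis: should be the identity.
G = Phi @ Phi.T * (Ts / N)

# A symbol waveform is the linear combination with coefficients s_i;
# its time average is fixed by the adaptive DC-bias coefficient alone.
s_i = np.array([0.8, 0.2, -0.1, 0.05, 0.15]) # hypothetical point
wave = s_i @ Phi
```

The Gram matrix `G` comes out as the $(2K+1)\times(2K+1)$ identity, and `wave.mean()` equals $s_{1,i}\sqrt{1/T_s}$, matching the observation that only the DC-bias coefficients carry average optical power.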
It is observed that the joint scheme may utilize four types of diversity per channel use: frequency diversity, color (wavelength) diversity, adaptive DC, and spatial diversity. Spatial diversity can be achieved by extending from one RGB LED to $N$ of them, which is outside the scope of this paper; we only emphasize the first three diversities here. This scheme is termed DC-informative joint color-frequency modulation (DCI-JCFM). The decoupled system shown in Fig.~\ref{fig1}(a) works as follows. At the transmitter side, three independent bit streams $\mathbf{b}_{x,i},~x\in\{\text{red, green, blue}\},~i\in[1,N_c]$, of length $N_x$ are first mapped to corresponding constellation points $\mathbf{s}_{x,i}$ of size $(2K+1)\times 1$, which are modulated separately to generate continuous symbol waveforms (currents) $s_{x,i}(t)$ by \eqref{6} for each tunnel. If cross-talk exists for any tunnel, a corresponding precoder $\mathbf{P}_x$ needs to be applied before modulation. The waveforms $s_{x,i}(t)$ are then converted from electrical to optical intensity signals $\eta s_{x,i}(t)$ to drive the LEDs. At the receiver side, photo detectors of each tunnel collect the waveforms (convolved with the channel and corrupted by noise). The received signal is sent through red, green, and blue color filters respectively, and after optical-to-electrical conversion the waveforms $y_{x,i}(t)$ are obtained. Then $2K+1$ matched filters are employed for each tunnel to demodulate $y_{x,i}(t)$ and obtain the signal vector $\mathbf{y}_{x,i}$. Three symbol detectors follow to provide $\hat{\mathbf{s}}_{x,i}$, estimates of the symbol vectors, which are de-mapped separately to finally obtain estimates of the original bit sequences $\mathbf{b}_{x,i}$. If cross-talk exists, post-equalizers $\mathbf{U}_x^T$ are applied before the symbol detectors. The system using DCI-JCFM, shown in Fig.~\ref{fig1}(b), works differently.
A joint bit sequence \begin{equation} \mathbf{b}_{J,i}=[\mathbf{b}_{R,i}^T~\mathbf{b}_{G,i}^T~\mathbf{b}_{B,i}^T]^T, \end{equation} is first mapped jointly to a constellation point $\mathbf{s}_{J,i}$ of size $(6K+3)\times 1$. Then $\mathbf{s}_{J,i}$ is converted by a joint modulator to the continuous domain to obtain $s_{J,i}(t)$ through \begin{align} s_{J,i}(t)&=\underbrace{\sum_{p=1}^3 s_{(p-1)(2K+1)+1,i}\phi_1(t)}_{\text{Adaptive R/G/B Bias}}\notag\\ &+\sum_{p=1}^3\sum_{k=2}^{2K+1}s_{(p-1)(2K+1)+k,i}\phi_k(t). \end{align} If cross-talk exists, a joint precoder $\mathbf{P}_J$ is applied before modulation. We also observe the expectation of $s_{J,i}(t)$: \begin{align} \mathbb{E}[s_{J,i}(t)]&=\sum_{p=1}^3 s_{(p-1)(2K+1)+1,i}\phi_1(t), \end{align} since all non-DC basis functions have zero time average. Therefore, both the average optical power and the average color of the system are determined solely by the three adaptive DC-biases, while the dynamic range of the waveform is influenced by all subcarriers of all LEDs. For our design, we will demonstrate with a line-of-sight (LOS) scenario in which the channel has cross-talk due to the imperfectness of the receiver color filters. The discrete channel model can be written as follows \cite{Monteiro} \vspace{-0.02in} \begin{align} \mathbf{y}&=\mathbf{H}\mathbf{s}+\mathbf{n}\notag\\ &=\begin{bmatrix} \mathbf{y}_{R} \\ \mathbf{y}_{G} \\ \mathbf{y}_{B} \end{bmatrix} =\begin{bmatrix} \mathbf{I} & \mathbf{O} & \mathbf{O}\\ \mathbf{O} & (1-2\epsilon)\mathbf{I} & \epsilon\mathbf{I} \\ \mathbf{O} & \epsilon\mathbf{I} & (1-2\epsilon)\mathbf{I} \end{bmatrix} \begin{bmatrix} \mathbf{s}_R \\ \mathbf{s}_G \\ \mathbf{s}_B \end{bmatrix}+ \begin{bmatrix} \mathbf{n}_R \\ \mathbf{n}_G \\ \mathbf{n}_B \end{bmatrix},\label{2} \end{align} where $\epsilon\in[0,0.5]$ is termed the ``cross-talk index (CI)'' and $\mathbf{n} \sim {\cal N}({\bf 0}, {\bf I} \cdot N_0)$.
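The SVD-based pre/post-equalization used later for this cross-talk channel can be sketched numerically. Below we use a scalar-per-color version of the block matrix in \eqref{2} (each identity block collapsed to a scalar) and a hypothetical cross-talk index $\epsilon=0.1$:

```python
import numpy as np

# Scalar-per-color version of the cross-talk channel of eq. (11):
# red is isolated; green and blue leak into each other with index eps.
eps = 0.1                                    # hypothetical CI value
H = np.array([[1.0, 0.0,         0.0],
              [0.0, 1 - 2 * eps, eps],
              [0.0, eps,         1 - 2 * eps]])

# SVD-based pre-equalizer P = V S^{-1} and post-equalizer U^H:
# together they turn the cross-talk channel into the identity.
U, S, Vh = np.linalg.svd(H)
P = Vh.T @ np.diag(1.0 / S)                  # pre-equalizer
effective = U.T @ H @ P                      # post-equalized channel
```

Since $\mathbf{H}=\mathbf{U}\mathbf{S}\mathbf{V}^H$, the effective channel $\mathbf{U}^H\mathbf{H}\mathbf{P}$ is (numerically) the identity, so the constellation can be designed as if there were no cross-talk, with the equalizers folded into the constraints as in \eqref{20}.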
\subsection{The objective function} At working SNRs for VLC, which are typically medium to high, the minimum Euclidean distance between constellation pairs governs the SER. Therefore, we seek to minimize the system SER by maximizing the minimum Euclidean distance through careful design of the constellation vector $\mathbf{s}_J$, subject to key lighting constraints. For a constellation containing $N_c$ points, the distances of a total of $N_c(N_c-1)/2$ pairs have to be constrained as follows \cite{Beko12} \begin{equation} \mathbf{s}_J^T\mathbf{F}_l\mathbf{s}_J\geq d^2_{min},\label{8} \end{equation} where we define \begin{equation} \mathbf{F}_{l(p,q)}=\mathbf{E}_{pq}, \end{equation} and \begin{equation} \mathbf{E}_p=\mathbf{e}_p^T\otimes \mathbf{I}_{6K+3}, \end{equation} where $\otimes$ denotes the Kronecker product, $\mathbf{e}_p$ is the $p$-th column of the identity matrix $\mathbf{I}_{N_c}$ (so that $\mathbf{E}_p\mathbf{s}_J=\mathbf{s}_p$), and \begin{equation} \mathbf{E}_{pq}=\mathbf{E}_p^T\mathbf{E}_p-\mathbf{E}_p^T\mathbf{E}_q- \mathbf{E}_q^T\mathbf{E}_p+\mathbf{E}_q^T\mathbf{E}_q, \end{equation} where $l=(p-1)N_c-\frac{p(p+1)}{2}+q,~p,q\in\{1,2,\ldots,N_c\},~p<q$. The distance constraints are nonconvex in $\mathbf{s}_J$. We choose to use the following linear approximation around the point $\mathbf{s}_J^{(0)}$ \begin{align} \mathbf{s}_J^T\mathbf{F}_l\mathbf{s}_J\cong 2\mathbf{s}_J^{(0)T}\mathbf{F}_l\mathbf{s}_J- \mathbf{s}_J^{(0)T}\mathbf{F}_l\mathbf{s}_J^{(0)}\geq d^2_{min},\qquad \forall l.\label{28} \end{align} \subsection{Practical lighting requirements} For our design problem, the practical lighting issues considered include the average optical power, average illumination color, LER and CRI, non-negative intensity, and flickering-free requirements.
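The construction of $\mathbf{E}_{pq}$ can be verified on a toy instance: $\mathbf{s}_J^T\mathbf{F}_l\mathbf{s}_J$ should recover the squared Euclidean distance between the $p$-th and $q$-th constellation points. A sketch with hypothetical sizes ($N_c=3$ points of dimension $D$, standing in for $6K+3$):

```python
import numpy as np

Nc, D = 3, 5                          # toy sizes; D stands for 6K+3
rng = np.random.default_rng(0)
points = rng.normal(size=(Nc, D))     # hypothetical constellation
s_J = points.reshape(-1)              # stacked vector [s_1; s_2; s_3]

def E(p):
    # Selector matrix: E_p s_J = s_p (p-th constellation point).
    e = np.zeros((1, Nc)); e[0, p] = 1.0
    return np.kron(e, np.eye(D))

def F(p, q):
    # Pairwise-distance matrix E_pq from the four-term definition.
    Ep, Eq = E(p), E(q)
    return Ep.T @ Ep - Ep.T @ Eq - Eq.T @ Ep + Eq.T @ Eq

# Quadratic form recovers the squared pairwise distance.
d2 = s_J @ F(0, 1) @ s_J
```

Since $\mathbf{E}_{pq}=(\mathbf{E}_p-\mathbf{E}_q)^T(\mathbf{E}_p-\mathbf{E}_q)$, the quadratic form equals $\|\mathbf{s}_p-\mathbf{s}_q\|^2$ exactly, which is what the constraints \eqref{8} bound from below.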
The first three requirements can be constrained using a single equation, \begin{equation} P_o\cdot\mathbf{s}_{avg}=\frac{1}{N_c}\mathbf{J}\mathbf{s}_J, \end{equation} where $P_o$ is the average optical power of an RGB LED, $\mathbf{J}$ is a selection matrix (containing only ones and zeros) whose rows add up the R/G/B components of $\mathbf{s}_J$ respectively, and $\mathbf{s}_{avg}=[s_R~s_G~s_B]^T$ is termed the average color ratio vector, for which the following equation holds \begin{equation} s_R+s_G+s_B=1. \end{equation} Thus the optical power and illumination color requirements are constrained together. The luminous efficacy rate and color rendering index requirements can be satisfied by properly choosing $\mathbf{s}_{avg}$. \subsection{Dynamic range requirement} Since the linear dynamic range of LEDs is limited, the signal range for each LED has to be constrained to avoid nonlinear distortion, i.e. \begin{equation} 0\leq s_{x,i}(t)\leq I_U,\qquad \forall x,i,\label{13} \end{equation} where $I_U$ is the highest current level and, for simplicity, we have assumed that the red, green, and blue LEDs have the same dynamic range. We propose to constrain the dynamic range of a sequence of sampled signals \begin{equation} 0\leq s_{x,i}(t_n)\leq I_U,\qquad \forall x,i,\label{14} \end{equation} where $t_n$ is picked as \begin{equation} t_n=\frac{nT_s}{2KN_o}, \qquad n=0,1,\ldots,N, \end{equation} where $N_o$ is the oversampling rate, $N=2KN_o$, and $N+1$ is the total number of sample points. It should be noted that although \eqref{14} does not guarantee \eqref{13}, meaning the continuous signal waveforms designed subject to \eqref{14} could have negative amplitudes in between the sample instants, the negative peak is very small compared to the dynamic range of the signal. We can compensate for this effect by adding a small post DC-bias after obtaining an optimized constellation.
Therefore, we can formulate these point-wise dynamic range constraints as follows \begin{equation} \mathbf{u}_n^T\mathbf{K}_x\mathbf{J}_i\mathbf{s}_J\geq 0, \qquad \forall (x,i,n) \end{equation} \begin{equation} \mathbf{u}_n^T\mathbf{K}_x\mathbf{J}_i\mathbf{s}_J\leq I_U, \qquad \forall (x,i,n) \end{equation} where $\mathbf{J}_i$ selects the $i$-th constellation point, $\mathbf{K}_x$ selects the corresponding coefficients for color $x$, $\mathbf{u}_n=[u_{n,0},u^c_{n,1},u^s_{n,1},\ldots,u^c_{n,K},u^s_{n,K}]^T$, $u_{n,0}=\sqrt{1/T_s}$, $u^c_{n,k}=\sqrt{2/T_s}\cos(2\pi f_kt_n)$, and $u^s_{n,k}=\sqrt{2/T_s}\sin(2\pi f_kt_n)$. \subsection{Problem formulation} We first formulate the optimization problem when there is no cross-talk among the different colored LEDs, i.e. $\mathbf{H}=\mathbf{I}$, as follows \begin{equation} \begin{aligned} & \underset{\mathbf{s}_J,d_{min,J}}{\text{maximize}} & & d_{min,J} \\ & \text{s.t.} & & P_o\cdot\mathbf{s}_{avg}=\frac{1}{N_c}\mathbf{J}\mathbf{s}_J\\ &&& 2\mathbf{s}_J^{(0)T}\mathbf{F}_l\mathbf{s}_J- \mathbf{s}_J^{(0)T}\mathbf{F}_l\mathbf{s}_J^{(0)}\geq d^2_{min,J}\qquad \forall l\\ &&& \mathbf{u}_n^T\mathbf{K}_x\mathbf{J}_i\mathbf{s}_J\geq 0 \qquad \forall (x,i,n)\\ &&& \mathbf{u}_n^T\mathbf{K}_x\mathbf{J}_i\mathbf{s}_J\leq I_U \qquad \forall (x,i,n), \end{aligned} \end{equation} which is convex in $\mathbf{s}_J$ and $d_{min,J}$, so a specialized solver such as the CVX toolbox for MATLAB can be utilized \cite{cvx}. Starting from an initial point $\mathbf{s}_J^{(0)}$, the scheme iteratively converges to a local optimum with each run. The best constellation is chosen from the locally optimal constellations obtained over multiple runs. When the channel suffers from cross-talk, we deal with it by employing the well-known singular value decomposition (SVD) based pre-equalizer $\mathbf{P}=\mathbf{V}\mathbf{S}^{-1}$ and post-equalizer $\mathbf{U}^H$ for our system, where $\mathbf{H}=\mathbf{U}\mathbf{S}\mathbf{V}^H$.
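One iteration of the linearized design above is in fact a linear program. The sketch below solves a toy single-color instance with hypothetical sizes, where a simple box constraint $0\leq \mathbf{s}_J\leq I_U$ stands in for the full sampled dynamic-range constraints and a total-sum condition stands in for the optical-power constraint (an illustrative simplification, not the paper's exact setup):

```python
import numpy as np
from scipy.optimize import linprog

Nc, D, I_U = 4, 3, 1.0                       # toy sizes; D = 2K+1
rng = np.random.default_rng(0)
s0 = rng.uniform(0.1, 0.9, size=Nc * D)      # initial point s_J^(0)

def F(p, q):
    # Pairwise-distance matrix for points p, q in the stacked vector.
    M = np.kron(np.eye(Nc)[p:p + 1] - np.eye(Nc)[q:q + 1], np.eye(D))
    return M.T @ M

# Variables x = [s_J, d2]; maximize d2  <=>  minimize -d2.
c = np.r_[np.zeros(Nc * D), -1.0]
A_ub, b_ub = [], []
for p in range(Nc):
    for q in range(p + 1, Nc):
        Fl = F(p, q)                         # d2 - 2 s0'Fl s <= -s0'Fl s0
        A_ub.append(np.r_[-2 * s0 @ Fl, 1.0])
        b_ub.append(-s0 @ Fl @ s0)
A_eq = [np.r_[np.ones(Nc * D) / Nc, 0.0]]    # toy optical-power constraint
b_eq = [np.sum(s0) / Nc]
bounds = [(0, I_U)] * (Nc * D) + [(0, None)] # box + d2 >= 0
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=bounds)
```

Since $\mathbf{s}_J=\mathbf{s}_J^{(0)}$ is feasible with a strictly positive linearized MED, the solver returns a positive optimum; iterating this step with the new point as $\mathbf{s}_J^{(0)}$ mirrors the successive-approximation procedure described above.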
The constellation is then designed via an optimization with transformed constraints as follows \begin{equation} \begin{aligned} & \underset{\mathbf{s}_J,d_{min,J}}{\text{maximize}} & & d_{min,J} \\ & \text{s.t.} & & P_o\cdot\mathbf{s}_{avg}=\frac{1}{N_c}\mathbf{J}\mathbf{P}_J\mathbf{s}_J\\ &&& 2\mathbf{s}_J^{(0)T}\mathbf{F}_l\mathbf{s}_J- \mathbf{s}_J^{(0)T}\mathbf{F}_l\mathbf{s}_J^{(0)}\geq d^2_{min,J}\qquad \forall l\\ &&& \mathbf{u}_n^T\mathbf{K}_x\mathbf{J}_i\mathbf{P}_J\mathbf{s}_J\geq 0 \qquad \forall (x,i,n)\\ &&& \mathbf{u}_n^T\mathbf{K}_x\mathbf{J}_i\mathbf{P}_J\mathbf{s}_J\leq I_U \qquad \forall (x,i,n), \end{aligned}\label{20} \end{equation} where $\mathbf{P}_J=\mathbf{I}_{N_c}\otimes\mathbf{P}$, and this optimization is clearly convex as well. For the decoupled system, three independent problems can be formulated to find the MEDs for each color, i.e. \begin{equation} \begin{aligned} & \underset{\mathbf{s}_{R/G/B},d_{min,R/G/B}}{\text{maximize}} & & d_{min,R/G/B} \\ & \text{s.t.} & & P_o\cdot s_{R/G/B}=\frac{1}{N_c}\mathbf{j}^T\mathbf{P}_{R/G/B}\mathbf{s}_{R/G/B}\\ &&& 2\mathbf{s}_{R/G/B}^{(0)T}\tilde{\mathbf{F}}_l\mathbf{s}_{R/G/B}- \mathbf{s}_{R/G/B}^{(0)T}\tilde{\mathbf{F}}_l\mathbf{s}_{R/G/B}^{(0)}\\ &&&\geq d^2_{min,R/G/B}\qquad \forall l\\ &&& \mathbf{u}_n^T\tilde{\mathbf{J}}_i\mathbf{P}_{R/G/B}\mathbf{s}_{R/G/B}\geq 0 \qquad \forall (i,n)\\ &&& \mathbf{u}_n^T\tilde{\mathbf{J}}_i\mathbf{P}_{R/G/B}\mathbf{s}_{R/G/B}\leq I_{U,R/G/B} \qquad \forall (i,n), \end{aligned} \end{equation} where $d_{min,R/G/B}$, $\mathbf{j}$, $\mathbf{P}_{R/G/B}$, $\tilde{\mathbf{F}}_l$, $\tilde{\mathbf{J}}_i$, and $I_{U,R/G/B}$ are defined in a similar manner to the corresponding parameters in \eqref{20}; for brevity we omit the explicit definitions.
\section{Dynamic Range VS PAPR Constraints} Although a hard constraint on the dynamic range of symbol waveforms can avoid non-linear distortion completely, it may bring side effects such as an excessive decrease in power efficiency. This is particularly true if only one or a few symbol waveforms have a notably larger dynamic range than the majority. In such cases, one can consider using a certain PAPR constraint to replace the dynamic range constraint. In other words, there is a tradeoff between the allowable PAPR and power efficiency. Two types of PAPR constraints (for each LED light) can be considered. One is the so-called long-term PAPR (L-PAPR), i.e., the ratio of the peak power over all waveforms to their time-averaged power. Assuming no cross-talk, the L-PAPR constraint for LED $x$ can be written as \begin{align} \Phi_x(\mathbf{s}_J)&=\frac{[\max_{i,n}(\mathbf{u}_n^T\mathbf{K}_x\mathbf{J}_i\mathbf{s}_J)]^2} {\mathbf{s}_J^T\mathbf{s}_J/N_c}\notag\\ &=\frac{N_c[\max_{i,n}(\mathbf{u}_n^T\mathbf{K}_x\mathbf{J}_i\mathbf{s}_J)]^2} {\mathbf{s}_J^T\mathbf{s}_J}\leq\beta_x,\label{27} \end{align} where $\beta_x$ is the required L-PAPR for LED $x$. Thus, a set of constraints can be formulated as follows \begin{equation} \mathbf{u}_n^T\mathbf{K}_x\mathbf{J}_i\mathbf{s}_J-\sqrt{\frac{\beta_x\mathbf{s}_J^T\mathbf{s}_J}{N_c}}\leq 0\qquad \forall (i,n), \end{equation} which is non-convex in $\mathbf{s}_J$. A way to deal with this is to use a similar linear approximation as in \eqref{28} at the same initial point $\mathbf{s}_J^{(0)}$. The above constraints are thus transformed as follows \begin{equation} \mathbf{u}_n^T\mathbf{K}_x\mathbf{J}_i\mathbf{s}_J- \sqrt{\frac{\beta_x}{N_c}}\Big[(\mathbf{s}_J^{(0)T}\mathbf{s}_J^{(0)})^{\frac{1}{2}}+ (\mathbf{s}_J^{(0)T}\mathbf{s}_J^{(0)})^{-\frac{1}{2}} \mathbf{s}_J^{(0)T} (\mathbf{s}_J-\mathbf{s}_J^{(0)})\Big]\leq 0~\forall (i,n). \end{equation} The other is the individual PAPR (I-PAPR), i.e., the ratio of the peak power of each waveform to its own average power.
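The linear approximation above replaces $\sqrt{\mathbf{s}_J^T\mathbf{s}_J}$ by the first-order expansion of the convex function $\|\mathbf{s}_J\|$ at $\mathbf{s}_J^{(0)}$, which is a global under-estimator, so the convexified constraint is conservative: it can only shrink the feasible set, never relax it. A quick numerical check of this property, in Python with random stand-in vectors:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
s0 = rng.normal(size=n)   # stand-in for the initial point s_J^(0)

# The first-order expansion of the convex function ||s|| at s0 is a global
# under-estimator:  ||s|| >= ||s0|| + s0^T (s - s0) / ||s0||.
def linearized_norm(s, s0):
    n0 = np.linalg.norm(s0)
    return n0 + s0 @ (s - s0) / n0

for _ in range(100):
    s = rng.normal(size=n)
    assert np.linalg.norm(s) >= linearized_norm(s, s0) - 1e-9

# The bound is tight at the linearization point itself.
assert abs(np.linalg.norm(s0) - linearized_norm(s0, s0)) < 1e-12
```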
The I-PAPR of the $i$-th waveform for LED $x$ can be written as \begin{align} \Phi_{x,i}(\mathbf{s}_J)&=\frac{[\max_{n}(\mathbf{u}_n^T\mathbf{K}_x\mathbf{J}_i\mathbf{s}_J)]^2} {\mathbf{s}_J^T\mathbf{J}_i^T\mathbf{J}_i\mathbf{s}_J}\leq \beta_{x,i}. \end{align} The corresponding constraint can be written as \begin{equation} \mathbf{u}_n^T\mathbf{K}_x\mathbf{J}_i\mathbf{s}_J-\sqrt{\beta_{x,i}\mathbf{s}_J^T\mathbf{J}_i^T\mathbf{J}_i\mathbf{s}_J}\leq 0\qquad \forall n, \end{equation} and a similar linear approximation process is applied to convert these to convex constraints. To the best of our knowledge, no comprehensive comparison of the performance of systems applying these three constraints is available so far. A recent observation in \cite{Li} shows that non-linearity mitigation for an IM/DD VLC system is a more involved problem than expected, since the lower baseband frequencies cause larger non-linear distortion than the higher ones. This effect, if taken into account along with the three constraints discussed above, is expected to make the design problem even more worthwhile to look into. \section{Performance Evaluation} In this section, we compare the performance of the decoupled scheme and DCI-JCFM by assessing the maximum MED and bit error rate (BER) under different channel cross-talk and color illumination assumptions\footnote{A binary switching algorithm (BSA) is applied to optimally map bit sequences to constellation points \cite{Schreckenbach}.}.
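The difference between the two PAPR notions can be made concrete with a short numpy sketch on hypothetical sampled waveforms: L-PAPR normalizes the global peak power by the average power of the whole waveform set, while I-PAPR normalizes each waveform's peak by its own average power (and is therefore invariant to rescaling a single waveform).

```python
import numpy as np

rng = np.random.default_rng(3)
Nc, Nt = 4, 32                     # 4 toy waveforms with 32 samples each
w = rng.normal(size=(Nc, Nt))      # hypothetical sampled symbol waveforms

# L-PAPR: global peak power over ALL waveforms, normalized by the
# average power of the whole set.
lpapr = (w**2).max() / (w**2).mean()

# I-PAPR: peak power of EACH waveform, normalized by its own average power.
ipapr = (w**2).max(axis=1) / (w**2).mean(axis=1)

assert lpapr >= 1.0 and np.all(ipapr >= 1.0)   # peak power >= average power

# Rescaling a single waveform leaves every I-PAPR unchanged,
# but in general changes the L-PAPR of the set.
w2 = w.copy()
w2[0] *= 10.0
ipapr2 = (w2**2).max(axis=1) / (w2**2).mean(axis=1)
assert np.allclose(ipapr2, ipapr)
```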
Each constellation point is assumed to have equal probability of transmission, and the union bound on the symbol error rate (SER) of both schemes can be written as \cite[Eq.25]{Karout1} \begin{equation} P_{e,s}\approx \frac{2N_n}{N_c}Q\bigg(\sqrt{\frac{d_{min,z}^2}{2N_0}}\bigg), \end{equation} where $N_n$ is the number of neighbor constellation pairs \cite{Karout1}, \begin{equation} Q(x)=\frac{1}{\sqrt{2\pi}}\int_{x}^{\infty}\exp(-t^2/2)dt \end{equation} denotes the Gaussian Q-function, and $d_{min,z}$, $z\in\{J,R,G,B\}$, stands for the MED of DCI-JCFM ($z=J$) or the per-color MEDs of the decoupled scheme ($z\in\{R,G,B\}$). The bit error rate is thus calculated as \begin{equation} P_{e,b}= \frac{\lambda}{N_b}P_{e,s}, \end{equation} where $\lambda$ is the number of wrongly detected bits in each bit sequence, which can be minimized by employing the BSA mapper. \subsection{System comparison with no channel cross-talk} We first compare DCI-JCFM and the decoupled scheme when channel cross-talk does not exist. To guarantee a fair comparison, the following system parameters are chosen: the number of Monte-Carlo runs for each scheme is $N_M=20$; the length of the bit sequence for each channel use is $N_b=6$ for DCI-JCFM and $N_{b_{R/G/B}}=2$ for each channel of the decoupled system; the number of subcarriers for each LED is $K=2~\text{or}~3$; the average optical power is $P_o=20$; the symbol interval $T_s=1$ is used\footnote{Without loss of generality $T_s=1$ is chosen, since the design is rate independent.}; the upper bound of the waveform amplitude is $I_u=80$; and the average color ratio vector is, for a balanced system, \begin{equation} \mathbf{s}_{avg,B}=[1/3,1/3,1/3]^T, \end{equation} for an unbalanced system \begin{equation} \mathbf{s}_{avg,U}=[4/9,3/9,2/9]^T, \end{equation} and for a very unbalanced system \begin{equation} \mathbf{s}_{avg,VU}=[0.7,0.15,0.15]^T. \end{equation} The MEDs of the two schemes for the three systems, obtained by picking the best constellation from the $20$ local optima, are summarized in Table I.
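The union bound and the resulting BER estimate are straightforward to evaluate; the sketch below takes only the formulas from the text, while the values for $N_n$, $N_c$, $\lambda$, and $N_b$ are hypothetical placeholders.

```python
import math

def Q(x):
    # Gaussian tail function: Q(x) = 0.5 * erfc(x / sqrt(2)).
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ser_union_bound(d_min, N0, Nn, Nc):
    # P_{e,s} ~= (2 Nn / Nc) * Q( sqrt(d_min^2 / (2 N0)) )
    return (2.0 * Nn / Nc) * Q(math.sqrt(d_min**2 / (2.0 * N0)))

def ber(d_min, N0, Nn, Nc, lam, Nb):
    # P_{e,b} = (lambda / Nb) * P_{e,s}
    return (lam / Nb) * ser_union_bound(d_min, N0, Nn, Nc)

assert abs(Q(0.0) - 0.5) < 1e-12

# Hypothetical values: 64 points, 96 neighbor pairs, MED taken from Table I.
p_lo_noise = ber(19.995, N0=4.0, Nn=96, Nc=64, lam=1.2, Nb=6)
p_hi_noise = ber(19.995, N0=16.0, Nn=96, Nc=64, lam=1.2, Nb=6)
assert p_lo_noise < p_hi_noise    # the BER drops as the noise power N0 drops
```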
From Table I, the key observations are: a. With DCI-JCFM, the ``joint MED'' is much larger than the R/G/B ``decoupled MEDs'', except in the cases with very unbalanced illumination. Even in the very unbalanced case, however, the green and blue channels of the decoupled scheme suffer severe performance loss due to their small MEDs, so DCI-JCFM is still expected to work better. b. Larger MEDs are obtained with an increased number of subcarriers. We only list the cases $K=2$ and $K=3$ for brevity, but we have observed through additional simulations that this gain continues to grow with $K$. c. The more balanced a system is, the better the expected performance. \begin{table}[h] \centering \caption{MED comparison with no cross-talk, DCI-JCFM (rows 1$\&$2) VS Decoupled (rows 3$\&$4).} \vspace{-0.2in} \begin{tabular}{cccc} \hline $d_{min,z}$ & \text{Balanced} & \text{Unbalanced} & \text{Very Unbalanced} \\ \hline $\text{K=2}$ & 19.995 & 19.588 & 18.020 \\ $\text{K=3}$ & 24.818 & 24.559 & 22.752 \\ $\text{K=2}$ & [13.18,13.18,13.18] & [15.90,11.93,7.95] & [27.68,5.93,5.93] \\ $\text{K=3}$ & [15.03,15.03,15.03] & [18.04,13.53,9.02] & [31.56,6.76,6.76] \\ \hline \end{tabular} \end{table} \subsection{DCI-JCFM performance with channel cross-talk} With $K=2$ and the other parameters given the same values as in the previous section, we simulate DCI-JCFM to obtain the best MEDs subject to different cross-talk levels, controlled by $\epsilon$ varying within $[0,0.2]$ (since, with only average-quality color filters, cross-talk beyond $0.2$ can be avoided).
\begin{table}[h] \centering \caption{MED with different cross-talk levels, DCI-JCFM.} \vspace{-0.2in} \begin{tabular}{cccc} \hline $d_{min,z}$ & \text{Balanced} & \text{Unbalanced} & \text{Very Unbalanced} \\ \hline $\epsilon=0$ & 19.995 & 19.588 & 18.020 \\ $\epsilon=0.05$ & 18.834 & 18.798 & 17.014 \\ $\epsilon=0.1$ & 16.898 & 16.835 & 16.346 \\ $\epsilon=0.15$ & 15.334 & 15.037 & 15.192 \\ $\epsilon=0.2$ & 14.444 & 14.250 & 14.273 \\ \hline \end{tabular} \end{table} From Table II, the key observations are: a. With increased channel cross-talk, the performance of every system degrades monotonically. b. The performance of the balanced system remains the best at any level of channel cross-talk. c. DCI-JCFM is fairly robust to cross-talk: even at a severe cross-talk level, i.e., $\epsilon=0.2$, the system performance is still comparable to or better than that of the decoupled counterpart. \subsection{The designed symbol waveforms with DCI-JCFM} In this section we pick the optimized constellation $\mathbf{s}_J^*$ designed for the unbalanced system as an example. If cross-talk does not exist, the corresponding subcarrier symbol waveforms $s_{R,i}(t),~s_{G,i}(t),~s_{B,i}(t)~\forall i$ obtained with DCI-JCFM are plotted in Fig. 2 - Fig. 4. With fixed optical power $P_o=20$ and varying noise power $N_0$, Fig. 5 shows the bit error rate curves of the two schemes with different color illumination across selected working electrical SNRs, where the SNR is defined as \begin{align} \text{SNR}&=10\log_{10}\frac{\mathbb{E}(\mathbf{s}_i^T\mathbf{s}_i)}{N_0}\notag\\ &=10\log_{10}\frac{\mathbf{s}_J^T\mathbf{s}_J}{N_cN_0}, \end{align} for DCI-JCFM; the SNR for the decoupled scheme is defined similarly. Significant power gains of DCI-JCFM over the decoupled scheme are observed. Also, the more unbalanced the system is, the worse the performance becomes. If cross-talk exists and $\epsilon=0.1$, the corresponding subcarrier symbol waveforms are plotted in Fig. 6 - Fig. 8.
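For reference, the electrical SNR definition above is easy to evaluate numerically; the sketch below (with a hypothetical stacked constellation vector) also confirms that doubling $N_0$ costs $10\log_{10}2\approx 3$ dB.

```python
import math
import numpy as np

def snr_db(s_J, Nc, N0):
    # SNR = 10 log10( s_J^T s_J / (Nc * N0) ): average electrical symbol
    # energy of the stacked constellation vector over the noise power.
    s_J = np.asarray(s_J, dtype=float)
    return 10.0 * math.log10(s_J @ s_J / (Nc * N0))

s_J = np.ones(64 * 15)            # hypothetical stacked constellation vector
snr_a = snr_db(s_J, Nc=64, N0=1.0)
snr_b = snr_db(s_J, Nc=64, N0=2.0)
assert abs((snr_a - snr_b) - 10.0 * math.log10(2.0)) < 1e-9
```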
In each figure, the symbol waveforms are differentiated by color. In practice, the sampled versions of these waveforms can be pre-stored in the memory of a high speed waveform generator. \begin{figure}[H] \centerline{\includegraphics[width=1.1\columnwidth]{Fig_red.eps}} \vspace{-0.1in} \caption{64 red sub waveforms for DCI-JCFM with $K=2$, no cross-talks.} \end{figure} \vspace{-.25in} \begin{figure}[H] \centerline{\includegraphics[width=1.1\columnwidth]{Fig_green.eps}} \vspace{-0.1in} \caption{64 green sub waveforms for DCI-JCFM with $K=2$, no cross-talks.} \end{figure} \vspace{-0.05in} \begin{figure}[H] \centerline{\includegraphics[width=1.1\columnwidth]{Fig_blue.eps}} \vspace{-0.1in} \caption{64 blue sub waveforms for DCI-JCFM with $K=2$, no cross-talks.} \end{figure} \begin{figure}[H] \centerline{\includegraphics[width=1.1\columnwidth]{BER1.eps}} \vspace{-0.1in} \caption{BER performance of DCI-JCFM and the decoupled scheme.} \end{figure} \begin{figure}[H] \centerline{\includegraphics[width=1.1\columnwidth]{redepsilon01.eps}} \vspace{-0.1in} \caption{64 red sub waveforms for DCI-JCFM with $K=2$, $\epsilon=0.1$.} \end{figure} \begin{figure}[H] \centerline{\includegraphics[width=1.1\columnwidth]{greenepsilon01.eps}} \vspace{-0.1in} \caption{64 green sub waveforms for DCI-JCFM with $K=2$, $\epsilon=0.1$.} \end{figure} \begin{figure}[H] \centerline{\includegraphics[width=1.1\columnwidth]{blueepsilon01.eps}} \vspace{-0.1in} \caption{64 blue sub waveforms for DCI-JCFM with $K=2$, $\epsilon=0.1$.} \end{figure} \section{Conclusion} We have proposed a joint constellation design scheme, termed DCI-JCFM, that simultaneously takes advantage of wavelength, frequency, and adaptive bias diversities for indoor visible light communication systems. With DCI-JCFM, symbol waveforms with a much larger MED can be obtained than with a decoupled scheme, both with and without channel cross-talk.
Future work will include: a comprehensive comparison among three systems, one applying the dynamic range constraint, one the long-term PAPR constraint, and one the individual PAPR constraint; a comparison of the power efficiency of DCI-JCFM and the popular DCO/ACO-OFDM schemes for multi-carrier multi-color VLC systems; and advanced precoder designs to replace the SVD-based pre- and post-equalizers utilized in this paper.
\section*{Introduction} Let $E/F$ be a quadratic extension of number fields and $H={\mathrm{U}}(1,1)$ be the quasi-split unitary group of rank $2$ defined for $E/F$. In $\S$7 of \cite{GRS}, Gelbart, Rogawski and Soudry defined a global zeta integral for ${\mathrm{U}}(1,1)\times {\mathrm{Res}}_{E/F}(\GL_1)$, proved that it is Eulerian and carried out the unramified calculation. This global zeta integral is an analogue of the integral defined in the ${\mathrm{Sp}}_{2n}\times \GL_n$ case considered by Gelbart and Piatetski-Shapiro in \cite{GPS} (method C in \cite{GPS}). In this paper, we show the existence of the local $\gamma$-factors for ${\mathrm{U}}(1,1)\times {\mathrm{Res}}_{E/F}(\GL_1)$ and obtain a local converse theorem for ${\mathrm{U}}(1,1)$, except when $E/F$ is ramified and the residue characteristic of $F$ is $2$. In the following, we describe our results in more detail. For the local group $H(F_v)$, there are two cases to consider. At a non-split place $v$ of $F$, we have $H(F_v)={\mathrm{U}}(1,1)(F_v)$, where ${\mathrm{U}}(1,1)$ is defined for the local extension $E_w/F_v$, with $w$ the unique place of $E$ above $v$. At a split place $v$ of $F$, we have $H(F_v)=\GL_2(F_v)$. In this paper, we consider the two local cases separately. We first consider the local ${\mathrm{U}}(1,1)$ case. Assume that $E/F$ is a quadratic extension of $p$-adic fields and $H={\mathrm{U}}(1,1)(F)$. We fix an additive character $\psi$ of $F$, which is also viewed as a character of the upper triangular unipotent subgroup $N$ of $H$.
We also fix an element $\kappa\in F^\times -\Nm_{E/F}(E^\times)$ and consider the character $\psi_\kappa$ defined by $\psi_{\kappa}(x)=\psi(\kappa x)$. Let $\pi$ be an infinite dimensional irreducible smooth representation of $H$. Then $\pi$ is either $\psi$-generic or $\psi_\kappa$-generic; see $\S$1. Let $\chi$ be a character of $E^1$, where $E^1$ is the norm-one subgroup of $E^\times$, and let $\mu$ be a character of $E^\times$ such that $\mu|_{F^\times}$ is the local class field theory character associated with $E/F$; we then have an irreducible Weil representation $\omega_{\mu,\psi^{-1},\chi}$ of ${\mathrm{U}}(1,1)$. For a quasi-character $\eta$ of $E^\times$ and $s\in {\mathbb{C}}$, we consider the induced representation ${\mathrm{Ind}}_B^H(\eta|~|^{s-1/2})$, where $B$ is the upper triangular Borel subgroup. We assume that the product of the central characters of $\pi$, $\omega_{\mu,\psi^{-1},\chi}$ and ${\mathrm{Ind}}_B^H(\eta|~|^{s-1/2})$ is trivial, and we say the given data are compatible when this is the case. Suppose $\pi$ is $\psi$-generic. Given $W\in {\mathcal{W}}(\pi,\psi) , f_s\in {\mathrm{Ind}}_B^H(\eta_s), \phi \in {\mathcal{S}}(E,\chi)$, where ${\mathcal{S}}(E,\chi)$ is the space of the Weil representation $\omega_{\mu,\psi^{-1},\chi}$ (see $\S$1), the local zeta integral of \cite{GPS} is $$\Psi(W,\phi,f_s)=\int_{ZN\setminus H}W(g)(\omega_{\mu,\psi^{-1},\chi}(g)\phi)(1)f_s(g)dg,$$ where $Z$ is the center of $H$. The intertwining operator on the induced representations gives a $\gamma$-factor $\gamma(s,\pi, \omega_{\mu,\psi^{-1},\chi},\eta)$ by a standard uniqueness property; see $\S$2.
Note that if $\pi$ is both $\psi$- and $\psi_\kappa$-generic, there are two $\gamma$-factors $\gamma(s,\pi, \omega_{\mu,\psi^{-1},\chi},\eta)$ and $\gamma(s,\pi,\omega_{\mu, \psi_\kappa^{-1},\chi},\eta)$. Our first result shows that they are essentially the same: \begin{thm}[Proposition \ref{prop214}] Let $(\pi,V)$ be an irreducible smooth representation of $H$ which is both $\psi$- and $\psi_\kappa$-generic. Then $$\gamma(s,\pi,\omega_{\mu,\psi_\kappa^{-1},\chi},\eta)=\eta(\kappa)|\kappa|_F^{2s-1}\gamma(s,\pi,\omega_{\mu,\psi^{-1},\chi},\eta).$$ \end{thm} The main result of this paper is the following \begin{thm}[Theorem \ref{thm39}, Local converse theorem and the stability of $\gamma$-factors for ${\mathrm{U}}(1,1)$]\label{thm02} Suppose that $E/F$ is unramified, or $E/F$ is ramified but the residue characteristic is not $2$. Let $\pi,\pi'$ be two irreducible smooth $\psi$-generic representations of ${\mathrm{U}}(1,1)(F)$ with the same central character. \begin{enumerate} \item If $\gamma(s,\pi, \omega_{\mu,\psi^{-1},\chi},\eta)=\gamma(s,\pi', \omega_{\mu,\psi^{-1},\chi},\eta)$ for all compatible quasi-characters $\eta$ of $E^\times$ and characters $\chi$ of $E^1$, then $\pi\cong \pi'$. \item If $\eta$ is highly ramified, then $\gamma(s,\pi,\omega_{\mu,\psi^{-1},\chi},\eta)=\gamma(s,\pi', \omega_{\mu,\psi^{-1},\chi},\eta)$. \end{enumerate} \end{thm} In \cite{Ba1} and \cite{Ba2}, E.~M.~Baruch proved the local converse theorem for ${\mathrm{GSp}}_4$ and ${\mathrm{U}}(2,1)$ using Howe vectors. Our proof of the above theorem follows Baruch's method closely. In particular, our main tool is also Howe vectors. The main difference is that we need to deal with the Weil representation.
Next, we consider the $\GL_2$ case, which should be viewed as the local theory of the global ${\mathrm{U}}(1,1)$ integral at the split places, as mentioned before. In this case, the Weil representation $\omega_{\psi^{-1},\chi}$ ($\mu$ is trivial here) is the induced representation ${\mathrm{Ind}}_B^H(1\otimes \chi)$. It turns out that the local zeta integral of \cite{GRS} is in fact the local zeta integral for the representation $\pi\times {\mathrm{Ind}}(1\otimes \chi)\otimes \eta$, which was originally defined by Jacquet in \cite{J}. The analogue of Theorem \ref{thm02} is also true: \begin{thm}[Theorem \ref{thm511}, Local converse theorem and the stability of $\gamma$-factors for $\GL_2$]\label{thm03} Let $\pi,\pi'$ be two irreducible smooth $\psi$-generic representations of $\GL_2(F)$ with the same central character. \begin{enumerate} \item If $\gamma(s,\pi, \omega_{\psi^{-1},\chi},\eta)=\gamma(s,\pi', \omega_{\psi^{-1},\chi},\eta)$ for all compatible quasi-characters $\eta$ of $F^\times\times F^\times$, then $\pi\cong \pi'$. \item If $\eta$ is highly ramified, then $\gamma(s,\pi,\omega_{\psi^{-1},\chi},\eta)=\gamma(s,\pi', \omega_{\psi^{-1},\chi},\eta)$. \end{enumerate} \end{thm} In \cite{J}, Jacquet showed the multiplicativity of the $\gamma$-factors: $ \gamma(s,\pi, \omega_{\psi^{-1},\chi},\eta)=\gamma(s,\pi, \chi \eta)\gamma(s,\pi, \eta)$. From this and Part (1) of Theorem \ref{thm03}, we get the classical local converse theorem for $\GL_2$, which was originally proved by Jacquet--Langlands \cite{JL}. \section*{Acknowledgement} I would like to thank my advisor Professor James W. Cogdell for his encouragement, generous support and many invaluable suggestions. Without his suggestions and support, this paper would never exist. \section*{Notations} Let $F$ be a local field, and $E/F$ be a quadratic extension.
For $x\in E,$ let $x\mapsto \bar x$ be the unique nontrivial element of ${\mathrm{Gal}}(E/F)$. Let $\epsilon_{E/F}:F^\times \rightarrow \wpair{\pm1}$ be the class field theory character of $E/F$. Let $E^1=\wpair{x\in E: x\bar x=1}$. Let $(W,\pair{~,~})$ be the skew Hermitian vector space of dimension 2 with $$\pair{w_1,w_2}=w_1J_1{}^t\!\bar w_2,$$ where $w_i\in W$ is viewed as a row vector and $J_1=\begin{pmatrix} &1\\-1&\end{pmatrix}$. Let $H={\mathrm{U}}(1,1)$ be the isometry group of $W$, i.e., $$H=\wpair{g\in \GL_E(W)|\pair{w_1g,w_2g}=\pair{w_1,w_2},\forall w_1,w_2\in W}=\wpair{g\in \GL_2(E)|~gJ_1{}^t\!\bar g=J_1}.$$ Let $B$ be the upper triangular subgroup of $H$; then $B=TN$ with $$T=\wpair{t(a):=\begin{pmatrix} a&\\ &\bar a^{-1}\end{pmatrix}:a\in E^\times},~ N=\wpair{n(b):=\begin{pmatrix} 1& b\\&1\end{pmatrix},b\in F}.$$ Let $Z=\wpair{t(a):a\in E^1}$ be the center of $H$ and $R=ZN$. We denote by $\bar N$ (resp. $\bar B$) the lower triangular unipotent subgroup (resp. lower triangular subgroup) of ${\mathrm{U}}(1,1)$. For $x\in F$, we denote $$\bar n(x)=\begin{pmatrix} 1& \\ x&1\end{pmatrix}\in \bar N.$$ \section{Weil representations of ${\mathrm{U}}(1,1)$} \subsection{An exact sequence} Let $\psi$ be a nontrivial additive character of $F$, viewed as a character of $N$ via the natural isomorphism $N\cong F$. Fix an element $\kappa\in F^\times-\Nm(E^\times)$. Let $(\pi,V)$ be an infinite dimensional representation of $B$. As a representation of $N$, $(\pi,V)$ is smooth. As a smooth module over $N$, we have ${\mathcal{S}}(N).V=V$, see 2.5 of \cite{BZ1}, where ${\mathcal{S}}(N)$ is the space of Bruhat--Schwartz functions on $N$. Here we also view ${\mathcal{S}}(N)$ as the Hecke algebra of $N$.
By the Fourier inversion formula, we have an isomorphism ${\mathcal{S}}(N)\cong {\mathcal{S}}(\hat N)$, where $\hat N$ is the dual group of $N$. Under this isomorphism, we have ${\mathcal{S}}(\hat N).V=V$. Thus there is a sheaf $\CV$ on $\hat N$ such that $\CV_c=V$, see 1.14 of \cite{BZ1}. Consider the action of $B$ on $\hat N$ defined as follows: for $\psi\in \hat N$ and $b\in B$, set $$b.\psi(n)=\psi(b^{-1}nb).$$ This gives an action of $B$ on $\CV$. The action of $B$ on $\hat N$ has three orbits: $\wpair{0}$, $\hat N_1=\wpair{b.\psi: b\in B}$ and $\hat N_2=\wpair{b.\psi_\kappa: b\in B}$. Note that $\hat N_1\cup \hat N_2$ is open in $\hat N$. The stabilizer of $\psi$ is $R$. By 2.23 and 5.10 of \cite{BZ1}, we have $$\CV(\hat N_1)=\ind_R^B(V_{N,\psi}), \textrm{ and } \CV(\hat N_2)=\ind_R^B(V_{N,\psi_{\kappa}}),$$ where $\ind_R^B$ denotes the non-normalized compact induction. Now the exact sequence $$0\rightarrow \CV(\hat N_1)\oplus \CV(\hat N_2)\rightarrow \CV(\hat N)\rightarrow \CV(\wpair{0})\rightarrow 0,$$ see 1.16 of \cite{BZ1}, can be identified with \begin{equation}{\label{eq11}} 0\rightarrow \ind_R^B(V_{N,\psi})\oplus \ind_R^B(V_{N,\psi_\kappa})\rightarrow V\rightarrow V_N\rightarrow 0. \end{equation} We say $(\pi,V)$ is $\psi$-generic (resp. $\psi_\kappa$-generic) if $V_{N,\psi}$ (resp. $V_{N,\psi_\kappa}$) is nonzero. If $V_{N,\psi}\ne 0$, then a nonzero element $$\lambda\in {\mathrm{Hom}}_N(V,\psi)={\mathrm{Hom}}(V_{N,\psi},{\mathbb{C}})$$ is called a $\psi$-Whittaker functional of $(\pi,V)$. \begin{prop}{\label {prop11}} Let $(\pi,V)$ be an irreducible smooth admissible representation of $H={\mathrm{U}}(1,1)(F)$.
Then we have \begin{enumerate} \item if $V_{N,\psi}\ne 0$, then $V_{N,\psi_{a\bar a}}\ne 0$ for any $a\in E^\times ;$ \item if $V_{N,\psi}=V_{N,\psi_\kappa}=0$, then $V$ has finite dimension$;$ \item we have $\dim V_{N,\psi}\le 1$ and $\dim V_{N,\psi_\kappa}\le 1;$ if $\dim V_{N,\psi}=1$, then $\ind_R^B(V_{N,\psi})$ is an irreducible representation of $B$. Moreover, $\ind_R^B(V_{N,\psi})$ is not equivalent to $\ind_R^B(V_{N,\psi_\kappa})$ if at least one of them is nonzero. \end{enumerate} \end{prop} \begin{proof} (1) If $\lambda\in {\mathrm{Hom}}_N(V,\psi)$, it is easy to check that $\lambda\circ \pi(t(a))\in {\mathrm{Hom}}_N(V,\psi_{a\bar a})$. (2) The assertion follows from the exact sequence (\ref{eq11}). (3) The first part is the uniqueness of the Whittaker model and is well-known. Suppose that $\dim{V_{N,\psi}}=1$. The proof of the fact that $\ind_R^B(V_{N,\psi})$ is irreducible is similar to the proof of the corresponding statement in the $\GL_n$ case, see 5.13 of \cite{BZ1}. We give a sketch here. Let $V_1'$ be a nonzero $B$-submodule of $V':=\ind_R^B(V_{N,\psi})$. It is not hard to check that $(V'_1)_N=0$ and $(V'_1)_{N,\psi_\kappa}=0$ by the Jacquet--Langlands lemma, 2.33 of \cite{BZ1}. Thus by the exact sequence (\ref{eq11}), we have $V_1'=\ind_R^B((V_1')_{N,\psi})$. In particular, we have $(V_1')_{N,\psi}\ne 0$. On the other hand, if we take $V_1'=V'$ in the above argument, we obtain $V'=\ind_R^B(V'_{N,\psi})$. Since $\ind_R^B(V_{N,\psi})=V'=\ind_R^B(V'_{N,\psi})$ and $V_{N,\psi}$ has dimension one, we conclude $\dim V'_{N,\psi}=1$. By the exactness of Jacquet functors we have $0\ne (V_1')_{N,\psi}\hookrightarrow (V')_{N,\psi}$. Thus $(V_1')_{N,\psi}= (V')_{N,\psi}$, and $V_1'=\ind_R^B((V_1')_{N,\psi} )=\ind_R^B((V')_{N,\psi})=V'$. This shows that the only nonzero $B$-submodule of $V'$ is $V'$ itself, and thus $V'$ is an irreducible $B$-module.
Suppose that both $V_{N,\psi}$ and $V_{N,\psi_\kappa}$ are nonzero. As mentioned before, one can check that $(\ind_R^B(V_{N,\psi}))_{N,\psi_\kappa}=0$ but $(\ind_R^B(V_{N,\psi_\kappa}))_{N,\psi_\kappa}\ne 0$, thus $ \ind_R^B(V_{N,\psi})$ is not equivalent to $\ind_R^B(V_{N,\psi_\kappa})$. This completes the proof. \end{proof} If $(\pi,V)$ is a representation of ${\mathrm{U}}(1,1)$ such that exactly one of $V_{N,\psi}$ and $V_{N,\psi_\kappa}$ is nonzero, we call $\pi$ \textbf{exceptional}. We will show that the Weil representations provide examples of exceptional representations. \subsection{Weil representations} Let $V$ be a Hermitian space of dimension 1. We have the dual pair ${\mathrm{U}}(V)\times {\mathrm{U}}(W)$; recall that ${\mathrm{U}}(W)={\mathrm{U}}(1,1)$. For a character $\mu$ of $E^\times$ with $\mu|_{F^\times}=\epsilon_{E/F}$, we have a splitting $s_\mu: {\mathrm{U}}(V)\times {\mathrm{U}}(W)\rightarrow Mp(V\otimes W)$, see \cite{HKS} for example. For a nontrivial additive character $\psi$ of $F$, we then have a Weil representation $\omega_{V,\psi,\mu}$ of ${\mathrm{U}}(V)\times {\mathrm{U}}(W)$ on ${\mathcal{S}}(V)={\mathcal{S}}(E)$.
We have the following formulas \begin{align} \omega(g,1)\phi(x)&=\phi(g^{-1}x), g\in {\mathrm{U}}(V),\\ \omega(1,t(a))\phi(x)&=\mu(a)|a|^{1/2}\phi(xa),\\ \omega(1,n(b))\phi(x)&=\psi((bx,x)_V)\phi(x),\\ \omega(1,w)\phi(x)&=\gamma_\psi\int_{V}\psi(-\tr_{E/F}(x,y)_V)\phi(y)dy, \label{eq15} \end{align} where in the last formula, $$w=\begin{pmatrix} &1\\ -1&\end{pmatrix}$$ is the unique nontrivial Weyl group element of ${\mathrm{U}}(1,1)$, $\gamma_\psi$ is the Weil index, and $dy$ is the measure on $V$ which is self-dual for the Fourier transform defined by Eq.(\ref{eq15}). \begin{prop}\begin{enumerate} \item For $a\in \Nm(E^\times)$, we have $\omega_{V,\psi_a, \mu}\cong \omega_{V,\psi,\mu}$. \item As a representation of ${\mathrm{U}}(W)={\mathrm{U}}(1,1)$, the restriction of the representation $\omega_{V,\mu,\psi}$ to $Z\cong E^1$ is fully reducible. For a character $\chi$ of $Z\cong E^1$, let $ \omega_{V,\mu,\psi,\chi}$ be the representation of ${\mathrm{U}}(1,1)$ on the $\chi$-eigenspace ${\mathcal{S}}(E,\chi)$ in $(\omega_{V,\mu,\psi}, {\mathcal{S}}(E))$; then $\omega_{V,\mu,\psi,\chi}$ is irreducible. Moreover, $\omega_{V,\mu,\psi,\chi}$ is supercuspidal if $\chi$ is nontrivial, and $\omega_{V,\mu,\psi,1}$ is a direct summand of ${\mathrm{Ind}}_B^H(\mu)$, where $1$ means the trivial character of $E^1$, and ${\mathrm{Ind}}_B^H(\mu)$ is the normalized induction. \end{enumerate} \end{prop} This is a special case of a general theorem, due to Kudla in the symplectic case and to Moeglin--Vigneras--Waldspurger in general. See p.~69 of \cite{MVW}, or Proposition 2.5.1 of \cite{GR}. For $a\in F^\times$, let $V_a$ be the 1-dimensional Hermitian space with underlying space $E$ and Hermitian structure $Q_a(x,y)=ax\bar y$.
We will write $\omega_{a,\mu,\psi}$ (resp. $\omega_{a,\mu,\psi,\chi}$) for $\omega_{V_a,\mu,\psi}$ (resp. $\omega_{V_a, \mu,\psi,\chi}$) and $\omega_{\mu,\psi}$ (resp. $\omega_{\mu,\psi,\chi}$) for $\omega_{1,\mu,\psi}$ (resp. $\omega_{1,\mu,\psi,\chi}$). \begin{prop} We fix a $\kappa\in F^\times-\Nm(E^\times)$. \begin{enumerate} \item For $a\in F^\times$, we have $\omega_{a,\mu,\psi,\chi}\cong \omega_{\mu,\psi_a,\chi}$. \item The representation $\omega_{\mu,\psi,\chi}$ is $\psi$-generic but not $\psi_\kappa$-generic. Similarly, $\omega_{\mu,\psi_\kappa,\chi}$ is $\psi_\kappa$-generic but not $\psi$-generic. Thus $\omega_{\mu,\psi_a,\chi}$ is an exceptional representation of ${\mathrm{U}}(1,1)$ for all $a\in F^\times$. \end{enumerate} \end{prop} \begin{proof} (1) In fact, the identity map ${\mathcal{S}}(E,\chi)\rightarrow {\mathcal{S}}(E,\chi)$ defines an isomorphism $\omega_{a,\mu,\psi,\chi}\cong \omega_{\mu,\psi_a,\chi}$, see Corollary 6.1 of \cite{K}, page 40. (2) This is in fact proved in \cite{KS}. We also include a proof here. Denote by $(\pi,V)$ the representation $(\omega_{\mu,\psi,\chi},{\mathcal{S}}(E,\chi))$ temporarily. We claim that $V=V(N,\psi_\kappa)$, i.e., $V_{N,\psi_\kappa}=0$.
Given $\phi\in V={\mathcal{S}}(E,\chi)$, by Jacquet--Langlands' lemma, 2.33 of \cite{BZ1}, we have $\phi\in V(N,\psi_\kappa)$ if and only if there is an open compact subgroup $N'\subset N$ such that $$\int_{N'}\psi_\kappa^{-1}(n)\pi(n)\phi dn=0,$$ or equivalently, for all $x\in \mathrm{Supp}(\phi)$, \begin{equation}{\label{eq16}}\int_{N'}\psi(n(x\bar x-\kappa))dn\equiv 0.\end{equation} Since $\mathrm{Supp}(\phi)$ is compact and $\kappa \notin \Nm(E^\times)$, we can find an open compact subgroup $N'$ such that $\psi(n(x\bar x-\kappa))$ is a nontrivial character on $N'$ for all $x\in \mathrm{Supp}(\phi)$. Then for this $N'$, (\ref{eq16}) holds, i.e., $\phi\in V(N,\psi_\kappa)$. Thus $V_{N,\psi_\kappa}=0$ and $\omega_{\mu,\psi,\chi}$ is not $\psi_\kappa$-generic. It is clear that $\omega_{\mu,\psi,\chi}$ is $\psi$-generic. \end{proof} \begin{lem}{\label {lem14}} Define a linear functional $\lambda_{\mu,\psi,\chi}:{\mathcal{S}}(E,\chi)\rightarrow {\mathbb{C}}$ by $$\lambda_{\mu,\psi,\chi}(\phi)=\phi(1).$$ Then $\lambda_{\mu,\psi,\chi}$ is a nonzero Whittaker functional of $\omega_{\mu,\psi,\chi}$. \end{lem} \begin{proof} For $n(x)\in N$, we have $$\lambda_{\mu,\psi,\chi}(\omega(n(x))\phi)=\omega(n(x))\phi(1)=\psi(x)\phi(1).$$ Thus $\lambda_{\mu,\psi,\chi}$ is a Whittaker functional. It is clear that $\lambda_{\mu,\psi,\chi}$ is nonzero. \end{proof} If $\chi=1$ is the trivial character of $E^1$, we have $\omega_{\mu,\psi,1}\subset {\mathrm{Ind}}_B^H(\mu)$. Kudla and Sweet analyzed this embedding in \cite{KS}. In the ${\mathrm{U}}(1,1)$ case, the main result of \cite{KS} is the following \begin{thm}[Kudla--Sweet]{\label{thm15}} Let $\mu$ be a character of $E^\times$, and $s\in {\mathbb{C}}$.
We normalize $(s,\mu)$ as in page $255$ of \cite{KS}. Let $I(s,\mu)={\mathrm{Ind}}_B^H(\mu|~|^s)$. Then \begin{enumerate} \item If $\mu|_{F^\times}\ne 1$ and $\mu|_{F^\times}\ne \epsilon_{E/F}$, then $I(s,\mu)$ is irreducible. \item If $\mu|_{F^\times}=1$, then $I(s,\mu)$ is irreducible except when $s=\pm \frac{1}{2}$. If $s=\pm \frac{1}{2}$, $I(s,\mu)$ has length $2$, and one irreducible component has dimension $1$. \item If $\mu|_{F^\times}=\epsilon_{E/F}$, then $I(s,\mu)$ is irreducible except when $s=0$. If $s=0$, we have $$I(0,\mu)=\omega_{\mu,\psi,1}\oplus \omega_{\mu,\psi_\kappa,1}.$$ \end{enumerate} \end{thm} \section{Local zeta integrals for ${\mathrm{U}}(1,1)$} \subsection{The Local Zeta Integral} Let $\psi$ be a fixed nontrivial additive character on $F$. Let $(\pi,V)$ be a $\psi$-generic irreducible smooth representation of $H={\mathrm{U}}(1,1)(F)$ and let $\omega_\pi$ be the central character of $\pi$. Let $\mu$ be a character of $E^\times$ such that $\mu|_{F^\times}=\epsilon_{E/F}$. For a character $\chi$ of $E^1$, we can consider the Weil representation $\omega_{ \mu,\psi^{-1},\chi}$ of $H={\mathrm{U}}(1,1)$. For a character $\eta$ of $E^\times$ and $s\in {\mathbb{C}}$, let $\eta|~|^{s-1/2}$ be the character of $B$ defined by $$\eta|~|^{s-1/2}(nt(a))=\eta(a)|a|_E^{s-1/2}, a\in E^\times, n\in N.$$ Let ${\mathrm{Ind}}_B^H(\eta|~|^{s-1/2})$ be the normalized induced representation of $H$, which consists of smooth functions on $H$ such that $$f(bh)=\eta|~|^s(b)f(h),b\in B, h\in H.$$ We assume that the data $\pi, \mu, \chi,$ and $ \eta$ satisfy the condition \begin{equation}{\label{eq21}} \omega_\pi\cdot \mu\chi\cdot \eta|_{E^1}=1.
\end{equation} For $W\in {\mathcal {W}}(\pi,\psi)$, $\phi\in {\mathcal {S}}(E,\chi)$, and $f_s\in {\mathrm{Ind}}_B^H(\eta|~|^{s-1/2})$, we consider the local zeta integral $$\Psi(W,\phi,f_s, \omega_{\mu,\psi^{-1},\chi})=\int_{R\setminus H}W(h)(\omega_{\mu,\psi^{-1},\chi}(h)\phi)(1)f_s(h)dh.$$ There is a projection ${\mathcal {S}}(E)\rightarrow {\mathcal {S}}(E,\chi)$ defined by $\phi\mapsto \phi_\chi$, where $$\phi_\chi(a)=\int_{E^1}\chi^{-1}(u)\phi(ua)du.$$ If we start from an element $\phi\in {\mathcal {S}}(E)$ and consider $\Psi'(W,\phi,f_s):=\Psi(W,\phi_\chi, f_s)$, then $$\Psi'(W,\phi,f_s, \omega_{\mu,\psi^{-1},\chi})=\int_{N\setminus H}W(h)(\omega_{\mu,\psi^{-1}}(h)\phi)(1)f_s(h)dh.$$ \textbf{Remark:} (1) The function $h\mapsto(\omega_{\mu,\psi^{-1},\chi}(h)\phi)(1)$ is the Whittaker function of the Weil representation associated with the vector $\phi$. \\ (2) The integral $\Psi'(W,\phi, f_s, \omega_{\mu,\psi^{-1},\chi})$ is the local zeta integral in $\S$7 of \cite{GRS}.\\ (3) In the notation $\Psi(W,\phi,f_s, \omega_{\mu,\psi^{-1},\chi})$, we include $\omega_{\mu,\psi^{-1},\chi}$ to emphasize that the integral depends on the Weil representation $\omega_{\mu,\psi^{-1},\chi}$. When the Weil representation $\omega_{\mu,\psi^{-1},\chi}$ is clear from the context, we will write $\Psi(W,\phi,f_s, \omega_{\mu,\psi^{-1},\chi})$ simply as $\Psi(W,\phi, f_s)$. \begin{lem}{\label{lem21}} The integral $\Psi(W,\phi,f_s)$ is absolutely convergent for $\Re(s)\gg 0$ and defines a rational function of $q_E^{-s}$. Moreover, the functions $\Psi(W,\phi, f_s)$ can be written with a common denominator determined by $\pi$, $\omega_{\mu,\psi^{-1},\chi}$ and $\eta$. \end{lem} \begin{proof} This follows from a gauge estimate of the Whittaker function $W$.
The proof is similar to the proof of the well-known $\GL_n$ case, which can be found in \cite{JPSS} or \cite{C}, for example. We omit the details. \end{proof} \subsection{The normalized local zeta integral and the local $L$-factor} We follow \cite{Ba1} to give a parametrization of the induced representation ${\mathrm{Ind}}_B^H(\eta|~|^{s-1/2})$ using the Bruhat-Schwartz function space ${\mathcal {S}}(F^2)$. Let $\psi'$ be another fixed nontrivial additive character of $F$, which may or may not coincide with $\psi$. For $\Phi\in {\mathcal {S}}(F^2)$, we define the Fourier transform with respect to $\psi'$ by $$\hat \Phi(x,y)=\int \Phi(u,v)\psi'(yu-xv)dudv.$$ Let $g\in \GL(2,F)$ and set $(g\Phi)(x,y)=\Phi((x,y)g)$. Then $$(g\Phi)^{\widehat~}=|\det g|_F^{-1}g'\hat\Phi,$$ where $g'={\mathrm {diag}}(\det(g)^{-1},\det(g)^{-1})g$. For $s\in {\mathbb {C}}$, $g\in \GL(2,F)$, $\Phi\in {\mathcal {S}}(F^2)$ and a character $\eta$ of $E^\times$, we consider $$z(s,g,\Phi,\eta)=\int_{F^\times}(g\Phi)(0,r)\eta(r)|r|_E^s dr.$$ The above integral is absolutely convergent for $\Re(s)\gg 0$ and defines a meromorphic function on ${\mathbb {C}}$. Note that $\SL_2(F)$ is a subgroup of $H$. The determinant map $\det: H\rightarrow E^1$ induces an exact sequence $$1\rightarrow \SL_2(F)\rightarrow H\rightarrow E^1\rightarrow 1,$$ i.e., $\SL_2=SU(1,1)$. By Hilbert's Theorem 90, for $h\in H$ we can find $a\in E^\times$ such that $h=t(a)g$ for some $g\in \SL_2(F)$. The decomposition $h=t(a)g$ is not unique. We define \begin{equation} f(s,h,\Phi,\eta)=\eta(a)|a|^sz(s,g,\Phi, \eta). \end{equation} By Lemma 2.5 of \cite{Ba1}, the definition of $f$ is independent of the choice of the decomposition of $h$.
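\noindent\textbf{Example:} To illustrate the definition of $z(s,g,\Phi,\eta)$ (this example is not used in the sequel), suppose that $E/F$ and $\eta$ are unramified, take $g=1$, and let $\Phi_0$ be the characteristic function of $\mathcal{O}_F^2$. With the multiplicative Haar measure on $F^\times$ normalized so that $\Vol(\mathcal{O}_F^\times)=1$, for $\Re(s)\gg 0$ we obtain a Tate-type local factor: $$z(s,1,\Phi_0,\eta)=\int_{F^\times\cap\, \mathcal{O}_F}\eta(r)|r|_E^s dr=\sum_{k\ge 0}\eta(\varpi)^kq_E^{-ks}=\frac{1}{1-\eta(\varpi)q_E^{-s}},$$ where $\varpi$ is a prime element of $F$ (hence of $E$, since $E/F$ is unramified), so that $|\varpi|_E=q_E^{-1}$.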
It is clear that $f\in {\mathrm{Ind}}_B^H(\eta|~|^{s-1/2})$. By Lemma 4.2 of \cite{Ba1}, there exists $s_0\in \BR$ such that for every $s$ with $\Re(s)>s_0$ and every $f\in {\mathrm{Ind}}_B^H(\eta|~|^{s-1/2})$, there exists $\Phi\in {\mathcal {S}}(F^2)$ such that $$f(s,h,\Phi,\eta)=f(h).$$ We assume $\pi$, $\mu$, $\chi$ and $\eta$ satisfy the condition (\ref{eq21}). We define \begin{equation} \Psi(s, W,\phi,\Phi,\eta, \omega_{\mu,\psi^{-1}, \chi})=\int_{R\setminus H}W(h)(\omega_{\mu,\psi^{-1},\chi}(h)\phi)(1)f(s,h,\Phi,\eta)dh. \end{equation} \textbf{Remark:} Again, if the Weil representation $\omega_{\mu,\psi^{-1},\chi}$ is clear from the context, we will omit it from the notation and simply write $\Psi(s,W,\phi, \Phi, \eta)$. By Lemma \ref{lem21}, for $\Re(s)\gg 0$, the local zeta integral $\Psi(s,W,\phi,\Phi,\eta)$ is absolutely convergent and defines a rational function of $q_E^{-s}$. Let $I(s,\pi,\omega_{\mu,\psi^{-1},\chi},\eta)$ be the subspace of ${\mathbb {C}}(q_E^{-s})$ spanned by the integrals $\Psi(s,W,\phi,\Phi,\eta, \omega_{\mu,\psi^{-1}_{a\bar a},\chi})$ for $W\in {\mathcal {W}}(\pi,\psi_{a\bar a})$, $\phi\in {\mathcal {S}}(E,\chi)$, $a\in E^\times$, and $\Phi\in {\mathcal {S}}(F^2)$. Note that if $\pi$ is $\psi$-generic, then it is also $\psi_{a\bar a}$-generic for all $a\in E^\times$. \begin{prop}{\label{prop22}} The space $I(s,\pi,\omega_{\mu,\psi^{-1},\chi},\eta)$ is a ${\mathbb {C}}[q_E^s,q_E^{-s}]$-fractional ideal. There is a unique generator $L(s,\pi,\omega_{\mu,\psi^{-1},\chi},\eta)$ of $I(s,\pi,\omega_{\mu,\psi^{-1},\chi},\eta)$ of the form $P(q_E^{-s})^{-1}$ with $P(X)\in {\mathbb {C}}[X]$ and $P(1)=1$.
\end{prop} \begin{proof} For $\phi\in {\mathcal {S}}(E,\chi)$ and $a\in E^\times$, define $\phi_a(x)=\phi(ax)$. One can check that $\phi\mapsto \phi_a$ defines an isomorphism $\omega_{\mu,\psi^{-1},\chi}\cong \omega_{\mu,\psi^{-1}_{a\bar a},\chi}$. In particular, we have $(\omega_{\mu,\psi^{-1},\chi}(h)\phi)_a(x)=(\omega_{\mu,\psi^{-1}_{a\bar a},\chi}(h)\phi_a)(x)$. For $W\in {\mathcal {W}}(\pi,\psi)$ and $a\in E^\times$, we define $W_a(h)=W(t(a)h)$. Then $W_a\in {\mathcal {W}}(\pi,\psi_{a\bar a})$. For $\phi\in {\mathcal {S}}(E,\chi)$, we have \begin{align*}&\quad \Psi(s,W_a,\phi_a,\Phi,\eta, \omega_{\mu,\psi^{-1}_{a\bar a},\chi})\\ &=\int_{R\setminus H}W(t(a)h)(\omega_{\mu,\psi^{-1}_{a\bar a},\chi}(h)\phi_a)(1)f(s,h,\Phi,\eta)dh\\ &=\int_{R\setminus H}W(t(a)h)(\omega_{\mu,\psi^{-1},\chi}(h)\phi)(a)f(s,h,\Phi,\eta)dh\\ &=\mu(a)^{-1}|a|^{-1/2}\int_{R\setminus H}W(t(a)h)(\omega_{\mu,\psi^{-1},\chi}(t(a)h)\phi)(1)f(s,h,\Phi,\eta)dh\\ &=\mu(a)^{-1}|a|^{-1/2}\int_{R\setminus H}W(h)(\omega_{\mu,\psi^{-1},\chi}(h)\phi)(1)f(s,t(a)^{-1}h,\Phi,\eta)dh\\ &=\mu(a)^{-1}\eta(a)^{-1}|a|_E^{-s-1/2}\Psi(s,W,\phi,\Phi,\eta, \omega_{\mu,\psi^{-1}, \chi}). \end{align*} If we take $a$ to be a prime element of $E$, then $|a|_E=q_E^{-1}$. Thus $I(s,\pi, \omega_{\mu,\psi^{-1},\chi},\eta)$ is closed under multiplication by $q_E^{s}$. Since the functions $\Psi(s,W,\phi, \Phi, \eta)$ have bounded denominators, $I(s,\pi, \omega_{\mu,\psi^{-1},\chi},\eta)$ is a ${\mathbb {C}}[q_E^{-s},q_E^s]$-fractional ideal. Since ${\mathbb {C}}[q_E^s,q_E^{-s}]$ is a principal ideal domain, the ideal has a generator. To show that the generator has the stated form, it suffices to show that we can find $W,\phi,\Phi$ such that $$\Psi(s,W,\phi,\Phi, \eta)=1.$$ This will be proved in the next section using Howe vectors; see the proof of Theorem \ref{thm39} and Remark \ref{rem310}.
\end{proof} The next lemma says that, when $E/F$ is unramified, to obtain the whole ideal $I(s,\pi, \omega_{\mu,\psi^{-1},\chi},\eta)$ we do not have to vary $a\in E^\times$. \begin{lem} Suppose $E/F$ is unramified. Let $I'(s,\pi,\omega_{\mu,\psi^{-1},\chi},\eta)$ be the space spanned by $\Psi(s,W,\phi,\Phi,\eta)$ for $W\in {\mathcal {W}}(\pi,\psi)$, $\phi\in {\mathcal {S}}(E,\chi)$, and $\Phi\in {\mathcal {S}}(F^2)$. Then $I'(s,\pi,\omega_{\mu,\psi^{-1},\chi},\eta)$ is a fractional ideal and $I'(s,\pi,\omega_{\mu,\psi^{-1},\chi},\eta)=I(s,\pi,\omega_{\mu,\psi^{-1},\chi},\eta)$. \end{lem} \begin{proof} For $a\in F^\times$, consider $$\Phi_a(x)=\Phi(ax), \quad x\in F^2.$$ Notice that $f(s,h,\Phi_a,\eta)=\eta^{-1}(a)|a|_E^{-s}f(s,h,\Phi,\eta)$. Thus $$\Psi(s,W,\phi,\Phi_a,\eta)=\eta^{-1}(a)|a|_E^{-s}\Psi(s,W,\phi,\Phi,\eta).$$ Since $E/F$ is unramified, we can take $a\in F^\times$ to be a prime element of $E$, so that $|a|_E=q_E^{-1}$. Thus $I'$ is a fractional ideal. From the calculation in the proof of Proposition \ref{prop22}, we have $$\Psi(s,W_a,\phi_a,\Phi,\eta, \omega_{\mu,\psi^{-1}_{a\bar a},\chi})=\mu^{-1}(a)|a|^{-1/2}\Psi(s,W,\phi,\Phi_a,\eta, \omega_{\mu,\psi^{-1},\chi}).$$ Thus $I=I'$. \end{proof} \subsection{The Local Functional Equation} Consider the intertwining operator $$M(s): {\mathrm{Ind}}_B^H(\eta|~|^{s-1/2})\rightarrow {\mathrm{Ind}}_B^H(\eta^*|~|^{1/2-s}),$$ $$(M(s)f_s)(h)=\int_N f_s(wnh)dn,$$ with $w=\begin{pmatrix}&1\\ -1& \end{pmatrix}$. It is well known that this operator is well-defined for $\Re(s)\gg 0$ and can be meromorphically continued to all $s\in {\mathbb {C}}$.
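\noindent\textbf{Example:} As a sanity check on the region of convergence, consider the case $h=1$ (the general case is the same). Writing $t(a)={\mathrm {diag}}(a,\bar a^{-1})$, for $x\in F^\times$ a direct matrix computation gives the Bruhat decomposition $$wn(x)=\begin{pmatrix} 0&1\\ -1&-x \end{pmatrix}=n(-x^{-1})\,t(-x^{-1})\,\bar n(x^{-1}),$$ so that $f_s(wn(x))=\eta(-x^{-1})|x|_E^{-s}f_s(\bar n(x^{-1}))$. Since $f_s$ is smooth and $\bar n(x^{-1})\rightarrow 1$ as $|x|\rightarrow\infty$, the integrand of $M(s)f_s$ is $O(|x|_E^{-\Re(s)})$ for $|x|$ large (when $\eta$ is unitary), and the integral over $N\cong F$ converges for $\Re(s)$ large.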
By Lemma 14.7.1 of \cite{J}, there is a meromorphic function $c_0(s)$ such that $$(M(s)f(s,\cdot,\Phi,\eta))(h)=c_0(s)f(1-s,h,\hat \Phi, \eta^*),$$ where $\hat \Phi$ is the Fourier transform defined by $\psi'$ (we omit $\psi'$ from the notation). \begin{cor}{\label{cor24}} For $\Re(s)>0$, there are unique $H$-invariant trilinear forms $\beta_s$ and $\beta_s'$ on ${\mathcal {W}}(\pi,\psi)\times {\mathcal {S}}(E,\chi)\times {\mathrm{Ind}}_B^H(\eta|~|^{s-1/2})$ such that if $f$ is the function defined by $f(h)=f(s,h,\Phi,\eta)$, then $$\beta_s(W,\phi,f)=\Psi(s,W,\phi,\Phi,\eta)$$ and $$\beta_s'(W,\phi,f)=\Psi(1-s,W,\phi,\hat \Phi,\eta^*).$$ \end{cor} Let $\CT_s$ be the space of $H$-invariant trilinear forms on ${\mathcal {W}}(\pi,\psi)\times {\mathcal {S}}(E,\chi)\times {\mathrm{Ind}}_B^H(\eta|~|^{s-1/2})$. \begin{prop}{\label{prop25}} Except for a finite number of values of $q_E^{-s}$, we have $$\dim \CT_s= 1.$$ \end{prop} This proposition can be viewed as a special case of the uniqueness of the Fourier-Jacobi model, which was proved by Binyong Sun in \cite{Su}. We also include a proof here based on the method Jacquet used in \cite{J}. To prove Proposition \ref{prop25}, we need the following \begin{lem}{\label{lem26}} Let $(\pi,V_\pi)$ be an irreducible representation of $H$, let $(\sigma,V_\sigma)$ be an irreducible exceptional representation of $H$, and let $\nu$ be a quasi-character of $E^\times$. Then, except for a finite number of values of $q_E^s$, there is at most one bilinear form $\CB_s:V_\pi\times V_\sigma\rightarrow {\mathbb {C}}$ such that \begin{equation}{\label{eq22}} \CB_s(\pi(b)v,\sigma(b)v')=\nu_s(b)\CB_s(v,v'), \quad \forall b\in B, \end{equation} where $\nu_s$ is defined by $\nu_s(b)=\nu(a)|a|_E^s$ for $b=t(a)n(x)\in B$.
\end{lem} \begin{proof} We suppose that $\sigma$ is $\psi$-generic. Let $\CB_s $ and $\CB_s'$ be two nonzero bilinear forms $V_\pi\times V_\sigma\rightarrow {\mathbb {C}}$ satisfying (\ref{eq22}). By Proposition \ref{prop11}, $V_\pi(N)=\ind_R^B((V_\pi)_{N,\psi})\oplus \ind_R^B((V_\pi)_{N,\psi_\kappa})$ is a sum of two non-equivalent irreducible $B$-modules (one of them might be zero), and $V_\sigma(N)=\ind_R^B((V_\sigma)_{N,\psi})$ is an irreducible $B$-module. Thus there is a constant $c(s)$ such that $\CB_s'|_{V_\pi(N)\times V_\sigma(N)}=c(s)\CB_s|_{V_\pi(N)\times V_\sigma(N)}$. For $v\in V_\pi$, $n\in N$, $v'\in V_\sigma(N)$, since $\nu_s(n)=1$, we have $$\CB_s(v,v'-\sigma(n^{-1})v')=\CB_s(v,v')-\CB_s(\pi(n)v,v')=\CB_s(v-\pi(n)v,v').$$ Similarly $$\CB_s'(v,v'-\sigma(n^{-1})v')=\CB_s'(v-\pi(n)v,v').$$ Since $v-\pi(n)v\in V_\pi(N)$ and $v'\in V_\sigma(N)$, we have \begin{equation}{\label{eq23}}\CB_s'(v,v'-\sigma(n^{-1})v')=\CB_s'(v-\pi(n)v,v')=c(s)\CB_s(v-\pi(n)v,v')=c(s)\CB_s(v,v'-\sigma(n^{-1})v').\end{equation} Since $V_\sigma^N=\wpair{0}$, we can find $v'\in V_\sigma(N)$ and $n\in N$ such that $v'-\sigma(n^{-1})v'\ne 0$. Since $V_\sigma(N)$ is irreducible, we get \begin{equation}{\label{eq24}}\CB_s'|_{V_\pi\times V_\sigma(N)}=c(s)\CB_s|_{V_\pi\times V_\sigma(N)}\end{equation} by (\ref{eq23}). For $v\in V_\pi$, $n\in N$, $v'\in V_\sigma$, similarly to (\ref{eq23}), we have $$\CB_s'(v-\pi(n)v,v')=\CB'_s(v,v'-\sigma(n^{-1})v')=c(s)\CB_s(v,v'-\sigma(n^{-1})v')=c(s)\CB_s(v-\pi(n)v,v'),$$ since $v'-\sigma(n^{-1})v'\in V_\sigma(N)$ and by (\ref{eq24}). Since $V_\pi(N)$ is spanned by the vectors $v-\pi(n)v$ for $v\in V_\pi$, $n\in N$, we get \begin{equation}{\label{eq25}} \CB_s'|_{V_\pi(N)\times V_\sigma}=c(s)\CB_s|_{V_\pi(N)\times V_\sigma}.
\end{equation} By (\ref{eq24}) and (\ref{eq25}), the bilinear form ${\mathcal {C}}_s=\CB_s'-c(s)\CB_s$ vanishes on $V_\pi\times V_\sigma(N)$ and on $V_\pi(N)\times V_\sigma$, and thus defines a linear form $(V_\pi)_N\otimes (V_\sigma)_N\rightarrow {\mathbb {C}}$ such that \begin{equation}{\label{eq28}}{\mathcal {C}}_s(\pi(b)v\otimes \sigma(b)v')=\nu_s(b){\mathcal {C}}_s(v\otimes v').\end{equation} Temporarily denote by $W$ the $B$-space $(V_\pi)_N\otimes (V_\sigma)_N$. Then (\ref{eq28}) implies that if ${\mathcal {C}}_s$ is nonzero, then $W$ has a 1-dimensional quotient $(\nu_s,{\mathbb {C}})$. On the other hand, the space $W$ is finite dimensional, in fact of dimension at most 2. Thus it has only finitely many 1-dimensional quotients $\chi_i$. Hence if $\nu_s\ne \chi_i$ for all $i$, we have ${\mathcal {C}}_s=0$; i.e., except for a finite number of values of $q_E^s$, ${\mathcal {C}}_s$ is identically zero, i.e., $$ \CB_s'\equiv c(s)\CB_s.$$ \end{proof} \begin{proof}[Proof of Proposition $\ref{prop25}$] In the following proof, we use ${\mathrm{Ind}}_B^H(\eta|~|^s)$ to denote the non-normalized induction, so that Frobenius reciprocity has a simpler form. By Frobenius reciprocity, we have \begin{align*} \CT_s =&{\mathrm{Hom}}_H(\pi\otimes \omega_{\mu,\psi^{-1},\chi}, {\mathrm{Ind}}_B^H(\eta^{-1}|~|^{1-s}))\\ =&{\mathrm{Hom}}_B(\pi\otimes \omega_{\mu,\psi^{-1},\chi}, \eta^{-1}|~|^{1-s})\\ =&{\mathrm{Hom}}_B(\pi\otimes \eta|~|^{s-1}|_B, \widetilde\omega_{\mu,\psi^{-1},\chi}|_B).
\end{align*} Note that $\omega_{\mu,\psi^{-1},\chi}$ is exceptional; thus by Lemma \ref{lem26}, we have $$\dim \CT_s\le 1.$$ As noted in the proof of Proposition \ref{prop22}, we will show in the next section that there exist $W\in {\mathcal {W}}(\pi,\psi^{-1})$, $\phi\in {\mathcal {S}}(E,\chi)$ and $\Phi\in {\mathcal {S}}(F^2)$ such that $$\Psi(s,W,\phi,\Phi,\eta)=1,$$ except for a finite number of values of $q_E^s$ after meromorphic continuation. Thus $\dim \CT_s=1$. \end{proof} As a corollary of Corollary \ref{cor24} and Proposition \ref{prop25}, we have \begin{cor} There exists a meromorphic function $\gamma(s,\pi,\omega_{\mu,\psi^{-1},\chi},\eta,\psi')$ such that $$ \Psi(1-s,W,\phi,\hat \Phi,\eta^*)=\gamma(s,\pi,\omega_{\mu,\psi^{-1},\chi},\eta,\psi')\Psi(s,W,\phi,\Phi,\eta).$$ \end{cor} Note that in the notation, $\psi$ is used to define the Weil representation $\omega_{\mu,\psi^{-1},\chi}$ and $\psi'$ is used to define the Fourier transform $\hat \Phi$. We also define the $\epsilon$-factor $$\epsilon(s,\pi,\omega_{\mu,\psi^{-1},\chi},\eta,\psi')=\gamma(s,\pi,\omega_{\mu,\psi^{-1},\chi},\eta,\psi')\frac{L(s,\pi,\omega_{\mu,\psi^{-1},\chi},\eta)}{L(1-s,\pi,\omega_{\mu,\psi^{-1},\chi},\eta^*)}.$$ \subsection{Dependence on $\psi$} By definition, the $\gamma$-factor depends on the choices of $\psi$ and $\psi'$. In this subsection, we consider the (in)dependence on $\psi$ and $\psi'$.
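\noindent\textbf{Remark:} Recall that, by the self-duality of the additive group of $F$, every nontrivial additive character of $F$ is of the form $\psi_b(x)=\psi(bx)$ for a unique $b\in F^\times$, and by local class field theory $[F^\times:\Nm_{E/F}(E^\times)]=2$. Thus, up to the twists treated in Lemma \ref{lem28} below, the only remaining comparison is between $\psi$ and $\psi_\kappa$ for a fixed $\kappa\in F^\times-\Nm_{E/F}(E^\times)$, and this comparison occupies the rest of the subsection.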
\begin{lem}{\label{lem28}} For $a\in E^\times$ and $b\in F^\times$, we have \begin{align*} L(s,\pi,\omega_{\mu,\psi_{a\bar a}^{-1},\chi}, \eta)&=L(s,\pi,\omega_{\mu,\psi^{-1},\chi},\eta), \\ \gamma(s,\pi,\omega_{\mu,\psi^{-1}_{a\bar a},\chi},\eta,\psi')&=\eta(a\bar a)|a\bar a|_F^{2s-1}\gamma(s,\pi,\omega_{\mu,\psi^{-1},\chi},\eta,\psi'),\end{align*} and $$\gamma(s,\pi,\omega_{\mu,\psi^{-1},\chi},\eta,\psi'_b)=\eta(b)|b|_E^{s-1}\gamma(s,\pi,\omega_{\mu,\psi^{-1},\chi},\eta,\psi').$$ \end{lem} \begin{proof} By definition, it is clear that $I(s,\pi,\omega_{\mu,\psi^{-1}_{a\bar a},\chi},\eta)=I(s,\pi,\omega_{\mu,\psi^{-1},\chi},\eta)$, and thus $$ L(s,\pi,\omega_{\mu,\psi^{-1}_{a\bar a},\chi}, \eta)=L(s,\pi,\omega_{\mu,\psi^{-1},\chi},\eta).$$ Let $W\in {\mathcal {W}}(\pi,\psi)$ and $\phi\in {\mathcal {S}}(E,\chi)$. As in the proof of Proposition \ref{prop22}, we have $W_a\in {\mathcal {W}}(\pi,\psi_{a\bar a})$, $\phi_a\in {\mathcal {S}}(E,\chi)$, and $$\Psi(s,W_a,\phi_a,\Phi,\eta, \omega_{\mu,\psi^{-1}_{a\bar a},\chi})=\mu(a)^{-1}\eta(a)^{-1}|a|_E^{-s-1/2}\Psi(s,W,\phi,\Phi,\eta,\omega_{\mu,\psi^{-1},\chi}).$$ Similarly, we have $$\Psi(1-s,W_a,\phi_a,\hat \Phi,\eta^*, \omega_{\mu,\psi^{-1}_{a\bar a},\chi})=\mu( a)^{-1}\eta(\bar a)|a|_E^{s-3/2}\Psi(1-s,W,\phi,\hat \Phi,\eta^*,\omega_{\mu,\psi^{-1},\chi}).$$ Thus $$\gamma(s,\pi,\omega_{\mu,\psi^{-1}_{a\bar a},\chi},\eta,\psi')=\eta(a\bar a)|a|_E^{2s-1}\gamma(s,\pi,\omega_{\mu,\psi^{-1},\chi},\eta,\psi').$$ Denote the Fourier transform of $\Phi$ with respect to $\psi'_b$ by $\hat \Phi_b$, i.e., $$\hat \Phi_b(x,y)=\int_{F^2}\Phi(u,v)\psi'_b(yu-xv)dudv.$$ Then $\hat \Phi_b(x,y)=\hat \Phi(bx,by)$.
Thus we have $$f(1-s,h,\hat \Phi_b,\eta^*)=\eta(b)|b|_E^{s-1}f(1-s,h,\hat \Phi,\eta^*).$$ By the functional equation, we get $$\gamma(s,\pi,\omega_{\mu,\psi^{-1},\chi},\eta,\psi'_b)=\eta(b)|b|_E^{s-1}\gamma(s,\pi,\omega_{\mu,\psi^{-1},\chi},\eta,\psi').$$ \end{proof} \noindent\textbf{Remark:} Since the dependence of the $\gamma$-factor on $\psi'$ is very simple, we usually take $\psi'=\psi$ and drop it from the notation. Lemma \ref{lem28} shows that the dependence of the $\gamma$-factor on $\psi_b$ for $b\in \Nm_{E/F}(E^\times)$ is also simple. Next, we need to consider the relation between $\gamma(s,\pi,\omega_{\mu,\psi^{-1},\chi},\eta)$ and $\gamma(s,\pi,\omega_{\mu,\psi^{-1}_\kappa,\chi},\eta)$ for $\kappa\in F^\times-\Nm_{E/F}(E^\times)$, when both $\gamma$-factors are defined. For any $\kappa\in F^\times$, we denote $\alpha(\kappa)={\mathrm {diag}}(\kappa,1)\in \GL_2(F)$. One can check that if $h=\begin{pmatrix}a& b\\ c& d \end{pmatrix}\in H$, then $$h^\kappa:=\alpha(\kappa)h\alpha(\kappa)^{-1}=\begin{pmatrix} a & \kappa b\\ \kappa^{-1}c & d \end{pmatrix}\in H.$$ We have $(h_1h_2)^\kappa=h_1^\kappa h_2^\kappa.$ We now fix $\kappa\in F^\times-\Nm_{E/F}(E^\times)$. For an irreducible smooth representation $(\pi,V)$ of $H$, let $(\pi^\kappa, V^\kappa)$ be the representation of $H$ defined by $$V^\kappa=V \textrm{ and }\pi^\kappa(h)=\pi(h^\kappa).$$ \begin{lem} As vector spaces, we have $$(V^\kappa)_{N,\psi_\kappa}=V_{N,\psi}.$$ In particular, ${\mathcal {W}}(\pi,\psi)\ne 0$ if and only if ${\mathcal {W}}(\pi^\kappa,\psi_\kappa)\ne 0$. \end{lem} \begin{proof} Consider the space $$V^\kappa(N,\psi_\kappa)=\pair{\pi^\kappa(n)v-\psi_\kappa(n)v| n\in N,v\in V}.$$ Since $\pi^\kappa(n)v-\psi_\kappa(n)v=\pi(n^\kappa)v-\psi(n^\kappa)v$, we have $V^\kappa(N,\psi_\kappa)=V(N,\psi)$, and thus $V^\kappa_{N,\psi_\kappa}=V_{N,\psi}$.
\end{proof} \begin{lem} We have $$(\omega_{\mu,\psi^{-1},\chi})^\kappa\cong \omega_{\mu,\psi_\kappa^{-1},\chi}.$$ \end{lem} \begin{proof} One can check that the identity map $$(\omega_{\mu,\psi_\kappa^{-1},\chi}, {\mathcal {S}}(E,\chi))\rightarrow ((\omega_{\mu,\psi^{-1},\chi})^\kappa, {\mathcal {S}}(E,\chi))$$ defines an isomorphism. \end{proof} We define an involution $h\mapsto h^\delta$ on $H$ by $$\begin{pmatrix}a& b\\ c& d \end{pmatrix}^\delta=\begin{pmatrix}\bar a& -\bar b\\-\bar c& \bar d \end{pmatrix}.$$ This is the so-called MVW-involution. For the MVW-involution for more general unitary groups, see \cite{MVW}, p.~91, or \cite{KS}, p.~270. For an irreducible smooth admissible representation $\pi$ of $H$, it is known that $\tilde \pi\cong \pi^\delta$, where $\pi^\delta(h)=\pi(h^\delta)$. Let $w=\begin{pmatrix}&1\\ -1& \end{pmatrix}$. Let $h^\alpha=w^{-1}h^\delta w$, and let $\pi^\alpha(h)=\pi(h^\alpha)$. Then $\pi^\alpha\cong \tilde \pi.$ Explicitly, we have $$\begin{pmatrix}a& b\\ c& d \end{pmatrix}^\alpha=\begin{pmatrix}\bar d& \bar c\\\bar b& \bar a \end{pmatrix}.$$ Recall that we use $\bar N$ to denote the lower triangular unipotent subgroup of $H={\mathrm {U}}(1,1)$, and for $x\in F$ we write $$\bar n (x)=\begin{pmatrix} 1&\\ x&1 \end{pmatrix}.$$ \begin{cor}{\label{cor211}} We have $${\mathrm{Hom}}_N(\pi,\psi)\ne 0 \textrm{ if and only if }{\mathrm{Hom}}_{\bar N}(\tilde \pi,\psi)\ne 0.$$ \end{cor} \begin{proof} For $n(b)\in N$ with $b\in F$, we have $n(b)^{\alpha}=\bar n(b)$. Thus $ {\mathrm{Hom}}_N(\pi,\psi)={\mathrm{Hom}}_{\bar N}(\pi^\alpha, \psi).$ The assertion follows from the fact that $\tilde \pi \cong \pi^\alpha$.
\end{proof} The proof of the following theorem uses the Gelfand-Kazhdan method, and is inspired by the proof of Theorem 4.4.2 of \cite{Bu}, the uniqueness of the Whittaker functional for $\GL_2$. \begin{thm} Let $(\pi,V)$ be an irreducible admissible representation of ${\mathrm {U}}(1,1)$ such that ${\mathcal {W}}(\pi,\psi)\ne 0$ and ${\mathcal {W}}(\pi,\psi_\kappa)\ne 0$. Then $$\pi\cong \pi^\kappa.$$ \end{thm} \begin{proof} The condition means that ${\mathcal {W}}(\pi,\psi')\ne 0$ for every nontrivial additive character $\psi'$ of $F$, which is equivalent to ${\mathcal {W}}(\tilde \pi,\psi')\ne 0$ for every nontrivial additive character $\psi'$ of $F$ by the isomorphism $\tilde \pi \cong \pi^\delta$. Since every irreducible admissible representation $\pi$ is the contragredient of another irreducible admissible representation, it suffices to show that $$\tilde \pi \cong (\tilde \pi)^\kappa\cong (\pi^\alpha)^\kappa.$$ Let $h^\beta=(h^\kappa)^\alpha$, and let $\pi^\beta(h)=\pi(h^\beta)$. Then $\pi^\beta=(\pi^\alpha)^\kappa$. Thus it suffices to show that $\tilde\pi\cong \pi^\beta$. Define $h^\theta=(h^{-1})^\beta=(h^\beta)^{-1}$. Explicitly, we have $$\begin{pmatrix} a&b \\ c& d\end{pmatrix}^\theta=\begin{pmatrix} a&-\kappa^{-1}c \\ -\kappa b& d\end{pmatrix}.$$ We have $(h_1h_2)^\theta=h_2^\theta h_1^\theta$ and $(h^\theta)^\theta=h$. Moreover, we have $N^\theta=\bar N$ and $\bar N^\theta=N$. The assumption implies that ${\mathrm{Hom}}_N(\tilde \pi,\psi)\ne 0$ and ${\mathrm{Hom}}_N(\tilde \pi,\psi_\kappa)\ne 0$. We fix a nonzero element $\mu\in {\mathrm{Hom}}_N(\tilde\pi,\psi_\kappa)$.
By Corollary \ref{cor211}, the condition ${\mathrm{Hom}}_N(\tilde \pi,\psi)\ne 0$ is equivalent to ${\mathrm{Hom}}_{\bar N}(\pi,\psi)\ne 0$. We fix a nonzero element $\lambda\in {\mathrm{Hom}}_{\bar N}(\pi,\psi)$. The dual map $\mu^*$ of $\mu:\tilde V\rightarrow {\mathbb {C}}$ defines a map $\mu^*:{\mathbb {C}}\rightarrow \tilde V^*$. The smoothness of $\mu$ shows that the image of $\mu^*$ is contained in $\tilde {\tilde V}\cong V$. We define a distribution $T$ on ${\mathcal {S}}(H)$ by $$T(f)=\lambda\circ \pi(f)\circ \mu^* \in {\mathrm{End}}({\mathbb {C}})\cong{\mathbb {C}}.$$ Let $r$ be the right translation and $l$ the left translation action on ${\mathcal {S}}(H)$, i.e., $r(h_0)f(h)=f(hh_0)$ and $l(h_0)f(h)=f(h_0^{-1}h)$. Consider the action $\rho$ of $\bar N\times N$ on ${\mathcal {S}}(H)$ defined by $$\rho_{\bar n_1,n_2}f(h)=(l(\bar n_1)r(n_2)f)(h)=f(\bar n_1^{-1}hn_2), \quad \bar n_1\in \bar N, n_2\in N.$$ For all $\bar n_1\in \bar N$ and $n_2\in N$, we have \begin{align*}T(\rho_{\bar n_1,n_2}f)&=\lambda\circ \int_{H}f(\bar n_1^{-1}h n_2)\pi(h)dh \circ \mu^*\\ &=\lambda\circ \int_H f(h) \pi(\bar n_1)\circ \pi(h) \circ \pi(n_2^{-1}) dh \circ \mu^*\\ &=\lambda\circ \pi(\bar n_1)\circ \pi(f)\circ \pi(n_2^{-1})\circ \mu^*.\end{align*} Since $\lambda\circ \pi(\bar n_1)=\psi(\bar n_1)\lambda$ and $\pi(n_2^{-1})\circ \mu^*=\psi_\kappa(n_2)\mu^*$, we get \begin{equation}{\label{eq29}} T(\rho_{\bar n_1,n_2}f)=\psi(\bar n_1)\psi_\kappa(n_2)T(f), \quad \forall \bar n_1\in \bar N,n_2\in N.
\end{equation} We claim that $$(*) \quad T(f)=T(f^\theta) \textrm{ for all }f\in {\mathcal {S}}(H),$$ where $f^\theta(h)=f(h^\theta)$. Let $A$ temporarily denote the torus of $H$. By the exact sequence $$0\rightarrow {\mathcal {S}}(\bar NAN)\rightarrow {\mathcal {S}}(H)\rightarrow {\mathcal {S}}(\bar N wAN)\rightarrow 0,$$ to prove Claim $(*)$ it suffices to consider $f\in {\mathcal {S}}(\bar NAN)$ and $f\in {\mathcal {S}}(\bar NwAN)$ separately. We first assume that $f\in {\mathcal {S}}(\bar NAN)$. We define a function $G_f\in {\mathcal {S}}(A)$ by \begin{equation}G_f(a)=\int_{F\times F}f(\bar n(-x_1)an(x_2))\psi^{-1}(x_1)\psi^{-1}_\kappa(x_2)dx_1dx_2.\end{equation} The assignment $f\mapsto G_f$ defines a surjection ${\mathcal {S}}(\bar NAN)\rightarrow {\mathcal {S}}(A)$. We define a distribution $\tau$ on ${\mathcal {S}}(A)$ by $$\tau(G)=T(f)$$ if $G=G_f$ for some $f\in {\mathcal {S}}(\bar NAN)$. To show that $\tau$ is well-defined, we need to check that if $G_f=0$ then $T(f)=0$. By (\ref{eq29}), we have $T(f)=\psi^{-1}(\bar n_1)\psi^{-1}_\kappa(n_2)T(\rho_{\bar n_1,n_2}f)$ for any $\bar n_1\in \bar N$, $n_2\in N$; thus for any compact subsets $C_1,C_2\subset F$, we have $$T(f)=\frac{1}{\Vol(C_1\times C_2)}\int_{C_1\times C_2}\psi^{-1}(x_1)\psi^{-1}_\kappa(x_2)T(\rho_{\bar n(x_1),n(x_2)}f)dx_1dx_2=\frac{1}{\Vol(C_1\times C_2)}T(f'),$$ where $f'\in {\mathcal {S}}(\bar NAN)$ is defined by $$f'(h)=\int_{C_1\times C_2}\psi^{-1}(x_1)\psi^{-1}_\kappa(x_2)f(\bar n(-x_1)hn(x_2))dx_1dx_2.
$$ If $C_1$ and $C_2$ are large enough, we have $f'(a)=G_f(a)=0$ for all $a\in A$ by assumption; since for $C_1,C_2$ large the function $f'$ transforms by characters under left translation by $\bar N$ and right translation by $N$, it is determined by its restriction to $A$, so $f'=0$ and $T(f)=0$. Then $\tau$ is well-defined. To show $T(f)=T(f^\theta)$, it suffices to show that $G_f=G_{f^\theta}$. We have \begin{align*} G_{f^\theta}(a)&=\int_{F\times F}f^\theta(\bar n(-x_1)an(x_2))\psi^{-1}(x_1)\psi_\kappa^{-1}(x_2)dx_1dx_2\\ &=\int_{F\times F}f(n(x_2)^\theta a^\theta \bar n(-x_1)^\theta)\psi^{-1}(x_1)\psi^{-1}_\kappa(x_2)dx_1dx_2\\ &=\int_{F\times F}f(\bar n(-\kappa x_2)a^\theta n(\kappa^{-1}x_1 ))\psi^{-1}(x_1)\psi^{-1}_\kappa(x_2)dx_1dx_2. \end{align*} Let $x_1'=\kappa x_2$ and $x_2'=\kappa^{-1} x_1$; then $dx_1dx_2=dx_1'dx_2'$, and the last expression above becomes $$ \int_{F\times F}f(\bar n(-x_1')a^\theta n(x_2'))\psi^{-1}_\kappa(x_2')\psi^{-1}(x_1')dx_1'dx_2'=G_f(a^\theta).$$ Thus we get $$G_{f^\theta}(a)=G_f(a^\theta).$$ Since $a=a^\theta$ for $a\in A$, we get $G_f=G_{f^\theta}$. This completes the proof of $T(f)=T(f^\theta)$ when $f\in {\mathcal {S}}(\bar NAN)$. Next we consider the case $f\in {\mathcal {S}}(\bar NwAN)={\mathcal {S}}(wB)$, where $B$ is the Borel subgroup of $H$. Similarly to the above, we have a surjection ${\mathcal {S}}(\bar NwAN)\rightarrow {\mathcal {S}}(wA)$, $f\mapsto G_f$, where $G_f$ is defined by $$G_f(wa)=\int_N \psi_\kappa^{-1}(n)f(wan)dn.$$ We can define a distribution $\tau$ on ${\mathcal {S}}(wA)$ by $\tau(G)=T(f)$ if $G=G_f$ for some $f\in {\mathcal {S}}(\bar NwAN)$. Since $T(r(n)f)=\psi_\kappa(n)T(f)$, a similar argument as above shows that $\tau$ is well-defined. We claim that $\tau\equiv 0$ on ${\mathcal {S}}(wA)$.
Since $T(l(\bar n)f)=\psi(\bar n)T(f)$, we get $G_{l(\bar n)f}-\psi(\bar n)G_f\in {\mathrm{Ker}}(\tau)$ for any $\bar n\in \bar N$. For $f\in {\mathcal {S}}(\bar NwAN)$ and $\bar n\in \bar N$, we define $\Phi_{f,\bar n}=G_{l(\bar n)f}-\psi(\bar n)G_f$. We have \begin{align*} \Phi_{f,\bar n}(wa)&=\int_{N}\psi^{-1}_\kappa(n')f(\bar n^{-1}wa n')dn'-\psi(\bar n)G_f(wa)\\ &=\int_N \psi_\kappa^{-1}(n')f(wa (wa)^{-1}\bar n^{-1}wa n')dn'-\psi(\bar n)G_f(wa)\\ &=(\psi_\kappa((wa)^{-1}\bar n^{-1}wa)-\psi(\bar n))G_f(wa). \end{align*} Suppose that $$wa=\begin{pmatrix} & b\\-\bar b^{-1} & \end{pmatrix}, \quad b\in E^\times, \qquad \bar n= \begin{pmatrix} 1& \\x & 1 \end{pmatrix};$$ then $$(wa)^{-1}\bar n^{-1}wa= \begin{pmatrix} 1& b\bar b x \\& 1 \end{pmatrix} .$$ Thus $$\psi_\kappa((wa)^{-1}\bar n^{-1}wa)-\psi(\bar n)=\psi(\kappa b\bar bx)-\psi(x).$$ Since $\kappa\notin \Nm(E^\times)$, we have $\kappa b\bar b\ne 1$. Since $\psi$ is continuous, for a small open compact neighborhood $D$ of $wa$ we can find $\bar n=\bar n(x)$ such that $\psi_\kappa((wa)^{-1}\bar n^{-1}wa)-\psi(\bar n)$ is a nonzero constant $c_D$ for all $wa\in D$. Let $f_D\in {\mathcal {S}}(\bar NwAN)$ be such that $G_{f_D}$ is the characteristic function of $D$; then we have $$\Phi_{f_D,\bar n}=c_DG_{f_D}.$$ The space ${\mathcal {S}}(wA)$ is spanned by the functions $G_{f_D}$, and hence by the functions $\Phi_{f_D,\bar n}$. This shows that ${\mathcal {S}}(wA)\subset {\mathrm{Ker}}(\tau)$, i.e., $T(f)=\tau(G_f)=0$ for all $f\in {\mathcal {S}}(\bar NwAN)$. A similar consideration shows that $T(f^\theta)=0$. Thus $T(f)=T(f^\theta)$ for $f\in {\mathcal {S}}(wB)$. This finishes the proof of Claim $(*)$.
We define a bilinear form $\bB$ on ${\mathcal {S}}(H)$ by $$\bB(f,\phi)=T(f*\check \phi),$$ where $\check \phi(h)=\phi(h^{-1})$ and $*$ denotes convolution. By Claim $(*)$, we get \begin{equation}{\label{eq211}}\bB(f,\phi)=T(f*\check\phi)=T((f*\check \phi)^\theta)=T(\check\phi^\theta*f^\theta)=\bB(\phi^\beta,f^\beta).\end{equation} Define a map $$\lambda: {\mathcal {S}}(H)\rightarrow \tilde\pi, \qquad f\mapsto \lambda_f,$$ where $\lambda_f\in \tilde V$ is defined by $\lambda_f(v)=\lambda(\pi(f)v)$. Let $J(\lambda)$ be the kernel of the map $\lambda$, i.e., $J(\lambda)=\wpair{f\in {\mathcal {S}}(H): \lambda_f=0}$. Similarly, we define $\mu: {\mathcal {S}}(H)\rightarrow \tilde {\tilde \pi} \cong \pi$ by $\mu_f(\tilde v)=\mu(\tilde \pi(f)\tilde v)$, and let $J(\mu)$ be the kernel of $\mu$. It is easy to see that $$J(\lambda)=\wpair{f\in {\mathcal {S}}(H)| \bB(f,\phi)=0,\forall \phi\in {\mathcal {S}}(H)}$$ and $$J(\mu)=\wpair{\phi\in {\mathcal {S}}(H)| \bB(f,\phi)=0,\forall f\in {\mathcal {S}}(H)}.$$ By (\ref{eq211}), we have $$J(\lambda)=\wpair{f\in {\mathcal {S}}(H)| \bB(\phi^\beta, f^\beta)=0, \forall \phi\in {\mathcal {S}}(H)}=J(\mu)^\beta.$$ Let $(r,{\mathcal {S}}(H))$ be the right translation action of $H$ on ${\mathcal {S}}(H)$; then $\lambda:(r,{\mathcal {S}}(H))\rightarrow \tilde \pi$ is an intertwining surjection with kernel $J(\lambda)$. Let $(r^\beta,{\mathcal {S}}(H))$ be the representation of $H$ on ${\mathcal {S}}(H)$ defined by $r^\beta(h)f=r(h^\beta)f$.
Then the map $\mu:{\mathcal {S}}} \newcommand{\CT}{{\mathcal {T}}(H)\rightarrow \pi$ defines an intertwining surjection $\mu:(r^\beta,{\mathcal {S}}} \newcommand{\CT}{{\mathcal {T}}(H))\rightarrow \pi^\beta$, with kernel $J(\mu)$. The assignment $f\mapsto f^\beta$ defines an isomorphism $(r,{\mathcal {S}}} \newcommand{\CT}{{\mathcal {T}}(H))\rightarrow (r^\beta,{\mathcal {S}}} \newcommand{\CT}{{\mathcal {T}}(H))$. Since $J(\mu)=J(\lambda)^\beta$, we have the following commutative diagram $$\xymatrix{0\ar[r]&J(\lambda)\ar[d]\ar[r]&S(H)\ar[r]^{\lambda} \ar[d]&\tilde \pi\ar@{-->}[d]^{\beta}\ar[r]& 0\\ 0\ar[r]&J(\mu)\ar[r]&S(H)\ar[r]^{\mu}& \pi^\beta\ar[r]&0 }$$ and thus we get an isomorphism $\tilde \pi\rightarrow \pi^\beta$. This completes the proof. \end{proof} \begin{cor}{\label{cor213}} Let $\pi$ be an irreducible smooth representation of $H$ which is both $\psi$- and $\psi_\kappa$-generic. For any $W\in {\mathcal {W}}} \newcommand{\CX}{{\mathcal {X}}(\pi,\psi)$, define a function $W^\kappa$ on $H$ by $W^\kappa(h)=W(h^\kappa)$. Then $W^\kappa\in {\mathcal {W}}} \newcommand{\CX}{{\mathcal {X}}(\pi,\psi_\kappa)$. Moreover, the assignment $W\mapsto W^\kappa$ defines a bijection from ${\mathcal {W}}} \newcommand{\CX}{{\mathcal {X}}(\pi,\psi)$ to ${\mathcal {W}}} \newcommand{\CX}{{\mathcal {X}}(\pi,\psi_\kappa)$. \ \end{cor} \begin{proof} By the above theorem, we have an isomorphism $^\kappa: (\pi,V)\rightarrow (\pi^\kappa, V)$. Since the map $^\kappa$ is intertwining, we have \begin{equation}{\label {kappa is intertwining}} (\pi(h)v)^\kappa=\pi^\kappa(h)v^\kappa, h\in H, v\in V. \end{equation} Let $\lambda\in {\mathrm{Hom}}} \renewcommand{\Im}{{\mathrm{Im}}_N(\pi, \psi)$ be a nonzero element. We define $\lambda^\kappa$ by $\lambda^\kappa(v)=\lambda(v^\kappa)$. 
For $n\in N$, by Eq.(\ref{kappa is intertwining}) we have $$\lambda^\kappa(\pi(n)v)=\lambda((\pi(n)v)^\kappa)=\lambda(\pi^\kappa(n)v^\kappa)=\lambda(\pi(n^\kappa)v^\kappa)=\psi_\kappa(n)\lambda(v^\kappa)=\psi_\kappa(n)\lambda^\kappa(v),$$ thus $\lambda^\kappa\in {\mathrm{Hom}}} \renewcommand{\Im}{{\mathrm{Im}}_N(\pi,\psi_\kappa)$. Since ${}^\kappa$ is an isomorphism, for each $W\in {\mathcal {W}}} \newcommand{\CX}{{\mathcal {X}}(\pi, \psi)$, we can take $v\in V$ such that $W(h)=\lambda(\pi(h)v^\kappa)$. Then $$W^\kappa(h)=\lambda(\pi(h^\kappa)v^\kappa)=\lambda((\pi(h)v)^\kappa)=\lambda^\kappa(\pi(h)v).$$ Thus $W^\kappa\in {\mathcal {W}}} \newcommand{\CX}{{\mathcal {X}}(\pi,\psi_\kappa)$. The ``moreover'' part follows from the fact that ${}^\kappa$ is an isomorphism. \end{proof} \begin{prop}{\label{prop214}} Let $(\pi,V)$ be an irreducible admissible representation of $H$ such that ${\mathcal {W}}} \newcommand{\CX}{{\mathcal {X}}(\pi,\psi^{-1})\ne 0$ and ${\mathcal {W}}} \newcommand{\CX}{{\mathcal {X}}(\pi,\psi_\kappa^{-1})\ne 0$. The notations $\mu,\chi,\psi,\eta$ are as usual. Let $\kappa$ be an element of $F^\times-\Nm(E^\times)$. Then $$L(s,\pi,\omega_{\mu,\chi,\psi_\kappa},\eta)=L(s,\pi,\omega_{\mu,\chi,\psi},\eta)$$ and $$\gamma(s,\pi,\omega_{\mu,\chi,\psi_\kappa},\eta,\psi')=\eta(\kappa)|\kappa|_F^{2s-1}\gamma(s,\pi,\omega_{\mu,\chi,\psi},\eta,\psi').$$ \end{prop} \begin{proof} We fix an isomorphism $\phi\mapsto \phi^\kappa$ from $(\omega_{\mu,\psi^{-1},\chi})^{\kappa}$ to $\omega_{\mu,\psi^{-1}_\kappa,\chi}$. Then we have $$\omega_{\mu,\psi^{-1},\chi}(h^\kappa)\phi=\omega_{\mu,\psi^{-1}_\kappa,\chi}(h)\phi^\kappa.$$ For $W\in {\mathcal {W}}} \newcommand{\CX}{{\mathcal {X}}(\pi,\psi)$, by Corollary \ref{cor213}, we have $W^\kappa\in {\mathcal {W}}} \newcommand{\CX}{{\mathcal {X}}(\pi,\psi_\kappa)$. 
For $\Phi\in {\mathcal {S}}} \newcommand{\CT}{{\mathcal {T}}(F^2)$ and $\Re(s)\gg 0$, we have \begin{align*}& \quad \Psi(s,W^\kappa, \phi^\kappa, \Phi,\eta,\omega_{\mu,\psi^{-1}_\kappa,\chi})\\ &=\int_{R\setminus H}W^\kappa(h)(\omega_{\mu,\psi^{-1}_\kappa,\chi}(h)\phi^\kappa)(1)f(s,h,\Phi)dh\\ &=\int_{R\setminus H}W(h^\kappa)(\omega_{\mu,\psi^{-1},\chi}(h^\kappa)\phi)(1)f(s,h,\Phi)dh\\ &=\int_{R\setminus H}W(h)(\omega_{\mu,\psi^{-1},\chi}(h)\phi)(1)f(s,h^{\kappa^{-1}},\Phi)dh. \end{align*} Writing $h=t(a)g$ with $g\in \SL_2(F)$, we have $h^{\kappa^{-1}}=t(a)g^{\kappa^{-1}}$. Denote $\Phi^\kappa:={\mathrm {diag}}} \newcommand{\Tri}{{\mathrm {Tri}}(\kappa,1)\Phi$. Then \begin{align*}&\quad f(s,h^{\kappa^{-1}}, \Phi)\\ &=\eta(a)|a|^s\int_{F^\times}[\left({\mathrm {diag}}} \newcommand{\Tri}{{\mathrm {Tri}}(\kappa^{-1},1)g{\mathrm {diag}}} \newcommand{\Tri}{{\mathrm {Tri}}(\kappa,1) \right)\Phi](0,r)\eta(r)|r|_E^s d^*r\\ &=\eta(a)|a|^s\int_{F^\times}[\left(g{\mathrm {diag}}} \newcommand{\Tri}{{\mathrm {Tri}}(\kappa,1) \right)\Phi](0,r)\eta(r)|r|_E^s d^*r\\ &=f(s,h, \Phi^{\kappa}).\end{align*} Thus \begin{equation}{\label{eq213}}\Psi(s,W^\kappa,\phi^\kappa,\Phi,\eta,\omega_{\mu,\psi^{-1}_\kappa,\chi})=\Psi(s,W,\phi,\Phi^\kappa,\eta,\omega_{\mu,\psi^{-1},\chi}).\end{equation} By Corollary \ref{cor213} and the fact that $\phi\mapsto \phi^\kappa$ is an isomorphism, $I(s,\pi,\omega_{\mu,\psi_\kappa^{-1},\chi},\eta)$ is generated by the integrals $\Psi(s,W^\kappa,\phi^\kappa,\Phi,\eta,\omega_{\mu,\psi^{-1}_{\kappa a\bar a},\chi})$, for $W\in {\mathcal {W}}} \newcommand{\CX}{{\mathcal {X}}(\pi,\psi_{ a\bar a}), \phi\in {\mathcal {S}}} \newcommand{\CT}{{\mathcal {T}}(E,\chi), a\in E^\times$, $\Phi\in {\mathcal {S}}} \newcommand{\CT}{{\mathcal {T}}(F^2)$. Since $\Phi\mapsto \Phi^\kappa$ is an isomorphism, by Eq.(\ref{eq213}), we have $I(s,\pi,\omega_{\mu,\psi^{-1}_\kappa,\chi},\eta)=I(s,\pi,\omega_{\mu,\psi^{-1},\chi},\eta)$. 
Consequently, we get $$ L(s,\pi,\omega_{\mu,\psi^{-1}_\kappa,\chi},\eta)=L(s,\pi,\omega_{\mu,\psi^{-1},\chi},\eta).$$ Since $\widehat{\Phi^\kappa}=|\kappa|_F^{-1}{\mathrm {diag}}} \newcommand{\Tri}{{\mathrm {Tri}}(\kappa^{-1},\kappa^{-1})\hat\Phi^{\kappa}$, we get $$f(1-s,h,\widehat{\Phi^\kappa},\eta^*)=\eta(\kappa)^{-1}|\kappa|_F^{1-2s}f(1-s,h,\hat \Phi^\kappa,\eta^*).$$ From the functional equation, we have $$\gamma(s,\pi,\omega_{\mu,\psi^{-1}_\kappa,\chi},\eta)=\eta(\kappa)|\kappa|_F^{2s-1}\gamma(s,\pi,\omega_{\mu,\psi^{-1},\chi},\eta).$$ This finishes the proof. \end{proof} We can combine Lemma \ref{lem28} and Proposition \ref{prop214} to get \begin{cor} For $a\in F^\times$, suppose that both $L(s,\pi,\omega_{\mu,\psi^{-1},\chi},\eta)$ and $L(s,\pi,\omega_{\mu,\psi^{-1}_a,\chi},\eta)$ are defined. Then we have $$L(s,\pi,\omega_{\mu,\psi^{-1}_a,\chi},\eta)=L(s,\pi,\omega_{\mu,\psi^{-1},\chi},\eta)$$ and $$\gamma(s,\pi,\omega_{\mu,\psi^{-1}_a,\chi},\eta)=\eta(a)|a|_F^{2s-1}\gamma(s,\pi,\omega_{\mu,\psi^{-1},\chi},\eta).$$ \end{cor} \subsection{Unramified calculation} The unramified calculation is done in \cite{GRS} and we include their result here for completeness. Let $E/F$ be unramified, and let $\psi$ be an additive character with conductor ${\mathcal {O}}} \newcommand{\CP}{{\mathcal {P}}_F$. Let $p$ be a prime element of $F$; since $E/F$ is unramified, $p$ is also a prime element of $E$. Let $(\pi,V)$ be an irreducible unramified representation of $H$ such that ${\mathcal {W}}} \newcommand{\CX}{{\mathcal {X}}(\pi,\psi)\ne 0$. We can assume that $\pi$ is an irreducible unramified component of ${\mathrm{Ind}}} \newcommand{\ind}{{\mathrm{ind}}_B^H(\nu)$ for an unramified quasi-character $\nu$ of $E^\times$. Let $W_\pi$ be the Whittaker function associated with the spherical vector, normalized by the condition $W_\pi(k)=1$ for $k\in K$. The explicit Casselman-Shalika formula reads: $$W_\pi\left(t(a)\right)=0, ~ |a|_E >1,$$ and $$W_\pi(t(p^n))=|p^n|_F\frac{\nu(p)^n-\nu(p)^{-n-1}}{1-\nu(p)^{-1}}, n\ge 0.$$ 
Let $\mu$ be the unique unramified character of $E^\times$ such that $\mu|_{F^\times}=\epsilon_{E/F}$. Note that $\mu(p)=-1$. Let $\sigma=\omega_{\mu,\psi^{-1},1}$. Then $\sigma$ is a component of ${\mathrm{Ind}}} \newcommand{\ind}{{\mathrm{ind}}_B^H(\mu)$. Take $\phi$ to be the characteristic function of ${\mathcal {O}}} \newcommand{\CP}{{\mathcal {P}}_E$ and let $W_\sigma(h)=(\omega_{\mu,\psi^{-1},1}(h)\phi)(1)$. Then a direct calculation, or an application of the above Casselman-Shalika formula, shows that $$W_\sigma(t(a))=0, ~|a|> 1,$$ and $$W_\sigma(t(p^n))=(-1)^n|p|_F^n.$$ Let $\eta$ be an unramified character of $E^\times$. Then the condition $\omega_\pi\omega_\sigma\eta|_{E^1}=\nu\mu\eta|_{E^1}=1$ is automatic. Let $\Phi\in{\mathcal {S}}} \newcommand{\CT}{{\mathcal {T}}(F^2)$ be the characteristic function of ${\mathcal {O}}} \newcommand{\CP}{{\mathcal {P}}_F\oplus {\mathcal {O}}} \newcommand{\CP}{{\mathcal {P}}_F$, and let $f(s,h,\Phi,\eta)$ be the element in ${\mathrm{Ind}}} \newcommand{\ind}{{\mathrm{ind}}_B^H(\eta|~|^{s-1/2})$ as before. Write $H=BK$. For $h=ut(a)k$, we have $dh=|a|_E^{-1}du\, da\, dk$. 
We have \begin{align*} &\quad \Psi(s,W_\pi,W_\sigma,\Phi,\eta)\\ &=\int_{K}\int_{E^1\setminus E^\times}W_\pi(t(a)k)W_\sigma(t(a)k)f(s,t(a)k,\Phi,\eta)|a|^{-1}dadk\\ &=\int_{E^1\setminus E^\times} W_\pi(t(a))W_\sigma(t(a))\eta(a)|a|^{s-1}f(s,1,\Phi,\eta)da\\ &=\frac{f(s,1,\Phi,\eta)}{1-\nu(p)^{-1}}\Vol(E^1\setminus {\mathcal {O}}} \newcommand{\CP}{{\mathcal {P}}_E^\times)\left(\sum_{n\ge 0} (-\eta\nu(p))^nq_E^{-ns}-\sum_{n\ge 0} \nu(p)^{-1}(-\eta\nu^{-1}(p))^nq_E^{-ns}\right)\\ &=\Vol(E^1\setminus {\mathcal {O}}} \newcommand{\CP}{{\mathcal {P}}_E^\times) \frac{f(s,1,\Phi,\eta)}{1-\nu(p)^{-1}}\left( \frac{1}{1-\mu\eta\nu(p)q_E^{-s}}-\frac{\nu(p)^{-1}}{1-\mu\eta\nu^{-1}(p)q_E^{-s}}\right)\\ &=\Vol(E^1\setminus {\mathcal {O}}} \newcommand{\CP}{{\mathcal {P}}_E^\times) \frac{f(s,1,\Phi,\eta)}{1-\nu(p)^{-1}}\frac{(1-\nu(p)^{-1})(1-\eta(p)q_E^{-s})}{(1-\mu\eta\nu(p)q_E^{-s})(1-\mu\eta\nu^{-1}(p)q_E^{-s})}\\ &=\Vol(E^{1}\setminus {\mathcal {O}}} \newcommand{\CP}{{\mathcal {P}}_E^\times)f(s,1,\Phi,\eta) L_E(s,\eta)^{-1}L_E(s,\mu\eta \nu)L_E(s,\mu\eta\nu^{-1}). \end{align*} By definition, we have \begin{align*} f(s,1,\Phi,\eta)&=\int_{F^\times}\Phi(0,a)|a|^s_E\eta(a)d^\times a\\ &=\sum_{n\ge 0}|p|_E^{ns}\eta(p)^n\Vol({\mathcal {O}}} \newcommand{\CP}{{\mathcal {P}}_F^\times)\\ &=\Vol({\mathcal {O}}} \newcommand{\CP}{{\mathcal {P}}_F^\times)L_E(s,\eta). \end{align*} Thus, we have $$\Psi(s,W_\pi,W_\sigma,\Phi,\eta)=cL_E(s,\mu\eta\nu)L_E(s,\mu\eta\nu^{-1}),$$ where $c$ is a nonzero constant which depends on the choice of measures. 
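The partial-fraction step in the computation above, which uses $\mu(p)=-1$, can be checked symbolically (a sanity check only, with $t$ standing for $q_E^{-s}$, $e$ for $\eta(p)$ and $\nu$ for $\nu(p)$):

```python
# Verify: 1/(1 - mu*e*nu*t) - (1/nu)/(1 - mu*e*t/nu)
#       = (1 - 1/nu)*(1 - e*t) / ((1 - mu*e*nu*t)*(1 - mu*e*t/nu)),
# where mu = mu(p) = -1, e = eta(p), nu = nu(p), t = q_E^{-s}.
import sympy as sp

e, nu, t = sp.symbols('e nu t', nonzero=True)
mu = -1  # the unramified character mu satisfies mu(p) = -1

lhs = 1/(1 - mu*e*nu*t) - (1/nu)/(1 - mu*e*t/nu)
rhs = (1 - 1/nu)*(1 - e*t) / ((1 - mu*e*nu*t)*(1 - mu*e*t/nu))
assert sp.simplify(lhs - rhs) == 0
```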
\begin{prop} With notation as above, we have $$L(s,\pi,\omega_{\mu,\psi^{-1},1},\eta)=L_E(s,\mu\eta\nu)L_E(s,\mu\eta\nu^{-1}),$$ and $$\epsilon(s,\pi,\omega_{\mu,\psi^{-1},1},\eta)=1.$$ \end{prop} \section{A local converse theorem for ${\mathrm {U}}} \newcommand{\RV}{{\mathrm {V}}(1,1)$} In this section, we slightly modify Baruch's method, see \cite{Ba1} and \cite{Ba2}, to prove a local converse theorem for ${\mathrm {U}}} \newcommand{\RV}{{\mathrm {V}}(1,1)$, using the $\gamma$-factors $\gamma(s,\pi,\omega_{\mu,\psi^{-1},\chi},\eta)$ defined in the last section, in the case where $E/F$ is unramified, or $E/F$ is ramified and the characteristic of the residue field is not 2. For the field extension $E/F$, there is an associated integer $i=i_{E/F}$ defined as follows. If $E/F$ is unramified, then $i_{E/F}=0$. For $E/F$ ramified, take an element $x\in {\mathcal {O}}} \newcommand{\CP}{{\mathcal {P}}_E$ such that ${\mathcal {O}}} \newcommand{\CP}{{\mathcal {P}}_E={\mathcal {O}}} \newcommand{\CP}{{\mathcal {P}}_F[x]$, and define $i=v_E(\bar x-x)$, where $v_E$ is the valuation of $E$. The integer $i$ is the smallest integer such that the ramification group $G_i$ is trivial, see Chapter IV, $\S1$ of \cite{Se}. It is known that $i\ne 1$ if and only if the residue field ${\mathcal {O}}} \newcommand{\CP}{{\mathcal {P}}_F/\CP_F$ has characteristic $ 2$, see Chapter IV, $\S2$ of \cite{Se}. \subsection{Howe vectors} The Howe vectors for the groups $\GL_n$, ${\mathrm {G}}} \newcommand{\RH}{{\mathrm {H}}{\mathrm{Sp}}_4$ and ${\mathrm {U}}} \newcommand{\RV}{{\mathrm {V}}(2,1)$ are defined in Baruch's thesis \cite{Ba2}, and the Howe vectors for ${\mathrm {U}}} \newcommand{\RV}{{\mathrm {V}}(2,1)$ can also be found in \cite{Ba1}. The ${\mathrm {U}}} \newcommand{\RV}{{\mathrm {V}}(1,1)$ version we will use can be defined similarly. Let $p_E$ be a prime element of $E$, ${\mathcal {O}}} \newcommand{\CP}{{\mathcal {P}}_E$ (resp. 
${\mathcal {O}}} \newcommand{\CP}{{\mathcal {P}}_F$) be the integer ring of $E$ (resp. $F$), and $\CP_E$ (resp. $\CP_F$) be the maximal ideal of $E$ (resp. $F$). Let $q_F=\# ({\mathcal {O}}} \newcommand{\CP}{{\mathcal {P}}_F/\CP_F)$ and $q_E=\#({\mathcal {O}}} \newcommand{\CP}{{\mathcal {P}}_E/\CP_E)$. If $E/F$ is unramified, we can take $p_E=p_F$, and we have $q_E=q_F^2$ and $\CP_F=\CP_E\cap {\mathcal {O}}} \newcommand{\CP}{{\mathcal {P}}_F$. If $E/F$ is ramified, we have $p_F=p_E^2 u$ for some $u\in {\mathcal {O}}} \newcommand{\CP}{{\mathcal {P}}_E^\times$, $q_E=q_F$ and $\CP_F=\CP_E^2\cap {\mathcal {O}}} \newcommand{\CP}{{\mathcal {P}}_F$. Let $\psi$ be an unramified additive character of $F$ and let $\psi_E$ be the additive character of $E$ defined by $\psi_E=\psi\circ(\frac{1}{2} \tr_{E/F})$. Thus for $x\in F\subset E$, we have $\psi_E(x)=\psi(x)$. For a positive integer $m$, let $K_m=(1+\textrm{Mat}_{2\times 2}(\CP_E^m))\cap {\mathrm {U}}} \newcommand{\RV}{{\mathrm {V}}(1,1)$ if $E/F$ is unramified, and $K_m=(1+\textrm{Mat}_{2\times 2}(\CP_E^{2m}))\cap {\mathrm {U}}} \newcommand{\RV}{{\mathrm {V}}(1,1)$ if $E/F$ is ramified. We can write $K_m=(1+\textrm{Mat}_{2\times 2}((\CP_F{\mathcal {O}}} \newcommand{\CP}{{\mathcal {P}}_E)^m))\cap {\mathrm {U}}} \newcommand{\RV}{{\mathrm {V}}(1,1)$ uniformly. Set $$d_m=\begin{pmatrix} p_F^{-m}&\\ & p_F^{m}\end{pmatrix}.$$ Let $J_m=d_mK_md_m^{-1}$. For $k=(k_{il})\in K_m$, we have \begin{equation}{\label {kj}} j:=d_mkd_m^{-1}=\begin{pmatrix} k_{11}& p_F^{-2m}k_{12}\\ p_F^{2m}k_{21} & k_{22}\end{pmatrix}.\end{equation} Thus $$J_m=\begin{pmatrix} 1+\CP_E^m& \CP_E^{-m}\\ \CP_E^{3m}& 1+\CP_E^m\end{pmatrix}\cap {\mathrm {U}}} \newcommand{\RV}{{\mathrm {V}}(1,1), \textrm{ if } E/F \textrm{ is unramified},$$ and $$J_m=\begin{pmatrix} 1+\CP_E^{2m}& \CP_E^{-2m}\\ \CP_E^{6m}& 1+\CP_E^{2m}\end{pmatrix}\cap {\mathrm {U}}} \newcommand{\RV}{{\mathrm {V}}(1,1), \textrm{ if } E/F \textrm{ is ramified}.$$ For a subgroup $A$ of $H={\mathrm {U}}} \newcommand{\RV}{{\mathrm {V}}(1,1)$, denote $A_m=A\cap J_m$. 
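The entrywise effect of conjugation by $d_m$ in Eq.(\ref{kj}) can be checked symbolically (a sanity check only; here $p$ stands for $p_F$, which lies in $F$ and is therefore fixed by the Galois conjugation):

```python
# Verify d_m * k * d_m^{-1} = [[k11, p^{-2m} k12], [p^{2m} k21, k22]]
# for d_m = diag(p^{-m}, p^{m}).
import sympy as sp

k11, k12, k21, k22, p = sp.symbols('k11 k12 k21 k22 p', nonzero=True)
m = sp.Symbol('m', integer=True)

d = sp.Matrix([[p**(-m), 0], [0, p**m]])
k = sp.Matrix([[k11, k12], [k21, k22]])

j = d * k * d.inv()
assert j == sp.Matrix([[k11, p**(-2*m)*k12], [p**(2*m)*k21, k22]])
```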
Note that, in any case, we have $$N_m=\begin{pmatrix}1& \CP_F^{-m}\\ & 1 \end{pmatrix}.$$ Let $\tau_m$ be the character of $K_m$ defined by $$\tau_m(k)=\psi_E( p_F^{-2m}k_{1,2}), k=(k_{i,l})\in K_m.$$ By our assumption on $\psi_E$, it is easy to see that $\tau_m$ is indeed a character on $K_m$. Define a character $\psi_m$ on $J_m$ by $$\psi_m(j)=\tau_m(d_m^{-1}jd_m), j\in J_m.$$ We have $\psi_m(j)=\psi_E(j_{1,2})$ for $j=(j_{i,l})\in J_m$. Thus $\psi_m$ and $\psi$ agree on $N_m$. Let $(\pi,V)$ be a $\psi$-generic representation. We fix a Whittaker functional. Let $v\in V$ be such that $W_v(1)=1$. For $m\ge 1$, as in \cite{Ba1} and \cite{Ba2}, we define \begin{equation}{\label{eq32}}v_m=\frac{1}{\Vol(N_m)}\int_{N_m}\psi(n)^{-1}\pi(n)vdn.\end{equation} Let $L\ge 1$ be an integer such that $v$ is fixed by $K_L$. \begin{lem}{\label {lem31}} We have \begin{enumerate} \item $W_{v_m}(1)=1.$ \item If $m\ge L$, $\pi(j)v_m=\psi_m(j)v_m$ for all $j\in J_m$. \item If $k\le m$, then $$v_m=\frac{1}{\Vol(N_m)}\int_{N_m}\psi(n)^{-1}\pi(n)v_kdn.$$ \end{enumerate} \end{lem} \begin{proof} The proof is the same as in the ${\mathrm {U}}} \newcommand{\RV}{{\mathrm {V}}(2,1)$ case, see Lemma 5.2 of \cite{Ba1}. We give some details of the proof of (2) here. Let $$\tilde v_m=\frac{1}{{\mathrm{vol}}} \newcommand{\ab}{{\mathrm{ab}}(J_m)}\int_{J_m}\psi_m(j)^{-1} \pi(j)v dj.$$ We have the Iwahori decomposition $J_m= N_m \cdot \bar B_m$ with unique expressions, where $\bar B_m=\bar B\cap J_m$. For $j=n \bar b$, we have $dj=d\bar b \, dn$. Note that $ \bar B_m \subset K_m\subset K_L$ for $m\ge L$, and thus $v$ is fixed by $\bar B_m$. Then \begin{align*} \tilde v_m&=\frac{1}{{\mathrm{vol}}} \newcommand{\ab}{{\mathrm{ab}}(N_m)}\frac{1}{{\mathrm{vol}}} \newcommand{\ab}{{\mathrm{ab}} (\bar B_m)}\int_{N_m}\int_{\bar B_m} \psi_m(n\bar b)^{-1}\pi(n \bar b)v d\bar bdn\\ &=\frac{1}{{\mathrm{vol}}} \newcommand{\ab}{{\mathrm{ab}}(N_m)}\int_{N_m} \psi_m(n)^{-1}\pi(n)v dn\\ &=v_m. 
\end{align*} It is clear that $\pi(j)\tilde v_m=\psi_m(j)\tilde v_m$. The assertion follows. \end{proof} The vectors $\wpair{v_m}_{m\ge L}$ are called Howe vectors. Let $w=\begin{pmatrix} &1\\ -1& \end{pmatrix}$ be the unique nontrivial Weyl element of ${\mathrm {U}}} \newcommand{\RV}{{\mathrm {V}}(1,1)$. \begin{lem}{\label{lem32}} For $m\ge L$, let $v_m$ be Howe vectors defined as above, and let $W_{v_m}$ be the Whittaker functions associated to $v_m$. Then \begin{enumerate} \item $W_{v_m}(t(a))\ne 0 \textrm{ implies } a\bar a\in 1+\CP_F^m.$ \item $W_{v_m}(t(a)w)\ne 0$ implies $a\bar a\in \CP_F^{-3m}$. \end{enumerate} \end{lem} \begin{proof} (1) For $x\in P_F^{-m}$, we have $n(x)\in N_m$. On the other hand, we have $$t(a)n(x)=n(a\bar ax)t(a).$$ By Lemma \ref{lem31}, we have $$\psi_m(n(x))W_{v_m}(t(a))=\psi(a\bar ax)W_{v_m}(t(a)).$$ Since $\psi_m(n(x))=\psi(x)$, if $W_{v_m}(t(a))\ne 0$, we have $(1-a\bar a)x\in {\mathrm{Ker}}} \newcommand{\Ros}{{\mathrm{Ros}}(\psi)$ for any $x\in \CP_F^{-m}$. Thus $a\bar a\in 1+\CP_F^m$. (2) For $x\in \CP_F^{3m}$, we have $\bar n(-x)\in \bar N_m:=\bar N\cap J_m$. From the relation $$t(a)w\bar n(-x)=n(a\bar a x)t(a)w,$$ we have $$W_{v_m}(t(a)w)=\psi(a\bar a x)W_{v_m}(t(a)w).$$ Thus $W_{v_m}(t(a)w)\ne 0$ implies $\psi(a\bar a x)=1$ for all $x\in \CP_F^{3m}$, i.e., $a\bar a\in \CP_F^{-3m}$. \end{proof} Denote the central character of $\pi$ by $\omega_\pi$. \begin{lem}{\label{lem33}} \begin{enumerate} \item Suppose that $E/F$ is unramified. For $a\in E^\times$, $\Nm_{E/F}(a)=a\bar a\in 1+\CP_F^m$ if and only if $a\in E^1(1+\CP_E^m)$. Thus, for $m\ge L$ $$ W_{v_m}(t(a))=\left\{\begin{array}{lll}\omega_\pi(z), &\textrm{ if } a=zu, \textrm{ for }z\in E^1, u\in 1+\CP_E^m, \\ 0, & \textrm{ otherwise}. \end{array}\right. $$ \item Suppose that $E/F$ is ramified and $m\ge i_{E/F}$. For $a\in E^\times$, $\Nm_{E/F}(a)=a\bar a\in 1+\CP_F^m$ if and only if $a\in E^1(1+\CP_E^{2m-i_{E/F}+1})$. 
Thus, if the residue characteristic is not $2$, then for $m\ge L\ge i_{E/F}=1$, we have $$ W_{v_m}(t(a))=\left\{\begin{array}{lll}\omega_\pi(z), &\textrm{ if } a=zu, \textrm{ for }z\in E^1, u\in 1+\CP_E^{2m}, \\ 0, & \textrm{ otherwise}. \end{array}\right. $$ \end{enumerate} \end{lem} \begin{proof} We first assume that $E/F$ is unramified. It is known that $\Nm_{E/F}(1+\CP_E^m )=1+\CP_F^m$, see \cite{Se} Chapter V $\S2$, for example. It is clear that $\Nm(a)\in 1+\CP_F^m$ for $a\in E^1(1+\CP_E^m)$. On the other hand, if $a\bar a\in 1+\CP_F^m$, we can find $b\in 1+\CP_E^m$ such that $a\bar a= \Nm_{E/F}(b)=b\bar b$. Thus $a/b\in E^1$ and $a=b\cdot (a/b)\in E^1(1+\CP_E^m)$. If $a\notin E^1(1+\CP_E^m)$, then $a\bar a \notin 1+\CP_F^m$, and thus $W_{v_m}(t(a))=0$ by Lemma \ref{lem32} (1). If $a\in E^1(1+\CP_E^m)$, we write $a=zu$. Then $t(z)$ is in the center of $H$ and $t(u)\in J_m$ by the definition of $J_m$. By Lemma \ref{lem31} (2), we get $$W_{v_m}(t(a))=\omega_{\pi}(z)W_{v_m}(t(u))=\omega_{\pi}(z).$$ If $E/F$ is ramified, then it is known that, for $m\ge i_{E/F}$, we have $\Nm_{E/F} (1+\CP_E^{2m-i_{E/F}+1})=1+\CP_F^m,$ see Corollary 3 of $\S3$, Chapter V of \cite{Se}. The same argument as above shows that $a\bar a\in 1+\CP_F^{m}$ if and only if $a\in E^1( 1+\CP_E^{2m-i_{E/F}+1})$. Now we assume the residue characteristic is not 2, and hence $i_{E/F}=1$. Thus $a\bar a\in 1+\CP_F^{m}$ if and only if $a\in E^1( 1+\CP_E^{2m})$. If $a\notin E^1(1+\CP_E^{2m})$, then $a\bar a \notin 1+\CP_F^m$, and thus $W_{v_m}(t(a))=0$ by Lemma \ref{lem32} (1). If $a\in E^1(1+\CP_E^{2m})$, write $a=zu$ for $z\in E^1 $ and $u\in 1+\CP_E^{2m}$. By the definition of $J_m$, we have $t(u)\in J_m$. Thus we get $$W_{v_m}(t(a))=\omega_\pi(z)W_{v_m}(t(u))=\omega_\pi(z),$$ by Lemma \ref{lem31} (2) again. \end{proof} In the rest of this section, we assume that $E/F$ is unramified, or that $E/F$ is ramified and the residue characteristic is not 2. 
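The elementary commutation relations $t(a)n(x)=n(a\bar a x)t(a)$ and $t(a)w\bar n(-x)=n(a\bar a x)t(a)w$ used in the proofs above can be verified symbolically (a sanity check only, assuming the parametrization $t(a)={\mathrm {diag}}(a,\bar a^{-1})$ and modeling the conjugate $\bar a$ by an independent symbol \texttt{ab}):

```python
# Verify t(a) n(x) = n(a*ab*x) t(a) and t(a) w nbar(-x) = n(a*ab*x) t(a) w,
# with t(a) = diag(a, 1/ab), w = [[0,1],[-1,0]]; ab models the conjugate of a.
import sympy as sp

a, ab, x = sp.symbols('a ab x', nonzero=True)

t = sp.Matrix([[a, 0], [0, 1/ab]])
w = sp.Matrix([[0, 1], [-1, 0]])
n = lambda y: sp.Matrix([[1, y], [0, 1]])      # upper unipotent n(y)
nbar = lambda y: sp.Matrix([[1, 0], [y, 1]])   # lower unipotent nbar(y)

assert sp.simplify(t*n(x) - n(a*ab*x)*t) == sp.zeros(2, 2)
assert sp.simplify(t*w*nbar(-x) - n(a*ab*x)*t*w) == sp.zeros(2, 2)
```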
We fix two irreducible smooth $\psi$-generic representations $(\pi,V_\pi)$ and $(\pi',V_{\pi'})$ of ${\mathrm {U}}} \newcommand{\RV}{{\mathrm {V}}(1,1)$ with the same central character. Fix $v\in V_\pi$ and $v'\in V_{\pi'}$ with $W_v(1)=1=W_{v'}(1)$; for a positive integer $m$, we can then define the Howe vectors $v_m$ and $v_m'$. Let $L\ge 1$ be an integer such that $v$ and $v'$ are fixed by $K_L$ under the action of $\pi$ and $\pi'$ respectively. \begin{prop}{\label{prop34}} Let $n_0\in N$. Then \begin{enumerate} \item $W_{v_m}(g)=W_{v_m'}(g)$ for all $g\in B, m\ge L;$ \item If $n_0\in N_m$, then $W_{v_m}(twn_0)=\psi(n_0)W_{v_m}(tw)$ for all $t\in T, m\ge L;$ \item If $n_0\notin N_m$, then $W_{v_m}(twn_0)=W_{v'_m}(twn_0)$ for all $t\in T$, $m\ge 3L.$ \end{enumerate} \end{prop} \begin{proof} (1) Since $B=NT$, it suffices to show that $W_{v_m}(t)=W_{v_m'}(t)$ for all $t\in T$. Write $t=t(a)$ with $a\in E^\times$. This follows from Lemma \ref{lem33} directly. (2) For $n_0\in N_m$, we have $\pi(n_0)v_m=\psi(n_0)v_m$ by Lemma \ref{lem31}. The assertion is clear. (3) By Lemma \ref{lem31} (3), we have \begin{equation}{\label{eq34a}}W_{v_m}(twn_0)=\Vol(N_m)^{-1}\int_{N_m}W_{v_L}(twn_0n)\psi^{-1}(n)dn.\end{equation} We have a similar relation for $W_{v_m'}$. Thus it suffices to show $W_{v_L}(twn_0n)=W_{v_L'}(twn_0n)$ for all $n\in N_m$. Let $n\in N_m$ and $n'=n_0n$. Since $n_0\notin N_m,n\in N_m$, we get $n'\notin N_m$. Suppose $n'=n(x)$. Then $n'\notin N_m$ is equivalent to $x\notin \CP_F^{-m}$, which implies $x^{-1}\in \CP_F^m$. We have $$\begin{pmatrix} &1 \\ -1& \end{pmatrix}\begin{pmatrix} 1&x \\ &1 \end{pmatrix}=\begin{pmatrix}- x^{-1}& 1\\ &-x \end{pmatrix}\begin{pmatrix} 1& \\ x^{-1} &1\end{pmatrix}.$$ Let $b=\begin{pmatrix}- x^{-1}& 1\\ &-x \end{pmatrix},j=\begin{pmatrix} 1& \\ x^{-1} &1\end{pmatrix}$. Then $b\in B$. Since $m\ge 3L$, we have $x^{-1}\in \CP_F^m\subset \CP_F^{3L}$, i.e., $ j\in J_L$. 
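As a sanity check (not part of the argument), the matrix decomposition displayed above can be verified symbolically:

```python
# Verify w n(x) = b j, i.e.
# [[0,1],[-1,0]] * [[1,x],[0,1]] = [[-1/x,1],[0,-x]] * [[1,0],[1/x,1]].
import sympy as sp

x = sp.symbols('x', nonzero=True)

w = sp.Matrix([[0, 1], [-1, 0]])
n = sp.Matrix([[1, x], [0, 1]])
b = sp.Matrix([[-1/x, 1], [0, -x]])
j = sp.Matrix([[1, 0], [1/x, 1]])

assert sp.simplify(w*n - b*j) == sp.zeros(2, 2)
```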
Thus by Lemma \ref{lem31}, we get $$W_{v_L}(twn')=W_{v_L}(tbj)=W_{v_L}(tb).$$ Since $tb\in B$, we have $W_{v_L}(tb)=W_{v_L'}(tb)$ by Part (1). Thus $W_{v_L}(twn')=W_{v_L'}(twn')$. This completes the proof. \end{proof} \subsection{Howe vectors for Weil representations}{\label{sec32}} Let $\chi$ be a character of $E^1$; we define $\deg(\chi)$ to be the smallest integer $i$ such that $\chi|_{E^1\cap(1+\CP_E^i)}= 1$, i.e., the integer $i$ such that $\chi|_{E^1\cap(1+\CP^i_E)}=1$ but $\chi|_{E^1\cap (1+\CP_E^{i-1})}\ne 1$. If $\chi=\chi_0$ is the trivial character, we define $\deg(\chi_0)=0$. For a character $\chi$ of $E^1$ and $a\in E^\times$, we consider the integral \begin{equation}F_\chi(a)=\int_{E^1}\psi(\tr_{E/F}(au))\chi(u)du.\label{eq34*}\end{equation} Note that as a character on $E$, $\psi(\tr_{E/F}(\cdot)) $ is unramified if $E/F$ is unramified, and has conductor $\fD_{E/F}^{-1}$ if $E/F$ is ramified. In any case, $\psi(\tr_{E/F}(\cdot))$ is trivial on ${\mathcal {O}}} \newcommand{\CP}{{\mathcal {P}}_E$. \begin{lem}{\label{lem35}} Let $\chi_0$ be the trivial character of $E^1$, and $a\in E^\times$ with $|a|\le q_E^n$. If $n\le 0$, then $$F_{\chi_0}(a)\ne 0.$$ \end{lem} \begin{proof} If $n\le 0$, we get $a\in {\mathcal {O}}} \newcommand{\CP}{{\mathcal {P}}_E$, thus $\psi(\tr_{E/F}(au))=1$ for all $u\in E^1$. Then $$F_{\chi_0}(a)=\int_{E^1}\psi(\tr_{E/F}(au))du={\mathrm{vol}}} \newcommand{\ab}{{\mathrm{ab}}(E^1)\ne 0.$$ \end{proof} \begin{lem}{\label{lem36}} Suppose that $a\in E^\times$ with $|a|_E=q_E^n$, and $\chi$ is a character of $E^1$ with $h=\deg(\chi)\ge 1.$ \begin{enumerate} \item If $h> \max\{n,1\}$, we have $F_\chi(a)=0$. \item If $n>0$, there is a character $\chi$ of $E^1$ with $\deg(\chi)\le n$ such that $F_\chi(a)\ne 0$. \end{enumerate} \end{lem} \begin{proof} (1) We first suppose that $n\le 0$; then $\psi(\tr_{E/F}(au))=1$ for all $u\in E^1$ and thus $F_\chi(a)=\int_{E^1}\chi(u)du=0$ since $\chi$ is nontrivial on $E^1$. 
Next, we assume that $1\le n <\deg(\chi)$. We have $$F_\chi(a)=\sum_{u_0\in E^1/(E^1\cap (1+\CP_E^n))}\int_{u_1\in E^1\cap(1+\CP_E^n)}\psi(\tr_{E/F}(au_0u_1))\chi(u_0u_1)du_1.$$ For $u_0\in E^1,u_1\in E^1\cap(1+\CP_E^n)$, we have $au_0u_1-au_0=au_0(u_1-1)\in {\mathcal {O}}} \newcommand{\CP}{{\mathcal {P}}_E$, since $a\in \CP_E^{-n}$. Thus $\psi(\tr_{E/F}(au_0u_1))=\psi(\tr_{E/F}(au_0))$. Hence $$F_\chi(a)=\sum_{u_0\in E^1/(E^1\cap(1+\CP_E^n))}\psi(\tr_{E/F}(au_0))\chi(u_0)\int_{E^1\cap (1+\CP_E^n)}\chi(u_1)du_1.$$ Since $\deg(\chi)> n$, we have $\chi|_{E^1\cap(1+\CP_E^n)}\ne 1$, and thus $\int_{E^1\cap (1+\CP_E^n)}\chi(u_1)du_1=0 $. Thus $F_\chi(a)=0$. (2) Consider the function $u\mapsto\psi(\tr_{E/F}(au))$ on $E^1$. Since this function is nonzero, there must be a character $\chi$ of $E^1$ such that the Fourier coefficient $$F_\chi(a)=\int_{E^1}\psi(\tr_{E/F}(au))\chi(u)du$$ is nonzero. By (1), such a character $\chi$ must satisfy $\deg(\chi)\le n$. \end{proof} We will consider the Howe vectors for the representation $(\omega_{\mu,\psi^{-1},\chi},{\mathcal {S}}} \newcommand{\CT}{{\mathcal {T}}(E,\chi))$ for a character $\mu$ of $E^\times$ such that $\mu|_{F^\times}=\epsilon_{E/F}$, and a character $\chi$ of $E^1$. If $E/F$ is unramified and $m$ is an integer such that $m\ge \deg(\chi)$, we define $\phi^{m,\chi}\in {\mathcal {S}}} \newcommand{\CT}{{\mathcal {T}}(E,\chi)$ by $\supp(\phi^{m,\chi})=E^1(1+\CP_E^m)$, and $$\phi^{m,\chi}(zu)=\chi(z), z\in E^1, u\in 1+\CP_E^m.$$ Note that $\phi^{m,\chi}$ is well-defined since $\chi|_{E^1\cap(1+\CP_E^m)}=1.$ If $E/F$ is ramified with the residue characteristic not 2, for an integer $m$ with $2m\ge \deg(\chi)$, we define $\phi^{m,\chi}\in {\mathcal {S}}} \newcommand{\CT}{{\mathcal {T}}(E,\chi)$ by $\supp(\phi^{m,\chi})=E^1(1+\CP_E^{2m})$, and $$\phi^{m,\chi}(zu)=\chi(z), z\in E^1, u\in 1+\CP_E^{2m}.$$ Note that $\phi^{m,\chi}$ is well-defined since $\chi|_{E^1\cap(1+\CP_E^{2m})}=1.$ Let $\fD_{E/F}=\CP_E^d$ be the different of $E/F$, for some $d\ge 0$. 
\begin{prop}{\label{prop37}} Let $m$ be an integer such that $m\ge d$. \begin{enumerate} \item For $n\in N_m$, we have $$\omega_{\mu,\psi^{-1},\chi}(n)\phi^{m,\chi}=\psi^{-1}(n)\phi^{m,\chi}.$$ \item For $\bar n \in \bar N_{m}=\bar N\cap J_m$, we have $$\omega_{\mu,\psi^{-1},\chi}(\bar n)\phi^{m,\chi}=\phi^{m,\chi}.$$ \end{enumerate} \end{prop} \begin{proof} (1) For $n=n(b)\in N_m$, we have $$\omega_{\mu,\psi^{-1},\chi}(n(b))\phi^{m,\chi}(x) =\psi^{-1}(bx\bar x)\phi^{m,\chi}(x).$$ For $x\in \supp(\phi^{m,\chi})$, we have $x\bar x\in 1+\CP_F^m$ in either case by Lemma \ref{lem33}. For $n(b)\in N_m$, we have $b\in \CP_F^{-m}$, and thus $bx\bar x -b\in {\mathcal {O}}} \newcommand{\CP}{{\mathcal {P}}_F$. Since $\psi$ is unramified, we get $\psi^{-1}(bx\bar x)=\psi^{-1}(b)$. Thus $$\omega_{\mu,\psi^{-1},\chi}(n(b))\phi^{m,\chi}=\psi^{-1}(b)\phi^{m,\chi}.$$ (2) For simplicity, we will write $\omega$ for $\omega_{\mu,\psi^{-1},\chi}$. For $\bar n\in \bar N_m$, we can write $\bar n= w^{-1}n(b)w$ with $b\in \CP_F^{3m}$. Let $\phi'=\omega(w)\phi^{m,\chi}$. We have \begin{align*} \phi'(x)=\gamma_{\psi^{-1}}\int_{E}\phi^{m,\chi}(y)\psi^{-1}(\tr(\bar x y))dy. \end{align*} It is clear that $\supp \phi' \subset \CP_E^{-m}$ in the unramified case and $\supp \phi'\subset \CP_E^{-2m-d}$ in the ramified case. For $x\in \supp \phi'$, we have $bx\bar x\in {\mathcal {O}}} \newcommand{\CP}{{\mathcal {P}}_F$ in either case since $m\ge d$. Thus \begin{align*} \omega(n(b))\phi'(x)&=\psi^{-1}(b x\bar x)\phi'(x)=\phi'(x), \end{align*} i.e., $\omega(n(b))$ fixes $\phi'$. Then $$\omega(\bar n)\phi^{m,\chi}=\omega(w^{-1})\omega(n(b))\omega(w)\phi^{m,\chi}$$ $$=\omega(w^{-1}) \omega(n(b)) \phi'=\omega(w^{-1}) \phi'=\omega(w^{-1})\omega(w)\phi^{m,\chi}=\phi^{m,\chi}.$$ This completes the proof. \end{proof} \begin{prop}{\label{prop38}} Let $m$ be a positive integer. \begin{enumerate} \item Suppose that $E/F$ is unramified. 
Then for $a\in \CP_E^{-m}$, we have $$(\omega_{\mu,\psi^{-1},\chi}(t(a)w)\phi^{m,\chi})(1)=\mu(a)|a|^{1/2}\gamma_{\psi^{-1}}{\mathrm{vol}}} \newcommand{\ab}{{\mathrm{ab}} (1+\CP_E^m){\mathrm{vol}}} \newcommand{\ab}{{\mathrm{ab}}(E^1\cap (1+\CP_E^m))^{-1} F_\chi(\bar a),$$ where $\gamma_{\psi^{-1}}$ is the Weil index, and $F_\chi(\bar a)= \int_{E^1} \chi(z)\psi(\tr( \bar a z ))dz$, which is defined in Eq.$(\ref{eq34*})$. \item Suppose that $E/F$ is ramified. Then for $a\in \CP_E^{-2m}$, we have $$(\omega_{\mu,\psi^{-1},\chi}(t(a)w)\phi^{m,\chi})(1)=\mu(a)|a|^{1/2}\gamma_{\psi^{-1}}{\mathrm{vol}}} \newcommand{\ab}{{\mathrm{ab}} (1+\CP_E^{2m}){\mathrm{vol}}} \newcommand{\ab}{{\mathrm{ab}}(E^1\cap (1+\CP_E^{2m}))^{-1} F_\chi(\bar a),$$ where $\gamma_{\psi^{-1}}$ is the Weil index, and $F_\chi(\bar a)= \int_{E^1} \chi(z)\psi(\tr( \bar a z ))dz$, which is defined in Eq.$(\ref{eq34*})$. \end{enumerate} \end{prop} \begin{proof} We only consider the unramified case. The ramified case can be done similarly. We have \begin{align*} &\quad \omega(t(a)w)\phi^{m,\chi}(1)\\ &=\mu(a)|a|^{1/2}\omega(w)\phi^{m,\chi}(a)\\ &= \mu(a)|a|^{1/2}\gamma_{\psi^{-1}}\int_E \phi^{m,\chi}(y)\psi^{-1}(-\tr(\bar a y))dy\\ &=\mu(a)|a|^{1/2}\gamma_{\psi^{-1}}\int_{(E^1\cap (1+\CP_E^m))\setminus (1+\CP_E^m)} \int_{E^1} \chi(z)\psi(\tr( \bar a z u))dz du. \end{align*} Since $a\in \CP_E^{-m}$, for $z\in E^1, u\in 1+\CP_E^m$, we have $ \bar a z u-\bar a z\in {\mathcal {O}}} \newcommand{\CP}{{\mathcal {P}}_E$. 
Thus \begin{align*} &\quad \omega(t(a)w)\phi^{m,\chi}(1)\\ &=\mu(a)|a|^{1/2}\gamma_{\psi^{-1}}{\mathrm{vol}}} \newcommand{\ab}{{\mathrm{ab}} (1+\CP_E^m){\mathrm{vol}}} \newcommand{\ab}{{\mathrm{ab}}(E^1\cap (1+\CP_E^m))^{-1} \int_{E^1} \chi(z)\psi(\tr( \bar a z ))dz \\ &=\mu(a)|a|^{1/2}\gamma_{\psi^{-1}}{\mathrm{vol}}} \newcommand{\ab}{{\mathrm{ab}} (1+\CP_E^m){\mathrm{vol}}} \newcommand{\ab}{{\mathrm{ab}}(E^1\cap (1+\CP_E^m))^{-1} F_\chi(\bar a), \end{align*} where $F_\chi(\bar a)= \int_{E^1} \chi(z)\psi(\tr( \bar a z ))dz$. This completes the proof of (1).\end{proof} \subsection{A local converse theorem}\label{sec33} The following functions and computations are done by Baruch, see page 335 of \cite{Ba1}. For a set $A\subset F$, let $\Upsilon_A$ be the characteristic function of $A$. Let $$\Phi_{i,l}(x,y)=\Upsilon_{\CP_F^i}(x)\Upsilon_{1+\CP_F^l}(y).$$ Then $$\hat \Phi_{i,l}(x,y)=q_F^{-i-l}\psi(-x)\Upsilon_{\CP_F^{-l}}(x)\Upsilon_{\CP_F^{-i}}(y).$$ For $x\in F$, recall we denote $$n(x)=\begin{pmatrix} 1& x\\ & 1\end{pmatrix},$$ and $$\bar n(x):=\begin{pmatrix} 1& \\ x& 1\end{pmatrix}.$$ Let $\eta$ be a quasi-character of $E^\times$, and let $l=l(\eta)=\deg(\eta)$ be the conductor of $\eta$, i.e., $\eta(1+\CP_E^{l})=1$ but $\eta(1+\CP_E^{l-1})\ne 1$. We have \begin{equation}{\label{eq34}}f(\bar n(x),\Phi_{i,l},\eta,s)=\left\{\begin{array}{lll} q_F^{-l}& \textrm{ if } |x|_F\le q_F^{-i}, \\ 0 & \textrm{ otherwise;} \end{array} \right.\end{equation} and \begin{equation}{\label{eq35}} f(wn(x), \hat \Phi_{i,l},\eta^*, 1-s)=\left\{\begin{array}{lll} q_F^{-l-i}\int_{|y|_F\le q_F^l}\psi_F(y)\eta^{-1}(y)|y|^{1-s} d^*y& \textrm{ if } |x|_F\le q_F^{i-l}, \\ q_F^{-l-i}\int_{|y|_F\le q_F^i|x|_F}\psi_F(y)\eta^{-1}(y)|y|^{1-s} d^*y, &\textrm{ if } |x|_F> q_F^{i-l}, \end{array} \right. \end{equation} where the equality in (\ref{eq35}) is in the sense of analytic continuation. 
Let $$c(s,\eta,\psi)=\int_{|y|\le q_F^{l(\eta)}}\eta^{-1}(y)\psi_F(y)|y|^sd^*y,$$ which has no zeros or poles except for finitely many values of $q_F^{-s}$, see \cite{Ba1} and the references given there. \begin{thm}{\label{thm39}} Let $E/F$ be a quadratic extension of $p$-adic fields and $H={\mathrm {U}}} \newcommand{\RV}{{\mathrm {V}}(1,1)(F)$. We exclude the case when $E/F$ is ramified and the residue characteristic is $2$. Let $\psi$ be an additive unramified character of $F$ and $\mu$ a character of $E^\times$ such that $\mu|_{F^\times}=\epsilon_{E/F}$. Let $\pi,\pi'$ be two $\psi$-generic irreducible representations of $H$ with the same central character. \begin{enumerate} \item If $\gamma(s,\pi, \omega_{\mu,\psi^{-1},\chi},\eta)=\gamma(s,\pi', \omega_{\mu,\psi^{-1},\chi},\eta)$ for all characters $\chi$ of $E^1$, and all quasi-characters $\eta$ with $$\omega_\pi\cdot \mu\chi\cdot \eta|_{E^1}=1,$$ where $\omega_\pi$ is the central character of $\pi$, then $\pi$ is isomorphic to $\pi'$. \item For a character $ \chi$ of $E^1$, there exists an integer $l_0=l_0(\pi,\pi', \mu,\chi)$ such that for any quasi-character $\eta$ of $E^\times$ which satisfies $$\omega_\pi\cdot \mu\chi\cdot \eta|_{E^1}=1,$$ and $l(\eta)>l_0$, we have $\gamma(s,\pi, \omega_{\mu,\psi^{-1},\chi}, \eta)=\gamma(s,\pi',\omega_{\mu,\psi^{-1},\chi}, \eta)$. \end{enumerate} \end{thm} \begin{proof} In the following, we assume that $E/F$ is unramified. The ramified case can be dealt with similarly. Write $\omega_{\mu,\psi^{-1},\chi}$ as $\omega$ for simplicity. We fix vectors $v\in V, v'\in V'$ such that $W_{v}(1)=1=W_{v'}(1)$. Let $L_1$ be an even integer such that $v,v'$ are fixed by $K_{L_1}$. Let $L=3L_1$ and $C=3L/2$, let $\eta$ be a quasi-character of $E^\times$ with $l=\deg(\eta)$, let $\chi$ be a character of $E^1$, and let $m$ be an integer such that $m\ge \max\wpair{C,\deg(\mu), l, \deg(\chi)}$. 
Since $\deg(\chi)\le m$, we can consider the function $\phi^{m,\chi}\in {\mathcal {S}}(E,\chi)$ defined in $\S$\ref{sec32}, i.e., $\phi^{m,\chi}(zu)=\chi(z)$ for $z\in E^1$ and $u\in 1+\CP_E^m$, and zero otherwise. Let $i$ be an integer such that $i\ge 3m> m+l$; we then have a function $\Phi_{i,l}\in {\mathcal {S}}(F^2)$. We now compute the integral $$\Psi(s,W_{v_m},\phi^{m,\chi},\Phi_{i,l},\eta)=\int_{R\setminus H}W_{v_m}(h)\omega(h)\phi^{m,\chi}(1)f(s,h,\Phi_{i,l},\eta)dh.$$ We will take the integral $\Psi(s,W_{v_m}, \phi^{m,\chi},\Phi_{i,l},\eta)$ over the open dense subset $N\setminus NT\bar N$ of $N\setminus H$. For $h=n(y)t(a)\bar n(x)\in NT\bar N$, the Haar measure can be taken as $dh=|a|_E^{-1}dyd^*a dx$. Thus by Eq.(\ref{eq34}), we get \begin{align*} &\qquad \Psi(s,W_{v_m},\phi^{m,\chi},\Phi_{i,l},\eta)\\ &=\int_{E^1\setminus E^\times}\int_F W_{v_m}(t(a)\bar n(x))(\omega(t(a)\bar n(x))\phi^{m,\chi})(1)\eta(a)|a|^{s-1}f(s,\bar n(x),\Phi_{i,l},\eta)dxd^*a\\ &=q_{F}^{-l}\int_{E^1\setminus E^\times}\int_{|x|_F\le q_F^{-i}}W_{v_m}(t(a)\bar n(x))(\omega(t(a)\bar n(x))\phi^{m,\chi})(1)\eta(a)|a|^{s-1}dxd^*a. \end{align*} Since $i\ge 3m$, $|x|_F\le q_F^{-i}$ implies that $\bar n(x)\in J_m$. Thus by Lemma \ref{lem31} and Proposition \ref{prop37}, we have $$W_{v_m}(t(a)\bar n(x))= W_{v_m}(t(a)), \quad \omega(t(a)\bar n(x))\phi^{m,\chi}=\omega(t(a))\phi^{m,\chi}.$$ On the other hand, by the definition of $\phi^{m,\chi}$, we have $$\omega(t(a))\phi^{m,\chi}(1)=\mu(a)|a|^{1/2}\phi^{m,\chi}(a)=\left\{\begin{array}{ll}\mu(zu)\chi(z), & \textrm{ if } a=zu, \textrm{ for } z\in E^1, u\in 1+\CP_E^m, \\ 0, & \textrm{ otherwise. }\end{array} \right.
$$ Combining these facts and Lemma \ref{lem33}, we get \begin{align*} &\quad \Psi(s,W_{v_m},\phi^{m,\chi},\Phi_{i,l},\eta)\\ &=q_F^{-l-i} \int_{E^1\setminus E^\times}W_{v_m}(t(a))(\omega(t(a))\phi^{m,\chi})(1)\eta(a)|a|^{s-1}d^*a\\ &=q_F^{-l-i}\int_{E^1\setminus E^1(1+\CP_E^m)}W_{v_m}(t(a))(\omega(t(a))\phi^{m,\chi})(1)\eta(a)d^*a\\ &=q_F^{-l-i} \int_{E^1\cap(1+\CP_E^m)\setminus (1+\CP_E^m)}W_{v_m}(t(a))(\omega(t(a))\phi^{m,\chi})(1)\eta(a)d^*a\\ &=q_F^{-l-i} \int_{E^1\cap(1+\CP_E^m)\setminus (1+\CP_E^m)}\mu(a)\eta(a)d^*a. \end{align*} Since $m\ge \deg(\mu\eta)$, we have \begin{align} &\quad \Psi(s,W_{v_m},\phi^{m,\chi},\Phi_{i,l},\eta)\label{eq36}\\ &=q_F^{-l-i} \int_{E^1\cap (1+\CP_E^m)\setminus (1+\CP_E^m)}\mu(a)\eta(a)d^*a \nonumber\\ &=q_F^{-l-i}{\mathrm{vol}}(1+\CP_E^m){\mathrm{vol}}(E^1\cap (1+\CP_E^m))^{-1},\nonumber \end{align} which is a nonzero constant. Note that this finishes the proof of Proposition \ref{prop22} when $E/F$ is unramified. The same calculation works for $\Psi(s,W_{v'_m},\phi^{m,\chi},\Phi_{i,l},\eta)$. Thus, we get \begin{equation}{\label{eq37}}\Psi(s,W_{v_m'},\phi^{m,\chi},\Phi_{i,l},\eta)=\Psi(s,W_{v_m},\phi^{m,\chi},\Phi_{i,l},\eta)=q_F^{-l-i}{\mathrm{vol}}(1+\CP_E^m){\mathrm{vol}}(E^1\cap (1+\CP_E^m))^{-1},\end{equation} for $i\ge 3m$ and $m\ge \max\wpair{C,\deg(\mu), l=\deg(\eta),\deg(\chi)}$. Let $W=W_{v_m}$ or $W_{v_m'}$. We compute the integral $\Psi(1-s,W,\phi^{m,\chi}, \hat \Phi_{i,l},\eta^*)$ over the dense subset $N\setminus NTwN$ of $N\setminus H$. Since $i\ge m+l$, $|x|_F\le q_F^{m}$ implies that $|x|_F\le q^{i-l}_F$.
Then by Eq.(\ref{eq35}), we have \begin{align} &\quad\Psi(1-s,W,\phi^{m,\chi}, \hat \Phi_{i,l},\eta^*) \nonumber\\ &=\int_{E^1\setminus E^\times}\int_{F }W(t(a)wn(x))(\omega(t(a)wn(x) )\phi^{m,\chi})(1) f(1-s, t(a)wn(x), \hat \Phi_{i,l},\eta^*)dxd^*a\nonumber\\ &=q_F^{-l-i}c(1-s, \eta, \psi)\int_{E^1\setminus E^\times}\int_{|x|_F\le q_F^m}W(t(a)wn(x) )(\omega(t(a)wn(x))\phi^{m,\chi})(1)\eta^*(a)|a|^{-s}dxd^*a \label{eq38}\\ &+\int_{E^1\setminus E^\times}\int_{|x|_F>q_F^m} W(t(a)wn(x) )(\omega(t(a)wn(x))\phi^{m,\chi})(1) f(1-s, t(a)wn(x), \hat \Phi_{i,l},\eta^*)dxd^*a. \label{eq39} \end{align} By Proposition \ref{prop34}, we have $W_{v_m}(t(a)wn(x))=W_{v_m'}(t(a)wn(x))$ for $|x|_F>q_F^m$. Thus the expressions (\ref{eq39}) for $W=W_{v_m}$ and $W=W_{v'_m}$ are the same. For $|x|_F\le q_F^m$, we have $n(x)\in N_m$. By Lemma \ref{lem32} and Proposition \ref{prop37}, we have $$W_{v_m}(t(a)wn(x))=\psi(x)W_{v_m}(t(a)w), \textrm{ and }\omega(t(a)wn(x))\phi^{m,\chi}(1)=\psi^{-1}(x)\omega(t(a)w)\phi^{m,\chi}(1).$$ Thus the expression (\ref{eq38}) can be simplified to $$q_F^{-l-i+m}c(1-s, \eta, \psi)\int_{E^1\setminus E^\times}W(t(a)w)\omega(t(a)w)\phi^{m,\chi}(1)\eta^*(a)|a|^{-s}d^*a. $$ By the above discussion, we get \begin{align} &\Psi(1-s,W_{v_m},\phi^{m,\chi},\hat \Phi_{i,l},\eta^*)-\Psi(1-s,W_{v_m'},\phi^{m,\chi},\hat \Phi_{i,l},\eta^* )\label{eq310}\\ =&q_F^{-l-i+m}c(1-s, \eta, \psi)\int_{E^1\setminus E^\times}(W_{v_m}(t(a)w)-W_{v_m'}(t(a)w))\omega(t(a)w)\phi^{m,\chi}(1)\eta^*(a)|a|^{-s}d^*a.
\nonumber \end{align} By the local functional equation, Eq.(\ref{eq37}) and Eq.(\ref{eq310}), we get \begin{align} &d_m(s,\eta,\psi)(\gamma(s,\pi,\omega_{\mu,\psi^{-1},\chi},\eta)-\gamma(s,\pi',\omega_{\mu,\psi^{-1},\chi},\eta)) \label{eq311}\\ =&\int_{E^1\setminus E^\times}(W_{v_m}(t(a)w)-W_{v_m'}(t(a)w))\omega(t(a)w)\phi^{m,\chi}(1)\eta^*(a)|a|^{-s}d^*a,\nonumber \end{align} where $d_m(s,\eta,\psi)=q_F^{-m}c(1-s,\eta,\psi)^{-1}{\mathrm{vol}}(1+\CP_E^m){\mathrm{vol}}(E^1\cap (1+\CP_E^m))^{-1}$. Since $m\ge C\ge L$, by Lemma \ref{lem31} (3), we have $$W_{v_m}(t(a)w)-W_{v_m'}(t(a)w)=\frac{1}{{\mathrm{vol}}(N_m)}\int_{N_m}\psi^{-1}(n)(W_{v_L}(t(a)wn)-W_{v'_L}(t(a)wn))dn.$$ By Proposition \ref{prop34}, if $n\in N_m-N_L$, we have $W_{v_L}(t(a)wn)=W_{v_L'}(t(a)wn)$. Thus \begin{align} &\quad W_{v_m}(t(a)w)-W_{v_m'}(t(a)w)\label{eq312}\\ &=\frac{1}{{\mathrm{vol}}(N_m)}\int_{N_m}\psi^{-1}(n)(W_{v_L}(t(a)wn)-W_{v'_L}(t(a)wn))dn \nonumber\\ &=\frac{1}{{\mathrm{vol}}(N_m)}\int_{N_L}\psi^{-1}(n)(W_{v_L}(t(a)wn)-W_{v'_L}(t(a)wn))dn \nonumber\\ &=\frac{{\mathrm{vol}}(N_L)}{{\mathrm{vol}}(N_m)}(W_{v_L}(t(a)w)-W_{v_L'}(t(a)w)).\nonumber \end{align} If we combine Eq.(\ref{eq311}) and Eq.(\ref{eq312}), we get \begin{align} &d_{m,L}(s,\eta,\psi)(\gamma(s,\pi,\omega_{\mu,\psi^{-1},\chi},\eta)-\gamma(s,\pi',\omega_{\mu,\psi^{-1},\chi},\eta)) \label{eq313}\\ =&\int_{E^1\setminus E^\times}(W_{v_L}(t(a)w)-W_{v_L'}(t(a)w))\omega(t(a)w)\phi^{m,\chi}(1)\eta^*(a)|a|^{-s}d^*a,\nonumber \end{align} with $d_{m,L}(s,\eta,\psi)=c(s,\eta,\psi)^{-1}{\mathrm{vol}}(N_L)^{-1}$, for $m\ge \max\wpair{C,\deg(\mu),l=\deg(\eta), \deg(\chi)}$.
By Lemma \ref{lem32} (2), we have $$\supp(W_{v_L}(t(a)w)-W_{v'_L}(t(a)w))\subset \CP_E^{-C}.$$ By Proposition \ref{prop38}, for $a\in \CP_E^{-C}\subset \CP_E^{-m}$ (note that $m\ge C$ by assumption), we have $$\omega(t(a)w)\phi^{m,\chi}(1)=\mu(a)|a|^{1/2}{\mathrm{vol}}(1+\CP_E^{m}){\mathrm{vol}}(E^1\cap (1+\CP_E^m))^{-1}\gamma_{\psi^{-1}}F_\chi(\bar a).$$ Thus Eq.(\ref{eq313}) can be written as \begin{align} &D_L(s,\eta,\psi)(\gamma(s,\pi,\omega_{\mu,\psi^{-1},\chi},\eta)-\gamma(s,\pi',\omega_{\mu,\psi^{-1},\chi},\eta) )\label{eq314}\\ =& \int_{E^1\setminus E^\times}(W_{v_L}(t(a)w)-W_{v_L'}(t(a)w))\mu(a)\eta^*(a)|a|^{-s+1/2}F_\chi(\bar a)d^*a,\nonumber \end{align} where $D_L(s,\eta,\psi)=c(s,\eta,\psi)^{-1}{\mathrm{vol}}(N_L)^{-1}\gamma_{\psi^{-1}}^{-1}$. Note that Eq.(\ref{eq314}) is independent of $m$, and thus it holds for all choices of compatible $\chi, \eta$. Now we can prove our theorem. To prove (1), we will show that $W_{v_L}(h)=W_{v_L'}(h)$ for all $h\in H={\mathrm {U}}(1,1)(F)$. By Proposition \ref{prop34}, it suffices to show that $W_{v_L}(t(a)w)=W_{v_L'}(t(a)w)$ for all $a\in E^\times$. By Lemma \ref{lem32} (2), it suffices to show that $$W_{v_L}(t(a)w)-W_{v_L'}(t(a)w)=0, \quad \forall a\in E^\times \textrm{ with }|a|_E\le q_E^C. $$ By the assumption of (1) and Eq.(\ref{eq314}), we have $$\int_{E^1\setminus E^\times}(W_{v_L}(t(a)w)-W_{v_L'}(t(a)w))F_\chi(\bar a) \mu(a)\eta^*(a)|a|^{-s+1/2}d^*a=0$$ for all compatible $\chi, \eta$. For a fixed $\chi$, if we vary $\eta$, then by the inverse Mellin transform on the group $E^1\setminus E^\times$, we get $$(W_{v_L}(t(a)w)-W_{v_L'}(t(a)w))F_\chi(\bar a)=0, \quad \forall \chi. $$ By Lemma \ref{lem35} and Lemma \ref{lem36}, for each $a\in \CP_E^{-C}$, we can find a character $\chi$ of $E^1$ with $\deg(\chi)\le C$ such that $F_\chi(\bar a)\ne 0$.
Thus we get \begin{equation}{\label{eq315}}W_{v_L}(t(a)w)=W_{v_L'}(t(a)w), \quad \forall a\in \CP_E^{-C}.\end{equation} This completes the proof of (1). Now we prove (2). Let $l_0=l_0(\pi, \pi',\mu,\chi)$ be an integer such that $l_0>\deg(\mu)$, $l_0>C=3L/2$ and \begin{equation}\label{eq317}W_{v_L}(t(a_0a)w)=W_{v_L}(t(a)w), \quad W_{v_L'}(t(a_0a)w)=W_{v_L'}(t(a)w), \quad \forall a_0\in 1+\CP_E^{l_0}.\end{equation} Such an $l_0$ exists since the above functions of $a$ are locally constant. In fact, if $l_0>L$, the condition Eq.(\ref{eq317}) holds. By the definition of $F_\chi$, since $l_0>C$, one can check that \begin{equation}F_\chi(\bar a_0 \bar a)=F_\chi(\bar a), \quad \forall a_0\in 1+\CP_E^{l_0} \textrm{ and } a\in \CP_E^{-C}.\end{equation} Note that $l_0$ is in fact independent of $\chi$. If $l(\eta)>l_0$, then by our choice of $l_0$, the right side of Eq.(\ref{eq314}) vanishes. Since $D_L(s,\eta,\psi)$ has no zeros or poles except for finitely many values of $q_F^{-s}$, we get $$ \gamma(s,\pi, \omega_{\mu,\psi^{-1},\chi},\eta)=\gamma(s,\pi', \omega_{\mu,\psi^{-1},\chi}, \eta).$$ This completes the proof of (2). \end{proof} \begin{rem}\upshape \label{rem310} In the above proof, we omitted the details when $E/F$ is ramified. In the ramified case, the analogue of Eq.(\ref{eq36}) is $$ \Psi(s,W_{v_m}, \phi^{m,\chi}, \Phi_{i,l},\eta)=q_F^{-i-l}{\mathrm{vol}}(1+\CP_E^{2m}){\mathrm{vol}}(E^1\cap (1+\CP_E^{2m}))^{-1}.$$ It is easy to see that the calculation deriving this formula does not require the residue characteristic to be odd. Thus Proposition \ref{prop22} is fully proved. \end{rem} \section{Local zeta integrals for $\GL(2,F)$} In this section, we assume that $F$ is a $p$-adic local field, and let $H=\GL(2,F)$. We will use the following notation. Let $B$ be the upper triangular subgroup of $H$. Let $B=TN$, with $T$ the torus and $N$ the unipotent subgroup. Let $\bar B$ (resp. $\bar N$) be the opposite of $B$ (resp. $N$).
Let $Z$ be the center of $H$ and $R=ZN$. For $x\in F$, let $$n(x)=\begin{pmatrix}1& x\\ &1 \end{pmatrix}\in N, \quad \bar n(x)=\begin{pmatrix}1&\\ x &1 \end{pmatrix}\in \bar N.$$ For $a,b\in F^\times$, let $t(a,b)={\mathrm {diag}}(a,b)$. For $a\in F^\times$, let $t(a)=t(a,1)$. Let $w=\begin{pmatrix}&1\\ -1& \end{pmatrix}$. \subsection{Weil representation of $\GL_n(F)$} This section follows \cite{GR}. Let $X=F^n$ be an $n$-dimensional vector space over $F$ with a fixed choice of basis, and let $W=X\oplus X^*$, where $X^*$ is the dual space of $X$. We also identify $X$ and $X^*$ with $F^n$ and denote the standard pairing on $F^n$ by $\pair{~,~}$, where $\pair{v,w}=\sum v_jw_j$ for $v=(v_j)$ and $w=(w_j)$. Let $A((v,v^*),(w,w^*))=\pair{v,w^*}-\pair{v^*,w}$ be the symplectic form on $W$. The group $G=\GL_n(F)$ can be identified with the stabilizer of $X$ and $X^*$ in ${\mathrm{Sp}}(W)$. The (Schr\"odinger model of the) Weil representation $\omega_\psi$ of $\Mp(W)$ is realized on ${\mathcal {S}}(X)={\mathcal {S}}(F^n)$. The subgroup $G\subset {\mathrm{Sp}}(W)$ has a lifting to $\Mp(W)$ which is uniquely determined by the following formula: $$\omega_\psi(g)f(x)=|\det(g)|^{1/2}f({}^t\!g x),~g\in G,x\in F^n.$$ We call the representation given by this formula the standard oscillator representation of $\GL_n(F)$. Any other oscillator representation differs from it by a twist. Let $P$ be the parabolic subgroup with the last row $(0,0,\dots,0,*)$ and $P_n$ be the subgroup of $P$ with the last row $(0,0,\dots,0,1)$. Then there is an isomorphism $$(\omega_\psi,{\mathcal {S}}(F^n))\rightarrow \ind_{P_n}^G(|\det|^{1/2})$$ $$f\mapsto \varphi_f,$$ where $\varphi_f(g)=|\det(g)|^{1/2}f({}^t\!ge_n)=(\omega(g)f)(e_n)$, with $e_n=(0,0,\dots,0,1)^t$.
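As a routine check (spelled out here for the reader's convenience), the map $f\mapsto \varphi_f$ indeed lands in $\ind_{P_n}^G(|\det|^{1/2})$: for $p\in P_n$, the last row of $p$ is $(0,0,\dots,0,1)$, so ${}^t\!p\, e_n=e_n$, and hence \begin{align*} \varphi_f(pg)&=|\det(pg)|^{1/2}f({}^t\!g\,{}^t\!p\, e_n)\\ &=|\det(p)|^{1/2}\,|\det(g)|^{1/2}f({}^t\!g\, e_n)=|\det(p)|^{1/2}\varphi_f(g). \end{align*}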
Let $Z$ be the center of $\GL_n(F)$ and $\chi$ be a character of $Z$, and let $\omega_{\psi,\chi}$ be the largest quotient of $\omega_\psi$ on which $Z$ acts via $\chi$. To determine $\omega_{\psi, \chi}$, we define $$\varphi_{\chi}(g)=\int_{Z}\chi^{-1}(z)\varphi(zg)dz$$ for $\varphi\in \ind_{P_n}^G(|\det|^{1/2})$. Then $(r(z)\varphi)_\chi(g)=\chi(z)\varphi_\chi(g)$ for $z\in Z$, where $r$ is the right translation. The Levi subgroup $M$ of $P$ is isomorphic to $\GL(n-1)\times \GL(1)$. Let $1\otimes \chi$ be the character of $M$ whose restriction to $\GL(n-1)$ is trivial and whose restriction to $\GL(1)$ is $\chi$. For $h\in \GL(n-1)$ and $a\in \GL(1)$, let $p={\mathrm {diag}}(h,a)\in P$; we have $$\varphi_\chi(pg)=|\det(h)|^{1/2}|a|^{-(n-1)/2}\chi(a)\varphi_\chi(g)=(1\otimes \chi)\delta_{P}^{1/2}(p)\varphi_\chi(g),$$ i.e., $\varphi_\chi\in {\mathrm{Ind}}_{P}^G(1\otimes \chi)$, where ${\mathrm{Ind}}$ denotes normalized induction. \begin{prop} There is an isomorphism $\omega_{\psi,\chi}\cong {\mathrm{Ind}}_P^G(1\otimes \chi)$, and the map $f\mapsto \varphi_f\mapsto (\varphi_f)_\chi$ defines the projection $(\omega_\psi, {\mathcal {S}}(F^n))\cong (r, \ind_{P_n}^G(|\det|^{1/2}))\rightarrow (\omega_{\psi,\chi}\cong {\mathrm{Ind}}_P^G(1\otimes \chi)).$ \end{prop} \begin{cor} Suppose that $n=2$. If $\chi\ne |~|^{\pm 1}$, then $\omega_{\psi,\chi}$ is irreducible. \end{cor} This is well-known; see \cite{BZ2} for example. \subsection{A different model of the Weil representation for $\GL(2)$} In this section, we consider a different model of the Weil representation for $\GL_2$, which will give us the familiar Weil representation formulas. Let $X=F^2$, let $e_1,e_2$ be the standard basis of $X$, and let $e_1^*,e_2^*$ be the corresponding dual basis.
Let $Y=Fe_1\oplus Fe_2^*$ and $Y^*=Fe_1^*\oplus Fe_2$. Then $W=Y\oplus Y^*$, and this decomposition is a complete polarization of $W$. There is an isomorphism $$I: {\mathcal {S}}(X)\rightarrow {\mathcal {S}}(Y)$$ defined by the partial Fourier transform $$I(f)(xe_1+ye_2^*)=\int_{F}\psi^{-1}(yz)f(xe_1+ze_2)dz.$$ Through this isomorphism, the Weil representation $(\omega_\psi,{\mathcal {S}}(X))$ can be realized on ${\mathcal {S}}(Y)$, and by abuse of notation we also denote it by $\omega_\psi$. For $\phi=I(f)\in {\mathcal {S}}(Y)={\mathcal {S}}(F^2)$, we can check that the following formulas hold: \begin{align} (\omega(n(b))\phi)(x,y)&=\psi(bxy)\phi(x,y), \quad b\in F,\label{eq41}\\ (\omega(t(a_1,a_2))\phi)(x,y)&=|a_1a_2^{-1}|^{1/2}\phi(a_1x,a_2^{-1}y), \quad a_1, a_2\in F^\times, \label{eq42}\\ (\omega(w)\phi)(x,y)&=\int_{F\times F}\psi(xv-yu)\phi(u,v)dudv. \label{eq43} \end{align} For a quasi-character $\chi$ of $F^\times$, we define \begin{equation}\label{eq44}\phi_\chi(x,y)=\int_{F^\times}\chi^{-1}(a)\phi(ax,a^{-1}y)da.\end{equation} Then $(\omega(z)\phi)_\chi=\chi(z)\phi_\chi$ for $z\in Z\cong F^\times$. The projection map $\omega_\psi\rightarrow \omega_{\psi,\chi}$ can be identified with $$(\omega, {\mathcal {S}}(Y))\rightarrow {\mathrm{Ind}}_B^H(1\otimes \chi)$$ $$\phi\mapsto (\omega(h)\phi)_\chi(0,1).$$ \begin{lem}{\label{lem43}} Let $f\in {\mathrm{Ind}}_B^H(1\otimes \chi)$ be the function defined by $f(h)=(\omega (h)\phi)_\chi(0,1)$, for some $\phi\in {\mathcal {S}}(Y)={\mathcal {S}}(F^2)$. We define $\lambda(f)=\phi_\chi(1,1)$. Then $\lambda$ is well-defined and defines a nontrivial Whittaker functional on $\omega_{\psi,\chi}$.
\end{lem} \begin{proof} The kernel of the map $\omega_\psi \rightarrow \omega_{\psi,\chi}$ consists of functions of the form $\omega(z)\phi-\chi(z)\phi$, $z\in Z$, $\phi \in {\mathcal {S}}(Y)$. To show that $\lambda$ is well-defined, it suffices to show that $\lambda$ vanishes on the kernel, i.e., $$(\omega(z)\phi)_\chi(1,1)=(\chi(z)\phi)_\chi(1,1).$$ This is clear. For $n=n(b)\in N$, one preimage of $r(n)f$ is $\omega(n)\phi$. Thus $$\lambda(r(n)f)=(\omega(n)\phi)_\chi(1,1)=\psi(b)\phi_\chi(1,1),$$ by Eq.(\ref{eq41}) and Eq.(\ref{eq44}). This shows that $\lambda$ is a Whittaker functional. \end{proof} \subsection{The local zeta integral} Recall that $T$ is the diagonal subgroup of $H=\GL_2(F)$ and we have $T\cong F^\times \times F^\times$. A quasi-character $\eta$ of $T$ corresponds to a pair of quasi-characters $(\eta_1,\eta_2)$ of $F^\times$: $\eta(t(a,b))=\eta_1(a)\eta_2(b)$. Let $||~||$ be the character of $T$ defined by $$||t(a,b)||=|ab^{-1}|_F.$$ Let $\eta=(\eta_1, \eta_2)$ be a quasi-character of $T$. We consider the induced representation ${\mathrm{Ind}}_B^H(\eta|~|^{s-1/2})$ of $H$. Let $(\pi,V)$ be an infinite-dimensional irreducible admissible representation of $\GL_2(F)$ with central character $\omega_\pi$, let $\eta=(\eta_1,\eta_2)$ be a quasi-character of $T$, and let $\chi$ be a quasi-character of $F^\times$.
We require that $$\omega_\pi\cdot \chi \cdot \eta_1 \eta_2=1.$$ For $W\in {\mathcal {W}}(\pi,\psi)$, $\theta\in {\mathcal {W}}(\omega_{\psi^{-1},\chi}, \psi^{-1})$, and $f_s\in {\mathrm{Ind}}_B^H(\eta|~|^{s-1/2})$, similar to the ${\mathrm {U}}(1,1)$ case, we consider the following local zeta integral $$\Psi(W,\theta,f_s)=\int_{R\setminus H}W(h)\theta(h)f_s(h)dh.$$ \textbf{Remark:} This is the local zeta integral at a split place of the global zeta integral for ${\mathrm {U}}(1,1)$ defined in \cite{GRS}, which is also a special case of the Rankin-Selberg type local zeta integral for $\GL_2\times \GL_2$ as defined by Jacquet in \cite{J}. For the Rankin-Selberg integral for general $\GL_n\times \GL_m$, see \cite{JPSS}. For $\Phi\in {\mathcal {S}}(F^2)$, we define \begin{equation}{\label{eq45}}f(h,s,\Phi,\eta)=\eta_1(\det(h))|\det(h)|_F^s\int_{F^\times}\Phi((0,r)h)\eta_1(r)\eta_2^{-1}(r)|r|_F^{2s}d^*r.\end{equation} The integral in the definition of $f(h,s,\Phi,\eta)$ converges for $\Re(s)$ large enough and defines a meromorphic function of $s$. It is easy to check that $f(\cdot,s,\Phi,\eta)\in {\mathrm{Ind}}_B^H(\eta|~|^{s-1/2})$. For $\Re(s)$ large enough, every element of ${\mathrm{Ind}}_B^H(\eta|~|^{s-1/2})$ is of the form $f(\cdot,s, \Phi, \eta)$ for some $\Phi\in {\mathcal {S}}(F^2)$; see \cite{JL} for example. We denote $$\Psi(s,W,\theta,\Phi, \eta_1)=\Psi(W,\theta,f(\cdot, s,\Phi,\eta)).$$ This notation makes sense because $\eta_2$, and hence $\eta$, is uniquely determined by the other parameters. It is standard to show that $\Psi(s,W,\theta,\Phi,\eta_1)$ converges absolutely for $\Re(s)\gg 0$ and defines a meromorphic function of $s$; see \cite{J}.
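As an illustration (a routine computation, spelled out here), let us verify from Eq.(\ref{eq45}) that $f(\cdot,s,\Phi,\eta)$ transforms correctly under $B$. For $b=n(x)t(a_1,a_2)\in B$ we have $(0,r)bh=(0,ra_2)h$, so the substitution $r\mapsto ra_2^{-1}$ gives \begin{align*} f(bh,s,\Phi,\eta)&=\eta_1(a_1a_2)|a_1a_2|_F^s\,\eta_1^{-1}\eta_2(a_2)|a_2|_F^{-2s}\,f(h,s,\Phi,\eta)\\ &=\eta_1(a_1)\eta_2(a_2)\,|a_1a_2^{-1}|_F^{s}\,f(h,s,\Phi,\eta)=\eta(t)\,||t||^{s-1/2}\,\delta_B^{1/2}(t)\,f(h,s,\Phi,\eta), \end{align*} where $t=t(a_1,a_2)$ and $\delta_B^{1/2}(t)=|a_1a_2^{-1}|_F^{1/2}$; this is exactly the transformation law defining ${\mathrm{Ind}}_B^H(\eta|~|^{s-1/2})$.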
Plugging in the definition of $f(h,s,\Phi, \eta)$, it is easy to see that $$\Psi(s,W,\theta,\Phi,\eta_1)=\int_{N\setminus H} W(h)\theta(h)\eta_1(\det(h))|\det(h)|^s \Phi((0,1)h)dh.$$ Note that in this expression $\eta_2$ is hidden. Next, we suppose that the Whittaker function $\theta\in {\mathcal {W}}(\omega_{\psi^{-1},\chi},\psi^{-1})$ is associated with $\phi\in {\mathcal {S}}(F^2)$, i.e., we fix a nonzero $\lambda\in {\mathrm{Hom}}_N(\omega_{\psi^{-1},\chi},\psi^{-1})$ and set $\theta(h)=\lambda( \omega(h)\phi )$. By Lemma \ref{lem43}, we have \begin{equation}\label{eq46}\theta(h)=\int_{F^\times}\chi^{-1}(a)(\omega(h)\phi)(a,a^{-1})d^*a=\int_{Z}\chi^{-1}(z)(\omega(zh)\phi)(1,1)dz.\end{equation} We will use this expression later. \subsection{The local $\gamma$-factor and local functional equation} We recall the local functional equation of the Rankin-Selberg zeta integral for $\GL_2$; see \cite{J}, Theorem 14.7. For an irreducible smooth representation $\pi$ of $\GL_2(F)$, we have an isomorphism $\tilde \pi \cong \pi\otimes \omega_\pi^{-1}$. For $W\in {\mathcal {W}}(\pi,\psi)$, if we define $$\widetilde W(h)=W(h)\omega_\pi^{-1}(\det(h)),$$ then $\widetilde W\in {\mathcal {W}}(\tilde \pi, \psi)$.
For $\Phi\in {\mathcal {S}}(F^2)$, we define the Fourier transform $\hat \Phi$ by $$\hat \Phi(x,y)=\int \Phi(u,v)\psi(uy-vx)dudv.$$ \begin{thm}[Jacquet, Theorem 14.7 of \cite{J}] There is a meromorphic function $\gamma(s,\pi,\omega_{\psi^{-1},\chi}, \eta_1)$ such that $$\Psi(1-s,\widetilde W, \tilde \theta, \hat \Phi, \eta_1^{-1})=\gamma(s,\pi,\omega_{\psi^{-1},\chi}, \eta_1)\Psi(s,W,\theta, \Phi, \eta_1).$$ \end{thm} It is clear that $\Psi(1-s,\widetilde W, \tilde \theta, \hat \Phi, \eta_1^{-1})=\Psi(1-s, W,\theta, \hat \Phi, \eta_2)$, where $\eta_2=\omega_\pi^{-1}\chi^{-1}\eta_1^{-1}$. \begin{thm}[Jacquet, Theorem 15.1 of \cite{J}, Multiplicativity of $\gamma$-factors]\label{thm44} If $\chi\ne |~|^{\pm 1}$, so that $\omega_{\psi,\chi}\cong {\mathrm{Ind}}_B^H(1\otimes \chi)$ is irreducible, then $$\gamma(s,\pi, \omega_{\psi^{-1},\chi}, \eta_1)=\gamma(s,\pi, \eta_1)\gamma(s,\pi, \chi \eta_1),$$ where $\gamma(s,\pi, \eta_1)=\gamma(s,\pi\otimes \eta_1)$ is the $\gamma$-factor of $\pi\otimes \eta_1$ defined in \cite{JL}. \end{thm} \section{A new proof of the local converse theorem for $\GL(2)$} \subsection{Howe vectors for $\GL(2,F)$} In this section, we consider the Howe vectors for representations of $\GL_2(F)$. The theory is parallel to $\S$3.1. We omit some details, which can be found in \cite{Ba2}. Let $p=p_F$ be a prime element of $F$. Let $\CP=p_F{\mathcal {O}}_F$ be the maximal ideal of ${\mathcal {O}}_F$. Let $\psi$ be an additive character of $F$ such that $\psi({\mathcal {O}}_F)=1$ and $\psi(p^{-1}{\mathcal {O}})\ne 1$. Let $K_m\subset \GL_2(F)$ be the congruence subgroup $1+M_2(\CP^m)$. Set $$d_m=\begin{pmatrix} p^{-m}&\\ & p^{m}\end{pmatrix}.$$ Let $J_m=d_mK_md_m^{-1}$.
Explicitly, $$J_m=\begin{pmatrix} 1+\CP^m& \CP^{-m}\\ \CP^{3m}& 1+\CP^m\end{pmatrix}\cap H.$$ Let $\tau_m$ be the character of $K_m$ defined by $$\tau_m(k)=\psi(p^{-2m}k_{1,2}), \quad k=(k_{i,l})\in K_m.$$ By our assumption on $\psi$, it is easy to see that $\tau_m$ is indeed a character of $K_m$. Define $\psi_m$ on $J_m$ by $$\psi_m(j)=\tau_m(d_m^{-1}jd_m), \quad j\in J_m.$$ One can check that $\psi_m$ and $\psi$ agree on $N_m:=J_m\cap N$. Let $(\pi,V)$ be a $\psi$-generic representation. We fix a Whittaker functional. Let $v\in V$ be such that $W_v(1)=1$. For $m\ge 1$, define $$v_m=\frac{1}{\Vol(N_m)}\int_{N_m}\psi(n)^{-1}\pi(n)vdn.$$ Let $L$ be an integer such that $v$ is fixed by $K_L$. \begin{lem}{\label{lem51}} We have \begin{enumerate} \item $W_{v_m}(1)=1.$ \item If $m\ge L$, $\pi(j)v_m=\psi_m(j)v_m$ for all $j\in J_m$. \item If $k\le m$, then $$v_m=\frac{1}{\Vol(N_m)}\int_{N_m}\psi(n)^{-1}\pi(n)v_kdn.$$ \end{enumerate} \end{lem} The vectors $v_m$ are called Howe vectors. Let $w=\begin{pmatrix} &1\\ -1& \end{pmatrix}$ be the nontrivial Weyl element of $H$. Recall that we use $t(a)$ to denote the element ${\mathrm {diag}}(a,1)$ for $a\in F^\times$. \begin{lem}{\label{lem52}} Let $v_m$, $m\ge L$, be the Howe vectors defined as above, and let $W_{v_m}$ be the Whittaker functions associated to $v_m$. Then \begin{enumerate} \item $W_{v_m}(t(a))\ne 0 \textrm{ implies } a\in 1+\CP_F^m.$ \item $W_{v_m}(t(a)w)\ne 0$ implies $a\in \CP_F^{-3m}$. \end{enumerate} \end{lem} \begin{proof} (1) For $x\in \CP_F^{-m}$, we have $n(x)\in N_m$. From Lemma \ref{lem51} and the relation $$t(a)n(x)=n(ax)t(a),$$ we get $$\psi(x)W_{v_m}(t(a))=\psi(ax)W_{v_m}(t(a)).$$ If $W_{v_m}(t(a))\ne 0$, we have $(1- a)x\in {\mathrm{Ker}}(\psi)$ for any $x\in \CP^{-m}$. Thus $a\in 1+\CP^m$. (2) For $x\in \CP_F^{3m}$, we have $\bar n(-x)\in \bar N_m:=J_m\cap \bar N$.
It is easy to check the relation $$t(a)w\bar n(-x)=n(a x)t(a)w.$$ By Lemma \ref{lem51}, we have $$W_{v_m}(t(a)w)=\psi(a x)W_{v_m}(t(a)w).$$ Thus $W_{v_m}(t(a)w)\ne 0$ implies $\psi(a x)=1$ for all $x\in \CP_F^{3m}$, i.e., $a\in \CP_F^{-3m}$. \end{proof} \begin{prop}{\label{prop53}} Let $(\pi,V)$ and $(\pi',V')$ be two $\psi$-generic representations of $H$ with the same central character. Choose $v\in V,v'\in V'$ so that $W_v(1)=1=W_{v'}(1)$ and define $v_m\in V,v_m'\in V'$ as above. Let $L$ be an integer such that $v,v'$ are fixed by $K_L$. Let $m\ge 3L$ and $n_0\in N$. Then \begin{enumerate} \item $W_{v_m}(t(a))=1=W_{v'_m}(t(a))$ for all $a\in 1+\CP^m$. \item $W_{v_m}(b)=W_{v_m'}(b)$ for all $b\in B$. \item If $n_0\in N_m$, then $W_{v_m}(twn_0)=\psi(n_0)W_{v_m}(tw)$ for all $t\in T$. \item If $n_0\notin N_m$, then $W_{v_m}(twn_0)=W_{v'_m}(twn_0)$ for all $t\in T$. \end{enumerate} \end{prop} \begin{proof} (1) For $a\in 1+\CP^m$, we have $t(a)\in J_m$. Thus $W_{v_m}(t(a))=\psi_m(t(a))W_{v_m}(1)=1$. (2) Since $B=NT$, it suffices to check that $W_{v_m}(t)=W_{v_m'}(t)$ for all $t\in T$. From (1) and the above lemma, we have $W_{v_m}(t(a))=W_{v'_m}(t(a))$ for all $a\in F^\times$. For $t={\mathrm {diag}}(a,b)={\mathrm {diag}}(ab^{-1},1){\mathrm {diag}}(b,b)$, we then have $$W_{v_m}(t)=\omega_{\pi}(b)W_{v_m}(t(ab^{-1}))=\omega_{\pi'}(b)W_{v_m'}(t(ab^{-1}))=W_{v_m'}(t).$$ (3) For $n_0\in N_m$, we have $\pi(n_0)v_m=\psi(n_0)v_m$. The assertion is clear. (4) We have $$W_{v_m}(twn_0)=\Vol(N_m)^{-1}\int_{N_m}W_{v_L}(twn_0n)\psi^{-1}(n)dn.$$ Let $n'=n_0n$. Since $n_0\notin N_m$ and $n\in N_m$, we get $n'\notin N_m$. Suppose $n'=n(x)$. Then $n'\notin N_m$ is equivalent to $x\notin \CP^{-m}$. In particular, $x\ne 0$.
We have $$\begin{pmatrix} &1 \\ -1& \end{pmatrix}\begin{pmatrix} 1&x \\ &1 \end{pmatrix}=\begin{pmatrix}- x^{-1}& 1\\ &-x \end{pmatrix}\begin{pmatrix} 1& \\ x^{-1} &1\end{pmatrix}.$$ Let $b=\begin{pmatrix}- x^{-1}& 1\\ &-x \end{pmatrix}$ and $j=\begin{pmatrix} 1& \\ x^{-1} &1\end{pmatrix}$. Then $b\in B$, and $j\in J_L$ since $x\notin \CP^{-m}$ gives $x^{-1}\in \CP^{m+1}\subset \CP^{3L}$. Thus $$W_{v_L}(twn')=W_{v_L}(tbj)=W_{v_L}(tb).$$ By (2) (applied with $m=L$), we have $W_{v_L}(tb)=W_{v_L'}(tb)$. Now it is clear that $W_{v_m}(twn_0)=W_{v'_m}(twn_0)$. \end{proof} \subsection{Howe vectors for Weil representations}\label{sec52} We first quote two lemmas on Fourier analysis on $p$-adic fields. Before that, we recall some notation. Let $\chi$ be a character of $F^\times$. If $\chi$ is unramified, we set $\deg(\chi)=0$. If $\chi$ is ramified, we let $\deg(\chi)$ be the integer $m$ such that $\chi|_{1+\CP^m}=1$ but $\chi|_{1+\CP^{m-1}}\ne 1$. Recall that we fixed an unramified additive character $\psi$ of $F$. In the integrals of the following lemmas, we use the standard Haar measures, i.e., we fix the Haar measure $dx$ on $F$ such that $\Vol({\mathcal {O}}_F, dx)=1$, and we let $d^* x=\frac{dx}{|x|}$ be the multiplicative Haar measure on $F^\times$. Note that we have $\Vol({\mathcal {O}}_F^\times, d^*x)=1-q_F^{-1}$ and ${\mathrm{vol}}(1+\CP_F^m,d^*x)=q_F^{-m}$. \begin{lem}{\label{lem54}} \begin{enumerate} \item We have $$\int_{{\mathcal {O}}_F^\times}\psi(p^ku)d^*u=\left\{\begin{array}{ll}1-q_F^{-1}=\Vol({\mathcal {O}}_F^\times, d^*u):=c_0, & k\ge 0, \\ q_F^{-1}(\sum_{a\in {\mathcal {O}}^\times/(1+\CP_F)}\psi(p^{-1}a))=-q_F^{-1}:=c_1, & k=-1,\\ 0,& k\le -2.\end{array}\right.$$ \item Let $\chi$ be a ramified character of $F^\times$ of degree $h$, $h\ge 1$.
Then $$\int_{{\mathcal {O}}_F^\times}\chi(u)\psi(p^ku)d^*u=\left\{\begin{array}{ll}0, & k\ne -h, \\ \ne 0, & k=-h.\end{array}\right.$$ \end{enumerate} \end{lem} \begin{proof} (1) is easy and can be found on page 21 of \cite{Ta}. (2) is Lemma 5.1 of \cite{Ta}, page 49. \end{proof} Let $k$ be a positive integer, $\chi$ a character of $F^\times$ and $a\in F^\times$; we consider the following integral $$F_\chi(k,a)=\int_{|x|=q^k}\psi(x+ax^{-1})\chi(x)d^*x.$$ \begin{lem}{\label{lem55}} Suppose that $|a|=q_F^n$ with $1\le k<n$. \begin{enumerate} \item If $\chi$ is unramified, then $F_\chi(k,a)\ne 0$ if and only if $n$ is even and $k=n/2$. \item If $\chi$ is ramified of degree $h\ge 1$, then $F_\chi(k,a)\ne 0$ if and only if one of the following holds: \begin{enumerate} \item[(a)] $n$ is even, $n\ge 2h$ and $k=n/2$, \item [(b)] $n$ is even, $h<n<2h$ and $k=n/2$, $k=h$ or $k=n-h$, \item[(c)] $n$ is odd, $h<n<2h$ and $k=h$ or $k=n-h$. \end{enumerate} \end{enumerate} \end{lem} \begin{proof} This is Lemma 5.23 of \cite{Ta}, page 67. \end{proof} For a positive integer $m$, we consider the function $\phi^m\in {\mathcal {S}}(F^2)$ defined by $$\phi^m(x,y)=\Upsilon_{1+\CP^m}(x)\cdot \Upsilon_{1+\CP_F^m}(y),$$ where for a subset $A\subset F$, $\Upsilon_A$ denotes the characteristic function of $A$. \begin{prop}{\label{prop56}} \begin{enumerate} \item For $n\in N_m$, we have $$\omega_{\psi^{-1}}(n)\phi^{m}=\psi^{-1}(n)\phi^{m}.$$ \item For $\bar n \in \bar N_{m}=\bar N\cap J_m$, we have $$\omega_{\psi^{-1}}(\bar n)\phi^{m}=\phi^{m}.$$ \end{enumerate} \end{prop} \begin{proof} (1) For $n=n(b)\in N_m$, we have \begin{align*} \omega_{\psi^{-1}}(n(b))\phi^{m}(x,y) =\psi^{-1}(bxy)\phi^{m}(x,y). \end{align*} For $(x,y)\in \supp(\phi^{m})$, i.e., $x,y\in 1+\CP_F^m$, and $n(b)\in N_m$, i.e., $b\in \CP_F^{-m}$, we have $bxy -b\in {\mathcal {O}}_F$.
Since $\psi$ is unramified, we get $\psi^{-1}(bxy)=\psi^{-1}(b)$. Thus $$\omega_{\psi^{-1}}(n(b))\phi^{m}=\psi^{-1}(b)\phi^{m}.$$ (2) For $\bar n\in \bar N_m$, we can write $\bar n= w^{-1}n(b)w$ with $b\in \CP_F^{3m}$. Let $\phi'=\omega(w)\phi^{m}$. We have \begin{align*} \phi'(x,y)&=\int_{F^2}\phi^m(u,v)\psi^{-1}(xv-yu)dudv\\ &=\int_{1+\CP_F^m}\psi^{-1}(xv)dv \int_{1+\CP^m}\psi(yu)du\\ &=q_F^{-2m}\,\psi^{-1}(x) \Upsilon_{\CP_F^{-m}}(x)\cdot \psi(y)\Upsilon_{\CP_F^{-m}}(y). \end{align*} For $(x,y)\in \supp \phi'=\CP_F^{-m}\times \CP_F^{-m}$, we have $bxy\in {\mathcal {O}}_F$, and thus \begin{align*} \omega(n(b))\phi'(x,y)&=\psi^{-1}(b xy)\phi'(x,y)=\phi'(x,y), \end{align*} i.e., $\omega(n(b))$ fixes $\phi'$. Then $$\omega(\bar n)\phi^{m}=\omega(w^{-1})\omega(n(b))\omega(w)\phi^{m}$$ $$=\omega(w^{-1}) \omega(n(b)) \phi'=\omega(w^{-1}) \phi'=\omega(w^{-1})\omega(w)\phi^{m}=\phi^{m}.$$ This completes the proof. \end{proof} Recall that, for a character $\chi$ of $F^\times$, we have a projection map $\omega_{\psi^{-1}}\rightarrow \omega_{\psi^{-1}, \chi}$, defined by $\phi\mapsto \phi_\chi$. The following corollary follows from Proposition \ref{prop56} directly. \begin{cor}{\label{cor57}} \begin{enumerate} \item For $n\in N_m$, we have $$\omega_{\psi^{-1},\chi}(n)\phi_\chi^{m}=\psi^{-1}(n)\phi_\chi^{m}.$$ \item For $\bar n \in \bar N_{m}=\bar N\cap J_m$, we have $$\omega_{\psi^{-1},\chi}(\bar n)\phi^{m}_\chi=\phi^{m}_\chi.$$ \end{enumerate} \end{cor} By Lemma \ref{lem43}, the Whittaker function $\theta^{m,\chi}$ associated with $\phi^m_\chi$ for the representation $\omega_{\psi^{-1},\chi}$ is \begin{equation}\label{eq51}\theta^{m,\chi}(h)=(\omega_{\psi^{-1}}(h)\phi^m)_\chi(1,1)=\int_{F^\times}\chi^{-1}(u)(\omega_{\psi^{-1}}(h)\phi^m)(u,u^{-1})d^*u.\end{equation} \begin{prop}\label{prop58} Let $N$ and $m$ be positive integers with $m\ge N$.
Then there exists a finite number of characters $\wpair{\chi_i}$ of $F^\times$ satisfying the following conditions: \begin{enumerate} \item $\chi_i\ne |~|^{\pm 1}$, \item $\deg(\chi_i)\le N$, and \item for all $a\in F^\times$ with $|a|\le q_F^N$, there exists a character $\chi_i$ such that $$\theta^{m,\chi_i}(t(a)w)\ne 0.$$ \end{enumerate} \end{prop} Recall that $w=\begin{pmatrix}&1\\ -1& \end{pmatrix}$. This proposition is the key to the proof of the local converse theorem. \begin{proof} By the Whittaker function formula of the Weil representation, Eq.(\ref{eq46}) or Eq.(\ref{eq51}), and the Weil representation formulas, Eq.(\ref{eq42}) and Eq.(\ref{eq43}), we have \begin{align*} &\quad \theta^{m,\chi}(t(a)w)\\ &=\int_{F^\times}\chi^{-1}(u)(\omega_{\psi^{-1}}(t(a)w)\phi^m)(u,u^{-1})d^*u\\ &=|a|^{1/2}\int_{F^\times}\chi^{-1}(u)\omega(w)\phi^m(au,u^{-1})d^*u\\ &=|a|^{1/2}\int_{F^\times}\chi^{-1}(u)\int_{F\times F}\psi^{-1}(auy-u^{-1}x)\phi^m(x,y)dxdyd^*u\\ &=|a|^{1/2}\int_{F^\times}\chi^{-1}(u)\left(\int_{1+\CP_F^m} \psi^{-1}(-u^{-1}x)dx \int_{1+\CP_F^m}\psi^{-1}(auy)dy\right) d^*u. \end{align*} It is easy to see that the inner integral $$ \int_{1+\CP_F^m} \psi^{-1}(-u^{-1}x)dx \int_{1+\CP_F^m}\psi^{-1}(auy)dy$$ is $\psi(u^{-1}-au)$ if $ u\in a^{-1}\CP^{-m}$ and $ u^{-1}\in \CP^{-m}$, and zero otherwise. Suppose that $|a|=q^n$, with $n\le N$. We then have \begin{align*}\theta^{m,\chi}(t(a)w)&=q_F^{n/2}\int_{q^{-m}\le |u|\le q^{m-n}}\chi^{-1}(u)\psi(u^{-1}-au)d^*u. \end{align*} Note that $2m\ge N\ge n$, thus the domain $\wpair{u\in F^\times\mid q^{-m}\le|u|\le q^{m-n}}$ is nonempty. We first assume that $n\le 0$, i.e., $|a|\le 1$. Let $\chi_0$ be an unramified character.
By Lemma \ref{lem54}, we have \begin{align*} q_F^{-n/2}\theta^{m,\chi_0}(t(a)w)&=\int_{q^{-m}\le |u|\le 1}\chi_0^{-1}(u)\psi(u^{-1})d^*u+\int_{1<|u|\le q^{m-n}}\chi_0^{-1}(u)\psi(-ua)d^*u\\ &=\sum_{k=-m}^0 \chi_0(p^{k})\int_{{\mathcal{O}}_F^\times} \psi(p^ku_0^{-1})d^*u_0+\sum_{k=1}^{m-n}\chi_0(p^k)\int_{{\mathcal{O}}_F^\times}\psi(-p^{-k-n}u_0)d^*u_0\\ &= \chi_0(p^{-1})c_1+c_0+\chi_0(p^{1-n})c_1+\sum_{k=1}^{-n}\chi_0(p^k)c_0, \end{align*} where the sum $ \sum_{k=1}^{-n}\chi_0(p^k)$ should be interpreted as 0 if $n=0$. Since the character $\chi_0$ is determined by $\chi_0(p)$, it is easy to choose $\chi_0(p)$, and hence $\chi_0$ (which might depend on $|a|$), such that $\chi_0\ne |~|^{\pm 1}$ and $$\theta^{m,\chi_0}(t(a)w)\ne 0, \textrm{ for } |a|\le 1. $$ For example, one can take $\chi_0$ to be the trivial character if $q_F\ne 2, 3$. If $q_F=2$ and $n\ne -1$, or $q_F=3, n\ne 0$, one can still take $\chi_0$ to be the trivial character. If $q_F=2$ and $n=-1$ or $q_F=3$ and $n=0$, one can take $\chi_0=|~|^{-2}$. From this discussion, we know that at most two characters will work for all $a$ with $|a|\le 1$. Now suppose that $n\ge 1$. We have \begin{align} &|a|^{-1/2}\theta^{m,\chi}(t(a)w)\label{eq52}\\ &=\int_{q^{-m}\le |u|\le q^{-n}}\chi^{-1}(u)\psi(u^{-1})d^*u \label{eq53}\\ &+\int_{ q^{-n}< |u|<1}\chi^{-1}(u)\psi(u^{-1}-au)d^*u \label{eq54}\\ &+\int_{1\le |u|\le q^{m-n} }\chi^{-1}(u)\psi(-au)d^*u. \label{eq55} \end{align} If $n=1$, the term Eq.(\ref{eq54}) is zero. Let $\chi=\chi_0$ be an unramified character. By Lemma \ref{lem54}, the term Eq.(\ref{eq53}) becomes $\chi_0(p^{-1})c_1$, and the term Eq.(\ref{eq55}) becomes $c_1$. Thus $$q_F^{-1/2}\theta^{m,\chi_0}(t(a)w)=(\chi_0(p^{-1})+1 )c_1, \textrm{ if } |a|=q_F.$$ Thus we can choose a single $\chi_0\ne |~|^{\pm1}$ such that $$\theta^{m,\chi_0}(t(a)w)\ne 0, \forall a \textrm{ with } |a|= q_F.$$ Next we consider the case where $n\ge 2$ is even.
We still take an unramified character $\chi_0\ne |~|^{\pm1}$. Then by Lemma \ref{lem54}, the terms Eq.(\ref{eq53}) and Eq.(\ref{eq55}) vanish. By Lemma \ref{lem55} and Eq.(\ref{eq52}), we have \begin{align*} &\quad q_F^{-n/2}\theta^{m,\chi_0}(t(a)w)\\ &= \int_{q^{-n}<|u|<1}\chi_0^{-1}(u)\psi(u^{-1}-au)d^*u\\ &=\sum_{k=1}^{n-1}\int_{|u|=q^k}\chi_0(u)\psi(u-\frac{a}{u})d^*u\\ &=\sum_{k=1}^{n-1}F_{\chi_0}(k,-a)\\ &=F_{\chi_0}(n/2, -a)\ne 0. \end{align*} Next we suppose that $|a|=q^n$ with $n$ odd and $n\ge 3$. Let $\chi$ be a ramified character of degree $n$. By Lemma \ref{lem55}, the term Eq.(\ref{eq54}) vanishes. In fact, we have \begin{align*} & \int_{q^{-n}<|u|<1}\chi^{-1}(u)\psi(u^{-1}-au)d^*u\\ &=\sum_{k=1}^{n-1}\int_{|u|=q^k}\chi(u)\psi(u-\frac{a}{u})d^*u\\ &=\sum_{k=1}^{n-1}F_{\chi}(k,-a). \end{align*} By Lemma \ref{lem55}, each term $F_\chi(k,-a)$ vanishes. By Lemma \ref{lem54}, the term Eq.(\ref{eq53}) is \begin{align*} &\sum_{k=-m}^{-n}\chi(p)^k\int_{{\mathcal{O}}_F^\times}\chi(u_0)\psi(p^ku_0)d^*u_0\\ =&\chi(p)^{-n}\int_{{\mathcal{O}}_F^\times}\chi(u_0)\psi(p^{-n}u_0) d^*u_0 \ne 0. \end{align*} If we write $a=p^{-n}a_0$ with $a_0\in {\mathcal{O}}_F^\times$, then the term Eq.(\ref{eq55}) is \begin{align*} &\sum_{k=0}^{m-n}\chi(p)^k\int_{{\mathcal{O}}_F^\times}\chi^{-1}(u_0)\psi(-a_0u_0p^{-k-n})d^*u_0\\ =&\sum_{k=0}^{m-n}\chi(p)^k \chi(a_0)\int_{{\mathcal{O}}_F^\times} \chi^{-1}(u_0)\psi(-p^{-k-n}u_0)d^*u_0\\ =&\chi(-a_0)\int_{{\mathcal{O}}_F^\times}\chi^{-1}(u_0)\psi(p^{-n}u_0)d^*u_0.
\end{align*} Thus we have \begin{align}\label{eq56} q_F^{-n/2}\theta^{m,\chi}(t(a)w)&=\chi^{-1}(p^n)\int_{{\mathcal{O}}_F^\times}\chi(u)\psi(p^{-n}u)d^*u+\chi(-a_0)\int_{{\mathcal{O}}_F^\times}\chi^{-1}(u)\psi(p^{-n}u)d^*u. \end{align} For a character $\chi$ of degree $n$, write $c(\chi)=\int_{{\mathcal{O}}_F^\times}\chi(u)\psi(p^{-n}u)d^*u\ne 0$ temporarily; then Eq.(\ref{eq56}) can be written as \begin{equation}\label{eq57} q_F^{-n/2}\theta^{m,\chi}(t(a)w)= \chi(p)^{-n}c(\chi)+\chi(-a_0)c(\chi^{-1}).\end{equation} We claim that, for odd $n\ge 3$, we can find a character $\chi$ such that $\theta^{m,\chi}(t(a)w)\ne 0$. In fact, suppose that there exists a character $\chi_0$ of degree $n$ such that $\theta^{m,\chi_0}(t(a)w)=0$, which is equivalent to $\chi_0(-a_0)c(\chi_0^{-1})=-\chi_0(p)^{-n}c(\chi_0)$ by Eq.(\ref{eq57}). Let $\chi_s$ be the character of $F^\times$ defined by $\chi_0|~|^s$. Note that $\deg( \chi_s)=n$, $\chi_s(-a_0)=\chi_0(-a_0)$ and $c(\chi_s)=c(\chi_0)$. Then \begin{align*}q_F^{-n/2}\theta^{m,\chi_s}(t(a)w)&=q_F^{ns}\chi_0^{-1}(p^n)c(\chi_0)+\chi_0(-a_0)c(\chi_0^{-1})\\ &=(q^{ns}-1)\chi_0^{-1}(p^n)c(\chi_0). \end{align*} If $s\ne 0$ is chosen so that $q^{ns}\ne 1$, then $$\theta^{m,\chi_s}(t(a)w)\ne 0.$$ This shows that for any $a\in \CP_F^{-N}$, we can find a character $\chi$ such that $\theta^{m,\chi}(t(a)w)\ne 0$. We also need to prove the ``finiteness'' part. By the above discussion, it suffices to show that for each odd $n\ge 3$, there exists a finite number of characters $\wpair{\chi_{i,n}}$ such that for each $a$ with $|a|=q_F^n$, we can find a character $\chi_{i,n}$ such that $\theta^{m,\chi_{i,n}}(t(a)w)\ne 0$.
In fact, for a character $\chi$ of degree $n$, since it is trivial on $1+\CP^n$, for any $u\in a_0(1+\CP^n)$, Eq.(\ref{eq56}) shows that $$\theta^{m,\chi}(t(p^{-n}u)w)=\theta^{m,\chi}(t(p^{-n}a_0)w).$$ This means that if a character $\chi$ works for $p^{-n}a_0$, then it works for $p^{-n}u$ for all $u\in a_0(1+\CP^n)$. Since ${\mathcal{O}}_F^\times/(1+\CP^n)$ is finite, the assertion follows. \end{proof} \begin{cor}{\label{cor59}} Given positive integers $m,N$ with $m\ge N$. For any character $\chi$ of $F^\times$, there is an integer $d=d(N,\chi)$ such that $$\theta^{m,\chi}(t(a_0a)w)=\theta^{m,\chi}(t(a)w), \forall a_0\in 1+\CP_F^d, a\in \CP_F^{-N}.$$ In fact, we can take $d=\max\wpair{\deg(\chi),N}$. \end{cor} \begin{proof} The function $\theta^{m,\chi}(t(a)w)$ with variable $a$ is continuous, thus there is an integer $d$ such that $\theta^{m,\chi}(t(a_0a)w)=\theta^{m,\chi}(t(a)w)$ for all $a_0\in 1+\CP_F^d$. We want to show that such an integer $d$ can be chosen independent of $m$. From the proof of Proposition {\ref{prop58}}, the term $|a|^{-1/2}\theta^{m,\chi}(t(a)w)$ has the expression Eq.(\ref{eq52}). If $n\le 0$, i.e., $q^{-n}\ge 1$, we should view the term Eq.(\ref{eq54}) as zero. Denote the term Eq.(\ref{eq53}) (resp. Eq.(\ref{eq54}), Eq.(\ref{eq55})) by $G_1(a)$ (resp. $G_2(a)$, $G_3(a)$). The term $G_1(a)$ only depends on $|a|$, and thus $G_1(a_0a)=G_1(a)$ for all $a_0\in {\mathcal{O}}_F^\times$. For $u$ with $q_F^{-n}<|u|<1$, we have $p^{N}au\in {\mathcal{O}}_F$. From this, one can easily check that $G_2(a_0a)=G_2(a)$ for all $a_0\in 1+\CP_F^N$.
We have $$G_3(a)=\int_{1\le |u|\le q^{m-n} }\chi^{-1}(u)\psi(-au)d^*u.$$ Thus for $a_0\in {\mathcal{O}}_F^\times$, by changing variables, we have \begin{align*} G_3(a_0a)&=\int_{1\le |u|\le q^{m-n} }\chi^{-1}(u)\psi(-a_0au)d^*u\\ &=\chi(a_0)\int_{1\le |u|\le q^{m-n} }\chi^{-1}(u)\psi(-au)d^*u\\ &=\chi(a_0)G_3(a). \end{align*} Thus we have $G_3(a_0a)=G_3(a)$ for $a_0\in 1+\CP_F^{\deg(\chi)}$. \end{proof} \subsection{A new proof of the local converse theorem for $\GL_2$} Recall the local converse theorem for $\GL_2$: \begin{thm}[Corollary 2.19 of \cite{JL}]\label{thm510} Let $\pi,\pi'$ be two infinite dimensional irreducible smooth representations of $\GL_2(F)$ with the same central character. If $\gamma(s,\pi,\eta)=\gamma(s,\pi',\eta)$ for all quasi-characters $\eta$ of $F^\times$, then $\pi\cong \pi'$. \end{thm} We are going to give a new proof of this theorem. In fact, we will prove the following \begin{thm}{\label{thm511}} Let $\pi,\pi'$ be two $\psi$-generic irreducible smooth infinite dimensional representations of $\GL_2(F)$ with the same central character $\omega_\pi=\omega_{\pi'}$. \begin{enumerate} \item If $\gamma(s,\pi, \omega_{\psi^{-1}, \chi},\eta_1)=\gamma(s,\pi', \omega_{\psi^{-1},\chi},\eta_1)$ for all quasi-characters $\chi,\eta_1$ of $F^\times$ with $\chi\ne |~|^{\pm1}$, then $\pi$ is isomorphic to $\pi'$. \item For a fixed quasi-character $\chi$ of $F^\times$, there is an integer $l_0=l_0(\pi,\pi',\chi)$ such that for all quasi-characters $\eta_1$ of $F^\times$ with $\deg(\eta_1)>l_0$, we have $$\gamma(s,\pi, \omega_{\psi^{-1}, \chi},\eta_1)=\gamma(s,\pi', \omega_{\psi^{-1},\chi},\eta_1). $$ \end{enumerate} \end{thm} \begin{rem}\upshape (1) Note that the condition of (1) of Theorem \ref{thm511} is weaker than that of Theorem \ref{thm510} by the multiplicativity theorem of $\gamma$-factors, Theorem \ref{thm44}. Thus it is clear that part (1) of Theorem \ref{thm511} implies Theorem \ref{thm510}.
\\ (2) The conclusion of part (2) of Theorem \ref{thm511} is weaker than the stability of gamma factors for $\GL_2$. \end{rem} Before the proof, we recall that in $\S$\ref{sec33}, we defined a function $\Phi_{i,l}\in {\mathcal{S}}(F^2)$. We need to compute $f(s,h,\Phi_{i,l}, \eta)$ in our case. The function $f(s,h,\Phi_{i,l},\eta)$ in the $\GL_2$-case is defined in $\S$4, Eq.(\ref{eq45}). For quasi-characters $\chi, \eta_1$ of $F^\times$, we define $\eta_2=\omega_\pi^{-1}\chi^{-1}\eta_1^{-1}$. Let $l=\deg(\eta_1\eta_2^{-1})=\deg(\eta_1^2 \cdot\omega_\pi\cdot \chi)$, $\eta=(\eta_1,\eta_2)$ and $\eta^*=(\eta_2,\eta_1)$. It is easy to see that \begin{equation}{\label{eq58*}} f(\bar n(x), \hat \Phi_{i,l},\eta, s)=\left\{\begin{array}{ll} q_F^{-l}, & \textrm{ if } |x|_F\le q_F^{-i}, \\ 0, &\textrm{otherwise,} \end{array} \right. \end{equation} and \begin{equation}{\label{eq59*}} f(wn(x), \hat \Phi_{i,l},\eta^*, 1-s)=q_F^{-l-i}c(s,\eta,\psi), \textrm{ if } |x|_F\le q_F^{i-l}, \end{equation} where $ c(s,\eta,\psi)=\int_{|y|_F\le q_F^l}\psi_F(y)\eta_1^{-1}\eta_2(y)|y|^{1-s} d^*y$, which has no zeros or poles except at a finite number of values of $q_F^{-s}$. \begin{proof}[Proof of Theorem $\ref{thm511}$] The proof is almost identical to the proof of Theorem \ref{thm39}. We fix vectors $v\in V_\pi, v'\in V_{\pi'}$ such that $W_{v}(1)=1=W_{v'}(1)$. Let $L_1$ be an even integer such that $v,v'$ are fixed by $K_{L_1}$. Let $L=3L_1$ and let $m$ be an integer such that $L\le m$. Recall that we have defined $\phi^m\in {\mathcal{S}}(F^2)$, the space of $\omega_{\psi^{-1}}$, in $\S$\ref{sec52}. Let $\chi$ be a character of $F^\times$ with $\deg(\chi)\le m$, and recall that $\theta^{m,\chi}$ is the Whittaker function associated with $\phi^m_\chi\in \omega_{\psi^{-1},\chi}$; see Eq.(\ref{eq51}). For a quasi-character $\eta_1$ of $F^\times$, we take $\eta_2$ such that $\omega_\pi\cdot \chi\cdot \eta_1\eta_2=1$.
Let $l=\deg(\eta_1\eta_2^{-1})$. We consider the integral $$\Psi(s,W_{v_m},\theta^{m,\chi},\Phi_{i,l},\eta)=\int_{R\setminus H}W_{v_m}(h)\theta^{m,\chi}(h)f(s,h,\Phi_{i,l},\eta)dh.$$ We compute the integral $\Psi(s,W_{v_m}, \theta^{m,\chi},\Phi_{i,l},\eta)$ on the open dense subset $R\setminus NT\bar N$ of $R\setminus H$. For $h=n(y)t(a,b)\bar n(x)\in NT\bar N$, the Haar measure is $dh=|a/b|_F^{-1}dyd^*a dx$. Thus by Eq.(\ref{eq58*}), we get \begin{align*} &\qquad \Psi(s,W_{v_m},\theta^{m,\chi},\Phi_{i,l},\eta)\\ &=\int_{F^\times}\int_F W_{v_m}(t(a)\bar n(x))\theta^{m,\chi}(t(a)\bar n(x))\eta_1(a)|a|^{s-1}f(s,\bar n(x),\Phi_{i,l},\eta)dxd^*a\\ &=q_{F}^{-l}\int_{F^\times}\int_{|x|_F\le q_F^{-i}}W_{v_m}(t(a)\bar n(x))\theta^{m,\chi}(t(a)\bar n(x))\eta_1(a)|a|^{s-1}dxd^*a. \end{align*} Let $i\ge 3m$; then $|x|_F\le q_F^{-i}$ implies that $\bar n(x)\in J_m$. Thus by Lemma \ref{lem51} and Proposition \ref{prop56}, we have $$W_{v_m}(t(a)\bar n(x))= W_{v_m}(t(a)) \textrm{ and } \theta^{m,\chi}(t(a)\bar n(x))=\theta^{m,\chi}(t(a)) \textrm{ for } |x|_F\le q_F^{-i}.$$ Combining these facts and Lemmas \ref{lem51}, \ref{lem52}, we get \begin{align*} &\quad \Psi(s,W_{v_m},\theta^{m,\chi},\Phi_{i,l},\eta)\\ &=q_F^{-l-i} \int_{F^\times}W_{v_m}(t(a))\theta^{m,\chi}(t(a))\eta_1(a)|a|^{s-1}d^*a\\ &=q_F^{-l-i}\int_{1+\CP_F^m}W_{v_m}(t(a))\theta^{m,\chi}(t(a))\eta_1(a)d^*a\\ &=q_F^{-l-i} \int_{1+\CP_F^m}\theta^{m,\chi}(t(a))\eta_1(a)d^*a. \end{align*} By the definition of $\theta^{m,\chi}$, see Eq.(\ref{eq51}), for $a\in 1+\CP_F^m$, we have \begin{align*}&\theta^{m,\chi}(t(a))\\ &=\int_{F^\times}\chi^{-1}(u)(\omega_{\psi^{-1}}(t(a))\phi^m)(u,u^{-1})d^*u\\ &=\int_{F^\times}\chi^{-1}(u)|a|^{1/2}\phi^{m}(au,u^{-1})d^*u\\ &=\int_{1+\CP_F^m}\chi^{-1}(u)d^*u\\ &={\mathrm{vol}}(1+\CP_F^m, d^*u), \end{align*} where in the last step we used the fact that $\deg(\chi)\le m$.
Thus if $m\ge l(\eta_1)$, we have \begin{align} &\quad \Psi(s,W_{v_m},\theta^{m,\chi},\Phi_{i,l},\eta) \nonumber\\ &=q_F^{-l-i} {\mathrm{vol}}(1+\CP_F^m, d^*a)\int_{1+\CP_F^m}\eta_1(a)d^*a \nonumber\\ &=q_F^{-l-i} {\mathrm{vol}}(1+\CP_F^m, d^*a)^2.\label{eq58} \end{align} The same calculation works for $\Psi(s,W_{v'_m},\theta^{m,\chi},\Phi_{i,l},\eta)$. In particular, we get \begin{equation}{\label{eq59}}\Psi(s,W_{v_m'},\theta^{m,\chi},\Phi_{i,l},\eta)=\Psi(s,W_{v_m},\theta^{m,\chi},\Phi_{i,l},\eta)=q_F^{-l-i} {\mathrm{vol}}(1+\CP_F^m, d^*a)^2,\end{equation} for $i\ge 3m, m\ge \max\wpair{L, l(\eta_1),\deg(\chi)}$. Let $W=W_{v_m}$ or $W_{v_m'}$. We compute the integral $\Psi(1-s,W,\theta^{m,\chi}, \hat \Phi_{i,l})$ on the dense subset $N\setminus NTwN$ of $N\setminus H$. Let $i\ge m+l $; then $|x|_F\le q_F^{m}$ implies that $|x|_F\le q^{i-l}_F$. Then by Eq.(\ref{eq59*}), we have \begin{align} &\quad\Psi(1-s,W,\theta^{m,\chi}, \hat \Phi_{i,l}) \nonumber\\ &=\int_{F^\times}\int_{F }W(t(a)wn(x))\theta^{m,\chi}(t(a)wn(x)) f(1-s, t(a)wn(x), \hat \Phi_{i,l}, \eta^*)dxd^*a\nonumber\\ &=q_F^{-l-i}c(1-s, \eta, \psi)\int_{F^\times}\int_{|x|\le q_F^m}W(t(a)wn(x) )\theta^{m,\chi}(t(a)wn(x))\eta_2(a)|a|^{-s}dxd^*a \label{eq510}\\ &+\int_{F^\times}\int_{|x|_F>q_F^m} W(t(a)wn(x) )\theta^{m,\chi}(t(a)wn(x)) f(1-s, t(a)wn(x), \hat \Phi_{i,l})dxd^*a \label{eq511} \end{align} By Proposition \ref{prop53}, we have $W_{v_m}(t(a)wn(x))=W_{v_m'}(t(a)wn(x))$ for $|x|>q_F^m$. Thus the expressions (\ref{eq511}) for $W=W_{v_m}$ and $W=W_{v'_m}$ are the same. For $|x|_F\le q_F^m$, we have $n(x)\in N_m$.
By Lemma \ref{lem51} and Proposition \ref{prop56}, we have $$W_{v_m}(t(a)wn(x))=\psi(x)W_{v_m}(t(a)w), \textrm{ and }\theta^{m,\chi}(t(a)wn(x))=\psi^{-1}(x)\theta^{m,\chi}(t(a)w).$$ Thus the expression (\ref{eq510}) can be simplified to \begin{equation}{\label{eq512}}q_F^{-l-i+m}c(1-s, \eta, \psi)\int_{F^\times}W(t(a)w)\theta^{m,\chi}(t(a)w)\eta_2(a)|a|^{-s}d^*a. \end{equation} Thus we get \begin{align} &\Psi(1-s,W_{v_m},\theta^{m,\chi},\hat \Phi_{i,l})-\Psi(1-s,W_{v_m'},\theta^{m,\chi},\hat \Phi_{i,l}) \label{eq513}\\ =&q_F^{-l-i+m}c(1-s, \eta, \psi)\int_{F^\times}(W_{v_m}(t(a)w)-W_{v_m'}(t(a)w))\theta^{m,\chi}(t(a)w)\eta_2(a)|a|^{-s}d^*a\nonumber \end{align} By Eq.(\ref{eq59}), Eq.(\ref{eq513}) and the local functional equation, we get \begin{align} &d_{m}(s,\eta,\psi)(\gamma(s,\pi, \omega_{\psi^{-1},\chi}, \eta)-\gamma(s,\pi',\omega_{\psi^{-1},\chi},\eta)) \label{eq514}\\ =&\int_{F^\times}(W_{v_m}(t(a)w)-W_{v_m'}(t(a)w))\theta^{m,\chi}(t(a)w)\eta_2(a)|a|^{-s}d^*a \nonumber \end{align} for $m\ge\max\wpair{L, l(\eta_1)}$, where $d_m(s,\eta,\psi)={\mathrm{vol}}(1+\CP_F^m,d^*a)^2q_F^{-m}c(1-s,\eta,\psi)^{-1}$. By Proposition \ref{prop53}, for $n\in N_m-N_L$, we have $W_{v_L}(t(a)wn)=W_{v_L'}(t(a)wn)$.
Thus by Lemma \ref{lem51}, we have \begin{align} &\quad W_{v_m}(t(a)w)-W_{v_m'}(t(a)w)\label{eq515}\\ &=\frac{1}{{\mathrm{vol}}(N_m)}\int_{N_m}\psi^{-1}(n)(W_{v_L}(t(a)wn)-W_{v'_L}(t(a)wn))dn \nonumber\\ &=\frac{1}{{\mathrm{vol}}(N_m)}\int_{N_L}\psi^{-1}(n)(W_{v_L}(t(a)wn)-W_{v'_L}(t(a)wn))dn \nonumber\\ &=\frac{{\mathrm{vol}}(N_L)}{{\mathrm{vol}}(N_m)}(W_{v_L}(t(a)w)-W_{v_L'}(t(a)w)).\nonumber \end{align} If we combine Eq.(\ref{eq514}) and Eq.(\ref{eq515}), we get \begin{align} &d_{m,L}(s,\eta,\psi)(\gamma(s,\pi,\omega_{\psi^{-1},\chi},\eta)-\gamma(s,\pi',\omega_{\psi^{-1},\chi},\eta)) \label{eq516}\\ =&\int_{F^\times}(W_{v_L}(t(a)w)-W_{v_L'}(t(a)w))\theta^{m,\chi}(t(a)w)\omega_\pi^{-1}\chi^{-1}\eta_1^{-1}(a)|a|^{-s}d^*a,\nonumber \end{align} with $d_{m,L}(s,\eta,\psi)=d_m(s,\eta,\psi){\mathrm{vol}}(N_m){\mathrm{vol}}(N_L)^{-1}$, for $m\ge \max\wpair{L,l(\eta_1)}$. Now we are ready to prove our theorem. To prove (1), it suffices to show that $W_{v_L}(h)=W_{v_L'}(h)$ for all $h\in H=\GL_2(F)$. By Proposition \ref{prop53}, it suffices to show that $W_{v_L}(t(a)w)=W_{v_L'}(t(a)w)$ for all $a\in F^\times$. Let $C=3L/2>L$. By Lemma \ref{lem52}, we have $$W_{v_L}(t(a)w)=W_{v'_L}(t(a)w)=0, \textrm{ if }a\notin \CP_F^{-C}.$$ Thus it suffices to show that $$W_{v_L}(t(a)w)-W_{v_L'}(t(a)w)=0, \forall a\in F^\times \textrm{ with }|a|_F\le q_F^C. $$ By our assumption on $\gamma$-factors and Eq.(\ref{eq516}), we have $$\int_{F^\times}(W_{v_L}(t(a)w)-W_{v_L'}(t(a)w))\theta^{m,\chi}(t(a)w)\eta_2(a)|a|^{-s}d^*a=0.
$$ By the inverse Mellin transform, we get $$ (W_{v_L}(t(a)w)-W_{v_L'}(t(a)w))\theta^{m,\chi}(t(a)w)=0, \forall a\in F^\times.$$ We take $m\ge \max\wpair{C,l(\eta_1), \deg(\chi)}$; then by Proposition \ref{prop58}, for each $a\in \CP_F^{-C}$, there exists a character $\chi\ne |~|^{\pm1}$ with $\deg(\chi)\le C$ such that $\theta^{m,\chi}(t(a)w)\ne 0$. Thus we get $$ W_{v_L}(t(a)w)=W_{v_L'}(t(a)w), \forall a\in \CP_F^{-C}.$$ This completes the proof of (1). To prove (2), we take an integer $l_1$ such that $$W_{v_L}(t(a_0a)w)=W_{v_L}(t(a)w), \textrm{ and }W_{v_L'}(t(a_0a)w)=W_{v_L'}(t(a)w),$$ for all $a_0\in 1+\CP_F^{l_1}$. Note that $l_1$ only depends on $\pi,\pi'$. By Corollary \ref{cor59}, for $m\ge C$, we have $$\theta^{m,\chi}(t(a_0a)w)=\theta^{m,\chi}(t(a)w), \forall a_0\in 1+\CP^{\deg(\chi)+C}.$$ Let $l_0=\max\wpair{\deg(\chi)+C, l_1, \deg(\omega_\pi), \deg(\chi)}$. Note that $l_1, L$ and $C=3L/2$ only depend on $\pi,\pi'$, thus $l_0$ only depends on $\pi,\pi',\chi$. Then it is clear that for $\deg(\eta_1)>l_0$, the right side of Eq.(\ref{eq516}) vanishes. Since $d_{m,L}(s,\eta,\psi)$ has no zeros or poles except at a finite number of values of $q_F^{-s}$, we get $$ \gamma(s,\pi,\omega_{\psi^{-1},\chi},\eta_1)=\gamma(s,\pi',\omega_{\psi^{-1},\chi},\eta_1), \textrm{ if } \deg(\eta_1)>l_0.$$ This completes the proof of (2). \end{proof}
\section{Introduction} Puzzles have long been a staple of video games. They can be just enjoyable to play, or necessary to advance in the game, and usually provide in-game rewards upon their completion. In terms of design, puzzles can affect the in-game economy, if completion of harder puzzles can lead to better rewards, and rewards can be purchased with, or traded for, in-game currency. Unlike traditional logic puzzles like Sudoku, Word Wheel, Tower of Hanoi, Refraction, crosswords, etc., where game rules are predefined and static~\cite{puzzlenpcomplete}, puzzles in video games commonly have changing constraints in order to diversify gameplay. With many games having ever-evolving puzzles which continue to grow in complexity and difficulty, the task of designing these puzzles becomes time-consuming and challenging. The game design process is usually complex and labor intensive. Exploring the design space is how game creators discover and develop the rules and mechanics of the game. One of the principal challenges in designing puzzles, and potentially the most important for game designers, is guaranteeing they can be solved by players~\cite{Smith_quantifyingover}. Unsolvable puzzles are frustrating to players for obvious reasons, but puzzles that become exceptionally hard due to virtually infeasible constraints, such as having barely enough time to perform all necessary moves, can be just as frustrating. In order to guarantee the quality of the player experience, designers then need to certify that a solution for the puzzle exists. With ever-growing constraints and numerous puzzles to create, the task quickly becomes more complex. In addition to saving time, automating certain aspects of the puzzle creation process can also assist in discovering new solutions that might not have been noticed initially. This allows designers to validate and quickly iterate on different versions of their puzzles.
It also answers questions such as: how many solutions exist for a particular puzzle, what is the optimal solution, or what is the ``cheapest price'' solution. For the game showcased in this paper, game designers need to produce a significant number of new puzzles every day. In order for our system to be of assistance to their workflow, it is necessary that it be able to find solutions within a time frame that allows designers to make quick iterations of their design. This scenario and the sheer size of the search space of potential solutions render straightforward techniques, such as exhaustive search, impractical. In order to meet such requirements we propose an evolutionary algorithm to search the space of candidates, making use of an expert-knowledge-derived heuristic to guide the exploration of potential puzzle solutions. Although this technique does not guarantee we will find the optimal solution, which is also of interest to the designers, it consistently finds diverse, close-to-optimal solutions under our strict time constraints. The diverse set of solutions found provides valuable insight to the designers, allowing them to quickly analyze the attributes of the solution space and compare different design iterations. The puzzles discussed in this paper are deterministic, single-player, and fully observable at all steps. These puzzles are defined as constraint satisfaction problems with one or more constraints, where the objective is to select and assign items, from a pool of candidates, in order to complete the requirements. If the puzzle is solvable, there is rarely (if ever) a unique solution to it. In this paper, we describe a randomized, heuristic-driven, constructive state-space search methodology to validate the solvability of a puzzle. With thousands of possible combinations to build a solution from, search efficiency is key. In addition, each solution also has a `solution cost', which is the sum of the individual item prices comprising the solution.
The cost of the solution should be minimized to better assess the `value' of the puzzle offered to the players. We further expand this methodology to find a set of near-optimal solutions efficiently using a custom-designed evolutionary algorithm. It is important to emphasize that in our work we focus solely on puzzle validation, regardless of whether the puzzles were generated manually or automatically. The rest of the paper is organized as follows: Section \ref{sec:relatedwork} discusses how our approach fits in the context of previous research. Section~\ref{PBP} defines the party building puzzle, our example game, followed by the constructive approach to build a solution in Section~\ref{sec:formationplaytesting}. Section~\ref{sec:optimization} further develops our approach on how to harness the power of a Genetic Algorithm not only to find a solution but also to optimize the search for puzzle solutions, while trying to approximate the global price optimum. Lastly, we conclude by explaining our findings and future work in Section~\ref{sec:conclusionfuture}. \section{Related Work}\label{sec:relatedwork} The goal of this work is to develop an automated solution strategy targeted at validating a puzzle design, which lends itself well to the concept of automated playtesting. Commonly, automation in playtesting revolves around agents that can play through the game, or particular game scenarios. In particular, artificial intelligence agents can be used to test the boundaries of the rules/design constraints as in~\cite{de2017aicontemporaryboardgames, garcia2018automatedhearthstone}. Our approach, in contrast, is not conducted by a game-playing agent (the puzzles are not action-based); rather, the player needs to find a solution by selecting and organizing items that can fulfill the requirements presented. It is important to note that for our problem, rather than automatically trying to change the design, or to create a new puzzle, we instead assist in evaluating the feasibility of a proposed puzzle.
Perhaps more similar to our approach, Bhatt et al. evolve Hearthstone decks, by selecting from existing cards, to fit the strategy of game-playing agents~\cite{bhatt2018exploringhearthstone}. The strategy of using custom Genetic Algorithm based solvers for jigsaw puzzles was first introduced in~\cite{Toyama2002AssemblyOP} for binary image puzzles. The large set of selection items presents a unique challenge in choosing the solution optimization approach. The closest example of an evolutionary approach applied to effectively solve puzzles, rather than a greedy approach, is demonstrated by~\cite{SolomonPuzzleEvolution} for very large jigsaw puzzles, and later extended to multiple puzzles by \cite{MultipleJigsaw}. Specifically, in~\cite{SolomonPuzzleEvolution} the authors propose an approach that iteratively improves an initial population via the means of natural selection (mutation, selection, crossover) to find more accurate solutions (i.e., the correct image) with a novel puzzle representation and a custom crossover approach. The puzzles considered in our work have a similarly large pool of items to (pre)select and assign, which poses a comparable computational hurdle. The video game puzzles we consider pose the added challenge of non-uniqueness of a solution as well as the absence of a clearly defined benchmark solution to compare with. \section{Example Problem: Party Building Puzzles}\label{PBP} To illustrate our proposed Graph-Based Genetic Algorithm approach to finding a set of near-optimal solutions, we first describe an example problem to introduce all the necessary terminology, followed by a constructive search based method to build a single solution, which becomes the basis for the evolutionary computations. Party building fantasy combat games have been rising in popularity recently, e.g., Idle Champions, Firestone Idle RPG, and others (Fig.~\ref{fig:image2}). In these games, a player is tasked with selecting and filling out a combat formation with different heroes.
Each combatant has different properties, making them suitable for different positions in the combat formation as well as making them strategically preferable to combine with other fighters. The genre of party-based fantasy combat games is a suitable candidate for demonstrating our approach. \begin{figure}[t] \centering \begin{subfigure}[t]{0.225\textwidth} \centering \includegraphics[height=0.8in]{images/IdleFirestone.jpg} \label{fig:subim1} \caption{Idle Firestone} \end{subfigure} ~ \begin{subfigure}[t]{0.225\textwidth} \centering \includegraphics[height=0.8in]{images/Idle-Champions-Cheats-Tips.jpg} \label{fig:subim2} \caption{Idle Champions} \end{subfigure} \caption{These figures illustrate fantasy parties arranged in typical combat formations. For example, melee fighters are positioned in the front and ranged fighters or support characters are in the back. Even if the core game play differs, these concepts are shared, suggesting the wide applicability of formation-building puzzles within this game genre.} \label{fig:image2} \end{figure} The game we are analyzing is called ``Party Building Puzzle'', or PBP for short. In a PBP, the player has to select a number of items (heroes), and then assign those items to the combat formation such that the heroes in the formation meet the puzzle requirements. An important thing to note is that the puzzle requirements are defined by the game designers, which differentiates this type of formation filling from more general strategy-based games, and hence the use of the term `puzzle'. Each hero has a number of properties such as race, religion, nation, total level, gear quality level, price, and how rare it is to obtain that particular hero. Puzzle requirements are constraints the player needs to honor when building their formation, and are usually related to the hero properties.
For example, a puzzle might require the player to use at least 3 orcs in their party, from at least 4 different nations, while another puzzle requires that the average level of the party be at least 60. Heroes additionally contribute to something called party synergy, a graph-based metric of how well the party works together, which will be described in more detail in the next section. Requesting players to form a party of a certain synergy level creates a complex challenge in which the player must not only choose the right heroes, but also arrange them appropriately within the party's formation. Currently the game possesses a total number of unique cards on the order of $10^{5}$, and the number keeps expanding as the designers are constantly releasing new cards. The number of possible heroes is near limitless, which creates a combinatorially complex search space over which a solver must iterate to find solutions that satisfy the requirements. In that scenario, the task of constructing all possible solutions for a given puzzle becomes infeasible. For that reason, we use the optimization heuristics of an evolutionary strategy to efficiently search the space of possible solutions, while optimizing for what is defined as the ``cheapest''-priced option. \section{Puzzle Solver Part I: Formation Building and Playtesting}\label{sec:formationplaytesting} \subsection{Problem Formulation} The puzzles we are considering, with both linear and quadratic constraints, can be formulated as an asymmetric quadratic assignment problem (QAP), which consists of selecting $N$ items out of a pool of $M$, where $N\ll M$, and placing them into a graph of locations in an optimal way such that certain conditions are satisfied. Each item, i.e., hero, is characterized by a list of properties, so-called traits. Each can be selected once and its selection and corresponding placement are advised by those traits.
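To make the requirement checks above concrete, the following minimal Python sketch encodes a hero's traits and verifies the two example requirements (at least 3 orcs from at least 4 different nations, and an average party level of at least 60). All class and field names are illustrative assumptions, not the game's actual data model.

```python
from dataclasses import dataclass

# Hypothetical hero record; the real game tracks more traits (religion,
# gear quality, rarity, etc.), but these suffice for the example checks.
@dataclass(frozen=True)
class Hero:
    name: str
    race: str
    nation: str
    level: int
    price: float

def satisfies_requirements(party):
    """Check the two example requirements from the text:
    at least 3 orcs from at least 4 different nations,
    and average party level of at least 60."""
    orcs = sum(1 for h in party if h.race == "orc")
    nations = len({h.nation for h in party})
    avg_level = sum(h.level for h in party) / len(party)
    return orcs >= 3 and nations >= 4 and avg_level >= 60
```

A validator would run such a predicate against every candidate party produced by the search; in practice the set of active requirements changes from puzzle to puzzle, so the checks are data-driven rather than hard-coded.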
In our example of the Party Building Puzzle, $N=10$ is the number of nodes in the puzzle graph to fill out and $M=10\,000$ is the number of possible items to choose from without repetition. Linear constraints are referred to as simple, and can be either continuous or discrete. For continuous properties, constraints are formulated as `the accumulated value of the property $P$ over all positions has to be greater than or equal to some value $a$': $\sum_{i=1}^N x^P_i\geq a$. For example, \textit{minimum team level is 84}. Discrete property constraints require that `not more than $b$ items with a property $P$ are allowed in the placement graph': $ \sum_{i=1}^N I(x_i^P)\leq b,$ where $I$ is a boolean indicator function defining the presence or absence of a certain trait in the feature vector for each item. For example, \textit{at most 8 elves are allowed} or \textit{at most 4 humans and at least 2 elves are allowed}. In our PBP, the number of properties for a hero does not exceed $P=8$. In addition, we have a nonlinear complex constraint, so-called `synergy', which depends on both item placement and the compatibility between adjacent items in the graph. Hence, its value is not defined until all the items are selected and assigned. Synergy is present within each puzzle, but can have various requirement values from 0 to 1. \begin{equation} \label{complexconstraints} \sum_{j=1, j\neq i}^N \sum_{i=1}^N \sum_{p=1}^Pw(x_j^p, x_i^p) \geq Synergy, \end{equation} where $w(x_j^p, x_i^p)$ is the weight of the edge in the graph between two neighbouring nodes $i$ and $j$, and the summations run over each edge and across each trait. The weight function is specific to the puzzle and is a custom-defined, non-linear relation. We formulate a quadratic optimization problem to maximize the synergy constraint, following the Koopmans and Beckmann QAP formulation~\cite{QAPoriginal}, over all possible selection and assignment permutations.
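To make the synergy sum in Eq.~\ref{complexconstraints} concrete, the following is a minimal sketch in which the graph is given as a list of undirected edges (each counted once) and the weight function is a placeholder trait-matching rule; the game's actual weight function is a custom non-linear relation and is not reproduced here.

```python
# Illustrative synergy computation for Eq. (1). Each undirected edge is
# counted once; `w` is a hypothetical trait-matching weight, whereas the
# game's real weight function is a custom non-linear relation.

def w(trait_a, trait_b):
    # Placeholder weight: reward matching traits on adjacent heroes.
    return 1.0 if trait_a == trait_b else 0.0

def synergy(assignment, edges, num_traits):
    """assignment: node index -> trait vector; edges: (i, j) pairs."""
    total = 0.0
    for i, j in edges:
        for p in range(num_traits):
            total += w(assignment[i][p], assignment[j][p])
    return total

# Tiny example: a 3-node path graph, heroes with 2 traits each.
heroes = {0: ("orc", "north"), 1: ("orc", "south"), 2: ("elf", "south")}
print(synergy(heroes, [(0, 1), (1, 2)], num_traits=2))  # 2.0
```

In practice the resulting sum would be normalized to the $[0,1]$ range used by the puzzle requirements.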
Our solution must search for which items to use, out of the available ones, as well as which position to place each item in. In addition to the classical QAP, we need to also pre-select $N$ out of $M$ possible items before assigning them to locations, which makes the assignment asymmetrical. The asymmetric nature of the placement presents unique challenges, since the number of candidate items $M$ can be on the order of $10^{5}$, while the graph consists of at most a few dozen nodes. So, in addition to the optimal placement, the selection of items is part of the optimization. \subsection{Our Proposed Solution}\label{sec:approach} In this section we first describe the design to automatically build a deck by finding a feasible solution to a challenge, i.e. one that honors all requirements specific to that puzzle while maximizing the synergy. Using this design, we further employ an evolutionary approach to find a set of optimal solutions, as described in the following section. It is known that QAPs are difficult to solve and are among the hardest NP-complete problems~\cite{puzzlenpcomplete}, especially given the size of the combinatorial search space. Any algorithm that guarantees an exact solution (given that it exists) has to consider every item (hero) and every combination, and thus has exponential computational time. Heuristic-driven approaches are usually employed to either linearize the problem or find approximate solutions~\cite{HandbookMetaheruistics}. Potentially, training a Reinforcement Learning agent to solve a puzzle could lead to fast performance. However, with dynamically changing sets of requirements as well as a large pool of candidate items, it could lead to overfitting, would be data-inefficient to train, and might not be practical. There are additional complications of defining an adaptable state representation, as well as catastrophic forgetting (both during training and/or during re-training with new data).
This is due to the fact that, though there is a limited, constant set of well-defined requirements that may appear in a given puzzle, each puzzle has a unique combination of completion requirements. Compare this to Sudoku solvers~\cite{sudoku2018} as an example, where similar graph-type constraints are defined, but the rules of the game are always the same even when the initial state of the board is different for each puzzle. In addition, placing numbers from 0 to 9 is not comparable to selecting 10 items out of tens of thousands and then placing them optimally. Another set of approaches commonly applied to similar problems are classical search methods like A-star or Monte-Carlo tree search that are based on heuristic-driven or probability-based backtracking~\cite{brownie2012MTCSreview}. However, for problems with quadratic constraints like the graph-dependent synergy value that we are trying to maximize, those approaches have limited application. In particular, deriving admissible heuristics for quadratic optimization is a challenge by itself. Additionally, the size of the search space we are dealing with makes those approaches impractical. As a reminder, our goal is to select $N$ items, each of which has $P$ features, from a list of $M$ items, where $N<<M$. These items must be selected and assigned to positions within a graph so as to satisfy a list of requirements and avoid repetitions. The solution strategy we chose is a constructive, randomized, heuristic search as described in Algorithm~\ref{algo:constructsolution}. To avoid deterministic behaviour, we first define a random traversal order in which to visit each node in the graph. Then, for any unvisited node, we first filter out all items from the list of candidates following the constraint requirements, such that any item chosen from the filtered list is valid. Once the list is filtered, we search through it to find an item that maximizes the synergy with the already-placed neighbouring items.
The random path traversal ensures diversity, so that the process can be repeated until the synergy requirement is satisfied. \subsubsection{Filter the candidate pool} Prior to selecting items at each position that optimize for the synergy, at each step of the algorithm we apply heuristics specific to the problem and the set of requirements in order to filter out all the candidate items that are not eligible for a valid assignment. This look-ahead approach allows us to do forward checking between the current and future candidates; if at any point the candidate pool is empty, it gives an early signal to start over. This step is applied sequentially, once per position for each linear constraint, until each node in the graph is visited. This heuristic rule is applied for discrete and continuous feature constraints. Consider a given step of the solution building process when $L$ positions out of $N$ are already assigned (or, equivalently, visited), $0\leq L<N$, and at least $a$ items of a specific property $P$ are required: $\sum_{i=1}^N I(x^P_i) \geq a$. First, we compute the intermediate value $l$ of that constraint using the assigned items: $l = \sum_{i=1}^L I(x^P_i)$. Then, $l \geq a$ means that the constraint is already satisfied and no action is taken. Alternatively, if $l < a$, we have $K=N-L$ nodes yet to be visited, out of which $k=a-l$ remaining items must have property $P$. Then, if $K>k$, again no action is taken. Else, if $K=k$, we filter out all items in our candidate pool that do not have property $P$, forcing the selection of items with property $P$ in subsequent iterations to honor the requirement. This process is repeated for each requirement. Note that, since at each iteration only one item is selected to fill a given node, the repeated application of these heuristics guarantees that $K$ is never less than $k$, and in the end all requirements are satisfied.
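The filtering rule above can be sketched for a single `at least $a$ items with property $P$' constraint as follows; the function and argument names are illustrative placeholders, not taken from the game's code base.

```python
# Illustrative look-ahead filter for one "at least a items with property P"
# constraint, following the l / K / k bookkeeping described in the text.

def filter_pool(pool, assigned, a, has_property, n_nodes):
    """pool: remaining candidates; assigned: items already placed;
    a: minimum count of items with property P; n_nodes: graph size N."""
    l = sum(1 for x in assigned if has_property(x))  # intermediate value
    if l >= a:
        return pool                 # constraint already satisfied
    K = n_nodes - len(assigned)     # nodes yet to visit
    k = a - l                       # items with P still needed
    if K > k:
        return pool                 # enough slack, no action taken
    # K == k: every remaining slot must carry property P.
    return [x for x in pool if has_property(x)]

# Example: N=4, "at least 2 elves", one elf placed, one slot remaining.
is_elf = lambda h: h.startswith("elf")
pool = ["orc1", "elf2", "orc2"]
print(filter_pool(pool, ["elf1", "orc3", "orc4"], a=2,
                  has_property=is_elf, n_nodes=4))  # ['elf2']
```

An empty return value is the early signal, mentioned above, that the current traversal should be abandoned and restarted.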
\subsubsection{Select the best matching item} Once the item candidate pool is filtered, any randomly selected item will satisfy all the linear constraints. At this step the goal is to select the best matching candidate in the partially filled graph, i.e. the one that maximizes the complex constraints. The complex constraints are computed for a position that has to be filled with respect to its immediate neighbors in the partially filled graph. For example, if the node $E$ has to be filled as shown on Fig.~\ref{graphBestMatching}, it is only influenced by nodes $A$ and $C$. For an empty graph, a random item is picked. \begin{figure}[b] \centerline{\includegraphics[scale=0.4]{images/GraphBestMatchNeighbours-7.jpg}} \caption{Graph traversal for searching for the best matching item based on synergy with neighbors. The order of traversal is alphabetical, so that node $E$ is influenced only by nodes $A$ and $C$, since they were the only neighbours visited.} \label{graphBestMatching} \end{figure} Of note is the fact that this intermediate synergy value at any given step does not guarantee that the final solution will comply with the complex requirements, since these are quadratic requirements. If they fail to be satisfied, the algorithm starts over with a new randomly selected traversal path. Fig.~\ref{fig:samplesolutions} demonstrates two sample solutions, both of which satisfy all linear constraints (which concern only the selection of the items), but only the right one satisfies the quadratic constraint (which concerns both selection and optimal arrangement). The synergy requirement is, however, usually hard to satisfy, and an unguided search would require a number of attempts that makes the problem intractable; our proposed guided, randomized search increases the chances of obtaining feasible solutions in only a few iterations. Fig.~\ref{fig:chemistry_unguided} shows the results of running the algorithm on a given set of linear requirements with and without heuristics guiding it toward synergy maximization.
In both cases the same constructive approach is used to build a solution, so linear constraints are satisfied through the candidate filtering; the only difference is that in the latter case we do not select the best matching candidate. Out of 5000 independent attempts, the maximum synergy value reached without heuristic guidance was 0.35. This is poor performance, as required values in the game are commonly above 0.7--0.8, demonstrating the value of heuristic guidance. Solution construction is summarized in Algorithm~\ref{algo:constructsolution}. \begin{algorithm}[h] \SetAlgoLined \begin{algorithmic}[0] \State{\bf Initialize}: Random graph traversal path \While{Synergy is not satisfied}{ \For{Each unvisited node in a graph}{ \State Look ahead: filter the list of candidate items \For{all linear requirements}{ Apply the heuristics rule } \State Select the best matching candidate (synergy wise) } \State Compute synergy Eq.~\ref{complexconstraints} \If{synergy requirement is satisfied}{return the solution: {selected items $x$, permutation $\sigma$}} \Else{ New random path and iterate until solution is obtained} } \caption{Guided Randomized Heuristic Search to Construct a Solution} \label{algo:constructsolution} \end{algorithmic} \end{algorithm} \begin{figure}[h] \centering \begin{subfigure}{0.235\textwidth} \includegraphics[height=1in]{images/BadSolution.png} \end{subfigure} \begin{subfigure}{0.235\textwidth} \includegraphics[height=1in]{images/GoodSolution.png} \end{subfigure} \caption{Sample solutions to a puzzle with the requirements ``At least 7 Goblins and at least 2 races''. In both cases the same heroes are used.
However, the arrangements are different, which results in a higher synergy (edges) for the solution on the right, as party members are placed more optimally.} \label{fig:samplesolutions} \end{figure} \begin{figure}[h] \centerline{\includegraphics[width=0.7\linewidth]{images/SynergyHistogramsComparative.jpg}} \caption{Comparison of Synergy values normalized to [0,1] for an independent set of 5000 attempts for a given challenge with and without heuristics guidance. Vertical axes are frequencies as percentage over all the samples. As can be seen, without the heuristics guidance to continuously select the best matching items, the synergy value never reaches more than 0.3. Results are for a sample challenge.} \label{fig:chemistry_unguided} \end{figure} The computational complexity of the algorithm is ${O}(kNM)= {O}(NM)$ per iteration in the worst case, where $k$ is the fixed finite number of linear requirements, on the order of $1$ to $6$; $N$ is the number of nodes in the graph, here $10$; and $M$ is the size of the candidate pool, on the order of $10^4$. The number of iterations is fixed, typically to a value of $10$ or less for the purposes of our experiments. In the best-case scenario, however, linear requirements lead to a significant reduction in the size of the candidate pool such that it approaches the size of the graph. In these cases, the complexity approximately reduces to ${O}(N^2)$ per iteration. It is important to note that exhaustive search is intractable for this type of problem, since the number of all possible combinations of (a) selecting 10 items out of 10\,000 and (b) arranging those 10 items into a specific order is roughly of the order of $10^{40}$. Depending on the requirements of the puzzles, there could be a few hundred thousand feasible solutions among those combinations.
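The size of this exhaustive search space can be checked with a quick back-of-envelope computation:

```python
# Back-of-envelope check of the exhaustive search-space size: choosing
# 10 items out of 10,000 and then ordering the chosen 10.
from math import comb, factorial

selections = comb(10_000, 10)      # unordered selections of 10 items
arrangements = factorial(10)       # orderings of the chosen items
total = selections * arrangements  # just under 1e40
print(f"{total:.2e}")
```

The product is the falling factorial $10\,000 \times 9\,999 \times \cdots \times 9\,991$, i.e. just under $10^{40}$.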
At the same time, human players are able to find solutions without extensively covering the entire search space, by using the limited number of items available to them and obtaining a few missing ones, as well as by taking advantage of their intuitive knowledge of the game mechanics and items' properties. Note that the heuristics used in the algorithm are inspired by how human players act: the way they select items and use in-game tools to filter on certain item traits, and how they use their intuition to arrange the items to increase synergy. However, the game designer's task of finding not just a solution, but the `best' solution, requires extensive coverage of the entire search space, not just of those few items available to each player (which could of course be different for each player). \section{Puzzle Solver Part II: GA-Based Solution Optimizer} \label{sec:optimization} The solution space of the type of puzzles described often consists of multiple solutions, each of which is valid and honors the constraints; however, some solutions are more desirable than others, e.g. having a low price or using items which are easier to obtain. Knowing the optimal solution allows content creators to gauge the desired reward value and puzzle difficulty accordingly. Optimizing for a solution in the class of constraint satisfaction problems considered poses unique challenges. Since they form discrete combinatorial problems, gradient-based optimization methods are not applicable. Various population-based optimization methods have been applied to solving similar types of puzzles. In many cases, memetic variants of genetic algorithms, i.e. evolutionary approaches hybridized with constraint satisfaction tools, are used successfully for combinatorial optimization problems~\cite{HandbookMetaheruistics}. Similar ideas were also used in a hybrid genetic algorithm applied to the Light-up puzzle~\cite{LightupPuzzlesHybridGA2007}.
In those approaches, the genetic algorithms always work with feasible solutions, and if the individuals become infeasible after crossover and mutation operations, they are ``healed'' to restore feasibility. In this work we propose a customized hybrid GA for searching the solution space to find the solution that is optimal in terms of an objective that is not part of the constraints. \subsection{Graph-Based Hybrid Genetic Algorithm for Solution Optimization} There are various ways in which we can choose to set up the architecture of the genetic algorithm: the representation of the solution space, whether to allow vertical or horizontal crossover, which of several ways parents are selected for crossover based on their fitness values, which mutation rate, mutation strategy, and selection strategy to use, and the settings for initial population size, offspring size, and crossover and mutation rates. We are using a Graph-Based Genetic Algorithm (GB-GA) in which the relative locations of the nodes to be filled are important, so reshuffling of genes is not allowed~\cite{Ashlock1999GraphBased}. The algorithm steps are described below, highlighting the specifics of our implementation. \subsubsection{Representation and Initialization} In our terminology, a chromosome is an individual solution to the puzzle, which consists of selected items from the candidate pool assigned to the formation. The same items assigned in a different order are considered two separate solutions. To start off the genetic algorithm, Algorithm~\ref{algo:constructsolution} is used to randomly generate a number of independent feasible candidate solutions, i.e. chromosomes.
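As an illustration of this representation, a chromosome can be stored as an ordered tuple of item ids (the same ids in a different order form a different solution). The sketch below also shows a uniform, repetition-free crossover in the spirit of the operation described in the next subsection; slots where both parents' genes would repeat are left empty for the healing step. All names are illustrative placeholders.

```python
import random

# Illustrative uniform crossover over chromosomes stored as ordered
# tuples of item ids. Each offspring slot takes a gene from either
# parent with p = 1/2; if the picked gene is already used, the other
# parent's gene is tried, and if both repeat the slot is left as None
# for the healing step to fill.

def uniform_crossover(parent_a, parent_b, rng=random):
    child, used = [], set()
    for g_a, g_b in zip(parent_a, parent_b):
        pick = g_a if rng.random() < 0.5 else g_b
        alt = g_b if pick == g_a else g_a
        if pick in used:
            pick = alt
        if pick in used:
            pick = None  # both genes repeat: leave for healing
        child.append(pick)
        if pick is not None:
            used.add(pick)
    return child

random.seed(0)
print(uniform_crossover((1, 2, 3, 4), (5, 2, 1, 6)))
```

Each position of the offspring thus carries a gene from one of the two parents (or a hole to be healed), and no item appears twice.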
\subsubsection{Crossover and Mutation} After examining several variants of crossover, we have chosen a uniform crossover operation such that each unoccupied position in the offspring solution (chromosome) is assigned an item (gene) from one of the parent solutions with probability $p=1/2$, while enforcing that no item may be repeated in the offspring solution. The selection of parents for the crossover is performed by a rank-based selection rule~\cite{TATE1995}. The mutation rate is set at 20\%, meaning that each item (gene) has a 20\% chance of being replaced, resulting in an average of $0.2N$ items removed from the graph during a given mutation pass. In detail, a solution is traversed in a random order and at every step a probabilistic decision is made whether to remove a given item both from the solution and the remainder of the candidate pool. The resultant partial solution is then populated again, with the missing items randomly filled with new items (heroes). \subsubsection{Hybrid Approach to Find Feasible Solutions} Note that solutions produced by recombination and mutation are not necessarily feasible, since those operations are not guaranteed to satisfy the challenge constraints. As an extra step before accepting them, we repair those solutions by a process we call `healing' to ensure that the final offspring solution is feasible, or valid~\cite{LightupPuzzlesHybridGA2007, HybridGA2}. More specifically, first, those few items in the offspring that were marked for replacement during mutation are removed. Then, the same Algorithm~\ref{algo:constructsolution} we used to build a new solution is used to replace the missing items in the partial solution. If no such replacements are possible, the algorithm rejects that solution in favor of a new randomly generated one. \subsubsection{Selection and Refreshment} The selection process favors solutions with better fitness and diversity.
The new generation of solutions is selected from the offspring and the previous population. If the selected solutions comprise a diverse enough population, the algorithm moves to the next generation by selecting the best fit individuals. Otherwise, if diversity drops below a threshold, a fixed number of new randomly generated individuals are added to the offspring pool. \subsection{Maintaining Diversity} One of the key reasons the algorithm encounters premature stagnation, or trapping, is the population losing diversity among its individual solutions. If that happens, recombinations and local perturbations through mutations are not able to lead the individual solutions out of local minima~\cite{Whitley1995}. We have chosen phenotype diversity as a representative measure relating to the optimal fitness of the solution. We explicitly track diversity within the population at every generation, and if it falls below a certain user-defined threshold (measured by a normalized coefficient of variation), we randomly replace a third of the existing solutions with new individuals\footnote{The diversity measure and threshold, as well as the percentage of the solutions to be replaced, are engineering hyperparameters that have been tested to be efficient on average.}. While maintaining diversity is essential to avoid algorithm stagnation, the diversity measure is still a proxy, as it does not capture the full variation in the solution space. In addition, adding new solutions that are far from optimal reduces the rate of convergence. Alternatively, one can introduce an adaptive mutation rate, which increases the number of mutations when the diversity of the population stagnates. These approaches, however, slow down the generation process. Additionally, mutation does not always lead out of a local minimum.
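The diversity-maintenance step above can be sketched as follows, using a normalized coefficient of variation over fitness values as the phenotype-diversity proxy; the threshold value and the `random_solution` generator are illustrative placeholders, and the replaced third is chosen at random as described in the text.

```python
import random
import statistics

# Illustrative diversity maintenance: phenotype diversity is measured as
# a normalized coefficient of variation of fitness values; if it drops
# below a (placeholder) threshold, a random third of the population is
# replaced by freshly generated solutions.

def diversity(fitnesses):
    mean = statistics.mean(fitnesses)
    if mean == 0:
        return 0.0
    return statistics.pstdev(fitnesses) / abs(mean)

def maybe_refresh(population, fitnesses, random_solution,
                  rng=random, threshold=0.05):
    if diversity(fitnesses) >= threshold:
        return population              # diverse enough, no action
    n_new = len(population) // 3       # replace a random third
    survivors = rng.sample(population, len(population) - n_new)
    return survivors + [random_solution() for _ in range(n_new)]

pop = list(range(9))
print(maybe_refresh(pop, [1.0] * 9, lambda: "fresh"))
```

With identical fitness values the coefficient of variation is zero, so the refresh is triggered and three of the nine individuals are replaced.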
One of the most successful solutions to this problem is the ``multi-population'' or ``multi-island'' model~\cite{Leito2015IslandMF, Tomassini2005, Whitley1995}, which allows a unique search trajectory to be followed by each island, resulting in a more efficient exploration of the search space. An additional benefit is that it is inherently parallelizable and can be implemented using distributed computing. In our current work we have designed a multi-island approach with migration to help the genetic process maintain diversity. The overall population is partitioned into sub-populations based on their similarity, and each sub-population is assigned to an island. After that, each island evolves independently, and then, after a fixed number of sub-generations or epochs, migration allows the islands to interact. The traditional island model requires additional parameters such as the number of islands, the size of the population of each island, the migration frequency, migration rate, migration topology, and migration policy. The migration strategy between the islands directly affects performance. It also impacts the optimization behaviour of the algorithm by balancing exploration and exploitation and indirectly supporting diversity. As a result, each island explores a separate optimization path, resulting in broader coverage of the search space. The exact mechanism of migration between islands used in this paper follows a fully-connected migration pattern: after a fixed number of sub-generations within islands (10 sub-generations), all solutions across all islands are combined, sorted by their fitness similarity, and then divided amongst the islands equally, such that the most similar solutions are grouped together.
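A minimal sketch of this fully-connected migration step: individuals from all islands are pooled, sorted by fitness (the similarity measure used here), and re-partitioned so that the most similar individuals share an island. Handling of pools not evenly divisible by the island count is omitted for brevity.

```python
# Illustrative fully-connected migration: pool all islands, sort by
# fitness, and re-partition so similar individuals share an island.

def migrate(islands, fitness):
    """islands: list of lists of individuals; fitness: individual -> float."""
    pool = sorted((ind for isl in islands for ind in isl), key=fitness)
    size = len(pool) // len(islands)
    return [pool[i * size:(i + 1) * size] for i in range(len(islands))]

# Example: two islands of two individuals each, fitness = identity.
print(migrate([[4, 1], [3, 2]], fitness=lambda x: x))  # [[1, 2], [3, 4]]
```

Each island then resumes its independent evolution for the next batch of sub-generations.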
There are multiple choices of migration strategies~\cite{Li2015HistoryBasedTS}, since islands can be clustered together using various similarity measures: according to their fitness values, through other measures of diversity like entropy~\cite{Arellano-Verdejo2017}, or even dynamically through spectral clustering based on the pair-wise similarities between individuals~\cite{Meng2017}. In general, having more densely connected islands gives a more accurate approximation of the lower bound, but it is more computationally expensive. In the current work we have chosen the fully-connected island model, where migration of individuals is not constrained, and we use fitness as our similarity measure. \subsection{Experimental Setup} \begin{figure*}[t!] \centering \subfloat {\includegraphics[width=0.31\linewidth]{images/MedianComparison_Type1s.png}} \subfloat {\includegraphics[width=0.31\linewidth]{images/MedianComparison_Type2s.png}} \subfloat {\includegraphics[width=0.31\linewidth]{images/MedianComparison_Type3s.png}} \\ \subfloat {\includegraphics[width=0.31\linewidth]{images/DiversityComparisonType1s.png}} \subfloat {\includegraphics[width=0.31\linewidth]{images/DiversityComparisonType2s.png}} \subfloat {\includegraphics[width=0.31\linewidth]{images/DiversityComparisonType3s.png}} \caption{\textbf{Top row}: Convergence curves of best fit solutions for challenges Type1, Type2 and Type3, comparing the two approaches while maintaining diversity in both cases, averaged over 50 independent runs. The solid lines indicate the best fitness value across all the runs, the dotted lines are median values at each generation, and shaded regions cover each run. The solid black line represents the optimal solution. \textbf{Bottom row}: Diversity at each generation, solid lines: minimum, dotted line: median.} \label{fig:convergence} \end{figure*} To demonstrate the performance of our approach, we defined 3 types of puzzles that cover different requirements and sizes of the search space.
Certain requirements increase the difficulty of achieving a specific synergy threshold if they create a large search space. Opposed to these are constraints that greatly reduce the number of candidate items, resulting in quicker convergence, since synergy is easier to satisfy with more similar items. Our representative puzzles for each type are: \begin{itemize} \setlength\itemsep{0.05em} \item{\textbf{Type 1}} Min Synergy 0.8, Min Team Level 84 \item{\textbf{Type 2}} Min Synergy 0.9, Min Team Level 75, at least 8 religions, at least 6 hometowns, at most 2 heroes with the same religion \item{\textbf{Type 3}} Min Synergy 0.8, Min Team Level 84, Min of 8 elves \end{itemize} For each of the selected challenges, optimal solutions are provided by the game designers as benchmarks. The parameters of the genetic approach were selected through trial and error to perform reasonably well across various puzzles, as summarized in Table~\ref{TableParameters}. Our goal was to demonstrate the advantage of the multi-island approach in maintaining diversity, in comparison with a single-population approach, and thus in leading to more reliable results, while the actual model parameters can be tuned for a problem of interest. For example, the authors in~\cite{RollingHorizon2020} present an N-Tuple Bandit Evolutionary approach to automatically optimize hyperparameters.
\begin{table} \begin{center} \caption{GA parameters for ``vanilla'' and ``multi-island''} \begin{tabular}{lll} \hline Parameter & vanilla GA & multi-island GA \\ \hline mutation rate & $0.2N$ & $0.2N$ \\ crossover rate & 0.5 & 0.5 \\ pop/offspring size & 50/100 & 10/20 per island \\ migration type & N/A & similarity based\\ similarity measure & N/A & fitness \\ migration frequency & N/A & 10 \\ islands epochs & N/A & 10 \\ generations & 100 & 100 \\ \hline \end{tabular} \label{TableParameters} \end{center} \end{table} \subsection{Numerical Results} The experiments were conducted for the three types of puzzles, comparing the ``vanilla'' hybrid genetic algorithm with the ``multi-island'' approach. Fig.~\ref{fig:convergence} compares the behaviour of the two approaches across 25 independent restarts with a fixed computational budget for each scenario, so that average performance can be traced. Performance is defined by the rate of convergence and the diversity of the solution population. During the early generations, the ``vanilla'' genetic approach slightly outperforms the multi-island one, since it has more individuals to choose from. Averaged over time, however, the multi-island approach better approximates the lower bound with less variance, as it manages to escape local minima through interaction between the islands. We conclude that overall the multi-island approach consistently outperforms the standard GA under the same time budget, while maintaining at least twice the diversity. The effect of the initial generations fades out fast for both approaches. The steps on the convergence curves reflect the reshuffling between the islands. \subsection{Notes on Algorithm Feasibility} As with any genetic algorithm for combinatorial non-convex optimization, there is no way to prove that any of the local minima is actually the global minimum.
The approach we took is to show that, even though the algorithm is random in nature (both because of the initial population and the recombinations), running it repeatedly for the same puzzle leads, on average, to the same minimal fitness value. Even more, by maintaining the diversity of the population through a multi-island approach, we can avoid algorithm stagnation and keep exploring the solution space until a satisfactory minimal solution is found. In theory, the actual global minimum is not known for these types of problems. So we are effectively comparing our genetic approach to the alternative of finding and enumerating all possible solutions and selecting the `best' one. That could mean finding a few hundred thousand solutions. Even though each of them takes about 10 seconds on average to find on a standard CPU machine, it would overall take up to $300$ hours ($10 \times 100\,000 / 3600$), while with the genetic algorithm we can limit that time to about $10$ minutes, by reducing the number of independent solutions we need to find to only those that initialize the population, while recombinations are relatively cheap to compute. The number of puzzles generated by designers then becomes an important factor, as they are constantly generating new puzzles (on average $20$ per day) and need the solver to run in actionable time to gain knowledge of the attributes of the solution space. \section{Conclusion and Future Work}\label{sec:conclusionfuture} Our approach is motivated by the recurring problem designers have of improving and optimizing the task of creating new quality puzzle variations. We demonstrate our use case on the Party Building Puzzle game, where players collect heroes and later select from their collection which ones to use and how to arrange them in order to satisfy the puzzle constraints.
Our approach optimizes for the minimum amount of in-game resource value a solution has, to help designers evaluate and compare the puzzles they create. In addition, the proposed solution had to be capable of running in feasible time, to allow designers to constantly iterate over their designs in just a few hours. In this work we have proposed an efficient constructive randomized search algorithm to build a solution using heuristics specific to the puzzle's constraints, and have shown that a hybrid, graph-based genetic approach allows us to find near-optimal solutions to the puzzles. One of the challenges in this type of combinatorial optimization is premature convergence to local optima. We have experimentally demonstrated how the multi-island approach with a randomized selection strategy allows us to reach a near-optimal solution. As mentioned above, instead of decoupling the constrained optimization problem into two different ones, constraint satisfaction for a QAP and a combinatorial optimization to find the best performing solution, one could alternatively formulate the problem as a multi-objective optimization, where constraints like synergy, ratings, etc.\ are optimized in combination with the fitness. A comparative study of the performance of these approaches is future work. Another avenue to explore are methods that can further investigate the search space of solutions. In an effort to increase the diversity of potential solutions, and thus provide designers with more detailed insight into their own design, we plan to explore algorithms for illuminating search spaces, such as Map-Elites~\cite{khalifa2018talakat, fontaine2019mapping}. Map-Elites explores different dimensions of the search space, and in doing so also provides alternative solutions that are more robust in avoiding the local-minima problem discussed above.
We have demonstrated that the power of genetic algorithms can be exploited for NP-hard combinatorial optimization problems with large, non-unique search spaces and in the presence of additional constraints. Our proposed framework, with its novel puzzle representation and custom-designed multi-island, graph-based genetic approach, can be adapted to other problems with similar properties, as long as the genetic operations are specified for the problems of interest. This method could also further be extended to solve puzzles starting from a partial state, since there is no dependency on prior state. \section*{Acknowledgment} We would like to genuinely thank the anonymous Reviewers for all valuable comments and suggestions, which helped us to improve the quality of the manuscript. \bibliographystyle{IEEEtran} \section{Introduction} Puzzles have long been a staple of video games. They can be simply enjoyable to play, or necessary to advance in the game, and usually provide in-game rewards upon their completion. In terms of design, puzzles can affect the in-game economy: completion of harder puzzles can lead to better rewards, and rewards can be purchased with, or traded for, in-game currency. Unlike traditional logic puzzles like Sudoku, Word Wheel, Tower of Hanoi, Refraction, crosswords, etc., where the game rules are predefined and static~\cite{puzzlenpcomplete}, puzzles in video games commonly have changing constraints in order to diversify gameplay. With many games having ever-evolving puzzles which continue to grow in complexity and difficulty, the task of designing these puzzles becomes time-consuming and challenging. The game design process is usually complex and labor intensive. Exploring the design space is how game creators discover and develop the rules and mechanics of the game. One of the principal challenges in designing puzzles, and potentially the most important for game designers, is guaranteeing that they can be solved by players~\cite{Smith_quantifyingover}.
Unsolvable puzzles are frustrating to players for obvious reasons, but puzzles that become exceptionally hard due to virtually infeasible constraints, such as having barely enough time to perform all necessary moves, can be just as frustrating. In order to guarantee the quality of the player experience, designers then need to certify that a solution to the puzzle exists. With ever-growing constraints and numerous puzzles to create, this task quickly becomes more complex. In addition to saving time, automating certain aspects of the puzzle creation process can also assist in discovering new solutions that might not have been noticed initially. This allows designers to validate and quickly iterate through different versions of their puzzles. It also answers questions such as: how many solutions exist for a particular puzzle, what is the optimal solution, or what is the ``cheapest price'' solution. For the game showcased in this paper, game designers need to produce a significant number of new puzzles every day. In order for our system to assist their workflow, it must be able to find solutions within a time frame that allows designers to make quick iterations of their design. This scenario and the sheer size of the search space of potential solutions render straightforward techniques, such as exhaustive search, impractical. In order to meet these requirements we propose an evolutionary algorithm to search the space of candidates, making use of a heuristic derived from expert knowledge to guide the exploration of potential puzzle solutions. Although this technique does not guarantee we will find the optimal solution, which is also of interest to the designers, it consistently finds diverse, close-to-optimal solutions under our strict time constraints. The diverse set of solutions found provides valuable insight to the designers, allowing them to quickly analyze the attributes of the solution space and compare different design iterations. 
The puzzles discussed in this paper are deterministic, single-player, and fully observable at all steps. These puzzles are defined as constraint satisfaction problems with one or more constraints, where the objective is to select and assign items, from a pool of candidates, in order to complete the requirements. If the puzzle is solvable, there is rarely (if ever) a unique solution to it. In this paper, we describe a randomized, heuristic-driven, constructive state-space search methodology to validate the solvability of a puzzle. With thousands of possible combinations to build a solution from, search efficiency is key. In addition, each solution also has a ``solution cost'', which is the sum of the individual item prices comprising the solution. The cost of the solution should be minimized to better assess the ``value'' of the puzzle offered to the players. We further expand this methodology to find a set of near-optimal solutions efficiently using a custom-designed evolutionary algorithm. We emphasize that in our work we focus solely on puzzle validation, regardless of whether the puzzles were generated manually or automatically. The rest of the paper is organized as follows: Section \ref{sec:relatedwork} discusses how our approach fits in the context of previous research. Section~\ref{PBP} defines the party building puzzle, our example game, followed by the constructive approach to building a solution in Section~\ref{sec:formationplaytesting}. Section~\ref{sec:optimization} develops our approach further, harnessing the power of a Genetic Algorithm not only to find a solution but also to optimize the search for puzzle solutions, while trying to approximate the global price optimum. Lastly, we conclude with our findings and future work in Section~\ref{sec:conclusionfuture}. 
\section{Related Work}\label{sec:relatedwork} The goal of this work is to develop an automated solution strategy targeted at validating a puzzle design, which lends itself well to the concept of automated playtesting. Commonly, automation in playtesting revolves around agents that can play through the game, or through particular game scenarios. In particular, artificial intelligence agents can be used to test the boundaries of the rules/design constraints as in~\cite{de2017aicontemporaryboardgames, garcia2018automatedhearthstone}. Our approach, by contrast, is not conducted by a game-playing agent (the puzzles are not action-based); rather, the player needs to find a solution by selecting and organizing items that can fulfill the requirements presented. Note that for our problem, rather than automatically trying to change the design or to create a new puzzle, we instead assist in evaluating the feasibility of a proposed puzzle. Perhaps more similar to our approach, Bhatt et al. evolve Hearthstone decks, by selecting from existing cards, to fit the strategy of game-playing agents~\cite{bhatt2018exploringhearthstone}. The strategy of using custom Genetic Algorithm based solvers for jigsaw puzzles was first introduced in~\cite{Toyama2002AssemblyOP} for binary image puzzles. The large set of selection items presents a unique challenge in choosing the solution optimization approach. The closest example of an evolutionary approach applied to effectively solve puzzles, rather than a greedy approach, is demonstrated in~\cite{SolomonPuzzleEvolution} for very large jigsaw puzzles, and was later extended to multiple puzzles by \cite{MultipleJigsaw}. Specifically, in~\cite{SolomonPuzzleEvolution} the authors propose an approach that iteratively improves an initial population via the means of natural selection (mutation, selection, crossover) to find more accurate solutions (i.e., the correct image) with a novel puzzle representation and a custom crossover approach. 
The puzzles considered in our work have a similarly large pool of items to (pre)select and assign, which poses a comparable computational hurdle. The video game puzzles we consider pose the added challenge of non-uniqueness of a solution, as well as the absence of a clearly defined benchmark solution to compare with. \section{Example Problem: Party Building Puzzles}\label{PBP} To illustrate our proposed Graph-Based Genetic Algorithm approach to finding a set of near-optimal solutions, we first describe an example problem to introduce all the necessary terminology, followed by a constructive search based method to build a single solution, which becomes the basis for the evolutionary computations. Party building fantasy combat games have been rising in popularity recently, e.g. Idle Champions, Firestone Idle RPG, and others (Fig.~\ref{fig:image2}). In these games, a player is tasked with selecting and filling out a combat formation with different heroes. Each combatant has different properties, making them suitable for different positions in the combat formation as well as making them strategically preferable to combine with other fighters. The genre of party-based, fantasy combat games is a suitable candidate for demonstrating our approach. \begin{figure}[t] \centering \begin{subfigure}[t]{0.225\textwidth} \centering \includegraphics[height=0.8in]{images/IdleFirestone.jpg} \label{fig:subim1} \caption{Idle Firestone} \end{subfigure} ~ \begin{subfigure}[t]{0.225\textwidth} \centering \includegraphics[height=0.8in]{images/Idle-Champions-Cheats-Tips.jpg} \label{fig:subim2} \caption{Idle Champions} \end{subfigure} \caption{These figures illustrate fantasy parties arranged in typical combat formations. For example, melee fighters are positioned in the front and ranged fighters or support characters are in the back. 
Even if the core gameplay differs, these concepts are shared, suggesting the wide applicability of formation-building puzzles within this game genre.} \label{fig:image2} \end{figure} The game we are analyzing is called ``Party Building Puzzle'', or PBP for short. In a PBP, the player has to select a number of items (heroes), and then assign those items to the combat formation such that the heroes in the formation meet the puzzle requirements. An important thing to note is that the puzzle requirements are defined by the game designers, which differentiates this type of formation filling from more general strategy-based games, hence the use of the term ``puzzle''. Each hero has a number of properties such as race, religion, nation, total level, gear quality level, price, and how rare it is to obtain that particular hero. Puzzle requirements are constraints the player needs to honor when building their formation, and are usually related to the hero properties. For example, a puzzle might require the player to use at least 3 orcs in their party, from at least 4 different nations, while another puzzle requires that the average level of the party be at least 60. Heroes additionally contribute to something called party synergy, a graph-based metric of how well the party works together, which will be described in more detail in the next section. Requiring players to form a party of a certain synergy level creates a complex challenge in which the player must not only choose the right heroes, but also arrange them appropriately within the party's formation. Currently the game possesses a number of unique cards on the order of $10^{5}$, and this number keeps expanding as the designers are constantly releasing new cards. The number of possible heroes is near limitless, which creates a combinatorially complex search space over which a solver must iterate to find solutions that satisfy the requirements. 
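To make the setting concrete, a hero and a formation can be represented as follows. This is a minimal Python sketch; the field names are illustrative and not the game's actual data model:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Hero:
    """One candidate item; the trait fields here are illustrative."""
    name: str
    race: str
    religion: str
    nation: str
    level: int
    gear_level: int
    price: int
    rarity: float

@dataclass
class Formation:
    """A puzzle assigns N heroes to the nodes of a formation graph."""
    nodes: list                                     # node ids, e.g. list(range(10))
    edges: list                                     # (i, j) pairs of adjacent positions
    assignment: dict = field(default_factory=dict)  # node id -> Hero
```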
In that scenario, the task of constructing all possible solutions for a given puzzle becomes infeasible. Instead, we use the optimization heuristics of an evolutionary strategy to efficiently search the space of possible solutions, while optimizing for what is defined as the ``cheapest'' priced option. \section{Puzzle Solver Part I: Formation Building and Playtesting}\label{sec:formationplaytesting} \subsection{Problem Formulation} The puzzles we are considering, with both linear and quadratic constraints, can be formulated as an asymmetric quadratic assignment problem (QAP), which consists of selecting $N$ items out of a pool of $M$, where $N \ll M$, and placing them into a graph of locations in an optimal way such that certain conditions are satisfied. Each item, i.e. hero, is characterized by a list of properties, so-called traits. Each item can be selected at most once, and its selection and corresponding placement are guided by those traits. In our example of the Party Building Puzzle, $N=10$ is the number of nodes in the puzzle graph to fill out and $M=10\,000$ is the number of possible items to choose from without repetition. Linear constraints are referred to as simple, and can be either continuous or discrete. For continuous properties, constraints are formulated as `the accumulated value of the property $P$ over all positions has to be greater than or equal to some value $a$': $\sum_{i=1}^N x^P_i\geq a$. For example, \textit{minimum team level is 84}. Discrete property constraints require that `not more than $b$ items with a property $P$ are allowed in the placement graph': $ \sum_{i=1}^N I(x_i^P)\leq b,$ where $I$ is a boolean indicator function defining the presence or absence of a certain trait in the feature vector of each item. For example, \textit{at most 8 elves are allowed} or \textit{at most 4 humans and at least 2 elves are allowed}. In our PBP, the number of properties per hero does not exceed $P=8$. 
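Both kinds of linear constraints can be checked directly. A minimal sketch, assuming each hero is represented as a dictionary of traits:

```python
def continuous_ok(heroes, prop, a):
    """Continuous constraint: sum_i x_i^P >= a (e.g. minimum team level)."""
    return sum(h[prop] for h in heroes) >= a

def discrete_count(heroes, prop, value):
    """Indicator count sum_i I(x_i^P) for a discrete trait value."""
    return sum(1 for h in heroes if h[prop] == value)

# Illustrative two-hero party (a real party has N = 10 heroes).
party = [{"level": 90, "race": "elf"}, {"level": 80, "race": "human"}]
assert continuous_ok(party, "level", 84)          # 90 + 80 = 170 >= 84
assert discrete_count(party, "race", "elf") <= 8  # 'at most 8 elves'
```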
In addition, we have a nonlinear complex constraint, the so-called `synergy', which depends on both the item placement and the compatibility between adjacent items in the graph. Hence, its value is not defined until all the items are selected and assigned. Synergy is present within each puzzle, but its required value can vary from 0 to 1. \begin{equation} \label{complexconstraints} \sum_{j=1, j\neq i}^N \sum_{i=1}^N \sum_{p=1}^Pw(x_j^p, x_i^p) \geq Synergy, \end{equation} where $w(x_j^p, x_i^p)$ is the weight of the edge in the graph between two neighbouring nodes $i$ and $j$, and the summations run over each edge and each trait. The weight function is specific to the puzzle and is a custom-defined, non-linear relation. We formulate a quadratic optimization problem to maximize the synergy constraint, following the Koopmans and Beckmann QAP formulation~\cite{QAPoriginal}, over all possible selection and assignment permutations. Our solution must search for which items to use, out of the available ones, as well as which position to place the items in. In addition to the classical QAP, we need to pre-select $N$ out of $M$ possible items before assigning them to locations, which makes the assignment asymmetric. The asymmetric nature of the placement presents unique challenges, since the number of candidate items $M$ can be on the order of $10^{5}$, while the graph consists of at most a few dozen positions. So, in addition to the optimal placement, the selection of items is part of the optimization. \subsection{Our Proposed Solution}\label{sec:approach} In this section we first describe the design to automatically build a deck by finding a feasible solution to a challenge, i.e. one that honors all requirements specific to that puzzle while maximizing the synergy. Using this design, we further employ an evolutionary approach to finding a set of optimal solutions, as described in the following section. 
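The synergy of Eq.~\ref{complexconstraints} can be evaluated by summing edge weights over adjacent, already-assigned positions. A minimal sketch, with a toy placeholder standing in for the game-specific weight function (which in the game also aggregates over the $P$ traits):

```python
def synergy(assignment, edges, weight):
    """Sum of edge weights over adjacent assigned positions (cf. Eq. (1)).

    assignment: node id -> hero (dict of traits); may be partial
    edges:      iterable of (i, j) pairs of adjacent nodes
    weight:     game-specific function w(hero_i, hero_j) -> float
    """
    total = 0.0
    for i, j in edges:
        if i in assignment and j in assignment:
            total += weight(assignment[i], assignment[j])
    return total

# Toy placeholder weight: reward adjacent heroes sharing a race.
def toy_weight(h1, h2):
    return 0.1 if h1["race"] == h2["race"] else 0.0
```

Because `assignment` may be partial, the same routine also serves to score intermediate states during construction.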
It is known that QAPs are difficult to solve and are among the hardest NP-complete problems~\cite{puzzlenpcomplete}, especially given the size of the combinatorial search space. Any algorithm that guarantees an exact solution (given that it exists) has to consider every item (hero) and every combination, and thus has exponential computational time. Heuristic-driven approaches are usually employed to either linearize the problem or find approximate solutions~\cite{HandbookMetaheruistics}. Potentially, training a Reinforcement Learning agent to solve a puzzle could lead to fast performance. However, with dynamically changing sets of requirements as well as a large pool of candidate items, it could lead to overfitting, would result in data inefficiency during training, and might not be practical. There are additional complications in defining an adaptable state representation, as well as catastrophic forgetting (both during training and/or during re-training with new data). This is due to the fact that, though there is a limited, constant set of well-defined requirements that may appear in a given puzzle, each puzzle has a unique combination of completion requirements. Compare this to Sudoku solvers~\cite{sudoku2018}, where similar graph-type constraints are defined, but the rules of the game are always the same even when the initial state of the board is different for each puzzle. In addition, placing the numbers 1 to 9 is not comparable to selecting 10 items out of tens of thousands and then placing them optimally. Another set of approaches commonly applied to similar problems are classical search methods like A-star or Monte-Carlo tree search, which are based on heuristic-driven or probability-based backtracking~\cite{brownie2012MTCSreview}. However, for problems with quadratic constraints, like the graph-dependent synergy value that we are trying to maximize, those approaches have limited application. 
In particular, deriving admissible heuristics for quadratic optimization is a challenge in itself. Additionally, the size of the search space we are dealing with makes those approaches impractical. As a reminder, our goal is to select $N$ items, each of which has $P$ features, from a list of $M$ items, where $N \ll M$. These items must be selected and assigned to positions within a graph so as to satisfy a list of requirements and avoid repetitions. The solution strategy we chose is a constructive, randomized, heuristic search, as described in Algorithm~\ref{algo:constructsolution}. To avoid deterministic behaviour, we first define a random traversal order in which to visit each node in the graph. Then, for any unvisited node, we first filter out all items from the list of candidates following the constraint requirements, such that any item chosen from the filtered list is valid. Once the list is filtered, we search through it to find an item that maximizes the synergy with the neighbouring nodes that are already filled. The random path traversal ensures diversity, so the process can be repeated until the synergy requirement is satisfied. \subsubsection{Filter the candidate pool} Prior to selecting the synergy-optimizing item at each position, at each step of the algorithm we apply heuristics specific to the problem and the set of requirements in order to filter out all candidate items that are not eligible for a valid assignment. This look-ahead approach allows us to do forward checking between the current and future candidates; if at any point the candidate pool is empty, it gives an early signal to start over. This step is applied sequentially $N$ times, for each linear constraint at each position, until every node in the graph is visited. This heuristic rule is applied for both discrete and continuous feature constraints. 
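As a concrete illustration, a minimal sketch of this look-ahead filtering for a single `at least $a$ items with property $P$' requirement (the quantities $l$, $K$, and $k$ follow the notation of the filtering rule):

```python
def lookahead_filter(pool, assigned, has_prop, a, n_nodes):
    """Keep only candidates that can still lead to a feasible fill.

    pool:      remaining candidate heroes
    assigned:  heroes already placed (L of N positions)
    has_prop:  predicate testing property P
    a:         'at least a items with property P' requirement
    n_nodes:   N, the total number of positions in the graph
    """
    l = sum(1 for h in assigned if has_prop(h))  # items with P already placed
    if l >= a:
        return pool                              # constraint already satisfied
    K = n_nodes - len(assigned)                  # positions still to fill
    k = a - l                                    # items with P still needed
    if K > k:
        return pool                              # slack remains, no action
    # K == k: every remaining slot must carry property P
    return [h for h in pool if has_prop(h)]
```

An `at most $b$' constraint is handled symmetrically, filtering out items that carry the property once the cap is reached.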
Consider a given step of the solution-building process when $L$ positions out of $N$ are already assigned (or, equivalently, visited), $0\leq L<N$, and at least $a$ items of a specific property $P$ are required: $\sum_{i=1}^N I(x^P_i) \geq a$. First, we compute the intermediate value $l$ of that constraint using the assigned items: $l = \sum_{i=1}^L I(x^P_i)$. Then, $l \geq a$ means that the constraint is already satisfied and no action is taken. Alternatively, if $l < a$, we have $K=N-L$ nodes yet to be visited, out of which the remaining $k=a-l$ items must have property $P$. Then, if $K>k$, again no action is taken. Else, if $K=k$, we filter out all items in our candidate pool that do not have property $P$, forcing subsequent iterations to select items with property $P$ and thus honor the requirement. This process is repeated for each requirement. Note that since at each iteration only one item is selected to fill a given node, the repeated application of these heuristics guarantees that $K$ is never less than $k$, and at the end all requirements are satisfied. \subsubsection{Select the best matching item} Once the item candidate pool is filtered, any randomly selected item will satisfy all the linear constraints. At this step the goal is to select the best matching candidate in the partially filled graph, i.e. the one that maximizes the complex constraint. The complex constraint is computed for the position to be filled with respect to its immediate neighbors in the partially filled graph. For example, if node $E$ has to be filled as shown in Fig.~\ref{graphBestMatching}, it is only influenced by nodes $A$ and $C$. For an empty graph, a random item is picked. \begin{figure}[b] \centerline{\includegraphics[scale=0.4]{images/GraphBestMatchNeighbours-7.jpg}} \caption{Graph traversal for searching for the best matching item based on synergy with neighbors. 
The order of traversal is alphabetical, so node $E$ is influenced only by nodes $A$ and $C$, since they are the only neighbours already visited.} \label{graphBestMatching} \end{figure} Of note is the fact that this intermediate synergy value at any given step does not guarantee that the final solution will comply with the complex requirements, since these are quadratic requirements. If they fail to be satisfied, the algorithm starts over with a new randomly selected traversal path. Fig.~\ref{fig:samplesolutions} demonstrates two sample solutions which satisfy all linear constraints, which concern only the selection of the items, but only the right one satisfies the quadratic constraint (both selection and optimal arrangement). Since the required synergy is usually hard to obtain, and an unguided search would require a number of attempts that makes the problem intractable, our proposed guided, randomized search increases the chances of obtaining feasible solutions in only a few iterations. Fig.~\ref{fig:chemistry_unguided} shows the results of running the algorithm on a given set of linear requirements with and without the heuristics guiding it toward synergy maximization. In both cases the same constructive approach is used to build a solution, so linear constraints are satisfied through the candidate filtering; the only difference is that in the latter case we do not select the best matching candidate. Out of 5000 independent attempts, the maximum synergy value reached without heuristic guidance was 0.35. This is poor performance, as required in-game values are typically above 0.7--0.8, demonstrating the value of the heuristic guidance. Solution construction is summarized in Algorithm~\ref{algo:constructsolution}. 
\begin{algorithm}[h] \SetAlgoLined \textbf{Initialize}: random graph traversal path\; \While{synergy is not satisfied}{ \For{each unvisited node in the graph}{ Look ahead: filter the list of candidate items\; \For{all linear requirements}{ Apply the heuristic rule\; } Select the best matching candidate (synergy-wise)\; } Compute synergy, Eq.~\ref{complexconstraints}\; \eIf{synergy requirement is satisfied}{ \KwRet{the solution: selected items $x$, permutation $\sigma$}\; }{ Pick a new random path and iterate until a solution is obtained\; } } \caption{Guided Randomized Heuristic Search to Construct a Solution} \label{algo:constructsolution} \end{algorithm} \begin{figure}[h] \centering \begin{subfigure}{0.235\textwidth} \includegraphics[height=1in]{images/BadSolution.png} \end{subfigure} \begin{subfigure}{0.235\textwidth} \includegraphics[height=1in]{images/GoodSolution.png} \end{subfigure} \caption{Sample solutions to a puzzle with the requirements `At least 7 Goblins and at least 2 races.'. In both cases the same heroes are used. However, the arrangements are different, which results in a higher synergy (edges) for the solution on the right, as party members are placed more optimally.} \label{fig:samplesolutions} \end{figure} \begin{figure}[h] \centerline{\includegraphics[width=0.7\linewidth]{images/SynergyHistogramsComparative.jpg}} \caption{Comparison of Synergy values normalized to [0,1] for an independent set of 5000 attempts for a given challenge with and without heuristics guidance. Vertical axes are frequencies as percentages over all the samples. As can be seen, without the heuristics guidance to continuously select the best matching items, the synergy value never reaches more than 0.3. 
Results are for a sample challenge.} \label{fig:chemistry_unguided} \end{figure} The computational complexity of the algorithm is ${O}(kNM)= {O}(NM)$ per iteration in the worst case, where $k$ is the fixed, finite number of linear requirements, on the order of $1$ to $6$; $N$ is $10$, the number of nodes in the graph; and $M$ is the size of the candidate pool, on the order of $10^4$. The number of iterations is fixed, typically to a value of $10$ or less for the purposes of our experiments. In the best case, however, the linear requirements lead to a significant reduction in the size of the candidate pool, such that it approaches the size of the graph. In these cases, the complexity approximately reduces to ${O}(N^2)$ per iteration. It is important to note that exhaustive search is intractable for this type of problem, since the number of all possible combinations of (a) selecting 10 items out of 10\,000 and (b) arranging those 10 items into a specific order is roughly of the order of $10^{39}$. Depending on the requirements of the puzzles, there could be a few hundred thousand feasible solutions among those combinations. At the same time, human players are able to find solutions without extensively covering the entire search space, by using the limited number of items available to them and obtaining a few missing ones, as well as by taking advantage of their intuitive knowledge of the game mechanics and item properties. Note that the heuristics used in the algorithm are inspired by how human players act: the way they select items and use in-game tools to filter items by certain traits, and how they use their intuition to arrange the items to increase synergy. However, the task of a game designer, to find not just a solution but the `best' solution, requires extensive coverage of the entire search space, not just of the few items available to each player (which could of course be different for each player). 
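The quoted size of the raw search space can be verified directly: choosing 10 of $10\,000$ items and then ordering them gives $\binom{10000}{10}\cdot 10!$ arrangements.

```python
import math

# Selecting N = 10 of M = 10,000 items and then ordering them:
# C(M, N) * N! equals the number of ordered arrangements P(M, N).
M, N = 10_000, 10
arrangements = math.comb(M, N) * math.factorial(N)
assert arrangements == math.perm(M, N)
print(f"{arrangements:.3e}")  # close to 1e40, i.e. on the order of 10^39
```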
\section{Puzzle Solver Part 2: GA-Based Solution Optimizer} \label{sec:optimization} The solution space of the type of puzzles described often consists of multiple solutions, each of which is valid, honoring the constraints; however, some solutions are more desirable than others, e.g. having a low price or using items that are easier to obtain. Knowing the optimal solution allows content creators to gauge the desired reward value and puzzle difficulty accordingly. Optimizing for a solution in the class of constraint satisfaction problems considered poses unique challenges. Since they form discrete combinatorial problems, gradient-based optimization methods are not applicable. Various population-based optimization methods have been applied to solving similar types of puzzles. In many cases, memetic variants of genetic algorithms, i.e. evolutionary approaches hybridized with constraint satisfaction tools, are used successfully for combinatorial optimization problems~\cite{HandbookMetaheruistics}. Similar ideas were also used in a hybrid genetic algorithm applied to the Light-up puzzle~\cite{LightupPuzzlesHybridGA2007}. In those approaches, the genetic algorithms always work with feasible solutions, and if individuals become infeasible after crossover and mutation operations, they are ``healed'' to restore feasibility. In this work we propose a customized hybrid GA for finding the optimal solution in terms of an objective that is not part of the constraints, by searching the solution space. 
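At a high level, the hybrid loop alternates standard GA operators with a repair step. A minimal Python skeleton, with the operator details abstracted into caller-supplied functions; all names are illustrative, and parent selection is simplified here to uniform random rather than a rank-based rule:

```python
import random

def hybrid_ga(build_solution, crossover, mutate, heal, fitness,
              pop_size=50, offspring_size=100, generations=100):
    """Skeleton of a hybrid GA: evolve feasible solutions only,
    repairing ('healing') any offspring broken by crossover/mutation."""
    population = [build_solution() for _ in range(pop_size)]
    for _ in range(generations):
        offspring = []
        while len(offspring) < offspring_size:
            p1, p2 = random.sample(population, 2)  # simplified parent selection
            child = heal(mutate(crossover(p1, p2)))  # restore feasibility
            if child is not None:                  # heal may reject a child
                offspring.append(child)
        # survivor selection: keep the cheapest feasible solutions
        population = sorted(population + offspring, key=fitness)[:pop_size]
    return min(population, key=fitness)
```

The constructive search of Algorithm~\ref{algo:constructsolution} plays the role of both `build_solution` and the repair step.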
\subsection{Graph-Based Hybrid Genetic Algorithm for Solution Optimization} There are various ways in which we can set up the architecture of the genetic algorithm: how to represent the solution space, whether to allow vertical or horizontal crossover, which of multiple ways parents are selected for crossover based on their fitness values, which mutation rate, mutation strategy, and selection strategy to use, and the settings for initial population size, offspring size, and crossover and mutation rates. We are using a Graph-Based Genetic Algorithm (GB-GA) in which the relative locations of the nodes to be filled are important, so reshuffling of genes is not allowed~\cite{Ashlock1999GraphBased}. The algorithm steps are described below, highlighting the specifics of our implementation. \subsubsection{Representation and Initialization} In our terminology, a chromosome is an individual solution to the puzzle, which consists of items selected from the candidate pool and assigned to the formation. The same items assigned in a different order are considered two separate solutions. To start off the genetic algorithm, Algorithm~\ref{algo:constructsolution} is used to randomly generate a number of independent feasible candidate solutions, i.e. chromosomes. \subsubsection{Crossover and Mutation} After examining several variants of crossover, we have chosen a uniform crossover operation such that each unoccupied position in the offspring solution (chromosome) is assigned an item (gene) from one of the parent solutions with probability $p=1/2$, while enforcing that no item may be repeated in the offspring solution. The selection of parents for the crossover is performed by a rank-based selection rule~\cite{TATE1995}. The mutation rate is set at 20\%, meaning that each gene (item) has a 20\% chance of being replaced, resulting in an average of $0.2N$ items removed from the graph during a given mutation pass. 
In detail, a solution is traversed in a random order and at every step a probabilistic decision is made whether to remove a given item both from the solution and from the remainder of the candidate pool. The resulting partial solution is then populated again, with the missing items randomly filled with new items (heroes). \subsubsection{Hybrid Approach to Find Feasible Solutions} Note that solutions produced by recombination and mutation are not necessarily feasible, since those operations are not guaranteed to satisfy the challenge constraints. As an extra step before accepting them, we repair those solutions by a process we call ``healing'' to ensure that the final offspring solution is feasible, or valid~\cite{LightupPuzzlesHybridGA2007, HybridGA2}. More specifically, first, those few items in the offspring that were replaced during mutation are removed. Then, the same Algorithm~\ref{algo:constructsolution} we used to build a new solution is used to fill the missing items of the partial solution. If no such replacements are possible, the algorithm rejects that solution in favor of a new randomly generated one. \subsubsection{Selection and Refreshment} The selection process favors solutions with better fitness and diversity. The new generation of solutions is selected from the offspring and the previous population. If the selected solutions comprise a diverse enough population, the algorithm moves to the next generation by selecting the best-fit individuals. Otherwise, if diversity drops below a threshold, a fixed number of new randomly generated individuals are added to the offspring pool. \subsection{Maintaining Diversity} One of the key reasons the algorithm encounters premature stagnation, or trapping, is that the population loses diversity among its individual solutions. If that happens, recombinations and local perturbations through mutations are not able to help the individual solutions escape local minima~\cite{Whitley1995}. 
We have chosen phenotype diversity as a representative measure relating to the optimal fitness of the solution. We explicitly track diversity within the population at every generation, and if it falls below a certain user-defined threshold (measured by a normalized coefficient of variation), we randomly replace a third of the existing solutions with new individuals\footnote{The diversity measure and threshold, as well as the percentage of solutions to be replaced, are engineering hyperparameters that have been tested to be efficient on average.}. While maintaining diversity is essential to avoid algorithm stagnation, the diversity measure is still a proxy, as it does not capture the full variation in the solution space. In addition, adding new solutions that are far from optimal reduces the rate of convergence. Alternatively, one can introduce an adaptive mutation rate, which increases the number of mutations when the diversity of the population stagnates. These approaches, however, slow down the generation process. Additionally, mutation does not always lead out of a local minimum. One of the most successful solutions to this problem is the ``multi-population'' or ``multi-island'' model~\cite{Leito2015IslandMF, Tomassini2005, Whitley1995}, which allows a unique search trajectory to be followed by each island, resulting in a more efficient exploration of the search space. An additional benefit is that it is inherently parallelizable and can be implemented using distributed computing. In our current work we have designed a multi-island approach with migration to help the genetic process maintain diversity. The overall population is partitioned into sub-populations based on their similarity, and each sub-population is assigned to an island. After that, each island evolves independently, and then, after a fixed number of sub-generations or epochs, migration allows the islands to interact. 
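The diversity guard and the similarity-based migration step can be sketched as follows; the threshold and replacement fraction are the tunable hyperparameters mentioned above, and fitness serves as the similarity measure:

```python
import statistics

def diversity(fitnesses):
    """Normalized coefficient of variation of the population fitness."""
    if len(fitnesses) < 2:
        return 0.0
    mean = statistics.mean(fitnesses)
    if mean == 0:
        return 0.0
    return statistics.stdev(fitnesses) / abs(mean)

def refresh_if_stagnant(population, fitness, build_solution, threshold=0.05):
    """Replace a third of the population with fresh random solutions
    whenever phenotype diversity drops below the threshold."""
    if diversity([fitness(s) for s in population]) < threshold:
        n_new = len(population) // 3
        population = sorted(population, key=fitness)[:len(population) - n_new]
        population += [build_solution() for _ in range(n_new)]
    return population

def migrate(islands, fitness):
    """Fully connected migration: pool all islands, sort by fitness,
    and regroup so that the most similar individuals share an island."""
    merged = sorted((s for isl in islands for s in isl), key=fitness)
    size = len(merged) // len(islands)
    return [merged[i * size:(i + 1) * size] for i in range(len(islands))]
```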
The traditional island model requires additional parameters such as the number of islands, the population size of each island, the migration frequency, migration rate, migration topology, and migration policy. The migration strategy between the islands directly affects performance. It also impacts the optimization of the algorithm by balancing exploration and exploitation and indirectly supporting diversity. As a result, each island explores a separate optimization path, resulting in broader coverage of the search space. The exact mechanism of migration between islands used in this paper follows a fully connected migration pattern: after a fixed number of sub-generations within islands (10 sub-generations), all solutions across all islands are combined, sorted by their fitness similarity, and then divided equally amongst the islands such that the most similar solutions are grouped together. There are multiple choices of migration strategy~\cite{Li2015HistoryBasedTS}, since islands can be clustered together using various similarity measures, either according to their fitness values, or through other measures of diversity like entropy~\cite{Arellano-Verdejo2017}, or even dynamically through spectral clustering based on the pair-wise similarities between individuals~\cite{Meng2017}. In general, having more densely connected islands gives a more accurate lower bound, but is more computationally expensive. In the current work we have chosen the fully connected island model, where migration of individuals is not constrained, and we use fitness as our similarity measure. \subsection{Experimental Setup} \begin{figure*}[t!] 
\centering \subfloat {\includegraphics[width=0.31\linewidth]{images/MedianComparison_Type1s.png}} \subfloat {\includegraphics[width=0.31\linewidth]{images/MedianComparison_Type2s.png}} \subfloat {\includegraphics[width=0.31\linewidth]{images/MedianComparison_Type3s.png}} \\ \subfloat {\includegraphics[width=0.31\linewidth]{images/DiversityComparisonType1s.png}} \subfloat {\includegraphics[width=0.31\linewidth]{images/DiversityComparisonType2s.png}} \subfloat {\includegraphics[width=0.31\linewidth]{images/DiversityComparisonType3s.png}} \caption{\textbf{Top row}: Convergence curves of best-fit solutions for challenges Type1, Type2 and Type3, comparing the two approaches while maintaining diversity in both cases, averaged over 50 independent runs. The solid lines indicate the best fitness value across all the runs, the dotted lines are median values at each generation, and shaded regions cover each run. The solid black line represents the optimal solution. \textbf{Bottom row}: Diversity at each generation; solid lines: minimum, dotted lines: median.} \label{fig:convergence} \end{figure*} To demonstrate the performance of our approach, we defined 3 types of puzzles that cover different requirements and sizes of the search space. Certain requirements increase the difficulty of achieving a specific synergy threshold when they create a large search space. Opposed to these are constraints that greatly reduce the number of candidate items, resulting in quicker convergence, since synergy is easier to satisfy with more similar items. 
Our representative puzzles of each type are: \begin{itemize} \setlength\itemsep{0.05em} \item{\textbf{Type 1}} Min Synergy 0.8, Min Team Level 84 \item{\textbf{Type 2}} Min Synergy 0.9, Min Team Level 75, at least 8 religions, at least 6 hometowns, at most 2 heroes with the same religion \item{\textbf{Type 3}} Min Synergy 0.8, Min Team Level 84, Min of 8 elves \end{itemize} For each of the selected challenges the optimal solutions are provided by the game designers as benchmarks. The parameters of the genetic approach were selected through trial and error to perform reasonably well across various puzzles, as summarized in Table~\ref{TableParameters}. Our goal was to demonstrate the advantage of the multi-island approach in maintaining diversity, in comparison with a single-population approach, thus leading to more reliable results, while the actual model parameters can be tuned for the problem of interest. For example, the authors in~\cite{RollingHorizon2020} present an N-Tuple Bandit Evolutionary approach to automatically optimize hyperparameters. \begin{table}[t] \begin{center} \caption{GA parameters for ``vanilla'' and ``multi-island''} \begin{tabular}{lll} \hline Parameter & vanilla GA & multi-island GA \\ \hline mutation rate & $0.2N$ & $0.2N$ \\ crossover rate & 0.5 & 0.5 \\ pop/offspring size & 50/100 & 10/20 per island \\ migration type & N/A & similarity based\\ similarity measure & N/A & fitness \\ migration frequency & N/A & 10 \\ islands epochs & N/A & 10 \\ generations & 100 & 100 \\ \hline \end{tabular} \label{TableParameters} \end{center} \end{table} \subsection{Numerical Results} The experiments were conducted for the three types of puzzles, comparing the ``vanilla'' hybrid genetic algorithm with the ``multi-island'' approach. 
Fig.~\ref{fig:convergence} compares the behaviour of the two approaches across 25 independent restarts with a fixed computational budget for each scenario, so that average performance can be traced. Performance is defined by the rate of convergence and the diversity of the solution population. During the early generations, the ``vanilla'' genetic approach slightly outperforms the multi-island one, since it has more individuals to choose from. Averaged over time, however, the multi-island approach better approximates the lower bound with less variance, as it manages to escape local minima through interaction between the islands. We conclude that overall the multi-island approach consistently outperforms the standard GA under the same time budget, while maintaining at least twice the diversity. The effect of the initial generations fades out quickly for both approaches. The steps on the convergence curves reflect the reshuffling between the islands. \subsection{Notes on Algorithm Feasibility} As with any genetic algorithm for combinatorial non-convex optimization, there is no way to prove that any of the local minima is actually the global minimum. The approach we took is to show that even though the algorithm is random in nature (both because of the initial population and the recombinations), running it repeatedly for the same puzzle shows that the initial population, as well as a series of random crossovers/mutations, all lead to the same minimal fitness value, on average. Moreover, by maintaining the diversity of the population through a multi-island approach, we can avoid algorithm stagnation and keep exploring the solution space until a satisfying minimal solution is found. In theory, the actual global minimum is not known for these types of problems. So we are effectively comparing our genetic approach to the alternative of finding and enumerating all possible solutions and selecting the `best' one. 
That could mean finding a few hundred thousand solutions. Even though each of them takes about 10 seconds on average to find on a standard CPU machine, overall it would take close to $300$ hours ($10 \times 100\,000 / 3600 \approx 278$), while with the genetic algorithm we can limit that time to about $10$ minutes, by reducing the number of independent solutions we need to find to only those that initialize the population, while recombinations are relatively cheap to compute. The number of puzzles generated by designers then becomes an important factor, as they are constantly generating new puzzles (on average $20$ per day) and need the solver to run in actionable time to gain knowledge of the attributes of the solution space. \section{Conclusion and Future Work}\label{sec:conclusionfuture} Our approach is motivated by the recurring problem designers face of improving and optimizing the task of creating new quality puzzle variations. We demonstrate our use case on the Party Building Puzzle game, where players collect heroes and later select from their collection which ones to use and how to arrange them in order to satisfy the puzzle constraints. Our approach optimizes for the minimum in-game resource value of the solution, to help designers evaluate and compare the puzzles they create. In addition, the proposed solution had to be capable of running in feasible time to allow designers to constantly iterate over their designs in just a few hours. In this work we have proposed an efficient constructive randomized search algorithm that builds a solution using heuristics specific to the puzzle's constraints, and we show that a hybrid, graph-based genetic approach allows us to find near-optimal solutions to the puzzles. One of the challenges of this type of combinatorial optimization is premature convergence to local optima. We have experimentally demonstrated how the multi-island approach with a randomized selection strategy allows us to reach a near-optimal solution. 
As mentioned above, instead of decoupling the constrained optimization problem into two separate ones (constraint satisfaction for a QAP and a combinatorial optimization to find the best performing solution), one could alternatively formulate the problem as a multi-objective optimization, where constraints such as synergy and ratings are optimized in combination with the fitness. A comparative study of the performance of these approaches is future work. Another avenue to explore is methods that can further explore the search space of solutions. In an effort to increase the diversity of potential solutions, and thus provide designers with more detailed insight into their own design, we plan to explore algorithms for illuminating search spaces, such as MAP-Elites~\cite{khalifa2018talakat, fontaine2019mapping}. MAP-Elites explores different dimensions of the search space, and in doing so also provides an alternative solution that is more robust against the local minima problem discussed above. We have demonstrated that the power of genetic algorithms can be exploited for NP-hard combinatorial optimization problems with large non-unique search spaces and in the presence of additional constraints. Our proposed framework, with the novel puzzle representation and custom-designed multi-island graph-based genetic approach, can be adapted to other problems with similar properties as long as the genetic operations are specified for the problems of interest. This method could also be further extended to solve puzzles starting from a partial state, since there is no dependency on a prior state. \section*{Acknowledgment} We would like to genuinely thank the anonymous reviewers for their valuable comments and suggestions, which helped us to improve the quality of the manuscript. \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:intro} Speech separation aims to separate each source signal from mixed speech. Many methods have been proposed to perform the separation either in the time-frequency domain or purely in the time domain, such as deep clustering \cite{dpcl}, the deep attractor network \cite{dan}, permutation invariant training (PIT) \cite{pit}, and the recent convolutional time-domain audio separation network (Conv-TasNet) \cite{tasnet}. Our work aims to generalize the time-domain Conv-TasNet to target speech extraction (TSE) tasks, because of its higher performance compared with most time-frequency domain methods \cite{tasnet_relate1, tasnet_relate2, tasnet_relate3, tasnet_relate4, tasnet_relate5, tasnet_relate6}. Compared with pure speech separation techniques, most TSE approaches require additional target speaker clues to guide the separation network to extract the speech of that speaker. Many adaptation methods have been proposed for TSE: SpeakerBeam-FE \cite{sbfe}, SpEx \cite{spex}, and time-domain SpeakerBeam (TD-SpeakerBeam) \cite{td_speakerbeam} employ a factorized adaptation layer, a concatenating adaptation layer, and a scaling adaptation layer, respectively, to adjust the internal behavior of the separation network. Although these adaptation methods show promising results, their exploitation of the target speaker clues is somewhat monotonous. Specifically, they average the speaker vectors over an adaptation or enrollment utterance to get a vector that provides a global bias for the target speaker, and then repeat the same vector for the adaptation of each frame of the mixed speech to drive the network towards extracting the target speech, no matter how the acoustic characteristics of the frame are related to the target speaker identity. 
Therefore, we argue that a more reasonable target speaker bias should be able to adjust itself dynamically according to the acoustic interaction between the mixture and the target speaker. In \cite{ms}, an attention mechanism between the adaptation utterances and the mixed speech was proposed to dynamically compute a frame-wise speaker bias as a weighted sum of target speaker embeddings. However, it was designed for time-frequency domain TSE tasks, and it is tricky to generalize such attention to time-domain TSE tasks, because the number of frames of the encoded representation of a waveform in the time domain (e.g., 3199) is usually much larger than in the time-frequency domain (e.g., 251). Attention performed over this many frames may result in a very sparse distribution of the softmax probabilities, in which only a few frames have large probabilities, while the others are close to zero. In this case, the weighted sum of target speaker vectors can no longer provide a sufficiently discriminative bias for extracting the target speech. Although the authors in \cite{tencent} applied the same attention mechanism to time-domain tasks, they only presented results with a pre-trained speaker embedder. In this study, we design a novel attention mechanism to exploit the interaction between the adaptation utterance and different mixture speech for time-domain TSE tasks. Different from \cite{ms}, our attention is performed on the global bias vector of the target speaker and the embedding matrix of the mixed speech, because we want to adjust the target speaker bias dynamically according to the acoustic interaction between the mixture and the target speaker at different time intervals. Rather than learning the context-dependent information as in \cite{ms}, we pay more attention to exploiting the dynamic target speaker identity clues between different mixtures and the target speaker adaptation utterance. 
Furthermore, as our attention is directly performed between a vector and a matrix, unlike traditional self-attention \cite{self-att} with linear projection weights, no additional network parameters are introduced in our ASA, and it has a lower computational cost and lower memory requirements than attention performed on two matrices. All experiments are performed on the publicly available spatialized reverberant WSJ0 2-mix dataset. Results show that the proposed method improves the target speech extraction performance effectively, and the single-channel performance is comparable to that in the multi-channel condition with IPD features \cite{ipd}. The rest of this paper is organized as follows. In Section \ref{sec:td-speakerbeam}, we introduce the architecture of our baseline. In Section \ref{sec:proposed}, we describe the principle of our proposed ASA. The experiments and results are presented in Section \ref{sec:exp_res}, and we conclude in Section \ref{sec:conclusion}. \section{Time-domain SpeakerBeam} \label{sec:td-speakerbeam} \begin{figure}[t] \centering \includegraphics[width=8.5cm]{td_speakerbeam.png} \caption{The block diagram of the TD-SpeakerBeam with IPD. ``SA'' represents scaling adaptation.} \label{fig:td_speakerbeam} \end{figure} TD-SpeakerBeam is a very effective target speech extraction approach that was recently proposed in \cite{td_speakerbeam}. The structure of TD-SpeakerBeam with the IPD concatenation block is shown in Fig.\ref{fig:td_speakerbeam}. It contains a 1-d convolution encoder, several convolution blocks, and a 1-d deconvolution decoder. Here $\mathbf{y}$, $\mathbf{\hat{x}}^s$ and $\mathbf{a}^s$ are the mixture waveform, the extracted target speech waveform, and the adaptation utterance of the target speaker, respectively. 
The whole TD-SpeakerBeam network follows a configuration similar to Conv-TasNet \cite{tasnet}, except for inserting a scaling adaptation (SA) layer \cite{adap} between the first and second convolution blocks to drive the network towards extracting the target speech. The output of the SA layer $\mathbf{Y}^{\rm s} \in \mathbb{R}^{N \times T}$ is obtained by simply performing an element-wise multiplication between the repeated target speaker embedding vectors $[\mathbf{e}^s, \mathbf{e}^s, ..., \mathbf{e}^s]_{N \times T}$ and the input $\mathbf{Y}$ as follows, \begin{equation} \mathbf{Y}^{\rm s} = [\mathbf{e}^s, \mathbf{e}^s, ..., \mathbf{e}^s]_{N \times T} \odot \mathbf{Y} \end{equation} where $\mathbf{e}^s \in \mathbb{R}^{N \times 1}$ is computed from a time-domain convolutional auxiliary network as shown in the right part of Fig.\ref{fig:td_speakerbeam}, $N$ is the dimension of the embedding vectors, and $T$ is the number of frames of the convolutional output. Furthermore, as shown in the left part of Fig.\ref{fig:td_speakerbeam}, TD-SpeakerBeam can also be extended to the multi-channel TSE task by combining the IPD features (processed with a 1-d convolutional layer, upsampling, and a convolution block) after the SA layer. The whole TD-SpeakerBeam network is trained jointly in an end-to-end, multi-task manner. The multi-task loss combines the scale-invariant signal-to-distortion ratio (SiSDR) \cite{sisdr} as the signal reconstruction loss and cross-entropy as the speaker identification loss. 
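For reference, the SiSDR term of the reconstruction loss can be computed as follows. This is a NumPy sketch of the standard scale-invariant SDR definition, not the authors' code; conventions such as mean-centering vary between implementations.

```python
import numpy as np


def si_sdr(estimate, target, eps=1e-8):
    """Scale-invariant signal-to-distortion ratio (dB) between 1-D signals."""
    estimate = estimate - estimate.mean()
    target = target - target.mean()
    # Project the estimate onto the target to obtain the scaled reference
    alpha = np.dot(estimate, target) / (np.dot(target, target) + eps)
    s_target = alpha * target
    e_noise = estimate - s_target
    return 10.0 * np.log10(np.dot(s_target, s_target)
                           / (np.dot(e_noise, e_noise) + eps))
```

Training then minimizes the negative of this quantity, which makes the reconstruction loss invariant to the overall scale of the estimated waveform.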
The overall loss function is defined as, \begin{equation}\small L(\Theta|\mathbf{y}, \mathbf{a}^s,\mathbf{x}^s,\mathbf{l}^s) = {\rm -SiSDR}(\mathbf{x}^s, \mathbf{\hat{x}}^s) + \alpha{\rm CE}(\mathbf{l}^s, \sigma(\mathbf{We}^s)) \label{eq:mtl} \end{equation} where $\Theta$ represents the model parameters, $\mathbf{x}^s$ is the target speech, $\mathbf{l}^s$ is a one-hot vector representing the target speaker identity, $\alpha$ is a scaling parameter, $\mathbf{W}$ is a weight matrix, and $\sigma(\cdot)$ is the softmax operation. More details can be found in \cite{td_speakerbeam}. \label{sec:prop} \begin{figure}[t] \centering \includegraphics[width=9cm]{proposed.png} \caption{The block diagram of (a) the proposed method based on TD-SpeakerBeam and (b) the structure of ASA. ``M'' represents mean pooling, ``R'' represents repeating, and ``N'' represents nearest upsampling.} \label{fig:asa} \end{figure} \section{Proposed method} \label{sec:proposed} \subsection{Overview} \label{ssec:overiew} The whole structure of the proposed method is shown in Fig.\ref{fig:asa} (a). All network architectures are the same as in Fig.\ref{fig:td_speakerbeam}, except for replacing the SA layer with our proposed attention-based scaling adaptation (ASA). As shown in Fig.\ref{fig:asa} (b), the ASA layer accepts the mixture embedding matrix $\mathbf{Y} \in \mathbb{R}^{N \times T}$ and the target speaker embedding vector $\mathbf{e}^s$ as inputs, and outputs the scaled mixture embedding matrix $\mathbf{Y}^s \in \mathbb{R}^{N \times T}$, each value of which is weighted by the dynamic target speaker bias $\mathbf{E}$. $N$ is the output dimension of the convolutional network, and $T$ is the number of frames of the convolutional output. As our operations are performed in the time domain, in order to get a high time-domain resolution, the convolutional kernel used to encode the waveform usually has a small size, which means the number of frames $T$ of the convolutional output is large. 
For example, assuming the input waveform has $32000$ samples, the 1-d convolutional kernel size is $20$, the stride is $10$, and the padding is 0, the number of frames of the convolutional output will be $3199$. When performing the softmax over these frames, the obtained probability vector usually has a very sparse distribution. Such a sparse distribution prevents the conventional attention mechanism from being used to effectively exploit the relationship between the mixture and the target speaker in time-domain tasks. \subsection{Attention-based scaling adaptation} \label{sec:asa} Considering that the speaker characteristics of several consecutive frames are almost unchanged, we treat each group of $M$ non-overlapping consecutive frames as an integral part, averaging each $M$ column vectors of the mixture embedding matrix $\mathbf{Y}$ to generate a speaker-dependent vector $\mathbf{u}_t$. After such a mean pooling operation, we obtain a speaker-dependent matrix $\mathbf{U}$, i.e., \begin{equation} \begin{split} \mathbf{u}_t &= \frac{1}{M}\sum_{j=1}^M{\mathbf{y}_{j+(t-1)M}}\\ \mathbf{U} &= [\mathbf{u}_1, \mathbf{u}_2,..., \mathbf{u}_{T_m}] \end{split} \end{equation} where $\mathbf{U} \in \mathbb{R}^{N \times T_m}$, $T_m = T / M$, $\mathbf{u}_t \in \mathbb{R}^{N \times 1}$ is the $t$-th column vector of $\mathbf{U}$, and $\mathbf{y}_j \in \mathbb{R}^{N \times 1}$ is the $j$-th column vector of $\mathbf{Y}$. By doing so, the new mixture embedding matrix $\mathbf{U}$ not only has a much smaller size than $\mathbf{Y}$, but also has a relatively compact acoustic representation reflecting the speaker-dependent information: each $\mathbf{u}_t$ summarizes the speaker information over several frames and represents a different dynamic bias toward the target speaker. Next, as shown in Fig.\ref{fig:asa} (b), we take $\mathbf{U}$ and the target speaker embedding vector $\mathbf{e}^s$ as inputs of the attention module. 
The specific procedure is as follows: \begin{align} \begin{split}\label{eq:1} \mathbf{d} ={}&(\mathbf{e}^s)^\mathbf{T} * \mathbf{U} \end{split}\\ \begin{split}\label{eq:2} w_t ={}&\frac{e^{d_t}}{\sum_{i=1}^{T_m}{e^{d_i}}} \end{split}\\ \mathbf{B} ={}& \mathbf{e}^s * \mathbf{w} \label{eq:3} \end{align} where $*$ denotes matrix multiplication, $(\cdot)^\mathbf{T}$ denotes the transpose, $\mathbf{d} \in \mathbb{R}^{1 \times T_m}$ is a similarity vector that measures the correlation between $\mathbf{e}^s$ and $\mathbf{u}_t$, $d_t$ is the $t$-th element of $\mathbf{d}$, $w_t$ is the softmax of $d_t$ over $t \in [1, T_m]$, $\mathbf{w} \in \mathbb{R}^{1 \times T_m}$ is the corresponding softmax vector, and $\mathbf{B} \in \mathbb{R}^{N \times T_m}$ is the attention-based target speaker embedding matrix. Each vector at frame $t$ in $\mathbf{B}$ can be regarded as a target speaker-dependent dynamic bias $\mathbf{b}_t$, because in it $\mathbf{e}^s$ is weighted by the similarity $w_t$, which exploits the dynamic target speaker interaction between the mixture and the adaptation utterance. In addition, as the dynamic bias $\mathbf{b}_t$ will take very small values if the mixture speech is dominated by the interferer, this bias may fail to provide a good target speaker guide to supervise the whole network training. In order to dynamically adjust the target speaker bias with discriminative speaker information, as shown in Fig.\ref{fig:asa} (b), we repeat the target speaker embedding vector $\mathbf{e}^s$ and add it to the dynamic bias $\mathbf{b}_t$ to get the output $\mathbf{O}$, i.e., \begin{equation} \begin{split} \mathbf{o}_t &= \mathbf{b}_t + \mathbf{e}^s\\ \mathbf{O} &= [\mathbf{o}_1, \mathbf{o}_2,..., \mathbf{o}_{T_m}] \end{split} \label{eq:add} \end{equation} where $\mathbf{O} \in \mathbb{R}^{N \times T_m}$ and $\mathbf{o}_t \in \mathbb{R}^{N \times 1}$ is the $t$-th vector of $\mathbf{O}$ at frame $t$. 
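Combining the mean pooling, the attention and additive-bias steps above, and the nearest upsampling and final element-wise scaling that complete the layer, the whole parameter-free ASA operation can be sketched in NumPy as follows. This is a shape-level sketch assuming $T$ is divisible by $M$, not the authors' implementation.

```python
import numpy as np


def asa(Y, e_s, M):
    """Attention-based scaling adaptation, parameter-free sketch.

    Y   : (N, T) mixture embedding matrix, with T divisible by M
    e_s : (N,)   target speaker embedding vector
    Returns the scaled mixture embedding of shape (N, T).
    """
    N, T = Y.shape
    Tm = T // M
    # Mean pooling: average each group of M consecutive frames -> U (N, Tm)
    U = Y.reshape(N, Tm, M).mean(axis=2)
    # Similarity between the speaker vector and each pooled frame
    d = e_s @ U                              # (Tm,)
    w = np.exp(d - d.max())
    w = w / w.sum()                          # softmax over the Tm frames
    # Dynamic bias plus the repeated global bias
    B = np.outer(e_s, w)                     # (N, Tm)
    O = B + e_s[:, None]                     # (N, Tm)
    # Nearest upsampling back to T frames, then element-wise scaling
    E = np.repeat(O, M, axis=1)              # (N, T)
    return Y * E
```

Note that the only operations are pooling, a dot product, a softmax, an outer product, and broadcasting, so no learnable weights are involved, matching the parameter-free property of ASA.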
This repeat operation in (\ref{eq:add}) ensures the target speaker bias won't be too weak, while the attention mechanism provides a dynamic and more discriminative bias. Combining these two techniques, the target speaker embedding $\mathbf{O}$ can drive the network towards extracting the target speech more efficiently. Next, we use the nearest upsampling algorithm to map $\mathbf{O}$ to the same dimension as the original mixture embedding matrix $\mathbf{Y}$ to get the final target speaker embedding matrix $\mathbf{E} \in \mathbb{R}^{N \times T}$. Due to the nearest upsampling, the consecutive $M$ vectors (frames) of $\mathbf{E}$ have the same target speaker characteristics, which is consistent with our intuition, namely, that the speaker characteristics won't change over a very short duration. Finally, an element-wise multiplication between $\mathbf{Y}$ and $\mathbf{E}$ is used to adapt the network towards the target speaker, i.e., \begin{equation} \mathbf{Y}^s = \mathbf{Y} \odot \mathbf{E} \end{equation} where $\mathbf{Y}^s \in \mathbb{R}^{N \times T}$ is the output of the proposed attention-based scaling adaptation. It is worth noting that, unlike traditional self-attention, which needs additional learnable weights to project the key, query and value into high-level representation spaces, our ASA is very simple: it is directly performed on $\mathbf{U}$ and $\mathbf{e}^s$, and no additional parameters are introduced. \section{Experiments and results} \label{sec:exp_res} \subsection{Dataset} \label{ssec:dataset} Our experiments are performed on the simulated spatialized WSJ0 2-mix corpus \cite{dataset}. All recordings are generated by convolving clean speech signals with room impulse responses simulated with the image method for reverberation times ranging from 0.2 s to 0.6 s \cite{td_speakerbeam}. We follow the same procedure as in \cite{xu} to generate adaptation utterances of the target speaker. 
The adaptation utterance is selected randomly and is different from the utterance in the mixture. The adaptation recordings used in our experiments are anechoic. The sizes of the training, validation, and test sets are 20k, 5k, and 3k utterances, respectively. All of the data are resampled to 8 kHz from the original 16 kHz sampling rate. \subsection{Configurations} \label{ssec:config} Our experiments are implemented based on the open-source software \cite{funcwj}. We use the same hyper-parameters as the baseline TD-SpeakerBeam in \cite{td_speakerbeam}. We set $M=20$ in the ASA process. All experiments are performed using the SiSDR loss only ($\alpha=0$ in Eq.(\ref{eq:mtl})), except when we mention the use of the multi-task loss (MTL), in which case we set $\alpha=0.5$ to balance the loss tradeoff between the SiSDR and cross-entropy. For the experiments with IPD combination, the IPD features are extracted using an STFT window of 32 ms and a hop size of 16 ms. All experiments are performed on the single-channel (first channel) recordings, except for the experiments with IPD features, which are extracted from two-channel recordings. For the performance evaluation, both the signal-to-distortion ratio (SDR) of BSSeval \cite{sdr} and the SiSDR \cite{sisdr} are used. \subsection{Results and discussion} \label{ssec:results} \subsubsection{Baseline} \label{baseline} We take the single-channel TD-SpeakerBeam \cite{td_speakerbeam} (without IPD combination) as our baseline. Results are shown in Table \ref{tab:base}. Systems (1) and (2) are the results given in \cite{td_speakerbeam}. As the source code of TD-SpeakerBeam is not open source, we implemented it ourselves and reproduced the results, as shown in systems (3) and (4), on the same WSJ0 2-mix corpus. We see that our reproduced results are slightly better than the ones in \cite{td_speakerbeam}. 
Moreover, by comparing (1) and (2), or (3) and (4), there is a big performance gap between the single-channel and two-channel TSE tasks: the IPD features extracted from two-channel recordings are effective in improving the TSE performance. In addition, we find that TSE under same-gender conditions is much more challenging than under the mixed-gender condition. \begin{table}[t] \caption{SDR / SiSDR (dB) performance of TD-SpeakerBeam (TSB). ``IPD'' represents the system with internal combination of IPD spatial features, ``F'' represents female, ``M'' represents male, and ``Avg'' represents the average results. Bold fonts indicate the best performance.} \label{tab:base} \footnotesize \setlength{\tabcolsep}{0.3mm} \centering \begin{tabular}{l|c|c|c|c|c} \toprule System & IPD & FF & MM & FM & Avg\\ \midrule (1) TSB [14] & - & 9.13 / - & 9.47 / - & 12.77 / - & 11.17 / - \\ (2) & \checkmark &{\bf 10.17 /-} & 10.30 / - & 12.49 / - & 11.45 / - \\ \midrule (3) TSB (our) & - & 9.43 / 8.84 & 10.02 / 9.52 & 12.54 / 12.06 & 11.26 / 10.76\\ (4) & \checkmark & 10.01 / 9.46 &\bf 10.51 / 10.02 & \bf 12.80 / 12.31 & \bf 11.65 / 11.15 \\ \bottomrule \end{tabular} \end{table} \subsubsection{Results of ASA with SiSDR loss} \label{sisdr_loss} Table \ref{tab:sisdr_loss} shows the experimental results of our proposed ASA with the SiSDR loss as the training criterion. System (1) is our baseline. In order to examine the effects of the mean pooling operation used in the ASA, we remove the mean pooling (M) and nearest upsampling (N) blocks used in Fig.\ref{fig:asa} (b), so that the proposed attention mechanism is performed directly between the mixture embedding matrix $\mathbf{Y}$ and the target speaker embedding vector $\mathbf{e}^s$. The results are shown as system (2). Compared with the baseline (system (1)), it is clear that system (2) achieves better performance under all gender-mixing conditions. 
The results of system (2) indicate that the dynamic interaction between the mixture and the target speaker is helpful for producing a more discriminative speaker bias to guide the network in extracting the target speech. \begin{table}[h] \caption{SDR / SiSDR (dB) performance of the proposed attention-based scaling adaptation (ASA) method. ``MP'' represents the mean pooling in ASA.} \label{tab:sisdr_loss} \footnotesize \setlength{\tabcolsep}{0.5mm} \centering \begin{tabular}{l|c|c|c|c} \toprule System & FF & MM & FM & Avg\\ \midrule (1) TSB & 9.43 / 8.84 & 10.02 / 9.52 & 12.54 / 12.06 & 11.26 / 10.76\\ \midrule (2) ASA & 9.78 / 9.23& 10.36 / 9.86 & 12.78 / 12.29 & 11.55 / 11.05 \\ (3) ASA (MP) & 9.83 / 9.26 & 10.47 / 9.97 & 12.89 / 12.40 & 11.65 / 11.15 \\ \midrule (4) Para \cite{cd} & 10.83 / 10.21 & 11.52 / 10.99 & 13.12 / 12.61 & 12.26 / 11.72 \\ (5) Para (ASA) & 11.50 / 10.87 & 12.01 / 11.48 & 13.56 / 13.06 & 12.75 / 12.22 \\ \bottomrule \end{tabular} \end{table} Moreover, as shown by system (3), the TSE performance can be further improved by introducing the mean pooling operation mentioned in Section \ref{sec:asa}. Summarizing the speaker information prior to the attention calculation enhances the ability to exploit the dynamic target speaker-dependent information between different input mixtures and the given adaptation utterance of the target speaker. Therefore, mean pooling is set as the default configuration for the rest of our experiments. Compared with the baseline (system (1)), the proposed ASA (system (3)) achieves $4.2 / 4.7\%$, $4.5 / 4.7\%$ and $2.8 / 1.9 \%$ relative improvements in SDR / SiSDR for the female-female, male-male, and female-male conditions, respectively. This indicates that the ASA can effectively enhance the target speech extraction, especially for same-gender mixtures. 
Furthermore, by comparing the results of system (4) in Table \ref{tab:base} and system (3) in Table \ref{tab:sisdr_loss}, the proposed system with ASA for the single-channel TSE task achieves performance competitive with the TD-SpeakerBeam with IPD for the multi-channel task. In addition, please note that the ASA does not introduce any additional learnable parameters, such as the linear transformation weights of the query, key, and value in self-attention \cite{self-att}. The proposed ASA only contains pure mathematical operations. These promising results indicate that our proposed ASA is effective in improving the discrimination of target speaker clues for TSE tasks. Although, as suggested in \cite{ms}, the local dynamics and the temporal structure are lost due to the averaging operation, the averaging can provide a more robust speaker bias vector than a single-frame vector. Experimental results show that the local mean pooling and the nearest upsampling operations are beneficial for ASA to exploit the target speaker clues in a more efficient way. We also tried applying the attention mechanism from \cite{ms} to the TSB structure, but the training process had difficulty converging, so we stopped it after several epochs. Furthermore, we also investigate the effects of ASA on the two-channel parallel-encoder-based TSB system \cite{cd, para}, which directly sums the waveform encodings of each input channel to enhance the mixture representation. The adaptation methods of systems (4) and (5) are SA and ASA with mean pooling, respectively. The results show that the extraction performance can be further improved by introducing the proposed ASA mechanism. 
\subsubsection{Results of ASA with multi-task loss} \label{mtl_loss} \begin{table}[t] \caption{SDR / SiSDR (dB) performance of the proposed attention-based scaling adaptation (ASA) method with multi-task loss (MTL).} \label{tab:mtl_loss} \footnotesize \setlength{\tabcolsep}{0.5mm} \centering \begin{tabular}{l|c|c|c|c} \toprule System & FF & MM & FM & Avg\\ \midrule (1) TSB & 9.43 / 8.84 & 10.02 / 9.52 & 12.54 / 12.06 & 11.26 / 10.76\\ (2) TSB (MTL) & 9.66 / 9.09 & 10.33 / 9.84 & 12.75 / 12.26 & 11.51 / 11.00 \\ \midrule (3) ASA (MTL) & \bf 9.93 / 9.35 & \bf 10.63 / 10.13 & \bf 12.92 / 12.43& \bf 11.73 / 11.22 \\ \bottomrule \end{tabular} \end{table} We also investigated the performance of ASA with the multi-task loss. The results are shown in Table \ref{tab:mtl_loss}. System (1) is the TD-SpeakerBeam trained with the SiSDR loss function, while systems (2) and (3) are the TD-SpeakerBeam and the proposed ASA system (with mean pooling) trained with the multi-task loss, respectively. The results show that even when using the multi-task loss, the proposed ASA still achieves better performance than the baselines. Note that in this case, we scaled the attention-based target speaker embedding matrix $\mathbf{B}$ by $\sqrt{N}$, as we found this results in better performance. \section{Conclusion} \label{sec:conclusion} In this work, we propose a novel target speaker adaptation technique for time-domain target speech extraction tasks. A special attention mechanism is designed to effectively exploit the dynamic target speaker-dependent interaction between different mixtures and the given adaptation utterance. This dynamic information can improve the target speaker clues and their discrimination for target speech extraction. Experiments on the spatialized WSJ0 2-mix corpus demonstrate that our proposed method effectively improves the TD-SpeakerBeam for target speech extraction. 
Furthermore, it is surprising to find that the single-channel performance gains resulting from the proposed ASA are competitive with those of the multi-channel TD-SpeakerBeam with IPD features. Our future work will focus on how to combine the proposed method with the multi-task loss in a more appropriate way. \bibliographystyle{IEEEbib}
\section{INTRODUCTION} \label{section:intro} \begin{figure*}{} \includegraphics[width = 0.94\textwidth]{fig1.pdf} \caption{ AGN spectrum extracted from the central 0.2 \arcsec $\times$ 0.2 \arcsec region. In the inset boxes, the stellar-subtracted spectrum (black), the combined model (red), and Gaussian components (blue) of each emission line from H$\beta$\ to [S~{\sevenrm II}]\ are presented. \\ \label{fig:allspec1}} \end{figure*} Active galactic nuclei (AGNs) are one of the key components in understanding galaxy formation and evolution as they release large amounts of energy via gas outflows or radio jets, affecting star formation in galaxies as well as the properties of the intergalactic medium \citep[][and references therein]{Fabian2012,Harrison2017}. In the AGN feedback scenarios, the energy from AGNs suppresses star formation by preventing gas from cooling (e.g., the cooling flow problem, \citealt{Fabian1994}) or by sweeping gas out of their host galaxies \citep[i.e., negative feedback, e.g.,][]{Silk1998,Fabian2012,Harrison2017}. On the other hand, AGN activities can also compress gas and trigger star formation in other circumstances, e.g., high density regions \citep[i.e., positive feedback, e.g.,][]{Silk2013,Zubovas2013}. Thus, the nature of AGN feedback is complex, since AGNs play a role in suppressing as well as enhancing star formation \citep{Zinn2013,Cresci2015a,Carniani2016,Zubovas2017}. { Star formation has recently been detected inside gas outflows, implying that gas outflows trigger star formation \citep{Maiolino2017,Gallagher2019}.} As a strong emission line in the optical wavelength range, the [O~{\sevenrm III}]\ 5007\AA\ line has been widely used as a tracer of ionized gas outflows. Various studies based on large survey data showed that gas outflows are prevalent in AGNs \citep[e.g.,][]{Mullaney2013,Bae2014,Woo2016,Rakshit2018}. 
Nonetheless, the overall impact of AGN outflows on star formation is unclear, particularly when spatially-resolved data are not available. Using $\sim$110,000 AGNs and star forming galaxies from the Sloan Digital Sky Survey (SDSS), for example, \cite{Woo2017} found that the specific star formation rate of strong outflow AGNs is comparable to that of main-sequence star forming galaxies, suggesting no evidence of instantaneous negative feedback. Integral field spectroscopy has enabled the detailed investigation of the spatial connection between gas outflows and star formation. Using the spatially-resolved information of AGN gas outflows, a number of studies found that the mass outflow rate is much higher (by $\sim$3-1400 times) than the mass accretion rate \citep{Barbosa2009,Riffel2009,Storchi-Bergmann2010, Muller-Sanchez2011, Bae2017,Humire2018,Revalski2018,Durre2019} or the star formation rate \citep[SFR;][]{ForsterSchreiber2014,Harrison2014}, indicating that gas removal is efficient. While negative feedback is expected from the large amount of outflowing gas, the instantaneous feedback effect has not been well observed \citep[see e.g.,][]{Karouzos2016a,Karouzos2016b,Bae2017}. Spatial anti-correlation between ionized gas outflows and star formation has been detected in individual objects, suggesting negative feedback \citep{Cano-Diaz2012,Cresci2015a,Carniani2016}. On the other hand, \citet{Cresci2015a} and \citet{Carniani2016} found star formation activity at the edge of outflows, suggesting both positive and negative feedback within given objects. Interestingly, \cite{Cresci2015b} reported star forming regions where an AGN-driven gas outflow encounters a dust lane in a Seyfert 2 galaxy, NGC 5643, suggesting that gas outflows may trigger star formation in dense regions. Along with ionized gas, molecular gas provides crucial information on the outflows and their connection to star formation. 
Observational studies have revealed cold molecular gas outflows (traced by CO molecules) due to AGN and the suppression of star formation \citep[e.g.,][]{Feruglio2010,Veilleux2013,Cicone2014,Fiore2017,Fluetsch2019}. Based on the Atacama Large Millimeter/submillimeter Array (ALMA) data, several studies reported that the spatially resolved kinematics of the molecular gas are consistent with those of the ionized gas, indicating that both molecular and ionized gas are under the influence of the AGN \citep[e.g.,][]{Garcia-Burillo2014,Zschaechner2016,Slater2019}. { However, the cold molecular gas outflows have been investigated only for a small number of nearby AGNs based on spatially resolved observations.} On the other hand, warm molecular gas ($T\sim10^{3}$ K) traced by, for example, H$_{2}\lambda$2.1218$\rm \mu m$ does not usually show outflow signatures in nearby AGNs \citep[e.g.,][]{Riffel2013,Davies2014,Schonell2019}. It is possible that the warm molecular gas has been destroyed by the radiation from the AGN \citep[e.g.,][]{Schonell2019}. The various previous results based on tracers of ionized and molecular gas outflows showed that the effect of AGN-driven gas outflows is diverse and complex. In particular, it is important to investigate gas in multiple phases to fully understand the impact of the outflows. Detailed studies with a population of AGNs are required to unravel the nature of AGN feedback in galaxy evolution. In this paper, we focus on a nearby Seyfert 2 galaxy, NGC 5728, at a distance of 40.3 Mpc, in order to investigate the connection between AGN outflows and star formation. This galaxy presents a star formation ring at a $\sim$1 kpc radius, while biconical ionized gas outflows as well as a one-sided radio jet are detected \citep{Schommer1988,Wilson1993,Son2009a,Davies2016,Durre2018,Durre2019}. 
Of particular interest is that the biconical gas outflows intersect with the star formation ring \citep[e.g.,][]{Wilson1993,Durre2018,Durre2019}, which is the densest region in the host galaxy \citep[similar to the dust lane of NGC 5643 of][]{Cresci2015b}, providing a good testbed for investigating AGN feedback via gas outflows. Recently, \cite{Durre2018,Durre2019} investigated the spatially-resolved kinematics of the ionized gas in the nuclear region of NGC 5728, reporting that a substantial amount of gas (38 M$_{\odot}$ yr$^{-1}$) is being removed from the nuclear region due to the powerful AGN gas outflows. They also reported no strong spatial relation between the radio jet and the supernova remnants in the star formation ring. However, the larger scale outflows and their impact on the star formation ring have not been previously explored. Thus, we will focus on the outflow kinematics on $\sim$6 kpc $\times$ $\sim$6 kpc scales and the connection between the gas outflows and star formation, particularly in the star forming ring, using the spatially resolved measurements based on the VLT/MUSE and ALMA data. In Section 2, we describe the data and data preparation. The analysis is described in Section 3, and we present our results and discussion in Sections 4 and 5. We adopt a cosmology of $H_{\rm 0}= 70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_{\Lambda}=0.7$ and $\Omega_{\rm m}=0.3$. \\ \begin{figure*}{} \includegraphics[width = 0.96\textwidth]{fig2.pdf} \caption{ Flux, velocity, and velocity dispersion maps of the stellar component. { The flux was integrated from 4800 to 6800\AA.} The flux map shows a star formation ring, spiral arms, and a bar. In the velocity map, blue and red represent approaching (blueshifted) and receding (redshifted) velocities, respectively. 
{ The black dashed line indicates the major axis of the star formation ring and divides the far side (SE) from the near side (NW).}\\ \label{fig:allspec1}} \end{figure*} \begin{figure*}{} \includegraphics[width = 0.92\textwidth]{fig3.pdf} \caption{ Flux maps of H$\alpha$\ (left) and [O~{\sevenrm III}]\ (right). Biconical gas outflows are represented with white dashed lines. The gas outflows and the ring encounter each other in Region A (black box). The black solid line in the right panel indicates a pseudo-slit with a length of 8\arcsec\ used to extract one dimensional radial profiles of the flux, velocity, and velocity dispersion of [O~{\sevenrm III}], which are used in Section 4.4. \label{fig:allspec1}} \end{figure*} \section{Data}\label{section:Sample} NGC 5728 was observed with the VLT/MUSE on 2016 Apr 3 and Jun 3 as a part of the Time Inference with MUSE in Extragalactic Rings (TIMER) survey \citep[Observation ID: 097.B-0640 (A), PI: Gadotti][]{Gadotti2019}. The observation was divided into 12 exposures, resulting in a total exposure time of 1.6 hrs. The seeing during the observing nights ranged from 0.56 to 0.91\arcsec.\ The data were retrieved from the ESO archive and reduced with the standard ESO reduction pipeline, ESOREFLEX (MUSE version 2.4.2). The resulting seeing size for the combined cube is 0.66\arcsec. To compare with the MUSE data, we also utilized the ALMA archival data of the $^{12}$CO (2-1) line from Project 2015.1.00086 (P.I. N. Nagar). The raw data were re-calibrated and re-imaged using CASA 4.7.0. Phase, bandpass and amplitude were calibrated using J1448-1620, J1517-2422 and Titan, respectively. The channels were sampled to a 10 km $\rm s^{-1}$ width, and Briggs weighting with a robust parameter of 0 was adopted to optimize the data in sensitivity and resolution. The synthesized beam of the final cube is 0.55\arcsec $\times$ 0.47 \arcsec with a 1-$\sigma$ rms of $\sim$0.65 mJy per beam per channel. 
\\ \section{Analysis} \label{section:analysis} \subsection{MUSE analysis} \label{section:anlysis} For the MUSE datacube, we performed a spectral fitting analysis by focusing on the central 30\arcsec$\times$30\arcsec, where the stellar continuum or emission lines are clearly visible. In the analysis, we used the spectral window from $\sim$4800 to 6800\AA, which covers the main optical emission lines { (i.e., H$\beta$, [O~{\sevenrm III}]$\lambda$4959,5007, [O~{\sevenrm I}]$\lambda$6300, H$\alpha$, [N~{\sevenrm II}]$\lambda$$\lambda$6548,6583, and [S~{\sevenrm II}]$\lambda$$\lambda$6716,6731)}. First, we fitted and subtracted the stellar continuum using the Penalized Pixel-Fitting code \citep[][]{Cappellari2017} with 47 ages (from 60 Myr to 12.6 Gyr) and solar metallicity from the E-MILES templates \citep{Vazdekis2016}, { which is widely used as it provides large dynamic ranges of metallicity and age.} In the stellar continuum fitting, we masked visible emission lines. Second, we fitted the emission lines that satisfy an amplitude-to-noise (A/N) ratio larger than 3, using the MPFIT package \citep{Markwardt2009}. To reproduce the observed emission lines, we adopted a single- or double-Gaussian model as the line profile. The double-Gaussian model was considered only if the A/N ratio of each Gaussian component in the fitting result is larger than 3, and the number of Gaussian components was determined depending on the chi-square values of the fitting results. Most fitting parameters were left free, except for those of [O~{\sevenrm III}], [S~{\sevenrm II}], and [N~{\sevenrm II}]. We tied the velocity and velocity dispersion for the [O~{\sevenrm III}], [S~{\sevenrm II}], and [N~{\sevenrm II}]\ doublets, respectively. In the case of \NII6548 and \NII6583, we fixed the flux ratio at 3. { From the best-fit models, we measured the flux, velocity, and velocity dispersion (VD) for each line (see Figure~1). 
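The single- versus double-Gaussian model selection described above can be sketched as follows; this is a simplified stand-in for the MPFIT-based fitting used in the paper, and the chi-square improvement threshold and scalar noise estimate are assumed details.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, cen, sig):
    return amp * np.exp(-0.5 * ((x - cen) / sig) ** 2)

def fit_line(wave, flux, noise, cen0):
    """Choose between single- and double-Gaussian line models.

    noise is the continuum rms; the double model is kept only if each
    component has A/N > 3 and the chi-square improves appreciably
    (the 10 percent threshold here is an assumed detail).
    """
    p1, _ = curve_fit(gauss, wave, flux, p0=[flux.max(), cen0, 2.0])
    chi1 = np.sum(((flux - gauss(wave, *p1)) / noise) ** 2)

    def gauss2(x, a1, c1, s1, a2, c2, s2):
        return gauss(x, a1, c1, s1) + gauss(x, a2, c2, s2)

    try:
        p2, _ = curve_fit(gauss2, wave, flux,
                          p0=[flux.max(), cen0, 2.0,
                              0.3 * flux.max(), cen0, 6.0])
        if p2[0] / noise > 3 and p2[3] / noise > 3 and \
           np.sum(((flux - gauss2(wave, *p2)) / noise) ** 2) < 0.9 * chi1:
            return p2
    except RuntimeError:
        pass
    return p1
```

The A/N check on each component mirrors the selection rule quoted above, falling back to the single-Gaussian model when a second component is not significant.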
Note that we did not separate the broad component from the narrow component in measuring outflow kinematics, since even the narrow component showed non-gravitational kinematics (see the narrow component of each line at the center in Figure~1).} The uncertainties were measured based on Monte-Carlo simulations with 100 mock spectra, which were used in comparison with the model predictions (see Section 4.4). \subsection{ALMA analysis} \label{section:anlysis} In order to investigate the molecular gas distribution compared to that of the ionized gas, we made a CO (2-1) line intensity map using only the data above $\sim$5$\sigma$ in order to be conservative. Using the line intensity, we estimated the total molecular gas mass ($M_{\rm H2}$) by adopting Eq. 3 of \cite{Bolatto2013} and the same conversion factor as the Milky Way, i.e., Xco = 2 $\times$ $10^{20} \rm cm^{-2}\ (K\ km\ s^{-1})^{-1}$ \citep{Bolatto2013}. The mean CO (2-1) to CO (1-0) line ratio was assumed to be $\sim$0.8 \citep{Braine1992}. We note that it is arguable whether our assumptions for Xco and the CO (2-1) to (1-0) line ratio are applicable to AGN hosts, including this case. For example, Xco is measured to be 0.2-0.4 $\times$ $10^{20} \rm cm^{-2}\ (K\ km\ s^{-1})^{-1}$ among the AGN population \citep{Bolatto2013}. The CO (2-1) to (1-0) line ratio is also known to vary depending on the environment \citep{Leroy2009}. While investigating more reliable values of Xco and the CO (2-1) to (1-0) line ratio is beyond the scope of this work, we would like to emphasize that all comparisons are made relatively within the area of interest, and hence our interpretation is robust regardless of the choice of those values. \begin{figure*}{} \includegraphics[width = 0.9\textwidth]{fig4.pdf} \caption{ Upper panels: velocity maps of H$\alpha$\ (left) and [O~{\sevenrm III}]\ (right) with respect to the systemic velocity of NGC 5728. 
Lower panels: maps of the relative velocity of H$\alpha$\ (left) and [O~{\sevenrm III}]\ (right), after subtracting the stellar velocity in each spaxel (i.e., $V_{Ha}-V_{stellar}$ and $V_{[OIII]}-V_{stellar}$). Spiral arms and biconical outflows are denoted with gray dashed lines and black dashed lines, respectively, as presented in Figure~3. Black solid lines in the right panel are the same as in Figure~3. \label{fig:allspec1}} \end{figure*} \begin{figure*}{} \includegraphics[width = 0.9\textwidth]{fig5.pdf} \caption{ Velocity dispersion maps of H$\alpha$\ (left) and [O~{\sevenrm III}]\ (right). Spiral arms and biconical outflows are denoted with gray dashed lines and black dashed lines, respectively, as presented in Figure~3. The gray and black dashed lines and the black solid line are the same as in Figure~3. \label{fig:allspec1}} \end{figure*} \section{Result} \label{section:result} \subsection{Stellar component}\label{astrometry correction} In Figure~2, we present the spatial distributions of the flux, velocity, and velocity dispersion of the stellar component, which were measured based on the stellar continuum over the wavelength range of 4800-6800\AA. The flux map shows interesting structures: 1) a star formation ring, 2) two spiral arms, and 3) a bar. The spiral arms may indicate the connection between the star formation ring and the large scale structure of the galaxy. The bar was previously reported by \cite{Wilson1993} and was also fully detected in the NIR observations by \cite{Ensellem2001}. The velocity map clearly shows a rotation pattern, of which the northern part is approaching while the southern part is receding, with maximum velocities of -160 km s$^{-1}$\ and 160 km s$^{-1}$, respectively. { Interestingly, the rotation pattern seems to be faster in the central region within 1 kpc compared to that on larger scales (i.e., $>$1 kpc). This result may suggest the presence of a compact disk at the central region of NGC 5728. 
} { Note that we determine the orientation of the disk such that the SE region is the far side and the NW region is the near side, based on the stellar kinematics and assuming that the spiral arms are trailing. The same geometry was constrained by \cite{Son2009a} based on the information of the star formation ring, although it is possible that the large scale disk and the star formation ring are tilted from each other.} The velocity dispersion slightly decreases outwards with an average of $\sim$150 km s$^{-1}$, except at the location of the star formation ring, where the stellar velocity dispersion is much lower, with an average of $\sim$100 km s$^{-1}$. Note that the measured stellar kinematics is consistent with that of \cite{Durre2019}. \\ \subsection{Ionized gas}\label{astrometry correction} In this section, we present the spatial distributions of the flux, velocity, and velocity dispersion, which were measured from the H$\alpha$\ and [O~{\sevenrm III}]\ emission lines, respectively, to investigate AGN outflows. \subsubsection{Flux distribution}\label{astrometry correction} The flux maps of H$\alpha$\ and [O~{\sevenrm III}]\ are presented in Figure~3. The H$\alpha$\ map reveals four structures: 1) the star formation ring, 2) the spiral arms, 3) biconical outflows, and 4) a doughnut shape. First, the star formation ring is clearly detected in the H$\alpha$\ map, although the shape is not a perfect ring. The H$\alpha$\ flux is stronger in the half ring in the NW, while the other half ring in the SE is relatively weak. This may be caused by the contamination or overlap of foreground AGN gas outflows along the line of sight (see Section 4.2.3). In the NW ring, we find an interesting region with prominent H$\alpha$\ emission, which is marked as Region A in the left panel of Figure~3. Region A is likely to be an intersection of the star formation ring and the AGN gas outflows, which will be investigated in the following sections. 
Second, we detect weak H$\alpha$\ emission along the spiral arms, particularly in the southern arm. These features indicate star formation activity in the spiral arms. Third, as previously reported \citep[see e.g.,][]{Durre2018,Durre2019}, the [O~{\sevenrm III}]\ flux map shows a conical shape from the center out to 1-2 kpc scales in the SE direction, indicating gas outflows. Finally, a doughnut shape is detected in the outflow region at a distance of $\sim$4\arcsec\ from the center. The doughnut shape could be interpreted as representing the hollow cone structure suggested by previous outflow studies \citep[e.g.,][]{Fischer2010}. The H$\alpha$\ and [O~{\sevenrm III}]\ flux maps show a similar trend except in the nuclear star formation ring and the spiral arms, where star formation is expected, indicating that there are various ionizing sources in the FOV; this issue will be discussed in Section 4.2.3. \subsubsection{Kinematics}\label{astrometry correction} The velocity maps of H$\alpha$\ and [O~{\sevenrm III}]\ with respect to the systemic velocity of NGC 5728 are presented in the upper panels of Figure~4, showing a rotation pattern due to the host galaxy gravitational potential, while there are additional non-gravitational components in the NW-to-SE direction. At the very center ($<$ 1 kpc), where the biconical outflows are detected, the location of the two clumps with the highest blueshift/redshift is misaligned with the larger scale gravitational motion (N-S direction). Also, we detect high velocity structures close to, but outside of, the spiral arms. The blueshifted region in the NW and the redshifted region in the SE are clearly present in the [O~{\sevenrm III}]\ velocity map, indicating the presence of non-gravitational motion, i.e., outflows, on $\sim$2 kpc scales. 
{ Gas inflows are often detected along spiral arms, as gas emission lines are blueshifted on the far side while the near side presents redshifted emission lines \citep[e.g.,][]{Riffel2008,Riffel2013,Diniz2015,Luo2016}. In contrast, NGC 5728 shows the opposite trend, as H$\alpha$\ and [O~{\sevenrm III}]\ are redshifted on the far side (SE), while on the near side (NW) H$\alpha$\ and [O~{\sevenrm III}]\ show blueshift. Thus, the gas on the far side (near side) is receding (approaching), suggesting outflows rather than inflows. } To separate the non-gravitational motion, we construct the relative velocity maps by subtracting the stellar velocity from the gas velocity in each spaxel (lower panels of Figure~4). The relative motion of the ionized gas clearly shows the biconical outflows in the central region within the location of the star formation ring (hereafter the inner region, see Figure~3). They are composed of a pair of receding (redshifted) and approaching (blueshifted) parts. We find that the gas outflows are not confined to the 1 kpc scale. Rather, the gas outflows extend to 2 kpc scales, slightly beyond the location of the spiral arms (hereafter the outer region). The inner region shows maximum H$\alpha$\ velocities of 250 and -140 km s$^{-1}$\ in the receding and approaching cones, respectively, while the maximum velocities in the outer region are 160 and -180 km s$^{-1}$\ in the receding and approaching cones, respectively. The relative velocity maps of H$\alpha$\ and [O~{\sevenrm III}]\ show qualitatively similar morphologies; however, [O~{\sevenrm III}]\ generally shows more negative velocities in the outer region. The maximum velocities of [O~{\sevenrm III}]\ in the inner and outer regions are, respectively, 320 (-140) and 90 (-220) km s$^{-1}$\ for the receding (approaching) cone. 
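The spaxel-wise construction of the relative velocity maps can be sketched as follows (array names and the A/N masking detail are illustrative):

```python
import numpy as np

def relative_velocity(v_gas, v_star, an_ratio, an_min=3.0):
    """Relative (non-gravitational) velocity map: gas minus stellar
    velocity in each spaxel; spaxels below the A/N threshold are masked."""
    v_rel = v_gas - v_star
    v_rel[an_ratio < an_min] = np.nan
    return v_rel
```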
The velocity dispersions of H$\alpha$\ and [O~{\sevenrm III}]\ tend to be higher in the regions where the gas outflows are detected (i.e., the inner and outer regions) compared to the rest of our FOV (see Figure~5). In the gas outflow regions, the observed velocity dispersion is highest at the central part ($\sim$300 and $\sim$360 km s$^{-1}$\ for H$\alpha$\ and [O~{\sevenrm III}]) and gradually decreases as a function of distance from the center (down to $\sim$100 km s$^{-1}$\ in [O~{\sevenrm III}]). The high velocity dispersion detected in the gas outflow region suggests that the ionized gas, in particular [O~{\sevenrm III}], is under the strong influence of the AGN. At the central part, we find an interesting morphology with very high velocity dispersions (i.e., $>$300 km s$^{-1}$) along the direction perpendicular to the orientation of the gas outflows { (see also Figure 7 of \citealt[][]{Durre2019}), which has also been reported in other AGNs \citep[e.g.,][]{Riffel2014,Lena2015, Couto2017}. This feature can be interpreted as an equatorial outflow \citep{Riffel2014}, although the seeing effect is a likely alternative explanation (see Section 4.4 for more discussion).} { In order to investigate the kinematical relation between the ionized gas and the stars, we compare their V$_{\rm rms}$ (i.e., $\sqrt{V^{2}+\sigma^{2}}$) \citep[e.g.,][]{Cheung2016}. As shown in Figure~6, the V$_{\rm rms}$ ratio between the ionized gas (H$\alpha$\ and [O~{\sevenrm III}]) and the stars is high (i.e., $\sim$2) in the regions where gas outflows are detected. This result confirms the non-gravitational kinematics (i.e., gas outflows) in those regions, including the central part where a compact stellar disk is present. 
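The V$_{\rm rms}$ comparison can be sketched as (map names are illustrative):

```python
import numpy as np

def vrms_ratio(v_gas, sig_gas, v_star, sig_star):
    """Ratio of gas to stellar V_rms, with V_rms = sqrt(V**2 + sigma**2).
    Values near 1 indicate gravitational motion; values of ~2 flag outflows."""
    return np.hypot(v_gas, sig_gas) / np.hypot(v_star, sig_star)
```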
On the other hand, the V$_{\rm rms}$ ratio is generally close to unity in the rest of the FOV, indicating that the gas follows the gravitational potential (see also the middle panel in Figure~2).} \begin{figure*}{} \includegraphics[width = 0.9\textwidth]{fig6.pdf} \caption{ V$_{\rm rms}$ maps of H$\alpha$\ (left) and [O~{\sevenrm III}]\ (right). Spiral arms and biconical outflows are denoted with gray dashed lines and black dashed lines, respectively, as presented in Figure~3. The gray and black dashed lines are the same as in Figure~3. \label{fig:allspec1}} \end{figure*} Using the [O~{\sevenrm III}]\ velocity dispersion, we estimate the size of the AGN gas outflows by adopting the method of \cite{Kang2018}. The outflow size is defined as the radius where the [O~{\sevenrm III}]\ velocity dispersion becomes comparable to the stellar velocity dispersion. To be consistent with \cite{Kang2018}, we adopt a stellar velocity dispersion of 160 km s$^{-1}$, which is measured from the integrated spectrum within a 3\arcsec\ diameter at the center. Our estimated outflow size is $\sim600$ pc and the luminosity of [O~{\sevenrm III}]\ within the outflow size is $8.51\times 10^{39}$ erg/s. These measurements are consistent with the outflow size-luminosity relation of \cite{Kang2018} within the scatter of 0.1 dex. This result indicates that if NGC 5728 were at a larger distance, outflows would mainly be detected from the region where the gas velocity dispersion is very high, resulting in a relatively small outflow size. In contrast, we are able to detect gas outflows on much larger scales with high spatial resolution and high sensitivity, although the gas outflows are relatively weak. \\ \begin{figure*}{} \includegraphics[width = 0.98\textwidth]{fig7.pdf} \caption{ BPT morphology maps with three diagnostics ([N~{\sevenrm II}], [S~{\sevenrm II}],\ and [O~{\sevenrm I}]). Red, blue, cyan, and yellow represent AGN, star forming region, composite, and LINER, respectively. 
In each BPT map, Region A is marked with a white box. \\ \label{fig:allspec1}} \end{figure*} \begin{figure*}{} \includegraphics[width = 0.98\textwidth]{fig8.pdf} \caption{ BPT diagrams with three diagnostics ([N~{\sevenrm II}], [S~{\sevenrm II}],\ and [O~{\sevenrm I}]). Red lines denote the mixing sequence between the two basis points for the star forming and AGN regions. Color denotes the AGN fraction from 0 to 100\%\ along the mixing sequence.\\ \label{fig:allspec1}} \end{figure*} \subsubsection{Photoionization: AGN vs. star formation}\label{astrometry correction} To identify the ionizing sources across the FOV, we examine the line flux ratios in each spaxel, using the BPT diagrams with three diagnostics ([N~{\sevenrm II}]\, [S~{\sevenrm II}]\, and [O~{\sevenrm I}]). As shown in Figure~7, our BPT classification cleanly separates the AGN gas outflows and the star formation ring into AGN and star forming regions. Also, the BPT diagrams indicate a clear mixture of photoionization due to star formation and AGN in the central part of NGC 5728 (see Figure~8). We note that our BPT classification result is consistent with those presented in previous works \citep[][]{Davies2016,Durre2018}. We separate the ionizing sources using the emission line flux ratios in the BPT diagrams \citep[e.g.,][]{Kauffmann2009,Davies2016}, as similarly performed by \cite{Davies2016}. First, we determine two basis points representing pure star formation and pure AGN, respectively, in the BPT diagrams and draw a line between the two basis points as the mixing sequence (see red lines in Figure~8). Thus, the mixing sequence represents the AGN fraction from 0\% to 100\% between the two basis points. Note that the mixing sequence is curved in the log scale BPT diagrams, but it is defined as a line in the linear scale flux ratio diagrams. Then, for a given location in the BPT diagrams, we adopt the AGN fraction of the closest point on the mixing sequence. 
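The projection onto the mixing sequence can be sketched in linear flux-ratio space as follows; the basis-point values in the example are illustrative assumptions, not the ones adopted in the paper.

```python
import numpy as np

def agn_fraction(point, sf_basis, agn_basis):
    """AGN fraction from the closest point on the linear mixing sequence.

    All coordinates are linear (not log) flux ratios, e.g.
    ([N II]/Ha, [O III]/Hb).  Returns 0 for pure star formation
    and 1 for pure AGN.
    """
    p = np.asarray(point, dtype=float)
    a = np.asarray(sf_basis, dtype=float)
    b = np.asarray(agn_basis, dtype=float)
    # orthogonal projection of p onto the segment a -> b, clipped to [0, 1]
    t = np.dot(p - a, b - a) / np.dot(b - a, b - a)
    return float(np.clip(t, 0.0, 1.0))
```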
In this way, we determine the AGN fraction of each location in the BPT diagram, as presented in Figure 7. Note that we calculate the minimum distance to the mixing sequence using the linear scale (instead of log scale) BPT diagrams, following \citet{Davies2016}. The AGN fraction is color-coded in Figure~8. While the AGN fraction is determined independently using each BPT diagram, we find only marginal differences in the AGN fraction among them. Thus, we adopt the AGN fraction estimated from the BPT diagram based on [N~{\sevenrm II}]. Comparing Figures~7 and 8, the AGN fraction is low (<$\sim$10\%) in the star forming regions, while it goes up to 50-100\%\ in the AGN region. Using the determined AGN fraction in each spaxel, we separate the contribution from star formation to the H$\alpha$\ emission, determine the star-forming H$\alpha$\ luminosity, and calculate the SFR based on Eq. 4 of \cite{Murphy2011}: \begin{equation} SFR(M_{\odot}\ yr^{-1}) = 5.37 \times 10^{-42}\ L_{\rm H\alpha}\ \rm (erg/s). \end{equation} For the luminosity calculation, the H$\alpha$\ flux is corrected for dust extinction using Eq. A10 of \cite{Vogt2013} with an intrinsic Balmer decrement (i.e., H$\alpha$/H$\beta$) of 2.86 and $R^{A}_{V}$ of 4.5 \citep{Fischera2005}. The H$\alpha$\ luminosity maps are presented in Figure~10, after separating the AGN and star formation contributions to H$\alpha$, along with the SFR map. The AGN outflows and the star formation ring are well separated, consistent with the results in \cite{Davies2016}. Moreover, we find that clumpy structures in the star formation ring become more prominent in the SFR map (see also Figure~3). To compare with Region A, we additionally select three regions (i.e., Regions B, C, and D) whose SFRs are distinctively high in the star formation ring. We arbitrarily determine their sizes and estimate their SFRs (Table~1). We note that there is another region with high SFR to the south of Region C. However, the AGN contribution there is dominant, as the BPT classification indicates. 
Thus, as a conservative approach, we do not consider this region in the comparison with Region A. As shown in Figure~9, these regions show small AGN contributions (10$\pm$6, 5$\pm$6, 3$\pm$1, and 2$\pm$2 \% for Regions A, B, C, and D, respectively), suggesting that most of the H$\alpha$\ emission comes from star formation. Interestingly, we find that the SFR in Region A is the second highest among those of the four regions. Note that this trend does not change even if we consider the three-sigma uncertainties (up to $\sim$28\%) of the AGN fraction. Similar to the H$\alpha$\ flux map (see Figure~3), the SFR map (right panel of Figure~10) does not reveal the other half ring in the SE direction even after the dust extinction correction, indicating that the dust correction was not successful there. Since the fluxes of H$\alpha$\ and H$\beta$\ in the SE half ring are largely dominated by the AGN ($\sim$50-100\%), the dust extinction correction in that region is heavily weighted toward the AGN cone, not the star formation ring \citep[see also][]{Durre2018}. However, this issue is not relevant to the four regions defined in the NW, because the AGN fractions in those regions are very low (<10\%). Note that our AGN-star formation separation is consistent with that of \cite{Davies2016}, while it is somewhat different from that of \cite{Durre2018}. This is due to the different approaches in the ionizing source separation. \cite{Durre2018} separated the ionizing sources with a logarithmic superposition of two basis points in the BPT diagrams, which is different from the linear superposition used in this work and in the work by \cite{Davies2016}. As shown in Figure~18 of \cite{Durre2018}, their AGN fraction is $\sim$40\%\ in the star formation ring, which is much larger than our estimate ($\sim$10\%). However, even if we adopt the AGN fraction of $\sim$40\%, the SFR in Region A is still comparable to that in Regions C and D. 
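The SFR calculation of Section 4.2.3 can be sketched as follows; the Balmer-decrement exponent of 2.53 is an assumed stand-in for the Eq. A10 correction of \cite{Vogt2013} with $R^{A}_{V}=4.5$, and the function arguments are illustrative.

```python
import math

def sfr_from_halpha(f_ha, f_hb, agn_frac, lum_dist_cm):
    """SFR from the extinction-corrected, star-formation-only H-alpha flux.

    f_ha, f_hb  : observed fluxes (erg/s/cm^2)
    agn_frac    : AGN fraction from the BPT mixing sequence (0-1)
    lum_dist_cm : luminosity distance in cm
    """
    balmer = f_ha / f_hb
    # dust correction relative to the intrinsic Balmer decrement of 2.86;
    # the exponent 2.53 is an assumed attenuation-curve detail
    corr = max(balmer / 2.86, 1.0) ** 2.53
    f_sf = f_ha * corr * (1.0 - agn_frac)            # star-formation-only flux
    L_ha = 4.0 * math.pi * lum_dist_cm ** 2 * f_sf   # luminosity in erg/s
    return 5.37e-42 * L_ha                           # Murphy et al. (2011), Eq. 4
```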
\\ \begin{figure}{} \includegraphics[width = 0.44\textwidth]{fig9.pdf} \caption{AGN fraction map. Color represents the AGN contribution, ranging from 0 to 100\%. White boxes mark Regions A-D, which are described in Section 4.2.3. \label{fig:allspec1}} \end{figure} \begin{figure*}{} \includegraphics[width = 0.99\textwidth]{fig10.pdf} \caption{ H$\alpha$\ luminosity maps using the total H$\alpha$\ line (left) and the contribution from the AGN (center). The SFR derived from the H$\alpha$\ line after subtracting the AGN contribution is shown in the right panel. Regions A-D with high SFR are marked with black boxes. \\ \label{fig:allspec1}} \end{figure*} \begin{figure*}{} \centering \includegraphics[width = 0.92\textwidth]{fig11.pdf} \caption{ Left panel: molecular gas mass ($M_{\rm Mol}$) map derived from the CO (2-1) intensity map, showing various structures (i.e., a circumnuclear disk, streams, and clumps). Right panel: star formation efficiency (i.e., SFR/M$_{\rm Mol}$). Regions A-D are marked with black boxes in each panel.\\ \label{fig:allspec1}} \end{figure*} \begin{deluxetable*}{ccccccc} \tablewidth{0pt} \tablecolumns{7} \tabletypesize{\scriptsize} \tablecaption{SFR and SFE} \tablehead{ \colhead{} & \colhead{SFR\_avg} & \colhead{SFR\_med} & \colhead{SFE\_avg} & \colhead{SFE\_med} & \colhead{$\rm T_{dep}$\_avg} & \colhead{$\rm T_{dep}$\_med} \\ & \colhead{($\rm M_{\odot}/yr/kpc^2$)} & \colhead{($\rm M_{\odot}/yr/kpc^2$)} & \colhead{($\rm yr^{-1}$)} & \colhead{($\rm yr^{-1}$)} & \colhead{($\rm Gyr$)} & \colhead{($\rm Gyr$)} \\ \colhead{(1)} & \colhead{(2)} & \colhead{(3)} & \colhead{(4)} & \colhead{(5)} & \colhead{(6)} & \colhead{(7)} } \startdata Region A & $1.82$ & $1.70$ & 1.62$\times \rm 10^{-8}$ & 1.15$\times \rm 10^{-8}$ & 0.062 & 0.087 \\ \hline Region B & $2.15$ & $1.98$ & 5.25$\times \rm 10^{-9}$ & 5.33$\times \rm 10^{-9}$ & 0.190 & 0.188 \\ \hline Region C & $1.20$ & $1.13$ & 3.14$\times \rm 10^{-9}$ & 2.84$\times \rm 10^{-9}$ & 0.318 & 0.352 \\ \hline 
Region D & $1.04$ & $0.90$ & 2.47$\times \rm 10^{-9}$ & 2.34$\times \rm 10^{-9}$ & 0.405 & 0.427 \enddata \label{table:prop} \tablecomments{Average or median values of the SFR, SFE, and depletion time scale for the four regions marked in Figures~10 and 11. } \end{deluxetable*} \subsection{Molecular gas} To investigate the molecular gas and its relation with the SFR, we present the distribution of the molecular gas mass ($M_{\rm Mol}$) and star formation efficiency (SFE=SFR/$M_{\rm Mol}$) in Figure~11. In each panel, Regions A-D with high SFR are marked. The structure of the molecular gas generally follows that of the SFR map (Figure~10), showing the star formation ring and the spiral arms. However, the molecular gas distribution also shows distinct features (i.e., a circumnuclear disk, non-detection along the outflow orientation, streams, and clumps). First of all, the distribution of the molecular gas is strongly concentrated at the central part, which may indicate a circumnuclear disk with a radius of 1\arcsec\ (corresponding to 200 pc), similar to, e.g., NGC 1068 \citep{Garcia-Burillo2014}. As shown in the map of stellar velocity (the middle panel of Figure~2), the fast rotation of the stars in the central region may be related to the circumnuclear disk. Remarkably, the position angle of the circumnuclear disk is $\sim$40\arcdeg, which is nearly perpendicular to the orientation of the gas outflow (i.e., -53\arcdeg). This may indicate that the dust torus of the AGN is confined within the circumnuclear disk, although it is not resolved at the given resolution of our ALMA data. Secondly, we do not detect CO along the outflow orientation, which is a different trend from that of the ionized gas. This non-detection can be due to the excitation or dissociation of CO by X-ray photons (or radio jets) from the AGN. This issue will be discussed in the next section.
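The SFE and depletion-time columns of Table~1 are related by $t_{\rm dep}=1/\mathrm{SFE}$; the following minimal sketch, using the averaged SFE values from the table, reproduces the tabulated depletion times:

```python
# Star formation efficiency and depletion time: SFE = SFR / M_mol,
# T_dep = 1 / SFE. Values below are the averaged SFEs from Table 1.
sfe = {"A": 1.62e-8, "B": 5.25e-9, "C": 3.14e-9, "D": 2.47e-9}  # yr^-1

for region, s in sfe.items():
    t_dep_gyr = 1.0 / s / 1e9   # depletion time in Gyr
    print(region, round(t_dep_gyr, 3))

# Region A's depletion time is several times shorter than in Regions B-D.
print(sfe["A"] / sfe["B"], sfe["A"] / sfe["D"])
```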
Thirdly, we detect CO gas streams along the star formation ring at position angles of $\sim$45\arcdeg and -135\arcdeg (i.e., Region B). These regions may be related to contact points between the star formation ring and large-scale structures of the host galaxy. Due to the large amount of inflowing material along the spiral arms, molecular gas can accumulate, and hence star formation can be active near the contact points \citep[e.g.,][]{Boker2008}. Indeed, in Region B we detect the highest star formation among the regions in the star formation ring (see Table~1). Finally, clumpy regions are detected (e.g., Regions C and D). With high molecular mass as well as high SFR, these regions may be typical star-forming regions. Contrary to Regions B, C, and D, CO emission is barely detected in Region A. Consequently, the observed SFE (depletion time scale) in Region A is $\sim$3-5 times higher (shorter) than that of the other regions (see Table~1). We thus consider Regions B, C, and D as typical star-forming regions without AGN feedback, while Region A could have a different mechanism for star formation (i.e., AGN feedback). We will discuss this issue in the next section.\\ \subsection{Bicone model of AGN gas outflow} We constrain the geometry of the gas outflows using a bicone model. Several efforts have been made to model biconical outflows \citep[e.g.,][]{Fischer2013,Bae2016}. With various structural parameters (i.e., inclination and outer opening angle), the observed kinematics of the gas outflows can be reproduced. However, the previous models did not consider the seeing effect, which can severely change the observed morphology of the gas outflows in the projected plane. To apply the seeing effect, we build a three-dimensional bicone model by including a point spread function in the model of \cite{Bae2016}.
The updated model requires 11 free parameters (i.e., 7 parameters for the bicone geometry, 3 parameters for a dust plane accounting for the extinction effect, and one parameter for the seeing size). A detailed description of the updated model with the seeing effect will be presented in a future paper (Shin et al. in preparation). Through various tests, we determine the best parameters of the bicone model, which reproduce the observations with the minimum chi-square value, under three specific conditions as described below. First, we focus on the central region ($\sim10\arcsec \times 10\arcsec$), where the AGN fraction is dominant, to minimize the contamination from the host galaxy, since this model only accounts for non-gravitational effects due to AGN outflows. Note that even though we separate the ionizing sources (i.e., AGN and star formation), the kinematics are not separated in this work. Second, to reduce the number of geometry parameters in the modeling, we adopt a seeing size of 0.66\arcsec\ from the MUSE observation, and constrain the geometry of the large-scale dust plane from the observed properties of the star formation ring, i.e., the position angle of the major axis of the ring (20\arcdeg) and the inclination of the minor axis (50\arcdeg), based on the previous work by \citet{Son2009a}. Lastly, we compare one-dimensional radial profiles of the flux, velocity, and velocity dispersion of the bicone model along the AGN gas outflow direction (see the thick black line in Figures~3-5) with the [O~{\sevenrm III}]\ observations, in order to determine the best physical parameters of the gas outflows. Note that the [O~{\sevenrm III}]\ velocity dispersion suddenly decreases at 2-3\arcsec\ in the approaching cone (shaded area in Figure~12), the origin of which is unclear. For the comparison, we masked out this region. The dip feature may be caused by the contamination from star formation ($\sim$20\%) or by dust obscuration of the gas outflows.
An interaction between the host galaxy and the AGN gas outflows is also a potential explanation of the dip feature \citep[e.g.,][]{Fischer2017}. Figure~12 presents the radial profiles of the flux, velocity, and velocity dispersion, reproduced from the best bicone model, compared to the observed profiles, as a function of distance from the position of the X-ray source \citep[RA=14:42:23.88, Dec=$-$17:15:11.25;][]{Evans2010}. While the reproduced radial profiles of the [O~{\sevenrm III}]\ kinematics do not perfectly match the observations, the model explains the measured velocity and velocity dispersion of [O~{\sevenrm III}]\ reasonably well. The best bicone model is constrained with an inclination of 20\arcdeg\ and an outer opening angle of $\pm$28\arcdeg. This result suggests that the gas outflows cover a relatively wide range of inclination angles from -8 to +48\arcdeg, and encounter the stellar disk including the star formation ring and the spiral arms, for which the inclination angle of the minor axis is $\sim$50$\arcdeg$ \citep{Son2009a}. Also, the smaller inclination of the bicone compared to that of the star formation ring confirms that, along the line of sight, the approaching cone in the NW direction is behind the star formation ring while the receding cone in the SE direction is in front of the ring. We present the simulated maps of flux, velocity, and velocity dispersion based on the best bicone model (Figure~13). Even though our model does not fully represent the observations, due to various limitations in the model and various issues in the observations (i.e., star formation contribution and non-uniform dust extinction), the simulated maps qualitatively reproduce the flux distribution and kinematics of [O~{\sevenrm III}]. One striking result of our two-dimensional model is the elongated feature with the highest velocity dispersion, which is oriented perpendicular to the direction of the gas outflow.
This feature is clearly detected in the velocity dispersion map of [O~{\sevenrm III}]\ (Figure~5), suggesting that the seeing effect can be responsible for this structure. Note that this perpendicular feature cannot be reproduced without the seeing effect in our previous model \citep[see Figure~3 of][]{Bae2016}, indicating the importance of the seeing effect. Since the overlap between the approaching and receding cones is strongest in the central region, the velocity dispersion is naturally expected to be very high there. The seeing effect artificially extends this overlap region in the perpendicular direction, while in the outflow direction the seeing effect is much weaker due to the contribution from the outer region. Thus, the perpendicular shape of the highest velocity dispersion is produced in our bicone model. In our simulation, the size of the perpendicular feature is 2.5\arcsec, which is smaller than that of the observed feature ($\sim$4.4\arcsec). Nonetheless, we qualitatively confirm the possibility that the perpendicular feature is due to the seeing effect. An equatorial outflow is another possibility. For example, \cite{Riffel2014} discussed equatorial outflows aligned with extended radio emission in NGC 5929. To investigate this possibility, we examine the 20 cm radio image of NGC 5728 \citep{Schommer1988}. While there is weak radio emission in the central region, we find no clear evidence of extended radio emission along the direction perpendicular to the outflow. We note that \cite{Durre2019} also constrained the inclination and the outer angle of the bicone and discussed the possible interaction between the host galaxy and the AGN outflow. Even though their results are generally consistent with ours, their values of the inclination (47.6\arcdeg) and outer angle (71\arcdeg) are quantitatively different.
If their inclination value is adopted, it is difficult to explain the dust obscuration of the approaching cone in the NW direction by the star formation ring or the dusty stellar disk, since the inclinations of the cone and the disk are very similar. The discrepancy seems to stem from the specific analytic model of gas outflows in \cite{Durre2019}. For example, they constrained the geometry (i.e., inclination and opening angle) with the assumption that the velocities of [Fe~{\sevenrm II}] 1.644 $\mu$m, which were measured from each Gaussian component in the best-fit model, represent the front and back velocities of the hollow cone (see Eqs. 14 and 15 of \citealt{Durre2019}). However, the front and back velocities measured from the two Gaussian components of the emission line do not correspond to the same distance from the center due to the projection effect, which varies depending on the inclination and opening angle. Thus, more detailed constraints are needed to fully understand the geometry of the outflows. \begin{figure} \centering \includegraphics[width = 0.44\textwidth]{fig12.pdf} \caption{ Radial profiles of the measured flux, velocity, and velocity dispersion of [O~{\sevenrm III}]\ (black points) along the AGN gas outflow direction (see the pseudo-slit in Figure~3). The radial distance in the projected plane is measured from the position of the X-ray source. Red points represent the measurements from the best bicone model with an inclination of 20$\arcdeg$ and an outer opening angle of $\pm$28$\arcdeg$. The region from 1.5 to 3.2\arcsec\ is masked out for the comparison between observations and model predictions. \label{fig:radprofile}} \end{figure} \begin{figure*} \centering \includegraphics[width = 0.96\textwidth]{fig13.pdf} \caption{ Simulated maps of flux, velocity, and velocity dispersion using the biconical outflow model. The center of the outflows is marked with a cross. The model parameters are the same as in Figure~12.
\label{fig:simmaps}} \end{figure*} \begin{figure} \centering \includegraphics[width = 0.44\textwidth]{fig14.pdf} \caption{ Schematic view of the central part of NGC 5728. The PA of the biconical outflows is -53\arcdeg\ and the inclination angle is 20\arcdeg\ with an opening angle of $\pm$28\arcdeg, while the PA of the major axis of the ring is 20\arcdeg\ and the inclination angle of the minor axis of the ring is 50\arcdeg. The region where the gas outflows encounter the ring is denoted with A, while the highly star-forming regions in the ring are denoted with C and D. B represents the contact point between the ring and the northern spiral arm. \label{fig:schematic}} \end{figure} Based on the constraints in this study, we construct a schematic model, consisting of the star formation ring, the spiral arms, and the biconical gas outflows (see Figure~14). As expected from the MUSE observation, the geometry of the model shows that the approaching cone is obscured by the star formation ring, while the receding cone is in front of the star formation ring. As a reference, we mark the four regions with high SFR (i.e., Regions A-D). In particular, Region A is presented as the interaction region between the AGN gas outflows and the star formation ring. Region B is marked at the contact point between the star formation ring and the northern spiral arm. Finally, we indicate Regions C and D, which are regarded as typical star-forming regions in the star formation ring. \\ \section{Discussion}\label{Discussion} \subsection{Positive feedback}\label{sec:positive} We detect high SFR in Region A, where the gas outflows encounter the star formation ring at a $\sim$1 kpc projected distance from the center (see Figure~10). The star formation efficiency of Region A is higher than those of the other regions (B, C, and D) in the ring by a factor of $\sim$3-5, suggesting that the AGN-driven outflows may have enhanced star formation in the intersecting area.
The relative deficit of CO molecular gas in Region A also indicates that the triggering mechanism of star formation is different from that in the other regions, supporting the positive role of the AGN-driven outflows. A possible explanation of the lack of CO gas is that the AGN-driven outflows triggered a burst of star formation, consuming a large amount of molecular gas. Another scenario is that CO is excited by X-ray photons from the AGN \citep[e.g.,][]{vanderWerf2010}. Since X-ray emission is detected in the gas outflow regions as well as in Region A \citep[see e.g.,][]{Durre2018}, the marginal detection of the CO (2-1) emission can be explained if CO is mostly excited to higher states (e.g., CO (3-2)) by X-ray photons. Similarly, the excitation or dissociation of CO by shocks may cause the lack of the CO (2-1) emission \citep[e.g.,][]{Flower2010,Meijerink2013}. These scenarios of CO excitation/dissociation can also be applied to the central part of NGC 5728, where the CO (2-1) emission is not detected along the outflow direction. Multi-phase CO observations are required to verify these scenarios as the origin of the lack of the CO (2-1) emission in Region A. Nevertheless, all the proposed scenarios indicate an interaction between the AGN-driven outflows and the ISM in the star formation ring, supporting the positive feedback interpretation. We now turn to the overall impact of the AGN outflows on star formation in NGC 5728. The estimated SFR in Region A is $\sim$ 0.2 M$_{\odot}$ yr$^{-1}$, which is only $\sim$10\%\ of the combined SFR ($\sim$ 2 M$_{\odot}$ yr$^{-1}$) in the $30 \arcsec \times 30 \arcsec$ FOV (i.e., $\rm 6 \ kpc \times 6\ kpc$). Relative to the total SFR of the entire galaxy, the contribution of the SFR enhanced by the AGN outflows is much lower than 10\%. Given this small contribution, it is difficult to claim a significant global impact of the AGN outflows in NGC 5728.
While the overall effect of AGN feedback is limited, our results indicate that the AGN outflows may trigger star formation in high-density regions (e.g., the star formation ring or dust lane), as expected by several theoretical works \citep[e.g.,][]{Silk2013,Zubovas2017} and also reported by observational studies \citep{Cresci2015a,Cresci2015b,Carniani2016}. \\ \subsection{Negative feedback}\label{sec:negative} We detect two main regions of the gas outflows. While the inner region of the gas outflows at $<$ 1 kpc distance in the projected plane has been extensively discussed \citep[e.g.,][]{Wilson1993,Son2009a,Durre2018,Durre2019}, we newly find the outer region of the gas outflows on a $\sim$2 kpc scale, by calculating the velocity of the ionized gas relative to the stellar velocity at each spaxel (see Figure~4). Interestingly, the outer region lies farther out than the spiral arms. We interpret this as inflowing gas along the spiral arms being swept out by the AGN outflows, which produces the relatively high velocity and velocity dispersion in the outer region. Since gas supply is key to star formation, this result implies that the star formation in the spiral arms is quenched by the gas removal by the AGN (i.e., negative feedback). Although the SFR in the outer region of the gas outflows is not well estimated due to the imperfect dust extinction correction, the star formation activity seems much weaker than that in the star formation ring. \begin{figure} \centering \includegraphics[width = 0.44\textwidth]{fig15.pdf} \caption{ Hydrogen column density map calculated from the extinction. Region A is denoted with a black box. \label{fig:nhmap}} \end{figure} \subsection{Gas density and AGN luminosity}\label{sec:density} The characteristics of AGN feedback have been explored over a range of gas densities and AGN luminosities in theoretical studies.
For example, \citet{Zubovas2017} showed that positive feedback is more likely detected in regions with high gas density, while negative feedback becomes stronger as the AGN luminosity increases. To investigate the role of gas density in the context of AGN feedback, we estimate the hydrogen column density (N$_{\rm H}$) from the extinction magnitude \citep[e.g.,][]{Guver2009}, which is calculated from the Balmer decrement (see Figure~15). The hydrogen column density in Region A ($\sim 6 \times 10^{21}\ \rm cm^{-2}$) significantly exceeds the critical density required for star formation \citep[$10^{21}\ \rm cm^{-2}$;][]{Clark2014}. On the other hand, in the outer region of the gas outflows in the NW, where we interpret that gas is swept from the northern spiral arm without star formation activity, the detected N$_{\rm H}$ is $\sim 4 \times 10^{21}\ \rm cm^{-2}$, which is lower than that of Region A, but still higher than the critical density, suggesting star formation may be ongoing. Thus, we find no clear evidence that gas density itself determines the nature of feedback (i.e., positive or negative). However, for the outer region of the gas outflows, the hydrogen column density cannot be reliably determined due to the high contamination from AGN emission, as the very high AGN fraction indicates ($\sim80\%$, see Figure~9). Thus, the dependence of AGN feedback on gas density needs to be investigated with further observations. Considering the effect of AGN luminosity, \cite{Zubovas2017} showed that negative feedback is effective when the AGN luminosity is high ($L_{\rm Bol}\gtrsim 4 \times 10^{46}$ erg/s), while only a marginal effect in suppressing star formation was found at low AGN luminosity ($L_{\rm Bol}=1.3-2.6 \times 10^{46}$ erg/s). In the case of NGC 5728, the bolometric luminosity of the AGN is $1.46\times 10^{44}\ \rm erg/s$, estimated from the X-ray luminosity (\citealt{Durre2019}, see also \citealt{Davies2015}), which is far lower than those of the AGNs explored by \cite{Zubovas2017}.
Thus, considering the low luminosity of the AGN in NGC 5728, overall negative feedback is not expected, which is consistent with our observations. Nevertheless, on small scales, AGN-driven outflows may suppress or enhance star formation, depending on physical properties such as the local density, as manifested in NGC 5728, although the overall impact of the AGN outflows on the global star formation may not be significant. \section{Summary}\label{summary} In this work, we present a spatially resolved analysis of the ionized and molecular gas in NGC 5728, focusing on the central 6 $\times$ 6 kpc region, using the VLT/MUSE and ALMA data. We investigate AGN-driven outflows and their connection to star formation. The main results are summarized below.\\ 1. We detect AGN-driven gas outflows out to $\sim$2 kpc from the center. While the inner region of the gas outflows at $<$ 1 kpc has been extensively studied, we newly present the outer region of the gas outflows based on the [O~{\sevenrm III}]\ and H$\alpha$\ gas velocities relative to that of the stars in each spaxel. The inner and outer regions are disconnected by the star formation ring. \\ 2. We find that star formation activity is enhanced in the region where the AGN-driven gas outflows intersect the star formation ring, which can be interpreted as star formation triggered by the AGN outflows. This positive feedback interpretation is supported by the deficit of the CO (2-1) emission in that region compared to the other regions in the ring. \\ 3. The outer region of the gas outflows at $\sim$ 2 kpc scale is detected outside of the spiral arms, suggesting that the AGN outflows remove the inflowing gas on large scales beyond the spiral arms. We interpret this feature as evidence of negative feedback.\\ 4. Based on the three-dimensional kinematic model, combined with the seeing effect, we reproduce the radial trends of the gas velocity and velocity dispersion, constraining the physical parameters of the biconical gas outflows.
The constrained physical parameters further support that the AGN-driven outflows interact with the ISM in the star formation ring. \\ 5. Our results show evidence of both positive and negative feedback, while the overall impact of the AGN outflows on the total SFR is insignificant. In locally confined regions, gas density may play an important role in determining the characteristics of AGN feedback.\\ In this work, we find a complex connection between the AGN-driven outflows and the star formation in NGC 5728, with both enhanced star formation and gas removal. For a better understanding of the role of AGNs in galaxy evolution, future work with a larger sample covering a wide dynamic range in AGN luminosity is required.\\ \acknowledgements We thank the anonymous referee for his/her valuable comments and suggestions that helped to improve the paper. This work was supported by the National Research Foundation of Korea grant funded by the Korea government (No. 2016R1A2B3011457 and No. 2017R1A5A1070354). Based on observations collected at the European Southern Observatory under ESO programme 097.B-0640(A). This paper makes use of the following ALMA data: ADS/JAO.ALMA$\#$2015.1.00086.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada) and NSC and ASIAA (Taiwan) and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ.
\section{Design overview} \subsection{Status} The design of the FCC--hh collider has been presented in a Conceptual Design Report (CDR)~\cite{FCC-hhCDR}, which describes the baseline configuration of the ring (see Section~\ref{sec:intro} and following for a brief review of the baseline design and of the recent developments). Note that the discussion presented in the rest of this paper is essentially based on the material collected in the CDR. Since the publication of the CDR, substantial progress has been made, in particular in the domain of ring placement and layout, and the main results are summarized in Section~\ref{sec:progress}. \subsection{Performance matrix} \subsubsection{Attainable energy} The target energy of \SI{100}{TeV} fully relies on the successful development of \SI{16}{T} superconducting magnets, and any failure to meet the target magnetic field will necessarily impact the final energy of the collider. To mitigate the risk linked to this challenging new technology, R\&D efforts are needed; they are detailed in~\cite{FCC-hhCDR}. In this respect, the experience from the HL--LHC will be important. \subsubsection{Attainable luminosity and luminosity integrals} Possible limiting factors for the collider luminosity seem linked more to the luminosity integrals than to the attainable peak luminosity. In the case of the LHC, the attainable luminosity has surpassed the nominal one thanks to several elements. Higher-brightness beams delivered by the injectors boosted the luminosity, while in the LHC ring the excellent optics control, which includes measurement and correction, together with an optimal use of the available beam aperture, thanks also to the use of tighter collimator settings~\cite{FusterLHCapertureEPJP}, provided the final touch. It is worth stressing that in the LHC, $\beta^\ast=\SI{30}{cm}$, corresponding to the nominal FCC--hh value, has already been achieved and surpassed.
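The role of $\beta^\ast$ can be illustrated with the standard round-beam luminosity scaling $L = f_{\rm rev}\, n_b\, N^2 / (4\pi\varepsilon\beta^\ast)$, neglecting the crossing-angle and hourglass reduction factors. The sketch below uses illustrative FCC--hh-like parameter values (circumference, bunch population, bunch count, and emittance are assumptions, not the official parameter set):

```python
import math

# Round-beam peak luminosity L = f_rev * n_b * N^2 / (4*pi*eps*beta_star),
# neglecting crossing-angle/hourglass reductions. Illustrative values only.
c = 2.998e8                      # speed of light [m/s]
C = 100e3                        # ring circumference ~100 km (assumption)
f_rev = c / C                    # revolution frequency [Hz]
n_b, N = 10400, 1.0e11           # bunches, protons/bunch (assumptions)
eps_n = 2.2e-6                   # normalized emittance [m] (assumption)
gamma = 50e12 / 938.272e6        # Lorentz factor at 50 TeV per beam
eps = eps_n / gamma              # geometric emittance [m]

def lumi(beta_star):
    """Peak luminosity in m^-2 s^-1 for a given beta* [m]."""
    return f_rev * n_b * N**2 / (4 * math.pi * eps * beta_star)

# Squeezing beta* from 1.1 m to 0.3 m raises L by the same factor ~3.7
print(lumi(1.1) / 1e4, lumi(0.3) / 1e4)   # converted to cm^-2 s^-1
```

With these numbers the squeezed case lands in the $10^{35}\,\mathrm{cm^{-2}s^{-1}}$ range before geometric reduction factors, which is the right order of magnitude for the ultimate FCC--hh scenario.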
On the downside, the larger number of quadrupole magnets in the straight sections of the FCC--hh might challenge the correction algorithms devised so far for the LHC, and new approaches should be explored. Furthermore, the actual operational performance with crab cavities is still unknown, but the HL--LHC will provide an ideal test-bed for getting ready for the FCC--hh. In this respect, attaining the FCC--hh target integrated luminosity might be more challenging for several reasons. The injector chain will grow in terms of the number of accelerator rings; the number of magnets (and of active elements in general) in the FCC--hh lattice will also increase with respect to the LHC or HL--LHC; and repair activities will be challenging, given the distances to be covered to access faulty hardware and the large number of components. All these considerations suggest that operational efficiency might be at risk, and that appropriate mitigation measures should be considered (e.g. repair activities carried out by robots). \subsubsection{Injector and driver systems} The baseline option for the FCC--hh ring is to use the LHC injector chain and the LHC as pre-injector. An alternative consists of replacing the LHC in its role of pre-injector with a superconducting ring to be installed in the SPS tunnel. The LHC injector chain is, without doubt, a key element in the success of the LHC. The increase in its complexity, with the addition of the LHC, might impact its reliability. Furthermore, the various accelerators in the injector chain will have rather different ages, with the Proton Synchrotron being the oldest ring (it was commissioned in November 1959). This might have an impact on the overall performance and should be properly addressed, e.g. with an appropriate long-term maintenance programme.
\subsubsection{Facility scale} Figure~\ref{fig:placement} shows some of the FCC-hh implementation variants under study, including the LHC and the SPS accelerators. The large scale of the FCC--hh ring and the related infrastructure implies a number of risk factors stemming from the civil-engineering activities. The tunneling activities (for the ring tunnel as well as the ancillary tunnels) are comparable to those of the recently completed Gotthard Base tunnel (total of about \SI{152}{km}, including two \SI{57}{km} tunnel tubes) in Switzerland. Nevertheless, the handling of excavation materials might pose problems. In this respect, mitigation measures have been put in place in terms of R\&D aimed at finding efficient treatment and reuse of these materials. As far as the infrastructure on the surface is concerned, possible difficulties might arise because of the regional and national frameworks in the two Host States that regulate the acceptance of an infrastructure development project plan. In this respect, close contacts have been established with national regulatory bodies to mitigate this risk. \begin{figure}[htb] \begin{center} \includegraphics[trim=2truemm 0truemm 2truemm 0truemm,width=0.60\hsize,clip=]{FCC-hh_placement.pdf} \end{center} \caption{Some of the FCC-hh implementation variants under study, together with the LHC and the SPS accelerators.} \label{fig:placement} \end{figure} \subsubsection{Power requirements} The FCC--hh collider complex is expected to require about \SI{580}{MW} of electrical power, which could be reduced to about \SI{550}{MW} with further optimization. Of these \SI{550}{MW}, about \SI{70}{MW} are needed for the injector complex and \SI{70}{MW} for cooling, ventilation, and general services. A further \SI{40}{MW} are consumed by the four physics detectors, and \SI{20}{MW} are allocated to the data centers for the four experiments.
Among all the subsystems, the highest demand comes from the FCC--hh cryogenics, which requires about \SI{276}{MW} (about \SI{250}{MW} after further optimization, to be compared with about \SI{40}{MW} for the existing LHC cryoplants, for a ring circumference three times shorter), roughly half of which is needed to extract the $\sim \SI{5}{MW}$ of FCC--hh synchrotron radiation heat load from inside the cold arcs. These power requirements were obtained thanks to a careful optimization of the FCC--hh components, in particular through an optimized beam-screen temperature, energy-efficient designs, and the use of new energy-saving technologies. Note that power transmission losses corresponding to about 5\%-7\% of the peak power should be added to estimate the needed grid power. In addition to the successful efforts in optimizing the power consumption of the FCC--hh, attempts to decrease the needed power further are planned. These studies will be essential to improve the energy efficiency of the collider and thus enhance the public acceptance of this large-scale facility. The three-pronged strategy, put in place since the conceptual design study phase, envisages reducing energy consumption, increasing the efficiency of energy use, and recovering and reusing energy for other purposes. This strategy will be pursued further in the next phase of the FCC--hh study. \subsection{Challenges} Although the FCC--hh clearly poses a number of possible obstacles in several areas (beam physics and technology), it builds on the experience of previous operational colliders, such as the LHC and HL--LHC, which ensures the possibility of developing sound mitigation measures for the various challenges. As an example, it is worth mentioning that the machine design heavily relies on that of the LHC and HL--LHC, which instills confidence in the projected performance.
The unprecedented stored beam energy of \SI{8.3}{GJ} represents a challenge for all systems devoted to protecting the hardware integrity of the FCC--hh ring, such as the collimation and dump systems. Such a challenge translates into beam dynamics challenges, e.g. the optics design of the straight sections housing the collimation and dump systems, which should satisfy multiple constraints, such as phase-advance relations, beam aperture constraints, and beam impedance, just to mention a few. The requirements also bring technological challenges in several areas, e.g. in terms of the materials selected for the collimator jaws and beam dumps, but also for the hardware related to the kickers that are used to dump the beams and to dilute them before they interact with the dump material. The field quality of the main magnets at injection energy is also an aspect that deserves particular care, as an insufficient field quality might lead primarily to beam loss and possibly also to emittance growth, with a direct impact on machine performance. The technology upon which the FCC--hh design relies is that of high-field Nb$_3$Sn superconducting magnets. Multiple challenges can be identified, linked to different aspects of this hardware. For instance, one challenge is the development of the Nb$_3$Sn wire to sustain the high critical currents needed to achieve the \SI{16}{T} magnetic field. Such a goal should be achieved with the constraint that the wire be economically affordable, given the large-scale production of magnets needed for the FCC--hh. An appropriate magnet design is another challenge, as this goal should be achieved by fulfilling several criteria, such as the minimization of the amount of superconductor and the field quality at injection energy. The complexity of this novel hadron collider is such that several other technologies are key to implementing the FCC--hh.
The most important ones are an efficient and cost-effective cryogenic refrigeration, superconducting septum magnets, and solid-state generators. The best candidate for improved (with respect to the LHC and HL--LHC choice) cryogenic refrigeration is based on a mixture of neon and helium. Superconducting septum magnets are essential for a compact and efficient design of the beam-dump system. Modular, scalable, fast, and affordable high-power switching systems are key components of beam transfer systems. Solid-state devices, currently not commercially available, offer high-performance capabilities, which are needed for efficient FCC--hh operation. These technologies, which are connected with an overall increase of the operational efficiency of accelerator systems, naturally lead to the consideration of environmental aspects linked to the FCC--hh. Such a large-scale facility has an unavoidable impact on the environment due to the civil engineering works, radioactive waste, and energy consumption. Concerning the first two aspects, CERN has long experience from LEP and the LHC. Although the FCC--hh scale exceeds by far the LEP/LHC scale, since the beginning of the studies, respect for the environment and the minimization of the impact on it have been the main guidelines. This criterion is applied not only to the underground infrastructure, but also to the surface infrastructure, given its direct societal impact. Radioactive waste management is a delicate aspect, but all means have been put in place to integrate it from the beginning into the global implementation project. Concerning energy efficiency, it is clear that this aspect is new and ranks high in public opinion; for this reason, several options have been studied and are actively pursued to reduce energy consumption, e.g.\ via a new cryogenic system, as well as to recover energy whenever possible, as in the case of waste-heat recovery.
\section{Technology requirements} The technological choices presented in the FCC--hh CDR represent feasible options for the implementation of the hadron collider. The time needed to move from the CDR stage to a TDR stage allows for carrying out R\&D studies to pursue the detailed feasibility assessment of the various technological items that are comprised in the FCC--hh baseline. A set of so-called strategic R\&D topics has been identified, which are essential prerequisites for the preparation of a sound technical design. It is clear that, in addition to the several technological challenges, a crucial aspect to consider and to assess carefully is the large-scale production of the \SI{16}{T} magnets. It is worth stressing that a detailed analysis of the possibility to establish partnerships has been carried out, and a series of universities and research institutes have been identified as possible partners. Furthermore, whenever possible and appropriate, industrial partners have also been identified. The list of strategic R\&D topics is as follows: \begin{itemize} \item 16 Tesla superconducting high-field dual-aperture accelerator magnet. \item Cost-effective and high-performance Nb$_3$Sn superconducting wire at industrial scale. \item High-temperature superconductors. The integrated project time line allows for the exploration and development of high-temperature superconductor (HTS) magnet technology, and of possible hybrid magnets, enabling improved performance, i.e.\ higher fields or higher operating temperatures. HTS options might be more rewarding than Nb$_3$Sn technology, as they might allow for higher fields, better performance, reduced cost, or a higher operating temperature; in this last respect, HTS could be a game changer. \item Energy-efficient, large-scale cryogenic refrigeration plants for temperatures down to \SI{40}{K}. \item Invar-based cryogenic distribution line. \item Superconducting septum magnet (to be merged with high-power switching elements).
\item High-speed, high-power switching system for beam transfer elements. \item Decentralized, high-capacity energy storage and release. \item Advanced particle detector technologies. \item Efficient and cost-effective DC power distribution. \item Efficient treatment and use of excavation material. \end{itemize} \subsection{High-Field Magnet R\&D} The primary technology of the future circular hadron collider, FCC--hh, is high-field magnets: both high-field dipoles and quadrupoles~\cite{FCC-hhCDR} are required, or, possibly, combined-function magnets~\cite{our_paper6}. The accelerator magnets of the present LHC, the Tevatron, RHIC, and HERA were built with wires based on Nb-Ti superconductor. However, Nb-Ti magnets are limited to maximum fields of about \SI{8}{T}, as operated at the LHC. The HL--LHC will, for the first time in a collider, deploy some tens of dipole and quadrupole magnets with a peak field of \SI{11}{}--\SI{12}{T}, based on a new high-field magnet technology using a Nb$_3$Sn superconductor. The Nb$_3$Sn superconductor holds the promise to approximately double the magnetic field, from $\sim \SI{8}{T}$ at the LHC to \SI{16}{T} for the FCC--hh. Recently, several important milestones were accomplished in the development of high-field Nb$_3$Sn magnets. At CERN, a block-coil magnet, FRESCA2, with a \SI{100}{mm} bore, achieved a world-record field of \SI{14.6}{T} at \SI{1.9}{K}~\cite{Willering:2019hhu}. In the US, a Nb$_3$Sn cosine-theta accelerator dipole short-model demonstrator with a \SI{60}{mm} aperture~\cite{zlobin-napac19} reached a similar field of \SI{14.5}{T} at \SI{1.9}{K}~\cite{US-MDP:2021weg}. Forces acting on the magnet coils greatly increase with the strength of the magnetic field, while, at the same time, most higher-field conductors, such as the brittle Nb$_3$Sn, tend to be more sensitive to pressure. Therefore, force management becomes a key element in the design of future high-field magnets.
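The \SI{16}{T} target itself can be traced back to the beam rigidity. As a rough, hedged cross-check (assuming an effective bending radius of about \SI{10.4}{km}, as implied by the CDR circumference of roughly \SI{98}{km} with arcs whose dipole filling factor is about $0.8$), the field required to bend a \SI{50}{TeV} proton beam is
\begin{equation*}
  B \simeq \frac{p\,[\mathrm{GeV}/c]}{0.2998\,\rho\,[\mathrm{m}]}
    \approx \frac{5\times 10^{4}}{0.2998\times 1.04\times 10^{4}}\,\si{T}
    \approx \SI{16}{T}.
\end{equation*}
A larger effective bending radius, i.e.\ longer arcs or a higher dipole filling factor, would proportionally reduce the required field.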
Besides the development of optimized magnet design concepts, such as canted cosine-theta dipoles~\cite{caspi2013canted}, higher fields can be facilitated by a higher-quality conductor. A Nb$_3$Sn wire development programme was set up for the FCC~\cite{8629920}. For Nb–Ta–Zr alloys, it could be demonstrated that an internal oxidation of Zr leads to the refinement of Nb$_3$Sn grains and, thereby, to an increase of the critical current density~\cite{buta}. The phase evolution of Nb$_3$Sn wire during heat treatment is also under study, as part of the FCC conductor development programme in collaboration with TVEL, JASTEC, and KEK~\cite{9369038}. Advanced Nb$_3$Sn wires including Artificial Pinning Centers (APCs), developed by two separate teams (FNAL, Hyper Tech Research Inc., and Ohio State; and NHMFL, FAMU/FSU), achieved the target critical current density for the FCC of \SI{1500}{A \per mm^2} at \SI{16}{T}~\cite{uswire1, uswire2}, which is 50\% higher than for the HL--LHC superconductor. The APCs decrease the magnetization heat during field ramps, improve the magnet field quality at injection, and reduce the probability of flux jumps~\cite{xu2014refinement}. In addition to Nb$_3$Sn wires, high-temperature superconductors (HTS) are also of interest, since they might allow for higher fields, operation at higher temperatures, and, ultimately, perhaps even lower cost. In this context, the FCC conductor programme has been exploring the potential of ReBCO (rare-earth barium copper oxide) coated conductors (CCs). In particular, the critical surfaces for the current density, $J_\mathrm{c} (T, B, \theta)$, of coated conductors from six different manufacturers, American Superconductor Co.~(US), Bruker HTS GmbH~(Germany), Fujikura Ltd~(Japan), SuNAM Co. Ltd~(Korea), SuperOx ZAO~(Russia), and SuperPower Inc.~(US), have been studied~\cite{senatore2015}. Outside the accelerator field, HTS magnet technology could play an important role for fusion research.
A number of companies are developing HTS magnets in view of fusion applications. One of these companies is Commonwealth Fusion Systems, which, in partnership with MIT's Plasma Science and Fusion Center, is designing SPARC, a compact net fusion energy device~\cite{sparc}. The SPARC magnets are based on second-generation ReBCO conductors. Recently, the SPARC team successfully demonstrated a coil with a \SI{20}{T} field~\cite{mitnews}. An interesting view on HTS prospects is presented in a Snowmass 2020 Letter of Interest~\cite{snowmassmatias}, according to which the actual material and process costs of HTS tapes are a small fraction of their current commercial value, and there is a historical link between manufactured volume and price~\cite{mikek}. \section{Staging options and upgrades} Considerations on possible staging options for the FCC--hh can be made on the basis of the experience of the LHC and HL--LHC. Various types of energy upgrade, from a limited one (of about 7\%, based on the assumed engineering margins of the various systems and, in particular, of the main dipoles) to a major one (of about 93\%, the so-called HE--LHC~\cite{HE-LHCCDR}, based on FCC--hh-type main dipoles to be installed in the LHC tunnel), have been considered, but none has been retained as an efficient upgrade path. On the other hand, an upgrade of the luminosity has been approved as the route to improve the LHC performance within the LHC Luminosity Upgrade Project~\cite{BejarAlonso:2749422}. It is worth noting that the LHC luminosity upgrade goal is achieved thanks to changes in the LHC ring, leading to a reduction of $\beta^\ast$, but also to the upgrade of the injector chain to generate brighter beams and higher currents~\cite{Damerau:1976692}. We may, therefore, speculate that an energy upgrade is not a realistic option for FCC--hh, unless even higher-field magnets, e.g.~based on HTS, became available.
Instead, a luminosity upgrade, following the two FCC--hh stages already foreseen, with an ideal delivery of about \SI{2}{} or \SI{8}{\femto \barn^{-1}} per day, respectively, could be considered an interesting option. However, further reducing $\beta^\ast$ (the nominal value in the second stage of FCC--hh is \SI{30}{cm}) does not much increase the integrated luminosity without a higher proton intensity. Already, as designed, the FCC--hh machine is cycling for about half of the time (with fairly demanding assumptions on the ramp speed of the injectors, either a slightly modified LHC or a new superconducting SPS), and the protons are burnt off quickly in collision (see Fig.~4 in~\cite{PhysRevSTAB.18.101002} and the associated equations). Burning off the protons even more quickly cannot much raise the integrated luminosity, but will mostly increase the event pile-up. On the other hand, the FCC--hh baseline only considers rather moderate intensities from the injector of $\sim 10^{11}$ protons per bunch and a \SI{0.5}{A} beam current, which are at least a factor $\sim 2$ below the capabilities of the upgraded LIU/HL--LHC complex. The brightness of the injected beam is not a critical issue for FCC--hh, since the radiation damping will anyhow shrink the beam in the collider. The maximum integrated luminosity could be attained by exploiting the maximum proton rate, bunch intensity, and beam current available from the CERN LHC complex. However, the beam current in the FCC--hh rings is limited due to the SR heat load and the associated cryogenics power requirements. With HTS magnets operating at an elevated temperature, these cryogenics needs would be relaxed, and a higher beam current might become possible.
Another possible approach would be not to cycle the FCC--hh collider, but to run it at constant magnetic fields and approximately constant beam current, using a top-up injection scheme, which was successfully implemented for the two B-factories, is in routine use at the present SuperKEKB, and forms a key ingredient of the future FCC--ee lepton collider. For the case of FCC--hh, top-up injection requires the installation of a fast-ramping \SI{50}{TeV} full-energy injector, which might become available thanks to advancing magnet technology. To facilitate the design, the beam current in the top-up injector could be restricted to, e.g., 10\% of the collider beam current. Such a top-up injector could increase the integrated luminosity of the FCC--hh by a significant factor. Lepton colliders utilize radiation damping to merge injected particles with the stored beam. If the radiation damping in FCC--hh proved too slow for this purpose, the merging of injected and stored beams could be accomplished by other methods, e.g., by injection into nonlinear resonance islands, which are then collapsed~\cite{PhysRevSTAB.10.034001,PhysRevSTAB.18.074001}, or, alternatively, by innovative damping of the injected beam, e.g.~through optical stochastic cooling or coherent electron cooling. In short, the use of HTS magnets allowing for a higher beam current or the installation of a novel fast-cycling full-energy top-up injector would be two plausible paths to increase the integrated luminosity of FCC--hh by a significant factor. It is also worth mentioning that heavy-ion collisions are part of the FCC baseline, although they formally represent an extension with respect to the proton-proton programme. However, lepton-hadron collisions, the so-called FCC--eh, are not part of the baseline and would be an appealing upgrade.
Other extensions of the FCC--hh scope could be collisions with isoscalar light-ion beams~\cite{KRASNY2020103792}, the realization of a Gamma Factory~\cite{krasny2015gamma,Krasny:2690736,krasny2020gamma}, and becoming an ingredient of a high-energy muon collider~\cite{MAP,IMC,JINSTMC,Antonelli:2015nla,Zimmermann:2018wfu,Zimmermann-ipac22}. \section{Synergies with other concepts and/or existing facilities} Clearly, a natural synergy exists between FCC--ee and FCC--hh. Moreover, the FCC--hh can profit from the experience of the LHC/HL--LHC in several aspects. The HL--LHC bases its luminosity increase upon the use of Nb$_3$Sn quadrupoles for the final focus. Hence, the experience gained in the design, prototyping, construction, testing, and operation of the new triplets will be essential for FCC--hh. A similar situation occurs in the domain of physics detectors, where the planned upgrade to cope with the HL--LHC performance will bring the detectors into new territory, thus approaching that of the FCC--hh. Hence, also in this domain, FCC--hh can build on the experience of the HL--LHC. It is also evident that a strong synergy is present between FCC--hh and HL--LHC at the level of beam dynamics, due to the similarity of some regimes. It is possible to identify, as areas with similar challenges, optics control in the ring, in general, and in the experimental insertions, in particular, emittance preservation of high-brightness beams, electron-cloud effects, and beam instabilities, as well as, e.g.\ machine operation with crab cavities. Finally, it will be easy to find synergies in the domain of energy efficiency and environmental impact with other projects, as these two aspects are gaining so much focus that they will become essential items for any large-scale facility for physics research.
\section{Overview of FCC--hh as presented in the 2019 CDR} \label{sec:intro} The discovery of the Higgs boson, announced exactly ten years ago, brought to completion the search for the fundamental constituents of matter and interactions that represent the so-called \emph{Standard Model} (SM). Several experimental observations require an extension of the Standard Model. For instance, explanations are needed for the observed abundance of matter over antimatter, the striking evidence for dark matter, and the non-zero neutrino masses. Therefore, a novel research infrastructure, based on a highest-energy hadron collider, FCC--hh, with a center-of-mass collision energy of \SI{100}{TeV} and an integrated luminosity at least a factor of five larger than that of the HL--LHC~\cite{HL-LHCNature2019,BejarAlonso:2749422}, is proposed to address the aforementioned aspects~\cite{FCC-hhCDR,FCC-hhNature2019,FCC-hhNature2020}. The current energy frontier of collider experiments will be extended by almost an order of magnitude, and the mass reach for direct discovery will attain several tens of TeV. Under these conditions, for instance, the production of new particles, whose existence could have emerged from precision measurements during the preceding e$^+$e$^-$ collider phase (FCC--ee), would become possible. An essential task of this collider will be the accurate measurement of the Higgs self-coupling, as well as the exploration of the dynamics of electroweak symmetry breaking at the TeV scale, to elucidate the nature of the electroweak phase transition. This unique particle collider infrastructure, FCC--hh, will serve the world-wide physics community for about $25$~years. However, it is worth stressing that, in combination with the lepton collider~\cite{FCC-eeCDR} as the initial stage, the FCC integrated project will provide a research tool until the end of the 21st century.
The FCC construction project will be carried out in close collaboration with national institutes, laboratories, and universities world-wide, with a strong participation of industrial partners. It is worth mentioning that the coordinated preparatory effort is based on a core of an ever-growing consortium of already more than 145 institutes world-wide. \subsection{Accelerator layout} The FCC--hh~\cite{FCC-hhCDR,FCC-hhNature2019,FCC-hhNature2020} is designed to provide proton–proton collisions with a center-of-mass energy of \SI{100}{TeV} and an integrated luminosity of $\approx$ \SI{20}{\per \atto \barn} in each of the two primary experiments for $25$~years of operation. The FCC--hh offers a very broad palette of collision types, as it is envisaged to collide ions with protons and ions with ions. The ring design also allows one interaction point to be upgraded to electron–proton and electron–ion collisions. In this case, an additional recirculating, energy-recovery linac will provide the electron beam that collides with one circulating proton or ion beam. The other experiments can operate concurrently with hadron collisions. The FCC--hh will use the existing CERN accelerator complex as the injector facility. The accelerator chain, consisting of CERN’s Linac4, PSB, PS, SPS, and LHC, could deliver beams at \SI{3.3}{TeV} to the FCC--hh, thanks to transfer lines using \SI{7}{T} superconducting magnets that connect the LHC to the FCC--hh. This choice also permits the continuation of CERN’s rich and diverse fixed-target physics programme in parallel with FCC--hh operations. Limited modifications of the LHC should be implemented; in particular, the ramp speed can be increased to optimize the filling time of the FCC--hh. Furthermore, reliability and availability studies have confirmed that operation can be optimized such that the FCC--hh collider can achieve its performance goals. However, the power consumption of the aging LHC cryogenic system is a concern.
Note that the required 80\%–90\% availability of the injector chain could best be achieved with a new high-energy booster. As an alternative, direct injection from a new superconducting synchrotron at \SI{1.3}{TeV} that would replace the SPS is also being considered. In this case, simpler normal-conducting transfer lines with magnets operating at \SI{1.8}{T} are sufficient. For this scenario, more studies on beam stability in the collider at injection are required. Key parameters of the collider presented in the CDR are given in Table~\ref{tab:param}. In the CDR, the circumference of FCC--hh was \SI{97.75}{km}. Recently a placement optimization has led to a ``lowest-risk'' layout with a circumference of \SI{91.17}{km} (also see Section \ref{sec:progress} and Fig.~\ref{fig:FCC-hh-current}), comprising four short straight sections of \SI{1.4}{km} length for the experimental insertions, and four longer straight sections of about \SI{2.16}{km} each, that would house, e.g.~the radiofrequency (RF), collimation, and beam extraction systems. Two high-luminosity experiments are located in the opposite insertions PA and PG, which ensures the highest luminosity, reduces unwanted beam-beam effects, and is independent of the beam-filling pattern. The main experiments are located in \SI{66}{m} long experimental caverns, sufficient for the detector that has been studied and ensuring that the final focus system can be integrated into the available length of the insertion. Two additional, lower luminosity experiments are located in the other two experimental insertions. 
\begin{table}[htb] \begin{center} \caption{Key FCC--hh baseline parameters from the 2019 CDR \cite{FCC-hhCDR} compared to LHC and HL--LHC parameters.} \begin{tabular}{|l|r|r|r|r|} \hline & \multicolumn{1}{c|}{LHC} & \multicolumn{1}{c|}{HL--LHC} & \multicolumn{2}{c|}{FCC--hh} \\ & & & Initial & Nominal \\ \hline \multicolumn{5}{|l|}{Physics performance and beam parameters} \\ \hline Peak luminosity\tablefootnote{For the nominal parameters, the peak luminosity is reached during the run.} ($10^{34} \SI{}{cm^{-2} s^{-1}}$) & $1.0$ & $5.0$ & $5.0$ & $< 30.0$ \\ Optimum average integrated & $0.47$ & $2.8$ & $2.2$ & $8$ \\ luminosity/day (\SI{}{\femto \barn^{-1}}) & & & & \\ Assumed turnaround time (\SI{}{h}) & & & $5$ & $4$ \\ Target turnaround time (\SI{}{h}) & & & $2$ & $2$ \\ Peak number of inelastic events/crossing & $27$ & $135$\tablefootnote{The baseline assumes leveled luminosity.} & $171$ & $1026$ \\ Total/inelastic cross section & \multicolumn{2}{c|}{$111/85$} & \multicolumn{2}{c|}{$153/108$} \\ $\sigma$ proton (\SI{}{\milli \barn}) & \multicolumn{2}{c|}{} & \multicolumn{2}{c|}{} \\ Luminous region RMS length (\SI{}{cm}) & \multicolumn{2}{c|}{} & \multicolumn{2}{c|}{$5.7$} \\ Distance IP to first quadrupole $L^\ast$ (\SI{}{m}) & \multicolumn{2}{c|}{$23$} & \multicolumn{2}{c|}{$40$} \\ \hline \multicolumn{5}{|l|}{Beam parameters} \\ \hline Number of bunches $n$ & \multicolumn{2}{c|}{$2808$} & \multicolumn{2}{c|}{$10400$} \\ Bunch spacing (\SI{}{ns}) & \multicolumn{2}{c|}{$25$} & \multicolumn{2}{c|}{$25$} \\ Bunch population $N$ ($10^{11}$) & $1.15$ & $2.2$ & \multicolumn{2}{c|}{1.0} \\ Nominal transverse normalised & $3.75$ & $2.5$ & \multicolumn{2}{c|}{$2.2$} \\ emittance (\SI{}{\micro m}) & & & \multicolumn{2}{c|}{} \\ Number of IPs contributing to $\Delta Q$ & $3$ & $2$ & $2+2$ & $2$ \\ Maximum total beam-beam tune shift $\Delta Q$ & $0.01$ & $0.015$ & $0.011$ & $0.03$ \\ Beam current (\SI{}{A}) & $0.58$ & $1.12$ & \multicolumn{2}{c|}{0.5} \\ RMS bunch 
length\tablefootnote{The HL--LHC assumes a different longitudinal distribution; the equivalent Gaussian RMS is \SI{9}{cm}.} (\SI{}{cm}) & \multicolumn{2}{c|}{$7.55$} & \multicolumn{2}{c|}{$8$} \\ $\beta^\ast$ (\SI{}{m}) & $0.55$ & $0.15$ & $1.1$ & $0.3$ \\ RMS IP spot size (\SI{}{\micro m}) & $16.7$ & $7.1$ & $6.8$ & $3.5$ \\ Full crossing angle (\SI{}{\micro rad}) & $285$ & $590$ & $104$ & $200$\tablefootnote{The luminosity reduction due to the crossing angle will be compensated using the crab crossing scheme.} \\ \hline \end{tabular} \end{center} \label{tab:param} \end{table} The regular lattice in the arc consists of \ang{90} FODO cells with a length of about \SI{213}{m}, six \SI{14}{m}-long dipoles between quadrupoles, and a dipole filling factor of about $0.8$. Therefore, a dipole field around \SI{16}{T} is required to maintain the nominal beams on the circular orbit. The dipoles are based on Nb$_3$Sn, are operated at a temperature of \SI{2}{K}, and are a key cost item of the collider. Efforts devoted to increasing the current density in the conductors to \SI{1500}{A \per mm^2} at \SI{16}{T} and \SI{4.2}{K} were successful~\cite{uswire1, uswire2}. Several optimized dipole designs have been developed in the framework of the EuroCirCol H2020 EC-funded project. The cosine-theta design has been selected as the baseline, because it provided a beneficial reduction of the amount of superconductor needed for the magnet coils. Several collaboration agreements are in place with organisations such as the French CEA, the Italian INFN, the Spanish CIEMAT, and the Swiss PSI, to build short model magnets. It is worth mentioning that a US DOE Magnet Development Programme is actively working to demonstrate a \SI{15}{T} superconducting accelerator magnet and has reached \SI{14.5}{T}. As the current plans are that FCC--hh is implemented following FCC--ee in the same underground infrastructure, the time scale for design and R\&D for FCC--hh is of the order of 30 years.
This additional time will be used to develop alternative technologies, such as magnets based on high-temperature superconductors, with a potentially significant impact on the collider parameters, relaxed infrastructure requirements (cryogenics system), and increased energy efficiency (temperature of magnets and beam screen). \subsection{Luminosity performance} The initial parameters, with a maximum luminosity of $5\times 10^{34}$ \SI{}{cm^{-2} s^{-1}}, are planned to be reached in the first years. Then, a luminosity ramp-up will be applied, to reach the nominal parameters with a luminosity of up to $3 \times 10^{35}$ \SI{}{cm^{-2} s^{-1}}. Correspondingly, the integrated luminosity per day will increase from \SI{2}{\femto \barn^{-1}} to \SI{8}{\femto \barn^{-1}}. A luminosity of $2 \times 10^{34}$ \SI{}{cm^{-2} s^{-1}} can be achieved at the two additional experimental insertions, although further studies are needed to confirm this. High-brightness and high-current beams, with a quality comparable to that of the beams of the HL--LHC, combined with a small $\beta^\ast$ at the collision points, ensure the high luminosity. The parasitic beam-beam interactions are controlled by introducing a finite crossing angle, whose induced luminosity reduction is compensated by means of crab cavities. Further improvement of the machine performance might be achieved by using electron lenses and current-carrying wire compensators. The fast burn-off under nominal conditions prevents the beams from being used for collisions for more than \SI{3.5}{h}. Hence, the turn-around time, i.e.\ the time from one luminosity run to the next one, is a critical parameter to achieve the target integrated luminosity. In theory, a time of about \SI{2}{h} is within reach, but, to include a sufficient margin, turn-around times of \SI{5}{h} and \SI{4}{h} are assumed for initial and nominal parameters, respectively.
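The fast burn-off can be illustrated with a hedged, back-of-the-envelope estimate based on the nominal parameters of Table~\ref{tab:param}: with a total beam population $N_\mathrm{tot} = n\,N \approx 10400\times 10^{11} \approx 1.04\times 10^{15}$ protons, $n_\mathrm{IP}=2$ main interaction points, an inelastic cross section $\sigma_\mathrm{inel}\approx\SI{108}{\milli \barn}$, and a peak luminosity $L\approx 3\times 10^{35}$ \SI{}{cm^{-2} s^{-1}}, the initial burn-off lifetime is
\begin{equation*}
  \tau_\mathrm{burn} \simeq \frac{N_\mathrm{tot}}{n_\mathrm{IP}\,\sigma_\mathrm{inel}\,L}
  \approx \frac{1.04\times 10^{15}}{2\times 1.08\times 10^{-25}\,\mathrm{cm}^{2}\times 3\times 10^{35}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}}
  \approx \SI{1.6e4}{s} \approx \SI{4.5}{h},
\end{equation*}
consistent with luminosity runs of no more than about \SI{3.5}{h}.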
Note that an availability of 70\% at flat top for physics operation is assumed for the estimate of the overall integrated luminosity. The collider performance can be affected by various beam dynamics effects that can lead to the development of beam instabilities and beam-quality loss. To counter these effects, a combination of fast transverse feedback and octupoles is used to stabilize the beam against parasitic electromagnetic interaction with the beamline components. Electron-cloud build-up, which could render the beam unstable, is suppressed by an appropriate hardware design. The impact of main-magnet field imperfections on the beam is mitigated by a high-quality magnet design and the use of corrector magnets. \subsection{Technical systems} Many technical systems and operational concepts for FCC--hh can be scaled up from HL--LHC or can be based on technology demonstrations carried out in the frame of ongoing R\&D projects. Particular technological challenges arise from the higher total energy in the beam (20 times that of the LHC), the much increased collision debris in the experiments (40 times that of the HL--LHC), and far higher levels of synchrotron radiation in the arcs (200 times that of the LHC). The high luminosity and beam energy will produce collision debris with a power of up to \SI{0.5}{MW} in the main experiments, with a significant fraction of this lost in the ring close to the experiment. A sophisticated shielding system, similar to that of the HL--LHC~\cite{BejarAlonso:2749422}, protects the final focusing triplet, avoids quenches, and reduces the radiation dose. The current radiation limit of \SI{30}{MGy} for the magnets, imposed by the resin used, will be reached for an integrated luminosity of \SI{13}{\atto \barn^{-1}}, but it is projected that an improvement of both the shielding and the radiation hardness of the magnets is possible. Hence, it is likely that the magnets will not have to be replaced during the entire lifetime of the project.
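Two of these figures can be cross-checked with simple arithmetic from the parameters of Table~\ref{tab:param} (a hedged, order-of-magnitude exercise): the stored energy per beam and the collision-debris power in a main experiment are approximately
\begin{align*}
  E_\mathrm{stored} &= n\,N\,E_\mathrm{beam} \approx 10400 \times 10^{11} \times \SI{50}{TeV} \approx \SI{8.3}{GJ},\\
  P_\mathrm{debris} &\simeq L\,\sigma_\mathrm{inel}\,\sqrt{s} \approx 3\times 10^{35}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1} \times \SI{108}{\milli \barn} \times \SI{100}{TeV} \approx \SI{0.5}{MW},
\end{align*}
in agreement with the values quoted in the text; the stored energy per beam is indeed roughly 20 times the LHC value of a few hundred megajoules.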
The robust collimation and beam extraction system protects the machine from the energy stored in the beam. The design of the collimation system is based on the LHC system~\cite{LHCDR,BejarAlonso:2749422}, albeit with a number of improvements. Additional protection has been added to mitigate losses in the arcs that would otherwise quench magnets. Improved conceptual designs of collimators and dogleg dipoles have been developed to reduce the beam-induced stress to acceptable levels. Further R\&D should aim at gaining margins in the design to reach comfortable levels. The extraction system uses a segmented, dual-plane dilution kicker system to distribute the bunches in a multi-branch spiral on the absorber block. Novel superconducting septa capable of deflecting the high-energy beams are currently being developed. The system design is fault tolerant, and the most critical failure mode, the erratic firing of a single extraction kicker element, has limited impact thanks to the high granularity of the system. Investigations of suitable absorber materials, including 3D carbon composites and carbon foams, are ongoing in the frame of the HL--LHC project. The cryogenic system must compensate the continuous heat loads in the arcs of \SI{1.4}{W \per m} at a temperature below \SI{2}{K}, and the \SI{30}{W \per m \per aperture} due to synchrotron radiation at a temperature of \SI{50}{K}, as well as absorb the transient loads from magnet ramping. The system must also be able to fill and cool down the cold mass of the machine in less than 20 days, while avoiding thermal gradients higher than \SI{50}{K} in the cryomagnet structure. Furthermore, it must also cope with quenches of the superconducting magnets and be capable of a fast recovery from such situations, so as to keep the operational availability of the collider at an adequate level. The number of active cryogenic components distributed around the ring is minimized for reasons of simplicity, reliability, and maintenance.
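These heat loads translate into large electrical powers at the refrigeration plants. As a hedged, idealized estimate, assuming an ambient temperature of \SI{300}{K} and a plant efficiency of $\eta\approx 0.3$ with respect to the Carnot cycle, the electrical power needed per watt of heat extracted at a cold temperature $T_\mathrm{c}$ is
\begin{equation*}
  \frac{W_\mathrm{el}}{\dot{Q}} \simeq \frac{1}{\eta}\,\frac{T_\mathrm{amb}-T_\mathrm{c}}{T_\mathrm{c}}
  \approx
  \begin{cases}
    \dfrac{1}{0.3}\cdot\dfrac{300-2}{2} \approx 500 & \text{at } T_\mathrm{c}=\SI{2}{K},\\[1ex]
    \dfrac{1}{0.3}\cdot\dfrac{300-50}{50} \approx 17 & \text{at } T_\mathrm{c}=\SI{50}{K},
  \end{cases}
\end{equation*}
which illustrates why the beam-screen temperature and the refrigeration efficiency are decisive for the cryogenic power budget.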
Note that current helium cryogenic refrigeration only reaches efficiencies of about 30\% with respect to an ideal Carnot cycle, which leads to a high electrical power consumption. For this reason, part of the FCC study is to perform R\&D on novel refrigeration down to \SI{40}{K} based on a neon-helium gas mixture, with the potential to reach efficiencies higher than 40\%, thus bringing a reduction of the electrical energy consumption of the cryogenics system by 20\%. The cryogenic beam vacuum system ensures an excellent vacuum to limit beam-gas scattering and protects the magnets from the synchrotron radiation of the high-energy beam, while also efficiently removing the heat. It also avoids beam instabilities due to parasitic beam-surface interactions and electron-cloud effects. Note that the LHC vacuum system design is not suitable for FCC--hh; hence, a novel design has been developed in the scope of the EuroCirCol H2020-funded project. It is as compact as possible to minimize the magnet aperture and, consequently, the magnet cost. The beam screen features an antechamber and is copper coated to limit the parasitic interaction with the beam; the shape also reduces the seeding of the electron cloud by backscattered photons, and additional carbon coating or laser treatment prevents the build-up. This novel system is operated at \SI{50}{K}, and a prototype has been validated experimentally in the KARA synchrotron radiation facility at KIT (Germany). The RF system is similar to that of the LHC, with an RF frequency of \SI{400}{MHz}, although it provides a higher maximum total voltage of \SI{48}{MV}. The current design uses 24 single-cell cavities. To adjust the bunch length in the presence of longitudinal damping by synchrotron radiation, controlled longitudinal emittance blow-up by band-limited RF phase noise is implemented. \subsection{Ion operation} A first parameter set for ion operation has been developed based on the current injector performance.
If two experiments operate simultaneously for 30 days, one can expect an integrated luminosity in each of them of \SI{6}{\pico \barn^{-1}} and \SI{18}{\pico \barn^{-1}} for proton-lead operation with initial and nominal parameters, respectively. For lead--lead operation, \SI{23}{\nano \barn^{-1}} and \SI{65}{\nano \barn^{-1}} could be expected, although more detailed studies are in progress to address the key issues in ion production and collimation and to review the luminosity predictions. \section{Civil engineering} As stated above, the FCC--hh collider will be installed in a quasi-circular tunnel with an inner diameter of at least \SI{5.5}{m} and a circumference of \SI{91.17}{km}, composed of arc segments interleaved with straight sections. This internal diameter is required to house all necessary equipment for the machine, while providing sufficient space for transport and ensuring compatibility between FCC--hh and FCC--ee requirements. Figure~\ref{fig:tunnel} shows the cross section of the tunnel in a typical arc region, including several of the required ancillary systems and services. Furthermore, about \SI{8}{km} of bypass tunnels, about 18 shafts, 10 large caverns and 8 new surface sites are part of the infrastructure to be built. \begin{figure}[htb] \begin{center} \includegraphics[trim=2truemm 0truemm 2truemm 0truemm,width=0.60\hsize,clip=]{FCC-hh_tunnel.pdf} \end{center} \caption{Cross section of the FCC--hh tunnel in an arc (from~\cite{FCC-hhCDR}). The gray equipment on the left represents the cryogenic distribution line. A \SI{16}{T} superconducting magnet is shown in the middle, mounted on a red support element. 
An orange transport vehicle with another superconducting magnet is also shown, in the transport passage.} \label{fig:tunnel} \end{figure} The underground structures should be located as much as possible in the sedimentary rock of the Geneva basin, known as Molasse (which provides good conditions for tunneling), and should avoid the limestone of the nearby Jura. Moreover, the depth of the tunnel and shafts should be minimized to control the overburden pressure on the underground structures and to limit the length of service infrastructure. These requirements, along with the constraint imposed by the connection to the existing accelerator chain through new beam transfer lines, led to a clear definition of the study boundary, which must be respected by all possible tunnel layouts considered. A slope of 0.2\% in a single plane will be used for the tunnel to optimize the geology intersected by the tunnel and the depth of the shafts, as well as to implement a gravity drainage system. The majority of the machine tunnel will be constructed using tunnel boring machines, while the sector passing through limestone will be mined. The CDR study was based on geological data from previous projects and data available from national services; on this basis, the civil engineering project is considered feasible, both in terms of technology and of project risk control. It is also clear that dedicated ground and site investigations are required during the early stage of the preparatory phase to confirm these findings, to provide a comprehensive technical basis for an optimized placement, and as preparation for project planning and implementation processes. It is worth mentioning that for the access points and their associated surface structures, priority has been given to the identification of possible locations that are feasible from socio-urbanistic and environmental perspectives. 
Even in this case, the technical feasibility of the construction has been studied and is deemed achievable. \section{Detector considerations} The FCC--hh is both a discovery and a precision measurement machine, with the mass reach increased by a factor of seven with respect to the current LHC. The much larger cross sections for SM processes combined with the higher luminosity lead to a significant increase in measurement precision. This implies that the detector must be capable of measuring multi-TeV jets, leptons, and photons from heavy resonances with masses up to \SI{50}{TeV}, of measuring the known SM processes with high precision, and of being sensitive to a broad range of BSM signatures at moderate $p_\mathrm{T}$. Given the low mass of SM particles compared to the \SI{100}{TeV} collision energy, many SM processes feature a significant forward boost while keeping transverse momentum distributions comparable to those at LHC energies. Hence, a detector for \SI{100}{TeV} must increase the acceptance for precision tracking and calorimetry to $|\eta| \approx 4$, while retaining the $p_\mathrm{T}$ thresholds for triggering and reconstruction at levels close to those of the current LHC detectors. The large number of p–p collisions per bunch crossing, which leads to the so-called pile-up, imposes stringent criteria on the detector design. Indeed, the present LHC detectors cope with pile-up of up to 60, the HL--LHC will generate values of up to 200, whereas the expected value of 1000 for the FCC--hh poses a technological challenge. Novel approaches, specifically in the context of high-precision timing detectors, will likely allow such numbers to be handled efficiently. \begin{figure}[htb] \begin{center} \includegraphics[trim=2truemm 0truemm 2truemm 0truemm,width=0.90\hsize,clip=]{FCC-hh_detector.pdf} \end{center} \caption{Conceptual layout of the FCC--hh reference detector (from~\cite{FCC-hhCDR}). It features an overall length of \SI{50}{m} and a diameter of \SI{20}{m}. 
A central solenoid with \SI{10}{m} diameter bore and two forward solenoids with \SI{5}{m} diameter bore provide a \SI{4}{T} field for momentum spectroscopy in the entire tracking volume.} \label{fig:detector} \end{figure} Figure~\ref{fig:detector} shows the conceptual FCC--hh reference detector, which serves as a concrete example for subsystem and physics studies aimed at identifying areas where dedicated R\&D efforts are needed. The detector has a diameter of \SI{20}{m} and a length of \SI{50}{m}, similar to the dimensions of the ATLAS detector at the LHC. The central detector, with coverage of $|\eta| < 2.5$, houses the tracking, electromagnetic calorimetry, and hadron calorimetry surrounded by a \SI{4}{T} solenoid with a bore diameter of \SI{10}{m}. The required performance for $|\eta| > 2.5$ is achieved by displacing the forward parts of the detector away from the interaction point, along the beam axis. Two forward magnet coils, generating a \SI{4}{T} solenoid field, with an inner bore of \SI{5}{m} provide the required bending power. Within the volume covered by the solenoids, high-precision momentum spectroscopy up to $|\eta| \approx 4$ and tracking up to $|\eta| \approx 6$ is ensured. Alternative layouts concerning the magnets of the forward region are also studied~\cite{FCC-hhCDR}. The tracker is specified to provide better than 20\% momentum resolution for $p_\mathrm{T}=\SI{10}{TeV/c}$ for heavy $Z'$ type particles, and better than 0.5\% momentum resolution at the multiple scattering limit, at least up to $|\eta| = 3$. The tracker cavity has a radius of \SI{1.7}{m} with the outermost layer at around \SI{1.6}{m} from the beam, providing the full spectrometer arm up to $|\eta| = 3$. 
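A quick estimate shows why the momentum-resolution specification is demanding. This sketch uses the standard sagitta formula for a track in a solenoid field; the \SI{4}{T} field and $\sim\SI{1.6}{m}$ lever arm are taken from the text, and the assumption that the relative momentum resolution equals the relative sagitta resolution is a simplification.

```python
# Sagitta of a charged-particle track in a solenoid field:
#   s = 0.3 * B[T] * L[m]**2 / (8 * pT[GeV/c]),  result in metres.
# The relative momentum resolution is roughly the relative sagitta resolution.

def sagitta_m(b_tesla, lever_arm_m, pt_gev):
    return 0.3 * b_tesla * lever_arm_m**2 / (8.0 * pt_gev)

s = sagitta_m(4.0, 1.6, 10_000.0)     # pT = 10 TeV/c
print(f"sagitta = {s * 1e6:.1f} um")  # ~38.4 um
# A 20% momentum resolution requires measuring this sagitta to ~20%:
print(f"required sagitta precision ~ {0.2 * s * 1e6:.1f} um")
```

A bending sagitta of only a few tens of microns over a \SI{1.6}{m} arm is what drives the need for high-precision silicon tracking.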
The electromagnetic calorimeter (EMCAL) has a thickness of around 30 radiation lengths and, together with the hadron calorimeter (HCAL), provides an overall calorimeter thickness of more than 10.5 nuclear interaction lengths to ensure 98\% containment of high-energy showers and to limit punch-through to the muon system. The EMCAL is based on liquid argon (LAr) due to its intrinsic radiation hardness. The barrel HCAL is a scintillating tile calorimeter with steel and Pb absorbers, divided into a central and two extended barrels. The HCALs for the endcap and forward regions are also based on LAr. The requirement of calorimetry acceptance up to $|\eta| \approx 6$ translates into an inner active radius of only \SI{8}{cm} at a $z$-distance of \SI{16.6}{m} from the interaction point. The EMCAL is specified to have an energy resolution of around $10\%/\sqrt{E}$, and the HCAL of around $50\%/\sqrt{E}$, for single particles. The features of the muon system have a significant impact on the overall detector design. As there is nowadays little doubt that large-scale silicon trackers will be core parts of future detectors, the emphasis on standalone muon performance is less pronounced, and the focus is shifted towards the aspects of muon trigger and muon identification. In the reference detector, the magnetic field is unshielded, with several positive side effects that contribute to a considerable cost reduction. The unshielded coil can be lowered through a shaft of \SI{15}{m} diameter, and the detector can be installed in a cavern of \SI{37}{m} height and \SI{35}{m} width, similar to the present ATLAS cavern. The magnetic stray field reaches \SI{5}{mT} at a radial distance of \SI{50}{m} from the beamline, so that no relevant stray field leaks into the service cavern, placed \SI{50}{m} away from the experiment cavern and separated by rock. 
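As an illustration of the quoted stochastic resolution terms, the following sketch evaluates $\sigma_E/E = a/\sqrt{E}$ (with $E$ in GeV) for a \SI{1}{TeV} particle; constant and noise terms are neglected here, which is an assumption of this sketch.

```python
import math

# Stochastic term of the calorimeter energy resolution:
#   sigma_E / E = a / sqrt(E[GeV]).
# Constant and noise terms are neglected in this illustration.

def resolution(a, e_gev):
    return a / math.sqrt(e_gev)

# EMCAL (a ~ 10%) and HCAL (a ~ 50%) at E = 1 TeV:
print(f"EMCAL at 1 TeV: {resolution(0.10, 1000.0) * 100:.2f}%")  # ~0.32%
print(f"HCAL  at 1 TeV: {resolution(0.50, 1000.0) * 100:.2f}%")  # ~1.58%
```

At multi-TeV energies the stochastic term thus becomes very small, and in practice the constant term dominates the resolution.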
The shower and absorption processes inside the forward calorimeter produce a large number of low-energy neutrons, a significant fraction of which enters the tracker volume. To keep these neutrons from entering the muon system and the detector cavern, a heavy radiation shield is placed around the forward solenoid magnets to close the gap between the endcap and forward calorimeters. The technologies selected for the various subsystems must withstand significant radiation levels. On the first silicon layer at $r = \SI{2.5}{cm}$ the charged-particle rate is around \SI{10}{GHz \per cm^2}, and it drops to about \SI{3}{MHz \per cm^2} at the outer radius of the tracker, whereas inside the forward EMCAL it rises to \SI{100}{GHz \per cm^2}. The \SI{1}{MeV} neutron equivalent fluence, a key number for long-term damage of silicon sensors and electronics in general, amounts to $6 \times 10^{17} \SI{}{\per cm^2}$ for the first silicon layer; beyond $r = \SI{40}{cm}$ the number drops below $10^{16}\SI{}{\per cm^2}$, and in the outer parts of the tracker it is around $5 \times 10^{15} \SI{}{\per cm^2}$. Technologies developed for the HL--LHC detectors are therefore applicable for $r > \SI{40}{cm}$, while novel sensors and readout electronics have to be developed for the innermost parts of the tracker. The charged-particle rate in the muon system is dominated by electrons, created by photons in the \SI{}{MeV} range through processes related to thermalization and capture of the neutrons produced in hadron showers, mainly in the forward region. In the barrel muon system and the outer endcap muon system, the charged-particle rate does not exceed \SI{500}{Hz \per cm^2}; the rate increases to \SI{10}{kHz \per cm^2} in the inner endcap muon system, and to \SI{500}{kHz \per cm^2} in the forward muon system, at a distance of \SI{1}{m} from the beam. 
These rates are comparable to those of the muon systems of the current LHC detectors, therefore, gaseous detectors used in these experiments can be adopted. \section{Cost and schedule} In the FCC integrated project, the FCC--hh is preceded by the lepton collider Higgs, top, and electroweak factory, FCC--ee. Here, both civil engineering and general technical infrastructures of the FCC--ee can be fully reused for FCC--hh, thus substantially lowering the investments for the latter to \SI{17000}{MCHF}, according to the CDR estimate \cite{FCC-hhCDR}. The particle collider- and injector-related investments amount to 80\% of the FCC--hh cost, namely to about \SI{13600}{MCHF}. The major part of this accelerator cost corresponds to the expected price of the 4700 Nb$_3$Sn \SI{16}{T} main dipole magnets, totaling \SI{9400}{MCHF}, for a target cost of \SI{2}{MCHF}/magnet. For completeness, we note that in the CDR, the construction cost for FCC--hh as a single standalone project, i.e.\ without prior construction of an FCC--ee lepton collider, was estimated to be about \SI{24000}{MCHF} for the entire project. The FCC--hh operation costs, other than electricity cost, are expected to remain limited, based on the evolution from LEP to LHC operation today, which shows a steady decrease in the effort needed to operate, maintain and repair the equipment. The cost-benefit analysis of the LHC/HL--LHC programme reveals that a research infrastructure project of such a scale and high-tech level has the potential to generate significant socio-economic value throughout its lifetime, in particular if the tunnel, surface, and technical infrastructures from a preceding project have been amortized. In the integrated FCC project, disassembly of the FCC--ee and subsequent installation of the FCC--hh take about 8--10 years. The projected duration for the operation of the FCC--hh facility is 25 years, to complete the currently envisaged proton-proton collision physics programme. 
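The headline cost figures quoted above are mutually consistent, as a quick tally shows (all numbers in MCHF, as cited in the text from the CDR):

```python
# CDR cost figures (MCHF), as quoted in the text.
total_integrated = 17_000     # FCC--hh after FCC--ee (infrastructure reused)
accelerator_share = 13_600    # collider- and injector-related investments
n_dipoles, cost_per_dipole = 4_700, 2.0

print(accelerator_share / total_integrated)   # 0.8 -> the quoted 80%
print(n_dipoles * cost_per_dipole)            # 9400.0 MCHF for the main dipoles
# The dipoles alone account for about 69% of the accelerator cost:
print(f"{n_dipoles * cost_per_dipole / accelerator_share:.2f}")
```

The dominance of the main-dipole item in the budget is what motivates the target unit cost of \SI{2}{MCHF} per magnet.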
As a combined, ``integrated'' project, namely FCC--ee followed by FCC--hh, the FCC covers a total span of at least 70 years, i.e.~until the end of the 21st century. \section{Progress since the CDR} \label{sec:progress} \subsection{Evolution of the baseline layout} Among the several domains of activity that have been pursued since the publication of the CDR, it is important to stress the intense efforts devoted to placement studies, which refined the results discussed in~\cite{FCC-hhCDR}. These studies aim to determine an optimal tunnel layout that fulfills the multiple constraints imposed by the geological situation and by territorial and environmental aspects. Furthermore, in the frame of FCC--ee studies, it emerged that implementing four experimental interaction points is an interesting option worth investigating. Beam dynamics considerations impose a symmetrical positioning of the four experimental points. Hence, to allow sharing the experimental caverns between FCC--ee and its hadron companion, the same principle should also be applied to the FCC--hh lattice. The outcome of these considerations is the new layout shown in Fig.~\ref{fig:FCC-hh-current}. The circumference of the proposed layout is \SI{91.17}{km}. The proposed layout has an appealing side effect, namely that only eight access points are needed, which appreciably reduces the civil engineering works and costs. \begin{figure}[htb] \begin{center} \includegraphics[trim=50truemm 0truemm 50truemm 0truemm,width=0.80\hsize,clip=]{FCC-hh_layout_v3.pdf} \end{center} \caption{Sketch of the proposed eight-point FCC--hh layout.} \label{fig:FCC-hh-current} \end{figure} The four experimental points are located in PA, PD, PG, and PJ, respectively. 
The length of the straight sections has been revised, following the results of the placement studies: a short straight section, \SI{1.4}{km} in length as in the baseline lattice, is used to house the experimental interaction points; a long straight section, \SI{2.16}{km} in length, is used to house the key systems. Currently, it is proposed to install the beam dump in PB, the betatron collimation in PF, the momentum collimation in PH, and the RF system in PL. These preliminary assignments should be confirmed by detailed studies. Such studies should also assess the feasibility of the optics required for the various systems, following the sizable length reduction (from \SI{2.8}{km} in the CDR baseline to \SI{2.16}{km} in the new version). The total length of the arcs is \SI{76.93}{km}, and, unlike the baseline configuration, all arcs have the same length. The reduction of the total arc length implies that the collision energy falls short of \SI{100}{TeV} by a few TeV; this is not considered a hurdle. The FODO cell length is unchanged. The rearrangement of the experimental points has an impact on the injection and transfer line design. The configuration inherited from the LHC design, in which injection is performed in the same straight section in which the secondary experiments are installed, has to be dropped, as it would lead to very long transfer lines. Therefore, the current view consists of combining injection with the beam dump (in PB) and with the RF system (in PL). To save tunnel length, it is proposed that the transfer lines run in the FCC--hh ring tunnel from close to PA to the injection point (see Fig.~\ref{fig:FCC-hh-current}). An additional benefit of this solution is that the transfer line magnets would be normal-conducting and rather relaxed in terms of magnetic properties. Integration of the transfer lines in the ring tunnel is being actively pursued to assess the feasibility of this proposal. 
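The quoted section lengths of the new eight-point layout close exactly on the \SI{91.17}{km} circumference, as a quick check shows:

```python
# Eight-point layout: 4 short straight sections (experiments), 4 long
# straight sections (dump, betatron/momentum collimation, RF), plus the
# arcs. All lengths in km, as quoted in the text.
short_straight, long_straight, arcs = 1.4, 2.16, 76.93

circumference = 4 * short_straight + 4 * long_straight + arcs
print(f"{circumference:.2f} km")  # 91.17 km
```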
\subsection{Alternative configuration} In parallel to the studies for the optimization of the baseline layout, some efforts have been devoted to the analysis of alternative approaches to the generation of the ring optics. Indeed, the standard paradigm for collider optics consists of using separate-function magnets, in particular in the regular arcs, in conjunction with a FODO structure. However, a combined-function optics might provide interesting features, particularly appealing for an energy-frontier collider. A combined-function optics has the potential of providing a higher dipole filling factor, thus opening interesting optimization paths for the dipole field and beam energy. Currently, this research has explored the benefits of a combined-function periodic cell~\cite{our_paper6}. It has also optimized some of the parameters of the cell, such as its length~\cite{our_paper8}, showing that the combined-function magnet is as feasible as the baseline magnet. Furthermore, a complex optics, including arcs and dispersion suppressors, can indeed be realized with combined-function magnets. As a next step, the investigations will consider the various systems of corrector magnets planned in the baseline FODO cell and optimize them in the context of a combined-function periodic cell. \section{Conclusions} The FCC--hh baseline comprises a power-saving, low-temperature superconducting magnet system based on an evolution of the Nb$_3$Sn technology pioneered at the HL--LHC. An energy-efficient cryogenic refrigeration infrastructure, based on a neon-helium light gas mixture, and a high-reliability, low-loss cryogenic distribution infrastructure are also key elements of the baseline. Highly segmented kickers, superconducting septa and transfer lines, and local magnet energy recovery are other essential components of the proposed FCC--hh design. Furthermore, technologies that are already being gradually introduced at other CERN accelerators will be deployed in the FCC--hh. 
Given the time scale of the FCC integrated program, which allows for around 30 years of R\&D for FCC--hh, an increase in the energy efficiency of a particle collider can be expected thanks to high-temperature superconductor R\&D, carried out in close collaboration with industrial partners. The reuse of the entire CERN accelerator chain, serving also a concurrent physics programme, is an essential lever to arrive at an overall sustainable research infrastructure at the energy frontier. The FCC--hh will be a strong motor of economic and societal development in all participating nations, because of its large scale and its intrinsic character as an international fundamental-research infrastructure, combined with tight involvement of industrial partners. Finally, it is worth stressing the training provided at all education levels by this marvelous scientific tool. \bibliographystyle{JHEP} \section{Design overview} \subsection{Status} The design of the FCC--hh collider has been presented in a Conceptual Design Report (CDR)~\cite{FCC-hhCDR}, which describes the baseline configuration of the ring (see Section~\ref{sec:intro} and following for a brief review of the baseline design and of the recent developments). Note that the discussion presented in the rest of this paper is essentially based on the material collected in the CDR. Since the publication of the CDR, substantial progress has been made, in particular in the domain of ring placement and layout, and the main results are summarized in Section~\ref{sec:progress}. \subsection{Performance matrix} \subsubsection{Attainable energy} The target energy of \SI{100}{TeV} fully relies on the successful development of \SI{16}{T} superconducting magnets, and any failure to meet the target magnetic field will necessarily impact the final energy of the collider. To mitigate the risk linked to this challenging new technology, R\&D efforts are needed; these are detailed in~\cite{FCC-hhCDR}. 
In this respect, the experience from the HL--LHC will be important. \subsubsection{Attainable luminosity and luminosity integrals} Possible limiting factors for the collider luminosity seem linked more to luminosity integrals than to attainable luminosity. In the case of the LHC, the attainable luminosity has surpassed the nominal one thanks to several elements. Higher-brightness beams delivered by the injectors boosted the luminosity, while in the LHC ring the excellent optics control, which includes measurement and correction, together with an optimal use of the available beam aperture, thanks also to the use of tighter collimator settings~\cite{FusterLHCapertureEPJP}, provided the final touch. It is worth stressing that in the LHC, $\beta^\ast=\SI{30}{cm}$, corresponding to the nominal FCC--hh value, has already been achieved and surpassed. On the downside, the larger number of quadrupole magnets in the straight sections of the FCC--hh might challenge the correction algorithms devised so far for the LHC, and new approaches should be explored. Furthermore, the actual operational performance with crab cavities is still unknown, but the HL--LHC will provide an ideal test-bed for getting ready in view of the FCC--hh. On the other hand, attaining the FCC--hh target integrated luminosity might be more challenging, for several reasons. The injector chain will grow in terms of the number of accelerator rings; the number of magnets (and of active elements in general) in the FCC--hh lattice will also increase with respect to the LHC or HL--LHC; and repair activities will be challenging, also taking into account the distances to be covered to access the faulty hardware and the large number of components. All these considerations suggest that operational efficiency might be at risk, and that appropriate mitigation measures should be considered (e.g.\ repair activities carried out by robots). 
\subsubsection{Injector and driver systems} The baseline option for the FCC--hh ring is to use the LHC injector chain and the LHC as pre-injector. An alternative consists of replacing the LHC in its role of pre-injector with a superconducting ring to be installed in the SPS tunnel. The LHC injector chain is, without doubt, a key element in the success of the LHC. The increase in its complexity, with the addition of the LHC, will potentially impact its reliability. Furthermore, the various accelerators in the injector chain will have rather different ages, with the Proton Synchrotron being the oldest ring (it was commissioned in November 1959). This might have an impact on the overall performance and should be properly addressed, e.g.\ with an appropriate long-term maintenance programme. \subsubsection{Facility scale} Figure~\ref{fig:placement} shows some of the FCC--hh implementation variants under study, including the LHC and the SPS accelerators. The large scale of the FCC--hh ring and the related infrastructure implies a certain number of risk factors stemming from the civil engineering activities. The tunneling activities (for the ring tunnel as well as the ancillary tunnels) are comparable to those of the recently completed Gotthard Base Tunnel (a total of about \SI{152}{km}, including two \SI{57}{km} tunnel tubes) in Switzerland. Nevertheless, the handling of excavation materials might pose problems. In this respect, mitigation measures have been put in place in terms of R\&D for finding efficient treatment and use of these materials. As far as the infrastructure on the surface is concerned, possible difficulties might arise because of the regional and national frameworks in the two Host States that regulate the acceptance of an infrastructure development project plan. In this respect, close contacts have been established with national regulatory bodies to mitigate this risk. 
\begin{figure}[htb] \begin{center} \includegraphics[trim=2truemm 0truemm 2truemm 0truemm,width=0.60\hsize,clip=]{FCC-hh_placement.pdf} \end{center} \caption{Picture of some of the FCC-hh implementation variants under study, including the LHC and the SPS accelerators.} \label{fig:placement} \end{figure} \subsubsection{Power requirements} The FCC--hh collider complex is expected to require about \SI{580}{MW} of electrical power, which could be reduced to about \SI{550}{MW} with further optimization. Of these \SI{550}{MW}, about \SI{70}{MW} are needed for the injector complex, \SI{70}{MW} for cooling, ventilation, and general services. A further \SI{40}{MW} are consumed by the four physics detectors, and \SI{20}{MW} are allocated to the data centers for the four experiments. Among all the subsystems, the highest demand comes from the FCC--hh cryogenics, which requires about \SI{276}{MW} (about \SI{250}{MW} after further optimization, to be compared with about \SI{40}{MW} for the existing cryoplants of the LHC, with a three times shorter ring circumference), roughly half of which is needed to extract the $\sim \SI{5}{MW}$ of FCC--hh synchrotron radiation heat load from inside the cold arcs. These power requirements were obtained thanks to a careful optimization of the FCC--hh components, and, in particular, by an optimized beam-screen temperature, energy-efficient designs, and the use of new energy-saving technologies. Note that losses in the power transmission corresponding to about 5\%-7\% of the peak power should be added to estimate the needed grid power. In addition to the successful efforts in optimizing the power consumption of the FCC--hh, attempts to further decrease the power needed are planned. These studies will be essential to improve the energy efficiency of the collider and thus enhance the public acceptance of this large-scale facility. 
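The itemized contributions quoted above can be tallied against the optimized total. The residual is not itemized in the text; attributing it to the remaining subsystems (e.g.\ magnet powering and RF) is an assumption of this sketch.

```python
# FCC--hh electrical power budget (MW), optimized figures from the text.
loads = {
    "cryogenics (optimized)": 250,
    "injector complex": 70,
    "cooling, ventilation, general services": 70,
    "physics detectors": 40,
    "experiment data centers": 20,
}

itemized = sum(loads.values())
total = 550
print(itemized)          # 450 MW itemized
print(total - itemized)  # 100 MW residual (non-itemized subsystems, assumed)

# Grid power including 5-7% transmission losses on the 580 MW peak:
print(round(580 * 1.05), round(580 * 1.07))
```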
The three-pronged strategy, put in place since the conceptual design study phase, envisages a reduction of energy consumption, an increase in the efficiency of energy use, and the recovery and reuse of energy for other purposes. This strategy will be further pursued in the next phase of the FCC--hh study. \subsection{Challenges} Although the FCC--hh clearly poses a number of possible obstacles in several areas (beam physics and technology), it builds on the experience of previous operational colliders, such as the LHC and HL--LHC, which ensures the possibility of developing sound mitigation measures for the various challenges. As an example, it is worth mentioning that the machine design heavily relies on that of the LHC and HL--LHC, which instills confidence in the projected performance. The unprecedented stored beam energy of \SI{8.3}{GJ} represents a challenge for all systems devoted to protecting the hardware integrity of the FCC--hh ring, such as the collimation and dump systems. Such a challenge translates into beam dynamics challenges, e.g.\ the optics design of the straight sections housing the collimation and dump systems, which should satisfy multiple constraints, such as phase-advance relations, beam aperture constraints, and beam impedance, just to mention a few. The requirements also bring technological challenges in several areas, e.g.\ in terms of the materials selected for the collimator jaws and beam dumps, but also for the hardware related to the kickers that are used to dump the beams and to dilute them before they interact with the dump material. The field quality of the main magnets at injection energy is also an aspect that deserves particular care, as an insufficient field quality might lead primarily to beam loss and possibly also to emittance growth, with a direct impact on machine performance. The technology upon which the FCC--hh design relies is that of high-field Nb$_3$Sn superconducting magnets. 
Multiple challenges can be identified, linked to different aspects of this hardware. For instance, one challenge is the development of the Nb$_3$Sn wire to sustain the high critical currents needed to achieve the \SI{16}{T} magnetic field. Such a goal should be achieved under the constraint that the wire be economically affordable, given the large-scale production of magnets needed for the FCC--hh. An appropriate magnet design is another challenge, as this goal should be achieved by fulfilling several criteria, such as the minimization of the amount of superconductor and the field quality at injection energy. The complexity of this novel hadron collider is such that several other technologies are key to implementing the FCC--hh. The most important ones are an efficient and cost-effective cryogenic refrigeration, superconducting septum magnets, and solid-state generators. The best candidate for improved cryogenic refrigeration (with respect to the LHC and HL--LHC choice) is based on a mixture of neon and helium. Superconducting septum magnets are essential for a compact and efficient design of the beam-dump system. Modular, scalable, fast, and affordable high-power switching systems are key components of beam transfer systems. Solid-state devices, currently not commercially available, offer high-performance capabilities, which are needed for efficient FCC--hh operation. These technologies, which are connected with an overall increase of the operation efficiency of accelerator systems, naturally lead to the consideration of environmental aspects linked to the FCC--hh. Such a large-scale facility has an unavoidable impact on the environment in terms of civil engineering works, radioactive waste, and energy consumption. Concerning the first two aspects, CERN has long experience from the LEP and LHC projects. 
Although the FCC--hh scale exceeds by far the LEP/LHC scale, since the beginning of the studies, respect for the environment and the minimization of the impact on it have been the main guideline. This criterion is applied not only to the underground infrastructure, but also to the surface infrastructure, given its direct societal impact. Radioactive waste management is a delicate aspect, but all means have been put in place to integrate it from the beginning in the global implementation project. Concerning energy efficiency, this aspect is comparatively new and features prominently in public opinion; for this reason, several options have been studied and are actively pursued to provide more efficient energy consumption, e.g.\ via a new cryogenic system, as well as to recover energy whenever possible, as in the case of waste-heat recovery. \section{Technology requirements} The technological choices presented in the FCC--hh CDR represent feasible options for the implementation of the hadron collider. The time needed to move from the CDR stage to a TDR stage allows for carrying out R\&D studies to pursue the detailed feasibility assessment of the various technological items included in the FCC--hh baseline. A set of so-called strategic R\&D topics have been identified, which are essential prerequisites for the preparation of a sound technical design. It is clear that, in addition to the several technological challenges, a crucial aspect to consider and to assess carefully is the large-scale production of the \SI{16}{T} magnets. It is worth stressing that a detailed analysis of the possibility of establishing partnerships has been carried out, and a series of universities and research institutes have been identified as possible partners. Furthermore, whenever possible and appropriate, industrial partners have also been identified. 
The list of strategic R\&D topics is as follows: \begin{itemize} \item \SI{16}{T} superconducting high-field, dual-aperture accelerator magnet. \item Cost-effective and high-performance Nb$_3$Sn superconducting wire at industrial scale. \item High-temperature superconductors. The integrated project time line allows for the exploration and development of high-temperature superconductor (HTS) magnet technology, and of possible hybrid magnets, enabling improved performance, i.e.\ higher fields, or higher temperature. HTS options might be more rewarding than Nb$_3$Sn technology, as they might allow for higher fields, better performance, reduced cost, or a higher operating temperature; in this last respect, HTS could be a game changer. \item Energy-efficient, large-scale cryogenic refrigeration plants for temperatures down to \SI{40}{K}. \item Invar-based cryogenic distribution line. \item Superconducting septum magnet (to be merged with high-power switching elements). \item High-speed, high-power switching system for beam transfer elements. \item Decentralized, high-capacity energy storage and release. \item Advanced particle detector technologies. \item Efficient and cost-effective DC power distribution. \item Efficient treatment and use of excavation material. \end{itemize} \subsection{High-Field Magnet R\&D} The primary technology of the future circular hadron collider, FCC--hh, is high-field magnets: both high-field dipoles and quadrupoles~\cite{FCC-hhCDR} are required, or, possibly, combined-function magnets~\cite{our_paper6}. The accelerator magnets of the present LHC, the Tevatron, RHIC, and HERA were built with wires based on Nb-Ti superconductor. However, Nb-Ti magnets are limited to maximum fields of about \SI{8}{T}, as operated at the LHC. 
The HL--LHC will, for the first time in a collider, deploy some tens of dipole and quadrupole magnets with a peak field of \SI{11}{}--\SI{12}{T}, based on a new high-field magnet technology using a Nb$_3$Sn superconductor. The Nb$_3$Sn superconductor holds the promise of approximately doubling the magnetic field, from $\sim \SI{8}{T}$ at the LHC to \SI{16}{T} for the FCC--hh. Recently, several important milestones were accomplished in the development of high-field Nb$_3$Sn magnets. At CERN, a block-coil magnet, FRESCA2, with a \SI{100}{mm} bore, achieved a world-record field of \SI{14.6}{T} at \SI{1.9}{K}~\cite{Willering:2019hhu}. In the US, a Nb$_3$Sn cosine-theta accelerator dipole short-model demonstrator with \SI{60}{mm} aperture~\cite{zlobin-napac19} reached a similar field of \SI{14.5}{T} at \SI{1.9}{K}~\cite{US-MDP:2021weg}. Forces acting on the magnet coils greatly increase with the strength of the magnetic field, while, at the same time, most higher-field conductors, such as the brittle Nb$_3$Sn, tend to be more sensitive to pressure. Therefore, force management becomes a key element in the design of future high-field magnets. Besides the development of optimized magnet design concepts, such as canted cosine-theta dipoles~\cite{caspi2013canted}, higher fields can be facilitated by a higher-quality conductor. A Nb$_3$Sn wire development programme was set up for the FCC~\cite{8629920}. For Nb–Ta–Zr alloys, it has been demonstrated that internal oxidation of Zr refines the Nb$_3$Sn grains and thereby increases the critical current density~\cite{buta}. The phase evolution of Nb$_3$Sn wire during heat treatment is equally under study, as part of the FCC conductor development programme in collaboration with TVEL, JASTEC, and KEK~\cite{9369038}. 
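The rapid growth of coil forces with field strength, noted above, can be made concrete with a back-of-the-envelope estimate. The sketch below uses the equivalent magnetic pressure $B^2/2\mu_0$ as a crude proxy for the mechanical loading of the coils; this is an illustrative scaling argument, not an actual coil-stress computation, which depends on the detailed coil geometry:

```python
# Crude proxy (assumption): coil loading scales like the magnetic
# pressure B^2 / (2 * mu0); real coil stresses depend on the geometry.
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T m / A

for B in (8.0, 16.0):  # LHC-like vs FCC-hh dipole field, tesla
    p_MPa = B**2 / (2 * MU0) / 1e6
    print(f"B = {B:4.1f} T -> magnetic pressure ~ {p_MPa:5.1f} MPa")
```

Doubling the field quadruples this pressure, from roughly \SI{25}{MPa} to about \SI{100}{MPa}, which is why force management and conductor robustness dominate the \SI{16}{T} magnet design.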
Advanced Nb$_3$Sn wires including Artificial Pinning Centers (APCs), developed by two separate teams (FNAL, Hyper Tech Research Inc., and Ohio State; and NHMFL, FAMU/FSU), achieved the target critical current density for FCC, of \SI{1500}{A \per mm^2} at 16 T~\cite{uswire1, uswire2}, which is 50\% higher than for the HL--LHC superconductor. The APCs decrease the magnetization heat during field ramps, improve the magnet field quality at injection, and reduce the probability of flux jumps~\cite{xu2014refinement}. In addition to Nb$_3$Sn wires, high-temperature superconductors (HTS) are also of interest, since they might allow for higher fields, operation at higher temperature, and, ultimately, perhaps even lower cost. In this context, the FCC conductor programme has been exploring the potential of ReBCO (Rare-earth barium copper oxide) coated conductors (CCs). In particular, the critical surfaces for the current density, $J_\mathrm{c} (T, B, \theta)$, of coated conductors from six different manufacturers: American Superconductor Co.~(US), Bruker HTS GmbH~(Germany), Fujikura Ltd~(Japan), SuNAM Co. Ltd~(Korea), SuperOx ZAO~(Russia) and SuperPower Inc.~(US) have been studied~\cite{senatore2015}. Outside the accelerator field, HTS magnet technology could play an important role in fusion research. A number of companies are developing HTS magnets in view of fusion applications. One of these companies is Commonwealth Fusion Systems, which, in partnership with MIT’s Plasma Science and Fusion center, is designing SPARC, a compact net fusion energy device~\cite{sparc}. The SPARC magnets are based on second generation ReBCO conductors. Recently, the SPARC team successfully demonstrated a coil with a \SI{20}{T} field~\cite{mitnews}. 
An interesting view on HTS prospects is presented in a Snowmass 2020 Letter of Interest~\cite{snowmassmatias}, according to which the actual material and process costs of HTS tapes are a small fraction of their current commercial value, and there is a historical link between manufactured volume and price~\cite{mikek}. \section{Staging options and upgrades} Considerations on possible staging options for the FCC--hh can be made on the basis of the experience of the LHC and HL--LHC. Various types of energy upgrade, from a limited one (of about 7\%, based on the assumed engineering margins of the various systems and in particular of the main dipoles) to a major one (of about 93\%, the so-called HE--LHC~\cite{HE-LHCCDR}, based on FCC--hh-type main dipoles to be installed in the LHC tunnel), have been considered, but none has been retained as an efficient upgrade path. On the other hand, an upgrade of the luminosity has been approved as the route to improve the LHC performance within the LHC Luminosity Upgrade Project~\cite{BejarAlonso:2749422}. It is worth noting that the LHC luminosity upgrade goal is achieved thanks to changes in the LHC ring, leading to a reduction of $\beta^\ast$, but also to the upgrade of the injector chain to generate brighter beams and higher currents~\cite{Damerau:1976692}. We may, therefore, speculate that an energy upgrade is not a realistic option for FCC--hh, unless even higher-field magnets, e.g.~based on HTS, became available. Instead, a luminosity upgrade, following the two FCC--hh stages already foreseen, with an ideal delivery of about \SI{2}{} or \SI{8}{\femto \barn^{-1}} per day, respectively, could be considered an interesting option. However, further reducing $\beta^\ast$ (the nominal value in the second stage of FCC--hh is \SI{30}{cm}) does not significantly increase the integrated luminosity without a higher proton intensity. 
Already, as designed, the FCC--hh machine is cycling for about half of the time (with fairly demanding assumptions on the ramp speed of the injectors, either a slightly modified LHC or a new superconducting SPS), and the protons are burnt off quickly in collision (see Fig.~4 in~\cite{PhysRevSTAB.18.101002} and the associated equations). Burning off the protons even more quickly cannot significantly raise the integrated luminosity, but will mostly increase the event pile-up. On the other hand, the FCC--hh baseline only considers rather moderate intensities from the injector, of $\sim 10^{11}$ protons per bunch and \SI{0.5}{A} beam current, which are at least a factor $\sim 2$ below the capabilities of the upgraded LIU/HL--LHC complex. The brightness of the injected beam is not a critical issue for FCC--hh, since radiation damping will in any case shrink the beam in the collider. Maximum integrated luminosity could be attained by exploiting the maximum proton rate, bunch intensity, and beam current available from the CERN LHC complex. However, the beam current in the FCC--hh rings is limited by the SR heat load and the associated cryogenics power requirements. With HTS magnets operating at an elevated temperature, these cryogenics needs would be relaxed, and a higher beam current might become possible. Another possible approach would be not to cycle the FCC--hh collider, but to run it at constant magnetic fields and approximately constant beam current, using a top-up injection scheme of the kind successfully implemented for the two B-factories, in routine use at the present SuperKEKB, and forming a key ingredient of the future FCC--ee lepton collider. For the case of FCC--hh, top-up injection requires the installation of a fast-ramping \SI{50}{TeV} full-energy injector, which might become available thanks to advancing magnet technology. To facilitate the design, the beam current in the top-up injector could be restricted to e.g. 10\% of the collider beam current. 
Such a top-up injector could increase the integrated luminosity of the FCC--hh by a significant factor. Lepton colliders utilize radiation damping to merge injected particles with the stored beam. If the radiation damping in FCC--hh proved too slow for this purpose, the merging of injected and stored beams could be accomplished by other methods, e.g., by injection into nonlinear resonance islands, which are then collapsed~\cite{PhysRevSTAB.10.034001,PhysRevSTAB.18.074001}, or, alternatively, by innovative damping of the injected beam, e.g.~through optical stochastic cooling or coherent electron cooling. In short, the use of HTS magnets allowing for a higher beam current and the installation of a novel fast-cycling full-energy top-up injector are two plausible paths to increase the integrated luminosity of FCC--hh by a significant factor. It is also worth mentioning that heavy-ion collisions are part of the FCC baseline, although they formally represent an extension with respect to the proton-proton programme. However, lepton-hadron collisions, the so-called FCC--eh, are not part of the baseline and would be an appealing upgrade. Other extensions of the FCC--hh scope could be collisions with isoscalar light-ion beams~\cite{KRASNY2020103792}, the realization of a Gamma Factory~\cite{krasny2015gamma,Krasny:2690736,krasny2020gamma}, and becoming an ingredient of a high-energy muon collider~\cite{MAP,IMC,JINSTMC,Antonelli:2015nla,Zimmermann:2018wfu,Zimmermann-ipac22}. \section{Synergies with other concepts and/or existing facilities} Clearly, a natural synergy exists between FCC--ee and FCC--hh. Moreover, the FCC--hh can profit from the experience of LHC/HL--LHC in several aspects. The HL--LHC bases its luminosity increase upon the use of Nb$_3$Sn quadrupoles for the final focus. Hence, the experience gained in the design, prototyping, construction, testing, and operation of the new triplets will be essential for FCC--hh. 
A similar situation occurs in the domain of physics detectors, where the planned upgrade to cope with the HL--LHC performance will bring the detectors into new territory, thus approaching that of the FCC--hh. Hence, also in this domain, FCC--hh can build on the experience of the HL--LHC. It is also evident that a strong synergy is present between FCC--hh and HL--LHC at the level of beam dynamics, due to the similarity of some regimes. Areas with similar challenges include optics control in the ring in general, and in the experimental insertions in particular; emittance preservation of high-brightness beams; electron-cloud effects; beam instabilities; as well as, e.g., machine operation with crab cavities. Finally, it will be easy to find synergies in the domain of energy efficiency and environmental impact with other projects, as these two aspects are gaining so much focus that they will become essential items for any large-scale facility for physics research. \section{Overview of FCC--hh as presented in the 2019 CDR} \label{sec:intro} The discovery of the Higgs boson, announced exactly ten years ago, brought to completion the search for the fundamental constituents of matter and interactions that represent the so-called \emph{Standard Model} (SM). Several experimental observations require an extension of the Standard Model. For instance, explanations are needed for the observed abundance of matter over antimatter, the striking evidence for dark matter, and the non-zero neutrino masses. Therefore, a novel research infrastructure, based on a highest-energy hadron collider, FCC--hh, with a center-of-mass collision energy of \SI{100}{TeV} and an integrated luminosity at least a factor of five larger than that of the HL--LHC~\cite{HL-LHCNature2019,BejarAlonso:2749422}, is proposed to address the aforementioned aspects~\cite{FCC-hhCDR,FCC-hhNature2019,FCC-hhNature2020}. 
The current energy frontier limit for collider experiments will be extended by almost an order of magnitude, and the mass reach for direct discovery will extend to several tens of TeV. Under these conditions, for instance, the production of new particles, whose existence could have emerged from precision measurements during the preceding e$^+$e$^-$ collider phase (FCC--ee), would become possible. An essential task of this collider will be the accurate measurement of the Higgs self-coupling, as well as the exploration of the dynamics of electroweak symmetry breaking at the TeV scale, to elucidate the nature of the electroweak phase transition. This unique particle collider infrastructure, FCC--hh, will serve the world-wide physics community for about $25$~years. However, it is worth stressing that, in combination with the lepton collider~\cite{FCC-eeCDR} as its initial stage, the FCC integrated project will provide a research tool until the end of the 21st century. The FCC construction project will be carried out in close collaboration with national institutes, laboratories, and universities world-wide, with a strong participation of industrial partners. It is worth mentioning that the coordinated preparatory effort is based on a core of an ever-growing consortium of already more than 145 institutes world-wide. \subsection{Accelerator layout} The FCC--hh~\cite{FCC-hhCDR,FCC-hhNature2019,FCC-hhNature2020} is designed to provide proton–proton collisions with a center-of-mass energy of \SI{100}{TeV} and an integrated luminosity of $\approx$ \SI{20}{\per \atto \barn} in each of the two primary experiments over $25$~years of operation. The FCC--hh offers a very broad palette of collision types, as it is envisaged to collide ions with protons and ions with ions. The ring design also allows one interaction point to be upgraded to electron–proton and electron–ion collisions. 
In this case, an additional recirculating, energy-recovery linac will provide the electron beam that collides with one circulating proton or ion beam. The other experiments can operate concurrently with hadron collisions. The FCC--hh will use the existing CERN accelerator complex as the injector facility. The accelerator chain, consisting of CERN’s Linac4, PSB, PS, SPS, and LHC, could deliver beams at \SI{3.3}{TeV} to the FCC--hh, thanks to transfer lines using \SI{7}{T} superconducting magnets that connect the LHC to the FCC--hh. This choice also permits the continuation of CERN’s rich and diverse fixed-target physics programme in parallel with FCC--hh operations. Only limited modifications of the LHC would be required; in particular, the ramp speed can be increased to optimize the filling time of the FCC--hh. Furthermore, reliability and availability studies have confirmed that operation can be optimized such that the FCC--hh collider can achieve its performance goals. However, the power consumption of the aging LHC cryogenic system is a concern. Note that the required 80\%–90\% availability of the injector chain could best be achieved with a new high-energy booster. As an alternative, direct injection from a new superconducting synchrotron at \SI{1.3}{TeV} that would replace the SPS is also being considered. In this case, simpler normal-conducting transfer lines with magnets operating at \SI{1.8}{T} are sufficient. For this scenario, more studies on beam stability in the collider at injection are required. Key parameters of the collider presented in the CDR are given in Table~\ref{tab:param}. In the CDR, the circumference of FCC--hh was \SI{97.75}{km}. 
Recently a placement optimization has led to a ``lowest-risk'' layout with a circumference of \SI{91.17}{km} (also see Section \ref{sec:progress} and Fig.~\ref{fig:FCC-hh-current}), comprising four short straight sections of \SI{1.4}{km} length for the experimental insertions, and four longer straight sections of about \SI{2.16}{km} each, that would house, e.g.~the radiofrequency (RF), collimation, and beam extraction systems. Two high-luminosity experiments are located in the opposite insertions PA and PG, which ensures the highest luminosity, reduces unwanted beam-beam effects, and is independent of the beam-filling pattern. The main experiments are located in \SI{66}{m} long experimental caverns, sufficient for the detector that has been studied and ensuring that the final focus system can be integrated into the available length of the insertion. Two additional, lower luminosity experiments are located in the other two experimental insertions. \begin{table}[htb] \begin{center} \caption{Key FCC--hh baseline parameters from the 2019 CDR \cite{FCC-hhCDR} compared to LHC and HL--LHC parameters.} \begin{tabular}{|l|r|r|r|r|} \hline & \multicolumn{1}{c|}{LHC} & \multicolumn{1}{c|}{HL--LHC} & \multicolumn{2}{c|}{FCC--hh} \\ & & & Initial & Nominal \\ \hline \multicolumn{5}{|l|}{Physics performance and beam parameters} \\ \hline Peak luminosity\tablefootnote{For the nominal parameters, the peak luminosity is reached during the run.} ($10^{34} \SI{}{cm^{-2} s^{-1}}$) & $1.0$ & $5.0$ & $5.0$ & $< 30.0$ \\ Optimum average integrated & $0.47$ & $2.8$ & $2.2$ & $8$ \\ luminosity/day (\SI{}{\femto \barn^{-1}}) & & & & \\ Assumed turnaround time (\SI{}{h}) & & & $5$ & $4$ \\ Target turnaround time (\SI{}{h}) & & & $2$ & $2$ \\ Peak number of inelastic events/crossing & $27$ & $135$\tablefootnote{The baseline assumes leveled luminosity.} & $171$ & $1026$ \\ Total/inelastic cross section & \multicolumn{2}{c|}{$111/85$} & \multicolumn{2}{c|}{$153/108$} \\ $\sigma$ proton 
(\SI{}{\milli \barn}) & \multicolumn{2}{c|}{} & \multicolumn{2}{c|}{} \\ Luminous region RMS length (\SI{}{cm}) & \multicolumn{2}{c|}{} & \multicolumn{2}{c|}{$5.7$} \\ Distance IP to first quadrupole $L^\ast$ (\SI{}{m}) & \multicolumn{2}{c|}{$23$} & \multicolumn{2}{c|}{$40$} \\ \hline \multicolumn{5}{|l|}{Beam parameters} \\ \hline Number of bunches $n$ & \multicolumn{2}{c|}{$2808$} & \multicolumn{2}{c|}{$10400$} \\ Bunch spacing (\SI{}{ns}) & \multicolumn{2}{c|}{$25$} & \multicolumn{2}{c|}{$25$} \\ Bunch population $N$ ($10^{11}$) & $1.15$ & $2.2$ & \multicolumn{2}{c|}{1.0} \\ Nominal transverse normalised & $3.75$ & $2.5$ & \multicolumn{2}{c|}{$2.2$} \\ emittance (\SI{}{\micro m}) & & & \multicolumn{2}{c|}{} \\ Number of IPs contributing to $\Delta Q$ & $3$ & $2$ & $2+2$ & $2$ \\ Maximum total beam-beam tune shift $\Delta Q$ & $0.01$ & $0.015$ & $0.011$ & $0.03$ \\ Beam current (\SI{}{A}) & $0.58$ & $1.12$ & \multicolumn{2}{c|}{0.5} \\ RMS bunch length\tablefootnote{The HL--LHC assumes a different longitudinal distribution; the equivalent Gaussian RMS is \SI{9}{cm}.} (\SI{}{cm}) & \multicolumn{2}{c|}{$7.55$} & \multicolumn{2}{c|}{$8$} \\ $\beta^\ast$ (\SI{}{m}) & $0.55$ & $0.15$ & $1.1$ & $0.3$ \\ RMS IP spot size (\SI{}{\micro m}) & $16.7$ & $7.1$ & $6.8$ & $3.5$ \\ Full crossing angle (\SI{}{\micro rad}) & $285$ & $590$ & $104$ & $200$\tablefootnote{The luminosity reduction due to the crossing angle will be compensated using the crab crossing scheme.} \\ \hline \end{tabular} \end{center} \label{tab:param} \end{table} The regular lattice in the arc consists of \ang{90} FODO cells with a length of about \SI{213}{m}, six \SI{14}{m}-long dipoles between quadrupoles, and a dipole filling factor of about $0.8$. Therefore, a dipole field around \SI{16}{T} is required to maintain the nominal beams on the circular orbit. The dipoles are based on Nb$_3$Sn, are operated at a temperature of \SI{2}{K}, and are a key cost item of the collider. 
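The quoted \SI{16}{T} dipole field can be cross-checked from the magnetic rigidity relation $p\,[\mathrm{GeV}/c] \approx 0.3\, B\,[\mathrm{T}]\, \rho\,[\mathrm{m}]$. The short sketch below assumes an effective bending radius of roughly \SI{10.4}{km}, i.e.\ the total dipole length divided by $2\pi$; this is an illustrative number, not a lattice computation:

```python
# Magnetic rigidity: p [GeV/c] = 0.299792458 * B [T] * rho [m].
p_GeV = 50_000.0   # 50 TeV per beam (100 TeV centre of mass)
rho_m = 10_400.0   # assumed effective bending radius, ~10.4 km
B = p_GeV / (0.299792458 * rho_m)
print(f"required dipole field ~ {B:.1f} T")
```

With an $\approx$\SI{98}{km} ring, an arc fraction below one, and a dipole filling factor of about $0.8$ in the arcs, this effective bending radius is consistent with the quoted lattice, and the required field indeed comes out at about \SI{16}{T}.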
Efforts devoted to increasing the current density in the conductors to \SI{1500}{A \per mm^2} at \SI{4.2}{K} were successful~\cite{uswire1, uswire2}. Several optimized dipole designs have been developed in the framework of the EuroCirCol H2020 EC-funded project. The cosine-theta design has been selected as the baseline, because it provides a beneficial reduction of the amount of superconductor needed for the magnet coils. Several collaboration agreements are in place with organisations such as the French CEA, the Italian INFN, the Spanish CIEMAT, and the Swiss PSI, to build short model magnets. It is worth mentioning that a US DOE Magnet Development Programme is actively working to demonstrate a \SI{15}{T} superconducting accelerator magnet and has reached \SI{14.5}{T}. As the current plan is that FCC--hh be implemented following FCC--ee in the same underground infrastructure, the time scale for design and R\&D for FCC--hh is of the order of 30 years. This additional time will be used to develop alternative technologies, such as magnets based on high-temperature superconductors, with a potentially significant impact on the collider parameters, relaxed infrastructure requirements (cryogenics system), and increased energy efficiency (temperature of magnets and beam screen). \subsection{Luminosity performance} The initial parameters, with a maximum luminosity of $5\times 10^{34}$ \SI{}{cm^{-2} s^{-1}}, are planned to be reached in the first years. Then, a luminosity ramp-up will be applied, to reach the nominal parameters with a luminosity of up to $3 \times 10^{35}$ \SI{}{cm^{-2} s^{-1}}. Correspondingly, the integrated luminosity per day will increase from \SI{2}{\femto \barn^{-1}} to \SI{8}{\femto \barn^{-1}}. A luminosity of $2 \times 10^{34}$ \SI{}{cm^{-2} s^{-1}} can be achieved at the two additional experimental insertions, although further studies are needed to confirm this. 
High-brightness and high-current beams, with a quality comparable to that of the beams of the HL--LHC, combined with a small $\beta^\ast$ at the collision points, ensure the high luminosity. The parasitic beam-beam interactions are controlled by introducing a finite crossing angle, whose induced luminosity reduction is compensated by means of crab cavities. Further improvement of the machine performance might be achieved by using electron lenses and current-carrying wire compensators. The fast burn-off under nominal conditions prevents the beams from being used for collisions for more than \SI{3.5}{h}. Hence, the turn-around time, i.e. the time from one luminosity run to the next, is a critical parameter for achieving the target integrated luminosity. In theory, a time of about \SI{2}{h} is within reach, but to include a sufficient margin, turn-around times of \SI{5}{h} and \SI{4}{h} are assumed for initial and nominal parameters, respectively. Note that an availability of 70\% at flat top for physics operation is assumed for the estimate of the overall integrated luminosity. The collider performance can be affected by various beam dynamics effects that can lead to the development of beam instabilities and quality loss. To counteract these effects, a combination of a fast transverse feedback system and octupoles is used to stabilize the beam against parasitic electromagnetic interaction with the beamline components. Electron-cloud build-up, which could render the beam unstable, is suppressed by appropriate hardware design. The impact of main magnet field imperfections on the beam is mitigated by high-quality magnet design and the use of corrector magnets. \subsection{Technical systems} Many technical systems and operational concepts for FCC--hh can be scaled up from HL--LHC or can be based on technology demonstrations carried out in the frame of ongoing R\&D projects. 
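The interplay between burn-off and turn-around time discussed above can be put into a simple toy model. Assuming, as an illustration rather than the CDR computation, a pure burn-off luminosity decay $L(t)=L_0/(1+t/\tau)^2$ with $\tau = n_b N_b/(n_\mathrm{IP} L_0 \sigma_\mathrm{tot})$, the run length maximising the cycle-averaged luminosity is $T=\sqrt{\tau\, t_\mathrm{ta}}$:

```python
# Toy model (assumptions): pure burn-off decay L(t) = L0 / (1 + t/tau)^2,
# no leveling and no availability factor; nominal-like parameters.
n_b, N_b  = 10_400, 1.0e11   # bunches and protons per bunch
L0        = 3.0e35           # cm^-2 s^-1, nominal peak luminosity
sigma_tot = 153e-27          # cm^2, total p-p cross section (153 mb)
n_IP      = 2                # main experiments

tau  = (n_b * N_b) / (n_IP * L0 * sigma_tot)   # burn-off time constant, s
t_ta = 4 * 3600.0                              # s, nominal turnaround time
T    = (tau * t_ta) ** 0.5                     # run length maximising <L>

# cycle-averaged luminosity and ideal integrated luminosity per day
L_avg   = L0 * tau * T / ((tau + T) * (T + t_ta))
per_day = L_avg * 86400 / 1e39                 # fb^-1 (1 fb^-1 = 1e39 cm^-2)
print(f"tau ~ {tau/3600:.1f} h, optimal run ~ {T/3600:.1f} h, "
      f"~ {per_day:.1f} fb^-1/day (ideal)")
```

This toy yields a burn-off time of about \SI{3}{h}, in line with the quoted maximum useful run of \SI{3.5}{h}, and an ideal daily delivery in the same ballpark as the \SI{8}{\femto \barn^{-1}} nominal figure, before leveling and availability corrections.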
Particular technological challenges arise from the higher total energy in the beam (20 times that of the LHC), the much increased collision debris in the experiments (40 times that of the HL--LHC), and far higher levels of synchrotron radiation in the arcs (200 times that of the LHC). The high luminosity and beam energy will produce collision debris with a power of up to \SI{0.5}{MW} in the main experiments, with a significant fraction of this lost in the ring close to the experiment. A sophisticated shielding system, similar to that of the HL--LHC~\cite{BejarAlonso:2749422}, protects the final focusing triplet, avoids quenches, and reduces the radiation dose. The current radiation limit of \SI{30}{MGy} for the magnets, imposed by the resin used, will be reached for an integrated luminosity of \SI{13}{\atto \barn^{-1}}, but it is expected that both the shielding and the radiation hardness of the magnets can be improved. Hence, it is likely that the magnets will not have to be replaced during the entire lifetime of the project. A robust collimation and beam extraction system protects the machine from the energy stored in the beam. The design of the collimation system is based on the LHC system~\cite{LHCDR,BejarAlonso:2749422}, however, with a number of improvements. Additional protection has been added to mitigate losses in the arcs that would otherwise quench magnets. Improved conceptual designs of collimators and dogleg dipoles have been developed to reduce the beam-induced stress to acceptable levels. Further R\&D should aim at gaining margins in the design to reach comfortable levels. The extraction system uses a segmented, dual-plane dilution kicker system to distribute the bunches in a multi-branch spiral on the absorber block. Novel superconducting septa capable of deflecting the high-energy beams are currently being developed. 
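The quoted \SI{0.5}{MW} of collision debris follows directly from the luminosity and the cross section. The sketch below uses the nominal peak luminosity together with the inelastic cross section from the parameter table:

```python
# Debris power ~ luminosity * inelastic cross section * centre-of-mass energy
L          = 3.0e35                    # cm^-2 s^-1, nominal peak luminosity
sigma_inel = 108e-27                   # cm^2 (108 mb)
E_cm_J     = 100e12 * 1.602176634e-19  # 100 TeV converted to joules
P_MW = L * sigma_inel * E_cm_J / 1e6
print(f"collision debris power ~ {P_MW:.2f} MW")
```

The result, about half a megawatt per high-luminosity experiment, reproduces the figure quoted above.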
The system design is fault-tolerant, and the most critical failure mode, erratic firing of a single extraction kicker element, has limited impact thanks to the high granularity of the system. Investigations of suitable absorber materials, including 3D carbon composites and carbon foams, are ongoing in the frame of the HL--LHC project. The cryogenic system must compensate the continuous heat loads in the arcs of \SI{1.4}{W \per m} at a temperature below \SI{2}{K}, and the \SI{30}{W \per m \per aperture} due to synchrotron radiation at a temperature of \SI{50}{K}, as well as absorb the transient loads from magnet ramping. The system must also be able to fill and cool down the cold mass of the machine in less than 20 days, while avoiding thermal gradients higher than \SI{50}{K} in the cryomagnet structure. Furthermore, it must cope with quenches of the superconducting magnets and be capable of fast recovery from such events, so as to keep the operational availability of the collider at an adequate level. The number of active cryogenic components distributed around the ring is minimized for reasons of simplicity, reliability, and maintenance. Note that current helium cryogenic refrigeration only reaches efficiencies of about 30\% with respect to an ideal Carnot cycle, which leads to high electrical power consumption. For this reason, part of the FCC study is to perform R\&D on novel refrigeration down to \SI{40}{K} based on a neon-helium gas mixture, with the potential to reach efficiencies higher than 40\%, thus reducing the electrical energy consumption of the cryogenic system by 20\%. The cryogenic beam vacuum system ensures an excellent vacuum to limit beam-gas scattering, and protects the magnets from the synchrotron radiation of the high-energy beam, while efficiently removing the heat. It also prevents beam instabilities due to parasitic beam-surface interactions and electron-cloud effects. 
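The impact of the operating temperature on the electrical power bill, which motivates both the \SI{50}{K} synchrotron-radiation intercept and the refrigeration R\&D, can be made concrete with the Carnot limit. The sketch below assumes heat rejection at \SI{300}{K} and the 30\% efficiency relative to Carnot quoted above; both numbers are taken as illustrative inputs:

```python
# Electrical power per watt of heat removed, for a refrigerator running
# at an assumed 30% of Carnot efficiency and rejecting heat at 300 K.
def watts_per_watt(t_cold, t_warm=300.0, eff=0.30):
    carnot = (t_warm - t_cold) / t_cold   # ideal specific work, W per W
    return carnot / eff

for T in (1.9, 50.0):
    print(f"T = {T:5.1f} K -> ~{watts_per_watt(T):6.1f} W electrical per W of heat")
```

Removing a watt of heat at \SI{50}{K} is roughly thirty times cheaper than at \SI{1.9}{K}, which is why the synchrotron-radiation load is intercepted at the beam screen rather than at the cold mass, and why HTS magnets operating at elevated temperature would relax the cryogenic power needs.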
Note that the LHC vacuum system design is not suitable for the FCC--hh; hence, a novel design has been developed within the scope of the EuroCirCol H2020-funded project. It is as compact as possible, to minimize the magnet aperture and consequently the magnet cost. The beam screen features an antechamber and is copper-coated to limit the parasitic interaction with the beam; the shape also reduces the seeding of the electron cloud by backscattered photons, and additional carbon coating or laser treatment prevents the build-up. This novel system is operated at \SI{50}{K}, and a prototype has been validated experimentally in the KARA synchrotron radiation facility at KIT (Germany). The RF system is similar to that of the LHC, with an RF frequency of \SI{400}{MHz}, although it provides a higher maximum total voltage of \SI{48}{MV}. The current design uses 24 single-cell cavities. To adjust the bunch length in the presence of longitudinal damping by synchrotron radiation, controlled longitudinal emittance blow-up by band-limited RF phase noise is implemented. \subsection{Ion operation} A first parameter set for ion operation has been developed based on the current injector performance. If two experiments operate simultaneously for 30 days, one can expect an integrated luminosity in each of them of \SI{6}{\pico \barn^{-1}} and \SI{18}{\pico \barn^{-1}} for proton-lead operation with initial and nominal parameters, respectively. For lead-lead operation, \SI{23}{\nano \barn^{-1}} and \SI{65}{\nano \barn^{-1}} could be expected, although more detailed studies are in progress to address the key issues in ion production and collimation and to review the luminosity predictions. \section{Civil engineering} As stated above, the FCC--hh collider will be installed in a quasi-circular tunnel, composed of arc segments interleaved with straight sections, with an inner diameter of at least \SI{5.5}{m} and a circumference of \SI{91.17}{km}. 
This internal tunnel diameter is required to house all the necessary equipment for the machine, while providing sufficient space for transport and ensuring compatibility between the FCC--hh and FCC--ee requirements. Figure~\ref{fig:tunnel} shows the cross section of the tunnel in a typical arc region, including several of the required ancillary systems and services. Furthermore, about \SI{8}{km} of bypass tunnels, about 18 shafts, 10 large caverns, and 8 new surface sites are part of the infrastructure to be built. \begin{figure}[htb] \begin{center} \includegraphics[trim=2truemm 0truemm 2truemm 0truemm,width=0.60\hsize,clip=]{FCC-hh_tunnel.pdf} \end{center} \caption{Cross section of the FCC--hh tunnel in an arc (from~\cite{FCC-hhCDR}). The gray equipment on the left represents the cryogenic distribution line. A \SI{16}{T} superconducting magnet is shown in the middle, mounted on a red support element. An orange transport vehicle with another superconducting magnet is also shown, in the transport passage.} \label{fig:tunnel} \end{figure} The underground structures should be located as much as possible in the sedimentary rock of the Geneva basin, known as Molasse (which provides good conditions for tunneling), and should avoid the limestone of the nearby Jura. Moreover, the depth of the tunnel and shafts should be minimized to control the overburden pressure on the underground structures and to limit the length of the service infrastructure. These requirements, along with the constraint imposed by the connection to the existing accelerator chain through new beam transfer lines, led to the clear definition of the study boundary, which must be respected by all possible tunnel layouts considered. A slope of 0.2\% in a single plane will be used for the tunnel, to optimize the geology intersected by the tunnel and the depth of the shafts, as well as to implement a gravity drainage system. 
The majority of the machine tunnel will be constructed using tunnel boring machines, while the sector passing through limestone will be mined. The CDR study was based on geological data from previous projects and data available from national services; on this basis, the civil engineering project is considered feasible, both in terms of technology and of project risk control. It is also clear that dedicated ground and site investigations are required during the early stage of the preparatory phase to confirm the findings, to provide a comprehensive technical basis for an optimized placement, and to prepare for project planning and implementation. It is worth mentioning that, for the access points and their associated surface structures, priority has been given to the identification of possible locations that are feasible from socio-urbanistic and environmental perspectives. Here too, the technical feasibility of the construction has been studied and is deemed achievable. \section{Detector considerations} The FCC--hh is both a discovery and a precision measurement machine, with a mass reach increased by a factor of seven with respect to the current LHC. The much larger cross sections for SM processes, combined with the higher luminosity, lead to a significant increase in measurement precision. This implies that the detector must be capable of measuring multi-TeV jets, leptons, and photons from heavy resonances with masses up to \SI{50}{TeV}, as well as the known SM processes with high precision, and of being sensitive to a broad range of BSM signatures at moderate $p_\mathrm{T}$. Given the low mass of SM particles compared to the \SI{100}{TeV} collision energy, many SM processes feature a significant forward boost, while keeping transverse momentum distributions comparable to LHC energies. 
Hence, a detector for \SI{100}{TeV} must increase the acceptance for precision tracking and calorimetry to $|\eta| \approx 4$, while retaining the $p_\mathrm{T}$ thresholds for triggering and reconstruction at levels close to those of the current LHC detectors. The large number of p–p collisions per bunch crossing, which leads to the so-called pile-up, imposes stringent criteria on the detector design. Indeed, the present LHC detectors cope with pile-up up to 60, the HL--LHC will generate values of up to 200, whereas the expected value of 1000 for the FCC--hh poses a technological challenge. Novel approaches, specifically in the context of high precision timing detectors, will likely allow such numbers to be handled efficiently. \begin{figure}[htb] \begin{center} \includegraphics[trim=2truemm 0truemm 2truemm 0truemm,width=0.90\hsize,clip=]{FCC-hh_detector.pdf} \end{center} \caption{Conceptual layout of the FCC--hh reference detector (from~\cite{FCC-hhCDR}). It features an overall length of \SI{50}{m} and a diameter of \SI{20}{m}. A central solenoid with \SI{10}{m} diameter bore and two forward solenoids with \SI{5}{m} diameter bore provide a \SI{4}{T} field for momentum spectroscopy in the entire tracking volume.} \label{fig:detector} \end{figure} Figure~\ref{fig:detector} shows the conceptual FCC--hh reference detector, which serves as a concrete example for subsystem and physics studies aimed at identifying areas where dedicated R\&D efforts are needed. The detector has a diameter of \SI{20}{m} and a length of \SI{50}{m}, similar to the dimensions of the ATLAS detector at the LHC. The central detector, with coverage of $|\eta| < 2.5$, houses the tracking, electromagnetic calorimetry, and hadron calorimetry surrounded by a \SI{4}{T} solenoid with a bore diameter of \SI{10}{m}. The required performance for $|\eta| > 2.5$ is achieved by displacing the forward parts of the detector away from the interaction point, along the beam axis. 
Two forward magnet coils, generating a \SI{4}{T} solenoid field, with an inner bore of \SI{5}{m} provide the required bending power. Within the volume covered by the solenoids, high-precision momentum spectroscopy up to $|\eta| \approx 4$ and tracking up to $|\eta| \approx 6$ are ensured. Alternative layouts concerning the magnets of the forward region are also studied~\cite{FCC-hhCDR}. The tracker is specified to provide better than 20\% momentum resolution at $p_\mathrm{T}=\SI{10}{TeV/c}$ for heavy $Z'$ type particles, and better than 0.5\% momentum resolution at the multiple scattering limit, at least up to $|\eta| = 3$. The tracker cavity has a radius of \SI{1.7}{m} with the outermost layer at around \SI{1.6}{m} from the beam, providing the full spectrometer arm up to $|\eta| = 3$. The electromagnetic calorimeter (EMCAL) has a thickness of around 30 radiation lengths, and together with the hadron calorimeter (HCAL), provides an overall calorimeter thickness of more than 10.5 nuclear interaction lengths to ensure 98\% containment of high-energy showers and to limit punch-through to the muon system. The EMCAL is based on liquid argon (LAr) due to its intrinsic radiation hardness. The barrel HCAL is a scintillating tile calorimeter with steel and Pb absorbers, divided into a central and two extended barrels. The HCALs for the endcap and forward regions are also based on LAr. The requirement of calorimetry acceptance up to $|\eta| \approx 6$ translates into an inner active radius of only \SI{8}{cm} at a $z$-distance of \SI{16.6}{m} from the interaction point. The EMCAL is specified to have an energy resolution of around $10\%/\sqrt{E}$, while the HCAL is specified to reach around $50\%/\sqrt{E}$ for single particles. The features of the muon system have a significant impact on the overall detector design. 
As nowadays there is little doubt that large-scale silicon trackers will be core parts of future detectors, the emphasis on standalone muon performance is less pronounced, and the focus is rather shifted towards the aspects of muon trigger and muon identification. In the reference detector, the magnetic field is unshielded, with several positive side effects that contribute to a considerable cost reduction. The unshielded coil can be lowered through a shaft of \SI{15}{m} diameter and the detector can be installed in a cavern of \SI{37}{m} height and \SI{35}{m} width, similar to the present ATLAS cavern. The magnetic stray field reaches \SI{5}{mT} at a radial distance of \SI{50}{m} from the beamline, so that no relevant stray field leaks into the service cavern, placed \SI{50}{m} away from the experiment cavern and separated by rock. The shower and absorption processes inside the forward calorimeter produce a large number of low-energy neutrons, a significant fraction of which enters the tracker volume. To keep these neutrons from entering the muon system and the detector cavern, a heavy radiation shield is placed around the forward solenoid magnets to close the gap between the endcap and forward calorimeters. The technologies selected for the various subsystems must withstand significant radiation levels. On the first silicon layer at $r = \SI{2.5}{cm}$ the charged-particle rate is around \SI{10}{GHz \per cm^2}, and it drops to about \SI{3}{MHz \per cm^2} at the outer radius of the tracker, whereas inside the forward EMCAL the number rises to \SI{100}{GHz \per cm^2}. The \SI{1}{MeV} neutron equivalent fluence, a key number for long-term damage of silicon sensors and electronics in general, amounts to $6 \times 10^{17} \SI{}{\per cm^2}$ for the first silicon layer; beyond $r = \SI{40}{cm}$ the number drops below $10^{16}\SI{}{\per cm^2}$, and in the outer parts of the tracker it is around $5 \times 10^{15} \SI{}{\per cm^2}$. 
Technologies used for the HL--LHC detectors are therefore applicable when $r > \SI{40}{cm}$, while novel sensors and readout electronics have to be developed for the innermost parts of the tracker. The charged particle rate in the muon system is dominated by electrons, created from high energy photons in the \SI{}{MeV} range by processes related to thermalization and capture of neutrons that are produced in hadron showers mainly in the forward region. In the barrel muon system and the outer endcap muon system, the charged particle rate does not exceed \SI{500}{Hz \per cm^2}; the rate in the inner endcap muon system increases to \SI{10}{kHz \per cm^2}, and to \SI{500}{kHz \per cm^2} in the forward muon system, at a distance of \SI{1}{m} from the beam. These rates are comparable to those of the muon systems of the current LHC detectors; therefore, gaseous detectors used in these experiments can be adopted. \section{Cost and schedule} In the FCC integrated project, the FCC--hh is preceded by the lepton collider Higgs, top, and electroweak factory, FCC--ee. In this scenario, both the civil engineering and the general technical infrastructures of the FCC--ee can be fully reused for the FCC--hh, thus substantially lowering the investments for the latter to \SI{17000}{MCHF}, according to the CDR estimate \cite{FCC-hhCDR}. The particle collider- and injector-related investments amount to 80\% of the FCC--hh cost, namely to about \SI{13600}{MCHF}. The major part of this accelerator cost corresponds to the expected price of the 4700 Nb$_3$Sn \SI{16}{T} main dipole magnets, totaling \SI{9400}{MCHF}, for a target cost of \SI{2}{MCHF}/magnet. For completeness, we note that in the CDR, the construction cost for FCC--hh as a single standalone project, i.e.\ without prior construction of an FCC--ee lepton collider, was estimated to be about \SI{24000}{MCHF} for the entire project. 
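As a quick arithmetic cross-check (ours, not taken from the CDR), the quoted cost figures are mutually consistent:

```latex
% Dipole total and accelerator share, from the numbers quoted above:
\[
  4700 \times \SI{2}{MCHF} = \SI{9400}{MCHF},
  \qquad
  \frac{\SI{13600}{MCHF}}{\SI{17000}{MCHF}} = 0.80 ,
\]
% i.e. the stated dipole-magnet total and the 80% collider- and
% injector-related fraction of the integrated-project cost both check out.
```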
The FCC--hh operation costs, other than electricity cost, are expected to remain limited, based on the evolution from LEP to LHC operation today, which shows a steady decrease in the effort needed to operate, maintain and repair the equipment. The cost-benefit analysis of the LHC/HL--LHC programme reveals that a research infrastructure project of such a scale and high-tech level has the potential to generate significant socio-economic value throughout its lifetime, in particular if the tunnel, surface, and technical infrastructures from a preceding project have been amortized. In the integrated FCC project, disassembly of the FCC--ee and subsequent installation of the FCC--hh take about 8--10 years. The projected duration for the operation of the FCC--hh facility is 25 years, to complete the currently envisaged proton-proton collision physics programme. As a combined, ``integrated'' project, namely FCC--ee followed by FCC--hh, the FCC covers a total span of at least 70 years, i.e.~until the end of the 21st century. \section{Progress since the CDR} \label{sec:progress} \subsection{Evolution of the baseline layout} Among the several domains of activity that have been pursued since the publication of the CDR, it is important to stress the intense efforts devoted to placement studies, which refined the results discussed in~\cite{FCC-hhCDR}. These aim to determine an optimal tunnel layout that could fulfill the multiple constraints imposed by the geological situation and by territorial and environmental aspects. Furthermore, in the framework of FCC--ee studies, it emerged that implementing four experimental interaction points is an interesting option worth investigating. Beam dynamics considerations impose a symmetrical positioning of the four experimental points. Hence, to allow sharing the experimental caverns between FCC--ee and its hadron companion, the same principle should also be applied to the FCC--hh lattice. 
The outcome of these considerations is the new layout shown in Fig.~\ref{fig:FCC-hh-current}. The circumference of the proposed layout is \SI{91.17}{km}. The proposed layout has an appealing side effect, namely, only eight access points are present, with a non-negligible beneficial impact on the civil engineering works and costs. \begin{figure}[htb] \begin{center} \includegraphics[trim=50truemm 0truemm 50truemm 0truemm,width=0.80\hsize,clip=]{FCC-hh_layout_v3.pdf} \end{center} \caption{Sketch of the proposed eight-point FCC--hh layout.} \label{fig:FCC-hh-current} \end{figure} The four experimental points are located in PA, PD, PG, and PJ, respectively. The length of the straight sections has been revised, following the results of the placement studies: a short straight section, \SI{1.4}{km} in length like in the baseline lattice, is used to house the experimental interaction points; a long straight section, \SI{2.16}{km} in length, is used to house the key systems. Currently, it is proposed to install the beam dump in PB, the betatron collimation in PF, the momentum collimation in PH, and the RF system in PL. These preliminary assignments should be confirmed by detailed studies. Such studies should also assess the feasibility of the optics required for the various systems, following the sizable length reduction (from \SI{2.8}{km} of the CDR baseline version to \SI{2.16}{km} for the new version). The total length of the arcs is \SI{76.93}{km}, and, unlike the baseline configuration, all arcs have the same length. The reduction of the total arc length implies that the collision energy falls short of \SI{100}{TeV} by a few TeV, which is not considered an obstacle. The FODO cell length is unchanged. The rearrangement of the experimental points has an impact on the injection and transfer line design. 
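The few-TeV shortfall mentioned above can be sized with a standard magnetic-rigidity estimate, $E_\mathrm{beam}[\mathrm{TeV}] \approx 0.3\,B[\mathrm{T}]\,\rho[\mathrm{km}]$; the dipole filling factor $f \approx 0.8$ used below is our assumption for illustration, not a CDR figure:

```latex
% Back-of-envelope rigidity estimate (assumed filling factor f ~ 0.8):
\[
  \rho \approx \frac{f\,L_\mathrm{arc}}{2\pi}
       \approx \frac{0.8 \times \SI{76.93}{km}}{2\pi}
       \approx \SI{9.8}{km},
  \qquad
  E_\mathrm{beam} \approx 0.3 \times 16 \times 9.8 \approx \SI{47}{TeV},
\]
% i.e. a centre-of-mass energy of roughly 94 TeV with 16 T dipoles,
% consistent with a collision energy a few TeV short of 100 TeV.
```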
The configuration inherited from the LHC design, in which the injection is performed in the same straight section in which the secondary experiments are installed, has to be dropped as it would lead to very long transfer lines. Therefore, the current view consists of combining the injection with the beam dump (in PB) and with the RF (in PL). Then, to save tunnel length, it is proposed that the transfer lines run in the FCC--hh ring tunnel from close to PA until the injection point (see Fig.~\ref{fig:FCC-hh-current}). An additional benefit of this solution is that the transfer line magnets would be normal-conducting and rather relaxed in terms of magnetic properties. Integration of the transfer lines in the ring tunnel is being actively pursued to assess the feasibility of this proposal. \subsection{Alternative configuration} In parallel to the studies for the optimization of the baseline layout, some efforts have been devoted to the analysis of alternative approaches to the generation of the ring optics. Indeed, the standard paradigm for collider optics consists of using separate-function magnets, in particular in the regular arcs, in conjunction with a FODO structure. However, a combined-function optics might provide interesting features, particularly appealing for an energy-frontier collider. A combined-function optics has the potential of providing a higher dipole filling factor, thus opening interesting optimization paths for the dipole field and beam energy. Currently, this research has explored the benefits of a combined-function periodic cell~\cite{our_paper6}. It has also optimized some of the parameters of the cell, such as its length~\cite{our_paper8}, showing that the combined-function magnet is equally feasible as the baseline magnet. Furthermore, a complex optics, including arc and dispersion suppressors, can indeed be realized with combined-function magnets. 
As a next step, the investigations will consider the various systems of corrector magnets planned in the baseline FODO cell and optimize them in the context of a combined-function periodic cell. \section{Conclusions} The FCC--hh baseline comprises a power-saving, low-temperature superconducting magnet system based on an evolution of the Nb$_3$Sn technology pioneered at the HL--LHC. An energy-efficient cryogenic refrigeration infrastructure, based on a neon-helium light gas mixture, and a high-reliability and low-loss cryogenic distribution infrastructure are also key elements of the baseline. Highly segmented kickers, superconducting septa and transfer lines, and local magnet energy recovery, are other essential components of the proposed FCC--hh design. Furthermore, technologies that are already being gradually introduced at other CERN accelerators will be deployed in the FCC--hh. Given the time scale of the FCC integrated programme, which allows for around 30 years of R\&D for the FCC--hh, an increase in the energy efficiency of a particle collider can be expected thanks to high-temperature superconductor R\&D, carried out in close collaboration with industrial partners. The reuse of the entire CERN accelerator chain, serving also a concurrent physics programme, is an essential lever to arrive at an overall sustainable research infrastructure at the energy frontier. The FCC--hh will be a strong motor of economic and societal development in all participating nations, owing to its large scale and its intrinsic character as an international fundamental research infrastructure, combined with tight involvement of industrial partners. Finally, it is worth stressing the training provided at all education levels by this marvelous scientific tool. \bibliographystyle{JHEP}
\section{Introduction} In this paper, a manifold is assumed to be closed, connected, orientable and smooth. The systole of a manifold $M$ is the least length of non-contractible closed loops in $M$. One can generalize this concept to the least volume of $k$--dimensional nonzero homology classes, the so-called homology systole. One may expect such systoles to bear some relation to the entire volume of $M$, and it is natural to ask what kind of relationship exists. As an answer, Gromov proved a theorem which says that the existence of a non-trivial cup product implies a stable isosystolic inequality, as follows. \begin{GromovThm}{\rm \cite[7.4.C]{Gro83}}\qua \label{thm:Gromov83} Let $M$ be an $n$--manifold (which can be non-orientable). If there exist some reduced real cohomology classes $\alpha_1^*, \cdots, \alpha_k^*$ with $\alpha_i^*$ in $\tilde{H}^{d_i}(M;\mathbb{R})$ and a nonzero cup product $\alpha_1^* \cup \cdots \cup \alpha_k^*$ in $\tilde{H}^n(M;\mathbb{R})$, then there exists $C > 0$ satisfying \begin{align*} \prod_{i=1}^{k}\stsys_{d_i}(M,\mathcal{G}) \le C \cdot \mass\bigl( [M], \mathcal{G} \bigr) \end{align*} for every Riemannian metric $\mathcal{G}$ on $M$, where $\stsys_{d_i}$ is the stable $d_i$--systole and $[M]$ is the fundamental class of $M$ with coefficients in $\mathbb{Z}$ for orientable cases or $\mathbb{Z}/2\mathbb{Z}$ for non-orientable cases. \end{GromovThm} The greatest $k$ satisfying the stable isosystolic inequality is called the stable systolic category of $M$, which was introduced by Katz and Rudyak \cite{KatRud06} and is known to be a homotopy invariant by Katz and Rudyak \cite{KatRud08}. We will show that the stable systolic category of a $0$--universal manifold is also invariant under rational equivalences in \fullref{cor:REinvariance}. For an orientable manifold $M$, Gromov's Theorem implies that the stable systolic category is not smaller than the real cup-length. 
So, is there some manifold $M$ such that the stable systolic category is greater than the real cup-length? If such an $M$ exists, then the converse of Gromov's Theorem fails for $M$; this interesting question has not yet been answered. Instead, equality of the two invariants is known for some manifolds; see, eg, Dranishnikov and Rudyak \cite{DraRud09}. In this paper, we also show further equalities later in \fullref{thm:LSSCPstsyscat} and \fullref{thm:CatstsysProductSphere}. \subsection{Definition of the stable systolic category} Let $(X,A)$ be an object of the local Lipschitz category $\mathfrak{L}$. In this paper, we suppose $X$ is a nonempty subset of some finite dimensional Euclidean space $\mathbb{R}^n$ with the standard norm. So $X$ possesses the restricted metric of $\mathbb{R}^n$. Let $G$ be a $\mathbb{Z}$--module with a norm $| \cdot |$ which makes $G$ a complete metric space. If $G$ is $\mathbb{Z}$ or $\mathbb{R}$, we assume that the norm of $G$ is the standard norm. The comass of a differential form $\omega$ on $X$ is defined as $$ \comass(\omega) := \sup \bigl\{ | \omega_x(\tau) | : x \in X, \text{orthonormal } q \text{--frame } \tau \bigr\} . $$ Also, the mass of a $q$--current $T$ in $X$ is the dual norm of the comass, ie, $$ \mass(T) := \sup \bigl\{ T(\omega) : \text{differential } q\text{--form } \omega, \comass(\omega) \le 1 \bigr\}. $$ A Lipschitzian singular $q$--cube $\kappa\co I^q \to X$ induces a homomorphism $\kappa_{\flat}$ from the module of polyhedral chains $\polyhedralD_q(X;G)$ to the module of rectifiable currents $\rectifiableD_q(X;G)$. Then the mass of $\kappa$ is defined by the mass of the image $\kappa_{\flat}I^q$, where $I^q$ also denotes the polyhedral $q$--current corresponding to the unit rectangular parallelepiped $I^q$. This correspondence of $\kappa$ to $\kappa_{\flat}I^q$ gives a chain map $\Phi$ of degree $0$ from the chain complex of all Lipschitzian singular cubes into the chain complex of flat chains $\flatD_*(E|X)$. 
Then we can verify that $\Phi$ induces an isomorphism $\Phi_*$ between the flat homology module $H^{\flat}_q(X,A;G)$ and the singular homology module $H_q(X,A;G)$ for all $q$. One can find more details of these definitions in Federer \cite{Fed69}, Federer \cite{Fed74}, Federer and Fleming \cite{FedFle60}, Serre \cite{Ser51} and White \cite{White99}. For a Lipschitzian singular chain $c$, there exists a representation $\sum_i \kappa_i \otimes g_i$ where $g_i$ is contained in $G$ and the $\kappa_i$ are Lipschitzian singular $q$--cubes which do not overlap each other (subdivide if necessary). Then the {\it mass} of $c$ is defined as $$ \mass(c) := \sum_i |g_i| \cdot \mass(\kappa_i) \,. $$ The {\it mass} or {\it volume} of a singular homology class $\eta$ in $H_q(X,A;G)$ is defined by $$ \mass(\eta;G) := \inf\bigl\{ \mass(c) : \eta = [c],\ c \text{ is a Lipschitzian cycle} \bigr\} \,. $$ If $G$ is $\mathbb{R}$, the mass is a norm on the homology vector spaces. We will omit $G$ in the case of $\mathbb{Z}$. The {\it $q$--dimensional homology systole} of $(X,A)$ is defined by the infimum of the mass of non-trivial $q$--th integral homology classes. However, Gromov \cite{Gro83} claims that Gromov's Theorem fails for $S^1 \times S^3$ if we consider the homology systoles instead of the stable systoles. Briefly, we can consider the stable systole as a systole in the real homology vector spaces. Here we give a formal definition of the stable systole. The {\it stable mass} on $H_q(X,A;\mathbb{Z})$ is defined as $$ \stmass(\eta) := \inf\bigl\{ \mass(m \cdot \eta) / m : 0 < m \in \mathbb{Z} \bigr\} $$ for all $\eta \in H_q(X,A;\mathbb{Z})$. The inclusion $\iota\co \mathbb{Z} \to \mathbb{R}$ induces the coefficient homomorphism $\iota_*$ on homology. Federer \cite[5.8]{Fed74} showed that the stable mass of $\eta$ is equal to the mass of the image $\iota_*\eta$. 
So we can define the {\it $q$--dimensional stable systole} of $(X,A)$ as \begin{align*} \stsys_q(X,A) := \inf\left\{ \stmass(\eta) : \eta \in H_q(X,A;\mathbb{Z}),\ \iota_*\eta \neq 0 \right\}. \end{align*} A homology $q$--systole or a stable $q$--systole is called {\it trivial} if it is infinite. If the $q$--th real homology vector space $H_q(X,A;\mathbb{R})$ is zero, then the stable $q$--systole is trivial for all Riemannian metrics on $(X,A)$. Hence if the $q$--th integral homology module $H_q(X,A;\mathbb{Z})$ is a torsion module, then the stable $q$--systole is trivial for every metric on $(X,A)$. For a given positive integer $n > 0$, a $k$--tuple $P = (p_1, \cdots, p_k)$ of positive integers is called a {\it partition} of $n$ if $n = p_1 + \cdots + p_k$ and $p_1 \le \cdots \le p_k \le n$. A partition $P$ is called {\it positive} (or {\it non-negative}) if $p_i > 0$ (or $p_i \ge 0$) for all $i$. The {\it size} of a partition, denoted by $\size(P)$, is defined as the number of positive integers contained in the partition. Hence if a $k$--tuple $P$ is a positive partition, then the size of the partition is $k$. From now on, we suppose a partition is positive unless otherwise stated. For a partition $P$, the {\it duplicated number} of $p_i$ is the cardinality of the elements in $P$ that are equal to $p_i$. Now we define concepts for an $n$--manifold $M$. A partition $P$ of $n$ is called {\it stable systolic categorical} for $M$, if there exist a real number $C > 0$ and non-trivial stable $p_i$--systoles such that $$\prod_{i=1}^{\size(P)}\stsys_{p_i}(M,\mathcal{G}) \le C \cdot \mass\bigl( [M], \mathcal{G}; \mathbb{Z}/2\mathbb{Z} \bigr)$$ for every Riemannian metric $\mathcal{G}$ on $M$, where $[M]$ is the fundamental class in $H_n(M;\mathbb{Z}/2\mathbb{Z})$. 
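As a simple illustration of these notions (a standard elementary example, not one of the results of this paper), consider the flat torus:

```latex
% Flat torus T^2 = R^2 / Z^2 with the rescaled flat metric t^2 G, t > 0.
% The shortest nonzero real 1--homology classes are the coordinate
% circles, and the mass of the fundamental class is the area, so
\[
  \stsys_1\bigl(T^2, t^2\mathcal{G}\bigr) = t,
  \qquad
  \mass\bigl( [T^2], t^2\mathcal{G}; \mathbb{Z}/2\mathbb{Z} \bigr) = t^2
  = \stsys_1\bigl(T^2, t^2\mathcal{G}\bigr)^2 .
\]
% Hence the partition P = (1,1) of n = 2 satisfies the defining
% inequality with C = 1 on this family of metrics; that it holds for
% every Riemannian metric follows from Gromov's Theorem applied to a
% nonzero cup product of two 1--dimensional classes, so P is stable
% systolic categorical for T^2.
```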
\begin{defn} The {\it stable systolic category} of $M$ is defined by $$\catstsys(M):=\sup \bigl\{\mathrm{size}(P):P \text{ is a stable systolic categorical partition for } M \bigr\}.$$ \end{defn} The {\it real cup-length} of $M$ is defined by $$\cuplength_{\mathbb{R}}(M) := \min \big\{ k \ge 0 : \alpha_0 \cup \alpha_1 \cup \cdots \cup \alpha_k = 0 \text{ for all } \alpha_i \in \tilde{H}^*(M;\mathbb{R}) \big\}$$ where $\tilde{H}^*(M;\mathbb{R})$ is the reduced real cohomology ring of $M$. As mentioned above, the real cup-length is a lower bound for the stable systolic category by Gromov's Theorem. If $M$ is non-orientable, then the top dimensional real cohomology vector space $H^n(M;\mathbb{R})$ vanishes. Since every cohomology class in $H^n(M;\mathbb{R})$ vanishes, we cannot apply Gromov's Theorem in the top dimension. This is the reason to consider only orientable manifolds in this paper. \subsection{Acknowledgments} The author expresses gratitude to Professor Norio Iwase for his guidance and support in writing this paper. \section{Preliminaries on the stable systoles} Many equations and inequalities for the mass have been studied. One can find those results in Babenko \cite{Bab93}, Federer \cite{Fed69} and Whitney \cite{Whi57}. Here we state or recall some of them for the stable systoles, with appropriate modifications applied. \begin{prop} \label{prop:stable0systole} For a local Lipschitz neighborhood retract $X$ in $\mathbb{R}^n$, the stable $0$--systole is $1$. \end{prop} \begin{proof} Let $\current_0(X)$ be the vector space of $0$--currents. A map $\mathfrak{d}\co X \to \current_0(X)$ can be defined as $\mathfrak{d}(x)(\omega) = \mathfrak{d}_{x}(\omega) := \omega(x)$ for a point $x$ of $X$ and a differential $0$--form $\omega$ on $X$. Then $\mathfrak{d}_x$ is a polyhedral $0$--current with $\mass(\mathfrak{d}_x) = 1$. This implies that $\mathfrak{d}_{x}$ is a normal $0$--cycle with coefficients in~$\mathbb{Z}$. 
Furthermore, the image $\iota_*\Phi_*^{-1}[\mathfrak{d}_x]$ does not vanish in $H_0(X;\mathbb{R})$. So we have $$\stsys_0(X) = \mass\bigl( \iota_*\Phi_*^{-1}[\mathfrak{d}_x] \bigr) = 1$$ for an arbitrary point $x$ in $X$. \end{proof} \begin{lemma} \label{lemma:MassRescaleEquality} For a local Lipschitz neighborhood retract $X$ in $\mathbb{R}^n$, if one rescales the standard metric $\mathcal{G}$ on $\mathbb{R}^n$ by the square of a real number $t > 0$, then the mass of a homology class $\eta \in H_q(X;G)$ increases by a factor of $t^q$. Furthermore, the stable $q$--systole satisfies $$\stsys_q(X,t^2 \mathcal{G}|X) = t^q \cdot \stsys_q(X, \mathcal{G}|X)$$ where $\mathcal{G}|X$ is the restriction of $\mathcal{G}$ to $X$. \end{lemma} \begin{proof} A similar result was introduced by Whitney \cite{Whi57} for the real flat chains. So the first result holds for an arbitrary homology class. Also, the definition of the stable systole implies \begin{align*} \stsys_q(X,t^2 \mathcal{G}|X) & = \inf\left\{ t^q \cdot \mass(\iota_*\eta, \mathcal{G}|X; \mathbb{R}) : \eta \in H_q(X,A;\mathbb{Z}),\ \iota_*\eta \neq 0 \right\} \end{align*} which gives the equality for the stable systoles. \end{proof} \begin{prop}{\cite[X.6 and X.7]{Whi57}}\qua \label{prop:PushForwardMassInequality} Let $X$ and $Y$ be open subsets in $\mathbb{R}^m$ and $\mathbb{R}^n$ respectively. For a locally Lipschitzian map $f\co X \to Y$ and an integral rectifiable $q$--current $T$ whose support is contained in a compact subset $K$ of $X$, there exists an inequality $$\mass(f_{\flat}T) \le \Lip(f|K)^q \cdot \mass(T)$$ where $\Lip(f|K)$ is the lower bound of Lipschitz constants of the restriction $f|K$. \end{prop} \begin{prop} \label{prop:MappedLowerBoundMass} Let $(X,A)$ and $(Y,B)$ be local Lipschitz neighborhood retract pairs in $\mathbb{R}^m$ and $\mathbb{R}^n$ respectively. 
If $f\co (X,A) \to (Y,B)$ is a locally Lipschitzian map, then for any homology class $\eta$ of $H_q(X,A;G)\,$, there is a compact subset $K$ of $\mathbb{R}^m$ which satisfies $$0 \le \mass(f_*\eta;G) \le \Lip(f|K)^q \cdot \mass(\eta;G)$$ where $f_*\co H_q(X,A;G) \to H_q(Y,B;G)$ is the induced homomorphism. \end{prop} \begin{proof} Note that $f$ induces a homomorphism $f_\flat\co Z_q(X,A;G) \to Z_q(Y,B;G)$ on flat cycles as well as $f_{\flat}\flatD_q(\mathbb{R}^m|A;G) \subset \flatD_q(\mathbb{R}^n|B;G)$. For a given flat homology class $\Phi_*\eta$, let $T$ be a representative normal $q$--cycle in $Z_q(X,A;G)$. The naturality of $\Phi_*$ implies $\Phi_* f_* \eta = f_* \Phi_* \eta = f_*[T] = [f_{\flat} T]$. Also, the relation of cosets $[f_{\flat} T] = [f_{\flat}T+f_{\flat}\flatD_q(\mathbb{R}^m|A;G)] = [f_{\flat}T+\flatD_q(\mathbb{R}^n|B;G)]$ implies the relation of the sets $$\bigl\{ f_\flat T : [T] = \Phi_*\eta \bigr\} \subset \bigl\{ S : [S] = \Phi_*f_*\eta \bigr\} \subset Z_q(Y,B;G) \,.$$ With the definition of the mass of a homology class, we obtain \begin{align*} \mass(f_*\eta ; G) & \le \inf\bigl\{ \mass(f_\flat T) : [T] = \Phi_*\eta \bigr\}. \end{align*} Since $T$ is compactly supported, there is a compact subset $K$ of $\mathbb{R}^m$ with $\support(T) \subset \interior(K)$. Here we can apply \fullref{prop:PushForwardMassInequality} to $T$, so we have \begin{align*} \mass(f_*\eta ; G) & \le \Lip(f|K)^q \cdot \inf\bigl\{ \mass(T) : [T] = \Phi_*\eta \bigr\} \end{align*} which implies the result. \end{proof} \begin{lemma} \label{lemma:StableSystoleMappedInequality} Let $(X,A)$ and $(Y,B)$ be local Lipschitz neighborhood retract pairs. 
If a locally Lipschitzian map $f\co (X,A) \to (Y,B)$ induces a monomorphism $f_*\co H_q(X,A;\mathbb{R}) \to H_q(Y,B;\mathbb{R})$, then there is a compact subset $K$ in the ambient space of $X$ satisfying $$\stsys_q(Y,B) \le \Lip(f|K)^q \cdot \stsys_q(X,A) .$$ Furthermore, if $H_q(X,A;\mathbb{R})$ is nonzero, then $\stsys_q(Y,B)$ is a positive real number. \end{lemma} \begin{proof} \fullref{prop:MappedLowerBoundMass} and $f_*\bigl(H_q(X,A;\mathbb{R}) \setminus \{0\}\bigr) \subset \bigl(H_q(Y,B;\mathbb{R}) \setminus \{0\}\bigr)$ imply the existence of the inequality at the level of stable systoles. For an integral homology class $\eta$ with $\iota_*\eta$ nonzero, the image $f_*\iota_*\eta$ does not vanish, since $f_*$ is a monomorphism. Recall that the mass of real homology classes is a norm, hence $\mass(f_*\iota_*\eta)$ is a positive real number. Furthermore, the stable $q$--systole does not converge to zero, since $\mathbb{Z}$ is discrete. \end{proof} \begin{prop} \label{prop:MassCrossEquality} Let $X$ and $Y$ be open subsets of $\mathbb{R}^m$ and $\mathbb{R}^n$ respectively. For rectifiable currents $S$ in $\rectifiableD_p(X)$ and $T$ in $\rectifiableD_q(Y)$, the mass of their cross product is equal to the product of their masses, ie, $$\mass(S \times T) = \mass(S) \cdot \mass(T)$$ with respect to the product metric on $X \times Y$. \end{prop} \begin{proof} Since $S$ and $T$ are rectifiable currents, the mass can be written in terms of the associated Radon measures $\Vert S \Vert$, $\Vert T \Vert$ and $\Vert S \times T \Vert$. Therefore Fubini's Theorem (see Federer \cite[2.6.2.(2)]{Fed69}) implies the result: $$\mass(S \times T) = \Vert S \times T \Vert(X \times Y) = \Vert S \Vert (X) \cdot \Vert T \Vert (Y) = \mass(S) \cdot \mass(T) .$$ \end{proof} \begin{lemma} \label{lemma:QuotientMassCrossEquality} Let $(X,A)$ and $(Y,B)$ be local Lipschitz neighborhood retract pairs. 
For homology classes $\xi \in H_p(X,A;G)$ and $\eta \in H_q(Y,B;G)$, we can estimate \begin{align*} \mass(\xi \times \eta; G) & \le \mass(\xi; G) \cdot \mass(\eta; G) \\ \tag*{\text{\sl and}} \stsys_{p+q}\bigl( (X,A) \times (Y,B) \bigr) & \le \stsys_p(X,A) \cdot \stsys_q(Y,B) \end{align*} with respect to the product metric on $(X,A) \times (Y,B)$. \end{lemma} \begin{proof} Let $S$ and $T$ be representative rectifiable cycles corresponding to $\xi$ and $\eta$ respectively, ie, $\Phi_*\xi = [S]$ with $S \in Z^{\flat}_{p}(X,A;G)$ and $\Phi_*\eta = [T]$ with $T \in Z^{\flat}_{q}(Y,B;G)$. Then the naturality of the cross product implies that there is a representative rectifiable current of the form of a cross product $S \times T$ in the coset $[c] = \Phi_*(\xi \times \eta)$. Therefore \begin{align*} \bigl\{ S \times T : [S] \times [T] = \Phi_*\xi \times \Phi_*\eta \bigr\} & = \bigl\{ S \times T : [S \times T] = \Phi_*(\xi \times \eta) \bigr\} \\ & \subset \bigl\{ c : [c] = \Phi_*(\xi \times \eta) \bigr\} \\ & \subset Z^{\flat}_{p+q}\bigl( (X,A) \times (Y,B) ; G \bigr) . \end{align*} Hence \fullref{prop:MassCrossEquality} implies an inequality \begin{align*} \mass(\xi \times \eta; G) & \le \inf\bigl\{ \mass(S \times T) : [S] \times [T] = \Phi_*\xi \times \Phi_*\eta \bigr\} \\ & = \mass(\xi; G) \cdot \mass(\eta; G) \end{align*} on the homology level. To show the inequality of the stable systoles, recall that the cross product homomorphism \begin{align*} H_p(X,A;\mathbb{R}) \otimes H_q(Y,B;\mathbb{R}) \to H_{p+q}\bigl( (X,A) \times (Y,B) ; \mathbb{R} \bigr) \end{align*} is a monomorphism. Therefore we can estimate the stable $(p+q)$--systole as \begin{align*} \stsys_{p+q}\bigl( (X,A) \times (Y,B) \bigr) & \le \inf \left\{\mass(\xi \times \eta) : \begin{tabular}{l} $\xi \in H_p(X,A;\mathbb{Z})$, $\iota_*\xi \neq 0$, \\ $\eta \in H_q(Y,B;\mathbb{Z})$, $\iota_*\eta \neq 0$ \end{tabular} \right\} \\ & \le \stsys_p(X,A) \cdot \stsys_q(Y,B) \,. 
\end{align*} Here the second inequality follows from the result at the homology level. \end{proof} \begin{lemma} \label{lemma:StsysProjectionEquality} Suppose $X$ and $Y$ are local Lipschitz neighborhood retracts. If $Y$ is connected and the K\"{u}nneth formula gives an isomorphism of non-trivial vector spaces \begin{align*} H_q(X;\mathbb{R}) \otimes H_0(Y;\mathbb{R}) \cong H_q\bigl( X \times Y ;\mathbb{R} \bigr) \neq \{0\} \,, \end{align*} then the stable $q$--systole satisfies $$0 < \stsys_q\bigl( X \times Y \bigr) = \stsys_q(X) < \infty$$ with respect to the product metric on $X \times Y$. \end{lemma} \begin{proof} Let $\mathfrak{pr}_1\co X \times Y \to X$ be the first projection. From the assumption, for a nonzero homology class $\eta$ in $H_q ( X \times Y ;\mathbb{R} )$, there exist $[S] \neq 0$ in $H^{\flat}_q(X;\mathbb{R})$ and $[T] \neq 0$ in $H^{\flat}_0(Y;\mathbb{R})$ whose cross product is the image of $\eta$ in $H^{\flat}_q\bigl(X \times Y;\mathbb{R}\bigr)$ with the same positive mass, ie, $$\mass\bigl( [S] \times [T] \bigr) = \mass(\eta) > 0.$$ Note that the vector space of normal $0$--chains $\normalD_0(Y;\mathbb{R})$ is equal to the vector space of polyhedral $0$--chains $\polyhedralD_0(Y;\mathbb{R})$ which is generated by $\{\mathfrak{d}_y : y \in Y \}$ where $\mathfrak{d}$ is defined in the proof of \fullref{prop:stable0systole}. For any two points $y$ and $y'$ in $Y$, $[\mathfrak{d}_y] = [\mathfrak{d}_{y'}]$ implies that there is a nonzero real number $r$ such that $[T] = r [\mathfrak{d}_y]$ with $\mass [T] = |r| \cdot \mathfrak{d}_y (1_Y^*) = |r|$. Also, every $[S] \times [T]$ has a representation of the form $[r \cdot S] \times [\mathfrak{d}_y]$, therefore $\mathfrak{pr}_{1*}$ is an isomorphism with $\mathfrak{pr}_{1*} \bigl( [S] \times [T] \bigr) = [r \cdot S]$.
Hence \fullref{lemma:StableSystoleMappedInequality} implies $$\stsys_q(X \times Y) \ge \stsys_q(X) > 0$$ using the fact that $\mathfrak{pr}_{1}$ is a Lipschitz map with $\Lip(\mathfrak{pr}_{1}) = 1$. As a result, we obtain the equality by combining the result of \fullref{lemma:QuotientMassCrossEquality}. \end{proof} \section{Calculation by dimension and constructing metrics} \label{section:Result} First, we calculate the stable systolic category from dimensional information about homology. If the homology is simple, as for a real homology sphere, the stable systolic category can be determined from dimensional information alone. If an oriented manifold has a relatively simple cup-product structure, such as an $n$--fold product of spheres, then the stable systolic category can also be computed immediately. Such methods to calculate the stable systolic category can be generalized as follows. For a topological space $X$, let $\LPD(X)$ denote the {\it least positive dimension} of real cohomology vector spaces of $X$. So $\LPD(X) = l$ if and only if $\tilde{H}^i(X;\mathbb{R}) = \{0\}$ for $0 < i < l$ and $\tilde{H}^l(X;\mathbb{R}) \neq \{0\}$. If $M$ is an $m$--manifold, then $\LPD(M)$ is at most $m$. \begin{defn} An $n$--dimensional CW space $X$ is said to {\it have maximal real cup length}, if there exist real cohomology classes $\alpha_1, \cdots, \alpha_r$ with $\alpha_i \in \tilde{H}^{d_i}(X;\mathbb{R})$ whose cup product $\alpha_1 \cup \cdots \cup \alpha_r \in \tilde{H}^n(X;\mathbb{R})$ is nonzero, where $r := \lfloor n/\LPD(X) \rfloor$ and $\lfloor x \rfloor$ denotes the floor of a real number $x$. \end{defn} \begin{example} Let $S$ be a manifold which is a real homology sphere. Then $S$ has maximal real cup length, because $\LPD(S) = \dim(S)$. The $n$--fold direct product of $S$ also has maximal real cup length. The direct product $S^2 \times S^3$ of spheres has maximal real cup length.
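To verify the last claim concretely (a routine check using only the K\"{u}nneth formula): since $\tilde{H}^{1}(S^2 \times S^3;\mathbb{R}) = \{0\}$ and $\tilde{H}^{2}(S^2 \times S^3;\mathbb{R}) \neq \{0\}$, we have \begin{align*} \LPD(S^2 \times S^3) = 2 , \qquad r = \lfloor 5/2 \rfloor = 2 , \qquad \alpha \cup \beta \neq 0 \ \text{in} \ \tilde{H}^{5}(S^2 \times S^3;\mathbb{R}) , \end{align*} where $\alpha \in \tilde{H}^{2}$ and $\beta \in \tilde{H}^{3}$ are pullbacks of generators of the factors, so a nonzero cup product of length $r = 2$ exists.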
\end{example} \begin{cor} If an $m$--manifold $M$ has maximal real cup length, then the stable systolic category of $M$ is equal to the real cup-length of $M$, ie, $$\catstsys(M) = \cuplength_{\mathbb{R}}(M) = \lfloor m/\LPD(M) \rfloor .$$ \end{cor} \begin{proof} We only need to verify that $\catstsys(M) \le \cuplength_{\mathbb{R}}(M)$. Let $r := \lfloor m/\LPD(M) \rfloor$. If $(d_1, \cdots, d_{k})$ is a partition of $m$ such that each stable $d_i$--systole is non-trivial, then $d_i \ge \LPD(M)$, so there is an inequality $${k} \cdot \LPD(M) \le m = d_1+ \cdots + d_{k} < (r+1) \cdot \LPD(M)$$ which implies ${k} \le r = \cuplength_{\mathbb{R}}(M)$. \end{proof} In general, the direct product $M \times N$ of manifolds does not have maximal real cup length even if $M$ and $N$ have maximal real cup length. For example, the direct product of spheres $S^1 \times S^2$ does not have maximal real cup length. \begin{lemma}\label{lemma:lowerboundP} If manifolds $M_1^{m_1}, \cdots, M_n^{m_n}$ have maximal real cup length, then the stable systolic category of their $n$--fold direct product $M_1 \times \cdots \times M_n$ is at least the sum of the stable systolic categories of the $M_i$, ie, $$\catstsys\left( M_1 \times \cdots \times M_n \right) \ge \catstsys(M_1) + \cdots + \catstsys(M_n).$$ \end{lemma} \begin{proof} Since $M_i$ has maximal real cup length, there is a nonzero cup product $\bigcup_{j=1}^{r_i} \alpha_i^j$ in $H^{m_i}(M_i;\mathbb{R})$ where $r_i := \lfloor m_i/\LPD(M_i) \rfloor = \catstsys(M_i)$ for $1 \le i \le n$. By the K\"{u}nneth formula, the $n$--fold cross product on the top dimensions induces an isomorphism \begin{equation*} \bigotimes_{i=1}^n H^{m_i}(M_i;\mathbb{R}) \cong H^{m}\left(\textstyle\prod_{i=1}^n M_i; \mathbb{R}\right) \quad \text{where} \quad m := \textstyle\sum\limits_{i=1}^n m_i .
\end{equation*} This implies the existence of a nonzero cup product \begin{align*} \prod_{i=1}^n \textstyle\left(\bigcup_{j=1}^{r_i} \alpha_i^j\right) & = \bigcup_{i=1}^n \textstyle \mathfrak{pr}_i^*\biggl(\,\bigcup\limits_{j=1}^{r_i} \alpha_i^j\biggr) = \textstyle\bigcup\limits_{j=1}^{r_1} \mathfrak{pr}_1^*\alpha_1^j \cup \cdots \cup \bigcup\limits_{j=1}^{r_n} \mathfrak{pr}_n^*\alpha_n^j \end{align*} in the top-dimensional real cohomology vector space $H^{m}\left(\textstyle\prod_{i=1}^n M_i; \mathbb{R}\right)$, where $\mathfrak{pr}_i\co M_1 \times \cdots \times M_n \to M_i$ is the $i$--th projection. This cup product implies that $\catstsys\bigl(\textstyle\prod_{i=1}^n M_i\bigr) \ge r_1 + \cdots + r_n$ by Gromov's Theorem. \end{proof} \begin{prop}\label{prop:LPDofProduct} For manifolds $M$ and $N$, the least positive dimension of cohomology of $M \times N$ is the minimum of $\LPD(M)$ and $\LPD(N)$. \end{prop} \begin{proof} From the K\"{u}nneth formula, $H^i(M \times N;\mathbb{R}) = \{0\}$ for $0 < i < \min\bigl(\LPD(M),\LPD(N)\bigr)$. If $l := \min\bigl(\LPD(M),\LPD(N)\bigr) = \LPD(M)$, then $H^l(M;\mathbb{R})$ is nonzero and the cross product homomorphism $H^l(M;\mathbb{R}) \otimes H^0(N;\mathbb{R}) \to H^l(M \times N;\mathbb{R})$ is a monomorphism. Therefore $H^l(M \times N;\mathbb{R})$ is nonzero. The case $\LPD(M) > \LPD(N)$ follows from the same argument. \end{proof} For integers $i$ and $j \neq 0$, let $\MOD(i,j)$ denote the remainder from the division of $i$ by $j$. \begin{cor}\label{cor:productedLSSCP} Suppose manifolds $M^m$ and $N^n$ have maximal real cup length, and let $l := \LPD(M \times N)$. If $M$ and $N$ satisfy the conditions \begin{align*} & \lfloor m / \LPD(M) \rfloor = \lfloor m / l \rfloor, \quad\quad \lfloor n / \LPD(N) \rfloor = \lfloor n / l \rfloor \\ \tag*{\text{\sl and}} & \MOD(m,l) + \MOD(n,l) < l , \end{align*} then $M \times N$ has maximal real cup length.
Therefore, $$\catstsys(M \times N) = \catstsys(M) + \catstsys(N) \,.$$ \end{cor} \begin{proof} Let $r := \lfloor m / l \rfloor$ and $s := \lfloor n / l \rfloor$. \fullref{prop:LPDofProduct} implies that $l = \min\bigl(\LPD(M),\LPD(N)\bigr) = \LPD(M \times N)$. So we can write $\lfloor (m+n) / \LPD(M \times N) \rfloor = r + s + \lfloor \bigl(\MOD(m,l) + \MOD(n,l)\bigr) / l \rfloor$. By the assumption, $\lfloor \bigl(\MOD(m,l) + \MOD(n,l)\bigr) / l \rfloor$ is zero, so we have $$\lfloor (m+n) / \LPD(M \times N) \rfloor = r + s .$$ Thus it is sufficient to show that there is a nonzero cup product with the length of $r+s$. Since $M$ and $N$ have maximal real cup length, there are cohomology classes $\alpha_1, \cdots, \alpha_r$ and $\beta_1, \cdots, \beta_s$ whose cup products $\bigcup_{i=1}^r \alpha_i$ in $H^m(M;\mathbb{R})$ and $\bigcup_{i=1}^s \beta_i$ in $H^n(N;\mathbb{R})$ are nonzero. From the proof of \fullref{lemma:lowerboundP}, there is a nonzero cup product $\bigcup_{i=1}^r \mathfrak{pr}_1^*\alpha_i \cup \bigcup_{i=1}^s \mathfrak{pr}_2^*\beta_i$ in the top dimensional cohomology vector space $H^{m+n}(M \times N;\mathbb{R})$. \end{proof} Without the assumption that the product $M \times N$ has maximal real cup length, we can generalize this corollary as follows. \begin{thm} \label{thm:LSSCPstsyscat} Let manifolds $M^m$ and $N^n$ have maximal real cup length. If $$\MOD\bigl(m,\LPD(M)\bigr) + \MOD\bigl(n,\LPD(N)\bigr) < \max\bigl(\LPD(M),\LPD(N)\bigr) \,,$$ then the stable systolic category of their product $M \times N$ is the sum of each stable systolic category, ie, $$\catstsys(M \times N) = \catstsys(M) + \catstsys(N) \,.$$ \end{thm} \begin{proof} Since $M$ and $N$ have maximal real cup length, $$r := \lfloor m/\LPD(M) \rfloor = \catstsys(M) \quad \text{ and } \quad s := \lfloor n/\LPD(N) \rfloor = \catstsys(N) \,.$$ The case $\LPD(M) = \LPD(N)$ is \fullref{cor:productedLSSCP}, so we may assume $\LPD(M) < \LPD(N)$.
From \fullref{lemma:lowerboundP}, $\catstsys(M \times N) \ge \catstsys(M) + \catstsys(N) = r+s$ . Therefore, it is sufficient to show that any partition of $m{+}n$ whose size is greater than $r{+}s$ is not a stable systolic categorical partition. Suppose $m+n = d_1 + \cdots + d_k$ is a stable systolic categorical partition for $M \times N$ with some integer $1 \le r' \le k$ and the condition $0 < \LPD(M) \le d_1 \le \cdots \le d_{r'} < \LPD(N)$. For an arbitrary $t \ge 1$, let $\mathcal{G}_{t} := t^2 \mathcal{G}_M + \mathcal{G}_N$ be a Riemannian metric on $M \times N$. Then \fullref{lemma:MassRescaleEquality} and \fullref{lemma:StsysProjectionEquality} imply that the stable systoles for the partition $d_1 + \cdots + d_k$ satisfy \begin{align*} \prod_{i=1}^k\stsys_{d_i}(M \times N, \mathcal{G}_{t}) & \ge \prod_{i=1}^{r'}\stsys_{d_i}(M, t^2\mathcal{G}_{M}) \prod_{j=r'+1}^{k}\stsys_{d_j}(M \times N, \mathcal{G}_{t}) \\ & = t^{d_1 + \cdots + d_{r'}}\prod_{i=1}^{r'}\stsys_{d_i}(M, \mathcal{G}_{M}) \prod_{j=r'+1}^{k}\stsys_{d_j}(M \times N, \mathcal{G}_{t}) \,. \end{align*} Since we assume that $t \ge 1$, we can obtain the inequality $\stsys_{d_j}(M \times N, \mathcal{G}_{t}) \ge \stsys_{d_j}(M \times N, \mathcal{G}_{1})$ for each $r'+1 \le j \le k$. On the other hand, the mass of the integral fundamental class $[M \times N]$ is characterized by \fullref{lemma:MassRescaleEquality} and \fullref{lemma:QuotientMassCrossEquality} as \begin{align*} \mass\bigl( [M \times N],\mathcal{G}_t \bigr) & \le \mass\bigl( [M],t^2\mathcal{G}_M \bigr) \cdot \mass\bigl( [N], \mathcal{G}_N \bigr) \\ & = t^m \cdot \mass\bigl( [M], \mathcal{G}_M \bigr) \cdot \mass\bigl( [N], \mathcal{G}_N \bigr) .
\end{align*} Here if we assume that $d_1 + \cdots + d_{r'} > m$, then we have \begin{align*} \frac{\prod\limits_{i=1}^k\stsys_{d_i}(M {\times} N, \mathcal{G}_{t})}{\mass\bigl( [M {\times} N],\mathcal{G}_t \bigr)} & \ge t^{m - (d_1 + \cdots + d_{r'})} \cdot \frac{\prod\limits_{i=1}^{r'}\stsys_{d_i}(M, \mathcal{G}_{M}) \cdot \prod\limits_{j=r'+1}^{k}\stsys_{d_j}(M {\times} N, \mathcal{G}_{1})}{\mass\bigl( [M], \mathcal{G}_M \bigr) \cdot \mass\bigl( [N], \mathcal{G}_N \bigr)} \end{align*} where the right-hand side of the inequality diverges as $t \to \infty$. This contradicts the assumption that $(d_1, \cdots, d_k)$ is a stable systolic categorical partition. Hence we obtain $d_1 + \cdots + d_{r'} \le m$ and $d_{r'+1} + \cdots + d_{k} \ge n$. This condition for $m$ implies $$r' \le \lfloor (d_1 + \cdots + d_{r'}) / \LPD(M) \rfloor \le \lfloor m / \LPD(M) \rfloor \le r \,.$$ Let $s' := k - r'$. From the assumption, $\LPD(M) / \LPD(N) < 1$ and $$\MOD(m,\LPD(M)) + \MOD(n,\LPD(N)) < \LPD(N),$$ so we can calculate \begin{align*} & k = r' + s' \le r + s \end{align*} which implies $\catstsys(M \times N) \le \catstsys(M) + \catstsys(N)$ . \end{proof} \begin{cor} Suppose manifolds $M_0 \times M_1 \times \cdots \times M_{k}$ and $M_{k+1} \times \cdots \times M_n \times M_{n+1}$ have maximal real cup length with \begin{align*} & \LPD(M_0) = \LPD(M_1) = \cdots = \LPD(M_k) \\ \tag*{\text{\sl and}} & \LPD(M_{k+1}) = \cdots = \LPD(M_n) = \LPD(M_{n+1}) \,. \end{align*} Let $r_i := \lfloor \dim(M_i)/\LPD(M_i) \rfloor$ for $0 \le i \le n+1$ .
If $M_0, \cdots, M_{n+1}$ satisfy the conditions $\dim(M_i) = \LPD(M_i) \cdot r_i$ for $1 \le i \le n$ and \begin{eqnarray*} \dim(M_0) - \LPD(M_0)\cdot r_0 + \dim(M_{n+1}) - \LPD(M_{n+1})\cdot r_{n+1} \\ < \max\bigl(\LPD(M_0),\LPD(M_{n+1})\bigr) , \end{eqnarray*} \begin{gather*} \tag*{\text{\sl then:}} \catstsys\left( {\textstyle\prod\limits_{i=0}^{n+1}M_i} \right) = \sum_{i=0}^{n+1} \catstsys(M_i) = \sum_{i=0}^{n+1} r_i \end{gather*} \end{cor} For the product $S^1 \times S^2$ of spheres, \fullref{thm:LSSCPstsyscat} cannot be applied, so we must take a different approach to obtain the stable systolic category. \begin{thm} \label{thm:CatstsysProductSphere} If manifolds $S_1^{m_1}, \cdots, S_n^{m_n}$ are real homology spheres, then the stable systolic category of their $n$--fold direct product is the number of spheres. \end{thm} \begin{proof} Since every real homology sphere has maximal real cup length, \fullref{lemma:lowerboundP} gives us a lower bound $\catstsys(S_1 \times \cdots \times S_n) \ge n$. Suppose $m_i \le m_{i+1}$ for each $1 \le i \le n$. Then a partition $(m_1, \cdots, m_n)$ of $\sum_i m_i$ can be rewritten as $(r_1, \cdots, r_1, r_2, \cdots, r_{l-1}, r_l, \cdots, r_l)$ where $r_1 < \cdots < r_l$ are the distinct values among the $m_i$. This corresponds to rewriting \begin{align*} S_1^{m_1} \times \cdots \times S_n^{m_n} & = \left(S_1^{r_1} \times \cdots \times S_{s_1}^{r_1}\right) \times \left(S_{s_1+1}^{r_2} \times \cdots \times S_{s_1+s_2}^{r_2}\right) \times \cdots \\ & \qquad \times \left(S_{s_1+\cdots+s_{l-1}+1}^{r_l} \times \cdots \times S_{s_1+\cdots+s_{l-1}+s_l}^{r_l}\right) \end{align*} where $r_i := m_{s_1+\cdots+s_{i-1}+1} = \cdots = m_{s_1+\cdots+s_{i-1}+s_i}$ with $r_i < r_{i+1}$ and $s_i > 0$ is the multiplicity of $r_i$, so that $s_1 + \cdots + s_l = n$. For simplicity, let $$X_p := S_1 \times \cdots \times S_{s_1 + \cdots + s_p} \quad \text{and} \quad Y_p := S_{s_1 + \cdots + s_p} \times \cdots \times S_{n}$$ for $1 \le p \le n$.
Then $S_1 \times \cdots \times S_n = X_p \times Y_p$ and we can observe that $\mathcal{G}_{p,t} := t^2 \mathcal{G}_{X_p} + \mathcal{G}_{Y_p}$ is a Riemannian metric on $X_p \times Y_p$ for $t > 0$ when $\mathcal{G}_{X_p} + \mathcal{G}_{Y_p}$ is a Riemannian metric on $X_p \times Y_p$. Now we can apply \fullref{lemma:StsysProjectionEquality} and \fullref{lemma:MassRescaleEquality}, so $$\stsys_q(X_p \times Y_p, \mathcal{G}_{p,t}) = \stsys_q(X_p, t^2 \mathcal{G}_{X_p}) = t^q \cdot \stsys_q(X_p, \mathcal{G}_{X_p})$$ for the non-trivial stable systoles in the dimension of $1 \le q \le s_1 + \cdots + s_p$. Here we suppose $(d_1, \cdots, d_k)$ with $d_i \le d_{i+1}$, is the longest stable systolic categorical partition for $S_1 \times \cdots \times S_n$. Then we can rewrite $(d_1, \cdots, d_k)$ in terms of the values $\{r_1, \cdots, r_l\}$ and the multiplicity $s_i' \ge 0$ of each $r_i$. We will show, by induction on $p$ for $1 \le p \le l$ and by contradiction, that this partition is not longer than $n$. Assume that $s_i' = s_i$ for $1 \le i \le p-1$. If $s_p' > s_p$, then using a similar argument in the proof of \fullref{thm:LSSCPstsyscat}, we can observe that the right-hand side of the inequality \begin{align*} \frac{\prod\limits_{i=1}^{k}\stsys_{d_i}(X_p {\times} Y_p, \mathcal{G}_{p,t})}{\mass\bigl([X_p {\times} Y_p], \mathcal{G}_{p,t}\bigr)} & \ge t^{w} \cdot \frac{\prod\limits_{i=1}^{p}\stsys_{r_i}(X_p, \mathcal{G}_{X_p})^{s_i'} \prod\limits_{i=p+1}^{l}\stsys_{r_i}(X_p {\times} Y_p, \mathcal{G}_{p,1})^{s_i'}}{\mass\bigl([X_p], \mathcal{G}_{X_p}\bigr) \cdot \mass\bigl([Y_p], \mathcal{G}_{Y_p}\bigr)} \end{align*} diverges as $t \to \infty$ where $w := r_1(s'_1 - s_1) + \cdots + r_p(s'_p - s_p) > 0$. This contradicts the assumption that the partition $(d_1, \cdots, d_k)$ is stable systolic categorical, and hence we obtain $s_p' \le s_p$. However, we must choose $s_p' = s_p$ to obtain the longest partition.
As a result, the size of the longest stable systolic categorical partition cannot exceed $n = s_1 + \cdots + s_l$. \end{proof} \section{Invariance under the rational equivalences} \label{section:RHEinvariant} Suppose $M$ and $N$ are $n$--manifolds. Let $K$ and $L$ be triangulations of $M$ and $N$ respectively. In this section, $K$ and $L$ are subdivided if necessary, but we will use the same symbols. For a continuous map $f\co M \to N$, there is a non-degenerate simplicial approximation $g\co K \to L$ of $f$. For an open $n$--simplex $e$ in $L$, consider a map $h\co K \overset{g}{\to} L \to L / (L \setminus e)$. We will call $\deg(h)$ the {\it degree} of $g$ at $e$, denoted by $\deg_e(g)$. Let $$D(g) := \sup\bigl\{ \vert \deg_e(g) \vert : \text{open } n \text{-simplex } e \text{ in } L \bigr\}.$$ Here $D(g)$ is finite, because we may assume that $K$ and $L$ are finite simplicial complexes. For an arbitrary Riemannian metric $\mathcal{G}_N$ on $N$ and $\varepsilon > 0$, there is a piecewise linear metric $\mathcal{G}_L = \mathcal{G}_L(\varepsilon)$ on $L$ satisfying $$\bigl|\, \stsys_q(L,\mathcal{G}_L) - \stsys_q(N,\mathcal{G}_N) \,\bigr| \le \varepsilon$$ for every non-trivial stable $q$--systole (compare Federer \cite[4.1.22]{Fed69}), and such that the realization of $L$ with $\mathcal{G}_L$ is a PL section of the normal bundle over $N$ with $\mathcal{G}_N$ in some finite dimensional Euclidean space. Such a metric can be obtained by subdividing $K$ and $L$ and translating vertices of $L$ along the fibers of the normal bundle so as not to degenerate any simplex. For $0 < \varepsilon' < \varepsilon$, a suitable metric $\mathcal{G}_L(\varepsilon')$ can also be obtained in the same way. Hence we can assume that $D(g)$ is not changed by $\varepsilon$ and $\mathcal{G}_L$. As $\varepsilon$ approaches $0$, the objects $L$, $\mathcal{G}_L$ and $g^*\mathcal{G}_L$ converge to $N$, $\mathcal{G}_N$ and a piecewise Riemannian metric on $M$ respectively.
Under these circumstances, we obtain the following lemma. \begin{lemma} \label{lemma:EquivalentStableSystole} Suppose the $q$--th real homology vector spaces of $K$ and $L$ are non-trivial. If $g\co K \to L$ induces a monomorphism $g_*$ between the $q$--th real homology vector spaces, then $$\stsys_q(L,\mathcal{G}_L) \le \stsys_q(K,g^*\mathcal{G}_L) \le D(g) \cdot \stsys_q(L,\mathcal{G}_L) < \infty$$ for every piecewise linear metric $\mathcal{G}_L$ on $L$. \end{lemma} \begin{proof} With the pullback PL metric $g^*\mathcal{G}_L$ on $K$, $g$ is a distance non-increasing map. Combining this with \fullref{lemma:StableSystoleMappedInequality}, $$\stsys_q(L,\mathcal{G}_L) \le \Lip(g)^q \cdot \stsys_q(K,g^*\mathcal{G}_L) \le \stsys_q(K,g^*\mathcal{G}_L).$$ On the other hand, the inverse image of an arbitrary $q$--simplex of $L$ consists of at most $D(g)$ $q$--simplices, since $g$ is a non-degenerate simplicial map and every $q$--simplex is contained in the boundary of some $n$--simplex for $q < n$. Also, each simplex in the inverse image has the same mass as its image, since the restriction of $g$ to each simplex is an isometry. This implies that the mass of a $q$--chain $c$ of $K$ is at most $D(g)$ times the mass of its image $g_{\flat}(c)$, which is non-trivial. Therefore we can verify that $$\stsys_q(K,g^*\mathcal{G}_L) \le D(g) \cdot \stsys_q(L,\mathcal{G}_L)$$ for an arbitrary PL metric $\mathcal{G}_L$. \end{proof} \begin{rmk} If $K$ is not a triangulation of a manifold, we cannot be sure that every $q$--simplex of $K$ is contained in the boundary of some $n$--simplex for $q < n$. For example, a triangulation of the one-point union $S^1 \vee S^2$ has some $1$--simplex in $S^1$ which is not contained in the boundary of any $2$--simplex. \end{rmk} Since the stable systolic category is a homotopy invariant, we obtain the following proposition using techniques similar to those of Katz and Rudyak \cite{KatRud08}. \begin{prop} Let $M$ and $N$ be $n$--manifolds.
If there exists a smooth map $f\co M \to N$ which induces a monomorphism on every real homology vector space, then $\catstsys(M) \le \catstsys(N)$. \end{prop} \begin{proof} Applying \fullref{lemma:EquivalentStableSystole}, we obtain \begin{align*} & \stsys_{q}(N,\mathcal{G}_N) \le \stsys_{q}(L,\mathcal{G}_L) + \varepsilon \le \stsys_{q}(K,g^*\mathcal{G}_L) + \varepsilon \\ \tag*{\text{and}} & \stsys_{q}(N,\mathcal{G}_N) + \varepsilon \ge \stsys_{q}(L,\mathcal{G}_L) \ge 1/D(g) \cdot \stsys_{q}(K,g^*\mathcal{G}_L) \end{align*} where $L$ converges to $N$ in some Euclidean space and $g^*\mathcal{G}_L$ converges to a piecewise Riemannian metric $\mathcal{G}_M$ on $M$ as $\varepsilon$ approaches $0$. Suppose there exists a stable systolic categorical partition $(d_1, \cdots, d_k)$ for $M$. Then there exist $C > 0$ and $\delta = \delta(\varepsilon) > 0$ such that $\delta$ converges to $0$ as $\varepsilon$ approaches $0$ and $$\prod_{i=1}^k \stsys_{d_i}(K,g^*\mathcal{G}_L) \le C \cdot \mass([K],g^*\mathcal{G}_L) + \delta,$$ because each metric $g^*\mathcal{G}_L$ can be approximated by Riemannian metrics on $M$. We can assume that $\varepsilon \le \stsys_{d_i}(N,\mathcal{G}_N)$ for all $i$, so \begin{align*} \prod_{i=1}^k \stsys_{d_i}(L,\mathcal{G}_L) & \le 2^k \cdot \prod_{i=1}^k \stsys_{d_i}(K,g^*\mathcal{G}_L) \\ & \le 2^k C \cdot \mass([K],g^*\mathcal{G}_L) + 2^k \delta \\ & \le 2^k C A(g) \cdot \mass([L],\mathcal{G}_L) + 2^k(C A(g) \cdot \varepsilon + \delta). \end{align*} This implies the partition $(d_1, \cdots, d_k)$ is also stable systolic categorical for $N$. Therefore we obtain the result $\catstsys(M) \le \catstsys(N)$. \end{proof} Let $X$ and $Y$ be simply connected spaces. A continuous map $f\co X \to Y$ is called a {\it rational equivalence}, if the induced map $f^*\co H^*(Y; \mathbb{Q}) \to H^*(X; \mathbb{Q})$ is an isomorphism.
\begin{cor} \label{cor:REinvariance} The stable systolic category of a $0$--universal manifold is invariant under the rational equivalences. \end{cor} \begin{proof} For a $0$--universal manifold $M$ and a rational equivalence to a space $X$, there exists a rational equivalence from $X$ to $M$. \end{proof} \bibliographystyle{gtart}
\section{Introduction} \input{02_introduction} \input{03_caveat} \section{Assumptions} \input{04_assumptions} \section{Approach} \input{05_approach} \section{Models} \label{sec:models} \input{06_models_1_seir} \subsection{Agent-Based Simulation} \input{06_models_2_fred} \section{Experiments} \input{07_exp_1_seir} \subsection{FRED Simulator} \input{07_exp_2_fred} \section{Software} \input{08_discussion} \input{09_commentary} \section{Acknowledgements} \input{10_ack} \bibliographystyle{unsrtnat} \section{Specific Findings and Recommendations} This paper is the result of a ``crash'' research project which was conceived and executed over the span of approximately seven interrupted days while under university shutdown and imposed social distancing measures in British Columbia. Our altruistic aim was to contribute our scientific knowledge towards COVID-19{} outbreak management. In particular we hope this work helps lead to the rapid development and deployment of tools that could make policy-making more efficient, primarily automatic inference tools for control that might lead to policy-makers being able to choose policies that have lower social and economic costs. Because this research was conducted under severe time-constraints, and has been released in a way that is not yet subject to peer-review, it is imperative to highlight what should and should not be taken away from it at the onset. The {\em only} things that may safely be taken away from this paper are the following: \begin{itemize} \item Existing agent-based and other simulators can be used for planning by framing the planning problem as inference. \item Automated inference tools can be used to perform the required inference. \item Opportunities exist for various fields to come together to improve both understanding of and availability of these techniques and tools. 
\item Further research and development into modelling and inference is recommended to be immediately undertaken to explore the possibility of more efficient, less economically devastating control of the COVID-19 pandemic. \end{itemize} What should {\em not} be taken away from this paper are any other conclusions, including in particular the following: \begin{itemize} \item Any conclusions that one might draw from plots in this paper about time periods that controls must be imposed to manage the COVID-19{} outbreak. Until qualified policy-makers and epidemiologists weigh in, the fact that, for instance, Figures~\ref{fig:exp:seir:stoch_det:policy} and \ref{fig:exp:seir:mpc} suggest that controls must be kept in place for over a year to keep COVID-19{} from overwhelming hospital systems, should {\em not} be quoted or used in the press unless, again, accompanied by commentary from qualified epidemiologists. \item Any conclusion or statements that there might exist less aggressive measures that could still be effective in controlling COVID-19{}. \end{itemize} We use more qualifying statements than usual throughout this work in an attempt to avoid potentially inappropriate headlines. As scientists trying to contribute ``across the aisle,'' we are simply trying to avoid misunderstandings and sensationalism. \subsection{An Abstract Epidemiological Dynamics Model} \label{sec:approach:abstract-model} In this work we will look at both compartmental and agent-based models. An overview of these specific types of models appears later. For the purposes of understanding our approach to planning as inference, it is helpful to describe the planning as inference problem in a formalism that can express both types of models. The approach of conducting control via planning as inference follows a general recipe: \begin{enumerate} \item Define the latent and control parameters of the model and place a prior distribution over them. 
\item Define a likelihood for the observed disease dynamics data, design constraints that define acceptable disease progression outcomes, or both. \item Do inference to generate a posterior distribution on control values that conditions both on the observed data and the defined constraints. \item Make a policy recommendation by picking from the posterior distribution consisting of effective control values according to some utility maximizing objective. \end{enumerate} We focus on steps 1-3 of this recipe, and in particular do not explore simultaneous conditioning. We ignore the observed disease dynamics data and focus entirely on inference with future constraints. We explain the rationale behind these choices near the end of the paper. Very generally, an epidemiological model consists of a set of global parameters and time dependent variables. Global parameters are $(\theta, \eta)$, where $\theta$ denotes parameters that can be controlled by policy directives (e.g.~close schools for some period of time or decrease the general level of social interactions by some amount), and $\eta$ denotes parameters which cannot be affected by such measures (e.g.~the incubation period or fatality rate of the disease). The time dependent variables are $(X_t, Y_t, Z_t)$ and jointly they constitute the full state of the simulator. $X_t$ are the latent variables we are doing inference over (e.g.~the total number of infected people or the spatio-temporal locations of outbreaks), $Y_t$ are the observed variables whose values we obtain by measurements in the real world (e.g.~the total number of deaths or diagnosed cases), and $Z_t$ are all the other latent variables whose values we are not interested in knowing (e.g.~the number of contacts between people or hygiene actions of individuals). For simplicity, we assume that all variables are either observed at all times or never, but this can be relaxed. The time $t$ can be either discrete or continuous.
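As a concrete illustration, a discrete-time instance of this abstract model can be sketched as a generative loop. The dynamics, parameter roles, and function names below are hypothetical placeholders chosen for exposition, not a calibrated epidemiological model.

```python
import random

def step(x_prev, theta, eta):
    """One hypothetical transition of the full simulator state.

    theta: controllable parameter (here, a contact-reduction policy in [0, 1]).
    eta:   non-controllable parameter (here, a per-contact infection rate).
    Returns (x, y, z): latent infections X_t, a noisy observed case count Y_t,
    and a nuisance variable Z_t (number of contacts).
    """
    z = random.randint(5, 15)                       # Z_t: nuisance variable
    growth = 1.0 + eta * (1.0 - theta) * z / 10.0   # policy scales transmission
    x = x_prev * growth * random.uniform(0.9, 1.1)  # X_t: latent infections
    y = int(0.1 * x * random.uniform(0.5, 1.5))     # Y_t: partial observation
    return x, y, z

def simulate(theta, eta, T, x0=10.0):
    """Sample one trajectory (X_{0:T}, Y_{0:T}, Z_{0:T}) given (theta, eta)."""
    X, Y, Z = [x0], [int(0.1 * x0)], [0]
    for _ in range(T):
        x, y, z = step(X[-1], theta, eta)
        X.append(x); Y.append(y); Z.append(z)
    return X, Y, Z
```

Each call to `simulate` draws one joint sample from the prior over trajectories for fixed $(\theta, \eta)$; placing priors over `theta` and `eta` and sampling them before each call gives samples from the full joint distribution.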
In the discrete case, we assume the following factorization \begin{align} p(\theta, \eta, X_{0:T}, Y_{0:T}, Z_{0:T}) = p(\theta) p(\eta) p(X_0, Y_0, Z_0 | \theta, \eta) \prod_{t=1}^T p(X_t, Y_t, Z_t | X_{t-1}, Y_{t-1}, Z_{t-1}, \theta, \eta) . \end{align} Note that we do not assume access to any particular factorization between observed and latent variables. We assume that a priori the controllable parameters $\theta$ are independent of non-controllable parameters $\eta$ to ensure that conditioning on desired properties of the epidemic performed in Section \ref{sec:control-inference} has the intended effect of affecting controllable parameters only. \subsection{Inference} \label{sec:inference} The classical inference task \citep{kypraios2017tutorial,mckinley2018approximate,toni2009approximate,minter2019approximate,chatzilena2019contemporary} is to compute the following conditional probability \begin{align} p(\eta, X_{0:T} | Y_{0:T}, \theta) = \int p(X_{0:T}, Z_{0:T}, \eta | Y_{0:T}, \theta) \ dZ_{0:T} . \end{align} In the example given earlier $X_t$ would be the number of infected people at time $t$ and $Y_t$ would be the number of hospitalized people at time $t$. If the non-controllable parameters $\eta$ are known they can be plugged into the simulator, otherwise we can also perform inference over them, like in the equation above. This procedure automatically takes into account prior information, in the form of a model, and available data, in the form of observations. It produces estimates with an appropriate amount of uncertainty depending on how much confidence can be obtained from the information available. The difficulty lies in computing this conditional probability, since the simulator does not provide a mechanism to sample from it directly and for all but the simplest models the integral cannot be computed analytically.
The main purpose of probabilistic programming tools is to provide a mechanism to perform the necessary computation automatically, freeing the user from having to come up with and implementing a suitable algorithm. In this case, approximate Bayesian computation (ABC) would be a suitable tool. We describe it below, emphasizing again that its implementations are already provided by existing tools \citep{kypraios2017tutorial,mckinley2018approximate,toni2009approximate,minter2019approximate,chatzilena2019contemporary}. The main problem in this model is that we do not have access to the likelihood $p(Y_t | X_t, \theta, \eta)$ so we cannot apply the standard importance sampling methods. To use ABC, we extend the model with auxiliary variables $Y^{obs}_{0:T}$, which represent the actual observations recorded, and use a suitably chosen synthetic likelihood $p(Y^{obs}_t | Y_t)$, often Gaussian. Effectively, that means we are solving the following inference problem, \begin{align} p(X_{0:T} | Y^{obs}_{0:T}, \theta) = \int \int \int p(\eta, X_{0:T}, Y_{0:T}, Z_{0:T} | Y^{obs}_{0:T}, \theta) \ dY_{0:T} \ dZ_{0:T} \ d\eta , \end{align} which we solve by importance sampling from the prior. Algorithmically, this means independently sampling a large number $N$ of trajectories from the simulator \begin{align} (\eta^{(i)}, X^{(i)}_{0:T}, Y^{(i)}_{0:T}, Z^{(i)}_{0:T}) \sim_{i.i.d.} p(\eta, X_{0:T}, Y_{0:T}, Z_{0:T} | \theta) \quad \text{for} \ i \in \{1,\dots,N\} , \end{align} computing their importance weights \begin{align} w_i = \frac{p(Y^{obs}_{0:T} | Y^{(i)}_{0:T})}{\sum_{j=1}^N p(Y^{obs}_{0:T} | Y^{(j)}_{0:T})} , \end{align} and approximating the posterior distribution \begin{align} p(X_{0:T} | Y^{obs}_{0:T}, \theta) \approx \hat{p}(X_{0:T} | Y^{obs}_{0:T}, \theta) = \sum_{i=1}^N w_i \delta_{X^{(i)}_{0:T}} , \end{align} where $\delta$ is the Dirac delta putting all the probability mass on the point in its subscript.
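This weighting scheme can be sketched in a few lines of Python; the toy simulator and the Gaussian synthetic likelihood below are illustrative assumptions rather than part of any specific tool.

```python
import math
import random

def simulate(theta, T):
    """Toy prior-predictive simulator: one observed trajectory Y_{0:T}.
    The dynamics are a hypothetical placeholder for a real epidemic simulator."""
    x, Y = 10.0, []
    for _ in range(T + 1):
        x *= random.uniform(1.0, 1.0 + theta)
        Y.append(x * random.uniform(0.05, 0.15))
    return Y

def abc_weights(y_obs, trajectories, sigma=5.0):
    """Self-normalized importance weights under a Gaussian synthetic
    likelihood p(Y_obs | Y), computed in log space for numerical stability."""
    logw = [-sum((a - b) ** 2 for a, b in zip(y_obs, Y)) / (2 * sigma ** 2)
            for Y in trajectories]
    m = max(logw)                      # subtract the max before exponentiating
    w = [math.exp(l - m) for l in logw]
    s = sum(w)
    return [wi / s for wi in w]

# Weight N simulated trajectories against one (here, itself simulated) observation.
trajectories = [simulate(theta=0.2, T=10) for _ in range(100)]
weights = abc_weights(trajectories[0], trajectories)
```

Trajectories close to the observation receive large weights, and the weighted collection approximates the posterior exactly as in the equations above.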
In more intuitive terms, we are approximating the posterior distribution with a collection of weighted samples, where the weights indicate their relative probabilities. \subsection{Control as Inference: Finding Actions That Achieve Desired Outcomes} \label{sec:control-inference} In traditional inference tasks we condition on data observed in the real world. In order to do control as inference, we instead condition on what we \emph{want} to observe in the real world, which tells us which actions are likely to lead to such observations. This is accomplished by introducing auxiliary variables that indicate how desirable a future state is or is not. In order to keep things simple, here we restrict ourselves to the binary case $Y_t \in \{0,1\}$, where $1$ means that the situation at time $t$ is acceptable and $0$ means it is not. This indicates which outcomes are acceptable, allowing us to compute a distribution over policies likely to produce an acceptable outcome, while leaving the choice of which specific policy to enact to policymakers. For example, $Y_t$ can be $1$ when the number of patients needing hospitalization at a given time $t$ is smaller than the number of hospital beds available and $0$ otherwise. To find a policy $\theta$ that is likely to lead to acceptable outcomes, we need to compute the posterior distribution \begin{align} p\left(\theta \mid \forall_t:\, Y_t = 1\right) \label{eq:control-inference} . \end{align} Once again, probabilistic programming tools provide the functionality to compute this posterior automatically. In this case, rejection sampling would be an appropriate algorithm: it repeatedly samples values of $\theta$ from the prior $p(\theta)$, runs the full simulator using $\theta$, and keeps the sampled $\theta$ only if all $Y_t$ are $1$. The collection of accepted samples approximates the desired posterior.
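The rejection-sampling loop just described can be sketched in a few lines of Python. The prior and the simulator below are hypothetical placeholders; in practice the simulator call would run the full epidemic model.

```python
import random

random.seed(0)

def sample_policy():
    """Hypothetical prior p(theta) over the single policy parameter."""
    return random.uniform(0.0, 1.0)

def simulate_outcomes(theta, T=20):
    """Hypothetical simulator returning binary acceptability indicators
    Y_1..Y_T; a stronger intervention (larger theta) makes an acceptable
    day more likely."""
    return [random.random() < 0.5 + 0.5 * theta for _ in range(T)]

def rejection_sample(K=100):
    """Keep a sampled theta only if every Y_t equals 1."""
    accepted = []
    while len(accepted) < K:
        theta = sample_policy()
        if all(simulate_outcomes(theta)):
            accepted.append(theta)
    return accepted

posterior = rejection_sample()  # samples concentrate on strong interventions
```

In this toy model the accepted samples cluster near $\theta = 1$, reflecting that conditioning on all-acceptable outcomes favours strong interventions.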
This tells us which policies are most likely to lead to a desired outcome, but not how likely a given policy is to lead to that outcome. To assess the latter, we can evaluate the conditional probability $p\left(\forall_t:\, Y_t = 1 \mid \theta\right)$, which is known as the model evidence, for a particular $\theta$. A more sophisticated approach would be to condition on the policy leading to a desired outcome with a given probability $p_0$, that is \begin{align} p\left(\theta \mid p\left(\forall_t:\, Y_t = 1 \mid \theta\right) > p_0\right) \label{eq:nested} . \end{align} For example, we could set $p_0 = 0.95$ to find a policy that ensures availability of hospital beds for everyone who needs one with at least $95\%$ probability. The conditional probability in Equation \ref{eq:nested} is more difficult to compute than the one in Equation \ref{eq:control-inference}. It can be approximated by methods such as nested Monte Carlo (NMC) \citep{rainforth2017nesting}, which are natively available in advanced probabilistic programming systems such as Anglican \citep{tolpin2016design} and WebPPL \citep{webppl}, but in specific cases can also be implemented on top of other systems, such as PyProb \citep{le-2016-inference}, with relatively little effort, although using NMC usually incurs an enormous computational cost. To perform rejection sampling with nested Monte Carlo, we first draw a sample $\theta_i \sim p(\theta)$, then draw $N$ samples of $Y^{(j)}_{0:T} \sim_{i.i.d.} p(Y_{0:T} | \theta_i)$ and reject $\theta_i$ if fewer than $p_0 N$ of the sampled sequences of $Y$s are all $1$s; otherwise we accept it. This procedure is repeated until we have the required number $K$ of accepted $\theta$s. For sufficiently high values of $N$ and $K$ this algorithm approximates the posterior distribution \eqref{eq:nested} arbitrarily well. However we compute the posterior distribution, it contains multiple values of $\theta$ that represent different policies that, if implemented, can achieve the desired result.
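A sketch of this nested rejection-sampling scheme, again with a hypothetical simulator: the inner Monte Carlo loop estimates $p(\forall_t:\, Y_t = 1 \mid \theta)$, and the outer loop rejects $\theta$ whenever that estimate falls below $p_0$.

```python
import random

random.seed(0)

def simulate_outcomes(theta, T=10):
    """Hypothetical simulator of binary acceptability indicators."""
    return [random.random() < 0.5 + 0.5 * theta for _ in range(T)]

def estimate_success_prob(theta, N=100):
    """Inner Monte Carlo estimate of p(forall_t Y_t = 1 | theta)."""
    return sum(all(simulate_outcomes(theta)) for _ in range(N)) / N

def nested_rejection_sample(p0=0.5, K=10, N=100):
    """Outer loop: accept theta only if the inner estimate exceeds p0."""
    accepted = []
    while len(accepted) < K:
        theta = random.uniform(0.0, 1.0)        # draw from the prior
        if estimate_success_prob(theta, N) > p0:
            accepted.append(theta)
    return accepted

posterior = nested_rejection_sample()
```

The nesting is the source of the computational cost noted above: every outer proposal pays for $N$ full simulator runs.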
In this setup it is up to the policymakers to choose a policy $\theta^*$ that has support under the posterior, i.e.~yields the desired outcomes, taking into account some notion of utility. \subsection{Model Predictive Control: Reacting to What's Happened} \label{sec:mpc} During an outbreak, governments continuously monitor and assess the situation, adjusting their policies based on newly available data. The theoretical framework for doing this is that of model predictive control. In this case, $Y_t$ consists of variables $Y^{data}_t$ that can be measured as the epidemic unfolds (such as the number of deaths) and the auxiliary variables $Y^{aux}_t$ that indicate whether desired goals were achieved, just like in Section \ref{sec:control-inference}. Say that at time $t=0$ the policymakers choose a policy $\theta^*_0$ to enact based on the posterior distribution \begin{align} p(\theta | \forall_{t > 0} \ Y^{aux}_t = 1) = \int \int p(\theta | X_0, Y^{data}_0, Z_0, \forall_{t > 0} \ Y^{aux}_t = 1) p(X_0, Z_0) \ dX_0 \ dZ_0 . \end{align} Then at time $t=1$ they will have gained additional information $Y^{data}_1$, leading to a new belief over the current state that we denote as $\hat{p}_1(X_1,Z_1)$, for which we give a formula in the general case in Equation \ref{eq:belief-state}. The policymakers then choose the policy $\theta^*_1$ from the posterior distribution $p(\theta | \forall_{t > 1} \ Y^{aux}_t = 1)$. Generally, at time $t$ we compute the posterior distribution conditioned on the current state and on achieving desirable outcomes in the future \begin{align} p(\theta | \forall_{t' > t} \ Y^{aux}_{t'} = 1) = \int \int p(\theta | X_{t}, Y^{data}_{t}, Z_{t}, \forall_{t' > t} \ Y^{aux}_{t'} = 1) \hat{p}_{t}(X_{t}, Z_{t}) \ dX_{t} \ dZ_{t} . \label{eq:mpc} \end{align} Policymakers can then use this distribution to choose the policy $\theta^*_{t}$ that is actually enacted.
The current belief state is computed by inference \begin{align} \hat{p}_t(X_t, Z_t) = \int \int p(X_t, Z_t | Y^{data}_t, \theta^*_{t-1}, \eta, X_{t-1}, Y_{t-1}, Z_{t-1}) \hat{p}_{t-1}(X_{t-1}, Z_{t-1}) \ dX_{t-1} \ dZ_{t-1} . \label{eq:belief-state} \end{align} Equation \ref{eq:belief-state} can be computed using the methods described in Section \ref{sec:inference}, while Equation \ref{eq:mpc} can be computed using the methods described in Section \ref{sec:control-inference}. \subsection{Time-Varying Control: Long Term Planning}\label{sec:approach:time_varying_control} It is also possible to explicitly model changing policy decisions over time, which enables more detailed planning, such as deciding when to enact certain preventive measures such as closing schools. Notationally, this means that instead of a single $\theta$ there is a separate $\theta_t$ for each time $t$. We can then find a good sequence of policy decisions by performing inference just like in Section \ref{sec:control-inference}, conditioning on achieving the desired outcome \begin{align} \label{eq:condition-outcome} p(\theta_{0:T} | \forall_t \ Y_t = 1) . \end{align} The inference problem is now more challenging, since the number of possible control sequences grows exponentially with the time horizon. Still, the posterior can be efficiently approximated with methods such as Sequential Monte Carlo. It is straightforward to combine this extension with model predictive control from Section \ref{sec:mpc}. The only required modification is that in Equation \ref{eq:mpc} we need to condition on previously enacted policies and compute the posterior over all future policies: \begin{align} p(\theta_{t+1:T} | \forall_{t' > t} \ Y^{aux}_{t'} = 1) = \int \int p(\theta_{t+1:T} | \theta^*_{0:t}, X_{t}, Y^{data}_{t}, Z_{t}, \forall_{t' > t} \ Y^{aux}_{t'} = 1) \hat{p}_{t}(X_{t}, Z_{t}) \ dX_{t} \ dZ_{t} . \end{align} At each time $t$ the policymakers only choose the current policy $\theta^*_t$, without committing to any future choices.
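The replanning loop can be sketched as follows. The one-dimensional dynamics, the noise level, the 0.2 threshold, and the use of the posterior mean as the enacted policy are all hypothetical simplifications of the scheme above.

```python
import random

random.seed(0)

def step(x, theta):
    """Hypothetical one-day dynamics: growth damped by the intervention."""
    return x * (1.0 + 0.5 * (1.0 - theta)) + random.gauss(0.0, 0.005)

def plan(state, horizon, K=20):
    """Rejection-sample policies whose simulated futures keep the
    (hypothetical) infection level below 0.2, i.e. all Y^aux_t = 1."""
    accepted = []
    while len(accepted) < K:
        theta, x, ok = random.uniform(0.0, 1.0), state, True
        for _ in range(horizon):
            x = step(x, theta)
            if x > 0.2:          # constraint violated: Y^aux_t = 0
                ok = False
                break
        if ok:
            accepted.append(theta)
    return sum(accepted) / len(accepted)   # summarise with the posterior mean

# Model predictive control: replan each day from the latest belief state.
x, T = 0.01, 10
for t in range(T):
    theta_star = plan(x, horizon=T - t)    # plan over the remaining horizon
    x = step(x, theta_star)                # the world moves on under theta_star
```

Note how each day's plan uses the latest state and only the remaining horizon, mirroring Equation \ref{eq:mpc}.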
This combination allows for continuously reevaluating the situation based on available data, while explicitly planning for enacting certain policies in the future. In models with per-timestep control variables $\theta_t$, it is very important that, in the model (but not in the real world), the policy variables do not depend on any other variables. If the model includes feedback loops for changing policies based on the evolution of the outbreak, it introduces positive correlations between lax policies and low infection rates (or other measures of the severity of the epidemic), which in turn means that conditioning on low infection rates is more likely to produce lax policies. This is a known phenomenon of reversing causality when correlated observational data is used to learn how to perform interventions \cite{pearl-causality}. \subsection{Automation} We have intentionally not explained how one might computationally characterize any of the conditional distributions defined in the preceding sections. For the compartmental models that follow, we provide code that directly implements the necessary computations. Alternatively, we could have used the automated inference facilities provided by any number of packages or probabilistic programming systems. Performing inference as described above in existing, complex simulators is a much harder task and not nearly as easy to implement from scratch. However, it can now be automated using the tools of probabilistic programming. \subsubsection{Probabilistic Programming} \label{sec:probprog} Probabilistic programming \citep{MEE-18} is a growing subfield of machine learning that aims to do for Bayesian inference what automatic differentiation did for continuous optimization. Like the gradient operator of languages that support automatic differentiation, probabilistic programming languages introduce \texttt{observe} operators that denote conditioning in the probabilistic or Bayesian sense.
In those few languages that natively support nested Monte Carlo \citep{tolpin2016design,dippl}, language constructs for defining conditional probability objects are introduced as well. Probabilistic programming languages (PPLs) have semantics \citep{staton2016semantics} that can be understood in terms of Bayesian inference \citep{ghahramani2015probabilistic,gelman2013bayesian,bishop2006pattern}. The major challenge in designing useful PPL systems is the development of general-purpose inference algorithms that work for a variety of user-specified programs. The work in this paper uses only the very simplest, and often least efficient, general-purpose inference algorithms: importance sampling and rejection sampling. Others are covered in detail in \citep{MEE-18}. Of all the various probabilistic programming systems, only one is readily compatible with inference in existing stochastic simulators: PyProb~\citep{LE-19}. Quoting from its website\footnote{\url{https://github.com/pyprob/pyprob}} ``PyProb is a PyTorch-based library for probabilistic programming and inference compilation. The main focus of PyProb is on coupling existing simulation codebases with probabilistic inference with minimal intervention.'' A textbook, technical description of how PyProb works appears in \citep[Chapt.~6]{MEE-18}. Recent examples of its use include inference in the standard model of physics conditioned on observed detector outputs \citep{BAY-18,BAY-19}, inference about internal composite material cure processes through a composite material cure simulator conditioned on observed surface temperatures \citep{MUN-20}, and inference about malaria spread in a malaria simulator \citep{gram2019hijacking}. \subsection{A Compartmental Model of COVID-19} \label{sec:models:seir} We begin by introducing a low-dimensional compartmental model to explore our methods in a well-known model family, before transitioning to a more complex agent-based simulator.
The model we use is an example of a classical SEIR model~\cite{kermack1927contribution, blackwood2018introduction, sei3r2020website}. In such models, the population is subdivided into a set of compartments, representing the susceptible (uninfected), exposed (infected but not yet infectious), infectious (able to infect/expose others) and recovered (unable to be infected). Within each compartment, all individuals are treated identically, and the full state of the simulator is simply the size of the population of each compartment. Our survey of the literature found a lack of consensus about the compartmental model and parameters which most faithfully simulate the COVID-19 scenario. Models used include standard SEIR \cite{ROV-20-italy, MAS-20-France, TRA-20-Italy, LIU-20}, SIR \cite{PUJ-20, TRA-20-Italy, WEB-20, TEL-20-Portugal, JIA-20}, SIRD \cite{ANA-20, LIU-20b-China, CAC-20}, QSEIR \cite{LIU-20-China}, and SEAIHRD \cite{ARE-20}. The choice depends on many factors, such as how early or late in the stages of an epidemic one is, what type of measures are being simulated, and the availability of real-world data. We opted for the model described in this section, which seems to represent the manifestation of the disease in populations acceptably well. Existing work has investigated parameter estimation in stochastic SEIR models~\cite{lekone2006stochasticepi, ROBERTS201549}. Although we will discuss how we set the model parameters, we emphasize that our contribution is instead in demonstrating how a calibrated model could be used for planning.
\paragraph{Model description} \tikzstyle{block} = [rectangle, draw, fill=blue!20, text width=2em, text centered, rounded corners, minimum height=2em] \tikzstyle{line} = [draw, -latex'] \tikzstyle{cloud} = [draw, ellipse,fill=red!20, node distance=3cm,minimum height=2em] \begin{figure} \centering \begin{tikzpicture}[every text node part/.style={align=center}, node distance=2cm] \node [block, ] (S) {$S$}; \node [block, right of=S, xshift = 2cm] (E) {$E$}; \node [block, right of=E] (I1) {$I_1$}; \node [block, right of=I1] (I2) {$I_2$}; \node [block, right of=I2] (I3) {$I_3$}; \node [block, above of=I2] (R) {$R$}; \node [cloud, right of =I3] (death) {death}; \path [line] (S) -- node[yshift=.5cm] {$(1-u) \frac{1}{N_t}\sum\limits_{i=1}^3 \beta_i I_{i,t}$} (E); \path [line] (E) -- node[yshift=.4cm] {$\alpha$} (I1); \path [line] (I1) -- node[yshift=.4cm] {$p_1$} (I2); \path [line] (I2) -- node[yshift=.4cm] {$p_2$} (I3); \path [line] (I1) -- node[fill=white, yshift=.0cm] {$\gamma_1$} (R); \path [line] (I2) -- node[fill=white, yshift=.0cm] {$\gamma_2$} (R); \path [line] (I3) -- node[fill=white, yshift=.0cm] {$\gamma_3$} (R); \path [line] (I3) -- node[yshift=.4cm] {$\kappa$} (death); \end{tikzpicture} \caption{Flow chart of the $\text{SEI}^3\text{R}${} model we employ. A member of the susceptible population $S$ moves to exposed $E$ after being exposed to an infectious person, where ``exposure'' is defined as the previously susceptible person contracting the illness. After some incubation period, a random duration parameterized by $\alpha$, they develop a mild infection ($I_1$). They may then either recover, moving to $R$, or progress to a severe infection ($I_2$). From $I_2$, they again may recover, or else progress further to a critical infection ($I_3$).
From $I_3$, the critically infected person will either recover or die.} \label{fig:SEIR-fig} \end{figure} We use an $\text{SEI}^3\text{R}${}~model~\cite{sei3r2020website}, a variation on the standard SEIR model which allows additional modelling freedom. It uses six compartments: susceptible ($S$), exposed ($E$), infectious with mild ($I_1$), severe ($I_2$) or critical infection ($I_3$), and recovered ($R$). We do not include baseline birth and death rates in the model, although there is a death rate for people in the critically infected compartment. The state of the simulator at time $t \in [0, T]$ is $X_t = \{ S_t, E_t, I_{1,t}, I_{2,t}, I_{3,t}, R_t\}$ with $S_t,$ $E_t,$ $I_{1,t},$ $I_{2,t},$ $I_{3,t},$ and $R_t$ indicating the population sizes (or proportions) at time $t$. The unknown model parameters are $\eta = \{ \alpha, \beta_{1}, \beta_{2}, \beta_{3}, p_{1}, p_{2}, \gamma_{1}, \gamma_{2}, \gamma_{3}, \kappa \}$, each with their own associated prior. To the model we add a free, control parameter, denoted $u\in\left[ 0, 1\right]$, that acts to reduce the transmission of the disease. Since $u$ is the only free parameter, $\theta = u$. An explanation of $u$ is given later in the text. There are no internal latent random variables ($Z_t$) in this model. In this paper we do not demonstrate inference about $\theta$ given $Y^{obs}$ within this model, and so do not consider $Y^{obs}$ here. We do, however, consider $Y^{aux}$ to perform policy selection, and discuss the form of $Y^{aux}$ later. Defining the total live population (i.e. the summed population of all compartments) at time $t$ to be $N_t$, the dynamics are given by the following equations, and shown in Figure~\ref{fig:SEIR-fig}. 
\begin{minipage}{\linewidth} \centering \noindent\begin{minipage}{.45\linewidth} \begin{alignat}{2} &\frac{\mathrm{d}}{\mathrm{d}t}S_t &&= - (1-u)\frac{1}{N_t}\sum\nolimits_{i=1}^3 \beta_i I_{i,t} S_t \\ &\frac{\mathrm{d}}{\mathrm{d}t}E_t &&= (1-u)\frac{1}{N_t}\sum\nolimits_{i=1}^3 \beta_i I_{i,t} S_t - \alpha E_t \\ &\frac{\mathrm{d}}{\mathrm{d}t}I_{1, t} &&= \alpha E_t - p_1 I_{1,t} - \gamma_1 I_{1,t} \end{alignat} \end{minipage} \noindent\begin{minipage}{.45\linewidth} \begin{alignat}{2} &\frac{\mathrm{d}}{\mathrm{d}t}I_{2, t} &&= p_1 I_{1,t} - p_2 I_{2,t} - \gamma_2 I_{2,t} \\ &\frac{\mathrm{d}}{\mathrm{d}t}I_{3, t} &&= p_2 I_{2,t} - \kappa I_{3,t} - \gamma_3 I_{3,t} \\ &\frac{\mathrm{d}}{\mathrm{d}t}R_t &&= \gamma_1 I_{1,t} + \gamma_2 I_{2,t} + \gamma_3 I_{3,t}. \end{alignat} \end{minipage} \vspace{.25cm} \end{minipage} For the purposes of simulations with this model, we initialize the state with $0.01\%$ of the population having been exposed to the infection, and the remaining $99.99\%$ of the population being susceptible. The populations classified as infectious and recovered are zero, i.e. $X_0 = \left\lbrace 0.9999, 0.0001, 0, 0, 0, 0 \right\rbrace$ and $N_0 = 1$.
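The dynamics above can be integrated with a simple forward-Euler scheme. The parameter values below are illustrative round numbers in the right regime, not the calibrated estimates discussed later in the text.

```python
def sei3r_step(state, params, u, dt=0.1):
    """One forward-Euler step of the SEI3R dynamics.
    state = (S, E, I1, I2, I3, R)."""
    S, E, I1, I2, I3, R = state
    alpha, b1, b2, b3, p1, p2, g1, g2, g3, kappa = params
    N = S + E + I1 + I2 + I3 + R                    # total live population
    force = (1 - u) * (b1 * I1 + b2 * I2 + b3 * I3) * S / N
    dS = -force
    dE = force - alpha * E
    dI1 = alpha * E - (p1 + g1) * I1
    dI2 = p1 * I1 - (p2 + g2) * I2
    dI3 = p2 * I2 - (kappa + g3) * I3
    dR = g1 * I1 + g2 * I2 + g3 * I3                # kappa*I3 leaves as deaths
    return tuple(x + dt * dx
                 for x, dx in zip(state, (dS, dE, dI1, dI2, dI3, dR)))

# Illustrative parameters: alpha, beta_1..3, p_1, p_2, gamma_1..3, kappa.
params = (0.196, 0.33, 0.0, 0.0, 0.032, 0.058, 0.135, 0.164, 0.090, 0.060)
state = (0.9999, 0.0001, 0.0, 0.0, 0.0, 0.0)        # X_0 from the text
for _ in range(int(300 / 0.1)):                     # simulate 300 days
    state = sei3r_step(state, params, u=0.0)
```

With $u = 0$ and these rates an epidemic unfolds, and the total live population only shrinks through the $\kappa I_{3,t}$ death flow.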
\begin{figure}[t] \begin{subfigure}[b]{\textwidth} \centering \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{figures/seir/deterministic_simulation/deterministic_trajectory_full_nominal.pdf} \end{subfigure} ~ \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{figures/seir/deterministic_simulation/deterministic_trajectory_zoom_nominal.pdf} \end{subfigure} \vspace{-.3cm} \caption{ Deterministic trajectory with zero control input ($u = 0$). } \label{fig:exp:seir:det_nom} \end{subfigure} \begin{subfigure}[b]{\textwidth} \centering \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{figures/seir/deterministic_controlled/deterministic_trajectory_full_controlled.pdf} \end{subfigure} ~ \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{figures/seir/deterministic_controlled/deterministic_trajectory_zoom_controlled.pdf} \end{subfigure} \vspace{-.3cm} \caption{ Deterministic trajectory controlled to limit maximum infected population ($u = 0.37$). } \label{fig:exp:seir:det_con} \end{subfigure} \caption{Populations per compartment during deterministic $\text{SEI}^3\text{R}${} simulations, both without intervention (top) and with intervention (bottom). Plots in the left column show the full state trajectory, and in the right column are cropped to more clearly show the exposed and infected populations. Without intervention, the infected population requiring hospitalization ($20\%$ of cases) exceeds the threshold for infected population ($0.0145$, black dashed line), overwhelming hospital capacities. With intervention ($u$=0.37) the infected population always remains below this limit. 
Note that we re-use the colour scheme from this figure through the rest of the paper.} \label{fig:exp_seit:det_nom_con} \end{figure} \paragraph{Example trajectories} Before explaining how we set the $\text{SEI}^3\text{R}${} model parameters, or pose inference problems in the model, we first verify that we are able to simulate feasible state evolutions. As we will describe later, we use parameters that are as reflective of current COVID-19{} epidemiological data as possible given the time we had to prepare and present this work. We are \emph{not} suggesting that these trajectories are accurate predictions of the evolution of COVID-19{}. Figures~\ref{fig:exp:seir:det_nom} and \ref{fig:exp:seir:det_con} show deterministic simulations from the model with differing control values $u$. Shown in green is the susceptible population, in blue is the exposed population, in red is the infectious population, and in purple is the recovered population. The total live population is shown as a black dotted line. All populations are normalized by the initial total population, $N_0$. The dashed black line represents a threshold below which we wish to keep the number of infected people at all times. The following paragraph provides the rationale for this goal. \paragraph{Policy goal} As described in Section~\ref{sec:mpc}, parameters should be selected to ensure that a desired goal is achieved. In all scenarios using the $\text{SEI}^3\text{R}${} model, we aim to maintain the maximal infectious population proportion requiring healthcare below the available number of hospital beds per capita, denoted $C$. This objective can be formulated as an auxiliary observation, $Y_{0:T}^{aux}$, introduced in Section \ref{sec:approach}, as: \begin{equation} Y_{0:T}^{aux} = \mathbb{I}\left[ \left(\max_{t\in 0:T} \left( I_{1,t} + I_{2,t} + I_{3, t} \right)\right) < C\right], \end{equation} where $I_{1,0:T}$, $I_{2,0:T}$ and $I_{3,0:T}$ are sampled from the model, conditioned on a $\theta$ value.
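For a simulated trajectory, this auxiliary observation is a one-line computation; the sketch below assumes a hypothetical representation of states as (S, E, I1, I2, I3, R) tuples.

```python
def acceptable(trajectory, C=0.0145):
    """Y^aux for one trajectory: 1 iff the total infectious proportion
    I1 + I2 + I3 stays below the capacity C at every time step."""
    return int(max(i1 + i2 + i3 for _, _, i1, i2, i3, _ in trajectory) < C)

# A trajectory whose infectious proportion peaks at 0.8% is acceptable:
y_aux = acceptable([(0.99, 0.002, 0.005, 0.002, 0.001, 0.0)])
```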
The threshold value we use was selected to be $0.0145$, as there are $0.0029$ hospital beds per capita in the United States~\cite{worldbank-us-beds}, and roughly $20\%$ of COVID-19{} cases require hospitalization. This constraint was chosen to represent the notion that the healthcare system must have sufficient capacity to care for all those infected who require care, as opposed to just critical patients. However, this constraint is only intended as a demonstrative example of the nature of constraints and inference questions one can query using models such as these, and under the formalism used here, implementing and comparing inferences under different constraints is very straightforward. More complex constraints may account for the number of critical patients differently to those with mild and severe infections, model existing occupancy or seasonal variations in capacity, or target other metrics, such as the number of deceased or the duration of the epidemic. The constraint is not met in Figure~\ref{fig:exp:seir:det_nom}, but is in Figure~\ref{fig:exp:seir:det_con}, where a greater control input $u$ has been used to slow the spread of the infection. This is an example of the widely discussed ``flattening of the curve.'' As a result, the epidemic lasts longer but the death toll is considerably lower. \paragraph{Control input} As noted before, we assume that only a single ``controllable'' parameter affects our model, $u$. This is the reduction in the ``baseline reproductive ratio,'' $R_0$, due to policy interventions. Increasing $u$ has the same effect as reducing the infectiousness parameters $\beta_1$, $\beta_2$ and $\beta_3$ by the same proportion. $u$ can be interpreted as the effectiveness of policy choices to prevent new infections.
Various policies could serve to increase $u$, since it is a function of, among other things, both the ``number of contacts while infectious'' (which could be reduced by social distancing and isolation policy prescriptions) and the ``probability of transmission per contact'' (which could be reduced by, e.g., eye, hand, or mouth protective gear policy prescriptions). It is likely that both of these kinds of reductions are necessary to maximally increase $u$ at the lowest cost. For completeness, the baseline reproductive ratio, $R_0$, is an estimate of the number of people a single infectious person will in turn infect, and can be calculated from other model parameters~\cite{sei3r2020website}. $R_0$ is often reported by studies as a measure of the infectiousness of a disease; however, since $R_0$ can be calculated from other parameters, we do not explicitly parameterize the model using $R_0$, but we will use $R_0$ as a convenient notational shorthand. We compactly denote the action of $u$ as controlling the baseline reproductive ratio to yield a ``controlled reproductive ratio,'' denoted $\hat{R}_0$, and calculated as $\hat{R}_0 = (1 - u) R_0$. This is purely for notational compactness and conceptual ease, and is entirely consistent with the model definition above. \paragraph{Using point estimates of model parameters} We now explain how we set the model parameters to deterministic estimates of values which roughly match COVID-19{}. The following section will consider how to include uncertainty in the parameter values. Specifically, the parameters are the incubation period $\alpha^{-1}$; rates of disease progression $p_1$ and $p_2$; rates of recovery from each level of infection, $\gamma_1,$ $\gamma_2,$ and $\gamma_3$; infectiousness for each level of infection, $\beta_1,$ $\beta_2,$ and $\beta_3$; and a death rate for critical infections, $\kappa$.
$u \in [ 0, 1 ]$ is a control parameter, representing the strength of action taken to prevent new infections~\cite{boldog2020risk}. To estimate distributions over the uncontrollable model parameters, we consider their relationships with various measurable quantities: \begin{minipage}{\linewidth} \noindent\begin{minipage}{.4\linewidth} \begin{alignat}{2} \label{eq:seir-quantities-begin} &\text{incubation period} &&= \alpha^{-1} \phantom{\frac{1}{1}} \\ &\text{mild duration} &&= \frac{1}{\gamma_1 + p_1} \\ &\text{severe duration} &&= \frac{1}{\gamma_2 + p_2} \\ &\text{critical duration}& &= \frac{1}{\gamma_3 + \kappa} \label{eq:seir-quantities-end-firstcolumn} \end{alignat} \end{minipage} \noindent\begin{minipage}{.6\linewidth} \begin{alignat}{2} &\text{mild fraction} &&= \frac{\gamma_1}{\gamma_1 + p_1} \\ &\text{severe fraction} &&= \frac{\gamma_2}{\gamma_2 + p_2} \cdot \left( 1 - \text{mild fraction} \right) \\ &\text{critical fraction} &&= 1 - \text{severe fraction} - \text{mild fraction} \phantom{\frac{1}{1}} \\ &\text{fatality ratio} &&= \frac{\kappa}{\gamma_3 + \kappa} \cdot (\text{critical fraction}). \label{eq:seir-quantities-end} \end{alignat} \end{minipage} \vspace{.5cm} \end{minipage} Given the values of the left-hand sides of each of Equations \ref{eq:seir-quantities-begin}-\ref{eq:seir-quantities-end} (as estimated by various studies), we can calculate the model parameters $\alpha, p_1, p_2, \gamma_1, \gamma_2, \gamma_3$ and $\kappa$ by inverting this system of equations. These parameters, along with estimates for $\beta_1$, $\beta_2$, and $\beta_3$, and a control input $u$, fully specify the model. Reference \cite{sei3r2020website} uses such a procedure to deterministically fit parameter values. Given the parameter values, the simulation is entirely deterministic. Therefore, setting parameters in this way enables us to make deterministic simulations of ``typical'' trajectories, as shown in Figure~\ref{fig:exp_seit:det_nom_con}.
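As a sketch, the inversion of this system has a closed form; the function below (with hypothetical argument names) recovers the rate parameters from the measurable quantities.

```python
def invert_parameters(incubation, mild_dur, severe_dur, crit_dur,
                      mild_frac, severe_frac, fatality):
    """Solve the system above for the rate parameters."""
    alpha = 1.0 / incubation
    gamma1 = mild_frac / mild_dur                  # from mild duration/fraction
    p1 = (1.0 - mild_frac) / mild_dur
    gamma2 = severe_frac / ((1.0 - mild_frac) * severe_dur)
    p2 = 1.0 / severe_dur - gamma2
    critical_frac = 1.0 - mild_frac - severe_frac
    kappa = fatality / (critical_frac * crit_dur)  # from the fatality ratio
    gamma3 = 1.0 / crit_dur - kappa
    return alpha, p1, p2, gamma1, gamma2, gamma3, kappa

# Example inputs: 5.1-day incubation; 6/4.5/6.7-day mild/severe/critical
# durations; 81%/14% mild/severe fractions; 2% fatality ratio.
alpha, p1, p2, g1, g2, g3, kappa = invert_parameters(
    5.1, 6.0, 4.5, 6.7, 0.81, 0.14, 0.02)
```

Substituting the recovered rates back into the left-hand sides reproduces the input quantities, which is a convenient consistency check on the inversion.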
Specifying parameters in this way and running simulations in this system provides a low-overhead and easily interpretable environment, and hence is an invaluable tool to the modeller. \paragraph{Dealing with uncertainty about model parameter values} \input{figures/SEIR_tex/stochastic_simulation} Deterministic simulations are easy to interpret on a high level, but they require strong assumptions as they fix the values of unknown parameters to point estimates. We therefore describe how we can perform inference and conditioning in a stochastic model requiring less strict assumptions, and show that we are able to provide meaningful confidence bounds on our inferences that can be used to inform policy decisions more intelligently than without this stochasticity. As described in Section~\ref{sec:approach}, stochasticity can be introduced to a model through a distribution over the latent global parameters $\eta$. Examples of stochastic simulations are shown in Figure \ref{fig:exp:seir:stoch:full}. Clearly there is more capacity in this model for representing the underlying volatility and unpredictability of the precise nature of real-world phenomena, especially compared to the deterministic model. However, this capacity comes with the reality that increased effort must be invested to ensure that the unknown latent states are correctly accounted for. For more specific details on dealing with this stochasticity please refer back to Section \ref{sec:approach}, but, in short, one must simulate for multiple stochastic values of the unknown parameters, for each value of the controllable parameters, and agglomerate the many individual simulations appropriately for the inference objective. When asking questions such as ``will this parameter value violate the constraint?''
there are feasibly some trajectories that are slightly above and some slightly below the trajectory generated by the deterministic simulation, due to the inherent stochasticity (aleatoric uncertainty) in the real world. This uncertainty is integrated over in the stochastic model, and hence we can ask questions such as ``what is the probability that this parameter will violate the constraint?'' Using confidence values in this way provides some measure of how certain one can be about the conclusion drawn from the inference -- if the confidence value is very high then there is a measure of ``tolerance'' in the result, compared to a result with a much lower confidence. We define a joint distribution over model parameters as follows. We consider the $95\%$ confidence intervals of $\beta_1$, $\beta_2$, and $\beta_3$ and of the values in the left-hand sides of Equations \eqref{eq:seir-quantities-begin}-\eqref{eq:seir-quantities-end-firstcolumn}, and assume that their true values are uniformly distributed across these confidence intervals. Then at each time $t$ in a simulation, we sample these values and invert the system of Equations \ref{eq:seir-quantities-begin}-\ref{eq:seir-quantities-end} to obtain a sample of the model parameters. More sophisticated distributions could easily be introduced once this information becomes available. We now detail the nominal values used for typical trajectories (and the confidence intervals used for sampling). The nominal values are mostly the same as those used by~\citep{sei3r2020website}. We use: an incubation period of 5.1 days (4.5-5.8)~\citep{lauer2020incubation}; a mild infection duration of 6 days (5.5-6.5)~\citep{woelfel2020clinical}; a severe infection duration of 4.5 days (3.5-5.5)~\citep{sanche2020novel}; a critical infection duration of 6.7 days (4.2-10.4); fractions of mild, severe, and critical cases of $81\%$, $14\%$ and $5\%$~\citep{wu2020characteristics}; and a fatality ratio of $2\%$~\citep{wu2020characteristics}.
We also use $\beta_1 = 0.33$/day (0.23-0.43), $\beta_2 = 0$/day (0-0.05), and $\beta_3 = 0$/day (0-0.025). Where possible, the confidence intervals are obtained from the studies which estimated the quantities. Where these are not given, we use a small range centred on the nominal value to account for possible imprecision. \subsubsection{Turning FRED into A Probabilistic Program} \label{sec:models:fred:probabilistic-model} The FRED simulator has a parameter file which stipulates the values of $\theta$ and $\eta$. In other words, both the controllable and non-controllable parameters live in a parameter file. FRED, when run given a particular parameter file, produces a sample from the distribution $p(X_{0:T},Z_{0:T}|\theta, \eta).$ Changing the random seed and re-running FRED will result in a new sample from this distribution. The difference between $X_{0:T}$ and $Z_{0:T}$ in FRED is largely in the eye of the beholder. One way of thinking about it is that $X_{0:T}$ are all the values that are computed in a run and saved in an output file, and $Z_{0:T}$ is everything else. In order to turn FRED into a probabilistic programming model useful for planning via inference, several small but consequential changes must be made to it. These changes can be directly examined by browsing one of the public source code repositories accompanying this paper.\footnote{\url{https://github.com/plai-group/FRED}} First, the random number generator and all random variable samples must be identified so that they can be intercepted and controlled by PyProb. Second, any variables that are determined to be controllable (i.e.~part of $\theta$) need to be identified and named. Third, in the main stochastic simulation loop, the state variables required to compute $Y_t^{aux}$ and $Y_t^{obs}$ must be extracted. Fourth, these variables must be given either synthetic ABC likelihoods or constraints in the form of likelihoods.
Finally, a mechanism for identifying, recording, and/or returning $X_{0:T}$ to the probabilistic programming system must be put in place. FRED, like many stochastic simulators, includes the ability to write out the results of a run of the simulator to the filesystem. This, provided that the correspondence between a sample $\theta^{(i)}$ and the output file or files that correspond to it is established and tracked, is how $X_{0:T}^{(i)}$ is implicitly defined. In the interest of time, and because we were familiar with the internals of PyProb and knew that we would not be using inference algorithms that were incompatible with this choice, the demonstration code does not show a full integration in which all random variables are controlled by the probabilistic programming system; instead, it controls only the sampling of $\theta$ and the observation of $Y_t^{aux}$. Notably, this means that inference algorithms like lightweight Metropolis-Hastings \citep{wingate2011lightweight}, which are also included in PyProb, cannot be used with the released integration code. \subsubsection{Details of FRED+PyProb Integration}\label{sec:models:fred:pyprob} Our integration of PyProb into FRED required the addition of only a few dozen lines of code to FRED's code base. The authors of this paper had no prior knowledge of FRED and did not have access to collaborators familiar with the FRED codebase; despite this, it took only two developers less than two days to complete the integration and run the first successful experiments. More details about the integration of FRED and PyProb include: \begin{enumerate} \item The simulator is connected to PyProb through a cross-platform execution protocol (PPX\footnote{\url{https://github.com/pyprob/ppx}}). This allows PyProb to control the stochasticity in the simulator, and requires FRED to wait, at the beginning of its execution, for a handshake with PyProb through a messaging layer.
\item PyProb overwrites the policy parameter values $\theta$ with random draws from the user-defined prior. While PyProb internally keeps track of all random samples it generates, we also decided to write out the updated FRED parameters to a parameter file in order to make associating $\theta^{(i)}$ and $X_{0:T}^{(i)}$ easy and reproducible. \item For each daily iteration step in FRED's simulation, we call PyProb's \texttt{observe} function with a likelihood corresponding to the constraint we would like to hold on that day. \end{enumerate} With these connections established, importance sampling of our inference objective in the FRED model can be directed by PyProb. We also remind the reader that, as in Section~\ref{sec:approach:time_varying_control}, more complex controls can be considered, in principle allowing for complex time-dependent policies to be inferred. We do not examine this here, but note that this extension is straightforward to implement in the probabilistic programming framework, and that PyProb is particularly well adapted to coping with the additional complexity. Compared to sampling parameter values for FRED at the beginning of the simulation, such time-varying policies are not readily available in FRED's configuration, and implementing them would require changing FRED's internal state during the simulation. \subsection{$\text{SEI}^3\text{R}${} Model} \label{sec:exp:fancyseir} The most straightforward approach to modelling infectious diseases is to use low-dimensional, compartmental models such as the widely used susceptible-infectious-recovered (SIR) models, or the $\text{SEI}^3\text{R}${} variant introduced in Section \ref{sec:models:seir}. These models are fast to simulate and easy to interpret, and hence form a powerful, low-overhead analysis tool.
\subsubsection{Deterministic Model} The system of equations defining the $\text{SEI}^3\text{R}${} model forms a deterministic system when global parameter values, such as the mortality rates or incubation periods, are provided. However, the precise values of these parameters are unknown, and instead only confidence intervals are known, e.g. the incubation period is estimated to be between $4.5$ and $5.8$ days~\citep{lauer2020incubation}. This variation may be due to underlying aleatoric uncertainty prevalent in biological systems, or epistemic uncertainty due to the low-fidelity nature of SIR-like models. We do not discuss it here, but there is existing work on automatically fitting point-wise estimates of model parameter values directly from observed data~\citep{wearing2005appropriate, mamo2015mathematical}. Regardless of whether one obtains a point estimate of the parameter values by averaging confidence intervals, or by performing parameter optimization, the first step is to use these values to perform fully deterministic simulations, yielding results such as those shown in Figure \ref{fig:exp:seir:det_nom}. Simulations such as this are invaluable for understanding the bulk dynamics of systems, investigating the influence of variations in global parameter values, or investigating how controls affect the system. However, the ultimate utility of these models is to \emph{use} them to inform policy decisions to reduce the impact of outbreaks. As alluded to above, this is the primary thrust of this work: combining epidemiological simulators with automated machine learning methodologies to model policy outcomes, by considering this problem as \emph{conditioning} simulations on outcomes. To demonstrate such an objective, we consider maintaining the infected population below a critical threshold $C$ at all times.
In a deterministic system there are no stochastic quantities, and hence whether the threshold is exceeded is a deterministic function of the controlled parameters, i.e. the value of $p(\forall_{t > 0} Y_t^{aux}=1 | \theta)$ (related to \eqref{eq:control-inference} via Bayes' rule) is binary in a deterministic system and hence takes a value of either $0$ or $1$. Therefore, we can simply simulate the deterministic system for a finite number of $\theta$ values, and select those parameter values that do not violate the constraint. We vary the free parameter $u\in\left[0, 1\right]$, where $u$ is a scalar value that reduces the baseline reproduction rate as $\hat{R}_0 = (1-u)R_0$. We define $u$ in this way such that $u$ represents an \emph{intervention}, or change from normal conditions. The parameter $u$ is the only parameter we have control over, and hence $\theta = u$. Results are shown in Figure \ref{fig:exp:seir:det_plan}. It can then be read off that, under the deterministic model, $\hat{R}_0$ must be reduced by at least $37.5\%$ of $R_0$ to satisfy the constraint. Figure \ref{fig:exp:seir:det_plan} shows trajectories simulated using insufficient intervention with $u=0.3$ ($\hat{R}_0 = 70\%R_0$), acceptable intervention of $u=0.375$ ($\hat{R}_0 = 62.5\%R_0$), and excessive intervention of $u=0.45$ ($\hat{R}_0 = 55\%R_0$), and shows that these parameters behave as expected: violating the constraint, remaining just under the threshold, and remaining well beneath it, respectively. \input{figures/SEIR_tex/fig_det_planning} \subsubsection{Stochastic Simulation} While the above example demonstrates how parameters can be selected by conditioning on desired outcomes, we implicitly made a critical modelling assumption. While varying the free parameter $u$, we fixed the \emph{other} model parameter values ($\alpha^{-1}$, $\gamma_1$, etc.) to single values.
We therefore found a policy intervention in an unrealistic scenario, namely one in which we (implicitly) claim to have certainty in all model parameters except $u$. To demonstrate the pitfalls of analyzing deterministic systems and applying the results to an inherently stochastic system such as an epidemic, we use the permissible value of $u$ solved for in the deterministic system, $u=0.375$, and randomly sample values of the remaining simulation parameters. This ``stochastic'' simulator is a more realistic scenario than the deterministic variant, as each randomly sampled $\eta$ represents a unique, plausible epidemic being rolled out from the current world state. The results are shown in Figure~\ref{fig:exp:seir:stoch_det:under}. Each line represents a possible epidemic. We can see that using the previously found value of $u$ results in a large number of epidemics where the infectious population exceeds the constraint, represented by the red trajectories overshooting the dotted line. Simply put, the control parameter we found previously fails in an unacceptable number of simulations. This highlights the shortcomings of the deterministic model: in the deterministic model a parameter value was either accepted or rejected with certainty. There was no notion of the variability in outcomes, and hence we have no mechanism to concretely evaluate the risk of a particular configuration. \input{figures/SEIR_tex/fig_stoch_with_det} Instead, we can use a stochastic model, which accounts for at least some aleatoric uncertainty about the world. We repeat the analysis of picking the required value of $u$, but this time using the stochastic model detailed in Section~\ref{sec:models:seir}. In practice, this means the (previously deterministic) model parameters detailed in Equations~\eqref{eq:seir-quantities-begin}-\eqref{eq:seir-quantities-end} are randomly sampled for each simulation according to the procedure outlined following the equations.
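That per-simulation sampling step can be sketched as follows. This is a minimal illustration: the interval endpoints are the $95\%$ confidence intervals quoted earlier in this section, the dictionary keys are names of our own choosing, and the model-specific inversion onto the raw rate parameters is left as a placeholder.

```python
import random

# 95% confidence intervals quoted earlier (durations in days, betas per day).
# Treating each true value as uniform over its interval is the stated
# modelling assumption; the variable names here are ours.
INTERVALS = {
    "incubation_period": (4.5, 5.8),
    "mild_duration": (5.5, 6.5),
    "severe_duration": (3.5, 5.5),
    "critical_duration": (4.2, 10.4),
    "beta_1": (0.23, 0.43),
    "beta_2": (0.0, 0.05),
    "beta_3": (0.0, 0.025),
}

def sample_quantities(rng=random):
    """Draw one joint sample of the observable quantities."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in INTERVALS.items()}

quantities = sample_quantities()
# A full implementation would now invert the equation system to map these
# quantities onto the raw SEI3R rate parameters; that step is model-specific.
```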
To estimate the value of $p\left(\forall_t : Y_t^{aux} = 1 | \theta \right)$ for a given $u$ value, we sample $M$ stochastic trajectories from the system. We then simply count the number of trajectories for which the condition $\forall_t : Y_t^{aux} = 1$ holds, and divide this count by $M$. Intuitively, this operation is simple: for a given $\theta$, simulate a number of possible trajectories, and, as the number of simulations $M$ tends to infinity, the fraction that satisfy the constraint tends to the desired probability value. We note that this operation corresponds to an ``inner'' Monte Carlo expectation, sampling under the distribution of simulator trajectories conditioned on $\theta$, evaluating the expected fraction of trajectories that do not violate the threshold. This value is then passed through a non-linear indicator function extracting those parameters that yield a confidence above a certain threshold. We are then free to use any method we please for exploring $\theta$ space, or for evaluating additional Monte Carlo expectations under the resulting $\theta$ distribution. As such, this system is a nested Monte Carlo sampler~\citep{rainforth2017nesting}. The results are shown in Figure~\ref{fig:exp:seir:stoch_det:stoch_param}. The certainty in the result under the stochastic model is not a binary value like in the deterministic case, and instead occupies a continuum of values representing the confidence of the results. We see that the intersection between the red and green curves occurs at approximately $0.5$, explaining the observation that approximately half of the simulations in Figure \ref{fig:exp:seir:stoch_det:under} exceed the threshold. We can now ask questions such as: ``what is the parameter value that results in the constraint not being violated, with $90\%$ confidence?'' We can rapidly read off that we must instead reduce the value of $\hat{R}_0$ to $50\%$ of its original value to satisfy this confidence-based constraint.
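A minimal sketch of this inner Monte Carlo estimator, using a toy stochastic SIR model as a stand-in for the full $\text{SEI}^3\text{R}${} simulator (the dynamics, intervals, and cap below are our own illustrative assumptions, not values from the paper):

```python
import random

def stochastic_peak(u, rng, dt=0.5, steps=1000):
    """One stochastic trajectory of a toy SIR stand-in for the simulator:
    the transmission rate is redrawn for each simulated epidemic."""
    beta = rng.uniform(0.3, 0.5)       # assumed interval; recovery rate fixed
    s, i, peak = 0.999, 0.001, 0.001
    for _ in range(steps):
        new_inf = (1.0 - u) * beta * s * i * dt
        rec = 0.1 * i * dt
        s, i = s - new_inf, i + new_inf - rec
        peak = max(peak, i)
    return peak

def confidence(u, C=0.1, M=500, seed=0):
    """Inner Monte Carlo estimate of p(forall_t Y_t^aux = 1 | theta = u):
    the fraction of M sampled epidemics whose peak never exceeds the cap C."""
    rng = random.Random(seed)
    return sum(stochastic_peak(u, rng) <= C for _ in range(M)) / M
```

An outer loop over $\theta$ (here, a grid of $u$ values) then turns this estimator into the nested Monte Carlo sampler described above.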
Repeating the stochastic simulations using these computed values confirms that very few simulations violate the constraint (Figure \ref{fig:exp:seir:stoch_det:border}). The ability to tune the outcome based on a required level of confidence is paramount for safety-critical applications, as it informs how sensitive the system is to the particular parameter choice and is more resilient to model misspecification. \input{figures/SEIR_tex/fig_nmc} \subsubsection{Model Predictive Control} We have shown how one can select the required parameter values to achieve a desired objective. To conclude this example, we apply the methodology to iterative planning. The principal idea underlying this is that policies are not static and can be varied over time conditioned on the current observed state. Under the formalism used here, this is as simple as re-applying the stochastic planning at each time step to produce a new policy conditioned on new information. \input{figures/SEIR_tex/fig_mpc} We show a demonstration of this in Figure \ref{fig:exp:seir:mpc}. In this example, we begin at time $t=200$ with non-zero infection rates. We solve for a policy that satisfies the constraint with $90\%$ certainty, and show this confidence interval over trajectories as a shaded region. We then simulate the true evolution of the system for a single step, sampling from the conditional distribution over states under the selected control parameter. We then repeat this process at regular intervals, iteratively adapting the control to the new world state. We see that the confidence criterion is always satisfied and that the infection can be maintained at a reasonable level. We do not discuss this example in more detail, and include it only as an example of the utility of framing the problem as we have, insomuch as iterative re-planning based on new information is a trivial extension under the formulation used.
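The re-planning loop itself is a thin wrapper around the confidence estimator. A hedged sketch follows; the function names and the control grid are ours, and `confidence_fn` stands in for re-estimating confidences from the current world state:

```python
def plan(confidence_fn, target=0.9, grid=None):
    """Pick the weakest control whose estimated confidence meets the target."""
    grid = grid if grid is not None else [g / 20 for g in range(21)]
    for u in grid:                       # grid is sorted weakest-first
        if confidence_fn(u) >= target:
            return u
    return 1.0                           # fall back to maximal intervention

def mpc(confidence_fn, advance_world, state, steps, replan_every=10):
    """Model predictive control: re-plan at regular intervals, letting the
    (simulated) world evolve one step at a time under the chosen control."""
    controls = []
    u = plan(confidence_fn)
    for t in range(steps):
        if t > 0 and t % replan_every == 0:
            u = plan(confidence_fn)      # condition on the new world state
        state = advance_world(state, u)
        controls.append(u)
    return state, controls
```

In a full implementation `confidence_fn` would be re-evaluated from the newly observed state at each re-planning point, exactly as in the figure's shaded-region construction.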
\subsubsection{Policy-based Controls} We have illustrated how simulations can be used to answer questions about the suitability of parameter values we can influence, while marginalizing over those parameters we do not have control over. However, $u$ is not something that is \emph{directly} within our control. Instead, the value of $u$ is set through changing policy-level factors. As an exploratory example, we suggest that the value of $u$ is the square root of the product of two policy-influenceable factors: the fractional reduction in social contact, $\rho$, below its normal level (indicated as a value of $1.0$), and the transmission rate relative to the normal level, $\tau$, where we again denote normal levels as $1.0$. This relationship is shown in Figure \ref{fig:exp:seir:stoch_det:policy}. We indicate $u$ level sets that violate the constraint in red, and valid sets in green. We suggest taking the least invasive, valid policy, represented by the highest green curve. Once the analysis above has been performed to obtain a value of $u$ that satisfies the required infection threshold, it defines the set of achievable policies. Any combination of $\tau$ and $\rho$ along this curve renders the policy valid. Here, additional factors may come into consideration that make particular settings of $\tau$ and $\rho$ more or less advantageous. For instance, wearing more PPE may be cheaper to implement and less economically and socially disruptive than social distancing, and so higher values of $\rho$ may be selected relative to $\tau$. This reduces to a simple one-dimensional optimization of the cost surface along the level set. While we have simply hypothesized this as a potential relationship, it demonstrates how policy-level factors influence simulations and outcomes.
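Under the stated relationship $u = \sqrt{\rho\tau}$, the level set and the one-dimensional search along it can be sketched as below; the cost function is a hypothetical example of ours, not one proposed in the text:

```python
def policy_level_set(u, n=9):
    """Points (rho, tau) on the level set u = sqrt(rho * tau), with both
    policy factors restricted to their meaningful range (0, 1]."""
    points = []
    for k in range(n):
        rho = u * u + (1.0 - u * u) * k / (n - 1)   # rho in [u^2, 1]
        points.append((rho, u * u / rho))            # tau = u^2 / rho
    return points

def pick_policy(u, cost):
    """One-dimensional search along the level set for the cheapest valid
    policy, given an arbitrary cost over (rho, tau)."""
    return min(policy_level_set(u), key=lambda p: cost(*p))

# e.g., if reduced social contact is costlier than reduced transmission
# (cheap PPE), weight deviations of rho from normal more heavily:
best = pick_policy(0.5, lambda rho, tau: 2.0 * (1 - rho) + (1 - tau))
```

With this example cost, the search settles on keeping social contact at its normal level and lowering the transmission rate instead.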
While the SEIR model family is an invaluable tool for analyzing and understanding the bulk dynamics of outbreaks, it is too coarse-grained for actual, meaningful, localized policy decisions to be made, especially when those policy decisions directly influence populations. Further, these notions of ``policy'' are somewhat abstract here because of the high-level nature of the $\text{SEI}^3\text{R}${} model used. We now go on to resolve these issues by using the more sophisticated, agent-based simulator, FRED, where simulations are able to represent localized variations, and where real policy measures are more easily defined. \section{Discussion} Our experience in conducting this research has led us to identify a number of opportunities for improvement in the simulation-based inference and control spaces. \subsection{Software Tools} Building and maintaining an SEIR-type model is a simple, multiple-hour, homework-like project. Building and maintaining a simulator like FRED (or the US National Large Scale Agent Model \citep{parker2011distributed}) is a massive undertaking and requires too much time to replicate or, frankly, significantly upgrade in a crisis situation. As far as we could find when conducting this work, there is no central repository of up-to-date open-source agent-based epidemiological models, nor an organizing body that we could interface with immediately. We may well be wrong, and such a thing may exist; however, if it does, it was insufficiently obvious for us to have found it quickly. \subsection{Methodology} \label{sec:methodology} There appear to be gaps between the fields of control, epidemiology, statistics, policy-making, and probabilistic programming.
To quote \citet{lessler2016trends}, ``the historic separation between the infectious disease dynamics and `traditional' epidemiologic methods is beginning to erode.'' We, like them, see our work as a new opportunity for ``cross pollination between fields.'' Again, the most closely related work that we found in the literature is all focused on automatic model parameter estimation from observational data \citep{kypraios2017tutorial,mckinley2018approximate,toni2009approximate,minter2019approximate,chatzilena2019contemporary}. These methods and the models to which they have been applied could be repurposed for planning, in the way we have shown, simply by changing the quantities that they observe to include control optimality variables. There are at least two existing papers that we have identified that explore using probabilistic programming coupled to epidemiological simulators: \citet{funk2020choices}, which used LibBi \citep{murray2013bayesian}, and \citet{gram2019hijacking}, which used PyProb. The latter is an example of work that ``hijacks'' a malaria simulator in the same way we ``hijacked'' the FRED simulator in this paper. Neither explicitly addresses control. There is another, \citet{chatzilena2019contemporary}, that uses the probabilistic programming system Stan \citep{carpenter2017stan} to address parameter estimation in SEIR-type models, but it too does not explore control, nor could it be repurposed to control an agent-based model. Like \citet{kypraios2017tutorial,mckinley2018approximate,toni2009approximate,minter2019approximate,chatzilena2019contemporary}, we too could have demonstrated automated parameter estimation in both of the models that we consider, for instance to automatically adjust uncertainty in disease-specific parameters by conditioning on observable quantities measured as COVID-19{} evolves.
For instance, we could also condition the model on the total reported number of deaths due to COVID-19{}, collected at a daily or weekly interval. However, as the epidemiological community already relies upon long-standing methods for estimating ``confidence intervals'' for model parameters during evolving pandemics, we exclusively restricted ourselves to demonstrating how to achieve control via inference, and assume that all control inference tasks are conducted using priors that are posteriors from other, established parameter estimation techniques. Combining the two kinds of observations in a single inference task is technically straightforward but does require care in interpretation. \section{Final Thoughts} \label{sec:final-thoughts} The programming languages for artificial intelligence (PLAI) group at UBC, the research affiliation of all the authors, is primarily involved in developing next-generation AI tools and techniques. We felt, however, that current circumstances demanded that we lend whatever we could to the global fight against COVID-19{}. Beyond the specific contributions outlined above, our secondary aim in writing this paper was to encourage other researchers to contribute their expertise to the fight against COVID-19{} as well. We believe the world will be in a better place more quickly if they do, and hope we have contributed to bringing this about.
\section{\textrm{I}. Quantitative interpretation of the new 2D plasma mode} To estimate the frequency of the new 2D plasma mode, the plasmon wave can be treated in terms of its equivalent $LC$ oscillator circuit~\cite{Aizin:12, Yoon:14}. Consider each plasmon wave fragment of length $\lambda_p=1/q$ as a separate oscillator. The kinetic energy of collective electron oscillations in such an oscillator can be modelled using a kinetic inductance of non-magnetic origin, $L$. On the other hand, the electric potential energy associated with the Coulomb restoring force driving local electrons into plasmonic oscillation can be modelled using the electrical capacitance, $C$. In the case under consideration, the capacitance of the plasmonic oscillator is determined by the dimensions of the central metal strip (Fig.~1): \begin{equation} C = \varepsilon_0 \varepsilon \, \frac{W \lambda_p}{d}. \end{equation} The electrons capacitively accumulated under the central strip then discharge through the regions of the 2DES adjacent to the strip (Fig.~1). The kinetic energy, $E_k$, of the accelerating electrons is closely linked to the kinetic inductance. Indeed, for an electron velocity $v$ at a given time, the total kinetic energy of the electrons in the 2DES can be expressed as $E_k = 2 \times \frac{m^{\ast} v^2}{2} \times n_s \lambda_p^2$. Taking into account the fact that current flows on both sides of the strip up to a distance $\lambda_p$, the total current becomes $I=2 n_s e v \lambda_p$, which leads to $E_k=L I^2/2$, where $L$ is the total kinetic inductance of the plasmonic oscillator under consideration. As a result, we obtain $L= L_k/2 = m^{\ast}/2 n_s e^2$. The resultant plasmon frequency can then be defined as: \begin{equation} \omega_p=\frac{1}{\sqrt{L C}} = \sqrt{\frac{2 n_s e^2 d}{m^{\ast} \varepsilon \varepsilon_0} \frac{1}{W \lambda_p}}= \sqrt{\frac{2 n_s e^2 d}{m^{\ast} \varepsilon \varepsilon_0 W} q}.
\end{equation} Remarkably, this simple quantitative model precisely reproduces the spectrum of the novel 2D plasmon, Eq.~(1), calculated from the exact theory~\cite{Volkov:19}. \begin{figure}[b!] \includegraphics[scale=0.8]{suppl1.eps} \caption{Schematic drawing of the plasmon oscillator equivalent circuit. A metallic gate of width $W$ and length $L$ is etched on the top surface of the crystal. Grounding contacts to the 2DES are added on each side of the gate strip. The quantum well is located at a distance $h$ below the sample surface. $C$ and $L_k$ are the effective capacitance and the kinetic inductance of the plasmonic oscillator. The diagram is not drawn to scale.} \label{image1} \end{figure}
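As a quick numerical sanity check of the final expression for $\omega_p$, one can evaluate it for a typical GaAs 2DES. All parameter values below are illustrative assumptions of ours (not taken from the experiment), with $\lambda_p = 1/q$ as above:

```python
import math

# Illustrative GaAs 2DES parameters (our assumptions, not measured values):
e    = 1.602e-19           # electron charge, C
m    = 0.067 * 9.109e-31   # GaAs effective mass m*, kg
eps0 = 8.854e-12           # vacuum permittivity, F/m
eps  = 12.8                # GaAs dielectric constant
n_s  = 1e15                # electron density, m^-2  (10^11 cm^-2)
d    = 100e-9              # gate-to-2DES distance, m
W    = 5e-6                # strip width, m
q    = 1e5                 # plasmon wave vector, m^-1

L_k   = m / (2 * n_s * e**2)         # kinetic inductance of the oscillator
C_p   = eps0 * eps * W / (d * q)     # capacitance, with lambda_p = 1/q
omega = 1.0 / math.sqrt(L_k * C_p)   # equals sqrt(2 n_s e^2 d q / (m* eps eps0 W))
f_GHz = omega / (2 * math.pi) / 1e9  # resulting frequency in GHz
```

For these assumed numbers the mode falls in the tens-of-GHz range, consistent with a microwave-frequency gated 2D plasmon.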
\section{Introduction} Imagine the following persona: Anna is a university student in graphic design. She is active, easy-going, and organized, keeping a structured, well-planned calendar. She is 21 and high on the trait of Openness to Experience \cite{tkalcic2015personality}, and enjoys traveling to new places and meeting new people, as well as discovering and listening to new music using online radio services.\\ To improve Anna's listening experience, we consider her context. To decide which aspects of context to look at, we ran an exploratory crowdsourced survey. Out of 103 respondents, 97 listen to different music based on their mood (e.g., happy, calm, sad), 92 based on their activity (e.g., commuting, jogging), 38 based on the ambience (e.g., sunny, rainy, loud, quiet), and 32 based on the location (e.g., a city park, a beach). From the free-text answers we know that preferences of respondents also change according to the weather, time of day, people around, headphones or speakers used, upcoming concerts, the difficulty of work they do, languages they learn, reminiscence, and music that they have just heard somewhere. Several respondents suggested combined causes, such as the friends that are around and the activity performed together. We propose that Anna's online radio should suggest unexpected pleasant surprises based on her momentary, unique (and therefore ephemeral) context. The contribution of this paper is using multiple sensors and external data sources to make the inference of life-logging events richer (through combinations of inputs) and more reliable (using multiple sensors to improve fault tolerance). It describes how rich context information can be combined in a large number of ways to improve the diversity of recommendations, which will lead to more opportunities for music discovery. \section{Related work} The type of contextual recommendations that can be made is shaped by the sensors and signal processing used.
Nowadays it is possible to accurately detect activities such as biking, driving, running, or walking based on smartphone sensors \cite{Liao2015}, or based on environmental sound cues \cite{Shaikh2008}. It is also possible to detect personality traits based on phone call patterns and the social network data of the user \cite{deOliveira2011}. Similarly, interest in an object can be inferred based on ambient noise levels and the positions of people and objects in relation to each other \cite{dim2015automatic}. In the SenSay system, phone settings and preferences are set based on detected environmental and physiological states \cite{Siewiorek2003}. With improvements in smartphone technology, there is a lot of potential for using rich contextual information to improve recommendations, in particular considering that people prefer to listen to different music in different contexts \cite{Cunningham08, Schedl2014, Su2010}. Among the first to propose a context-aware music recommendation system were Park et al. \cite{Park2006}. They used weather data (from sensors and external data sources) and user information to predict the appropriate music genre, tempo, and mood. Music can also be recommended based on the user's heartbeat, to bring its rate to a normal level \cite{Liu2009}; based on automatically detected activities (e.g. running, walking, sleeping, working, studying, and shopping) \cite{Wang2012}; based on driving style, road type, landscape, sleepiness, traffic conditions, mood, weather, and natural phenomena \cite{Baltrunas2011}; and based on emotional state, to help the listener transition to a desired state \cite{Han2010}. Soundtracks have also been recommended for smartphone videos based on location (using GPS and compass data for orientation) and extra information from 3rd-party services such as Foursquare \cite{Yu2012}. These examples all use sensors and external data sources for music recommendation.
Some of these context-aware music discovery systems recommend not just relevant, but also new music to users \cite{Wang2013}. Our contribution is to combine rich context in a way that is a) fault-tolerant, and b) aims to facilitate music discovery by constructing a momentary ephemeral context. \section{Approach} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{figures/framework_8.pdf} \caption{An example of music recommendation using as input an ephemeral context constructed from high-level features inferred from smartphone sensors, wearables, and external data sources.} \label{framework} \end{figure} Here we give an example of how our approach could work to recommend music to Anna by detecting her ephemeral context based on high-level features (e.g. activity, mood, or the weather), which are inferred from low-level sensor data (see Figure \ref{framework}), and discuss the benefits of such an approach. \textbf{From low-level to high-level features.} We infer that Anna's \emph{activity} is ``jogging'' from the pattern of her smartphone's accelerometer and GPS, and because this activity was also planned in her calendar. Her \emph{speed} is classified as ``fast'' for jogging, because she is running at 15km/h while she usually runs at 13km/h. For the \emph{social} component she is classified as ``alone'', since her Bluetooth sensor does not detect the smartphones of her friends and the microphone does not recognise voices around her. The \emph{location} is ``downtown Sydney'', based on the coordinates given by her smartphone's GPS, the point of interest identified by the Google Maps API, and recent reviews about it from Foursquare. The \emph{weather} is ``heavy rain'', according to the moisture sensor of her phone and the weather forecast for the location from a weather API. The \emph{time of day} is ``night'', because her smartphone time is 23:56.
Her \emph{physical state} is ``tired'', based on the high heart rate measured by her smart bracelet and the respiratory pattern coming from her breathing sensor. Her \emph{mood} is detected as ``angry'' by her smartphone's front camera \cite{Busso2004} and based on her public interactions on social media (e.g., an angry emoticon). We combine these high-level features to construct a momentary ephemeral context, which becomes: ``\emph{Jogging fast alone in downtown Sydney under heavy rain at night, being tired and angry}''. \textbf{From individual recommenders to a hybrid one.} We propose to use several individual recommenders focused on different sets of high-level features (e.g. a recommender looking only at location, weather, and time). A hybrid recommender then weights the recommendations of each individual one based on the explicit preferences of Anna and on the reliability of the underlying high-level features, if they are detected at all. Anna can change the weights to put an emphasis on a certain aspect, such as location or activity, depending on the way she wants to explore music. We provide an interactive web-based demonstration\footnote{The demo, the code, and the results of the exploratory survey are available at \url{https://github.com/pavelk2/ICML2017Demo}.} of how such a hybrid recommender might work based on ephemeral context. \textbf{Benefits.} Our approach allows us to effectively address fault tolerance and leverage music discovery: \textbf{Fault tolerance.} Different factors are used as a measure of fault tolerance. For example, if the GPS and calendar locations differ, the system will omit location-based recommendations from the hybrid recommender. \textbf{Music discovery.} Since the ephemeral context changes frequently, the supplied recommendations will vary from moment to moment, leading to more opportunities for music discovery (e.g. 8 high-level features taking 8 values each give more than 16 million combinations).
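A minimal sketch of such a hybrid recommender follows; all names, scores, and reliability flags are hypothetical, and the individual recommenders are stubbed as functions over the detected context:

```python
# Each individual recommender consumes a subset of high-level features; the
# hybrid weights their outputs and silently drops any recommender whose
# features failed the reliability check (fault tolerance), e.g. when the
# GPS and calendar locations disagree.

def reliable(context, needed):
    """A feature is usable only if it was detected and marked reliable."""
    return all(context.get(f, {}).get("reliable", False) for f in needed)

def hybrid_recommend(context, recommenders, weights):
    scores = {}
    for name, (needed, rec_fn) in recommenders.items():
        if not reliable(context, needed):
            continue                      # fault tolerance: omit this one
        for track, s in rec_fn(context).items():
            scores[track] = scores.get(track, 0.0) + weights[name] * s
    return sorted(scores, key=scores.get, reverse=True)

context = {
    "activity": {"value": "jogging", "reliable": True},
    "location": {"value": "Sydney", "reliable": False},  # GPS/calendar mismatch
}
recommenders = {
    "by_activity": (["activity"], lambda c: {"up-tempo track": 0.9}),
    "by_location": (["location"], lambda c: {"local artist": 0.8}),
}
playlist = hybrid_recommend(context, recommenders,
                            {"by_activity": 1.0, "by_location": 1.0})
# "local artist" is excluded because the location feature was unreliable
```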
\section{Outlook} Our next steps will be dedicated to the \emph{identification of combinations of high-level features} influencing music preferences, possibly via a user study; to the \emph{evaluation of recommendations' relevance} with cultural preferences in mind, potentially using crowdsourcing \cite{Ahn2008}; and to studying how to make such rich user profiling compliant with \emph{privacy} concerns \cite{coles2011looking}. We also plan to study how to improve the transparency of ephemeral recommendations using textual and visual explanations. As such, we aim to deliver a streaming music experience driven by context, while giving the user a sense of transparency and control. We are confident that music discovery through rich context is a very promising research topic, allowing streaming services to provide better personalized experiences to their listeners. \clearpage